Measurement error poses a threat to the validity of survey research. In the context of list experiments, Ahlquist (2017) introduces the notion of “top-biased” response error, in which a random fraction of respondents provide the maximal response regardless of their truthful answer to the sensitive question. Ahlquist conducts simulation studies based on this particular scenario and finds that the maximum likelihood (ML) regression estimator, proposed in Imai (2011) and further extended in Blair and Imai (2012), exhibits severe model misspecification bias when the prevalence of the sensitive trait is low. Unfortunately, Ahlquist stops short of offering any solution to the general problem of measurement error in list experiments. We take up this challenge and provide new tools for diagnosing and mitigating measurement error in list experiments. First, we observe that top-biased error is unlikely for truly sensitive questions, as it implies that respondents are willing to admit to having a sensitive trait even when they do not. Second, we show that the nonlinear least squares (NLS) regression estimator is robust to top-biased error. Third, we consider an alternative and more plausible form of response error, mentioned but not studied in Ahlquist (2017), in which a small fraction of respondents offer a random response to the list experiment. We show that both the ML and NLS estimators are robust to such error. Fourth, we propose a descriptive analysis and a statistical test that can be used to detect general model misspecification caused by misreporting. Fifth, we demonstrate how to directly model nonstrategic respondent error and how to build a more robust regression model. Finally, we reanalyze the empirical examples studied in Ahlquist (2017) and demonstrate that simple diagnostic tools could have avoided the problems identified in that article. We conclude with a set of practical recommendations for list experiments with possible measurement error.
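To fix ideas, the top-biased error scenario can be illustrated with a minimal simulation. The sketch below is ours, not the authors' code: it generates a list experiment with `J` control items (hypothetical parameter names throughout), contaminates a fraction `gamma` of respondents with top-biased responses, and computes the simple difference-in-means estimate of the sensitive-trait prevalence, which is inflated toward roughly `(1 - gamma) * p + gamma` under this error process.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_list_experiment(n=100_000, J=3, p=0.05, gamma=0.1):
    """Simulate a list experiment with top-biased response error.

    Respondents report how many of J control items (treatment group:
    J control items plus one sensitive item) apply to them. A random
    fraction `gamma` of respondents instead give the maximal response
    regardless of their truthful answer. Illustrative sketch only.
    """
    treat = rng.integers(0, 2, size=n).astype(bool)   # random assignment
    control_count = rng.binomial(J, 0.5, size=n)      # truthful control-item count
    sensitive = rng.random(n) < p                     # truthful sensitive trait
    y = control_count + (treat & sensitive)           # truthful item count
    top_biased = rng.random(n) < gamma                # error-prone respondents
    # Top-biased respondents report the maximum possible count.
    y = np.where(top_biased, np.where(treat, J + 1, J), y)
    return treat, y

treat, y = simulate_list_experiment()
diff_in_means = y[treat].mean() - y[~treat].mean()
print(f"true prevalence: 0.05, difference-in-means: {diff_in_means:.3f}")
```

With these (arbitrary) parameter values the estimate lands near 0.145 rather than the true 0.05, showing the upward bias that motivates the diagnostics discussed in the abstract.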