When to Worry About Sensitivity Bias: Evidence from 30 Years of List Experiments

Abstract

Direct survey measures of sensitive beliefs, attitudes, and behaviors may generate biased prevalence estimates. “Social desirability bias” is often invoked as a catchall term for the various sources of measurement error associated with sensitive questions. We synthesize work in social psychology and political science on impression management and social desirability to develop a reference group theory of sensitivity bias that encompasses both nonresponse and misreporting. We conduct a census of the published and unpublished list experiments run to date and compare their results with direct questions. Comparing direct questions with list experimental estimates, we find that sensitivity biases are typically smaller than 10 percentage points and, in some domains, approximately zero. List experiments appear to deliver on their promise of approximately unbiased prevalence estimates; in some cases, however, they are unnecessary, and they are often conducted with samples that are too small. We conclude with specific recommendations for researchers choosing among measurement strategies when asking sensitive questions.
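To make the comparison described in the abstract concrete, the sketch below illustrates (in Python, with fabricated data) the standard difference-in-means estimator for a list experiment and the gap between a direct-question estimate and the list estimate, which is how sensitivity bias is typically approximated. The function name, variable names, and all numbers are hypothetical illustrations, not the authors' code or data.

```python
import numpy as np

def list_experiment_prevalence(control_counts, treatment_counts):
    """Difference-in-means estimate of sensitive-item prevalence.

    control_counts: item counts from respondents shown only the baseline items.
    treatment_counts: item counts from respondents shown the baseline items
    plus the sensitive item.
    """
    control_counts = np.asarray(control_counts, dtype=float)
    treatment_counts = np.asarray(treatment_counts, dtype=float)
    estimate = treatment_counts.mean() - control_counts.mean()
    # Standard error of a difference in means between independent groups.
    se = np.sqrt(control_counts.var(ddof=1) / len(control_counts)
                 + treatment_counts.var(ddof=1) / len(treatment_counts))
    return estimate, se

# Fabricated example data: number of items each respondent reports.
control = np.array([1, 2, 0, 3, 2, 1, 2, 1])
treatment = np.array([2, 3, 1, 3, 2, 2, 3, 2])
list_est, list_se = list_experiment_prevalence(control, treatment)

# Sensitivity bias is approximated as the gap between the direct-question
# prevalence estimate and the list experiment estimate.
direct_est = 0.20  # hypothetical share answering "yes" to the direct question
sensitivity_bias = direct_est - list_est
print(f"List estimate: {list_est:.2f} (SE {list_se:.2f}); "
      f"bias vs. direct question: {sensitivity_bias:+.2f}")
```

The large standard error from a sample this small also illustrates the abstract's point that list experiments are often run with samples too small to detect biases of 10 percentage points or less.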
