Gomila, Robin, Rebecca Littman, Graeme Blair, and Elizabeth Levy Paluck. 2017. “The Audio Check: A Method for Improving Data Quality and Detecting Data Fabrication.” Social Psychological and Personality Science, in press. PDF. Replication data.
Blair, Graeme, Kosuke Imai, and Yang-Yang Zhou. 2015. “Design and Analysis of the Randomized Response Technique.” Journal of the American Statistical Association 110(511): 1304–19. PDF. Replication data.
Blair, Graeme, Kosuke Imai, and Jason Lyall. 2014. “Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58(4): 1043–63. PDF. Replication data.
List and endorsement experiments are becoming increasingly popular among social scientists as indirect survey techniques for sensitive questions. When studying issues such as racial prejudice and support for militant groups, these survey methodologies may improve the validity of measurements by reducing non-response and social desirability biases. We develop a statistical test and multivariate regression models for comparing and combining the results from list and endorsement experiments. We demonstrate that when carefully designed and analyzed, the two survey experiments can produce substantively similar empirical findings. Such agreement is shown to be possible even when these experiments are applied to one of the most challenging research environments: contemporary Afghanistan. We find that both experiments uncover similar patterns of support for the International Security Assistance Force among Pashtun respondents. Our findings suggest that multiple measurement strategies can enhance the credibility of empirical conclusions. Open-source software is available for implementing the proposed methods.
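As a hedged sketch of the kind of comparison the abstract describes (the numbers below are hypothetical, and the paper's actual test is developed within its regression framework, not this simple form): given each design's estimate of the sensitive proportion and its standard error, a Wald-type z-statistic can probe whether the two designs agree.

```python
import math

# Hypothetical estimates of support for a sensitive item from the two
# survey designs, each with a standard error (illustrative values only).
list_est, list_se = 0.35, 0.05
endorse_est, endorse_se = 0.40, 0.04

# Wald-type z-statistic for the difference between the two estimates,
# treating the estimates as independent.
z = (list_est - endorse_est) / math.sqrt(list_se**2 + endorse_se**2)
print(round(z, 2))  # small |z| is consistent with the designs agreeing
```

A |z| well below conventional critical values (here about 0.78) would be consistent with the two experiments producing substantively similar estimates.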
Lyall, Jason, Graeme Blair, and Kosuke Imai. 2013. “Explaining Support for Combatants during Wartime: A Survey Experiment in Afghanistan.” American Political Science Review 107(4): 679–705. PDF. Replication data.
Coverage in the Monkey Cage blog, the World Bank Development Impact blog, and the Huffington Post
How are civilian attitudes toward combatants affected by wartime victimization? Are these effects conditional on which combatant inflicted the harm? We investigate the determinants of wartime civilian attitudes toward combatants using a survey experiment across 204 villages in five Pashtun-dominated provinces of Afghanistan, the heart of the Taliban insurgency. We use endorsement experiments to indirectly elicit truthful answers to sensitive questions about support for different combatants. We demonstrate that civilian attitudes are asymmetric in nature. Harm inflicted by the International Security Assistance Force (ISAF) is met with reduced support for ISAF and increased support for the Taliban, but Taliban-inflicted harm does not translate into greater ISAF support. We combine a multistage sampling design with hierarchical modeling to estimate ISAF and Taliban support at the individual, village, and district levels, permitting a more fine-grained analysis of wartime attitudes than previously possible.
Blair, Graeme, Christine Fair, Neil Malhotra, and Jacob N. Shapiro. 2013. “Poverty and Support for Militant Politics: Evidence from Pakistan.” American Journal of Political Science 57(1): 30–48. PDF. Supporting materials. Replication data.
The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator, which prevents the efficient examination of multivariate relationships between respondents’ characteristics and their responses to sensitive items. Moreover, no systematic means exists to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustration, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.
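To make the simple difference-in-means estimator the abstract mentions concrete (a minimal sketch with simulated data, not the paper's own code or data): the treatment group receives a list with one extra, sensitive item, so the difference in mean item counts between groups estimates the proportion of respondents for whom the sensitive item holds.

```python
import statistics

# Simulated list-experiment responses: each value is the number of items
# a respondent reports apply to them. The control list has J = 3
# baseline items; the treatment list adds one sensitive item.
control = [1, 2, 0, 2, 1, 1, 2, 0, 1, 2]
treatment = [2, 2, 1, 2, 1, 1, 2, 1, 1, 2]

# Difference in mean item counts estimates the prevalence of the
# sensitive item without any respondent revealing it directly.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(round(estimate, 2))  # estimated proportion holding the sensitive item
```

This estimator discards the multivariate structure of the data, which is the limitation the abstract's regression estimators are designed to address.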
Blair, Graeme, Rebecca Littman, and Elizabeth Levy Paluck. 2016. “Motivating the Adoption of New Community-Minded Behaviors: An Empirical Test in Nigeria.” Under review. Copy available by email. Design pre-registration.
When comparativists rely on survey data, they implicitly invoke an important assumption: that respondents answered truthfully. This assumption may be violated when there are incentives to conceal the truth. When there are incentives to conceal, our inferences about respondents (e.g., what proportion of them shared information with a militant) will be biased. What can we do about these misreporting and nonresponse biases? In what follows, I review four survey techniques used by comparativists to address incentives to conceal truthful responses. I first review survey administration practices designed to protect sensitive responses. For contexts in which these are insufficient, I review three experimental methods that avoid soliciting exact answers to sensitive questions altogether and can be used alongside these practices. The experimental methods enable comparativists to ask survey questions that could not otherwise be asked due to ethical concerns and the risk of bias. However, these methods require additional assumptions that are often not testable, necessitating careful design and pilot testing. I conclude with a discussion of common critiques of the experimental techniques.