
Assistant Professor
Political Science
UCLA

E-mail · Google Scholar · Twitter

Peer-reviewed publications

Gomila, Robin, Rebecca Littman, Graeme Blair, and Elizabeth Levy Paluck. 2017. “The Audio Check: A Method for Improving Data Quality and Detecting Data Fabrication.” Social Psychological and Personality Science, in press. PDF. Replication data. Abstract.

Data quality and trust in the data collection process are critical concerns in survey research, particularly when surveyors are needed for reaching “diverse and inconvenient subject pools.” In response to irregularities in a smartphone-based pilot survey data collection in Nigeria, we developed an audio check method that unobtrusively recorded surveyors reading aloud questions to participants. We present evidence that this method detected wholesale data fabrication in 14% of our surveys, prevented further fabrication, and improved data quality through provision of regular feedback to surveyors. Using simulation, we demonstrate that undetected fabrication would have introduced significant bias in our analyses. The audio check performs well compared to more traditional methods of detecting fabrication, and a comparative cost–benefit analysis reveals a savings of more than US $1,500 per surveyor by relying on the audio check. The audio check is a viable tool for psychologists who work with survey teams.
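
A minimal sketch of the kind of bias simulation the abstract describes, not the paper's replication code: the sample size, true rate, and the assumption that fabricators fill in answers at random are all hypothetical; only the 14% fabrication share comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_surveys = 2000   # hypothetical number of completed interviews
true_rate = 0.30   # hypothetical true prevalence of a "yes" answer
fab_share = 0.14   # share of surveys fabricated (the 14% detected above)

# Genuine responses reflect the true rate; fabricated responses are filled
# in by the surveyor, modeled here as coin flips rather than real answers.
genuine = rng.binomial(1, true_rate, n_surveys)
fabricated = rng.binomial(1, 0.5, n_surveys)
is_fake = rng.random(n_surveys) < fab_share

observed = np.where(is_fake, fabricated, genuine)

print(f"true rate:                 {true_rate:.3f}")
print(f"estimate with fabrication: {observed.mean():.3f}")
print(f"estimate, fakes removed:   {observed[~is_fake].mean():.3f}")
```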

Blair, Graeme, Kosuke Imai, and Yang-Yang Zhou. 2015. “Design and Analysis of the Randomized Response Technique.” Journal of the American Statistical Association 110(511): 1304–19. PDF. Replication data. Abstract.

About a half century ago, Warner (1965) proposed the randomized response method as a survey technique to reduce potential bias due to non-response and social desirability when asking questions about sensitive behaviors and beliefs. This survey methodology asks respondents to use a randomization device, such as a coin flip, whose outcome is unobserved by the enumerator. By introducing random noise, the method conceals individual responses and consequently protects respondent privacy. While numerous methodological advances have been made, we find surprisingly few applications of this promising survey technique. In this paper, we address this gap by (1) reviewing standard designs available to applied researchers, (2) developing various multivariate regression techniques for substantive analyses, (3) proposing power analyses to help improve research designs, (4) presenting new robust designs that are based on less stringent assumptions than those of the standard designs, and (5) making all described methods available through open-source software. We illustrate some of these methods with an original survey about militant groups in Nigeria.
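
For intuition, a minimal simulation of Warner's (1965) mirrored-question design and its standard moment estimator. This is not the authors' software (their methods are released as an open-source package); the sample size, true prevalence, and design probability below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000    # hypothetical sample size
pi = 0.20   # hypothetical true prevalence of the sensitive trait
p = 0.70    # probability the device selects the direct question

trait = rng.binomial(1, pi, n)               # unobserved sensitive trait
direct = rng.binomial(1, p, n).astype(bool)  # device outcome, unseen by enumerator

# Respondents answer "yes" if (asked directly and have the trait)
# or (asked the mirrored question and do not have the trait).
answer_yes = np.where(direct, trait == 1, trait == 0)

lam = answer_yes.mean()
pi_hat = (lam + p - 1) / (2 * p - 1)   # Warner's moment estimator
print(f"estimated prevalence: {pi_hat:.3f} (truth: {pi})")
```

Because the device probability p is known, the observed share of “yes” answers identifies the prevalence through pi = (lambda + p - 1) / (2p - 1), which is what the last two lines compute.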

Blair, Graeme, Kosuke Imai, and Jason Lyall. 2014. “Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan.” American Journal of Political Science 58(4): 1043–63. PDF. Replication data. Abstract.

Coverage in the World Bank Development Impact blog

List and endorsement experiments are becoming increasingly popular among social scientists as indirect survey techniques for sensitive questions. When studying issues such as racial prejudice and support for militant groups, these survey methodologies may improve the validity of measurements by reducing non-response and social desirability biases. We develop a statistical test and multivariate regression models for comparing and combining the results from list and endorsement experiments. We demonstrate that when carefully designed and analyzed, the two survey experiments can produce substantively similar empirical findings. Such agreement is shown to be possible even when these experiments are applied to one of the most challenging research environments: contemporary Afghanistan. We find that both experiments uncover similar patterns of support for the International Security Assistance Force among Pashtun respondents. Our findings suggest that multiple measurement strategies can enhance the credibility of empirical conclusions. Open-source software is available for implementing the proposed methods.

Lyall, Jason, Graeme Blair, and Kosuke Imai. 2013. “Explaining Support for Combatants during Wartime: A Survey Experiment in Afghanistan.” American Political Science Review 107(4): 679–705. PDF. Replication data. Abstract.

Winner of the Pi Sigma Alpha Award for the best paper delivered at the 2012 MPSA Conference. Read the unabbreviated award version.
Coverage in the Monkey Cage blog, the World Bank Development Impact blog, and the Huffington Post

How are civilian attitudes toward combatants affected by wartime victimization? Are these effects conditional on which combatant inflicted the harm? We investigate the determinants of wartime civilian attitudes towards combatants using a survey experiment across 204 villages in five Pashtun-dominated provinces of Afghanistan — the heart of the Taliban insurgency. We use endorsement experiments to indirectly elicit truthful answers to sensitive questions about support for different combatants. We demonstrate that civilian attitudes are asymmetric in nature. Harm inflicted by the International Security Assistance Force (ISAF) is met with reduced support for ISAF and increased support for the Taliban, but Taliban-inflicted harm does not translate into greater ISAF support. We combine a multistage sampling design with hierarchical modeling to estimate ISAF and Taliban support at the individual, village, and district levels, permitting a more fine-grained analysis of wartime attitudes than previously possible.

Blair, Graeme, Christine Fair, Neil Malhotra, and Jacob N. Shapiro. 2013. “Poverty and Support for Militant Politics: Evidence from Pakistan.” American Journal of Political Science 57(1): 30–48. PDF. Supporting materials. Replication data. Abstract.

Policy debates on strategies to end extremist violence frequently cite poverty as a root cause of support for the perpetrating groups. There is little evidence to support this contention, particularly in the Pakistani case. Pakistan’s urban poor are more exposed to the negative externalities of militant violence, and may in fact be less supportive of the groups. To test these hypotheses we conducted a 6000-person, nationally representative survey of Pakistanis that measured affect towards four militant organizations. By applying a novel measurement strategy, we mitigate the item non-response and social desirability biases that plagued previous studies due to the sensitive nature of militancy. Contrary to expectations, poor Pakistanis dislike militants more than middle-class citizens. This dislike is strongest among the urban poor, particularly those in violent districts, suggesting that exposure to terrorist attacks reduces support for militants. Longstanding arguments tying support for violent organizations to income may require substantial revision.

Blair, Graeme, and Kosuke Imai. 2012. “Statistical Analysis of List Experiments.” Political Analysis 20(1): 47–77. PDF. Supporting materials. Replication data. Abstract.

Selected for the “Greatest Hits” issue of Political Analysis, a collection of “eight papers, published in the last two years, that we believe are making important contemporary contributions to political methodology.”

The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator, which prevents the efficient examination of multivariate relationships between respondents’ characteristics and their responses to sensitive items. Moreover, no systematic means exists to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustration, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.
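
As a sketch of the simple difference-in-means estimator the abstract refers to (the regression estimators themselves are implemented in the authors' open-source software), here is a hypothetical list experiment with three control items; the prevalence, sample size, and item probabilities are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 4000    # hypothetical respondents
pi = 0.25   # hypothetical prevalence of the sensitive item
J = 3       # number of non-sensitive control items

# Each respondent's count of control items they would affirm.
control_count = rng.binomial(J, 0.5, n)
sensitive = rng.binomial(1, pi, n)

# Random assignment: the treatment group's list also includes the sensitive item.
treated = rng.random(n) < 0.5
reported = control_count + np.where(treated, sensitive, 0)

# Difference in mean reported counts estimates the sensitive-item prevalence
# (assuming no design effects and no liars).
tau_hat = reported[treated].mean() - reported[~treated].mean()
print(f"estimated prevalence: {tau_hat:.3f} (truth: {pi})")
```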

Working papers

Blair, Graeme, Rebecca Littman, and Elizabeth Levy Paluck. 2016. “Motivating the Adoption of New Community-Minded Behaviors: An Empirical Test in Nigeria.” Under review. Copy available by email. Design pre-registration.

Blair, Graeme, Jasper Cooper, Alexander Coppock, and Macartan Humphreys. 2016. “Declaring and Diagnosing Research Designs.” PDF. Abstract.

The evaluation of research depends on assessments of the quality of underlying research designs. Surprisingly, however, there is no standard definition for what a design is. We provide a framework for formally characterizing the analytically relevant features of a research design. The approach to design declaration we describe requires defining population structures, a potential outcomes function, a sampling strategy, an assignment strategy, estimands, and an estimation strategy. Given a formal declaration of a design in code, Monte Carlo techniques can then be easily applied to a design in order to diagnose properties, such as power, bias, expected mean squared error, external validity with respect to some population, and other “diagnosands.” Declaring a design in computer code lays researchers’ assumptions bare and allows for clear communication with funders, journal editors, reviewers, and readers. Ex ante design declarations can be used to improve designs and facilitate preregistration, analysis, and ex post reconciliation of intended and actual analyses. Design declaration is also useful ex post, however, and can be used to describe and share designs as well as to facilitate reanalysis and critique. We provide an open-source software package, DeclareDesign, to implement the proposed approach.
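
The DeclareDesign package itself is written in R; the toy Python sketch below only mimics the workflow the abstract describes, under made-up parameters: declare the population, potential outcomes, sampling, assignment, estimand, and estimator as explicit components, then diagnose bias and power by Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy two-arm design, declared as its analytically relevant components.
# All parameters here are hypothetical; the estimand is the true effect tau.
N, n, tau = 1000, 200, 0.2   # population size, sample size, true effect

def population():                       # population structure + potential outcomes
    u = rng.normal(size=N)
    return u, u + tau                   # Y(0), Y(1)

def sample_and_assign():                # sampling and assignment strategies
    idx = rng.choice(N, n, replace=False)
    z = rng.random(n) < 0.5
    return idx, z

def estimator(y, z):                    # estimation strategy: difference in means
    est = y[z].mean() - y[~z].mean()
    se = np.sqrt(y[z].var(ddof=1) / z.sum() + y[~z].var(ddof=1) / (~z).sum())
    return est, se

# Diagnosis: simulate the declared design and compute diagnosands.
sims = []
for _ in range(2000):
    y0, y1 = population()
    idx, z = sample_and_assign()
    y = np.where(z, y1[idx], y0[idx])
    sims.append(estimator(y, z))

est, se = np.array(sims).T
print(f"bias:  {est.mean() - tau:+.3f}")
print(f"power: {np.mean(np.abs(est / se) > 1.96):.2f}")
```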

Invited contributions

Blair, Graeme. 2015. “Survey Methods for Sensitive Topics.” APSA Comparative Politics Newsletter. PDF. Abstract.

Coverage in the World Bank Development Impact blog

When comparativists rely on survey data, they implicitly invoke an important assumption: that respondents answered truthfully. This assumption may be violated when there are incentives to conceal the truth. When there are incentives to conceal, our inferences about respondents (e.g., what proportion of them shared information with a militant) will be biased. What can we do about these misreporting and nonresponse biases? In what follows, I review four survey techniques used by comparativists to address incentives to conceal truthful responses. I first review survey administration practices designed to protect sensitive responses. For contexts in which these are insufficient, I review three experimental methods that can be used in addition, all of which avoid soliciting exact answers to sensitive questions altogether. The experimental methods enable comparativists to ask survey questions that could not otherwise be asked due to ethical concerns and the risk of bias. However, these methods require additional assumptions that are often not testable, necessitating careful design and pilot testing. I conclude with a discussion of common critiques of the experimental techniques.