Assistant Professor of Political Science
EGAP community policing metaketa
Project on Resources and Governance
Do commodity price shocks cause armed conflict? Evidence from a meta-analysis, American Political Science Review (2020)
When to worry about sensitivity bias: A social reference theory and evidence from 30 years of list experiments, American Political Science Review (2020)
I conduct substantive and methodological research motivated by two questions:
- What role do ordinary people play in producing and preventing crime and violence?
- How can we enable and encourage people to undertake costly, prosocial behaviors that prevent crime, mitigate the consequences of conflict, and improve governance?
I draw on ideas from social psychology and comparative politics, as well as insights from my field research, to construct new theories. I have conducted fieldwork in Nigeria, South Africa, and elsewhere.
To test these ideas, I develop and apply new methods and conduct field research and experiments.
My methodological work has led to a new way of learning about and improving research designs, as well as new designs and analysis strategies for survey experiments that study sensitive questions. I have published widely used, award-winning software to implement these methods.
My research appears in two book manuscripts, one under advanced contract at Princeton University Press and one at Cambridge University Press. My papers have been published in leading journals such as the American Political Science Review, American Journal of Political Science, Journal of Politics, Journal of the American Statistical Association, Political Analysis, and Science Advances. I am grateful for funding from the Hewlett Foundation, the Arnold Foundation, the National Science Foundation, and the U.K. Department for International Development, among others.
I received my Ph.D. in political science in 2016 from Princeton University and was a pre/postdoctoral fellow at Columbia University with Evidence in Governance and Politics (EGAP) before joining UCLA.
Jump to: Substantive research - Methods research - Teaching - Prospective students
Conflict, crime, and governance research
Crime, insecurity, and community policing: Experiments on building trust. Under advanced contract, Cambridge University Press. Lead author with Fotini Christia, Jeremy Weinstein, Eric Arias, Emile Badran, Robert A. Blair, Ali Cheema, Thiemo Fetzer, Guy Grossman, Dotan Haim, Rebecca Hanson, Ali Hasanain, Ben Kachero, Dorothy Kronick, Benjamin Morse, Robert Muggah, Matthew Nanes, Tara Slough, Nico Ravanilla, Jacob N. Shapiro, Barbara Silva, Pedro C. L. Souza, Lily Tsai, and Anna Wilke. Book conference scheduled in March 2021.
How does conflict affect firms’ investment decisions? Past results are mixed: a third of studies we reviewed report null or mixed correlations; some suggest conflict increases investment. We rationalize these results, arguing that armed conflict has divergent effects depending on firms’ exposure to violence. Conflict can deter investment by disrupting production or raising uncertainty. Yet, conflict can encourage investment by hampering government oversight. We argue each mechanism operates over different geographic extents. We use data from the mining sector to test these claims and report three main results. Firms operating at conflict sites dramatically reduce investments. By contrast, firms operating in territory surrounding conflict, but at a remove from fighting, actually increase investment. Firms far from violence see a small negative effect. These divergent responses cannot be inferred from aggregate flows: we show conflict depresses aggregate investment, but this reflects responses among firms far from fighting.
“Do commodity price shocks cause armed conflict? Evidence from a meta-analysis.” 2020. Forthcoming, American Political Science Review. With Darin Christensen and Aaron Rudkin.
Abstract PDF Appendices Preanalysis plan
Scholars of the resource curse argue that reliance on primary commodities destabilizes governments: price fluctuations generate windfalls or periods of austerity that provoke or intensify conflict. This claim has been tested in 350 quantitative studies, but prominent results point in different directions, making it difficult to discern which findings reliably hold across contexts. We conduct a meta-analysis of 46 natural experiments that use difference-in-differences designs to estimate the causal effect of international commodity price changes on armed conflict. We show that commodity price changes, on average, do not change conflict risks. However, this overall effect comprises cross-cutting effects by commodity type. In line with theory, we find that price increases in labor-intensive agricultural commodities reduce conflict, while increases in the price of oil, a capital-intensive commodity, provoke conflict. We also find that price changes for lootable artisanal minerals provoke conflict. Our meta-analysis consolidates existing evidence but also highlights gaps for future research to fill.
“Motivating the adoption of new community-minded behaviors: An empirical test in Nigeria.” Science Advances, 2019. With Rebecca Littman and Elizabeth Levy Paluck.
Abstract PDF Project Policy brief Replication Preanalysis plan Appendices
Social scientists have long sought to explain why people donate resources for the good of a community. Less is known about how to encourage people to take up such behaviors in the first place. In a field experiment in Nigeria, we tested two campaigns that encouraged people to try reporting corruption by text message. Psychological theories about how to shift perceived norms and how to reduce barriers to action drove the design of each campaign. The first, a film featuring actors reporting corruption, and the second, a mass text message reducing the effort required to report, together caused 1,181 people in 106 communities to text, including 241 people who sent concrete corruption reports. Psychological theories of social norms and behavior change can illuminate the early stages of the evolution of cooperation and collective action, when adoption is still relatively rare.
“Explaining support for combatants during wartime: A survey experiment in Afghanistan.” American Political Science Review, 2013. With Jason Lyall and Kosuke Imai.
Abstract PDF MPSA Pi Sigma Alpha Award Replication
How are civilian attitudes toward combatants affected by wartime victimization? Are these effects conditional on which combatant inflicted the harm? We investigate the determinants of wartime civilian attitudes towards combatants using a survey experiment across 204 villages in five Pashtun-dominated provinces of Afghanistan—the heart of the Taliban insurgency. We use endorsement experiments to indirectly elicit truthful answers to sensitive questions about support for different combatants. We demonstrate that civilian attitudes are asymmetric in nature. Harm inflicted by the International Security Assistance Force (ISAF) is met with reduced support for ISAF and increased support for the Taliban, but Taliban-inflicted harm does not translate into greater ISAF support. We combine a multistage sampling design with hierarchical modeling to estimate ISAF and Taliban support at the individual, village, and district levels, permitting a more fine-grained analysis of wartime attitudes than previously possible.
“Poverty and support for militant politics: Evidence from Pakistan.” American Journal of Political Science, 2013. With Christine Fair, Neil Malhotra, and Jacob N. Shapiro.
Abstract PDF Replication Appendices
Policy debates on strategies to end extremist violence frequently cite poverty as a root cause of support for the perpetrating groups. There is little evidence to support this contention, particularly in the Pakistani case. Pakistan’s urban poor are more exposed to the negative externalities of militant violence and may therefore be less supportive of these groups. To test these hypotheses, we conducted a 6,000-person, nationally representative survey of Pakistanis that measured affect toward four militant organizations. By applying a novel measurement strategy, we mitigate the item nonresponse and social desirability biases that plagued previous studies due to the sensitive nature of militancy. Contrary to expectations, poor Pakistanis dislike militants more than middle-class citizens do. This dislike is strongest among the urban poor, particularly those in violent districts, suggesting that exposure to terrorist attacks reduces support for militants. Long-standing arguments tying support for violent organizations to income may require substantial revision.
Under review and in progress
“Does Community Policing Build Trust in Police and Reduce Crime? Evidence from Six Coordinated Field Experiments in the Global South.” 2020. Lead author with Jeremy Weinstein, Fotini Christia, Eric Arias, Emile Badran, Robert A. Blair, Ali Cheema, Thiemo Fetzer, Guy Grossman, Dotan Haim, Rebecca Hanson, Ali Hasanain, Ben Kachero, Dorothy Kronick, Benjamin Morse, Robert Muggah, Matthew Nanes, Tara Slough, Nico Ravanilla, Jacob N. Shapiro, Barbara Silva, Pedro C. L. Souza, Lily Tsai, and Anna Wilke. Under review. Available upon request.
Metaketa project Preanalysis plan
“Religious leaders can change minds and shift norms.” 2020. With Mohammed Bukar, Rebecca Littman, Elizabeth Nugent, Rebecca Wolfe, Benjamin Crisman, Anthony Etim, and Chad Hazlett. Under review.
“Demonstrating genuine change increases victims’ willingness to reconcile with transgressors: Experimental evidence from Nigeria.” 2020. With Rebecca Littman, Rebecca Wolfe, Mohammed Bukar, Jiyoung Kim, Yunusa Aina, Yetcha Ajimi Badu, Fatima Abba Kurama, and Ahmed Umar Lawan.
Research design and methodology research
Research design: Declaration, diagnosis, redesign. Under advanced contract, Princeton University Press. With Alexander Coppock and Macartan Humphreys. Book conference held December 2020.
“When to worry about sensitivity bias: A social reference theory and evidence from 30 years of list experiments.” American Political Science Review, 2020. With Alexander Coppock and Margaret Moor.
Abstract PDF Replication Appendices
Eliciting honest answers to sensitive questions is frustrated if subjects withhold the truth for fear that others will judge or punish them. The resulting bias is commonly referred to as social desirability bias, a subset of what we label sensitivity bias. We make three contributions. First, we propose a social reference theory of sensitivity bias to structure expectations about survey responses on sensitive topics. Second, we explore the bias-variance trade-off inherent in the choice between direct and indirect measurement technologies. Third, to estimate the extent of sensitivity bias, we meta-analyze the set of published and unpublished list experiments (a.k.a., the item count technique) conducted to date and compare the results with direct questions. We find that sensitivity biases are typically smaller than 10 percentage points and in some domains are approximately zero.
Researchers need to select high-quality research designs and communicate those designs clearly to readers. Both tasks are difficult. We provide a framework for formally “declaring” the analytically relevant features of a research design in a demonstrably complete manner, with applications to qualitative, quantitative, and mixed methods research. The approach to design declaration we describe requires defining a model of the world (M), an inquiry (I), a data strategy (D), and an answer strategy (A). Declaration of these features in code provides sufficient information for researchers and readers to use Monte Carlo techniques to diagnose properties such as power, bias, accuracy of qualitative causal inferences, and other “diagnosands.” Ex ante declarations can be used to improve designs and facilitate preregistration, analysis, and reconciliation of intended and actual analyses. Ex post declarations are useful for describing, sharing, reanalyzing, and critiquing existing designs. We provide open-source software, DeclareDesign, to implement the proposed approach.
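The accompanying DeclareDesign software is an R package; purely as a language-agnostic sketch of the same idea, the following Python snippet declares the four components of a simple two-arm experiment — model (M), inquiry (I), data strategy (D), and answer strategy (A) — and uses Monte Carlo simulation to diagnose one property, the bias of the answer strategy. All parameter values (sample size, effect size, number of simulations) are hypothetical.

```python
import random
import statistics

def declare_and_diagnose(n=100, effect=0.5, sims=1000):
    """Monte Carlo design diagnosis in the spirit of the M-I-D-A framework:
    declare a model, inquiry, data strategy, and answer strategy, then
    simulate to estimate a diagnosand (here, bias)."""
    random.seed(42)
    estimates = []
    for _ in range(sims):
        # Model (M): potential outcomes with a constant treatment effect
        y0 = [random.gauss(0, 1) for _ in range(n)]
        y1 = [y + effect for y in y0]
        # Data strategy (D): complete random assignment of half the units
        treated = set(random.sample(range(n), n // 2))
        y_obs = [y1[i] if i in treated else y0[i] for i in range(n)]
        # Answer strategy (A): difference in means
        t_mean = statistics.mean(y_obs[i] for i in range(n) if i in treated)
        c_mean = statistics.mean(y_obs[i] for i in range(n) if i not in treated)
        estimates.append(t_mean - c_mean)
    # Inquiry (I): the true average treatment effect equals `effect`
    return statistics.mean(estimates) - effect

bias = declare_and_diagnose()
print(f"estimated bias of the difference-in-means estimator: {bias:.3f}")
```

Because the design is declared in code, the same declaration could be reused to diagnose other properties (power, coverage) or to redesign — for example, rerunning with a different sample size before any data are collected.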
Measurement error threatens the validity of survey research, especially when studying sensitive questions. Although list experiments can help discourage deliberate misreporting, they may also suffer from nonstrategic measurement error due to flawed implementation and respondents’ inattention. Such error runs against the assumptions of the standard maximum likelihood regression (MLreg) estimator for list experiments and can result in misleading inferences, especially when the underlying sensitive trait is rare. We address this problem by providing new tools for diagnosing and mitigating measurement error in list experiments. First, we demonstrate that the nonlinear least squares regression (NLSreg) estimator proposed in Imai (2011) is robust to nonstrategic measurement error. Second, we offer a general model misspecification test to gauge the divergence of the MLreg and NLSreg estimates. Third, we show how to model measurement error directly, proposing new estimators that preserve the statistical efficiency of MLreg while improving robustness. Last, we revisit empirical studies shown to exhibit nonstrategic measurement error, and demonstrate that our tools readily diagnose and mitigate the bias. We conclude this article with a number of practical recommendations for applied researchers. The proposed methods are implemented through an open-source software package.
About a half century ago, in 1965, Warner proposed the randomized response method as a survey technique to reduce potential bias due to nonresponse and social desirability when asking questions about sensitive behaviors and beliefs. This method asks respondents to use a randomization device, such as a coin flip, whose outcome is unobserved by the interviewer. By introducing random noise, the method conceals individual responses and protects respondent privacy. While numerous methodological advances have been made, we find surprisingly few applications of this promising survey technique. In this article, we address this gap by (1) reviewing standard designs available to applied researchers, (2) developing various multivariate regression techniques for substantive analyses, (3) proposing power analyses to help improve research designs, (4) presenting new robust designs that are based on less stringent assumptions than those of the standard designs, and (5) making all described methods available through open-source software. We illustrate some of these methods with an original survey about militant groups in Nigeria.
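The methods in the article are implemented in the R package rr; as a simple illustration of the underlying logic of Warner's original mirrored design, here is a Python sketch. With probability p the respondent answers the sensitive question directly and otherwise answers its negation, so the analyst can invert the known randomization probability to recover prevalence without ever observing any individual's true answer. The prevalence (0.3) and coin probability (0.7) are hypothetical simulation inputs, not values from the article.

```python
import random

def warner_estimate(responses, p):
    """Recover sensitive-trait prevalence from Warner-style mirrored
    randomized responses: observed 'yes' rate lam = p*pi + (1-p)*(1-pi),
    so pi = (lam - (1-p)) / (2p - 1)."""
    lam = sum(responses) / len(responses)  # observed 'yes' rate
    return (lam - (1 - p)) / (2 * p - 1)

# Simulated illustration with hypothetical prevalence 0.3 and p = 0.7.
random.seed(0)
true_pi, p = 0.3, 0.7
responses = []
for _ in range(100_000):
    has_trait = random.random() < true_pi
    ask_direct = random.random() < p  # the coin flip the interviewer never sees
    responses.append(int(has_trait if ask_direct else not has_trait))

print(f"estimated prevalence: {warner_estimate(responses, p):.2f}")
```

The privacy protection comes at a variance cost: because 2p - 1 appears in the denominator, estimates grow noisier as p approaches 1/2, the point of maximum privacy.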
List and endorsement experiments are becoming increasingly popular among social scientists as indirect survey techniques for sensitive questions. When studying issues such as racial prejudice and support for militant groups, these survey methodologies may improve the validity of measurements by reducing nonresponse and social desirability biases. We develop a statistical test and multivariate regression models for comparing and combining the results from list and endorsement experiments. We demonstrate that when carefully designed and analyzed, the two survey experiments can produce substantively similar empirical findings. Such agreement is shown to be possible even when these experiments are applied to one of the most challenging research environments: contemporary Afghanistan. We find that both experiments uncover similar patterns of support for the International Security Assistance Force (ISAF) among Pashtun respondents. Our findings suggest that multiple measurement strategies can enhance the credibility of empirical conclusions. Open-source software is available for implementing the proposed methods.
The validity of empirical research often relies upon the accuracy of self-reported behavior and beliefs. Yet eliciting truthful answers in surveys is challenging, especially when studying sensitive issues such as racial prejudice, corruption, and support for militant groups. List experiments have attracted much attention recently as a potential solution to this measurement problem. Many researchers, however, have used a simple difference-in-means estimator, which prevents the efficient examination of multivariate relationships between respondents’ characteristics and their responses to sensitive items. Moreover, no systematic means exists to investigate the role of underlying assumptions. We fill these gaps by developing a set of new statistical methods for list experiments. We identify the commonly invoked assumptions, propose new multivariate regression estimators, and develop methods to detect and adjust for potential violations of key assumptions. For empirical illustration, we analyze list experiments concerning racial prejudice. Open-source software is made available to implement the proposed methodology.
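The paper's methods are implemented in the R package list; as a minimal illustration of the baseline difference-in-means estimator that the paper's regression estimators improve upon, here is a Python sketch with simulated data. Respondents report only how many items on a list apply to them; the treatment list adds the sensitive item, so the gap in mean counts estimates its prevalence. The item probabilities and the 20% prevalence are hypothetical simulation inputs.

```python
import random
import statistics

def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment: the treatment
    list includes the sensitive item, so the mean gap between groups
    estimates the prevalence of the sensitive trait."""
    return statistics.mean(treatment_counts) - statistics.mean(control_counts)

# Simulated illustration: 3 innocuous items, plus a sensitive item
# held by 20% of respondents in the treatment group.
random.seed(1)
control, treatment = [], []
for _ in range(50_000):
    control.append(sum(random.random() < 0.5 for _ in range(3)))
for _ in range(50_000):
    innocuous = sum(random.random() < 0.5 for _ in range(3))
    sensitive = int(random.random() < 0.2)
    treatment.append(innocuous + sensitive)

print(f"estimated prevalence: {list_experiment_estimate(control, treatment):.2f}")
```

Because only aggregate counts are reported, no individual response reveals the sensitive trait — but, as the paper notes, the simple difference in means cannot efficiently model how the trait varies with respondent characteristics, which motivates the multivariate regression estimators.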
In an effort to assess the generalizability of treatment effects across contexts, scholars (or teams of scholars) are increasingly conducting experiments around the same research questions in multiple country and subnational contexts. In this chapter, we categorize recent and ongoing efforts to conduct cross-context experiments into three types: “uncoordinated,” “coordinated, sequential,” and “coordinated, simultaneous.” We discuss some practical trade-offs across these types, arguing that coordinated cross-context designs offer the most promise for meta-analyses. We then draw attention to four areas in which the current approaches arguably all fall short in facilitating cumulative learning about treatment effects and treatment effect heterogeneity across contexts. We conclude by proposing some ways forward to continue improving our approach to learning about generalizability across contexts.
“Survey methods for sensitive topics.” APSA Comparative Politics Newsletter.
“fabricatr: Imagine Your Data Before You Collect It.” R package. With Jasper Cooper, Alexander Coppock, Macartan Humphreys, Aaron Rudkin, and Neal Fultz.
Web ~29,000 downloads
“list: Statistical Methods for the Item Count Technique and List Experiment.” R package. With Kosuke Imai.
Web ~41,000 downloads
“rr: Statistical Methods for the Randomized Response Technique.” R package. With Yang-Yang Zhou and Kosuke Imai.
Web ~19,000 downloads
I actively advise and collaborate with graduate students in the political science Ph.D. program at UCLA. I meet weekly in the Politics of Order and Development Lab with all of my students. If you think you might be a good fit for the lab, send me an email and we can set up a time to meet. I'm also always happy to talk to other UCLA Ph.D. students even if I'm not advising you (sign up for office hours). Email me if you can't find a time.
Prospective graduate students: You can find information about applying to the UCLA Ph.D. program here. Our department, like most in political science, does not admit students to work with specific faculty. Admissions decisions are made by a committee, on which I do not currently sit. However, you are welcome to mention my name in your personal statement to ensure that your application is sent to me during the admissions process. Following the example of Betsy Paluck, I no longer have personal conversations with prospective students, in order to avoid favoring students who have received advice to connect with faculty or who have connections with my colleagues. If you are admitted, I will be eager to talk about working with you at UCLA.
In preparing your application, I encourage you to read Jessica Calarco's excellent Field Guide to Grad School, as well as the advice from Chris Blattman and Macartan Humphreys. Josh Kertzer has compiled an excellent set of additional writing on applying to social science Ph.D. programs.
POL SCI 50: Comparative Politics (undergraduate lecture). Winter 2021. Draft syllabus
POL SCI 200E: Experimental Design for Social Science (Ph.D. seminar). Next offered Spring 2022. Syllabus
POL SCI 292: Research Design (Ph.D. seminar). Next offered Winter 2022.
POL SCI 240a/b: Comparative Politics Field Seminar (Ph.D. seminar). Fall-Winter 2019-20. Syllabus
Improving Designs in the Social Sciences (Ph.D. workshop), 2016-2018 (Co-convener)
Politics of Order and Development Lab (Ph.D. workshop), 2018- (Co-convener)