The Vietnam draft lottery exposed millions of men to risk of induction at a time when the Vietnam War was becoming increasingly unpopular. We study the long-term effects of draft risk on political attitudes and behaviors of men who were eligible for the draft in 1969–1971. Our 2014–2016 surveys of men who were eligible for the Vietnam draft lotteries reveal no appreciable effect of draft risk across a wide range of political attitudes. These findings are bolstered by analysis of a vast voter registration database, which shows no differences in voting rates or tendency to register with the Democratic or Republican parties. The pattern of weak long-term effects is in line with studies showing that the long-term economic effects of Vietnam draft risk dissipated over time and offers a counterweight to influential observational studies that report long-term persistence in the effects of early experiences on political attitudes.
This paper evaluates the state of contact hypothesis research from a policy perspective. Building on Pettigrew and Tropp's (2006) influential meta-analysis, we assemble all intergroup contact studies that feature random assignment and delayed outcome measures, of which there are 27 in total, nearly two-thirds of which were published following the original review. We find the evidence from this updated dataset to be consistent with Pettigrew and Tropp's (2006) conclusion that contact “typically reduces prejudice.” At the same time, our meta-analysis suggests that contact's effects vary, with interventions directed at ethnic or racial prejudice generating substantially weaker effects. Moreover, our inventory of relevant studies reveals important gaps, most notably the absence of studies addressing adults' racial or ethnic prejudices, an important limitation for both theory and policy. We also call attention to the lack of research that systematically investigates the scope conditions suggested by Allport (1954) under which contact is most influential. We conclude that these gaps in contact research must be addressed empirically before this hypothesis can reliably guide policy.
This study evaluates the turnout effects of direct mail sent in advance of the 2014 New Hampshire Senate election. Registered Republican women were sent up to 10 mailings from a conservative advocacy group that encouraged participation in the upcoming election. We find that mail raises turnout, but no gains are achieved beyond five mailers. This finding is shown to be consistent with other experiments that have sent large quantities of mail. We interpret these results in light of marketing research on repetitive messaging.
Missing outcome data plague many randomized experiments. Common solutions rely on ignorability assumptions that may not be credible in all applications. We propose a method for confronting missing outcome data that makes fairly weak assumptions but can still yield informative bounds on the average treatment effect. Our approach is based on a combination of the double sampling design and nonparametric worst-case bounds. We derive a worst-case bounds estimator under double sampling and provide analytic expressions for variance estimators and confidence intervals. We also propose a method for covariate adjustment using poststratification and a sensitivity analysis for nonignorable missingness. Finally, we illustrate the utility of our approach using Monte Carlo simulations and a placebo-controlled randomized field experiment on the effects of persuasion on social attitudes with survey-based outcome measures.
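The bounding logic described here can be illustrated with a minimal, hypothetical sketch. For an outcome known to lie in [0, 1], a Manski-style worst-case bound on a group mean imputes the missing outcomes at the extremes; under double sampling, outcomes recovered from an intensive follow-up of nonrespondents shrink the missing share and so tighten the bounds. The function names and toy numbers below are illustrative, not the paper's estimator:

```python
def worst_case_bounds(y_obs, n_missing, n_total, lo=0.0, hi=1.0):
    """Worst-case bounds on a group mean when outcomes lie in [lo, hi].

    Missing outcomes are imputed at lo (for the lower bound) and hi
    (for the upper bound); observed outcomes enter at their mean.
    """
    p_miss = n_missing / n_total
    mean_obs = sum(y_obs) / len(y_obs)
    lower = (1 - p_miss) * mean_obs + p_miss * lo
    upper = (1 - p_miss) * mean_obs + p_miss * hi
    return lower, upper


def ate_bounds(treat_bounds, control_bounds):
    """Bounds on the average treatment effect from per-group bounds."""
    t_lo, t_hi = treat_bounds
    c_lo, c_hi = control_bounds
    return t_lo - c_hi, t_hi - c_lo


# Toy example: 4 observed outcomes, 1 still missing out of 5 subjects.
# A double-sampling follow-up that recovered one more outcome would
# lower n_missing and narrow these bounds.
print(worst_case_bounds([1, 1, 0, 1], n_missing=1, n_total=5))
```

Because the bound width is proportional to the missing share, the value of the double-sampling follow-up is direct: each recovered outcome narrows the identified interval.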
Regression discontinuity (RD) designs enable researchers to estimate causal effects using observational data. These causal effects are identified at the point of discontinuity that distinguishes those observations that do or do not receive the treatment. One challenge in applying RD in practice is that data may be sparse in the immediate vicinity of the discontinuity. Expanding the analysis to observations outside this immediate vicinity may improve the statistical precision with which treatment effects are estimated, but including more distant observations also increases the risk of bias. Model specification is another source of uncertainty; as the bandwidth around the cutoff point expands, linear approximations may break down, requiring more flexible functional forms. Using data from a large randomized experiment conducted by Gerber, Green, and Larimer (2008), this study attempts to recover an experimental benchmark using RD and assesses the uncertainty introduced by various aspects of model and bandwidth selection. More generally, we demonstrate how experimental benchmarks can be used to gauge and improve the reliability of RD analyses.
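The basic RD estimator discussed here can be sketched in a few lines: fit a separate linear regression on each side of the cutoff within a chosen bandwidth and take the difference in predicted outcomes at the cutoff. This is a simplified illustration of the bandwidth trade-off, not the specification used in the study (which also explores more flexible functional forms):

```python
def linfit(xs, ys):
    """Ordinary least squares for a single regressor: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b


def rd_estimate(x, y, cutoff, bandwidth):
    """Local linear RD estimate: gap between side-specific fits at the cutoff.

    Widening `bandwidth` adds observations (more precision) but relies on
    the linear approximation holding farther from the cutoff (more bias risk).
    """
    left = [(xi, yi) for xi, yi in zip(x, y) if cutoff - bandwidth <= xi < cutoff]
    right = [(xi, yi) for xi, yi in zip(x, y) if cutoff <= xi <= cutoff + bandwidth]
    aL, bL = linfit([p[0] for p in left], [p[1] for p in left])
    aR, bR = linfit([p[0] for p in right], [p[1] for p in right])
    return (aR + bR * cutoff) - (aL + bL * cutoff)
```

On noiseless data generated as y = 1 + 0.5x + 2·1{x ≥ 0}, this recovers the jump of 2 at the cutoff exactly; with real data, re-running the estimator across a grid of bandwidths is one simple way to see the precision/bias trade-off the abstract describes.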
The debate about the cost-effectiveness of randomized field experimentation ignores one of the most important potential uses of experimental data. This article defines and illustrates “downstream” experimental analysis—that is, analysis of the indirect effects of experimental interventions. We argue that downstream analysis may be as valuable as conventional analysis, perhaps even more so in the case of laboratory experimentation.
Field experiments on voter mobilization enable researchers to test theoretical propositions while at the same time addressing practical questions that confront campaigns. This confluence of interests has led to increasing collaboration between researchers and campaign organizations, which in turn has produced a rapid accumulation of experiments on voting. This new evidence base makes possible translational works such as Get Out the Vote: How to Increase Voter Turnout that synthesize the burgeoning research literature and convey its conclusions to campaign practitioners. However, as political groups develop their own in-house capacity to conduct experiments whose results remain proprietary and may be reported selectively, the accumulation of an unbiased, public knowledge base is threatened. We discuss these challenges and the ways in which research that focuses on practical concerns may nonetheless speak to enduring theoretical questions.
Across the social sciences, growing concerns about research transparency have led to calls for pre-analysis plans (PAPs) that specify in advance how researchers intend to analyze the data they are about to gather. PAPs promote transparency and credibility by helping readers distinguish between exploratory and confirmatory analyses. However, PAPs are time-consuming to write and may fail to anticipate contingencies that arise in the course of data collection. This article proposes the use of “standard operating procedures” (SOPs)—default practices to guide decisions when issues arise that were not anticipated in the PAP. We offer an example of an SOP that can be adapted by other researchers seeking a safety net to support their PAPs.
We report the results of a field experiment conducted in New York City during the 2013 election cycle, examining the impact of nonpartisan messages on donations from small contributors. Using information from voter registration and campaign finance records, we built a forecasting model to identify voters with an above-average probability of donating. A random sample of these voters received one of four messages asking them to donate to a candidate of their choice. Half of these treatments reminded voters that New York City's campaign finance program matches small donations with public funds. Candidates’ financial disclosures to the city's Campaign Finance Board reveal that only the message mentioning policy (in generic terms) increased donations. Surprisingly, reminding voters that matching funds multiplied the value of their contribution had no effect. Our experiment sheds light on the motivations of donors and represents the first attempt to assess nonpartisan appeals to contribute.
A small but growing social science literature examines the correspondence between experimental results obtained in lab and field settings. This article reviews this literature and reanalyzes a set of recent experiments carried out in parallel in both the lab and field. Using a standardized format that calls attention to both the experimental estimates and the statistical uncertainty surrounding them, the study analyzes the overall pattern of lab-field correspondence, which is found to be quite strong (Spearman's ρ = 0.73). Recognizing that this correlation may be distorted by the ad hoc manner in which lab-field comparisons are constructed (as well as the selective manner in which results are reported and published), the article concludes by suggesting directions for future research, stressing in particular the need for more systematic investigation of treatment effect heterogeneity.
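The Spearman correlation reported above is a rank-based statistic; for readers unfamiliar with it, a minimal sketch (ignoring tied values) is:

```python
def spearman_rho(a, b):
    """Spearman rank correlation via the classic formula 1 - 6*sum(d^2)/(n(n^2-1)).

    Assumes no tied values; with ties, the Pearson correlation of the
    (midpoint-adjusted) ranks should be used instead.
    """
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Because only the ranks of the lab and field estimates enter, a ρ of 0.73 indicates that studies yielding larger effects in the lab also tend to yield larger effects in the field, regardless of the scale of those effects.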
Laboratory experiments, survey experiments and field experiments occupy a central and growing place in the discipline of political science. The Cambridge Handbook of Experimental Political Science is the first text to provide a comprehensive overview of how experimental research is transforming the field. Some chapters explain and define core concepts in experimental design and analysis. Other chapters provide an intellectual history of the experimental movement. Throughout the book, leading scholars review groundbreaking research and explain, in personal terms, the growing influence of experimental political science. The Cambridge Handbook of Experimental Political Science provides a collection of insights that can be found nowhere else. Its topics are of interest not just to researchers who are conducting experiments today, but also to researchers who think that experiments can help them make new and important discoveries in political science and beyond.
The experimental study of politics has grown explosively in the past two decades. Part of that explosion takes the form of a dramatic increase in the number of published articles that use experiments. Perhaps less evident, and arguably more important, experimentalists are exploring topics that would have been unimaginable only a few years ago. Laboratory researchers have studied topics ranging from the effects of media exposure (Iyengar and Kinder 1987) to the conditions under which groups solve collective action problems (Ostrom, Walker, and Gardner 1992), and, at times, have identified empirical anomalies that produced new theoretical insights (McKelvey and Palfrey 1992). Some survey experimenters have developed experimental techniques to measure prejudice (Kuklinski, Cobb, and Gilens 1997) and its effects on support for policies such as welfare or affirmative action (Sniderman and Piazza 1995); others have explored the ways in which framing, information, and decision cues influence voters' policy preferences and support for public officials (Druckman 2004; Tomz 2007). And although the initial wave of field experiments focused on the effects of campaign communications on turnout and voters' preferences (Eldersveld 1956; Gerber and Green 2000; Wantchekon 2003), researchers increasingly use field experiments and natural experiments to study phenomena as varied as election fraud (Hyde 2009), representation (Butler and Nickerson 2009), counterinsurgency (Lyall 2009), and interpersonal communication (Nickerson 2008).