Policies that promote the common good may be politically infeasible if legislators representing ‘losing’ constituencies are punished for failing to promote their district's welfare. We investigate how varying the local and aggregate returns to a policy affects voters' support for their incumbent. In our first study, we find that an incumbent who favours a welfare-enhancing policy enjoys a discontinuous jump in support when their district moves from losing to at least breaking even, while the incremental political returns to the district's doing better than breaking even are modest. This feature of voter response, which we replicate, has significant implications for legislative politics generally and, in particular, for how to construct politically feasible social welfare-enhancing policies. In a second study, we investigate the robustness of this finding in a competitive environment in which a challenger can call attention to a legislator's absolute and relative performance in delivering resources to their district.
Political campaigns frequently emphasize the material stakes at play in election outcomes to motivate participation. However, despite theoretical models of turnout and substantial observational work signaling that a contest's perceived importance affects the propensity to vote, field-experimental academic work has given greater attention to other aspects of voters' decisions to participate. We identify two classes of treatments that may increase the material incentive to participate and test these messages in a large-scale placebo-controlled field experiment in which approximately 24,500 treatment letters were delivered during Connecticut's 2013 municipal elections. We find some evidence that these messages increase participation and that some of them may be more effective than typical nonpartisan get-out-the-vote appeals. While these results remain somewhat preliminary, our findings have important implications for our understanding of how voters decide whether to participate and how best to mobilize citizens who would otherwise sit out elections.
Do small wording differences in message-based behavioral interventions have outsized effects on behavior? An influential initial study, examining this question in the domain of political behavior using two small-scale field experiments, argues that subtle linguistic cues in voter mobilization messages describing someone as a voter (noun) rather than as one who votes (verb) dramatically increase turnout rates by activating a person's social identity as a voter. Two subsequent large-scale replication field experiments challenged this claim, finding no effect even in electorally competitive settings. However, those experiments may not have reproduced the psychological context needed to motivate behavioral change because they did not occur in a highly salient electoral contest. Addressing this major criticism, we conduct a large-scale, preregistered replication field experiment in the 2016 presidential election. We find no evidence that noun wording increases turnout compared to verb wording in this highly salient electoral context, even in competitive states.
Doubts about the integrity of ballot secrecy persist and depress political participation among the American public. Prior experiments have shown that official communications directly addressing these doubts increase turnout among recently registered voters who had not previously voted, but evaluations of similar messages sent by nongovernmental campaigns have yielded only suggestive effects. We build on past research and analyze two large-scale field experiments where a private nonpartisan nonprofit group sought to increase turnout by emphasizing ballot secrecy assurances alongside a reminder to vote in a direct mail voter mobilization campaign during the 2014 midterm election. Our main finding is that a private group’s mailing increases turnout by about 1 percentage point among recently registered nonvoters. This finding is precisely estimated and robust across state political contexts, but is not statistically distinguishable from the effect of a standard voter mobilization appeal. Implications and directions for future research are discussed.
Missing outcome data plague many randomized experiments. Common solutions rely on ignorability assumptions that may not be credible in all applications. We propose a method for confronting missing outcome data that makes fairly weak assumptions but can still yield informative bounds on the average treatment effect. Our approach is based on a combination of the double sampling design and nonparametric worst-case bounds. We derive a worst-case bounds estimator under double sampling and provide analytic expressions for variance estimators and confidence intervals. We also propose a method for covariate adjustment using poststratification and a sensitivity analysis for nonignorable missingness. Finally, we illustrate the utility of our approach using Monte Carlo simulations and a placebo-controlled randomized field experiment on the effects of persuasion on social attitudes with survey-based outcome measures.
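To make the logic concrete, here is a minimal Python sketch of worst-case (Manski-style) bounds on the average treatment effect when a random follow-up subsample of initially missing cases is re-interviewed under a double-sampling design. It assumes a binary outcome, omits the variance estimators, confidence intervals, poststratification, and sensitivity analysis described above, and uses function and variable names of our own rather than the article's notation.

```python
import numpy as np

def worst_case_bounds(y, d, missing, y_followup=None, followed=None):
    """
    Illustrative worst-case bounds on the ATE for a binary outcome in [0, 1].
    Outcomes recovered in the double-sampling follow-up are filled in; cases
    that remain missing are imputed at the extremes of the outcome's support.
    This is a sketch of the design's logic, not the article's estimator.

    y          : observed outcomes (np.nan where missing)
    d          : treatment indicator (1 = treated, 0 = control)
    missing    : 1 if the outcome was missing in the initial survey wave
    y_followup : outcomes recovered for the follow-up subsample (optional)
    followed   : 1 if an initially missing case was re-interviewed (optional)
    """
    y = np.asarray(y, float).copy()
    d, missing = np.asarray(d), np.asarray(missing)
    if followed is not None:
        y[followed == 1] = np.asarray(y_followup, float)[followed == 1]
        still_missing = (missing == 1) & (followed == 0)
    else:
        still_missing = missing == 1

    def arm_bounds(arm):
        obs = (d == arm) & ~still_missing
        mis = (d == arm) & still_missing
        n = (d == arm).sum()
        total = y[obs].sum()
        return total / n, (total + mis.sum()) / n  # impute missing as 0, then as 1

    lo1, hi1 = arm_bounds(1)
    lo0, hi0 = arm_bounds(0)
    return lo1 - hi0, hi1 - lo0  # bounds on E[Y(1)] - E[Y(0)]
```

Follow-up interviews tighten the bounds by shrinking the set of cases that must be imputed at the extremes, which is the intuition behind combining double sampling with worst-case bounds.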
If the publication decisions of journals are a function of the statistical significance of research findings, the published literature may suffer from “publication bias.” This paper describes a method for detecting publication bias. We point out that to achieve statistical significance, the effect size must be larger in small samples. If publications tend to be biased against statistically insignificant results, we should observe that the effect size diminishes as sample sizes increase. This proposition is tested and confirmed using the experimental literature on voter mobilization.
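The diagnostic can be sketched in a few lines: regress reported effect sizes on 1/sqrt(n), the scale of a study's standard error, and look for a positive slope. The sketch below is an illustrative implementation of the paper's intuition rather than its exact procedure; the use of statsmodels and the variable names are our assumptions.

```python
import numpy as np
import statsmodels.api as sm

def publication_bias_test(effects, ns):
    """
    If journals select on statistical significance, smaller studies must report
    larger effects to clear the significance bar, so reported effect sizes should
    decline as sample sizes grow. Regress effect sizes on 1 / sqrt(n); a positive,
    significant slope is consistent with publication bias.
    """
    X = sm.add_constant(1.0 / np.sqrt(np.asarray(ns, float)))
    fit = sm.OLS(np.asarray(effects, float), X).fit()
    return fit.params[1], fit.pvalues[1]  # slope on 1/sqrt(n) and its p-value
```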
Randomized experiments commonly compare subjects receiving a treatment to subjects receiving a placebo. An alternative design, frequently used in field experimentation, compares subjects assigned to an untreated baseline group to subjects assigned to a treatment group, adjusting statistically for the fact that some members of the treatment group may fail to receive the treatment. This article shows the potential advantages of a three-group design (baseline, placebo, and treatment). We present a maximum likelihood estimator of the treatment effect for this three-group design and illustrate its use with a field experiment that gauges the effect of prerecorded phone calls on voter turnout. The three-group design offers efficiency advantages over two-group designs while at the same time guarding against unanticipated placebo effects (which would undermine the placebo-treatment comparison) and unexpectedly low rates of compliance with the treatment assignment (which would undermine the baseline-treatment comparison).
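A hedged sketch of the design's logic follows: it computes the placebo-treatment comparison among contacted subjects, the baseline-treatment comparison scaled by the contact rate, and a simple precision-weighted combination of the two. This is not the article's maximum likelihood estimator, only an illustration of why the three-group design can gain efficiency; the variable names and the delta-method variance approximation are ours.

```python
import numpy as np

def three_group_estimates(y, group, contacted):
    """
    Two complier-average-effect estimates available in a baseline/placebo/
    treatment design, plus a precision-weighted combination (illustrative only).

    y         : outcome (e.g., turnout, 0/1)
    group     : 'baseline', 'placebo', or 'treatment'
    contacted : 1 if the subject was actually reached (placebo/treatment groups)
    """
    y, g, c = np.asarray(y, float), np.asarray(group), np.asarray(contacted, float)

    # (1) Placebo-treatment comparison: compare contacted (complier) subjects directly.
    yt, yp = y[(g == "treatment") & (c == 1)], y[(g == "placebo") & (c == 1)]
    est_pt = yt.mean() - yp.mean()
    var_pt = yt.var(ddof=1) / len(yt) + yp.var(ddof=1) / len(yp)

    # (2) Baseline-treatment comparison: intent-to-treat effect scaled by the contact rate.
    itt = y[g == "treatment"].mean() - y[g == "baseline"].mean()
    contact_rate = c[g == "treatment"].mean()
    est_bt = itt / contact_rate
    var_bt = (y[g == "treatment"].var(ddof=1) / (g == "treatment").sum()
              + y[g == "baseline"].var(ddof=1) / (g == "baseline").sum()) / contact_rate**2

    # (3) Weight the two estimates by their precision.
    w = (1 / var_pt) / (1 / var_pt + 1 / var_bt)
    return est_pt, est_bt, w * est_pt + (1 - w) * est_bt
```

The combined estimate draws on both comparisons, which is the source of the efficiency gain the article formalizes, while the two separate estimates remain available as checks against placebo effects or low compliance.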
In the social sciences, randomized experimentation is the optimal research design for establishing causation. However, for a number of practical reasons, researchers are sometimes unable to conduct experiments and must rely on observational data. In an effort to develop estimators that can approximate experimental results using observational data, scholars have given increasing attention to matching. In this article, we test the performance of matching by gauging the success with which matching approximates experimental results. The voter mobilization experiment presented here comprises a large number of observations (60,000 randomly assigned to the treatment group and nearly two million assigned to the control group) and a rich set of covariates. This study is analyzed in two ways. The first method, instrumental variables estimation, takes advantage of random assignment in order to produce consistent estimates. The second method, matching estimation, ignores random assignment and analyzes the data as though they were nonexperimental. Matching is found to produce biased results in this application because even a rich set of covariates is insufficient to control for preexisting differences between the treatment and control groups. Matching, in fact, produces estimates that are no more accurate than those generated by ordinary least squares regression. The experimental findings show that brief paid get-out-the-vote phone calls do not increase turnout, while matching and regression show a large and significant effect.
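For readers who want the contrast in code, the sketch below pairs a Wald-style instrumental variables estimate, which uses random assignment as an instrument for contact, with a nearest-neighbor matching estimate on covariates, which ignores assignment. The scikit-learn call and variable names are our assumptions; the article's own matching procedures are considerably more elaborate.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def wald_iv_estimate(y, z, d):
    """Complier effect using random assignment z as an instrument for contact d."""
    y, z, d = np.asarray(y, float), np.asarray(z), np.asarray(d, float)
    itt = y[z == 1].mean() - y[z == 0].mean()       # intent-to-treat effect
    take_up = d[z == 1].mean() - d[z == 0].mean()   # first stage: contact rate
    return itt / take_up

def matching_estimate(y, d, X, k=1):
    """
    Nearest-neighbor matching on covariates X, ignoring random assignment,
    as a sketch of the nonexperimental analysis the article evaluates.
    Returns an estimate of the average effect on the treated.
    """
    y, d, X = np.asarray(y, float), np.asarray(d), np.asarray(X, float)
    nn = NearestNeighbors(n_neighbors=k).fit(X[d == 0])
    _, idx = nn.kneighbors(X[d == 1])               # indices of matched controls
    matched_control = y[d == 0][idx].mean(axis=1)
    return (y[d == 1] - matched_control).mean()
```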
Regression discontinuity (RD) designs enable researchers to estimate causal effects using observational data. These causal effects are identified at the point of discontinuity that distinguishes those observations that do or do not receive the treatment. One challenge in applying RD in practice is that data may be sparse in the immediate vicinity of the discontinuity. Expanding the analysis to observations outside this immediate vicinity may improve the statistical precision with which treatment effects are estimated, but including more distant observations also increases the risk of bias. Model specification is another source of uncertainty; as the bandwidth around the cutoff point expands, linear approximations may break down, requiring more flexible functional forms. Using data from a large randomized experiment conducted by Gerber, Green, and Larimer (2008), this study attempts to recover an experimental benchmark using RD and assesses the uncertainty introduced by various aspects of model and bandwidth selection. More generally, we demonstrate how experimental benchmarks can be used to gauge and improve the reliability of RD analyses.
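As an illustration of the choices involved, here is a minimal local-linear RD estimator in Python: within a chosen bandwidth around the cutoff, it regresses the outcome on a treatment indicator, the centered running variable, and their interaction. The bandwidth and functional form are exactly the modeling decisions whose consequences the study assesses; the implementation itself is our sketch, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def rd_estimate(y, running, cutoff, bandwidth):
    """
    Local-linear RD sketch: restrict to observations within `bandwidth` of the
    cutoff, fit separate slopes on each side via an interaction, and read the
    treatment effect at the cutoff off the treatment coefficient.
    """
    x = np.asarray(running, float) - cutoff
    in_window = np.abs(x) <= bandwidth
    t = (x >= 0).astype(float)                      # treatment assigned at/above the cutoff
    X = sm.add_constant(np.column_stack([t, x, t * x])[in_window])
    fit = sm.OLS(np.asarray(y, float)[in_window], X).fit()
    return fit.params[1], fit.bse[1]                # RD estimate and its standard error
```

Re-running such an estimator across a range of bandwidths and more flexible polynomial specifications, and comparing the results to an experimental benchmark, is the kind of exercise the study uses to gauge how sensitive RD conclusions are to these choices.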
The debate about the cost-effectiveness of randomized field experimentation ignores one of the most important potential uses of experimental data. This article defines and illustrates “downstream” experimental analysis—that is, analysis of the indirect effects of experimental interventions. We argue that downstream analysis may be as valuable as conventional analysis, perhaps even more so in the case of laboratory experimentation.