
How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns’ Own Experiments

Published online by Cambridge University Press:  08 February 2024

LUKE HEWITT*
Affiliation:
Stanford University, United States
DAVID BROOCKMAN*
Affiliation:
University of California, Berkeley, United States
ALEXANDER COPPOCK*
Affiliation:
Yale University, United States
BEN M. TAPPIN*
Affiliation:
Royal Holloway, University of London, United Kingdom
JAMES SLEZAK*
Affiliation:
Swayable, United States
VALERIE COFFMAN*
Affiliation:
Swayable, United States
NATHANIEL LUBIN*
Affiliation:
Cornell Tech, United States, and Incite Studio, United States
MOHAMMAD HAMIDIAN*
Affiliation:
Swayable, United States
Luke Hewitt, Senior Research Fellow, Polarization and Social Change Lab, Stanford University, lbh@stanford.edu.
Corresponding author: David Broockman, Associate Professor, Travers Department of Political Science, University of California, Berkeley, United States, dbroockman@berkeley.edu.
Alexander Coppock, Associate Professor on Term, Department of Political Science, Yale University, United States, alex.coppock@yale.edu.
Ben M. Tappin, Research Fellow, Department of Psychology, Royal Holloway, University of London, United Kingdom, benmtappin@googlemail.com.
James Slezak, Co-Founder, Swayable, United States, james@swayable.com.
Valerie Coffman, Co-Founder, Swayable, United States, valerie@swayable.com.
Nathaniel Lubin, Visiting Fellow, Digital Life Initiative, Cornell Tech, United States, and Founder and CEO, Incite Studio, United States, nate@incite.studio.
Mohammad Hamidian, Senior Experimental Scientist, Swayable, United States, mhh32@cornell.edu.

Abstract

Political campaigns increasingly conduct experiments to learn how to persuade voters. Little research has considered the implications of this trend for elections or democracy. To probe these implications, we analyze a unique archive of 146 advertising experiments conducted by US campaigns in 2018 and 2020 using the platform Swayable. This archive includes 617 advertisements produced by 51 campaigns and tested with over 500,000 respondents. Importantly, we analyze the complete archive, avoiding publication bias. We find small but meaningful variation in the persuasive effects of advertisements. In addition, we find that common theories about what makes advertising persuasive have limited and context-dependent power to predict persuasiveness. These findings indicate that experiments can compound money’s influence in elections: it is difficult to predict ex ante which ads persuade; experiments help campaigns do so; but the gains from these findings principally accrue to campaigns well-financed enough to deploy these ads at scale.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association
Table 1. Summary of All Three Datasets

Table 2. Tests for (Residual) Heterogeneity

Figure 1. Treatment Effect Estimates by Outcome and Time to Election. Note: Left: Unpooled treatment effect estimates on vote choice and candidate favorability, arranged chronologically by date of study. Within each column, each point shows the ATE for a unique treatment, with 95% confidence intervals. Right: Meta-analytic estimate of mean across all treatments ($ \widehat{\mu} $), with standard errors. 95% confidence intervals are plotted but are too narrow to be visible. For full model specifications, see Dataverse Appendix Table DA4.

Figure 2. Estimated Distribution of ATEs in Each Set of Experiments, after Accounting for Measurement Noise. Note: The figure shows the estimated distribution of true average treatment effects in each set of experiments (“Estimated distribution”), after accounting for sampling variability. The estimated treatment effects of each individual ad (“Raw estimates”) are plotted in gray. The meta-analytic estimate for the standard deviation in ATEs ($ \widehat{\tau} $) is given with 95% confidence intervals. For full model specifications, see Dataverse Appendix Table DA4.

Figure 3. t-Statistics for All Pre-Registered Meta-Regressions. Note: Each row corresponds to one hypothesis and each column corresponds to one dataset. The cells record the t-statistics from the meta-regressions testing each hypothesis in each dataset. These values also map to the cell colors, which range from purple (most positive values), through white (zero), to orange (most negative values). Full model specifications are provided in Dataverse Appendix DA2.

Table 3. Values Used in Simulations of Ad-Testing Impact in Figure 4. Values Are Based on Assumptions about a Typical Competitive US Senate Campaign

Figure 4. The Estimated Returns to Experimentation. Note: See Appendix C of the Supplementary Material for further details on estimation.

Supplementary material: Hewitt et al. supplementary material (File, 955.3 KB)