Scholars of political behavior increasingly embed experimental designs in opinion surveys by randomly assigning respondents alternative versions of questionnaire items. Such experiments have major advantages: they are simple to implement and they dodge some of the difficulties of making inferences from conventional survey data. But survey experiments are no panacea. We identify problems of inference associated with typical uses of survey experiments in political science and highlight a range of difficulties, some of which have straightforward solutions within the survey-experimental approach and some of which can be dealt with only by exercising greater caution in interpreting findings and bringing to bear alternative strategies of research.
Laboratory experiments, survey experiments and field experiments occupy a central and growing place in the discipline of political science. The Cambridge Handbook of Experimental Political Science is the first text to provide a comprehensive overview of how experimental research is transforming the field. Some chapters explain and define core concepts in experimental design and analysis. Other chapters provide an intellectual history of the experimental movement. Throughout the book, leading scholars review groundbreaking research and explain, in personal terms, the growing influence of experimental political science. The Cambridge Handbook of Experimental Political Science provides a collection of insights that can be found nowhere else. Its topics are of interest not just to researchers who are conducting experiments today, but also to researchers who think that experiments can help them make new and important discoveries in political science and beyond.
Citizens and Politics: Perspectives from Political Psychology brings together some of the research on citizen decision making. It addresses the questions of citizen political competence from different political psychology perspectives. Some of the authors in this volume look to affect and emotions to determine how people reach political judgements, others to human cognition and reasoning. Still others focus on perceptions or basic political attitudes such as political ideology. Several demonstrate the impact of values on policy preferences. The collection features chapters from some of the most talented political scientists in the field.
The experimental study of politics has grown explosively in the past two decades. Part of that explosion takes the form of a dramatic increase in the number of published articles that use experiments. Perhaps less evident, and arguably more important, experimentalists are exploring topics that would have been unimaginable only a few years ago. Laboratory researchers have studied topics ranging from the effects of media exposure (Iyengar and Kinder 1987) to the conditions under which groups solve collective action problems (Ostrom, Walker, and Gardner 1992), and, at times, have identified empirical anomalies that produced new theoretical insights (McKelvey and Palfrey 1992). Some survey experimenters have developed experimental techniques to measure prejudice (Kuklinski, Cobb, and Gilens 1997) and its effects on support for policies such as welfare or affirmative action (Sniderman and Piazza 1995); others have explored the ways in which framing, information, and decision cues influence voters' policy preferences and support for public officials (Druckman 2004; Tomz 2007). And although the initial wave of field experiments focused on the effects of campaign communications on turnout and voters' preferences (Eldersveld 1956; Gerber and Green 2000; Wantchekon 2003), researchers increasingly use field experiments and natural experiments to study phenomena as varied as election fraud (Hyde 2009), representation (Butler and Nickerson 2009), counterinsurgency (Lyall 2009), and interpersonal communication (Nickerson 2008).
Within the prevailing Fisher-Neyman-Rubin framework of causal inference, causal effects are defined as comparisons of potential outcomes under different treatments. In most contexts, it is impossible or impractical to observe multiple outcomes (realizations of the variable of interest) for any given unit. Given this fundamental problem of causal inference (Holland 1986), experimentalists approximate the hypothetical treatment effect by comparing averages of groups or, sometimes, averages of differences of matched cases. Hence, they often use (Ȳ | t = 1) − (Ȳ | t = 0) to estimate E[(Y_i | t = 1) − (Y_i | t = 0)], labeling the former quantity the treatment effect or, more accurately, the average treatment effect.
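To make the estimator concrete, the following short simulation is an illustrative sketch, not drawn from any of the studies cited here; the variable names and simulated data are hypothetical. It generates a pair of potential outcomes for each unit, randomly assigns treatment so that only one outcome is observed per unit, and compares the difference of group averages to the true average treatment effect.

```python
# Illustrative sketch of the difference-in-means estimator of the ATE.
# All names and simulated values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Potential outcomes for each unit: Y_i under control and under treatment.
y0 = rng.normal(loc=50, scale=10, size=n)       # Y_i | t = 0
y1 = y0 + rng.normal(loc=5, scale=2, size=n)    # Y_i | t = 1 (effects vary by unit)

true_ate = np.mean(y1 - y0)                     # E[(Y_i | t = 1) - (Y_i | t = 0)]

# Randomly assign half of the units to treatment; only one outcome is observed per unit.
treated = rng.permutation(n) < n // 2
observed = np.where(treated, y1, y0)

# Difference of group averages: (Ybar | t = 1) - (Ybar | t = 0)
estimate = observed[treated].mean() - observed[~treated].mean()
print(f"true ATE = {true_ate:.2f}, difference-in-means estimate = {estimate:.2f}")
```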
The rationale for substituting group averages originates in the logic of the random assignment experiment: each unit has different potential outcomes; units are randomly assigned to one treatment or another; and, in expectation, control and treatment groups should be identically distributed. To make causal inferences in this manner requires that one unit's outcomes not be affected by another unit's treatment assignment. This requirement has come to be known as the stable unit treatment value assumption.
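The logic of "identically distributed in expectation" can be illustrated with another hypothetical simulation (again a sketch under assumed data, not an analysis from the text): holding each unit's potential outcomes fixed and re-randomizing the assignment many times, the difference of group averages centers on the true average treatment effect.

```python
# Illustrative sketch: across repeated random assignments, the difference of
# group averages is centered on the true ATE. Simulated data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 500

y0 = rng.normal(50, 10, size=n)        # fixed potential outcomes under control
y1 = y0 + rng.normal(5, 2, size=n)     # fixed potential outcomes under treatment
true_ate = np.mean(y1 - y0)

estimates = []
for _ in range(2000):                  # re-randomize the assignment many times
    treated = rng.permutation(n) < n // 2
    observed = np.where(treated, y1, y0)
    estimates.append(observed[treated].mean() - observed[~treated].mean())

print(f"true ATE = {true_ate:.2f}, "
      f"mean estimate over re-randomizations = {np.mean(estimates):.2f}")
```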
Until recently, experimenters have reported average treatment effects as a matter of routine. Unfortunately, this difference of averages often masks as much as it reveals. Most crucially, it ignores heterogeneity in treatment effects, whereby the treatment affects (or would affect if it were actually experienced) some units differently from others.
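A simple hypothetical case shows how the average can mask heterogeneity: if the treatment raises the outcome in one subgroup and lowers it by a comparable amount in another, the overall difference of averages sits near zero even though the treatment matters a great deal for everyone. The sketch below assumes such a two-subgroup setup with made-up data; the subgroup labels and effect sizes are illustrative only.

```python
# Illustrative sketch of heterogeneous treatment effects masked by the average:
# two hypothetical subgroups with opposite effects of equal size.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
group_a = np.arange(n) < n // 2            # hypothetical subgroup indicator

y0 = rng.normal(50, 10, size=n)
effects = np.where(group_a, 8.0, -8.0)     # +8 in subgroup A, -8 in subgroup B
y1 = y0 + effects

treated = rng.permutation(n) < n // 2
observed = np.where(treated, y1, y0)

overall = observed[treated].mean() - observed[~treated].mean()
within_a = observed[treated & group_a].mean() - observed[~treated & group_a].mean()
within_b = observed[treated & ~group_a].mean() - observed[~treated & ~group_a].mean()
print(f"overall estimate ~ {overall:.2f} (roughly zero), "
      f"subgroup A ~ {within_a:.2f}, subgroup B ~ {within_b:.2f}")
```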
In his 1909 American Political Science Association presidential address, A. Lawrence Lowell (1910) advised the fledgling discipline against following the model of the natural sciences: “We are limited by the impossibility of experiment. Politics is an observational, not an experimental science…” (7). The lopsided ratio of observational to experimental studies in political science, over the one hundred years since Lowell's statement, arguably affirms his assessment. The next hundred years are likely to be different. The number and influence of experimental studies are growing rapidly as political scientists discover ways of using experimental techniques to illuminate political phenomena.
The growing interest in experimentation reflects the increasing value that the discipline places on causal inference and empirically guided theoretical refinement. Experiments facilitate causal inference through the transparency and content of their procedures, most notably the random assignment of observations (a.k.a. subjects or experimental participants) to treatment and control groups. Experiments also guide theoretical development by providing a means for pinpointing the effects of institutional rules, preference configurations, and other contextual factors that might be difficult to assess using other forms of inference. Most of all, experiments guide theory by providing stubborn facts – that is, reliable information about cause and effect that inspires and constrains theory.