Social scientists often compare survey responses before and after important events to test how those events impact respondent beliefs, attitudes, and preferences. This article offers a formal analysis of such pre-event/post-event survey comparisons, including designs that seek to reduce bias using quota sampling, rolling cross-sections, and panels. Our analysis distinguishes major sources of bias and clarifies the comparative strengths and weaknesses of each approach. We then introduce a modified panel design—the dual randomized survey—to reduce bias in cases where asking respondents to complete the same survey twice could impact their Wave 2 responses. Our formalization of bias and novel research design improve scholars’ ability to study the causal impact of events through surveys.
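As a hedged illustration of the kind of bias this abstract formalizes, the sketch below simulates sample-composition drift between waves; the data-generating process and all parameter values are my own assumptions, not the authors' model:

```python
# Minimal simulation sketch of one bias source in pre/post comparisons:
# sample-composition drift between waves. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

engagement = rng.normal(0.0, 1.0, n)   # latent trait: drives outcome and response
true_effect = 0.20                     # assumed causal effect of the event
y_pre = 0.5 * engagement + rng.normal(0.0, 1.0, n)
y_post = y_pre + true_effect

# Pre-event wave: a simple random sample. Post-event wave: the event
# mobilizes engaged people to respond at higher rates (composition drift).
pre = rng.random(n) < 0.05
post = rng.random(n) < np.where(engagement > 0, 0.08, 0.03)

naive = y_post[post].mean() - y_pre[pre].mean()   # cross-section comparison
panel = (y_post - y_pre)[pre].mean()              # within-person change

print(f"true {true_effect:.2f} | naive {naive:.2f} | panel {panel:.2f}")
# The naive comparison overstates the effect because the post wave
# over-represents high-engagement (high-y) respondents; the panel does not.
```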
In this note, we offer a cautionary tale on the dangers of drawing inferences from low-quality online survey datasets. We reanalyze and replicate a survey experiment studying the effect of acquiescence bias on estimates of conspiratorial beliefs and political misinformation. Correcting a minor data coding error yields a puzzling result: respondents with a postgraduate education appear to be the most prone to acquiescence bias. We conduct two preregistered replication studies to better understand this finding. In our first replication, conducted using the same survey platform as the original study, we find a nearly identical set of results. But in our second replication, conducted with a larger and higher-quality survey panel, this apparent effect disappears. We conclude that the observed relationship was an artifact of inattentive and fraudulent responses in the original survey panel, and that attention checks alone do not fully resolve the problem. This demonstrates how “survey trolls” and inattentive respondents on low-quality survey platforms can generate spurious and theoretically confusing results.
This study examines Hispanic formal volunteering and the cultural, social, and community context factors that affect Hispanics' decision to volunteer. Using data from three surveys in the United States, the study finds that religious attendance, cultural background, and education are the most consistent and significant predictors of Hispanic formal volunteering. Religious attendance has a stronger positive impact on volunteering among Hispanics than among non-Hispanics. The impacts of income, social resources, and community characteristics on Hispanics' volunteering vary across surveys. Secular organizations serving children and youth, along with religious organizations, are the organizations Hispanic volunteers favor most.
Individuals seek food assistance from both government and nongovernmental services. The theory of voluntary sector failure asserts that governments act where nonprofits fail. How, if at all, does this theoretical approach provide insight into basic food provision through food pantries? We conducted a single embedded case study of one network of food pantry programs (n = 298) in one county of approximately 3 million people in the southwest of the USA. We used a survey (n = 137) to measure programmatic and organizational capacity and, then, categorized capacity measures into the four failure categories identified in the theory of voluntary sector failure. We found moderate to strong evidence of failure in all of the categories; however, we also argue that these data do not necessarily support the theory of voluntary sector failure. Two key nuances are identified.
What shapes attitudes toward wartime negotiation? Does exposure to violence lead citizens to take a hard-line approach to any peace settlements? Or does it make them more open to peace to make the violence stop? To answer these questions, we conducted a series of surveys and survey experiments in Ukraine in July 2022 and May 2023. First, using a series of survey experiments, we show that Ukrainians are flexible on certain issues, but others are considered red lines and not up for negotiation. Second, in the short term, we find that exposure to violence does not turn Ukrainians against negotiations with Russia; in some cases, it makes them more amenable. Finally, over a longer duration of the war, we find that support for a negotiated solution drops. Our evidence suggests this drop is linked to exposure to violence and to beliefs about the war's future course.
The partisan gap in economic perceptions flipped with unusual abruptness after the 2024 U.S. presidential election: following the Republican victory, Democrats (Republicans) suddenly rated the economy much more negatively (positively). Was the resulting partisan difference a case of expressive responding, wherein surveys exaggerate partisan bias in measures of economic perceptions? In April 2025, I fielded a panel survey experiment that asked survey respondents to guess then-unpublished measures of economic growth, inflation, and unemployment in the current month or quarter (Prolific, N = 2,831). Randomly selected respondents were offered $2 per correct answer. Partisan bias did not shrink as a result, suggesting genuine differences in economic perceptions. Two measures of response effort (response time and looking up answers) did increase, however, suggesting that misreporting does not fully explain the effects of pay-for-correct treatments.
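The design's core contrast can be sketched as a difference-in-differences on the partisan gap. The simulation below is illustrative only; variable names, effect sizes, and the null incentive effect are assumptions chosen to mirror the pattern the abstract reports:

```python
# Hedged sketch of a pay-for-correct contrast: does paying for accurate
# answers shrink the partisan gap in economic guesses? Simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
rep = rng.random(n) < 0.5      # party ID (True = Republican)
paid = rng.random(n) < 0.5     # randomized $2-per-correct-answer arm

# Guess of, e.g., inflation: a shared signal plus a partisan shift that,
# matching the abstract's finding, does NOT respond to the incentive.
partisan_shift = np.where(rep, -0.5, +0.5)   # assumed: Democrats guess higher
guess = 3.0 + partisan_shift + rng.normal(0, 1, n)

gap = lambda mask: guess[mask & ~rep].mean() - guess[mask & rep].mean()
did = gap(paid) - gap(~paid)   # difference-in-differences on the gap
print(f"gap control {gap(~paid):.2f} | gap paid {gap(paid):.2f} | DiD {did:.2f}")
# A DiD near zero is the reported pattern: incentives do not shrink the
# gap, consistent with genuine differences in economic perceptions.
```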
Emotions and their sociopolitical impact have received increasing scholarly attention. However, it remains largely unclear whether emotional expression within surveys is subject to social desirability bias. By drawing on impression management theory and the disclosure decision model, I argue that emotional expression is likely prone to social desirability bias in interviewer-administered survey modes and test my hypotheses on mixed-mode ANES data. The findings demonstrate that respondents significantly underreport negative emotions—anger and fear—when interviewed face-to-face as compared to online. Furthermore, positive emotions, such as hope and pride, are not exempt from biased reporting related to interview mode. These results highlight the risks of estimating emotions and their salience by either relying on interviewer administration or combining survey modes.
Voters’ issue preferences are key determinants of vote choice, making it essential to reduce measurement error in responses to issue questions in surveys. This study uses a MultiTrait MultiError approach to assess the data quality of issue questions by separating four sources of variation: trait, acquiescence, method, and random error. The questions generally achieved moderate data quality, with 76% on average representing valid variance. Random error made up the largest proportion of error (23%). Error due to method and acquiescence was small. We found that 5-point scales are generally better than 11-point scales, while answers by respondents with lower political sophistication achieved lower data quality. The findings indicate a need to focus on decreasing random error when studying issue positions.
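As a rough illustration of such a decomposition, the snippet below computes variance shares from assumed component values chosen to echo the abstract's averages; these are not the study's estimates:

```python
# Illustrative arithmetic for a multitrait-multierror style decomposition:
# data quality is the share of response variance that is valid (trait)
# variance. Component values below are assumptions, not the study's.
def quality_shares(trait, acquiescence, method, random_error):
    total = trait + acquiescence + method + random_error
    return {name: var / total for name, var in [
        ("trait (valid)", trait), ("acquiescence", acquiescence),
        ("method", method), ("random error", random_error)]}

for name, share in quality_shares(0.76, 0.004, 0.006, 0.23).items():
    print(f"{name}: {share:.1%}")
# trait (valid): 76.0% and random error: 23.0%, with method and
# acquiescence contributing only marginally -- the abstract's pattern.
```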
Travel accounts provide both benefits and challenges to survey archaeologists. This article presents a case study, generated by the Vayots Dzor Silk Road Survey, which aims to reconstruct the medieval (tenth to fifteenth centuries AD) landscape of Vayots Dzor in the Republic of Armenia, ‘excavating’ literary accounts of its landscape. Knowledge of this region in the Middle Ages is dominated by a core text written in the thirteenth century by Bishop Step’anos Orbelyan. From the mid-nineteenth century onwards, the region was visited by travellers who found links between the places they visited, the inscriptions they recorded, and the events and locations attested in Orbelyan’s text. Through examples from the site list of the Vayots Dzor Silk Road Survey, the authors explore how these and other sources accumulate, creating local knowledge about places that inform archaeologists and heritage professionals. They argue for reflection on the ways that local memory, archaeology, and the physical landscape inform complex makings of place.
Panel surveys and phone-based data collection are essential for survey research and are often used together due to the practical advantages of conducting repeated interviews over the phone. These tools are particularly critical for research in dynamic or high-risk settings, as highlighted by researchers’ responses to the COVID-19 pandemic. However, preventing high attrition is a major challenge in panel surveys. Current solutions in political science focus on statistical fixes to address attrition ex-post but often overlook a preferable solution: minimizing attrition in the first place. Building on a review of political science panel studies and established best practices, we propose a framework to reduce attrition and introduce an online platform to facilitate the logistics of survey implementation. The web application semi-automates survey call scheduling and enumerator workflows, helping to reduce panel attrition, improve data quality, and minimize enumerator errors. Using this framework in a panel study of Syrian refugees in Lebanon, we maintained participant retention at 63 percent four and a half years after the baseline survey. We provide guidelines for researchers to report panel studies transparently and describe their designs in detail.
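The platform itself is described only at a high level, so the toy sketch below merely illustrates one task such a tool might semi-automate, scheduling callbacks with a backoff and an attempt cap; the data model and rules are invented for illustration:

```python
# Toy callback scheduler of the kind a panel-survey platform might
# semi-automate. Purely illustrative; not the authors' web application.
from dataclasses import dataclass
from datetime import datetime, timedelta
import heapq

@dataclass(order=True)
class Callback:
    due: datetime
    respondent_id: str = ""
    attempts: int = 0

queue: list[Callback] = []

def schedule(respondent_id: str, attempts: int, now: datetime) -> None:
    """Re-queue an unreached respondent with a growing delay,
    capping attempts so enumerators stop auto-retrying after 5 tries."""
    if attempts >= 5:
        return  # would be flagged for supervisor review instead
    delay = timedelta(hours=4 * (attempts + 1))
    heapq.heappush(queue, Callback(now + delay, respondent_id, attempts + 1))

now = datetime(2024, 1, 8, 9, 0)
schedule("R-1042", attempts=0, now=now)
schedule("R-0007", attempts=2, now=now)
next_call = heapq.heappop(queue)   # earliest due callback first
print(next_call.respondent_id, next_call.due)
```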
Survey research is a method commonly used to understand what members of a population think, feel, and do. This chapter uses the total survey error perspective and the fitness for use perspective to explore how biasing and variable errors occur in surveys. Coverage error and sample frames, nonprobability samples and web panels, sampling error, nonresponse rates and nonresponse bias, and sources of measurement error are discussed. Different pretesting methods and modes of data collection commonly used in surveys are described. The chapter concludes that survey research is a tool that social psychologists may use to improve the generalizability of studies, to evaluate how different populations react to different experimental conditions, and to understand patterns in outcomes that may vary over time, place, or people.
As survey experiments have become increasingly common in political science, some scholars have questioned whether inferences about the real world can be drawn from experiments involving hypothetical, text-based scenarios. In response to this criticism, some researchers recommend using realistic, context-heavy vignettes, while others argue that abstract vignettes do not generate substantially different results. We contribute to this debate by evaluating whether incorporating contextually realistic graphics into survey experiment vignettes affects experimental outcomes. We field three original experiments that vary whether respondents are shown a realistic graphic or a plain text description of an international crisis. In our experiments, varying whether respondents are shown realistic graphics or plain text descriptions generally yields little difference in outcomes. Our findings have implications for survey methodology and experiments in political science: researchers may not need to invest the time to develop contextually realistic graphics when designing experiments.
This paper surveys what we have learned on financial literacy and its relation to financial behavior from data collected in the Dutch Central Bank (DNB) Household Survey, a project done in collaboration with academics. A pioneering survey fielded in 2005 included an extensive set of financial literacy questions and questions that can serve as instruments for financial literacy in regression analyses to assess the causal effect of financial literacy on behavior. We describe how this survey spurred a series of research papers demonstrating the crucial role of financial literacy in stock market participation, retirement planning, and wealth accumulation. This inspired various follow-up studies and experiments based on new data collections in the DNB Household Survey. Researchers worldwide have used these data for innovative studies, and other surveys have included similar questions. This case study exemplifies the essential role of data in empirical research, showing how innovative data collections can inspire new research initiatives and significantly contribute to our understanding of household financial decision-making.
Scholars often use monetary incentives to boost participation rates in online surveys. This technique follows existing literature from Western countries, which suggests egoistic incentives effectively boost survey participation. Positing that incentives' effectiveness varies by country context, we tested this proposition through an experiment in Australia, India, and the USA. We compared three types of monetary lotteries to narrative and altruistic appeals. We find that egoistic rewards are most effective in the USA and, to some extent, in Australia. In India, respondents are just as responsive to altruistic incentives as to egoistic incentives. Results from an adapted dictator game corroborate these patterns. Our results caution scholars against exporting survey participation incentives to areas where they have not been tested.
Scholars and policymakers warn that with rising affective polarization, politicians will find support from the public and permission from military professionals to use military force to selectively crack down on political opponents. We test these claims by conducting parallel survey experiments among the US public and mid-career military officers. We ask about two hypothetical scenarios of domestic partisan unrest, randomly assigning the partisan identity of protesters. Surprisingly, we find widespread public support for deploying the military and no significant partisanship effects. Meanwhile, military officers were very resistant to deploying the military, with nearly 75 percent opposed in any scenario. In short, there is little evidence that public polarization threatens to escalate domestic disputes, and strong evidence for military opposition.
What are the consequences of including a “don't know” (DK) response option in attitudinal survey questions? Existing research, based on traditional survey modes, argues that it reduces the effective sample size without improving the quality of responses. We contend that it can have important effects not only on estimates of aggregate public opinion, but also on estimates of opinion differences between subgroups of the population with different levels of political information. Through a pre-registered online survey experiment conducted in the United States, we find that the DK response option has consequences for opinion estimates in the present day, when most organizations rely on online panels, but mainly for respondents with low levels of political information and on low-salience issues. These findings imply that the exclusion of a DK option can matter, with implications for assessments of preference differences and our understanding of their impacts on politics and policy.
Among the greatest challenges facing scholars of public opinion are the potential biases associated with survey item nonresponse and preference falsification. This difficulty has led researchers to use nonresponse rates to gauge the degree of preference falsification across regimes. This article addresses the use of survey nonresponse rates as a proxy for preference falsification. A simulation analysis of preference expression under varying degrees of repression was conducted to examine the viability of using nonresponse rates to regime assessment questions as such a proxy. The simulation demonstrates that nonresponse rates to regime assessment questions, and indices based on them, are not viable proxies for preference falsification. An empirical examination of survey data supports the results of the simulation analysis.
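In the same spirit as the article's simulation (though with my own invented parameters), the sketch below shows why the proxy fails: two regimes with identical nonresponse rates can hide very different levels of falsification:

```python
# Minimal simulation: nonresponse is not a viable proxy for preference
# falsification. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def regime(n, disapprove_share, p_falsify, p_refuse):
    """Under repression, a disapprover either falsifies (reports
    approval) or refuses to answer the regime assessment question."""
    disapprove = rng.random(n) < disapprove_share
    u = rng.random(n)
    falsify = disapprove & (u < p_falsify)
    refuse = disapprove & ~falsify & (u < p_falsify + p_refuse)
    return refuse.mean(), falsify.mean()

for label, p_falsify in [("A: heavy falsification", 0.8),
                         ("B: light falsification", 0.2)]:
    nonresp, falsif = regime(1_000_000, 0.4, p_falsify, 0.1)
    print(f"{label}: nonresponse {nonresp:.3f}, falsification {falsif:.3f}")
# Both regimes show ~4% nonresponse, yet falsification differs fourfold,
# so nonresponse rates cannot recover the degree of falsification.
```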
When surveyed, clear majorities express concern about inequality and view the government as responsible for addressing it. Scholars often interpret this view as popular support for redistribution. We question this interpretation, contending that many people have little grasp of what reducing inequality actually entails, and that this disconnect masks important variation in preferences over concrete policies. Using original survey and experimental US data, we provide systematic evidence in line with these conjectures. Furthermore, when asked about more concrete redistributive measures, support for government action changes significantly and aligns more closely with people's self-interest. These findings have implications for how egalitarian policies can be effectively communicated to the public, as well as methodological implications for the study of preferences on redistribution.
Most public opinion research in China uses direct questions to measure support for the Chinese Communist Party (CCP) and government policies. These direct question surveys routinely find that over 90 per cent of Chinese citizens support the government. From this, scholars conclude that the CCP enjoys genuine legitimacy. In this paper, we present results from two survey experiments in contemporary China that make clear that citizens conceal their opposition to the CCP for fear of repression. When respondents are asked directly, we find, like other scholars, approval ratings for the CCP that exceed 90 per cent. When respondents are asked in the form of list experiments, which confer a greater sense of anonymity, CCP support hovers between 50 per cent and 70 per cent. This represents an upper bound, however, since list experiments may not fully mitigate incentives for preference falsification. The list experiments also suggest that fear of government repression discourages some 40 per cent of Chinese citizens from participating in anti-regime protests. Most broadly, this paper suggests that scholars should stop using direct question surveys to measure political opinions in China.
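The list-experiment logic the abstract relies on reduces to a simple difference in mean item counts between respondents shown J + 1 items and those shown J. The sketch below runs that standard estimator on simulated data; the 35% prevalence is an illustrative assumption, not the paper's estimate:

```python
# Standard list-experiment (item-count) estimator on simulated data:
# prevalence of the sensitive attitude is the difference in mean item
# counts between the treatment list (J + 1 items) and control list (J).
import numpy as np

rng = np.random.default_rng(3)
J, n = 4, 20_000

baseline = rng.binomial(J, 0.5, size=n)   # non-sensitive items endorsed
sensitive = rng.random(n) < 0.35          # assumed true prevalence
treated = rng.random(n) < 0.5             # randomly shown the longer list

counts = baseline + (treated & sensitive)  # sensitive item adds 1 if held
estimate = counts[treated].mean() - counts[~treated].mean()
print(f"estimated prevalence: {estimate:.3f}")  # close to 0.35
```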
Observations of conservation practices at granular temporal and spatial scales are essential for identifying changes in production systems that improve soil health and water quality, and for informing long-term agricultural research and adaptive policy development. In this study, we demonstrate an innovative use of farmer practice survey data and what can be uniquely learned from a detailed survey that targets specific farm groups with a regional focus over multiple consecutive years. Using three years of survey data (n = 3914 respondents), we describe prevailing crop rotation, tillage, and cover crop practices in four Midwestern US states. Consistent with national metrics, the results confirm dominant practices across the landscape, including corn-soybean rotation, little use of continuous no-till, and limited use of cover crops. Our detailed regional survey further reveals state-level differences in no-till and cover crop adoption rates that are not captured in federal datasets. For example, 66% of sampled acreage in the Midwest is in corn-soybean rotation, with Illinois having the highest rate (72%) and Michigan the lowest (41%). In 2018, 20% of corn acreage and 38% of soybean acreage were in no-till, and 13% of corn acres and 9% of soybean acres were planted with a cover crop. Cover crop adoption rates fluctuate from year to year. These results demonstrate the value of a state-scale farmer survey conducted over multiple years in complementing federal statistics and in monitoring differences in practice adoption across states and years. Agricultural policy and industry depend heavily on accurate, timely information that reflects spatial and temporal dynamics. We recommend building an agricultural information exchange and workforce that integrates diverse data sources with complementary strengths, providing a greater understanding of agricultural management practices and baseline data on prevailing practices.
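Adoption figures like those above are typically acreage-weighted shares computed from respondent-level records; the sketch below shows that arithmetic on invented data (the survey's microdata are not reproduced here):

```python
# Illustrative acreage-weighted adoption rate of the kind reported in
# the abstract; respondent records and numbers are invented.
import pandas as pd

respondents = pd.DataFrame({
    "state": ["IL", "IL", "MI", "MI"],
    "corn_acres": [800, 400, 300, 500],
    "no_till": [True, False, False, True],  # practice used on those acres
})

# Share of sampled corn acreage under no-till, overall and by state.
weighted = lambda g: g.loc[g["no_till"], "corn_acres"].sum() / g["corn_acres"].sum()
print(f"overall: {weighted(respondents):.0%}")
for state, group in respondents.groupby("state"):
    print(f"{state}: {weighted(group):.0%}")
```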