
Survey Quality and Acquiescence Bias: A Cautionary Tale

Published online by Cambridge University Press:  12 January 2026

Andrés Cruz
Affiliation: Department of Government, University of Texas at Austin, USA
Adam Bouyamourn
Affiliation: Department of Politics, Princeton University, USA
Joseph T. Ornstein*
Affiliation: Department of Political Science, University of Georgia, USA
*Corresponding author: Joseph T. Ornstein; Email: jornstein@uga.edu

Abstract

In this note, we offer a cautionary tale on the dangers of drawing inferences from low-quality online survey datasets. We reanalyze and replicate a survey experiment studying the effect of acquiescence bias on estimates of conspiratorial beliefs and political misinformation. Correcting a minor data coding error yields a puzzling result: respondents with a postgraduate education appear to be the most prone to acquiescence bias. We conduct two preregistered replication studies to better understand this finding. In our first replication, conducted using the same survey platform as the original study, we find a nearly identical set of results. But in our second replication, conducted with a larger and higher-quality survey panel, this apparent effect disappears. We conclude that the observed relationship was an artifact of inattentive and fraudulent responses in the original survey panel, and that attention checks alone do not fully resolve the problem. This demonstrates how “survey trolls” and inattentive respondents on low-quality survey platforms can generate spurious and theoretically confusing results.

Information

Type
Letter
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of The Society for Political Methodology

1 Introduction

In a series of survey experiments, Hill and Roberts (2023) show that acquiescence bias—the tendency of survey respondents to answer True, Agree, or Yes more often than False, Disagree, or No—artificially inflates survey-based estimates of conspiratorial beliefs and misinformation among the mass public. By randomly assigning their respondents to opposite-keyed versions of a set of yes–no questions, the authors show that acquiescence bias can increase the estimated prevalence of some conspiratorial beliefs by upward of 50%. In addition, they show that some groups of respondents are more prone to this bias than others: the effect of question wording was largest among very conservative and very liberal respondents, younger respondents, and those with the lowest education and numeracy. Taken together, these findings suggest that acquiescence bias causes researchers not only to overstate the prevalence of conspiratorial beliefs, but also to mis-estimate the association between conspiratorial beliefs and various demographic characteristics.

Reexamining the study’s data, we reproduce its main findings—acquiescence bias has real and significant effects on estimates of conspiratorial beliefs and misinformation. However, we identify a minor coding error in the original analysis that, once corrected, reverses the estimated association between acquiescence bias and education. Specifically, we find that the eight-category education variable contains several respondents coded as $-3105$ (“None of the above”) by the Lucid survey platform in the 2020 survey wave. Although this error affects a relatively small number of respondents (14 out of 2,055 respondents in that survey wave), these outliers have an outsized influence on the paper’s estimated associations because the covariate is coded as a continuous variable. Once those 14 respondents are dropped, the resulting estimates reveal that acquiescence bias is largest among more educated respondents, particularly those with a postgraduate education.
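
To make the correction concrete, the following is a minimal pandas sketch of the fix, not the replication code itself; the column name "educ" and the toy values are our own illustration of the Lucid coding.

```python
import pandas as pd

# Toy stand-in for the 2020 Lucid education item; the column name "educ" is
# ours, not necessarily the original dataset's. Lucid codes "None of the
# above" as -3105 on its eight-category education question.
df = pd.DataFrame({"educ": [1, 3, 8, -3105, 5, -3105, 2]})

# Treating educ as continuous lets the -3105 codes act as massive outliers;
# dropping them and recoding education as categorical removes that leverage.
df = df[df["educ"] != -3105].copy()
df["educ"] = pd.Categorical(df["educ"], categories=range(1, 9), ordered=True)
print(df["educ"].value_counts(sort=False))
```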

This result is surprising in light of decades of survey research. Over 60 years ago, Campbell et al. (1960) warned that research on authoritarian personalities was likely to be undermined by the higher rate of acquiescence among those with less education. A wide range of survey experiments have corroborated this theory, finding that the most highly educated respondents are the least susceptible to question wording effects and acquiescence bias (Narayan and Krosnick 1996; Schuman and Presser 1981). To the extent that acquiescence bias is driven by inattentiveness, one would expect it to be most prevalent among less-educated respondents, who are on average less attentive to question wording (Berinsky, Margolis, and Sances 2014; Ziegler 2022).

When we corresponded with the authors of the original study, they promptly issued a corrigendum correcting the error (Hill and Roberts 2025), expressing surprise at the reversed association between education and acquiescence bias and urging further study. We designed and preregistered two new survey experiments (Footnote 1) to test several competing theories that could explain the result.

Theory 1: Low-Quality and Fraudulent Responses. The most straightforward explanation is that our surprising result is being driven by “survey trolls” (Lopez and Hillygus 2018), a subset of respondents who are more likely to fraudulently report low-prevalence characteristics (e.g., having a postgraduate degree). The original experiments were conducted on the Lucid survey platform, which in recent years has seen a significant rise in low-quality survey responses (Ternovski and Orr 2022). If respondents who misrepresent their educational attainment are more likely to satisfice when responding to other survey questions, this could explain the observed association between education and acquiescence bias. To test this hypothesis, we replicate the experiment on both the original platform (Footnote 2) and a higher-quality survey panel (Bovitz–Forthright). The Bovitz panel exhibits significantly higher levels of response quality and representativeness (Stagnaro et al. 2024), and we are able to validate each respondent’s reported education against the demographic profile maintained by the company.

Theory 2: Deference to Researchers among the Highly Educated. Perhaps the most highly-educated respondents are more likely to make “miseducated guesses” (Graham 2023) when responding to survey questions. If more educated respondents are, on average, more politically knowledgeable, then they would be more likely to have heard about and remember the controversies and conspiracies described in the survey. The original Hill and Roberts (2023) experiment was administered four years after the relevant controversies occurred, so even respondents who did remember them would be unlikely to have detailed knowledge about their specifics (“Did Clinton say that half of Trump’s supporters were in a basket of deplorables, or some other fraction?”, “Did Trump refuse to say he’d concede in the second debate or the third debate?”). As a result, they may be more susceptible to question wording effects, deferring to the investigators when faced with a specific true/false statement. To test this hypothesis, we include in our replication studies a new set of questions about more recent and high-salience conspiracy theories that circulated during the 2024 presidential election. We should expect to observe a much weaker relationship between education and acquiescence bias for these questions, since the most informed respondents would be better able to recall the correct answer. We also administer a four-item political knowledge scale to assess the relationship between political knowledge and acquiescence, independent of education.

Theory 3: Response Option Effects. A third explanation for the pattern is that more educated respondents are less likely to admit that they are uncertain when presented with political knowledge questions (Jessee 2017; Mondak 1999). This could make them more prone to acquiescence bias than less educated respondents, who more frequently responded “Not sure” in the original survey experiment: respondents with a high school education or less responded “Not sure” roughly 45% of the time, compared to roughly 32% for respondents with a postgraduate education. To test this theory, we randomly assign respondents to receive “Not sure” as a response option in our replication studies, and estimate the effect of this design choice on rates of acquiescence bias.

The results we present below offer resounding support for Theory 1. In the larger, higher-quality Bovitz panel, average rates of acquiescence bias are roughly three times smaller, and there is no observed association between education and acquiescence. By comparison, our replication on the Lucid/Cint platform yielded results nearly identical to those in the original paper. When asking respondents about recent, high-salience conspiracy theories (e.g., Haitian immigrants eating pets in Springfield OH), the highest rates of acquiescence bias were among respondents on the Lucid/Cint platform with self-reported postgraduate degrees. This evidence strongly suggests that the initial result was driven by inattentive and fraudulent responses, and that attention checks alone are insufficient to weed out these responses. We take this as a cautionary tale about the dangers of over-interpreting conditional effect estimates from low-quality survey platforms, particularly for low-prevalence characteristics.

2 Data and Analysis

The survey experiment was first conducted in December 2020 by Hill and Roberts (2023) on a nationally representative sample of respondents using the online survey platform Lucid. Respondents were asked a series of True/False questions about several actual or “fake news” events that occurred during the 2016 U.S. presidential election, drawn from Allcott and Gentzkow (2017). For each question, respondents were randomly assigned to either a positive-keyed version of the claim (e.g., “The Clinton Foundation bought $137 million in illegal arms.” and “Pope Francis endorsed Donald Trump.”) or a negative-keyed version of the same claim (e.g., “The Clinton Foundation DID NOT buy millions of dollars worth of illegal arms.” and “Pope Francis DID NOT endorse Donald Trump.”). Because question wording is randomized, the difference between the share of respondents endorsing the positive-keyed version of the claim and the share of respondents rejecting the negative-keyed version of the claim provides a credible estimate of the magnitude of acquiescence bias.
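
In our notation (not the original article’s), let $\hat{p}_{\text{pos}}$ be the share of respondents shown the positive-keyed wording who answer True, and $\hat{p}_{\text{neg}}$ the share shown the negative-keyed wording who answer False. The difference-in-shares estimator is then

$$\widehat{\text{bias}} = \hat{p}_{\text{pos}} - \hat{p}_{\text{neg}}.$$

Absent acquiescence, both shares estimate the same underlying prevalence of belief in the claim, so the difference is zero in expectation; acquiescence pushes answers toward agreement under both wordings, inflating $\hat{p}_{\text{pos}}$ and deflating $\hat{p}_{\text{neg}}$, so a positive difference measures the bias.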

In our reanalysis and replications, we adhere as closely as possible to the data analysis procedures from the original paper, estimating a linear-probability model predicting agreement with the positive-keyed version of each claim (Footnote 3). The magnitude of acquiescence bias is estimated by the marginal effect of the binary variable Pos keyed. Interaction terms capture differences in acquiescence bias by respondent characteristic. In our reanalysis, we drop the 14 respondents whose educational attainment is coded as $-3105$, and recode education as a categorical variable.
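
As a rough illustration of this specification, the interaction model could be estimated as in the sketch below. This is not the replication code; the simulated data and variable names ("response", "pos_keyed", "educ", "respondent_id") are our own stand-ins for a long-format dataset with one row per respondent–question pair.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the long-format response data.
rng = np.random.default_rng(0)
n, k = 500, 8
long = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n), k),
    "educ": np.repeat(rng.integers(1, 9, size=n), k),  # eight education categories
    "pos_keyed": rng.integers(0, 2, size=n * k),        # 1 = positive-keyed wording
})
# Response coded 1 = agree, 0 = disagree, 0.5 = "Not sure" (Footnote 3).
long["response"] = rng.choice([0.0, 0.5, 1.0], size=n * k)

# Linear probability model with a keying-by-education interaction.
# The coefficient on pos_keyed is the acquiescence-bias estimate for the
# baseline education category; the pos_keyed:C(educ) terms shift it by category.
fit = smf.ols("response ~ pos_keyed * C(educ)", data=long).fit(
    cov_type="cluster", cov_kwds={"groups": long["respondent_id"]}
)
print(fit.summary())
```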

Figure 1 illustrates a few questions driving the puzzling relationship between education and acquiescence in the original survey experiment, plotting the percentage of respondents who endorsed four different “fake news” claims, broken down by educational attainment and question wording (see Figure B.2 in the Supplementary Material for the other claims). Strikingly, highly-educated respondents were the most likely to endorse these false or conspiratorial statements, regardless of the statement’s partisan alignment. Acquiescence bias—the gap between red and black points—is consistently largest for the respondents with self-reported postgraduate degrees. When responding to positive-keyed versions of the claims, these respondents were more likely than any other group to endorse both Democratic-aligned misinformation (Mike Pence called Michelle Obama vulgar, Ireland announced that it was accepting American asylum applications) and Republican-aligned misinformation (the Clinton Foundation bought $137 million in illegal arms, the Pope endorsed Donald Trump). The fact that these groups exhibit the largest acquiescence bias regardless of the partisan alignment of the claim suggests that this relationship is not being driven by differences in partisanship.

Figure 1 Percent of respondents who agree with each false claim, broken down by question wording and educational attainment.

Note: Vertical bars denote 95% confidence intervals.

We make four modifications to the survey experiment in our replication studies. First, we add eight new questions about events (actual and conspiratorial) from the 2024 presidential election campaign (Footnote 4). Second, we include a four-item measure of political knowledge, asking respondents to identify the majority party in both houses of Congress, the office held by JD Vance, and the office held by John Roberts. Third, we randomize at the respondent level whether “Not sure” is included as a response option. Finally, we include a mock vignette attention check (Kane, Velez, and Barabas 2023; Footnote 5), but do not exclude respondents who fail the check, so we can test the relationship between inattention and acquiescence bias (Footnote 6). This new survey experiment was fielded in August 2025 on the Lucid/Cint platform $(n = 2,193)$ and the Bovitz–Forthright panel $(n = 3,036)$.
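
For concreteness, the two randomizations in this design (respondent-level response options, question-level keying) could be assigned as in the following sketch; the seed, sample size, and array names are illustrative and not taken from our survey scripts.

```python
import numpy as np

rng = np.random.default_rng(2025)  # illustrative seed

n_respondents = 3036  # e.g., the size of the Bovitz-Forthright sample
n_items = 8           # the new 2024-election items

# Respondent-level randomization: does this person ever see "Not sure"?
show_not_sure = rng.random(n_respondents) < 0.5

# Question-level randomization within respondent: positive- vs. negative-keyed wording.
pos_keyed = rng.random((n_respondents, n_items)) < 0.5
```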

Table 1 reports effect estimates from all three surveys. In both the original survey (H&R) and the direct replication (Lucid/Cint), estimated acquiescence bias has a similar magnitude, between 10 and 15 percentage points on average (columns 1A and 1B), and there is a significant positive association between acquiescence bias and education (columns 2A and 2B). But in the larger, high-quality replication (Bovitz), the estimated average rate of acquiescence bias is roughly 2–3 times smaller (about 5 percentage points), and there is no association between education and acquiescence.

Table 1 Original results and replications

Note: *$p < 0.1$, **$p < 0.05$, and ***$p < 0.01$.

Table 2 examines the relationship between education and acquiescence bias, broken down by how respondents performed on the mock vignette attention check. Notably, the puzzling positive association is present in all Lucid/Cint subsamples (1A–1C). Though passing all attention checks yields a smaller estimate (1C), it is still positive and statistically significant (at the 10% level). Meanwhile, none of the Bovitz subsamples (2A–2C) exhibit the positive association. All in all, this suggests that attention checks, while useful, cannot fully correct the issues of a low-quality sample.

Table 2 Acquiescence bias and education, by attention check status and survey platform

Note: *$p < 0.1$, **$p < 0.05$, and ***$p < 0.01$.

Figure 2 Percent of respondents who expressed agreement with the false claim regarding Springfield OH, broken down by question wording, educational attainment, and survey panel. Note: Vertical bars denote 95% confidence intervals.

We find little support for the second theory (highly educated respondents exhibit more deference to researchers), as illustrated in Figure 2. One of our new questions asked respondents whether they agreed with the statement “The city manager of Springfield OH confirmed receiving credible reports of pets being eaten.” This was in reference to a highly-salient piece of misinformation that circulated widely during the 2024 U.S. presidential election. The negative-keyed version of the question was “The city manager of Springfield OH denied receiving credible reports of pets being eaten” (emphasis added). In the Bovitz panel, the percent of respondents expressing agreement with the false claim declines with education, from roughly 35% of respondents with a high school education or less to roughly 25% of respondents with a postgraduate degree. There are no significant differences in agreement across question wording conditions, regardless of education. In the Lucid/Cint panel, however, there is a nearly 25 percentage point difference across treatment conditions, but only for respondents who self-reported having a postgraduate education. We also find no correlation between acquiescence bias and our four-item political knowledge inventory (see Table C.2 in the Supplementary Material). This is not the pattern of responses we would expect if the result was due to respondents deferring to the researchers on specific factual claims.

Finally, we find some evidence that response options affect acquiescence bias, but no evidence that it is responsible for its association with education. Table C.3 in the Supplementary Material breaks down estimated acquiescence bias by survey panel, education, and treatment condition. The results from the Bovitz panel suggest that question wording effects are strongest when “Not sure” is included as a response option, but there is no positive association between education and acquiescence, regardless of which response options are shown to the respondents.

3 Discussion

Taken together, these results strongly support the theory that the positive association between education and acquiescence bias was an artifact of low-quality and fraudulent survey responses. When conducting the experiment on a large, high-quality survey panel, there is in fact no observed relationship between education and acquiescence bias. This highlights the importance of implementing best practices to ensure response quality and truthfulness in survey experiments. And it suggests that scholars should be cautious about over-interpreting conditional effect estimates in low-quality survey panels, particularly for low-prevalence attributes where survey respondents might have a financial incentive to make false claims about their identity (Bell and Gift 2023).

Echoing Hill and Roberts (2023), we advise researchers to avoid wording questions in a way that induces acquiescence bias, or, if an Agree/Disagree format is deemed necessary, to randomize assignment to positive- and negative-keyed versions of the statement. Approaches like the 2020 ANES survey instrument—which lists both positive- and negative-keyed versions of each belief and asks respondents to select which they believe is more likely to be true—are preferable in most cases. Researchers interested in measuring conspiracy beliefs should give respondents an explicit choice between conspiratorial and conventional explanations, as in Clifford, Kim, and Sullivan (2019). Such designs should be paired with careful screening for disengaged survey takers, since random responses can also inflate estimates of controversial or conspiratorial beliefs (Westwood et al. 2022).

Acknowledgments

Thanks to Derek Mikola and Kevin Esterling for organizing the 2024 Polmeth Replication Games, from which this project originated, to Seth Hill and Molly Roberts for their prompt and thorough responses to our questions, and to the anonymous reviewers for their guidance on the replication effort.

Data Availability Statement

Replication materials for this article are available at Cruz, Bouyamourn, and Ornstein (2025). A preservation copy of the same code and data can also be accessed via Dataverse at https://doi.org/10.7910/DVN/ZF32UF.

Competing Interest

The authors declare no competing interest.

Supplementary Material

For supplementary material accompanying this paper, please visit https://doi.org/10.1017/pan.2025.10030.

Footnotes

Edited by: Daniel J. Hopkins and Brandon M. Stewart

1 See AsPredicted no. 239215.

2 In December 2021, the Lucid platform was acquired by Cint Group; throughout the article, we label our replication on this platform “Cint” or “Lucid/Cint.”

3 As in the original analysis, we code agreement as 1, disagreement as 0, and “Not sure” as 0.5.

4 See Section A of the Supplementary Material for the full statements in positive and negative form. Like the originals, our new statements are equally distributed between Republican- and Democratic-aligned, and between true and false.

5 In particular, we use the “stadium licenses” scenario from Kane et al. (2023, pp. SM 6–7). It includes a short prompt about a local policy topic and three multiple-choice questions.

6 Following our preregistration, we do not exclude any respondents when analyzing our experiments. Note that Hill and Roberts (2023) exclude non-partisans from their main analyses. For consistency, we do the same when replicating their work in Tables 1 and 2. As shown in Table C.1 in the Supplementary Material, our main replication results (with respect to acquiescence bias and its relationship with education) also hold when including all respondents in the original sample.

References

Allcott, H., and Gentzkow, M. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31 (2): 211–236. https://doi.org/10.1257/jep.31.2.211
Bell, A. M., and Gift, T. 2023. “Fraud in Online Surveys: Evidence from a Nonprobability, Subpopulation Sample.” Journal of Experimental Political Science 10 (1): 148–153. https://doi.org/10.1017/XPS.2022.8
Berinsky, A. J., Margolis, M. F., and Sances, M. W. 2014. “Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys.” American Journal of Political Science 58 (3): 739–753. https://doi.org/10.1111/ajps.12081
Campbell, A., Converse, P. E., Miller, W. E., and Stokes, D. E. 1960. The American Voter. Oxford: John Wiley.
Clifford, S., Kim, Y., and Sullivan, B. W. 2019. “An Improved Question Format for Measuring Conspiracy Beliefs.” Public Opinion Quarterly 83 (4): 690–722. https://doi.org/10.1093/poq/nfz049
Cruz, A., Bouyamourn, A., and Ornstein, J. T. 2025. “Replication Data for: Survey Quality and Acquiescence Bias: A Cautionary Tale.” https://doi.org/10.7910/DVN/ZF32UF
Graham, M. H. 2023. “Measuring Misperceptions?” American Political Science Review 117 (1): 80–102. https://doi.org/10.1017/S0003055422000387
Hill, S. J., and Roberts, M. E. 2023. “Acquiescence Bias Inflates Estimates of Conspiratorial Beliefs and Political Misperceptions.” Political Analysis 31 (4): 575–590. https://doi.org/10.1017/pan.2022.28
Hill, S. J., and Roberts, M. E. 2025. “Acquiescence Bias Inflates Estimates of Conspiratorial Beliefs and Political Misperceptions.” Political Analysis 33 (2): 178–180. https://doi.org/10.1017/pan.2024.27
Jessee, S. A. 2017. “‘Don’t Know’ Responses, Personality, and the Measurement of Political Knowledge.” Political Science Research and Methods 5 (4): 711–731. https://doi.org/10.1017/psrm.2015.23
Kane, J. V., Velez, Y. R., and Barabas, J. 2023. “Analyze the Attentive and Bypass Bias: Mock Vignette Checks in Survey Experiments.” Political Science Research and Methods 11 (2): 293–310. https://doi.org/10.1017/psrm.2023.3
Lopez, J., and Hillygus, D. S. 2018. “Why So Serious?: Survey Trolls and Misinformation.” Working Paper. https://doi.org/10.2139/ssrn.3131087
Mondak, J. J. 1999. “Reconsidering the Measurement of Political Knowledge.” Political Analysis 8 (1): 57–82. https://doi.org/10.1093/oxfordjournals.pan.a029805
Narayan, S., and Krosnick, J. A. 1996. “Education Moderates Some Response Effects in Attitude Measurement.” Public Opinion Quarterly 60 (1): 58. https://doi.org/10.1086/297739
Schuman, H., and Presser, S. 1981. Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. San Diego: Academic Press.
Stagnaro, M. N., Druckman, J., Berinsky, A. J., Arechar, A. A., Willer, R., and Rand, D. G. 2024. “Representativeness Versus Response Quality: Assessing Nine Opt-In Online Survey Samples.” PsyArXiv Preprint.
Ternovski, J., and Orr, L. 2022. “A Note on Increases in Inattentive Online Survey-Takers since 2020.” Journal of Quantitative Description: Digital Media 2: 1–35.
Westwood, S. J., Grimmer, J., Tyler, M., and Nall, C. 2022. “Current Research Overstates American Support for Political Violence.” Proceedings of the National Academy of Sciences 119 (12): e2116870119. https://doi.org/10.1073/pnas.2116870119
Ziegler, J. 2022. “A Text-as-Data Approach for Using Open-Ended Responses as Manipulation Checks.” Political Analysis 30 (2): 289–297. https://doi.org/10.1017/pan.2021.2
