
Analyze the attentive and bypass bias: mock vignette checks in survey experiments

Published online by Cambridge University Press: 03 February 2023

John V. Kane* (Center for Global Affairs, New York University, New York, USA)
Yamil R. Velez (Department of Political Science, Columbia University, New York, USA)
Jason Barabas (Department of Government, Dartmouth College, Hanover, USA)

*Corresponding author. Email: jvk221@nyu.edu

Abstract

Respondent inattentiveness threatens to undermine causal inferences in survey-based experiments. Unfortunately, existing attention checks may induce bias while diagnosing potential problems. As an alternative, we propose “mock vignette checks” (MVCs), which are objective questions that follow short policy-related passages. Importantly, all subjects view the same vignette before the focal experiment, resulting in a common set of pre-treatment attentiveness measures. Thus, interacting MVCs with treatment indicators permits unbiased hypothesis tests despite substantial inattentiveness. In replications of several experiments with national samples, we find that MVC performance is significantly predictive of stronger treatment effects, and slightly outperforms rival measures of attentiveness, without significantly altering treatment effects. Finally, the MVCs tested here are reliable, interchangeable, and largely uncorrelated with political and socio-demographic variables.
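
The interaction-based test the abstract describes can be sketched as a simple regression: because the MVC score is measured before treatment, it can be interacted with the treatment indicator without conditioning on a post-treatment variable. The sketch below is illustrative only, not the authors' replication code; the OLS specification, robust standard errors, variable names, and simulated data are all assumptions made for the example.

# Hypothetical sketch of the interaction-based test described in the
# abstract: regress the outcome on a treatment indicator, a pre-treatment
# mock-vignette-check (MVC) score, and their interaction. Variable names
# and data are illustrative, not from the authors' replication files.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),      # randomized treatment indicator
    "mvc": rng.integers(0, 4, n) / 3.0,  # share of 3 MVC items correct
})
# Simulate an outcome whose treatment effect grows with attentiveness
df["y"] = 0.2 * df["treat"] + 0.8 * df["treat"] * df["mvc"] + rng.normal(0, 1, n)

# Because the MVC score is pre-treatment, interacting it with the
# treatment indicator does not condition on a post-treatment variable.
model = smf.ols("y ~ treat * mvc", data=df).fit(cov_type="HC2")
print(model.summary())
# The 'treat:mvc' coefficient tests whether effects are stronger among
# more attentive respondents; 'treat' estimates the effect at mvc == 0.

In this setup, a positive treat:mvc coefficient corresponds to the abstract's finding that MVC performance predicts stronger treatment effects, whereas dropping respondents who fail a post-treatment attention check would risk the bias the abstract warns against.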

Type: Original Article

Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of the European Political Science Association


Supplementary material

Kane et al. dataset (link)
Kane et al. supplementary material 1 (file, 4.7 MB)
Kane et al. supplementary material 2 (PDF, 1.9 MB)