
Analyze the attentive or bypass bias? Mock vignette checks in survey experiments

Published online by Cambridge University Press:  03 February 2023

John V. Kane*
Affiliation:
Center for Global Affairs, New York University, New York, USA
Yamil R. Velez
Affiliation:
Department of Political Science, Columbia University, New York, USA
Jason Barabas
Affiliation:
Department of Government, Dartmouth College, Hanover, USA
*Corresponding author. Email: jvk221@nyu.edu

Abstract

Respondent inattentiveness threatens to undermine causal inferences in survey-based experiments. Unfortunately, existing attention checks may induce bias while diagnosing potential problems. As an alternative, we propose “mock vignette checks” (MVCs), which are objective questions that follow short policy-related passages. Importantly, all subjects view the same vignette before the focal experiment, resulting in a common set of pre-treatment attentiveness measures. Thus, interacting MVCs with treatment indicators permits unbiased hypothesis tests despite substantial inattentiveness. In replications of several experiments with national samples, we find that MVC performance is significantly predictive of stronger treatment effects, and slightly outperforms rival measures of attentiveness, without significantly altering treatment effects. Finally, the MVCs tested here are reliable, interchangeable, and largely uncorrelated with political and socio-demographic variables.
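The abstract's key design point, that a common pre-treatment MVC score can be interacted with the treatment indicator to test how effects vary with attentiveness, can be sketched as a simple interaction regression. This is an illustrative simulation, not the authors' code: the variable names, the 0&ndash;3 MVC scale, and the simulated effect size are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
treat = rng.integers(0, 2, n).astype(float)  # randomized treatment indicator (0/1)
mvc = rng.integers(0, 4, n).astype(float)    # hypothetical MVC score: correct answers, 0-3
# Simulated outcome: the treatment effect grows with attentiveness (assumed slope 0.5)
y = 0.5 * treat * mvc + rng.normal(0.0, 1.0, n)

# OLS with a treatment x MVC interaction: y ~ 1 + treat + mvc + treat:mvc
X = np.column_stack([np.ones(n), treat, mvc, treat * mvc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction = beta[3]  # change in the treatment effect per additional correct MVC
```

Because the MVC score is measured before treatment assignment, the interaction coefficient can be interpreted as how the treatment effect varies across attentiveness levels, mirroring the CATE-by-MVC plots described in the figures below.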

Information

Type
Original Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of the European Political Science Association

Table 1. Overview of samples, MVs, and experiments

Figure 1. Implementation of MVs in each study. Notes: Design used in the MTurk 1, Qualtrics, MTurk 2, and NORC studies. Respondents in the Lucid study participated in this process twice. Each box represents a different screen viewed by respondents. Timers were used on each screen to record the amount of time (in milliseconds) respondents spent on each screen. All studies featured an experiment with two conditions.

Table 2. Overview of samples, MVs, and experiments (Lucid study)

Table 3. Example MV and MVCs (scientific publishing)

Figure 2. MVC performance associated with larger treatment effects. Notes: Figure displays treatment effect estimates for the "Student Loan Forgiveness" experiment (top panel) and the "Welfare Deservingness" experiment (bottom panel) across performance on the MVC scale (95 percent CIs shown). Top (bottom) panel shows that the negative (positive) effect observed in the original experiment grows larger in magnitude as MVC performance increases. Histogram represents the percent of the sample correctly answering x MVCs. Total N = 744 (NORC) and 804 (MTurk Study 2).

Table 4. Conditional effect of treatment on outcome across MVC passage rates

Figure 3. CATE estimates across experiments (by MV featured). Notes: CATEs across the number of correct MVCs for each MV–experiment pair. Points represent CATE estimates (95 percent CIs shown). Histogram represents the percent of the sample correctly answering x MVCs.

Supplementary material

Kane et al. Dataset (link)
Kane et al. supplementary material 1 (File, 4.7 MB)
Kane et al. supplementary material 2 (PDF, 1.9 MB)