
Using screeners to measure respondent attention on self-administered surveys: Which items and how many?

Published online by Cambridge University Press: 12 November 2019

Adam J. Berinsky, Department of Political Science, MIT, Cambridge, USA
Michele F. Margolis*, Department of Political Science, University of Pennsylvania, Philadelphia, USA
Michael W. Sances, Department of Political Science, Temple University, Philadelphia, USA
Christopher Warshaw, Department of Political Science, George Washington University, Washington, DC, USA

*Corresponding author. Email: mmargo@sas.upenn.edu

Abstract

Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. “Screeners” have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple-choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.
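The abstract describes aggregating individual Screener items into a single measure of attentiveness using item-response theory. As an illustration only, a standard two-parameter logistic (2PL) item-response model, one common way to carry out such aggregation, treats each respondent's latent attentiveness as driving the probability of passing each Screener; the abstract does not state the paper's exact specification, so the sketch below is an assumption, not the authors' model.

% Minimal 2PL sketch (assumed specification, not necessarily the paper's exact model).
% theta_i: latent attentiveness of respondent i
% alpha_j: discrimination of Screener j; beta_j: difficulty of Screener j
\[
  \Pr(y_{ij} = 1 \mid \theta_i)
    = \operatorname{logit}^{-1}\!\bigl(\alpha_j(\theta_i - \beta_j)\bigr),
  \qquad
  \operatorname{logit}^{-1}(x) = \frac{1}{1 + e^{-x}}.
\]

Under a model of this form, estimated values of the latent trait can be used to rank respondents by attentiveness, which is the sense in which a small set of well-chosen Screeners can be enough to flag inattentive respondents.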

Type: Research Note
Copyright © The European Political Science Association 2019


Supplementary material: Berinsky et al., Online Appendix (PDF, 333.6 KB).