
How Dropping Subjects Who Failed Manipulation Checks Can Bias Your Results: An Illustrative Case

Published online by Cambridge University Press:  24 November 2022

Simon Varaine*
Affiliation:
PACTE, Université Grenoble Alpes, CNRS, Sciences Po Grenoble (School of Political Studies), 38000 Grenoble, France

Abstract

Manipulation checks are postexperimental measures widely used to verify that subjects understood the treatment. Some researchers drop subjects who failed manipulation checks in order to limit the analyses to attentive subjects. This short report offers a novel illustration of how this practice may bias experimental results: in the present case, by confirming a hypothesis that is likely false. In a survey experiment, subjects were primed with a fictional news story depicting either economic decline or economic prosperity. Subjects were then asked whether the news story depicted an economic decline or prosperity. Results indicate that responses to this manipulation check captured subjects’ preexisting beliefs about the economic situation. As a consequence, dropping subjects who failed the manipulation check conflates the effects of preexisting and induced beliefs, increasing the risk of false-positive findings. Researchers should avoid dropping subjects based on posttreatment measures and should instead rely on pretreatment measures of attentiveness.
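
The mechanism can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical data-generating process, not the article's data or analysis; the attentiveness rate, belief distribution, and effect sizes below are assumptions chosen purely for illustration. The true treatment effect is set to zero, yet dropping subjects who fail the manipulation check yields a spurious positive estimate, because inattentive subjects pass the check only when their preexisting belief happens to match their assigned condition:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Random assignment to the "decline" prime (T=1) vs. "prosperity" (T=0).
T = rng.integers(0, 2, n)

# Preexisting belief that the economy is declining (B=1), independent of T.
B = rng.random(n) < 0.5

# Assumption: 70% of subjects are attentive. Attentive subjects answer the
# manipulation check from the prime they saw; inattentive subjects answer
# from their preexisting belief.
attentive = rng.random(n) < 0.7
check_says_decline = np.where(attentive, T == 1, B)
passed = check_says_decline == (T == 1)

# Outcome depends ONLY on the preexisting belief, not on the prime:
# the true treatment effect is exactly zero.
Y = 1.0 * B + rng.normal(0, 1, n)

full_effect = Y[T == 1].mean() - Y[T == 0].mean()
drop_effect = Y[(T == 1) & passed].mean() - Y[(T == 0) & passed].mean()

print(f"Full sample estimate:    {full_effect:+.3f}")  # approx. 0 (unbiased)
print(f"After dropping failers:  {drop_effect:+.3f}")  # biased away from 0
```

Under these assumed parameters, the full-sample estimate is approximately zero, while the estimate after dropping check-failers is roughly +0.18: among the survivors, the "decline" group overrepresents subjects who already believed the economy was declining, so preexisting beliefs masquerade as a treatment effect.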

Type
Short Report
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of American Political Science Association


Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.
