
A Bias-Corrected Estimator for the Crosswise Model with Inattentive Respondents

Published online by Cambridge University Press:  17 December 2021

Yuki Atsusaka*
Affiliation:
Ph.D. Candidate, Department of Political Science, Rice University, 6100 Main Street, Houston, TX 77005, USA. E-mail: atsusaka@rice.edu, URL: https://atsusaka.org
Randolph T. Stevenson
Affiliation:
Radoslav Tsanoff Professor of Public Affairs, Department of Political Science, Rice University, 6100 Main Street, Houston, TX 77005, USA. E-mail: randystevenson@rice.edu, URL: https://www.randystevenson.com
* Corresponding author: Yuki Atsusaka

Abstract

The crosswise model is an increasingly popular survey technique to elicit candid answers from respondents on sensitive questions. Recent studies, however, point out that in the presence of inattentive respondents, the conventional estimator of the prevalence of a sensitive attribute is biased toward 0.5. To remedy this problem, we propose a simple design-based bias correction using an anchor question that has a sensitive item with known prevalence. We demonstrate that we can easily estimate and correct for the bias arising from inattentive respondents without measuring individual-level attentiveness. We also offer several useful extensions of our estimator, including a sensitivity analysis for the conventional estimator, a strategy for weighting, a framework for multivariate regressions in which a latent sensitive trait is used as an outcome or a predictor, and tools for power analysis and parameter selection. Our method can be easily implemented through our open-source software cWise.
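The correction can be summarized with the standard crosswise-model algebra. In the conventional design, each respondent reports whether their answers to the sensitive item and to an auxiliary item with known "yes" probability p are the same, so among attentive respondents the share of "same" responses is lambda = pi*p + (1 - pi)*(1 - p), and the conventional estimator is pi_hat = (lambda_hat + p - 1) / (2p - 1). If inattentive respondents instead pick "same" or "different" at random, lambda_hat is pulled toward 0.5 and so is pi_hat. The sketch below illustrates the anchor-based logic the abstract describes, assuming inattentive respondents answer uniformly at random: an anchor item with known prevalence pins down the share of attentive respondents, which is then used to back out the attentive "same" rate for the item of interest. The function names and plug-in formulas are ours for illustration only; they are not the cWise interface.

# Illustrative sketch of an anchor-based bias correction for the crosswise
# model, assuming inattentive respondents choose "same"/"different" with
# probability 0.5. Not the cWise API; names and formulas are our own.

def crosswise_naive(lam_hat, p):
    """Conventional crosswise estimator. lam_hat is the observed share of
    'same' answers; p is the known 'yes' probability of the auxiliary
    (non-sensitive) question. Requires p != 0.5."""
    return (lam_hat + p - 1.0) / (2.0 * p - 1.0)

def attention_rate(lam_anchor, p_anchor, pi_anchor):
    """Estimate the share of attentive respondents from an anchor item whose
    true prevalence pi_anchor is known. Attentive respondents answer 'same'
    with probability pi_anchor*p_anchor + (1 - pi_anchor)*(1 - p_anchor);
    inattentive respondents answer 'same' with probability 0.5."""
    expected_same = pi_anchor * p_anchor + (1.0 - pi_anchor) * (1.0 - p_anchor)
    return (lam_anchor - 0.5) / (expected_same - 0.5)

def crosswise_bias_corrected(lam_hat, p, lam_anchor, p_anchor, pi_anchor):
    """Back out the 'same' rate among attentive respondents, then apply the
    conventional estimator to that corrected rate."""
    gamma = attention_rate(lam_anchor, p_anchor, pi_anchor)
    lam_attentive = (lam_hat - 0.5 * (1.0 - gamma)) / gamma
    return crosswise_naive(lam_attentive, p)

# Toy illustration with made-up numbers: true prevalence 0.20, p = 0.15,
# 30% inattentive respondents, anchor item with known prevalence 0.
if __name__ == "__main__":
    pi_true, p, gamma_true = 0.20, 0.15, 0.70
    lam = gamma_true * (pi_true * p + (1 - pi_true) * (1 - p)) + (1 - gamma_true) * 0.5
    lam_anchor = gamma_true * (1 - p) + (1 - gamma_true) * 0.5
    print(round(crosswise_naive(lam, p), 3))                               # 0.29, pulled toward 0.5
    print(round(crosswise_bias_corrected(lam, p, lam_anchor, p, 0.0), 3))  # 0.2, recovers the truth

With these made-up numbers the naive estimate is 0.29, pulled toward 0.5 from the true value of 0.20, while the anchor-based correction recovers 0.20; the published estimator and its standard errors should be obtained from cWise rather than from this sketch.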

Type
Article
Copyright
© The Author(s) 2021. Published by Cambridge University Press on behalf of the Society for Political Methodology


Footnotes

Edited by Jeff Gill

Supplementary material

Atsusaka and Stevenson Dataset (link)

Atsusaka and Stevenson supplementary material (PDF, 573 KB)