
When to Protect? Using the Crosswise Model to Integrate Protected and Direct Responses in Surveys of Sensitive Behavior

  • Daniel W. Gingerich, Virginia Oliveros, Ana Corbacho, and Mauricio Ruiz-Vega

Abstract

Sensitive survey techniques (SSTs) are frequently used to study sensitive behaviors. However, existing strategies for employing SSTs yield highly variable prevalence estimates and do not permit analysts to address whether the use of an SST is actually necessary. The current article presents a survey questioning strategy and corresponding statistical framework that fill this gap. By jointly analyzing survey responses generated by an SST (the crosswise model) alongside direct responses about the sensitive behavior, the article's framework addresses whether an SST is required to study a given sensitive behavior, provides an efficient estimate of the prevalence of the sensitive behavior, and, in its extended form, efficiently estimates how individual characteristics relate to the likelihood of engaging in the behavior. The utility of the approach is demonstrated through an examination of gender differences in proclivities toward corruption in Costa Rica.
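To fix ideas about the protected-response side of the design: under the crosswise model, respondents report only whether their answers to a sensitive question and an innocuous question with known prevalence are the same, which identifies the sensitive-trait prevalence without any individual disclosure. The sketch below is a minimal illustration of the standard crosswise point estimator, not the article's joint direct/protected framework; the variable names and example numbers are hypothetical.

```python
def crosswise_prevalence(n_same, n_total, p):
    """Estimate sensitive-trait prevalence from crosswise responses.

    n_same  -- respondents reporting that their two answers match
    n_total -- total respondents
    p       -- known prevalence of the innocuous trait (must not be 0.5)
    """
    if abs(p - 0.5) < 1e-9:
        raise ValueError("p = 0.5 leaves the prevalence unidentified")
    lam = n_same / n_total  # observed share of 'same' responses
    # P(same) = pi*p + (1 - pi)*(1 - p)  =>  pi = (lam + p - 1) / (2p - 1)
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)
    return min(max(pi_hat, 0.0), 1.0)  # truncate to the unit interval

# Hypothetical example: 325 of 500 respondents report "same",
# with an innocuous-trait prevalence of p = 0.25.
print(round(crosswise_prevalence(325, 500, 0.25), 3))  # -> 0.2
```

Note the efficiency cost that motivates the article's question of *when* to protect: the crosswise estimator's variance grows as p approaches 0.5, so protecting respondents who would have answered truthfully anyway wastes statistical precision.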


Authors' note: This article was prepared when Ana Corbacho was Sector Economic Advisor and Daniel Gingerich and Virginia Oliveros were visiting scholars at the Inter-American Development Bank. The data, code, and any additional materials required to replicate all analyses in this article are available on the Political Analysis Dataverse of Harvard University's Dataverse Network at: http://dx.doi.org/10.7910/DVN/2AIHQF. Supplementary materials for this article are available on the Political Analysis Web site.

