
Does Conjoint Analysis Mitigate Social Desirability Bias?

Published online by Cambridge University Press: 15 September 2021

Yusaku Horiuchi
Department of Government, Dartmouth College, Hanover, NH 03755, USA.
Zachary Markovich*
Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Teppei Yamamoto
Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Corresponding author: Zachary Markovich


How can we elicit honest responses in surveys? Conjoint analysis has become a popular tool to address social desirability bias (SDB), or systematic survey misreporting on sensitive topics. However, there has been no direct evidence showing its suitability for this purpose. We propose a novel experimental design to identify conjoint analysis’s ability to mitigate SDB. Specifically, we compare a standard, fully randomized conjoint design against a partially randomized design where only the sensitive attribute is varied between the two profiles in each task. We also include a control condition to remove confounding due to the increased attention to the varying attribute under the partially randomized design. We implement this empirical strategy in two studies on attitudes about environmental conservation and preferences about congressional candidates. In both studies, our estimates indicate that the fully randomized conjoint design could reduce SDB for the average marginal component effect (AMCE) of the sensitive attribute by about two-thirds of the AMCE itself. Although encouraging, we caution that our results are exploratory and exhibit some sensitivity to alternative model specifications, suggesting the need for additional confirmatory evidence based on the proposed design.
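The quantity the abstract centers on, the average marginal component effect (AMCE) of the sensitive attribute, is conventionally identified by simple differences in means when all attributes are independently randomized. The sketch below illustrates this with simulated forced-choice data; the attribute names, the true effect size of -0.15, and the sample size are all hypothetical choices for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # total profiles rated across respondents and tasks

# Hypothetical binary attributes, independently randomized across profiles:
# `sensitive` marks the socially sensitive attribute level; `filler` is a
# stand-in for the other (non-sensitive) conjoint attributes.
sensitive = rng.integers(0, 2, n)
filler = rng.integers(0, 2, n)

# Simulated choice outcome with a true AMCE of -0.15 for the sensitive
# attribute: profiles carrying it are chosen less often.
p_choose = 0.5 - 0.15 * sensitive + 0.05 * (filler - 0.5)
chosen = (rng.random(n) < p_choose).astype(float)

# Under full randomization, the AMCE is a difference in mean choice rates
# between profiles with and without the sensitive attribute level.
amce = chosen[sensitive == 1].mean() - chosen[sensitive == 0].mean()
print(f"estimated AMCE: {amce:.3f}")
```

With roughly 10,000 profiles per attribute level, the difference-in-means estimate lands close to the simulated truth of -0.15; the paper's design question is whether that estimate is contaminated by social desirability bias, which no amount of randomization alone can reveal.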

© The Author(s) 2021. Published by Cambridge University Press on behalf of the Society for Political Methodology



Edited by Jeff Gill


Supplementary material: Horiuchi et al. Dataset; Horiuchi et al. supplementary material (PDF, 246.1 KB).