
Using Machine Learning to Test Causal Hypotheses in Conjoint Analysis

Published online by Cambridge University Press:  08 February 2024

Dae Woong Ham*
Affiliation:
Department of Statistics, Harvard University, Cambridge, MA, USA
Kosuke Imai
Affiliation:
Departments of Government and Statistics, Harvard University, Cambridge, MA, USA. URL: https://imai.fas.harvard.edu/
Lucas Janson
Affiliation:
Department of Statistics, Harvard University, Cambridge, MA, USA. URL: http://lucasjanson.fas.harvard.edu
*
Corresponding author: Dae Woong Ham; Email: daewoongham@g.harvard.edu

Abstract

Conjoint analysis is a popular experimental design used to measure multidimensional preferences. Many researchers focus on estimating the average marginal effects of each factor while averaging over the other factors. Although this allows for straightforward design-based estimation, the results critically depend on the ways in which factors interact with one another. An alternative model-based approach can compute various quantities of interest, but requires correct model specification, a challenging task for conjoint analysis with many factors. We propose a new hypothesis testing approach based on the conditional randomization test (CRT) to answer the most fundamental question of conjoint analysis: Does a factor of interest matter in any way given the other factors? Although it provides only a formal test of this binary question, the CRT relies solely on the randomization of factors and hence requires no modeling assumptions. This means that the CRT can provide a powerful and assumption-free statistical test by enabling the use of any test statistic, including those based on complex machine learning algorithms. We also show how to test commonly used regularity assumptions. Finally, we apply the proposed methodology to a conjoint analysis of immigration preferences. The proposed methodology is implemented in the open-source R package CRTConjoint, available through the Comprehensive R Archive Network at https://cran.r-project.org/web/packages/CRTConjoint/index.html.
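To make the CRT idea concrete, the R sketch below illustrates the general logic on simulated forced-choice data: the factor of interest is repeatedly resampled from its known randomization distribution, a test statistic is recomputed each time, and the resulting null draws yield a finite-sample p-value. This is a minimal sketch only, not the CRTConjoint API or the authors' exact procedure; the simulated data, variable names, and the deviance-based test statistic are illustrative assumptions.

## A minimal sketch of the conditional randomization test (CRT) idea in R.
## This is NOT the CRTConjoint implementation; the data, names, and the
## logistic-regression test statistic below are illustrative assumptions.

set.seed(1)

n <- 2000
## Simulated conjoint-style data: a binary outcome (profile chosen or not),
## a uniformly randomized factor of interest Z, and another randomized factor X.
X <- factor(sample(c("low", "medium", "high"), n, replace = TRUE))
Z <- factor(sample(c("a", "b", "c"), n, replace = TRUE))   # factor of interest
Y <- rbinom(n, 1, plogis(0.3 * (Z == "b") - 0.2 * (X == "high")))

## Test statistic: reduction in deviance from adding Z (and its interaction
## with X) to a logistic regression of the outcome on the other factor.
test_stat <- function(y, z, x) {
  null_fit <- glm(y ~ x, family = binomial)
  full_fit <- glm(y ~ x * z, family = binomial)
  deviance(null_fit) - deviance(full_fit)
}

t_obs <- test_stat(Y, Z, X)

## CRT: resample Z from its known randomization distribution (here, uniform
## and independent of X), holding Y and X fixed, and recompute the statistic.
B <- 400
t_null <- replicate(B, {
  Z_star <- factor(sample(levels(Z), n, replace = TRUE))
  test_stat(Y, Z_star, X)
})

## Valid finite-sample p-value for "does Z matter in any way given the other factors?"
p_value <- (1 + sum(t_null >= t_obs)) / (B + 1)
p_value

Because the validity of this test rests only on knowing the randomization distribution of the factor of interest, the deviance-based statistic above could be swapped for one computed from a more flexible machine learning model without affecting Type I error control; only the power of the test would change.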

Type
Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Society for Political Methodology


Footnotes

Edited by: Jeff Gill

Supplementary material

Ham et al. supplementary material (File, 333.4 KB)
Ham et al. Dataset (Link)