
Sensitive Questions, Truthful Answers? Modeling the List Experiment with LISTIT

Published online by Cambridge University Press:  04 January 2017

Daniel Corstange*
Affiliation: Department of Government and Politics, University of Maryland, College Park, MD 20742
*e-mail: dcorstange@gvpt.umd.edu (corresponding author)

Abstract


Standard estimation procedures assume that empirical observations are accurate reflections of the true values of the dependent variable, but this assumption is dubious when modeling self-reported data on sensitive topics. List experiments (a.k.a. item count techniques) can nullify incentives for respondents to misrepresent themselves to interviewers, but current data analysis techniques are limited to difference-in-means tests. I present a revised procedure and statistical estimator called LISTIT that enable multivariate modeling of list experiment data. Monte Carlo simulations and a field test in Lebanon explore the behavior of this estimator.
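The difference-in-means analysis that the abstract identifies as the current standard can be illustrated with a short simulation. The sketch below is purely illustrative and is not taken from the article: the number of baseline items, the item-affirmation probability, and the true prevalence of the sensitive item are all hypothetical values chosen for demonstration.

```python
import random

random.seed(0)

# Hypothetical list experiment: J baseline items plus one sensitive item
# whose true prevalence is 0.30 (all parameter values are illustrative).
J, true_prevalence, n = 3, 0.30, 5000

def baseline_count():
    # Each respondent affirms each baseline item with probability 0.5.
    return sum(random.random() < 0.5 for _ in range(J))

# Control group reports a count over the J baseline items only; the
# treatment group's list also includes the sensitive item.
control = [baseline_count() for _ in range(n)]
treatment = [baseline_count() + (random.random() < true_prevalence)
             for _ in range(n)]

# Difference in mean item counts estimates the sensitive item's prevalence.
estimate = sum(treatment) / n - sum(control) / n
print(round(estimate, 2))
```

Because respondents report only a count, never which items they affirmed, the design shields individual answers while the group-level difference still recovers aggregate prevalence; the limitation the article addresses is that this estimator, unlike a multivariate model, cannot condition on respondent covariates.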

Type
Research Article
Copyright
Copyright © The Author 2008. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Author's Note: My thanks to Robert Axelrod, Janet Box-Steffensmeier, Sarah Croco, Adam Glynn, Sunshine Hillygus, John Jackson, Luke Keele, Gary King, James Kuklinski, Irfan Nooruddin, Mark Tessler, Ashutosh Varshney, and two anonymous reviewers for their comments and suggestions. Replication materials are available on the Political Analysis web site.
