
List Experiment Design, Non-Strategic Respondent Error, and Item Count Technique Estimators

  • John S. Ahlquist

The item count technique (ICT-MLE) regression model for survey list experiments depends on assumptions about responses at the extremes (choosing no or all items on the list). Existing list experiment best practices aim to minimize strategic misrepresentation in ways that virtually guarantee that a tiny number of respondents appear in the extrema. Under such conditions both the “no liars” identification assumption and the computational strategy used to estimate the ICT-MLE become difficult to sustain. I report the results of Monte Carlo experiments examining the sensitivity of the ICT-MLE and simple difference-in-means estimators to survey design choices and small amounts of non-strategic respondent error. I show that, compared to the difference in means, the performance of the ICT-MLE depends on list design. Both estimators are sensitive to measurement error, but the problems are more severe for the ICT-MLE as a direct consequence of the no liars assumption. These problems become extreme as the number of treatment-group respondents choosing all the items on the list decreases. I document that such problems can arise in real-world applications, provide guidance for applied work, and suggest directions for further research.
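The difference-in-means estimator discussed above has a simple form: the treatment group sees the control items plus the sensitive item, so the difference in mean item counts between the two arms estimates the prevalence of the sensitive item. The following is a minimal simulation sketch of that estimator; all parameter values (`n`, `J`, `p_item`, `p_sensitive`) are illustrative assumptions, not the Monte Carlo design used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (for illustration only; not the paper's design)
n = 2000            # respondents per experimental arm
J = 4               # number of non-sensitive (control) items
p_item = 0.3        # endorsement probability for each control item
p_sensitive = 0.15  # true prevalence of the sensitive item

# Control group reports a count over the J control items only
y_control = rng.binomial(J, p_item, size=n)

# Treatment group reports a count over the J control items
# plus the sensitive item
y_treatment = rng.binomial(J, p_item, size=n) + rng.binomial(1, p_sensitive, size=n)

# Difference in means estimates the sensitive-item prevalence
dim_estimate = y_treatment.mean() - y_control.mean()
print(round(dim_estimate, 3))
```

Note that with endorsement probabilities like these, very few treatment-group respondents report the maximum count of J + 1 — exactly the sparsity at the extremes that, per the abstract, strains the ICT-MLE's "no liars" assumption.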


Author’s note: See Ahlquist (2017) for the replication archive. Versions of this paper were presented at the 2014 PolMeth and Midwest Political Science Association meetings as well as the UW-Madison Models and Data group and colloquia at UC San Diego and the University of Washington Center for Statistics and the Social Sciences. I thank Graeme Blair, Scott Gehlbach, Kosuke Imai, Simon Jackman, Tom Pepinsky, Margaret Roberts, Michael D. Ward, Yiqing Xu, and Alex Tahk for helpful conversations.

Contributing Editor: R. Michael Alvarez

References

Ahlquist, J. 2017. Replication data for: List experiment design, non-strategic respondent error, and item count technique estimators.
Ahlquist, J. S., Mayer, K. R., and Jackman, S. 2014. Alien abduction and voter impersonation in the 2012 US general election: Evidence from a survey list experiment. Election Law Journal 13(4):460–475.
Aronow, P. M., Coppock, A., Crawford, F. W., and Green, D. P. 2015. Combining list experiment and direct question estimates of sensitive behavior prevalence. Journal of Survey Statistics and Methodology 3(1):43–66.
Blair, G., and Imai, K. 2010. List: Statistical methods for the item count technique and list experiment. Available at The Comprehensive R Archive Network.
Blair, G., and Imai, K. 2012. Statistical analysis of list experiments. Political Analysis 20:47–77.
Blair, G., Imai, K., and Lyall, J. 2014. Comparing and combining list and endorsement experiments: Evidence from Afghanistan. American Journal of Political Science 58(4):1043–1063.
Chaudhuri, A., and Christofides, T. C. 2007. Item count technique in estimating the proportion of people with a sensitive feature. Journal of Statistical Planning and Inference 187:589–593.
Corstange, D. 2009. Sensitive questions, truthful answers? Modeling the list experiment with LISTIT. Political Analysis 17:45–63.
Eady, G. 2017. The statistical analysis of misreporting on sensitive survey questions. Political Analysis 25(2):241–259.
Gingerich, D. W., Oliveros, V., Corbacho, A., and Ruiz-Vega, M. 2016. When to protect? Using the crosswise model to integrate protected and direct responses in surveys of sensitive behavior. Political Analysis 24:132–156.
Glynn, A. 2013. What can we learn with statistical truth serum? Design and analysis of the list experiment. Public Opinion Quarterly 77:159–172.
Imai, K. 2011. Multivariate regression analysis for the item count technique. Journal of the American Statistical Association 106(494):407–416.
Imai, K., Park, B., and Greene, K. F. 2015. Using the predicted responses from list experiments as explanatory variables in regression models. Political Analysis 23:180–196.
Internal Revenue Service. 2015. Internal Revenue Service Data Book 2014. Publication 55B. Washington, DC.
Kiewiet de Jonge, C. P., and Nickerson, D. W. 2013. Artificial inflation or deflation? Assessing the item count technique in comparative surveys. Political Behavior 36(3):1–24.
Kuha, J., and Jackson, J. 2014. The item count method for sensitive survey questions: Modelling criminal behaviour. Journal of the Royal Statistical Society Series C: Applied Statistics 63(2):321–341.
Kuklinski, J. H., Cobb, M. D., and Gilens, M. 1997. Racial attitudes and the “New South”. Journal of Politics 59(2):323–349.
Liu, Y., Tian, G.-L., Wu, Q., and Tang, M.-L. 2017. Poisson–Poisson item count techniques for surveys with sensitive discrete quantitative data. Statistical Papers.
Madden, M., and Rainie, L. 2010. Adults and cell phone distractions. Technical Report, Pew Internet and American Life Project, Washington, DC.
Naumann, R. B. 2011. Mobile device use while driving – United States and seven European countries, 2011. Morbidity and Mortality Weekly Report 62(10):177–182.
Rosenfeld, B., Imai, K., and Shapiro, J. N. 2015. An empirical validation study of popular survey methodologies for sensitive questions. American Journal of Political Science 60(3):783–802.
Tian, G.-L., Tang, M.-L., Wu, Q., and Liu, Y. 2017. Poisson and negative binomial item count techniques for surveys with sensitive question. Statistical Methods in Medical Research 26(2):931–947.
Political Analysis
  • ISSN: 1047-1987
  • EISSN: 1476-4989
Supplementary materials
  • Ahlquist supplementary material 1 (357 KB)
