Published online by Cambridge University Press: 04 January 2017
Survey researchers avoid using large multi-item scales to measure latent traits due to both the financial costs and the risk of driving up nonresponse rates. Typically, investigators select a subset of available scale items rather than asking the full battery. Reduced batteries, however, can sharply reduce measurement precision and introduce bias. In this article, we present computerized adaptive testing (CAT) as a method for minimizing the number of questions each respondent must answer while preserving measurement accuracy and precision. CAT algorithms use individuals' previous answers to select subsequent questions that most efficiently reveal respondents' positions on a latent dimension. We introduce the basic stages of a CAT algorithm and present the details of one approach to item selection appropriate for public opinion research. We then demonstrate the advantages of CAT via simulation and an empirical comparison of dynamic and static measures of political knowledge.
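The abstract's core mechanism, choosing each question based on the answers given so far, can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: it assumes a two-parameter logistic (2PL) IRT model, estimates ability by expected-a-posteriori (EAP) under a standard normal prior, and selects the unasked item with maximum Fisher information at the current estimate. The item parameters and the helper names (`eap_theta`, `next_item`) are hypothetical.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information an item provides at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def eap_theta(answers, items, grid):
    """EAP ability estimate under a N(0, 1) prior, via a grid posterior."""
    log_post = -0.5 * grid ** 2  # log prior (up to a constant)
    for idx, correct in answers:
        a, b = items[idx]
        p = p_correct(grid, a, b)
        log_post += np.log(p if correct else 1.0 - p)
    w = np.exp(log_post - log_post.max())
    return float(np.sum(grid * w) / np.sum(w))

def next_item(answers, items, grid=np.linspace(-4, 4, 161)):
    """Pick the unasked item that is most informative at the current estimate."""
    theta_hat = eap_theta(answers, items, grid)
    asked = {idx for idx, _ in answers}
    candidates = [i for i in range(len(items)) if i not in asked]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *items[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
items = [(1.0, -2.0), (1.5, 0.0), (1.2, 2.0)]
# A respondent who misses the medium item gets a low ability estimate,
# so the easier remaining item (index 0) is the more informative next question.
print(next_item([(1, False)], items))  # → 0
```

Each answer shifts the posterior over the latent trait, which in turn changes which remaining item carries the most information; this feedback loop is what lets CAT match the precision of a long battery with far fewer questions per respondent.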
Authors' note: We are grateful for helpful comments provided by Martin Elff, Sunshine Hillygus, Walter Mebane, Brendan Nyhan, and two anonymous reviewers. A previous version of this article was presented at the 2012 Annual Meeting of the Midwest Political Science Association, the 2012 Saint Louis Area Methods Meeting, and the 2012 Summer Methods Meeting. Supplementary materials for this article are available on the Political Analysis Web site.