
Addressing Measurement Errors in Ranking Questions for the Social Sciences

Published online by Cambridge University Press:  21 February 2025

Yuki Atsusaka
Affiliation:
Assistant Professor, Hobby School of Public Affairs, University of Houston, Houston, TX, USA; Email: atsusaka@uh.edu
Seo-young Silvia Kim*
Affiliation:
Assistant Professor, Department of Political Science and International Relations, Seoul National University, Seoul, South Korea
Corresponding author: Seo-young Silvia Kim; Email: sy.silvia.kim@gmail.com

Abstract

Social scientists often use ranking questions to study people’s opinions and preferences. However, little is understood about the general nature of measurement errors in such questions, let alone their statistical consequences and what researchers can do about them. We introduce a statistical framework to improve ranking data analysis by addressing measurement errors in ranking questions. First, we characterize measurement errors from random responses—arbitrary and meaningless responses based on a wide range of random patterns. We then quantify bias due to random responses, show that the bias may change our conclusion in any direction, and clarify why item order randomization alone does not solve the statistical issue. Next, we introduce our methodology based on two key design-based considerations: item order randomization and the addition of an “anchor” ranking question with known correct answers. They allow researchers to (1) learn about the direction of the bias and (2) estimate the proportion of random responses, enabling our bias-corrected estimators. We illustrate our methods by studying the relative importance of people’s partisan identity compared to their racial, gender, and religious identities in American politics. We find that about 30% of respondents offered random responses and that these responses may affect our substantive conclusions.
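The correction logic described in the abstract can be sketched as a simple mixture model. This is an illustrative sketch, not the authors' exact estimator: it assumes attentive respondents always rank the anchor question correctly, that random responders draw uniformly over all 4! = 24 orderings, and the function names are hypothetical.

```python
from math import factorial

def estimate_random_share(anchor_correct_rate, n_items=4):
    """Back out the share of random responders from the anchor pass rate.
    Mixture assumption: observed pass rate = (1-p)*1 + p*(1/n_items!),
    since a uniform random ranker passes by chance with prob 1/n_items!."""
    p_guess = 1.0 / factorial(n_items)
    p = (1.0 - anchor_correct_rate) / (1.0 - p_guess)
    return min(max(p, 0.0), 1.0)  # clamp to [0, 1]

def bias_corrected_mean_rank(observed_mean, p_random, n_items=4):
    """Invert the mixture for an item's average rank:
    observed = (1-p)*true + p*uniform, where uniform ranking gives
    every item an expected rank of (n_items + 1)/2."""
    uniform_mean = (n_items + 1) / 2.0
    return (observed_mean - p_random * uniform_mean) / (1.0 - p_random)

# Hypothetical numbers: 72.9% pass the anchor; an item's raw average rank is 1.9.
p = estimate_random_share(0.729)            # roughly 0.28 random responders
corrected = bias_corrected_mean_rank(1.9, p)  # pulled away from the uniform 2.5
```

With these illustrative inputs the estimated random share is close to the roughly 30% reported in the paper's application, and the corrected average rank moves further from 2.5, the value implied by pure indifference among four items.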

Information

Type: Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Political Methodology

Figure 1 Ranking question to measure relative partisanship.


Figure 2 Examples of random responses in ranking questions.


Figure 3 Design-based methods for estimating the proportion and distribution of random responses.


Figure 4 Example of an anchor ranking question.


Figure 5 Graphical representation of IPW.


Table 1 Comparison of two bias correction methods.


Figure 6 Visualization of the uniformity test: distribution over all possible recorded responses in the target and anchor questions. Note: The dashed line represents $1/24 \times 100\%$, to which the distribution should converge in the absence of random responses.


Table 2 Distribution of responses to the anchor question.


Figure 7 Distributions of identity rankings with bias-corrected and raw data.


Figure 8 Average ranks with and without bias correction. Note: The dashed line represents the average rank that arises when people are indifferent among the four items.


Figure 9 Predicted probabilities with and without bias correction.


Figure 10 Average ranks in the entire population under different assumptions. Note: The dashed line represents the average rank that arises when people are indifferent among the four items.

Supplementary material: File

Atsusaka and Kim supplementary material (File, 359.5 KB)