
Reducing Attenuation Bias in Regression Analyses Involving Rating Scale Data via Psychometric Modeling

Published online by Cambridge University Press:  01 January 2025

Cees A. W. Glas*
Affiliation:
University of Twente
Terrence D. Jorgensen
Affiliation:
University of Amsterdam
Debby Ten Hove
Affiliation:
Vrije Universiteit Amsterdam
*
Correspondence should be made to Cees A. W. Glas, University of Twente, Enschede, The Netherlands. C.A.W.Glas@gmail.com

Abstract

Many studies in fields such as psychology and the educational sciences obtain information about attributes of subjects through observational studies, in which raters score subjects using multiple-item rating scales. Error variance due to measurement effects, such as items and raters, attenuates the regression coefficients and lowers the power of (hierarchical) linear models. A modeling procedure is discussed to reduce this attenuation. The procedure consists of (1) an item response theory (IRT) model to map the discrete item responses to a continuous latent scale and (2) a generalizability theory (GT) model to separate the variance in the latent measurement into variance components of interest and nuisance variance components. It is shown how measurements obtained from this combination of IRT and GT models can be embedded in (hierarchical) linear models, either as predictor or as criterion variables, such that error variance due to nuisance effects is partialled out. Using examples from the field of educational measurement, it is shown how general-purpose software can be used to implement the modeling procedure.
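The attenuation the abstract refers to can be illustrated with a small simulation (a hedged sketch, not the authors' procedure): when a predictor is observed with error, the simple-regression slope shrinks toward zero by the reliability factor, i.e., the ratio of true-score variance to observed-score variance. The variable names and variances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent (true) predictor and an outcome generated from it: y = 0.5 * theta + residual
theta = rng.normal(0.0, 1.0, n)
y = 0.5 * theta + rng.normal(0.0, 1.0, n)

# Observed predictor contaminated by measurement error (e.g., item/rater effects),
# here with error variance 1.0, so reliability = 1 / (1 + 1) = 0.5
x_obs = theta + rng.normal(0.0, 1.0, n)

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b_true = ols_slope(theta, y)  # close to the generating slope 0.5
b_obs = ols_slope(x_obs, y)   # close to 0.5 * reliability = 0.25 (attenuated)
print(b_true, b_obs)
```

Partialling nuisance variance out of the measurement, as the IRT/GT procedure does, raises the effective reliability and thus moves the estimated coefficient back toward its unattenuated value.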

Information

Type: Theory & Methods
Creative Commons
CC BY
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/
Copyright
Copyright © 2024 The Author(s)
Table 1 Empirical Example 1: Differences between estimates of discrimination parameters using various estimation strategies.

Table 2 Empirical Example 1: Differences between estimates of average location parameters using various estimation strategies.

Table 3 Empirical Example 1: Estimates of variance components using various estimation strategies.

Table 4 Empirical Example 1: Changes in mean proficiency over subsequent measurements.

Table 5 Empirical Example 1: D-study: Agreement and reliability as a function of various numbers of raters and time-points.

Table 6 Empirical Example 2: Multilevel model with ICALT as predictor.

Table 7 Empirical Example 2: Varying the reliability of the ICALT.

Table 8 Empirical Example 3: Summary of models used for analyses.

Table 9 Empirical Example 3: Correlation between item parameter estimates for two models.

Table 10 Empirical Example 3: Latent correlations under various models.