
Imperfect Corrections or Correct Imperfections? Psychometric Corrections in Meta-Analysis

Published online by Cambridge University Press:  27 May 2015

Frederick L. Oswald*
Department of Psychology, Rice University
Seydahmet Ercan
Department of Psychology, Rice University
Samuel T. McAbee
Department of Psychology, Rice University
Jisoo Ock
Department of Psychology, Rice University
Amy Shaw
Department of Psychology, Rice University
Correspondence concerning this article should be addressed to Frederick L. Oswald, Department of Psychology, Rice University, 6100 Main Street, MS-25, Houston, TX 77005. E-mail:


LeBreton, Scherer, and James (2014) raise an understandable concern that psychometric corrections in organizational research amount to little more than a form of statistical hydraulics. Corrections for measurement error variance and range restriction might inappropriately ratchet observed effects upward into regions of practical significance and publication glory—at the expense of producing highly questionable results.

Copyright © Society for Industrial and Organizational Psychology 2015 



Aytug, Z. G., Rothstein, H. R., Zhou, W., & Kern, M. C. (2011). Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organizational Research Methods, 15, 103–133. doi:10.1177/1094428111403495
James, L. R., Demaree, R. G., Mulaik, S. A., & Ladd, R. T. (1992). Validity generalization in the context of situational models. Journal of Applied Psychology, 73, 673–678. doi:10.1037//0021-9010.77.1.3
Le, H., Schmidt, F. L., & Putka, D. (2009). The multifaceted nature of measurement artifacts and its implications for estimating construct-level relationships. Organizational Research Methods, 12, 165–200. doi:10.1177/1094428107302900
LeBreton, J. M., Scherer, K. T., & James, L. R. (2014). Corrections for criterion reliability in validity generalization: A false prophet in a land of suspended judgment. Industrial and Organizational Psychology: Perspectives on Science and Practice, 7, 478–500. doi:10.1111/iops.12184
Newman, D. A., & Lyon, J. S. (2009). Recruitment efforts to reduce adverse impact: Targeted recruiting for personality, cognitive ability, and diversity. Journal of Applied Psychology, 94, 298–317. doi:10.1037/a0013472
Oswald, F. L., & McCloy, R. A. (2003). Meta-analysis and the art of the average. In Murphy, K. R. (Ed.), Validity generalization: A critical review (pp. 311–338). Mahwah, NJ: Erlbaum.
Raju, N. S., Anselmi, T. V., Goodman, J. S., & Thomas, A. (1998). The effect of correlated artifacts and true validity on the accuracy of parameter estimation in validity generalization. Personnel Psychology, 51, 452–465. doi:10.1111/j.1744-6570.1998.tb00733.x
Russell, C. J., & Gilliland, S. W. (1995). Why meta-analysis doesn't tell us what the data really mean: Distinguishing between moderator effects and moderator processes. Journal of Management, 21, 813–831. doi:10.1177/014920639502100412
Sackett, P. R., Lievens, F., Berry, C. M., & Landers, R. N. (2007). A cautionary note on the effects of range restriction on predictor intercorrelations. Journal of Applied Psychology, 92, 538–544. doi:10.1037/0021-9010.92.2.538
Schmidt, F. L., & Hunter, J. E. (2015). Methods of meta-analysis: Correcting error and bias in research findings. Thousand Oaks, CA: Sage.