
Is Content-Related Evidence Useful in Validating Selection Tests?

Published online by Cambridge University Press:  07 January 2015

Kevin R. Murphy*
Affiliation: Pennsylvania State University
*Corresponding author. E-mail: krm10@psu.edu. Address: Department of Psychology, Pennsylvania State University, Moore Bldg., University Park, PA 16802.

Abstract

The 12 papers commenting on K. R. Murphy (2009a) raise a number of important issues, most of which can be subsumed in one of four themes. First, papers examining content-oriented validation strategies are still necessary and useful, in part because of the frequent use of these strategies in the practice of industrial–organizational (I–O) psychology. Second, the term “content validity” means many different things both within and beyond the field of I–O psychology, and it is useful to understand what sorts of inferences examinations of test content do and do not support. Third, these 12 papers present very little evidence that content validation, as typically carried out by I–O psychologists, actually provides information about the likelihood that people who do well on the test will do well on the job. Finally, I believe that the best use of content-related evidence in validating selection tests is in developing hypotheses about relationships between test scores and criteria rather than in testing these hypotheses.

Information

Type: Response
Copyright: © Society for Industrial and Organizational Psychology 2009
