
Comparing Rater Groups: How To Disentangle Rating Reliability From Construct-Level Disagreements

Published online by Cambridge University Press:  29 December 2016

Chockalingam Viswesvaran*
Affiliation:
Department of Psychology, Florida International University
Deniz S. Ones
Affiliation:
Department of Psychology, University of Minnesota
Frank L. Schmidt
Affiliation:
Department of Management, University of Iowa
*
Correspondence concerning this article should be addressed to Chockalingam Viswesvaran, Department of Psychology, Florida International University, 1200 SW 8th Street, Miami, FL 33199. E-mail: vish@fiu.edu

Extract

In this commentary, we build on Bracken, Rose, and Church's (2016) definition stating that 360° feedback should involve “the analysis of meaningful comparisons of rater perceptions across multiple ratees, between specific groups of raters” (p. 764). Bracken et al. expand on this component of the definition later by stressing that “the ability to conduct meaningful comparisons of rater perceptions both between (inter) and within (intra) groups is central and, indeed, unique to any true 360° feedback process” (p. 767; italicized in their focal article). They add that “This element of our definition acknowledges that 360° feedback data represent rater perceptions that may contradict each other while each being true and valid observations” (p. 767).

Information

Type: Commentaries

Copyright © Society for Industrial and Organizational Psychology 2016