
The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation

Published online by Cambridge University Press:  19 May 2011

Domenic V. Cicchetti
Affiliation:
VA Medical Center, West Haven, CT 06516, Electronic mail: cicchetti@yalevm.bitnet

Abstract


The reliability of peer review of scientific documents and the evaluative criteria scientists use to judge the work of their peers are critically reexamined, with special attention to the consistently low levels of reliability that have been reported. Referees of grant proposals agree much more about what is unworthy of support than about what has scientific value. For manuscript submissions this pattern appears to depend on whether a discipline (or subfield) is general and diffuse (e.g., cross-disciplinary physics, general fields of medicine, cultural anthropology, social psychology) or specific and focused (e.g., nuclear physics, medical specialty areas, physical anthropology, and behavioral neuroscience). In the former, there is likewise much more agreement on rejection than on acceptance; in the latter, both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggest that reviewers and editors agree more on acceptance than on rejection. Several suggestions are made for improving the reliability and quality of peer review. Further research is needed, especially in the physical sciences.

Information

Type
Target Article
Copyright
Copyright © Cambridge University Press 1991