
Is the problem developmental review, or the development of peer review?

Published online by Cambridge University Press:  31 March 2026

William G. Obenauer*
Affiliation:
Maine Business School, University of Maine, Orono, ME, USA
Yannick Griep
Affiliation:
Samergo, Rotterdam, The Netherlands; School of Industrial Psychology and Human Resource Management, North-West University, Potchefstroom, South Africa
Corresponding author: William G. Obenauer; Email: william.obenauer@maine.edu

Type: Commentaries
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology.

We agree with Allen et al. (2026) that peer review is riddled with imperfections, but we suggest that, despite its priming effects, developmental review is being used as a scapegoat for larger systemic problems that detract from peer review quality. We evaluate three problems discussed by Allen et al., argue that each extends beyond the realm of developmental review, outline the systemic constraints that contribute to these problems, and propose a path forward.

Examples of problems in peer review

Allen et al. raised concerns about inaccurate feedback provided in reviews, arguing that accuracy is not evaluated in developmental review. However, accurate feedback contributes to the perceived usefulness of feedback (Brett & Atwater, 2001) and to skill acquisition (Brand et al., 2020). In fact, effective developmental review requires reviewers to recognize that they are not all-knowing entities; thus, there is no reasonable justification for them to make unsubstantiated, inaccurate statements (Ragins, 2015). Although reviewer accuracy may not be formally rewarded, accurate reviews contribute to author development; thus, the argument that developmental review encourages inaccurate reviews is tenuous at best.

Allen et al. also called for reviewers to “refrain from suggesting [questionable research practices (QRPs)]” (p. 18). We agree that peer review has played an unfortunate role in the prevalence of QRPs. For example, scholars have suggested that the exclusion of certain variables or results from reporting may be influenced by editorial direction (e.g., Crede & Harms, 2019; Fiedler & Schwarz, 2016), and researcher justifications for QRPs (e.g., reviewer expectations, fear of potential rejection) have been positively associated with engagement in QRPs (Latan et al., 2023). Even when reviewers do not directly suggest QRPs, comments such as those critiquing null results have the unintended consequence of indirectly suggesting QRPs by reinforcing beliefs that unsupported hypothesis tests are unpublishable (Banks et al., 2016). However, as Allen et al. also noted, in recent years the field of industrial and organizational psychology has focused on reducing engagement in QRPs. In fact, Aguinis’s (2025) recent research methodology textbook devotes considerable attention to the problem of QRPs. If developmental review is focused on authors’ growth, it should emphasize engaging in open science practices and eliminating QRPs. Although peer review may be influencing QRPs, the argument that developmental review is driving this relationship is unconvincing.

Allen et al. (p. 7) also claimed that developmental review drives reviewers to inform researchers how they should have conducted their research, “rather than [engage] with the research as conducted.” This argument contradicts guidance that, in developmental review, reviewers are not ghostwriters (Ragins, 2015) and should be forward-looking in identifying solutions to problems (Carpenter, 2009). Therefore, this concern appears to be a consequence of reviewers deviating from, rather than engaging in, developmental review. Collectively, these arguments raise the question of whether developmental review motivates the wrong reviewer behaviors or whether contextual constraints beyond the trend of developmental review (as outlined below) prevent reviewers and editors from engaging in meaningful developmental review.

Constraint #1: Time pressures and performance metrics

Action editors (AEs) often juggle multiple manuscripts at different stages of review, in addition to their own academic responsibilities (Aguinis et al., 2010; Aguinis & Vaschetto, 2011; Starbuck et al., 2008). In fact, our look behind the scenes, using data provided by a large academic publisher, shows that AEs at top business journals handle an average of 44 papers per year, with annual workloads ranging from a manageable 13 to as many as 101 manuscripts. This volume limits AEs’ capacity to verify reviewer claims or reconcile conflicting feedback. Thus, the AE’s role risks shifting from independent evaluator to aggregator and synthesizer of reviewer input. In practice, this compromises one of the primary safeguards the system is supposed to offer: a layer of editorial oversight that contextualizes, filters, and validates reviewer feedback. This dynamic is further reinforced by embedded performance metrics, such as days to first decision and overall production speed, that emphasize turnaround times over the accuracy or rigor of decision letters. Such metrics implicitly reward “moving things along” over deep engagement with the content of reviews and signal to editors and authors alike that speed is the dominant measure of good editorial practice. Although efficiency has obvious practical value, prioritizing it can have unintended consequences for the integrity of editorial work, with less rigorous research becoming part of the scientific record while more robust contributions go unpublished (Adam, 2025; Palayew et al., 2020). In a high-volume, efficiency-focused model, AEs are not positioned to fully engage with manuscripts or reviewer feedback.

Constraint #2: Reviewer incentives and availability

There is little tangible motivation for unpaid reviewers to devote the considerable time and care required for a high-quality review. This incentive gap likely narrows the reviewer pool and leads to overreliance on a small group of reviewers whose comments often shape outcomes more than they should. Furthermore, challenges in securing timely and qualified reviews can lead to compromises in reviewer selection that erode confidence in the rigor and fairness of the process (Adam, 2025; Alberts et al., 2008). Reviewer scarcity also constrains editorial independence, as AEs may feel pressure to honor flawed or low-quality reviewer input to maintain goodwill. This tendency runs counter to explicit guidance from leading journals that emphasizes that editors are not merely “vote counters” (Campbell & Aguilera, 2022; Sarker et al., 2023) and highlights the editor’s evaluative role. Yet in practice, AEs may hesitate to override reviewers because contradicting a reviewer can be perceived as undermining the process and may disincentivize reviewers from reviewing manuscripts in the future.

Constraint #3: Power dynamics and deference

Status cues, such as author prominence or elite institutional affiliation, can measurably affect peer review outcomes, creating a “hierarchy of voices” in which established authors exert outsized influence (Teplitskiy et al., 2018; Tomkins et al., 2017). If we apply this dynamic to reviewers and the review process, AEs may be disproportionately influenced by the opinions of reviewers with significant social capital. Furthermore, AEs may be more reluctant to override flawed reviews when authors lack social capital, because the consequences of professional friction with high-status reviewers are greater than the consequences of friction with lower-status authors. The reputational and relational costs of openly challenging reviewers (e.g., jeopardizing future collaborations, complicating reciprocal reviewing arrangements) can foster a “politics of deference” that allows problematic review practices to persist uncorrected.

Constraint #4: Cultural norms in the field

Deep-seated cultural norms within our field influence not only how reviewers evaluate manuscripts but also how AEs interpret and act upon those reviews, and they may thus perpetuate the very problems that both Allen et al. and we highlight. A persistent cultural norm shaping editorial decisions is the bias toward confirmatory findings, whereby statistically significant results are implicitly valued over null or inconclusive outcomes (Franco et al., 2014). Even when AEs recognize the problem, field-wide expectations can make it difficult to push back, and decision letters that highlight unsupported hypotheses can unintentionally reinforce QRPs. The absence of explicit guidance against QRPs often leaves reviewers’ problematic suggestions (e.g., unplanned analyses) unchallenged and, at times, endorsed by AEs. This is compounded by the lack of formal expectations for fact-checking reviewer claims, which can result in inaccurate statements passing into decision letters uncorrected.

System-level reforms

Overcoming these constraints requires systemic reform of the peer review process. At the editorial level, journals could expand AE evaluation criteria to include qualitative indicators, such as review accuracy and the resolution of contradictory feedback, rather than focusing almost exclusively on speed. Clear policies and training should also reinforce that the AE’s role includes fact-checking and reconciling reviewer claims. Several initiatives (e.g., COPE, 2017; Sage Journals, 2025) outline such responsibilities, but they have yet to be integrated into formal performance expectations. Explicit guidance that author–reviewer dynamics should not deter the AE from correcting inaccuracies or declining problematic suggestions would further help shift the perception of editorial independence from optional to essential.

Yet empowering AEs alone will not suffice if reviewer incentives remain weak and power dynamics remain strong. One way to begin addressing both issues is to recalibrate the reward structure for reviewing. Modest financial compensation can broaden the reviewer pool and improve reviewer timeliness (Chetty et al., 2014), allowing AEs more time to critically review manuscript feedback. By widening the reviewer base, such models reduce overreliance on a small, high-prestige group, relieving AEs of the pressure to satisfy reviewers and empowering them to challenge inaccuracies and make decisions driven by evidence rather than reputation.

Finally, making editorial policies clear, visible, and operational is essential. Journals could discourage abstract conjecture by requiring reviewers to cite specific manuscript passages when critiquing methodology and by mandating that any factual inconsistencies be flagged and resolved explicitly in editorial correspondence. Through formal editorial statements and policy innovations, journals can also affirm that sound methods are valued independent of outcomes. In one such innovation, registered reports, manuscripts are provisionally accepted on the basis of theoretical significance and methodological rigor prior to data collection, with final publication guaranteed regardless of whether the results are statistically significant (Briker & Gerpott, 2024; Chambers & Tzavella, 2022; Miller & Bamberger, 2016). Similarly, “results-blind” review holds potential for reducing editorial bias toward confirmatory findings (Findley et al., 2016).

Taken together, these reforms aim not merely to correct individual failings but to recalibrate the system itself, strengthening editorial independence by enabling AEs to resist default cultural pressures and to justify decisions that uphold rigor and transparency. The resulting alignment of editorial practices, reviewer incentives, and field-wide expectations should support a peer-review culture anchored in accuracy, transparency, and fairness, regardless of whether the focus of review is developmental or constructive.

Footnotes

The authors used AI-based tools solely for language and grammar proofreading. All theoretical ideas, arguments, and contributions presented in this manuscript are entirely the authors’ own.

References

Adam, D. (2025). The peer-review crisis: How to fix an overloaded system. Nature, 644(8075), 24–27. https://doi.org/10.1038/d41586-025-02457-2
Aguinis, H. (2025). Research methodology: Best practices for rigorous, credible, and impactful research (1st ed.). Sage.
Aguinis, H., De Bruin, G., Cunningham, D., Hall, N., Culpepper, S., & Gottfredson, R. (2010). What does not kill you (sometimes) makes you stronger: Productivity fluctuations of journal editors. Academy of Management Learning and Education, 9(4), 683–695. https://doi.org/10.5465/AMLE.2010.56659885
Aguinis, H., & Vaschetto, S. J. (2011). Editorial responsibility: Managing the publishing process to do good and do well. Management and Organization Review, 7(3), 407–422. https://doi.org/10.1111/j.1740-8784.2011.00223.x
Alberts, B., Hanson, B., & Kelner, K. L. (2008). Reviewing peer review. Science, 321(5885), 15. https://doi.org/10.1126/science.1162115
Allen, T. D., French, K., Avery, D. R., King, E., & Wiernik, B. M. (2026). Developmental reviewing: Is it really good for science? Industrial and Organizational Psychology: Perspectives on Science and Practice, 19(1), 1–15.
Banks, G. C., O’Boyle, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A. A., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/10.1177/0149206315619011
Brand, D., Novak, M. D., DiGennaro Reed, F. D., & Tortolero, S. A. (2020). Examining the effects of feedback accuracy and timing on skill acquisition. Journal of Organizational Behavior Management, 40(1–2), 3–18. https://doi.org/10.1080/01608061.2020.1715319
Brett, J. F., & Atwater, L. E. (2001). 360° feedback: Accuracy, reactions, and perceptions of usefulness. Journal of Applied Psychology, 86(5), 930–942. https://doi.org/10.1037/0021-9010.86.5.930
Briker, R., & Gerpott, F. H. (2024). Publishing registered reports in management and applied psychology: Common beliefs and best practices. Organizational Research Methods, 27(4), 588–620. https://doi.org/10.1177/10944281231210309
Campbell, J. T., & Aguilera, R. V. (2022). Why I rejected your paper: Common pitfalls in writing theory papers and how to avoid them. Academy of Management Review, 47(4), 521–527. https://doi.org/10.5465/amr.2022.0331
Carpenter, M. A. (2009). Editor’s comments: Mentoring colleagues in the craft and spirit of peer review. Academy of Management Review, 34(2), 191–195. https://doi.org/10.5465/AMR.2009.36982609
Chambers, C. D., & Tzavella, L. (2022). The past, present and future of registered reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/s41562-021-01193-7
Chetty, R., Saez, E., & Sandor, L. (2014). What policies increase prosocial behavior? An experiment with referees at the Journal of Public Economics. Journal of Economic Perspectives, 28(3), 169–188. https://doi.org/10.1257/jep.28.3.169
COPE. (2017). Ethical guidelines for peer reviewers (Version 2). https://publicationethics.org/files/ethical_guidelines_for_peer_reviewers_2.pdf; https://doi.org/10.24318/cope.2019.1.9
Crede, M., & Harms, P. (2019). Questionable research practices when using confirmatory factor analysis. Journal of Managerial Psychology, 34(1), 18–30. https://doi.org/10.1108/JMP-06-2018-0272
Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science, 7(1), 45–52. https://doi.org/10.1177/1948550615612150
Findley, M. G., Jensen, N. M., Malesky, E. J., & Pepinsky, T. B. (2016). Can results-free review reduce publication bias? The results and implications of a pilot study. Comparative Political Studies, 49(13), 1667–1703.
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
Latan, H., Chiappetta Jabbour, C. J., Lopes de Sousa Jabbour, A. B., & Ali, M. (2023). Crossing the red line? Empirical evidence and useful recommendations on questionable research practices among business scholars. Journal of Business Ethics, 184(3), 549–569. https://doi.org/10.1007/s10551-021-04961-7
Miller, C. C., & Bamberger, P. (2016). Exploring emergent and poorly understood phenomena in the strangest of places: The footprint of discovery in replications, meta-analyses, and null findings. Academy of Management Discoveries, 2(4), 313–319. https://doi.org/10.5465/amd.2016.0115
Palayew, A., Norgaard, O., Safreed-Harmon, K., Andersen, T. H., Rasmussen, L. N., & Lazarus, J. V. (2020). Pandemic publishing poses a new COVID-19 challenge. Nature Human Behaviour, 4(7), 666–669. https://doi.org/10.1038/s41562-020-0911-0
Ragins, B. R. (2015). Editor’s comments: Developing our authors. Academy of Management Review, 40(1), 1–8. https://doi.org/10.5465/amr.2014.0477
Sage Journals. (2025). Editor guide to peer review best practice. https://www.sagepub.com/journals/information-for-editors/editor-guide-to-peer-review-best-practice
Sarker, S., Whitley, E. A., Goh, K. Y., Hong, Y., Mähring, M., Sanyal, P., Su, N., Xu, H., Xu, J. D., Zhang, J., & Zhao, H. (2023). Editorial: Some thoughts on reviewing for Information Systems Research and other leading information systems journals. Information Systems Research, 34(4), 1321–1338. https://doi.org/10.1287/isre.2023.editorial.v34.n4
Starbuck, W. H., Aguinis, H., Konrad, A. M., & Baruch, Y. (2008). Tradeoffs among editorial goals in complex publishing environments. In Y. Baruch, A. M. Konrad, H. Aguinis, & W. H. Starbuck (Eds.), Opening the black box of editorship (pp. 250–270). Palgrave Macmillan. https://doi.org/10.1057/9780230582590_25
Teplitskiy, M., Acuna, D., Elamrani-Raoult, A., Körding, K., & Evans, J. (2018). The sociology of scientific validity: How professional networks shape judgement in peer review. Research Policy, 47(9), 1825–1841. https://doi.org/10.1016/j.respol.2018.06.014
Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences of the United States of America, 114(48), 12708–12713. https://doi.org/10.1073/pnas.1707323114