
The peril of requesting additional studies

Published online by Cambridge University Press:  31 March 2026

Greg L. Stewart*
Affiliation:
Wake Forest University, Winston-Salem, NC, USA

Type: Commentaries
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Society for Industrial and Organizational Psychology

Allen et al. (2026) make a compelling case that the shift toward developmental reviewing has negative consequences. I agree with their premise, and I want to highlight a particular practice I believe is especially problematic: the all-too-common instruction from reviewers and editors to “conduct an additional study.” This request is the epitome of the developmental approach, and I believe it is a key cause of many of the negative outcomes identified by Allen et al.

Why do I see requests for additional studies as the quintessence of developmental reviewing? Asking for an additional study undermines author expertise. It not only tells authors how they should have done their current work but also how to structure their improvement effort, with the reviewer acting as mentor and study designer. Focusing on the additional study also draws attention away from the accuracy and validity of the original submission. Rather than evaluating the submission on its merits, the reviewer-directed additional study is treated as a path for overcoming weaknesses in the initial design, sample, measurement, and analysis. Additional studies also create a seemingly endless cycle that increases reviewer burden, as each resubmission with new data and analyses essentially starts the review process over.

The prevalence of multiple studies

How prevalent is the practice of asking for additional studies? Although we don’t know whether original submissions included multiple studies, we can get some sense of prevalence by examining how many published articles include more than one study. Comparing these data to the timeline of developmental reviewing, using some of the same time periods adopted by Allen et al., reveals a striking pattern. In 2003, only 17% of the empirical articles (excluding meta-analyses and scale development studies) published in the six issues of Journal of Applied Psychology included more than one study. By the final six issues of 2024, that figure had jumped to 86%. Given that articles including more studies are likely longer, we can also gain insight by assessing article length. The six issues of Journal of Applied Psychology published in 2003 contained 91 articles with an average length of approximately 12 pages, whereas the final six 2024 issues contained only 50 articles with an average length of almost 21 pages. Similar patterns are evident in other journals such as Personnel Psychology, Academy of Management Journal, and Journal of Management. Although I was an author, reviewer, and action editor throughout these years, I feel like the proverbial frog in the slowly warming pot: the process of conducting and disseminating research has changed substantially, something I didn’t fully grasp before seeing the data.

The harm of requiring additional studies

Is publishing only about half as many articles, with a fivefold increase in multiple-study inclusion, an improvement? I believe the answer is no, and here is why.

First, I suspect that additional studies included through the review process are often inferior to the initial studies. We’ve probably all found ourselves scrambling to conduct an additional study during the few short summer months when typical student participants are not available. This is a common consequence of the compressed timelines that come with most revise-and-resubmit invitations. The original study often took over a year to design and complete, but the follow-up is rushed, limiting the quality of its design, data, and analysis.

Second, the quality of the initial study is likely diminished when authors expect requests for additional studies. Over the years I have heard researchers explain their approach to maximizing research productivity with a plan something like the following: (a) submit as many studies as possible, recognizing that none of them are very good; (b) hope a review team finds one of the studies interesting enough to request a revision; (c) once a revision is requested, shift attention to that study and conduct the studies the reviewers request. Such an approach not only fills the review pipeline with inferior studies but, as explained by Allen et al., frequently means that the research process is being directed by reviewers whose motivation and expertise are less than the authors’. Such a shift is also unfair to anonymous reviewers who go beyond expectations and put forth a great deal of effort to craft the research with no hope of personal reward. I acknowledge that fewer requests for additional studies will likely equate with a higher rejection rate for initial submissions. The goal is not to publish inferior research but rather to cull studies that don’t meet a minimum threshold of acceptability earlier in the review process, conserving valuable time and placing the onus of assuring adequate design and analysis on authors at the initial submission stage.

Third, substituting an additional study by the same authors offers less overall assurance of accuracy and validity to the field than would allocating more journal space to publishing independent research. The developmental review process seeks to publish comprehensive articles whose findings reviewers have helped ensure will replicate. Indeed, reviewer comments often convey something along the lines of “I’m not convinced that your conclusions are accurate and generalizable, so please conduct an additional study.” Authors, motivated to maintain consistency with the original findings, conduct these additional studies over multiple rounds of review hoping—and perhaps unintentionally but instinctively ensuring—that the results and conclusions converge. Rather than dedicating so much time and effort to shoring up a single article, it would be more productive to publish additional articles designed and carried out by others. Such an approach would allow for more robust triangulation of findings across multiple settings and authors.

Solutions

To facilitate moving away from developmental reviewing, I propose a simple change: We as editors and reviewers should cease our near-automatic requests for additional studies. As long as revisions contain new data, the call by Allen et al. for second-round publication decisions will remain unmet. The sheer volume of new information introduced by supplementary studies makes a decision almost impossible without generating further reviewer queries. A return to the publication standards that existed 30 years ago when I began my career—where a definitive up-or-down decision was usually made at the second round—would be more productive than the status quo. If multiple studies are truly beneficial, authors can increase their chance of article acceptance by including them in their original submission, returning the responsibility of research design to authors rather than reviewers.

Consistent with Allen et al.’s recommendation for replications and practical studies, elevating the status of publication outlets such as the Journal of Management Scientific Reports (JOMSR) could be another crucial step toward reducing the prevalence of developmental reviewing and associated requests for additional studies. With its core mission of publishing rigorous research that confirms, refines, or refutes existing theories, JOMSR provides a model for refining the current process. Instead of requiring authors to incorporate multiple studies to construct a grand theory and address every possible criticism, this model promotes the cumulation of knowledge through independent scientific reports that are usually superior in design to the supplemental studies included in most current research articles. Perhaps outlets such as Journal of Applied Psychology could also return to their roots and publish more research reports. The six 2003 Journal of Applied Psychology issues referenced above published a total of 20 reports, whereas the six 2024 issues published only six.

References

Allen, T. D., French, K., Avery, D. R., King, E., & Wiernik, B. M. (2026). Developmental reviewing: Is it really good for science? Industrial and Organizational Psychology, 19(1), 115.