Writing high-quality reviews is a professional skill. As with any skill, strong performance depends not only on underlying ability but also on how that ability is applied and developed through learning experiences over time (Ackerman, Reference Ackerman2023). Yet, as the focal article highlights (Allen et al., Reference Allen, French, Avery, King and Wiernik2026), reviewer learning experiences currently seem to range from inconsistent to nonexistent. The lack of consistent reviewer development experiences belies a flawed underlying assumption: that strong research abilities will translate into skilled reviewing. Through the lens of the science of workplace learning, we can reimagine reviewers’ skill acquisition and development as an ongoing learning process worthy of our field’s attention.
Specifically, we believe that it is important to focus on two distinct stages of reviewer skill acquisition. First, it is essential to target learning experiences during the initial ramp-up of skill acquisition—the transition from a novice who may never have read a review to becoming a reviewer oneself. Second, we must consider how our field can facilitate continual skill refinement and development throughout one’s career as a reviewer.
In this commentary, we leverage the literature on workplace learning to outline considerations for each stage of reviewer skill development and highlight practical suggestions to facilitate more effective reviewer skill learning. By training reviewers more intentionally, we believe the field could reap the positive changes Allen et al. (Reference Allen, French, Avery, King and Wiernik2026) envision: reviews more consistently focused on improving our science.
Starting from scratch: Scaffolding reviewer learning from the ground up
In the focal article, Allen et al. assert that “we have an obligation as a community of scholars to develop and train those who are inexperienced to become high quality reviewers, but given the career stakes for those submitting the work for journal peer review, we encourage additional means of training” (p. 15). We agree and underscore the point that the stakes are too high to hope for reviewer skill development without building the infrastructure to facilitate and ensure it.
Because most reviewers in our field likely hold a graduate degree in I-O psychology or a related field, graduate school is a natural place to incorporate intentional, scaffolded skill learning. Although doctoral students are not the primary target for top journal reviewer assignments, they are sometimes invited to review outright or through journal programs that provide structured review training (e.g., Academy of Management Review Bridge Reviewer Program). Similarly, the Journal of Applied Psychology allows primary reviewers to include co-reviewers with acknowledgment, signaling another way students may gain early, supervised exposure. These initiatives suggest that intentional scaffolding is beginning to emerge in some corners of the field, though such practices remain the exception rather than the norm. Indeed, some editors decline requests for senior doctoral students to shadow reviews, reflecting an active resistance at some journals to involving trainees in the review process. Instead, our current system often assumes that future reviewers will learn informally through exposure to the peer review process during graduate school, wherein graduate students submit their research, receive reviewer feedback, and thereby learn about peer review. However, this informal learning relies on the assumption that students are actively submitting and receiving reviews—an assumption that may not hold for all students. For instance, students may not regularly encounter the review process if they are early in their graduate program, working on longer-term projects, experiencing only desk rejections, or waiting on a delayed peer review process.
Further, for those students who do receive regular exposure to the peer review process, this experience may not be optimal for learning. For a learner to benefit from informal field-based learning and self-directed learning, they must exercise self-regulation and control over their learning, and the extent to which learning happens depends on the decisions they make (Kraiger & Ford, Reference Kraiger and Ford2021). Kolb’s (Reference Kolb1984) model of experiential learning can be applied to understand that it is not only the learners’ experiences themselves (in this case, exposure to the content of reviews) but their interactions with these experiences—the ways in which they reflect upon the experience, how it challenges their preconceptions, and how it connects to other experiences—that contribute to effective and meaningful learning (in this case, whether receiving reviews translates into building the skills to become a qualified reviewer). To most effectively benefit from this process, learners should have the opportunity to actively experiment, receive feedback, and reflect upon their experience (Kolb & Kolb, Reference Kolb and Kolb2005). In the current system, we argue that even those students who do gain exposure to the peer review process are not automatically learning to be better reviewers by receiving reviews.
Resource allocation theory (Kanfer & Ackerman, Reference Kanfer and Ackerman1989) argues that learners have limited resources at their disposal during skill acquisition, and those resources (e.g., attentional resources) are divided during learning among on-task, off-task, and self-regulatory uses. Importantly, resource allocation theory also highlights that individuals bring different resources to bear and that tasks vary in their resource demands, so learning is most likely when enough on-task and self-regulatory resources are available to match the learning task’s demands. This makes resource allocation theory a useful framework for designing reviewer learning experiences: it draws attention to the match between the resources novices possess and the resource demands of the learning tasks asked of them.
One evidence-based element of training design that considers appropriate levels of resource demand is scaffolding, wherein a task is learned by mastering its more basic aspects first, then adding levels of complexity until an advanced skill level is reached (Kraiger & Ford, Reference Kraiger and Ford2021; Plott et al., Reference Plott, McDermott, Archer, Carolan, Hutchins, Fisher, Gronowski, Wickens and Orvis2014). A scaffolded approach does not rely on the learner to structure their own learning; instead, it facilitates a clear progression of skill development and allows the resource demands of learning (e.g., task complexity) to increase as learner resources (e.g., skill level) increase. In practice, scaffolding might mean incorporating shadowing into the earliest stages of reviewer development, allowing new reviewers to observe a seasoned colleague before completing a review independently. Shadowing is not itself immune to problems, however: if the seasoned reviewer being shadowed is ineffective, those habits may be passed on.
The workplace learning literature highlights the importance of articulating, and aligning instruction to, measurable learning outcomes (Kraiger & Ford, Reference Kraiger and Ford2021). Without evaluating learning outcomes, it is difficult to determine whether learning has occurred, and difficult for individuals or organizations (in this case, journals) to make formative adjustments when learning gaps persist. Journals could provide more concrete guidance that sets a clear expectation of what constitutes an effective review so that future reviewers could measure their progress against a clear set of learning outcomes. A universal training module akin to CITI research ethics training (CITI Program, n.d.) could orient early reviewers to the structure and format of reviews, positive practices to emulate, and common pitfalls to avoid (e.g., lack of focus on the science, as detailed in the focal article). The training module could also include an assessment of baseline reviewer competencies and/or knowledge. Additionally, journal-specific supplemental learning materials could provide annotated exemplar reviews illustrating what a rejection, a revise and resubmit, and a conditional acceptance might look like. Journal-specific materials could help novice and experienced reviewers alike calibrate their evaluations against clear standards, especially if they review across journals with different priorities, norms, and audiences.
Developing the train while it is moving: Continuous learning for reviewers
Reviewer qualifications are often treated as a categorical variable, wherein unqualified graduate students become qualified potential reviewers at the moment of degree conferral. As with any categorical variable, meaningful nuance is lost by reducing the variance in this way: the adult learning literature directly contradicts the notion of a categorical shift from novice to expert (Kanfer & Ackerman, Reference Kanfer and Ackerman1989), and a robust literature indicates that skill development is not linear and may even decay over time (Arthur & Day, Reference Arthur, Day, Ward, Schraagen, Gore and Roth2020).
Unfortunately, reviewers are essentially on their own to develop their reviewing skills once they have begun conducting reviews. This is further complicated by the fact that building up skill as a reviewer is likely more difficult for first-time or less experienced reviewers. Experienced reviewers can draw upon their accumulated knowledge of reviewing practices to learn and adapt more efficiently in novel situations (e.g., encountering a new journal’s standards), whereas newer reviewers lack this base of reviewing experience and may need more time to calibrate their approach (Ackerman & Beier, Reference Ackerman and Beier2006; Beier & Ackerman, Reference Beier and Ackerman2005). Ironically, then, the reviewers who need skill development the most may also find it the most difficult to learn. One way that reviewers may engage in self-directed learning is by comparing their own comments to those of the editor and other reviewers—but this is a resource-intensive learning approach, and the likelihood of learning from such comparisons depends heavily on the quality of those comments and on the individual’s ability and motivation to engage with the exercise in a way that produces meaningful learning.
Evidence from the science of learning suggests that providing feedback is critical for learning. Feedback is most effective when it is timely, clear, and focused, and especially when it supports error management (i.e., understanding and processing one’s mistakes; Keith & Frese, Reference Keith and Frese2005; Kraiger & Ford, Reference Kraiger and Ford2021). A critical barrier to sustained reviewer learning is the absence of meaningful feedback on reviewing. Allen et al. outline the misalignment between the feedback that is received (e.g., timeliness of turning around reviews) and the feedback that would be most meaningful: feedback on quality. Without structured opportunities for improvement, reviewers are liable to fall into a pattern of completing assignment after assignment without reflecting on or refining the quality of their reviews. One could argue that it is the responsibility of reviewers to take ownership of their training and seek out feedback, but the literature suggests that this is rarely how learning unfolds in practice. Zhang et al. (Reference Zhang, Harrington and Sherf2022) note that “the sense of efficacy associated with proficient knowledge and skills can result in an inference that others’ input may not be valuable, creating a barrier to the effective quest for input” (p. 92). Once reviewers become experts, they may actually be less inclined to seek out feedback on their own. Left unattended, this tendency allows reviewing skills to stagnate—not because reviewers are unwilling to grow, but because growth is not intentionally built into our current system. Ongoing evaluative feedback might take the form of brief notes from associate editors highlighting especially helpful comments, comments that are out of alignment with the journal’s expectations, or comments that drift too far from scientific critique.
Another way to leverage existing systems and practices would be to shift the function of reviewer evaluations—which journals already conduct on the back end—and make them visible, at least in part, to the reviewers themselves.
Recommendations for reviewer skill development
Recommendation 1: Incorporate structured reviewer training into graduate education. This could include intentionally building into coursework, from the start, activities that develop precursors to important reviewer skills without being specific to reviewing (e.g., in-class discussion of norms, constructive critiques of articles), thereby building the skill of delivering high-quality, substantive feedback. We also recommend adding reviewer-specific learning experiences to graduate preparation, which could include exposure to example reviews, mock reviews, and shadowing senior reviewers.
Recommendation 2: Roll out a universal organizational science reviewer training credential program, similar to CITI training, that covers the basics of reviewing. Beyond journal-specific efforts, there is also an opportunity for field-wide coordination to ensure consistency in reviewer training. For example, the American Psychological Association (APA) already issues graduate training recommendations and provides resources for professional socialization and reviewer mentorship. Embedding reviewer preparation into the APA training guidelines—and developing a centralized module akin to CITI training—would help to establish reviewing as a core professional competency. Ensuring that graduate students and early career scholars receive consistent, foundational exposure to reviewing expectations across doctoral programs is key, while still allowing journals to supplement with outlet-specific modules. A reviewer credential program would provide clear learning outcomes for aspiring reviewers to work toward, as well as more concrete norms for reviewers across the field. SIOP could collaborate in this effort and integrate training opportunities into the SIOP conference and other programming. SIOP has provided some such training before (e.g., reviewer bootcamps as part of consortia programming), and its involvement would help to underscore the idea that high-quality reviewing is a critical professional norm.
Recommendation 3: Facilitate opportunities for reviewer skill development across the career span via more active investment from journals. Part of this should include building mechanisms for students who aspire to review in the future to shadow experienced reviewers and to review collaboratively. Retaining experienced, qualified reviewers to be “on the hook” for the quality of the review in its final form can protect the authors of papers under review from the potentially adverse outcomes of inexperienced reviewing. Journals could build pipeline programs to match students with review mentors in cases where the student does not have access to a faculty mentor willing to co-review. In addition, journals must be intentional about providing feedback to reviewers, no matter their career stage, so that reviewers can use that evaluation of their performance to inform their self-directed continual learning and develop over time.
Conclusion
Developing reviewer competence cannot be left to chance; as Allen et al. assert, the stakes are simply too high. In service of our field’s scientific integrity, we recommend intentional, structured, and evidence-based opportunities for reviewer learning throughout the career span. These opportunities could include integrating structured training throughout graduate education, implementing universal reviewer credentialing, and establishing mechanisms for feedback. Such efforts cannot be undertaken solely by students, graduate programs, or journals—the entire academic community must work in tandem to strengthen reviewer development. By doing so, our field may finally achieve the payoff Allen et al. envision: a more transparent, rigorous, and objective peer review process and, consequently, more impactful, credible, and high-quality research.