
Choosing Reviewers: Predictors of Undergraduate Manuscript Evaluations

Published online by Cambridge University Press:  08 February 2022

Christina P. Walker, Purdue University, USA
Terri L. Towner, Oakland University, USA
Lea Hilliker, Oakland University, USA

Abstract

There is a substantial amount of research examining bias in the peer-review process and its influence on the quality and content of political science journal articles. However, there is limited research examining how students peer review other undergraduate research for publication. To better understand the predictors of manuscript evaluations and build on prior literature, this study examines seven years of undergraduate peer evaluations submitted to the Pi Sigma Alpha Undergraduate Journal of Politics from 2013 to 2020. Empirical analyses reveal that a peer reviewer’s prior service on the editorial board (i.e., experience) and race are consistently and significantly associated with manuscript evaluations. By examining how undergraduate peer reviewers assess anonymized manuscripts, this research reveals potential biases in the political science peer-review process. Additionally, the benefits of undergraduate students participating in the peer-review process are explored and discussed.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the American Political Science Association

As the political science discipline expands, there is a steady growth in departments, instructors, students, research, and journals (Walker et al. 2021). With this expansion, many universities increasingly call on undergraduate students to participate in academic research to gain valuable insight into potential career paths, expertise with research methods and theory, and general writing and proofreading skills (Walker et al. 2021). These experiences lend themselves to students participating in research conferences and submitting their work for academic publication. Students also can participate in the peer-review side of a journal—an opportunity that allows them to better understand the research process.

By participating as peer reviewers, students also have the opportunity to understand the inherent subjectivity present in the process. Editors of academic journals at all levels strive for the fundamental virtues of inclusivity and diversity during the peer-review process to acquire quality feedback. However, there is debate about whether these values are achieved (Fox et al. 2019; Sperotto et al. 2021). Therefore, we ask: How do the characteristics (i.e., race, sex, year in school, major, and prior semesters served) of undergraduate peer reviewers influence their manuscript evaluations? Furthermore, we discuss the positive impact of serving on an editorial board as an undergraduate student. With these two tasks, we contribute to the literature on demographic biases in the peer-review process and on the benefits of students participating in undergraduate journals.


AN INCREASE IN UNDERGRADUATE JOURNALS

As acknowledged by political science departments and academic conferences, undergraduate research plays an increasingly critical role in the political science discipline (Cox and Kent 2018; Walker et al. 2021). Along with the increase in undergraduate research is an increase in undergraduate journals, which publish this research and are commonly run by undergraduate editorial boards and leadership. The American Political Science Association (APSA) website advertises 15 undergraduate journals in political science, including Ilios: Journal of Political Science and Philosophy, Compass, Journal of Politics & Society, and Politikon: The IAPSS Journal of Political Science (APSA 2021). However, many other journals have student editorial boards, including area studies journals. Each journal is unique in its scope, the breadth of submissions it accepts, and how its operations are financed. With this diversity in mind, we recognize that the results described in this article cannot be generalized to the entire population of undergraduate journals. However, this research is an essential step forward for understanding the undergraduate peer-review process in political science journals. Additionally, we hope that this research encourages other journals—beyond undergraduate journals—to take a more analytical look at the diversity of their editorial boards and how they review submissions.

THE PI SIGMA ALPHA UNDERGRADUATE JOURNAL OF POLITICS

This article focuses on the Pi Sigma Alpha Undergraduate Journal of Politics (PSAJ), a blind peer-reviewed journal sponsored by Pi Sigma Alpha, the National Political Science Honor Society. Between 2001 and 2020, the PSAJ published 37 issues (biannually in the fall and the spring), each including four to six original manuscripts. During this almost 20-year period, five different institutions hosted the PSAJ (currently Elon University, 2020–2023), where undergraduates peer evaluated hundreds of submitted manuscripts. Because of differing cultures and structures across the five host institutions as well as data limitations, we examine only one host institution: Oakland University, which hosted the PSAJ for seven years, from 2013 to 2020, the longest time that a single institution has done so.

The PSAJ included an undergraduate editorial board of approximately 15 undergraduate students who were divided into groups to serve as reviewers, according to their interests. At the beginning of each semester, the content editor (i.e., the student leader of the editorial board) gave a brief training on how to review manuscripts. This training included a short discussion about how the PSAJ publishes mixed methodologies and topic areas spanning American politics, international relations, and comparative politics but does not publish advocacy pieces. Moreover, all research must be clear in its theoretical approach and reasoning. Each week, students reviewed an average of two manuscripts using an evaluation form (see online appendix table A) that included Likert-scale questions as well as “yes” or “no” questions about originality and methods. The student editorial team at Purdue University created the evaluation form, which has been used at all host institutions. Manuscripts were received, anonymized, and assigned to peer-review groups by the content editor, according to each group’s area of interest. The anonymization of manuscripts and the training were implemented to eliminate biases in the peer-review process. However, this article analyzes whether peer-review biases still existed based on a reviewer’s demographics, independent of biases created by submitting institutions and authors (Bauer et al. 2009; Walker et al. 2021).

BENEFITS OF THE UNDERGRADUATE PEER-REVIEW PROCESS

Prior research discusses the benefits that undergraduate students gain from participating in the peer-review process, including learning opportunities in quantitative and qualitative methods, leadership, academia, and research (Bolsen et al. 2019; Garbati and Brockett 2018; Mariani et al. 2013; Walker et al. 2021). Cox and Kent (2018) described the student peer-review process as a way for student researchers to engage with the academic community, including fellow student researchers, and to understand the steps of the process as both peer reviewer and author. As illustrated in Walker et al. (2021, 349), previous PSAJ student editorial board members all noted that service gave them the ability “to create an opening and welcoming environment” and that being a peer reviewer provided “an opportunity to hone the essential skills that they have used throughout their career.”

PEER-REVIEW SUBJECTIVITY

Prior literature critiques the current peer-review process in political science research (Jefferson et al. 2002; Lee et al. 2013), discussing biases arising from institutional ties and suggesting a double-blind peer-review process as a solution. Other scholars examined author demographics, such as gender and race (Erosheva 2020; Grandizio et al. 2020; Hero 2015; Lee et al. 2013). This research found a lack of minority authors and an uneven distribution of the regional location of editors and reviewers (Erosheva 2020; Grandizio et al. 2020); however, it focused on manuscripts that were not anonymized. Therefore, the biases it documents may reflect author–editor relationships rather than inherent biases tied to peer reviewers’ demographics or qualities.

Other studies examined the demographic biases of peer reviewers: some found that women are more critical reviewers (Borsuk et al. 2009); others found that gender did not significantly influence the assessment of whether a manuscript was of publishable quality (Borsuk et al. 2009; Nylenna, Riis, and Karlsson 1994). However, multiple studies have shown that junior scholars’ peer-review assessments are more critical, indicating that less-experienced peer reviewers—particularly first-time editors—are more likely to give negative or more constructive comments (Borsuk et al. 2009; Nylenna, Riis, and Karlsson 1994).

These variances in findings of how demographics influence editorial practices highlight the subjectivity of the peer-review process. Kassirer and Campion (1994) highlighted how manuscripts with fundamental flaws are published because time-pressed reviewers gloss over inaccuracies. Djupe (2015) found that subjectivity stems from the lack of a universal method or standard for deciding what is publishable. Taylor (2011) suggested holding all manuscripts to a common quality-level indicator to eliminate the subjectivity that stems from a lack of equal comparison.

Our analysis expands peer-review research by examining an undergraduate journal that used a double-blind process and a quantifiable scale for each reviewed manuscript. Therefore, we could test whether a quantifiable scale reduces the subjectivity in the editorial process or whether demographic biases remain. Furthermore, we examined previously understudied demographics (e.g., a reviewer’s race) to enhance the conversation on how race can affect manuscript reviews, while also noting the limitations of racial classifications (James 2008).

METHODOLOGY

We analyzed 12 consecutive semesters of manuscript reviews (i.e., Fall 2014 to Spring 2020). An average of 185 peer reviews were completed each semester. All manuscript reviews were collected and entered into a digital database by the faculty advisor. The total sample includes 2,218 independent peer reviews (see footnote 1), with multiple observations for each reviewer in each semester (see footnote 2). For example, in one semester, a reviewer may have completed 15 to 20 independent manuscript reviews, which precluded treating these data as panel data. Instead, we estimated an ordinary least squares (OLS) regression (see footnote 3).

From 2013 to 2020, 97 reviewers served on the editorial board, with some serving multiple semesters. The dataset included the demographic and academic characteristics of each reviewer, along with their reviews. Specifically, we examined a reviewer’s major (1 = political science major, 0 = non-political science major); sex (1 = male, 0 = female); race (1 = white, 0 = nonwhite); year in school (1 = freshman, 2 = sophomore, 3 = junior, 4 = senior); and prior semesters served on the board (0 to 5 semesters). These data were collected by the faculty editor using official university records (see footnote 4). Table 1 lists descriptive statistics for reviewers’ characteristics.
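As a minimal sketch, the dummy coding described above could be implemented in Python with pandas as follows; the column names and example values are illustrative assumptions, not the authors’ actual variable names or data.

import pandas as pd

# Hypothetical raw reviewer records (illustrative values only)
reviewers = pd.DataFrame({
    "major": ["Political Science", "Biology"],
    "sex": ["Male", "Female"],
    "race": ["White", "Nonwhite"],
    "year": ["Junior", "Senior"],
    "prior_semesters": [0, 3],
})

# Dummy coding as described in the text
reviewers["poli_sci"] = (reviewers["major"] == "Political Science").astype(int)  # 1 = political science major
reviewers["male"] = (reviewers["sex"] == "Male").astype(int)                     # 1 = male, 0 = female
reviewers["white"] = (reviewers["race"] == "White").astype(int)                  # 1 = white, 0 = nonwhite
# Year in school as an ordinal 1-4 scale
reviewers["year_in_school"] = reviewers["year"].map(
    {"Freshman": 1, "Sophomore": 2, "Junior": 3, "Senior": 4}
)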

Table 1 Descriptive Statistics: Reviewer Evaluations and Characteristics 2014–2020

Notes: (a) N = 97 reviewers; nonwhite = 9, white = 88. (b) Women = 56, men = 41. These numbers give context to the composition of the editorial board and meaning to the frequency column.

When reading a manuscript, each reviewer completed an evaluation form with seven main criteria: (1) originality of contribution, (2) importance of main conclusions, (3) interest in main conclusions, (4) strength of evidence provided for main conclusions, (5) appropriateness of methodology, (6) writing quality, and (7) organization (see footnote 5). Each criterion (except originality) was evaluated on a five-point scale; higher scores indicated higher manuscript quality. Criteria 2–7 were combined into an additive index (see footnote 6), creating an evaluation index ranging from 6 to 24. Criterion 1, “originality,” was examined separately because it was coded as 1 = yes, 2 = maybe, 3 = no (see table 1).
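To make the index construction concrete, the following sketch builds the additive evaluation index from criteria 2–7 and computes a Cronbach’s alpha like the one reported in footnote 6; the criterion column names and scores are hypothetical.

import pandas as pd

criteria = ["importance", "interest", "evidence", "methodology", "writing", "organization"]
# Hypothetical scores for three reviews
scores = pd.DataFrame(
    [[4, 3, 4, 5, 4, 4],
     [2, 2, 3, 2, 3, 2],
     [5, 4, 4, 4, 5, 5]],
    columns=criteria,
)

# Additive evaluation index: sum of criteria 2-7
scores["eval_index"] = scores[criteria].sum(axis=1)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(cronbach_alpha(scores[criteria]))  # the article reports alpha = 0.89 on the real data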

We also included control variables to represent each semester in which the editorial board reviewed manuscripts. These are dummy variables for the 12 semesters, with Spring 2020 as the reference category (e.g., Fall 2014 = 1 and all other semesters = 0; Spring 2015 = 1 and all other semesters = 0; and so on). The empirical results for these dummy variables are reported in online appendix table B. A sketch of the resulting specification appears below.
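The following sketch estimates the OLS specification with semester dummies using statsmodels on synthetic data; the variable names are illustrative assumptions, not the authors’ actual estimation script. It also shows the reviewer-clustered standard errors mentioned in footnotes 2 and 3.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic review-level data shaped like the dataset described in the text
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "eval_index": rng.integers(6, 25, n),  # additive index, 6-24
    "poli_sci": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "white": rng.integers(0, 2, n),
    "year_in_school": rng.integers(1, 5, n),
    "prior_semesters": rng.integers(0, 6, n),
    "semester": rng.choice(["Fall2014", "Spring2015", "Spring2020"], n),
    "reviewer_id": rng.integers(0, 30, n),
})

# OLS with semester dummies; Spring 2020 is the omitted reference category
model = smf.ols(
    "eval_index ~ poli_sci + male + white + year_in_school + prior_semesters"
    " + C(semester, Treatment(reference='Spring2020'))",
    data=df,
)
ols_fit = model.fit()  # plain OLS, as in table 2
# Robustness check: standard errors clustered on the reviewer
clustered_fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["reviewer_id"]})
print(ols_fit.summary())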

RESULTS

Table 2, column 1, lists the regression results for the additive index, estimated using OLS regression. The most significant demographic factor was race, which was positively associated with the additive evaluation index. Thus, these results indicate that white undergraduate reviewers gave higher manuscript ratings. Race is an understudied demographic in this line of research; therefore, it is interesting to find evidence that race is more impactful than a reviewer’s sex. Among professional journals, a reviewer’s sex often is considered a more influential indicator of manuscript assessments. However, our findings reveal no significant link between reviewers’ sex and their manuscript evaluations, a result that is inconsistent with the prior literature (Borsuk et al. 2009).

Table 2 Peer Reviewers’ Manuscript Evaluations by Reviewer Characteristics

Notes: All estimates are unstandardized OLS coefficients, with standard errors in parentheses. (a) The dependent variable in column 1 (i.e., additive evaluations) is an additive index of six measures. (b) The dependent variable in column 2 (i.e., originality) was coded as 1 = yes, 2 = maybe, 3 = no. (c) The results are truncated: the dummy variables representing each of the 12 semesters have been removed for parsimony and are reported in online appendix table B. *p < 0.05, **p < 0.01 (two-tailed).


In terms of experience, the number of prior semesters served as a reviewer has a negative relationship with the evaluation index, suggesting that the more experience undergraduates have as reviewers, the more negative their manuscript reviews. Perhaps this association means that peer reviewers become more critical with more time and experience on the editorial board, which contradicts prior literature on faculty peer reviews (Borsuk et al. 2009; Nylenna, Riis, and Karlsson 1994). This provides evidence of the learning opportunity that the peer-review process offers undergraduates: as students serve more semesters on an editorial board, their reviews become more critical because they grow more analytical and better understand the niche of political science research. As shown in table 2, column 1, there is no evidence that a reviewer’s year in school, sex, or major is linked to manuscript evaluations.

Table 2, column 2, lists the regression results for the second dependent variable, “originality of the contribution,” which also was estimated using OLS regression. In contrast to additive evaluations (column 1), we found that reviewers’ demographics have differing effects on “originality” scores. Regarding race, the results show that nonwhite reviewers gave significantly higher originality ratings than white reviewers, with a reviewer’s race being the most robust indicator. Table 2, column 2, also shows a positive relationship between originality and a reviewer’s prior semesters served. This link suggests that experienced peer reviewers likely have gained a better understanding of prior research through previous peer reviewing: they are aware of how essential it is for new research to fill gaps and overcome limitations. Unlike the predictors of additive evaluations, a reviewer’s major is associated with originality scores. Table 2, column 2, shows a positive linkage between political science majors and originality—perhaps because these reviewers are more familiar with the political science literature and the gaps in the scholarship. There is no empirical linkage between reviewers’ year in school or sex and their assessment of originality.

DISCUSSION AND CONCLUSION

When creating editorial-board groups for the PSAJ, the advisors and content editors sought a peer-review board that would be diverse in race, sex, major, and experience to account for any demographic biases. This research finds that creating a diverse editorial board is necessary because there are significant differences in undergraduate reviewers’ evaluations that depend on demographics and experience, which is in line with prior research (Tennant and Ross-Hellauer 2020). Our results show that the most significant factors for evaluations are a reviewer’s race and prior experience serving as a reviewer.

Experience also seems to be a factor that influences how peer reviewers review manuscripts. This result could be explained by undergraduate students gaining more experience with research and, particularly, quantitative methodology (Cox and Kent 2018; Garbati and Brockett 2018). Previous studies have shown that experience produces more critical peer reviewers (Nylenna, Riis, and Karlsson 1994). This is consistent with responses from former editorial board members who, when asked to describe their experience, stated that their tenure aided their discussion and analytical skills and motivated some to pursue their own research and to enter academia (Walker et al. 2021).

Despite inconsistencies in the prior literature, this research shows that when more diverse individuals review papers, errors become known and subjectivity decreases (Djupe 2015; Kassirer and Campion 1994). Indeed, this study provides preliminary evidence that a reviewer’s race is associated with manuscript assessments. This is an interesting finding because professional journals have only recently begun to collect data on reviewers’ racial identity. Therefore, scholars and editors must continue to analyze demographic variances in the peer-review process. Overall, this research highlights the importance of diversity in reviewers’ backgrounds and characteristics to ensure breadth and a standardized reviewing practice.

FUTURE RESEARCH AND A CALL TO ACTION

Despite measures to make the blind peer-review process fair and balanced, this research highlights the importance of a diverse pool of peer reviewers. We call on undergraduate students and professional-journal editors to collect demographic information (i.e., race and sex) from reviewers and submitting authors to improve equity in the field. Additionally, we encourage manuscript authors to submit their papers to the Gender Balance Assessment Tool (Sumner 2018), which estimates the gender composition of cited authors. Furthermore, having found evidence that demographics affect undergraduate peer-review evaluations, we encourage traditionally white universities to recruit reviewers from partner institutions. This will diversify the reviewer pool and pave the way for historically black colleges and universities to host undergraduate journals in the future.


LIMITATIONS

The control variables (i.e., the dummy variables for each semester that the journal was at the same institution) indicate that each editorial board was unique, at times significantly influencing manuscript evaluations. We also acknowledge that the manuscripts submitted to the editorial board differed each semester. Moreover, our study was limited to reviews of undergraduate manuscripts submitted to only one undergraduate journal. We recognize that reviewers at other universities may review and act differently, and other journals may abide by a different set of criteria. Therefore, we cannot generalize our results to other undergraduate or professional journals. Despite these limitations, we provide evidence to show editors and other decision makers that experience and diversity matter.

ACKNOWLEDGMENTS

We thank Harvard graduate student Ghazi Ghazi for his help in revising this article. Additionally, we thank all former faculty editors, faculty reviewers, and—most important—the undergraduate students who submitted papers to the PSAJ and who served on the editorial board.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the Harvard Dataverse at https://doi.org/10.7910/DVN/HLWNGC.

SUPPLEMENTARY MATERIALS

To view supplementary material for this article, please visit http://doi.org/10.1017/S1049096521001888.

CONFLICTS OF INTEREST

The authors declare no ethical issues or conflicts of interest in this research.

Footnotes

1. The sample is based on reviews instead of reviewers because each review was dependent on each individual paper.

2. The observations in this study are not independent because each reviewer had multiple observations in each period, with each student reviewing multiple manuscripts per semester. Therefore, a fixed-effects panel model cannot account for the multiple observations per time frame unless the time-frame parameter is removed. When the data were estimated with standard errors clustered on the reviewer’s name, the OLS regression results were essentially the same, with the same values being significant.

3. We estimated a model with standard errors clustered on the reviewer and found essentially the same results.

4. This project underwent Institutional Review Board approval and was deemed “exempt research” (Project Number 1551821-1).

5. See online appendix table A for an example of the evaluation form.

6. Cronbach’s Alpha = 0.89.

REFERENCES

American Political Science Association. 2021. Graduate and Undergraduate Journals. Washington, DC: American Political Science Association.
Bauer, Benjamin J., Ogás, Whitney C., Shakir, Omar R., Oxley, Zoe M., and Clawson, Rosalee A. 2009. “Learning through Publishing: The Pi Sigma Alpha Undergraduate Journal of Politics.” PS: Political Science & Politics 42 (3): 565–69.
Bolsen, Toby W., Fairbanks, Bailey, Aviles, Eduardo E., Pritchett, Reagan G., Kingsland, Justin T., LaPlant, Kristina, Montgomery, Matthew, and Rogol, Natalie C. 2019. “Merging Undergraduate Teaching, Graduate Training, and Producing Research: Lessons from Three Collaborative Experiments.” PS: Political Science & Politics 52 (1): 117–22.
Borsuk, Robyn M., Aarssen, Lonnie W., Budden, Amber E., Koricheva, Julia, Leimu, Roosa, Tregenza, Tom, and Lortie, Christopher J. 2009. “To Name or Not to Name: The Effect of Changing Author Gender on Peer Review.” BioScience 59 (11): 985–89.
Cox, Michaelene, and Kent, Jaimie. 2018. “Political Science Student Journals: What Students Publish and Why Student Publishing Matters.” PS: Political Science & Politics 51 (4): 804–10.
Djupe, Paul A. 2015. “Peer Reviewing in Political Science: New Survey Results.” PS: Political Science & Politics 48 (2): 346–52.
Erosheva, Elena A. 2020. “NIH Peer Review: Criterion Scores Completely Account for Racial Disparities in Overall Impact Scores.” Science Advances 6 (23).
Fox, Charles W., Duffy, Meghan A., Fairbairn, Daphne J., and Meyer, Jennifer. 2019. “Gender Diversity of Editorial Boards and Gender Differences in the Peer-Review Process at Six Journals of Ecology and Evolution.” Ecology and Evolution 9 (24): 13636–49.
Garbati, Jordana, and Brockett, Esther. 2018. “Students Speak Out: The Impact of Participation in an Undergraduate Research Journal.” Canadian Journal for Studies in Discourse and Writing 28: 227–45.
Grandizio, Louis C., Pavis, Elizabeth J., Hayes, Daniel S., Laychur, Andrew J., and Klena, Joel C. 2020. “Comparison of Editor, Reviewer, and Author Demographics in Journal of Hand Surgery.” Journal of Hand Surgery Global Online 2 (4): 182–85.
Hero, Rodney. 2015. “Reflections on ‘How Political Science Can Be More Diverse.’” PS: Political Science & Politics 48 (3): 469–71.
James, Angela. 2008. “Making Sense of Race and Racial Classification.” In White Logic, White Methods: Racism and Methodology, ed. Zuberi, Tukufu and Bonilla-Silva, Eduardo, 31–46. Lanham, MD: Rowman & Littlefield Publishers.
Jefferson, Tom, Alderson, Philip, Wager, Elizabeth, and Davidoff, Frank. 2002. “Effects of Editorial Peer Review.” Journal of the American Medical Association 287 (21): 2784–86.
Kassirer, Jerome P., and Campion, Edward W. 1994. “Peer Review: Crude and Understudied, but Indispensable.” Journal of the American Medical Association 272 (2): 96–97.
Lee, Carole J., Sugimoto, Cassidy R., Zhang, Guo, and Cronin, Blaise. 2013. “Bias in Peer Review.” Journal of the American Society for Information Science and Technology 64 (1): 2–17.
Mariani, Mack, Buckley, Fiona, Reidy, Theresa, and Witmer, Richard. 2013. “Promoting Student Learning and Scholarship through Undergraduate Research Journals.” PS: Political Science & Politics 46 (4): 830–35.
Nylenna, Magne, Riis, Povl, and Karlsson, Yngve. 1994. “Multiple Blinded Reviews of the Same Two Manuscripts.” Journal of the American Medical Association 272 (2): 149–51.
Sperotto, Raul Antonio, Granada, Camille E., Henriques, João Antonio P., Timmers, Luis Fernando S. M., and Contini, Verônica. 2021. “Editorial Decision Is Still a Men’s Task.” Annals of the Brazilian Academy of Sciences 93 (1). DOI: 10.1590/0001-3765202120201803.
Sumner, Jane Lawrence. 2018. “The Gender Balance Assessment Tool (GBAT): A Web-Based Tool for Estimating Gender Balance in Syllabi and Bibliographies.” PS: Political Science & Politics 51 (2): 396–400.
Taylor, Jim. 2011. “The Assessment of Research Quality in UK Universities: Peer Review or Metrics?” British Journal of Management 22 (2): 202–17.
Tennant, Jonathan P., and Ross-Hellauer, Tony. 2020. “The Limitations to Our Understanding of Peer Review.” Research Integrity and Peer Review 5 (6). https://doi.org/10.1186/s41073-020-00092-1.
Walker, Christina P., Towner, Terri L., Clawson, Rosalee A., Oxley, Zoe M., Nemacheck, Christine L., and Rapoport, Ronald. 2021. “Learning Through Peer Reviewing and Publishing in the Pi Sigma Alpha Undergraduate Journal of Politics: Twenty Years Later.” PS: Political Science & Politics 54 (2): 346–52.