
Research rigor and reproducibility in research education: A CTSA institutional survey

Published online by Cambridge University Press:  01 February 2024

Cathrine Axfors
Affiliation:
Stanford University School of Medicine, Stanford Program on Research Rigor & Reproducibility (SPORR), Stanford, CA, USA Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
Mario Malički
Affiliation:
Stanford University School of Medicine, Stanford Program on Research Rigor & Reproducibility (SPORR), Stanford, CA, USA Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA
Steven N. Goodman*
Affiliation:
Stanford University School of Medicine, Stanford Program on Research Rigor & Reproducibility (SPORR), Stanford, CA, USA Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
Corresponding author: S. N. Goodman, MD, PhD; Email: steve.goodman@stanford.edu

Abstract

We assessed the rigor and reproducibility (R&R) activities of institutions funded by the National Center for Advancing Translational Sciences (NCATS) through a survey and website search (N = 61). Of 50 institutional responses, 84% reported incorporating some form of R&R training, 68% reported training specifically devoted to R&R, 30% monitored R&R practices, and 10% incentivized them. Website searches identified freely available training curricula from 9 (15%) institutions and 7 (11%) institutional programs specifically created to enhance R&R. NCATS should formally integrate R&R principles into its translational science models and institutional requirements.

Type
Brief Report
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of the Association for Clinical and Translational Science

Introduction

The clinical translatability of laboratory research has long been a concern of the National Institutes of Health (NIH) and was a key motivation for the development of the Clinical and Translational Science Awards (CTSA) program [1]. As Elias Zerhouni stated in 2005, "The scale and complexity of today’s biomedical research problems demand that scientists move beyond the confines of their individual disciplines and explore new organizational models for team science" [1]. Correspondingly, CTSA hubs are intended to address this problem through education and structures that enhance collaboration among scientists across disciplines and the translational spectrum. The translational pathway model has been expanded and elaborated over the ensuing two decades under the auspices of the National Center for Advancing Translational Sciences (NCATS), formed in 2011 to administer the CTSA consortium, whose leadership has formalized and promoted a new "Science of Translational Science" [2]. This has produced attendant organizational and educational requirements for CTSA-holding institutions, with the goal of increasing the efficiency of clinical translation.

In 2011 and 2012, articles by scientists at Bayer and Amgen caught the attention of the scientific community by pointing to the poor reproducibility of academic translational research [3,4]. These articles confirmed concerns, voiced by scientists over the preceding decade, that the variable quality of the underlying science, combined with a variety of system features, was a major cause of translational roadblocks. This prompted a 2014 article by NIH Director Francis Collins stating that the poor reproducibility of NIH-supported science required "immediate and substantive action" and that "success will come only with full engagement of the entire biomedical enterprise" [5]. It was followed by a series of NIH Rigor and Reproducibility (R & R) requirements for R01 grants (in 2016) [6], T32 grants (in 2020) [7], and data management and sharing plans (in 2023) [8]. Scientific rigor is defined as the strict application of the scientific method to ensure robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results [6]. A study has good reproducibility if its design, data gathering, analysis, and inferences can be re-run and corroborated. Computational reproducibility refers to obtaining the same (statistical) results by re-running the published analysis using the researchers’ methods and (deposited) code or data [9].

Interestingly, the NIH’s concern with poor research rigor and reproducibility as a contributor to translational failure is not reflected in NCATS translational models or in CTSA hub requirements. There are no requirements specifically related to rigor and reproducibility in the most recent CTSA funding opportunity announcement [10], and only minimal language in the 2022 NCATS paper "Advancing Translational Science Education" [11]. In that paper, the only mention of R & R comes in a description of a translational scientist as a "Rigorous researcher" who "Conducts research at the highest level of rigor and transparency, possesses strong statistical analysis skills, and designs research projects to maximize reproducibility." A new heading, "Rigor and Reproducibility," was added to the NCATS Translational Science Principles webpage in April 2023, albeit with minimal detail about its operationalization [12].

Given the strong NIH emphasis on R & R training and practices as central to efficient translation, and the lack of formal R & R institutional requirements from NCATS, we conducted a survey to determine the degree to which CTSA hubs have incorporated R & R training and support into their translational research education and infrastructure.

Materials and methods

We sent an online survey to the principal investigators of all CTSA-funded institutions and searched their websites using "rigor" and "reproducibility" as keywords. The survey, developed by the authors based on their knowledge of existing activities, had 12 questions related to R & R activities and an open-ended comment section. Full survey questions, the website search strategy, and the list of surveyed institutions are available in the Supplementary File. The survey was first sent on 6 January 2022 and was followed by three email reminders, as well as two phone call attempts to reach non-respondents. Responses were gathered until August 2022. The final response rate was 82% (50 of 61 institutions). Survey results are reported as a percentage (and number) of responding institutions (N = 50), while resources are reported as a number (and percentage) of all CTSA-funded institutions (N = 61). Open-ended answers were inductively classified to identify common themes.
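For illustration only, the following is a minimal sketch of the kind of keyword scan that could support the website component of such a review. It is not the authors’ actual search procedure (which is described in the Supplementary File), and the URL shown is a hypothetical placeholder.

    # Minimal sketch (assumption, not the authors' method): check whether a set of
    # institutional web pages mention the keywords "rigor" and "reproducibility".
    import urllib.request

    KEYWORDS = ("rigor", "reproducibility")
    hub_pages = [
        "https://example-ctsa-hub.edu/education",  # hypothetical placeholder URL
    ]

    def page_mentions_keywords(url: str) -> dict:
        """Return, for each keyword, whether it appears in the page (case-insensitive)."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="ignore").lower()
        return {kw: (kw in html) for kw in KEYWORDS}

    if __name__ == "__main__":
        for url in hub_pages:
            try:
                print(url, page_mentions_keywords(url))
            except OSError as err:  # network or HTTP errors
                print(url, "could not be retrieved:", err)

In practice, hits from such a scan would still require manual review to distinguish substantive R & R training or resources from incidental uses of the keywords.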

Results

Survey respondents indicated that 84% (N = 42) of institutions had incorporated R & R training into existing programs and courses, 68% (N = 34) had training specifically devoted to R & R, 30% (N = 15) monitored R & R at their institutions, and 10% (N = 5) recognized or incentivized best R & R practices of their researchers (Table 1). In the free-text comments, many respondents indicated that their institutions had "mandatory research methods," "good laboratory practice" (GLP), or "responsible conduct of research" courses, which they considered to fall under R & R even if that terminology was not used in course syllabi. Based on the survey responses and website searches, we identified 33 (54%) institutions with descriptions of R & R training in existing courses and 34 (56%) with training specifically devoted to R & R. We also identified 34 different R & R resources (e.g., guides, textbooks, and courses) created by or externally linked on institutional websites, including freely available training materials from nine (15%) institutions. Finally, we identified seven (11%) hubs with programs specifically designed to enhance R & R at their institution (Table 2).

Table 1. Rigor and reproducibility (R & R) activities of Clinical and Translational Science Awards hubs reported by survey respondents (N = 50)

Table 2. Rigor and reproducibility guides, reports or recommendations, programs, and trainings with available course materials identified from Clinical and Translational Science Awards funded institutions

Discussion

Our study found that most CTSA hubs reported incorporating R & R content into their courses or having dedicated R & R training, likely a result of the NIH policies described above. Incentives and recognition for these practices were reported in only five institutions. This was not surprising, as US and international tenure and promotion criteria rarely specify R & R criteria or outcomes [13,14]. Our survey also revealed that respondents saw overlaps between R & R and topics embedded in either standard research methodology education or responsible conduct of research (RCR) training, and it was difficult to discern from the survey results how respondents drew that distinction. We therefore believe the actual percentage of hubs with meaningful support for R & R is closer to the roughly 50%–70% formally using the terms "rigor" and "reproducibility" in courses or on their websites than to the 84% of principal investigators who stated that it was taught.

With 2023 declared the "Year of Open Science" in the USA [15] and the accompanying focus on developing open science practices and education, greater clarity will be needed regarding requirements for distinct or integrated education and training in RCR, R & R, and open science [16,17]. Further efforts will be needed to facilitate the accreditation of courses and the establishment of competencies for these specific terms. Greater transparency requires attention to data management processes before data are cleaned or analyzed. The importance of this has been demonstrated in a variety of many-lab and many-analyst projects across a wide range of applications, from cell counting to imaging and psychology [18–20], as well as in a variety of high-profile cases where conclusions were found to be unsupported only after close scrutiny of the raw data [21–24]. It is also a focus of the 2023 NIH data management and sharing requirements, which call for a description of the pre-analytic data management process [8]. Openness and transparency are also necessary for proper assessment of rigor and for confirming reproducibility [25,26]. "Research rigor" requires attention not only to experimental design and conduct, including sample size implications, but also to topics such as hidden multiplicity, the reporting of negative results, misinterpretations of p-values and statistical significance, and the true strength of the evidence underlying research claims.

T32 requirements for R & R training, first instituted in May 2020, could broadly influence R & R education at CTSA hubs as T32 grants are renewed. Their effect on faculty practice is as yet uncertain, and these requirements do not extend to the array of research support services supported by CTSAs. Without broad-based integration at all levels of the research enterprise, the impact of trainee education may be limited. NCATS requirements and translational models should formally incorporate these principles, as there is substantial empirical evidence that they affect the translatability of both preclinical and clinical research.

Our study has a number of limitations. We did not receive responses from 11 of 61 (18%) CTSA hubs; as it is unlikely that non-respondents had more R & R activities than respondents, our reported rates are probably biased upwards. Because we could only search publicly available websites, content on institutions' intranets was missed unless reported by survey respondents. Also, while respondents reported the existence of R & R-related training, we could not assess the coverage of R & R topics; we hope to collect such information in the future. One of the main motivations behind our study was to stimulate a broader discussion and the establishment of standards that would make it clearer whether a given training satisfies RCR, GLP, or R & R requirements, and in which cases it could satisfy all three. We also did not ascertain the specifics of the monitoring and incentives that institutions reported, nor did we assess the quality or extent of the resources that the CTSAs provided.

We know of no other studies examining the rigor and reproducibility education and support provided by CTSA hubs. We hope this study facilitates the sharing of R & R resources and best practices across the CTSA network and can serve as a baseline for monitoring future progress. The resources collected here are posted on the website of the Stanford Program on Research Rigor & Reproducibility (SPORR.stanford.edu) for use by the CTSA network and others, and the page will be updated as new information is sent to SPORR [27].

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.10.

Acknowledgments

Cathrine Axfors was a postdoctoral researcher at Stanford and is currently employed at the Research Center for Clinical Neuroimmunology and Neuroscience Basel, University Hospital Basel and University of Basel, Basel, Switzerland. The Stanford affiliation is credited because most of her work for this study was conducted during her postdoctoral training at Stanford.

Author contributions

Cathrine Axfors: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing. Mario Malički: Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Project administration, Writing – original draft, Writing – review & editing. Steven N Goodman: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Methodology, Resources, Supervision, Writing – review & editing.

Funding statement

Research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR003142. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Competing interests

The survey results include responses from Stanford University, which were provided by the authors of the manuscript.

Ethical approval

The Institutional Review Board at Stanford University has ruled that the project did not meet the definition of human subjects research and exempted it from Institutional Review Board review.

Footnotes

a These two authors contributed equally to this work.

References

1. Zerhouni EA. Translational and clinical science — time for a new vision. N Engl J Med. 2005;353(15):1621–1623. doi: 10.1056/NEJMsb053723.
2. Austin CP. Opportunities and challenges in translational science. Clin Transl Sci. 2021;14(5):1629–1647. doi: 10.1111/cts.13055.
3. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712. doi: 10.1038/nrd3439-c1.
4. Begley CG, Ellis LM. Raise standards for preclinical cancer research. Nature. 2012;483(7391):531–533. doi: 10.1038/483531a.
5. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505(7485):612–613. doi: 10.1038/505612a.
6. NOT-OD-16-011: Implementing Rigor and Transparency in NIH & AHRQ Research Grant Applications. https://grants-nih-gov.stanford.idm.oclc.org/grants/guide/notice-files/NOT-OD-16-011.html. Accessed August 28, 2023.
7. NOT-OD-20-033: NIH and AHRQ Announce Upcoming Changes to Policies, Instructions and Forms for Research Training Grant, Fellowship, and Career Development Award Applications. https://grants-nih-gov.stanford.idm.oclc.org/grants/guide/notice-files/NOT-OD-20-033.html. Accessed August 28, 2023.
8. NOT-OD-21-013: Final NIH Policy for Data Management and Sharing. https://grants-nih-gov.stanford.idm.oclc.org/grants/guide/notice-files/NOT-OD-21-013.html. Accessed August 28, 2023.
9. Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Sci Transl Med. 2016;8(341):341ps12. doi: 10.1126/scitranslmed.aaf5027.
10. PAR-21-293: Clinical and Translational Science Award (UM1 Clinical Trial Optional). https://grants-nih-gov.stanford.idm.oclc.org/grants/guide/pa-files/PAR-21-293.html. Accessed August 28, 2023.
11. Faupel-Badger JM, Vogel AL, Austin CP, Rutter JL. Advancing translational science education. Clin Transl Sci. 2022;15(11):2555–2566. doi: 10.1111/cts.13390.
12. Translational Science Principles. National Center for Advancing Translational Sciences. Published January 6, 2022. https://ncats.nih.gov/training-education/translational-science-principles. Accessed August 28, 2023.
13. Alperin JP, Schimanski LA, La M, Niles MT, McKiernan EC. The Value of Data and Other Non-traditional Scholarly Outputs in Academic Review, Promotion, and Tenure in Canada and the United States. In: Open Handb Linguist Data Manag. Cambridge, MA: The MIT Press; 2020.
14. Rice DB, Raffoul H, Ioannidis JPA, Moher D. Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities. BMJ. 2020;369:m2081. doi: 10.1136/bmj.m2081.
15. FACT SHEET: Biden-Harris Administration Announces New Actions to Advance Open and Equitable Research. The White House, Office of Science and Technology Policy. Published January 11, 2023. https://www.whitehouse.gov/ostp/news-updates/2023/01/11/fact-sheet-biden-harris-administration-announces-new-actions-to-advance-open-and-equitable-research/. Accessed August 28, 2023.
16. Pontika N, Knoth P, Cancellieri M, Pearce S. Fostering open science to research using a taxonomy and an eLearning portal. In: Proceedings of the 15th International Conference on Knowledge Technologies and Data-Driven Business (i-KNOW '15). Association for Computing Machinery; 2015:1–8.
17. Vicente-Saez R, Martinez-Fuentes C. Open science now: a systematic literature review for an integrated definition. J Bus Res. 2018;88:428–436. doi: 10.1016/j.jbusres.2017.12.043.
18. Silberzahn R, Uhlmann EL, Martin DP, et al. Many analysts, one data set: making transparent how variations in analytic choices affect results. Adv Methods Pract Psychol Sci. 2018;1(3):337–356. doi: 10.1177/2515245917747646.
19. Niepel M, Hafner M, Mills CE, et al. A multi-center study on the reproducibility of drug-response assays in mammalian cell lines. Cell Syst. 2019;9(1):35–48.e5. doi: 10.1016/j.cels.2019.06.005.
20. Botvinik-Nezer R, Holzmeister F, Camerer CF, et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature. 2020;582(7810):84–88. doi: 10.1038/s41586-020-2314-9.
21. Baggerly KA, Morris JS, Coombes KR. Reproducibility of SELDI-TOF protein patterns in serum: comparing datasets from different experiments. Bioinformatics. 2004;20(5):777–785. doi: 10.1093/bioinformatics/btg484.
22. Micheel CM, Nass SJ, Omenn GS. Evolution of Translational Omics: Lessons Learned and the Path Forward. Washington, DC: National Academies Press; 2012. doi: 10.17226/13297.
23. Piller C. Blots on a field? Science. 2022;377(6604):358–363. doi: 10.1126/science.add9993.
24. After honesty researcher's retractions, colleagues expand scrutiny of her work. https://www.science.org/content/article/after-honesty-researcher-s-retractions-colleagues-expand-scrutiny-her-work. Accessed December 21, 2023.
25. Menke J, Roelandse M, Ozyurt B, Martone M, Bandrowski A. The rigor and transparency index quality metric for assessing biological and medical science methods. iScience. 2020;23(11):101698. doi: 10.1016/j.isci.2020.101698.
26. McIntosh LD, Whittam R, Porter S, Vitale CH, Kidambi M. Dimensions Research Integrity White Paper. London, UK: Digital Science; 2023. doi: 10.6084/m9.figshare.21997385.v2.
27. Stanford Program on Research Rigor & Reproducibility. https://med.stanford.edu/sporr. Accessed August 28, 2023.
