Evaluation is central to the clinical and translational science enterprise, but what counts as “good evaluation” is being fundamentally reimagined. The Clinical and Translational Science Award (CTSA) Program [1] and the IDeA Clinical and Translational Research (CTR) Program [2] communities are evolving from compliance-oriented tracking and narrow academic metrics toward innovative and evidence-informed approaches that contribute to the science of evaluation. This special issue, “Advancing Evaluation Practices in Clinical and Translational Science/Research,” showcases that evolution by highlighting novel advancements, methodologies, and transformative practices in the evaluation of clinical and translational science. Collectively, the articles move the field toward more rigorous, use-driven evaluation that is highly attentive to impact. We hope this special issue generates new ideas, encourages innovation, and continues to advance the discipline through methodologically sound, creative, and effective approaches.
The articles in this issue cluster into five themes. The first, Reframing Translational Impact, includes articles advancing the use of the Translational Science Benefit Model by focusing on impact methodology [Reference Miovsky, Woodworth and Andersen3], return on investment and benefits of seed funding [Reference Philibert, Batson and Fletcher4], and community engagement and impact [Reference Gomes, Murphy and Mitchell5]. Also grouped in this theme is an article exploring a conceptual model for translational science impact [Reference Presley, Worrell, Jauregui-Dusseau, Madden and James6]. The second cluster, Using Innovative Measurement Tools, brings readers’ attention to a novel, comprehensive tool assessing our clinical and translational research workforce [Reference Joly, Cousineau, Gray and Harder7], a digital tool to increase evaluation capacity [Reference Kuhfeldt and Brimhall8], and community-engaged development of a mixed-method evaluation tool [Reference Allen, Graff and Carlin9]. The third group, Building Infrastructures and Cultures for Continuous Improvement, highlights continuous quality improvement around data infrastructure [Reference Padek, Mudaranthakam, Pepper, Penne Mays and Ellis10, Reference Welch, Daudelin, Serrano, Chen, Cabrera and Gibson11], reimagining evaluation as a catalyst for continuous improvement [Reference Fishman, Lounsbury, Lechuga, Kahn, Kim and Keller12], and cross-institution collaboration [Reference Kane, Lipschitz and Abedin13]. Within this theme, other articles introduce a protocol for tracking scholarly products [Reference Lucas, Kuhn and Witkemper14], support partnerships between translational and social science [Reference Sperling, Muhigaba, Quenstedt, Wyman Roth, Brown and McClernon15], and integrate perspectives and methods from other disciplines [Reference Molldrem, Moses, Tumilty, Swanson, Farroni and Smith16]. 
In the fourth theme, Innovating Methods and Tools for Community Engagement and Inclusiveness, investigators share useful community engagement evaluation approaches [Reference Waters, Joly, Gray, Carney and Fairfield17] and metrics for measuring engagement [Reference Murphy, Weber, Cordonnier and Mitchell18]. Other studies focus on returning value to participants via a modular digital system [Reference Carmichael, Walter and Labbree19], applying a structured framework to evaluating community engagement efforts [Reference Do-Golden, Wolfe, Maccalla, Settles and Kipke20], and promoting a data-driven approach to evaluating equity and inclusiveness [Reference Chen, Mohamed, Benn and Brittain21]. The fifth thematic cluster, Strengthening Evaluation Capacity and Fidelity, contains studies that improve assessment of fidelity of intervention delivery [Reference Martin, Moser and Bunger22], use meta-evaluation [Reference Giancola, Stevenson and Philibert23], and focus on team science evaluation [Reference Sweeney, Hunt, Sajdyk and Volkov24]. Finally, a special communication, reporting on the 2025 Association for Clinical and Translational Science Evaluation Special Interest Group meeting, synthesizes current priorities and innovations emerging from the broader clinical and translational science evaluation community. Taken together with the other contributions, it signals a maturing field that is increasingly deliberate about its own learning agenda [Reference Volkov, Samuels and Sperling25].
As co-editors of this special issue, we are proud to share these innovative contributions to our field of evaluation, and we would like to thank the authors, peer reviewers, and the Journal of Clinical and Translational Science Editorial Board for providing the forum and support for this important dialogue on evaluation. Our goal was to have translational scientists, clinical researchers, evaluation practitioners, community partners, and CTSA/CTR program stakeholders contribute original articles, systematic reviews, case studies, and other manuscripts addressing critical aspects of evaluation in the field, and we feel we attained this goal with the depth and breadth of articles. We hope that by highlighting innovative evaluation approaches and evaluation capacity building initiatives, identifying evaluation barriers and facilitators, and showcasing real-world evaluation use and impact, this issue will support further evidence-based advances in the field of clinical and translational science. We invite you to explore this issue as we consider what new frontiers in evaluation of clinical and translational science may emerge.
Author contributions
Boris Volkov: Conceptualization, Writing – original draft, Writing – review & editing; Valerie Harder: Conceptualization, Writing – original draft, Writing – review & editing; Brenda Joly: Conceptualization, Supervision, Writing – review & editing.
Funding statement
This work was supported, in part, by the National Institutes of Health. We acknowledge grant # UM1 TR004405 from the National Center for Advancing Translational Sciences (NCATS) and grant # U54 GM115516 from the National Institute of General Medical Sciences (NIGMS). The content is solely the responsibility of the authors and does not necessarily represent the official views of the contributors’ institutions or funders.
Competing interests
The authors have no conflicts of interest to declare.