5 results
172 Roles and Expectations for Evaluators within a Learning Health System
- Anna Perry, Doug Easterling
- Journal:
- Journal of Clinical and Translational Science / Volume 8 / Issue s1 / April 2024
- Published online by Cambridge University Press:
- 03 April 2024, pp. 51-52
- Open access
OBJECTIVES/GOALS: Our objective is to explore the evolving role of evaluators within Learning Health Systems (LHSs) and the implications for evaluation approaches in these dynamic healthcare environments. We aim to disseminate lessons learned to help inform best practices for other CTSA hubs operating under an LHS model. METHODS/STUDY POPULATION: Our investigation builds on our prior qualitative analysis of the LHS literature and our contextualization of the unique challenges, and potential remedies, of an LHS in Academic Health Centers. As evaluators, we are particularly interested in understanding how evaluation work is conducted in LHSs and in exploring ways to optimize the role of evaluators and their skillset in this context. For this investigation, we examined the competencies necessary for evaluators working in LHSs and the specialized evaluation approaches needed to fulfill these requirements. Our approach drew on multi-faceted data and experience: insights from our literature review, direct experience within the WFUSOM CTSI, and discussions with other evaluators. This combination of data sources provided the foundation for our analysis. RESULTS/ANTICIPATED RESULTS: We expect that as more health systems move toward the LHS model, they will have an increased need for various forms of evaluation, requiring resources well beyond what they currently dedicate to evaluation. Expectations for evaluators will expand in two distinct yet complementary categories: generating new knowledge and translating research knowledge into practice. Anticipated results include identifying essential competencies for evaluators in LHSs, such as data proficiency, clinical understanding, and adaptive skills. We also expect to identify evaluation approaches specific to LHSs, including quality improvement studies, pragmatic trials, and stakeholder-engaged research.
DISCUSSION/SIGNIFICANCE: Understanding the evolving role of evaluators and the specialized evaluation approaches used in LHSs is crucial. It enhances the ability to generate localized evidence, customize interventions, and improve patient care. This knowledge empowers healthcare systems to adapt, innovate, and deliver high-quality care with greater impact on patient outcomes.
517 Understanding Distinctions in the Implementation of Learning Health System (LHS)
- Doug Easterling, Anna Perry, Sabina Gesell
- Journal:
- Journal of Clinical and Translational Science / Volume 6 / Issue s1 / April 2022
- Published online by Cambridge University Press:
- 19 April 2022, pp. 106-107
- Open access
OBJECTIVES/GOALS: The LHS concept has been promoted as a means for health systems to improve quality, safety, efficiency, and equity. NAM's definition has been widely adopted, but it is broad and has led to variation in how the LHS is operationalized. Drawing on a taxonomy developed through a review of the literature, we developed a tool that shows how LHSs are implemented in practice. METHODS/STUDY POPULATION: The LHS Implementation Assessment Tool (LHS-IAT) indicates which forms of work are being carried out by a health system that purports to operate an LHS. The LHS-IAT is based on the LHS Consolidated Framework (LHS-CF), which was developed through a qualitative analysis of the LHS literature. The LHS-CF contains 38 primary elements and 56 secondary elements that have been associated with the LHS construct. These elements are organized into 5 bodies of work (e.g., translating evidence into practice) and 4 enabling conditions (e.g., supportive culture). The LHS-IAT assesses whether a health system operating as an LHS is implementing each of the key elements in the LHS-CF. The usefulness of the LHS-IAT will be demonstrated by applying the tool to 5 LHSs that have been described in the literature. RESULTS/ANTICIPATED RESULTS: The LHS-IAT produces a quantitative profile for any given health system operating as an LHS; each LHS element is assessed as either emphasized, otherwise present, or absent. With this information, we create a profile for each LHS implementation using spider graphs. Systems that emphasize different elements will have differently shaped spider graphs.
Based on our initial coding of publications, we expect at least 4 distinct profiles within our sample, reflecting differences in emphasis on factors such as continuous improvement practices, adoption of internally and externally tested interventions, research addressing patient care issues prioritized by institutional leaders, investigator-initiated research, clinician-engaged research, and engagement of patients and families. DISCUSSION/SIGNIFICANCE: The LHS-IAT will show differences in how health systems translate the LHS concept into practice, providing a shared language for those studying and/or implementing LHSs. With the ability to map out an approach, health system leaders will have a tool to clarify intent and build consensus on which LHS model they want to implement and invest in.
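The emphasized / otherwise present / absent ratings described in this abstract lend themselves to a simple numeric profile. As a rough illustration only (the 2/1/0 scoring, the element names, and the grouping below are our own assumptions for the sketch, not part of the published LHS-IAT), such a profile could be computed as:

```python
# Hypothetical scoring sketch for an LHS-IAT-style profile.
# The 2/1/0 mapping and element names are illustrative assumptions.
RATING_SCORES = {"emphasized": 2, "otherwise present": 1, "absent": 0}

def profile(ratings):
    """Convert per-element ratings into numeric scores (one axis per element)."""
    return {element: RATING_SCORES[rating] for element, rating in ratings.items()}

def domain_means(scores, domains):
    """Average element scores within each body of work or enabling condition."""
    return {
        name: sum(scores[e] for e in elements) / len(elements)
        for name, elements in domains.items()
    }

# Example: three illustrative elements rated for one health system
scores = profile({
    "translating evidence into practice": "emphasized",
    "continuous improvement": "otherwise present",
    "patient engagement": "absent",
})
```

The resulting per-element (or per-domain) scores could then be plotted on polar axes, which would yield the differently shaped spider graphs the abstract describes.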
23688 Impact of moving to a virtual format with the Wake Forest School of Medicine (WFSM) Mentor Academy (MA)
- Debra I. Diz, Claudia Olivier, Anna Perry, Doug Easterling
- Journal:
- Journal of Clinical and Translational Science / Volume 5 / Issue s1 / March 2021
- Published online by Cambridge University Press:
- 30 March 2021, p. 61
- Open access
ABSTRACT IMPACT: The Wake Forest School of Medicine Mentor Academy has adapted to provide continued effective and relevant formal mentoring training to translational researchers in a virtual format, improving mentoring and providing effective mentor-mentee communication tools. OBJECTIVES/GOALS: To determine whether the WFSM Mentor Academy (MA), an effective, long-standing mentoring program for research faculty, is compromised after moving from an in-person to an online format as a result of COVID-19 restrictions. METHODS/STUDY POPULATION: A vetted National Research Mentoring Network (NRMN) curriculum implemented at WFSM addresses 6 major competencies (Effective Communication, Aligning Expectations, Assessing Understanding, Addressing Equity/Inclusion, Fostering Independence, Promoting Professional Development) over 6 months in 10 sessions (20 contact hrs). COVID-19 required that the MA (13 participants) move to an online format after 3 (of 10) in-person sessions. We surveyed 26 self-rated mentoring competencies pre- and post-MA on a 7-point numerical scale (abstract published for ACTS 2020) and, in 2020, added assessments of online versus in-person MA satisfaction/effectiveness and of the perceived impact on MA participants' ability to mentor in an exclusively virtual format. RESULTS/ANTICIPATED RESULTS: All 13 participants responded to the survey and rated the online format as effective (9) or somewhat effective (4) for learning content. For participant interactions, however, only 4 found it effective and 9 somewhat effective. Regarding the ability to mentor in a virtual format, the most negatively affected competencies were 'helping your mentee network effectively' (7 of 13), 'motivating your mentee' (7), and 'identifying and accommodating different communication styles' (6). Goal setting (research goals, career goals) was rated easier under COVID-19 restrictions by 3 mentors.
Increases in pre-post self-rated mentoring effectiveness (+1 pt quality, +1 pt meeting mentee expectations) are similar to historical values, and 12 of the 13 mentors changed mentoring practices based on MA experiences. DISCUSSION/SIGNIFICANCE OF FINDINGS: While 2020 ratings of increased effectiveness are similar to prior years, the 2021 MA will remain online, so we will adjust content to address the challenges identified in training mentors and mentoring trainees in virtual settings, using strategies to keep MA participants engaged online and sharing new resources for virtual/hybrid-format mentoring.
3403 Understanding the career pathways of scholars participating in Scholar Programs and Academy
- Tanha Patel, Sabina Gesell, Doug Easterling
- Journal:
- Journal of Clinical and Translational Science / Volume 3 / Issue s1 / March 2019
- Published online by Cambridge University Press:
- 26 March 2019, p. 76
- Open access
OBJECTIVES/SPECIFIC AIMS: The Wake Forest Clinical and Translational Science Institute (CTSI) put a KL2 program in place in 2007 and introduced an 18-month Translational Research Academy (TRA) in 2010. The TRA provides education and leadership development training, research support services, mentoring, and networking opportunities to 15-20 early-career clinical and translational researchers, including those receiving KL2 awards. The KL2 and TRA programs make up the Mentored Career Development Core administered by the Wake Forest CTSA Education Program. Over the years, program administrators have collected feedback from graduates on what they liked and did not like about the programs. However, a comprehensive evaluation of the impact of the training on helping scholars advance their research and research careers had not been conducted. The purpose of this evaluation was to assess whether and how the KL2 and TRA programs help scholars advance their research and research careers. METHODS/STUDY POPULATION: Semi-structured phone interviews were conducted with 11 selected CTSI scholars from the KL2 and TRA programs. The interviews focused on 4 key areas: expectations coming into the program, the role the program played in their career development over time, what else the program could or should have done to support them and their research, and which other CTSI services were valuable in their career development. RESULTS/ANTICIPATED RESULTS: Even though scholars joined the program as early-career researchers, their amount of research experience varied, as did their areas of interest. Program participants included both clinician and basic researchers. Scholars came into the program with different levels of expectation and drive to use the program as a platform to lift their research careers.
The interview conversations gave insight into the career trajectories scholars have taken before and after joining the academy. For example, among the scholars who had graduated from the programs, 5 of the 6 had received a career development award, all 6 had their own grants to support their research, 3 had received professional promotions, and all 6 held administrative leadership roles in addition to pursuing their research careers. Knowing where scholars started and the pathways they took to get where they are now allowed us to better understand which aspects of the program were most valuable. Scholars noted that sessions on grant writing and developing specific aims were very helpful. Among the services provided, grant editor support was the one everyone noted as most important to them, even after leaving the program. DISCUSSION/SIGNIFICANCE OF IMPACT: The interviews and the candid feedback provided by the scholars have been immensely valuable in helping program administrators understand the impact the program (and which specific components) is having on scholars. Looking at the scholars' career trajectories, those entering the academy with some research experience and the ability to continue engaging in research benefited most from what the program had to offer. Scholars who were still very early in their research careers gained knowledge but were not always able to apply it because they were not working toward a grant at the time. Using the data on career trajectories, the recommendations for improving the program, and the other CTSI services that were most valuable, the program administrators decided to redesign the curriculum. The new version of the program is tailored for scholars who have research experience and are working toward a career development grant such as a K or R.
This allows for a curriculum that is more intense and hands-on, with the expectation that scholars will submit their application toward the end of the program. A separate program is being developed for early-career researchers who are still finding their footing in the field, providing basic research competencies through ad hoc courses and seminars.
3388 Evaluation of Mentor Academy using self-assessed research mentoring competencies
- Tanha Patel, Whitney Davis, Doug Easterling
- Journal:
- Journal of Clinical and Translational Science / Volume 3 / Issue s1 / March 2019
- Published online by Cambridge University Press:
- 26 March 2019, p. 70
- Open access
OBJECTIVES/SPECIFIC AIMS: The goal of the Wake Forest Clinical and Translational Science Institute (WF CTSI) Mentor Academy is to help grow the next generation of faculty with competencies specific to research mentoring. The Mentor Academy curriculum is adapted from an evidence-based national curriculum developed by the National Research Mentoring Network and includes 20 contact hours of didactic and experiential training, complemented by outside readings and assignments. A pre-post-follow-up competency assessment is built into the curriculum for both participants and their current mentees. The purpose of this study was to assess self-rated research mentoring competencies among Mentor Academy participants to better understand the program's effectiveness. METHODS/STUDY POPULATION: A total of 37 mid-level or early senior faculty members from WF have participated in the 3 Mentor Academy cohorts completed so far. All participants receive 5% salary support and are expected to regularly attend Mentor Academy sessions; complete pre-, post-, and 6-month follow-up self-assessments; and provide a list of their active mentees. The identified mentees are also asked to assess the participating mentors' research mentoring competencies before the start and 6 months after the end of the Mentor Academy. The same list of 26 mentoring competencies is included in the self-assessments for both mentors and mentees. RESULTS/ANTICIPATED RESULTS: Initial self-assessment results suggest that mentors enter the academy with rather high self-assessed competency ratings, leaving limited room for pre/post change; on average, self-assessed competency ratings increase by 1.0 on a 7-point scale.
Interestingly, for the 2 cohorts where mentees were also asked to assess their mentors' competencies, the mentees rated their mentors higher (on all 26 items) than the mentors rated themselves, at both the pre and the 6-month follow-up assessments. DISCUSSION/SIGNIFICANCE OF IMPACT: After compiling data for 3 cohorts, we consistently see similar patterns in self-assessed competency ratings: participants come in with a high level of competency, and mentees rate their competency even higher. These findings need further consideration. For example, program administrators need to discuss how participants are recruited, whether we are recruiting the intended users, and what outcome(s) we should expect from the program. We also need to further explore differing perceptions of mentor-mentee relationships and expectations to gauge how reliable the mentee data are. A collaboration with the National Research Mentoring Network is also needed to see how these self-assessed competencies compare to those of participants using the curriculum outside of WF.
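The pre/post comparison reported in this abstract amounts to a mean paired difference on the 7-point scale. A minimal sketch of that calculation, with entirely made-up ratings (the real assessments cover 26 competencies per participant):

```python
# Mean paired pre/post change on a 7-point self-rating scale.
# The pre/post values below are hypothetical, for illustration only.

def mean_change(pre, post):
    """Average per-item change across paired pre/post ratings."""
    assert len(pre) == len(post), "ratings must be paired"
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

pre = [5.0, 5.5, 6.0, 5.0]    # hypothetical pre-academy self-ratings
post = [6.0, 6.5, 7.0, 6.0]   # hypothetical post-academy self-ratings
delta = mean_change(pre, post)  # an average gain of 1.0 points
```

Note how high baseline ratings (here 5-6 on a 7-point scale) cap the possible gain, which is one plausible reading of the modest pre/post change the abstract reports.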