
A mixed-methods analysis to understand the implementation of a multistakeholder research consortium: Environmental influences on child health outcomes (ECHO)

Published online by Cambridge University Press:  29 August 2023

Elissa Z. Faro*
Affiliation:
Department of Internal Medicine, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
Katherine A. Sauder
Affiliation:
Department of Pediatrics, Section of Nutrition, University of Colorado School of Medicine, Aurora, CO, USA
Gwendolyn S. Norman
Affiliation:
Department of Obstetrics and Gynecology, Wayne State University, Detroit, MI, USA
Amber Anderson
Affiliation:
Duke Clinical Research Institute, Duke University, Durham, NC, USA
Carmen Vélez-Vega
Affiliation:
Social Sciences Department, Graduate School of Public Health, University of Puerto Rico, San Juan, Puerto Rico
David Napp
Affiliation:
Practical Applications of Public Health, Durham, NC, USA
Kathi C. Huddleston
Affiliation:
College of Public Health, George Mason University, Fairfax, VA, USA
*
Corresponding author: E. Z. Faro, PhD; Email: elissa-faro@uiowa.edu

Abstract

Introduction:

Large, transdisciplinary research consortia have increasingly been called upon to address complex and challenging health problems. The National Institutes of Health’s (NIH) Environmental influences on Child Health Outcomes (ECHO) Program developed multisite collaboration strategies to promote impactful collaborative observational research on child health. Team science and implementation science offer theoretical and methodological structure to answer questions about the strategies that facilitate successful consortia. We sought to characterize the elements and conditions that influence the implementation of a complex, interdisciplinary longitudinal research program, ECHO.

Methods:

Informed by the Practical, Robust Implementation and Sustainability Model (PRISM), our ethnographic research included semi-structured interviews with internal stakeholders and program evaluation metrics. We conducted template and matrix analysis and triangulated the qualitative and quantitative data to understand the implementation of ECHO.

Results:

Between February and May 2022, we conducted 24 virtual interviews with representatives from ECHO components. The main cross-cutting topics that emerged from thematic analysis were collaboration and team science; communication and decision-making; data processes and harmonization; and diversity, equity, and inclusion. Both the qualitative and secondary quantitative evaluation data provided insights into the reach, adoption, implementation, and effectiveness of the program.

Conclusion:

A large, multidisciplinary research consortium such as ECHO has produced conceptual, instrumental, capacity building, and connectivity impact for internal and external stakeholders. Facilitators included infrastructure that supported collaboration and learning, alignment of data processes, and harmonization. Opportunities for enhanced impact include multidisciplinary, multimethod communication strategies and alignment of research priorities.

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Introduction

Large research collaboratives often have more success than single investigators conducting research alone; they produce more publications in journals with higher impact factors and are cited more frequently [1,2]. Such work can have a greater impact on programs and policies because it reaches larger stakeholder audiences. As a result, funding agencies are increasing their support for large, transdisciplinary research consortia to address complex, challenging health problems [3]. There has been an accompanying rise in team science research to understand the strategies that facilitate successful teams (e.g., communication, leadership) [4]. As part of the National Institutes of Health’s (NIH) focus on multiple cross-disciplinary programs and research centers [5,6], the Environmental influences on Child Health Outcomes (ECHO) Program developed a multisite collaboration to promote impactful, collaborative research [7].

Implementation science offers a structure for understanding the identifiable contextual factors that affect implementation of the ECHO Program. These factors, such as policies, organizational climate, incentives, workflow, and target population, are multilevel and complex, and they are related to implementation outcomes [8,9]. Despite the increasing calls for team science, a number of outstanding questions remain, such as the effects of research structures and funding mechanisms on team functioning [4,10]. We sought to develop an in-depth understanding of the implementation of the observational ECHO Program and to compare qualitatively assessed contextual factors with quantitative implementation outcomes. The Practical, Robust Implementation and Sustainability Model (PRISM) guided our research design, data collection, analysis, and the integration of findings to better understand the contextual factors that affect the implementation of a multistakeholder consortium [11]. We chose PRISM as our framework because it provides the structure for understanding the complex system of ECHO and how its components interact. PRISM has four major domains: (1) intervention (the ECHO Program); (2) recipients (internal and external program stakeholders); (3) implementation and sustainability infrastructure (ECHO steering committee, working groups, data harmonization, etc.); and (4) external environment. The external environment comprised the funder, external stakeholders’ policies and programs, and, ultimately, the COVID-19 pandemic. We captured implementation outcomes qualitatively and quantitatively using the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) framework (Fig. 1) [8,12–14].

Figure 1. The PRISM framework [12] with details specific to our ECHO Program research.

We focused on characterizing the context for the implementation of the ECHO Program using ethnographic methods, triangulating the qualitative interview data with quantitative implementation outcomes, a preexisting subset of the measures developed by ECHO to track progress toward program goals [8,14]. We present our qualitative and quantitative findings to show how internal stakeholders perceive the facilitators of, and barriers to, successful implementation of a large research consortium.

Materials and Methods

Setting

In September 2016, the NIH launched the ECHO Program, a 7-year nationwide multisite collaborative research program encompassing both observational and interventional arms that explores the effects of a broad range of early environmental exposures on child health and development [15]. As of the seventh year of funding, ECHO has enrolled over 60,000 participants (from the prenatal period through adulthood) across 46 U.S. states and territories. ECHO comprises over 1,200 researchers across 84 initial observational cohorts, the NIH, the Coordinating Center (CC), the Data Analysis Center (DAC), the Human Health Exposure Analysis Resource (HHEAR), and the Person-Reported Outcomes Core (PRO Core) (Fig. 2). We did not include the interventional arm of ECHO, the IDeA States Pediatric Clinical Trials Network [16], because it does not use the same organizational structure and, as a clinical trial arm, has different protocols and processes. Finally, at the time of data collection, we could not identify representatives from the nascent Genetics Core for inclusion in the study.

Figure 2. Organizational structure of the Environmental influences on Child Health Outcomes (ECHO) Program. HHEAR = Human Health Exposure Analysis Resource; IDeA States = Institutional Development Award States; NIH = National Institutes of Health; PI = principal investigator; PRO = person-reported outcomes. (From LeWinn et al. 2022 [40].)

Study Population

We interviewed internal stakeholders from the ECHO components, seeking broad representation of research and administrative roles across ECHO [17–19]. All internal ECHO stakeholders were informed about the study via an internal email with study information and an interview invitation. Those interested contacted the research team via email to schedule an interview via Zoom. From the respondents, we purposefully selected a sample to ensure maximum variation, representing diverse perspectives (geography, role, gender, etc.) from groups that we hypothesized would have different experiences of ECHO [19]. We supplemented the initial invitation with targeted email invitations to gather additional perspectives (e.g., roles, ECHO components) not represented among the initial email volunteers. The Duke University IRB determined this study to be exempt.

Data Collection

The research team developed a semi-structured interview guide informed by PRISM that explored topics identified in the literature and by stakeholders. Six members of the research team (EF, KS, GN, DN, CV, and KH), who had previously conducted qualitative research and were trained specifically on the PRISM-informed semi-structured interview guide by the lead author (EF), conducted 20–45-minute virtual interviews until thematic saturation was achieved [20,21]. The interviews were recorded, transcribed, and accessible only to research team members.

For quantitative implementation outcomes, we conducted a secondary analysis of the Goals, Outcomes, Indicators, and Targets (GOITs), which are continuously collected, aggregated, and analyzed by the CC and the DAC. The ECHO GOITs are an evaluation plan developed to track progress in domains identified by ECHO-wide internal stakeholders as important to the success of the program. The metrics are organized around four goals: (1) enroll and retain a large and diverse group of participants in the ECHO-wide Cohort to answer key scientific questions; (2) collect and make high-quality data available for analysis; (3) collect, store, and use biospecimens and extant assay data to support ECHO-wide Cohort research; and (4) publish and disseminate high-quality, impactful science. Performance on these metrics is shared with internal stakeholders on a regular basis. The GOITs have evolved over time, with slight variations reflecting the overall trajectory of the program.
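
To make this concrete, the following is a minimal, purely illustrative sketch (not ECHO's actual tooling; the class name, field names, and numbers are invented) of how a single GOIT-style metric could be represented and checked against its annual target:

```python
# Hypothetical sketch of a GOIT-style metric record; names and numbers are
# illustrative only and do not reflect ECHO's actual goals or targets.
from dataclasses import dataclass


@dataclass
class Goit:
    goal: str        # e.g., "Goal D: publish and disseminate impactful science"
    indicator: str   # what is counted
    target: float    # annual target agreed by internal stakeholders
    observed: float  # value aggregated during the year

    def on_target(self, tolerance: float = 0.0) -> bool:
        """Return True if the observed value meets the target (within tolerance)."""
        return self.observed >= self.target * (1.0 - tolerance)


# Example usage with invented numbers.
example = Goit(
    goal="Goal D: publication and dissemination",
    indicator="ECHO-wide Cohort manuscripts and presentations",
    target=60,
    observed=45,
)
print(example.on_target())       # False: below the annual target
print(example.on_target(0.30))   # True: within a 30% tolerance band
```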

Data Analysis

Qualitative data were analyzed using template and matrix analysis methods to assess perceptions about ECHO implementation, document attitudes toward collaborative team science, and generate a formative understanding of multilevel contextual factors [22–24]. Using a template built from the deductive a priori PRISM domains, the analysis also captured inductive, emergent themes. Pairs of team members were assigned a subset of transcripts. To establish agreement and consistency, each transcript was summarized independently by two co-authors, and the summaries were then reconciled until consensus was reached. The summaries were next entered into a matrix (i.e., an Excel spreadsheet) organized by PRISM domain (i.e., each tab was a separate PRISM domain), and each domain was analyzed vertically by individual team members through analytic memos, in which emergent themes were identified within each domain. The team collectively reviewed and discussed the analytic memos and identified the overarching themes that independently emerged across multiple PRISM domains.
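
As an illustration only (not the team's actual workflow or tooling; the file name, example records, and helper structure are hypothetical), the matrix described above, with one tab per PRISM domain and reconciled summaries entered as rows, could be assembled programmatically as follows:

```python
# Illustrative sketch: group reconciled transcript summaries by PRISM domain
# and write one workbook tab per domain, mirroring the matrix described above.
# Requires pandas with an Excel writer backend (e.g., openpyxl) installed.
from collections import defaultdict

import pandas as pd

PRISM_DOMAINS = [
    "Intervention",
    "Recipients",
    "Implementation and sustainability infrastructure",
    "External environment",
]

# Hypothetical input: one reconciled summary per transcript per domain.
summaries = [
    {"transcript": "#1001", "domain": "Intervention", "summary": "..."},
    {"transcript": "#1002", "domain": "Recipients", "summary": "..."},
]

# Group summaries by domain so each domain can be read "vertically"
# and memoed for emergent themes.
by_domain = defaultdict(list)
for row in summaries:
    by_domain[row["domain"]].append(row)

# Write one tab per PRISM domain (Excel sheet names are capped at 31 characters).
with pd.ExcelWriter("prism_matrix.xlsx") as writer:
    for domain in PRISM_DOMAINS:
        pd.DataFrame(by_domain[domain]).to_excel(
            writer, sheet_name=domain[:31], index=False
        )
```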

We mapped the quantitative GOITs onto the RE-AIM framework to measure implementation outcomes (Table 1). We assessed reach as the number and representativeness of participants, together with qualitative data on recruitment, with a particular focus on equity. We used effectiveness to understand broader outcomes, for which research publications were a proxy measure; related qualitative data were interviewees’ perceptions of the potential impacts of ECHO science. We used our qualitative data to understand adoption through internal stakeholder perceptions and experiences. While much of the qualitative findings related to implementation, we also mapped GOITs concerning data collection, harmonization, and analysis to understand program delivery outcomes [13]. We did not assess maintenance per se, as ECHO is still in its implementation phase. Once we completed analysis of the interviews, we integrated the qualitative and quantitative data using side-by-side comparisons in joint displays [11,25].
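
As a schematic example of this integration step (the joint display below is illustrative rather than the actual analysis output, although the pairings are condensed from the mapping described in this section), a side-by-side comparison can be laid out as a small data frame:

```python
# Illustrative joint display pairing each RE-AIM domain's quantitative GOIT
# with the related qualitative theme; entries are condensed from this section.
import pandas as pd

joint_display = pd.DataFrame(
    [
        {
            "RE-AIM domain": "Reach",
            "GOIT (quantitative)": "Race/ethnicity of enrolled participants vs. target",
            "Qualitative theme": "Cultural competency in recruitment and retention",
        },
        {
            "RE-AIM domain": "Effectiveness",
            "GOIT (quantitative)": "ECHO-wide publications and presentations",
            "Qualitative theme": "Perceived impacts of ECHO science",
        },
        {
            "RE-AIM domain": "Adoption",
            "GOIT (quantitative)": "Not mapped (qualitative data only)",
            "Qualitative theme": "Internal stakeholder engagement and collaboration",
        },
        {
            "RE-AIM domain": "Implementation",
            "GOIT (quantitative)": "Data, biospecimen, and assay completeness",
            "Qualitative theme": "Alignment, harmonization, and training challenges",
        },
    ]
)
print(joint_display.to_string(index=False))
```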

Table 1. RE-AIM measures in ECHO (Year 6 GOITs)

DAC = data analysis center; ECHO = environmental influences on child health outcomes; GOIT = goals, outcomes, indicators, and targets; HHEAR = human health exposure analysis resource. Level 2 participants = ECHO Program participants who consented to all elements of the study protocol.

Results

We conducted 24 virtual interviews with ECHO stakeholders between February 2022 and May 2022 (Table 2). Most interviewees were affiliated with a cohort, reflecting the relative proportion of cohorts among ECHO components; we gathered the perspectives of all the components except HHEAR and the NIH. The NIH declined to participate, and despite invitations directed specifically to HHEAR representatives, none volunteered to participate. Table 3 illustrates the emergent themes that each research team member identified in their thematic analysis when writing an analytic memo for a specific PRISM domain. The research team’s discussion of the thematic analysis identified three major cross-cutting themes that emerged independently across the PRISM domains (bolded in Table 3). Below we present the findings in three overarching themes synthesized across PRISM domains: (1) collaboration and team science; (2) communication and decision making; and (3) diversity, equity, and inclusion; plus an additional theme, (4) implementation outcomes, in which the qualitative interview and quantitative GOIT data are presented together by RE-AIM domain. Further representative quotations illustrating the emergent themes within the PRISM domains are included in Table 3 to show the breadth of responses and perceptions about the implementation of ECHO, not all of which could be included in the narrative below.

Table 2. Interview participant characteristics

Table 3. Representative quotations within the a priori PRISM domains

PRISM = practical, robust implementation and sustainability model [8]; CC = coordinating center; DAC = data analysis center; ECHO = environmental influences on child health outcomes; GOITs = goals, outcomes, indicators, and targets.

Bolded emergent themes signify overarching themes elaborated in the narrative text; emergent themes marked with an asterisk (*) were aligned with RE-AIM implementation outcomes.

Collaboration and Team Science

The benefit of collaboration and team science, broadly defined as experiences with other internal elements of ECHO, was the most common cross-cutting theme, emerging across the Intervention, Recipient, and Implementation Infrastructure domains. Many interviewees enjoyed being part of a national study that brings greater breadth and depth of expertise to the team, along with more exciting translational work and clinical research. One respondent explained:

People have begun to find a lot of commonalities across not just the outcomes, but the exposures and finding different and new ways to collaborate with each other. I’m writing a paper right now that includes 47 cohorts, and so we have 40 plus co-authors and that’s huge (#1020).

Many felt that ECHO, with its expanded focus aligned across multiple exposures and outcomes, had enhanced the science of their specific cohort. However, some interviewees who felt they had willingly adapted their original study plans to meet overall ECHO goals were frustrated that some ECHO colleagues did not prioritize collaboration over their own research. Nonetheless, interviewees enjoyed opportunities to collaborate, both at large in-person meetings and in smaller virtual group meetings, which were seen as opportunities to learn from their ECHO colleagues. Many noted that while the first few years of the program felt chaotic, things had progressively stabilized and become more standardized and streamlined, with greater focus and guidance.

Most expressed a desire for more opportunities for direct connection, learning, and sharing. Interviewees wanted more time to “build camaraderie within so that we could learn from each other” (#1009). Respondents felt there were opportunities for cohorts to support each other, acknowledging that many barriers were common across cohorts, such as aligning the needs of similar groups of participants and leveraging the expertise of those working out in the field. There were requests for opportunities for problem solving and collaboration at the research staff level:

…more communication between cohort study coordinators, I just personally think would be super helpful. When we used to go like in person to our in-person meetings in DC or like the larger ECHO-wide meetings pre-COVID. The most I got from that was a breakout groups talking other coordinators, just like hearing their opinions and talking through like different scenarios and issues with them, I think, was like the most helpful out of those meetings. (#1001).

Another interviewee shared:

I know there’s like the places on the [online portal] where people can share these resources, but they're not utilized. I do not have an answer for how, but I just I know that collectively we have resources and strategies that would be beneficial to each other (#1014).

The interviewees spoke positively about the program’s collaboration: “ECHO has proven that you can have all these various cohorts and all these various components within a program work and be successful” (#1004), and, “I'm definitely learning… how to work with all these different cohorts that have different data structures and study structures and it’s been really interesting to kind of learn the sides of the science and contribute to that part of it” (#1018).

Communication and Decision Making

The theme of communication and decision making also emerged across the Intervention, Recipient, and Implementation Infrastructure domains. This theme encompassed responsiveness between ECHO components, internal meetings, requests, and infrastructure elements such as data harmonization. While those interviewed generally felt that ECHO is well organized, some reported varied experiences and impacts on their work. Communication among the ECHO components was characterized as initially complicated, with one interviewee saying ECHO “needs better communication and a better decision making process” (#1009), but as progressively improving. Multiple respondents perceived long waits for answers to routine requests from the DAC or CC, which they said delayed staff entry into the field or created inefficiencies because data collection continues in real time:

We might be trying to do something like a sample collection, and we need to know something specific and it takes a while to…find out the answer, and then in that meantime you've…maybe not been collecting the sample…now you’re behind (#1002).

Some interviewees also described difficulty with receiving multiple requests at once, all with seemingly urgent deadlines. Respondents sometimes felt:

there’s a big disconnect maybe even just in the urgency of it. A lot of times we're given a week or two maybe to respond… month or two would be more realistic fitting in with everything else that teams are doing (#1017).

However, interviewees recognized that the size of ECHO impeded timely, clear, and efficient communication:

I think that that’s possibly just a function of the size of ECHO and how many, you know how many people need to be behind each decision and things like that it might not be easy to always give a quick answer (#1002).

Respondents identified an opportunity for greater transparency in decision making, such as clarifying who in the organizational chart answers which questions, when, and whose needs are prioritized:

It seems like in ECHO everything has to go to committee, everything has to be discussed and checked with everyone else and there’s not a sense of collective trust like you know I’m going to just let somebody else make that decision and not worry about it… let’s move on it’s just there’s a lot of checking and rechecking and you know. People are spending a lot of time in meetings that you know, sometimes an hour will go by and I will ask myself what really got accomplished there? (#1021).

The size and complexity of the consortium heightened the need for transparency, given that everyone has their own perspective, which can impede communication. One ECHO Program member commented, “It feels like sometimes you talk to people and they feel like their one small thing is the most important thing, but it’s like you’ve got to remember that everyone has their small thing” (#1018).

Descriptions of the development of the ECHO-wide data collection protocol and the current data processes (e.g., collection, cleaning, harmonization) reflected some of the challenges and opportunities in decision making. Respondents discussed how more work should have been done up front to streamline the protocol and expressed hope that it would be better aligned in the future:

It’s been a challenge in terms of just kind of trying to be both flexible to respect where the cohorts came in, but also have something standard so that we can create this database…I think that was one and still is one of the biggest challenges of having everyone participate in a standard protocol and be able to have usable data (#1020).

Another interviewee added:

I think we needed to put more time in up front on definition and harmonization activities and not do it at the backend. I think that will be our limitation throughout the rest of the current years and I think it will haunt us in the next phase a little bit as well. So, I think that if I had to do some things over again, that would be something that I think we should have put more energy in up front and defined things much better (#1006).

Interviewees mentioned the perceived burden of the Program protocol on ECHO participants as another opportunity for further alignment and greater flexibility. The sheer number of protocol elements was often cited as a concern for families, “because the protocol is so much more extensive and they’re targeting multiple people in the same household, so the contact is more frequent, it has been hurting retention” (#1015). Respondents generally felt that streamlined decision making would help focus the protocol.

Diversity, Equity, and Inclusion (DEI)

Diversity, equity, and inclusion (DEI) emerged across multiple PRISM domains, both explicitly (e.g., Intervention) and implicitly, in relation to implementation outcomes such as recruitment and retention. Although diversity was not specifically defined in the interview script, interviewees tended to interpret the term as referring to underrepresented or non-White groups. Interviewees recognized that DEI had been prioritized: “DEI was obviously going on in our home base, but ECHO has really educated me a lot about it and given the opportunities for trying to advance that field” (#1010). ECHO’s focus on DEI some years after the start of the program left some cohorts feeling they could not respond as well because they could only recruit from the participants of their original study. Conversely, the ECHO cohorts that were still recruiting were happy to have the chance to meet the DEI-related goals. Other cohorts felt that access to more diverse families was lost in the longer time it took to start implementing the protocol. However, most of the cohorts had recruited underserved populations initially, so interviewees felt that DEI was nevertheless supported and maintained.

In addition to ECHO participation, interviewees described different avenues to enhance and ensure DEI across ECHO. One interviewee described their cohort’s efforts to ensure biospecimens are representative of all children in their study sample:

For example, we have hair collection videos so sort of like how to collect your own hair and we've recently added type 5 hair, African American, different types of hairstyles, we've created 4 different additional videos. So that people with braids or locks can have a video that’s more tailored to them, instead of watching somebody else with type 4 hair but loose (#1014).

Implementation Outcomes

Our data on implementation outcomes are both qualitative and quantitative; the interview themes are presented alongside the GOITs that have been mapped onto the RE-AIM measures (Table 1). The GOITs from our data collection period provide context for the interviewees’ experiences and perceptions. They also provide insight into shared priorities and into how the program itself assesses implementation. Because the GOITs were also a specific domain of interest in the interviews, we present our qualitative findings about the GOITs first, to frame the results in the implementation domains.

Goals, Outcomes, Indicators, and Targets (GOITs)

We present data from the Year 6 GOITs (2021–2022), collected concurrently with the semi-structured interviews (February–May 2022). The GOITs were initially developed in the fifth year of ECHO by a task force of internal stakeholders, based on expert recommendations for evaluation and on program-specific metadata, such as the number of cohorts and proposals in the pipeline; the objectives are adjusted annually. For example, the task force initially set the dissemination targets (60 publications and presentations per year) based on the number of cohorts in ECHO and then used metadata from the publications pipeline to predict that target in future iterations. As with other ECHO infrastructure, interviewees felt the process of identifying and selecting GOITs improved over time:

I think they're somewhat useful, probably not as useful as the amount of time we've dedicated to talking about them and developing them and studying them and figuring out how to measure them… I think it is important to have these targets because of our competing priorities, but we seem like we spent a lot of time talking about that that could have been spent on other things, like writing papers (#1010).

In general, respondents felt the metrics were important:

I think they're really helpful to have metrics that you know you're being measured by rather than just you know, trying to do everything right but not sure exactly what is the most important. So, I think it’s really helpful to have written metrics that everyone’s held to the same metrics (#1002).

While respondents felt the GOITs were a useful framework for articulating, monitoring, and focusing attention on program priorities (important given ECHO’s scientific and operational complexity), there were concerns about feasibility. Some interviewees expressed concern about achieving targets for enrollment, retention, and data collection that were affected by things outside of their control (e.g., COVID, rurality). One interviewee explained that GOITs are “useful in knowing what the larger ECHO program is looking for, but I do not think it represents completely the work that’s being done at the local sites” (#1017). Others expressed additional concerns: “data collection goals are harder for rural sites relying on remote data collection” (#1012), and “I think sometimes depending on the goal, it could take away from some of the science. If we say we want X number of publications--that’s great and that would be wonderful--but we do not want just to churn out publications for publication’s sake, because science can take a while” (#1020).

Reach

We defined the reach measure concept as the absolute number, proportion, and representativeness of individuals who are willing to participate in the ECHO Program, with a focus on recruitment strategies. The reach domain aligned with ECHO GOIT A.1: Race/Ethnicity of Pregnant Women and Child Participants Across the ECHO-Wide Cohort. At the time of the interviews, the ECHO-wide program was at or very close to the targets set for these metrics (Table 4). The related qualitative findings from the interviews addressed opportunities to enhance strategies for more diversity in recruitment and retention.

Table 4. Goal A: enrollment and retention of participants

ECHO = environmental influences on child health outcomes; N = number.

Several interviewees expressed the need for more cultural competency in ECHO, especially in terms of the marked differences in how particular groups should be approached:

People don't understand what it means to study Dominicans, like Latinx Dominicans are very different from Mexicans, who are very different from Puerto Ricans and so I think that there just isn't enough attention to cultural detail and cultural competency in ECHO at all (#1008).

This case was also made in reference to enrolling and retaining indigenous families, which also requires acknowledgement of historical injustices, attention to trust building, and a greater investment in resources:

Those communities take more time, they just take more time… You don't start off with the questions right away, you see how their family is doing, and how their kids are doing, and you go back years first, before you get to the first question… with the American Indian population that we work with it’s all relationships, it’s all trust, it’s all that that’s what that culture, you know really thrives on… it’s all about relationship building (#1009).

Resources and time were cited as critical both to reaching more underserved or minoritized communities and keeping them engaged in research.

Effectiveness

Effectiveness is the impact of the program on important outcomes, for which research publications using ECHO-wide data (versus single-cohort studies) were a proxy measure. Table 5 shows data on (1) the number of ECHO-wide Cohort manuscripts published that were derived from proposals submitted during the previous grant year and (2) the number of ECHO-wide Cohort presentations at scientific meetings. By the end of the data collection period, the program was not on track to reach its goals by the end of the year, likely due to delays in building initial infrastructure (e.g., data harmonization). However, the ECHO Publications Committee received 44% more manuscript submissions and 867% more presentation submissions than during grant year 5. Our qualitative data focused on the broad impacts for the public and for the scientific and research communities.

Table 5. Goal D: publication and dissemination

ECHO = environmental influences on child health outcomes; N = number.

Most interviewees mentioned the broad hope that ECHO would result in healthier children and would benefit ECHO participants, families, and the public at large through a better understanding of the impact of exposures on child health. Some interviewees cited benefits to the healthcare system, the economic system, and the entire country, while others described potential benefits focused on specific groups (e.g., children with a specific diagnosis). Several interviewees mentioned ECHO’s potential impact at the policy level:

It’s going to take some time, for these analysis and data cleaning but I’m hoping it turns into some type of public policy where they're able to see that this geographic region of the US had a higher incidence of you know, this type of marker which we then were able to link to this, you know household chemical you know, maybe it will impact policy in that manner, or at least put out warnings for people” (#1012).

Many interviewees described benefits to the scientific and research community, including best practices and lessons learned for team science, large-scale studies, and collaboration. Several interviewees mentioned how it would benefit individual researchers’ careers, especially junior investigators, in terms of experience and networking. “It will help people’s careers… it’s going to, you know, find whoever the next director of NIEHS is, or the NICHD” (#1008).

A few interviewees mentioned specific outcomes, exposures, or health conditions that ECHO research could impact significantly, including attention-deficit/hyperactivity disorder (ADHD), autism spectrum condition (ASC), and asthma:

The biggest ones that I think will actually happen are probably going to be something about asthma, because we'll have a lot of sample size and, maybe, be able to find some environmental relationship that could lead to actual regulation (#1003).

ECHO research in genetics, genomics, and epigenomics was described as “poised to make some true discoveries on health outcomes and maybe even therapeutics” (#1006), and “chemical exposure data from biospecimens we can assay that will give us rich exposure data from biospecimens that we can link to later child health outcomes” (#1025).

Several interviewees mentioned new research measures and methods developed over the course of the program that would benefit the scientific community for years to come. As a result of ECHO, the Patient-Reported Outcomes Measurement Information System (PROMIS) team has developed new measures and validated others for younger ages, expanding our ability to understand impacts at earlier life stages [26]. The COVID-19 pandemic also prompted advances in remote data collection:

Information gained during the pandemic can help researchers move more to remote. I think we can get harder-to-reach populations, even outside of the pandemic, I think we can get harder to reach populations if we have better recommendations for how to collect all sorts of data remotely, and I think that that can be gained from this, because we're collecting so many different types of data (#1014).

Some also saw the ECHO dataset itself as a long-term scientific resource, given the sheer size and diversity of the sample, especially populations that have previously been excluded from research (e.g., Indigenous populations) [27]. Interviewees described opportunities arising from the ECHO data infrastructure, including the harmonization of extant data, “harnessing past existing data to lead to new research questions” (#1013), the opportunities for intergenerational studies, and the ability to examine longitudinal data together with biospecimens. Going forward, deidentified data are available to investigators for scientific purposes by application to the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) Data and Specimen Hub (DASH) [28].

Adoption

We considered adoption in terms of both ECHO Program participants and ECHO internal stakeholders. With regard to the former, interviewees’ observations of participant experience varied considerably and were both encouraging and challenging. One cohort described having very committed families and gave the example of a father who continued to participate after the mother passed away. ECHO participants were described as engaged, with low attrition, but even when recruitment and retention were good, there remained the problem of getting participants to commit to the study visit schedule.

For internal stakeholder adoption, respondents mostly spoke positively about the investigative network and the opportunity to do collaborative science, although some felt that there was not enough inclusion of team members working across multiple roles: “I have a doctorate and…I have been involved on one of the writing teams, but I think it’s very cliquey” (#1008). Interviewees described great advantages to participating in a large research consortium, including having many stakeholders willing to listen to ideas:

“I'm definitely learning…how to work with all these different cohorts that have different data structures and study structures and it’s been really interesting to kind of learn the sides of the science and contribute to that part of it (#1018).

However, respondents noted inefficiencies such as investigators sometimes having to do things outside their expertise to meet Program goals. One interviewee offered their perspective on the reason for challenges with alignment:

We task a lot of investigators with doing things in ECHO that is not in their area of expertise. We do it for the inclusiveness and to make sure voices are heard, but… They're not getting paid to figure out how to track publications or even things around biospecimens… But in ECHO, we take a lot of that responsibility, hand it off to a committee who doesn't really have to sweat about it, because at the end of the day, if it doesn't get done, they weren't getting paid to do it… That committee comes up with something and we roll it out and still half of everybody’s mad. It’s that balance of being able to listen to voices, but get things done (#1019).

Implementation

We considered implementation in terms of internal stakeholders’ fidelity to elements of ECHO, including the data collection and analysis infrastructure. The GOITs that aligned with this concept concerned data, biospecimens, assays, and completeness of data collection to support program-wide research (Table 6). During our study, ECHO was meeting some goals, while progress toward others lagged. The qualitative data reflected challenges with alignment and opportunities for enhanced collaboration.

Table 6. Goal C: biospecimens and assays

DAC = data analysis center; ECHO = environmental influences on child health outcomes; GOIT = goals, outcomes, indicators, and targets; HHEAR = human health exposure analysis resource; N = number.

Multiple respondents cited challenges around flexibility with data submission and extant data harmonization. Some interviewees expressed difficulty managing diverse stakeholders with different research priorities:

Anything that we do to promote uniformity makes cohorts less happy. We have to have the cohorts, we have to have people engaged with particular participants, and wanting to do the ECHO protocol, but if everybody’s going in 70 directions, we generally do not get anything done (#1019).

However, most interviewees felt that the ECHO-wide sharing and data management infrastructure was relatively well organized and helpful, but that getting new people access to and trained in the systems was often difficult given staff turnover over such a long project: “I mean it takes six months to get a research assistant trained up. I mean, yes, you’re behind the curve when they start” (#1009).

Discussion

Our findings provide insights into ECHO internal stakeholders’ perceptions about the contextual factors affecting program implementation. While (1) collaboration and team science, (2) communication and decision making, and (3) DEI were major cross-cutting themes, there was considerable heterogeneity among interviewees’ perceptions and descriptions of facilitators and barriers to the implementation of ECHO. Whether certain factors were considered to facilitate or hinder program success reflected the variation in perspectives, training, roles, etc. across the program. This diversity of experiences illustrates that “best practices” for large research consortia may not be one-size-fits-all, and implementation and engagement likely need to be tailored for different groups of both internal and external stakeholders. While individuals were overall committed to and excited by the opportunities afforded by multisite collaboration, the size and complexity of the program sometimes left them feeling frustrated or adrift.

These findings may reflect the dearth of published implementation studies about large multidisciplinary research consortia. The literature so far has centered on consortia focused on a specific disease state (e.g., human immunodeficiency virus (HIV)), which allows for more immediate alignment of goals and procedures, or on supporting collaboration across smaller projects, for which the issues are different [29–31]. Additionally, previous research to understand team science in the context of large research consortia has employed exclusively quantitative methods (e.g., surveys and social network analysis) [7,8,32–36].

Our study contributes a theory-informed mixed methods approach to understanding the implementation of a large, multidisciplinary research consortium. Our findings also offer an in-depth understanding of why and how stakeholders collaborate and what works to produce impactful science. Previous analysis of research stakeholder engagement has offered a classification of four types of impact: (1) conceptual (changing knowledge, understanding, and attitudes); (2) instrumental (changing policy and practice, given research findings); (3) capacity-building (changing researchers’ ability to conduct future work); and (4) connectivity (shaping the existence and strength of networks of people and organizations using the research) [37,38]. In this context, our qualitative and quantitative data provide insight into how the infrastructure and contextual factors of a large longitudinal research consortium produce impact for its stakeholders. The overarching message from respondents was that team science, co-learning, and collaboration were the most valued and important elements; they saw opportunities and lessons learned around ways to enhance communication and collaboration. Additionally, inclusivity for both internal ECHO stakeholders (across components, roles, etc.) and external stakeholders (especially engagement with underrepresented and historically marginalized communities) was commonly identified as an overarching, guiding principle going forward.

Limitations

It is possible that the interviewees were not representative of all ECHO stakeholders, but our purposeful and targeted sampling aimed for representation as broad as possible. One major limitation is that we were not able to include ECHO participants in the interviews, which would have provided an important perspective, especially with regard to the Recipient PRISM domain. Future implementation studies should include participant voices and selected representation from every ECHO component (e.g., HHEAR), as well as other stakeholder groups external to ECHO, to make the picture more complete. Ongoing work is assessing participant experience and perceptions of burden directly from the participants themselves through a participant feedback instrument that is part of the ECHO protocol. This will provide valuable insight into how elements of participant experience (time spent, feeling that participation is valued, satisfaction with the level of return of results, the role of compensation, and duration of study involvement) vary by participant characteristics and interactions with the ECHO study. Additionally, although the Year 6 GOITs may not have been the most accurate way to measure and evaluate ECHO Program implementation quantitatively, during the interviews we explored respondents’ overall perceptions of the GOITs as a useful resource for program implementation. Future research should include considerations of maintenance (the “M” in RE-AIM), especially considering how crucial ongoing participant engagement is for long-term observational studies.

This research may be analytically generalizable and transferable to other large research consortia and could benefit a wide range of stakeholders, including funding organizations [39]. The implementation science structure organizes mixed methods data collection and analysis to provide a real-time understanding of implementation and to help ensure impactful science. Finally, our interviewees’ perspectives provide investigators and researchers with insights into participation in large transdisciplinary research consortia.

Acknowledgments

ECHO Collaborators Acknowledgment: The authors wish to thank our ECHO Colleagues; the medical, nursing, and program staff; and the children and families participating in the ECHO cohorts. Specifically, we would like to thank Melanie Kelly and Terren Green for their extensive help preparing and editing the manuscript. We also acknowledge the contribution of the following ECHO Program collaborators:

ECHO Components – Coordinating Center: Duke Clinical Research Institute, Durham, North Carolina: Smith PB, Newby LK; Data Analysis Center: Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland: Jacobson LP; Research Triangle Institute, Durham, North Carolina: Catellier DJ; Person-Reported Outcomes Core: Northwestern University, Evanston, Illinois: Gershon R, Cella D.

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding statement

Research reported in this publication was supported by the Environmental influences on Child Health Outcomes (ECHO) Program, Office of the Director, National Institutes of Health, under Award Numbers U2COD023375 (Coordinating Center), U24OD023382 (Data Analysis Center), U24OD023319 with co-funding from the Office of Behavioral and Social Science Research (PRO Core), UH3OD023320 (Aschner), UH3OD023248 (Dabelea), UH3OD023337 (Wright), UH3OD023253 (Camargo), UH3OD023251 (Alshawabkeh).

Competing interests

The authors have no disclosures to declare.

References

1. Wuchty S, Jones BF, Uzzi B. The increasing dominance of teams in production of knowledge. Science. 2007;316(5827):1036–1039.
2. Hall KL, Stokols D, Stipelman BA, et al. Assessing the value of team science: a study comparing center- and investigator-initiated grants. Am J Prev Med. 2012;42(2):157–163.
3. Hohl SD, Knerr S, Thompson B. A framework for coordination center responsibilities and performance in a multi-site, transdisciplinary public health research initiative. Res Evaluat. 2019;28(3):279–289.
4. Aarons GA, Reeder K, Miller CJ, Stadnick NA. Identifying strategies to promote team science in dissemination and implementation research. J Clin Transl Sci. 2019;4(3):180–187.
5. Luke DA, Carothers BJ, Dhand A, et al. Breaking down silos: mapping growth of cross-disciplinary collaboration in a translational science initiative. Clin Transl Sci. 2015;8(2):143–149.
6. Guise J-M, Winter S, Fiore SM, Regensteiner JG, Nagel J. Organizational and training factors that promote team science: a qualitative analysis and application of theory to the National Institutes of Health’s BIRCWH career development program. J Clin Transl Sci. 2017;1(2):101–107.
7. Thompson LC, Hall KL, Vogel AL, Park CH, Gillman MW. Conceptual models for implementing solution-oriented team science in large research consortia. J Clin Transl Sci. 2021;5(1):e139.
8. McCreight MS, Rabin BA, Glasgow RE, et al. Using the Practical, Robust Implementation and Sustainability Model (PRISM) to qualitatively assess multilevel contextual factors to help plan, implement, evaluate, and disseminate health services programs. Transl Behav Med. 2019;9(6):1002–1011.
9. Øvretveit J. Understanding the conditions for improvement: research to discover which context influences affect improvement success. BMJ Qual Saf. 2011;20(Suppl 1):i18–i23.
10. Hall KL, Olster DH, Stipelman BA, Vogel AL. News from NIH: resources for team-based research to more effectively address complex public health problems. Transl Behav Med. 2012;2(4):373–375.
11. Etingen B, Patrianakos J, Wirth M, et al. TeleWound practice within the Veterans Health Administration: protocol for a mixed methods program evaluation. JMIR Res Protoc. 2020;9(7):e20139.
12. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34(4):228–243.
13. Holtrop JS, Estabrooks PA, Gaglio B, et al. Understanding and applying the RE-AIM framework: clarifications and resources. J Clin Transl Sci. 2021;5(1):131.
14. Glasgow RE, Harden SM, Gaglio B, et al. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7:64.
15. Gillman MW, Blaisdell CJ. Environmental influences on child health outcomes, a research program of the National Institutes of Health. Curr Opin Pediatr. 2018;30(2):260–262.
16. Annett RD, Chervinskiy S, Chun TH, et al. IDeA States Pediatric Clinical Trials Network for underserved and rural communities. Pediatrics. 2020;146(4):e20200290.
17. Kristensen GK, Ravn MN. The voices heard and the voices silenced: recruitment processes in qualitative interview studies. Qual Res. 2015;15(6):722–737.
18. Goodman LA. Comment: on respondent-driven sampling and snowball sampling in hard-to-reach populations and snowball sampling not in hard-to-reach populations. Sociol Methodol. 2011;41(1):347–353.
19. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533–544.
20. Guest G, Bunce A, Johnson L. How many interviews are enough? Field Methods. 2006;18(1):59–82.
21. Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076.
22. Vindrola-Padros C. Rapid Ethnographies: A Practical Guide. Cambridge; New York, NY: Cambridge University Press; 2021.
23. Vindrola-Padros C, Vindrola-Padros B. Quick and dirty? A systematic review of the use of rapid ethnographies in healthcare organisation and delivery. BMJ Qual Saf. 2018;27(4):321–330.
24. Sangaramoorthy T, Kroeger KA. Rapid Ethnographic Assessments: A Practical Approach and Toolkit for Collaborative Community Research. Oxford; New York, NY: Routledge; 2020.
25. Guetterman TC, Fetters MD, Creswell JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13(6):554–561.
26. Halfon N, Forrest CB, Lerner RM, Faustman EM. Handbook of Life Course Health Development. Cham: Springer Nature; 2018.
27. Knapp EA, Kress AM, Parker CB, et al. The Environmental influences on Child Health Outcomes (ECHO)-wide cohort. Am J Epidemiol. 2023;192(8):1249–1263.
28. Hazra R, Tenney S, Shlionskaya A, et al. DASH, the data and specimen hub of the National Institute of Child Health and Human Development. Sci Data. 2018;5(1):180046.
29. Brazier E, Maruri F, Duda SN, et al. Implementation of “Treat-all” at adult HIV care and treatment sites in the global IeDEA consortium: results from the site assessment survey. J Int AIDS Soc. 2019;22(7):e25331.
30. Tucker JD, Iwelunmor J, Abrams E, et al. Accelerating adolescent HIV research in low-income and middle-income countries: evidence from a research consortium. AIDS. 2021;35(15):2503–2511.
31. Morrison M, Mourby M, Gowans H, Coy S, Kaye J. Governance of research consortia: challenges of implementing responsible research and innovation within Europe. Life Sci Soc Policy. 2020;16(1):13.
32. Turner JR, Baker R. Collaborative research: techniques for conducting collaborative research from the science of team science (SciTS). Adv Dev Hum Resour. 2020;22(1):72–86.
33. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007;28(1):413–433.
34. Greene SM, Hart G, Wagner EH. Measuring and improving performance in multicenter research consortia. JNCI Monographs. 2005;2005(35):26–32.
35. Fagan J, Eddens KS, Dolly J, Vanderford NL, Weiss H, Levens JS. Assessing research collaboration through co-authorship network analysis. J Res Adm. 2018;49(1):76–99.
36. Holtrop JS, Rabin BA, Glasgow RE. Qualitative approaches to use of the RE-AIM framework: rationale and methods. BMC Health Serv Res. 2018;18(1):177.
37. Panter-Brick C. Energizing partnerships in research-to-policy projects. Am Anthropol. 2022;124(4):751–766.
38. Shaxson L. Achieving policy impact: guidance note. London: DEGRP; 2016. http://degrp.squarespace.com/impact/. Accessed December 20, 2022.
39. Smith B. Generalizability in qualitative research: misunderstandings, opportunities and recommendations for the sport and exercise sciences. Qual Res Sport Exerc Health. 2018;10(1):137–149.
40. LeWinn KZ, Caretta E, Davis A, Anderson AL, Oken E; program collaborators for Environmental influences on Child Health Outcomes. SPR perspectives: Environmental influences on Child Health Outcomes (ECHO) Program: overcoming challenges to generate engaged, multidisciplinary science. Pediatr Res. 2022;92(5):1262–1269.