
Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative

Published online by Cambridge University Press:  14 October 2025

Christophe Maes
Affiliation:
Faculty of Medicine and Health Sciences, Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
Dipak Kalra
Affiliation:
European Institute for Innovation through Health Data, Ghent, Belgium
Tracy Acito
Affiliation:
Regeneron Pharmaceuticals, Tarrytown, NY, USA
Nadir Ammour
Affiliation:
Sanofi Recherche et Développement, Paris, France
Paul Basset
Affiliation:
Sanofi Recherche et Développement, Paris, France
Sarah Burge
Affiliation:
Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
Peter Casteleyn
Affiliation:
European Institute for Innovation through Health Data, Ghent, Belgium
Ross Caldow
Affiliation:
Regeneron Pharmaceuticals, Tarrytown, NY, USA
Camille Couvert
Affiliation:
Sanofi Recherche et Développement, Paris, France
Amy Cramer
Affiliation:
Johnson & Johnson, Beerse, Belgium
Chris Harrison
Affiliation:
AstraZeneca, One MedImmune Way, Gaithersburg, MD, USA
Joeri Holtzem
Affiliation:
Johnson & Johnson, Beerse, Belgium
Pavitra Mariappan
Affiliation:
AstraZeneca, One MedImmune Way, Gaithersburg, MD, USA
Paul Jacobs
Affiliation:
Regeneron Pharmaceuticals, Tarrytown, NY, USA
Lars Fransson
Affiliation:
AstraZeneca R&D, Gothenburg, Sweden
Veronique Berthou
Affiliation:
Sanofi Recherche et Développement, Paris, France
Laurice Jackson
Affiliation:
Eli Lilly & Company, Lilly Corporate Center, Indianapolis, IN, USA
Nancy Wetzel
Affiliation:
Eli Lilly & Company, Lilly Corporate Center, Indianapolis, IN, USA
Christopher Thompson
Affiliation:
Eli Lilly & Company, Lilly Corporate Center, Indianapolis, IN, USA
Sharon Klein
Affiliation:
Eli Lilly & Company, Lilly Corporate Center, Indianapolis, IN, USA
Robert Green
Affiliation:
Johnson & Johnson, Beerse, Belgium
Fakhry Kaoukdji
Affiliation:
AstraZeneca R&D, Gothenburg, Sweden
Michael Ward
Affiliation:
Eli Lilly & Company, Lilly Corporate Center, Indianapolis, IN, USA
Felix Nensa
Affiliation:
Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
Joe Lengfellner
Affiliation:
Memorial Sloan Kettering Cancer Center, New York, NY, USA
Anna Patruno
Affiliation:
Memorial Sloan Kettering Cancer Center, New York, NY, USA
Dawn Snow
Affiliation:
Sanofi Recherche et Développement, Paris, France
Isabel Virchow
Affiliation:
Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
Angela Fritsche
Affiliation:
Mayo Clinic Comprehensive Cancer Center Clinical Trials Office, Rochester, MN, USA
Pascal Coorevits
Affiliation:
Faculty of Medicine and Health Sciences, Department of Public Health and Primary Care, Ghent University, Ghent, Belgium
Mats Sundgren*
Affiliation:
European Institute for Innovation through Health Data, Ghent, Belgium
*
Corresponding author: Mats Sundgren; Email: mats.sundgren@i-hd.eu

Abstract

eSource – particularly EHR-to-EDC – is an emerging paradigm in clinical research that enables automated transfer of electronic health record (EHR) data into electronic data capture (EDC) systems, with the potential to reduce site burden, improve data quality and accelerate oncology clinical trial workflows. However, widespread implementation remains limited due to technical, regulatory and operational barriers. To address these challenges, the European Institute for Innovation through Health Data (i~HD) launched the eSource Scale-Up Task Force in 2024. This multi-stakeholder initiative brings together leading oncology centres and pharmaceutical sponsors to establish a consensus-driven roadmap for eSource adoption. Central to this effort are three foundational resources: readiness criteria for early adopters, a performance indicator framework for monitoring success and an operational playbook to guide implementation. This article provides a structured overview of the Task Force’s objectives, collaborative model and outputs, with specific attention to its focus on interoperability, regulatory alignment and real-world validation. While initially developed for oncology, the Task Force’s framework is applicable across therapeutic areas characterized by data-intensive workflows.

Information

Type
Perspective
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Impact statement

Manual data workflows remain one of the most persistent barriers to efficient, high-quality oncology research – consuming staff time, introducing errors and delaying patient access to innovative treatments. This article addresses those systemic inefficiencies through a roadmap for eSource technology adoption, co-created by a cross-industry consortium of leading hospitals and pharmaceutical sponsors.

The framework outlined here enables hospitals and sponsors to automate EHR-to-EDC data transfer, reducing site burden and improving data integrity. More importantly, it facilitates faster, safer and more inclusive trials – laying the foundation for precision medicine studies that rely on complex genomic, imaging, and real-world datasets. By aligning with global regulatory expectations and providing practical tools validated by early adopters, this initiative empowers research centres to implement eSource sustainably and at scale.

The broader value of this work lies in its transferability across therapeutic areas – extending beyond oncology to fields such as neurology and rare diseases. It illustrates how collaborative governance, implementation science and interoperability standards can converge to bridge the longstanding divide between clinical care and research. This shift is vital not only for improving trial efficiency but also for ensuring that diverse patient populations gain faster and more equitable access to cutting-edge therapies.

Introduction: rising complexity

Oncology clinical trials are entering an era of profound transformation, driven by the exponential growth of clinical data, increasingly stringent regulatory requirements and the persistent inefficiencies of manual data handling. Traditional trial workflows require research teams to extract, transcribe and validate patient data from EHRs into EDC systems – a duplicative process that affects more than half of all trial data elements and demands extensive verification, often consuming substantial operational resources (Coulter, Reference Coulter2023; Hamidi et al., Reference Hamidi, Eisenstein, Garza, Morales, Edwards, Rocca, Cramer, Singh, Stephenson-Miles, Syed, Wang, Lanham, Facile, Pierson, Collins, Wei and Zozus2024). As precision medicine accelerates the inclusion of genomic, imaging, biomarker and real-world data in cancer studies, the sheer volume and complexity of data per patient have become overwhelming for research sites and sponsors (Sundgren et al., Reference Sundgren, Santiago and Lengfellner2024).

The adoption of eSource technology – particularly EHR-to-EDC data integration – has emerged as a transformative solution in clinical research (Cramer et al., Reference Cramer, King, Buckley, Casteleyn, Ennis, Hamidi, Rodrigues, Snyder, Vattikola and Eisenstein2024) (Figure 1). The term eSource, as used by regulators such as the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) and widely adopted across the pharmaceutical industry, refers to the direct, automated and regulatory-compliant transfer of data from hospital EHR systems to clinical study databases. This approach significantly reduces site burden, improves data accuracy and accelerates trial timelines. Built on interoperability standards such as HL7® FHIR® (Fast Healthcare Interoperability Resources) and SMART on FHIR APIs, eSource also enables structured and, increasingly, AI-assisted extraction of clinical data (Chopra et al., Reference Chopra, Annu, Shin, Munjal, Priyanka and Emran2023; Nashwan and Hani, Reference Nashwan and Hani2023; Chakrabarty and Mahajan, Reference Chakrabarty and Mahajan2024), including from unstructured sources such as clinician notes, pathology reports and radiology narratives.

Figure 1. eSource technology vs existing methods.

Despite its potential, the adoption of eSource across the industry remains fragmented. Persistent barriers include a lack of system interoperability, evolving regulatory interpretations and varying degrees of site technical and organizational readiness. Many institutions face challenges aligning their EHR systems with sponsor EDC platforms, while others struggle with limited staffing or training to implement new workflows. As a result, scalable, industry-wide transformation has proven difficult and a unified framework for implementation has been lacking.

To address these barriers, the European Institute for Innovation through Health Data (i~HD) launched the eSource Scale-Up Task Force in Q1 2024. This impartial, cross-industry consortium unites healthcare providers, pharma sponsors and regulatory experts in a shared mission to accelerate eSource adoption across oncology research centres.

In this Perspective article, we outline the collaborative governance and implementation roadmap developed by the i~HD Task Force to guide scalable adoption of eSource trials. We present how the Task Force is bridging the gap between innovation and implementation by synthesizing insights from pilot programs, institutional leaders and implementation science. The article introduces a strategic roadmap anchored by practical tools and real-world validation to support eSource scale-up in oncology – and eventually across other data-intensive therapeutic areas. It builds on empirical findings from six large oncology centres, which documented significant operational improvements, including a 99% reduction in transcription errors and over 50% reduction in site burden (Sundgren et al., Reference Sundgren, Andrews, Burge, Bush, Fritsche, Nensa and Lengfellner2025).

The challenge: redundant workflows and data burden

The operational demands of oncology clinical trials are rapidly intensifying, driven by the evolution of personalized medicine, real-world data and novel biomarkers. As the volume and complexity of clinical data grow, so too do the burdens placed on research sites and sponsors. At the heart of this issue lies a fundamental inefficiency: the manual extraction, transcription and validation of data from EHRs into EDC systems. This duplication affects more than half of all trial data and consumes significant resources that could otherwise be redirected toward scientific advancement and patient benefit.

Increasing data complexity in modern oncology trials. Today’s oncology trials are characterized by their data intensity. The rise of precision oncology has led to the routine integration of complex datasets such as genomic profiles, radiological imaging and patient-reported outcomes alongside standard clinical data. In modern oncology studies, the data burden per patient has expanded dramatically. For example, Phase I oncology protocols now collect over 27,000 data points per patient – more than six times the average in non-oncology trials – with Phase III oncology studies collecting more than twice as many data points as their non-oncology counterparts (Tufts CSDD, 2022). Each of these data elements must be documented, reviewed and often verified manually.

This growing complexity is exacerbated by siloed health IT systems, non-standardized documentation and the use of unstructured data. Clinical notes, radiology reports and pathology results are frequently embedded in free-text formats, requiring human interpretation and re-entry into EDC systems. This not only prolongs data entry, but increases the potential for transcription errors, inconsistencies and missing information – all of which threaten data quality and regulatory compliance.

The burden of redundant data entry. Clinical research coordinators (CRCs), who play a critical role in trial execution, carry the weight of redundant data handling. Multiple studies, each with distinct case report forms (CRFs) and timelines, place extraordinary pressure on site personnel. As highlighted in recent interviews with CRCs at Memorial Sloan Kettering Cancer Center and other eSource Champion sites, manual data entry is not just time-consuming – it detracts from high-value activities like patient engagement, protocol adherence and real-time data monitoring.

The burden extends beyond staff time. Redundant workflows necessitate extensive source data verification (SDV) by sponsors and contract research organizations (CROs), further inflating timelines and costs (Hamidi et al., Reference Hamidi, Eisenstein, Garza, Morales, Edwards, Rocca, Cramer, Singh, Stephenson-Miles, Syed, Wang, Lanham, Facile, Pierson, Collins, Wei and Zozus2024). Estimates from EHR-to-EDC initiatives suggest an average of five minutes per data point is required for manual entry and verification in oncology trials – translating to many thousands of hours per study. In addition to transcription itself, manual data entry generates substantial downstream workload, including query resolution, data reconciliation and extensive data review cycles (Ehidiamen and Oladapo, Reference Ehidiamen and Oladapo2024) – activities that are estimated to account for 25%–40% of total data management costs in oncology trials (Hamidi et al., Reference Hamidi, Eisenstein, Garza, Morales, Edwards, Rocca, Cramer, Singh, Stephenson-Miles, Syed, Wang, Lanham, Facile, Pierson, Collins, Wei and Zozus2024).
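To make the scale of this burden concrete, the short calculation below applies the five-minutes-per-data-point estimate cited above to a hypothetical study. The enrolment figure and the share of data points requiring manual transcription are illustrative assumptions, not figures from the Task Force.

```python
# Back-of-the-envelope illustration of manual transcription burden, using the
# ~5 minutes per data point estimate cited in the text. The trial size and the
# share of data points that actually require manual EHR-to-EDC transcription
# are hypothetical placeholders.

MINUTES_PER_DATA_POINT = 5          # manual entry plus verification (from text)
patients = 300                       # hypothetical multi-centre enrolment
manual_points_per_patient = 2_000    # hypothetical subset needing transcription

total_minutes = patients * manual_points_per_patient * MINUTES_PER_DATA_POINT
total_hours = total_minutes / 60
print(f"Estimated manual transcription effort: {total_hours:,.0f} hours")
# -> 50,000 hours under these illustrative assumptions
```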

Compliance and data integrity challenges. Regulatory agencies worldwide have emphasized the importance of high-quality, traceable clinical data. Standards such as ALCOA+ (Attributable, Legible, Contemporaneous, Original, Accurate, plus Complete, Consistent, Enduring and Available) and compliance frameworks like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) underscore the necessity of data that is not only accurate but also secure and auditable (Ehidiamen and Oladapo, Reference Ehidiamen and Oladapo2024). However, when data is manually transcribed from EHRs into sponsor systems, the audit trail can become fragmented. Ensuring contemporaneous and original data becomes significantly more difficult, especially when EHR systems are not designed to support regulatory-grade documentation workflows. Furthermore, discrepancies between EHRs and EDC entries introduce potential risks during inspections and audits. The lack of integration between clinical care and research systems often leads to conflicting information, data reconciliation delays and regulatory red flags.

Impact on timelines, cost and trial viability. Delays caused by data collection and verification bottlenecks can significantly impact a trial’s critical path. Time lost to manual workflows translates to slower patient enrolment, delayed database lock and prolonged time to regulatory submission. For sponsors in competitive oncology indications, such delays can result in missed market opportunities and reduced investor confidence. For patients, lengthy timelines delay the identification of safety signals and limit access to breakthrough therapies. From a financial standpoint, the inefficiencies of current processes scale dramatically with trial size. In multi-centre, global oncology trials, duplicated processes across dozens or even hundreds of sites can add millions in operational costs – resources that could otherwise support trial expansion, diversity initiatives, or exploratory endpoints. Compounding these challenges, recent NIH funding cuts in the U.S. – including reductions in indirect cost reimbursements from 60% to 15% – have significantly constrained research infrastructure at cancer centres, further amplifying the need for more efficient, digitally supported trial operations (Busiek, Reference Busiek2025; Rhodes, Reference Rhodes2025).

Fragmented stakeholder ecosystem. Lastly, the complexity of oncology trials is compounded by a fragmented ecosystem of stakeholders. Clinical sites, sponsors, CROs, EHR vendors, EDC providers and regulatory bodies often operate with misaligned goals and disconnected systems. Without a unified framework for interoperability, data flow remains linear, manual and error-prone. Attempts to streamline data collection often stall at the site level due to a lack of technical infrastructure, unclear regulatory guidance, or insufficient support from sponsors. As a result, even when sites are willing to adopt innovative solutions like eSource, they may lack the resources or organizational mandate to implement them effectively (Cramer et al., Reference Cramer, King, Buckley, Casteleyn, Ennis, Hamidi, Rodrigues, Snyder, Vattikola and Eisenstein2024).

The opportunity: eSource and EHR-to-EDC integration

Amid the rising complexity of oncology clinical trials, the adoption of eSource technology – particularly EHR-to-EDC integration – has emerged as a practical, scalable and transformative solution. eSource enables the direct, automated transfer of clinical data from hospital EHR systems into study databases, eliminating redundant manual data transcription, minimizing errors and accelerating timelines. More than a technical upgrade, eSource represents a paradigm shift in how clinical data is collected, managed and validated across the research ecosystem.

At its core, eSource leverages modern interoperability standards, such as HL7® FHIR® and SMART on FHIR APIs, to securely map and transfer structured clinical data from source systems into EDC platforms. These standards allow seamless connectivity between disparate health IT environments, ensuring that data remains consistent, traceable and compliant with regulatory expectations. Importantly, eSource is not a replacement for research staff, but rather a tool that enhances efficiency and liberates site personnel from low-value, repetitive tasks.
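As a minimal sketch of what this connectivity looks like in practice, the example below queries a FHIR server for one patient's laboratory Observations. The endpoint URL, access token and patient identifier are placeholders; in a real deployment the token would be obtained through a SMART on FHIR (OAuth 2.0) authorization flow agreed with the site.

```python
# Minimal sketch of pulling structured data from an EHR's FHIR API, as one
# building block of an EHR-to-EDC pipeline. The base URL, access token and
# patient identifier are placeholders, not a specific vendor's configuration.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"        # hypothetical FHIR endpoint
TOKEN = "<access-token-from-SMART-on-FHIR-flow>"  # placeholder credential

def fetch_lab_observations(patient_id: str, loinc_code: str) -> list[dict]:
    """Return FHIR Observation resources for one patient and one LOINC code."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": f"http://loinc.org|{loinc_code}"},
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle of search results
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Example: haemoglobin results (LOINC 718-7) for an illustrative patient ID.
observations = fetch_lab_observations("example-patient-123", "718-7")
```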

The advantages of EHR-to-EDC integration are compelling. Automation significantly reduces the risk of human error associated with manual entry, while simultaneously ensuring higher data quality and completeness. Regulatory agencies such as the FDA, EMA and MHRA (Medicines and Healthcare products Regulatory Agency) have increasingly supported eSource approaches, recognizing their ability to uphold ALCOA+ principles and streamline clinical trial oversight. With proper validation, eSource implementations can also reduce the burden of SDV by sponsors and CROs, offering direct cost and time savings.

The eSource transfer process involves moving structured EHR data into EDC systems or sponsor databases for clinical studies, including but not limited to randomized clinical trials (RCTs). This data typically includes laboratory results, vital signs, medications, diagnoses and demographics, standardized through coding systems like ICD, SNOMED CT and LOINC. The process is governed by rigorous quality assurance, patient consent and adherence to global eSource regulatory guidelines. Importantly, eSource enhances existing workflows without replacing manual entry in scenarios where human validation or interpretation remains necessary.
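The sketch below illustrates, under stated assumptions, how a LOINC-coded FHIR Observation might be flattened into an EDC-style record. The target field names loosely follow CDASH/SDTM naming conventions but are placeholders rather than any sponsor's actual eCRF definition.

```python
# Illustrative mapping of a LOINC-coded FHIR Observation to a flat, EDC-style
# record. The target field names (SUBJID, LBTESTCD, ...) loosely follow CDASH/
# SDTM laboratory-domain conventions but are placeholders only.

def observation_to_edc_row(obs: dict, subject_id: str) -> dict:
    coding = obs["code"]["coding"][0]            # first coding, e.g. LOINC
    value = obs.get("valueQuantity", {})
    return {
        "SUBJID": subject_id,
        "LBTESTCD": coding.get("code"),          # e.g. "718-7"
        "LBTEST": coding.get("display"),         # e.g. "Hemoglobin [Mass/volume] in Blood"
        "LBORRES": value.get("value"),           # numeric result
        "LBORRESU": value.get("unit"),           # unit, e.g. "g/dL"
        "LBDTC": obs.get("effectiveDateTime"),   # collection date/time (ISO 8601)
        "SOURCE": "EHR (FHIR Observation)",      # provenance for the audit trail
    }
```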

Case studies from early adopters underscore these benefits. At Mayo Clinic, for example, the deployment of eSource tools has allowed staff to focus more on patient interaction and trial coordination, rather than manual transcription. At City of Hope, transcription errors have been virtually eliminated and average data entry time per subject has decreased from 15 minutes to under 5 minutes. These real-world gains highlight the scalability and reproducibility of eSource when implemented within a structured and collaborative framework (Sundgren et al., Reference Sundgren, Andrews, Burge, Bush, Fritsche, Nensa and Lengfellner2025).

Technological innovation is further expanding the capabilities of eSource. Artificial intelligence (AI) and machine learning (ML) (Adamson et al., Reference Adamson, Waskom, Blarre, Kelly, Krismer, Nemeth, Gipetti, Ritten, Harrison, Ho, Linzmayer, Bansal, Wilkinson, Amster, Estola, Benedum, Fidyk, Estévez, Shaphiro and Cohen2023) are increasingly applied to extract meaning from unstructured clinical data, such as radiology reports, clinician notes and pathology narratives. This is particularly relevant in oncology, where critical information often resides in free-text formats that are difficult to standardize manually.
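The AI/ML pipelines cited above cannot be reproduced in a short example, so the following sketch uses a deliberately simplified, rule-based stand-in to show the kind of structured output such extraction aims for: one discrete element pulled from a free-text pathology snippet, with the supporting evidence text retained for human review.

```python
# Deliberately simplified, rule-based stand-in for the ML-based extraction
# referenced in the text: pull one structured element (ER status) out of a
# free-text pathology snippet. The pattern and output fields are illustrative.
import re

note = "Invasive ductal carcinoma. ER positive (90%), PR negative, HER2 1+."

match = re.search(r"\bER\s+(positive|negative)", note, flags=re.IGNORECASE)
er_status = {
    "element": "estrogen_receptor_status",
    "value": match.group(1).lower() if match else None,
    "evidence_text": match.group(0) if match else None,   # supports review/audit
}
print(er_status)
# -> {'element': 'estrogen_receptor_status', 'value': 'positive', 'evidence_text': 'ER positive'}
```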

Despite these promising advances, eSource adoption at scale has been limited by stakeholder fragmentation, inconsistent technical infrastructure, and unclear implementation pathways. Although pilot initiatives such as EHR2EDC (EIT Health), TransCelerate’s eSource project and the FDA’s Real-World Evidence guidance have demonstrated feasibility, the field still lacks a unified, cross-industry roadmap (Claerhout et al., Reference Claerhout, Kalra, Mueller, Singh, Ammour, Meloni, Blomster, Hopley, Kafatos, Garvey, Kuhn, Lewi, Vannieuwenhuyse, Marchal, Patel, Schindler and Sundgren2019; Ammour et al., Reference Ammour, Griffon, Djadi-Prat, Chatellier, Lewi, Todorovic, Gomez de la Cámara, Garcia Morales, Testoni, Nanni, Schindler, Sundgren, Garvey, Victor, Cariou and Daniel2023; Mueller et al., Reference Mueller, Herrmann, Cichos, Remes, Junker, Hastenteufel and Mundhenke2023). Common challenges include harmonizing data formats, ensuring interoperability across vendor platforms, securing regulatory confidence and articulating the return on investment for long-term adoption.

In summary, eSource and EHR-to-EDC integration offer a future-ready solution to the operational challenges of modern oncology trials. By improving data quality, reducing administrative burden and accelerating research timelines, this approach has the potential to transform how clinical trials are conducted. Realizing this potential requires not only technological readiness but also the coordinated engagement of stakeholders across the research ecosystem.

A strategic response: the i~HD eSource Scale-Up Task Force

Based on the experience of the Task Force’s core members and the publications referenced in this article, the promise of eSource to streamline clinical trials and reduce burden for research sites and sponsors is backed by growing evidence from early adopters. Yet, despite its potential, large-scale adoption remains elusive – hindered by fragmented implementation strategies, regulatory ambiguity and varied technical readiness across clinical research sites. To address these systemic challenges, the European Institute for Innovation through Health Data (i~HD) launched the eSource Scale-Up Task Force in 2024.

This cross-industry initiative brings together key stakeholders – academic research centres, hospitals, sponsors and regulatory experts – to drive a unified, scalable approach to eSource implementation, starting with oncology and expanding to other therapeutic areas. i~HD’s experience in data interoperability, multi-stakeholder engagements and governance through initiatives like EHR4CR, EHR2EDC and EU-PEARL (Dupont et al., Reference Dupont, Beresniak, Kalra, Coorevits and De Moor2018; Ammour et al., Reference Ammour, Griffon, Djadi-Prat, Chatellier, Lewi, Todorovic, Gomez de la Cámara, Garcia Morales, Testoni, Nanni, Schindler, Sundgren, Garvey, Victor, Cariou and Daniel2023; Lombardo et al., Reference Lombardo, Couvert, Kose, Begum, Spiertz, Worrel, Didden, Sforzini, Todorovic, Lewi, Brown, Vaterkowski, Gullet, Amashi-Hartoonian, Griffon, Pais, Rodriguez Navarro, Kremer, Maes, Tan, Moinat, Ferrier, Pariante, Kalra, Ammour and Kalko2023) laid a strong foundation for the Task Force. Its neutral, vendor-agnostic stance ensures collaborative engagement and practical, evidence-driven progress. Membership is invitation-based and designed to foster open dialogue while maintaining confidentiality where required.

While the framework was co-developed with oncology sites and sponsors, its structure – anchored in readiness criteria, KPIs and phased playbook guidance – was intentionally designed to be domain-agnostic. Transfer to other therapeutic areas does not require a new model, but rather adaptation of disease-specific data elements, workflows and regulatory considerations. For example, oncology-centric metrics such as imaging or biomarker data flows can be replaced with cardiology-specific endpoints or neurology-focused assessments while preserving the same governance, interoperability and implementation structures.

The Task Force’s core members include Cambridge University Hospitals, Mayo Clinic, Memorial Sloan Kettering Cancer Center and University Hospital of Essen, alongside leading pharmaceutical companies – AstraZeneca, Johnson & Johnson, Lilly, Regeneron and Sanofi (Figure 2). These organizations bring operational and strategic expertise across clinical operations, informatics and data science. The selection of these sponsors and hospitals as core team members is based on their demonstrated leadership: they are either actively conducting eSource-enabled trials or are in the advanced stages of implementing EHR-to-EDC integration. Their hands-on experience ensures that the Task Force’s tools and recommendations are grounded in real-world operational contexts.

Figure 2. The purpose and core members of the i~HD Scale Up Task Force.

The Task Force focuses on several high-impact domains:

  • Site Enablement: Building institutional capacity through readiness assessments, workflow alignment and staff training.

  • Data Interoperability: Standardizing clinical data exchange using HL7® FHIR® and guidance on common data elements (CDE) to facilitate cross-platform integration.

  • Regulatory Engagement: Aligning efforts with global authorities (e.g., FDA, EMA) to interpret eSource guidance under Good Clinical Practice (GCP), GDPR, HIPAA and International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) frameworks.

  • Implementation Science: Capturing lessons learned from early adopters and embedding them into repeatable, context-sensitive models.

A distinctive feature of the Task Force is the creation of dedicated reference groups – one for sites and one for industry sponsors. Open to organizations progressing toward eSource adoption, these groups function as communities of practice where members validate tools, troubleshoot barriers and co-develop scale-up strategies. As membership grows in 2025, they will play a pivotal role in guiding broader adoption. In parallel, the Technical Vendor Reference Group (launching Q3 2025) will convene EHR vendors, EDC providers, middleware developers and system integrators to ensure digital infrastructures align with Task Force deliverables and scale effectively across diverse ecosystems.

The Task Force is also developing modular deliverables, including operational frameworks and a set of Playbook annexes. These annexes serve as practical supplements to the core Playbook, offering detailed templates, checklists and guidance documents to help sites and sponsors address common barriers in implementation. Examples include contracting workflows, ethics and IRB review alignment, value case articulation and strategies for integrating AI tools to manage unstructured data. Together, these resources are designed to ensure that eSource adoption is scalable, sustainable and compliant across diverse trial ecosystems.

Governance is rooted in transparency and neutrality. i~HD leads the initiative with a clear mandate to build consensus while ensuring data protection and regulatory integrity. Its structure enables sponsors and sites to align on shared implementation models without privileging any specific technology or commercial entity.

In summary, the i~HD eSource Scale-Up Task Force is forging a collaborative, standards-aligned pathway to modernize clinical trial execution. By engaging early adopters, enabling cross-sector learning and producing reusable implementation tools, the initiative lays a strong foundation for a digitally integrated research infrastructure – transforming not just oncology trials, but the future of clinical research at large.

Delivering impact: tools, key performance indicators (KPIs) and the eSource playbook

To accelerate the scale-up of eSource in oncology trials, the i~HD eSource Scale-Up Task Force has produced three foundational deliverables that form the basis of a structured roadmap for implementation. These tools address both strategic alignment and operational execution, allowing stakeholders to transition from pilot projects to scalable, repeatable adoption across diverse settings (Table 1).

Table 1. Summary of the Task Force white papers, their deliverables and scope (white papers available at: https://www.i-hd.eu/our-programmes/esource-for-scaling-up-clinical-trials-programme/publications)

The first white paper, Minimum Success Criteria for Early Adopters, establishes baseline readiness conditions for institutions and sponsors considering eSource deployment. Developed through cross-sector workshops and validated by early adopters, the guide outlines key criteria across four domains: organizational structure, technical capabilities, regulatory compliance and operational capacity. The checklist-style tool has helped sites and sponsors assess their maturity level, identify gaps and initiate focused enablement planning.
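As an illustration of how such a checklist-style assessment can surface gaps, the sketch below scores hypothetical items across the four domains named above; the items themselves are placeholders, and the authoritative criteria are those defined in the white paper.

```python
# Hypothetical readiness self-assessment across the four domains named in the
# white paper. The individual checklist items are illustrative stand-ins; the
# actual Minimum Success Criteria are defined in the i~HD white paper itself.
readiness = {
    "organizational structure": {"executive sponsor identified": True,
                                 "research/IT governance in place": True},
    "technical capabilities":   {"FHIR API available in EHR": True,
                                 "test environment for EDC integration": False},
    "regulatory compliance":    {"data protection impact assessment done": False,
                                 "audit trail requirements reviewed": True},
    "operational capacity":     {"CRC training plan defined": True,
                                 "eSource workflow owner assigned": False},
}

gaps = [(domain, item)
        for domain, items in readiness.items()
        for item, met in items.items() if not met]
for domain, item in gaps:
    print(f"Gap in {domain}: {item}")
```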

The second white paper, Selected KPIs for eSource Trials, offers a metrics-driven framework for evaluating the effectiveness of eSource implementation. It defines eight core KPIs – including accuracy of data transfer, completeness of data mapping, SDV reduction, site efficiency, CRC satisfaction and mapping reusability – that provide sponsors and sites with objective measures to benchmark impact and guide continuous improvement.
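To show how two of these KPIs might be operationalized, the sketch below computes transfer accuracy and SDV reduction from simple counts. The formulas and sample numbers are plausible illustrations, not the Task Force's published definitions.

```python
# Two of the named KPIs, operationalized in a plausible (but unofficial) way:
# accuracy of data transfer and SDV reduction. The formulas and the sample
# counts below are illustrative only.

def transfer_accuracy(correct_values: int, transferred_values: int) -> float:
    """Share of automatically transferred values that matched the EHR source."""
    return correct_values / transferred_values

def sdv_reduction(fields_verified_manual: int, fields_verified_esource: int) -> float:
    """Relative reduction in fields requiring source data verification."""
    return 1 - fields_verified_esource / fields_verified_manual

print(f"Transfer accuracy: {transfer_accuracy(9_980, 10_000):.1%}")   # 99.8%
print(f"SDV reduction:     {sdv_reduction(10_000, 4_500):.1%}")       # 55.0%
```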

The third and most comprehensive deliverable, The eSource Playbook, provides a step-by-step operational guide to implementing EHR-to-EDC integration at both sponsor and site levels. Organized into five key phases – preparation, planning, setup, execution and post-implementation review – the Playbook includes decision support tools, role-based workflows and recommendations for managing structured and unstructured data. The Playbook also introduces a shared catalogue of Common Data Elements (CDE) and highlights interoperability standards aligned with HL7® FHIR®.

Importantly, each of these white papers has been reviewed and validated by core Task Force members, including hospitals and sponsors actively engaged in eSource implementation. Early adopter sites have already used the readiness criteria to benchmark institutional capacity, piloted the KPI framework to monitor efficiency and data quality and provided feedback that shaped the Playbook’s phased guidance. This collaborative validation ensures that the tools are grounded in real-world operational experience rather than theory alone.

In addition to these core resources, the Playbook is supported by a growing set of annexes, developed to address deeper implementation needs. These include templates for data flow architecture, contracting and vendor management, regulatory validation protocols, Institutional Review Board (IRB)/ethics communication toolkits and AI integration strategies. As such, the Playbook functions as a living document, continuously refined by lessons from ongoing trials and expanded through member contributions.

Each white paper is publicly available through the i~HD platform and designed for modular adoption. Together, these white papers provide a structured, scalable roadmap for eSource adoption. They support cross-functional alignment, foster regulatory confidence and offer practical tools that enable sponsors and sites to move from aspiration to execution. By following this phased approach, institutions can scale eSource adoption confidently and consistently – advancing a new standard in oncology research and beyond.

Conclusion: scaling for the future of clinical research

The growing complexity and cost of oncology clinical trials have underscored the limitations of manual data workflows. Manual EHR-to-EDC transcription, once standard practice, now delays timelines, increases site burden and compromises data quality. eSource technology offers a scalable, modern alternative – automating data flow, improving accuracy and enhancing operational efficiency.

To overcome barriers to adoption, the i~HD eSource Scale-Up Task Force convened a multi-stakeholder network of hospitals and sponsors, including their clinical operations, informatics and digital health experts. The Task Force developed a vendor-neutral implementation framework anchored by three foundational tools: readiness criteria, performance KPIs and a phased operational Playbook. These resources support the transition from fragmented pilot projects to sustainable, system-wide implementation (Figure 3).

Figure 3. The 2025 Scope of the i~HD Scale Up Task Force.

Although designed for oncology, the framework is adaptable to other high-data therapeutic areas, including neurology, cardiology and rare diseases. Expanded Playbook annexes address interoperability, AI-supported data extraction and ethics review, aligning with evolving global regulatory expectations. As adoption grows, this initiative supports a shift toward digitally enabled, patient-centred clinical trials.

The question is no longer whether eSource will transform clinical trials – but how quickly. If scaled effectively, eSource technologies will not only optimize today’s research – but also enable studies that might otherwise never happen, accelerating access to therapies for patients who need them most.

Open peer review

To view the open peer review materials for this article, please visit http://doi.org/10.1017/pcm.2025.10004.

Data availability statement

This is a perspective article that synthesizes previously published work and implementation experience; no datasets were created or analyzed. All sources cited are publicly available in the References.

Acknowledgements

The authors thank the European Institute for Innovation through Health Data (i~HD) for facilitating the eSource Task Force meetings and coordination activities. The authors also acknowledge the contributions of participating hospitals and sponsor organizations who shared their experiences and insights to support the development of the frameworks presented in this article.

Author contribution

Conceptualization: Mats Sundgren, Christophe Maes, Dipak Kalra; Methodology: Tracy Acito, Angela Fritsche, Joseph Lengfellner, Sarah Burge; Validation: Amy Cramer, Lars Fransson, Felix Nensa, Pavithra Mariappan; Writing – Original Draft: Mats Sundgren, Christophe Maes, Sarah Burge, Peter Casteleyn; Writing – Review & Editing: Chris Harrison, Ross Caldow, Peter Casteleyn, Michael Ward; Regulatory and Interoperability Input: Nadir Ammour, Camille Couvert, Michael Ward; Project Administration: Veronique Berthou, Joeri Holtzem, Peter Casteleyn; Clinical Input: Laurie Jackson, Anna Patruno, Isabel Virchow; Implementation Strategy and Coordination: Peter Casteleyn, Fakhry Kaoukdji, Pascal Coorevits; Operational Tools and Playbook Review: Nancy Wetzel, Sharon Klein, Christopher Thompson, Dawn Snow, Robert Green; Review Support: Paul Jacobs; Supervision: Mats Sundgren, Dipak Kalra. All authors reviewed and approved the final manuscript for submission.

Financial support

The authors received no external funding for this work. Institutional salaries and routine departmental resources supported authors’ time.

Competing interests

The authors declare no conflicts of interest relevant to the content of this manuscript. All contributions reflect the views and experiences of the authors in their roles as clinical investigators, sponsor representatives, or healthcare professionals involved in implementing eSource solutions. None of the authors are affiliated with, or represent, EHR2EDC vendors or data integration technology providers.

Ethical standard

This article does not report on original research involving human participants or animals. As such, ethical approval and informed consent were not required.

References

Adamson, B, Waskom, M, Blarre, A, Kelly, J, Krismer, K, Nemeth, S, Gipetti, J, Ritten, J, Harrison, K, Ho, G, Linzmayer, R, Bansal, T, Wilkinson, S, Amster, G, Estola, E, Benedum, CM, Fidyk, E, Estévez, M, Shaphiro, W and Cohen, AB (2023) Approach to machine learning for extraction of real-world data variables from electronic health records. Frontiers in Pharmacology 14, 1180962.
Ammour, N, Griffon, N, Djadi-Prat, J, Chatellier, G, Lewi, M, Todorovic, M, Gomez de la Cámara, A, Garcia Morales, MT, Testoni, S, Nanni, O, Schindler, C, Sundgren, M, Garvey, A, Victor, T, Cariou, M and Daniel, C (2023) TransFAIR study: A European multicentre experimental comparison of EHR2EDC technology to the usual manual method for eCRF data collection. BMJ Health and Care Informatics 30(1), 18.
Busiek, J (2025) What Cuts to NIH Funding Mean for Cancer Patients and Their Families. UC Research News. Available at: https://vcresearch.berkeley.edu/news/what-cuts-nih-funding-mean-cancer-patients-and-their-families. Accessed June 15, 2025.
Chakrabarty, N and Mahajan, A (2024) Imaging analytics using artificial intelligence in oncology: A comprehensive review. Clinical Oncology 36, 498–513.
Chopra, H, Annu, S, Shin, DK, Munjal, K, Priyanka, D and Emran, TB (2023) Revolutionizing clinical trials: The role of AI in accelerating medical breakthroughs. International Journal of Surgery 109, 4211–4220.
Claerhout, B, Kalra, D, Mueller, C, Singh, G, Ammour, N, Meloni, L, Blomster, J, Hopley, M, Kafatos, G, Garvey, A, Kuhn, P, Lewi, M, Vannieuwenhuyse, B, Marchal, B, Patel, K, Schindler, C and Sundgren, M (2019) Federated electronic health records research technology to support clinical trial protocol optimization. Evidence from EHR4CR and the InSite platform. Journal of Biomedical Informatics 90, 103090.
Cramer, AE, King, LS, Buckley, MT, Casteleyn, P, Ennis, C, Hamidi, M, Rodrigues, GMC, Snyder, DC, Vattikola, A and Eisenstein, EL (2024) Defining methods to improve eSource site start-up practices. Contemporary Clinical Trials Communications 42, 101391.
Dupont, D, Beresniak, A, Kalra, D, Coorevits, P and De Moor, G (2018) Value of hospital electronic health records for clinical research: Contribution of the European project EHR4CR. Medical Science (Paris) 34, 972–977.
Ehidiamen, AJ and Oladapo, OO (2024) The role of electronic data capture systems in clinical trials: Streamlining data integrity and improving compliance with FDA and ICH/GCP guidelines. World Journal of Biology Pharmacy and Health Sciences 20, 321–334.
Hamidi, M, Eisenstein, EL, Garza, MY, Morales, KJT, Edwards, EM, Rocca, M, Cramer, A, Singh, G, Stephenson-Miles, KA, Syed, M, Wang, Z, Lanham, H, Facile, R, Pierson, JM, Collins, C, Wei, H and Zozus, M (2024) Source data verification (SDV) quality in clinical research: A scoping review. Journal of Clinical and Translational Science 8, e101.
Lombardo, G, Couvert, C, Kose, M, Begum, A, Spiertz, C, Worrel, C, Didden, EM, Sforzini, L, Todorovic, M, Lewi, M, Brown, M, Vaterkowski, M, Gullet, N, Amashi-Hartoonian, N, Griffon, N, Pais, R, Rodriguez Navarro, S, Kremer, A, Maes, C, Tan, EH, Moinat, M, Ferrier, JG, Pariante, CM, Kalra, D, Ammour, N and Kalko, S (2023) Electronic health records (EHRs) in clinical research and platform trials: Application of the innovative EHR-based methods developed by EU-PEARL. Journal of Biomedical Informatics 148, 104553.
Mueller, C, Herrmann, P, Cichos, S, Remes, B, Junker, E, Hastenteufel, T and Mundhenke, M (2023) Automated electronic health record to electronic data capture transfer in clinical studies in the German health care system: Feasibility study and gap analysis. Journal of Medical Internet Research 25, e47958.
Nashwan, AJ and Hani, SB (2023) Transforming cancer clinical trials: The integral role of artificial intelligence in electronic health records for efficient patient recruitment. Contemporary Clinical Trials Communications 36, 101223.
Rhodes, C (2025) Navigating the Impact of NIH Cancer Research Funding Cuts. UAB Institute for Human Rights Blog. Available at: https://sites.uab.edu/humanrights/2025/04/23/navigating-the-impact-of-nih-cancer-research-funding-cuts/.
Sundgren, M, Santiago, G and Lengfellner, J (2024) Streamlining clinical trials with eSource: Insights from MSK. Applied Clinical Trials 36(8), 22–26. Available at: https://www.appliedclinicaltrialsonline.com/view/streamlining-clinical-trials-with-esource-insights-from-msk.
Sundgren, M, Andrews, L, Burge, S, Bush, M, Fritsche, A, Nensa, F and Lengfellner, J (2025) Scaling eSource-enabled clinical trials: Challenges, opportunities, and strategic outlook for oncology research centers. Applied Clinical Trials 37(x), 22–28. Available at: https://www.appliedclinicaltrialsonline.com/view/scaling-esource-enabled-clinical-trials-hospital-perspectives.

Author comment: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR1

Comments

Dear Dr Grazia Iaffaldano,

I am pleased to submit our manuscript entitled “Accelerating eSource Scale‑Up in Oncology Clinical Trials: The i~HD Task Force Initiative” for your consideration at Precision Medicine. Our work aligns closely with the journal’s mission to advance precision approaches in healthcare through innovative integration of health data and technology.

The manuscript explores how eSource—direct EHR-to-EDC data capture—can significantly improve clinical trial efficiency and regulatory compliance by optimizing both structured and unstructured health data via AI, NLP, and federated data models. Drawing on early implementations at leading oncology centers, the paper offers evidence of operational gains and outlines a collaborative multi-stakeholder strategy to scale this technology. It addresses key journal themes: bridging clinical innovation and data science, leveraging electronic health records, and enabling precision medicine through robust informatics infrastructure.

We believe this manuscript is ideally suited for Precision Medicine’s audience of clinicians, researchers, and industry leaders. It not only presents original data and strategic insights but also contributes a timely framework for adopting eSource within regulated settings. Our findings promise to guide future research and implementation in precision oncology and beyond.

Highlights include:

Empirical analysis of eSource deployments at major oncology centers

Integration of unstructured EHR data using NLP and OMOP CDM

A multi-disciplinary roadmap for safe, scalable deployment

An Impact Statement and Graphical Abstract are included per journal guidelines.

All authors meet authorship criteria, declare no conflicts of interest, and affirm originality and exclusive submission. Funding sources and ethical approvals are detailed in the manuscript.

Thank you for your consideration. We would be delighted to address any questions or provide additional materials.

Warm regards,

Mats Sundgren, PhD

Review: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR2

Conflict of interest statement

Reviewer declares none.

Comments

This manuscript addresses a critical bottleneck in modern clinical research—manual and fragmented data workflows in oncology trials—and proposes a pragmatic, collaborative, and scalable solution through the i-HD eSource Scale-Up Task Force initiative. The paper is grounded in real-world experience from leading global institutions. It offers concrete, ready-to-use tools (readiness criteria, KPIs, and an operational Playbook), making it both timely and actionable. Its focus on interoperability, regulatory alignment, and cross-sector engagement positions it as a high-impact contribution to the field. Given the urgency to modernize trial infrastructures amidst rising data complexity, this Perspective offers a visionary yet realistic roadmap that can influence research practices well beyond oncology.

Review: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR3

Conflict of interest statement

Reviewer declares none.

Comments

The manuscript describes the formation of an eSource Task Force with the goal of developing a framework for how clinical trial sites can transition from the traditional, manual approaches currently used for data acquisition and management to a more automated or electronic approach (eSource), specifically direct EHR-to-EDC interoperability, to support automated data extraction. There is no doubt that there is a need for such a framework, and, if effective and scalable, of such a framework’s potential to positively impact and optimize clinical research operations across trial sites. The work described would likely be of great interest to readers. However, there are several items that need to be addressed prior to publication. In general, the focus of the paper should be made clear (either to describe the need for and formation of the Task Force and its objectives and/or to describe the development and validation of the eSource Framework). It is also recommended that the Task Force’s next steps (anticipated milestones and/or additional deliverables and any known timelines) be included and described in more detail (can be at the end of the manuscript) so that the readers know what to expect now that the eSource Framework has been developed.


Grammar & Punctuation: There are a few minor grammar and punctuation items that can be corrected (e.g., missing or unnecessary commas and use of past, present, or future tenses). For example, take a look at the “A Strategic Response: The i~HD eSource Scale-Up Task Force” section and review the use of tenses throughout. Some things that have happened (the formation of the group, decisions already made, etc.) should be stated using past tense.


Page 3, Line 19: In the 2nd paragraph of the “Introduction” – the last sentence: commas can be added around the term “increasingly” so that it reads “…eSource also enables structured and, increasingly, AI-assisted extraction of…”


Page 4, Lines 44–45: In the 1st paragraph of the “Impact on Timelines, Cost, and Trial Viability” – the 3rd sentence: There are 2 items (1) the comma after “sponsors” can be removed so that it reads “For sponsors in competitive oncology indications, such delays...” AND (2) the comma after “market opportunities” can be removed and replaced by “and” so that it reads “...missed market opportunities and reduced investor confidence.”


Page 5, Lines 38–39: In the 2nd paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – the first sentence: update to add commas “...standards, such as HL7 FHIR and SMART on FHIR APIs, to securely...”


Page 6, Lines 41–42: In the 1st paragraph of “A Strategic Response: The i~HD eSource Scale-Up Task Force” – the sentence that mentions the 3 initiatives: remove the word “such as” so that it reads: “…through initiatives like EHR4CR, EHR2EDC and EU-PEARL…”


Page 7, Line 44: In the paragraph above Figure 3 of “A Strategic Response: The i~HD eSource Scale-Up Task Force” – add a comma after “challenges” so that it reads “These resources will address challenges, such as contracting…”


Figures & Tables:


For Figure 1 – It is recommended that a “note” or “legend” be added as part of this figure to define abbreviations used in the image.


For Figure 1 – It is recommended that the term “Clinical Database” at the end of each method (Existing Methods and Data Automation Technology flows) be updated to something that more clearly indicates this is the study database for research and not a database used for clinical care.


For Figure 1 – It is recommended that the “Data Automation Technology” flow be updated to also start with the “Study Visit” as the first step in the flow, which then moves to the “EHR Data Entry” activity, since eSource does not change that process.


For Figure 2 – A minor correction to the 1st bullet of the Purpose to update to: “Drive and scale the adoption of…”


For Figure 2 – It is recommended that a “note” or “legend” be added as part of this figure to define/describe the purpose of the “Core Members” vs. the “Reference Groups” vs. the “Technology Networks” and/or provide a more detailed description of these roles in the main body of the paper.


Citations & References:


In the References list, there are 2 references that were not used/cited in the paper and should be removed: (1) Lengfellner & Yeatman, 2025 (Page 10, Lines 46–47); and (2) Passut, 2021 (Page 11, Lines 3–5).


Page 11, Line 19: In the References list, the last reference listed as “University of California” should be updated to include the author (Julia Busiek), and the corresponding citation used in the “Impact on Timelines, Cost, and Trial Variability” section (Page 4, Line 53) should be corrected accordingly (from “University of California, 2025” to “Busiek, 2025”).


Page 4, Line 53: In the “Impact on Timelines, Cost, and Trial Variability” section, the UAB citation needs to be corrected: from “UAB Institute for Human Rights Blog, 2025” to “Rhodes, 2025”.


General Comments:


Page 2, Lines 38–40: In the last paragraph of the Abstract, it is mentioned that “This paper highlights how collaborative governance, phased implementation, and shared operational standards can enable sustainable scaleup of eSource technologies and foster digitally integrated clinical research infrastructures.” This implies that the manuscript will describe not only the formation, purpose, and deliverables of the Task Force and the resulting Framework, but that it will also demonstrate how it can be successfully implemented and scaled up beyond the initial use case. However, this is not demonstrated in this particular article. There is no mention of the actual implementation of the framework – only brief mention that it was an objective and that it was likely developed. There is also no mention of the evaluation of its success nor of any attempts to apply this to other TAs outside of oncology or other to other sites outside of the Core Members. It is suggested that this sentence be removed or modified to more clearly articulate the focus of this paper.


Page 2, Lines 44–52: In the Impact Statement, the authors start by stating that the paper will “present a scalable framework for implementing eSource technologies…” However, while the article mentions a framework was a deliverable and that white papers were published, the actual framework itself is not clearly described. (Unless Figure 3 – and maybe in combination with the “Key Features” in Table 1 – is/are meant to represent the framework – in which case, this should be made clearer.) If the primary objective is to present the final framework, then this needs to be more clearly stated throughout and a section describing the framework in more detail should be included. If the primary objective is to focus more on the Task Force – the need for such a group, its formation, its purpose, goals, and objectives, etc. – then the Impact Statement should be updated to indicate that.


The article starts off by talking through some of the complexities of running oncology trials and mentions that the i~HD Task Force and eSource Framework were initially focused on and/or specific to addressing the challenges faced in the oncology domain. It is then noted that the framework could be applied to other domains. However, in describing the development of the framework, it is not clear if that “oncology” focus shifted once the Task Force started putting all the pieces together (so the resulting framework did end up being more “generic” or broad), or if the framework was created with the oncology focus and “tips” or “amendments” or similar have been (or will be) created for how to “translate” the framework to other domains/therapeutic areas (TAs). Additionally, the statement was made that “While grounded in oncology, the model is transferable to other high-data-density therapeutic areas.” However, there is no real explanation as to how to go about transferring it to other TAs, or even on what elements would need to be modified or reevaluated or considered when translating to other TAs.


Page 2, Line 60: It is suggested that the “Introduction” header be updated to simply “Introduction” and the “Rising Complexity in Oncology Trials” piece be removed - especially since this heading text is repeated as a header for the 2nd paragraph in “The Challenge” section that follows.


Page 3, Lines 5–8: In the 1st paragraph of the “Introduction” – the 2nd sentence: “Traditional trial workflows require research teams to extract, transcribe, and validate patient data from EHRs into EDC systems—a duplicative process that accounts for over 50% of clinical trial data and requires significant verification effort, often consuming substantial operational resources (Kalankesh and Monaghesh, 2024).” – There are 2 questions: (1) the part that reads “…a duplicative process that accounts for over 50% of clinical trial data…” does not make sense. Do you mean that the “duplicative process” accounts for a 50% increase in time required to complete tasks? Or that over 50% of the data in oncology (and other data-intensive domains) are data that are part of a more complicated workflow (manual extraction, transcription, and validation) with redundancies? AND (2) Is this the correct citation? This article by Kalankesh and Monaghesh is a systematic literature review about EHR use in clinical trials. There is nothing in this article that discusses the analysis of any quantitative data that would account for this “50%” value presented here. One could also argue that this particular systematic literature review is relatively weak – as the search criteria is very broad, the numbers presented in the results do not match the figure displayed (nor does it seem remotely accurate based on the very broad search criteria), and there is practically no synthesis of the identified articles – which is the main point of conducting and writing up a review. It is recommended that this citation be replaced with one that contains this information and can back up the “50%” value in some way; or to modify the sentence so that a specific percentage is not included (and remove the citation completely).


Page 3, Lines 17–21: In the 2nd paragraph of the “Introduction” – the last sentence: The 3 citations come at the end of the sentence, and by citing these articles here, it implies that they discuss standards-based (or FHIR-based) eSource data extraction. They do focus on AI for unstructured data, but all 3 do not necessarily focus on AI-based eSource or AI to support eSource extraction, so it does not seem like they fit here. It may be more appropriate to move them to come after the text: “...increasingly, AI-assisted (citations) extraction of clinical data...”


Page 3, Lines 52–54: In the 1st paragraph of “The Challenge” – the last sentence: “This duplication affects more than half of the total trial data and contributes to roughly 20% of total study costs—resources that could be redirected toward scientific advancement and patient benefit (Kalankesh and Monaghesh, 2024).” – Again, this citation does not seem to be appropriate to back up the data presented in this sentence. If an article was identified by this systematic review that contains this “20%” value, that article should be cited instead of this review. Otherwise, it is recommended that this sentence be removed or rewritten to exclude the percentage and the citation should be deleted (and removed from the reference list, if it is no longer used in the article).


Page 4, Lines 3–6: In the 1st paragraph of “Increasing Data Complexity in Modern Oncology Trials” – the last 2 sentences: “In some advanced trials, the number of data points collected per patient has increased exponentially—from 10,000 in traditional Phase III studies to over 100,000 in trials involving genomic and digital health data. Each of these data points must be documented, reviewed, and often verified manually (Altomare et al., 2024).” – There are 2 items: (1) It would seem more likely or appropriate that a citation would be needed to back up the “10,000 to over 100,000 data points” claim, as opposed to being used for the last sentence in this paragraph. AND (2) That said, is this Altomare et al. citation the correct or appropriate citation? (Is this citation even needed at all?) In reviewing the Altomare et al. abstract (unable to find a full text version), it does not appear to match what is written in these 2 sentences. It is recommended that the Altomare et al. citation be removed and the reference deleted, as this is the only time it is used in this manuscript.


Page 4, Line 19: In the 1st paragraph of “The Burden of Redundant Data Entry” – the last sentence: Is this meant to point to the Sundgren et al., 2025 citation (rather than the current 2024 citation), as the 2025 article mentions the “high-value tasks” (or maybe cite both if both are relevant)?


Page 4, Line 28: In the 2nd paragraph of “The Burden of Redundant Data Entry” – the last sentence: “In addition to transcription itself, manual data entry generates substantial downstream workload, including query resolution, data reconciliation, and extensive data review cycles—activities that are estimated to account for up to 30–40% of total data management costs in oncology trials (Hamidi et al., 2024; Ehidiamen and Oladapo, 2024).” – There are 2 items: (1) The Hamidi et al. article mentions SDV-related costs (TA-agnostic) as being attributable to between 25% and 40% of clinical trial costs. It is recommended that the 30–40% mentioned in the sentence be updated to align with the cited article. AND (2) The Ehidiamen & Oladapo article does not discuss data management costs in oncology trials. It is a non-therapeutic area specific (TA-agnostic) paper on EDC use for/in clinical trials. If this is meant to refer to the “substantial downstream workload”, it is recommended that this be moved to follow the text “...extensive data review cycles (E & O citation)—activities that are...” Otherwise, it is recommended that this citation be removed.


Page 4, Lines 31–35: In the “Compliance and Data Integrity Challenges” paragraph – the 2nd sentence: “Standards such as ALCOA+ (Attributable, Legible, Contemporaneous, Original, Accurate, and Complete) and compliance frameworks like the Global Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) underscore the necessity of data that is not only accurate but secured and auditable (Ehidiamen and Oladapo, 2024).” – Is this the correct or appropriate citation for this sentence? There is mention of GDPR and HIPAA in this article, but it points to other references re: those requirements. There is no mention of ALCOA+. Since this article is not the direct source for info on these regulations, it is not recommended that it be used as a citation for this sentence. Better to cite the actual regulations or an article (or two) that specifically discuss these regulations. If the E & O citation is removed here and removed from the previous section (based on the previous comment), it is recommended that it also be removed from the Reference list, as those are the only 2 places it is cited.


Page 4, Lines 45–46: In the “Impact on Timelines, Cost, and Trial Viability” paragraph – the 4th sentence: “For patients, lengthy timelines increase the risk for patient safety identification as well as limited access to breakthrough therapies.” – This sentence does not read clearly. Perhaps update to: “...increase the risk for patient safety identification and limit access to breakthrough therapies.” OR “...increase the risk for patient safety identification, as well as limit access to breakthrough therapies.”


Page 5, Lines 45–51: In the 3rd paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – There are 3 items: (1) In the 2nd sentence re: automation resulting in “higher data quality and completeness” – If available, it may be worth adding a citation or two to support this statement to further emphasize that this has been shown/demonstrated and is not just something we “hope” or “assume” will happen. AND (2) In the 3rd sentence re: FDA, EMA, and MHRA support for eSource – Can you point to guidances or press releases or something similar from these agencies that express their increased support? AND (3) In the last sentence re: “direct cost and time savings” – If available, it may be worth adding a citation or two to support this statement, again, to show or demonstrate that sites have experienced these improvements.


Page 5, Lines 53–54: In the 4th paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – the 1st sentence: Consider not limiting the statement to only RCTs. The “eSource transfer process” is applicable to all research requiring extensive amounts of data from outside sources (not just RCTs).


Page 6, Lines 3–8: In the 5th paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – There are 3 items: (1) In the 2nd sentence re: Mayo Clinic – This might benefit from a citation, if available. AND (2) In the 3rd sentence re: City of Hope – This might also benefit from a citation, if available. AND (3) In the last sentence – Unless these institutions utilized this particular “structured and collaborative framework” (it is not clear whether the term “early adopters” refers to adopters of the framework or of eSource in general) and (ideally) have the data to back up their improvements, this statement about “scalability and reproducibility” cannot be made. Also, one could argue that you would need more than 2 examples and a wider variety of site types for this to truly be the case.


Page 6, Lines 15–21: In the 7th paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – the 2nd sentence that mentions EHR2EDC, TransCelerate, and FDA RWE – There are 3 items: (1) If the citations included are meant to be citations for the EHR2EDC (Mueller et al. and Ammour et al.), TransCelerate eSource Project, and FDA RWE, it is recommended that they be moved to come right after the corresponding project. AND (2) Also – the Claerhout et al. article seems to be about EHR4CR, which is not mentioned here, but is mentioned in the 1st paragraph of the “A Strategic Response” section. (Was not sure if this should be moved there.) AND (3) Are there citations available for the TransCelerate eSource project (even if just a webpage)? Could also point to the FDA’s RWE guidance as a citation here.


Page 6, Line 23: In the last paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” there is mention of “a scalable solution” – Refer to comment above re: scalability. It is recommended that the term “scalable” be removed here, leaving it as: “...offer a future-ready solution to...” The scalability piece is something you could claim about the implementation framework. But, as it stands, and based on what has been presented in this article so far, there is not sufficient evidence to support the scalability claim.


Page 6, Lines 33–34: In the 1st paragraph of “A Strategic Response: The i~HD eSource Scale-Up Task Force” – the 1st sentence: Not sure if this is the best phrase to use because, without sufficient evidence presented to back this up (reduced burden, streamlining of clinical trials), one could argue that it is not so clear. (Not arguing that there is no promise, only emphasizing the need for evidence for readers to see that this is not a statement or assumption, but a truth.)


Page 6, Line 46: When describing the Task Force, the term “core members” is used. Are there “non-core” members as well? You mention that the Task Force brought together a variety of key stakeholders, including regulatory experts (and Regulatory Engagement is further emphasized in the description of the “high-impact domains of focus”), but there are no regulators listed in this “core” member group. And what about technology companies (e.g., EHR and EDC vendors)? One could assume they were excluded as “core members” to keep things vendor neutral, but do they get included later as part of the “Technology Networks” mentioned in Figure 2?


Page 7: When elaborating on the content displayed in Figure 2 (and about the Task Force formation in general) – It may be worth adding details on how these groups (Core Members vs. Reference Groups vs. Technology Networks) were identified and selected to be part of the Task Force. Was there a “call to action” or application process? Did i~HD leadership select these members randomly or based on XYZ criteria? More details are needed on the reference groups. How do sites/sponsors/members of these groups get selected? Do they meet regularly? What are their required tasks/objectives and/or how do they get engaged/incorporated into the Task Force activities?


Page 7, Lines 25–36: In the 3rd paragraph of “A Strategic Response: The i~HD eSource Scale-Up Task Force” (right after Figure 2) – The bullet re: Implementation Science mentions “early adopters” – this brings to mind the question: Is/was the goal to only implement at high-impact sites or large AMCs? The core members include heavy hitters that typically have the resources and personnel on board (champions). While they can help with major components/aspects of eSource implementation (and are very necessary stakeholders to include), they are not necessarily indicative of what many other sites might need or encounter on their implementation journeys. How is/was this accounted for?


Page 7, Lines 43–47: In the 5th paragraph of “A Strategic Response: The i~HD eSource Scale-Up Task Force” (right before Figure 3) – More details are needed on the “modular deliverables,” particularly on what is meant by “playbook annexes” to clearly articulate to the reader what they are and what they are/will be used for.


Page 8, Lines 20–29: In the last 2 paragraphs of “A Strategic Response: The i~HD eSource Scale-Up Task Force” (right after Figure 3) – both are a little confusing. Re: Governance paragraph – Governance goes beyond technology use/selection. Also need more details on how the structure enables the shared implementation models. Re: the Summary paragraph – Not sure if this paragraph belongs here (in this section). So far, after reading this section, there are still many questions about the Task Force – its formation, its operations, its anticipated deliverables/milestones…etc. So it is hard to have a summary statement here about how it is forging a pathway to modernize clinical trial execution. It may be better suited for later in the manuscript, after all the details about the Task Force and its activities have been presented.


Page 8, Line 58 and Page 9, Lines 7 and 13: Re: the 3 White Papers – It would be helpful to the reader if a citation for each was noted after each paper is mentioned (with a corresponding reference in the Reference list). Or, the citation could be included in Table 1 within the White Paper column.


Pages 8 and 9: Also re: the 3 White Papers – Were these documents used and “tested” or “validated” by any of the core members (hospitals) or other sites? This is not made clear in this manuscript.


Page 9, Line 30: In the last paragraph of the “Delivering Impact” section – there is a citation (Adamson et al, 2023) that does not seem to fit. This citation does not seem to match what is being stated in the preceding sentence. It has nothing to do with the white papers listed. It is recommended that the citation be removed and the reference be deleted from the Reference List, as this is the only time it is used in the paper.


Page 9, Line 44: In the 2nd paragraph of the “Conclusion” – There is mention of “digital health experts”. Based on the “core members” mentioned, it does not seem like these experts are represented. Are these “digital health experts” already part of or members of the hospitals and/or sponsors, or are they meant to be a different entity completely?

Review: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR4

Conflict of interest statement

Reviewer declares none.

Comments

1. The authors conducted a study to establish a consensus-driven roadmap for eSource adoption. Key deliverables were: readiness criteria for early adopters, a performance indicator framework for monitoring success, and an operational playbook to guide implementation. The eSource Scale-Up Task Force has conducted important work. However, the presentation of that work in this manuscript needs to be improved.

2. The initial 3 ½ text pages appear to be an elongated introduction. This section needs to be reduced to 1 ½ pages at most.

3. The introduction section should address four issues:

a. Background of the research question

b. Previous research in the area

c. Problems with past research

d. What you did to fix those problems

4. Critical sections in the introductory material are presented without appropriate references. Are these the opinions of Task Force members? If so, they should be identified as such.

5. The presentation of methods and results covers only 3 pages, including figures. This section needs more detail. Readers will want to know how the Task Force approached their work and what were the important findings.

6. This study relies upon the expertise of Task Force members. Merely citing organizational affiliations is not sufficient to establish their expertise for readers. More information on their relevant eSource and other experiences is needed. For example, how many eSource studies have they conducted?

7. How did the Task Force develop their three deliverables? Were there teams? Did the teams meet? If so, how many meetings? Who led this work? More details are needed to assure readers of the correctness of the methods used.

8. This study’s three deliverables should be referenced with URLs. Nonetheless, the authors should describe the content of these deliverables in depth in their manuscript.

Recommendation: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR5

Comments

Dear Authors,

The decision has been made to accept the paper; however, the reviewers would like to see full attribution and COI statements for the authors, as well as several changes and alterations (of a grammar, clarity, and other nature, detailed below).

These changes should be relatively straightforward to make.

The manuscript describes the formation of an eSource Task Force with the goal of developing a framework for how clinical trial sites can transition from the traditional, manual approaches currently used for data acquisition and management to a more automated or electronic approach (eSource), specifically direct EHR-to-EDC interoperability, to support automated data extraction. There is no doubt that there is a need for such a framework and that, if effective and scalable, it has the potential to positively impact and optimize clinical research operations across trial sites. The work described would likely be of great interest to readers. However, there are several items that need to be addressed prior to publication. In general, the focus of the paper should be made clear (either to describe the need for and formation of the Task Force and its objectives and/or to describe the development and validation of the eSource Framework). It is also recommended that the Task Force’s next steps (anticipated milestones and/or additional deliverables and any known timelines) be included and described in more detail (can be at the end of the manuscript) so that the readers know what to expect now that the eSource Framework has been developed.


Grammar & Punctuation: There are a few minor grammar and punctuation items that can be corrected (e.g., missing or unnecessary commas and use of past, present, or future tenses). For example, take a look at the “A Strategic Response: The i~HD eSource Scale-Up Task Force” section and review the use of tenses throughout. Some things that have happened (the formation of the group, decisions already made, etc.) should be stated using past tense.


Page 3, Line 19: In the 2nd paragraph of the “Introduction” – the last sentence: commas can be added around the term “increasingly” so that it reads “…eSource also enables structured and, increasingly, AI-assisted extraction of…”


Page 4, Lines 44–45: In the 1st paragraph of the “Impact on Timelines, Cost, and Trial Viability” – the 3rd sentence: There are 2 items: (1) the comma after “sponsors” can be removed so that it reads “For sponsors in competitive oncology indications, such delays...” AND (2) the comma after “market opportunities” can be removed and replaced by “and” so that it reads “...missed market opportunities and reduced investor confidence.”


Page 5, Lines 38–39: In the 2nd paragraph of “The Opportunity: eSource and EHR-to-EDC Integration” – the first sentence: update to add commas “...standards, such as HL7 FHIR and SMART on FHIR APIs, to securely...”


Page 6, Lines 41–42: In the 1st paragraph of “A Strategic Response: The i~HD eSource Scale-Up Task Force” – the sentence that mentions the 3 initiatives: remove the words “such as” so that it reads: “…through initiatives like EHR4CR, EHR2EDC and EU-PEARL…”


Page 7, Line 44: In the paragraph above Figure 3 of “A Strategic Response: The i~HD eSource Scale-Up Task Force” – add a comma after “challenges” so that it reads “These resources will address challenges, such as contracting…”


Figures & Tables:


For Figure 1 – It is recommended that a “note” or “legend” be added as part of this figure to define abbreviations used in the image.


For Figure 1 – It is recommended that the term “Clinical Database” at the end of each method (Existing Methods and Data Automation Technology flows) be updated to something that more clearly indicates this is the study database for research and not a database used for clinical care.


For Figure 1 – It is recommended that the “Data Automation Technology” flow be updated to also start with the “Study Visit” as the first step in the flow, which then moves to the “EHR Data Entry” activity, since eSource does not change that process.


For Figure 2 – A minor correction to the 1st bullet of the Purpose to update to: “Drive and scale the adoption of…”


For Figure 2 – It is recommended that a “note” or “legend” be added as part of this figure to define/describe the purpose of the “Core Members” vs. the “Reference Groups” vs. the “Technology Networks” and/or provide a more detailed description of these roles in the main body of the paper.


Citations & References:


In the References list, there are 2 references that were not used/cited in the paper and should be removed: (1) Lengfellner & Yeatman, 2025 (Page 10, Lines 46–47); and (2) Passut, 2021 (Page 11, Lines 3–5).


Page 11, Line 19: In the References list, the last reference listed as “University of California” should be updated to include the author (Julia Busiek), and the corresponding citation used in the “Impact on Timelines, Cost, and Trial Viability” section (Page 4, Line 53) should be corrected accordingly (from “University of California, 2025” to “Busiek, 2025”).


Page 4, Line 53: In the “Impact on Timelines, Cost, and Trial Viability” section, the UAB citation needs to be corrected: from “UAB Institute for Human Rights Blog, 2025” to “Rhodes, 2025”.


General Comments:


Page 2, Lines 38–40: In the last paragraph of the Abstract, it is mentioned that “This paper highlights how collaborative governance, phased implementation, and shared operational standards can enable sustainable scaleup of eSource technologies and foster digitally integrated clinical research infrastructures.” This implies that the manuscript will describe not only the formation, purpose, and deliverables of the Task Force and the resulting Framework, but that it will also demonstrate how it can be successfully implemented and scaled up beyond the initial use case. However, this is not demonstrated in this particular article. There is no mention of the actual implementation of the framework – only brief mention that it was an objective and that it was likely developed. There is also no mention of the evaluation of its success nor of any attempts to apply this to other TAs outside of oncology or to other sites outside of the Core Members. It is suggested that this sentence be removed or modified to more clearly articulate the focus of this paper.


Page 2, Lines 44–52: In the Impact Statement, the authors start by stating that the paper will “present a scalable framework for implementing eSource technologies…” However, while the article mentions a framework was a deliverable and that white papers were published, the actual framework itself is not clearly described. (Unless Figure 3 – and maybe in combination with the “Key Features” in Table 1 – is/are meant to represent the framework – in which case, this should be made clearer.) If the primary objective is to present the final framework, then this needs to be more clearly stated throughout and a section describing the framework in more detail should be included. If the primary objective is to focus more on the Task Force – the need for such a group, its formation, its purpose, goals, and objectives, etc. – then the Impact Statement should be updated to indicate that.


Decision: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R0/PR6

Comments

No accompanying comment.

Author comment: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R1/PR7

Comments

Dear Prof. Dominiczak,

On behalf of my co-authors, I am pleased to resubmit our revised manuscript entitled “Accelerating eSource Scale-Up in Oncology Clinical Trials: The i~HD Task Force Initiative” (PCM-2025-0055) to Cambridge Prisms: Precision Medicine.

We are grateful for the insightful comments from the reviewers and handling editor, which have greatly strengthened the manuscript. In this revision, we clarified the scope and focus of the paper, addressed all points regarding references, grammar, and clarity, and expanded descriptions of the Task Force’s deliverables, validation, and next steps. We have also included the required author contribution, conflict of interest, and data availability statements.

We believe the revised version is now well aligned with the journal’s standards and addresses all feasible reviewer requests. We thank you for considering our work and look forward to your decision.

Sincerely,

Mats Sundgren, PhD

(on behalf of all co-authors)

Review: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R1/PR8

Conflict of interest statement

Reviewer declares none.

Comments

I have no additional comments.

Recommendation: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R1/PR9

Comments

The reviews for your submission are favorable and we can proceed with publication.

Decision: Accelerating eSource scale-up in oncology clinical trials: The i~HD Task Force initiative — R1/PR10

Comments

No accompanying comment.