A paired training curriculum and internal monitoring program for clinical research regulatory compliance in the emerging era of the single Institutional Review Board

Background: Academic health systems and their investigators are challenged to systematically assure clinical research regulatory compliance. This challenge is heightened in the emerging era of centralized single Institutional Review Boards for multicenter studies, which rely on monitoring programs at each participating site.
Objective: To describe the development, implementation, and outcome measurement of an institution-wide paired training curriculum and internal monitoring program for clinical research regulatory compliance.
Methods: Standard operating procedures (SOPs) were developed to facilitate investigator and research professional adherence to institutional policies, federal guidelines, and international standards. An SOP training curriculum was developed and implemented institution-wide. An internal monitoring program was launched, utilizing risk-based monitoring plans of pre-specified frequency and intensity, assessed upon Institutional Review Board approval of each prospective study. Monitoring plans were executed according to an additional SOP on internal monitoring, with monitoring findings captured in a REDCap database.
Results: We observed few major violations across 3 key domains of clinical research conduct and demonstrated a meaningful decrease in the rates of nonmajor violations in each over the course of 2 years.
Conclusion: The paired training curriculum and monitoring program is a successful institution-wide model for clinical research regulatory compliance that will continue to be refined.


Introduction
The conduct of clinical research at academic health centers offers great potential benefit with regard to advancing knowledge, improving future patient outcomes, developing and advancing the careers of faculty, trainees, and research professionals, and enhancing institutional reputation. However, with these great potential benefits come significant potential risks to participants (e.g., patients), investigators, and institutions. It is an ethical responsibility of the clinical investigator, and of the institution that engages in clinical investigation, to adopt a comprehensive approach toward safeguarding participants in clinical research and assuring the quality of the research process.
Presently, for-cause and random audits are the norm for clinical research regulatory compliance monitoring. At academic centers, these audits are typically performed by the institution's Office of Human Subjects Research or equivalent (i.e., the "IRB office") as well as (e.g., in cancer studies) by a Cancer Center, where one exists. As a rule, audit findings are not shared beyond the institution, apart from any applicable reportability requirements to the study sponsor, the United States Food and Drug Administration (FDA), and/or the federal Office for Human Research Protections. By contrast, the FDA publicly posts Warning Letters on its Web site. However, aggregate data on change in frequency and severity of violations over time have not been published for the current paradigm of random and for-cause audits. The success of such an approach should, therefore, be questioned, and an alternative approach of routine, systematic internal monitoring is worthy of evaluation. Such an alternative approach is particularly timely given the recent National Institutes of Health (NIH) policy on centralized "single Institutional Review Boards" (sIRBs) for multicenter studies [3]. The desired benefit of the sIRB model is greater efficiency and consistency in multicenter study implementation; however, the diminution of local IRBs' oversight role for site-specific study conduct requires that the local institution have in place (or develop) an adequate monitoring program for site participation in such studies.
In this communication, our objective is to describe the development, implementation, and outcome measurement of an institution-wide paired training curriculum and internal monitoring program for clinical research regulatory compliance, as a major component of one pediatric academic health center's solution to the challenge of reducing systematic risks to clinical research participants, investigators, and institutions. Moreover, we propose this solution as a model for potential adoption by other academic health centers. Although the training curriculum and monitoring program leverage an established, robust, centralized institutional infrastructure model (a multiunit Clinical and Translational Research Organization [CTRO] described previously [4]) designed to support investigators in the conduct of both investigator-initiated and pharmaceutical industry-driven clinical research studies, a CTRO or similar centralized infrastructure for clinical research execution is neither necessary nor sufficient for the training curriculum and monitoring program to fulfill the mandate for enhanced clinical research oversight.

Methods

Setting
Johns Hopkins All Children's (JHAC) Hospital is a pediatric academic health system centered in St. Petersburg, FL, and is home to one of three stand-alone children's hospitals in the state of Florida. Since 2014, JHAC has averaged ~220 active prospective studies across a total of ~60 principal investigators (PIs) in child health and disease at any given time, and has devoted an institutionally supported total of 1.2 full-time equivalents (FTE) toward the ongoing implementation of the training curriculum and the internal monitoring effort around these studies.

Development and Implementation of Standard Operating Procedures (SOPs) and Associated Training Curriculum
The first step in the development of the training curriculum and monitoring program was the development and implementation of a cadre of SOPs (topics listed in Table 1) that are institutionally tailored to facilitate investigator, trainee, and clinical research professional adherence to institutional policies, federal guidelines, and international standards in clinical research. Next, we developed 3 curricular components for SOP training, each of which included both didactic and interactive components: Regulatory Affairs/Quality Assurance Rounds (bi-monthly); a "Brown-Bag" Training Series on SOPs (monthly); and a Training Mini-Retreat on Investigator Responsibilities in FDA-Regulated Drug and Device Trials (annually, by institute and department). The first component was initiated in 2014, the second in 2015, and the third in early 2016.

Table 3. Definitions of major and nonmajor violations*

Eligibility criteria enforcement and informed consent process documentation
Major violations:
- An enrolled study participant did not meet all eligibility criteria within the protocol-specified timeframe (or authorization for a protocol exception was not obtained from the sponsor and Institutional Review Board (IRB) before enrollment)
- Signed informed consent document not present in the study participant's research chart, or lacks a signature or date of signature before enrollment by the study participant or legally authorized representative, or by the appropriately authorized study team member who obtained informed consent
- Informed consent not obtained in a language comprehended by the study participant or legally authorized representative (or an appropriate IRB-approved translated form was not used)
- Informed consent document used was not the current IRB-approved version
- Re-consent of the study participant was not obtained in a circumstance and/or timeframe required by the IRB
Nonmajor violations:
- Eligibility source documentation missing on an enrolled study participant
- Informed consent document contains signatures and dates of signatures before enrollment from the study participant or legally authorized representative and the appropriately authorized study team member, but has errors in placement of signatures or has missing initials or dates
- Consent form does not bear a unique subject identifier label
- Signed informed consent document not scanned into the electronic health record, if the study participant is a patient of the health system

Study intervention and investigational drug/device accountability
Major violations:
- Study intervention (drug/device regimen) or related protocol-specified treatments not administered according to the study protocol, with regard to correct drug/device, dose, or route of administration, or not documented
- Systemic incompleteness or inaccuracies in documentation of the study intervention and/or related protocol-specified treatments
- Study intervention (drug or device regimen) not ordered by an appropriately credentialed clinician member of the study team
- Inadequately justified protocol deviation with regard to timing of the study intervention or related protocol-specified treatments
- Inadequately justified protocol deviation with regard to prohibited medications
- Systemic incompleteness or inaccuracies in documentation of chain of custody of the investigational drug/device (includes documentation of initial inventory relative to the shipping log, if supply was received from the sponsor/sponsor designee; also includes documentation of destruction of the investigational drug/device or its return to the sponsor/sponsor designee)
- Inadequately justified protocol deviation with regard to return/destruction of unused investigational product to the sponsor/sponsor designee
Nonmajor violations:
- Study intervention and related treatment documented, but documentation is incomplete, unless judged to be systemic in nature
- Protocol deviation involving timing of the study intervention or related protocol-specified treatments justified, but not documented/reported
- Protocol deviation involving prohibited medications justified, but not documented/reported
- Incomplete or inaccurate documentation of chain of custody of the investigational drug or device (includes documentation of initial inventory relative to the shipping log, if supply was received from the sponsor or sponsor designee; also includes documentation of destruction or return to the sponsor or sponsor designee), unless judged to be systemic in nature
- Protocol deviation involving return/destruction of unused investigational product to the sponsor/sponsor designee justified, but not documented/reported

* If the IRB-approved protocol specifies that a given adverse event (AE) is not reportable, then lack of documentation, determination, or reporting of that AE does not constitute a violation.
An internal monitoring program, codified in an additional SOP and associated tools, was designed and launched at the end of 2014. The internal monitoring procedure begins with the assessment of monitoring category (Table 2) and associated monitoring frequency and scope/intensity for a given study. This assessment is performed at the time of IRB approval, and is largely based on the IRB's risk categorization of the study. A corresponding Monitoring Plan is then drafted for the study, using a template provided as an appendix to the SOP on Internal Monitoring. The study's PI and the Chief Research Officer both review and sign off on the Plan, with copies provided to the primary clinical research coordinator (CRC) as well as to the regulatory research assistant, to whom responsibility is designated for filing the plan in the study's Regulatory Binder. For studies conducted under an Investigational New Drug application or Investigational Device Exemption, an additional monitoring visit is conducted by the auditing/monitoring staff of the Office of Human Subjects Research ("IRB office"), or via the CTRO-based internal monitoring program in the case of a trial that utilizes an external sIRB.
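The category-driven assignment of monitoring frequency and scope/intensity can be sketched as a simple lookup. The intervals and scopes below are illustrative placeholders only, not the actual values, which are defined in Table 2 and the SOP on Internal Monitoring:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitoringPlan:
    category: int          # study monitoring category (per Table 2)
    interval_months: int   # months between monitoring episodes (placeholder values)
    scope_days: int        # pre-specified scope of each episode, one to two days

# Hypothetical mapping for illustration; actual categories, frequencies,
# and intensities are specified in the institution's SOP.
PLAN_BY_CATEGORY = {
    1: MonitoringPlan(category=1, interval_months=24, scope_days=1),
    2: MonitoringPlan(category=2, interval_months=12, scope_days=1),
    3: MonitoringPlan(category=3, interval_months=6, scope_days=2),
}

def assign_plan(irb_risk_category: int) -> MonitoringPlan:
    """Draft a Monitoring Plan at the time of IRB approval, based on the
    IRB's risk categorization of the study."""
    return PLAN_BY_CATEGORY[irb_risk_category]
```

In practice the drafted plan is then reviewed and signed off by the PI and Chief Research Officer before being filed in the study's Regulatory Binder.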
Approximately three months before the first and each subsequent monitoring episode, the PI and CRC are notified via email that a monitoring episode is planned, and a period of one to two days (depending on pre-specified scope) is collaboratively scheduled during which the PI and/or CRC will be available to address any questions from the monitor during the monitoring process. Following completion of the monitoring episode and its write-up (using a standardized documentation tool provided as an appendix to the SOP on Internal Monitoring), a wrap-up meeting is conducted with the PI, CRC, monitor, Director of Research Operations, and Chief Research Officer, to review the observations and provide any necessary focused re-training for the PI and CRC on SOP components related to the nature of the violations observed. Any violations that meet requirements for reportability to the IRB are then duly reported to the IRB by the PI, with support from the CRC and regulatory research assistant.

Database Design, Data Collection, and Outcomes Analysis
In late 2014, just before the launch of the monitoring program, a database was designed on a web-based electronic data-capture system (REDCap) and implemented for capture of discrete data on the observations from each monitoring episode, for each study monitored. Data collection included (but was not limited to) the following: IRB approval number, investigator name, institute or department primary affiliation, therapeutic area, study monitoring category (see the Development and Implementation of Standard Operating Procedures (SOPs) and Associated Training Curriculum section and Table 2), and denominator and numerator data for each domain monitored (the number of research participants monitored for a given domain and the number of monitored participants for whom violations were found in that domain, respectively). Violations were further categorized as "major" versus "nonmajor," as shown in Table 3, modeled from the National Cancer Institute Cancer Therapy Evaluation Program criteria [5]. The 3 pre-specified domains for initial program evaluation were eligibility enforcement and informed consent process documentation; adverse event determination, documentation, and reporting; and investigational drug procedural and environmental controls and accountability. Statistical analyses compared frequencies of violations for a given domain between calendar years 2015 and 2016, using Yates' χ2 testing, corrected for continuity; in the case of cell values <5 in 2 × 2 tables, Fisher's exact test was instead employed. A p-value of <0.05 was established as the a priori threshold for statistical significance (i.e., the α level).
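The comparison rule just described (Yates' continuity-corrected χ2 test, with a fallback to Fisher's exact test when any cell of the 2 × 2 table is below 5) can be expressed as a standard-library-only sketch; the function names are ours for illustration, not part of the program's tooling:

```python
import math

def yates_chi2_p(a, b, c, d):
    """p-value for a 2x2 table [[a, b], [c, d]] under Yates'
    continuity-corrected chi-square test (1 degree of freedom)."""
    n = a + b + c + d
    num = n * max(0.0, abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    if den == 0:
        return 1.0
    chi2 = num / den
    # survival function of the chi-square distribution with 1 df
    return math.erfc(math.sqrt(chi2 / 2))

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test: sum the probabilities of all
    tables with the observed margins that are no more likely than
    the observed table."""
    def hyper(x):  # hypergeometric probability of upper-left cell = x
        return (math.comb(a + b, x) * math.comb(c + d, (a + c) - x)
                / math.comb(a + b + c + d, a + c))
    p_obs = hyper(a)
    lo = max(0, (a + c) - (c + d))
    hi = min(a + b, a + c)
    return sum(p for x in range(lo, hi + 1)
               if (p := hyper(x)) <= p_obs * (1 + 1e-9))

def compare_violation_rates(v1, n1, v2, n2):
    """Compare violation frequencies between two calendar years:
    v = participants with violations, n = participants monitored.
    Uses Yates' chi-square unless any cell is < 5, in which case
    Fisher's exact test is employed, per the rule above."""
    a, b, c, d = v1, n1 - v1, v2, n2 - v2
    if min(a, b, c, d) < 5:
        return fisher_exact_p(a, b, c, d)
    return yates_chi2_p(a, b, c, d)
```

For example, identical violation rates in both years yield p = 1.0 under the corrected χ2 test, and the resulting p-value is compared against the a priori α level of 0.05.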

Results
During the evaluation period of calendar years 2015 and 2016, 23 unique studies were internally monitored via the program, across 13 therapeutic areas. This represented the number of prospective studies that had undergone de novo IRB approval or approval of a change-in-research submission, and had reached the pre-specified interval(s) for monitoring. Fig. 1 provides a breakdown of studies monitored, by institute and department. Therapeutic areas represented by these studies included the following: bone marrow transplantation, n = 1; hemostasis and thrombosis, n = 2; hematological malignancies, n = 3; hernia repair, n = 1; central nervous system (CNS) tumors, n = 2; stroke, n = 1; neonatology, n = 3; appendicitis or cholecystitis, n = 2; chest wall deformity, n = 2; congenital or acquired heart disease, n = 1; immunodeficiency, n = 3; infectious disease, n = 1; and multidisease studies, n = 1.

Discussion
In this report, we have described the development, implementation, and outcome measurement of an innovative, institution-wide, paired training curriculum and internal monitoring program for clinical research regulatory compliance, as a major component of one pediatric academic health center's solution to the challenge of reducing systematic risks to clinical research participants, investigators, and institutions. The paired training curriculum and monitoring program reflect the institution's continual efforts to prioritize and optimize patient safety. We have measured a very low rate of major findings post-implementation, and demonstrated a meaningful decrease, over a short period of two years of monitoring, in the rates of nonmajor violations across each of three key domains of clinical research conduct: eligibility criteria enforcement and informed consent process documentation; adverse event determination, documentation, and reporting; and investigational drug procedural and environmental controls and accountability among clinical trials that involve investigational drugs. Given the relatively small number of monitoring episodes conducted to date, statistical significance was demonstrated only for the decline in adverse event violations, despite the substantive relative reductions in violations across all domains evaluated. Nevertheless, encouraged by these results, we are continuing to support this initiative, and as of mid-2017 we are implementing an additional monitoring visit systematically after the enrollment of the first patient in all category-2 and category-3 studies. This refinement to the monitoring program, we believe, will reduce the number of minor violations identified to date that pertain to missing or incomplete standardized documents specified in our SOPs, such as eligibility checklists and master adverse event logs.
As further data accrue from the monitoring program, we will seek to identify systemic trends that yield opportunities for additional refinements to the monitoring program, targeted institution-wide re-training on corresponding SOPs, and further optimization of SOPs, as warranted.
We believe that the training curriculum and monitoring program are scalable: our example of 1.2 FTE devoted to a program that involves, on average, 220 prospective studies among 60 investigators could be increased or decreased commensurately with the size of the clinical research faculty and prospective study portfolio of a given academic health center at a given time. Although a CTRO or similar centralized infrastructure for clinical research execution is neither necessary nor sufficient for the training curriculum and monitoring program to fulfill the mandate for enhanced clinical research oversight, it is our opinion (informed, in part, by experience) that the presence of a CTRO is a key facilitator of the achievement of metrics of success, such as those reported here. It is our aim to continue to monitor outcomes of the curriculum and program, and to continually refine the program as needed in order to optimize the metrics of success.

Acknowledgments
We are grateful to the children and parents who generously and courageously participate in clinical trials at JHAC and elsewhere, mostly for the prospect of benefit to future pediatric patients.