
Indices of clinical research coordinators’ competence

Published online by Cambridge University Press:  24 July 2019

Carlton A. Hornung*
Affiliation:
Consortium of Academic Programs in Clinical Research Department of Medicine, University of Louisville School of Medicine, Louisville, KY, USA
Phillip A. Ianni
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Carolynn T. Jones
Affiliation:
College of Nursing, The Ohio State University, Columbus, OH, USA
Elias M. Samuels
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA
Vicki L. Ellingrod
Affiliation:
Michigan Institute for Clinical and Health Research, University of Michigan, Ann Arbor, MI, USA College of Pharmacy, University of Michigan, Ann Arbor, MI, USA
*
Address for correspondence: C. A. Hornung, PhD, MPH, Department of Medicine, University of Louisville, 18613 John Connor Rd., Cornelius, NC 28031, USA. Email: CAHornung@Louisville.edu

Abstract

Introduction:

There is a clear need to educate and train the clinical research workforce to conduct scientifically sound clinical research. Meeting this need requires tools both to assess an individual’s preparedness to function effectively in the clinical research enterprise and to evaluate the quality and effectiveness of the programs that educate and train clinical research professionals. Here we report the development and validation of a competency self-assessment, the Competency Index for Clinical Research Professionals, version II (CICRP-II).

Methods:

CICRP-II was developed using data collected from clinical research coordinators (CRCs) participating in the “Development, Implementation and Assessment of Novel Training In Domain-Based Competencies” (DIAMOND) project at four Clinical and Translational Science Award (CTSA) hubs and partnering institutions.

Results:

An exploratory factor analysis (EFA) identified a two-factor structure: the first factor measures self-reported competence to perform Routine clinical research functions (e.g., applying Good Clinical Practice (GCP) regulations), while the second factor measures competence to perform Advanced clinical functions (e.g., global regulatory affairs). We demonstrate between-groups validity by comparing CRCs working in different research settings.
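The factor-retention step behind an EFA and the scree plot in Fig. 1 can be illustrated with a minimal sketch. The data below are simulated, not the DIAMOND survey responses, and the 12/8 item split is hypothetical; the sketch only shows how eigenvalues of the item correlation matrix reveal a two-factor structure under the Kaiser (eigenvalue > 1) rule.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 95 respondents rating 20 items that load on two latent factors
# (hypothetical stand-ins for the "Routine" and "Advanced" competence factors).
routine = rng.normal(0, 1, (95, 1))
advanced = rng.normal(0, 1, (95, 1))
items = np.hstack([
    routine + rng.normal(0, 0.5, (95, 12)),   # 12 "Routine"-loading items
    advanced + rng.normal(0, 0.5, (95, 8)),   # 8 "Advanced"-loading items
])

# Eigenvalues of the item correlation matrix are what a scree plot displays;
# the Kaiser rule retains factors whose eigenvalue exceeds 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int(np.sum(eigenvalues > 1))
print(n_factors)
```

With two strong simulated factors, two eigenvalues dominate and the remaining eigenvalues fall well below 1, mirroring the two-factor solution reported for CICRP-II.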

Discussion:

CICRP-II has excellent psychometric properties and distinguishes experienced CRCs at research-intensive CTSA hubs from CRCs working in less research-intensive, community-based sites. These qualities, together with the simplicity of alternative methods for scoring respondents, make it a valuable tool both for gauging an individual’s perceived preparedness to function in the CRC role and for evaluating the quality and effectiveness of clinical research education and training programs.
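The simple scoring methods referred to above can be sketched as follows. The ratings and the 12/8 subscale split are hypothetical, not the published item assignment; the sketch shows two common unit-weighted approaches (subscale sum and subscale mean), which are linear transforms of each other and therefore rank respondents identically.

```python
import numpy as np

# Hypothetical 1-5 self-ratings for one respondent on a 20-item index,
# split into illustrative Routine and Advanced subscales.
ratings = np.array([4, 5, 3, 4, 4, 5, 4, 3, 5, 4, 4, 5,   # Routine items
                    2, 3, 2, 1, 3, 2, 2, 3])              # Advanced items
routine, advanced = ratings[:12], ratings[12:]

# Unit-weighted scoring: every item counts equally, no factor loadings needed.
scores = {
    "routine_sum": int(routine.sum()),
    "routine_mean": round(float(routine.mean()), 2),
    "advanced_sum": int(advanced.sum()),
    "advanced_mean": round(float(advanced.mean()), 2),
}
print(scores)
```

Mean scoring keeps both subscales on the original 1-5 rating metric even though the subscales have different numbers of items, which makes Routine and Advanced scores directly comparable.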

Information

Type
Research Article
Creative Commons
CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Association for Clinical and Translational Science 2019
Fig. 1. Scree plot of the eigenvalues. Abbreviation: CICRP-II, Competency Index for Clinical Research Professionals, version II.

Table 1. Twenty CICRP items administered to DIAMOND CTSA sites (N = 95)

Table 2. Correlations between factors with alternative scoring methods (DIAMOND data; N = 95)

Table 3. Statistical characteristics with alternative scoring methods (DIAMOND data; N = 95)

Table 4. Characteristics of CRCs in the JTF and DIAMOND survey data

Table 5. Self-assessed competency of CRCs on CICRP-I and CICRP-II [JTF (N = 81) and DIAMOND surveys (N = 95)]