A promising approach to assessing research impact draws on the Translational Science Benefits Model (TSBM), an evaluation model that tracks the applied benefits of research in four domains: Clinical and Medical; Community and Public Health; Economic; and Policy and Legislative. However, no standardized methods currently exist to verify TSBM benefit data or to aggregate impact data into quantitative summaries.
Methods:
A panel of 11 topic experts participated in a modified Delphi process to establish the content and face validity of a set of criteria for verifying qualitative TSBM data. Panelists completed two survey rounds, with a moderated discussion between rounds to address criteria that had not reached consensus. Criteria reaching panel consensus at or above 70% in the survey rounds were confirmed as validated.
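To make the consensus rule concrete, here is a minimal sketch of the per-criterion tally described above. The 70% threshold comes from the text; the data, function, and variable names are hypothetical, not from the study.

# Sketch of the per-criterion consensus check; all names are illustrative.
CONSENSUS_THRESHOLD = 0.70  # 70% agreement, per the Delphi protocol above

def consensus_reached(responses):
    """Return True if the share of 'agree' votes meets the threshold."""
    agree = sum(1 for r in responses if r == "agree")
    return agree / len(responses) >= CONSENSUS_THRESHOLD

# Hypothetical votes from an 11-member panel on two candidate criteria
votes = {
    "criterion_A": ["agree"] * 9 + ["disagree"] * 2,  # 9/11 = 82% -> validated
    "criterion_B": ["agree"] * 7 + ["disagree"] * 4,  # 7/11 = 64% -> discuss in next round
}

validated = {c for c, r in votes.items() if consensus_reached(r)}
needs_discussion = set(votes) - validated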
Results:
Criteria fell into nine categories: Content Relevant, Project Related, Who, Reach, What, How, Novel, Documented Evidence, and When. The Delphi process yielded 197 criteria in total across the 30 benefits characterized by the TSBM (range = 5–8 criteria per benefit).
Discussion:
The results of this Delphi process lay the foundation for developing a TSBM coding tool for evaluating and quantifying TSBM data. Standardizing this process will enable data aggregation, group analysis, and the comparison of research impact across contexts.
Although organizational climate may affect faculty members’ mentoring behaviors, no instrument has existed to measure that climate. The purpose of this study was to test the reliability and validity of two novel scales measuring the importance and availability of organizational mentoring climate at two public research universities.
Methods:
We developed 36 content-valid mentoring climate items in four dimensions: Structure, Programs/Activities, Policies/Guidelines, and Values. In total, 355 faculty completed an anonymous, structured, online survey asking about the importance (very important to very unimportant) and availability (no, don’t know, yes) of each of the items. We conducted reliability analyses and construct validity testing using exploratory common factor analysis, principal axis factoring, and oblique rotation.
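As an illustration of the analyses named above, here is a minimal sketch assuming the third-party factor_analyzer package is available. The 355 × 36 shape mirrors the survey, but the responses are random placeholders; all variable names are hypothetical.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party EFA package (assumed available)

def standardized_alpha(items: pd.DataFrame) -> float:
    """Standardized Cronbach's alpha from the mean inter-item correlation."""
    k = items.shape[1]
    corr = items.corr().to_numpy()
    mean_r = corr[np.triu_indices(k, k=1)].mean()
    return k * mean_r / (1 + (k - 1) * mean_r)

# Placeholder data: 355 respondents x 36 Likert-type items (random, illustrative only)
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(355, 36)))

alpha = standardized_alpha(responses)

# Exploratory common factor analysis: principal axis factoring with an oblique rotation
fa = FactorAnalyzer(n_factors=3, rotation="oblimin", method="principal")
fa.fit(responses)
loadings = fa.loadings_  # item-factor loadings used to judge the factor structure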
Results:
The majority of the predominantly female, White non-Hispanic, senior, tenure-track faculty were not currently mentoring another faculty member or being mentored themselves. Analyses yielded a 15-item solution for both the Organizational Mentoring Climate Importance (OMCI) and Availability (OMCA) Scales, with three factors each: Organizational Expectations, Mentor–Mentee Relationships, and Resources. Standardized Cronbach alphas ranged from 0.74 to 0.90 for the subscales and were 0.94 (OMCI) and 0.87 (OMCA) for the full scales. Faculty rated all items as somewhat to very important; however, perceived availability was very low, ranging from 40% for mentor training programs to 2.5% for guidelines on evaluating mentoring success or managing conflict.
Conclusions:
The scales will allow studying of how organizational climate may affect mentoring behavior and whether climate can be changed to improve faculty mentoring outcomes. We provide recommendations for furthering the science of organizational mentoring climate and culture.
Although the science of team science is no longer a new field, the measurement of team science and its standardization remain in relatively early stages of development. To describe the current state of team science assessment, we conducted an integrative review of measures of research collaboration quality and outcomes.
Methods:
Collaboration measures were identified through both a keyword-based literature review and an environmental scan. Raters abstracted details about each measure using a standard tool. Measures related to collaborations in clinical care, education, and program delivery were excluded from this review.
Results:
We identified 44 measures of research collaboration quality, 35 of which reported reliability and some form of statistical validity. Most scales focused on group dynamics. We identified 89 measures of research collaboration outcomes; 16 reported reliability and 15 reported a validity statistic. Outcome measures often consisted only of simple counts of products, and publications rarely defined how counts were delimited, obtained, or assessed for reliability. Most measures were tested in only one venue.
Conclusions:
Although models of collaboration have been developed, strong, reliable, and valid measures of such collaborations have generally not been developed or accepted into practice. This limitation makes it difficult to compare the characteristics and impacts of research teams across studies or to identify the most important areas for intervention. To advance the science of team science, we provide recommendations regarding the development and psychometric testing of measures of collaboration quality and outcomes that can be replicated and broadly applied across studies.
We describe the effectiveness of community outreach and engagement in supporting recruitment for the US National Children’s Vanguard Study between 2009 and 2012.
Methods:
Thirty-seven study locations used 1 of 4 strategies to recruit women aged 18–49 years who were pregnant or trying to conceive: (1) the Initial Vanguard Study used household-based recruitment; (2) Direct Outreach emphasized self-referral; (3) Enhanced Household-Based Recruitment built on Initial Vanguard Study strategies; and (4) Provider-Based Recruitment recruited through healthcare providers. Outreach and engagement included advance letters, interactions with healthcare providers, participation in community events, contacts with community organizations, and media outreach.
Results:
After 1–2 years, 41%–74% of 9844 study-eligible women had heard about the National Children’s Vanguard Study when first approached. Women who had heard of the study were 1.5–3 times more likely to consent. Hearing via word of mouth or the media most frequently predicted consent. The more sources a woman had heard from, the higher her odds of consent.
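The result above is an odds-based comparison; a minimal sketch of one way to estimate it is shown below, assuming a logistic regression in statsmodels. The simulated data and variable names are placeholders, not the study's records.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 9844  # study-eligible women, per the abstract

heard = rng.integers(0, 2, n)              # 1 = heard of the study before first approach
n_sources = heard * rng.integers(1, 4, n)  # number of sources heard from (0 if not heard)
consent = rng.integers(0, 2, n)            # placeholder outcome; real values would come from study records

# Regress consent on prior awareness and number of sources
X = sm.add_constant(np.column_stack([heard, n_sources]))
fit = sm.Logit(consent, X).fit(disp=0)
odds_ratios = np.exp(fit.params)  # exponentiated coefficients give odds ratios, e.g., for 'heard'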
Conclusions:
We conclude that tailored outreach and engagement facilitate recruitment in cohort studies.