
Impact of financial compensation on enrollment and participation in a remote, mobile-app based research study

Published online by Cambridge University Press:  05 April 2024

Shelby Meier
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Alex Cheng
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA
Maeve Tischbein*
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Cathy Shyr
Affiliation:
Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA
Rebecca N. Jerome
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Terri L. Edwards
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Mary Stroud
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Consuelo H. Wilkins
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA; Office of Health Equity, Vanderbilt University Medical Center, Nashville, TN, USA
Paul A. Harris
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA
*
Corresponding author: M. Tischbein; Email: maeve.tischbein@vumc.org

Abstract

Background:

There is no consensus on how to determine appropriate financial compensation for research recruitment. Selecting incentive amounts that are reasonable and respectful, without undue inducement, remains challenging. Previously, we demonstrated that incentive amount significantly impacts participants’ willingness to complete various hypothetical research activities. Here we further explore this relationship in a mock decentralized study.

Methods:

Adult ResearchMatch volunteers were invited to join a prospective study in which interested individuals could view the study details and participation requirements and were then offered a randomly generated compensation amount between $0 and $50 to enroll and participate. Individuals agreeing to participate were asked to complete tasks using a remote mobile application (MyCap) for two weeks. Tasks included a weekly survey, a daily gratitude journal, and a daily phone-tapping task.

Results:

Willingness to participate was 85% across all incentive levels and was not significantly impacted by amount. Task completion appeared to increase as a function of compensation until reaching a plateau at $25. While participants described the study as low burden and reported that compensation was moderately important to their decision to join, only 31% completed all study tasks.

Conclusion:

While offering compensation in this study did not have a strong effect on enrollment rate, this work provides insight into participant motivation when joining and participating in studies employing mobile applications.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

The collective success of healthcare research efforts in the United States (U.S.) relies on the ability of research teams to recruit and retain participants in studies. Numerous recruitment strategies are described in the literature [1–5]. The use of financial incentives, both as an approach for compensating participant time and effort and as a show of respect for participants' contribution to healthcare research, is one mechanism that receives considerable interest as a means of improving recruitment [6–8]. While some researchers have explored the interaction between demographic factors, such as income or race and ethnicity, and compensation preferences [9–11], there remains a need for further exploration of this issue, including variability in responses and preferences among different demographic groups. There is also no clear consensus regarding the best approach for determining financial compensation. This lack of guidance and the nascent evidence base present a challenge to researchers seeking to determine respectful incentives that improve study enrollment, engagement, and retention but do not provide undue inducement [9–11].

The Recruitment Innovation Center (RIC), funded by the National Center for Advancing Translational Sciences (NCATS), develops evidence-based recruitment and retention solutions to improve the quality of clinical trials. Previously, we evaluated the relationship between financial incentive amount and hypothetical willingness to participate in various research scenarios [12]. We determined that willingness to participate was positively correlated with compensation amount and that higher-burden tasks generally required higher compensation amounts. While our previous study effectively queried participants about their willingness to complete a variety of research tasks, it was limited in that all scenarios were hypothetical. Additionally, participant follow-through and actual performance of the presented study tasks were not assessed.

The current work expands on our previous efforts by again assessing the relationship between compensation amount and willingness to join a research study, as in our original work, and by adding a new dimension: participant adherence to study tasks within a decentralized mock study.

Methods

Study population

Participants were recruited from ResearchMatch, a national, nonprofit, volunteer-to-researcher matching platform that includes more than 150,000 volunteers [13]. Individuals aged 18+ years with no reported health conditions (i.e., "healthy individuals") were invited to join. The racial and ethnic enrollment goal for this study was based on the makeup of the U.S. population (59% White, 14% Black or African American or African, 19% Hispanic or Latino, 6% Asian, and 2% American Indian or Alaska Native) [14]. To ensure a racially and ethnically diverse sample, the demographic makeup of respondents was reviewed after each wave of study invitations was sent. Imbalances in enrollment of underrepresented groups were iteratively targeted in subsequent invitation waves (Appendix Table 1).

Study design

This study was approved by the Vanderbilt Institutional Review Board as exempt research (IRB #221043). Participants used MyCap [15], a participant-facing data collection mobile application that securely transmits data to and from REDCap [16,17], to perform study tasks. We implemented study tasks of varying frequency and type for participants to complete: a weekly Gratitude Adjective Checklist [18], a daily gratitude journal, and a daily tapping task [15]. Figure 1 details the participant flow for this study. Volunteers who responded positively to the study invitation were immediately redirected to our REDCap-based survey for enrollment, which included a brief demographic questionnaire that queried age group, gender identity, race and ethnicity, educational level, employment status, and annual household income. All respondents who completed this questionnaire were included in the denominator for the participant enrollment rate.

Figure 1. Participant flow for the study. (A) Schematic showing participant experience from invitation to study completion. The random compensation offer was generated after participant demographics were collected (dotted box outline) and was shown to volunteers at the same time as the study description. (B) Enrollment flow from invitation to enrollment and downloading the study app, MyCap.

Respondents were then provided an e-Consent form describing the study and a randomly generated promised compensation amount between $0 and $50, in increments of $5. Respondents were informed that they would need to complete all tasks in order to receive compensation. After reviewing the e-Consent form and compensation amount, respondents were asked whether or not they would join the study. The randomly generated amount communicated to participants was an IRB-approved deception, as all participants agreeing to participate in the study and download MyCap were compensated equally at the end of the study ($50). As it was important to know whether the amount of money offered impacted decisions to join and adhere to the study, volunteers consented to join a study with a stated purpose of understanding "how paying people for being in a study affects their willingness to join and their participation throughout the study" and were told that the study contained elements of deception, which would be revealed upon completion. After the study, participants received an email explaining the nature of the deception.

For those agreeing to join, a custom REDCap external module was used to randomize participants to compensation amounts stratified by gender (woman, man, and non-woman and non-man identities), race/ethnicity (Black, White, and non-Black and non-White racial and ethnic identities) and income (<$65,000/year, ≥ $65,000/year, no answer). These groupings were determined by our study team to ensure randomization was relatively balanced across these demographic categories.
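The randomization itself was handled by the custom REDCap external module, whose internals are not described here. As a rough illustration only, the R sketch below (all column names and example rows are hypothetical) shows one way to assign the $0–$50 offers in $5 increments using permuted blocks within each gender × race/ethnicity × income stratum, which keeps offer amounts roughly balanced across the demographic groupings.

```r
# Illustrative sketch only: the study used a custom REDCap external module,
# not this code. All column names and example values are hypothetical.
set.seed(42)

amounts <- seq(0, 50, by = 5)  # the 11 price points: $0, $5, ..., $50

# Example respondent table (hypothetical)
participants <- data.frame(
  id     = 1:6,
  gender = c("woman", "man", "woman", "other", "man", "woman"),
  race   = c("Black", "White", "other", "White", "Black", "White"),
  income = c("<65k", ">=65k", "no answer", "<65k", ">=65k", "<65k")
)

# Permuted-block assignment within a stratum: cycle through a shuffled
# copy of the 11 amounts so no amount repeats before all have been used.
assign_in_stratum <- function(n) {
  rep(sample(amounts), length.out = n)
}

strata <- interaction(participants$gender, participants$race, participants$income)
participants$offer <- ave(seq_len(nrow(participants)), strata,
                          FUN = function(i) assign_in_stratum(length(i)))
participants
```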

Upon indicating they would like to join and participate in the decentralized data collection study, participants were asked to download MyCap and complete study tasks in this app over a 14-day period. Participants were then asked to complete an optional survey on their experience one week after the 14-day task period. This follow-up survey explored perceived study burden, the impact of compensation on their decision to join the study, whether the amount of compensation offered was believed to be fair, and, if not, what amount they thought to be fair. Qualitative questions about the MyCap app were also asked.

To gain a better understanding of the reasons people opted NOT to join the study, we asked volunteers who declined the study to share their reasoning. Respondents who did not join the data collection study were offered the opportunity to enter a drawing for a $50 gift card.

Data analysis

We sought to assess the impact of offered compensation on participant willingness to join (herein referred to as enrollment for the purposes of this mock study) using logistic regression with enrollment in the study as the outcome. While other logistic regression studies follow a "10 to 1" rule, where 10 samples are needed for every independent variable in the regression, we were more conservative and aimed to recruit 15 participants per degree of freedom [19]. With 11 price points ($0–$50 in $5 increments), three race categories, four age categories, and income as an ordinal variable, we sought at least 300 ResearchMatch respondents (i.e., volunteers who provided demographic information, read the study description and randomly generated compensation offer, and indicated whether they wanted to participate in the study; Aim 1 in Fig. 1). The number of respondents was not capped at 300; study invitations were sent in waves, and enrollment concluded following the wave of invitations in which 300 participants were obtained. The primary null hypothesis of this aim was that there is no statistically significant association between the offered financial incentive and willingness to participate in the study.
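As a minimal sketch of this primary analysis, assuming a data frame respondents with one row per volunteer and hypothetical column names (enrolled coded 0/1, offer in dollars, plus the demographic covariates), the model and Wald test might look like this in R:

```r
# Hedged sketch of the Aim 1 model; `respondents` and all column names are
# assumptions, not the study's actual code or data dictionary.
fit <- glm(enrolled ~ offer + race + age + income,
           family = binomial, data = respondents)

# Wald test for the offer coefficient: under the null hypothesis, the
# promised amount has no association with willingness to enroll.
summary(fit)$coefficients["offer", ]  # estimate, SE, z value, Pr(>|z|)
```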

For assessing the impact of compensation offered on dataset completeness (i.e., participants downloading MyCap and taking part in study tasks; Aim 2 in Fig. 1A), the contents of participant responses for each study task were not analyzed; rather, we recorded only the presence or absence of a response. A 2-sample test for equality of proportions with continuity correction was used to compare the proportion of participants who said yes to the study invitation and downloaded MyCap in each race/ethnicity category against White participants.
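For illustration, base R's prop.test performs this comparison with continuity correction; the counts below are made up for the sketch, not the study's actual tallies.

```r
# Hypothetical counts: respondents who said yes AND downloaded MyCap (x)
# out of all who said yes (n), for one race/ethnicity group vs. White.
x <- c(asian = 26, white = 140)
n <- c(asian = 34, white = 255)
prop.test(x, n)  # 2-sample test for equality of proportions, continuity-corrected
```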

Loess curves were used to visualize the effect of incentive amount on study participation rate and dataset completeness. We ran a logistic regression to determine whether any factors contributed significantly to the participation rate. We used a one-sided, two-sample test for equality of proportions with continuity correction to compare the proportion of tasks completed between two compensation amounts. To assess the effect of study compensation across task types (daily vs. weekly study tasks), we used a logistic mixed-effects model with a random intercept. Specifically, we regressed task completion (yes/no) on compensation amount, task type (daily/weekly frequency), and an interaction term between compensation amount and task type. To analyze retention among participants who agreed to join the study, we plotted Kaplan-Meier curves for the last day on which each participant completed any study task, by compensation amount [20,21]. All statistical analyses were run using R version 4.3.0, with data pulled directly through the REDCap Application Programming Interface.
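A sketch of the task-level and retention analyses under assumed data layouts is shown below: tasks holds one row per participant-task and retention one row per participant; all object and column names are hypothetical, and lme4/survival are standard CRAN packages rather than the study's confirmed toolchain.

```r
library(lme4)      # glmer: logistic mixed-effects model
library(survival)  # Surv/survfit/survdiff: Kaplan-Meier retention curves

# Task completion regressed on compensation, task type, and their
# interaction, with a random intercept per participant.
# tasks: done (0/1), offer ($), type ("daily"/"weekly"), id (participant).
mm <- glmer(done ~ offer * type + (1 | id), family = binomial, data = tasks)
summary(mm)

# Retention: last study day (1-14) with any recorded activity. Every
# participant is coded as an event since the observation window is fixed.
# retention: last_day (1-14), offer ($).
km <- survfit(Surv(last_day, rep(1, nrow(retention))) ~ offer, data = retention)
plot(km, xlab = "Study day", ylab = "Proportion still active")
survdiff(Surv(last_day, rep(1, nrow(retention))) ~ offer, data = retention)  # log-rank test
```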

Results

We sent a total of 9,236 invitations to ResearchMatch volunteers and received 492 expressions of interest. Of those interested, 413 enrolled (i.e., said "yes" to joining) in the study (Fig. 1). One participant withdrew after enrollment; no reason for withdrawal was given. Of the 412 remaining enrolled participants, 286 downloaded the required MyCap study app. A higher proportion of Black (65%, 95% CI 60%–88%) and Asian (76%, 95% CI 55%–74%) respondents downloaded MyCap relative to White participants (55%, 95% CI 49%–61%); however, only the latter comparison reached statistical significance (p = 0.03). Table 1 summarizes participant demographic data.

Table 1. Demographics of participants. Self-reported characteristics of ResearchMatch volunteers that responded to the study invitation, participated in or declined participation.

* Numbers do not tally as respondents were able to select all categories that they felt applied to them.

For all price points, the enrollment rate remained high (∼80%–90%; Fig. 2A) with a high degree of overlap in the 95% confidence intervals, and there was insufficient evidence for a statistically significant relationship between enrollment and compensation (p > 0.05 based on a Wald test). In the logistic regression, no factors, including age, race, income, and promised compensation amount, were statistically significant contributors to participants' decisions to join our study.

Figure 2. Participant enrollment and adherence to mock study tasks. (A) Rate of participant enrollment in the study with the line representing the Loess curve for rate of enrollment by promised compensation amount for the whole study group. Dots are mean values at each promised compensation amount, and bars are 95% Wilson confidence intervals. n = 486 (as enrollment rate was calculated from those who said yes OR no). (B) Mean task completion; red line representing the Loess curve of total task completion by compensation amount, n = 286. Bars depicting the 95% Wilson confidence interval. (C) Proportion of participants that did or did not complete ALL study tasks for each promised compensation amount, n = 286.

Between $0 and $25, task completion increased from 60% to 80%, and this difference was statistically significant (p < 2e-16 based on a two-sample proportion test). For promised compensation offers greater than $25, task completion plateaued around 80% (Fig. 2B). When tasks were separated into weekly (gratitude adjective checklist) and daily (tapping task and gratitude journal entries) types, the effect of compensation did not differ significantly between them (p = 0.09). Overall, 31% of participants who agreed to participate completed all study tasks. Only one incentive amount ($40) had more than 50% of participants complete all tasks (54%; Fig. 2C). When evaluating participant retention (defined as the last day of recorded study activity) using Kaplan-Meier curves, we observed that retention was not equal between compensation groups (p = 0.0019).

Post-study participant perspectives

After the 14-day study period, all participants were invited to complete a brief questionnaire about their study experience. We focused on participant perceptions of burden, the importance of compensation, and additional motivating factors related to enrollment in this study. Of the 286 participants who enrolled and downloaded MyCap, 265 responded to this optional questionnaire. The majority of participants (n = 193, 73%) found the study to be low burden (rated ≤ 30 on a slider scale from 0 to 100) (Fig. 3A). Even for this subjectively rated "low burden" study, compensation was of moderate importance in the decision-making process (rated between 31 and 69 on a slider scale from 0 to 100) (Fig. 3A). While most study participants believed that their compensation offer was fair, a small number of participants (n = 16, data not shown) disagreed. All participants who believed their offer was unfair were asked to suggest a fair compensation amount for the study. The suggested amounts ranged from $15 to $200, with an average of $80.96 (Fig. 3B); one participant said any amount other than $0 would be fair. Participants were also asked to share any motivating factors that did not involve compensation. A desire to contribute to greater scientific knowledge and to help researchers understand the importance of compensation in clinical trials were the most frequently selected factors (Fig. 3C). Thirty-five of the 265 respondents indicated that compensation was the only factor contributing to their decision.

Figure 3. Participant experience survey responses. (A) Box and whisker plot for responses around perceived study burden and importance of compensation when joining this study, n = 265. (B) Box and whisker plot of compensation amounts suggested by participants that believed their initial compensation offer was unfair, n = 13 (3 respondents did not provide a suggested compensation amount). Dots are individual response values. (C) Additional reasons participants chose to be in this study (participants could select more than one answer when responding), n = 265.

Respondents who did not complete all steps to join the study or actively declined participation

Of the 492 expressions of interest in our study from the initial ResearchMatch invitees, 126 said yes to joining the study but did not download the required MyCap study app, and 50 actively declined participation by selecting "no" when asked if they would join the study (Fig. 1B). Table 1 summarizes the demographics of those who actively declined participation.

We sent a brief survey to the 126 participants who did not download MyCap and received 25 responses (20%). This follow-up survey focused on perceived obstacles around downloading the app. For 54% of these respondents, having to download the app was itself the main reason they did not continue with the study. Participants who said downloading the app was NOT the main reason for their discontinuation reported additional obstacles, such as forgetting about the study and technical difficulties (data not shown).

We asked the 50 respondents who actively declined participation to share their reasoning; all of them completed this optional follow-up. The most frequently selected reasons for actively declining were related to compensation amount, participant burden, and not wanting to download an app (Fig. 4A). We further investigated the compensation amounts offered to respondents who had indicated "Compensation offer wasn't high enough." The amounts offered at enrollment varied, but the majority had received offers of $15 or less. The proportion of those who received offers of $15 or less and said no (15/50; 30%) was similar to the proportion of those who received offers of $15 or less and said yes (94/286; 33%). We also asked these respondents what compensation amount would have been acceptable and found the mean suggested amount to be $60 (Fig. 4B). Respondents who selected "Other" were asked to clarify their reason in a free-response text box. A common theme within those explanations was a dislike of the study's use of deception (the study description shown in the e-Consent document told participants that the study contained elements of deception, that those elements did not influence the study activities or risks, and that they would be revealed after study completion).

Figure 4. Reasons for study declination by participants. (A) Responses to the question “Would you please share any reasons why you didn't want to join the study?” from respondents that turned down the study, n = 50. Respondents could select all options that applied to them. (B) Compensation amount suggestions from respondents that believed their compensation offer was not high enough, n = 19.

Discussion

Summary of study findings

In this study, we built upon previous work [12] by exploring the potential correlation between promised compensation amount and participant willingness to join a research study, as well as adherence to a study task schedule. Contrary to our expectations, the level of compensation did not have a significant effect on enrollment over the range tested. Future iterations of this study may increase the range of compensation or decrease the workload of the study to determine whether there is more differentiation in enrollment rates by compensation amount. We also observed that as the promised compensation amount increased, the number of overall tasks completed appeared to likewise increase until participants were offered $25 or more (Fig. 2B,C). Though participants subjectively reported the mock study activities to be low burden, no promised compensation amount resulted in more than 54% of participants completing all study tasks, which is relatively concordant with the 44%–46% completion rates reported for other online studies [22,23]. For researchers seeking to determine the "right" level of compensation or an estimate of participant engagement at a given compensation amount, this evidence-based approach using research participants as key informants may be a useful strategy.

Comparison of our findings to previous work

In comparison to our previous work, where participation rates increased with larger amounts of compensation for nearly all hypothetically proposed study tasks [12], our key finding here differs: there was no significant effect of compensation amount on participant willingness to join. Both studies recruited from ResearchMatch, but there were notable differences between the investigations. The original study focused entirely on hypothetical scenarios; participants were never asked to actually complete any tasks, but rather to consider completing a single task (e.g., Would you keep a daily record of how much water you drink for one week and discuss it with clinic staff for $X?). Focusing on a single task, rather than multiple activities within a study, could allow participants to consider their decision more clearly without having to weigh multiple options. Also, the tasks presented in the original study were a mix of remote and "in-person" study activities. For some participants, having to travel for a study visit could be a major burden, and the amount of compensation promised may have figured more prominently in their decision. The current study was reported as generally low burden by participants, and the compensation offered may have had less of an effect, potentially because there were few perceived obstacles to joining. For both studies, we recognize that the hypothetical or mock research tasks examined may not directly relate to a given participant's health or healthcare, and that participation or completion of study tasks may differ when volunteers are asked to report data that are more sensitive or of greater personal relevance. We may expand use of this platform to additional research applications and scenarios in the future to further add to our understanding of participant compensation expectations across diverse study requirements.

Research studies commonly compensate participants for various research-related tasks, but the appropriateness of compensation amounts remains a topic of debate. A meta-analysis of the effect of compensation on enrollment in randomized clinical trials showed that offering compensation significantly increased the rates of response and consent [7]. Additionally, other investigators have reported compensation as a motivating factor for participants, but not in a way that suggested undue influence or inducement [24]. The RETAIN study [25], led by investigators at the University of Pennsylvania, found that compensation significantly increased enrollment rates for a smoking cessation trial, but not for an ambulation intervention trial. In both trials, there was no evidence that compensation offers produced undue influence, even with offers up to $500 (smoking cessation trial) and $300 (ambulation trial). They concluded that studies offering compensation are not unethical, but that the effects of incentives on enrollment may not be consistent across all clinical trials [9]. In contrast to these findings, our data showed only a slight, nonsignificant, positive slope between promised compensation and enrollment rate. However, participants' responses in the post-study experience survey showed that, subjectively, compensation was of moderate importance to participants. This suggests that, at least in this study design with this population, the amount of compensation may have mattered but did not have a major impact on a participant's willingness to enroll. This finding is in line with the conclusions from the RETAIN study: the effects of compensation may vary between trials. Taken together, these data suggest that the specific amount offered to a participant does not need to be exactly "perfect," but that the act of offering some level of compensation is, itself, critical. This is supported in the literature, especially in studies where participants are expected to face co-payments or other obstacles to participation [7,26,27], and as an approach for demonstrating respect for and the value of participants in the study [28]. While our evidence-based findings can help inform compensation decisions in clinical trial design, based on our broader experience through the RIC we recommend that study teams use Participant/Patient Advisory Groups to directly ask participants about adequate compensation for their specific study whenever possible.

Potential limitations

The study population was sourced from ResearchMatch, and we recognize there is likely some degree of self-selection among the registry volunteers who were willing to participate in this study. Consistent with the ResearchMatch population, the study cohort is highly educated (>82% have at least some college education) and employed full or part time (>60%). However, participant responses to our invitations skewed younger (majority <49 years of age), a trend that was further enhanced among those participants who proceeded to download MyCap. Further, ResearchMatch volunteers have already self-selected for interest in research by joining this community, and thus are likely to have a history of research participation and associated positive attitudes. Overall, these characteristics may somewhat limit the generalizability of our results to a more heterogeneous population. Moreover, this study was only available in English. We acknowledge the need for a multimodal and multilingual approach to participant recruitment in order to mitigate the selection bias inherent to any single strategy. Future efforts will seek to include populations more representative of the general public (e.g., the Cint database [29]) as well as populations outside of research registries (which may more accurately reflect the attitudes of the general American population).

For this study, participants were told they would receive a random amount between $0 and $50 for participating when, in actuality, all who enrolled and downloaded MyCap received a $50 gift card. This "deception" method ensured ethical responsibility by compensating all participants equally for the same amount of participation. From a budgeting standpoint, the need to compensate every participant at the highest amount prevented us from testing a wider range of values (for example, $0–$100), over which we might have been able to detect a difference in enrollment rate. Deception was not a major reason endorsed by those declining the study, possibly due to the research-minded disposition of the ResearchMatch population and the low-risk nature of the study. Such research-mindedness may have also contributed to the lack of differences in enrollment based on compensation in this study. It is possible that the deception had the opposite effect, artificially elevating the enrollment rate for lower dollar amounts, as volunteers considering this study about study compensation may have suspected that they would get the full $50 regardless of what was offered in the consent form. However, we acknowledge that deception could be triggering for people from marginalized backgrounds who have been historically exploited, including through undisclosed and harmful deception in past research [30,31]. Participant concerns around deception remain a general and important consideration in the design of future and/or replication studies, especially among populations where privacy and/or trust are of known concern.

By utilizing the MyCap study app, we were able to conduct this study in a fully remote environment. Since the onset of the COVID-19 pandemic, the literature indicates a growing number of studies incorporating remote aspects [32–35]. While there is evidence that remote data collection reduces the burden for participants [36] and makes studies more accessible [37], this does not mean there are no obstacles or challenges for study teams to consider when designing a remote trial. As demonstrated by this study, one of the top-reported reasons for study declination was "didn't want to download app to mobile device," and, among those who responded to the study invite but did not download MyCap, having to download the app was the main reason for not continuing with the study. Though mediation of app-related study declination was not examined here, considering a participant's willingness to download a study app and to overcome technical difficulties, as well as providing clear instructions (i.e., an infographic, a step-by-step instruction handout with images, or a short video), is in line with the findings and experience of the RIC.

This study did not investigate the effects of prorating payments or of other engagement/retention strategies (such as reminders or gamification) that could impact a participant's decision to enroll. Prorating payments (i.e., paying participants in small increments as they complete tasks, rather than one lump sum at the end of the study) is recommended to encourage participants to complete checkpoints, especially in longer studies, and to mitigate costs participants incur through their decision to remain in the study [38,39]. Our study was relatively short and rated by participants as low burden, so it is possible that prorating payments would not have had any effect. Additionally, our study had a fairly low rate of study declination (∼10%), and participant-provided reasons for turning down the study indicated that it was the amount rather than the timing of the payment that mattered. Early engagement strategies, such as building trust, improving participant comprehension of the study, and appropriately framing risks and benefits, have been shown to have a significantly positive effect on recruitment in some studies [40]. Our study relied heavily on previous work done by the ResearchMatch group to establish trust with our participants. While not a part of this study, we drew upon the experience of the RIC to build trust by making the study easy to understand and by engaging our Community Advisory Board on the presentation and readability of the e-Consent document used. Additional studies could further explore how best to communicate elements of deception within a study without eroding the trust that is already built.

Conclusions

Together, this study supports compensation as an important factor that participants consider when choosing to enroll, while suggesting that the specific amount offered may matter less than the act of offering compensation itself.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.515.

Acknowledgments

We appreciate the thoughtful insights and feedback shared by the Recruitment Innovation Center Community Advisory Board around use of specific language and readability of the consent document used for this study.

We appreciate the thoughtful insight and guidance around presentation of collected demographic information by the Recruitment Innovation Center’s Diversity, Equity, and Inclusion Workgroup.

We appreciate the guidance and feedback around appropriateness of analytical approaches and presentation of data shared during the Vanderbilt Biostatistics Clinic, specifically from Cass Johnson, Jackson Resser, and Frank Harrell, PhD.

Author contributions

The authors confirm the contribution to the paper as follows: Study conception and design (SM, AC, RJ, TE, MS, CW, PH), data collection (AC, SM, MT), analysis and interpretation of results (SM, AC, MT, CS, RJ, TE, MS, CW, PH), draft manuscript preparation (SM, AC, MT, CS, RJ, TE, MS, CW, PH). SM takes responsibility for the manuscript as a whole.

Funding statement

This project was supported by award no. U24TR001579 and U24TR004432 from the National Center for Advancing Translational Sciences (NCATS) and the National Library of Medicine (NLM). Its contents are solely the responsibility of the authors and do not necessarily represent official views of NCATS, NLM, or the National Institutes of Health.

Competing interests

None.

References

1. Huynh L, Johns B, Liu SH, Vedula SS, Li T, Puhan MA. Cost-effectiveness of health research study participant recruitment strategies: a systematic review. Clin Trials Lond Engl. 2014;11(5):576–583. doi: 10.1177/1740774514540371.
2. Andrews L, Davies TH. Participant recruitment and retention from vulnerable populations in clinical trials is a matter of trust. Contemp Clin Trials. 2022;123:106969. doi: 10.1016/j.cct.2022.106969.
3. Darko EM, Kleib M, Olson J. Social media use for research participant recruitment: integrative literature review. J Med Internet Res. 2022;24(8):e38015. doi: 10.2196/38015.
4. Raven-Gregg T, Wood F, Shepherd V. Effectiveness of participant recruitment strategies for critical care trials: a systematic review and narrative synthesis. Clin Trials Lond Engl. 2021;18(4):436–448. doi: 10.1177/1740774520988678.
5. Gardner HR, Albarquoni L, El Feky A, Gillies K, Treweek S. A systematic review of non-randomised evaluations of strategies to improve participant recruitment to randomised controlled trials. F1000Research. 2020;9:86. doi: 10.12688/f1000research.22182.1.
6. Dunn LB, Kim DS, Fellows IE, Palmer BW. Worth the risk? Relationship of incentives to risk and benefit perceptions and willingness to participate in schizophrenia research. Schizophr Bull. 2009;35(4):730–737. doi: 10.1093/schbul/sbn003.
7. Abdelazeem B, Abbas KS, Amin MA, et al. The effectiveness of incentives for research participation: a systematic review and meta-analysis of randomized controlled trials. PLoS One. 2022;17(4):e0267534. doi: 10.1371/journal.pone.0267534.
8. Permuth-Wey J, Borenstein AR. Financial remuneration for clinical and behavioral research participation: ethical and practical considerations. Ann Epidemiol. 2009;19(4):280–285. doi: 10.1016/j.annepidem.2009.01.004.
9. Halpern SD, Chowdhury M, Bayes B, et al. Effectiveness and ethics of incentives for research participation. JAMA Intern Med. 2021;181(11):1479–1488. doi: 10.1001/jamainternmed.2021.5450.
10. Kalbaugh CA, Kalbaugh JM, McManus L, Fisher JA. Healthy volunteers in US phase I clinical trials: sociodemographic characteristics and participation over time. PLoS One. 2021;16(9):e0256994. doi: 10.1371/journal.pone.0256994.
11. Kurt A, Kincaid H, Semler L, et al. Impact of race versus education and race versus income on patients' motivation to participate in clinical trials. J Racial Ethn Health Disparities. 2018;5(5):1042–1051. doi: 10.1007/s40615-017-0452-z.
12. Bickman L, Domenico HJ, Byrne DW, et al. Effects of financial incentives on volunteering for clinical trials: a randomized vignette experiment. Contemp Clin Trials. 2021;110:106584. doi: 10.1016/j.cct.2021.106584.
13. Harris PA, Scott KW, Lebo L, Hassan N, Lightner C, Pulley J. ResearchMatch: a national registry to recruit volunteers for clinical research. Acad Med J Assoc Am Med Coll. 2012;87(1):66–73. doi: 10.1097/ACM.0b013e31823ab7d2.
14. U.S. Census Bureau QuickFacts: United States. https://www.census.gov/quickfacts/fact/table/US/PST045222. Accessed May 16, 2023.
15. Harris PA, Swafford J, Serdoz ES, et al. MyCap: a flexible and configurable platform for mobilizing the participant voice. JAMIA Open. 2022;5(2):ooac047. doi: 10.1093/jamiaopen/ooac047.
16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381. doi: 10.1016/j.jbi.2008.08.010.
17. Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208. doi: 10.1016/j.jbi.2019.103208.
18. McCullough ME, Emmons RA, Tsang JA. The grateful disposition: a conceptual and empirical topography. J Pers Soc Psychol. 2002;82(1):112–127. doi: 10.1037//0022-3514.82.1.112.
19. A simulation study of the number of events per variable in logistic regression analysis. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0895435696002363?via%3Dihub. Accessed February 19, 2024.
20. Li SX, Halabi R, Selvarajan R, et al. Recruitment and retention in remote research: learnings from a large, decentralized real-world study. JMIR Form Res. 2022;6(11):e40765. doi: 10.2196/40765.
21. Pratap A, Neto EC, Snyder P, et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. Npj Digit Med. 2020;3(1):1–10. doi: 10.1038/s41746-020-0224-8.
22. Wu MJ, Zhao K, Fils-Aime F. Response rates of online surveys in published research: a meta-analysis. Comput Hum Behav Rep. 2022;7:100206. doi: 10.1016/j.chbr.2022.100206.
23. Meyer VM, Benjamens S, Moumni ME, Lange JFM, Pol RA. Global overview of response rates in patient and health care professional surveys in surgery: a systematic review. Ann Surg. 2022;275(1):e75–e81. doi: 10.1097/SLA.0000000000004078.
24. Largent EA, Eriksen W, Barg FK, Greysen SR, Halpern SD. Perspectives on payment for research participation: a qualitative study. Ethics Hum Res. 2022;44(6):14–22. doi: 10.1002/eahr.500147.
25. Krutsinger DC, McMahon J, Stephens-Shields AJ, et al. Randomized evaluation of trial acceptability by INcentive (RETAIN): study protocol for two embedded randomized controlled trials. Contemp Clin Trials. 2019;76:1–8. doi: 10.1016/j.cct.2018.11.007.
26. Parkinson B, Meacock R, Sutton M, et al. Designing and using incentives to support recruitment and retention in clinical trials: a scoping review and a checklist for design. Trials. 2019;20(1):624. doi: 10.1186/s13063-019-3710-z.
27. Groth SW. Honorarium or coercion: use of incentives for participants in clinical research. J N Y State Nurses Assoc. 2010;41(1):11–13. https://pubmed.ncbi.nlm.nih.gov/20882821/
28. Grady C. Money for research participation: does it jeopardize informed consent? Am J Bioeth. 2001;1(2):40–44. doi: 10.1162/152651601300169031.
29. Cint | Digital Insights Gathering Platform. https://www.cint.com/. Accessed September 1, 2023.
30. Smirnoff M, Wilets I, Ragin DF, et al. A paradigm for understanding trust and mistrust in medical research: the community VOICES study. AJOB Empir Bioeth. 2018;9(1):39–47. doi: 10.1080/23294515.2018.1432718.
31. Scharff DP, Mathews KJ, Jackson P, Hoffsuemmer J, Martin E, Edwards D. More than Tuskegee: understanding mistrust about research participation. J Health Care Poor Underserved. 2010;21(3):879–897. doi: 10.1353/hpu.0.0323.
32. Brody AA, Convery KA, Kline DM, Fink RM, Fischer SM. Transitioning to remote recruitment and intervention: a tale of two palliative care research studies enrolling underserved populations during COVID-19. J Pain Symptom Manage. 2022;63(1):151–159. doi: 10.1016/j.jpainsymman.2021.06.017.
33. Chatters R, Cooper CL, O'Cathain A, et al. Learning from COVID-19 related trial adaptations to inform efficient trial design: a sequential mixed methods study. BMC Med Res Methodol. 2022;22(1):128. doi: 10.1186/s12874-022-01609-6.
34. Naz-McLean S, Kim A, Zimmer A, et al. Feasibility and lessons learned on remote trial implementation from testBoston, a fully remote, longitudinal, large-scale COVID-19 surveillance study. PLoS One. 2022;17(6):e0269127. doi: 10.1371/journal.pone.0269127.
35. Pritchett JC, Patt D, Thanarajasingam G, Schuster A, Snyder C. Patient-reported outcomes, digital health, and the quest to improve health equity. Am Soc Clin Oncol Educ Book. 2023;43:e390678. doi: 10.1200/EDBK_390678.
36. Hensen B, Mackworth-Young CRS, Simwinga M, et al. Remote data collection for public health research in a COVID-19 era: ethical implications, challenges and opportunities. Health Policy Plan. 2021;36(3):360–368. doi: 10.1093/heapol/czaa158.
37. Tiersma K, Reichman M, Popok PJ, et al. The strategies for quantitative and qualitative remote data collection: lessons from the COVID-19 pandemic. JMIR Form Res. 2022;6(4):e30055. doi: 10.2196/30055.
38. Grimwade O, Savulescu J, Giubilini A, et al. Payment in challenge studies: ethics, attitudes and a new payment for risk model. J Med Ethics. 2020;46(12):815–826. doi: 10.1136/medethics-2020-106438.
39. Office for Human Research Protections (OHRP). Attachment A - Addressing Ethical Concerns Regarding Offers of Payment to Research Participants. HHS.gov. Published September 30, 2019. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-a-september-30-2019/index.html. Accessed July 17, 2023.
40. Wong CA, Song WB, Jiao M, et al. Strategies for research participant engagement: a synthetic review and conceptual framework. Clin Trials. 2021;18(4):457–465. doi: 10.1177/17407745211011068.
