
Extending participant feedback beyond clinical studies: A modular system designed to connect researchers and participants

Published online by Cambridge University Press:  03 November 2025

Alicia Giordimaina Carmichael*
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
Donna Walter
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
Brandon Patric Labbree
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
Boluwatife Dogari
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
Natalie Leonard
Affiliation:
Taubman College of Architecture and Urban Planning, University of Michigan, Ann Arbor, MI, USA
Kathryn Ward
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA School of Social Work, University of Michigan, Ann Arbor, MI, USA
Xiaoya Geng
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA School of Social Work, University of Michigan, Ann Arbor, MI, USA
Medha Raju
Affiliation:
Anderson School of Management, University of California, Los Angeles, CA, USA
Jess Francis-Levin
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
Richard Gonzalez
Affiliation:
Institute for Social Research, University of Michigan, Ann Arbor, MI, USA
*
Corresponding author: A.G. Carmichael; Email: almagior@umich.edu

Abstract

Introduction:

Declining participant engagement threatens human subjects research. Participant feedback systems (PFS) may combat this decline by empowering participants to evaluate their research experiences and share that feedback with researchers to identify targets for improvement. PFS signal that participant experiences are prioritized, making the request for feedback itself an intervention. PFS design work remains largely confined to clinical research. This exploratory study investigates the design parameters of extending PFS to nonclinical research. We conducted focus groups with nonclinical stakeholders: experienced research participants (ERP) and research team members (RTM).

Methods:

ERP focus groups were organized by affinity (LGBTQIA+, BIPOC, persons with disabilities, neurodivergent, and a general group). RTM focus groups were organized by unit within the University of Michigan. Transcripts were analyzed using inductive thematic analysis.

Results:

Ten focus groups (ERP: 5, n = 25; RTM: 5, n = 26) identified key PFS design considerations: (1) motivations for feedback, (2) feedback collection, and (3) feedback delivery. ERP and RTM collectively preferred anonymous web-based surveys with six potential topic areas: communication, respect, being valued, receiving value, burden, and safety. Feedback delivery faced two key design tensions: balancing institutional standardization with study-specific insights and aligning leadership’s preference for high-level summaries with frontline staff’s need for detailed, real-time feedback.

Conclusion:

Expanding PFS to nonclinical research requires balancing centralization and study-specific flexibility. While centralization enhances consistency, the diversity of nonclinical studies necessitates adaptable implementation. A hybrid model is proposed to optimize feasibility. Future research should refine and test this model.

Information

Type
Research Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

Contemporary research ethics recognizes research participants as partners in knowledge creation [Reference Frank, Morton and Guise1]. This view positions engagement in research not as a one-time transaction, but as a potentially sustained and evolving partnership. Participant research engagement operates across both within-study domains (e.g., co-design [Reference Key, Furr-Holden and Lewis2], enrollment, main study activities, study exit) and between-study domains (e.g., evaluating new enrollment opportunities, maintaining contact). Framing participant research engagement as a long-term behavior (analogous to civic participation, health behaviors, or consumer behaviors) invites insights from broader theoretical frameworks.

Promoting sustained participant engagement is essential to the research enterprise, yet evidence suggests participant engagement is waning across disciplines. Although comprehensive data covering all forms of engagement are limited, two critical indicators show significant strain: recruitment and retention. Over 76% of clinical trials experience delays or discontinuation due to poor enrollment [Reference Fogel3,Reference Briel, Olu and von Elm4]. Outside clinical trials, robust case tracking in national surveys reveals similarly troubling trends [Reference Galea and Tracy5–7]. The Behavioral Risk Factor Surveillance System response rate fell from 71.4% in 1993 [8] to 44.6% in 2023 [9], the Health and Retirement Study declined from 81.6% in 1992 to 73.9% in 2020 [10], and the General Social Survey dropped from 82.4% in 1993 to 59.5% in 2018 [Reference Smith, Davern, Freese and Morgan11]. These declines persist despite well-funded mitigation efforts [Reference Groves and Heeringa12–Reference Vayena, Blasimme and Sugarman16], and they appear to be compounded by declining public confidence in science and scientists [Reference Lupia, Allison, Jamieson, Heimberg, Skipper and Wolf17]. Together, these indicators point to deeper relationship problems between researchers and the broader public.

Leeper offers a compelling interpretation of this erosion of engagement as a common pool resource problem [Reference Leeper18]. Just as overfishing depletes a pond, researchers who recruit without regard for collective impact exhaust the pool of willing participants. Stewardship of this resource is necessary. Yet this metaphor has limits. People are not passive resources to be harvested. Reframing participants as active contributors shifts the focus: knowledge itself becomes the shared resource, and scientific research is a form of collective action that produces this public good [Reference Ostrom19]. Participants contribute as “experts by experience” and researchers as “experts by training” [Reference Cheffey, Hill, McCullough and McCullough20]. Both framings underscore a common point: the sustainability of research engagement hinges on the quality of researcher–participant interactions.

Growing evidence supports this view. Participants’ past research experiences can shape their future engagement decisions, influencing both individual choices and broader community trust [Reference Price, Park and Brenner21Reference Svendsen, Veseth and McKay23]. Yet most research teams have few mechanisms for learning from participant experiences. The absence of structured feedback channels not only limits opportunities for improvement but may also reinforce the very disengagement that institutions hope to address.

Participant Feedback Systems (PFS) offer a promising solution. PFS are structured mechanisms that allow participants to evaluate their research experiences and share that feedback with researchers and institutions. Feedback may inform a range of improvements, from staff training and protocol refinement to institutional policy decisions. Moreover, research on consumer behavior suggests that the very act of requesting or providing feedback can positively affect future engagement, known as the “mere solicitation” and “mere measurement” effects [Reference Bone, Lemon and Voorhees24].

The clinical research field has led the way in developing PFS, most notably through Rhonda Kost’s pioneering Empowering the Participant Voice (EPV) project. EPV has improved participant satisfaction, enrollment, and response rates across several institutions [Reference Kost, Cheng and Andrews25]. Yet despite these promising developments in clinical trials, PFS have not meaningfully extended into nonclinical human subjects research – but why not?

Part of the challenge is conceptual. Clinical research benefits from the alignment between “patient” and “participant” roles, making it easier to adapt preexisting healthcare feedback models. In contrast, social scientists interact with feedback systems that engage different audiences, such as teaching evaluations, tenure reviews, and employee performance evaluations. These forms of feedback offer less intuitive mental “stepping stones” for PFS development. Structural diversity is another barrier. EPV succeeds in part because clinical trials share common rhythms that support standardization. Nonclinical research spans a broader range of methodologies, populations, and procedures, complicating efforts to design one-size-fits-all tools.

To address this gap, we conducted an exploratory study to inform a PFS model for nonclinical human subjects research. Drawing on Kost et al.’s stakeholder-driven approach, we conducted focus groups with two stakeholder groups: experienced research participants (ERPs) from nonclinical studies and nonclinical research team members (RTMs). We used both direct inquiry (asking attendees about preferences for feedback design) and indirect inquiry (inviting them to describe their own positive and negative research experiences). This indirect method proved especially valuable in ERP groups: abstract questions about feedback often yielded limited responses, while personal stories surfaced implicit preferences and rich contextual insights. Together, these approaches allowed us to identify core content domains for participant feedback and clarify practical PFS design tensions. The discussion section proposes an initial framework for extending PFS beyond clinical trials.

Materials and methods

Study design and recruitment

We conducted separate focus groups with two stakeholder groups, ERP and RTM. The University of Michigan (U-M) IRB-HSBS determined that the study was exempt from ongoing oversight (HUM00247823). All attendees received an Information Sheet aligned with standard informed consent.

A participant-centered approach guided session design, similar in spirit to patient-centered approaches [26]. Groups were held in person or remotely via Zoom; hybrid sessions included technical support and assistant moderators. Recruitment materials described PFS only in vague terms to avoid biasing the discussion. The moderators began each session with this framing: “The goal of the [PFS] is to improve the research experiences of all individuals and communities, to build trust, and create pathways for communicating with research teams.”

Attendees engaged in three moderator-led activities: (1) stakeholder mapping to identify those with a vested interest in PFS, (2) research experience journey mapping activities, and (3) open-ended discussion of opportunities to provide/gather feedback, good/bad research experiences (ERP only), topic areas and use of feedback, and feedback delivery structure (RTM only). Pretesting showed that asking directly what a PFS should include often yielded limited responses. In contrast, prompting reflection on personal experiences produced richer input, and thematic analysis of those reflections revealed relevant content areas for feedback instruments. This indirect approach was used in the final protocol. Journey maps were constructed as multi-layered visual aids (see Fig S1), annotated and referenced during discussion to ground input in real-world experiences. Whiteboards (digital or physical) were used to organize ideas.

Sessions were audio and video recorded for transcription. Attendees also completed a brief online follow-up survey (≈ 5 minutes) to report their demographics and provide additional insights (Table S2). Differences between ERP and RTM focus groups are addressed next.

Experienced research participants (ERP)

ERP focus groups were conducted March–July 2024 (average duration: 1.5 hours; compensation: $50). ERP were recruited through the U-M Health Research Volunteer Portal [27] and an additional U-M participant registry, using purposive sampling to amplify historically marginalized voices. Eligibility criteria included: (1) age ≥ 18, (2) comfort communicating in written and spoken English, (3) residence within driving distance of Ann Arbor, MI, and (4) enrollment in a nonclinical U-M research study within the past 12 months.

Sessions were organized into affinity groups (AG) to amplify underrepresented voices: (1) the LGBTQIA+ community, (2) Black, Indigenous, and persons of color (BIPOC), (3) persons with disabilities, (4) the neurodivergent community, and (5) a general group open to all. To account for intersectionality, participants self-selected into focus groups based on identity and scheduling availability. AGs provided space to raise identity-specific concerns; the general group allowed for cross-cutting input. Future research could incorporate additional perspectives, such as those of caregivers and rural residents.

Research team members (RTM)

RTM focus groups were conducted March–August 2024 (average duration: 1 hour; compensation: $30). Eligible RTM (1) were aged 18 or older, (2) worked on a nonclinical human subjects research team at U-M (Ann Arbor), (3) currently interacted with research participants or expected to within six months, and (4) belonged to a team with at least two members.

Sessions were organized around five U-M research entities: the School of Social Work, the School of Public Health, the Ross Business School, the University of Michigan Transportation Research Institute, and the Institute for Social Research. These teams used diverse methodologies (quantitative, qualitative, and mixed) and engaged a wide range of populations in both experimental and observational studies. Each enrolled team identified up to six members for the focus group, aiming to include a range of research roles, from principal investigators (PIs) to undergraduate research assistants.

Analysis

Transcripts were analyzed using inductive thematic analysis [Reference Braun and Clarke28]. ERP and RTM perspectives were compared to examine how lived experience (ERP) and research role (RTM) shaped design tensions. Recurring themes were iteratively refined; illustrative quotes appear in the results. ERP quotes include demographic context (e.g., AG membership), while RTM quotes only specify team roles to minimize reidentification risk in a small population.

Results

Findings are presented in four sections: (1) attendee demographics; (2) existing feedback practices and motivations; (3) feedback collection from research participants; and (4) feedback delivery to research teams. Existing practices and motivations are included as contextualizing factors to establish a baseline and clarify what each stakeholder group hopes to gain from the feedback process. Feedback collection (including content domains) and delivery are addressed because they represent core design decisions in building a PFS for nonclinical research.

Demographics

Ten focus groups were conducted, five for RTM (n = 26) and five for ERP (n = 25). Demographics are summarized in Table S3. While modest in size, the samples likely reached thematic saturation [Reference Guest, Bunce and Johnson29]. However, see Limitations and Future Directions for a discussion of how to expand on this work.

Research teams varied widely in composition. RTM self-identified one or more team roles, with 9 (39%) reporting as PIs, 8 (35%) as Research Assistants or Associates, 2 (8.7%) as Participant Coordinators (PC), and 2 (8.7%) as Data Managers or Analysts. Additional roles accounted for approximately 22% of responses, including Lab Managers (LM), Lab Directors (LD), Project Managers/Coordinators (PM), Technicians, Interviewers, and Research Leads.

ERP session attendance was as follows: LGBTQIA+ (n = 3), BIPOC (n = 6), Persons with Disabilities (n = 6), Neurodivergent (n = 5), and a general group (n = 5). Due to intersectionality, the number of participants in each focus group does not directly reflect total AG representation (Table S3). For example, although LGBTQIA+ focus group attendance was small, six ERP in total identified as LGBTQIA+. Quotes are tagged by self-identified AG, not focus group assignment.

Contextual factors–existing feedback practices and motivations

Formal feedback practices were largely absent. Only one research team reported using a structured PFS, typically a single open-ended item appended to a research survey. RTM cited time constraints and limited technical support as barriers to implementation. ERP also reported a near-total absence of formal feedback opportunities. None reported encountering formal PFS, though some had offered unsolicited input to RTM. Instead, some described sharing their opinions through personal networks, affecting broader perceptions of research:

No one really asked for feedback, so. [long pause] And I may have been informally in my social circle coming home saying “this study was good” and “this study was less than quality.”

- 68, Woman, White; AG: None Identified

Three primary motivations for providing/soliciting feedback emerged from thematic analysis: (A) Refining Protocols and Training to Enhance Participant Experience, (B) Increasing Repeat Participation and Reducing Attrition, and (C) Producing Feasibility Evidence (Table 1). Understandably, (B) and (C) were only motivators to RTM.

Table 1. Motivations for gathering/providing feedback – themes and exemplars

Abbreviations - AG: Affinity Group; BIPOC: Black, Indigenous, Persons of Color; ERP: Experienced Research Participants; RTM: Research Team Members; PC: Participant Coordinator; PI: Principal Investigator; PM: Project Manager.

The introductory framing used in the focus groups (“to improve experiences, build trust, and create communication pathways”) may have influenced how some attendees articulated their motivations. Attendees expanded on this framework, offering examples of what PFS could accomplish. For example, both groups linked feedback to the goal of refining protocols and training to improve participant experiences (Table 1, Theme A). ERP also emphasized that improvements should be transparently shared with participants to demonstrate that their input was valued and acted upon (e.g., Table 1, Theme A, ERP Quote 1).

Feedback collection from participants

Preferred form

Few ERP felt comfortable giving feedback directly to RTM; most preferred anonymous methods to encourage candid responses, especially web-based surveys, with some interest in SMS surveys. RTM proposed strategies for distributing survey links, including automatic redirection after a main research web survey, links in thank-you emails, or QR codes printed on debriefing handouts, displayed on TV screens at community meetings, or sent via postal mail. They noted that accessibility and participant preferences would likely vary across different populations and research contexts. A minimal sketch of two of these distribution channels appears below.
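The following sketch is purely illustrative of the distribution strategies RTM described (a post-survey redirect link and a printed QR code); the base URL, study code, and file name are hypothetical placeholders, and the example assumes the open-source Python qrcode package is available.

```python
# Illustrative sketch (hypothetical URLs and identifiers), assuming the
# "qrcode" package is installed: build an anonymous feedback link that can
# serve as a redirect after a main web survey or as a QR code on a
# debriefing handout.
from urllib.parse import urlencode

import qrcode  # pip install qrcode[pil]

FEEDBACK_BASE_URL = "https://feedback.example.edu/survey"  # placeholder


def feedback_link(study_code: str) -> str:
    """Build an anonymous feedback link tagged only with a study code."""
    return f"{FEEDBACK_BASE_URL}?{urlencode({'study': study_code})}"


if __name__ == "__main__":
    url = feedback_link("STUDY-042")
    print("Redirect participants here after the main survey:", url)
    # Save a QR code image suitable for a printed debriefing handout.
    qrcode.make(url).save("feedback_qr.png")
```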

Core content areas

This section outlines core content areas identified for inclusion in a feedback survey. As described previously, we used both direct and indirect inquiry to explore feedback content: attendees were asked about their preferred survey topics, and ERP were invited to reflect on their positive and negative research experiences. The indirect approach proved especially productive; most input that shaped these content areas came from spontaneous reflections during journey mapping and related discussion (Fig. S1; Table S2).

Six key content areas emerged: (A) Communication, (B) Respect, (C) Being Valued, (D) Receiving Value, (E) Burden, and (F) Safety and Security (Table 2). Exemplar quotes in Table 2 are drawn exclusively from ERP to center participant perspectives. While RTM also contributed to theme development, their sessions generated many protocol-specific content areas for feedback that were not broadly generalizable, such as questions about satisfaction with waiting rooms, whether training on study-specific technology should occur in person vs. self-directed, the learning value of participation for student subject pools, or, in the following example, a question about who is providing relief to a caregiver so that the caregiver can attend a research appointment:

Table 2. Core survey content areas – themes and exemplars

Abbreviations - AG: Affinity Group; BIPOC: Black, Indigenous, Persons of Color; LGBTQ+: Lesbian, Gay, Bisexual, Transgender, Queer, and other identities.

PI-1: A consideration is caregivers are busy giving care. So, and can they leave their care partner for two hours, for like to get to [redacted: the study site] to do an in-person session? […] [overlapping speech]-

PI-2 (to PI-1): -[overlapping speech] I don’t know if you might find it useful to know from your [PI-1’s] participants, who’s taking care of the person that they’re used to taking care of while they’re in your study?

The need to accommodate such hyper-specificity through customization is addressed later.

Content theme A – communication

The theme “communication” (A) encompasses the quality and transparency of communication between participants and research teams throughout the research process. Attendees noted that the communication dynamics significantly influenced the overall research experience. RTM highlighted recurring mismatches between participants’ expectations and actual experiences. ERP similarly reported that good communication helped them feel informed and prepared, whereas poor communication left them feeling unsettled and disengaged.

Content theme B – respect

The “respect” (B) theme refers to the recognition and consideration of participants’ identities, needs, and contributions throughout the research process. ERP desired respect to be shown through both words and deeds. Expressions of gratitude were important for individuals whose participation was affected by chronic illness or disability. Respect also included acknowledging personal information shared beforehand (e.g., body size) and being prepared to recognize and accommodate diverse body types, gender identities, neurotypes, and disabilities.

Content theme C – being valued

The “being valued” (C) theme refers to participants feeling that their individual contributions are recognized as meaningful and not interchangeable or perfunctory. This theme connects to communication and respect, but emphasizes reciprocity, making participants feel like active partners in discovery rather than passive subjects. For example, ERP felt valued when RTM practiced active listening and connected their individual contributions to broader societal impacts.

Content theme D – receiving value

The “receiving value” (D) theme refers to participants feeling that they gain something worthwhile from their involvement in research. While financial incentives were a key extrinsic motivator, ERP emphasized that value extends well beyond payment. Compensation was seen as a baseline: necessary but not sufficient for a positive experience. ERP also identified intrinsic motivations for participating in research, including social engagement, contributing to a greater purpose, and satisfying curiosity. Understanding the full spectrum of value participants seek, both extrinsic and intrinsic, is necessary to capture whether research teams are delivering this value.

Content theme E – burden

The “burden” (E) theme relates to the resource strain experienced by participants during the research process, whether physical, emotional, cognitive, or financial. ERP experienced overburden in various ways, often arising from boundaries being overstepped or unmet expectations regarding effort, discomfort, and time commitments. The repetitive nature of study tasks also contributed to fatigue and disengagement. Perceptions of burden were highly individual, and participants noted that accommodations from researchers could alleviate these challenges. Clear communication of needs and expectations helped reduce participants’ burden and demonstrated respect for their time.

Content theme F – safety and security

The “safety & security” (F) theme encompasses participants’ sense of physical, psychological, and informational protection throughout the research process. Data security concerns were prominent. ERP expressed significant worry about the potential misuse or breach of their personal and demographic information, including fears about data being “sold” or shared without consent. Although consent forms attempt to address such concerns, feedback surveys could assess the effectiveness of these trust-building measures.

Other considerations for feedback collection

Other key design elements of the feedback survey included transparency, autonomy, the balance between open-ended and closed-ended questions, compensation, and the inclusion of vulnerable populations.

Transparency and autonomy

ERP wanted transparency regarding who would receive their feedback and the purpose of the information collected. To tailor their feedback, ERP emphasized the need to know whether responses would be shared only with researchers they interacted with or distributed more widely. ERP also emphasized the importance of understanding why certain questions (particularly demographic ones) were included and how their answers would be utilized. ERP were mistrustful when a question’s relevance was unclear, particularly regarding sensitive topics like income. The inclusion of demographic questions in a feedback survey was also controversial among RTM; some viewed them as unnecessary, while others considered them essential for identifying disparities in research experiences.

Even when information-gathering intent was clear, ERP valued their autonomy to skip questions. When clear options to bypass sensitive questions were unavailable, some ERP abandoned surveys entirely. This created frustration, as ERP believed abandonment meant their partial responses would not be used.

Balance of open-ended vs. closed-ended questions

The balance between open-ended and closed-ended questions sparked mixed reactions among both RTM and ERP. Some ERP found excessive open-ended questions burdensome, while others vastly preferred them, feeling that scale-based questions diminished the value of their feedback or made the process perfunctory. RTM reactions also varied, often splitting along leadership versus operational roles as discussed later in “Feedback Delivery to Researchers.”

Compensation for feedback

Both RTM and ERP expressed that compensation for feedback was ideal from the perspectives of fairness and honoring participants’ labor value. However, offering material incentives was controversial for several reasons. RTM’s concerns included compensation potentially biasing feedback, fostering transactional relationships, encouraging superficial responses, or creating inequities in underfunded research projects where teams might not be able to afford compensation. RTM also raised legal and logistical challenges; tracking and reporting participant feedback incentives for tax and accounting purposes could compromise the perceived confidentiality (see Discussion for proposed solutions).

Vulnerable populations

Two of the five participating research teams routinely worked with vulnerable populations: minors under the age of 18 and individuals with cognitive decline. These teams recognized the value of gathering feedback directly from these groups, but struggled with ethical and regulatory challenges. These challenges were compounded by standard consent procedures, such as requiring signatures from parents or legally authorized representatives, which conflicted with the goal of maintaining anonymous feedback. These examples underscore the need for deeper reflection on whose voices PFS may miss. While no solutions emerged from the focus groups, raising awareness is a crucial step in expanding access (e.g., caregiver proxies, prospective permission from caregivers to solicit feedback from minors, and capacity-adapted tools).

Feedback delivery to researchers

Only RTM focus groups explored how participant feedback should be delivered to researchers. Preferences varied based on team roles, research methodologies, and participant populations.

Data access and format

RTMs had differing views on data access and reporting. While all desired some form of feedback summary, some also valued the ability to download raw data for independent analysis. Preferences differed regarding feedback types: staff with direct participant contact and supervisors found qualitative feedback more actionable, although they noted that such content could be overwhelming or conflicting without summarization. Other RTMs, especially those in leadership positions, preferred Likert scales for tracking trends. All agreed on the need for secure sign-on, update notifications, and role-based restrictions to protect both data and team members. Attendees were also sensitive to the need for a moderator to ensure feedback was delivered constructively. A sketch of how these role-based views might be rendered follows.
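To make the role-based preferences concrete, the hypothetical sketch below (the data structures and example records are ours, not part of the study) renders the same pool of feedback two ways: an aggregated scale summary for leadership and verbatim open-ended comments for frontline staff.

```python
# Hypothetical sketch of role-based feedback views: leadership sees
# aggregated scale scores per content domain; frontline staff see
# verbatim open-ended comments.
from dataclasses import dataclass
from statistics import mean


@dataclass
class FeedbackRecord:
    domain: str          # e.g., "Communication", "Respect", "Burden"
    rating: int          # 1-5 closed-ended response
    comment: str = ""    # optional open-ended response


def leadership_view(records: list[FeedbackRecord]) -> dict[str, float]:
    """High-level summary: mean rating per content domain."""
    domains = {r.domain for r in records}
    return {d: round(mean(r.rating for r in records if r.domain == d), 2)
            for d in domains}


def frontline_view(records: list[FeedbackRecord]) -> list[str]:
    """Detailed view: all non-empty open-ended comments."""
    return [f"[{r.domain}] {r.comment}" for r in records if r.comment]


records = [
    FeedbackRecord("Communication", 4, "Reminder emails were clear."),
    FeedbackRecord("Burden", 2, "The session ran well over the stated time."),
]
print(leadership_view(records))   # e.g., {'Communication': 4.0, 'Burden': 2.0}
print(frontline_view(records))
```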

Delivery frequency and summary level

Preferences for feedback frequency and format varied by role. Most PIs preferred periodic, high-level summaries, whereas staff with day-to-day study responsibilities emphasized the need for immediacy and detail:

PI-1: …It could be a monthly newsletter, I mean, that one’s the easiest one.

PI-2: Maybe quarterly, too. I feel like that’s how long a lot of our studies run.

LD: But if there’s something alarming, though, we want to know. I mean, if [participants] say something, and they’re assuming that we’re going to see that feedback [we don’t want to] see it in three months.
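One way to reconcile periodic digests with the demand for immediate awareness of alarming feedback is a simple routing rule; the sketch below is hypothetical (the threshold and keyword list are illustrative assumptions, not findings) and only shows that urgent items can be escalated while the rest wait for the scheduled summary.

```python
# Hypothetical routing rule: escalate alarming feedback immediately,
# queue everything else for the periodic (e.g., monthly or quarterly) digest.
ALERT_KEYWORDS = {"unsafe", "harassed", "breach", "injured"}  # illustrative


def route_feedback(rating: int, comment: str) -> str:
    """Return 'immediate_alert' or 'periodic_digest' for one feedback item."""
    alarming = rating <= 1 or any(k in comment.lower() for k in ALERT_KEYWORDS)
    return "immediate_alert" if alarming else "periodic_digest"


print(route_feedback(1, "I felt unsafe during the session"))  # immediate_alert
print(route_feedback(4, "Great staff, parking was tricky"))   # periodic_digest
```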

Benchmarking, comparison, and data sharing

RTMs also desired both within-study and between-study comparisons to better interpret participant experiences, for example: comparing trends across demographic groups, monitoring changes over time, evaluating the impact of protocol improvements, and benchmarking against similar studies. Although these approaches imply some level of data sharing (necessary for benchmarking), RTMs expressed hesitation about feedback being used punitively by governing institutions, underscoring the need for safeguards.

Discussion

This exploratory study advances the development of PFS for nonclinical human subjects research. We identified key content domains for feedback surveys that differ meaningfully from those established in clinical trial research [Reference Kost and Correa da Rosa30]. Beyond content considerations, we identified two central design tensions: balancing feedback needs (1) across roles (leadership versus frontline staff) and (2) across organizational levels (centralized standardization versus decentralized customization). We propose a scalable PFS model and examine these tensions in the context of its design.

Operationalizing a scalable PFS

To address needs across organizational levels, Figure 1 presents a hybrid PFS model anchored by a centralized Feedback Service Team (FST), which lowers implementation barriers and acts as an intermediary among research teams, participants, and institutional leadership. The system operates in two phases. In the preparation phase, the FST collaborates with research teams to add study-specific content into a Custom Module of the feedback survey. Customization addresses RTMs’ desire for feedback on hyper-specific content areas that are unique to their study. In parallel, a Core Survey module of universal topic areas is maintained for uniformity and comparison across studies. Distribution methods should be adapted to the study population to maximize response rates. In the collection and utilization phase, anonymous participant feedback is gathered and summarized in automated dashboards to support study-level improvements while reducing the burden on research teams. De-identified data are compiled into reports for study leadership and aggregated across studies into an institutional repository, enabling both targeted insights and broader organizational learning.

Figure 1. Feedback system design. The system operates in two phases: [A] preparation and [B] collection & utilization. During preparation, the feedback service team collaborates with the study team to (a) customize the survey, solicitation process, and dashboard. Once approved by the study team, feedback service team, and any involved ethical oversight bodies (e.g., IRB), the system can (b) advance to collection & utilization. In collection & utilization, the study team (c) distributes the survey to participants, who (d) submit anonymous feedback. This feedback is (e) securely stored in a database and (f) summarized in a dashboard. The study team (g) reviews the feedback to inform study improvements. To protect study team anonymity, the feedback service team (h) de-identifies data before adding it to an institutional repository. Finally, the repository data is (i) compiled into dashboards or reports for (j) Institutional Leadership. Symbols are original or modified from www.flaticon.com.
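To illustrate the data flow in Figure 1, the hypothetical sketch below (class names, study code, and example items are ours) shows the Core Survey module paired with a study-specific Custom Module, and the de-identification step that strips the study identifier before feedback enters the institutional repository. The custom item echoes the caregiver-relief question suggested during the RTM sessions.

```python
# Hypothetical sketch of the two survey modules and the de-identification
# step described in Figure 1: study-level data retain a study code for the
# team dashboard but are stripped of it before aggregation into the
# institutional repository.
from dataclasses import dataclass, field

CORE_DOMAINS = ["Communication", "Respect", "Being Valued",
                "Receiving Value", "Burden", "Safety and Security"]


@dataclass
class FeedbackSurvey:
    study_code: str
    core_items: list[str] = field(default_factory=lambda: list(CORE_DOMAINS))
    custom_items: list[str] = field(default_factory=list)  # study-specific module


@dataclass
class Submission:
    study_code: str
    responses: dict[str, int]   # domain -> rating


def deidentify(sub: Submission) -> dict[str, int]:
    """Drop the study identifier before adding responses to the repository."""
    return dict(sub.responses)


survey = FeedbackSurvey(
    "STUDY-042",
    custom_items=["Who cared for your care partner during your study visit?"],
)
sub = Submission("STUDY-042", {"Burden": 2, "Respect": 5})
team_dashboard = sub                      # study team sees study-level data
institutional_repo = [deidentify(sub)]    # leadership sees de-identified data
print(survey.core_items, institutional_repo)
```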

This structured yet flexible model supports feedback use at both micro and macro levels but requires careful attention to logistical, economic, ethical, and technical challenges. For example, participant compensation for feedback merits consideration; while compensation for the labor of feedback is desirable, RTMs raised concerns that payment could bias feedback, foster transactional relationships, or create inequities in research teams unable to afford compensation for feedback. Tracking and reporting incentives for tax and accounting purposes could also compromise the perceived confidentiality of feedback. Unconditional prepayment (wherein all participants are compensated upfront for feedback, regardless of completion) is a well-established, cost-effective method for boosting response rates [Reference Becker31] that could enhance anonymity by breaking the link between feedback submission and payment. Alternative approaches such as third-party management systems or voucher/token systems could also further protect confidentiality. Yet these mechanisms do not address the underlying inequities in research team funding, a problem that remains to be solved at the institutional level.
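The voucher/token idea above can be sketched concretely: if every participant receives a redemption token at study exit (unconditional prepayment), the payment ledger records only token redemptions and never stores feedback, so compensation cannot be linked to what anyone said. The sketch below is a hypothetical illustration of that separation, not a production design.

```python
# Hypothetical voucher/token sketch: tokens are issued to all participants at
# study exit regardless of whether they give feedback, so the payment ledger
# and the anonymous feedback store never share identifiers.
import secrets

payment_ledger: dict[str, bool] = {}   # token -> redeemed?
feedback_store: list[str] = []         # anonymous feedback; no tokens stored


def issue_token() -> str:
    """Issue an unconditional compensation voucher at study exit."""
    token = secrets.token_urlsafe(8)
    payment_ledger[token] = False
    return token


def submit_feedback(text: str) -> None:
    """Store feedback with no identifiers attached."""
    feedback_store.append(text)


def redeem(token: str) -> bool:
    """Mark a voucher as redeemed; reveals nothing about feedback."""
    if payment_ledger.get(token) is False:
        payment_ledger[token] = True
        return True
    return False


t = issue_token()             # given to every participant, feedback or not
submit_feedback("The consent explanation was rushed.")
print(redeem(t))              # True
```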

Specific recommendations for nonclinical feedback survey design

At the center of the proposed PFS is the feedback survey itself. The feedback survey should balance the informational needs of research teams against participant burden, autonomy, and trust. Our findings suggest the Core Survey module for a nonclinical feedback survey should center six content domains: Communication, Respect, Being Valued, Receiving Value, Burden, and Safety and Security. These domains appeared to be universal across the studied nonclinical research contexts. Within the Communication domain specifically, the feedback survey should assess general communication practices, rather than focusing on consent processes as clinical research feedback systems may. This broader view is necessary given that consent requirements and protocols vary substantially across the methodological spectrum of nonclinical research. Beyond content, nonclinical feedback survey design must also accommodate varying sensitivities among participants. To this end, the survey should provide a brief rationale to the participant for asking sensitive questions, and sensitive questions should be obviously skippable. Demographic questions, for instance, can be highly sensitive due to the potential for reidentification in smaller subpopulations. Offering both an explanation and an opt-out option for the demographic section in particular helps participants balance confidentiality concerns with the value of contributing to group representation (e.g., revealing disparities in research experiences). Similarly, open-ended questions should be easily bypassed given polarized opinions on their value versus participant burden. Collectively, these design choices signal that participant comfort and voice are prioritized alongside the needs of researchers and institutions.
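As a concrete illustration of these recommendations, the hypothetical configuration sketch below (item wording and field names are ours) encodes Core Survey items across the six domains, marks items as skippable by default, and attaches a participant-facing rationale to a sensitive demographic item.

```python
# Hypothetical survey-definition sketch: Core Survey items span the six
# content domains; items are skippable by default, and sensitive items carry
# a short rationale that is displayed to participants.
from dataclasses import dataclass


@dataclass
class SurveyItem:
    domain: str
    prompt: str
    skippable: bool = True   # every item can be bypassed, per ERP preferences
    rationale: str = ""      # shown to participants when non-empty


core_module = [
    SurveyItem("Communication", "Study instructions and updates were clear."),
    SurveyItem("Respect", "Staff were prepared to accommodate my needs."),
    SurveyItem("Burden", "The time required matched what I was told to expect."),
    SurveyItem("Safety and Security", "I trust that my data are being protected."),
]

demographics_module = [
    SurveyItem(
        "Demographics",
        "What is your race/ethnicity?",
        rationale=("Used only to check whether research experiences differ "
                   "across groups; you may skip this question."),
    ),
]
```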

Feedback needs by role

Establishing a robust survey instrument is one challenge; the other is operationalizing the system to effectively support diverse users. Thus, the model must consider the varied feedback needs across roles within research teams, which affect data delivery and frequency. RTM in leadership roles preferred infrequent, summarized feedback, while frontline staff favored more frequent, detailed input. These patterns align with organizational research showing that individuals lower in the hierarchy often seek detailed feedback to reduce uncertainty in performing their duties, while those in leadership prefer aggregated reports for strategic planning [Reference Ashford, Blatt and VandeWalle32]. This dynamic has implications for PFS design, including balancing closed-ended questions, which meet leadership needs for summary, with open-ended items that offer the nuanced detail valued by frontline staff, as well as tailoring feedback frequency to role.

PFS may also function as a form of performance evaluation for frontline staff, which raises important considerations. Feedback interventions can backfire, decreasing future performance if implemented without regard for framing, norms, and individual factors [Reference Kluger and DeNisi33]. Raw feedback may be unhelpful, or even harmful, without appropriate interpretation. Additionally, anonymous feedback may not be assignable to a specific RTM. If feedback can be individualized, a moderating layer is essential. While an FST can offer support, it lacks insight into team dynamics and individual context. Supervisors are best positioned to contextualize PFS data, but nonclinical teams may lack this structure. Whether and how PFS should inform individual performance evaluation remains an open question and warrants further study.

Feedback needs by organizational level

Existing PFS models in clinical research demonstrate clear advantages of centralized systems and have been effective in improving participant experience and engagement [Reference Kost, Andrews and Chatterjee34,Reference Cheng, Bascompte Moragas and Thomas35]. However, as described in the Introduction, extending centralized PFS to nonclinical research presents challenges due to greater methodological heterogeneity, fewer regulatory requirements, and more diverse team structures. Our hybrid model offers one potential solution, but concerns remain. Informal conversations with social science colleagues revealed apprehension about institutional overreach and the possible misuse of centralized PFS data in high-stakes contexts, such as tenure reviews. Although these concerns were not raised in the formal focus group sessions, they reflect broader cultural dynamics that need to be addressed. The success of PFS depends not only on technical soundness but also on trust between all parties.

Importantly, although centralization can reduce barriers by providing resources, it is not essential for success. Decentralized PFS approaches, supported by shared templates and communities of practice, offer a viable alternative that may better align with broader academic values and culture. Ultimately, we must weigh the trade-offs between infrastructure and autonomy in designing PFS that are both effective and acceptable.

Limitations and future directions

This study was conducted at a single Midwestern R1 institution, which limits its generalizability. While focus groups were small and not nationally representative, the goal of this exploratory qualitative phase was transferability, not representativeness [Reference Carminati36]. Larger samples, multiple institutions, and a wider range of study types are needed to assess cross-context applicability.

Self-selection bias may also have influenced findings, as more engaged participants and research teams receptive to feedback were more likely to participate. Moreover, PFS cannot address challenges at earlier stages of the research lifecycle (e.g., recruitment), where survivor bias remains a concern. Strategies are needed to reach nonparticipants and never-participants.

The next steps in this research program include gathering input from stakeholders to refine the PFS model, prototyping individual PFS components, and evaluating design trade-offs (e.g., hesitancy regarding demographic questions and role-based feedback needs). Success metrics should be developed at both the institutional and team levels.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2025.10184.

Acknowledgements

This study would not have been possible without the generosity of the research participants and research teams who shared their time and experiences with us. Your voices shape our work.

The authors used the AI language models ChatGPT (GPT-4o, OpenAI; https://chat.openai.com) and Claude (Sonnet 4, Anthropic; https://claude.ai) to assist with article revision in June–October 2025. Both tools were accessed in their standard configurations for the purpose of refinement and concision. All intellectual contributions and substantive decisions were made by the authors.

Author contributions

Alicia Giordimaina Carmichael: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Visualization, Writing-original draft, Writing-review and editing; Donna Walter: Conceptualization, Funding acquisition, Investigation, Project administration, Writing-review and editing; Brandon Patric Labbree: Formal analysis, Investigation, Validation, Writing-original draft, Writing-review and editing; Boluwatife Dogari: Formal analysis, Investigation, Writing-review and editing; Natalie Leonard: Formal analysis, Investigation, Writing-review and editing; Kathryn Ward: Formal analysis, Investigation, Visualization, Writing-review and editing; Xiaoya Geng: Data curation, Investigation, Validation, Writing-review and editing; Medha Raju: Formal analysis, Investigation, Writing-review and editing; Jess Francis-Levin: Funding acquisition, Investigation, Writing-review and editing; Richard Gonzalez: Conceptualization, Funding acquisition, Resources, Supervision, Writing-review and editing.

Funding statement

This project was supported by internal funding. The content is solely the responsibility of the authors and does not necessarily represent the official views of any academic institution.

Competing interests

The authors report no conflict of interest.

References

1. Frank L, Morton SC, Guise JM, et al. Engaging patients and other non-researchers in health research: defining research engagement. J Gen Intern Med. 2020;35:307–314. doi: 10.1007/s11606-019-05436-2.
2. Key KD, Furr-Holden D, Lewis EY, et al. The continuum of community engagement in research: a roadmap for understanding and assessing progress. Prog Community Health Partnersh. 2019;13:427–434.
3. Fogel DB. Factors associated with clinical trials that fail and opportunities for improving the likelihood of success: a review. Contemp Clin Trials Commun. 2018;11:156–164. doi: 10.1016/j.conctc.2018.08.001.
4. Briel M, Olu KK, von Elm E, et al. A systematic review of discontinued trials suggested that most reasons for recruitment failure were preventable. J Clin Epidemiol. 2016;80:8–15. doi: 10.1016/j.jclinepi.2016.07.016.
5. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17:643–653. doi: 10.1016/j.annepidem.2007.03.013.
6. De Leeuw E, De Heer W. Trends in household survey nonresponse: a longitudinal and international comparison. In: Groves RM, Dillman DA, Eltinge JL, Little RJ, eds. Survey Nonresponse. New York, NY: Wiley; 2002:41–54.
7. National Research Council. Nonresponse in Social Science Surveys: A Research Agenda. Washington, DC: National Academies Press; 2013.
8. Centers for Disease Control and Prevention. 1993 BRFSS summary data quality report. (https://www.cdc.gov/brfss/annual_data/1993/pdf/1993SummaryDataQualityReport.pdf) Published 1993. Accessed February 10, 2025.
9. Centers for Disease Control and Prevention. 2023 BRFSS data quality report. (https://www.cdc.gov/brfss/annual_data/2023/pdf/2023-DQR-508.pdf) Published 2023. Accessed February 10, 2025.
10. HRS Staff. HRS core interview sample sizes and response rates. Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan; 2023. (https://hrs.isr.umich.edu/documentation/survey-design/response-rates) Accessed February 10, 2025.
11. Smith TW, Davern M, Freese J, Morgan SL. General Social Surveys, 1972–2022 [machine-readable data file]. Chicago, IL: NORC; 2023.
12. Groves RM, Heeringa SG. Responsive design for household surveys: tools for actively controlling survey errors and costs. J R Stat Soc Ser A Stat Soc. 2006;169:439–457. doi: 10.1111/j.1467-985X.2006.00423.x.
13. Brick JM, Tourangeau R. Responsive survey designs for reducing nonresponse bias. J Off Stat. 2017;33:735–752. doi: 10.1515/jos-2017-0034.
14. West BT, Couper MP, Axinn WG, et al. Toward a new approach to creating population-representative data for demographic research. Demography. 2024;61:1759–1791. doi: 10.1215/00703370-11693878.
15. Wong CA, Song WB, Jiao M, et al. Strategies for research participant engagement: a synthetic review and conceptual framework. Clin Trials. 2021;18:457–465. doi: 10.1177/17407745211011068.
16. Vayena E, Blasimme A, Sugarman J. Decentralised clinical trials: ethical opportunities and challenges. Lancet Digit Health. 2023;5:e390–e394. doi: 10.1016/S2589-7500(23)00052-3.
17. Lupia A, Allison DB, Jamieson KH, Heimberg J, Skipper M, Wolf SM. Trends in US public confidence in science and opportunities for progress. Proc Natl Acad Sci U S A. 2024;121:e2319488121. doi: 10.1073/pnas.2319488121.
18. Leeper TJ. Where have the respondents gone? Perhaps we ate them all. Public Opin Q. 2019;83:280–288. doi: 10.1093/poq/nfz010.
19. Ostrom E. Analyzing collective action. Agric Econ. 2010;41:155–166. doi: 10.1111/j.1574-0862.2010.00497.x.
20. Cheffey J, Hill L, McCullough C, McCullough C. “Can I facilitate a project when my memory lets me down?”: The challenges and rewards of co-producing a “Living Well with Dementia” course. FPOP Bull Psychol Older People. 2017;1:19–25. doi: 10.53841/bpsfpop.2017.1.137.19.
21. Price SM, Park CH, Brenner RA, et al. Participant retention in a longitudinal study: do motivations and experiences matter? Survey Practice. 2016;9:1–10. doi: 10.29115/sp-2016-0022.
22. Guillemin M, Barnard E, Allen A, et al. Do research participants trust researchers or their institution? J Empir Res Hum Res Ethics. 2018;13:285–294. doi: 10.1177/1556264618763253.
23. Svendsen TS, Veseth M, McKay JR, et al. Securing participant engagement in longitudinal substance use disorder recovery research: a qualitative exploration of key retention factors. J Psychosoc Rehabil Ment Health. 2021;8:247–259. doi: 10.1007/s40737-021-00222-y.
24. Bone SA, Lemon KN, Voorhees CM, et al. “Mere measurement plus”: how solicitation of open-ended positive feedback influences customer purchase behavior. J Mark Res. 2017;54:156–170. doi: 10.1509/jmr.14.0232.
25. Kost RG, Cheng A, Andrews J, et al. Empowering the participant voice (EPV): design and implementation of collaborative infrastructure to collect research participant experience feedback at scale. J Clin Transl Sci. 2024;8:e40. doi: 10.1017/cts.2024.19.
26. Patient-Centered Outcomes Research Institute. About PCORI. (https://www.pcori.org/about/about-pcori) Accessed February 13, 2025.
27. Michigan Institute for Clinical & Health Research. UMHealth Research Volunteer Portal. (https://umhealthresearch.org) Accessed February 15, 2025.
28. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101. doi: 10.1191/1478088706qp063oa.
29. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18:59–82. doi: 10.1177/1525822X05279903.
30. Kost RG, Correa da Rosa J. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Transl Sci. 2018;2:31–37. doi: 10.1017/cts.2018.18.
31. Becker R. The effect of prepaid incentives on panelists’ response across survey modes in a sequential mixed-mode design. Qual Quant. 2025;59:29–49. doi: 10.1007/s11135-024-01922-w.
32. Ashford SJ, Blatt R, VandeWalle D. Reflections on the looking glass: a review of research on feedback-seeking behavior in organizations. J Manag. 2003;29:773–799. doi: 10.1016/S0149-2063(03)00079-5.
33. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996;119:254–284. doi: 10.1037/0033-2909.119.2.254.
34. Kost RG, Andrews J, Chatterjee R, et al. What research participants say about their research experiences in empowering the participant voice: outcomes and actionable data. J Clin Transl Sci. 2025;9:e43. doi: 10.1017/cts.2025.3.
35. Cheng AC, Bascompte Moragas E, Thomas E, et al. Standards and infrastructure for multisite deployment of the research participant perception survey. JAMIA Open. 2025;8:ooaf017. doi: 10.1093/jamiaopen/ooaf017.
36. Carminati L. Generalizability in qualitative research: a tale of two traditions. Qual Health Res. 2018;28:2094–2101. doi: 10.1177/1049732318788379.