Our democracy is in a state of crisis. With conflict profiteers stoking discord and polarization seemingly increasing, many people have low hopes for the U.S. election system, have considered political violence a viable option, and have engaged in a variety of practices that have led to a steady erosion of the nation’s democratic potential (Bergengruen Reference Bergengruen2022; Contreras Reference Contreras2023; Threats to American Democracy 2023; Willige Reference Willige2017). Many media developments also undermine productive forms of public discourse between citizens, relying upon adversarial and elite-driven framings that present political engagement as a zero-sum battle, inviting apathy or extremism (see Bail Reference Bail2022).
To address this context, over 500 “bridging organizations” nationally have committed to bringing citizens together to dialogue, deliberate, and build trust toward the public interest. One such organization is “Living Room Conversations” (LRC), a nonprofit founded in 2010 that we partnered with in this study. LRC has developed a model for local and national conversations about public issues, bringing citizens together online or in person. These discussions typically involve four to seven people, run about 90 minutes, and provide spaces for working across differences.
Organizations such as LRC describe the views and actions of “conflict entrepreneurs” as a primary foil for their work. Conflict entrepreneurs are public figures who financially benefit from promoting political conflict. With others, we believe that the term “conflict profiteer” more accurately characterizes the effect of these figures’ claims on our public life. In the U.S., our media systems are awash with figures who get rich by promoting tribal discourse, filtering all issues through an “us” vs. “them” mentality, and peddling misinformation when it serves their interests (Carcasson Reference Carcasson2025; Molineaux Reference Molineaux2024). Most relevant to our purposes, according to such individuals, the U.S. is deeply polarized, divided by intractable differences, and even heading toward another “civil war” (e.g. Lozano Reference Lozano2024; Waters Reference Waters2025).
Working with a “Trust in Elections” discussion initiative developed by LRC, in this paper, we directly counter the picture constructed by such claims in U.S. politics. We explore an ongoing question: how can dialogue and trust-building work be improved to bridge differences between citizens and advance democracy? To our surprise, our findings called into question the very basis of bridging work.
In exploring the differences between diverse samples in this study, we were most struck by the commonalities across all of them. We find that people in the U.S. are far less polarized than imagined and that there is a great deal of trust in the election system; a “reveal” strategy highlighting this common ground thus becomes as important as, if not more important than, the need to “bridge” imagined chasms between citizens. With this strategy in place, we argue further that those interested in promoting democratic dialogue and trust-building should design interventions that involve more and different kinds of people in their work, focus on long-term impacts, and stretch definitions of participation from dialogue to civic action.
As such, scholars and practitioners should express greater hesitation over or outright reject the framings about distrust and polarization that are on offer in much U.S. public discourse. To truly bridge differences between citizens, such forms of public engagement should make their primary task an honest accounting of when, where, and what differences exist. With more of a focus on setting the conditions for dialogue (or pre-dialogue factors), greater opportunities are presented for bridging work to fulfill its democratic aspirations.
Trust and democracy
While LRC’s decision to focus on “Trust in Elections” during the 2024 election cycle was in part a direct response to debates circulating in the aftermath of the 2020 U.S. presidential election, trust in government has been quite low for decades, with just under a quarter of all people in the U.S. (22% in May 2024) indicating that they trust the U.S. government to do what is right “just about always” or “most of the time” (“Public Trust in Government: 1958-2024” 2024).
While Republicans tend to be less trusting of the U.S. government than Democrats, the baseline for trust in politicians and government institutions has consistently eroded over time since the concept was first measured by the National Election Study in 1958 (“Public Trust in Government: 1958-2024” 2024). While trust toward local government and institutions trends higher than the outlook toward the national government, even local communities are experiencing a decline in public confidence (Deane Reference Deane2024; Hetherington Reference Hetherington2005). As an important marker of a stable and healthy democracy, interest in improving confidence in U.S. institutions and toward government officials has been a focus of political scientists, bridge-building organizations, and politicians for decades (Dalton Reference Dalton, Zmerli and van der Meer2017; Levi and Stoker Reference Levi and Stoker2000). Trust is an important precursor to political participation, engagement, and active involvement in civic and political life (Devine Reference Devine2024).
Not only has Americans’ faith in politicians eroded in our political and media environment, but Americans increasingly doubt that the government will adequately provide services and question the basic ability of our bureaucracy to function as intended (Deane Reference Deane2024). Given perceived declines in confidence in the administration and security of U.S. election practices, in mainstream news organizations, and in social trust within and among members of a wide range of U.S. communities, one of the main goals of LRC and other bridge-building organizations is to understand the root causes of declining political and social trust. These organizations seek to repair faith in elections, institutions, and elected officials, and even among members of local communities.
With this larger background in mind, extant evaluations demonstrate the individual impacts of LRC participants’ “application of tools to other parts of life,” “interest in systemic change spurred by mutual understanding and ‘humanizing the other,’” and community impacts such as “counter[ing] the narrative of the US being ‘hopelessly divided’” (Mosher-Williams and Myrick Reference Mosher-Williams and Myrick2020; Proven Model n.d., 1). Further analyses demonstrate how LRC’s model for political conversation has been “strongly associated with improved attitudes related to openness, tribal identity, and moral disdain” (Weaver et al. Reference Weaver, Brown, Brady, Carlquist, Dotson, Faldmo, Hall and Jeffrey2023), yet there is more to find out about the role reflective structured dialogues can play in promoting variables of interest under different conditions.
Working with LRC’s conversation guide on “Trust in Elections” (Appendix A; “Conversation Guide” 2023), we first sought to find out how those joining LRCs are the same as or different from a baseline national sample (e.g. are those joining these events higher on levels of curiosity, trust in government, social trust, etc.?), providing insight into what might influence greater use of and improvements to efforts at building democratic dialogues. Thereafter, with our baseline sample in hand, we developed a method for understanding LRC participants’ views both before and after the 2024 general election, looking for shifts in attitudes, behavioral intentions, and other variables of interest.
Methods
To operationalize this project, we created and implemented an experimental research design in spring and summer 2024. Our first step was to construct an online Qualtrics survey that tapped into participants’ trust in elections and their election-related behavioral intentions. The survey incorporated measures covering polarization and bridging quotients from the Social Cohesion Impact Measure (SCIM) (“Bridging Movement” 2023; “Measure Your Impact” 2023). Our experimental design consisted of the following conditions:
1) Condition 1 (May 2024–October 2024): Over six monthly waves, a baseline, representative national sample of people in the U.S. recruited via a Qualtrics consumer panel, who did not go through the LRC process yet received our standardized, one-time online survey and questions about trust in elections, what they would be willing to do or were planning to do, and the SCIM measures.
2) Condition 2 (August 2024–October 2024): With assistance from LRC, a sample of local LRC event participants across the U.S. received the standardized list of Qualtrics survey questions on trust in elections and the SCIM measures through a pretest and posttest survey. Local partnering organizations or individuals holding the LRCs shared with prospective participants that they would be invited to voluntarily take the compensated survey before and after the LRC on trust in elections via a Quick Response (QR) code or email link.
3) Condition 3 (October 2024–April 2025): Remaining responsive to the developing learning from this project over time, and given the results we gathered from Condition 2, we also implemented a posttest-only LRC condition both before and after the election to continue examining changes in trust, electoral action, polarization, and bridging variables resulting from LRC events. (See Appendix B for more on our research process and adaptations.)
Results
For the sake of scope and prioritizing our content, the following details the most important findings from our Qualtrics national sample, our LRC pre/posttest sample, the LRC posttest only sample (with before and after 2024 election conditions), and working with our nonprofit partner on this project over the course of a year.
First, although we carried out the Qualtrics consumer panel in six waves (May–October 2024), with N = 1245 validated participants/surveys in total, we did not find sufficient differences from month to month in the results we received. Despite some newsworthy election events during this time that we thought might produce variation (including the U.S. presidential debates, the assassination attempt at a Trump rally in Butler, PA, and the switch from Biden to Harris at the top of the Democratic ticket), given the relative consistency of these findings we decided to treat the consumer panel results in summary form only. This initial result was itself revealing: there appeared to be more consistency and consensus among many different people in the U.S. than we had expected.
We gathered 111 validated surveys from the LRC conditions in total. Given the sample size for each of the segmented conditions (e.g. the pretest/posttest had 29 participants, the posttest-only condition before the election had 42 participants, and the after-election posttest had 40 participants), the LRC results were treated as qualitative in nature and compared to the Qualtrics sample to explore contextual differences.
Second, our national, diverse sample exhibited high confidence in the administration of the 2024 election and its results (e.g. 37.6% indicated that they thought the elections “this November in the United States will be run and administered” somewhat well [3 on a 4-point scale], with 44.7% selecting “very well” [4 on a 4-point scale]). This ran counter to many conflict profiteer-driven media and party narratives about how deeply divided citizens are on this issue. Average confidence in the administration of the 2024 election was only slightly higher among LRC participants, particularly among those who took the survey prior to election day (pretest M = 3.66, SD = 0.61; posttest M = 3.52, SD = 0.57; posttest only M = 3.48, SD = 0.59), while those who took the survey after the election (M = 3.25, SD = 0.98) were most on par with the Qualtrics general population sample (M = 3.21, SD = 0.87).
Political trust among LRC participants was slightly higher than in the Qualtrics sample (M = 2.74, SD = 1.14), particularly among those who participated in the survey prior to election day (pretest M = 3.17, SD = 0.76; posttest M = 3.34, SD = 0.72; posttest only M = 2.93, SD = 0.87). LRC participants also exhibited slightly higher levels of social trust (pretest M = 3.66, SD = 0.61; posttest M = 3.79, SD = 0.49; posttest only M = 3.69, SD = 0.60) than those from the general Qualtrics sample (M = 3.07, SD = 1.09). Yet such differences were modest at best and suggest that, across all study groups, the civic health of the nation is stronger (and more consistently so) than dominant narratives would suggest. This finding was borne out in the very results of the 2024 presidential election, as citizens generally accepted what U.S. election infrastructures produced. Not only were Qualtrics respondents confident in the administration and outcome of the 2024 election, they also:
• Were relatively confident in the outcome of the 2020 election (46.3% very confident; 25.2% somewhat confident; M = 3.07, SD = 1.04); (LRC data: pretest M = 3.79, SD = 0.49; posttest M = 3.76, SD = 0.51; posttest only M = 3.69, SD = 0.56; posttest after election M = 3.5, SD = 0.82).
• Expressed high levels of political interest (46.6% follow government and public affairs most of the time; 30.5% some of the time; M = 3.10, SD = 0.60); (LRC data: pretest M = 3.69, SD = 0.60; posttest only M = 3.43, SD = 0.80; posttest after election M = 3.68, SD = 0.53).
• Expressed strong interest in the election, with a majority (59.4%) reporting quite a lot of interest and 69.1% indicating that they “definitely will vote” in the general election (M = 3.41, SD = 0.85); (LRC data: pretest M = 3.83, SD = 0.47; posttest only M = 3.83, SD = 0.44; posttest after election M = 3.80, SD = 0.61).
A study carried out during part of the period in which our project took place in 2024 involved another national sample of first-time voters deliberating on several issues (America in One Room 2024). Interestingly, a “proposal to protect against voter fraud was initially supported by both Republicans and Democrats. But after deliberation, it was rejected by both parties” (5). This suggests that both short- and long-term attention to common ground is needed to do bridging work well. Common ground can be both an initial product to unveil and a goal to work toward in the very process of building trust and connections between citizens.
Third, although there were stark variations in responses to questions about, for instance, the actions people were willing to take in the election, all samples largely corresponded in those responses and expressed similar levels of confidence in the U.S. government, election administration, and social trust. Among all the consumer panel participants, the highest-rated item was the likelihood of voting in the election (M = 8.24, SD = 2.75), while “sign up to be an election poll worker or become a poll observer” had the lowest result (M = 3.82, SD = 3.18). Most of the other actions had middling results (means between 4 and 6). Largely parallel to the LRC conditions, consumer panel participants reported a high likelihood of learning more about local election practices (M = 6.34, SD = 2.87) and seeking information from diverse news sources (M = 6.57, SD = 2.83), but a lower likelihood of engaging in more involved actions such as becoming a poll worker or encouraging a local group to talk about building trust in elections. Generally speaking, across all study groups, there was greater interest in participatory actions that could be completed independently rather than as part of a group and that required less direct, active engagement.
Fourth, one caveat is in order. While our results surprisingly revealed a great deal of common ground among all samples, we found that bridging organizations could move even further in this direction by involving more and different kinds of people in their work. In other words, at least in our LRC samples, to promote dialogues and build trust among the population, bridging organizations need to recruit beyond the “usual suspects” as audiences. In the pretest/posttest condition results, it was clear that LRC audiences had about a 3:1 (and, in the case of the posttest before the election, 4:1) ratio of female- to male-identifying participants, were more educated, and skewed predominantly white in comparison to other racial/ethnic categories, both within the samples and as compared to our national consumer panel. On average, across several civic engagement-related questions, the LRC samples evidenced only slightly more interest in government and public affairs, thought about elections, certainty about being registered to vote, likelihood of voting in the election, and faith in election administration. As a whole, there were far more commonalities than differences between these samples and our national baseline sample on just about all variables.
The one stark difference between our consumer panel and the LRC conditions related to party affiliation. By design, the consumer panel included substantial numbers of Democrats (507), Republicans (384), Independents (333), and those identifying as something else (50). The LRC conditions leaned heavily Democratic and liberal, to the point where the posttest-only condition before the election had no Republicans whatsoever.
Implications
This project supports a growing body of scholarship and practice demonstrating that the term “polarization” should not be used or channeled uncritically. Just as terms like “crisis” and “national emergency” are primarily rhetorical selections, drawing attention toward some meanings while attempting to stave off others, assessing the extent to which people in the U.S. are actually in a deeply polarized “culture war” might be a first task that scholars and practitioners take on in this work. Asking who benefits from the selection of such terms, and to what effect, could be a starting point for even bringing people together in the first place. Several implications follow from these findings.
First, along with our national sample findings showing that there does not appear to be as much distrust in elections as conflict profiteers would have us believe, other work argues that there’s a fundamental need to provide an accurate picture to the electorate on such issues. Druckman et al. (Reference Druckman, Klar, Krupnikov, Levendusky and Ryan2022) find that,
Individuals misestimate the ideological extremity and political engagement of typical out-partisans. When partisans are asked about “Democrats” or the “Republican Party,” they bring to mind stereotypes of engaged ideologues and, hence, express contempt for the other party. The reality, however, is that such individuals are the exception rather than the norm. We show that when partisans learn that reality, partisan animus falls sharply; partisans do not have much animus toward the typical member of the other party. (1106)
The key is that partisans must “learn that reality,” however.
Without mapping this territory well, bridging work may end up wasting resources on the need for “persuasion” over and above an instigating “reveal” strategy spotlighting common ground (Druckman et al. Reference Druckman, Klar, Krupnikov, Levendusky and Ryan2022, par. 8). Otherwise, as Rose says, “You’re going to spend lots of money thinking you’re bridging divides that actually didn’t exist” based on the “intensity stoked by the mainstream press, social media, and outrage-for-profit entrepreneurs,” when research finds stunning levels of agreement on long-term values and priorities in the U.S. (pars. 11, 19, 24). Wu (Reference Wu2019) once highlighted this as the problem of the “supermajority.” Potential participants for bridging work need to learn where they might have agreement and consensus and where reasonable divergences of opinion exist.
Moreover, the ability to recruit participants for bridging work cannot be extricated from the expectations about conflict that polarized framings generate. Dungan and Epley (Reference Dungan and Epley2024) demonstrate how “People’s expectations are systematically miscalibrated such that they overestimate how negatively others respond to confrontation. These overly negative expectations stem, at least in part, from biased attention to potentially negative outcomes of a constructive confrontation” (779). In other words, people often get what they expect. When barriers are put up at the outset of a political discussion (i.e. “you’re all so polarized”), participants might underestimate the potential for a positive, open, honest, and solutions-focused discussion. All this is to say that those interested in bridging differences between citizens should not take as a given the framings constructed around issues such as elections; instead, they should serve an up-front, educative function, clarifying what people actually believe to set the very conditions for dialogue and action.
Second, in building upon the common ground that already exists, those interested in bridging differences between citizens should focus on what it might take to involve more and different kinds of people in their work. In a sense, this study supports the idea that significant attention to the work of “pre-dialogue” could be a means of fostering even more common ground and political engagement between citizens. We’d recommend doing a demographic and psychographic breakdown of the audiences who have shown up (or are likely to show up) to trust-building events as a source for continuous study and adaptation. Some a priori research involving, say, a survey or interviews with self-identifying conservative members of a community could be warranted to conduct more segmented invitations and sampling to try to bring together diverse people across real differences.
At the same time, there are important power differentials to consider in the very act of invitation to a dialogue. One researcher highlights how bridging efforts “often ask already disadvantaged individuals to show up to spaces that validate the interests of the already advantaged” (Lerner Reference Lerner2024, par. 10)—a perspective underscored at length in Lee’s (Reference Lee2014) tome on the rise of the “public engagement industry.” Even in practitioner manuals on the subject, it’s recognized that “not everyone should bridge,” since it’s counterproductive “to ask people to bridge differences when they’re being discriminated against or otherwise denied social power,” while “some must heal from personal trauma” (“Bridging Differences Playbook” n.d., 11). This only heightens the need for pre-dialogue work to establish both how to bring diverse people together and whether participants are ready, willing, and feel a need for this modality of civic engagement.
Third, as we observed a relative constancy and consensus across samples over the course of a year of study, it could be critical for scholars and practitioners to “reveal” the common ground that exists among the electorate as a long-term, sustained part of this work. Over the course of this project, we became acutely aware of the need not only to look at the differences between the “before” and “after” of a dialogue on a public issue, but also to think about what might be transformed in participants beyond potential immediate impacts. Given our findings, revealing common ground warrants emphasis both up front and over the course of bridging work.
The literature in this area has been moving to a focus on long-term impacts. One question remains central: “does dialogue work truly make a difference” (Ralph Reference Ralph2024, par. 5)? Among hundreds of bridging organizations throughout the U.S., some report little success in their efforts. Several studies demonstrate short-term influence on many measures of listening, bias, animosity, and behavior. For instance, recent experimental work shows that cultivating “high-quality” listening can manifest hoped-for results in terms of attitudinal changes and depolarizing conversations that bridge ideological divides (Itzchakov et al. Reference Itzchakov, Weinstein, Leary, Saluk and Moty2024). Some research shows that when people see others as individuals (through information about their job, family life, experiences, etc.) and not as part of a homogenous out-group, prejudice can be reduced and positive connections created (Suttie Reference Suttie2024). This only underscores how revealing and sustaining the common ground that already exists between citizens needs to be a guiding light when it comes to bridging differences.
Although participants may change their feelings by engaging in public dialogues, the “principle-implementation” gap highlights that actions may not follow, so one of the most outstanding issues is whether the work can have impacts “in the long-term and at wide social scales” (Ralph Reference Ralph2024, par. 8). The reveal strategy should play a central role in ongoing efforts to evaluate this work in the long term. With the “democracy ecosystem” now mapped relatively well across the U.S. (Mapping America’s Healthy 2024), researchers could look to compare the efforts of different organizations using both short- and long-term measures.
A reveal strategy could also play an important role in moving participants from dialogue to civic actions. If sustained attention to revealing the common ground that already exists between citizens is provided up front and reiterated throughout an intervention, reflective structured dialogues could do more “follow up” work to promote behaviors beyond voting. Having just learned about common ground and engaged in a civic commitment (taking the time to have a conversation about a political issue), participants could be encouraged at the end of a dialogue to go one step further by signing a form to volunteer for an initiative or to hold a next conversation in their community. Taking such actions could apply the “foot in the door technique” (McLeod Reference McLeod2023), where taking one step builds commitment and consistency toward other steps.
Chwalisz (Reference Chwalisz2024) believes that current conceptions of democracy as a matter of parties, campaigns, elections, and debate need to give way to a new paradigm underscoring the relational, the deliberative, and humanity’s connections with the natural world. Organizations such as LRC are among the best positioned to forward this paradigm in our political processes. This work, however, must be utterly clear about what’s “shared” and where “connections” already exist. Using a reveal strategy to forward bridging work stands to serve the public interest and offer a promising path toward that goal.
Conclusion
There is plenty of good intent in bridging work. And given the alternatives of tribalism, solipsism, and other orientations that can work against the public interest, if anything, it’s more critical than ever that we get bridging work right. A claim has been gaining traction with many publics that the U.S. is not, and should not be, a democracy. The existence of bridging organizations counters this belief, recognizing how “the Constitution was meant to foster a complex form of majority rule, not enable minority rule” (Thomas Reference Thomas2020, par. 1). While founding thinkers were referring to the difference between direct democracy and a republic administered by representatives (with democracy standing in for the latter), the work of building trust and bridging differences is meant to show that, even amid representational political designs, direct democracy remains relevant to generating knowledge of public problems, finding collaborative solutions, and legitimating or thwarting the flows of administrative power (see Thomas Reference Thomas2020).
In this spirit, bridging work isn’t simply a nice add-on to our political environment, but a model for the best that our politics can be. From this project, our hope is that researchers and pro-democracy stakeholders gain improved understandings of what it takes to bring people together in and across their communities. Democratic discourse requires the openness, ambiguity, and interpretive instability that trust-building conversations provide, especially to break beyond tribal extremisms (see Carse Reference Carse1986; Craig Reference Craig2023; Sunstein Reference Sunstein2019; Waisanen Reference Waisanen2020). In promoting civic engagement, civil discourse, and the betterment of everyone’s political futures, our greatest hope is that this project joins with others to ultimately point toward what’s fundamentally necessary for democracy to thrive.
Data availability statement
Parties interested in accessing the data mentioned in this study should contact the corresponding author.
Funding statement
This research project was funded through the generous support of Public Agenda’s Democracy Renewal Project. Public Agenda did not play a part in the data analysis or manuscript preparation.
Competing interests
The authors declare no competing interests in the publication of this research.
Ethical standards
The project was approved by two university IRBs, and all appropriate ethical standards were followed while conducting research with human subjects.
Appendix A. Example LRC Conversation Guide


Appendix B. Research Process and Adaptations
For Conditions 2 and 3, LRC reached out to its network of partners to promote convenings of local and national discussions on Trust in Elections, both in person and online. Although LRC held some conversation events itself, many discussions locally and nationally were held by affiliates. Once interested conveners/hosts decided to hold a discussion, they downloaded the “Trust in Elections” discussion guide from the LRC website (see Appendix A; “Conversation Guide” 2023) and reached out to others in their communities or beyond to see if they would like to take part in one or more discussions.
To promote this research study, LRC provided its conveners with an oral script and weblinks and a QR code for the Qualtrics pretests and posttests, where participants received informed consent and access to our surveys. Both surveys remained anonymous, and each participant received a seven-digit code at the beginning of the pretest that they saved and inputted for the posttest.
After the group discussions, at the end of the second survey, a separate link (and QR code) to a Google Form administered and retained solely by Living Room Conversations asked participants to fill out their name, email address, and the code they received for completing the surveys. Living Room Conversations then emailed a $10 gift card (which, given the length of the surveys and the low number of surveys completed in the first couple of months, we upped to $20 in the fall as additional incentive) to all participants who completed and provided codes for the surveys.
Over the full year in which this research took place, we adapted this project in several ways to best meet the needs of LRC, hosts/conveners, and participants. While our Qualtrics sample between May and October presented no challenges, from our very first pilot run to test our design and method for Condition 2 (and eventually Condition 3), starting in July 2024, we discovered a few elements that needed updating.
We created an updated process with LRC so that unique Qualtrics links for pretest and posttests would be time-bound and generated for each individual event. When we began LRC Condition 2 in August 2024, it became clear that our rigorous research design was working well. In gathering data through early October, however, it also became clear that the pretest and posttest surveys were too safeguarded, to the point where many participants were, according to LRC staff, finding it too cumbersome to engage in the process. As one offhand example, we heard that at an event with several elderly people sitting in a circle, there was an effort to pass a phone around to take our lengthy pre- and post-surveys. Many were simply uninterested in doing so or gave up on our process.
For others, retaining the seven-digit code from the pretest and entering it in the posttest also created a barrier. After completing a pretest and taking part in the discussion event, some participants may simply have wanted to go home rather than take the posttest. We therefore consulted with LRC staff, Public Agenda, and others, and decided to create a "posttest-only" condition starting in October 2024.
To preserve some comparability between our conditions, we retained the length and all the questions of this posttest. Although many LRC events are attended by only a small number of people, it quickly became apparent that the posttest-only design was the right "pendulum swing": participants could take the surveys with the same security measures in place but with fewer barriers than in the previous condition. Given that we had only about a month of this posttest before the election, in conversation with LRC staff we decided to continue the research after the election. We kept the same posttest, making only slight amendments to questions that, for instance, now referred to the "coming" election as the "past" election. With LRC seeing little need for the "Trust in Elections" guide after the election, in January 2025 we began running the posttest surveys at other LRC events, reasoning that the change in conversation topic would not affect participants' willingness to express their views on the election and many of the surveys' variables (e.g., the SCIM measures). Overall, we made several amendments to render our research design both more rigorous and more accessible, values that were sometimes in tension.
Nonprofits offer access to publics that can be hard for scholars to reach. They bring credibility, trust, and a history of engagement and goodwill built with such publics that can lay the groundwork for broad and deep discovery. Nonprofits also have staff who can assist with the often difficult parts of public-focused research, such as marketing a study, recruitment, and compensating validated participants. Having LRC take on exactly these tasks, which allowed us to maintain a wall between the data collected and participants' identities, let us focus our efforts on analyzing data and adapting the project when needed.
It’s also critical to consider how many degrees of separation both researchers and nonprofit staff may have from participant pools. At a certain point, it became clear that it was ideal to conduct surveys at discussion events (even those run through affiliates) where at least one LRC staff member could be present, which LRC implemented in practice. The benefit of having potentially diverse affiliates across the U.S. carry out the discussion events and offer the links to our surveys was simultaneously a challenge, as the research initially was sometimes at a remove from both the researchers and the nonprofit.
In the future, to alleviate the burden on nonprofits to handle such logistical challenges and to engage with populations beyond the "usual suspects," we’d advise that researchers consider bringing in a third-party vendor to carry out more staged versions of what we attempted with our pre- and posttests. In other words, while the costs per participant would rise substantially, we believe there’s an opportunity to have a vendor construct and invite randomized or segmented samples for the complete dialogue process we engaged in with LRC—recruiting participants, having them take a pretest, engaging them in a dialogue about a social or political issue, and then administering a posttest. This would also offer the benefit of providing researchers with qualitative transcript data that could be systematically analyzed as part of the evaluation. This recommendation should be combined with our earlier point about the need for long-term evaluations of bridging work. If standardized through the support of a focus group vendor, it may be easier for both the researcher(s) and the nonprofit to replicate this model more widely across different groups and conditions over time.
Don Waisanen (don.waisanen@baruch.cuny.edu; ORCID: 0000-0002-7768-6093) is a Professor in the Marxe School of Public and International Affairs at Baruch College, City University of New York. His research examines how communication works to promote or hinder democracy, adaptive leadership development, collaboration, and civic engagement.
Amy B. Becker (abbecker@loyola.edu; ORCID: 0000-0002-8781-8942) is a Professor in the Department of Communication and Media at Loyola University Maryland in Baltimore, MD. Her research examines public opinion toward controversial issues, the implications of new media technologies, and the political effects of exposure and attention to political entertainment, including late-night comedy.