What drives Americans’ attitudes toward climate change? The rational ideal emerging from the Enlightenment is that attitudes are based on knowledge and knowledge is based on evidence. So why do Americans have diverging views about the causes of climate change (Egan & Mullin, 2017; Leiserowitz et al., 2018) – never mind what to do about it – when climate scientists do not? One explanation is that climate change is hard to understand; people who doubt its occurrence or its roots in human activity have insufficient knowledge because they have encountered insufficient evidence. We refer to the idea that lack of knowledge predicts attitudes toward, and particularly opposition to, policies meant to counteract climate change as the ‘deficit model’.
Indeed, climate change is hard to understand. It involves many unobservable mechanisms contributing to a pattern detectable only in large-scale longitudinal data, with temporally distant and probabilistic consequences. Measures of climate change knowledge show generally poor understanding of its mechanisms (Bedford, 2016; Ranney & Clark, 2016) and of the geographical regions most immediately affected (Hamilton, 2015). Unsurprisingly, lay knowledge trails that of experts (Sundblad et al., 2009). This makes the deficit model plausible. It is neither rational nor adaptive to destroy one’s own habitat, and the conclusion that such destruction is likely follows directly from scientific information. So individuals who are not moved toward compensatory actions, or even toward belief in the cause of the destruction, must lack this information. Denial of anthropogenic climate change simply does not make sense once one takes a view that extends more than a few years into the future.
The deficit model has its detractors, but its core implication – provide more information and people’s attitudes will change – is evident in many science communication efforts (Suldovsky, 2017). It also enjoys some empirical support: measures of climate change knowledge do predict concern (Shi et al., 2016), and providing explanations of climate change mechanisms increases confidence that it is anthropogenic (Ranney & Clark, 2016). Measurable knowledge also affects attitudes toward other controversial policy domains where scientific information is critical (e.g., genetically modified foods – McPhetres et al., 2019; energy – Stoutenborough & Vedlitz, 2016), as well as attitudes toward science more generally (Bak, 2001).
Yet citizens act regardless of their level of understanding: they vote on policies that involve complex systems with opaque mechanisms and serious yet hard-to-predict consequences (see Rhodes et al., 2014), they vote for politicians who promise to enact or block such policies and they respond to polls that exert pressure on policymakers’ priorities and decisions. Given how difficult it is to understand climate change and climate-related policies, how are people supposed to form the attitudes that govern action if not by acquiring more accurate information?
An alternative to the deficit model – the cultural cognition model (Kahan & Braman, 2006) – effectively says that they don’t. Instead, people assess information in a manner that predisposes them toward desired conclusions, and these desired conclusions are the beliefs and attitudes of their group. This biased assessment can result from a host of mechanisms: various species of motivated reasoning, such as overweighting attitude-consistent information, selectively ignoring attitude-inconsistent information or failing to detect flaws in others’ reasoning when doing so is ideologically expedient; affective responses; preferences for in-group informants; and memory effects such as the availability heuristic (Kahan et al., 2011). These processes allow people to adopt their group’s beliefs, and there is strong pressure to do so in order to avoid becoming an outcast. Evidence for this model is that many policy-related judgments are made along party political lines (Cohen, 2003; Smith et al., 2012; Bolsen et al., 2014; Colombo & Kriesi, 2017; Ehret et al., 2018; Satherley et al., 2018), a strategy that some political scientists consider reasonable in an information-poor environment (Lupia & McCubbins, 1998; Lau & Redlawsk, 2001; Gilens & Murakawa, 2002).
Less reasonable is that even nonpolitical judgments are made along party lines when political cues are available (e.g., Iyengar & Westwood, 2015; Nicholson et al., 2016; Marks et al., 2019), findings that are difficult to explain under the deficit model. Most relevant here, extensive evidence indicates that climate change beliefs and attitudes divide along party lines in the USA (Guber, 2013; McCright et al., 2013; Marquart-Pyatt et al., 2014; Hornsey et al., 2016) and along ideological lines in many other countries (Poortinga et al., 2019), although the strength of this relationship varies by country (Tranter & Booth, 2015; Hornsey et al., 2018; Smith & Mayer, 2019). Moreover, some studies report little or no relation between climate change knowledge and ideology, despite the usual ideological divisions in attitudes (Stoutenborough & Vedlitz, 2014; Bedford, 2016). This challenges the deficit model, which predicts that those who understand the most should belong to the ideological groups whose attitudes are most consistent with the scientific consensus. Other studies find that actual knowledge and ideology make separate contributions to climate change attitudes (Tobler et al., 2012; Shi et al., 2015).
While the cultural cognition model effectively bypasses knowledge – positing that encountered information will be trusted, discredited or weighted according to its consistency with group attitudes, thus establishing a direct link between one’s group and one’s attitudes – a third body of evidence suggests that knowledge dynamics do play a role in attitude formation. Measures of general reasoning ability – educational attainment, science literacy, numeracy – are associated with greater polarization rather than convergence on controversial and scientifically complex issues (Kahan et al., 2012, 2017; Hamilton et al., 2015; Drummond & Fischhoff, 2017). A common interpretation of this relationship is that higher education and reasoning ability confer greater knowledge of community leaders’ positions (Zaller, 1992) and greater capacity to reason toward desired conclusions, namely those of one’s group (Kahan et al., 2017). But measures of domain-specific knowledge show different patterns. Rather than a positive or null relation between such knowledge and attitudes, actual and self-assessed knowledge together are related to attitude extremity. Specifically, those who hold the most extreme attitudes are also the most miscalibrated – that is, they overestimate their own knowledge to the greatest extent. This relation has been reported for genetically modified foods (Fernbach et al., 2019) and the European refugee crisis (van Prooijen et al., 2018). Fernbach et al. (2019) found a directionally similar but nonsignificant pattern for climate change, and Hamilton (2018) found that the groups traditionally skeptical of climate change show the largest overestimation of their own climate change knowledge. In a related finding, the extent to which people believe they know more than doctors about the causes of autism negatively predicts both actual knowledge of autism and support for mandatory vaccination (Motta et al., 2018). This overestimation may be a feature of metacognitive processes; extremity of political views is also associated with miscalibration on a purely perceptual decision-making task (Rollwage et al., 2018; see also Ortoleva & Snowberg, 2015; Stone, 2019).
The community of knowledge
We offer a third hypothesis to accommodate the range of data: the community of knowledge. This hypothesis emphasizes that knowledge is a collective enterprise; people depend on others to represent most of their understanding of complex phenomena, as well as the evidence that supports that understanding (Sloman & Fernbach, 2018). On this view, most individual reasoning employs simple causal models that include markers indicating that more information – including the kind of mechanistic detail that most people lack for complex phenomena like global climate change – can be found outside of the individual. This system is effective because it affords group actions requiring complex knowledge without any single member of the group possessing all of that knowledge. But it also ties metacognitive assessments of one’s own knowledge to knowledge in fact held by others. Empirically, people’s sense of understanding of complex phenomena increases when they become aware that relevant experts understand the phenomena, even in the absence of any explanatory information (Sloman & Rabb, 2016).
A large variety of evidence supports the community of knowledge hypothesis (reviewed in Rabb et al., 2019). The hypothesis explains, for instance, why reports by the Intergovernmental Panel on Climate Change have so many authors: numerous individuals are needed to represent the range of expertise required to understand climate change, with each individual’s knowledge contributing a small amount to a much larger web (see Hardwig, 1985). More directly, the hypothesis accounts for the illusion of explanatory depth – the finding that respondents asked to estimate their own understanding of an object both before and after explaining how it works downgrade their estimates. This effect has been shown for ordinary artifacts and natural phenomena (Rozenblit & Keil, 2002), psychiatric disorders (Zeveney & Marsh, 2016), historical events (Gaviria et al., 2017) and political policies (Fernbach et al., 2013; Vitriol & Marsh, 2018; Voelkel et al., 2018), including climate-protective behavioral nudges (Bromme et al., 2016). The community of knowledge hypothesis holds that this overestimation occurs because people feel that they understand an object when someone else does (i.e., they conflate others’ knowledge with their own; for a discussion, see Rabb et al., 2019). In principle, this sense of understanding is often sufficient for action; as long as an individual has the general (macro) causal structure right, the mechanistic (micro) details are not important because those details can typically be retrieved if needed. Some findings from climate change studies suggest this dynamic. Mumpower et al. (2016) found that ratings of the extent to which scientists understand climate change consequences significantly (albeit weakly) predicted perceived climate change risk, while respondents’ actual climate change knowledge did not. And the fact that awareness of scientific consensus sometimes moves people’s climate change beliefs (Lewandowsky et al., 2013; van der Linden et al., 2015) can be interpreted as evidence of people’s reliance on collective knowledge.
Such consensus information can be overridden by partisan cues (Bolsen & Druckman, 2018), as the cultural cognition model would expect. According to the community of knowledge hypothesis, revealing a community’s position on an issue should have an influence – indeed, a big influence – on attitudes precisely because it is this community that holds much of the knowledge. This explains the importance of partisanship. But the fact that perceived (as opposed to actual) understanding is sensitive to the knowledge of one’s community (Sloman & Rabb, 2016) implies that the most miscalibrated individuals are the most dependent on their community’s knowledge. This explains why the most miscalibrated are more likely to adopt their community’s positions in their purest (most extreme) form (Motta et al., 2018; van Prooijen et al., 2018; Fernbach et al., 2019). A converse result is that people who are made to appreciate their own miscalibration sometimes moderate the extremity of their attitudes in a repeat test (Fernbach et al., 2013; but see Voelkel et al., 2018). In sum, there are reasons to suspect that perceived understanding, which covaries with others’ understanding, plays a role in attitude formation independent of both group identity and actual understanding.
The issues here have practical significance. If the community of knowledge hypothesis is correct, then the vast majority of the knowledge that should influence attitudes about climate change is necessarily held by a community that includes large numbers of experts and is out of reach of individuals, certainly of non-experts. The goal of public education should therefore not be to endow individuals with a rich understanding of climate change science or the cognitive skills required to evaluate claims. Instead, the goal should be to facilitate the minimal individual understanding needed to accept the coarse causal accounts provided by those who do have thorough understanding, and to foster trust in their expertise.
The current experiments first address whether people’s sense of understanding of political policies is affected by learning that others understand them. We also investigate whether the effect follows ideological lines. There are two reasons to think that it would: trust in scientific expertise has become a partisan issue, with liberals expressing more faith in science than conservatives (Hamilton et al., 2015; Funk & Kennedy, 2016); and understanding judgments are metacognitive assessments, other varieties of which can be influenced by partisan cues (Ramirez & Erickson, 2014; Graham, 2018). Of course, veridical causal knowledge is inherently useful for interacting with one’s environment, regardless of its source. If the community of knowledge view is correct, then people who fail to depend on accurate knowledge held by others simply because those others come from different ideological groups incur real costs.
We also use the contagious understanding paradigm to compare the three accounts of the relation between knowledge and attitude determination reviewed above (see Figure 1 & Table 1). The deficit model states that people’s attitudes are governed by how much they personally know and do not know. As such, the model makes no predictions about the effect of a contagious sense of understanding; rather, it implies that people’s sense of understanding is governed by their actual understanding, and that this true understanding will predict attitudes. The cultural cognition model predicts that people will respond positively to claims supported by their community and negatively to claims their community rejects. Although it too makes no predictions about the relationship between perceived understanding and individual attitudes, its emphasis on adherence to group attitudes suggests a reverse relation (i.e., that the groups one affiliates with, and their associated attitudes, will influence sense of understanding); perceived understanding judgments would therefore reflect a kind of expressive responding (Hamlin & Jennings, 2011). The community of knowledge view predicts that perceived understanding will be contagious whenever the external source of that sense of understanding has some expertise – that is, whenever it is an agent who can be trusted to have relevant knowledge – and that both perceived understanding and group identification will influence attitudes.
Experiments 1a, 1b and 2
Experiment 1a served as an initial test of whether people experience a contagious sense of understanding in the domain of policy, and if they do, whether it is susceptible to partisan influence. If it is, we might expect liberal respondents (who typically report more trust in science) and those who oppose President Trump (who has publicly questioned traditional experts) (Footnote 1) to show the effect, but conservative respondents and those who support Trump not to. We also tested whether patterns would differ depending on whether a policy concerns controversial or noncontroversial domains, and whether the people said to understand the domain are traditional experts or nontraditional but plausible ones.
Experiment 1b replicated the findings of Experiment 1a in a within-participants design, where participants could in principle notice the knowledge manipulation and thus engage in intentional expressive responding. That is, liberals might show a contagious sense of understanding effect because they wish to demonstrate their trust in scientists, while conservatives would show none.
Experiment 2 further examined partisan influence by varying the stated purpose of the survey. Partisan patterns in the contagious sense of understanding should be more likely to emerge when respondents believe they are being queried for a study about political opinion rather than for nonpolitical research about understanding how things work (cf. Unsworth & Fielding, 2014).
We used the TurkPrime panel to recruit equal numbers of conservative and liberal adult US respondents from Amazon’s Mechanical Turk, a convenience sample that has been shown to be representative of the larger population on various psychological dimensions when sorted by political ideology (Clifford et al., 2015). The contagious sense of understanding has been repeatedly replicated but is typically a small-to-medium-sized effect; sample sizes (ns ≈ 400) were selected to achieve power to detect a medium effect (d = 0.4, α = 0.05, 1 – β = 0.8) within ideological groups.
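The stated power target can be checked with a quick normal-approximation calculation. This is only an illustrative sketch, not the authors' actual power-analysis procedure; the exact noncentral-t calculation gives essentially the same per-group n.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided, two-sample t-test
    (normal approximation to the noncentral t distribution)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_power = norm.ppf(power)          # quantile corresponding to 1 - beta
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Medium effect, as in the text: roughly 99 participants per community
# knowledge cell, i.e., ~400 total across two cells within each of two
# ideological groups.
print(n_per_group(0.4))  # 99
```

The exact t-based calculation adds only one or two participants per group, so ns ≈ 400 is consistent with the stated parameters.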
Instructions adapted from Rozenblit and Keil (2002) explained how to use the rating scale to reflect different levels of understanding: three examples of what one might know about crossbows illustrated the depth of causal knowledge that ratings of one, four or seven should indicate. Participants then read passages about various political policies (see Table 2). No information about how a given policy might work was provided, but relevant experts were said to understand (CK, for the Community Knowledge condition) or not to understand (no-CK) how the policy would influence subsequent events. Issue contentiousness was manipulated by using policies that concerned hot-button issues (e.g., immigration) or non-hot-button issues (e.g., highway improvements). The type of expert that did or did not understand the issue was also manipulated; some were scientists or academics (traditional), while others were nonacademic but plausibly knowledgeable groups (nontraditional; e.g., homeland security officers). After reading each item, participants rated on a seven-point scale how well they understood how the policy would influence subsequent events. Experiment 2 added a survey purpose manipulation: half of the participants were told that the survey was political (concerning “people’s views about political issues”), while the other half were told that it was nonpolitical (“how people understand complex objects and events”). This manipulation appeared both in the instructions and at the top of each policy understanding page.
All experiments concluded with four additional measures: beliefs about the extent to which scientists possess expertise that should influence policy (1 = not at all to 5 = very much), with Experiments 1b and 2 adding a comparable measure for nontraditional experts; political ideology (1 = very conservative to 7 = very liberal); opposition to President Trump (1 = strongly support to 7 = strongly against); and basic demographic information, including party affiliation. Follow-up tests indicated that the TurkPrime selection procedure was effective; respondents’ self-reported party affiliations and political ideologies were consistent with experimental groupings.
Results and discussion
Participants who took the survey more than once or reported having taken a similar experiment previously were excluded from the analysis (after exclusions: n_exp.1a = 382; M_age = 38.6, SD = 10.9; 55.8% female; n_exp.1b = 388; M_age = 38.2, SD = 11.6; 52.2% female; n_exp.2 = 235 (Footnote 2); M_age = 38.8, SD = 13.3; 51.9% female). Unsurprisingly, opposition to President Trump split along ideological lines (Experiment 1a: M_conservative = 2.83, M_liberal = 6.23, t(380) = –19.58, p < 0.001, d = 2.0; Experiment 1b: M_conservative = 2.96, M_liberal = 6.02, t(386) = –16.87, p < 0.001, d = 1.72; Experiment 2: M_conservative = 2.52, M_liberal = 6.2, t(233) = –3.68, p < 0.001, d = 2.39).
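For independent groups of (roughly) equal size, as in the recruitment design, Cohen's d can be recovered from a t statistic and its degrees of freedom. A minimal sketch of the conversion (our illustration, not the authors' analysis code):

```python
from math import sqrt

def cohens_d_from_t(t, df):
    """Cohen's d for two independent groups of equal size,
    where df = n1 + n2 - 2: d = 2|t| / sqrt(df)."""
    return 2 * abs(t) / sqrt(df)

# Experiment 1a Trump-opposition difference: t(380) = -19.58
print(round(cohens_d_from_t(-19.58, 380), 2))  # 2.01, matching the reported d = 2.0
```

The same conversion reproduces d = 1.72 from Experiment 1b's t(386) = –16.87.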
Mean understanding ratings from all experiments are shown in Figure 2. Here, we summarize key results; any unreported main effects or interactions were nonsignificant (see Supplementary Materials for complete analysis of variance (ANOVA) tables). For Experiment 1a, an ANOVA with issue contentiousness (hot-button/non-hot-button) as a within-participants factor and community knowledge (no-CK/CK), expert type (traditional/nontraditional) and ideology (conservative/liberal) as between-participants factors showed a main effect of community knowledge, F(1, 374) = 4.20, p = 0.041, η_p² = 0.011, M_no-CK = 2.67, M_CK = 2.94, but no community knowledge by ideology interaction, F < 0.001, p = 0.99. A main effect of issue contentiousness, F(1, 374) = 11.57, p = 0.001, η_p² = 0.03, was due to slightly higher reported understanding for hot-button (M = 2.88) than for non-hot-button (M = 2.74) issues.
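The reported partial eta-squared values follow directly from each F statistic and its degrees of freedom. As an illustrative check (not the authors' analysis code):

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta-squared from an F test:
    eta_p^2 = SS_effect / (SS_effect + SS_error)
            = F * df_effect / (F * df_effect + df_error)."""
    return F * df_effect / (F * df_effect + df_error)

# Community knowledge main effect in Experiment 1a: F(1, 374) = 4.20
print(round(partial_eta_squared(4.20, 1, 374), 3))  # 0.011
```

Applied to Experiment 1b's community knowledge effect, F(1, 188) = 73.41, the same formula yields the reported 0.28.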
The overall pattern was replicated with community knowledge manipulated within participants in Experiment 1b: a main effect of community knowledge, M_no-CK = 2.65, M_CK = 3.59, F(1, 188) = 73.41, p < 0.001, η_p² = 0.28, but no ideology by community knowledge interaction, F = 0.02, p = 0.881. Again, a main effect of issue contentiousness, F(1, 188) = 7.77, p = 0.006, η_p² = 0.04, was due to slightly higher understanding of hot-button (M = 3.2) than non-hot-button issues (M = 3.04). Issue contentiousness interacted with ideology and community knowledge, F(1, 188) = 5.57, p = 0.019, η_p² = 0.029, because conservatives (M = 2.75) reported slightly higher understanding than liberals (M = 2.5) for hot-button issues not understood by experts.
Experiment 2 also showed the same pattern (main effect of community knowledge, M_no-CK = 3.07, M_CK = 3.81, F(1, 137) = 45.48, p < 0.001, η_p² = 0.25, no ideology by community knowledge interaction, F = 0.06, p = 0.8). The survey purpose manipulation showed no effect (F = 0.62, p = 0.434) and did not interact with any other variables. Issue contentiousness interacted with ideology, F(1, 137) = 5.82, p = 0.017, η_p² = 0.041, such that conservatives reported greater understanding of hot-button (M = 3.72) relative to non-hot-button (M = 3.38) issues, while liberals did not (M_hot-button = 3.3, M_non-hot-button = 3.36).
These results indicate that the contagious sense of understanding is robust for policy: people reported greater understanding of policies when experts were said to understand them, for controversial and mundane issues alike, whether traditional or nontraditional experts were the source, in political and nonpolitical surveys, and in within- and between-participants designs. Most importantly, they did so regardless of their own location on the ideological spectrum. In contrast, explicit beliefs about whether scientists possess knowledge that should inform policy did vary by ideology, although the difference was moderate (Footnote 3) (Experiment 1a: M_conservative = 3.35, M_liberal = 4.04, t(380) = –7.97, p < 0.001, d = 0.82; Experiment 1b: M_conservative = 3.52, M_liberal = 4.15, t(386) = –7.45, p < 0.001, d = 0.75; Experiment 2: M_conservative = 3.39, M_liberal = 4.02, t(233) = –5.3, p < 0.001, d = 0.69), and the ideological difference for nontraditional experts informing policy was smaller in Experiment 1b (M_conservative = 3.14, M_liberal = 3.32, t(386) = –2.1, p = 0.035, d = 0.21) and absent in Experiment 2 (M_conservative = 3.2, M_liberal = 3.19, t(233) = 0.034, p = 0.969, d = 0.01). Moreover, trust in scientists to inform policy was negatively associated with both conservatism (r_exp.1a = –0.47, p < 0.001; r_exp.1b = –0.4, p < 0.001; r_exp.2 = –0.47, p < 0.001) and support for President Trump (r_exp.1a = –0.47, p < 0.001; r_exp.1b = –0.37, p < 0.001; r_exp.2 = –0.44, p < 0.001). In sum, respondents who were less in favor of expert knowledge informing policy nevertheless reported a greater sense of understanding of how policies work when they believed these experts understood them (Footnote 4).
Skepticism of human-caused climate change is uniquely predicted by political ideology when compared with skepticism of genetically modified foods and vaccine safety (Rutjens et al., 2018), and the partisan gap in support for spending on broad concerns (e.g., defense, schools, poverty) has widened more for the environment than for any other issue over the last 30 years in the USA (Egan & Mullin, 2017). If any issue is capable of violating the nonpartisan pattern of contagious sense of understanding, we would expect it to be climate change. Although some policies (post-hurricane relief, seawall construction) in the previous experiments concerned climate issues, Experiment 3 introduces a policy item that refers to climate change by name. As in all of the other items, the details of the policy are intentionally absent. The experiment also measures attitudes toward the hot-button policies and potentially strengthens the survey purpose manipulation by stating in the political condition that the study concerns anthropogenic climate change, rather than political views more generally.
Experiment 3 used a modification of the design of Experiment 2. First, one hot-button item was replaced with a climate change policy (see Table 2). Second, the survey purpose manipulation stated that the experiment was about “human-caused climate change” in the political condition. As in Experiment 2, this manipulation appeared both in the instructions and above every policy understanding judgment. The climate change policy appeared first in the climate change purpose condition and last in the understanding purpose condition. Third, four follow-up measures probing participants’ support for the individual hot-button policies (e.g., “How would you rate your support for a bill to build infrastructure that would reduce the impact of climate change on coastal areas?”) were added at the end of the experiment. We dropped the expert type manipulation because it had not made a difference in the preceding experiments.
Results and discussion
Exclusion criteria from previous experiments were used (n = 383; M_age = 39.1, SD = 12.6; 48.8% female). Trump opposition followed ideological lines (M_conservative = 2.27, M_liberal = 6.69, t(381) = –4.42, p < 0.001, d = 3.53).
An ANOVA on understanding judgments revealed a main effect of community knowledge, M_no-CK = 2.63, M_CK = 3.49, F(1, 379) = 133.82, p < 0.001, η_p² = 0.26, but no ideology by community knowledge interaction (F = 0.75, p = 0.386). Issue contentiousness showed a reliable effect, F(1, 379) = 47.44, p < 0.001, η_p² = 0.11, and interacted with survey purpose, F(1, 379) = 8.15, p = 0.005, η_p² = 0.021, such that the difference between understanding ratings of non-hot-button and hot-button issues was larger in the understanding purpose condition (M_hot-button = 3.18, M_non-hot-button = 2.78; t(190) = 7.13, p < 0.001, d = 0.52) than in the climate change purpose condition (M_hot-button = 3.23, M_non-hot-button = 3.07; t(191) = 2.79, p = 0.006, d = 0.2).
Nevertheless, inspection of means shows that the survey purpose manipulation did affect one item: climate change policy. In the climate change purpose condition, the contagious sense of understanding was eliminated for conservatives, M_no-CK = 3.02, M_CK = 3.27, t(90) = 0.69, p = 0.491, d = 0.14, and, surprisingly, for liberals as well, M_no-CK = 2.98, M_CK = 3.14, t(98) = 0.96, p = 0.339, d = 0.19 (see Figure 2).
Beliefs about scientists’ expertise informing policy followed the now-familiar pattern, varying by ideology (M_conservative = 3.50, M_liberal = 4.4, t(381) = –10.77, p < 0.001, d = 1.09) and negatively correlating with conservatism (r = –0.52, p < 0.001) and support for Trump (r = –0.54, p < 0.001). Measures of support for the specific hot-button policies also showed a predictable partisan pattern (see Figure 3). Relative to conservatives, liberals were more supportive of legislation on climate change (M_conservative = 3.3, M_liberal = 4.37, t(381) = –11.23, p < 0.001, d = 1.15), assault rifle attachments (M_conservative = 3.04, M_liberal = 4.51, t(381) = –12.5, p < 0.001, d = 1.28) and immigration (M_conservative = 2.81, M_liberal = 4.12, t(381) = –11.94, p < 0.001, d = 1.22); this ideological difference shrank on the issue of protecting natural gas reserves (M_conservative = 3.64, M_liberal = 3.87, t(381) = –2.51, p = 0.013, d = 0.26).
Results thus far indicate that community knowledge has a reliable and nonpartisan effect on understanding judgments in the domain of public policy, including climate-related policies, with the sole exception of highly politicized climate policy. Despite clear ideological divisions in support for the policies and in enthusiasm for scientists’ knowledge informing them, there was no comparable division in the contagious sense of understanding conferred by awareness of this knowledge. This departs from the many partisan differences, including in metacognitive assessments, reported in the literature.
Yet several questions remain. First, the deficit and cultural cognition models seem incomplete because neither view predicts the contagious sense of understanding seen here or the knowledge dynamics discussed above. But defenders of the deficit model would be right to point out that we did not measure actual knowledge, so relationships between domain knowledge, sense of understanding and attitudes are unknown. Moreover, the observed partisan divisions in policy support are predicted by the cultural cognition model, so the fact that one's sense of understanding varies with awareness of others’ knowledge, although interesting in its own right, says little about the attitudes that likely determine action. Second, it is possible that respondents assumed the experts in our scenarios were politically neutral or members of their own ideological groups. If only neutral or ideologically consistent knowledge sources increase people's sense of understanding, then the cultural cognition model could explain the apparently nonpartisan pattern of understanding judgments. An important question is whether ideologically inconsistent knowledge sources confer a contagious sense of understanding, and if so, how that understanding influences attitudes. To our knowledge, this question has not been addressed in the literature, although the direct effects of in-group status on argument persuasiveness (e.g., Mackie et al., Reference Mackie, Worth and Asuncion1990) are consistent with a cultural cognition interpretation of our data. Finally, liberals and conservatives respond quite differently to climate policies intended to adapt to as opposed to mitigate climate change (Campbell & Kay, Reference Campbell and Kay2014). Different patterns of understanding might emerge depending on this important dimension of climate policy. 
Experiment 4 addresses these questions by adding measures of climate change knowledge and perceived scientist ideology and by testing both adaptation and mitigation policies.
Experiment 4 varied the previous design in several ways. Community knowledge was manipulated between rather than within participants. All policy understanding items concerned climate change; we developed three adaptation and three mitigation policy scenarios based on policies examined by Bateman and O'Connor (Reference Bateman and O'Connor2016). The experiment included the survey purpose manipulation from Experiment 3, here strengthened by mentioning human-caused climate change in the climate change condition policy items only. After the main task, participants supplied demographics and then rated their overall understanding of climate change (“How well do you understand how global warming and climate change work?”, using the same scale as the understanding judgments), their support for the six policies, the appropriateness of scientists informing policy and the scientists’ ideologies (“We have discussed the policy understanding of a number of scientists and experts. In general, what sort of political views and beliefs do you think these scientists hold on average?”, using the same scale as political ideology). Participants were then instructed not to use websites or other sources and answered 10 true-or-false questions about the science of climate change adapted from studies that assessed actual knowledge (Mumpower et al., Reference Mumpower, Liu and Vedlitz2016; Shi et al., Reference Shi, Visschers, Siegrist and Arvai2016; Ranney & Clark, Reference Ranney and Clark2016); each question was followed by a confidence judgment. The experiment concluded with measures of political leaning and support for President Trump.
Results and discussion
Exclusion criteria from previous experiments were used (n = 379; M age = 36.9, SD = 11.4; 50.4% female). Trump opposition (M conservative = 2.47, M liberal = 6.66, t(377) = –4.2, p < 0.001, d = 3.05) and trust in scientists to inform policy (M conservative = 3.5, M liberal = 4.44, t(377) = 9.57, p < 0.001, d = 0.98; correlation with Trump support r = –0.509, p < 0.001) followed ideological lines, as did self-reported climate change knowledge (M conservative = 4.1, M liberal = 4.5, t(377) = –3.08, p = 0.002, d = 0.31), although assessed (actual) knowledge did not (M conservative = 66% correct, M liberal = 68% correct, t(374) = –1.3, p = 0.186, d = 0.14). Unsurprisingly, conservatives (M = 3.46) supported climate policies less than liberals did (M = 4.21, t(377) = 10.01, p < 0.001, d = 1.03); this pattern was stable across individual policies and policy types (all p-values < 0.001; see Figure 3).
As in all previous experiments, an ANOVA on mean policy understanding judgments showed a main effect of community knowledge, F(1, 371) = 36.2, p < 0.001, ηp² = 0.089, but no community knowledge by ideology interaction, F < 1, p = 0.837. A main effect of survey purpose, F(1, 371) = 5.08, p = 0.025, ηp² = 0.014, was due to higher ratings in the climate change purpose condition (M = 3.0) than in the understanding purpose condition (M = 2.7). Policy type also showed a main effect, F(1, 371) = 9.69, p = 0.002, ηp² = 0.025, with mitigation measures (M = 2.92) eliciting higher understanding ratings than did adaptation measures (M = 2.78), but policy type was not involved in two- or three-way interactions with ideology and community knowledge (F-values < 1). In sum, all climate change policies showed a nonpartisan contagious sense of understanding (see Figure 2). We speculate that Experiment 3's null result for climate change in the climate change purpose condition was due to that experiment's within-participants design allowing respondents to notice the community knowledge manipulation.
We also examined whether participants’ sense of understanding of climate change itself was sensitive to the community knowledge manipulation. After all, the community of knowledge hypothesis holds that people's sense of understanding is increased by ambient awareness of community understanding, so hearing repeatedly that various natural scientists understand climate policies could boost people's sense of understanding of a critical component of climate policy: the science of climate change itself. Although reported climate change understanding was higher for liberals than for conservatives, as noted above, it was also higher in the community knowledge (M = 4.44) than in the no community knowledge condition (M = 4.16; F(1, 371) = 4.6, p = 0.033, ηp² = 0.012), and the two variables did not interact (F(1, 371) = 0.01, p = 0.956, ηp² = 0).
We next tested the predictions made by the three accounts of attitude formation using a linear regression with mean policy understanding judgments, ideological group and assessed knowledge scores as independent variables and mean policy support as the dependent variable (R² = 0.249), F(3, 372) = 41.13, p < 0.001. In violation of the deficit model, assessed knowledge did not predict attitudes, β = –0.046, p = 0.313. (Also against this model, assessed knowledge was unrelated to perceived knowledge, r = 0.047, p = 0.362.) Ideology did predict attitudes, β = 0.463, p < 0.001, which is expected under the cultural cognition model, but so did rated understanding, β = 0.19, p < 0.001, which is not. One issue with these analyses is that we are comparing the variance in attitudes attributable to actual understanding of climate change science with variance attributable to participants’ sense of understanding of climate change policies. This confounds actual versus perceived understanding with knowledge domain, even though the two domains overlap. We therefore reran the regression with self-reported climate change understanding replacing mean policy understanding. The results were nearly identical, showing that perceived but not actual knowledge of climate change predicts attitudes toward climate policy independently of ideology. This suggests that the cultural cognition model is incomplete; people's sense of understanding, which has been shown to vary with community knowledge, partly accounted for the explained variance in climate change policy support, over and above respondents’ ideological groups. The same pattern is seen in separate regressions for adaptation and mitigation policies; when the dummy-coded ideological group variable (liberal, conservative) is replaced by the continuous measure of political leaning (1–7 conservative to liberal scale); and when age, gender and education are added to the model.
These results also remain stable when self-reported climate science understanding replaces mean policy understanding (see Supplementary Materials for regression tables).
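The structure of this regression can be sketched with synthetic data. This is purely illustrative and not the authors' code or data: the variable names, simulated coefficients and sample size are our assumptions, chosen only so that the simulated pattern mirrors the reported one (ideology and perceived understanding carry weight in the model while assessed knowledge does not).

```python
import numpy as np

# Illustrative sketch, not the authors' analysis: synthetic data in
# which policy support depends on ideology and perceived understanding
# but not on assessed knowledge, mirroring the reported pattern.
rng = np.random.default_rng(0)
n = 200
ideology = rng.integers(0, 2, n).astype(float)  # 0 = conservative, 1 = liberal
understanding = rng.uniform(1, 7, n)            # mean policy understanding (1-7)
knowledge = rng.uniform(0, 1, n)                # assessed knowledge (prop. correct)

# Simulated policy support: assessed knowledge has no true effect here
support = 2.0 + 1.0 * ideology + 0.3 * understanding + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), ideology, understanding, knowledge])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
# beta[1] (ideology) and beta[2] (understanding) recover nonzero
# effects, while beta[3] (assessed knowledge) stays near zero.
```

On data generated this way, the knowledge coefficient hovers near zero while the other two predictors are recovered, which is the qualitative signature the regression in the text reports.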
We also tested whether participants’ beliefs about the ideological leanings of the experts that caused the contagious sense of understanding affected the results. Overall, liberals did consider these experts more liberal (M = 5.05) than conservatives did (M = 4.15; t(377) = –6.07, p < 0.001, d = 0.63). We therefore reran the main ANOVA including just those participants who responded either above the scale midpoint (i.e., liberals and conservatives who thought the experts were liberal, n = 227) or at the midpoint (liberals and conservatives who thought the experts were politically neutral, n = 75). Again, we observe the same basic pattern: a main effect of community knowledge (p-values ≤ 0.001), but no interaction with ideology (p-values ≥ 0.231). To compare these results with the findings for attitudes, we created a new variable indicating whether a given participant's ratings of the experts’ ideologies placed them outside of their own ideological group (i.e., conservatives who considered experts to be liberal and vice versa). Adding this variable to the regression models reported above did not change the pattern; sense of understanding (β = 0.19, p < 0.001) and ideology (β = 0.439, p < 0.001) still independently predicted policy support, while assessed knowledge (β = –0.042, p = 0.36) and ideological in-group status of knowledge sources (β = –0.052, p = 0.302) did not.
Finally, we examined miscalibration and attitude extremity. Following Fernbach et al. (Reference Fernbach, Light, Scott, Inbar and Rozin2019), we operationalized miscalibration by subtracting actual climate science understanding (mean-centered climate change knowledge assessment scores) from sense of climate science understanding (mean-centered ratings of general climate change understanding). We operationalized attitude extremity by subtracting the scale midpoint from policy support ratings, removing signs and averaging the resulting values to generate individual extremity scores (Fernbach et al., Reference Fernbach, Rogers, Fox and Sloman2013). A linear regression (R² = 0.093), F(3, 372) = 12.72, p < 0.001, showed that ideology (β = 0.247, p < 0.001) and miscalibration (β = 0.151, p = 0.021) separately predicted attitude extremity, while their interaction did not (β = 0.02, p = 0.761). The same result obtains when miscalibration is operationalized as: (1) the extent to which participants were overconfident (mean confidence ratings in the knowledge assessment task minus mean performance on that measure); or (2) the gap between participants’ sense of understanding climate policy and their own knowledge of climate science (mean understanding from the policy judgment task minus performance on the knowledge assessment). In short, the gap between perceived and assessed knowledge predicted attitude extremity independently of ideology no matter how that gap was quantified.
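For concreteness, the two composite measures can be computed as follows. This is a minimal sketch under an assumed 1–7 support scale with midpoint 4, not the authors' code; the function names are ours.

```python
import numpy as np

def miscalibration(perceived, assessed):
    """Mean-centered perceived understanding minus mean-centered
    assessed knowledge; positive values indicate overestimating
    one's own knowledge relative to the sample."""
    perceived = np.asarray(perceived, dtype=float)
    assessed = np.asarray(assessed, dtype=float)
    return (perceived - perceived.mean()) - (assessed - assessed.mean())

def attitude_extremity(support_ratings, midpoint=4.0):
    """Unsigned distance of each policy-support rating from the scale
    midpoint, averaged within participant (rows = participants)."""
    ratings = np.asarray(support_ratings, dtype=float)
    return np.abs(ratings - midpoint).mean(axis=1)
```

Note that extremity is direction-blind by construction: a participant who rates every policy at an endpoint of the 1–7 scale receives the maximum score of 3 whether they strongly support or strongly oppose the policies, which is why extremity "both for and against" can be predicted by the same miscalibration term.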
Five experiments demonstrate a contagious sense of understanding of public policies. People reported higher understanding of policies related to climate change and other controversial issues when they believed that experts understand how they work, a pattern previously reported for natural phenomena, causal systems that are similarly complex and difficult for individual agents to understand (Sloman & Rabb, Reference Sloman and Rabb2016). The effect was robust and impervious to experimental efforts to elicit the partisan patterns predicted by divisions in liberals’ and conservatives’ reported trust in science generally (Gauchat, Reference Gauchat2012) and climate science in particular (Hamilton et al., Reference Hamilton, Hartter and Saito2015; Funk & Kennedy, Reference Funk and Kennedy2016). The sole exception was climate change policy understanding in a context uniquely suited to expressive responding (Experiment 3). We take the broad pattern as evidence that the contagious sense of understanding is difficult to disrupt. This places understanding judgments in a rare class, since effects of partisan cues and differences associated with ideological groupings are seen in many other kinds of policy assessment (Cohen, Reference Cohen2003; Smith et al., Reference Smith, Ratliff and Nosek2012; Bolsen et al., Reference Bolsen, Druckman and Cook2014; Colombo & Kriesi, Reference Colombo and Kriesi2017; Ehret et al., Reference Ehret, Van Boven and Sherman2018; Satherley et al., Reference Satherley, Yogeeswaran, Osborne and Sibley2018).
In the case of climate change, whether society accepts or rejects the conclusions of collective wisdom is an urgent concern. We found that the contagious sense of understanding had downstream effects, predicting overall support for climate change policies independently of ideology, even though actual understanding of climate change did not. Moreover, extremity of policy support (both for and against) was predicted by the extent to which our US samples overestimated their own knowledge. The better calibrated respondents were, the more moderate were their views. An open question is whether the same would hold in other populations. Another open question is whether our operationalization of ideological group was too coarse. Although the attitudes toward climate change that cluster among hierarchical-individualists and egalitarian-communitarians show conspicuous overlap with those that cluster among conservatives and liberals, it is possible that replacing our unidimensional scales with the bidimensional measures common to cultural cognition studies would yield interactions.
What does this mean for climate policy? Actual knowledge of climate change did not vary with people's support for climate policies or location on the ideological spectrum in our sample (see also Stoutenborough & Vedlitz, Reference Stoutenborough and Vedlitz2014; Bedford, Reference Bedford2016), so the notion that people who disregard the scientific consensus on climate change or oppose climate action do so because they lack some necessary information was not supported. For these reasons, the deficit model cannot be the whole story; although this point is widely discussed, the model still underlies many climate science communication efforts (Suldovsky, Reference Suldovsky2017). A well-known alternative – the cultural cognition model – is supported by many findings, including some of our own. But the role of contagious sense of understanding in attitudes indicates that this model, which makes no obvious predictions about metacognitive judgments like understanding assessments except perhaps that they follow ideological lines, is also incomplete. Of course, these two models are not mutually exclusive (van der Linden et al., Reference van der Linden, Maibach, Cook, Leiserowitz, Ranney, Lewandowsky, Árvai and Weber2017). The community of knowledge hypothesis offers a means to integrate them. It gives room for a role for sense of understanding while proposing that most of the knowledge underwriting that sense is contained in the heads of members of one's community.
The community of knowledge hypothesis proposes that climate science information serves a purpose other than merely informing: it scaffolds people's impressions of collective understanding, impressions that do increase support over and above that accounted for by ideology. Impressions of collective understanding can come from other sources such as ‘science-y’ cues (Isberner et al., Reference Isberner, Richter, Maier, Knuth-Herzig, Horz and Schnotz2013; Giffin et al., Reference Giffin, Wilkenfeld and Lombrozo2017), and the effects of scientific consensus information on attitudes can also be interpreted this way (Lewandowsky et al., Reference Lewandowsky, Gignac and Vaughan2013; van der Linden et al., Reference van der Linden, Leiserowitz, Feinberg and Maibach2015). Although these effects are demonstrated in laboratory experiments where the cues are artificially varied, we emphasize that such cues are typically valid; that is, under normal circumstances, the ambient sense of understanding they create corresponds to actual scientific understanding. Of course, the fact that they can be manipulated in experiments means that they can be employed by bad faith actors, as with quasi-scientific efforts to obfuscate the dangers of smoking (see O'Connor & Weatherall, Reference O'Connor and Weatherall2019). In sum, we do not suggest giving up on scientific communication. Rather, we offer a reconceptualization of its role, as a (true) indicator of collective understanding. Educational efforts are not futile on this view, but should be rethought. The goal should not be thorough individual understanding, or even critical reasoning skills, but the minimal individual understanding sufficient to accept coarse causal accounts provided by those who do have thorough understanding. Information, on this view, becomes a vehicle of trust.
Another virtue of providing information is that it can serve to show people how much they don't know. While this exercise may sound counterproductive, it does reduce attitude extremity in some cases (Fernbach et al., Reference Fernbach, Rogers, Fox and Sloman2013). Conversely, people who are least aware of how much they don't know sometimes adopt the most extreme positions (Motta et al., Reference Motta, Callaghan and Sylvester2018; van Prooijen et al., Reference van Prooijen, Krouwel and Emmer2018; Fernbach et al., Reference Fernbach, Light, Scott, Inbar and Rozin2019). This pattern was evident in our data, too: extremity of policy ratings grew larger as the gap between perceived and assessed knowledge widened. While extreme support for climate policy may seem like a happy outcome, we nonetheless see merit in better calibrated individuals’ tendency toward less polarized positions. Rather than promoting extreme action on climate change at all costs, this would support thoughtful action based on science that would take into account other pressing social needs and openness to the views of experts, even when they clash with those of one's in-group.
Admittedly, causing people to appreciate the limits of their own knowledge is a double-edged sword. While it moderates extreme attitudes in some cases, it may seem condescending if done without humility or manipulative if the recipient doubts the messenger's intent. This raises important questions about source credibility, a critical issue in the psychology of climate change that has received some attention (cf., Rabinovich et al., Reference Rabinovich, Morton and Birney2012; Lachapelle et al., Reference Lachapelle, Montpetit and Gauvin2014; Vraga et al., Reference Vraga, Myers, Kotcher, Beall and Maibach2018), but surely merits a systematic and sustained investigation. One hopeful note is that we found no evidence of ideological source effects on understanding judgments. Conservatives reported greater understanding of policies said to be understood by scientists that they considered liberal and in turn showed increased support for climate policies. Defenders of cultural cognition would be right to point out that effect sizes of ideology on attitudes dwarfed those of the contagious sense of understanding. But it is not feasible to attempt to change ideological commitments, and perhaps not ethical either. Improving calibration and increasing awareness that all complex knowledge critically depends on others constitutes a compromise approach between fruitlessly dispensing information on the assumption that it speaks for itself and capitulating to the entrenched nature of group attitudes.
To view supplementary material for this article, please visit https://doi.org/10.1017/bpp.2020.40.
This work was funded by a grant from the program on Humility and Conviction in Public Life at the University of Connecticut and the John Templeton Foundation.