
How the Structure of Scientific Communities Could Affect the Public Uptake of Uncertain Science

Published online by Cambridge University Press:  21 March 2025

Sacha Ferrari
Affiliation:
Center for Logic and Philosophy of Science, KU Leuven, Belgium
Wouter Lammers
Affiliation:
Public Governance Institute, KU Leuven, Belgium
Sylvia Wenmackers*
Affiliation:
Center for Logic and Philosophy of Science, KU Leuven, Belgium
Corresponding author: Sylvia Wenmackers; Email: sylvia.wenmackers@kuleuven.be

Abstract

We present an agent-based model to study how the structure of a scientific network could affect the public uptake of science and how this impact is influenced by scientific uncertainty and affinity bias. For unbiased agents, a highly connected scientific network decreases the probability that the public favors the correct theory. For biased agents, however, a moderately connected scientific network causes the public to favor the correct theory more often. This results from the competition between the scarcity of information (for poorly connected agents) and the spread of misleading information (for highly connected agents). Adding more scientists strengthens both effects.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

In contemporary society, science plays an important role in many aspects of life, such as healthcare, energy, and education. However, it can be challenging for individuals to determine the most credible scientific theory when making personal or policy decisions. Factors such as literacy level, ideological orientation, and the manner of science communication can influence their judgments (Miller 1998; Rekker 2021; Knight 2006; Harker 2015).

This article focuses on topics lacking scientific consensus, a common stage in the scientific process (Shwed and Bearman 2010). Even perfectly rational scientists may endorse differing theories as a result of inherent variability in research findings. Consensus is therefore more likely when all research results are shared, but the speed at which peers can share and process such information has limits. Internal communication channels are thus vital for scientific progress. Moreover, scientists, like all humans, are susceptible to affinity bias, whereby the uptake of information depends on the perceived affinity with its source. This bias can affect consensus formation and sometimes even increase polarization.

Previous agent-based studies have shown that the structure of scientific networks affects scientists’ beliefs, influencing the formation of consensus or polarization (Zollman 2007; O’Connor and Weatherall 2018). Empirical research also indicates that citizens react differently to scientific results when they perceive a lack of consensus among scientists. In particular, a lack of perceived consensus among scientists has been shown to have a slightly negative effect on citizens’ belief in findings reported in science communication (Chinn et al. 2018; Gustafson and Rice 2019; van Stekelenburg et al. 2022). There is thus a complex interplay of individual and network-level factors in the formation of scientific consensus and in its effects on citizens’ beliefs. So far, we know of little research simulating the effects of this interplay on citizens’ uptake of scientific findings.

The core innovation of this article is that we investigate how network features of the scientific community affect citizens’ uptake of scientific findings. We do so with computational simulations, extending the model of Zollman (2007). Our extended model includes two groups of actors: scientists and other citizens. We study how the different types of networks that scientists may form affect citizens’ beliefs. We also include two additional variables: the uncertainty of the evidence and the affinity bias of scientists and citizens. In the next subsection, we discuss what we know about the four main variables.

1.1. Four main variables

First, the main dependent variable is the citizens’ uptake of scientific theories: it is through this success rate among citizens that we can assess whether the public acquires a good understanding of science. In this article, we quantify the public uptake of science with a single number: the success rate of the correct theory in the citizen community, that is, the proportion of citizens favoring the correct theory (see later discussion). For the sake of simplicity, we adopt what is called, in the public communication of science and technology (PCST), the deficit model of science communication, which focuses on unilateral knowledge transfer from scientists to other citizens (Wynne 1991; Burns et al. 2003). In this model, citizens are relatively passive receivers of evidence. We are aware of the limitations of this model (Trench 2008; Seethaler et al. 2019), but we consider this minimal model here as a first step toward a more comprehensive understanding of the impact of scientific uncertainty on the public uptake of science (Schmid-Petri and Bürger 2020). One limitation of our model is that while the scientists’ search for evidence is influenced by their prior beliefs (as explained later), the citizens are modeled as receiving the same evidence, to which they may respond differently depending on their prior beliefs.

Second, the main independent variable of our model is the structure of scientific networks. Here, we understand the term structure as the shape of the network of epistemic relations that exist between scientists. In particular, two scientists share an epistemic connection in the network when they exchange their empirical results. Bibliometric analysis has shown that many scientists are just a few links away from each other (Newman 2001a). Authors’ positions in networks affect the uptake of their results (Uddin et al. 2013; Kumar 2015). Thus, network structures directly affect the dissemination of newly produced scientific knowledge among scientists and potentially among citizens as well. Next, we consider two moderating variables.

Third, the acceptance of a scientific theory by citizens can depend on how uncertain this theory is. Uncertainty is inherent to scientific inquiry (Kampourakis and McCain 2019; Pellizzoni 2003) and can be due to the limited accuracy of the experimental setup (e.g., a polymerase chain reaction [PCR] test with occasional false-positive results or a telescope with a low-resolution lens), the nature of the studied object itself (e.g., a complex social phenomenon or a stochastic quantum effect), or both. The communication of scientific uncertainty to a public audience has received ample attention (Giles 2002; Fischhoff and Davis 2014; Broomell and Kane 2017; Van Der Bles et al. 2019). Indeed, making scientific uncertainties explicit can affect the acceptance of a scientific hypothesis or theory by citizens (Gustafson and Rice 2019). To contribute to the existing literature, we aim to assess this impact in a more systematic and quantitative way.

Fourth, scientists and citizens alike are susceptible to psychological biases. One such bias is affinity bias, where individuals give more weight to evidence coming from people with whom they share similar beliefs, regardless of whether the new evidence confirms their own beliefs. So, affinity bias is a form of homophily, understood here as a preference for interacting with like-minded people (see Dandekar et al. 2013); it pertains to the source rather than the content. As such, it differs from biased assimilation or confirmation bias (whereby people selectively accept evidence that confirms their prior beliefs while rejecting disconfirming findings; see, e.g., Lord et al. 1979) (footnote 1). Affinity bias seems especially relevant for modeling scientists who revise their beliefs in response to evidence and who make decisions on whether or not further experiments are required. Moreover, the bias of individual scientists may affect the whole scientific community through peer interaction, as well as the rest of society through public communication. The impact of biases has been studied in scientific communities, both in psychology and in the philosophy of science (Peters 2021; Mahoney 1977; Wilholt 2009; Schumm 2021; Peters 2022; Kelly 2008; Dorst 2023).

Biases have also been implemented in numerical models. For instance, Baumgaertner and Justwan (2022) modeled how people’s beliefs are influenced by homophily. As mentioned, this bias is similar to what we call affinity bias in the current article. However, Baumgaertner and Justwan (2022) only considered a single group of agents (modeled after online groups) with full beliefs, whereas we investigate two groups of agents with graded degrees of belief. An earlier example of a computational study focused on homophily is Dandekar et al. (2013), who started from DeGroot’s (1974) model, in which individuals update their subjective probability assignments by taking a weighted average over the opinions of others. This can be understood as an agent-based model on a complete graph with weighted edges that can be chosen to represent homophily. Dandekar et al. (2013) pointed out that homophily alone does not lead to polarization in such a model (whereas biased assimilation does).

Our work aims to contribute to this debate by evaluating the role of affinity bias in shaping the beliefs of scientists and citizens, especially under scientific uncertainty. Key questions include: How does affinity bias influence scientists’ beliefs when results are uncertain? Is affinity bias overcome with more certain evidence? Additionally, our model tests whether individually problematic dispositions (e.g., affinity bias) are equally problematic at the group level. Some cognitive biases can be problematic at the individual level but turn out to be beneficial at the group level (Peters 2021); this is known as Mandevillian intelligence (Smart 2018).

Methodologically, we chose the public uptake of science as our dependent variable because this is the effect on which PCST generally focuses. The structure of the scientific community, scientific uncertainty, and affinity bias could, in principle, all be considered as independent variables. We selected the structure of the scientific community as our main independent variable, though, because our goal is to understand, for a specific structure of the scientific interactions, how changes in individual behaviors (i.e., affinity bias) and the accuracy of experiments (i.e., scientific uncertainty) affect the dependent variable.

1.2. Interaction of the four variables

Previous models have studied these variables in isolation or have focused on the interaction of some of them. In practice, however, these factors operate simultaneously and likely interact in complex ways, so their net effect seems impossible to determine a priori. We are not aware of any model or theory that has incorporated all these variables together. Therefore, we opted for a comprehensive simulation model to study dynamic interactions between these variables. This approach helps us to develop a more nuanced understanding of the effects of these factors on public science communication.

Complex interplays of parameters at the individual and network levels can be simulated with agent-based modeling (Hedström and Ylikoski 2010; Bruch and Atwell 2015). Agent-based modeling is used in the social sciences to understand the dynamics of social phenomena (Šešelja 2023). Such models treat social entities (individuals, institutions, etc.) as agents forming a network. Each agent can share information with, or otherwise influence, the other agents in the community. In particular, the network epistemology framework of Bala and Goyal (1998) has been adapted to the context of science by Zollman (2007) and has been further developed in several publications to describe the dynamics of scientific communities (Weatherall et al. 2020; O’Connor and Weatherall 2018). Our article aims to adapt this model in a new direction to simulate how a scientific community exchanges knowledge with a nonscientific audience.

Our model represents an undecided scientific community hesitating between two theories, $A$ and $B$ . We assume that one theory is, in fact, correct, but the scientists only have fallible means for determining this empirically. Some scientists perform experiments; they make their outcomes public to inform other scientists as well as citizens. In response, the members of both groups progressively change their degrees of belief concerning theories $A$ and $B$ . As far as we know, our extension of Zollman’s model is the first one to consider two distinct epistemic communities: scientists and other citizens.

As we will see in the next section, our four variables can be implemented numerically, so their influence can be quantified. The outcomes of our model can be read both descriptively and normatively. On the one hand, we describe how agents react to various combinations of the aforementioned variables and parameters. On the other hand, we can use this knowledge to determine how a scientific network should be organized in order to maximize public uptake of the correct scientific theory. Our article can also be considered a first step toward a comprehensive computational study of the deficit model in science communication. Our methodology can be extended to more complex science communication paradigms, such as the dialogue approach in PCST, but doing so falls outside the scope of our present work.

Our article is structured as follows. In section 2, we introduce Zollman’s model and present our modifications to it. In section 3, we run the model with varying input values for the main parameters. In section 4, we summarize our key findings and suggest directions for follow-up studies.

2. The model

In this section, we introduce the model of Zollman (2007) and our extension of it. We explain how we implemented the adapted model numerically to address our research question.

2.1. Zollman’s model and our application of it

There are several agent-based models that aim to describe how individuals create, share, and update their knowledge (a field also known as opinion dynamics; for a review, see, e.g., Fischbach et al. 2021). In recent years, Zollman’s model and its refinements have attracted particular attention. In his influential article, Zollman (2007) applied the economic model of Bala and Goyal (1998) to epistemic communities in order to understand their communication structures. Such a model considers only one type of agent, representing scientists. Each scientist is a node of a communication network: a scientist can interact with other scientists (if some communication channel links them directly) or can stay isolated from other scientists (if no direct channel exists between them).

Zollman (2007) described quantitatively how this network of interactions influences each scientist’s beliefs about given theories. Agents have degrees of belief about which of two options, $A$ and $B$, is best. From round 1 onward, agents have to decide between two statements: “Treatment $A$ is better than treatment $B$” or “Treatment $B$ is better than treatment $A$.” In our article, we apply the model to two scientific theories (rather than treatments, although that interpretation remains admissible, too). We consider a scientific community in which two competing theories, $A$ and $B$, have been proposed to explain a given phenomenon. The first theory, $A$, is a well-known theory that has been confirmed by a large number of experiments. The second theory, $B$, is either a theory that has so far been ignored (for instance, because its predicted effects were too small compared with available measurement resolution) or an improved version of theory $A$. We consider the phase in which a new empirical method has just become available that might give more strength to theory $B$ (relative to theory $A$) (footnote 2). Note that, in reality, two scientific theories are rarely each other’s negation (footnote 3). Our model merely compares the relative merits of two theories in a context where those are the main or only contenders.

To model such situations, in which dissensus about two rival theories has started to emerge within the scientific community, we assume that each scientist has a personal degree of belief about which theory is better. We assume that these degrees of belief are rational in the sense that they adhere to the axioms of probability; hence, we also call them credences. For a given agent at a given time, we denote these degrees of belief, respectively, by $P(A) \equiv P$(“Theory $A$ is better than theory $B$”) and $P(B) \equiv P$(“Theory $B$ is better than theory $A$”). They are both real numbers between 0 and 1, with $P(A) = 0$ denoting the agent’s subjective certainty that theory $B$ is better than theory $A$ and $P(A) = 1$ denoting their certainty that theory $A$ is better than $B$; mutatis mutandis for $P(B)$. Rational coherence requires that these credences obey the normalization requirement $P(A) + P(B) = 1$. So, for instance, if a scientist has a credence of 80% that theory $B$ is better than $A$ ($P(B) = 0.8$), their credence that theory $A$ is better must be 20% ($P(A) = 1 - 0.8 = 0.2$). If an agent has a credence above 50% that either theory is better than the other (at a given time), we say the agent favors that theory.

As mentioned earlier, in our model, theory $A$ is initially far more established than theory $B$. Scientists are unlikely to challenge theory $A$ without significant belief in theory $B$. However, some dissident scientists may doubt the established theory $A$ and conduct new experiments to confirm their belief. Meanwhile, their conservative colleagues strongly favor theory $A$ and will not perform additional experiments. Stated differently, we assume that only dissident scientists, who have a prior degree of belief in the superiority of theory $B$ greater than 50% ($P(B) \gt 0.5$, or equivalently $P(A) \lt 0.5$), will deem it relevant to run further experiments, both to confirm theory $B$ and to convince their colleagues that it warrants more support than theory $A$. Conservative scientists, who favor the established theory $A$ (i.e., who have $P(A) \gt 0.5$ and thus $P(B) \lt 0.5$), lack the incentive to run extra experiments because of $A$’s established empirical adequacy and their low belief in $B$. However, a conservative scientist may update their beliefs based on dissidents’ results, and if they come to favor $B$ ($P(A) \lt 0.5$), they may run experiments to confirm their new belief, thereby becoming dissidents themselves.

Zollman’s model assumes that the second treatment is better than the first one. Analogously, we assume that theory $B$ has better predictive success than theory $A$. However, the experimental device is not perfect and leaves room for uncertainty. That is, the device does not lead to a positive result in favor of $B$ 100% of the time. Although not perfect, we expect it to have an accuracy of more than 50%. A lower value would imply, given our assumption that $B$ is indeed better than $A$, that the device is not a suitable one. A 50% accuracy would be equivalent to assessing the truth of theory $B$ by flipping a coin. We define the accuracy of the experimental device, $p$, as the sensitivity of the device: the probability of producing a true-positive experimental result, given that $B$ is the correct theory (footnote 4). For example, in a counterfactual case where the geocentrism versus heliocentrism debate took place with 19th-century telescope technology, $p$ could represent the probability of measuring stellar parallax (which is a true positive, given that heliocentrism is correct). Formally, we use the notation

(1) $$p = 0.5 + \varepsilon, $$

where $\varepsilon $ is a real number between 0 and 0.5. If $\varepsilon = 0.5$ , then the device is 100% accurate, and its results leave no room for uncertainty.

Because $p$ is a probability, its (Bayesian) interpretation can be extended from experimental accuracy to encompass other forms of uncertainty. Indeed, dispersion in the experimental outcomes is not necessarily caused by the measurement device alone but can also result from the intrinsic stochasticity of the system under study. With this in mind, one can apply our model to many other fields dealing with inherent uncertainty, such as the social sciences, medicine, statistical physics, and quantum mechanics. Examples include sampling error in a population survey (in that case, one of the theories could be “The majority of the population is shorter than 170 cm”), chaos in weather simulation (“It will rain tomorrow at 2:34 p.m.”), and the detection of an electron outside an electron trap (“The electron stays in the trap for at least 30 minutes”).

In Zollman’s model, in order to reduce statistical error, each dissident scientist chooses to run the experiment $n$ times (each trial succeeding with probability $0.5 + \varepsilon$). $E$ denotes the event of $k$ positive results out of these $n$ trials. The probability of this event, given that theory $B$ is true, is given by the binomial distribution:

(2) $$P(E \mid B) = P(k, n, p) = \frac{n!}{(n - k)!\,k!}\, p^k (1 - p)^{n - k},$$

where $p = 0.5 + \varepsilon $ .
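For concreteness, equation 2 is straightforward to compute. The following Python sketch is ours, for illustration only; the published implementation (Ferrari 2025) may organize this computation differently. Under theory $A$, a positive result occurs with probability $1 - p = 0.5 - \varepsilon$ instead (cf. the denominator of eq. 5 below).

```python
import math

def likelihood(k: int, n: int, epsilon: float, given_B: bool = True) -> float:
    """Binomial likelihood (eq. 2) of k positive results in n trials.

    Under theory B, each trial succeeds with probability p = 0.5 + epsilon;
    under theory A, with probability 1 - p = 0.5 - epsilon.
    """
    p = 0.5 + epsilon if given_B else 0.5 - epsilon
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)
```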

When faced with the evidence $E$ of such a run of $n$ experiments, each agent updates their prior credences according to Bayes’s rule:

(3) $$P_{\rm new}(B) = P(B \mid E) = \frac{P(E \mid B)\, P(B)}{P(E)},$$

where $P(B)$ is the agent’s prior credence in the superiority of theory $B$, $P(E \mid B)$ is the probability of the evidence $E$ given that theory $B$ is true, and $P(E)$ is the unconditional probability of $E$. By the law of total probability, the latter can be rewritten as

(4) $$P(E) = P(E \mid B)\, P(B) + P(E \mid A)\, P(A),$$

which says that the probability that $E$ occurs (without knowing whether theory $A$ or theory $B$ is true) is the sum of the probabilities that it occurs under theory $A$ and under theory $B$, each weighted by the credence that the corresponding theory is correct. Assuming $E$ corresponds to $k$ successes out of $n$ experiments, we obtain, by combining equations 2–4:

(5) $$P_{\rm new}(B) = \frac{p^k (1 - p)^{n - k}\, P(B)}{p^k (1 - p)^{n - k}\, P(B) + (1 - p)^k\, p^{n - k} \left(1 - P(B)\right)} = \frac{1}{1 + \frac{1 - P(B)}{P(B)} \left(\frac{0.5 - \varepsilon}{0.5 + \varepsilon}\right)^{2k - n}}.$$

Each dissident scientist will perform an experimental run and update their prior beliefs according to equation 5. If $\varepsilon $ is very small (meaning that an unreliable experimental device is used), the outcome of the run has a nonnegligible probability of disconfirming theory $B$ . Then, after updating their belief, the dissident scientist can end up with a degree of belief in $B$ below 0.5. The agent will then disfavor their former favorite theory $B$ and become a conservative scientist who favors theory $A$ . This scientist will not perform any new experiments because, as mentioned, scientists are reluctant to perform a costly experiment in favor of a new theory in which they have low credence when there are already a lot of old experiments in favor of $A$ .
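As a minimal illustration of this dynamic (our sketch, not the published code), equation 5 reduces to a one-line update, and a single unlucky run can indeed flip a dissident into a conservative:

```python
def bayes_update(prior_B: float, k: int, n: int, epsilon: float) -> float:
    """Posterior credence in theory B after k successes in n trials (eq. 5)."""
    odds_factor = ((0.5 - epsilon) / (0.5 + epsilon)) ** (2 * k - n)
    return 1.0 / (1.0 + (1.0 - prior_B) / prior_B * odds_factor)

# A dissident with prior 0.6 who observes k = 3 successes in n = 10 trials
# with epsilon = 0.05 drops to about 0.40 and thus becomes a conservative.
print(bayes_update(0.6, 3, 10, 0.05))  # ~0.402
```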

But all scientists, both conservatives and dissidents, aim to improve their knowledge and are open to the results of the neighboring scientists in their network. Thus, even if conservative scientists will not perform experiments themselves, they will consider the experimental results of dissident colleagues who are direct neighbors and update their prior beliefs according to equation 5. In Zollman’s model, all pieces of evidence have the same weight, regardless of whether they come from the scientists themselves or from dissident colleagues.

2.2. Extending the model

Some extensions of Zollman’s original (2007) model have been proposed in the literature. O’Connor and Weatherall (2018) added a social bias (similar to affinity bias) related to the source of the evidence: in their model, scientists treat the evidence of peers as more uncertain when their credences are further apart. The authors found that this promotes polarization, but their model only concerns the scientific community and does not include citizens. Wu (2023) set up a variant of the model including two groups of agents, in which members of one group ignored the testimony of members of the other group. Zollman’s later (2010) model represents scientists who again have a choice between two methods, but now both methods have unknown success rates (modeled by binomial distributions), instead of one having a known success rate against which the other, unknown one is compared. Gabriel and O’Connor (2024) added confirmation bias to this model and found that it may improve group learning: after each experimental round, agents have some probability of accepting or rejecting the outcomes, driven by a beta-binomial distribution that depends on the history of success and failure of each theory and on the new outcomes. In another version of the model, Weatherall et al. (2020) considered an epistemic community made of scientists, policymakers, and a propagandist. The propagandist aims to shift public opinion in one direction by cherry-picking the experimental results that confirm their prior beliefs and sharing them massively. Even though both communities (i.e., scientists and citizens) are considered, Weatherall et al. (2020) did not give a systematic study of the impact of the scientific network: they only considered two of the thousands of possible networks, namely the cycle graph and the complete graph. We discuss the interpretation of these graphs in more detail in section 3.

In our model, affinity bias influences how agents (both scientists and citizens) adapt their degrees of belief in response to the testimony of (other) scientists. Specifically, if an agent is prone to affinity bias, their trust in a scientist’s testimony will be high if their prior credences on the topic at hand (in this case, whether they favor theory $B$) are very similar. The closer the agent’s prior degree of belief is to that of the scientist, the more the agent will trust the reported evidence.

To represent this type of belief revision, we must deviate from Bayes’s rule (eq. 5), which is part of Zollman’s base model, because it assumes that all evidence is learned with certainty. Instead, we start from Jeffrey’s (1990) generalization of Bayes’s rule, as did O’Connor and Weatherall (2018). In addition, we modify the way agents respond to testimony under the influence of affinity bias. To achieve this, we essentially use the same equation as O’Connor and Weatherall (2018), but with one component fewer.

Formally, we consider a scientist $j$ who reports their evidence $E$ to another agent $i$, who does not fully believe this testimony. By testimony, we mean an observation report (i.e., a scientist’s testimony about their experimental evidence) rather than an expert’s posterior degree of belief, which has been studied, for example, by Steele (2012) and Roussos (2021). According to Jeffrey’s (1990) conditioning, the posterior of agent $i$ is as follows:

(6) $$P'_i(B) = P_i(B \mid E)\, P'_i(E) + P_i(B \mid {-}E)\, P'_i({-}E),$$

where $P'_i(B)$ is agent $i$’s posterior credence that theory $B$ is better than theory $A$; $P_i(B \mid E)$ and $P_i(B \mid {-}E)$ are the conditional probabilities that theory $B$ is better than theory $A$ given that $E$ or $-E$ occurred, respectively (see eq. 5); and $P'_i(E)$ and $P'_i({-}E)$ are agent $i$’s posterior credences that $E$ or $-E$ occurred, respectively, after accounting for the testimony of scientist $j$. These final two factors are influenced by the affinity bias, as defined in equation 7.

In our model, when scientist $j$ claims that they received evidence $E$ , the posterior credence of agent $i$ depends on the affinity bias, as follows:

(7) $$P'_i(E) = 1 - \min\left(1, \max\left(0, \alpha \left| P_i(B) - P_j(B) \right|\right)\right),$$

where $P'_i(E)$ is agent $i$’s posterior credence in $E$, $\left| P_i(B) - P_j(B) \right|$ is the distance between the prior credences of agent $i$ and scientist $j$ in theory $B$ being better than $A$, and $\alpha$ is a positive real parameter that represents the degree of affinity bias of agent $i$. $P'_i({-}E)$ is obtained as $1 - P'_i(E)$.

If $\alpha = 0$, the agent is not prone to affinity bias, and $P'_i(E)$ will be 1, so the agent will trust another scientist regardless of how different their beliefs are. As $\alpha$ increases, the agent becomes more prone to affinity bias and will distrust experimental results coming from other scientists, except those for whom $\left| P_i(B) - P_j(B) \right|$ is smaller than $1/\alpha$. We also note that as the credences of agents $i$ and $j$ get closer, the subjective probability $P'_i(E)$ approaches 1: agent $i$ approaches full belief in the occurrence of $E$ as reported by scientist $j$. So, there are two ways in which an agent $i$ may fully trust the testimony of scientist $j$: if $\alpha = 0$ or if agent $i$ happens to have the same prior credence in $B$ as scientist $j$. In both cases, $P'_i(E) = 1$, and Jeffrey’s formula reduces to Bayes’s rule.

Our equation 7 is structurally similar to the expression introduced by O’Connor and Weatherall (2018), except that we left out the additional factor of $\left(1 - P_i(E)\right)$, a useful simplifying assumption (footnote 5). So, our approach assumes that the uptake of the testimony depends only on the difference in degree of belief between $i$ and $j$ and the intensity of the affinity bias, regardless of how probable this piece of evidence is.
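For illustration, equations 5–7 can be combined into a single testimonial update. The sketch below is our reading: it reuses the likelihood helper from the sketch after equation 2, treats $-E$ as the complement of $E$, and assumes priors strictly between 0 and 1; the published code (Ferrari 2025) may differ in such details.

```python
def jeffrey_update(prior_i: float, prior_j: float, k: int, n: int,
                   epsilon: float, alpha: float) -> float:
    """Agent i updates on scientist j's report of k successes in n trials."""
    # Eq. 7: i's posterior credence that the reported evidence E occurred,
    # discounted by the distance between the two priors and the bias alpha.
    post_E = 1.0 - min(1.0, max(0.0, alpha * abs(prior_i - prior_j)))
    # Likelihoods of E under each theory (eq. 2) and total probability (eq. 4).
    like_B = likelihood(k, n, epsilon, given_B=True)
    like_A = likelihood(k, n, epsilon, given_B=False)
    p_E = like_B * prior_i + like_A * (1.0 - prior_i)
    # Conditional credences P_i(B | E) and P_i(B | -E) by Bayes's rule (eq. 3).
    p_B_given_E = like_B * prior_i / p_E
    p_B_given_notE = (1.0 - like_B) * prior_i / (1.0 - p_E)
    # Eq. 6: Jeffrey conditioning. With alpha = 0 (or equal priors), post_E = 1
    # and the update reduces to Bayes's rule, as noted earlier.
    return p_B_given_E * post_E + p_B_given_notE * (1.0 - post_E)
```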

We have described how each scientist updates their degree of belief according to their own experiments’ outcomes and those of their epistemic neighbors. These pieces of evidence are also communicated to the citizens via a communication channel, called a mediator. In our article, we only consider a rapporteur in the role of the mediator, who publishes all the scientific outcomes (footnote 6). Unlike in the dialogue model in PCST (Trench 2008), there is no scientist–citizen knowledge co-production. The citizens merely receive information from the mediator, a one-way communication channel from scientists to citizens. Once new evidence has been produced by any scientist, it reaches every citizen. Like the scientists, each citizen will use equation 6 to update their degree of belief. We assume, realistically, that citizens, like scientists, are prone to affinity bias.

2.3. From the theoretical model to its numerical implementation

As stated in the introduction, our aim is to understand how the structure of scientific communities, scientific uncertainty, and affinity bias affect the public uptake of science. The agent-based model we just reviewed gives us a useful tool to approach this question. We implemented our model in a Python algorithm (publicly available: Ferrari 2025).

For each simulation, several parameters are fixed: the structure of the scientific community, the sensitivity of the experimental device, the affinity bias, the number ${N_{{\rm{sc}}}}$ of scientists in the scientific community, the number ${N_{{\rm{cit}}}}$ of citizens in the public, and the number of experiments $n$ done by each dissident scientist in each run.

The $N_{\rm cit}$ citizens, like the $N_{\rm sc}$ scientists, all start with a prior degree of belief $P(B)$ at time $t = 0$. These degrees of belief (between 0 and 1) are randomly generated. Each dissident scientist (with $P(B) \gt 0.5$) then runs $n$ experiments, each trial succeeding with probability $0.5 + \varepsilon$. At the next time increment ($t = 1$), each of these dissident scientists updates their personal degree of belief according to the outcomes of their own experiments by using Bayes’s rule. In addition, they share their results with the scientists located in their neighborhood. Remember that the network of scientists is a graph, where each scientist is represented by a vertex and each connection by an edge. Each scientist (conservative or dissident) in such a neighborhood updates on each piece of evidence coming from their neighborhood according to Jeffrey’s rule (eq. 6). Then, each citizen updates their degree of belief with all the pieces of evidence produced by the scientific community, according to Jeffrey’s rule as well.

We reiterate this process for $t = 2$, $t = 3$, and so forth until all agents (both scientists and citizens) have stabilized their beliefs, that is, until their beliefs no longer change at subsequent times $t$. The time after which beliefs stabilize is called the stabilization time $\tau$. Once all beliefs have stabilized, the simulation stops; this is the halting condition of our algorithm. We can then count how many scientists and citizens favor theories $A$ and $B$. From these numbers, we can draw conclusions about the public uptake of science for this specific community.
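The following sketch outlines this loop. The names and the update ordering within a round are ours, and we detect stabilization with a numerical tolerance; the published script (Ferrari 2025) may handle these details differently. It reuses the bayes_update and jeffrey_update sketches given earlier.

```python
import random

def simulate(neighbors, cred_sc, cred_cit, n, epsilon, alpha,
             tol=1e-9, max_t=10_000):
    """Run one simulation in place and return the stabilization time tau.

    neighbors[j] lists the scientists sharing an edge with scientist j;
    cred_sc and cred_cit hold the agents' current credences P(B).
    """
    for t in range(1, max_t + 1):
        before = cred_sc + cred_cit  # snapshot for the halting test
        # Dissidents (P(B) > 0.5) each run n trials succeeding w.p. 0.5 + epsilon.
        reports = [(j, sum(random.random() < 0.5 + epsilon for _ in range(n)))
                   for j, p in enumerate(cred_sc) if p > 0.5]
        for j, k in reports:
            cred_sc[j] = bayes_update(cred_sc[j], k, n, epsilon)  # own evidence
        for j, k in reports:
            for i in neighbors[j]:  # epistemic neighbors: Jeffrey's rule
                cred_sc[i] = jeffrey_update(cred_sc[i], cred_sc[j], k, n, epsilon, alpha)
            for c in range(len(cred_cit)):  # rapporteur: every citizen hears every result
                cred_cit[c] = jeffrey_update(cred_cit[c], cred_sc[j], k, n, epsilon, alpha)
        if max(abs(a - b) for a, b in zip(before, cred_sc + cred_cit)) < tol:
            return t
    return max_t
```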

So far, we have described a single simulation for specific values of the independent variables and a random assignment of degrees of belief. In order to get a general picture of the impact of a given choice of parameters (our independent variables), we would like to make the results independent of the agents’ prior beliefs (i.e., the $P(B)$ at $t = 0$). To do so, we randomize the initial distribution of beliefs and simulate the same epistemic network with the same parameters a large number of times. More specifically, we start with a random distribution of scientists with degrees of belief between 0 and 1 and a distribution of citizens with degrees of belief between 0 and 0.5 (so no citizen favors $B$ at $t = 0$, because we assume that the conservative scientists had enough time in the past to convince all the citizens to favor theory $A$). Then, we average the proportion of agents favoring the correct theory (i.e., theory $B$) at the end of the interaction process (i.e., once all scientists’ beliefs have stabilized). This average ratio is called the success rate. We use the success rate among the citizens to assess whether the public gets a good understanding of science (assuming the deficit approach of PCST). This is why we quantify the dependent variable of this article (i.e., the public uptake of science) with the success rate of theory $B$ (i.e., the correct theory) in the citizen community.
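A randomized-repetition wrapper along the following lines (ours, with illustrative parameter values) then estimates the success rate among the citizens:

```python
def citizen_success_rate(num_sims, neighbors, N_sc=20, N_cit=20,
                         n=10, epsilon=0.05, alpha=0.0):
    """Average fraction of citizens favoring B over randomized initial beliefs."""
    total = 0.0
    for _ in range(num_sims):
        cred_sc = [random.random() for _ in range(N_sc)]          # priors in (0, 1)
        cred_cit = [0.5 * random.random() for _ in range(N_cit)]  # priors in (0, 0.5)
        simulate(neighbors, cred_sc, cred_cit, n, epsilon, alpha)
        total += sum(p > 0.5 for p in cred_cit) / N_cit
    return total / num_sims
```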

We summarize the independent and dependent variables in table 1. The four main variables of this article are written in bold.

Table 1. Independent and Dependent Variables of the Model; Main Variables of Interest Indicated in Bold

| Independent variable | Symbol | Range of values |
| --- | --- | --- |
| Number of scientists | $N_{\rm sc}$ | natural number |
| Number of citizens (*) | $N_{\rm cit}$ | natural number |
| Number of experiments at each run | $n$ | natural number |
| **Network structure** | none | all possible graphs representing connections between $N_{\rm sc}$ agents |
| **Sensitivity of the experimental device** | $0.5 + \varepsilon$ | $\varepsilon \in [0, 0.5]$ |
| **Affinity bias of agents** (*) | $\alpha$ | positive real number |

| Dependent variable | Symbol | Range of values |
| --- | --- | --- |
| Success rate of scientists | none | $[0, 1]$ |
| **Success rate of citizens** | none | $[0, 1]$ |
| Stabilization time | $\tau$ | natural number |

(*) Added to Zollman’s model.

Our model is now complete. In the next section, we will investigate how the choice of the scientific network affects both the scientists and the public in their beliefs: we study the success rate in these two communities.

3. Network structure

The structure of a scientific community can be represented by a graph in which each vertex represents an agent and each edge represents a connection between two agents. For instance, the graph of all members of the same university department is usually a complete graph: each agent stands in a direct epistemic connection with every other one. In other cases, the graph might not be connected, such as when two (or more) scientific communities are accidentally or forcibly segregated (e.g., due to language barriers). In this case, the graph of the whole scientific community consists of at least two disconnected subgraphs. Such a setup does not imply that the subcommunities cannot reach the right conclusion independently. A more extreme case is a society of isolated agents with no communication between any of them. One can think of independent scholars without a university affiliation (and hence without institutional support for their research), or of scholars in antiquity, when manuscripts were often unaffordable and means of communication were slow or nonexistent. Some authors also consider two other kinds of networks: the cycle and the wheel (Zollman 2007; O’Connor and Weatherall 2018). In a cycle network, each agent is connected to two other agents; the resulting connected graph is a loop. Such a network is one of the most economical ways to link all agents together. However, the path between one agent and another can be long and has to transit through many peers who could modify the message. The wheel is an improved version of the cycle, with an agent at the cycle’s center connected to all other agents. This central agent is like a postal worker providing shortcuts for communication between any pair of agents. An illustration of these four networks is presented in figure 1.

Figure 1. The complete, isolated, cycle, and wheel networks.
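For illustration, these four structures can be generated with the NetworkX library (our choice here; the published code need not rely on it) and converted to the adjacency lists assumed by the simulation sketch in section 2.3:

```python
import networkx as nx

N = 20  # number of scientists, as in the simulations below

graphs = {
    "complete": nx.complete_graph(N),  # every pair of scientists connected
    "isolated": nx.empty_graph(N),     # N vertices, no edges
    "cycle":    nx.cycle_graph(N),     # a single loop
    "wheel":    nx.wheel_graph(N),     # hub (node 0) plus an (N - 1)-cycle
}

# Adjacency lists in the form used by the simulation sketch in section 2.3.
neighbors = {name: {i: list(G.neighbors(i)) for i in G.nodes}
             for name, G in graphs.items()}
```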

In this section, our object of investigation is the effect of the network structure of the scientific community on the success rates within the communities of both scientists and citizens. As mentioned before, we assume here that the communication channel is a rapporteur, such that all experimental outcomes produced by the scientists are made public to the citizens (which may be viewed as the ideal of open science) and that the latter take this information into account (an even less realistic modeling assumption). Stated differently, at each round, each citizen will update their degree of belief (according to eq. 6) based on all the experimental outcomes produced during that round.

3.1. Complete, isolated, cycle, and wheel graphs

We begin by examining the four common network graphs—complete, isolated, cycle, and wheel—to understand their impact on the public uptake of science. We model a society of 20 scientists and 20 citizens, considering both societies of agents without affinity bias ( $\alpha = 0$ ) and with agents prone to affinity bias ( $\alpha \gt 0$ ). Initially, scientists’ prior degrees of belief are randomly distributed between 0 and 1, and citizens’ are distributed between 0 and 0.5.

3.1.1. Unbiased case: $\alpha = 0$

The results for the unbiased case are depicted in figure 2. We notice that the wheel network is the most successful graph at making scientists favor the correct theory, followed by the complete network and the cycle network. For the isolated network, even though on average half of the scientists start out favoring the correct theory, fewer than half of them end up with the right conclusion. We can explain this by noticing that a small value of $\varepsilon$ implies a high probability of a failed run ($k \lt n/2$). After updating on such false-negative results, a scientist starting with a prior degree of belief above 0.5 can end up with a posterior degree of belief below 0.5 at the next time increment. As a consequence, this scientist will stop running experiments. Because the agent is isolated, they will not receive any new experimental outcomes with which to revise their erroneous belief. Such an agent will stay stuck below 0.5 forever.

Concerning the impact of these four structures on the citizens, we notice quite similar behavior in each case. No citizen will be convinced to favor the correct theory if the accuracy of the experimental device is 0.5 ($\varepsilon = 0$). But the number of citizens who favor the correct theory grows rapidly with $\varepsilon$ and reaches its maximum even for a poor accuracy of the device. It is surprising to see that the isolated network now performs as well as the other networks.

In order to study the robustness of our results, we varied the number of scientists and the number of citizens. We found that varying the number of citizens does not affect their success rate. However, as depicted in figure 3, a larger scientific community leads to a better success rate both for itself and for the citizens. Although figure 3 shows the complete graph, this statement is valid for all four graphs considered here.

3.1.2. Biased case: $\alpha \gt 0$

We ran the same algorithm, now considering a society with affinity bias ($\alpha = 2$). In figure 4, we notice that the success rate of all four networks is lowered, and none of them is able to convince either the scientists or the citizens to favor the correct theory. This effect is even more prominent for the citizens; only the complete graph reaches slightly more than 50%. If we add more scientists to the network, the success rate for the scientists rises but never surpasses 75%, and the citizens’ success rate never rises above 50%. For readability, we omit these plots.

The geometry with the lowest success rates is once again the isolated network. We notice here that the more connected a graph is, the higher its success rates. The next subsection investigates whether that statement holds in general.

3.2. General graphs

Even though the complete, isolated, cycle, and wheel graphs are often implemented in Zollman’s model (Zollman 2007; Weatherall et al. 2020; O’Connor and Weatherall 2018), there are few systematic studies covering all possible graphs. Zollman (2007) investigated all possible graphs for $N_{\rm sc} = 3, 4, 5, 6$. (We will discuss his conclusions later on.) Zollman’s approach aims to be exhaustive: for a fixed number of scientists (i.e., vertices), he computed all the ways of linking them, ending up with 2, 6, 21, and 112 possible graphs, respectively. This number grows exponentially with the number of scientists (Sloane 2024): 853 for 7 scientists, 11,117 for 8, 261,080 for 9, 11,716,571 for 10, and so forth. Hence, an intrinsic limitation of this type of research is the exponential increase in computation time needed to explore larger graphs. However, it is worth investigating beyond graphs of six vertices, because a scientific community is rarely limited to six individuals, and important differences are to be expected for larger networks. As in the previous example, we would like to simulate all possible graphs for a community of 20 scientists. For this case, there are roughly $10^{37}$ possible graphs (Sloane 2024). Because our script takes 0.1 seconds to simulate 10 graphs on one central processing unit (CPU), it would take around $10^{35}$ seconds, or $10^{27}$ years (i.e., more than 1 billion times the current age of the universe), to go through all possible graphs. Clearly, this is far beyond the capacity of current computers. Instead, we more modestly simulated 10,000 random graphs. We will show later that this sample seems to suffice for studying the trend of the results.

Like Zollman (2007) and earlier authors (see, e.g., Newman 2001a, 2001b), we summarize each graph by a single number: the clustering coefficient, also called transitivity (footnote 7). This coefficient describes how strongly vertices tend to cluster. For each agent, the local clustering coefficient is proportional to the number of connections the agent’s neighbors form among themselves: the more the neighbors are connected, the higher the local clustering coefficient. These local coefficients are computed for each vertex (i.e., for each agent) and averaged. This final number (between 0 and 1) is called the global clustering coefficient, or simply the clustering coefficient of the graph. For example, a completely isolated community has a graph with a null coefficient, and a fully connected community has a coefficient of 1. The cycle structure has a coefficient of 0.5. The higher the coefficient, the denser and more connected the community.
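Such a sample can be generated as sketched below. We use NetworkX’s transitivity as the global clustering coefficient; note that its normalization can differ from the coefficient described here (for instance, it assigns 0 to a long cycle), so this sketch illustrates the procedure rather than reproducing our exact measure.

```python
import random
import networkx as nx

def sample_random_graphs(num_graphs=10_000, N=20, seed=0):
    """Random graphs on N vertices, paired with a global clustering coefficient."""
    rng = random.Random(seed)
    samples = []
    for _ in range(num_graphs):
        # Erdos-Renyi graph with a randomly drawn edge density.
        G = nx.gnp_random_graph(N, rng.random(), seed=rng.randrange(2**32))
        samples.append((nx.transitivity(G), G))
    return samples
```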

For figure 5, we simulated 10,000 random graphs and computed their clustering coefficients, their stabilization times, and their ratios of scientists who reached the correct conclusion. The upper plots pertain to an unbiased society ($\alpha = 0$) and the lower ones to a biased one ($\alpha = 2$). The complete, isolated, cycle, and wheel graphs are represented by specific symbols as well.

Our results for scientists agree with earlier work in this area. In addition, we consider the effect of the network in one community (the scientists) on the credences of another group (the citizens), for which no such studies exist. Moreover, we study the interaction with affinity bias, as discussed later.

3.2.1. Unbiased society

In the case of an unbiased society, the more clustered the graph (i.e., the higher the clustering coefficient), the more likely the scientists are to reach the correct conclusion. Concerning the citizens, however, it is the exact opposite: the more disconnected the graph, the more likely the citizens are to favor theory $B$. We notice that, of the four common graphs, the complete one cannot lead all the citizens to favor theory $B$. However, it is the quickest one: the community stabilizes after only a few iterations. The isolated graph lies in the bottom left and scores a success rate below 0.5 for the scientists, although it scores 1 for the citizens. We can explain this by noticing that once a dissident scientist runs an experiment whose outcomes lower their degree of belief below 0.5, they will never run an experiment again, nor will they update their belief based on another scientist’s experiment. At the same time, each isolated scientist shares their knowledge with the audience, and the latter reach the correct conclusion. In general, we notice that increasing the clustering of a graph shortens its stabilization time and increases the ratio of successful dissident scientists but lowers the chance of getting all the citizens to unequivocally favor theory $B$ (i.e., of being in one of the horizontal strips of the second chart). We can see this as a trade-off between a successful scientific community and a successful citizen community. The link between connectivity and stabilization time is consistent with the results of Zollman (2007).

3.2.2. Biased society

We ran the simulation again with a non-null level of affinity bias ($\alpha = 2$). We first notice that the three dotted clouds in the three lower charts of figure 5 are, on average, convex. This time, no graph achieves a success rate of 1 for the scientists, and only in a few graphs is the success rate for the citizens above 0.5. The most successful graphs are located around a clustering coefficient of 0.6. The success rate of the isolated graph is one of the worst, even though its stabilization time is very low. The three other classical graphs have low success rates, especially the cycle and the wheel, which lie below the majority of points. This convex shape of the curve can be understood as the result of two competing phenomena: the epistemic isolation of agents due to high affinity bias and the fast dissemination of misleading evidence in highly connected graphs. The first phenomenon takes place in poorly connected graphs. Agents are relatively isolated as a result of their lack of connections and have fewer opportunities to receive information from other agents. This effect is even more stringent with affinity bias: even when an agent receives an experimental outcome from one of their rare peers, they are prone to discard it. That explains the low success rate for poorly connected networks. This rate increases when the connectedness increases. However, a second phenomenon counteracts this increase. In a highly connected graph, information spreads very fast and very easily to all the agents. This may sound beneficial to the success rate. However, even though results favoring the correct theory $B$ spread fast, misleading results (i.e., those favoring theory $A$) do as well. Such misleading results are difficult to correct once they have been communicated to a large number of agents. This effect has already been pointed out by Zollman (2007) and is known as the Zollman effect (Šešelja 2023); it diminishes for poorly connected networks. The convex shape is thus understood as the result of these two competing phenomena.

Figure 2. Fraction of scientists and citizens who reached the correct conclusion in a society without affinity bias as a function of increasing experimental accuracy (or sensitivity) $0.5 + \varepsilon$ and graph geometry. In these simulations, $N_{\rm sc} = N_{\rm cit} = 20$, $\alpha = 0$, $n = 10$, and the number of runs is 500.

Figure 3. Fraction of scientists and citizens who reached the correct conclusion in a society without affinity bias as a function of increasing experimental accuracy (or sensitivity) $0.5 + \varepsilon$ and the number of scientists in the case of a complete graph. In these simulations, $N_{\rm cit} = 20$, $\alpha = 0$, $n = 10$, and the number of runs is 1,000.

Figure 4. Fraction of scientists and citizens who reached the correct conclusion in a society with affinity bias as a function of experimental accuracy $0.5 + \varepsilon$ and graph geometry. In these simulations, $N_{\rm sc} = N_{\rm cit} = 20$, $\alpha = 2$, $n = 10$, and the number of runs is 200.

Figure 5. Fraction of scientists and citizens who reached the correct conclusion and the stabilization time as a function of the clustering coefficient. $\alpha = 0$ for the three upper charts, and $\alpha = 2$ for the three lower ones. We fixed $\varepsilon = 0.05$, $n = 5$, the number of generations at 20, and the number of graphs at 1,000. The blue stars denote the complete graph, the green diamonds denote the cycle, the yellow squares denote the wheel, and the black tripods denote the isolated graph.

We notice here a trade-off between accuracy and speed. On average, adding or removing edges to change the clustering coefficient of the graph so that it approaches the value of 0.6 (i.e., to maximize the success rate of scientists and citizens) will increase the stabilization time. Stated differently, slower graphs perform better.

To assess the model’s sensitivity to affinity bias, we also ran the script for $\alpha = 4$. In this case, the curves of the first two charts are shifted downward: fewer scientists and citizens reach the correct conclusion. One could have expected this result: because of their strong affinity bias, the agents rarely update their degrees of belief and stay stuck close to their prior beliefs. The top of the curve lies around 0.5 on average for scientists and around 0.25 on average for citizens. The first value can be understood as follows: because scientists who initially believe in theory $B$ will never change their minds, their proportion stays the same throughout the interaction process (i.e., 50%). So, half of the scientists in the initial and final communities favor theory $A$, whereas the other half favor theory $B$.

In this section, we studied the impact of the scientific network on both the scientists’ and the citizens’ beliefs. We stressed that a society prone to affinity bias (i.e., a biased society) performs poorly and is never able to make more than half the citizen population favor theory $B$. Although these limitations are unavoidable, a poor result can be improved either by hiring more scientists (raising $N_{\rm sc}$) or by reorganizing the scientific network in such a way that its clustering coefficient is near 0.6 (i.e., moderately connected). In the case of unbiased societies, we saw that there is a trade-off between making either scientists or citizens favor the correct theory. These results are especially interesting because they illustrate how the network of one community (i.e., the scientists) affects the uptake by another (i.e., the citizens). This suggests that citizens’ uptake is driven not only by the content of scientific information (i.e., the experimental outcomes) but also by temporal variations in the flow of information. These variations are caused by the conversion of conservative scientists into dissident scientists, and vice versa, over the course of a simulation. In addition, the network’s structure directly affects the likelihood of such conversions.

4. Conclusion and outlook

In this article, we investigated how the structure of the scientific community affects citizens’ uptake of science. We proposed an adapted version of the Zollman agent-based model that includes not only the structure of the scientific community and citizen uptake of scientific findings but also scientific uncertainty and the agents’ propensity for affinity bias. The latter, as defined in equation 7, is one of the major contributions of this article.

Through an extensive study of the influence of the structure of the scientific network, we found that in unbiased societies, on average, most of the scientists and citizens come to believe the correct theory. We also noticed a trade-off between successfully making either scientists or citizens favor theory $B$ over theory $A$: highly connected scientific communities lead more scientists than citizens to believe in theory $B$, whereas less connected scientific communities lead more citizens than scientists to believe in it. In contrast, we found that a society prone to affinity bias (i.e., a biased society) performs poorly and never ends up with more than half of the citizen population favoring the true theory (i.e., theory $B$). Two interventions are possible if one wants to improve this ratio: (1) hiring more scientists and (2) reorganizing the scientific network in such a way that it is just moderately connected (clustering coefficient around 0.6). Our findings suggest that maximal connectivity is not always the best way to produce better science, which is in line with the findings of Zollman (2007).

The previous results give us more insight into how the choice of parameters influences the public uptake of science in the deficit model. By carefully adjusting these parameters, one can improve not only the success rate of the scientific community but also the public uptake of science. Some changes in the model are suggestive of interventions that can be tested experimentally and influenced through policies for the organization of science and for science communication. For instance, one can change the number of connections per scientist in the model as well as in reality (e.g., through incentives that either promote or discourage team science), as in the sketch below. The effect of such choices will also depend on other parameters (modifiable or not), such as the degree of affinity bias in society, the number of agents, and the experimental accuracy.
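
As an illustration of this kind of intervention, the following toy example (our own sketch with the networkx library, not part of the archived model code) varies the number of connections per scientist via random regular graphs, a hypothetical choice of graph family, and reports the resulting clustering coefficient, the quantity plotted in figure 5.

```python
# Toy illustration (not the archived model code): vary the number of
# connections per scientist via random k-regular graphs and inspect
# the resulting clustering coefficient. The graph family is a
# hypothetical choice made for this sketch.
import networkx as nx

N_SC = 20  # number of scientists, as in the simulations above

for degree in (2, 4, 8, 12, 16, 19):
    graph = nx.random_regular_graph(d=degree, n=N_SC, seed=42)
    coefficient = nx.average_clustering(graph)
    print(f"{degree:2d} connections per scientist: "
          f"clustering coefficient = {coefficient:.3f}")
```

On such graphs, the clustering coefficient grows roughly with the density of connections, so a policymaker tuning this knob could check which degree brings the community closest to the moderately connected regime identified above.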

This article is a first contribution toward a comprehensive understanding of the interaction between scientists and the public in science communication. The model can also serve as a starting point for studying the limitations of the deficit model. For instance, we only considered a one-way interaction from the scientists to the other citizens and no interaction among the citizens. A possible improvement would be to move from a deficit model to a dialogue model, which allows two-way communication between the two types of agents as well as communication among the citizens. Other possible improvements would be to consider other psychological biases or to test our model on a more realistic network structure, for instance, one based on citation patterns reported in empirical bibliometric studies. Different communication channels between the scientists and the citizens could also be implemented, as was done by Weatherall et al. (2020). Lastly, we assumed that once a dissident becomes conservative after running experiments favoring theory $A$, they will not perform any new experiments (by our definition of a conservative scientist). In the real world, however, one may expect that scientists do not give up so easily and keep experimenting for several iterations (see footnote 8).

We have used an expression for the posterior degree of belief in evidence reported by a scientist that depends on the agent’s affinity bias and on the difference between their prior credences about which theory to favor, but, unlike O’Connor and Weatherall (2018), not on the prior probability of the evidence (eq. 7). Qualitatively, this simplification did not seem to affect our results, but we flag a systematic robustness study of different implementations of this bias, as well as empirical validation, as potential avenues for future research.

Our results suggest that the structure and size of the scientific community affect the uptake of correct theories by citizens but also that the direction of this effect depends on the degree of affinity bias. Without this bias, the probability that the public ends up favoring the correct theory decreases as the connectivity of the scientific network increases. When affinity bias is present, however, the probability that the public favors the correct theory is highest for a moderately connected scientific network. Both effects are more pronounced when the number of scientists increases.

Acknowledgments

We are grateful for helpful comments and suggestions from two anonymous reviewers. The authors thank Valérie Pattyn and Steven Van de Walle for their valuable feedback and C. O’Connor and J. O. Weatherall for insightful discussions.

CRediT author statement

Sacha Ferrari: conceptualization, investigation, methodology, software, visualization, writing—original draft. Wouter Lammers: writing—review and editing. Sylvia Wenmackers: funding acquisition, project administration, supervision, writing—review and editing.

Funding information

This work was supported by KU Leuven Internal Funds (grant C14/20/029).

Footnotes

1 The simultaneous effect of social information sharing and confirmation bias on polarization versus consensus has been studied by Del Vicario et al. (2017).

2 As a historical example, theory $A$ may represent geocentrism and theory $B$ heliocentrism. The latter theory had been suggested in antiquity but was ignored because it had no measurable consequences at the time. Early telescopic observations in the 17th century showed moons revolving around a planet other than Earth, which provided direct empirical evidence against the geocentric assumption that all celestial bodies orbit Earth.

3 In the previous example, a hybrid theory was indeed proposed: geo-heliocentrism (see, e.g., Blair 1990).

4 Like Zollman (2007), we assume that the device never produces false-positive results (100% specificity). Hence, we use the terms accuracy and sensitivity interchangeably.

5 We also compared our results with those obtained by using O’Connor and Weatherall’s (2018) expression for affinity bias. The results differ only slightly in quantitative terms, and the qualitative conclusions remain the same.

6 In general, there are other options, such as a journalist who publishes the most interesting research results, a science educator, or an opinion maker (Burns et al. 2003).

7 One could also describe the network by its average path length: the average number of steps needed to connect two nodes via the shortest path. Real-world communities tend to form small-world networks, which combine a high clustering coefficient with a low average path length.

8 Moreover, as indicated in the introduction, our current model does not aim to represent confirmation bias, which does affect how real-world agents deal with uncertain evidence and may be crucial to understanding belief polarization (see, e.g., Kelly 2008; Dorst 2023).

References

Bala, Venkatesh, and Goyal, Sanjeev. 1998. “Learning from Neighbours.” Review of Economic Studies 65 (3):595–621. https://doi.org/10.1111/1467-937X.00059.
Baumgaertner, Bert, and Justwan, Florian. 2022. “The Preference for Belief, Issue Polarization, and Echo Chambers.” Synthese 200 (5):412. https://doi.org/10.1007/s11229-022-03880-y.
Blair, A. 1990. “Tycho Brahe’s Critique of Copernicus and the Copernican System.” Journal of the History of Ideas 51 (3):355–77. https://doi.org/10.2307/2709620.
Broomell, Stephen B., and Bodilly Kane, Patrick. 2017. “Public Perception and Communication of Scientific Uncertainty.” Journal of Experimental Psychology: General 146 (2):286–304. https://doi.org/10.1037/xge0000260.
Bruch, Elizabeth, and Atwell, Jon. 2015. “Agent-Based Models in Empirical Social Research.” Sociological Methods & Research 44 (2):186–221. https://doi.org/10.1177/0049124113506405.
Burns, Terry W., O’Connor, D. John, and Stocklmayer, Susan M. 2003. “Science Communication: A Contemporary Definition.” Public Understanding of Science 12 (2):183–202. https://doi.org/10.1177/09636625030122004.
Chinn, Sedona, Lane, Daniel S., and Hart, Philip S. 2018. “In Consensus We Trust? Persuasive Effects of Scientific Consensus Communication.” Public Understanding of Science 27 (7):807–23. https://doi.org/10.1177/0963662518791094.
Dandekar, Pranav, Goel, Ashish, and Lee, David T. 2013. “Biased Assimilation, Homophily, and the Dynamics of Polarization.” Proceedings of the National Academy of Sciences 110 (15):5791–96. https://doi.org/10.1073/pnas.1217220110.
DeGroot, Morris H. 1974. “Reaching a Consensus.” Journal of the American Statistical Association 69 (345):118–21. https://doi.org/10.1080/01621459.1974.10480137.
Del Vicario, Michela, Scala, Antonio, Caldarelli, Guido, Stanley, H. Eugene, and Quattrociocchi, Walter. 2017. “Modeling Confirmation Bias and Polarization.” Scientific Reports 7 (1):40391. https://doi.org/10.1038/srep40391.
Dorst, Kevin. 2023. “Rational Polarization.” Philosophical Review 132 (3):355–458. https://doi.org/10.1215/00318108-10469499.
Ferrari, S. 2025. “The Impact of Scientific Networks, Affinity Bias and Scientific Uncertainty on the Public Uptake of Science (Version 1.0.0).” https://www.comses.net/codebases/a0db6176-e1a4-4450-ac7e-a6cd78b9b235/releases/1.0.0/. Accessed January 28, 2025.
Fischbach, Kai, Marx, Johannes, and Weitzel, Tim. 2021. “Agent-Based Modeling in Social Sciences.” Journal of Business Economics 91:1263–70. https://doi.org/10.1007/s11573-021-01070-9.
Fischhoff, Baruch, and Davis, Alex L. 2014. “Communicating Scientific Uncertainty.” Proceedings of the National Academy of Sciences 111 (supplement 4):13664–71. https://doi.org/10.1073/pnas.1317504111.
Gabriel, Nathan, and O’Connor, Cailin. 2024. “Can Confirmation Bias Improve Group Learning?” Philosophy of Science 91 (2):329–50. https://doi.org/10.1017/psa.2023.176.
Giles, Jim. 2002. “Scientific Uncertainty: When Doubt Is a Sure Thing.” Nature 418 (6897):476–79. https://doi.org/10.1038/418476a.
Gustafson, Abel, and Rice, Ronald E. 2019. “The Effects of Uncertainty Frames in Three Science Communication Topics.” Science Communication 41 (6):679–706. https://doi.org/10.1177/1075547019870811.
Harker, David. 2015. Creating Scientific Controversies: Uncertainty and Bias in Science and Society. Cambridge: Cambridge University Press.
Hedström, Peter, and Ylikoski, Petri. 2010. “Causal Mechanisms in the Social Sciences.” Annual Review of Sociology 36:49–67. https://doi.org/10.1146/annurev.soc.012809.102632.
Jeffrey, Richard C. 1990. The Logic of Decision. Chicago: University of Chicago Press.
Kampourakis, Kostas, and McCain, Kevin. 2019. Uncertainty: How It Makes Science Advance. Oxford: Oxford University Press.
Kelly, Thomas. 2008. “Disagreement, Dogmatism, and Belief Polarization.” Journal of Philosophy 105 (10):611–33. https://doi.org/10.5840/jphil20081051024.
Knight, David. 2006. Public Understanding of Science: A History of Communicating Scientific Ideas. Vol. 26. New York: Routledge.
Kumar, Sameer. 2015. “Co-authorship Networks: A Review of the Literature.” Aslib Journal of Information Management 67 (1):55–73. https://doi.org/10.1108/AJIM-09-2014-0116.
Lord, Charles G., Ross, Lee, and Lepper, Mark R. 1979. “Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence.” Journal of Personality and Social Psychology 37 (11):2098–109. https://doi.org/10.1037/0022-3514.37.11.2098.
Mahoney, Michael J. 1977. “Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System.” Cognitive Therapy and Research 1:161–75. https://doi.org/10.1007/bf01173636.
Miller, Jon D. 1998. “The Measurement of Civic Scientific Literacy.” Public Understanding of Science 7 (3):203–23. https://doi.org/10.1088/0963-6625/7/3/001.
Newman, Mark E. J. 2001a. “Scientific Collaboration Networks. I. Network Construction and Fundamental Results.” Physical Review E 64 (1):016131. https://doi.org/10.1103/PhysRevE.64.016131.
Newman, Mark E. J. 2001b. “The Structure of Scientific Collaboration Networks.” Proceedings of the National Academy of Sciences 98 (2):404–9. https://doi.org/10.1073/pnas.98.2.404.
O’Connor, Cailin, and Weatherall, James Owen. 2018. “Scientific Polarization.” European Journal for Philosophy of Science 8 (3):855–75. https://doi.org/10.1007/s13194-018-0213-9.
Pellizzoni, Luigi. 2003. “Knowledge, Uncertainty and the Transformation of the Public Sphere.” European Journal of Social Theory 6 (3):327–55. https://doi.org/10.1177/13684310030063004.
Peters, Uwe. 2021. “Illegitimate Values, Confirmation Bias, and Mandevillian Cognition in Science.” British Journal for the Philosophy of Science 72:1061–81. https://doi.org/10.1093/bjps/axy079.
Peters, Uwe. 2022. “What Is the Function of Confirmation Bias?” Erkenntnis 87 (3):1351–76. https://doi.org/10.1007/s10670-020-00252-1.
Rekker, Roderik. 2021. “The Nature and Origins of Political Polarization over Science.” Public Understanding of Science 30 (4):352–68. https://doi.org/10.1177/0963662521989193.
Roussos, Joe. 2021. “Expert Deference as a Belief Revision Schema.” Synthese 199 (1–2):3457–84. https://doi.org/10.1007/s11229-020-02942-3.
Schmid-Petri, Hannah, and Bürger, Moritz. 2020. “Modeling Science Communication: From Linear to More Complex Models.” In Science Communication, edited by Leßmöllmann, Annette, Dascal, Marcelo, and Gloning, Thomas, vol. 17 of Handbooks of Communication Science, 105–22. Berlin: De Gruyter Mouton. https://doi.org/10.1515/9783110255522-005.
Schumm, Walter R. 2021. “Confirmation Bias and Methodology in Social Science: An Editorial.” Marriage & Family Review 57 (4):285–93. https://doi.org/10.1080/01494929.2021.1872859.
Seethaler, Sherry, Evans, John H., Gere, Cathy, and Rajagopalan, Ramya M. 2019. “Science, Values, and Science Communication: Competencies for Pushing beyond the Deficit Model.” Science Communication 41 (3):378–88. https://doi.org/10.1177/1075547019847484.
Šešelja, Dunja. 2023. “Agent-Based Modeling in the Philosophy of Science.” In The Stanford Encyclopedia of Philosophy, edited by Zalta, Edward N., and Nodelman, Uri. Stanford: Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2023/entries/agent-modeling-philscience/.
Shwed, Uri, and Bearman, Peter S. 2010. “The Temporal Structure of Scientific Consensus Formation.” American Sociological Review 75 (6):817–40. https://doi.org/10.1177/0003122410388488.
Sloane, Neil J. A. 2024. “A001349: Number of Simple Connected Graphs on n Unlabeled Nodes.” https://oeis.org/A001349. Accessed June 18, 2024.
Smart, Paul R. 2018. “Mandevillian Intelligence.” Synthese 195:4169–200. https://doi.org/10.1007/s11229-017-1414-z.
Steele, Katie. 2012. “Testimony as Evidence: More Problems for Linear Pooling.” Journal of Philosophical Logic 41:983–99. https://doi.org/10.1007/s10992-012-9227-5.
Trench, Brian. 2008. “Towards an Analytical Framework of Science Communication Models.” In Communicating Science in Social Contexts, edited by Cheng, Donghong, Claessens, Michel, Gascoigne, Toss, Metcalfe, Jenni, Schiele, Bernard, and Shi, Shunke, 119–35. Dordrecht, Netherlands: Springer. https://doi.org/10.1007/978-1-4020-8598-7_7.
Uddin, Shahadat, Hossain, Liaquat, and Rasmussen, Kim. 2013. “Network Effects on Scientific Collaborations.” PLoS ONE 8 (2):e57546. https://doi.org/10.1371/journal.pone.0057546.
Van Der Bles, Anne Marthe, Van Der Linden, Sander, Freeman, Alexandra L. J., Mitchell, James, Galvao, Ana B., Zaval, Lisa, and Spiegelhalter, David J. 2019. “Communicating Uncertainty about Facts, Numbers and Science.” Royal Society Open Science 6 (5):181870. https://doi.org/10.1098/rsos.181870.
van Stekelenburg, Aart, Schaap, Gabi, Veling, Harm, van’t Riet, Jonathan, and Buijzen, Moniek. 2022. “Scientific-Consensus Communication about Contested Science: A Preregistered Meta-Analysis.” Psychological Science 33 (12):1989–2008. https://doi.org/10.1177/09567976221083219.
Weatherall, James Owen, O’Connor, Cailin, and Bruner, Justin P. 2020. “How to Beat Science and Influence People: Policymakers and Propaganda in Epistemic Networks.” British Journal for the Philosophy of Science 71 (4):1157–86. https://doi.org/10.1093/bjps/axy062.
Wilholt, Torsten. 2009. “Bias and Values in Scientific Research.” Studies in History and Philosophy of Science Part A 40 (1):92–101. https://doi.org/10.1016/j.shpsa.2008.12.005.
Wu, J. 2023. “Epistemic Advantage on the Margin: A Network Standpoint Epistemology.” Philosophy and Phenomenological Research 106 (3):755–77. https://doi.org/10.1111/phpr.12895.
Wynne, Brian. 1991. “Knowledges in Context.” Science, Technology, & Human Values 16 (1):111–21. https://doi.org/10.1177/016224399101600108.
Zollman, Kevin J. S. 2007. “The Communication Structure of Epistemic Communities.” Philosophy of Science 74 (5):574–87. https://doi.org/10.1086/525605.
Zollman, Kevin J. S. 2010. “The Epistemic Benefit of Transient Diversity.” Erkenntnis 72 (1):17–35. https://doi.org/10.1007/s10670-009-9194-6.