Epistemology has traditionally treated the minds of individuals as the primary locus of epistemic achievement. Its normative efforts have focused on the standards that all inquirers must satisfy to be rational, knowledgeable, or otherwise epistemically successful. Proposals for these standards include adopting beliefs on the basis of logically pristine justifications, robustly manifesting intellectual virtues, and updating one’s credences in a Bayesian fashion. While the details of these accounts differ, they all ignore the cognitive limitations of actual inquirers. A non-ideal epistemology that countenances these limitations must recognize that individuals cannot routinely approach these standards (McKenna 2023). Yet, we also cannot help but recognize that human beings are the most epistemically accomplished species on the planet. How can this be?
To answer this question, social epistemologists and philosophers of science have looked to a fundamental concept in economics: the division of labor. Adam Smith famously illustrates the power of the division of labor using the example of a pin factory (Smith 1937 [1776]). A collection of the very best pin-makers working on their own will produce no more than a handful of pins in a single day. The same group can be enormously more efficient, producing tens of thousands of pins a day, when the manufacturing process is disaggregated, such that each component is distributed to a different group of workers and machines. Dividing the manufacturing labor in this way does not make the individual workers more productive; indeed, it makes them less productive by assigning them fewer, simpler tasks. But by giving them specialized tasks, it makes the factory vastly more than the sum of its parts.
The same logic can be applied to our epistemic endeavors. The principal source of our epistemic success is not our unlimited intellectual powers, but our tendency to work together to transcend the limitations of our individual minds. I substantiate this claim in Section 1, where I argue that collectives can intellectually deliberate and explore much more effectively than individuals because they can divide the cognitive labor required for these demanding activities. Moreover, some of the shortcomings in the ways that individuals think can actually enhance these powerful forms of collective cognition. Thus, as is the case with material production, collectives can often produce more when their constituent members are capable of producing less.
While the division of labor allows us to collectively transcend our limitations, Smith emphasizes that it does so by disempowering its individual workers. Towards the end of The Wealth of Nations, he laments:
The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same, or very nearly the same, has no occasion to exert his understanding, or to exercise his invention in finding out expedients for removing difficulties which never occur … His dexterity at his own particular trade seems, in this manner, to be acquired at the expense of his intellectual, social, and martial virtues (Ibid. 734–35).
Marx expresses similar concerns in terms of the increasing alienation of industrial workers from the products and processes of labor, as well as from themselves and their fellow workers (Marx 1992 [1844]).
Here, too, a similar logic applies to the epistemic domain. The division of cognitive labor leads to what Barry has recently called epistemic alienation: a problematic separation of individuals from epistemic goods (Barry 2024). According to Barry, individuals are alienated from epistemic virtues because an efficient division of cognitive labor requires them to manifest a lack of virtues, just as individual workers are more efficient collectively when they are less productive on their own. In Section 1, I argue that this is a mistaken diagnosis of the source of epistemic alienation. Participating in high-functioning collectives does not prevent individuals from being robustly virtuous; rather, our cognitive limitations make it impossible for us to live up to the highest standards of epistemic conduct. Collectives overcome these cognitive limitations to achieve what individuals cannot by harnessing those same limitations in their members.
In the remainder of the paper, I argue that we should distinguish the epistemic weaknesses that facilitate the division of cognitive labor from those that result from the division of labor. Only the latter are aspects of epistemic alienation. By collectively transcending our limitations, we have transformed our cognitive landscape in ways that separate individual inquirers from the products, processes, and environments of epistemic labor. These forms of separation are problematic insofar as they amplify intellectual vices, incapacitate reason, and induce cognitive biases. Thus, the problem of epistemic alienation is not that we can produce more collectively when individuals produce less; the problem is what happens when we maximize productivity at the level of collectives without regard for the increasingly ill effects that it will have on individuals.
1. Mandevillian weaknesses and weaknesses of alienation
Barry claims that the primary source of epistemic alienation is the inherent conflict between the conditions for collective and individual epistemic success: “…the production of knowledge is increasingly a group activity and yet the forces that make groups good at producing knowledge are, to a non-trivial degree, in conflict with what it takes for individuals to be intellectually virtuous” (Barry 2024, 2). To substantiate his claim that there is such a conflict, he points to a growing literature that purports to show that putatively deleterious tendencies, including motivated reasoning, neophilia, overconformity, and dogmatism, serve vital functions in the division of cognitive labor (Mayo-Wilson et al. 2011; Kofi Bright 2017; Smart 2018a, 2018b; Levy and Alfano 2020). Given this conflict, it seems we must trade off epistemic virtues at the level of individuals for epistemic gains at the level of collectives. As we rely more extensively on collective cognitive processes, we increasingly alienate individuals from the epistemic virtues they would, or could, otherwise possess.
Barry’s view both overestimates the epistemic capacities of individuals and underestimates the efficacy of the division of cognitive labor. These two issues are related. If individuals have the capacity to approach virtue-theoretic standards of epistemic conduct when left to their own devices, then it makes sense to think of them as having to compromise their own intellectual character to facilitate an efficient division of cognitive labor. By contrast, if inquirers with our cognitive limitations can approach these epistemic standards only by dividing the labor required to do so, then there is very little that individuals must forsake to play a role in high-functioning collectives; they cannot be alienated from virtues that they could not possess to any significant degree in the first place. I will argue that this is, in fact, the case. We are bounded thinkers insofar as we lack the capacities to optimize our epistemic conduct; our bounded rationality consists, in large part, of relying on one another to transcend our limitations, so that we can achieve together what none of us could on our own (Simon 1955, 1956; Gigerenzer 2008). The putative weaknesses that individuals must manifest to do so are what I will call Mandevillian weaknesses: they are harnessed by efficient forms of collective cognition without being considerably induced or magnified by them.[1] Harnessing such weaknesses does not require that we trade off the epistemic interests of individuals for those of the collective, since individuals are not thereby made appreciably worse off. To see this, let’s consider how Mandevillian weaknesses allow collectives to transcend the limitations of their constituent members when engaging in the intellectually demanding activities of deliberation and exploration.
To deliberate effectively, individuals must be open-minded, i.e., they must be willing and able to engage with all the relevant intellectual options – perspectives, sources, evidence, arguments, etc. – that bear on the topic of deliberation, even when doing so threatens their own beliefs (Battaly 2018). Like other intellectual virtues, open-mindedness admits of both deficiency and excess. Someone who fails to seriously entertain relevant alternatives to their own beliefs is closed-minded; someone who engages with irrelevant alternatives is indiscriminate (King 2021, Ch. 10). Sometimes it’s easy to properly engage with relevant alternatives to our own beliefs. If I think that my dog is in the backyard and she walks into my office, I immediately recognize this as decisive evidence against my belief. But this doesn’t make me particularly open-minded, since it requires little, if any, deliberation. It is much more difficult to be open-minded with respect to beliefs whose contents are remote from the evidence of our senses. For example, I think that vaccines are safe and effective. I am aware that there are vaccine skeptics, but I don’t really know what reasons they have for being skeptical; I have written off their arguments as being unsound or irrelevant without even really knowing what they are. Furthermore, if I were presented with sophisticated arguments for vaccine skepticism, I probably couldn’t refute them, given my lack of medical knowledge and expertise. I doubt that I’m unusual in this respect; most people who think that vaccines are safe and effective have not surveyed the relevant literature and couldn’t do so even if they wanted to.
Many of our beliefs are similarly insulated from the information and arguments that are relevant to them.[2] What this shows is not that people fail to value epistemic goods, but that open-minded deliberation requires more time, energy, knowledge, and skills than we can typically or readily muster.
Our closed-mindedness manifests itself in the ubiquitous myside bias, i.e., our tendency to search for, attend to, and evaluate evidence in ways that favor our existing beliefs (Nickerson 1998; Stanovich 2021). Myside bias has several detrimental effects on our reasoning, including belief perseverance (Anderson et al. 1980), forecasting inaccuracy (Haran et al. 2013), and overconfidence (Koriat et al. 1980). And, unlike other biases, it does not seem to be mitigated by cognitive sophistication or education (Stanovich and West 2007). This is further bad news for deliberation at the scale of individuals, but not for collective deliberation. In fact, Mercier and Sperber (2011, 2017) argue that groups of moderately closed-minded interlocutors who exchange arguments for different views thereby divide the cognitive labor required for thoroughly open-minded deliberation. Each side will find and present the best arguments for their own view and the best criticisms of alternative views. As long as everyone is ultimately willing to change their minds when presented with good reasons to do so, groups can be more open-minded than any of their members, partly because their members are closed-minded. Hence, we should expect groups to deliberate more effectively than individuals. In fact, this expectation has been borne out by psychological experiments (Moshman and Geil 1998), forecasting tournaments (Tetlock and Gardner 2015, Ch. 9), and deliberative polls (Fishkin 2009).
Furthermore, philosophers of science have argued that myside bias has a crucial role to play in the socially distributed process of theory testing (Longino 1990, 80; Solomon 1992; Popper 2002 [1996], 426). When scientists are biased in favor of different theories, each theory gets thoroughly vetted by its detractors. Over time, the weaker theories drop out of circulation. When a consensus emerges among experts, this serves as social evidence that non-experts use to inform their beliefs without having to deliberate about the plausibility of each theoretical option (Oreskes 2019; Pfänder et al. 2025). By deferring to expert communities on matters that fall within their expertise and outside of our own, we delegate to them cognitive labor that we could not possibly manage ourselves (Goldberg 2011).
Deferring to experts is one way that we exploit the knowledge of others. As natural and seasoned social learners, we are adept at many types of information scrounging that yield useful knowledge at minimal cost to ourselves. But the more we rely on this information, the less likely we are to find new and better information for ourselves. This is the explore-exploit tradeoff. Each of us can attempt to negotiate this tradeoff by finding the golden mean between being overly deferential and insufficiently deferential. The optimal strategy for doing so is to defer to others when their knowledge is well-founded, stably applicable, and readily accessible, and to strike out on our own otherwise (Richerson and Boyd 2005, 118). Discerning when the first two of these conditions obtain, however, is a very tall order, even for the most adept social learners. Knowing when an expert’s testimony is well-founded and stably applicable often requires a level of knowledge that novices don’t possess; if they did, they would be experts. This is what makes it so difficult to know, for example, whether we are better off using the services of a financial advisor or playing the market for ourselves. Consequently, we are more likely to fall on one or the other of the extremes of the explore-exploit continuum than we are to consistently achieve the optimal balance.
When individuals who exhibit too much deference are mixed in the right proportions with those who exhibit too little, however, the collective can divide the cognitive labor required to reach and maintain a beneficial explore-exploit equilibrium. Independent explorers will fail more often than they succeed, but their successes can be exploited by others; deferential exploiters won’t acquire much knowledge for themselves, but they will safeguard and transmit the knowledge they receive from others. Species that engage in collective foraging often use these sorts of mixed strategies: most individuals exploit reliable food sources, while a few intrepid individuals look for new sources that can be tapped once the existing stores are depleted (Aljadeff et al. 2020; Chittka 2022, 229). Simulated epistemic communities tell the same story; they perform best when they include a small number of independent explorers and a larger contingent of conformists who follow the crowd (Kameda and Nakanishi 2002; de Courson et al. 2021). This dynamic is also essential to an effective division of labor in scientific research: normal science exploits existing theories to generate new results; revolutionary science posits entirely new theories (Weisberg and Muldoon 2009; Stanford 2019). Hence, scientific progress requires what Kuhn calls ‘the essential tension’ between convergent and divergent thinking (Kuhn 1977).
The only way we can achieve thoroughly open-minded deliberation and approach an optimal explore-exploit equilibrium is by dividing the cognitive labor required to do so, which requires that we harness individual failures to meet these standards. When collectives do so, they are not thereby separating individuals from epistemic virtues they would otherwise possess; they are capitalizing on Mandevillian weaknesses that already and invariably exist. We are not made more closed-minded by participating in open-minded deliberation, nor are we made less efficient social learners by participating in collective intellectual exploration. Indeed, the reverse is more likely true. Our Mandevillian weaknesses may be initially harnessed and eventually tempered through our ongoing participation in distributed cognitive processes. In any case, they are not the source of our epistemic alienation.
Epistemic alienation is not a precondition of our collective epistemic endeavors, but a growing side-effect of their remarkable successes. By dividing our cognitive labor over larger populations of agents and artifacts, we have massively accelerated our rate of epistemic productivity but have thereby created conditions that are variously inhospitable for the thinking of individuals by alienating them from the products, processes, and environments of inquiry. We are increasingly and unknowingly separated from the knowledge of others in ways that make us less apt to recognize the limits of our own knowledge. The procedures and practices that we use to generate knowledge are becoming hyperspecialized, making them illegible to most people. Our informational environments are becoming more complex and hostile to the cognitive heuristics we use to generate judgments and decisions under uncertainty. These forms of epistemic alienation make us more intellectually arrogant, less capable of reasoning with others, and more susceptible to cognitive biases. These are weaknesses of alienation, which, unlike Mandevillian weaknesses, are induced and/or exacerbated by the division of cognitive labor without contributing much, if anything, thereto. Indeed, they often compromise the effectiveness of our collective cognition as well. Weaknesses of alienation are the price we pay for overcoming our Mandevillian weaknesses.[3]
2. Intellectual arrogance
By dividing cognitive labor over collectives, we can transcend our epistemic limitations. But the same phenomenon also renders us insensitive to our own limitations, and therefore, less intellectually humble. Intellectual humility is the epistemic virtue of recognizing and owning one’s intellectual limitations (Whitcomb et al. 2017). An intellectually humble person will routinely know when they don’t know or understand something, ask for help when they need it, and readily recognize conditions of uncertainty. Someone who lacks intellectual humility is intellectually arrogant; they regularly overestimate their epistemic capacities, abilities, and accomplishments in ways that lead to a variety of negative outcomes (Porter et al. 2021). Among the common manifestations of our intellectual arrogance is the illusion of explanatory depth, i.e., our propensity to overestimate our understanding of how things work (Rozenblit and Keil 2002). We should not think of this as a stable tendency strictly seated in individual minds, but as a side-effect of our participation in divisions of cognitive labor that increasingly separate us from the products of that collective labor. In other words, it is a weakness of alienation.
The most fruitful collective cognitive process in which we take part is cumulative cultural evolution (Henrich 2016). Culture includes knowledge, practices, institutions, and artifacts that are socially transmitted throughout groups. These cultural packages are essential to our ability to adapt to a wide range of environmental challenges. Cumulative cultural evolution happens when these packages accumulate over generations to produce increasingly better adaptations that no individual could have possibly generated on their own. The inventions that we associate with single names – Gutenberg, Edison, Jobs – were made possible by all the relevant technologies that preceded them. The same is true of the artistic works of Shakespeare and Bach, and the scientific theories of Darwin and Curie. The total labor required to produce these cultural items, while completed by individuals, was divided across generations (Muthukrishna and Henrich 2016).
Human beings have generated cumulative culture, in part, because of our unique proficiency as social learners (Herrmann et al. 2007, 2010). We rely much more frequently and thoroughly on socially transmitted information than other species. For example, apes are more likely than children to deviate from the behavior of an exemplar in an effort to find more efficient ways of completing a task (Nagell et al. 1993). When engaged in transparently simple tasks, such as using a rake to pull in food that is out of reach, this strategy of emulation often pays off; apes were more likely to recognize that they could pull in more of the small items by flipping the rake around so that its flat side faced the ground. For more complex tasks whose causal dynamics are opaque, such as detoxifying edible plants or knapping stone, we are better off exploiting the knowledge of others rather than exploring for ourselves. And by doing so, we faithfully preserve and transmit cultural knowledge so that it can be used, transmitted, and improved by others. Hence, the cultural knowledge of any particular person tends to be broad but shallow. We might know the process for detoxifying a poisonous plant without knowing how or why the process works.
Cultural transmission is facilitated not only by our aptitude for social learning, but by our proficiency at teaching (Csibra and Gergely 2006; Csibra 2007). We are adept at modeling behavior, simplifying tasks, and imparting knowledge through representations. Just as importantly, we are adept at minimizing the amount of explicit teaching we must do. One of the ways we do this is through intuitive design, i.e., designing artifacts that can be used with minimal understanding of how they work. For example, though computers have become vastly more powerful and complex, they have also become much easier to use. Well-designed interfaces make it possible for users with minimal knowledge of the workings of hardware and software to do more with their computers than the world’s foremost experts could have done with theirs just a few decades ago. Intuitive design makes the complex simple, and the unfamiliar familiar, thereby minimizing the amount of explicit instruction required to use artifacts for their intended purposes. However, it also makes us think that we understand the workings of these artifacts much better than we actually do. This illusion is revealed when artifacts don’t work as they should. When my computer crashes, I have virtually no hope of restoring it to working order because I don’t know how it works. It can also be revealed by asking people to make their understanding, or lack thereof, explicit. Rozenblit and Keil found that asking subjects to describe the workings of mundane objects, such as a zipper, significantly lowered their ratings of their own understanding of those objects (Rozenblit and Keil 2002).
As cultures become larger, more diverse, and more interconnected, the pace of cultural evolution and the complexity of cumulative culture increase. Consequently, both cultures and their individual members become more knowledgeable, but not at the same rate; the knowledge in our social environment increases much faster than we can learn it. This condition necessitates a division of labor that makes us profoundly dependent on one another. We underestimate this interdependence because we do not accurately distinguish the knowledge in our heads from the readily accessible knowledge in our environments (Sloman and Fernbach 2017). When others know something and are known to know it, we often think that we know it too. Therefore, as the knowledge in our social environment increases and becomes increasingly accessible, we become less adept at identifying our personal epistemic limitations. Sloman and Fernbach lament: “The Internet’s knowledge is so accessible and so vast that we may be fashioning a society where everyone with a smartphone and a Wi-Fi connection becomes a self-appointed expert in multiple domains” (Ibid. 138). Our unprecedented access to information also makes us think that we can become vastly more knowledgeable on topics on which others are knowledgeable. We see this in the increasingly common refrain to ‘do your own research’ on complex topics, such as anthropogenic climate change and vaccine safety. The best research that laypeople can do on these topics is to look for a consensus within expert communities (Ballantyne et al. 2022). By contrast, the idea that non-experts can survey the first-order evidence for themselves and arrive at reliably accurate judgments reflects an insensitivity to the epistemic limits of those individuals and the complexity of the issues they are ‘researching’. We should expect this insensitivity to increase with the complexity and accessibility of cumulative culture.
Cumulative cultural evolution is a self-perpetuating process. Among its products are innovations that accelerate the generation and transmission of cumulative culture, such as technologies that make it easier to communicate, coordinate, and cooperate. These same technologies have also generated massively complex systems, such as markets and networks, whose behavior is impossible for anyone to precisely and reliably predict or explain. The proliferation and elaboration of these systems have led to conditions of much greater uncertainty, which we systematically underestimate because of the illusion of explanatory depth and other intellectually arrogant tendencies. The hindsight bias makes past events seem more predictable than they actually were (Fischhoff 1975), the narrative fallacy makes them seem much more intelligible than they are (Taleb 2010), and the illusion of validity leads us to believe that future events are more predictable and intelligible than they are (Kahneman and Tversky 1973). While these tendencies are unlikely to increase over time, their effects are likely to do so as our culturally constructed environments become ever more complex. In other words, they are weaknesses of alienation.
3. Incapacitating reason
While the California school of cultural evolution emphasizes the role of trust in cultural transmission, the Parisian school emphasizes the importance of epistemic vigilance (Sterelny 2017). If we always trusted and never verified, they argue, communication could not be a stable strategy, for deception would become commonplace, which would favor the conservative strategy of rejecting all testimony. Epistemic vigilance prevents the collapse of communication by facilitating the rejection of false and unjustified testimony, thereby keeping the contents of communication reliably veridical. Mercier and Sperber claim that our capacity to engage in reasoning – to produce and evaluate reasons – evolved to facilitate the epistemic vigilance that enables cultural transmission (Mercier and Sperber 2011, 2017). Our ability to produce reasons empowers us to share true information with others who would otherwise reject it; our ability to evaluate reasons allows us to reject false information we would otherwise accept. By increasing both the quantity and the quality of socially transmitted information, our unique capacity for reasoning supports our unique proficiency for social learning.
Our ability to engage in reasoning with one another depends on our sharing common ground, including knowledge, standards, forms of arguments, and terminology. Common ground is difficult to maintain in the face of the specialization typically required for an efficient division of labor. As our cognitive labor becomes increasingly specialized, we become alienated from the reasoning of others, and therefore, less capable of exercising epistemic vigilance with respect to their testimony. This incapacity is a weakness of alienation; it is a result, rather than an accelerant, of our highly coordinated epistemic enterprises.
In a non-specialized division of labor, it makes little difference who does which task because everyone can perform each function comparably well. When a family makes dinner together, it doesn’t matter who cuts the vegetables, who sautés them, and who makes the stir-fry sauce. And because everyone knows how to perform each function, they can understand and evaluate the contributions of everyone else. If the vegetables are burnt, everyone knows who is at fault and what they did wrong. This kind of transparency and accountability is possible only when the tasks are similar or simple. More complex and sophisticated collaborative projects require a specialized division of labor where it makes a significant difference who does what because each function requires different skills, sensibilities, priorities, and knowledge that are not uniformly distributed within the collective. Building a skyscraper requires the expertise of many different consultants, architects, engineers, designers, managers, contractors, fabricators, trades workers, etc. This expertise takes years of specialized training; anyone without it will not be able to perform the expert tasks, nor evaluate the performance of those who do. Project managers are unlikely to know when an engineer has made a mistake, and vice versa. As specialization increases, it becomes more difficult for non-specialists to understand what specialists are doing, why they’re doing it, and whether or not they’re doing it well. This is what Millgram calls the problem of hyperspecialization (Millgram 2015).
Hyperspecialization is a natural consequence of our collective epistemic progress: as knowledge accumulates, it fragments. A gifted polymath born in the eighteenth century, such as William Whewell, could master entire scientific disciplines. The progress that has been made since then has rendered this an impossibility. A scientific field such as chemistry, which was born during the Enlightenment, now has thousands of specialized journals publishing new issues every few months, each one running hundreds of pages. Even trained chemists cannot possibly keep up with more than a fraction of the literature in their own sub-specialization. Furthermore, specialists differ not only in what they know, but in how they know; they have their own lexicons, standards, forms of argument, techniques, equipment, etc. As is the case with hyperspecialization more generally, non-specialists cannot reliably understand and assess the epistemic labor of experts. The result, claims Millgram, is that “…we should be prepared to find that we live in a society of people who are logical aliens with respect to one another. And that is what we do find” (Ibid. 32–3). I know that climate scientists believe that the earth’s atmosphere is warming at an alarming rate, and I have a vague sense of the mechanisms that they believe are responsible for global warming, but I have almost no idea what their evidence consists of. I don’t know what data they have, where they came from, what their models say, or what inferences they’ve drawn from them. Most people are in a similar situation. In general, we rely on experts whose reasons we cannot understand and whose conclusions we cannot reliably evaluate.
This incapacitation of our reasoning in specialized domains presents us with the dilemma of hyperspecialization. In the absence of transparent and legible reasons to inform our beliefs, we can access the expertise of specialists only by adopting an attitude of unquestioning trust (Nguyen Reference Nguyen2022). This is fine, and even beneficial, when our trust is well placed, but we have limited means of discerning when this is the case, which leaves us vulnerable to deception and manipulation (see §4). In the absence of this trust, specialists must appeal to their audiences by making their reasons transparent and legible to non-specialists. This enables the kind of epistemic vigilance that is required for public oversight and expert accountability, but it comes at the cost of degrading the reliability of expertise.
Non-specialists have access to nothing more than a small corner of the logical space available to experts because they are uninitiated in the terminologies, methods, techniques, standards, and knowledge that define that space. Requirements of public transparency and legibility force experts to work disproportionately in those small corners, such that they can explain themselves to laypersons. Nguyen (Reference Nguyen2021) calls this the problem of epistemic intrusiveness. It takes several forms: experts invent surrogate justifications for public consumption, limit their actions to those they can rationalize to the public, and/or pursue publicly accessible reasons in their own deliberations. The last of these forms is the most intrusive. By guiding the justificatory efforts of experts, demands for public legibility can distort the very expertise they are supposed to reveal. Nguyen sees this force at work in publicly accessible metrics that are introduced to evaluate expert performance. For example, the watchdog organization Charity Navigator assessed the performance of non-profit charities using the single criterion of overhead expenses. In order to climb their rankings and garner public favor and funds, charities cut expenses, often making themselves too lean to function effectively. Thus, while Charity Navigator sought to help the public navigate the complex space of charitable efforts to maximize philanthropic outcomes, its selective transparency and legibility created a system of incentives that undermined the very project it was engaged in. More generally, when the work of experts is guided by the public’s idea of what that work should look like, it is often not done as well as it would have been if it were done outside of the public’s purview.
The dilemma of hyperspecialization is becoming more acute with the automation of epistemic labor. In many areas, including medical diagnosis, psychiatric prognosis, risk assessment, and jurisprudential decision-making, human experts are being replaced by fine-tuned algorithms. These algorithms are typically opaque for several reasons (Burrell Reference Burrell2016). One is intentional concealment. Producing effective algorithms is an expensive endeavor that requires access to large amounts of data and computing power. Producers of algorithms, usually companies and governments, often have an interest in protecting their investments from exploitation or duplication by keeping them under wraps. A second source of algorithmic opacity is technical illiteracy. Reading and writing computer code are specialized skills that most people do not possess; they wouldn’t know what to make of an algorithm if it were presented to them. Complexity is a third source of opacity. Optimized algorithms must incorporate many parameters to fit the data they are trained on; this makes them very difficult to understand and explain, even for those with technical expertise. This fact reintroduces a version of the dilemma of hyperspecialization: making algorithms more intelligible will typically make them less accurate. Hence, we are epistemically alienated from the algorithms that replace expert judgment no less than we are from the reasoning of experts.
In fact, we are on the precipice of being more alienated from algorithmic judgment than we are from expert judgment. Linear algorithms run the danger of overfitting the data they are trained on, especially in complex and volatile domains where rare events can upset longstanding trends (Katsikopoulos et al. Reference Katsikopoulos2020; Gigerenzer Reference Gigerenzer2022). This danger can be mitigated using non-linear forms of machine learning, such as deep neural networks, that can detect subtle relationships between predictive cues in enormous datasets. These new forms of AI can master entire disciplines; they can be brought to the frontiers of knowledge much faster and more thoroughly than human beings. But, unlike linear algorithms, they are essentially opaque because of their staggering complexity and black box learning design. For example, deep neural networks consist of many hidden layers of nodes between the input layer, which receives the information from outside the network, and the output layer, which generates the result; the workings of these hidden layers are no more intelligible to us than the workings of the neurons inside our own skulls. As a result, their hyperspecialized labor may not be understood by anyone, including human experts. In these cases, nobody knows why AIs succeed or how they might fail; they are, in effect, techno-oracles.
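The architecture just described can be made concrete with a toy sketch. This is a minimal illustration with made-up layer sizes and random weights, not any particular production system; its point is that even in so small a network, no individual weight in the hidden layers has an assignable meaning.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal feedforward network with two hidden layers of nodes between
# the input layer and the output layer, as described above. The weights
# here are random; in a trained network they are tuned by gradient
# descent, but they remain just as opaque to inspection.
W1 = rng.normal(size=(4, 8))   # input layer (4 features) -> hidden layer 1
W2 = rng.normal(size=(8, 8))   # hidden layer 1 -> hidden layer 2
W3 = rng.normal(size=(8, 1))   # hidden layer 2 -> output layer

def forward(x):
    h1 = np.tanh(x @ W1)       # hidden layer 1 activations
    h2 = np.tanh(h1 @ W2)      # hidden layer 2 activations
    return h2 @ W3             # output layer

x = rng.normal(size=(1, 4))    # one input with 4 features
y = forward(x)

# All 104 weights jointly determine the output, but no single weight
# "means" anything on its own -- the black box character in miniature.
```

Production networks differ only in scale, with billions of weights rather than a hundred, which is why their hyperspecialized labor may be understood by no one at all.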
Unlike Mandevillian weaknesses, our inability to reason effectively across disparate specializations is an epistemic impediment for both individuals and collectives. It is a weakness of alienation. By interfering with the exchange of ideas and arguments that expand the frontiers of our collective knowledge, hyperspecialization slows that expansion by suppressing innovation (Muthukrishna Reference Muthukrishna2023, 154). Yet, hyperspecialization is a natural consequence of our epistemic progress. It is like the surface tension of a balloon that increases as the balloon inflates and makes inflation progressively more difficult. It is an epistemic weakness that results from our growing alienation from the processes of cognitive labor, rather than a weakness that can be harnessed to more efficiently distribute cognitive labor.
4. Inducing cognitive bias
It has become commonplace in cognitive psychology to think of human beings as ‘predictably irrational’ (Ariely Reference Ariely2009). When making judgments and decisions under uncertainty, we systematically depart from the strictures of statistics, probability theory, and rational choice theory. These biases are supposed to be the result of our overreliance on intuitive heuristics whose ease and efficiency are purchased at the expense of their accuracy (Tversky and Kahneman Reference Tversky and Kahneman1974; Kahneman Reference Kahneman2011; Stanovich Reference Stanovich2011). The dominant heuristics and biases tradition has provoked a counter-movement that sees heuristics and other intuitive inferential mindware not as cognitive bugs, but as valuable adaptations to the environments in which our ancestors evolved (Gigerenzer Reference Gigerenzer1991, Reference Gigerenzer2008; Cosmides and Tooby Reference Cosmides and Tooby1996). On this adaptationist view, cognitive biases are generally the result of systematic disparities between our modern environments and the mindware we have inherited to handle the tasks that were common in ancestral environments. In other words, they are better attributed to mindware mismatches than to our natural tendency towards cognitive miserliness (Nguyen Reference Nguyen2023; Page Reference Page2023; Bland Reference Bland2024). These mismatches are perpetuated by the disparity between the slow pace of genetic evolution and the increasingly rapid pace of cultural evolution. Modern environments are, to an unprecedented degree, culturally constructed; as cultural evolution intensifies with the division of cognitive labor, our environments will continue to change in ways that cannot be easily accommodated by our genetic inheritance. Our resulting cognitive biases are thus weaknesses of alienation.
Many aspects of statistics, probability theory, and rational choice theory do not come naturally to most people. This shouldn’t be surprising since their fundamental principles went unarticulated until relatively recently. Like other products of cumulative cultural evolution, these theories express insights that rest on centuries of accumulated knowledge, very little of which must be understood by those who inherit portions of these theories through cultural transmission and put them to use. And our limited use of these theories, together with the fact that others are known to understand them, leads to the illusion of explanatory depth; we think we understand them better than we actually do (see §2). By making casual use of powerful theories and concepts that we don’t fully or easily understand, we alienate ourselves from the informational environments in which we make judgments and decisions under uncertainty. This weakness of alienation, when unrecognized, masquerades as an irrational reliance on cognitive heuristics, such as the representativeness heuristic.
A classic instance of the representativeness bias is elicited by the following task (Tversky and Kahneman Reference Tversky, Kahneman, Gilovich, Griffin and Kahneman2002):
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which of the following statements is more likely to be true of Linda?
(B) Linda is a bank teller.
(B&F) Linda is a feminist bank teller.
Tversky and Kahneman found that 85% of subjects answered that (B&F) is more likely than (B). This is a conjunction error: a conjunction cannot be more probable than either of its conjuncts. Linda’s being a feminist bank teller entails that she is a bank teller, but she could be a bank teller who is not a feminist. Most people make this error, argue Tversky and Kahneman, because they rely on the representativeness heuristic rather than logic and probability theory: the description of Linda is more representative of a feminist bank teller than it is of a bank teller. Gigerenzer (Reference Gigerenzer1991) offers a different explanation. He claims that these conjunction errors are attributable to the framing of the task in terms of abstract probabilities. Our ancestors evolved to think about natural frequencies, rather than abstract probabilities, and our genetic inheritance has yet to catch up to our cultural inheritance. To test this diagnosis, Hertwig and Gigerenzer (Reference Hertwig and Gigerenzer1999) re-framed the problem as follows:
There are 100 people who fit the description above (i.e., Linda’s).
How many of them are:
(a) bank tellers
(b) bank tellers who are active in the feminist movement
They found that only 15% of subjects committed the conjunction error when presented with this framing. Gigerenzer claims that this insight generalizes: many well-documented biases, including base-rate neglect and overconfidence, can be significantly attenuated when experimental tasks are framed in terms of natural frequencies rather than mathematical probabilities. In one sense, this is good news: these cognitive failures are not the result of our inherent irrationality. In another sense, it’s not, since probabilities are often framed abstractly, as in weather forecasts, election predictions, batting averages, medical prognoses, etc. By populating our informational environments with these abstract notions, we are making it unnecessarily difficult for most people to arrive at sound judgments and decisions under conditions of uncertainty.
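Why the frequency framing helps can be seen in a minimal simulation. The proportions below are illustrative assumptions, not data from Hertwig and Gigerenzer's experiments; the point is that once the task is stated as counts in a concrete population, the conjunction rule becomes impossible to violate.

```python
import random

random.seed(42)

# Build a hypothetical population of 100 people fitting Linda's
# description, assigning each person two (assumed) traits at random.
N = 100
people = [(random.random() < 0.10,   # assumed: 10% are bank tellers
           random.random() < 0.60)   # assumed: 60% are feminists
          for _ in range(N)]

tellers = sum(1 for teller, _ in people if teller)
feminist_tellers = sum(1 for teller, feminist in people
                       if teller and feminist)

# Stated as counts, the conjunction rule is transparent: every feminist
# bank teller is also counted among the bank tellers, so the second
# count can never exceed the first, whatever proportions are assumed.
print(tellers, feminist_tellers)
```

No comparable transparency attaches to the abstract question "which statement is more probable?", which is precisely Gigerenzer's point about natural frequencies.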
In addition to concepts and theories that can be used to think about the information at our disposal, cumulative culture includes more efficient ways of transmitting information. Unlike our ancient ancestors, we needn’t rely on our memories, and the memories of our kith and kin, to know what is going on in our communities, and in the world at large. We have outsourced much of this cognitive labor to news media companies that compete with one another to present us with this information. To succeed in this market, it is not enough to deliver accurate information; the news must also be captivating. Media coverage tends to be biased in favor of extraordinary, recent, vivid, and poignant information because this is what captures our attention. This bias in our informational landscape leads to a stronger availability bias in people’s reasoning. The availability heuristic treats the ease with which events of a particular kind come to mind as an index of their frequency (Tversky and Kahneman Reference Tversky and Kahneman1973). This will be a reliable rule of thumb when cognitive availability is influenced primarily by experienced frequency, and our experiences form a representative sample of the larger population. This would have generally been the case for our ancient ancestors. If they could more easily recall seeing a particular kind of plant in one location than in another, it was probably because the plant was more abundant in that location. By contrast, in the modern media landscape, ordinary events get under-reported, while extraordinary ones get over-reported, which causes our use of the availability heuristic to backfire. This may be why, for example, the fear of flying is more widespread than the fear of driving, even though the former is a much safer mode of transportation than the latter (Savage Reference Savage2013). Furthermore, cognitive availability is influenced by factors besides experienced frequency, such as recency, vividness, and poignancy. In the wake of a terrorist attack or nuclear disaster, which inevitably receives extensive media coverage, people tend to overestimate how often such events occur (Slovic et al. Reference Slovic, Fischhoff, Lichtenstein, Jungermann and De Zeeuw1976). Thus, while we have access to vastly more socially transmitted information than our ancestors did, the inferences we draw from that information are more likely to be biased by our continued reliance on heuristics that are adapted to more information-scarce environments.
There is also a growing mismatch between our social learning heuristics and our modern informational environments. The hyperspecialization that has resulted from the division of cognitive labor makes it increasingly difficult to personally vet socially transmitted information (§3). But argumentation is only one of the mechanisms of epistemic vigilance at our disposal. In domains where it is of little use, we use social learning heuristics that leverage discernible cues about the credibility of testifiers – rather than their testimony – to determine who is worth listening to. For example, we take the testimony of majorities and prestigious figures seriously (Richerson and Boyd Reference Richerson and Boyd2005). When these cues are veridical, they are reliable indicators of accurate information. We should accept the consensus view of the Intergovernmental Panel on Climate Change, even though most of us cannot assess the evidence on which it is based, because its membership is restricted to respected researchers in relevant and independent fields. When signals of credibility are not veridical, they are the source of what Levy calls bad beliefs, i.e., beliefs that straightforwardly contravene the available evidence (Levy Reference Levy2022b). The view that climate change has nothing to do with human activities is one such belief. Levy rejects the commonplace view that bad beliefs are the result of irrational reasoning. Deferring to others who seem to be in a better epistemic position on topics about which we have little or no specialized expertise is precisely what we should be doing. The problem is that it can be difficult for non-specialists to discern true signs of credibility, especially when false signs proliferate. This type of epistemic pollution is common because it is easier to fake signals of credibility than it is to generate real ones. Individuals and collectives can adopt the trappings of epistemic authority without having earned it by mimicking the credentials of real experts (Guerrero Reference Guerrero and Peels2017). For example, the Nongovernmental International Panel on Climate Change has been designed to resemble the Intergovernmental Panel on Climate Change in everything but genuine competence. The portrayal of false balance in the media – giving equal time and expression to both sides of ‘controversial’ issues, such as climate change – has widened the gap between expert consensus and the public’s estimation of expert opinion (Cook et al. Reference Cook, Oreskes, Doran, Anderegg, Verheggen, Maibach, Carlton, Lewandowsky, Skuce, Green and Nuccitelli2016). The fact that signals of specialized knowledge are often easy to produce but difficult to accurately discern renders the social learning heuristics on which we increasingly rely less effective in modern informational environments.
Recognizing that our alienation from the environments of evolutionary adaptation plays a central role in our biased judgments and decisions has important implications for debiasing efforts. Rather than focusing almost exclusively on interventions designed to change how individuals think – e.g., making them more statistically savvy and resistant to misinformation – we should also try to make our shared informational environments more hospitable to our ancient ways of thinking – e.g., framing probabilities as natural frequencies and wisely regulating news media (Gigerenzer Reference Gigerenzer2008; Levy Reference Levy2022b).
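The payoff of reframing probabilities as natural frequencies can be shown with a small worked example. The numbers below are illustrative assumptions in the style of Gigerenzer's well-known diagnostic-test cases, not figures drawn from this paper: a condition with 1% prevalence and a test with an 80% detection rate and a 9.6% false-positive rate.

```python
# Natural frequencies re-express abstract probabilities as counts in a
# concrete population, which makes the base rate hard to neglect.
population = 1000
sick = round(population * 0.01)        # 10 people have the condition
true_pos = round(sick * 0.80)          # 8 of them test positive
healthy = population - sick            # 990 people do not have it
false_pos = round(healthy * 0.096)     # 95 of them test positive anyway

# Of everyone who tests positive, how many actually have the condition?
ppv = true_pos / (true_pos + false_pos)
print(f"{true_pos} of {true_pos + false_pos} positives are sick "
      f"(PPV = {ppv:.1%})")
```

Stated abstractly ("the probability of a positive test given the condition is 80%..."), most people vastly overestimate the answer; stated as counts, it is nearly self-evident that only a small minority of positives are true positives.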
5. Conclusion
Epistemic alienation is the problematic separation of inquirers from important aspects of their intellectual lives. The division of cognitive labor is perhaps the single biggest source of our epistemic alienation. It is also one of the primary drivers of our epistemic progress. Distinguishing the alienating from the empowering aspects of our collective epistemic endeavors is an important task for epistemologists. To that end, this paper distinguishes two types of weaknesses exhibited by individuals that relate to the division of cognitive labor, only one of which is the result of epistemic alienation. Mandevillian weaknesses, such as closed-mindedness, undermine the epistemic prospects of individuals but improve the prospects of well-coordinated collectives. When collectives harness these shortcomings, they do not separate their members from the epistemic ideals to which they aspire; they enable themselves to approach these ideals in the only way possible: by dividing the cognitive labor required to do so. By contrast, weaknesses of alienation, such as intellectual arrogance, the incapacity of reason, and cognitive biases, are the result of a problematic separation of inquirers from the products, processes, and environments of collective inquiry. When left unchecked, these weaknesses are intensified, rather than mitigated, by increasingly efficient divisions of cognitive labor. Having drawn this distinction, we can now articulate an important objective for social epistemology: finding ways of effectively harnessing Mandevillian weaknesses while minimizing weaknesses of alienation.
Acknowledgment
I would like to thank Mark Alfano, Mandi Astola, Mara Neijzen, Adam Piovarchy, Ritsaart Reimann, Paul Smart, and the anonymous reviewer and associate editor for their helpful comments on earlier drafts of this paper.