1. Introduction
The spread of false and misleading information online is often framed as a problem of social media platforms. This framing suggests that the primary solutions should focus on regulating these platforms, either through state intervention or through self-regulation. Much less attention, however, is given to the people who use these platforms and to their agency in the process of spreading misinformation. This neglect is partly understandable, as proposals to regulate individual behavior online are frequently met with fears of widespread censorship. Yet, it may be possible to substantially reduce the harmful effects of disinformation by shifting the focus toward individuals, without introducing broad restrictions that would provoke legitimate objections. The key is to concentrate not on everyone, but on those who contribute most significantly – on superspreaders, which DeVerna et al. define as “accounts that introduce low-credibility content, which then disseminates widely” (DeVerna et al. Reference DeVerna, Aiyappa, Pacheco, Bryden and Menczer2024: 2).
Archer et al. argue that “a very small number of social-media accounts enjoy the vast majority of online attention, lending their controllers massive agenda-setting powers” (Archer et al. Reference Archer, Alfano and Dennis2024: 761). A report from the Center for Countering Digital Hate on COVID-19 misinformation shows that most of the misinformation was propagated by just 12 individuals (“The Disinformation Dozen: Why Platforms Must Act on Twelve Leading Online Antivaxxers”, 2021). DeVerna et al. (Reference DeVerna, Aiyappa, Pacheco, Bryden and Menczer2024) demonstrate that removing only ten Twitter accounts would eliminate approximately 34% of all future retweets of low-credibility content, revealing an extreme concentration of influence among a very small number of users. An equally important finding from their study is that accounts with very large followings are significantly less likely to be suspended, suggesting that platforms may exercise leniency toward prominent or verified users. These insights indicate that drafting laws targeting even a few key actors could substantially reduce the circulation of toxic or misleading information, while also highlighting a structural asymmetry in platform governance: those who possess greater “epistemic power” are less likely to face moderation or removal.
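The kind of concentration that DeVerna et al. report can be made concrete with a small numerical illustration. The sketch below (in Python, using synthetic heavy-tailed data, not their dataset) computes the share of total retweets attributable to the top-k accounts; with a Pareto-like distribution of activity, a handful of accounts routinely accounts for a large fraction of the total. All numbers here are illustrative assumptions, not empirical results.

```python
# Illustrative only: synthetic heavy-tailed retweet counts, not real data.
import random

random.seed(42)

# Simulate per-account retweet totals with a Pareto-like heavy tail.
accounts = [random.paretovariate(1.2) for _ in range(10_000)]

def top_k_share(counts, k):
    """Fraction of all retweets attributable to the k most-retweeted accounts."""
    ranked = sorted(counts, reverse=True)
    return sum(ranked[:k]) / sum(counts)

share = top_k_share(accounts, 10)
print(f"Top 10 of 10,000 accounts: {share:.1%} of all retweets")
```

With heavier tails (smaller Pareto exponents), the top-10 share grows further, which is the structural point behind focusing regulation on superspreaders.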
What is more, there is an increasing need to rethink strategies for combating disinformation and misinformation in light of the rapid development of AI technologies. The emergence of tools capable of producing convincing text, images, and videos at minimal cost has fundamentally altered the landscape of information production. As several studies demonstrate, the potential for abuse is substantial. Goldstein et al. (Reference Goldstein, Chao, Grossman, Stamos and Tomz2024) and Salvi et al. (Reference Salvi, Ribeiro, Gallotti and West2025) show that modern AI systems can generate persuasive propaganda quickly, cheaply, and with little technical expertise, enabling the effortless creation of manipulative content on a scale previously unimaginable. Similarly, Bai et al. (Reference Bai, Voelkel, Muldowney, Eichstaedt and Willer2025) find that AI tools can produce politically persuasive messages at high volume and speed, dramatically lowering the barriers to coordinated misinformation campaigns. The situation is further exacerbated by advances in synthetic video technologies, including deepfakes, which make it increasingly difficult for audiences to distinguish between authentic and fabricated material (Hynek et al. Reference Hynek, Gavurova and Kubak2025; Diel et al. Reference Diel, Lalgi, Schröter, MacDorman, Teufel and Bäuerle2024).
Taken together, these developments suggest that the problem of disinformation will intensify, both in scale and sophistication. Consequently, existing approaches may no longer suffice. In this context, it becomes even more pressing to differentiate responsibility based on epistemic power, as the ability to amplify and legitimize AI-generated falsehoods will not be evenly distributed. Those with significant influence, reach, or credibility can transform algorithmically generated content from mere noise into socially consequential misinformation, thereby reinforcing the argument that legal and moral duties should scale with epistemic power.
In this paper, I draw on the phrase popularized by the Spider-Man series, “with great power comes great responsibility,” and argue that, in the face of rampant mis/disinformation – now dramatically amplified by the capacities of generative AI – we must be willing to consider more radical regulatory measures. Specifically, I propose that those who wield significant epistemic power – that is, those capable of shaping what others believe and know – should bear legal responsibility for the dissemination of fake news. In other words, the maxim might today be reformulated as: with epistemic power comes criminal responsibility.
This proposal contributes to the ongoing discussion on combating fake news. The idea of criminalization is often met with strong opposition, typically grounded in concerns about the potential for widespread censorship. Limiting this proposal to individuals who hold epistemic power serves as a response to that objection. Criminalization would not concern everyone, but would apply only to those who occupy a significant position in the distribution of knowledge. The dependence of criminal responsibility on the characteristics of the subject is already well known in criminal law. Some legal systems recognize similar restrictions on criminal responsibility for individuals with specific roles or capacities. In certain jurisdictions, public officials can be held criminally liable for actions that would not constitute crimes if committed by ordinary citizens. One justification for these stricter standards is the potential abuse of power. The idea developed in this paper follows the same logic: it addresses another form of power abuse – the abuse of epistemic power.
This paper is organized as follows. After the introduction, I briefly discuss existing methods of combating disinformation. I then review ideas concerning the objective scope of criminalizing fake news. The following section focuses on subjective perspectives on criminalization. Next, I examine the proposed offense of abusing epistemic power to spread misinformation. Issues related to freedom of speech are addressed thereafter. The paper concludes with some final remarks.
Before proceeding further, some terminological clarification is necessary. In the title of this paper, I use the term disinformation. I follow the understanding proposed by Wardle, who introduces the broad concept of information disorder and distinguishes three categories within it: misinformation, disinformation, and malinformation (Wardle Reference Wardle2018). Both misinformation and disinformation consist of false information; the key difference lies in the intent of the person spreading it. Those who share false information without the intention to harm spread misinformation, whereas those who spread it with the intention to harm engage in disinformation. Malinformation, by contrast, consists of true information shared with the intent to cause harm. By focusing on disinformation, I aim to highlight the element of harmful intent in the dissemination of information – an aspect particularly relevant from the perspective of criminal law. In non-criminal contexts, I use the combined term mis/disinformation. Occasionally, I also use the term fake news in the context of criminalization (on definitions of fake news, see e.g., Katsirea Reference Katsirea2018; Lazer et al. Reference Lazer, Baum, Benkler, Berinsky, Greenhill, Menczer, Metzger, Nyhan, Pennycook, Rothschild, Schudson, Sloman, Sunstein, Thorson, Watts and Zittrain2018; Molina et al. Reference Molina, Shyam Sundar, Le and Dongwon2019; Southwell et al. Reference Southwell, Thorson and Sheble2017); whenever I do, it should be understood as referring to disinformation.
2. Fighting mis/disinformation
In this section, I will briefly discuss the existing methods of combating misinformation and disinformation and situate my proposal within that broader framework. At a general level, such measures can be divided into two groups: those that address false information after it has already spread, and those that aim to prevent its dissemination before it occurs. The first group focuses on what to do once misinformation is already in circulation – how to identify, debunk, or counteract it. The second group seeks to prevent misinformation from being published in the first place, or to mitigate its potential destructive impact before it reaches an audience (after all, information ceases to be harmful if no one believes it). Both kinds of measures are necessary. In this paper, however, I defend the thesis that there should exist incentive structures – grounded in the criminal law apparatus – that discourage the publication of disinformation. This proposal is not intended to stand alone but rather to supplement existing approaches.
Numerous measures have been proposed to combat misinformation and disinformation after publication. Many of them focus on the platforms and vary significantly, ranging from deleting the false information to removing the accounts that spread it (Bursztynsky Reference Bursztynsky2021). After such an intervention, the content is no longer available to other users, and its author cannot post again on that platform. Fighting mis/disinformation, however, does not need to involve deletion. For example, flagging problematic content as false might lessen the influence of that information (see, e.g., Kim et al. Reference Kim, Tabibian, Oh, Schölkopf and Gomez-Rodriguez2018; Aruguete et al. Reference Aruguete, Bachmann, Calvo, Valenzuela and Ventura2025; Steensen et al. Reference Steensen, Kalsnes and Westlund2024). A similar strategy involves promoting, alongside the false information, other sources offering a credible explanation of the phenomenon (see, e.g., Alemanno Reference Alemanno2018). Another mechanism is reducing the visibility of problematic content, although its effectiveness might also be significantly limited (Cotter et al. Reference Cotter, DeCook and Kanthawala2022).
Education is a measure that belongs to both categories. Implemented before the spread of misinformation, it can prevent people from publishing false information (“I will not post this because I know it is untrue”) or help neutralize potentially harmful content (“I do not believe this information because I know it is false”). Educational efforts can also occur after the publication of misinformation (“I no longer believe this information because I have been persuaded that it is false”). However, educational strategies face significant practical challenges, which justify considering additional preventive measures. Before turning to these, let us first examine where the core problem lies.
One might think that education and the explanation of why certain information is false could help; however, this is not always the case. People are not easily persuaded (Gottlieb Reference Gottlieb2016; Klein Reference Klein2021), and meta-analyses suggest that attempts at debunking misinformation are often unsuccessful (Chan and Albarracín Reference Chan and Albarracín2023). Studies show, for instance, that even a few minutes of exposure to false information about vaccine safety can decrease people’s willingness to vaccinate (Betsch et al. Reference Betsch, Renkewitz, Betsch and Ulshöfer2010). Moreover, the process of debunking false information can, in some cases, lead to undesirable outcomes: more people may become exposed to harmful ideas, thereby increasing the visibility of information that should not be epistemically available (Mosleh et al. Reference Mosleh, Martel, Eckles and Rand2021).
Another issue concerns the motivations behind spreading misinformation and disinformation, for example, religious beliefs, political preferences, fame, money, and even geopolitics (Dhama et al. Reference Dhama, Sharun, Tiwari, Dhawan, Emran, Rabaan and Alhumaid2021; Ahmed Reference Ahmed2021; Scott Reference Scott2020; Size Reference Size2020; Melchior and Oliveira Reference Melchior and Oliveira2024). If someone already knows that the information is false (or at least recognized as false by the mainstream), educational efforts are likely to be ineffective. Moreover, as mentioned above, the public process of debunking such information may unintentionally expose others to harmful content, thereby amplifying rather than reducing its impact.
What lesson can be drawn from this? It may be better not to be exposed to false information at all, because once it becomes available, its influence cannot simply be undone. This suggests that certain kinds of false information should perhaps not be available at all. Denniss and Lindberg, in their paper about the threat of misinformation to public health, argue that there is “an urgent need for primary prevention, […] Anything less means misinformation – and its societal consequences – will continue to spread” (Denniss and Lindberg Reference Denniss and Lindberg2025: 1). The core proposition of this paper is coherent with that way of thinking and aims to limit the spread of misinformation by creating incentives not to publish it in the first place.
One might ask why I classify criminal responsibility as a preventive measure. After all, criminal responsibility is typically imposed after a crime has been committed. However, I refer here to criminal law as a preventive tool in a utilitarian sense. In brief, the justification for criminal law can be either backward-looking – we punish because offenders deserve it – or forward-looking – we punish to prevent future crimes (on theories of punishment, Canton Reference Canton, Focquaert, Shaw and Waller2020). Both aspects are present in most legal systems, but in this context, I understand the criminalization of the abuse of epistemic power to spread misinformation primarily as a preventive measure. It is intended to serve as a deterrent for those who hold epistemic power and might otherwise be inclined to publish or engage with potentially harmful content (on deterrence, see Apel and Nagin Reference Apel, Nagin and Tonry2011; Robinson and Darley Reference Robinson and Darley2004). This deterrent effect can operate regardless of the individual’s initial motivation: whether the person believes the information to be true or not, the mere possibility of criminal liability may function as a negative incentive to refrain from posting it. Of course, if such an offense were introduced, some individuals would ultimately be punished under it; yet these cases would also serve as a lesson to others, reinforcing the broader preventive purpose. In practice, criminal responsibility applied to a few can have a much wider social effect.
3. An objective approach to the criminalization of disinformation
Using criminal law to combat false information predates the problem of fake news spread over the Internet. One kind of law that can be interpreted in this way is the crime of Holocaust denial, which is part of the legal system in various countries (see, e.g., Kahn Reference Kahn2004; Teachout Reference Teachout2005). In brief, this crime is committed when someone claims that the Holocaust did not happen. It can thus be considered a form of anti-fake-news regulation, because it protects the truth about a particular historical event, and anyone who publicly claims otherwise can be criminally charged. For the record, there are voices arguing that such laws should be repealed (see, e.g., Singer Reference Singer2016).
Nevertheless, the idea of introducing criminal responsibility to fight false information is not new, and the usual formulation of the problem focuses on the spread of specific categories of fake news, for example, disinformation during electoral periods or the publication of deepfake pornography (see, e.g., Beech Reference Beech2018; Statt Reference Statt2019; Lecher Reference Lecher2019; Gold and Washington Reference Gold2025).
I will illustrate the core idea of this paper using a public health example that I have developed in my other work. These issues are, to some extent, independent. One might agree that there is a need to introduce criminal responsibility for superspreaders with epistemic power without necessarily endorsing my specific formulation of the crime of spreading medical disinformation. However, for the purposes of this discussion, it will be easier to present the concept of abuse of epistemic power for the spread of disinformation with a concrete example in mind.
Elsewhere, I proposed the criminalization of spreading medical fake news (Mamak Reference Mamak2021). The core idea stemmed from the observation that the Internet has become a powerful medium for disseminating false health-related information, which has led to the rise of vaccine hesitancy. In that article, I focused particularly on anti-vaccination claims, such as the false allegation that childhood vaccines cause autism. Empirical data at the time showed an increase in the incidence of diseases that could have been prevented through vaccination, suggesting that misinformation had tangible, harmful public-health effects.
In that context, I examined the limitations of non-criminal strategies for countering fake news – such as education, fact-checking initiatives, and self-regulation by digital platforms – and argued that if these measures prove ineffective, a narrowly tailored criminal provision may be justified. The proposed offense targeted the public dissemination of information that is evidently discrepant with established medical knowledge (“Whoever publicly disseminates information evidently discrepant with medical knowledge is subject to a penalty.”), aiming to deter the circulation of health-related falsehoods that endanger others. For the record, it should also be noted that not all false information about medicine would qualify as prohibited; only that which has the potential to cause substantial harm should fall within the scope of the restriction (Mamak Reference Mamak, Faintuch and Faintuch2022).
I acknowledged that such a measure could be perceived as a restriction of freedom of expression. However, as I argued there, this type of restriction can be constitutionally defensible, since constitutional systems typically permit limitations on certain freedoms when they conflict with other fundamental goods – here, the protection of public health and human life. The proposed approach thus sought to balance freedom of speech with the state’s duty to safeguard citizens against serious health risks created by the deliberate or reckless spread of medical misinformation.
The proposal I will now discuss builds on ideas I first formulated before the recent pandemic. Observing the spread of COVID-19 and the accompanying wave of misinformation convinced me that pandemics require special treatment. Future outbreaks are likely, and one of the key factors influencing the effectiveness of any response will be the quality of information available to the public. Even if safe and effective vaccines are developed, they may not be widely used if misinformation portrays them as dangerous or deadly.
In my recent publication, I revisited my earlier proposal for the criminalization of medical fake news and adapted it to the specific context of pandemics (Mamak Reference Mamak, Faintuch and Faintuch2025). I proposed the following offense:
Whoever publicly disseminates information discrepant with medical knowledge during a pandemic is subject to a penalty.
The difference between this provision and the earlier, more general one is twofold. First, it applies a lower standard of certainty regarding what counts as medical knowledge. Second, it is limited to periods of pandemics. During such times, controlling the spread of information is crucial, as it can directly affect public health outcomes. The COVID-19 pandemic revealed how fragile public trust in vaccines can be, and widespread disinformation may lead to preventable deaths. Therefore, restricting the circulation of vaccine-related information during pandemics may be justified.
Importantly, the lower threshold of certainty about medical knowledge does not mean that any publication referring to medical matters qualifies as “medical knowledge.” Rather, this term should be understood as information recognized as such by experts. This assessment may depend on factors such as the place of publication (e.g., reputable versus predatory journals), the methodology employed, and the general acceptance of findings within the scientific community.
In sum, it is useful to ground future deliberations on the abuse of epistemic power in specific examples of misinformation. There are many possible cases to consider; however, in this paper, I have focused on two offenses related to public health issues.
4. A subjective approach to the criminalization of disinformation: epistemic power
We now turn to the central part of this paper, where I present the justification for a subjective differentiation of criminal liability for abuse of epistemic power. The argument unfolds in three steps. First, I will show that legal systems sometimes differentiate criminal responsibility based on the subjective characteristics of the individual. I will use the example of public officials, who can be punished for abuse of power – conduct that constitutes a crime only when committed by those holding power, not by ordinary citizens. Second, I will argue that epistemic power is a meaningful category and that we can distinguish online actors according to whether or not they possess such power. Third, I will demonstrate that the law already differentiates legal evaluation according to the scale of the actor’s influence. Smaller actors are often treated differently from larger ones; this is evident, for instance, in the regulation of abuse of market power. Together, these three steps aim to show that criminalizing only those who abuse epistemic power could be a defensible strategy for combating disinformation. The points presented address a potential counterargument that the proposal might be considered unacceptable because it violates the principle of equality before the law. This section explains why it could nonetheless be acceptable – even in the context of criminalization – to treat different actors differently when they engage in the same act of disseminating disinformation.
4.1. Abuse of power by public officials
Public officials, by virtue of their position within the legal system, are bound not only by the general laws that apply to all citizens but also by special legal duties arising from their official role. Certain acts that would be lawful if performed by an ordinary citizen may constitute a criminal offense when committed by a public official. One justification for this asymmetry lies in the authority and trust vested in officeholders, which must not be abused.
Horder conceptualizes misconduct in public office precisely as an abuse of power (Horder Reference Horder2018: 17). Similarly, Leib and Kent (Reference Leib and Kent2021) argue that public officials stand in a fiduciary relationship to the public, emphasizing that abuse of power is a central concern of public law. Aronson also refers to such provisions as being “driven by a sense of moral outrage at the abuse of collective power” (Aronson Reference Aronson2011: 15). In a recent paper, Ros and Gehrke note that in recent decades there has been an increase in the number of public officials (including former prime ministers and presidents) convicted of corruption by courts in their own countries. They also explain that in corruption cases, criminal responsibility is the enforcement of sanctions “against public officials who engaged in abuse of office for private or partisan gain” (Ros and Gehrke Reference Ros and Gehrke2024: 963).
By deliberately framing the issue in terms of abuse of power, I wish to highlight a broader point: legal systems already recognize that holding power entails heightened responsibility and the potential for abuse. Thus, it is neither novel nor unjustified to suggest that citizens who possess comparable epistemic power – for instance, influential online figures – might also be subject to differentiated legal treatment, given that their actions can have disproportionate social impact.
4.2. Abuse of epistemic power
I have already used the term “epistemic power,” and now I want to expand on it. I want to show that the positions certain people occupy can amount to holding a specific kind of power.
Archer et al. define epistemic power as follows: “A person has epistemic power to the extent she is able to influence what people think, believe, and know, and to the extent she is able to enable and disable others from exerting epistemic influence” (Archer et al. Reference Archer, Cawston, Matheson and Geuskens2020: 29).
As these authors observe, almost everyone possesses some degree of epistemic power, since most people can influence the beliefs of those around them to a limited extent. In this paper, however, I refer to substantial epistemic power – the kind of influence held by celebrities, opinion leaders, or other prominent figures on the Internet whose statements can shape the beliefs and attitudes of large audiences.
Just as public officials are subject to special legal duties because of the authority and trust vested in their offices, individuals who possess epistemic power – such as celebrities or influential online figures – may also justifiably bear heightened responsibilities. Archer et al. (Reference Archer, Alfano and Dennis2024) argue that celebrities’ testimonies reach vast audiences, giving them a form of epistemic power that can significantly shape what others believe and know. This influence, while often unearned, carries the potential to produce serious harms, particularly when misinformation undermines public trust in experts or institutions. Consequently, these authors maintain that those with epistemic power have negative (moral) duties not to spread false information or direct attention toward unreliable sources, and in certain contexts – such as public health crises – positive duties to use their influence responsibly.
This mirrors the legal logic applied to public officials: greater power entails greater responsibility. Both cases rest on the same normative foundation – that the possession of authority, whether political or epistemic, increases the potential for harm and thus justifies differentiated legal treatment. Recognizing epistemic power as a basis for special responsibility extends an already familiar legal principle into the digital domain, where influence and persuasion have become new forms of power.
4.3. The wrongness of the abuse of market power
A further analogy can be drawn from competition law, particularly the concept of abuse of market power. In that domain, not all market behavior is treated equally: certain actions become unlawful only when performed by an entity with a dominant position. As Vickers (Reference Vickers2005) explains, abuse of market power is one of the three core pillars of competition law, alongside anti-competitive agreements and mergers. The law does not prohibit conduct such as setting low prices in general; rather, it prohibits such behavior only when it is used by a dominant firm to distort competition. For example, predatory pricing – selling below cost to eliminate competitors – is considered abusive only when carried out by a firm with substantial market power. If the same pricing strategy is used by a small market participant, it would typically fall outside the scope of regulatory intervention.
In other words, the same act can be evaluated differently by the law depending on the “size” or influence of the actor who performs it. Transferring this reasoning to the question of responsibility, it is conceivable to treat differently the act of posting the same false information about vaccines depending on the number of followers the person has. It is a qualitatively different matter if an anti-vaccine message is posted by an anonymous account with only a few followers than if the identical message is published by a person with hundreds of thousands of followers. For the record, the act of disseminating misinformation is morally wrong regardless of the status of the person committing it. However, the legal evaluation may vary depending on the impact of the actor involved.
To sum up, in this section, I have aimed to make three points. First, individuals who hold power can be held criminally responsible for acts that would not constitute crimes if committed by ordinary citizens; thus, a person’s status or position can legitimately affect the assessment of criminal liability. Second, some individuals possess epistemic power – the ability to shape what others believe and know – and this form of influence, like political or institutional power, can also be abused. Third, from a legal perspective, it is both possible and conceptually consistent to differentiate legal responses to the same act based on the scale or influence of the actor who performs it.
5. Crime of abuse of epistemic power for spreading disinformation
In this section, I outline how the idea of criminalizing the abuse of epistemic power could be conceptualized and operationalized. I present and explain a legislative proposal that serves as a basis for a more in-depth exploration of the underlying idea. This proposal should not be understood as a fully developed legal solution ready for implementation, but rather as a starting point for further discussion and refinement.
The intention is not to suggest that every instance of disinformation disseminated by individuals with epistemic power should be criminalized. Instead, the aim is to consider whether, in cases where criminalization of specific categories of false information is already justified, such regulation might be limited to those who possess epistemic power. In other words, before addressing who should fall within the scope of criminal liability, we must first establish a substantive justification for prohibiting particular categories of disinformation.
Let us assume, for the sake of discussion, that there is broad agreement that a specific category of false information – for instance, disinformation posing a threat to public health – is sufficiently dangerous to warrant criminal regulation. However, it is also likely that a broad or indiscriminate criminalization of such content would be socially and politically unacceptable, as it could give rise to concerns about widespread censorship. (For present purposes, I set aside the question of whether such concerns are justified.)
In such a scenario, a pragmatic way forward would be to narrow the subjective scope of regulation – that is, to limit who can be held criminally liable under the proposed framework. This could be achieved by confining the regulation to individuals who possess epistemic power, thereby targeting those whose speech has the greatest potential impact on public understanding and behavior. The proposed limitation could be formulated as follows:
§ 2. The perpetrator of the offense provided for in § 1 can only be a person who, at the moment of the commission of the act, had at least 10,000 followers.
I need to unpack this a little. The concept of epistemic power remains ambiguous and requires clarification within the context of regulation. As previously noted, epistemic authority is multidimensional; follower counts alone are insufficient and should be complemented by indicators such as average reach, engagement rate, or the individual’s social role (e.g., journalist or medical professional) (Bartsch et al. Reference Bartsch, Neuberger, Stark, Karnowski, Maurer, Pentzold, Quandt, Quiring and Schemer2025). Nevertheless, for regulatory purposes, epistemic power must be operationalized in a manner that is easily understandable, particularly for individuals who might face criminal liability.
The most straightforward way to translate the concept of epistemic power into a clear legislative standard is to express it numerically. For instance, Conde and Casais (Reference Conde and Casais2023) classify Instagram influencers into categories based on follower count: micro-influencers (1000–100,000 followers), macro-influencers (100,000–1 million), and mega-influencers (over 1 million). Other researchers adopt stricter definitions; for example, Park et al. (Reference Park, Lee, Xiong, Septianto and Seo2021) define micro-influencers as those with at least 10,000 followers. Interestingly, their study also finds that smaller influencers are often more persuasive than mega-influencers, as they tend to be perceived as more authentic. Similar findings appear in Kay et al. (Reference Kay, Mulcahy and Parkinson2020), which suggests that smaller influencers may exert greater persuasive power than larger ones, at least within marketing contexts.
Walter et al. (Reference Walter, Föhl and Zagermann2025) further compare micro- and mega-influencers across dimensions such as trustworthiness, expertise, attractiveness, authenticity, and similarity, showing that micro-influencers (over 10,000 followers) are generally perceived as more authentic and trustworthy, and outperform mega-influencers particularly in perceived similarity.
Taking these findings together, it seems unwise to set the regulatory threshold for epistemic power too high, given that micro-influencers can possess substantial persuasive influence, sometimes exceeding that of mega-influencers. I therefore propose a threshold of 10,000 followers as a preliminary benchmark. This figure should, of course, be context-sensitive, as follower counts may vary in meaning across platforms, countries, and audiences. Nonetheless, a single clear numerical threshold provides clarity and legal certainty, allowing for an objective distinction between those who possess significant epistemic power and those who do not.
While there may be sound reasons to adjust this threshold, the proposal offered here should be viewed as an invitation to discussion – a starting point for developing a workable and transparent standard for regulating epistemic power.
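The competing follower-count schemes discussed above can be made concrete in a short sketch. The tier boundaries below follow the Conde and Casais (2023) classification cited earlier, and the 10,000 benchmark is the preliminary threshold proposed here; the function names and structure are illustrative assumptions of mine, not part of any proposal in the literature.

```python
# Illustrative sketch only: tier boundaries follow Conde and Casais (2023);
# the 10,000 benchmark is the preliminary regulatory threshold proposed
# in the text. Names and structure are the author's assumptions.

REGULATORY_THRESHOLD = 10_000  # preliminary benchmark proposed in the text

def influencer_tier(followers: int) -> str:
    """Classify an account by follower count (Conde and Casais 2023)."""
    if followers >= 1_000_000:
        return "mega"
    if followers >= 100_000:
        return "macro"
    if followers >= 1_000:
        return "micro"
    return "none"

def has_epistemic_power(followers: int) -> bool:
    """Apply the single numerical threshold proposed for legal certainty."""
    return followers >= REGULATORY_THRESHOLD
```

Note that under this sketch some micro-influencers (e.g., 15,000 followers) would fall within the regulatory scope while others (e.g., 5,000 followers) would not, reflecting the deliberate choice not to set the threshold at the macro- or mega-influencer level.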
Focusing solely on follower count reduces epistemic power in the proposal to social media presence. Yet, epistemic power is a much broader concept, traditionally encompassing individuals and groups who shape collective understanding and belief beyond digital platforms. These include, for example, scientists, religious leaders, and public intellectuals – figures who may hold considerable epistemic influence even in the absence of a substantial online following (on epistemic authority, see Zagzebski Reference Zagzebski2012).
There is, of course, potential overlap between these categories. A scientist or religious leader might also maintain an active online presence, thereby combining traditional and digital forms of epistemic authority. Nevertheless, this is not always the case; many individuals wielding significant epistemic influence do so primarily through offline institutions, such as academia, religious organizations, or the media.
The starting point of the current proposal is social media, as they represent the primary channels through which misinformation and disinformation spread today (e.g., Denniss and Lindberg Reference Denniss and Lindberg2025). However, future regulatory frameworks could consider how to incorporate other sources of epistemic authority that operate outside or alongside digital networks. Expanding the scope in this way would allow for a more comprehensive understanding of epistemic power as a multi-domain phenomenon, ensuring that regulation keeps pace with the evolving landscape of knowledge production and dissemination.
A further issue concerns the timing of the follower count used to determine epistemic power. The number of followers must be assessed at the moment when the act is committed or the content is published. It is conceivable that a particular post – perhaps one containing misleading or sensational information – could itself lead to a rapid increase in followers, pushing an account above the 10,000 threshold. However, such a change in status should not retroactively affect responsibility for that earlier act.
In other words, if an individual had fewer than 10,000 followers at the time of posting, they would not fall within the scope of the proposed regulation for that specific content, even if their follower count subsequently exceeded the threshold as a result of the post’s popularity. Only content published after the account surpasses the threshold would fall under the new regulatory regime.
This temporal clarification is essential for ensuring legal certainty and avoiding retroactive liability, which would be inconsistent with fundamental principles of criminal law. A clear rule – tying epistemic power to the number of followers at the time of the act – allows individuals to foresee when their speech becomes subject to stricter regulatory standards. It also prevents arbitrary enforcement and ensures that the regulation targets those who, at the time of communicating information, already hold a significant and stable epistemic influence over others.
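The temporal rule just described amounts to a simple decision procedure: only the follower count at the moment of publication matters, and later growth is ignored. The sketch below is purely illustrative; the function name and the idea of a recorded follower count at posting time are my assumptions about how such a check might be operationalized, not a feature of any existing system.

```python
# Illustrative sketch of the temporal rule: liability attaches only if the
# threshold was met at the moment of publication. The function name and the
# notion of a recorded posting-time count are the author's assumptions.

THRESHOLD = 10_000  # preliminary benchmark proposed in the text

def within_scope(followers_at_posting: int, followers_now: int = 0) -> bool:
    """Return True only if the account met the threshold when the content
    was published; followers_now is deliberately ignored, so that later
    growth cannot create retroactive liability."""
    return followers_at_posting >= THRESHOLD
```

For example, a post published at 9,500 followers remains outside the regulation's scope even if the post's popularity later pushes the account to 50,000 followers; only content published after crossing the threshold would be covered.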
6. Free speech issues
Now, I want to spend some time on freedom of speech, which is closely connected with regulations on fake news (cf. Nuñez Reference Nuñez2020; Helm and Nasu Reference Helm and Nasu2021; Jacobs Reference Jacobs2022). Helm and Nasu argue, however, “[…] that some level of restriction on freedom of expression is inevitable due to the need to discourage the creation and distribution of fake news, rather than just preventing its spread” (Helm and Nasu Reference Helm and Nasu2021: 303).
I have suggested that this proposal may be more acceptable from the perspective of freedom of speech than other proposals that call for the use of criminal law to mitigate the spread of disinformation.
Now, I want to explain why. Limiting criminal liability for disinformation to speakers who wield epistemic power is less invasive of freedom of expression than a blanket offense that applies to everyone, because it is narrowly tailored to the locus of greatest potential harm while preserving the expressive space of ordinary users. First, it advances the legitimate aim (e.g., protecting public health) by focusing on actors with amplified reach and persuasive capacity: the law targets the few whose speech predictably causes outsized effects. Second, it reduces chilling effects on everyday participation in public debate; most citizens can speak, criticize, and err without fearing criminal liability, which safeguards the “breathing space” essential to democratic discourse and scientific contestation. Third, using a clear, ex ante threshold (e.g., 10,000 followers, assessed at the time of the act) provides foreseeability and legal certainty, enabling affected speakers to understand when stricter duties apply, while avoiding retroactive punishment. Finally, by calibrating liability to capacity to influence, the scheme is proportionate to responsibility: those who benefit from heightened authority and attention bear correspondingly greater obligations, whereas ordinary citizens retain the broadest protection for their speech.
Some practical obstacles must be acknowledged. The proposal developed here is primarily normative: it outlines what, in my view, would be justified and how regulation could ideally be organized. Nevertheless, I recognize that certain practical difficulties could hinder its implementation. The criminalization of disinformation would face many of the same challenges as other crimes committed via the internet, such as jurisdictional issues arising from border-dependent criminal law and the general difficulty of enforcement in virtual spaces (see, e.g., Zając Reference Zając2019). All these problems would apply here as well. However, limiting the scope of criminalization to individuals with epistemic power would partly mitigate these obstacles. Those who possess epistemic power are often publicly known, operate under their real names, and frequently monetize their online presence. Compared to anonymous actors or state-sponsored trolls, such individuals would be far easier to identify and hold accountable.
7. Conclusions
In this paper, I have argued that, in combating disinformation, we should not focus on everyone who spreads it, but rather on superspreaders – those who possess the power to shape the beliefs of others, that is, those who hold epistemic power. Recent studies demonstrate that not everyone has the same capacity to disseminate misinformation; there are fundamental differences between various groups of actors. Targeting those few who contribute the most could make a substantial difference in reducing the spread of harmful and misleading information.
I have framed this proposal as the creation of a new criminal offense: the abuse of epistemic power for the purpose of spreading disinformation. This approach contributes to ongoing debates by shifting the focus of regulation. By concentrating on a small group of individuals who wield disproportionate influence, the proposal avoids the usual objections concerning widespread censorship that accompany many regulatory initiatives. Instead, it raises a different, but more manageable, issue – equality before the law.
I have also shown why superspreaders can justifiably be treated differently under criminal law. Legal systems already recognize mechanisms for differentiating legal responsibility based on the role and position of the actor – for example, in the case of public officials or those who hold market power. Possessing epistemic power marks a similar kind of distinction: it justifies treating these individuals differently because of their unique ability to influence collective belief and knowledge. Consequently, I conclude that with epistemic power should come criminal responsibility.
Several limitations of this proposal should be acknowledged. In particular, further work is required to refine proportionality reasoning in constitutional terms and to justify more precisely the criteria for identifying sufficiently high levels of epistemic power, both with respect to the appropriate follower thresholds and the possible restriction of criminal responsibility to narrowly defined domains of high-risk public interest, such as public health.
Competing interests
None.
Financial support
ERC Robocrim, 101164310.