
Disinformation and Superspreaders: With Epistemic Power Should Come Criminal Responsibility

Published online by Cambridge University Press:  03 March 2026

Kamil Mamak*
Affiliation:
Department of Criminal Law, Jagiellonian University, Poland

Abstract

Despite years of efforts to combat disinformation, we remain far from a satisfactory set of solutions. The rise of generative AI, which enables the creation of highly credible fake content at scale, suggests that the problem is likely to grow even more severe. Lessons from the recent pandemic also call for a reconsideration of how disinformation should be addressed. This paper proposes a new approach that focuses not on regulating everyone who spreads false information, but rather on those who hold epistemic power – individuals with the capacity to shape what others know or believe. Such a strategy has the potential to move the debate forward, as it avoids the most common objection to disinformation regulation: the fear of widespread censorship. The paper argues that an individual’s epistemic position can justifiably differentiate their legal duties, and that those who possess epistemic power should bear corresponding legal – specifically, criminal – responsibility for the abuse of that power in spreading disinformation.

Information

Type
Article
Creative Commons
Creative Commons License – CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

1. Introduction

The spread of false and misleading information online is often framed as a problem of social media platforms. This framing suggests that the primary solutions should focus on regulating these platforms, either through state intervention or through self-regulation. Much less attention, however, is given to the people who use these platforms and to their agency in the process of spreading misinformation. This neglect is partly understandable, as proposals to regulate individual behavior online are frequently met with fears of widespread censorship. Yet, it may be possible to substantially reduce the harmful effects of disinformation by shifting the focus toward individuals, without introducing broad restrictions that would provoke legitimate objections. The key is to concentrate not on everyone, but on those who contribute most significantly – on superspreaders, which DeVerna et al. define as “accounts that introduce low-credibility content, which then disseminates widely” (DeVerna et al. 2024: 2).

Archer et al. argue that “a very small number of social-media accounts enjoy the vast majority of online attention, lending their controllers massive agenda-setting powers” (Archer et al. 2024: 761). A report from the Center for Countering Digital Hate on COVID-19 misinformation shows that most of the misinformation was propagated by just 12 individuals (“The Disinformation Dozen: Why Platforms Must Act on Twelve Leading Online Antivaxxers”, 2021). DeVerna et al. (2024) demonstrate that removing only ten Twitter accounts would eliminate approximately 34% of all future retweets of low-credibility content, revealing an extreme concentration of influence among a very small number of users. An equally important finding from their study is that accounts with very large followings are significantly less likely to be suspended, suggesting that platforms may exercise leniency toward prominent or verified users. These insights indicate that legislation targeting even a few key actors could substantially reduce the circulation of toxic or misleading information, while also highlighting a structural asymmetry in platform governance: those who possess greater “epistemic power” are less likely to face moderation or removal.
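
To make the concentration claim concrete, the following minimal Python sketch illustrates the kind of calculation that underlies such findings: ranking originating accounts by the reshares their low-credibility posts attract, and asking what share of all reshares the top few account for. The account names and counts are invented for illustration; they are not DeVerna et al.’s data.

```python
# Minimal sketch (hypothetical data): the share of reshares of
# low-credibility content attributable to the top-k originators.
from collections import Counter

# Each entry: (originating account, reshares of its low-credibility posts).
reshare_log = [
    ("influencer_A", 40_000), ("influencer_B", 25_000),
    ("influencer_C", 15_000), ("account_D", 3_000),
] + [(f"small_{i}", 50) for i in range(1_000)]  # long tail of minor accounts

totals = Counter(dict(reshare_log))
grand_total = sum(totals.values())

# Removing the top-k originators would (on these invented numbers)
# eliminate the corresponding share of reshares.
for k in (1, 3, 10):
    top_k = sum(count for _, count in totals.most_common(k))
    print(f"removing top {k:>2} accounts cuts {top_k / grand_total:.0%} of reshares")
```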

What is more, there is an increasing need to rethink strategies for combating disinformation and misinformation in light of the rapid development of AI technologies. The emergence of tools capable of producing convincing text, images, and videos at minimal cost has fundamentally altered the landscape of information production. As several studies demonstrate, the potential for abuse is substantial. Goldstein et al. (2024) and Salvi et al. (2025) show that modern AI systems can generate persuasive propaganda quickly, cheaply, and with little technical expertise, enabling the effortless creation of manipulative content on a scale previously unimaginable. Similarly, Bai et al. (2025) find that AI tools can produce politically persuasive messages at high volume and speed, dramatically lowering the barriers to coordinated misinformation campaigns. The situation is further exacerbated by advances in synthetic video technologies, including deepfakes, which make it increasingly difficult for audiences to distinguish between authentic and fabricated material (Hynek et al. 2025; Diel et al. 2024).

Taken together, these developments suggest that the problem of disinformation will intensify, both in scale and sophistication. Consequently, existing approaches may no longer suffice. In this context, it becomes even more pressing to differentiate responsibility based on epistemic power, as the ability to amplify and legitimize AI-generated falsehoods will not be evenly distributed. Those with significant influence, reach, or credibility can transform algorithmically generated content from mere noise into socially consequential misinformation, thereby reinforcing the argument that legal and moral duties should scale with epistemic power.

In this paper, I draw on the phrase popularized by the Spider-Man series, “with great power comes great responsibility,” and argue that, in the face of rampant mis/disinformation – now dramatically amplified by the capacities of generative AI – we must be willing to consider more radical regulatory measures. Specifically, I propose that those who wield significant epistemic power – that is, those capable of shaping what others believe and know – should bear legal responsibility for the dissemination of fake news. In other words, the maxim might today be reformulated as: with epistemic power comes criminal responsibility.

This proposal contributes to the ongoing discussion on combating fake news. The idea of criminalization is often met with strong opposition, typically grounded in concerns about the potential for widespread censorship. Limiting this proposal to individuals who hold epistemic power serves as a response to that objection. Criminalization would not concern everyone, but would apply only to those who occupy a significant position in the distribution of knowledge. The dependence of criminal responsibility on the characteristics of the subject is already well known in criminal law. Some legal systems recognize similar restrictions on criminal responsibility for individuals with specific roles or capacities. In certain jurisdictions, public officials can be held criminally liable for actions that would not constitute crimes if committed by ordinary citizens. One justification for these stricter standards is the potential abuse of power. The idea developed in this paper follows the same logic: it addresses another form of power abuse – the abuse of epistemic power.

This paper is organized as follows. After the introduction, I briefly discuss existing methods of combating disinformation. I then review ideas concerning the objective scope of criminalizing fake news. The following section focuses on subjective perspectives on criminalization. Next, I examine the proposed offense of abusing epistemic power to spread misinformation. Issues related to freedom of speech are addressed thereafter. The paper concludes with some final remarks.

Before proceeding further, some terminological clarification is necessary. In the title of this paper, I use the term disinformation. I follow the understanding proposed by Wardle, who introduces the broad concept of information disorder and distinguishes three categories within it: misinformation, disinformation, and malinformation (Wardle 2018). Both misinformation and disinformation consist of false information; the key difference lies in the intent of the person spreading it. Those who share false information without the intention to harm spread misinformation, whereas those who spread it with the intention to harm engage in disinformation. Malinformation, by contrast, consists of true information shared with the intent to cause harm. By focusing on disinformation, I aim to highlight the element of harmful intent in the dissemination of information – an aspect particularly relevant from the perspective of criminal law. In non-criminal contexts, I use the combined term mis/disinformation. Occasionally, I also use the term fake news in the context of criminalization (on definitions of fake news, see e.g., Katsirea 2018; Lazer et al. 2018; Molina et al. 2019; Southwell et al. 2017); whenever I do, it should be understood as referring to disinformation.
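
Since the argument repeatedly turns on Wardle’s two distinguishing features – the falsity of the content and the intent of the spreader – a minimal sketch may help fix the taxonomy. The code below simply encodes the definitions as a two-dimensional classification; it assumes both inputs are already established, which in practice is, of course, the difficult part.

```python
# A minimal encoding of Wardle's (2018) information-disorder taxonomy.
# Whether content is false and whether the spreader intends harm are
# taken as given inputs, not determined by the code.
from enum import Enum

class InformationDisorder(Enum):
    MISINFORMATION = "false content, no intent to harm"
    DISINFORMATION = "false content, intent to harm"
    MALINFORMATION = "true content, intent to harm"
    NONE = "true content, no intent to harm"

def classify(is_false: bool, intends_harm: bool) -> InformationDisorder:
    if is_false:
        return (InformationDisorder.DISINFORMATION if intends_harm
                else InformationDisorder.MISINFORMATION)
    return (InformationDisorder.MALINFORMATION if intends_harm
            else InformationDisorder.NONE)

print(classify(is_false=True, intends_harm=True))  # DISINFORMATION
```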

2. Fighting mis/disinformation

In this section, I will briefly discuss the existing methods of combating misinformation and disinformation and situate my proposal within that broader framework. At a general level, such measures can be divided into two groups: those that address false information after it has already spread, and those that aim to prevent its dissemination before it occurs. The first group focuses on what to do once misinformation is already in circulation – how to identify, debunk, or counteract it. The second group seeks to prevent misinformation from being published in the first place, or to mitigate its potential destructive impact before it reaches an audience (after all, information ceases to be harmful if no one believes it). Both kinds of measures are necessary. In this paper, however, I defend the thesis that there should exist incentive structures – grounded in the criminal law apparatus – that discourage the publication of disinformation. This proposal is not intended to stand alone but rather to supplement existing approaches.

Numerous measures have been proposed to combat misinformation and disinformation after its publication. These measures focus on platforms and vary significantly, ranging from deleting content to removing the accounts that spread it (Bursztynsky 2021). After such an intervention, the content is no longer available to other users, and the banned author can no longer post on that platform. Fighting mis/disinformation, however, need not involve deleting content. For example, flagging problematic content as false might lessen its influence (see, e.g., Kim et al. 2018; Aruguete et al. 2025; Steensen et al. 2024). A similar strategy could involve promoting, next to the false information, other sources with a credible explanation of the phenomenon (see, e.g., Alemanno 2018). Another mechanism is to reduce the visibility of problematic content, so that its reach is significantly limited (Cotter et al. 2022).

Education is a measure that belongs to both categories. Implemented before the spread of misinformation, it can prevent people from publishing false information (“I will not post this because I know it is untrue”) or help neutralize potentially harmful content (“I do not believe this information because I know it is false”). Educational efforts can also occur after the publication of misinformation (“I no longer believe this information because I have been persuaded that it is false”). However, educational strategies face significant practical challenges, which justify considering additional preventive measures. Before turning to these, let us first examine where the core problem lies.

One might think that education and the explanation of why certain information is false could help; however, this is not always the case. People are not easily persuaded (Gottlieb 2016; Klein 2021), and meta-analyses suggest that attempts at debunking misinformation are often unsuccessful (Chan and Albarracín 2023). Studies show, for instance, that even a few minutes of exposure to false information about vaccine safety can decrease people’s willingness to vaccinate (Betsch et al. 2010). Moreover, the process of debunking false information can, in some cases, lead to undesirable outcomes: more people may become exposed to harmful ideas, thereby increasing the visibility of information that should not be epistemically available (Mosleh et al. 2021).

Another issue concerns the motivations behind spreading misinformation and disinformation, for example, religious beliefs, political preferences, fame, money, and even geopolitics (Dhama et al. 2021; Ahmed 2021; Scott 2020; Size 2020; Melchior and Oliveira 2024). If someone already knows that the information is false (or at least that it is recognized as false by the mainstream), educational efforts are likely to be ineffective. Moreover, as mentioned above, the public process of debunking such information may unintentionally expose others to harmful content, thereby amplifying rather than reducing its impact.

What lesson can be drawn from this? It may be better not to be exposed to false information at all, because once it becomes available, its influence cannot simply be undone. This suggests that certain kinds of false information should perhaps not be available at all. Denniss and Lindberg, in their paper about the threat of misinformation to public health, argue that there is “an urgent need for primary prevention, […] Anything less means misinformation – and its societal consequences – will continue to spread” (Denniss and Lindberg 2025: 1). The core proposition of this paper is coherent with that way of thinking and aims to limit the spread of misinformation by creating incentives not to publish it in the first place.

One might ask why I classify criminal responsibility as a preventive measure. After all, criminal responsibility is typically imposed after a crime has been committed. However, I refer here to criminal law as a preventive tool in a utilitarian sense. In brief, the justification for criminal law can be either backward-looking – we punish because offenders deserve it – or forward-looking – we punish to prevent future crimes (on theories of punishment, see Canton 2020). Both aspects are present in most legal systems, but in this context, I understand the criminalization of the abuse of epistemic power to spread misinformation primarily as a preventive measure. It is intended to serve as a deterrent for those who hold epistemic power and might otherwise be inclined to publish or engage with potentially harmful content (on deterrence, see Apel and Nagin 2011; Robinson and Darley 2004). This deterrent effect can operate regardless of the individual’s initial motivation: whether the person believes the information to be true or not, the mere possibility of criminal liability may function as a negative incentive to refrain from posting it. Of course, if such an offense were introduced, some individuals would ultimately be punished under it; yet these cases would also serve as a lesson to others, reinforcing the broader preventive purpose. In practice, criminal responsibility applied to a few can have a much wider social effect.

3. An objective approach to the criminalization of disinformation

Using criminal law to combat false information predates the problem of fake news spread over the Internet. One kind of law that could be interpreted in that way is the crime of Holocaust denial, which is part of the legal system in various countries (see, e.g., Kahn 2004; Teachout 2005). In brief, this crime might be committed when someone claims that the Holocaust did not happen. It could be considered, then, as anti-fake news regulation, because it protects the truth about a particular historical event, and if someone publicly claims otherwise, they could be criminally charged. For the record, there are voices arguing that such laws should be repealed (see, e.g., Singer 2016).

Nevertheless, the idea of introducing criminal responsibility to fight against false information is not new, and the usual formulation of the problem is to focus on spreading concrete categories of fake news, for example, disinformation in the electoral period or publishing deepfake pornography (see, e.g., Beech 2018; Statt 2019; Lecher 2019; Gold and Kang 2025).

I will illustrate the core idea of this paper using a public health example that I have developed in my other work. These issues are, to some extent, independent. One might agree that there is a need to introduce criminal responsibility for superspreaders with epistemic power without necessarily endorsing my specific formulation of the crime of spreading medical disinformation. However, for the purposes of this discussion, it will be easier to present the concept of abuse of epistemic power for the spread of disinformation with a concrete example in mind.

Elsewhere, I proposed the criminalization of spreading medical fake news (Mamak 2021). The core idea stemmed from the observation that the Internet has become a powerful medium for disseminating false health-related information, which has led to the rise of vaccine hesitancy. In that article, I focused particularly on anti-vaccination claims, such as the false allegation that childhood vaccines cause autism. Empirical data at the time showed an increase in the incidence of diseases that could have been prevented through vaccination, suggesting that misinformation had tangible, harmful public-health effects.

In that context, I examined the limitations of non-criminal strategies for countering fake news – such as education, fact-checking initiatives, and self-regulation by digital platforms – and argued that if these measures prove ineffective, a narrowly tailored criminal provision may be justified. The proposed offense targeted the public dissemination of information that is evidently discrepant with established medical knowledge (“Whoever publicly disseminates information evidently discrepant with medical knowledge is subject to a penalty.”), aiming to deter the circulation of health-related falsehoods that endanger others. For the record, it should also be noted that not all false information about medicine would qualify as prohibited; only that which has the potential to cause substantial harm should fall within the scope of the restriction (Mamak 2022).

I acknowledged that such a measure could be perceived as a restriction of freedom of expression. However, as I argued there, this type of restriction can be constitutionally defensible, since constitutional systems typically permit limitations on certain freedoms when they conflict with other fundamental goods – here, the protection of public health and human life. The proposed approach thus sought to balance freedom of speech with the state’s duty to safeguard citizens against serious health risks created by the deliberate or reckless spread of medical misinformation.

The proposal I will now discuss builds on ideas I first formulated before the recent pandemic. Observing the spread of COVID-19 and the accompanying wave of misinformation convinced me that pandemics require special treatment. Future outbreaks are likely, and one of the key factors influencing the effectiveness of any response will be the quality of information available to the public. Even if safe and effective vaccines are developed, they may not be widely used if misinformation portrays them as dangerous or deadly.

In my recent publication, I revisited my earlier proposal for the criminalization of medical fake news and adapted it to the specific context of pandemics (Mamak 2025). I proposed the following offense:

Whoever publicly disseminates information discrepant with medical knowledge during a pandemic is subject to a penalty.

The difference between this provision and the earlier, more general one is twofold. First, it applies a lower standard of certainty regarding what counts as medical knowledge. Second, it is limited to periods of pandemics. During such times, controlling the spread of information is crucial, as it can directly affect public health outcomes. The COVID-19 pandemic revealed how fragile public trust in vaccines can be, and widespread disinformation may lead to preventable deaths. Therefore, restricting the circulation of vaccine-related information during pandemics may be justified.

Importantly, the lower threshold of certainty about medical knowledge does not mean that any publication referring to medical matters qualifies as “medical knowledge.” Rather, this term should be understood as information recognized as such by experts. This assessment may depend on factors such as the place of publication (e.g., reputable versus predatory journals), the methodology employed, and the general acceptance of findings within the scientific community.

In sum, it is useful to ground future deliberations on the abuse of epistemic power in specific examples of misinformation. There are many possible cases to consider; however, in this paper, I have focused on two offenses related to public health issues.

4. A subjective approach to the criminalization of disinformation: epistemic power

We now turn to the central part of this paper, where I present the justification for a subjective differentiation of criminal liability for abuse of epistemic power. The argument unfolds in three steps. First, I will show that legal systems sometimes differentiate criminal responsibility based on the subjective characteristics of the individual. I will use the example of public officials, who can be punished for abuse of power – conduct that constitutes a crime only when committed by those holding power, not by ordinary citizens. Second, I will argue that epistemic power is a meaningful category and that we can distinguish online actors according to whether or not they possess such power. Third, I will demonstrate that the law already differentiates legal evaluation according to the scale of the actor’s influence. Smaller actors are often treated differently from larger ones; this is evident, for instance, in the regulation of abuse of market power. Together, these three steps aim to show that criminalizing only those who abuse epistemic power could be a defensible strategy for combating disinformation. The points presented address a potential counterargument that the proposal might be considered unacceptable because it violates the principle of equality before the law. This section explains why it could nonetheless be acceptable – even in the context of criminalization – to treat different actors differently when they engage in the same act of disseminating disinformation.

4.1. Abuse of power by public officials

Public officials, by virtue of their position within the legal system, are bound not only by the general laws that apply to all citizens but also by special legal duties arising from their official role. Certain acts that would be lawful if performed by an ordinary citizen may constitute a criminal offense when committed by a public official. One justification for this asymmetry lies in the authority and trust vested in officeholders, which must not be abused.

Horder conceptualizes misconduct in public office precisely as an abuse of power (Horder 2018: 17). Similarly, Leib and Kent (2021) argue that public officials stand in a fiduciary relationship to the public, emphasizing that abuse of power is a central concern of public law. Aronson also refers to such provisions as being “driven by a sense of moral outrage at the abuse of collective power” (Aronson 2011: 15). In a recent paper, Ros and Gehrke note that in recent decades there has been an increase in the number of officials (including former prime ministers and presidents) convicted of corruption by courts in their own countries. They also explain that in corruption cases, criminal responsibility is the enforcement of sanctions “against public officials who engaged in abuse of office for private or partisan gain” (Ros and Gehrke 2024: 963).

By deliberately framing the issue in terms of abuse of power, I wish to highlight a broader point: legal systems already recognize that holding power entails heightened responsibility and the potential for abuse. Thus, it is neither novel nor unjustified to suggest that citizens who possess comparable epistemic power – for instance, influential online figures – might also be subject to differentiated legal treatment, given that their actions can have disproportionate social impact.

4.2. Abuse of epistemic power

I have already used the term “epistemic power”; now I want to expand on it and show that the positions certain people occupy can amount to holding a specific kind of power.

Archer et al. define epistemic power as follows: “A person has epistemic power to the extent she is able to influence what people think, believe, and know, and to the extent she is able to enable and disable others from exerting epistemic influence” (Archer et al. 2020: 29).

As these authors observe, almost everyone possesses some degree of epistemic power, since most people can influence the beliefs of those around them to a limited extent. In this paper, however, I refer to substantial epistemic power – the kind of influence held by celebrities, opinion leaders, or other prominent figures on the Internet whose statements can shape the beliefs and attitudes of large audiences.

Just as public officials are subject to special legal duties because of the authority and trust vested in their offices, individuals who possess epistemic power – such as celebrities or influential online figures – may also justifiably bear heightened responsibilities. Archer et al. (2024) argue that celebrities’ testimonies reach vast audiences, giving them a form of epistemic power that can significantly shape what others believe and know. This influence, while often unearned, carries the potential to produce serious harms, particularly when misinformation undermines public trust in experts or institutions. Consequently, these authors maintain that those with epistemic power have negative (moral) duties not to spread false information or direct attention toward unreliable sources, and in certain contexts – such as public health crises – positive duties to use their influence responsibly.

This mirrors the legal logic applied to public officials: greater power entails greater responsibility. Both cases rest on the same normative foundation – that the possession of authority, whether political or epistemic, increases the potential for harm and thus justifies differentiated legal treatment. Recognizing epistemic power as a basis for special responsibility extends an already familiar legal principle into the digital domain, where influence and persuasion have become new forms of power.

4.3. The wrongness of the abuse of market power

A further analogy can be drawn from competition law, particularly the concept of abuse of market power. In that domain, not all market behavior is treated equally: certain actions become unlawful only when performed by an entity with a dominant position. As Vickers (2005) explains, abuse of market power is one of the three core pillars of competition law, alongside anti-competitive agreements and mergers. The law does not prohibit conduct such as setting low prices in general; rather, it prohibits such behavior only when it is used by a dominant firm to distort competition. For example, predatory pricing – selling below cost to eliminate competitors – is considered abusive only when carried out by a firm with substantial market power. If the same pricing strategy is used by a small market participant, it would typically fall outside the scope of regulatory intervention.

In other words, the same act can be evaluated differently by the law depending on the “size” or influence of the actor who performs it. Transferring this reasoning to the question of responsibility, it is conceivable to treat differently the act of posting the same false information about vaccines depending on the number of followers the person has. It is a qualitatively different matter if an anti-vaccine message is posted by an anonymous account with only a few followers than if the identical message is published by a person with hundreds of thousands of followers. For the record, the act of disseminating misinformation is morally wrong regardless of the status of the person committing it. However, the legal evaluation may vary depending on the impact of the actor involved.

To sum up, in this section, I have aimed to make three points. First, individuals who hold power can be held criminally responsible for acts that would not constitute crimes if committed by ordinary citizens; thus, a person’s status or position can legitimately affect the assessment of criminal liability. Second, some individuals possess epistemic power – the ability to shape what others believe and know – and this form of influence, like political or institutional power, can also be abused. Third, from a legal perspective, it is both possible and conceptually consistent to differentiate legal responses to the same act based on the scale or influence of the actor who performs it.

5. Crime of abuse of epistemic power for spreading disinformation

In this section, I outline how the idea of criminalizing the abuse of epistemic power could be conceptualized and operationalized. I present and explain a legislative proposal that serves as a basis for a more in-depth exploration of the underlying idea. This proposal should not be understood as a fully developed legal solution ready for implementation, but rather as a starting point for further discussion and refinement.

The intention is not to suggest that every instance of disinformation disseminated by individuals with epistemic power should be criminalized. Instead, the aim is to consider whether, in cases where criminalization of specific categories of false information is already justified, such regulation might be limited to those who possess epistemic power. In other words, before addressing who should fall within the scope of criminal liability, we must first establish a substantive justification for prohibiting particular categories of disinformation.

Let us assume, for the sake of discussion, that there is broad agreement that a specific category of false information – for instance, disinformation posing a threat to public health – is sufficiently dangerous to warrant criminal regulation. However, it is also likely that a broad or indiscriminate criminalization of such content would be socially and politically unacceptable, as it could give rise to concerns about widespread censorship. (For present purposes, I set aside the question of whether such concerns are justified.)

In such a scenario, a pragmatic way forward would be to narrow the subjective scope of regulation – that is, to limit who can be held criminally liable under the proposed framework. This could be achieved by confining the regulation to individuals who possess epistemic power, thereby targeting those whose speech has the greatest potential impact on public understanding and behavior. The proposed limitation could be formulated as follows:

§ 2. The perpetrator of the offense specified in § 1 may only be a person who, at the moment of the commission of the act, had at least 10,000 followers.

I need to unpack this a little. The concept of epistemic power remains ambiguous and requires clarification within the context of regulation. As previously noted, epistemic authority is multidimensional; follower counts alone are insufficient and should be complemented by indicators such as average reach, engagement rate, or the individual’s social role (e.g., journalist or medical professional) (Bartsch et al. 2025). Nevertheless, for regulatory purposes, epistemic power must be operationalized in a manner that is easily understandable, particularly for individuals who might face criminal liability.

The most straightforward way to translate the concept of epistemic power into a clear legislative standard is to express it numerically. For instance, Conde and Casais (2023) classify Instagram influencers into categories based on follower count: micro-influencers (1,000–100,000 followers), macro-influencers (100,000–1 million), and mega-influencers (over 1 million). Other researchers adopt stricter definitions; for example, Park et al. (2021) define micro-influencers as those with at least 10,000 followers. Interestingly, their study also finds that smaller influencers are often more persuasive than mega-influencers, as they tend to be perceived as more authentic. Similar findings appear in Kay et al. (2020), suggesting that smaller influencers may exert greater persuasive power than larger ones, at least within marketing contexts.

Walter et al. (2025) further compare micro- and mega-influencers across dimensions such as trustworthiness, expertise, attractiveness, authenticity, and similarity, showing that micro-influencers (over 10,000 followers) are generally perceived as more authentic and trustworthy, and outperform mega-influencers particularly in perceived similarity.

Taking these findings together, it seems unwise to set the regulatory threshold for epistemic power too high, given that micro-influencers can possess substantial persuasive influence, sometimes exceeding that of mega-influencers. I therefore propose a threshold of 10,000 followers as a preliminary benchmark. This figure should, of course, be context-sensitive, as follower counts may vary in meaning across platforms, countries, and audiences. Nonetheless, a single clear numerical threshold provides clarity and legal certainty, allowing for an objective distinction between those who possess significant epistemic power and those who do not.

While there may be sound reasons to adjust this threshold, the proposal offered here should be viewed as an invitation to discussion – a starting point for developing a workable and transparent standard for regulating epistemic power.

Focusing solely on follower count reduces epistemic power in the proposal to social media presence. Yet, epistemic power is a much broader concept, traditionally encompassing individuals and groups who shape collective understanding and belief beyond digital platforms. These include, for example, scientists, religious leaders, and public intellectuals – figures who may hold considerable epistemic influence even in the absence of a substantial online following (on epistemic authority, see Zagzebski 2012).

There is, of course, potential overlap between these categories. A scientist or religious leader might also maintain an active online presence, thereby combining traditional and digital forms of epistemic authority. Nevertheless, this is not always the case; many individuals wielding significant epistemic influence do so primarily through offline institutions, such as academia, religious organizations, or the media.

The starting point of the current proposal is social media, as these platforms represent the primary channels through which misinformation and disinformation spread today (e.g., Denniss and Lindberg 2025). However, future regulatory frameworks could consider how to incorporate other sources of epistemic authority that operate outside or alongside digital networks. Expanding the scope in this way would allow for a more comprehensive understanding of epistemic power as a multi-domain phenomenon, ensuring that regulation keeps pace with the evolving landscape of knowledge production and dissemination.

A further issue concerns the timing of the follower count used to determine epistemic power. The number of followers must be assessed at the moment when the act is committed or the content is published. It is conceivable that a particular post – perhaps one containing misleading or sensational information – could itself lead to a rapid increase in followers, pushing an account above the 10,000 threshold. However, such a change in status should not retroactively affect responsibility for that earlier act.

In other words, if an individual had fewer than 10,000 followers at the time of posting, they would not fall within the scope of the proposed regulation for that specific content, even if their follower count subsequently exceeded the threshold as a result of the post’s popularity. Only content published after the account surpasses the threshold would fall under the new regulatory regime.

This temporal clarification is essential for ensuring legal certainty and avoiding retroactive liability, which would be inconsistent with fundamental principles of criminal law. A clear rule – tying epistemic power to the number of followers at the time of the act – allows individuals to foresee when their speech becomes subject to stricter regulatory standards. It also prevents arbitrary enforcement and ensures that the regulation targets those who, at the time of communicating information, already hold a significant and stable epistemic influence over others.
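
For readers who prefer a compact statement of the proposed test, the following sketch encodes the subjective-scope rule developed above: the 10,000-follower threshold, assessed at the moment of the act, with later follower growth deliberately ignored. The data structures and names are hypothetical, introduced only to illustrate how the rule would operate.

```python
# Minimal sketch of the proposed subjective-scope test (Section 5).
# The threshold and the timing rule come from the proposal above;
# everything else is a hypothetical illustration.
from dataclasses import dataclass
from datetime import datetime

FOLLOWER_THRESHOLD = 10_000  # proposed benchmark, open to adjustment

@dataclass
class Post:
    author: str
    published_at: datetime
    followers_at_publication: int  # snapshot taken when the act is committed

def within_subjective_scope(post: Post) -> bool:
    """True if the author held 'epistemic power' (as operationalized here)
    at the time of the act. Follower growth after publication -- e.g.,
    caused by the post itself going viral -- is deliberately ignored,
    to avoid retroactive liability."""
    return post.followers_at_publication >= FOLLOWER_THRESHOLD

post = Post("example_account", datetime(2025, 1, 15), followers_at_publication=9_500)
print(within_subjective_scope(post))  # False: below the threshold when posted
```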

6. Free speech issues

Now, I want to spend some time on freedom of speech, which is closely connected with the regulation of fake news (cf. Nuñez 2020; Helm and Nasu 2021; Jacobs 2022). Helm and Nasu argue that “some level of restriction on freedom of expression is inevitable due to the need to discourage the creation and distribution of fake news, rather than just preventing its spread” (Helm and Nasu 2021: 303).

I mentioned that this proposal might be more acceptable from the perspective of freedom of speech than other proposals that call for the use of criminal law to mitigate the spread of disinformation.

Now, I want to explain why. Limiting criminal liability for disinformation to speakers who wield epistemic power is less invasive of freedom of expression than a blanket offense that applies to everyone, because it is narrowly tailored to the locus of greatest potential harm while preserving the expressive space of ordinary users. First, it advances the legitimate aim (e.g., protecting public health) by focusing on actors with amplified reach and persuasive capacity: the law targets the few whose speech predictably causes outsized effects. Second, it reduces chilling effects on everyday participation in public debate; most citizens can speak, criticize, and err without fearing criminal liability, which safeguards the “breathing space” essential to democratic discourse and scientific contestation. Third, using a clear, ex ante threshold (e.g., 10,000 followers, assessed at the time of the act) provides foreseeability and legal certainty, enabling affected speakers to understand when stricter duties apply, while avoiding retroactive punishment. Finally, by calibrating liability to capacity to influence, the scheme is proportionate to responsibility: those who benefit from heightened authority and attention bear correspondingly greater obligations, whereas ordinary citizens retain the broadest protection for their speech.

Some practical obstacles must be acknowledged. The proposal developed here is primarily normative: it outlines what, in my view, would be justified and how regulation could ideally be organized. Nevertheless, I recognize that certain practical difficulties could hinder its implementation. The criminalization of disinformation would face many of the same challenges as other crimes committed via the internet, such as jurisdictional issues arising from border-dependent criminal law and the general difficulty of enforcement in virtual spaces (see, e.g., Zając 2019). All these problems would apply here as well. However, limiting the scope of criminalization to individuals with epistemic power would partly mitigate these obstacles. Those who possess epistemic power are often publicly known, operate under their real names, and frequently monetize their online presence. Compared to anonymous actors or state-sponsored trolls, such individuals would be far easier to identify and hold accountable.

7. Conclusions

In this paper, I have argued that, in combating disinformation, we should not focus on everyone who spreads it, but rather on superspreaders – those who possess the power to shape the beliefs of others, that is, those who hold epistemic power. Recent studies demonstrate that not everyone has the same capacity to disseminate misinformation; there are fundamental differences between various groups of actors. Targeting those few who contribute the most could make a substantial difference in reducing the spread of harmful and misleading information.

I have framed this proposal as the creation of a new criminal offense: the abuse of epistemic power for the purpose of spreading disinformation. This approach contributes to ongoing debates by shifting the focus of regulation. By concentrating on a small group of individuals who wield disproportionate influence, the proposal avoids the usual objections of widespread censorship that accompany many regulatory initiatives. Instead, it raises a different, but more manageable, issue – equality before the law.

I have also shown why superspreaders can justifiably be treated differently under criminal law. Legal systems already recognize mechanisms for differentiating legal responsibility based on the role and position of the actor – for example, in the case of public officials or those who hold market power. Possessing epistemic power constitutes a similar basis for differentiation: it justifies treating these individuals differently because of their unique ability to influence collective belief and knowledge. Consequently, I conclude that with epistemic power should come criminal responsibility.

Several limitations of this proposal should be acknowledged. In particular, further work is required to refine proportionality reasoning in constitutional terms and to justify more precisely the criteria for identifying sufficiently high levels of epistemic power, both with respect to the appropriate follower thresholds and the possible restriction of criminal responsibility to narrowly defined domains of high-risk public interest, such as public health.

Competing interests

None.

Financial support

ERC Robocrim, 101164310.

References

Ahmed, I. (2021). ‘Dismantling the Anti-Vaxx Industry.’ Nature Medicine 27(3), 366. https://doi.org/10.1038/s41591-021-01260-6.
Alemanno, A. (2018). ‘How to Counter Fake News? A Taxonomy of Anti-Fake News Approaches.’ European Journal of Risk Regulation 9(1), 1–5. https://doi.org/10.1017/err.2018.12.
Apel, R. and Nagin, D.S. (2011). ‘General Deterrence.’ In Tonry, M. (ed), The Oxford Handbook of Crime and Criminal Justice, pp. 179–206. Oxford: Oxford University Press. https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780195395082.001.0001/oxfordhb-9780195395082-e-7.
Archer, A., Alfano, M. and Dennis, M. (2024). ‘On the Uses and Abuses of Celebrity Epistemic Power.’ Social Epistemology 38(6), 759–73. https://doi.org/10.1080/02691728.2022.2153351.
Archer, A., Cawston, A., Matheson, B. and Geuskens, M. (2020). ‘Celebrity, Democracy, and Epistemic Power.’ Perspectives on Politics 18(1), 27–42. https://doi.org/10.1017/S1537592719002615.
Aronson, M. (2011). ‘Misfeasance in Public Office: A Very Peculiar Tort.’ Melbourne University Law Review 35, 1–51. https://papers.ssrn.com/abstract=1831963.
Aruguete, N., Bachmann, I., Calvo, E., Valenzuela, S. and Ventura, T. (2025). ‘Truth Be Told: How “True” and “False” Labels Influence User Engagement with Fact-Checks.’ New Media & Society 27(3), 1443–64. https://doi.org/10.1177/14614448231193709.
Bai, H., Voelkel, J.G., Muldowney, S., Eichstaedt, J.C. and Willer, R. (2025). ‘LLM-Generated Messages Can Persuade Humans on Policy Issues.’ Nature Communications 16(1), 6037. https://doi.org/10.1038/s41467-025-61345-5.
Bartsch, A., Neuberger, C., Stark, B., Karnowski, V., Maurer, M., Pentzold, C., Quandt, T., Quiring, O. and Schemer, C. (2025). ‘Epistemic Authority in the Digital Public Sphere: An Integrative Conceptual Framework and Research Agenda.’ Communication Theory 35(1), 37–50. https://doi.org/10.1093/ct/qtae020.
Beech, H. (2018). “As Malaysia Moves to Ban ‘Fake News,’ Worries About Who Decides the Truth.” The New York Times, April 5. https://www.nytimes.com/2018/04/02/world/asia/malaysia-fake-news-law.html.
Betsch, C., Renkewitz, F., Betsch, T. and Ulshöfer, C. (2010). ‘The Influence of Vaccine-Critical Websites on Perceiving Vaccination Risks.’ Journal of Health Psychology 15(3), 446–55. https://doi.org/10.1177/1359105309353647.
Bursztynsky, J. (2021). “YouTube Bans High-Profile Anti-Vaccine Accounts, Says It Will Block All Vaccine Misinformation.” CNBC, September 29. https://www.cnbc.com/2021/09/29/youtube-bans-high-profile-anti-vaccine-accounts.html.
Canton, R. (2020). ‘Theories of Punishment.’ In Focquaert, F., Shaw, E. and Waller, B.N. (eds), The Routledge Handbook of the Philosophy and Science of Punishment, pp. 5–17. Routledge. https://doi.org/10.4324/9780429507212.
Chan, M.S. and Albarracín, D. (2023). ‘A Meta-Analysis of Correction Effects in Science-Relevant Misinformation.’ Nature Human Behaviour. https://doi.org/10.1038/s41562-023-01623-8.
Conde, R. and Casais, B. (2023). ‘Micro, Macro and Mega-Influencers on Instagram: The Power of Persuasion via the Parasocial Relationship.’ Journal of Business Research 158, 113708. https://doi.org/10.1016/j.jbusres.2023.113708.
Cotter, K., DeCook, J.R. and Kanthawala, S. (2022). ‘Fact-Checking the Crisis: COVID-19, Infodemics, and the Platformization of Truth.’ Social Media + Society 8(1). https://doi.org/10.1177/20563051211069048.
Denniss, E. and Lindberg, R. (2025). ‘Social Media and the Spread of Misinformation: Infectious and a Threat to Public Health.’ Health Promotion International 40(2), daaf023. https://doi.org/10.1093/heapro/daaf023.
DeVerna, M.R., Aiyappa, R., Pacheco, D., Bryden, J. and Menczer, F. (2024). ‘Identifying and Characterizing Superspreaders of Low-Credibility Content on Twitter.’ PLOS ONE 19(5), e0302201. https://doi.org/10.1371/journal.pone.0302201.
Dhama, K., Sharun, K., Tiwari, R., Dhawan, M., Emran, T.B., Rabaan, A.A. and Alhumaid, S. (2021). ‘COVID-19 Vaccine Hesitancy – Reasons and Solutions to Achieve a Successful Global Vaccination Campaign to Tackle the Ongoing Pandemic.’ Human Vaccines & Immunotherapeutics 17(10), 3495–99. https://doi.org/10.1080/21645515.2021.1926183.
Diel, A., Lalgi, T., Schröter, I.C., MacDorman, K.F., Teufel, M. and Bäuerle, A. (2024). ‘Human Performance in Detecting Deepfakes: A Systematic Review and Meta-Analysis of 56 Papers.’ Computers in Human Behavior Reports 16, 100538. https://doi.org/10.1016/j.chbr.2024.100538.
Gold, M. and Kang, C. (2025). “House Passes Bill to Ban Sharing of Revenge Porn, Sending It to Trump.” The New York Times, April 28. https://www.nytimes.com/2025/04/28/us/politics/house-revenge-porn-bill.html.
Goldstein, J.A., Chao, J., Grossman, S., Stamos, A. and Tomz, M. (2024). ‘How Persuasive Is AI-Generated Propaganda?’ PNAS Nexus 3(2), pgae034. https://doi.org/10.1093/pnasnexus/pgae034.
Gottlieb, S.D. (2016). ‘Vaccine Resistances Reconsidered: Vaccine Skeptics and the Jenny McCarthy Effect.’ BioSocieties 11(2), 152–74. https://doi.org/10.1057/biosoc.2015.30.
Helm, R.K. and Nasu, H. (2021). ‘Regulatory Responses to “Fake News” and Freedom of Expression: Normative and Empirical Evaluation.’ Human Rights Law Review 21(2), 302–28. https://doi.org/10.1093/hrlr/ngaa060.
Horder, J. (2018). Criminal Misconduct in Office: Law and Politics. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198823704.001.0001.
Hynek, N., Gavurova, B. and Kubak, M. (2025). ‘Risks and Benefits of Artificial Intelligence Deepfakes: Systematic Review and Comparison of Public Attitudes in Seven European Countries.’ Journal of Innovation & Knowledge 10(5), 100782. https://doi.org/10.1016/j.jik.2025.100782.
Jacobs, L. (2022). ‘Freedom of Speech and Regulation of Fake News.’ The American Journal of Comparative Law 70(Supplement_1), i278–i311. https://doi.org/10.1093/ajcl/avac010.
Kahn, R. (2004). Holocaust Denial and the Law: A Comparative Study. Springer. https://doi.org/10.1057/9781403980502.
Katsirea, I. (2018). ‘“Fake News”: Reconsidering the Value of Untruthful Expression in the Face of Regulatory Uncertainty.’ Journal of Media Law 10(2), 159–88. https://doi.org/10.1080/17577632.2019.1573569.
Kay, S., Mulcahy, R. and Parkinson, J. (2020). ‘When Less Is More: The Impact of Macro and Micro Social Media Influencers’ Disclosure.’ Journal of Marketing Management 36(3–4), 248–78. https://doi.org/10.1080/0267257X.2020.1718740.
Kim, J., Tabibian, B., Oh, A., Schölkopf, B. and Gomez-Rodriguez, M. (2018). ‘Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation.’ In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM ’18), pp. 324–32. https://doi.org/10.1145/3159652.3159734.
Klein, E. (2021). “What If the Unvaccinated Can’t Be Persuaded?” The New York Times, July 29. https://www.nytimes.com/2021/07/29/opinion/covid-vaccine-hesitancy.html.
Lazer, D.M.J., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Metzger, M.J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S.A., Sunstein, C.R., Thorson, E.A., Watts, D.J. and Zittrain, J.L. (2018). ‘The Science of Fake News.’ Science 359(6380), 1094–96. https://doi.org/10.1126/science.aao2998.
Lecher, C. (2019). “California Has Banned Political Deepfakes During Election Season.” The Verge, October 7. https://www.theverge.com/2019/10/7/20902884/california-deepfake-political-ban-election-2020.
Leib, E. and Kent, A. (2021). ‘Fiduciary Law and the Law of Public Office.’ William & Mary Law Review 62(4), 1297.
Mamak, K. (2021). ‘Do We Need the Criminalization of Medical Fake News?’ Medicine, Health Care and Philosophy 24, 235–45. https://doi.org/10.1007/s11019-020-09996-7.
Mamak, K. (2022). ‘Categories of Fake News from the Perspective of Social Harmfulness.’ In Faintuch, J. and Faintuch, S. (eds), Integrity of Scientific Research: Fraud, Misconduct and Fake News in the Academic, Medical and Social Environment, pp. 351–7. Springer International Publishing. https://doi.org/10.1007/978-3-030-99680-2_35.
Mamak, K. (2025). ‘Informational Quarantine: Should It Be Punishable to Spread Medical Disinformation During a Pandemic?’ In Faintuch, J. and Faintuch, S. (eds), Business Ethics in the Healthcare Industry, pp. 145–58. Springer Nature Switzerland. https://doi.org/10.1007/978-3-032-07649-6_10.
Melchior, C. and Oliveira, M. (2024). ‘A Systematic Literature Review of the Motivations to Share Fake News on Social Media Platforms and How to Fight Them.’ New Media & Society 26(2), 1127–50. https://doi.org/10.1177/14614448231174224.
Molina, M.D., Shyam Sundar, S., Le, T. and Dongwon, L. (2019). ‘“Fake News” Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content.’ American Behavioral Scientist 65(2), 180–212. https://doi.org/10.1177/0002764219878224.
Mosleh, M., Martel, C., Eckles, D. and Rand, D. (2021). ‘Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment.’ In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), pp. 1–13. https://doi.org/10.1145/3411764.3445642.
Nuñez, F. (2020). ‘Disinformation Legislation and Freedom of Expression.’ UC Irvine Law Review 10(2), 783.
Park, J., Lee, J.M., Xiong, V.Y., Septianto, F. and Seo, Y. (2021). ‘David and Goliath: When and Why Micro-Influencers Are More Persuasive than Mega-Influencers.’ Journal of Advertising 50(5), 584–602. https://doi.org/10.1080/00913367.2021.1980470.
Robinson, P.H. and Darley, J.M. (2004). ‘Does Criminal Law Deter? A Behavioural Science Investigation.’ Oxford Journal of Legal Studies 24(2), 173–205. https://doi.org/10.1093/ojls/24.2.173.
Ros, L.D. and Gehrke, M. (2024). ‘Convicting Politicians for Corruption: The Politics of Criminal Accountability.’ Government and Opposition 59(3), 951–75. https://doi.org/10.1017/gov.2023.48.
Salvi, F., Ribeiro, M.H., Gallotti, R. and West, R. (2025). ‘On the Conversational Persuasiveness of GPT-4.’ Nature Human Behaviour 9(8), 1645–53. https://doi.org/10.1038/s41562-025-02194-6.
Scott, M. (2020). “Russia and China Push ‘Fake News’ Aimed at Weakening Europe: Report.” Politico, April 1. https://www.politico.eu/article/russia-china-disinformation-coronavirus-covid19-facebook-google/.
Singer, P. (2016). Ethics in the Real World: 82 Brief Essays on Things That Matter. Princeton University Press.
Size, R. (2020). ‘Publishing Fake News for Profit Should Be Prosecuted as Wire Fraud.’ Santa Clara Law Review 60, 29.
Southwell, B.G., Thorson, E.A. and Sheble, L. (2017). ‘The Persistence and Peril of Misinformation.’ American Scientist 105, 368–71. https://www.americanscientist.org/article/the-persistence-and-peril-of-misinformation.
Statt, N. (2019). “China Makes It a Criminal Offense to Publish Deepfakes or Fake News Without Disclosure.” The Verge, November 29. https://www.theverge.com/2019/11/29/20988363/china-deepfakes-ban-internet-rules-fake-news-disclosure-virtual-reality.
Steensen, S., Kalsnes, B. and Westlund, O. (2024). ‘The Limits of Live Fact-Checking: Epistemological Consequences of Introducing a Breaking News Logic to Political Fact-Checking.’ New Media & Society 26(11), 6347–65. https://doi.org/10.1177/14614448231151436.
Teachout, P.R. (2005). ‘Making Holocaust Denial a Crime: Reflections on European Anti-Negationist Laws from the Perspective of U.S. Constitutional Experience.’ Vermont Law Review 30, 655.
The Disinformation Dozen: Why Platforms Must Act on Twelve Leading Online Antivaxxers. (2021). The Center for Countering Digital Hate. https://www.counterhate.com/disinformationdozen.
Vickers, J. (2005). ‘Abuse of Market Power.’ The Economic Journal 115(504), F244–61. https://doi.org/10.1111/j.1468-0297.2005.01004.x.
Walter, N., Föhl, U. and Zagermann, L. (2025). ‘Big or Small? Impact of Influencer Characteristics on Influencer Success, with Special Focus on Micro- Versus Mega-Influencers.’ Journal of Current Issues & Research in Advertising 46(2), 160–82. https://doi.org/10.1080/10641734.2024.2366198.
Wardle, C. (2018). ‘The Need for Smarter Definitions and Practical, Timely Empirical Research on Information Disorder.’ Digital Journalism 6(8), 951–63. https://doi.org/10.1080/21670811.2018.1502047.
Zagzebski, L.T. (2012). Epistemic Authority: A Theory of Trust, Authority, and Autonomy in Belief. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199936472.001.0001.
Zając, D. (2019). The Method of Interpretation of Penal Norms in the International Context. Krakowski Instytut Prawa Karnego Fundacja. https://doi.org/10.2139/ssrn.3387351.