The second part of this book focuses on freedom of expression. Freedom of expression is a cornerstone of human rights and democratic governance, ‘providing the vehicle for the exchange and development of opinions’.Footnote 1 The internet (social media platforms in particular) has transformed how people communicate and exchange ideas, democratised access to information, and decentralised public discourse. At the same time, it has created a broad range of new threats, such as disinformation, platform monopolies, online hate speech, regulatory tensions, and the potential to undermine democratic principles.Footnote 2 This part looks at some of these challenges while aiming to answer the following core question: How should freedom of expression be applied in the digital environment? Examining this question is essential to ensure that this right continues to foster democratic participation and safeguard individual autonomy.
This part includes four chapters on different issues connected to freedom of expression online – positive state obligations, new media doctrines, disinformation, and the potential digital imperialism of the EU. Together, these topics demonstrate the complexities of freedom of expression and provide a basis for understanding how it can continue to be protected, balanced with other rights, and adapted to the realities of the digital age. This is achieved by examining the obligations of different actors, global regulatory tensions, and the challenges to democratic principles.
Chapter 9. Freedom of Expression and Positive Obligations of the State in Social Media
Chapter 9 opens the discussion with Artūrs Kučs’ exploration of the evolving role of states in safeguarding freedom of expression within the digital environment, particularly on social media platforms. States have both negative obligations (to avoid undue interference) and positive obligations (to actively protect and ensure the right to freedom of expression). The latter include creating a favourable environment for public debate, protecting speakers from harm, and ensuring media pluralism. These obligations extend to relations between private parties, including users and internet intermediaries. The chapter responds to the core question of this part of the volume by noting the following aspects that should be central to applying freedom of expression online: (a) balancing it with protections against harmful content; (b) maintaining ultimate responsibility for safeguarding freedom of expression with states (not outsourcing decisions on limits of that freedom to private entities); (c) ensuring judicial oversight over content moderation; and (d) providing adequate legal regulation with effective remedies.
Chapter 10. The Development of New Media Doctrines on Freedom of Expression: How to Defend Democratic Society and the Rule of Law
Chapter 10 examines the evolution of freedom of expression doctrines in the context of digital media. Jukka Viljanen and Tomoe Watashiba look at how the European Court of Human Rights has adapted traditional doctrines to address challenges in the new media environment. Social media has expanded the concept of ‘public watchdogs’ beyond traditional media to include non-governmental organisations, bloggers, and activists, which may amplify the risk of disinformation and online hate and require a rethinking of liability issues. The authors explore how freedom of expression can balance individual rights, societal interests, and the regulation of disinformation and hate speech in the digital age. In relation to the core question, the chapter draws attention to the following aspects: (a) limitations on freedom of expression need to be guided by principles such as proportionality and the prohibition of the abuse of rights; (b) freedom of expression in digital spaces requires balancing individual rights with broader societal needs; (c) traditional doctrines (such as ensuring pluralism and protecting public watchdogs) must evolve to include the unique dynamics of digital media; (d) clear rules should be set for social media platforms without enabling censorship; and (e) pre-emptive measures, such as promoting media literacy and safeguarding public debate, must be adopted to protect democracy in the face of disinformation and authoritarian threats.
Chapter 11. Disinfodemic Threats. Real, False, and Fake News: A Contribution to Fight Disinformation without Affecting the Freedom of Expression
Chapter 11 delves even deeper into the issue of disinformation. Oscar Puccinelli shows how the rapid dissemination of false, manipulated, and misleading information poses a threat to human rights, democracy, and public trust. Disinformation erodes public trust, affects electoral integrity, threatens public health, and harms individuals’ rights to information and privacy. The chapter outlines the historical roots of disinformation, the technological factors that enable it, and the responses by public and private actors to mitigate its harmful effects. Puccinelli emphasises the need for cooperation among governments, civil society, and technology companies to develop rights-based responses to disinformation. He stresses the importance of international frameworks, such as the United Nations declarations, to guide balanced approaches. In response to the core question of this part – How should freedom of expression be applied in the digital environment? – the chapter asserts the following: (a) efforts to regulate false information must avoid overreach and comply with the principles of legality, necessity, and proportionality; (b) measures must address the harms caused by disinformation (e.g., hate speech, incitement to violence, threats to public health); (c) digital platforms need to have greater transparency in their operations; and (d) media literacy programmes need to be promoted to empower individuals to critically evaluate information, reduce the impact of disinformation, and use their freedom of expression without spreading disinformation.
Chapter 12. Online Freedom of Expression: A New EU Imperialism?
In Chapter 12, the final chapter of this part, Philippe Jougleux examines the evolving role of the European Union (EU) in regulating online freedom of expression and the global implications of these efforts. The author frames EU interventions, such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA), as examples of digital imperialism, whereby the EU’s legal frameworks impose European values and legal standards on other jurisdictions (and create potential for global legal conflicts). The chapter explores key topics such as copyright law, content moderation, hate speech, and the right to be forgotten, while contrasting the EU’s approach with the US’s philosophy of free speech. The EU’s rights-based approach to balancing free expression with societal harm contrasts sharply with the US emphasis on the marketplace of ideas and minimal restrictions. In response to the core question, the chapter focuses on the following aspects: (a) freedom of expression must be balanced against other societal interests, such as privacy, hate speech mitigation, and copyright protection; (b) there is a need for international consensus on freedom of expression in digital spaces (and subsequent joint regulation); (c) it is important to recognise the central role of online platforms in public debate and, accordingly, to regulate them to ensure accountability while safeguarding freedom of expression; and (d) a balanced approach that reconciles regional legal traditions with the global nature of the internet should be adopted.
Shared Themes and Interconnections
Although these four chapters explore diverse aspects of freedom of expression in the digital environment, they converge on the challenges of balancing this fundamental right with other human rights, societal needs, regulatory obligations, and technological realities. In the following, parallels are drawn through shared themes and how each chapter addresses the core question of this part: How should freedom of expression be applied in the digital environment?
A. Balancing Freedom of Expression with Regulation
All four chapters highlight the tension between protecting freedom of expression and regulating harmful content or behaviour. They advocate a balance between preventing harm (e.g., from disinformation and hate speech) and avoiding overly restrictive or arbitrary regulations that hinder democratic debate.
Chapter 9 on the positive obligations of states in social media stresses the need for states to protect freedom of expression while setting clear rules for content moderation, ensuring private platforms do not overstep in censoring speech.
Chapter 10 on new media doctrines on freedom of expression emphasises proportionality and context in applying freedom of expression doctrines, noting that digital platforms must balance rights with responsibilities, particularly in curbing disinformation and hate speech.
Chapter 11 on the threat of the disinfodemic discusses how combating disinformation requires targeted measures, but warns against overreach that stifles legitimate expression.
Chapter 12 on EU imperialism critiques the EU’s regulatory frameworks (e.g., DSA, GDPR) for imposing global standards, which can over-regulate platforms and risk excessive censorship.
B. The Role of States and Platforms
Each of the chapters engages with the issue of the division of responsibility between states and platforms in safeguarding freedom of expression. The authors maintain that states should assume primary responsibility for protecting freedom of expression and setting clear regulatory frameworks while ensuring that platforms are transparent and accountable in their content moderation practices.
Chapter 9 focuses on the positive obligations of states to provide judicial oversight and emphasises the need to prevent private platforms from being the ones setting the boundaries of freedom of expression.
Chapter 10 advocates for state-level interventions to ensure platforms uphold free expression without succumbing to profit-driven biases.
Chapter 11 is critical of the delegation of the regulation of disinformation to platforms and calls for stronger state oversight to protect users from arbitrary content removal.
Chapter 12 warns that EU regulations often delegate too much power to platforms, enabling them to act as gatekeepers of public discourse and risking private censorship.
C. Transparency and Accountability
The importance of transparency in platform decision-making and accountability in content moderation processes is a recurring theme in the chapters of this part. Transparent, rights-based approaches to content moderation are essential for safeguarding freedom of expression in the digital age.
Chapter 9 supports judicial oversight and user access to remedies as safeguards against opaque moderation practices by platforms.
Chapter 10 argues that platforms must adopt transparent practices to foster public trust and accountability while addressing systemic risks such as hate speech.
Chapter 11 advocates for platforms to disclose their moderation criteria and establish mechanisms for users to contest content removal decisions.
Chapter 12 critiques the lack of transparency in algorithmic content moderation and calls for mechanisms to address issues such as shadow banning and disproportionate content filtering.
All four chapters also point to the need for international cooperation and international standards, which are necessary for freedom of expression to be protected consistently across the global digital landscape and to avoid fragmentation.
The chapters in this part collectively emphasise the intricate balance required to uphold freedom of expression in the digital age. By addressing the evolving obligations of states, the role of platforms, and the challenges posed by disinformation and regulatory disparities, these contributions provide insights into how this fundamental right can be preserved and adapted to the digital environment. Together, they emphasise the importance of safeguarding democratic principles, fostering accountability, and promoting international collaboration to ensure that freedom of expression remains a cornerstone of human rights in the face of rapid technological and societal changes.
9.1 Introduction
The European Court of Human Rights (ECtHR) has long established that freedom of expression is crucial to the existence of a democratic society. A free society is impossible without a free exchange of information and opinions. Freedom of expression entails both the imparting and the receiving of information. The internet has given new platforms unprecedented power to realise this right.
Information society services and especially intermediary services, including social media platforms, have become an important part of our daily lives. At the same time, the digital transformation and increased use of those services has also resulted in new risks and challenges, both for individual users and for society as a whole.Footnote 1
The ECtHR and other bodies in numerous cases have pointed to a positive obligation for the state to protect the right to freedom of expression. At the same time, such an obligation has so far not been expressly attributed to the online environment by the ECtHR, while some other human rights protection bodies and institutions have pointed towards it only indirectly.
Therefore, the aim of this chapter is to analyse the positive obligations of states to protect freedom of expression in the online environment, especially on social media platforms. To achieve this aim, the first part of the chapter will set the background by analysing the general positive obligations of the state in the context of freedom of expression as found by various international bodies, most notably the ECtHR. The second part of the chapter will look at how the current regulation of social media and internet intermediaries impacts freedom of expression. The third part will then evaluate whether the current regulation satisfies the positive obligations of the state in the context of freedom of expression.
It must be noted that this chapter is not intended as a general overview of the definition and characteristics of positive obligations under human rights law or as exhaustive research on the rules governing the liability of social media platforms and internet intermediaries. It is rather a merging of both of these topics, drawing attention to problematic situations and indicating their possible solutions.
9.2 Freedom of Expression and Positive Obligations of the State
Almost all human rights protection documents recognise the positive obligation of the state to ensure the protection of the rights enshrined in each particular document. For example, Article 1 of the European Convention on Human Rights (ECHR) provides that the Member States have a duty to secure to everyone within their jurisdiction the rights and freedoms defined therein. In addition, Article 13 of the ECHR guarantees the availability, at the national level, of a remedy to enforce the substance of ECHR rights and freedoms in whatever form they might happen to be secured in the domestic legal order. This requires the provision of a domestic remedy to deal with the substance of a complaint under the ECHR and to grant appropriate relief.Footnote 2 Therefore, states have a positive obligation to investigate allegations of a human rights infringement. The procedures followed must enable the competent body to decide on the merits of a complaint of a violation of the Convention, to sanction any violation found, and to guarantee the execution of the decisions taken.Footnote 3
Similarly, under Article 2(3) of the International Covenant on Civil and Political Rights (ICCPR), state parties must ensure that persons whose rights under the Covenant have been violated have an effective remedy.Footnote 4 The Charter of Fundamental Rights of the European Union (CFREU) contains a rule comparable to the previously mentioned provisions of the ICCPR and the ECHR. Namely, Article 51 of the CFREU requires EU institutions and the Member States to ‘respect the rights, observe the principles and promote the application thereof’. At the same time, it must be noted that the obligations contained in the CFREU have some limitations. The same article prescribes that this obligation applies only insofar as EU law is being implemented and does not extend the field of application of EU law.
It is the ECtHR that can be regarded as the most prominent advocate of imposing positive obligations on the state to protect particular human rights. It holds that although the essential object of many provisions of the ECHR is to protect the individual against arbitrary interference by public authorities, there may in addition be positive obligations inherent in an effective respect for the rights concerned. The Court has emphasised that the effective exercise of certain freedoms does not depend merely on the state’s duty not to interfere, but may require positive measures of protection.Footnote 5 Although the ECtHR has not provided a general definition of the concept of a positive obligation, it can be deduced from its case law that the prime characteristic of positive obligations is that they in practice require national authorities to take the measures necessary to safeguard a right.Footnote 6
The positive obligations of the state can be divided into several groups. First, there are substantive positive obligations and procedural positive obligations. Second, there are positive obligations of a vertical kind or those that protect the individual from the state and positive obligations of a horizontal kind or those that protect individuals against other individuals. Third, there are positive obligations that relate to the legal and administrative frameworks and those that encompass more practical measures that states need to take.Footnote 7
These positive obligations can be found in relation to almost every human right laid down in the Convention, including freedom of expression. As the ECtHR has put it: ‘Genuine, effective exercise of this freedom does not depend merely on the State’s duty not to interfere, but may require positive measures of protection, even in the sphere of relations between individuals.’Footnote 8 This is because of the key importance of freedom of expression as one of the preconditions for a functioning democracy; therefore, states must ensure that private individuals can effectively exercise the right of communication between themselves.Footnote 9
In deciding whether a positive obligation relating to freedom of expression exists, the ECtHR has emphasised that regard must be had to the kind of expression rights at stake, their capability to contribute to public debate, the nature and scope of restrictions on expression rights, the availability of alternative venues for expression, and the weight of the countervailing rights of others or the public.Footnote 10 It should be noted, however, that the Court looks at these criteria cumulatively in each particular case, attributing more or less weight to any one or set of these criteria, depending on the circumstances.
The ECtHR has emphasised several positive obligations of the state in the context of freedom of expression. For example, the state has to protect the right to freedom of expression by ensuring a reasonable opportunity to exercise the right of reply and an opportunity to contest a newspaper’s refusal to publish a reply in the courts.Footnote 11 It has also recognised a rather broad obligation to create a favourable environment for participation in public debate for all persons concerned, enabling them to express their opinions and ideas without fear.Footnote 12 Moreover, the state has a positive obligation to protect speakers, especially journalists,Footnote 13 from physical attacks by other individuals in connection with the exercise of their freedom of expression.Footnote 14 Recognising that there can be no democracy without pluralism,Footnote 15 in the context of access to the broadcast market, the ECtHR has emphasised that states must put in place an appropriate legislative and administrative framework to guarantee effective pluralism.Footnote 16
Other human rights protection bodies have also recognised that states not only have a duty to refrain from limiting the right to freedom of expression, but also a positive obligation to protect and guarantee it. Although not explicitly recognising the concept of positive obligations under Article 11 of the CFREU, the Court of Justice of the EU has argued that it must be possible for national courts to check that interference with the information rights of internet users is justified.Footnote 17
Furthermore, the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in his report has emphasised that governments must adopt and implement laws and policies that protect private development and the provision of technical measures, products, and services that advance freedom of expression.Footnote 18
Therefore, it can be argued that freedom of expression not only prohibits unjustified interferences by states, but may also require active participation on their part. At the same time, the positive obligations of the state should not be understood too broadly. They have been recognised as applicable to a limited number of situations touching upon the very essence of freedom of expression. In addition, they are more often than not formulated in a manner that provides states with a rather wide margin of appreciation.
In the second part of this chapter, I will analyse whether the existing rules on regulating social media platforms and internet intermediaries in general ensure the positive obligations of the state in protecting and ensuring freedom of expression in the online environment.
9.3 Regulation of Social Media Platforms and Internet Intermediaries: A Threat to Freedom of Expression?
There are several types of actors participating in the flow of information in the online environment. First, there are internet users – natural persons – who access information stored online. Sometimes they also share information, thereby in essence becoming content producers, the second category of online actors. Possibly the most complex of the three is the third category – internet intermediaries, including social media platforms. They differ from content producers, that is, the individuals or organisations responsible for producing information in the first place and posting it online.Footnote 19 Internet intermediaries give access to, host, transmit, and index content originated by third parties or provide internet-based services to third parties.Footnote 20 The means by which they do this are almost without exception technical, meaning that intermediaries are generally not aware of the content of the information they process and make accessible. Such a role is unprecedented in the offline environment. Precisely because of this complexity, the regulation of internet intermediaries, including social media platforms, and their impact on freedom of expression is one of the focal points of this chapter.
The general and, so far, traditional rules on the liability of internet intermediaries in the EU are set out in the E-Commerce Directive and, more recently, in the Digital Services Act (DSA). The Directive exempts intermediaries from liability for the content they manage if they fulfil certain conditions. First, service providers hosting illegal content need to remove it or disable access to it expeditiously once they become aware of the illegal nature of the content. Second, only services that play a neutral, merely technical and passive role towards the hosted content are covered by the liability exemption.Footnote 21 As a directive, it had to be transposed by the EU Member States into their national laws in order to take effect. The E-Commerce Directive thus laid the groundwork for a notice-and-take-down procedure but did not provide any additional guidelines on its implementation. Instead, the Directive left the subject matter to the discretion of the Member States.Footnote 22
Certain Member States have developed more detailed, formal notice-and-take-down procedures. Possibly the best known of these is the German Network Enforcement Act, adopted in 2017. Among other rules, it requires large social networks to remove or block access to content that is manifestly unlawful within twenty-four hours of receiving a complaint about it.Footnote 23 Administrative penalties of up to €5 million can be imposed where a social network fails to comply with the rules set out in the Network Enforcement Act.
The majority of the Member States opted for a verbatim transposition of the Directive, resulting in a lack of firm safeguards for content removal procedures in most EU countries.Footnote 24 For example, Article 10 of the Latvian Law on Information Society Services provides rules for the liability of an intermediary service provider. Nevertheless, like the E-Commerce Directive, it does not specify what amounts to actual knowledge or immediate action on the part of the service provider.
Recognising these issues and the need to harmonise intermediary liability rules throughout the EU, the European Commission proposed new rules for digital platforms.Footnote 25 The Regulation of the European Parliament and of the Council on a Single Market for Digital Services (DSA), amending the E-Commerce Directive, as stated by the Commission, strives to maintain the core principles of the E-Commerce Directive while providing more protection for fundamental rights in the online environment, as well as online anonymity wherever technically possible.Footnote 26 Although the regulation clarifies some aspects that were previously unclear under the E-Commerce Directive, the general rules on intermediary liability have been maintained.Footnote 27 For instance, the DSA maintains the exemption of online platforms from liability provided the social media platform was not actively involved in the transmission or took action to delete the illegal information upon obtaining knowledge of it. One noticeable distinction with regard to liability rules, however, is that the DSA further develops the due diligence obligations applicable to social media platforms and includes new rules related to illegal content, content moderation, and algorithm oversight. While such due diligence obligations depend on the role, size, and impact of social media platforms, the fines for non-compliance are high and can reach a maximum of 6 per cent of a company’s annual worldwide turnover.Footnote 28
Other pieces of EU legislation also impose obligations on intermediaries to decide which content should and which should not be available online. For example, the Audiovisual Media Services Directive promotes co-regulation and self-regulation.Footnote 29 It also imposes duties on video-sharing platforms to eliminate harmful content, such as incitement to violence and hatred, from their platforms.Footnote 30 Under the General Data Protection Regulation, internet search engines are required to balance freedom of expression and privacy rights in applying the right to be forgotten.Footnote 31 The Code of Conduct on Countering Illegal Hate Speech Online, announced by the Commission in May 2016, incentivises information technology companies to tackle hate speech online on their own initiative.Footnote 32 Therefore, for now, and it seems also for the foreseeable future, the balancing of the various interests at stake is delegated to internet intermediaries.
Therefore, it is essentially the internet intermediaries, not the judiciary of states, that must take decisions on balancing freedom of expression, privacy, and other rights online. Although such a mechanism has some advantages, such as speedy procedures and a reduced workload for the courts, several issues arise from it, most visibly threats to the freedom of expression of social media users and to the public’s access to information removed by intermediaries. Furthermore, after social media platforms imposed indefinite suspensions on former US President Trump, one can add concerns about prior censorship. One issue closely related to the positive obligations of the state to ensure protection mechanisms for freedom of expression is that, in general, this type of intermediary liability regime puts private intermediaries in the position of having to make decisions about the lawfulness or unlawfulness of content and creates incentives for private censorship.Footnote 33 Several international organisations,Footnote 34 and other stakeholders,Footnote 35 have pointed to the fact that intermediaries should not be expected to conduct a quasi-adjudicatory exercise that weighs the rights of their users.
They have argued that the fact that intermediaries have the technical means to prevent access to content does not mean that they are best placed to evaluate the legality of the content in question, and that measures affecting fundamental rights should be applied by an independent court rather than by private bodies.Footnote 36 Intermediaries are commercial entities whose fear of potential liability, or lack of resources to fully address requests for the removal of information, may motivate an overzealous response to individual requests that information be delisted.Footnote 37 As private actors, intermediaries will not necessarily consider the value of freedom of expression when making decisions about content created by third parties for which they might be held liable.Footnote 38
Another criticism of entrusting the balancing of fundamental rights to internet intermediaries is that by enlisting internet intermediaries as watchdogs, governments delegate online enforcement to algorithmic tools with limited or no accountability. Due process and fundamental guarantees are mauled by technological enforcement, curbing fair uses of content online and silencing speech according to the mainstream ethical discourse.Footnote 39 In addition, until the obligations contained in the DSA are implemented, the algorithms are in most cases known to the intermediary alone or are even considered a commercial secret. Such a lack of transparency in the intermediaries’ decision-making processes can obscure discriminatory practices or political pressure affecting the companies’ decisions. Consequently, transferring the regulation and adjudication of internet freedoms to private actors does jeopardise fundamental rights in general – such as freedom of information, freedom of expression, and freedom of business – by limiting access to information, causing chilling effects, and curbing due process.Footnote 40 More particularly, and closer to the topic of this chapter, it also affects the capacity of states to carry out their positive obligations.
In the first part of this chapter, it was concluded that states must create a favourable environment for participation in public debate, protect speakers, and put in place an appropriate legislative and administrative framework to guarantee effective pluralism. We must remember that the ECtHR has recognised the state as the bearer of these duties, and human rights law does not, as a general matter, directly govern the activities or responsibilities of private business.Footnote 41 Yet, as emphasised earlier, the EU rules on intermediary liability follow a different logic.
Since internet intermediaries benefit from disseminating third-party content online, through advertising or the payment of subscription fees, it seems only fair that they should bear responsibility for preventing access to illegal or harmful material.Footnote 42 At the same time, while seeking the most effective enforcement mechanism, the regulation introduced by EU law and transposed into national law shifts the balancing duty from the state to private companies – social media platforms. This means that it is online platforms, not states, that create the environment for participation in public debate. Until recently, the limited scope for reviewing decisions, often made using algorithms, was inevitably linked with limited possibilities of ensuring the protection of speakers. It also left very few options for influencing pluralism in the online environment, which is becoming the principal source of information and communication in many countries.
Therefore, although the intermediary liability regime itself does not interfere with freedom of expression, it creates a situation of horizontal interference resulting from a failure of the legislature to effectively protect the right to freedom of expression – a form of ‘State interference by proxy’.Footnote 43 The measures introduced by the DSA, which focus on ensuring more transparency and better protection of citizens’ fundamental rights online, including the obligation to provide information to users and to establish a complaint and redress mechanism, are a significant step in the right direction. However, these steps should not be considered a substitute for the positive obligations of the state arising from the human rights standards set by the ECtHR and other international bodies.
9.4 Safeguards for Freedom of Expression
As just concluded, states have an obligation to effectively protect human rights from interference by private parties. Although it has not specifically invoked the positive obligations of the state, the ECtHR has already held that the state should protect freedom of expression online not only by avoiding any limitations to it, but also by creating an appropriate legal framework. In Ahmet Yıldırım v. Turkey, one of the most important cases dealing with the accessibility of information online,Footnote 44 it indicated that Article 10 of the ECHR requires a law to provide safeguards intended to protect against the over-removal of information from the internet.Footnote 45
In addition, according to the UN Guiding Principles on Business and Human Rights, which require businesses to avoid causing or contributing to adverse impacts on human rights, the duty to protect and to provide access to an effective remedy for violations of human rights is essentially incumbent on states.Footnote 46 Therefore, even though the document emphasises the duties of private stakeholders, it still underlines the role of the state in the protection of human rights.
Arguments supporting the positive obligations of the state in the context of freedom of expression and intermediary liability can also be found in legal doctrine. These arguments rest on the idea that, as powerful social media platforms have become central to communication and information exchange, the legal framework in which they operate must be compatible with human rights standards.Footnote 47 Content removal mechanisms should therefore have a sufficient basis in law. To meet this requirement, the legislature should introduce specific legal provisions clarifying removal procedures. Legislation providing for content removal procedures should also meet the requirement of ‘quality’: the rules should be clear and sufficiently precise for those subject to them to foresee the consequences and adjust their behaviour accordingly.Footnote 48
The positive obligations of states to protect freedom of expression are of perhaps even greater relevance where such interferences are accepted, or even encouraged, by the states,Footnote 49 as is the case with the rules dealing with intermediary liability. The current intermediary liability regime, whose most notable instruments are the E-Commerce Directive and the DSA, includes (especially in the latter) some safeguards that could ensure the protection of the right to freedom of expression.Footnote 50 But its application across Europe remains to be seen, and it does not relieve the state of its role as the ultimate guarantor of fundamental rights online. As stated earlier, the doctrine of the positive obligations of states in the context of freedom of expression may provide further legal protection for the users of social media platforms, their right to express themselves, and their right of access to information.
9.5 Effective Remedies and Procedural Safeguards
When freedom of expression is violated, appropriate remedies may include access to information about the violation and grievance mechanisms.Footnote 51 For these safeguards to be effective, states should ensure that the principles of due process and access to independent and accountable redress mechanisms are respected in their application.Footnote 52
First of all, users should have the right to learn about the removal of information they have published online.Footnote 53 Otherwise, they are in practice unable to protect their right to freedom of expression.Footnote 54
Although such a practice could backfire – once-removed information may resurface through other channels – internet intermediaries should issue a notification to webmasters and content providers whenever they restrict access to information created by these actors. Such an approach is supported, for example, by the UN Special Rapporteur on the promotion and protection of freedom of opinion and expression.Footnote 55 It is also prevalent in the doctrine applied by courts in cases concerning intellectual property rights in the US and Brazil.Footnote 56 The notification should, as far as possible, include the reasoning behind the decision, which, if necessary, can later be challenged. Moreover, in accordance with the Manila Principles on intermediary liability, before such a removal becomes permanent, the intermediary should weigh the arguments of the author of the information in question.Footnote 57 The DSA appears to secure this remedy by introducing the concept of ‘notice and action’ and obliging social media platforms to inform users of the removal of their information, thereby rectifying the existing lacuna in the EU legal framework. However, the DSA applies only to intermediaries offering their services in the EU single market.
Second, users who are notified by the service provider that their content has been flagged as unlawful should have the option of challenging the blocking or filtering of their content and of seeking clarification and remedies.Footnote 58 The possibility to appeal could take various forms – using procedures provided by the intermediary or by a competent judicial authority.Footnote 59
The DSA notes that, first and foremost, internet intermediaries themselves should have a review procedure in place. This idea is nothing new, and some intermediaries have already worked on it. In 2018, Facebook announced that it would create an independent oversight body to adjudicate appeals on content moderation issues.Footnote 60 The Board reviews a select number of highly emblematic cases and determines whether decisions were made in accordance with Facebook’s stated values and policies.Footnote 61 Individual users can bring appeals to the Board, and Facebook as a company is able to refer cases for expedited review if they could have urgent real-world consequences.Footnote 62 Most notably, the Oversight Board accepted a case referral from Facebook to examine its decision to indefinitely suspend former US President Donald Trump’s access to post content on Facebook and Instagram. Facebook also requested policy recommendations from the Board on suspensions when the user is a political leader.Footnote 63
At the same time, the opportunities to contest the decisions of online platforms should complement, yet leave entirely unaffected, the possibility of seeking judicial redress.Footnote 64 The option of judicial redress must always exist to ensure effective legal protection of the right to freedom of expression.Footnote 65 Indeed, the safeguards would become redundant if there were no option of judicial review of the decisions made by internet intermediaries. Liability placed upon private companies to remove third-party content without judicial oversight would not be compatible with international human rights law, and with freedom of expression specifically.Footnote 66
Although internet intermediaries are technically best placed to evaluate applications for content removal and to act upon them, such a mechanism might not be fully compatible with the requirements of legality and the quality of law. While it remains unclear how online platforms will implement the transparency rules provided by the DSA, the internal decision-making procedures of intermediaries have so far not always been transparent. Therefore, removal orders issued by independent and impartial bodies provide a much greater degree of legal certainty.Footnote 67
Third, the independence of internal complaint and redress mechanisms and the readiness of social media platforms to follow the decisions of oversight boards also depend on the goodwill of the platforms. The 2022 change in ownership of one of the biggest social media platforms, Twitter, illustrates how rapidly the existing policies of a social media platform can change and how fragile reliance solely on internal redress mechanisms created by the platform can be. A court or similar authority would operate with greater safeguards for independence, autonomy, and impartiality,Footnote 68 and would have greater capacity to evaluate the rights at stake and offer the necessary assurances to the user.Footnote 69 Such authorities, as public rather than private bodies, would also be better placed to determine whether particular content is illegal, which requires careful balancing of competing interests and consideration of defences.Footnote 70
The positive obligation of states to protect freedom of expression entails the creation of a system that allows individuals and other content creators to protect their freedom of expression even when it is exercised through social media platforms and internet intermediaries. These intermediaries should themselves abide by the rules of freedom of expression, but ultimately the state cannot rely on these private players to protect speech online. The DSA envisages a new enforcement mechanism, which will complement the internal complaint and redress tools established by social media platforms. This mechanism will consist of the Commission and independent national authorities, which will supervise how online intermediaries, including social media platforms, adapt their systems to the new requirements. To safeguard freedom of expression, the independence of these bodies and the possibility of challenging their decisions in the courts are of crucial importance. Critics have rightly pointed out that the oversight of very large platforms rests with the European Commission, which is not an independent regulator but the executive arm of the EU.Footnote 71
9.6 Rules on Balancing
At this point, it is clear that monitoring and restricting access to information online would be impossible without the involvement of internet intermediaries. They act as gatekeepers with direct access to the keys needed to close the gates to malicious or otherwise illegal information. The keys available to states are far more cumbersome and take longer to turn. This means that to ensure effective protection of human rights online, states have to cooperate with internet intermediaries. Therefore, besides creating a mechanism for protecting freedom of expression online, states have to set clear rules for internet intermediaries to follow in deciding which information remains accessible and which should be removed. Clear rules at the national level are especially important, since only some illegal content is defined at the EU level, while certain content might be found illegal and subject to removal only in particular states.
Internet users should always be able to understand why certain information has been removed.Footnote 72 The changes introduced by the DSA appear to ensure that social media platforms notify users of the reasons for removing their content and of the option to contest such decisions.
States should ensure that the removal of information or the blocking of access is undertaken in observance of the principle of freedom of expression.Footnote 73 In mediating between the public interest and individual rights online, a delicate regulatory balance is required.Footnote 74 Therefore, the state should provide guidance for intermediaries on how to achieve this.Footnote 75 The DSA could play a role in establishing standards and best practices in this area, as it tries to balance the measures imposed on platforms to remove harmful information against restrictions of freedom of expression. However, the application of the DSA at both the EU and national levels remains to be seen.
9.7 Conclusions
Social media platforms and internet intermediaries in general are becoming, and in many societies already are, the dominant actors in providing an environment for freedom of expression and communication. Yet the legal regulation of platforms has not always kept pace with technological developments and changes in society so as to ensure respect for freedom of expression and other fundamental rights. The idea, prevalent during the emergence of the online environment, that it should be left free and unregulated to ensure a marketplace of ideas gradually gave way to increasing regulation by states in order to protect other fundamental rights, such as human dignity, and to ensure non-discrimination. This shifting attitude of states and international bodies was evidenced by the judgment of the ECtHR in the case of Delfi v. Estonia,Footnote 76 which for the first time imposed liability on a news platform for not removing derogatory comments created by its users in a timely manner. The Court came to this decision despite the fact that the platform acted quickly after the notice was received. The judgment was followed by laws restricting freedom of expression and imposing more obligations on social media platforms in a number of European countries,Footnote 77 and even by debates in the US about changes to Section 230 of the Communications Act of 1934, which grants websites legal immunity for much of the content posted by their users.Footnote 78
While such moves by states are understandable as means to protect other fundamental rights and to counter the spread of misinformation and harmful content in the online environment, the safeguards for freedom of expression seem to have been forgotten in the process. Online intermediaries, which have implemented various technical tools to block and remove information in response to demands by states, are now effectively in charge of setting the boundaries of freedom of expression – a role that should be performed by states. The role of freedom of expression in democracies is essential, and states should not shirk their obligations to ensure freedom of expression and to decide on its limits by outsourcing this function to private corporations. Social media platforms and internet intermediaries in general have no democratic legitimacy, nor do they in many cases share the goal of protecting this fundamental right, and the process of limiting expression is in most cases accompanied by a lack of clear rules, transparency, and access to remedies. We must remember that the right to freedom of expression entails not only negative obligations of the state, but also positive ones. The doctrine of these positive duties is still developing and awaits clarification, especially in the context of information published online. Nevertheless, it can be argued that the doctrine can already be applied to improve the safeguards for freedom of expression for users of social media platforms, and to discourage platforms from over-removing content to avoid liability. The DSA is certainly a step forward; however, its scope is limited and its application remains to be seen.
Last but not least, the majority of existing legal standards focus on the development of content moderation rules that would balance freedom of expression against the removal of illegal content by social media platforms. Yet another, similarly important threat to freedom of expression in the online environment is the fact that the vast majority of public discourse takes place on a very small number of platforms, which hold excessive power over the flow of information.Footnote 79 As stated earlier, the ECtHR has emphasised that states must put in place an appropriate legislative and administrative framework to guarantee effective pluralism in the media ecosystem.Footnote 80 Therefore, states are obliged to decentralise the channels of public discourse and to develop rules that favour the development of new social media platforms.
10.1 Introduction
Democracy and freedom of expression are intertwined in the case law of the European Court of Human Rights (hereinafter, ‘the ECtHR’ or ‘the Court’). Article 10 (Freedom of Expression) of the European Convention on Human Rights (hereinafter, ‘the ECHR’ or ‘the Convention’) imposes obligations on Member States to protect the right to freedom of expression. The scope of the right is broad, and Article 10 further specifies that any limitation on freedom of expression must be prescribed by law, pursue a legitimate aim, and be necessary in a democratic society (e.g., for protecting the reputation or rights of others). In addition, Articles 17 (Prohibition of abuse of rights) and 18 (Limitation on use of restrictions on rights) of the ECHR are relevant in assessing the scope and legitimacy of restrictions on freedom of expression. To protect the right to freedom of expression and robust public debate, which are foundations and preconditions of a democratic society, the ECtHR has developed strong safeguards in its case law interpreting the Convention.
Within this case law, the defence of democracy as a legitimate aim for limiting freedom of expression is a relatively unnoticed part of European jurisprudence. More often, the discussion has centred on the abuse of rights provision, which is closely related to extreme political movements that try to promote undemocratic ideologies. In this connection, the Court has reiterated that the general purpose of Article 17 is to prevent individuals or groups with totalitarian aims from exploiting the principles enunciated in the Convention in their own interests.Footnote 1
There has been consensus on certain fundamental doctrines related to freedom of expression and defending democracy. However, these doctrines were designed for a traditional media environment and do not reflect the prevailing internet-based system of expression in contemporary society. In the contemporary system, the defence of democracy is a more complex and multifaceted phenomenon than merely guaranteeing the protection of democracy from extreme movements and the most serious hate speech. Pre-emptive measures have become a key part of the toolbox for defending democracy. This means that there need to be strong foundations for securing public debate, including a diverse and pluralist media environment that provides safeguards against disinformation and other attacks on the democratic system. In contemporary society, the concept of the media as a public watchdog has also evolved into a diverse network of watchdogs, consisting of different actors, from the media to non-governmental organisations (NGOs) and even academics.
In addition, there is a fear of abuse of the defence of democracy argument. Sanctions and other restrictive measures could be used to silence critical voices and democratic opposition. This risk of restrictions applied for ulterior purposes becomes apparent when looking at how Article 18 of the Convention has emerged in the Court’s case law: there is growing evidence of sanctions being used for ulterior motives, especially against those in opposition.
Against this background, this chapter tries to systematise the interpretative doctrines that the ECtHR currently applies in cases related to defending democracy and the rule of law under Article 10. In addition, it looks at the approach to contextualism taken by Facebook’s Oversight Board. The idea is to analyse these doctrines in light of present-day conditions and to evaluate how adaptable they are to the new media environment. In so doing, this chapter first focuses on how European history and the historico-political context have been taken into account, and on the key principles for assessing the width of the margin of appreciation in relation to the right to freedom of expression, with particular regard to defending democracy. Section 2 will then further outline established doctrines of defending democracy in the Court’s case law. Section 3 delves into how the emergence of internet-based media and enhanced access to a space of social debate has further complicated the case law, where the focus has shifted to encompassing freedom of expression online for upholding democracy as a system, rather than merely restricting extremists or hate speech. That section also considers the threats of disproportionately restrictive measures on public debate, which might undermine democracy and fall under Article 18, and looks into the positive obligations and how a regulatory framework requiring pluralism would be essential as a pre-emptive measure against threats to democracy. The final section provides concluding observations on how the ECtHR has dealt with defence of democracy argumentation in relation to freedom of expression and on the evolving nature of these established doctrines in the face of the current media environment and democratic society.
10.2 Historical Context and Stability of Democracy Argumentation
According to the Court’s case law, defence of democracy argumentation must be interpreted in light of European history. Consequently, the argumentation of protecting democracy is founded on the history of the twentieth century, and especially on how fragile democracy can be in times of transition and where certain resurfacing ideologies pose a threat to democracy – the ideologies, and even symbols, of authoritarian regimes, whether Nazi or Communist.
Context is part of the defending democracy doctrines, and the Court emphasises a careful examination of context,Footnote 2 particularly when symbols with multiple meanings are used for the expression of political views.Footnote 3 Thus, the Court does not exclude that certain symbols may not be ‘equally permissible in all places and [at] all times’, stating that ‘the display of a contextually ambiguous symbol at the specific site [of mass murders] may in certain circumstances express identification with the perpetrators of those crimes’ and that a ban on demonstrations held on a specific day of remembrance may ‘represent a pressing social need’ and thus ‘necessitate an interference with the right to freedom of expression’.Footnote 4
In the case of Vogt v. Germany, history was essential for assessing whether a restriction on freedom of expression in the name of defending democracy was in accordance with the Convention. The Court analysed whether a legitimate aim existed for restricting the rights of civil servants when the political loyalty requirement was applied to a teacher in a public school who was a member of a communist party and was dismissed on account of her political activities. The political loyalty requirement demands that state employees be loyal in actively upholding the free democratic constitutional system. The Court noted that the notion of the civil service being ‘the guarantor of the Constitution and democracy has special importance in Germany because of that country’s experience under the Weimar Republic, which, when the Federal Republic was founded after the nightmare of Nazism, led to its constitution being based on the principle of a “democracy capable of defending itself” (wehrhafte Demokratie)’.Footnote 5
The same idea of historical experience and the use of democratic processes to promote totalitarian movements was also raised in the Refah Partisi case (2003).Footnote 6 The communist authoritarian past was relevant in Ždanoka v. Latvia, where the Court established and consolidated the current criteria for the ‘self-defence’ of democracy. The Ždanoka judgment includes an extended doctrinal summary under the title Democracy and its protection in the Convention system, acknowledging that ‘democracy constitutes a fundamental element of “European public order”’ and that the Preamble of the Convention connects the realisation and maintenance of human rights and fundamental freedoms with ‘effective political democracy’.Footnote 7 It also refers to the common heritage of political traditions, ideals, freedom, and the rule of law of European countries enshrined in the Convention; indeed, on many occasions the Court has reiterated that the Convention exists ‘to maintain and promote the ideals and values of a democratic society’.Footnote 8 According to the Ždanoka judgment, democracy is the only political model contemplated by the Convention and, accordingly, the only one compatible with it.Footnote 9
In the Ždanoka case, the Court referred to the principle that:
in order to guarantee the stability and effectiveness of a democratic system, the State may be required to take specific measures to protect itself. Thus, in the above-cited Vogt judgement, with regard to the requirement of political loyalty imposed on civil servants, the Court acknowledged the legitimacy of the concept of a ‘democracy capable of defending itself’ (paras. 51 and 59).Footnote 10
The Court makes very careful reference to the historico-political background, which may render the same measure acceptable in one country yet incompatible with the Convention in another country with a different historico-political context.Footnote 11 With regard to a state with less established democratic traditions or institutions, or with a totalitarian past, the Court therefore considers that the margin of appreciation is different, noting that in such a state the national authorities would be better placed to ‘assess the difficulties faced in establishing and safeguarding the democratic order’.Footnote 12 Under the Court’s scrutiny, the Latvian context was different because the Court acknowledged that there might be a threat to the new democratic order and a ‘resurgence of ideas which, if allowed to gain ground, might appear capable of restoring the former regime’.Footnote 13
One of the key arguments in this contextual approach was that, in the field of election-related rights, there was a wide margin of appreciation; at the same time, in the Latvian context, the Court affirmed that the Parliament should keep the statutory restrictions under constant review.Footnote 14 The Court also referred to European integration as one of the key elements to be taken into account as a contributing factor to Latvia’s stability. The election context thus affects the proportionality test and gives more discretion to national authorities.
In this section, we have looked at the Court’s deliberations so far, based on European history and historico-political contextuality, in its judgments on defending democracy and protecting the freedom of expression of individuals, especially when political views are expressed. We can say that the foundational correlation between democracy and the Convention has been clearly demarcated in its interpretations, and democratic society is a central concept in the ECHR. Moreover, the ECtHR has applied democracy argumentation since the start of its work, particularly when discussing the general principles of interpretation; in the Soering case, the Court grounded its argumentation in ‘the general spirit of the Convention, an instrument designed to maintain and promote the ideals and values of a democratic society’.Footnote 15
Furthermore, ‘an effective political democracy’ is mentioned in the Preamble, and ‘democratic society’ appears in six Articles of the ECHR (6, 8, 9, 10, 11, and 2 of Protocol 4). The emphasis on protecting the democratic system is deeply rooted in European history and its human rights atrocities, and the reason for drafting the Convention and establishing its supervisory system in the spirit of the Universal Declaration of Human Rights was to ensure that such events would never be repeated in Europe.
10.3 Established Doctrines on Defending Democracy in the ECtHR Case Law and Heckler’s Veto
Having looked at the approach the Court has taken to the defence of democracy and freedom of expression in its case law, in this section we highlight some of the established doctrines and principles related to defending democracy.
The key doctrinal element in defending democracy is the prohibition of the abuse of rights, enshrined in Article 17 of the Convention. Underlying the provision is the idea that no state, group, or person has a right to engage in activities aimed at the destruction of the rights and freedoms of others. For example, in the case of Refah Partisi et al. v. Turkey, the Court reiterated that ‘no one should be authorised to rely on the Convention’s provisions in order to weaken or destroy the ideals and values of a democratic society’.Footnote 16 Furthermore, in the case of Ždanoka v. Latvia, referring to the considerations during the drafting of the Convention, the Court acknowledged that certain rights could not be used in order to ‘destroy’ other rights set forth in the Convention, stating:
It was precisely this concern which led the authors of the Convention to introduce Article 17, which provides: ‘Nothing in this Convention may be interpreted as implying for any State, group or person any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms set forth herein or at their limitation to a greater extent than is provided for in the Convention’.Footnote 17
The Court has also set forth principles for assessing the breadth of the margin of appreciation afforded to national authorities in its freedom of expression case law. Freedom of expression is closely related to democracy: according to the Court, it constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment.Footnote 18 As such, the Court states, ‘in a democratic system, the actions or omissions of the government must be subject to the close scrutiny not only of the legislative and judicial authorities but also of public opinion’.Footnote 19 This means that authorities should tolerate even harsh criticism from the public; yet the Court has also set out the key principle that ‘where such remarks incite violence against an individual or a public official or a sector of the population, the State authorities enjoy a wider margin of appreciation when examining the need for interference with freedom of expression’.Footnote 20 In their joint dissenting opinion in Sürek and Özdemir v. Turkey, several judges expressed the view that even though freedom of expression constitutes one of the foundations of democratic society, speech inciting violence undermines democracy.Footnote 21
As such, the Court has accepted that states can limit rights guaranteed under the Convention in order to defend democracy.Footnote 22 Furthermore, such measures can be implemented pre-emptively: states cannot be required to wait for authoritarian forces ‘to take steps that might prejudice civil peace and the country’s democratic regime’ before intervening.Footnote 23 The Court thus referred to the possibility of preventing attempts against democratic regimes even before they occur, while deferring to national courts and other authorities. National courts can therefore, subject to detailed European supervision, take restrictive measures to protect democracy. However, different contexts must be taken into account in the discourse of defending democracy; that is, how far the line should be stretched to tolerate minority opinions.
In contemplating this question, let us examine the Vajnai v. Hungary case, where the Court deliberated over disproportionate restrictions imposed in the name of defending democracy and was concerned that free speech and opinion would be subjected to a heckler’s veto, which would negate freedom of expression.Footnote 24 A heckler’s veto arises where hecklers or demonstrators silence a speaker without the intervention of the law; it is also thought to occur when the government restricts speech on the basis of the anticipated or actual reactions of the speech’s opponents.Footnote 25
In this case, Attila Vajnai was convicted of using a symbol of an international workers’ movement, which was considered a totalitarian symbol and prohibited in Hungary. The Court recognised the systematic terror of communist rule in several countries and regarded that ‘it remains a serious scar in the mind and heart of Europe’.Footnote 26 It is understandable that such symbols can cause uneasiness or seem disrespectful, but such sentiments alone cannot limit freedom of expression. The Court considered that the case was about feelings rather than rational fears. Reviewing the proportionality of the restrictions imposed on Vajnai, it held that a restriction aimed at satisfying ‘the dictates of public feeling – real or imaginary – cannot be meeting the pressing social need requirement in a democratic society’.Footnote 27
The disproportionate conviction reflects a legislative framework that at the time attached no further qualification to the wearing of certain symbols, so the authorities did not have to weigh the wearing of a symbol against other interests in order to strike a fair balance. In their report to the Committee of Ministers, the Hungarian authorities referred to the new formulation of the legislation, which no longer bases criminal conviction on the mere display of a symbol but adds a qualification: ‘in a manner that is capable of disturbing public peace, in particular that violates the human dignity of victims of totalitarian systems or the due reverence for the dead’.Footnote 28 The national legislation now determines more precisely the conditions under which wearing such a symbol is prohibited and leads to a conviction.
So far, the established doctrine reflects an era when democracy was organised by political parties. A contemporary democratic system, however, has a more complex, multilayered structure. Alongside the model of representative democracy, there is also an element of direct democracy, with NGOs having a major impact on the public debate on specific questions of interest.
10.4 Distinguishing between Traditional and Internet-Based Media and Freedom of Expression
In assessing the balance between journalistic freedom and legitimate state restrictions of freedom of expression in the context of traditional media, the Court identified four key elements to be considered in the 1994 hate speech case of Jersild v. Denmark: (a) the manner in which the feature was prepared, (b) its contents, (c) the context in which it was broadcast, and (d) the purpose of the programme.Footnote 29 In this case, the applicant, a journalist, had made a documentary containing extracts from a television interview he had conducted with three members of a group of young people calling themselves the Greenjackets, who had made abusive and derogatory remarks about immigrants and ethnic groups in Denmark. The applicant was convicted of aiding and abetting the dissemination of racist remarks, and he alleged a breach of his right to freedom of expression.
While the national courts laid emphasis on different issues, considering that the applicant had intended to include the racist remarks and had edited the feature accordingly, the Court took a different view and considered that the objective purpose of the feature was not the propagation of racist views and ideas. The Court drew a distinction between the members of the Greenjackets, who had made openly racist remarks, and the applicant, who had sought to expose, analyse, and explain this particular group of youths and to deal with ‘specific aspects of a matter that already then was of great public concern’.Footnote 30 In the Court’s view, the documentary as a whole was not aimed at propagating racist views and ideas but at informing the public about a social issue. Accordingly, the Court held that there had been a violation of Article 10 of the Convention.
What was interesting in the different approaches taken by the national courts and the European Court was the consideration of the scope of the assessment. That is, the Court recognised that editorial decisions were at stake, especially those concerning the news value of the items broadcast by the media. According to the Court, there was no reason to question the editorial staff’s own appreciation of the news or information value of the impugned item, and emphasis was also placed on the fact that it was part of a serious news programme.Footnote 31
Now, how has the case law established since Jersild reflected the situation in social media and the responsibility of social media giants such as Twitter/X and Facebook? How can the four factors mentioned in Jersild be applied to internet-based media? First of all, the ‘context in which the feature was broadcast’ is one of the trickiest factors to transpose to social media, because one of the key features of social media is precisely that it gives everyone access to social discussion in a more or less public sphere. It should not matter whether a person is mostly offering personal comments or representing an institutional position. But are there, or should there be, different requirements for those with a more formal media status?
For some time, the internet’s role in the media environment and public debate has been acknowledged in the Court’s case law. Internet-based media has become one of the key focal points in the Court’s interpretation of freedom of expression in recent years, as seen in the case of Delfi v. Estonia. In this case, the Court reminded us of the important role played by the internet in enhancing the public’s access to news and facilitating the dissemination of information,Footnote 32 and later the Court elaborated further, noting that the function of bloggers and popular users of social media may also be assimilated to that of public watchdogs,Footnote 33 insofar as the protection afforded by Article 10 is concerned. An essential factor in the Court’s argumentation was the interrelationship between the functioning of the political system and the role of the press and NGOs.Footnote 34
The Court has also articulated a need for a legislative framework applicable to the imparting of information received from the internet. For example, in the Ukrainian case of the Editorial Board of Pravoye Delo and Shtekel, the Court considered that the absence of a sufficient legal framework could have negative consequences for the freedom of expression. It found that the lack of legislation at the domestic level allowing journalists to use information obtained from the internet without fear of sanctions for such use seriously hinders the exercise of the vital function of the press as a public watchdog.Footnote 35
Online media outlets are also now often applicants before the Court. One recent interesting judgment was in the case of OOO Memo v. Russia, where the defamation case was instigated by an executive body of the state. The Court adopted a new interpretative approach, holding that executive bodies do not have the same rights as other legal entities. In previous cases, the question had turned more on proportionality; in OOO Memo, the Court chose a different line of interpretation, grounded in the need for close scrutiny of public power in order to prevent the abuse of power and the corruption of public office in a democratic system. The Court found that if executive bodies could file civil defamation cases, it would place an excessive and disproportionate burden on the media, creating an inevitable chilling effect. The conclusion was that restricting freedom of expression on the basis of defamation inflicted on an executive body did not pursue a legitimate aim.Footnote 36
In another case, Cicad v. Switzerland, the Court considered it proportionate that, through a court order, the applicant was required to remove an article from a website and publish the main findings on its website.Footnote 37 In the Hungarian case of Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt, the Court held that other special features of the internet must be taken into account when deciding whether certain measures are proportionate, including expressions in comment sections. According to the Court, ‘comments, albeit belonging to a low register of style, are common in communication on many internet portals – a consideration that reduces the impact that can be attributed to those expressions’.Footnote 38
Another aspect typical of the contemporary media environment is that some actors deliberately use different forms of disinformation as a way of destabilising democratic systems and societies by abusing freedom of expression. The flood of disinformation and other hybrid tactics used in the new media environment has been described by some scholars as information disorder and information pollution.Footnote 39 According to a Council of Europe report, the forms of disinformation are (a) false context, (b) imposter content, (c) manipulated content, and (d) fabricated content. In the distribution of disinformation, the role of different types of new media has become an essential factor to be taken into account.Footnote 40 Often this means that social media platforms, for example, take a key role in the dissemination process and in spreading distrust in the democratic system. The Joint Declaration of the rapporteurs of the United Nations (UN), the Organization for Security and Co-operation in Europe (OSCE), the African Union, and the Organization of American States has emphasised the positive duties of states, especially in promoting a free, independent, and diverse media environment.Footnote 41
As we have previously seen, the defence of democracy doctrine has been developed in the context of protecting the democratic system from those with totalitarian beliefs or a totalitarian history, especially in relation to electoral candidates and the loyalty requirements set for civil servants. It has also been applied to the pre-emptive dissolution of political parties whose objectives are contrary to, and thus weaken or destroy, the democratic system.
In the course of this development, the question of anti-democratic forces and their imminent threat to democracy has led to the acceptance of pre-emptive measures against such forces, even including the dissolution of political parties. Some scholars have warned about the risks of abuse and the unclear limits of defending democracy, especially in the context of peace-time limitations.Footnote 42 Given the current situation, of course, this criticism has to be reviewed in light of Russia’s war against Ukraine. The threat of using disinformation to overthrow democratic governments, or more generally to destabilise democratic processes, might previously have seemed highly unlikely, but it is now very concrete in countries neighbouring Russia, and even elsewhere where Russian influence and hybrid tactics can pose an imminent danger to the democratic system.
In the new media environment, which needs to be distinguished from the earlier interpretative lines on print and audiovisual media, one of the significant issues is anonymity. The ability to express views while remaining anonymous has become an important part of social media, and its value on social media and other digital platforms has been widely accepted. The idea behind anonymity is that it provides the opportunity to avoid reprisals or unwanted attention and thus promotes the free flow of opinions, ideas, and information, as was noted in one of the key cases in this context, Standard Verlagsgesellschaft mbH v. Austria (no. 3):
In the instant case, the lack of any balancing between the opposing interests [(see paragraph 94 above)] overlooks the function of anonymity as a means of avoiding reprisals or unwanted attention and thus the role of anonymity in promoting the free flow of opinions, ideas and information, in particular if political speech is concerned which is not hate speech or otherwise clearly unlawful. In view of the fact that no visible weight was given to these aspects, the Court cannot agree with the Government’s submission that the Supreme Court struck a fair balance between opposing interests in respect of the question of fundamental rights.Footnote 43
This extension, however, also drew criticism within the Court. Judge Eicke warned that the Court’s line of argument could cause problems for victims of abusive speech, stating that:
the extension of Article 10 in this context will not be capable of being limited to service providers under the E-Commerce Directive who are also media companies but will ultimately have to be applied to any ‘bloggers and popular users of the social media’, with the consequent (negative) impact on the ability of victims of abusive posts to seek access to court for the purposes of protecting themselves and their reputation.Footnote 44
Another interesting case is Ecodefence and 60 others v. Russia, concerning one of the most elaborate attempts by a state to create a legal regime with a significant chilling effect on NGOs and on their willingness to seek or accept any amount of foreign funding, especially in respect of politically sensitive or domestically unpopular topics. The case concerned the Foreign Agent Act and its concept of political activities carried out by so-called foreign agents, which could also cover views expressed in interviews or even in social media posts by those ‘foreign agents’.Footnote 45
There is also a more rigorous approach to online hatred that emphasises the role of governments in criminalising hate speech, especially in recent cases concerning minority groups. Hate speech related to a person’s sexual orientation was a key factor in Oganezova v. Armenia, where the authorities’ inadequate response was held to violate Article 3 of the Convention taken together with Article 14.Footnote 46
10.4.1 From Article 17 to Article 18: From Restricting Extremists to Using the Defence of Democracy as an Excuse to Control the Opposition
In the Court’s case law, Article 17 (prohibition of the abuse of rights) has permitted measures, especially restrictions and sanctions, against extreme groups denying the rights of others, particularly where comments amounted to hate speech and ran counter to the values enshrined in the Convention.Footnote 47 This was the provision used in cases reviewing the convictions of neo-Nazi groups in Germany. It was later used in cases involving those denying the Holocaust and atrocities committed during the Second World War,Footnote 48 and also against those who clearly did not accept the democratic values that are part of the European heritage.
One of the interesting questions to be discussed relates to the future role of Article 17. Some scholars have argued that resolving hate speech cases under Article 17 can lead to a chilling effect and consider it better to review hate speech under an Article 10 argumentation instead.Footnote 49 Cannie and Voorhoof consider that the current situation has led to inconsistent argumentation, with some cases being considered under Article 17 and others under Article 10.
The Court itself acknowledged in Perinçek v. Switzerland that there should be a high threshold for applying Article 17, as follows:
However, Article 17 is, as recently confirmed by the Court, only applicable on an exceptional basis and in extreme cases (see Paksas v. Lithuania [GC], no. 34932/04, § 87 in fine, 6 January 2011). Its effect is to negate the exercise of the Convention right that the applicant seeks to vindicate in the proceedings before the Court. In cases concerning Article 10 of the Convention, it should only be resorted to if it is immediately clear that the impugned statements sought to deflect this Article from its real purpose by employing the right to freedom of expression for ends clearly contrary to the values of the Convention (see, as recent examples, Hizb ut-Tahrir and Others v. Germany (dec.), no. 31098/08, §§ 73–74 and 78, 12 June 2012, and Kasymakhunov and Saybatalov v. Russia, nos. 26261/05 and 26377/06, §§ 106–13, 14 March 2013).Footnote 50
This high threshold has been applied in new media cases such as Lilliendahl v. Iceland, where the applicant’s comments were considered under Article 10 of the Convention and the Court saw no reason to substitute its own assessment for that of the national courts. After Perinçek, the Court’s approach seems to focus primarily on review under Article 10 of the Convention rather than giving too much weight to the abuse clause. The abuse clause can, however, serve as an interpretative aid in the balancing test under Article 10 of the Convention.
The abuse of rights discourse under Article 17 is founded on the assumption that the state acts in good faith and that state authorities will not abuse their powers for ulterior purposes. For authorities that operate in bad faith, there is Article 18 (limitation on the use of restrictions on rights). The new era requiring the application of Article 18 concerns the use of various sanctions by a state (or a dominant party) to instil fear in the opposition, for example by invoking public interest grounds as an excuse for controlling and restricting the activities of opposition parties. Such bad faith acts suppress political pluralism, for instance by sanctioning political opponents with reference to public safety. This is a worrying trend, as states prove ready to use courts and authorities for ulterior purposes that suit those in power and work against their political adversaries.
The threshold for applying Article 18 is high, since the Court has acknowledged that restrictions imposed by governments can serve a plurality of purposes simultaneously.Footnote 51
Using restrictions for political purposes against the opposition has been a relevant issue in many European countries in recent years. In the Navalnyy v. Russia case (GC), the Court applied Article 18, finding that the restrictions had an ulterior purpose, ‘namely to suppress that political pluralism, which forms part of “effective political democracy” governed by “the rule of law”, both being concepts to which the Preamble to the Convention refers’.Footnote 52 The Court continued: ‘[A]s the Court has pointed out, notably in the context of Articles 10 and 11, pluralism, tolerance and broadmindedness are hallmarks of a democratic society’.Footnote 53
10.4.2 Changing Responsibilities and Obligations in the Field of Freedom of Expression
One of the most interesting questions concerning defending democracy relates to the distribution of responsibility. To what extent can the key rules of liability be transposed from traditional media to internet-based media, and what distinguishes media platforms with editorial procedures from social media platforms? This leads to questions about the responsibilities of individuals and of those providing the platforms for unlawful expression. From the outset, the responsibilities differ from traditional media, where clear rules place the onus on the publisher, who is ultimately responsible for every publication.
In internet-based media, in the case of speech that could cause instability and threaten the democratic system, no comparable editorial structures exist to prevent such speech from being published in the first place. The question is rather about the obligation to remove, after the fact, material that is unlawful and incites hatred and violence. Emphasis is also placed on social media rules, such as the Facebook community standards, against which content is monitored.Footnote 54 The Facebook community standards present social media as an environment not based on intimidation and exclusion, and Facebook’s hate speech policy is founded on anti-discrimination. Prohibited hate speech includes attacks against people on the basis of their ‘protected characteristics’ – race, sexual orientation, sex, gender identity, disability, or serious disease – but also on the basis of their immigration status. Such attacks can take the form of violent speech, dehumanising speech, or the mocking of the concept, events, or victims of hate crimes. Some exceptions are made when the content is satirical.
A number of the detailed examples provided in the Facebook community standards are quite astounding. It is also interesting to note that recent decisions of Facebook’s Oversight Board (hereinafter, ‘Board’ or ‘Oversight Board’) reflect a strongly contextual approach, and the Board has already dealt with several questions of contextual interpretation. The Board recently overturned a decision to remove a post (2022-008-FB-UA) comparing the Russian army in Ukraine to Nazis and quoting a poem that calls for the killing of fascists. The basis of the Board’s review was the six-part threshold test set out in the Rabat Plan of Action.Footnote 55 The test covers (a) context, (b) the speaker, (c) intent, (d) content and form, (e) the extent of the speech act, and (f) likelihood, including imminence.
The ECtHR doctrine on Facebook and social media responsibilities is still under development. The Grand Chamber reviewed Facebook-related issues in the case of Sanchez v. France (15 May 2023). The case was referred to the Grand Chamber after the Chamber judgment had found no violation of Article 10 of the Convention. The Grand Chamber came to the same conclusion but elaborated the reasoning and developed the necessity test.
The Grand Chamber’s argumentation rested on an in concreto assessment of the specific circumstances of the case and on the margin of appreciation. The applicant had been convicted because he did not delete unlawful comments (incitement to violence against Muslims) posted on the ‘wall’ of his Facebook account. The Grand Chamber found that the decisions of the domestic courts were based on relevant and sufficient reasoning.Footnote 56 The Sanchez case is interesting because it analysed whether the standards applied by the national authorities conformed to the principles embodied in Article 10. The proportionality of the impugned penalty was assessed under the test set out in the Chamber judgment: (a) the context of the comments, (b) the steps taken by the applicant to remove the comments once posted, (c) the possibility of holding the authors liable instead of the applicant, and (d) the consequences of the domestic proceedings for the applicant.Footnote 57 The elaborate argumentation relates to the context and to developments in the function of the internet. The Grand Chamber considered that, because the internet has become one of the principal means for individuals to express their opinions, any interference should be examined particularly carefully in order to avoid a chilling effect carrying a risk of self-censorship. At the same time, there are other dangers related to the dissemination of hate speech, and the Court applies the concepts of defamatory speech and other types of clearly unlawful speech, including hate speech and speech inciting violence.Footnote 58
In the electoral context there is broad freedom of expression, but ‘in the case of a racist or xenophobic discourse, such a context contributes to stirring up hatred and intolerance, as the positions of the candidates will inevitably harden and slogans or catchphrases become more prominent than reasoned arguments’. The underlying concern is that reasoned argument has suffered and that ‘the impact of racist and xenophobic discourse is then likely to become greater and more harmful’.Footnote 59 The reasoning rests on the premise that the role of politicians is to avoid comments that foster intolerance; they should take care to defend democracy and its principles, their ultimate aim being to govern.Footnote 60 The context of an election makes racist and xenophobic discourse more harmful.Footnote 61 The Court developed its review on the premise of the shared liability of all actors involved: responsibility does not lie with the producer alone, yet exempting producers from all liability might facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies, and disinformation.Footnote 62
There are also other contributions to the discussion of context. UN Special Rapporteur Frank La Rue devoted his report to this question, highlighting that context should be taken into account when analysing hate speech.Footnote 63 His basic argument was that ‘what is deeply offensive in one community may not be so in another’. La Rue pointed out that the factors worthy of consideration could include tensions between racial and religious communities, discrimination against the targeted group, the tone and content of the speech, the person inciting hatred, and the means of disseminating the expression of hate. He also considered the different weight to be given to publications reaching small, restricted groups as opposed to publications on a mainstream website.
The applicant’s own behaviour is also relevant in assessing the proportionality of sanctions. The Court suggested in particular that a politician experienced in communication must be aware of the greater risk that excessive and immoderate remarks might appear and necessarily become visible to a wider audience. The Court referred to the applicant’s deliberate choice to make his ‘wall’ public and to his being a professional in matters of online communication strategy.Footnote 64 The Court’s scrutiny is closely tied to the facts: whether the applicant could reasonably have been expected to review the content of the comments and, if necessary, delete them. In the applicant’s case, there was no excessive traffic that might have prevented him from effectively monitoring the situation.Footnote 65
This is one of the most controversial parts of the Court’s interpretation. To what extent is a person responsible for something published on their Facebook ‘wall’, which cannot be compared to a professional or journalistic website? If liability is extended to the owner of the Facebook ‘wall’, what kind of monitoring is adequate, and would such sanctions lead to self-censorship?
The problem is also related to the difficulty of demonstrating an intent to incite hatred. The European Commission against Racism and Intolerance (ECRI), for example, has reminded us that intent is often difficult to establish because the speech might not articulate it clearly. Even where the speech is ambiguous, however, the intent to incite racial hatred might be inferred from the strength of the language used or from other relevant circumstances; in this connection, the ECRI mentions the former behaviour of the speaker.Footnote 66
One ongoing topic is how community standards affect countries with weak national human rights protection. The Oversight Board has been particularly concerned about ‘Facebook removing content on matters in the public interest in countries where national legal and institutional protections for human rights, in particular freedom of expression, are weak’.Footnote 67 This concern has been reiterated especially in the Russian and Turkish contexts, given the problems with freedom of expression in those countries. In the Pro-Navalnyy protest case (2021-004-FB-UA), the Board considered that Facebook should have taken the situation in Russia into account.Footnote 68 The Board stated: ‘Facebook should have considered the environment for freedom of expression in Russia generally, and specifically government campaigns of disinformation against opponents and their supporters.’Footnote 69 The Board explained its approach in that case by balancing the values used in the community standards: ‘The decision to remove this content failed to balance Facebook’s values of “Dignity” and “Safety” against “Voice”. Political speech is central to the value of “Voice” and should only be limited where there are clear “Safety” or “Dignity” concerns.’Footnote 70 The Board considered ‘Voice’ especially important in a country such as Russia, where freedom of expression is routinely suppressed.
The Oversight Board also overturned the removal of content in a Turkish case, where the applicant’s Instagram post had originally been taken down.Footnote 71 The post encouraged people to discuss the solitary confinement of Abdullah Öcalan, a founding member of the Kurdistan Workers’ Party (PKK). The user did not advocate violence and did not express support for Öcalan’s ideology or the PKK; instead, the user sought to highlight human rights concerns about Öcalan’s prolonged solitary confinement, which had also been raised by international bodies. As the post was unlikely to result in harm, its removal was neither necessary nor proportionate under international human rights standards.Footnote 72
One interesting continuum that also sheds light on the limits of the argumentation over protecting democratic states is the case law under Article 11, concerning cases where political parties have faced dissolution. The objective of such dissolutions has often been tied to the threat that these parties allegedly pose to democracy through their political activities. One recent key case was DTP v. Turkey, in which the Turkish court had approved the dissolution of the Party for a Democratic Society (DTP), a party supporting a peaceful resolution to the Kurdish problem.Footnote 73 The Court did not accept the government’s argument that the political party’s goals were themselves against democracy: the DTP fully supported a peaceful solution to the Kurdish question and recognised Kurdish identity in Turkey. The Court assessed whether the DTP’s project was directed against the democratic system; a properly functioning political system must allow parties to introduce views that are contrary to the prevailing guidelines.Footnote 74 According to the Court, the DTP’s political project did not seek to undermine the democratic system, and the party emphasised its refusal to use violence.Footnote 75
10.4.3 The Role of Public Watchdogs and the Promotion of a Free, Independent, and Diverse Media Environment in the Court’s Case Law
Since freedom of expression is acknowledged as one of the foundations of a democratic society, its interpretation should naturally reflect the changing nature and characteristics of the prevailing democratic society. In this respect, it is imperative to note that, over past decades, freedom of expression has been understood mostly from the perspective of traditional media, that is, print and audiovisual media, which differs from the more diverse current media environment.
Democracy has always been closely related to public debate and the maintenance of transparency within the system, but the views of those engaging in public debate in the past have not been as diverse as they could have been. As such, the traditional understanding of whose speech should be protected is also in transition. As a consequence, how to balance the pluralism and diversity of views expressed in the media while upholding the independence of the media has become an essential question to consider.
In a democratic system, the media has an important role as a public watchdog. It has a right to impart news and ideas and disseminate information on difficult and controversial issues. In order to have public debate on important social issues, the Convention allows reporting on issues that might include expressions not protected by the Convention. In the balancing process between freedom of expression and restrictions based on protecting democracy and the rights of others, context is a key factor and has an obvious impact on the limits of free speech.
The role of the media as a public watchdog has often been mentioned in the Court’s case law since the Barthold case,Footnote 76 and has been applied to both print and broadcast media. It has been invoked in the context of public debate and in arguing how important it is in a democratic society not to hamper public debate with disproportionate sanctions. In the famous Jersild case, news reporting and interviews were emphasised as among the most important means whereby the press is able to play its vital role as public watchdog.Footnote 77 Later, the protection of sources was considered vital to the role of the press as a public watchdog, and in Goodwin v. the UK the Court also acknowledged the chilling effect that the disclosure of sources would cause.Footnote 78
These cases in the 1980s and 1990s reflect the traditional media environment and the media’s role in the democratic system. However, the watchdog concept has evolved in recent years to include a larger group of actors, which is better suited to the new media environment. In a number of cases, the Court has discussed the different kinds of roles public watchdogs could perform in a democratic society, especially the impact of restrictions on public debate in the case of censorship in the name of personality rights.Footnote 79 In particular, the case of Magyar Helsinki Bizottság v. Hungary (2016) specified what kinds of actors would qualify as public watchdogs in the context of freedom of expression. The Court based its interpretation on the key task in a democratic society, which is to facilitate and foster the public’s right to receive and impart information and ideas.Footnote 80
In the Magyar Helsinki case, the Court mentioned the creation of platforms for public debate. It drew an analogy between the press and NGOs that draw public attention to matters of public interest, characterising such NGOs as social watchdogs warranting similar protection. In this line of reasoning, the role of civil society was recognised as making an ‘important contribution to discussion of public affairs’.Footnote 81 The Court also recognised the role of scholars: according to the Court, a high level of protection extends as well to academic researchers and authors of literature on matters of public concern.Footnote 82
More recently, the Court has extended its approach to the media environment. The key concepts in the NIT SRL v. Moldova case are media pluralism and independent media. The NIT SRL case introduces concepts relevant to creating a media environment that in itself provides a pre-emptive self-defence of democracy. In NIT SRL, the Court considered the key principles of pluralism in the media, stating that it is ultimately part of the state’s positive obligation to guarantee that diverse political programmes are proposed and debated, including programmes that question the current system, provided that they do not harm democracy. It is not sufficient merely to provide a plurality of channels; it is also necessary to provide effective access to such channels and to guarantee a diversity of programmes reflecting, as far as possible, the variety of opinions in society.Footnote 83 The Court considers it dangerous when some corporations hold a dominant position and can exercise pressure on broadcasters, eventually curtailing their editorial freedom and undermining freedom of expression.Footnote 84 It is important to note that external and internal pluralism should not be considered in isolation: external pluralism normally refers to a multiplicity of channels, while internal pluralism concerns the diversity of content within a channel.
The Court referred to the multifaceted character and sheer complexity of issues concerning media pluralism. There are a variety of means that may be deployed by the state to ensure effective pluralism. Therefore, the authorities enjoy a wide margin of appreciation regarding the choice of means to ensure media pluralism.Footnote 85
This wider discretion relates to the choice of means, but the Court also referred to a long tradition of case law acknowledging that the margin is narrow when it comes to the editorial freedom of the press and the choice of appropriate journalistic methods. The NIT SRL case gives a detailed analysis of Moldova’s regulatory system and its background, examining how the regulatory framework operated and what kind of oversight was provided. The Court noted that the legislation behind the regulatory framework had been drafted to ensure proportionality: it was made with careful consideration and a genuine effort to strike a fair balance between competing interests. That this fair balance was achieved at the parliamentary level is paramount. According to the Court, the regulatory framework was also supervised, and the state operated well within its margin of appreciation.
In addition to the protection of journalistic sources (Goodwin v. the UK) becoming more important than ever, protecting whistleblowers under the scope of Article 10 is also a matter of defending democracy. There is a strong continuum of case law emphasising the importance of imparting information and reporting on illegal conduct and wrongdoing.Footnote 86 The same has also been confirmed by the UN Special Rapporteur, who referred to the expanded view of the concept of journalism expressed by the Human Rights Committee, according to which the practice of journalism is carried out by ‘professional full-time reporters and analysts, as well as bloggers and others who engage in forms of self-publication in print, on the Internet or elsewhere (Human Rights Committee, general comment No. 34, para. 44)’.Footnote 87
The spread of an increasing amount of disinformation has a disturbing impact on trust and confidence in the democratic system. Those who distribute disinformation seek to control the media and public opinion, and there are often ulterior motives behind it.Footnote 88 Disinformation is frequently part of political tactics, its purpose being to mislead public discussion. It is also often part of a state’s hybrid tactics to destabilise other states during elections and other democratic processes through disinformation and other forms of influence.
While previous case law approached the defence of democracy from the perspective of traditional activities related to electoral rights, new restrictions under the Court’s scrutiny often reveal a widespread aim of preventing opposition forces from participating in public debate, rather than merely controlling old-fashioned media channels. Recent cases also reflect that the Court has become active in imposing positive obligations to protect the victims of hate speech, especially minorities. Where governments can no longer control the media as before, the measures taken appear severe, and the rights protected under the Convention are called into question comprehensively.
The role of media companies has become instrumental, especially in recognition of media pluralism in a healthy democracy. In addition to national media outlets, social media giants can influence the distribution of disinformation. What is the responsibility of those giving a platform to hate speech and disinformation? What is the role of Article 17 that so far has been related to a rather small number of extremist opinions? Could it be used by those who are distributing disinformation? What is the role of extraterritorial jurisdiction doctrine in new media? These are some of the essential questions to be answered in order to deal with those abusing their freedom of expression for hate speech and disinformation.
10.5 Concluding Observations
So far, the development of the defending democracy doctrine has been considered to involve only a narrow set of extreme political movements pursuing totalitarian ideologies. The relevance of the historico-political context, and especially the history of events in Europe, has made it necessary for the Court to establish a doctrine based on the ‘self-defence’ of democracy. The Court has repeatedly reminded us of the scars related to Nazism and Communism. It has also reminded us of specific threats related to religious extremism. The historico-political context in different countries, especially the stability of the democratic system, constitutes one of the relevant issues to be assessed in the process of achieving a fair balance. On the one hand, this has allowed different standards in countries with close historical connections to past human rights atrocities; for example, those who deny historical events such as the Holocaust are considered under Article 17, their conduct thus being examined as an abuse of rights. On the other hand, historical context is relevant in states with a short democratic history and, in some cases, political forces that retain support for old authoritarian regimes.
But the contextual discussion is currently also revealing another side of contextualism. The proportionality test should take into account the prevailing level of protection of freedom of expression in the country concerned. This is apparent in recent cases before Facebook’s Oversight Board, which has considered that weak protection should also affect how community standards are applied and whether removing posts is proportionate. The Board has acknowledged that in countries such as Russia and Turkey, there should be a higher threshold for removing Facebook posts.
The European Court’s doctrines on the breadth of the margin of appreciation can provide a clear sign that regulating a complex and multifaceted issue requires a wider margin. Still, when authorities enter into the questions at the core of journalistic activities, such as the choice of journalistic methods, the margin is obviously narrow.
Democracy should be founded on pluralistic media, which, as one of the public watchdogs, ensures critical public debate on issues of public interest. When the authorities start making decisions on what is an acceptable issue to be discussed and what is not, this will ultimately lead to restrictions that jeopardise the core of freedom of expression. The Court states that it is not for the authorities or the Court ‘to review the press’s own appreciation of the news or information value of an item or to substitute their views for those of the press on what methods of objective and balanced reporting should be adopted by journalists’.Footnote 89
When regulating audiovisual media, it has been necessary to take into account practical issues related to limited frequencies. Case law has continued to adopt, little by little, new elements related to a contemporary democratic system, in which democracy is something beyond elections and more a matter of a comprehensive democratic system. On a global level, we are already seeing increasing repression of freedom of expression and media integrity under the guise of fighting disinformation and protecting infrastructure against cyber-attacks.Footnote 90 This kind of trend can also be detrimental to European countries.
Threats in the new media environment are, to a certain extent, similar to those facing traditional print or audiovisual media. The Court founded its case law on the idea of the chilling effect that different sanctions can have on the media. It was important that the media and journalists were able to impart information to the public without censorship. The media’s access to information was key to performing as a public watchdog over those in power. With the democratic system becoming more complex, including multiple other actors in addition to the media who directly impart information, it is necessary to talk of public watchdogs in the plural, including those who challenge the government and corporations, whether they be individual activists or networks of activists.
However, an even more significant development in the case law concerns the recognition that the defence of democracy depends on understanding how modern democracy works and, especially, how one of the foundations of democratic society, freedom of expression, has been transformed in the new media environment. There is ultimately a fear that restrictions become the rule and protection the exception. The Oversight Board has revealed many fundamental failings with respect to social media giants such as Facebook and Instagram removing posts, leading to a quest for transparency and for specific rules on removing posts, because the policy currently applied is in danger of silencing social debate.
Freedom of expression has changed substantially in the sense that governments are acting in bad faith against opposition parties. Where the focus was once Article 17 and extreme groups, it is now Article 18 that is no longer just an infrequent curiosity of the Convention but a growing trend, in cases such as Navalnyy. At the same time, more restrictions on NGOs are being discussed, as in the case of Ecodefence and 60 others v. Russia. The common thread in these cases is that the aim is to produce a chilling effect and to control social debate.
There is a lack of clear interpretative guidance, especially concerning social media posts and the application of traditional freedom of expression doctrines to them. It is not simply a case of proceeding with the Jersild doctrine, which arose at a time when the situation was much clearer in terms of ‘informing the public about a social issue’. In the new media environment, there is a strong need to develop criteria for freedom of expression and for how best to apply them in different contexts. The Court has already decided several cases since Delfi v. Estonia. Despite the idea of source neutrality, the different context must be taken into account when modifying doctrines to make them relevant to cases where the role of digital media has become central.
11.1 Introduction
The dissemination of false, incomplete, erroneous, distorted, or intrusive information is a phenomenon as old as humanity, but since the development of information and communication technologies (ICTs), it has re-emerged with a vigour never seen before. Its causes and effects have varied considerably, to the point that it has become a very dangerous threat not only to human rights but also to the stability of the democratic system.
As the United Nations (UN) Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, explains:
More than 2,000 years ago, Octavian spun a vicious disinformation campaign to destroy his rival Mark Anthony and eventually become the first Roman emperor Augustus Caesar. Since those ancient times, information has been fabricated and manipulated to win wars, advance political ambitions, avenge grievances, hurt the vulnerable and make financial profit.
What is new is that digital technology has enabled pathways for false or manipulated information to be created, disseminated and amplified by various actors for political, ideological or commercial motives at a scale, speed and reach never known before. Interacting with political, social and economic grievances in the real world, disinformation online can have serious consequences for democracy and human rights.Footnote 1
Among the many examples of the use of false information with the purpose of manipulating the population (or certain sectors of it), the alleged impact of fake news (selectively or widely disseminated through social media) on recent elections is usually cited: more concretely, the 2016 referendum in the UK on whether or not it should continue to be part of the European Union (EU), so-called Brexit; the presidential election held in the US in the same year, in which Donald Trump was elected, and the resulting Facebook/Cambridge Analytica scandal; and the Italian elections in 2018. Even though there is no consensus that the impact of fake news on these processes was significant, the tight results obtained suggested that it may have been decisive.Footnote 2
The use of fake news for political and electoral purposes was not the only preoccupying reality. Soon other uses appeared and turned the phenomenon of disinformation into a much more worrying matter, as can be recognised in the context of the COVID-19 pandemic, during which a large amount of false information circulated – even without the objective of manipulating the population – with lamentable results.
In this regard, Khan held:
In recent years, in a number of countries, State-led disinformation campaigns have sought to influence elections and other political processes, control the narrative of public debates or curb protests against and criticisms of Governments. In the context of the COVID-19 pandemic, there have been various instances of State actors disseminating unverified claims about the origins of the virus responsible for COVID-19, denying the spread of the disease or providing false information on infection rates, fatality figures and health-care advice. Such disinformation has been detrimental to efforts to control the pandemic, endangering the rights to health and life, as well as people’s trust in public information and State institutions.Footnote 3
Such misconduct by high-ranking public officials, deputies, and even presidents was also verified in many cases in Latin America.Footnote 4 This worrying reality has been particularly favoured by the appearance, within the framework of Web 2.0, of social media (websites and computer programs that allow people to communicate and share information on the internet using a computer or a mobile phone), where social networks and deepfakes emerged as the preferred and most effective means of communicating fake news to people likely to believe them.Footnote 5 But the phenomenon of fake news is due not only to the technological factor but also to other epistemological, economic, affective, and political factors.
As Melo points out when considering those factors:
rumors and falsehoods spread in two different ways: social cascades and group polarization […]. A cascade is born when a group of influential people say or do something and others follow in their footsteps. Group polarization occurs when people with intellectual affinities come together, and thus end up defending a more extremist version than the one they held before they began to communicate with each other […].
Cascades are generated because all people tend to depend on others: if most of the people we know believe a rumor, we are also inclined to believe it. In the absence of our own information, we accept the opinions of others […]. In the economy, rumors can give rise to speculative bubbles… in public health they can generate anti-vaccination movements […]. Rumors and falsehoods are often responsible for sowing panic […]. [F]ear spreads quickly from one person to another, and if rumors provoke strong emotions, such as fear or indignation, it is much easier for them to spread.
How to manage the risk that cascades and polarisations lead to people believing false rumours? The most intuitive answer is related to freedom of expression: people should be offered objective information and the necessary corrections from those who know the truth, but this does not always work […]. Many people trust the market of ideas as a way to guarantee the truth.Footnote 6
In this context, owing to the importance of public and private interests at stake, neither states and the international community nor the mass media and other social media actors (websites and applications that enable users to create and share content or to participate in social networking) have remained static, and have gradually come to face the need to set limits on freedom of expression in order to prevent or even to reduce the pernicious effects or spread of false information. As Warren and Brandeis pointed out, when new inventions converged at the end of the nineteenth century (the telegraph, the telephone, photography, and the rotary press) and competition between newspapers became more acute (causing the appearance of ‘yellow journalism’ or ‘infotainment’, as it is called today):
That individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection. Political, social, and economic changes entail the recognition of new rights, and the common Law, in its eternal youth, grows to meet the demands of society.Footnote 7
In the most relevant declarations and constitutions of the late eighteenth century, freedom of expression was recognised as a preferred freedom, and as a consequence, even false or erroneous speech was protected. However, following the ideas of Warren and Brandeis,Footnote 8 this freedom to inform – originally conceived in almost absolute terms – began to be subject to restrictions. Mechanisms such as the right of reply or the right to be forgotten in its pre-internet version (conceived not as a right to de-index content but as a right to be compensated for the consequences of publishing old information without current relevance that unjustifiably harms a person) are two clear examples. More recently, new factual configurations have demanded new responses from governments, the international community, and the private sector, especially from information society services, which have a special responsibility when providing them. This need was seen much more clearly in the last decade, when knowingly false information was disseminated to alter electoral processes, to exert influence in the context of the war between Russia and Ukraine, and even in the COVID-19 pandemic scenario. As was clearly stated:
COVID-19 became the perfect storm for platforms. The increasing pressure they had been facing to remove problematic content became a public health issue. Disinformation surrounding the pandemic spread like the virus – at the hands of political leaders and influencers – community social media guidelines were insufficient and inconsistent, and those responsible for enforcing them had to confine themselves to their homes. Amid the confusion, platforms quickly began announcing new rules and measures to deal with misinformation. Regarding community rules, the actions reported during the pandemic have focused more on the rules on the content of the publications, than on rules about inauthentic activities.Footnote 9
The main problem in adopting such concrete and multiple measures to combat disinformation in general, and fake news in particular, in this more recent and complex context is how to evaluate the veracity of information and which kinds of measures, adopted by both the public and private sectors, could be compatible with democracy and human rights.
The purpose of this chapter is precisely to analyse in general terms the phenomenon of disinformation in the new technological context (which has added the vaster field of the internet and social networks to the already wide field of action of traditional media), and more particularly fake news, which has become even more popular in recent years as a weapon for achieving political or economic gains, leading to the ‘disinfodemic’ and to interesting and relevant conflicts between freedom of expression and various individual and collective rights (e.g., access to information, privacy, data protection).
There are, of course, other serious issues that have been and will continue to be generated by the new information technologies and that are closely linked to the problem of false news. We will not discuss them here for reasons of space, but they must at least be mentioned: dangerous speech (including hate speech, which can be combated with the same tools that we describe here); the effects on the rights of a specific person caused by the permanence on the internet of negative information that is true and that, although lawfully published at the time, has lost relevance owing to the passage of time (which finds a partial remedy in the right to be forgotten); and the appearance of cancel or call-out culture, which condemns those deemed to have acted or spoken in an unacceptable manner, with the aim of ostracising, boycotting, or shunning them (situations for which the right to be forgotten is clearly insufficient and which require other forms of mitigation).Footnote 10
11.2 Real and False News: Information, Disinformation, Misinformation, Malinformation, Propaganda, Fake, and Fabricated News
According to the Cambridge dictionary, information means ‘facts about a situation, person, event, etc.’ (UK), ‘news, facts, or knowledge’ (USA) and, in the business lexicon, ‘facts or details about a person, company, product, etc.’.Footnote 11
In the same dictionary, disinformation is ‘false information spread in order to deceive people’;Footnote 12 misinformation is ‘wrong information, or the fact that people are misinformed’;Footnote 13 fake news is ‘false stories that appear to be news, spread on the Internet or using other media, usually created to influence political views or as a joke’;Footnote 14 and propaganda is defined as ‘information, ideas, opinions, or images, often only giving one part of an argument, that are broadcast, published, or in some other way spread with the intention of influencing people’s opinions’.Footnote 15
Malinformation and real, false, and fabricated news are not defined in the dictionary. Beyond its association with the concept of news, the meaning of ‘information’ is not exempt from debate because some authors think that ‘information encapsulates truth, and hence that false information fails to qualify as information at all’, and explain that the distinction between ‘information as true, and misinformation and disinformation as false, collapses due to the possibility of true misinformation and true disinformation’.Footnote 16
In short, the defined terms are few and, furthermore, these definitions do not enjoy unanimous agreement in the academic field, so we will address them here drawing on other sources as well.
11.2.1 Disinformation, Misinformation, Malinformation
Within the wide range of modalities that disinformation encompasses, for academic purposes it is worth distinguishing between the following information disorders:
(a) Disinformation (false information shared intentionally to cause harm, for example, fabricated or deliberately manipulated content; intentionally created conspiracy theories or rumours),
(b) Misinformation (false information shared with no intention of causing harm, caused, for example, by unintentional mistakes such as inaccurate photo captions, dates, statistics, translations, or when satire is taken seriously), and
(c) Malinformation (‘genuine information shared with the intention to cause harm’Footnote 17; for example, the publication of private information for personal or corporate interest, such as revenge porn).
About the use of the term ‘disinformation’, in a recent report, Khan explains:
There is no universally accepted definition of disinformation. While the lack of agreement makes a global response challenging, the lack of consensus underlines the complex, intrinsically political and contested nature of the concept […]. Part of the problem lies in the impossibility of drawing clear lines between fact and falsehood and between the absence and presence of intent to cause harm […].
The European Commission has described disinformation as verifiably false or misleading information that, cumulatively, is created, presented and disseminated for economic gain or to intentionally deceive the public and that may cause public harm […] The Broadband Commission for Sustainable Development, on the other hand, has approached disinformation as false or misleading content with potential consequences, irrespective of the underlying intention or behaviours producing and circulating messages. National laws and regulations dealing with disinformation cover a varied combination of false or misleading information, the intention to cause harm or not and the nature of the harm caused or intended. Disinformation is often described in broad, ill-defined terms not in line with international legal standards […].
Academics have developed a taxonomy of an information disorder in which ‘disinformation’ is described as false information that is knowingly shared with the intention to cause harm, ‘misinformation’ as the unintentional dissemination of false information and ‘malinformation’ as genuine information shared with the intention to cause harm […]. By setting out a holistic and interconnected picture of the problem, the information disorder framework encourages a multidimensional, varied and contextualized approach to disinformation […].
Some academics have framed the phenomenon of disinformation as ‘viral deception’ consisting of three vectors: manipulative actors, deceptive behaviour and harmful content. (Camille François, ‘Actors, behaviors, content: a disinformation ABC’ (Transatlantic Working Group, September 2019). The focus is on online behaviour rather than on the veracity of content. Some large social media platforms, including Facebook, refer to these vectors to inform their policies on responding to coordinated inauthentic behaviour […].
Ultimately, the lack of clarity and agreement on what constitutes disinformation, including the frequent and interchangeable use of the term misinformation, reduces the effectiveness of responses. (Submission from the United Nations Educational, Scientific and Cultural Organization. A/HRC/47/25 4) It also leads to approaches that endanger the right to freedom of opinion and expression. It is vital to clarify the concepts of disinformation and misinformation within the framework of international human rights law […].
For the purposes of the present report, disinformation is understood as false information that is disseminated intentionally to cause serious social harm and misinformation as the dissemination of false information unknowingly. The terms are not used interchangeably […].Footnote 18
11.2.2 Real and False News: Fake or Fabricated News
Even at the risk of oversimplification, false news can be understood, by exclusion, as information about something that has happened recently that is totally or partially untrue. To fully understand the scope of this definition – which obviously presents an information disorder – we must first determine what is meant by real news.
11.2.2.1 Real News
The definition of ‘real news’ is complex and can vary with geographical and personal factors; it also involves several further difficulties:
(a) the notion of truth depends on the subjective interpretation of reality,
(b) deciding whether something is ‘real’ requires a grounding of truth (e.g., a ‘collective consensus’), and
(c) most definitions take into account the editorial decision-making process used to determine the credibility and accuracy of the information, since journalistic labour is governed by the principles of verification, independence, and the obligation to report the truth (false news typically imitates real news in its shape, but not in organisational process or intent).
Real news is created through journalism, and includes ‘hard news’ – serious important news that is considered to be of interest to many people, either in a particular area or country, or in the world;Footnote 19 ‘breaking news’ – information that is received and broadcast about an event that has just happened or just begun;Footnote 20 and ‘soft news’ – news that is a mixture of information and entertainment, often relating to people’s private lives.Footnote 21
In order to distinguish real news from what is not, Molina et al. indicate that real news has the following specific differential features:
I. Message and linguistic:
(a) factuality: fact-checked, impartial reporting, uses last names to cite,
(b) evidence: statistical data, research-based,
(c) message quality: journalistic style, edited and proofread,
(d) lexical and syntactic: frequent use of ‘today’, past tense,
(e) topical interest: conflict, human interest, prominence.
II. Sources and intentions:
(a) Sources of the content: verified sources, quotes and/or attributions, heterogeneity of sources,
(b) pedigree: originated from a well-known site/organisation, written by actual news staff,
(c) independence: organisation associated with the journalist.
III. Structural:
(a) URL: reputable ending, URL has normal registration,
(b) About Us section: a clear About Us section, authors and editors who can be verified, a Contact Us section, and e-mails from the professional organisation.
IV. Network:
(a) Metadata: Metadata indicators of authenticity.Footnote 22
Complementing this idea, Verma et al. point out that any fake news detection system used to decide whether an article is fake should exploit not only the textual features of tweets (e.g., writing style and emotions) but also the characteristics of the users who propagate fake news (e.g., follower count and verified profile). To this end, various computational techniques, including long short-term memory networks, hierarchical attention networks, and natural language processing, are used to design fake news detection systems with improved accuracy.Footnote 23
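The hybrid approach Verma et al. describe – combining textual features of a post with characteristics of the user who propagates it – can be illustrated with a deliberately simplified scorer. The features, weights, and thresholds below are invented for illustration and bear no relation to the trained LSTM or attention models the authors discuss:

```python
# Toy sketch of a hybrid scorer combining textual features of a post
# with characteristics of the propagating user, in the spirit of the
# systems Verma et al. describe. Weights and thresholds are invented
# for illustration; real systems learn them from data.

def text_features(text: str) -> dict:
    words = text.split()
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    return {
        "exclamation_density": text.count("!") / max(len(words), 1),
        "all_caps_ratio": caps / max(len(words), 1),
    }

def user_features(follower_count: int, verified: bool) -> dict:
    return {"low_reach": follower_count < 100, "verified": verified}

def suspicion_score(text: str, follower_count: int, verified: bool) -> float:
    """Higher score = more suspicious. Purely illustrative weights."""
    t = text_features(text)
    u = user_features(follower_count, verified)
    score = 0.0
    score += min(t["exclamation_density"], 1.0) * 0.4
    score += min(t["all_caps_ratio"], 1.0) * 0.3
    score += 0.2 if u["low_reach"] else 0.0
    score -= 0.3 if u["verified"] else 0.0
    return max(score, 0.0)

breathless = "SHOCKING!!! They DON'T want you to KNOW this!"
sober = "The ministry published its annual budget report today."
print(suspicion_score(breathless, follower_count=50, verified=False))
print(suspicion_score(sober, follower_count=20000, verified=True))
```

A production system would, as the authors note, replace these hand-set rules with models trained on labelled corpora; the sketch only shows how the two feature families can be fused into a single signal.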
11.2.2.2 False Information: False News
Section 11.2.2.1 established that false news should be understood as information that, for different reasons, does not conform to the truth and therefore constitutes an information disorder that can lead to disinformation, misinformation, or malinformation – even though those who create or disseminate the flawed content are not always aware of this, and often do not even intend to harm. According to Verma, Rohilla, Sharma, and Gupta, the false information that circulates on social media, with a greater or lesser degree of fraudulence, takes various forms and can be classified by its content and by the intentionality of the action, as follows:
(a) satire or parody (one that, although it does not seek to cause damage, has misleading potential),
(b) false connections (titles, images, and quotes are not faithful to the content),
(c) misleading content (information is distorted to create a different reality),
(d) false context (genuine information located in a factual or temporal context different from the real one),
(e) imposter content (the identity of genuine sources is impersonated),
(f) manipulated content (information or images manipulated to deceive), and
(g) fabricated content (totally false content, designed to deceive and cause damage).Footnote 24
A recent paper by D’Amorim and Fernandes de Oliveira Miranda identifies the following mis-, dis-, and malinformation disorders:
(a) Fake news (information that is completely fabricated for the purpose of either making money or advancing a particular political or social agenda, typically by discrediting others, such as exposed fabrications, hoaxes, and news satire).
(b) Hoaxes (another type of deliberate fabrication or falsification in the mainstream or social media, such as rumours, fake graphics or tables, false attribution of authorship, and dramatic images).
(c) News satire or parody (usually found in humorous news websites based on irony, often in a mainstream format).
(d) Fake reviews (especially used in e-commerce platforms to influence the purchasing of products and services).
(e) Bias (belief bias, confirmation bias, and anchoring).
(f) Propaganda (commonly used as a dangerous persuasive political tool to shape large-scale opinion, influencing people through Pavlovian conditioning, which pairs a stimulus with a conditioned response).
(g) Retracted papers (e.g., about 500 papers were backed by the false discovery of the ‘Piltdown man’, a hominid fossil forged about 100 years ago that purported to reveal facts about human evolution but was a fraud that took about forty years to detect).
(h) Conspiracy theories popularised on internet forums and social media (e.g., the COVID-19 anti-vaccine and flat-earthers movements, undermining efforts to end a pandemic and challenging even already consolidated scientific discoveries).
(i) The incorrect use of maps, charts, and graphics (misleading representations with the attempt to support arguments).
(j) Phishing as a malinformation mechanism that misuses personal and/or confidential information. Identity theft, attempts to tarnish a reputation, profile cloning, denying access to email, and financial loss are, for example, results of phishing.
(k) Filter bubbles (algorithm-based; they can amplify and at the same time isolate viewpoints and narratives, spreading misinformation by creating a personal ecosystem of information).
(l) Echo chambers (selective exposure to information or messages among users who have a more emotional relationship with information, favouring the circulation of information that reinforces their pre-existing views, maximising ideological polarisation and reinforcing different types of intolerance).
(m) Political use of sensitive information (exaggerations or purposely inflating or deflating numbers to make a point).
(n) Misuse of personal/confidential information (e.g., the Facebook/Cambridge Analytica scandal, an unprecedented data breach involving the harvesting of private information from over 50 million Facebook profiles).Footnote 25
11.2.2.3 Fake News and Propaganda
The broad dissemination of false information is commonly known as false or fake news, but these terms do not have the same meaning around the globe. While some authors treat them as synonymous, others, as noted, view false news as a genre that includes fake news – a concept coined more precisely, first to refer to false statements emanating from high-profile public officials and later extended to cover statements by the press and the criticisms disseminated through social networks.
The power of fake news lies in its ability to offer information in accordance with the existing convictions of like-minded people; for this reason, the cosmos of social media in general, and social networks in particular, is a fertile field for its gestation, since the business model of these applications favours the generation of content and information that appears credible and attractive. In this sense:
what makes fabricated news unique is the information environment we currently live in, where social media are key to dissemination of information and we no longer receive information solely from traditional gatekeepers. Nowadays, it is not necessary to be a journalist and work for a publication to create and disseminate content online. Laypersons write, curate, and disseminate information via online media. Studies show that they may even be preferred over traditional professional sources. This is particularly troublesome given that individuals find information that agrees with prior beliefs as more credible and reliable, creating an environment that exacerbates misinformation because credible information appears alongside personal opinions.Footnote 26
We stated before that the fabrication and dissemination of false news for political reasons is an ancient phenomenon, but this phenomenon unexpectedly re-emerged with great vigour in the last decade, and clear evidence of this is that the Oxford dictionary selected ‘post-truth’ as the word of the year in 2016 and the Collins dictionary did the same with the term ‘fake news’ in 2017.
As recently stated:
the concept known as ‘disinformation’ during the World Wars and as ‘freak journalism’ or ‘yellow journalism’ during the Spanish war, can be traced back to 1896 (Campbell, 2001; Crain, 2017). Yellow journalism was also known for publishing content with no evidence and therefore factually incorrect, often for business purposes (Samuel, 2016). In Yarros’ (1922) critique, yellow journalism is characterized as ‘brazen and vicious “faking,” and reckless disregard of decency, proportion and taste for the sake of increased profits.’ As if history were repeating itself, the phenomenon regained attention during the 2016 U.S. Presidential elections.Footnote 27
The very recent resurgence of fake news has brought new uses and a narrowing of the concept (most now exclude from the term the publication of false stories as a joke). As Wardle points out, the term has recently been used by politicians (e.g., former US president Donald Trump) ‘as a weapon to attack a free and independent press’.Footnote 28 Accordingly, Barclay narrowly defines fake news as ‘information that is completely fabricated for the purpose of either making money or advancing a particular political or social agenda, typically by discrediting others’.Footnote 29
As Botero Marino points out, if we understand fake news as the publication or massive dissemination of false information of public interest, knowing its falsehood and with the intention of deceiving or confusing the public or a fraction of it, the concept is based on three elements: a material element (the massive dissemination of false information), a cognitive element (the effective knowledge of the falsity of the information that is manufactured and/or divulged), and a volitional element (the intention to deceive or confuse the public or a fraction of it). The volitional element is particularly useful in order to distinguish fake news from satire. Satire can consist of the publication of false information, knowing that it is false, without – or with – the intention of misleading or confusing the public. Satire, in accordance with the jurisprudence of the European Court of Human Rights (Vereinigung Bildender Künstler v. Austria),Footnote 30 enjoys special and reinforced protection from the right to freedom of expression.Footnote 31
As a consequence of this new and restricted concept, it is more obvious that the dissemination of fake news is a core component of propaganda, as both constitute forms of distorting reality in order to influence the opinions and actions of a given audience and obtain beneficial results.Footnote 32
Molina et al. refer to the different possible contents included in the concept of fake news:Footnote 33
(a) false news/hoaxes,Footnote 34
(b) commentary, opinion and feature writing,Footnote 35
(c) misreporting,Footnote 36
(d) polarised and sensationalist content,Footnote 37
(e) satirical news,Footnote 38
(f) persuasive information,Footnote 39 and
(g) citizen journalism.Footnote 40
In addition, they conclude that identifying fabricated information online before it goes viral is imperative for maintaining an informed citizenry able to make the decisions a healthy democracy requires, and that machine learning drawing on several types of indicators could provide a way to identify fabricated information despite its overlapping boundaries with other types of content.Footnote 41 With this in mind, they propose an original taxonomy of online content as a precursor to identifying signature features of fabricated news, including a table in which seven indicators (fact checked, emotionally charged, source verification, registration inconsistency, site pedigree, narrative writing, and humour), converted into yes/no questions for a decision-tree algorithm, are used to distinguish fake news from other types of content. The aim is to develop algorithms for detecting fabricated news – not to impinge on users’ right to express their opinions or on journalists’ work, but to stop the dissemination of false information.Footnote 42
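Molina et al.’s seven yes/no indicators lend themselves to a hand-written decision procedure. The branching order and output labels below are an invented sketch; the authors’ actual decision tree is derived from data, not hand-coded:

```python
# Hand-rolled sketch of a decision procedure over the seven yes/no
# indicators Molina et al. propose (fact checked, emotionally charged,
# source verification, registration inconsistency, site pedigree,
# narrative writing, humour). The branching order and labels are
# illustrative assumptions, not the authors' learned tree.

def classify(fact_checked: bool, emotionally_charged: bool,
             sources_verified: bool, registration_inconsistent: bool,
             known_pedigree: bool, narrative_style: bool,
             humorous: bool) -> str:
    if humorous:
        return "satire"                      # jokes fall outside fake news
    if fact_checked and sources_verified and known_pedigree:
        return "real news"
    if registration_inconsistent or not known_pedigree:
        if emotionally_charged or narrative_style:
            return "likely fabricated"
        return "unverified content"
    return "opinion/commentary"

print(classify(True, False, True, False, True, False, False))   # real news
print(classify(False, True, False, True, False, True, False))   # likely fabricated
print(classify(False, False, False, False, True, False, True))  # satire
```

The point of the sketch is structural: each indicator becomes one binary split, so the whole taxonomy can be evaluated in a handful of comparisons once the yes/no answers are available.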
11.2.2.4 Ten Keys Suggested by the Private Sector and Academia to Detect and Stop the Spread of Fake News
A recent study found that 86 per cent of Spaniards have difficulty distinguishing between real and false information; its researchers provided the following ten tips for detecting when information is false:
(a) Be wary of headlines. Fake news often has eye-catching headlines in all caps with exclamation marks and often shocking, unheard-of information.
(b) Always examine the URL. A fake web address, or one that copies a real one, can indicate fake news. Check the characters of the URL carefully, because the differences often lie in small details.
(c) Investigate the source of the news, particularly before sharing or disseminating it. Some social networks such as Facebook (until 2025) or Google have enabled the Fact Checking button so that users can certify the veracity of the information.
(d) Pay attention to the format. Many fake news sites have misspellings or weird layouts.
(e) Take a close look at the photos and do an image search. Fake news often contains manipulated images or videos, even based on authentic photos taken out of context.
(f) Check the dates. Fake news can have a nonsensical timeline or include altered dates.
(g) Check the facts and the author’s sources to confirm that they are accurate. If the identity of supposed experts is not mentioned, it is possible that the news is false.
(h) Check other news. If no other news source is reporting the same story, it may be false.
(i) Consider that the story may be a joke, especially when the source of the news is known for its parodies. If the details and the tone suggest that it has been written in a humorous way, it is not fake news.
(j) Be critical. Some stories are false on purpose and for hidden or malicious purposes.Footnote 43
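Two of these tips – the exclamatory all-caps headline in (a) and the copycat URL in (b) – are mechanical enough to automate. The sketch below, using only Python’s standard library, flags headlines with those markers and domains that closely resemble, but do not match, a list of well-known outlets; the reference domain list and the 0.8 similarity threshold are arbitrary illustrative choices, not a vetted methodology:

```python
# Minimal automation of tips (a) and (b): flag exclamatory all-caps
# headlines, and flag domains that imitate well-known outlets. The
# domain list and similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher
from typing import Optional

KNOWN_DOMAINS = ["bbc.com", "elpais.com", "lemonde.fr", "nytimes.com"]

def suspicious_headline(headline: str) -> bool:
    """Tip (a): exclamation marks plus a high share of all-caps words."""
    words = headline.split()
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    return "!" in headline and caps / max(len(words), 1) > 0.3

def lookalike_domain(domain: str, threshold: float = 0.8) -> Optional[str]:
    """Tip (b): return the imitated outlet if `domain` closely
    resembles a known domain without matching it exactly, else None."""
    for known in KNOWN_DOMAINS:
        if domain == known:
            return None  # exact match: the genuine site
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return known
    return None

print(suspicious_headline("MIRACLE CURE doctors HATE!"))
print(lookalike_domain("nytimes.co"))   # one character short of nytimes.com
print(lookalike_domain("bbc.com"))      # the genuine site
```

Real fact-checking tooling is far more sophisticated (certificate checks, registration data, reputation databases), but the sketch shows why tip (b) stresses ‘small details’: a single dropped character is enough to pass casual inspection while remaining measurably close to the imitated domain.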
11.2.3 The Four Top-Level Disinformation Responses
The crisis generated by the disinfodemic phenomenon has led both public and private sectors (particularly the international community, states, digital service providers, and non-governmental organisations) to adopt measures to mitigate it.
In particular, state responses seek to punish the authors or broadcasters administratively or criminally, following the recommendations of the UN Special Rapporteur for Freedom of Opinion and Expression, who recently reported that state responses to the growing phenomenon of disinformation consist fundamentally of internet shutdowns and various kinds of regulation – criminal laws on defamation, consumer protection, financial fraud, social media, and social networks – and who recommends that, ‘[i]n consonance with their obligation to respect human rights, States should not make, sponsor, encourage or disseminate claims that they know or should reasonably know to be false […]’.Footnote 44
In the case of private stakeholders, the main response has been self-regulation (spurred by government pressure on large internet platforms, which are perceived as ‘facilitators’ of the phenomenon), deployed via:
(a) social media (in the USA, the Washington Post’s FactChecker; in France, Libération’s Desintox and Le Monde’s Les Décodeurs; in the UK, the Channel 4 News’ Fact Check and The Guardian’s Reality Check Blog, among others),
(b) social networks (e.g., various tools used by Facebook to facilitate detection and reporting: a search initiative, a news literacy campaign, and a monitoring of the work done in News Feed),
(c) other internet service providers (e.g., Google’s Fact Check and some browser extensions, such as Pinocchio alerts, FiB Stop Living a Lie, This is Fake, B.S. Detector, and Fake News Alert), and
(d) non-governmental organisations (e.g., Fast Check CL in Chile, Newtral in Spain, FactCheckEU.org in Europe, FactCheck.org and Politifact.com in the USA, Africa Check in Africa, Chequeado.com in Argentina, and AltNews in India; they check the quality of high-social-impact content and warn about possible falsehoods). In this direction, the Poynter Institute for Media Studies has been developing, with several civil and media organisations from around the world, the International Fact-Checking Network, and recently approved a code of principles on fact-checking.
We will focus infra on these efforts, following the ‘four top-level disinformation responses’ classified by the Broadband Commission in the recent report on Freedom of Expression and Addressing Disinformation on the Internet.Footnote 45
11.2.3.1 Identification Responses
These kinds of responses involve the monitoring and analysis of information channels in order to detect the presence of disinformation through two kinds of activities (called subtypes in the report):
(a) Monitoring and fact-checking: carried out by internet communications companies, academia, and news, civil society, and independent fact-checking organisations and their partnerships.Footnote 46
(b) Investigative responses, aimed at determining whether a given message or content is totally or partially false and to provide insights into disinformation campaigns, including the originating actors, degree of spread, and affected communities.Footnote 47
11.2.3.2 Law and Policy Responses Aimed at Producers and Distributors
These kinds of responses aim to alter the environment that governs and shapes the behaviour of producers and distributors of disinformation, specifically:
(a) Legislative, pre-legislative, and policy responses, encompassing regulatory intervention.Footnote 48
(b) National and international counter-disinformation campaigns, tending to focus on the construction of counter-narratives.Footnote 49
(c) Specific responses aimed at combating election-related disinformation and designed to detect, track, and counter disinformation that is spread during elections, owing to its impact on democratic processes and citizen rights. This involves a combination of monitoring and fact-checking, legal, curatorial, technical, and other responses, which will be cross-referenced as appropriate.Footnote 50
11.2.3.3 Responses within the Processes of Production and Distribution
These include:
(a) Curatorial responses, primarily editorial and content policy and ‘community standards.’Footnote 51
(b) Technical and algorithmic responses, implemented by social media platforms, video-sharing platforms, and search engines, but which can also take the form of third-party tools (e.g., browser plug-ins) or experimental methods from academic research, using algorithms and/or artificial intelligence to detect and limit the spread of disinformation or to provide context or additional information on individual items and posts.Footnote 52
(c) De-monetisation responses, designed to stop profit and thus discourage the creation of clickbait, counterfeit news sites, and other kinds of for-profit disinformation.Footnote 53
11.2.3.4 Responses Supporting the Target Audiences (Victims) of Disinformation Campaigns
These responses include guidelines, recommendations, resolutions, media and data literacy, content credibility labelling initiatives, and other tools to influence curation in terms of the prominence and amplification of certain content. They are sub-classified in the report into:
(a) Ethical and normative responses carried out on international, regional and local levels involving the public condemnation of acts of disinformation or recommendations and resolutions aimed at thwarting these acts and sensitising the public to these issues.Footnote 54
(b) Educational responses aimed at promoting media and information literacy, critical thinking, and verification in the context of online information consumption, as well as journalist training.Footnote 55
(c) Empowerment and credibility labelling efforts around building content verification tools and web content indicators, in order to empower citizens and journalists to avoid falling prey to online disinformation.Footnote 56
11.2.4 International and Inter-American Human Rights System Responses
The worrying current reality of the fake news phenomenon has led the international community to propose the adoption of measures and effective tools to combat fake news, but making clear both the scope of freedom of expression and the limits that measures aimed at combating disinformation must respect.
In this direction the UN Special Rapporteur for Freedom of Opinion and Expression, the Representative for Freedom of the Media of the Organization for Security and Cooperation in Europe (OSCE), the Organization of American States (OAS) Special Rapporteur for Freedom of Expression, and the Special Rapporteur on Freedom of Expression and Access to Information of the African Commission on Human and Peoples’ Rights adopted several documents that, with greater or lesser specificity, refer to fake news and disinformation issues: in 2017, the Joint Declaration on Freedom of Expression and Fake News, Disinformation and Propaganda; in 2018, the Joint Declaration on Media Independence and Diversity in the Digital Age; in 2019, the Twentieth Anniversary of the Joint Declaration: Challenges to Freedom of Expression in the Next Decade; in 2020, the Joint Declaration on Freedom of Expression and Elections in the Digital Age; in 2021, the Joint Declaration on Politicians and Public Officials and Freedom of Expression, and in 2022 the Joint Declaration on Freedom of Expression and Gender Justice.Footnote 57 Also in 2022, and clearly facing the consequences of fake news during the Russia and Ukraine war, the UN Special Rapporteur for Freedom of Opinion and Expression also published a report on Disinformation and Freedom of Opinion and Expression during Armed Conflicts.Footnote 58
11.2.4.1 The ‘Standards on Disinformation and Propaganda’ Settled in the Joint Declaration on Freedom of Expression and Fake News, Disinformation and Propaganda
This document states that general prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression, and that liability rules for false statements are legitimate only if defendants are given the opportunity to prove the truth and can rely on other defences, such as fair comment.Footnote 59
It also states that state actors should not make, sponsor, encourage, or further disseminate statements that they know or reasonably should know to be false (disinformation) or which demonstrate a reckless disregard for verifiable information (propaganda), and should take care to ensure that they disseminate reliable and trustworthy information, including about matters of public interest, such as the economy, public health, security, and the environment.
The principles settled in that document are as follows:
(a) States may only impose restrictions on the right to freedom of expression in accordance with the test for such restrictions under international law, namely that they be provided for by law, serve one of the legitimate interests recognised under international law, and be necessary and proportionate to protect that interest.
(b) Restrictions on freedom of expression may also be imposed, as long as they are consistent with the requirements noted in paragraph 1(a), to prohibit advocacy of hatred on protected grounds that constitutes incitement to violence, discrimination, or hostility (Article 20(2), International Covenant on Civil and Political Rights).
(c) Intermediaries should never be liable for any third-party content relating to those services unless they specifically intervene in that content or refuse to obey an order adopted in accordance with due process guarantees by an independent, impartial, authoritative oversight body (such as a court) to remove it and they have the technical capacity to do that.
(d) Consideration should be given to protecting individuals against liability for merely redistributing or promoting, through intermediaries, content of which they are not the author and which they have not modified.
(e) State mandated blocking of entire websites, IP addresses, ports, or network protocols is an extreme measure that can only be justified where it is provided by law and is necessary to protect a human right or other legitimate public interest, including in the sense that it is proportionate, there are no less intrusive alternative measures that would protect the interest, and it respects minimum due process guarantees.
(f) Content filtering systems that are imposed by a government and are not end-user controlled are not justifiable as a restriction on freedom of expression.
11.2.4.2 The Joint Declaration on Media Independence and Diversity in the Digital Age
This document states that restrictions on what may be disseminated through the media must be ruled by law, serve one of the legitimate interests recognised under international law, and be necessary and proportionate to protect that interest.Footnote 60 It rejects the use of vague notions, such as ‘information security’ and ‘cultural security’, as a basis for restricting freedom of expression.
It also states that media outlets and online platforms should enhance their professionalism and social responsibility, including potentially by adopting codes of conduct and fact-checking systems, and putting in place self-regulatory systems or participating in existing systems, to enforce them.
11.2.4.3 The Twentieth Anniversary Joint Declaration: Challenges to Freedom of Expression in the Next Decade
This document, considering private control as a threat to freedom of expression, states that there is an urgent need to adopt measures that address the ways in which the advertising-dependent business models of some digital technology companies create an environment that can also be used for viral dissemination of inter alia deception, disinformation, and hateful expression.Footnote 61 It also urges human rights sensitive solutions to the challenges caused by disinformation, including the growing possibility of deep fakes, in publicly accountable and targeted ways, using approaches that meet the international law standards of legality, legitimacy of objective, and necessity and proportionality.
11.2.4.4 The Joint Declaration on Freedom of Expression and Elections in the Digital Age
This document states that Member States should ensure that any restrictions on freedom of expression that apply during election periods comply with the international law three-part test requirements of legality, legitimacy of aim, and necessity, which implies the following:Footnote 62
(a) There should be no prior censorship of the media, including through means such as the administrative blocking of media websites or internet shutdowns.
(b) Any limits on the right to disseminate electoral statements should conform to international standards, including that public figures should be required to tolerate a higher degree of criticism and scrutiny than ordinary citizens.
(c) There should be no general or ambiguous laws on disinformation, such as prohibitions on spreading falsehoods or non-objective information. It also states that the media, both legacy and digital, should be exempted from liability during election periods for disseminating statements made directly by parties or candidates unless the statements have specifically been held to be unlawful by an independent and impartial court or regulatory body, or the statements constitute incitement to violence and the media outlet had a genuine opportunity to prevent their dissemination.
Regarding restrictions during elections, it recommends that Member States consider supporting positive measures to address online disinformation, such as promoting independent fact-checking mechanisms and public education campaigns, while avoiding rules that criminalise disinformation. It also states that online intermediaries should not be held liable for dis-, mis-, and malinformation that has been disseminated over their platforms unless they specifically intervene in that content or fail to implement a legally binding order to remove that content.
It also states that digital media and online intermediaries should make a reasonable effort to address dis-, mis-, and malinformation and election related spam, including through independent fact-checking and other measures, such as advertisement archives, appropriate content moderation, and public alerts.
11.2.4.5 The Joint Declaration on Politicians and Public Officials and Freedom of Expression
This document recommends that Member States repeal, or refrain from adopting, general prohibitions on the dissemination of inaccurate information, such as false news or fake news laws, and that they respect the following standards in relation to disinformation and false news:Footnote 63
(a) adopt policies that provide for disciplinary measures to be imposed on public officials who, when acting or perceived to be acting in an official capacity, make, sponsor, encourage, or further disseminate statements that they know or should reasonably know to be false;
(b) ensure that public authorities make every effort to disseminate accurate and reliable information, including about their activities and matters of public interest.
It also states that, given the harm done by hate speech, including to the ability of its targets to exercise their right to freedom of expression and to participate in political activities, Member States should:
(a) prohibit by law any advocacy of hatred that constitutes incitement to discrimination, hostility, or violence, in accordance with international law;
(b) undertake a range of activities – including education and counter-messaging – to combat intolerance and promote social inclusion and intercultural understanding.
In this regard, several international organisations have begun to focus on public servants’ obligation to make truthful statements, with particular attention to statements made by public officials and candidates for elected office.
Knowledge of falsity, or reckless disregard for the truth, is the basis of the doctrine of ‘actual malice’ established by the US Supreme Court in New York Times v. Sullivan for freedom of expression and of the press.Footnote 64 This criterion has been taken up by the Inter-American human rights system, where due diligence is required for journalistic activity – and not only for it:Footnote 65 officials must assert true facts and, before passing judgement, must apply verification standards higher than those expected of ordinary persons; failure to do so can, in certain circumstances, generate consequences ranging from criminal liability to ethical sanctions. Therefore, within the framework of Article 13 of the American Convention on Human Rights, establishing specific responsibilities and obligations for certain persons, owing to their functions, to tell the truth – or not to lie, intentionally or negligently – does not necessarily constitute an illegitimate restriction on the right to freedom of expression; expression that is knowingly false – the actual malice standard – is excluded from the protection that a priori even false expression enjoys.
Along these lines, in the jurisprudence of the Inter-American Court, officials are charged, owing to their function and position in society, with a duty to verify the facts (a higher duty than that of an ordinary person) and to satisfy both the publicity of government acts and access to public information.
As stated by the Inter-American Court in two resounding cases ruled against Venezuela,Footnote 66 the exercise of freedom of expression is not the same for a mere private subject as it is for public officials: in a democratic society it is not just legitimate but sometimes the duty of state authorities to pronounce on matters of public interest. In doing so, however, they are subject to certain limitations, in that they must reasonably, although not necessarily exhaustively, verify the facts on which they base their opinions, and they should do so with even greater diligence than that employed by private individuals, by reason of their high office, the wide reach and eventual effects their statements may have on certain sectors of the population, and the need to prevent citizens and other interested persons from receiving a manipulated version of certain events.
The Court adds that these limitations on officials’ expression are clearly based on the damage that false statements can cause. Given that public officials act as guarantors of the fundamental rights of the people, their statements cannot disregard those rights, nor constitute forms of direct or indirect interference with, or harmful pressure on, the rights of those who seek to contribute to public deliberation through the expression and dissemination of their thoughts. This duty of special care is particularly accentuated in situations of greater social conflict, disturbance of public order, or social or political polarisation, precisely because of the risks that may be implied for certain people or groups at any given time.Footnote 67
More recently, in the context of the COVID-19 pandemic, the Inter-American Court adopted a resolution in which it refers to the obligations of Member States, recommending they:
Observe special care in the pronouncements and statements of public officials with high responsibilities regarding the evolution of the pandemic. In the current circumstances, it is a duty for state authorities to inform the population, and when pronouncing in this regard, they must act diligently and have a reasonable scientific basis. Also, they must remember that they are exposed to greater scrutiny and public criticism, even in special periods. Governments and Internet companies must deal with and transparently combat the disinformation that circulates regarding the pandemic.Footnote 68
Projecting these premises, criminal and administrative provisions can be found in domestic Latin American law, such as codes and laws of ethics for the public function and even health regulations, from which public officials’ obligations to tell the truth arise.
Consequently, in criminal law, mendacious actions by public officials are punished, in protection of public faith, as crimes of ideological falsification of public instruments;Footnote 69 in administrative law, acts of power that damage the trust of the governed (‘legitimate confidence’) are nullified when that confidence is violated.Footnote 70 Among the codes and laws of ethics for the public function, honesty and integrity stand out as express demands on public officials.Footnote 71 These rules have also been incorporated into both regional and global soft law norms.Footnote 72
There are also specific regulations aimed at health officials, especially regarding the obligation not to produce false or misleading messages, which mainly concern the advertising of pharmaceuticals and public health issues.Footnote 73
Legal obligations are also established for candidates for elected public office, particularly to prevent ‘dirty’ campaigns (those in which offences are committed, information is invented, and slanders intrude on the private life of a candidate), which have been denounced recently in Latin AmericaFootnote 74 and have given rise to regulation, especially in electoral and political party laws.Footnote 75
In this sense, the Broadband Commission for Sustainable Development, co-founded by UNESCO and the International Telecommunication Union (ITU), recommends in a recent report that:
Political parties and other political actors could: 1) Speak out about the dangers of political actors as sources and amplifiers of disinformation and work to improve the quality of the information ecosystem and increase trust in democratic institutions. 2) Refrain from using disinformation tactics in political campaigning, including the use of covert tools of public opinion manipulation and ‘dark propaganda’ public relations firms.Footnote 76
In any case, when weighing the rights involved and when giving concrete solutions, it appears advisable to consider what Sunstein has said:
In life and in politics, truth matters. In the end, it might matter more than anything else. It is a precondition for trust and hence for cooperation. But what, exactly, can governments do to restrict the dissemination of falsehoods in systems committed to freedom of speech? In brief: Much less than some of them want, but much more than some of them are now doing. I have argued in favor of a general principle: False statements are constitutionally protected unless the government can show that they threaten to cause serious harm that cannot be avoided through a more speech-protective route. I have also suggested that when lies are involved, the government may impose regulation on the basis of a weaker demonstration of harm than is ordinarily required for unintentional falsehoods. Reasonable people can disagree about how to apply these ideas in concrete cases. In general, however, this general principle, and the accompanying suggestion, give a great deal of constitutional protection to falsehoods and even lies.
[…] public officials have considerable power to regulate deepfakes and doctored videos. They are also entitled to act to protect public health and safety, certainly in the context of lies, and if falsehoods create sufficiently serious risks, to control such falsehoods as well. In all of these contexts, some of the most promising tools do not involve censorship or punishment; they involve more speech-protective approaches, such as labels and warnings.Footnote 77
11.2.4.6 The Joint Declaration on Freedom of Expression and Gender Justice
This document promotes the adoption of ‘education programs, social policies, cultural practices, and laws and policies that prohibit discrimination and sexual and gender-based violence and to promote equality and inclusion’, considering specifically that ‘online gender-based violence, gendered hate speech and disinformation, which cause serious psychological harm and can lead to physical violence, are proliferating with the aim of intimidating and silencing women, including female politicians, journalists and human rights defenders’.Footnote 78
11.2.4.7 The UN Special Rapporteur for Freedom of Opinion and Expression Report on Disinformation and Freedom of Opinion and Expression during Armed Conflicts
In this report, Khan examines the challenges that information manipulation poses to freedom of opinion and expression during armed conflict. She notes that the information environment in the digital age has become a dangerous theatre of war in which state and non-state actors, enabled by digital technology and social media, weaponise information to sow confusion, feed hate, incite violence, and prolong conflict.Footnote 79
Emphasising the vital importance of the right to information as a ‘survival right’ on which people’s lives, health, and safety depend, she recommends that human rights standards be reinforced alongside international humanitarian law during armed conflicts; urges states to reaffirm their commitment to upholding freedom of opinion and expression and to ensure that action to counter disinformation, propaganda, and incitement is well grounded in human rights; recommends that social media companies align their policies and practices with human rights standards and apply them consistently across the world; and concludes by reiterating the need to build social resilience against disinformation and to promote multi-stakeholder approaches that engage civil society as well as states, companies, and international organisations.
11.2.4.8 The European Commission Communication on ‘Tackling Online Disinformation: A European Approach’ and the Report on the Implementation of This Communication
In March 2015, the European Council invited the High Representative to develop an action plan to address Russia’s ongoing disinformation campaigns, which resulted in the creation of the East Stratcom Task Force, operational since September 2015.
In a June 2017 resolution, the European Parliament called upon the Commission ‘to analyse in depth the current situation and legal framework with regard to fake news and to verify the possibility of legislative intervention to limit the dissemination and spreading of fake content’.Footnote 80
In March 2018, the European Council stated: ‘social networks and digital platforms need to guarantee transparent practices and full protection of citizens’ privacy and personal data’,Footnote 81 and subsequently the European Commission adopted the Communication ‘Tackling online disinformation: a European Approach’,Footnote 82 focused on combating the effects of online disinformation through: (a) a more transparent, trustworthy, and accountable online ecosystem, (b) secure and resilient election processes, (c) fostering education and media literacy, (d) support for quality journalism as an essential element of a democratic society, and (e) countering internal and external disinformation threats through strategic communication.
Later in the same year, the European Commission adopted a Report on the implementation of that Communication,Footnote 83 which delineates the challenges online disinformation presents to our democracies and outlines five clusters of actions through which private and public stakeholders can respond to these challenges.
The five sets of measures that the Communication on tackling online disinformation suggested are analysed here.
11.2.4.8.1 A More Transparent, Trustworthy, and Accountable Online Ecosystem.
The first set of actions involves four objectives: (a) online platforms to act swiftly and effectively to protect users from disinformation, (b) strengthening fact checking, collective knowledge, and monitoring capacity on disinformation, (c) fostering online accountability, and (d) harnessing new technologies.
11.2.4.8.1.1 Online Platforms to Act Swiftly and Effectively to Protect Users from Disinformation.
This objective is pursued through:
(a) the adoption of a code of practice to be used by online platforms and the advertising industry in order to increase transparency,
(b) the creation of an independent European network of fact-checkers to establish common working methods, exchange best practices, and achieve the broadest possible coverage across the EU,
(c) the promotion of voluntary online identification systems for the traceability and identification of suppliers of information, and
(d) the use of the EU research and innovation programme (Horizon 2020) to mobilise new technologies, such as artificial intelligence, blockchain, and cognitive algorithms.
The resulting commitments are organised under five fields, monitored by the Commission as part of the implementation of the self-regulatory Code of Practice on Disinformation, drafted in 2018 by a Multistakeholder Forum on Disinformation:
(a) Scrutiny of ad placements (deploy policies and processes to disrupt advertising and monetisation incentives for relevant behaviours).
(b) Political advertising and issue-based advertising (all advertisements should be clearly distinguishable from editorial content; enable public disclosure of political advertising; devise approaches to publicly disclose ‘issue-based advertising’).
(c) Integrity of services (put in place clear policies regarding identity and the misuse of automated bots on their services; put in place policies on what constitutes impermissible use of automated systems).
(d) Empowering consumers (invest in products, technologies, and programmes to help people make informed decisions when they encounter online news that may be false; invest in technological means to prioritise relevant, authentic and authoritative information where appropriate in searches, feeds, or other automatically ranked distribution channels; invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest; partner with civil society, governments, educational institutions, and other stakeholders to support efforts aimed at improving critical thinking and digital media literacy; encourage market uptake of tools that help consumers understand why they are seeing particular advertisements).
(e) Empowering the research community (support good faith independent efforts to track disinformation and understand its impact; not prohibit or discourage good faith research into disinformation and political advertising on platforms; encourage research into disinformation and political advertising).Footnote 84
11.2.4.8.1.2 Strengthening Fact Checking, Collective Knowledge, and Monitoring Capacity on Disinformation.
The Commission committed, as a first step, to supporting the creation of an independent European network of fact-checkers.
As a second step, it committed to launching a secure European online platform on disinformation, offering analytical tools and cross-border data collection, including Union-wide open data and online platforms usage data to support detection and analysis of disinformation sources and dissemination patterns.
The Commission organised a series of technical workshops with representatives of the fact-checking community in 2018. It selected relevant projects under the research and innovation programme Horizon 2020. Furthermore, in cooperation with the European Parliament, it organised a fact-checking conference in view of the European elections.
These actions have contributed to: (a) mapping and networking independent fact-checking organisations in the Member States, (b) ascertaining which tools and services are essential and facilitating the improvement of fact-checking activities and their impact (e.g., access to EUROSTAT data, translation tools, automated stream of fact-checks produced by the relevant fact-checking organisations), (c) identifying professional and ethical standards for independent fact-checking, and (d) providing tools and infrastructural support to fact-checking organisations.
To prepare the second step, the Commission proposed the creation of a new digital service infrastructure under the Connecting Europe Facility work programme 2019 for the establishment of a European Platform on Disinformation. In this direction, in June 2020, the Commission launched the European Digital Media Observatory (EDMO). Some civil society actors fear, however, that the inclusion of a former Facebook executive and several organisations sponsored by Google could compromise EDMO’s missions, especially those that involve monitoring how online platforms follow the EU’s Code of Practice on Disinformation. The Code was signed in 2018 by the online platforms Facebook, Google, Twitter, and Mozilla, as well as by advertisers and parts of the advertising industry (who presented their implementation roadmaps), and subsequently by Microsoft in 2019 and TikTok in 2020. The Code of Practice, strengthened in 2021, will evolve towards a co-regulatory instrument as outlined in the Digital Services Act (DSA).Footnote 85
11.2.4.8.1.3 Fostering Online Accountability.
With a view to increasing trust and accountability online, the Commission committed to promoting the use of voluntary online systems allowing the identification of suppliers of information based on trustworthy electronic identification and authentication means.Footnote 86
11.2.4.8.1.4 Harnessing New Technologies.
The Commission committed to making full use of the Horizon 2020 framework programme to mobilise new technologies and explore the possibility of additional support for tools that combat disinformation, accelerating time-to-market for high-impact innovation activities, and encouraging the partnering of researchers and businesses.
Furthermore, in the proposal for the Horizon Europe programme, the Commission has proposed dedicating efforts to: (a) safeguard democratic and economic stability through the development of new tools to combat online disinformation, (b) better understand the role of journalistic standards and user-generated content in a hyper-connected society, and (c) support next generation internet applications and services including immersive and trustworthy media, social media, and social networking.Footnote 87
11.2.4.8.2 Secure and Resilient Election Processes.
The second set of actions addresses manipulative and disinformation tactics employed during electoral periods, in order to ensure secure and resilient election processes. To this end, the Communication proposed initiating a continuous dialogue to support Member States in managing risks to the democratic electoral process from cyber-attacks and disinformation.Footnote 88
11.2.4.8.3 Fostering Education and Media Literacy.
The third set of actions focuses on fostering education and media literacy. The life-long development of critical and digital competences is crucial to reinforce the resilience of our societies to disinformation. The Communication proposed new actions to this end, including: (a) supporting the provision of educational materials by independent fact-checkers and civil society organisations to schools and educators, (b) organising a European Week of Media Literacy, (c) exploring the possibility of adding media literacy to the criteria used by the Organisation for Economic Co-operation and Development in its comparative reports on international student assessment, and (d) further encouraging the implementation of ongoing initiatives on digital skills, education, and traineeship.Footnote 89
11.2.4.8.4 Support for Quality Journalism as an Essential Element of a Democratic Society.
The fourth set of actions aims to support quality journalism as an essential element of a democratic society. Quality news media and journalism can uncover and dilute disinformation by providing citizens with high-quality and diverse information. The Communication proposed to enhance the transparency and predictability of state aid rules for the media sector by making an online repository of decisions available. It also proposed launching a call in 2018 for the production and dissemination of quality news content on EU affairs through data-driven news media, and exploring increased funding opportunities to support initiatives promoting media freedom and pluralism and the modernisation of newsrooms.Footnote 90
11.2.4.8.5 Countering Internal and External Disinformation Threats through Strategic Communication.
In line with the April Communication, the European Commission worked to ensure the internal coordination of its communication activities aimed at tackling disinformation. In this context, it created an internal Network against Disinformation, whose primary purpose is to enable its services to better detect harmful narratives, support a culture of fact-checking, provide fast responses, and strengthen positive messaging.
The Commission reinforced cooperation with the European Parliament and the East Stratcom Task Force through a tripartite forum aimed at operationalising the institutions’ respective efforts to counter disinformation ahead of the 2019 European elections. In May 2021, the Commission presented a Guidance to strengthen the Code of Practice on Disinformation, which aims to address gaps and shortcomings, create a more transparent, safe, and trustworthy online environment, and lay out the cornerstones of a robust monitoring framework for its implementation. It also aims at evolving the existing Code of Practice into the co-regulatory instrument foreseen under the DSA, offering an early opportunity to design appropriate measures to address systemic risks related to disinformation stemming from the functioning and use of the platforms’ services, in view of the anticipated DSA risk assessment and mitigation framework.Footnote 91
More recently, the Assembly, grouping the signatories of the Code and new signatories willing to subscribe to and take on commitments under the 2021 Code, met on 8 July 2021 to start the process of strengthening the Code of Practice on Disinformation. Members of the Assembly approved a vade mecum on the organisation and functioning of the process to shape and draft the strengthened Code of Practice on Disinformation.Footnote 92
11.2.5 Rules Emerging Both from International Regulations and Recommendations regarding Freedom of Expression and Other Individual and Collective Rights Involved in the Creation and Dissemination of False Information
A proactive and efficient intervention to combat the disinfodemic phenomenon, respectful of conflicting rights and in particular freedom of information, needs first to determine what kind of information warrants intervention without unduly affecting freedom of expression and, second, which obligations and responsibilities the public and private sectors each bear.
To date, however, little has been done to distinguish when disinformation requires some kind of intervention or to determine what measures must be taken in accordance with each sector’s specific obligations and the different responsibilities of those who disseminate it. This is particularly true of information issued by public officials and candidates for elected office, on whom, depending on the legal system, ethical and legal obligations weigh that become even more intense when the dissemination of such statements can cause serious damage.
Referring to the emerging rules of the inter-American human rights system and based on what we have stated previously, Botero Marino indicates that there are two important standards applicable to any state effort aimed at prohibiting or regulating fake news:
(a) the simple objective falsehood of the expression cannot be the object of prohibition or state sanction, and
(b) it is only justified to restrict freedom of expression in order to protect the rights of third parties or public order understood from a narrowly defined democratic perspective.
She adds that, in order to protect the rights of third parties and the democratic public order without undue restrictions, a test must be carried out weighing various relevant factors to define whether, and to what extent, to act on false expressions and, eventually, on those who generate and distribute them.Footnote 93
Complementing this perspective, Sunstein affirms that what should be considered includes: (a) the intent of the speaker (whether he/she is knowingly lying, is negligent, has doubts but no interest in corroborating the veracity, or believes that what he/she says is true); (b) the magnitude of the damage the expression can cause (serious, moderate, slight, non-existent); (c) the probability of that damage occurring (certain, probable, improbable, impossible); and (d) the moment at which it would occur (immediate, near future, not near future, distant future).Footnote 94
To measure the probability of damage, it is also relevant to consider the social position of the speaker, since it clearly affects his or her credibility (a minister of health referring to treatments against COVID-19 is not the same as a claim made by a person lacking the support of specific studies). And to measure the personal consequences for the speaker, it is necessary to assess whether specific legal obligations weigh on him or her to tell the truth or to take special care before making the statement, either because of his or her function or because of having contractually accepted certain obligations (e.g., the terms and conditions of a social network, which establish guidelines on how to express oneself and the consequences that can lead the network’s administrator to restrict or delete an account, as happened recently with Facebook’s sanction of former president Donald Trump, whose account was suspended while he was in office, and then for two years, because his posts on the network incited the assault on the Capitol).
Based on the combination of these parameters, it is clear that actions by both states and the private sector must rest on a correct weighing of the conflicting interests at stake, deciding on that basis whether, and to what extent, it is feasible to intervene. In this direction, mathematical formulas similar to the one Susi proposes for the application of the right to be forgotten online (RTBF 2.0) should help.Footnote 95
The crisis generated by disinformation has provoked both state responses, which seek to punish its authors or disseminators administratively or criminally, and private responses based on self-regulation, deployed by social networks (e.g., the various tools Facebook uses to facilitate detection and reporting, such as ranking and flagging, a search initiative, a news literacy campaign, and monitoring of the work done in News Feed) and other internet service providers (e.g., Google’s Fact Check), as well as by non-governmental organisations that check the quality of content that may have a high social impact and warn about its possible falsehood (e.g., at a global level, the Fact-Checking Network).
Botero Marino warns that the use of algorithms as a means of combating the spread of fake news is compatible with freedom of expression, as stated by the special rapporteurs for freedom of expression in their 2017 Joint Declaration, only if it: (a) is based on transparent and objectively justifiable criteria, (b) fully guarantees the right to due process of the interested parties, and (c) includes the participation of citizen initiatives dedicated to fact-checking based on transparent codes of ethics.
She also recalls that prohibiting or regulating fake news at the state level is not only not the least restrictive means of combating this particular type of speech, but is also structurally incompatible with the very functioning of democracy, since within a democracy the best remedy for lies is free democratic debate. As for fact-checking, she notes that, although it is a growing trend, there are still very few organisations dedicated to it; consequently, a corresponding fact-check is available online for only some news items. Moreover, even when a news item has been fact-checked, the fact-check is not always easy to find on the web. For this reason, various organisations dedicated to fact-checking, such as FactChecker.org, AfricaCheck, FullFact, and Les Décodeurs, have designed practical guides so that (in theory) anyone can independently verify any news item. PolitiFact.com even offers a blacklist of web portals dedicated to spreading deliberately false information. The common idea behind these guides is, to use the expression of the Supreme Court of the US, that ‘each person is his own guardian of the truth’.Footnote 96
11.2.6 Actions Adopted by the Media and Civil Society Organisations to Detect and Report Fake News
As mentioned earlier, both the media and civil society organisations have set to work to help detect and thereby combat fake news. Among the various initiatives, websites, browser extensions, and applications dedicated to this fight stand out. By way of example, the following initiatives are worth mentioning:
(a) Maldito Bulo (Damn Hoax): a website, along with profiles on Twitter, Facebook, and Instagram, where false news and hoaxes are reported. Internet users can help by tagging posts with #MalditoBulo.
(b) El Tragabulos (The Gobbler): a section of the Verne supplement of the newspaper El País aimed at debunking hoaxes and identifying false news and information that has gone viral on the internet and social networks.
(c) WikiTribune: promoted by one of the co-founders of Wikipedia, it is a collaborative editing project that offers journalistic information verified by experts before publication.
(d) The Trust Project Mirror: various major media outlets (among them the newspapers El Mundo and El País) are responsible for alerting the major search engines when information being disseminated is false.
(e) Canales de Vost Spain (Vost Spain channels): Vost Spain is the platform of emergency volunteers, which becomes essential during events and emergency notifications.
(f) Salud sin Bulos (Health without Hoaxes): an initiative of the Association of Researchers in eHealth that offers reliable and responsible information on sensitive issues such as health alerts, medicines, and news affecting the health field.
(g) Fake News Detector: an extension for the Firefox and Chrome browsers that facilitates marking and detecting fake news and reporting it on social network profiles.
(h) Oficina de Seguridad del Internauta (Internet Security Office): warnings about and debunkings of hoaxes are constantly posted on its website, alongside threat reports and cybersecurity advice.
(i) Facterbot: a bot that periodically sends the user information about fake news through Facebook Messenger.
11.2.7 Actions Adopted by Social Media Platforms: Three Examples
The first approach by social media companies to the aforementioned electoral crises of the last decade was very moderate, but the persistence of information disorders, exacerbated by the outbreak of COVID-19, provoked important changes and led them to fight proactively against the pernicious effects of fake news. Before the pandemic, platforms had taken a rather passive role towards disinformation content, focusing their intervention on the authenticity of accounts and the visibility of publications. The leaders of these companies reasonably argued that they should avoid becoming judges of the public debate, and also explained that, from a commercial and political perspective, arbitrating content was already a costly and time-consuming task, and that trying to evaluate the veracity of information would end up pitting social media platforms against political parties, governments, and civil society. In short, in the pre-pandemic world, social media approached disinformation delicately, with minimal and exceptional intervention in content.
But the COVID-19 emergency constituted the perfect storm and forced a change: platforms began to focus on identifying inauthentic actions and judging content through moderation processes that rely heavily on human analysis, carried out by people trained to make complex decisions on a massive scale, without sufficient context, and under considerable emotional pressure.
Human intervention was inevitable because, although algorithms can detect suspicious behaviour, spam, and certain types of content – especially videos and photos – they cannot resolve dilemmas inherent to the context of an expression, much less decide on the veracity of information. This mode of damage control has been characterised by inconsistent rules and processes and by slow progress on transparency.Footnote 97
In this direction, as has been stated in recent studies, these kinds of stakeholders adopted four types of responses:
(a) Awareness actions: political actions by the platforms, including partnerships with other actors, the promotion of educational campaigns, digital and media literacy, and so on. These are actions that seek to build a positive ecosystem regarding disinformation, to empower actors and strategies that are expected to combat the phenomenon. By definition, they do not involve changes to the platforms’ codes or policies.
(b) Changes to the code of the platforms: including changes to the algorithms for recommendations, visibility, and scope of content.
(c) Policy changes and moderation actions: including actions that implement internal policies or external mandates for the removal of content reported as illegal or contrary to community guidelines.
(d) Transparency and public relations actions: revealing information about the operation of the platform, generated by the platforms and by independent researchers, and abstract or wishful statements from platforms on how to deal with the challenge of disinformation.Footnote 98
We will now discuss the evolution of the measures adopted by three of the most important social networks – Facebook, Twitter, and YouTube – with respect to the public interest exception contained in their community rules, drawing especially on the two previously mentioned studies.Footnote 99
11.2.7.1 Facebook: Meta
Before the pandemic, the publication of fake news through this platform did not violate its community rules. Pressured by events, governments, civil society organisations, and public opinion, the company then began to take action against fake news through specific policies, weighing the public interest at stake against the risk of harm, taking into account international human rights standards and considering in each case several factors, such as the particularities of each country (e.g., whether it recognises freedom of the press, whether an election is in progress, or whether the country is at war), the content of what was said (e.g., posts that can lead to violence and harm), and whether it relates to governance or politics. However, the use of the public interest exception revealed some inconsistencies.Footnote 100
In the case of publications related to politicians, Mark Zuckerberg alluded to the newsworthiness exception when he stated:
A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.Footnote 101
In any case, he admitted that Facebook would begin to label the content it leaves online under the exception, and that it would allow people to share such content in order to condemn it.
Until 2025, Facebook took the following approach. Generally, it did not remove content but rather reduced its distribution, relying on third-party verification (concrete fact-checking by civil society organisations), alerting anyone who saw the content or was about to share it, penalising it by reducing its visibility in the news feed, and restricting the accounts that created or shared it repeatedly. Once a piece of content was flagged, proactive detection methods were activated to find possible duplicates.
The company also:
(a) banned, in the section of its community rules on ‘coordinating harm and publicizing crime’, the spreading of content that shows, confesses to, or encourages acts of physical harm to human beings, false calls to emergency services, or participation in high-risk viral challenges,
(b) as reported in blog posts, deleted health disinformation that might contribute to imminent physical harm, and
(c) focused significant efforts on identifying ‘information or influence operations’, such as coordinated actions that use automated and human accounts to amplify content, intimidate other users, and capture conversations and trends.Footnote 102
In January 2020, when COVID-19 was not yet a global pandemic, the company announced that it was controlling disinformation through labelling, filtering, and content removal; that it would disable the option to search for virus-related augmented reality effects on Instagram; that it would exclude content and organic accounts related to COVID-19 from recommendations; and that it would alert people who had interacted (with ‘likes’, reactions, or comments) with content that had been discredited, including recommendations for reliable information – practices that were extended to Instagram.
The company adopted various measures concerning paid advertising: content to be advertised was submitted in advance by the contracting parties and either approved or rejected by the platform (the latter where there were possible violations of the community rules), so that there was no removal of content, only content that was never authorised.
Thus, before the declaration of the pandemic, the platform banned ads that sought to create panic or a sense of urgency regarding supplies and products linked to COVID-19, or that guaranteed the cure or prevention of the virus; in March 2020 it banned sales of COVID-19 masks, hand sanitisers, disinfecting wipes, and test kits, but from August it allowed the promotion of non-medical masks (subject to compliance with certain requirements), hand sanitisers, and disinfecting wipes.
With the approval of COVID-19 vaccines, the company began to remove false claims (such as those affirming that the vaccines are a cure for the virus) and conspiracy theories about them that could cause harm, in accordance with the opinion of public health experts and authorities. It also banned advertised content selling vaccination kits or promising accelerated access to the vaccine as the vaccines neared availability. Similar actions were taken against advertised content that might affect the availability of biosafety items, even where the advertisements themselves contained no misinformation.
11.2.7.2 Twitter [X]
Twitter traditionally maintained that, owing to its nature as an open platform, it should not become an arbiter of truth. Before 2020, however, Twitter changed its community policies and confronted misinformation by focusing more on the activity of actors (accounts) than on content, concentrating its efforts on preventing automated use of the platform for manipulation purposes. The company began to include some prohibitions related to disinformation in the context of electoral processes, banning misleading information about how to participate, voter suppression and intimidating content, and false or misleading information about political affiliations. The platform then introduced the possibility for users to report tweets that violate this Election Integrity policy (later renamed the Civic Integrity policy, in view of the 2020 US census and presidential election campaign).
Twitter thus went from being the platform that intervened least in user content to the most active one, while applying the public interest test: a piece of content is considered to be of public interest – to be weighed against the possible risk and severity of harm – ‘if it constitutes a direct contribution to the understanding or debate of a matter that concerns the whole public’. In this regard, the community rules consider that the exception is:
(a) more likely to apply in some cases (e.g., hate speech or harassment, or when the tweet includes a call to action), and
(b) less likely to apply (e.g., in cases of terrorism, violence, or electoral integrity, when the tweet is directed at government officials or when it provides important context for ongoing geopolitical events).
No exceptions are made for multimedia content related to child sexual exploitation, non-consensual nudity, and violent sexual assault on victims.Footnote 103
In October 2019, Twitter addressed the use of the exception for world leaders in order to ensure people’s right to know about their leaders and to demand accountability. When the exception is used, a filter can be put in place.Footnote 104
In March 2020, the platform announced that it would expand its definition of harm to include content that goes directly against the instructions of authorised sources of global and local public health.Footnote 105 It then prohibited tweets that: deny the recommendations of the authorities with the intention that people act against them; encourage breaking social distancing; recommend ineffective treatments (even if they are not harmful or are shared humorously); recommend harmful treatments; deny scientific data on the transmission of the disease; make statements that incite action or cause panic, discomfort, or disorder on a large scale; include false or misleading information about diagnosis; or claim that specific groups or nationalities are not susceptible, or are more susceptible, to the virus. On this basis, the platform acted on tweets from presidents Bolsonaro and Trump and even temporarily suspended the account of Trump’s lawyer, Rudy Giuliani, for similar reasons.Footnote 106
The exceptions only apply to tweets from elected and government officials and candidates for political office. If a tweet covered by this exception is kept online, Twitter adds a warning or filter that provides context about the breach of the rules (an interstitial warning appears before someone can view the content). This also prevents the tweet from being recommended by the Twitter algorithm and limits users’ ability to interact with the post. Twitter applied the exception to a tweet by Donald Trump that violated its rules on disinformation and COVID-19.Footnote 107 However, the implementation of these measures was inconsistent and raised interpretative issues.Footnote 108
In May 2020, Twitter updated its approach to COVID-19 information, explaining that it would act on misleading information, controversial statements, and unverified claims, but would remove content only in cases of misleading information with a propensity for severe harm; in all other cases it would use labels (phrases below the tweet, accompanied by an exclamation mark, referring to reliable information) and filters or notices (hiding the questioned tweet to notify the user that the content differs from the guidance of public health experts).Footnote 109
Regarding ads, from April 2020 Twitter banned sensationalist or panic-inducing content and advertisements with inflated prices. Likewise, it prohibited the sale of masks and sanitisers and did not allow mention of vaccines, treatments, or test kits, except for information published by media outlets that the platform exempts under its policy on political advertisements (in August 2020, these bans were narrowed to medical masks).
Since Elon Musk’s acquisition of the company, freedom of expression has been given greater weight in its usage policies, as the new owner expressed in a tweet published in November 2022: ‘New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max deboosted & demonetised, so no ads or other revenue to Twitter. You won’t find the tweet unless you specifically seek it out, which is no different from rest of Internet.’Footnote 110
11.2.7.3 YouTube
YouTube’s general strategy against disinformation rests on three principles:
(a) content is preserved on the platform unless it violates the community guidelines,
(b) the ability to monetise publications is a privilege, and
(c) videos must meet exacting standards for the platform to recommend them.Footnote 111
The Community Guidelines do not expressly regulate the public interest exception, but the platform’s chief executive officer has argued that content posted by politicians that violates the rules could be kept up because it is important for people to know what they think. Other exceptions are granted to certain kinds of speech with educational, documentary, scientific, or artistic content, but they do not apply to harmful or dangerous content; violent or graphic content; incitement to hatred or violence; publications that promote, recommend, or assert that the use of harmful substances or treatments may have health benefits; or manipulated or modified content that seeks to deceive the user and may involve a serious risk of blatant harm.Footnote 112
Seeking to ensure authentic behaviour, YouTube prohibits publications that deceptively redirect users to other sites or artificially inflate engagement metrics (views, comments, ‘likes’), as well as playlists with misleading titles or descriptions that make users believe they will watch videos other than those in the list. Furthermore, only quality videos can be monetised, and in each intervention concerning published content, YouTube analyses the context to understand the intent of the video.Footnote 113
The eruption of the COVID-19 pandemic led to the creation of a new ‘policy on medical misinformation related to Covid-19’, under which, owing to its ‘serious risk of flagrant harm’, the platform removed content disclosing erroneous medical information that contradicts the guidelines of the World Health Organization or local health authorities regarding the treatment, prevention, diagnosis, or transmission of the virus. A first violation earns the user a warning; subsequent violations add strikes, and three strikes result in the permanent removal of the channel.Footnote 114
11.3 Brief Conclusions
As stated at the outset, fake news is not a new phenomenon, but in the heat of the extraordinary technological changes of recent decades (especially the last two), it has become a huge problem for societies in general, and particularly for the functioning of democracies, constituting an enormous challenge for the international community, governments, companies, and civil society alike.
The regulation of fake news became essential for various political and public health reasons, both of which emerged with great force in the second half of the past decade, but regulation remains incomplete and, in some cases, confusing.
What is not in doubt, beyond the conflicts that arise when dealing with freedom of expression, is the need for the regulation that has existed so far – whether international, national, or from the private sector – to be deepened, with clear and homogeneous guidelines across the globe, good practices, and intelligent initiatives; in this, international cooperation becomes essential.
The task is not easy, but it is not impossible to carry out either. Let us keep working on it.
12.1 Introduction
The term ‘European imperialism’ in reference to the European Union (EU) legislator’s actions emerged in the context of internet law concerning data protection.Footnote 1 Europeans have recognised the futility of creating one of the world’s most robust data subject protection systems without ensuring its effectiveness beyond their borders. Consequently, the EU legislator introduced various mechanisms explicitly aimed at ensuring the global applicability of the EU General Data Protection Regulation (GDPR). These mechanisms include controlling the transnational flow of data and extending the GDPR’s reach to any online platform actively seeking to collect personal data from European citizens. As a side effect, European data protection regulations not only impacted the digital lives of millions of Europeans but also indirectly influenced the legal frameworks of numerous countries seeking to maintain e-commerce ties with the EU, primarily the US.Footnote 2 China stands as a third pillar in this game of internet governance,Footnote 3 but an analysis of its role and influence would exceed the limits of this research.
Such an evolution was inevitable because, by its very nature, the internet has global reach. Regulating the internet on a regional basis is feasible but will invariably have worldwide implications. Similarly, freedom of expression and its boundaries constitute another global struggle involving various actors. The American perspective on freedom of expression holds sway on the internet by default, considering that all five ‘GAFAM’ (Google, Amazon, Facebook – now Meta, Apple, Microsoft) companies are headquartered in California. Indeed, the fight for digital sovereignty is, first of all, a fight of states against companies.Footnote 4 The EU legislator is then faced with an impossible dilemma – since a regional response in a globalised environment is doomed to fail, should it grant its legislation global reach, falling into a sort of digital imperialism and provoking economic conflict with its partners (mainly the US), or should it restrict itself to regional scope?
This chapter will first examine the various interventions made by the EU legislator in the field of internet law that have directly affected freedom of expression worldwide (Section 12.2). This examination is crucial not only because it establishes substantial elements of the law regarding legitimate restrictions on freedom of expression, but also because it expressly focuses on the responsibilities of online service providers. Subsequently, it will demonstrate how this perspective clashes with the American philosophy on freedom of expression (Section 12.3). In light of the imminent prospect of a fragmented and divided internet, there is a pressing need to rethink online freedom of expression on an international scale and to seek consensus (Section 12.4).
12.2 The Birth of an EU Internet Law and Its Impact on Online Freedom of Expression
In this section, some examples of recent interventions by the EU legislator, and developments from the European courts, that have an impact on freedom of expression will be described, with emphasis on their explicit or implicit, and regional or global, reach. More specifically, the following topics will be analysed as legitimate interferences with freedom of expression from the European perspective: the rise of filters in online copyright law (Section 2.1), the question of the reach of injunctions in the context of the protection of reputation (Section 2.2), the innovations introduced by the Digital Services Act (Section 2.3), the measures taken at European level to fight the propagation of hate speech in digital environments (Section 2.4), the right to be forgotten (Section 2.5), and the fight against child pornography (Section 2.6).
12.2.1 Copyright Law
The enforcement of copyright law online is likely one of the most characteristic areas where a once-united position gradually diverged into substantial differences between US and EU perspectives. It cannot be denied that the EU’s original stance on this issue, as expressed through the E-Commerce Directive (Directive 2000/31), which implemented a safe harbour protecting online intermediaries from liability, was heavily influenced by the American Digital Millennium Copyright Act, signed in 1998. It is worth noting that the directive establishes the principle prohibiting general internet monitoring, which remains in force today. However, as discussed later, efforts to create mechanisms for a priori control of content have multiplied.
However, Europe’s position shifted, primarily through judicial activism, particularly in its strict interpretation of the safe harbour under ‘passive role’ theory.Footnote 5 The focus on a strict notice and takedown procedure inherently carries the danger of a ‘broad censorial attitude’,Footnote 6 with adverse effects on freedom of expression.Footnote 7
Furthermore, the Telekabel jurisprudence paved the way for a new mode of online enforcement of copyright law at the expense of freedom of expression.Footnote 8 In this case, the European Court of Justice (ECJ) admitted that an injunction requiring an internet service provider to implement filtering measures blocking access to a website that infringes copyright does not violate freedom of expression. This stance can be seen as a form of censorship, as it blocks user access to online information. However, the Court took care to add two fundamental safeguards: the injunction must be effective, and it must never have the effect of blocking access to legal content.
One of the most significant and widely discussed interventions of the EU legislator regarding freedom of expression for copyright protection is Article 17 of Directive 2019/790. The purpose of Article 17 is to compel major internet service providers (modelled after Alphabet’s YouTube platform) to install filters that automatically block the uploading of copyright-infringing content. However, after extensive discussions and political negotiations, the term ‘filter’ itself does not appear in the final text of the Article. Article 17(4) only states that online service providers must make their best efforts to block this content; otherwise, they cannot rely on the old safe harbour mechanism.
As in the Telekabel case, it is evident how this regulation directly impacts freedom of expression: providers are required to block users’ attempts to communicate, a form of censorship. This issue was immediately brought to the attention of the ECJ, which issued its ruling in Poland v. Parliament and Council. The Court considered that Article 17(4) should indeed be interpreted as a restriction on freedom of expression, but justified this restriction by citing the legitimate interest of protecting rights holders. The Court outlined multiple safeguards: a system of internal and swift complaint mechanisms against abuse, limiting the filter to identified and protected works of intellectual property, restricting the filter’s algorithm to obvious situations of plagiarism, and protecting fundamental rights through exceptions such as short citations and parody.
12.2.2 Protection of Reputation
The ECJ’s landmark case, Eva Glawischnig-Piesczek v. Facebook,Footnote 9 initially arose as a classic defamation and right of publicity case. Someone on the social media platform Facebook posted a photograph of a famous politician along with personal insults. Importantly, the victim in this case not only requested the removal of the illegal content from the platform but also asked Meta to take adequate measures to ensure that the same or similar content would not be posted again in the future. Once again, the question of prior control of content through filtering, or preventive censorship, arose. The Court had to evaluate the legality of such an injunction.
The judges accepted that such an injunction may be legal, provided that it is not interpreted as imposing on the host provider an obligation to generally monitor the information it stores or to actively seek facts or circumstances indicating illegal activity. In other words, the provider would only have an obligation to make reasonable efforts to block the same content.
This case is particularly important in the context of this chapter as it explicitly addresses the issue of the global or regional reach of EU internet law. Regarding the question of the worldwide or regional impact of such an injunction, the ECJ concluded that the E-Commerce Directive does not impose any limitation, including a territorial one, on the scope of measures that Member States are entitled to adopt in accordance with the directive.Footnote 10 Therefore, there is no reason to limit the territorial impact of such an injunction.
12.2.3 The Digital Services Act (DSA)
The substantial impact of the DSA is well documented,Footnote 11 as research pointed out even before the regulation was adopted.Footnote 12 The regulation substantially builds on the acquis of the E-Commerce Directive, preferring a system of self-moderation to state censorship. Under this system, the primary responsibility for notifying and reporting content that allegedly violates the terms and conditions of the service lies with the users of the online service. The platform, in turn, has a duty to respond and intervene as necessary.
At the forefront of this attention is once again the adoption of content-filtering technology. For very large online platforms (VLOPs) and search engines, defined by a set of strict economic conditions, the DSA takes a significant step away from the classic safe harbour approach and imposes a duty to mitigate the risks of illegal content proliferation.
Article 34 introduces a risk assessment, designed to tailor the VLOP’s obligations to the specific dangers associated with their platforms. Article 35 builds upon this risk assessment by requiring VLOPs to implement ‘reasonable, proportionate, and effective mitigation measures’ to address these risks.
Officially, the DSA is presented as an act aimed at protecting digital sovereignty. However, the global reach of such a transformative regulation is evident and not hidden. For example, the EU Commission’s website states that one of the DSA’s two goals is ‘to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally’.Footnote 13 Consequently, it has already been observed that this practical function of the DSA as the global regulator of the internet ‘will likely provoke a clash in free expression standards between the U.S. and the EU’.Footnote 14
12.2.4 Online Hate Speech
There is a strong international legal background concerning the prevention of the proliferation of hate speech online. The most fundamental milestone at the international level is the first Additional Protocol to the Council of Europe Convention on Cybercrime, commonly known as the Budapest Convention.Footnote 15 Nevertheless, hate speech constitutes one of the most sensitive issues between the US and the EU from the perspective of delimiting the boundaries of freedom of expression. It should be noted that the US, while participating in the drafting of the text, chose not to sign the first Additional Protocol covering the criminalisation of acts of a racist and xenophobic nature committed through computer systems.Footnote 16 In parallel, the EU legislator decided to integrate this criminalisation of public incitement to violence or hatred into EU law. Council Framework Decision 2008/913 states:Footnote 17 ‘Each Member State shall take the measures necessary to ensure that the following intentional conduct is punishable: (a) publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent, or national or ethnic origin’.Footnote 18
In contrast, in the US, hate speech is protected by the First Amendment. For example, in a case related to a trademark allegedly using a racial slur, the Supreme Court determined that the registration office was not entitled to refuse its registration. The Court held that ‘Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate’.Footnote 19
The EU legal framework is not limited to the criminalisation of hate speech. Through soft law, specifically the EU Code of Conduct against online hate speech, the EU legislator aimed to engage major internet actors in the fight against hate speech. Furthermore, with the support of new powers provided by the DSA, the Commission may enforce this code of conduct and monitor its application. The DSA specifically mentions ‘illegal hate speech’ as one area where the mitigation duty of VLOPs must differ from general moderation policies, particularly in terms of speed and effectiveness (Article 35(1)(c) of the DSA refers to expeditious removal or disabling of access to hate speech, and adds that dedicated resources for content moderation must be adopted). These efforts aim to compel internet service providers to structurally modify their services and integrate hate speech moderation into their design.
12.2.5 The Right to Be Forgotten in the Google Spain Jurisprudence
The circumstances and findings of the landmark Google Spain decision are well known.Footnote 20 The ECJ determined that freedom of expression, specifically the right to be informed, should yield to the data subject’s right to ‘oblivion’, and in certain circumstances, the data subject is entitled to request that a search engine remove online content that reveals their personal data.
It should be immediately noted that this limitation on freedom of expression is not absolute. First, the right pertains not to the existence of the content itself online but only to it being referenced by search engines. Second, in light of the principle of proportionality, it must be demonstrated that there is an effective legitimate interest justifying the request for de-referencing and that, conversely, the information is not of sufficient public interest for the right to be informed to prevail.Footnote 21
Therefore, the name ‘right to be forgotten’ should not be taken too literally, as time itself is not the sole factor in assessing the various legitimate interests. For instance, the accuracy of the information plays a more crucial role. In cases involving fake news, for example, search engines may be compelled to comply with de-referencing requests, as evidenced in the case of Google (Déréférencement d’un contenu prétendument inexact).Footnote 22 Nevertheless, the passage of time remains significant, on the assumption that the public interest in information decreases as it ages.
Once again, the practical importance of this right raises questions about its reach. Should the right influence the behaviour of search engines worldwide, or should it only have regional applicability? This issue was addressed in the Google v. CNIL case,Footnote 23 where, this time, the theory of global reach was rejected by the Court. Specifically, the ECJ concluded that an:
operator is not required to carry out that dereferencing on all versions of its search engine but on the versions of that search engine corresponding to all the Member States, using, where necessary, measures which, while meeting the legal requirements, effectively prevent or, at the very least, seriously discourage an internet user conducting a search from one of the Member States on the basis of a data subject’s name from gaining access, via the list of results displayed following that search, to the links which are the subject of that request.
12.2.6 The Fight against Child Pornography
It might have been expected that a consensus would be reached on combating the highly sensitive issue of child pornography. Indeed, under the framework of the Budapest Convention on cybercrime, both the US and the EU agree on repressing this form of communication. However, an issue has been left to the appreciation of the convention’s signatories, illustrating the extent of the conceptual divergences between Americans and Europeans regarding the topic of freedom of expression.
Indeed, in EU Directive 2011/93/EU on combating the sexual abuse and sexual exploitation of children and child pornography,Footnote 24 child pornography is defined in four alternative ways, aligning with the definition provided by the Budapest Convention on cybercrime. Most interestingly, the fourth alternative definition treats as child pornography ‘realistic images of a child engaged in sexually explicit conduct or realistic images of the sexual organs of a child, for primarily sexual purposes’. This means that even in the absence of real abuse, the mere drawing or computer-generated imaging of a situation of child abuse is considered child pornography. This definition therefore encompasses various modern techniques such as deepfakes (replacing a face in a video with another)Footnote 25 or AI-generated pictures and videos.Footnote 26 By contrast, in the US, the case of Ashcroft v. Free Speech Coalition limits the definition of child pornography to real content only,Footnote 27 considering that in the case of imaginary content (even realistic content) freedom of speech should prevail. This stance is generally considered incompatible with the Budapest Convention, which acknowledges virtual child pornography and calls for its criminalisation.Footnote 28 However, in the aftermath of the decision in Ashcroft, the American Congress adopted what could be seen as a very restrictive interpretation of the Convention, amending the law and limiting the ban to any visual depiction ‘that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct’.Footnote 29
Moreover, the EU has engaged in discussions on a more active involvement of online platforms in fighting child pornography. More specifically, in 2022 the EU Commission proposed a new Regulation to Prevent and Combat Child Sexual Abuse (CSA).Footnote 30 The text has proved controversial and has met with opposition from the EU Parliament and the Council, as it aims to impose advanced moderation and detection duties on online service providers. The complementary impact assessment produced by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs was particularly critical of the proposal.Footnote 31 It details how the new legislation would infringe the prohibition of general monitoring and the prohibition of general data retention, principles that are enshrined in EU law and developed in the ECJ’s case law. While a detailed analysis of this proposal and of the subsequent criticism would exceed the limits of the present research, it should be mentioned that the CSA Regulation, if adopted, would have substantial consequences worldwide, as it would force online service providers to rethink how and when end-to-end encrypted communication services are provided to users.
12.3 Freedom of Expression versus Freedom of Speech: The False Twins
This section conducts a comparative analysis of the concept of freedom of expression in the EU and the US. As previously discussed, the US serves as the de facto model for information technology (IT) companies regarding freedom of expression, which operate under the belief that this notion possesses a consensual and universally accepted definition in all modern democracies. First, the conceptual differences between European freedom of expression and American freedom of speech are explored (Section 12.3.1); the recent policies adopted by X (formerly Twitter) are then used as a case study to illustrate these divergences (Section 12.3.2).
12.3.1 Freedom of Expression versus Freedom of Speech
While there is consensus that freedom of expression is one of the fundamental pillars of democracy, this unanimity conceals a significant divergence of viewpoints regarding its definition.
In the US, freedom of speech is defined with reference to the famous theory of the marketplace of ideas. In a dissenting opinion in the landmark case of Abrams v. United States,Footnote 32 Justice Oliver Wendell Holmes argued that the First Amendment protects the right to dissent from the government’s viewpoints and objectives. The theory of the marketplace of ideas posits that truth, and by extension the common good, will naturally emerge from the free exchange of ideas. In other words, all ideas should be freely expressed because they all contribute to the formation of a well-informed public opinion. In 1969, the US Supreme Court’s decision in Brandenburg v. Ohio solidified this theory as the dominant public policy in US free speech law,Footnote 33 explaining that the government cannot penalise inflammatory speech unless that speech is ‘directed at inciting or producing imminent lawless action and is likely to incite or produce such action’.
Nevertheless, multiple authors have warned against an overly literal interpretation of the theory of the marketplace of ideas. It has been pointed out that the classical interpretation depends on impossible assumptions for its coherence,Footnote 34 and Gordon demonstrated in 1997 that the theory, taken absolutely, would only create a situation where the views of the most powerful and/or the most numerous prevail.Footnote 35 In consequence, the theory of the marketplace of ideas has evolved over time. For instance, in the landmark Edwards v. Aguillard (1987) case, the US Supreme Court held that a law requiring that, if one theory is taught at school, the competing theory must also be presented (the theories in question being natural evolution and the dogma of creationism) is unconstitutional.Footnote 36
In Europe, on the contrary, the indelible scars of the Holocaust have led to the adoption of a rather different stance on freedom of expression. In this context, a free marketplace of ideas is considered a utopia. Viewed implicitly through the prism of the theory of memes,Footnote 37 society is seen as an ecosystem where some ideas act as viruses, contaminating people’s minds and jeopardising democracy. Therefore, freedom of expression, as enshrined in Article 10 of the European Convention on Human Rights and Article 11 of the Charter of Fundamental Rights, is not perceived as absolute, as illustrated in the famous ‘little red schoolbook’ case.Footnote 38 While restrictions on freedom of expression are accepted, they must pass a three-part test, as recast in cases such as Mouvement Raëlien Suisse v. Switzerland,Footnote 39 and Animal Defenders International v. the United Kingdom:Footnote 40 the limitation must be provided by law, pursue a legitimate aim, and be necessary in a democratic society. This last condition has been further explained by case law, with emphasis on the notion of a ‘pressing social need’. In other words, for a restriction on freedom of expression to be legitimate in the European perspective, it is not enough for the restriction to serve a public interest; the restriction must be unavoidable in a democratic society. All the restrictions analysed above (copyright law, child pornography, protection of reputation, data protection, hate speech) are subject to this three-part test.
It is commonly accepted that the American philosophy of freedom of speech is much broader than the European one, but it is essential to temper this notion, as in some respects, the American concept is more restrictive than the European one. First, the American human rights framework has primarily been constructed as a defence mechanism against state intrusion, under the ‘state action doctrine’,Footnote 41 resulting in the impossibility of horizontal application. Meanwhile, in European traditions, some forms of horizontal application are permitted.Footnote 42 Second, American freedom of speech is largely conceived from an individualistic standpoint, whereas the European perspective could be described as multidimensional, encompassing not only the freedom to express ideas but also the protection of pluralism,Footnote 43 and the right to be informed.Footnote 44
In conclusion, the American and European approaches to freedom of expression converge in terms of their core, which is the right to provoke. However, the various social functions of freedom, its role in society, and, most importantly, its limits are subjects of significant divergence between the birthplace of the internet and the EU.
12.3.2 Case Study: The Situation of X (formerly Twitter)
The recent policy changes of X (formerly Twitter) following its acquisition by Elon Musk provide a case study of the potential conflicts that can arise from this divergence. Indeed, the new owner of the famous social media platform, a self-declared free speech absolutist,Footnote 45 has chosen to implement an orthodox American free speech perspective in X’s functioning and moderation policies.
Consequently, the company lost around half of its workforce in 2022 (among them a substantial percentage of the employees focused on ethics and moderation) and then started to reinstate contentious accounts that had previously been suspended. Unsurprisingly, the platform almost immediately saw a rise in hate speech.Footnote 46
Meanwhile, according to Twitter’s advertising audience data from July 2022, the social network had 30.6 million users in Western Europe and 27.2 million in Northern Europe.Footnote 47 It is therefore not surprising that, when Elon Musk tweeted ‘the bird is freed’ (after acquiring the company), the Internal Market Commissioner Thierry Breton replied ‘[i]n Europe, the bird will fly by our rules’.Footnote 48 Furthermore, in 2023, the European Commission Vice President singled out X in a press statement, highlighting the malfunctioning moderation system of the platform.Footnote 49
Indeed, data shows that X has the ‘largest ratio of mis/disinformation posts’ among the platforms that submitted reports to the EU. In the context of information wars, and now that the existence of ‘fake news factories’ has been demonstrated,Footnote 50 the reactions of the EU and the US diverge once again, based on their respective definitions of freedom of expression. The EU pushes for stricter control of posted content, for instance by putting forward in the DSA the model of the ‘trusted flagger’: bodies tasked with accompanying and guiding the moderation teams of online platforms.
By contrast, in the US, the case of United States v. Alvarez,Footnote 51 in which the Supreme Court ruled that criminalising false claims to military honours is unconstitutional, clearly established that even disinformation is presumptively protected by free speech. Only under strict scrutiny may a content-based restriction on freedom of speech be accepted, by exception, in the US.Footnote 52
12.4 The Need for a Solution through International Consensus
This section examines the consequences of EU interventions in online freedom of expression in the context of divergences with the US over the scope of this human right. The primary concern is the potential risk of a fragmented digital landscape (Section 12.4.1), which would undermine the internet’s purpose and philosophical basis. Given the significant economic implications of this situation, the World Trade Organization (WTO) is proposed as an adequate forum for discussing potential solutions (Section 12.4.2). Nonetheless, any attempt at resolving the divergences on the boundaries and limitations of this human right must involve a reformulation of the concept itself. Online freedom of expression is effective only when its enforcement fully embraces the concept of the digital public space (Section 12.4.3).
12.4.1 The Danger of a Fragmented Internet
Based on the observations in Section 12.2, it can be concluded that the EU legislator, in conjunction with the case law, does not hold a settled position on the question of whether EU internet law should have regional or global reach. While the CNIL decision accepted that the right to be forgotten could only have regional reach, the Eva Glawischnig case, on the contrary, applied global reach. The situation is even more complicated with respect to the mitigation duty stemming from the DSA and the Article 17 filter, where no indication exists regarding the reach of the legal framework. Generally speaking, any legislation limited to regional reach would be structurally flawed, as users could easily circumvent such regulation by using a virtual private network (VPN). Moreover, the National Security Agency revelations have damaged trust in internet governance in Europe, and ever since there has been a strong current towards global reach in EU regulation of the internet, leading to a fragmentation of the internet.Footnote 53
Furthermore, online law enforcement is also a matter of protecting democratic values for the EU institutions. Characteristically, in the 2023 European Declaration on Digital Rights and Principles for the Digital Decade,Footnote 54 the EU institutions emphatically declared that ‘the digital transformation should not entail the regression of rights. What is illegal offline, is illegal online.’Footnote 55 Nonetheless, the global character of the digital environment pushes the boundaries of private international law. For instance, the GDPR explicitly provides that it applies from the moment EU data subjects are specifically targeted by the processing, even if neither the data controller nor the processing is located in the EU. In any case, it has to be accepted that in any matter that concerns criminal law (such as hate speech), the EU is also entitled to defend the enforcement of its legal framework in the digital environment.
From a practical standpoint, it is doubtful that major companies would invest substantial resources in regionally differentiated compliance, or risk non-compliance with various European obligations, only to find out that European courts consider them bound because European citizens are indirectly affected by their activities. This phenomenon can be observed in the functioning over time of some of the most popular generative language models, such as ChatGPT. Various white papers show that the company has taken a precautionary approach to mitigating the risks of generating ‘illicit content’, including safeguards against racist content and hate speech.Footnote 56
As a consequence, the American actors of the internet face three options: comply with the EU legislator’s expectations worldwide; ignore them and limit their market, potentially losing one of the largest digital markets in the world; or offer differentiated services depending on the user’s location. The third option may seem appealing at first: IT companies would not lose any market or antagonise any legislator. The simplest solution would be to differentiate the service based on the user’s internet protocol (IP) address, or even to implement geo-blocking. For instance, some American newspapers automatically block access to visitors with European IP addresses for fear of violating the GDPR.Footnote 57 However, this ‘soft division’ of the internet creates further legal uncertainty. Indeed, as the use of VPNs constantly increases, so does general awareness of the ‘geo-unblocking’ capabilities of such networks,Footnote 58 to the extent that in the foreseeable future any geo-blocking attempt will be deemed ineffective through the prism of the principle of proportionality.
Alternatively, there is a risk of a ‘hard division’, or fragmentation, of the internet. China, for instance, following its own agenda of absolute national control of communications, has developed native alternative internet technologies that could splinter the global internet’s shared and ubiquitous architecture.Footnote 59
It has been argued that the territorialisation of cyberspace does not necessarily entail risking the fragmentation of the internet.Footnote 60 However, the arguments proposed are unconvincing because it is challenging to ignore how, in a globalised context, efforts to protect digital sovereignty can be seen as a form of digital imperialism.
A high-stakes game is currently unfolding: while Europe would not benefit from the GAFAMs leaving the internal market, the companies likewise have no incentive to abandon it. In this context, Meta’s behaviour (first threatening to leave the market after the US–EU Safe Harbor and Privacy Shield agreements were struck down by the ECJ,Footnote 61 and then launching its Twitter-like social network Threads in the EU) should be seen as a diplomatic attempt to push back against the EU legislator’s imperialist view of the digital landscape.
12.4.2 The Need for an International Structure of Conciliation
Several international organisations have shown interest in resolving the challenge of enforcing laws in the digital realm without infringing upon online freedom of expression. The Council of Europe, in particular, has a long history of issuing guidelines on this topic.Footnote 62 More generally, the question of global internet governance has been debated for as long as the internet has existed. The United Nations could be proposed as a natural choice for this debate, especially through its action via the Internet Governance Forum (IGF),Footnote 63 which embraces the ‘multistakeholder model’ (with all its flaws).Footnote 64 However, the actions of the IGF and the limitations of its mandate suggest that it may not be an effective platform for discussing the boundaries of online freedom of expression.
In this context, while the choice of the WTO may seem unusual at first, considering it is primarily an economic organisation, there are arguments in favour of this option. First, the interstate Dispute Settlement Understanding is a highly efficient mechanism, and it is believed that enforcement through the WTO has been easier than through other international instruments.Footnote 65 Second, the divergent definitions of online freedom of expression impose significant economic costs on IT companies, giving the dispute a clear economic dimension. It has thus been argued that the WTO has a role to play in limiting China’s online censorship policies.Footnote 66 Third, while differences in perspectives on freedom of expression are deeply rooted in constitutional systems, the WTO is largely independent of specific legal traditions. While the search for a synthesis between the EU and US approaches to online freedom of expression involves complex reasoning and considerable effort, the comparative approach privileged by the organisation would provide useful insights on the issue. Finally, the human rights approach is not foreign to the WTO, and its involvement in resolving disputes related to freedom of expression has already been analysed,Footnote 67 using the concept of ‘public morals’ as a legitimate limitation on freedom of expression.Footnote 68
12.4.3 Through the Protection of Digital Public Spaces
Finding a structure capable of resolving international disputes over online freedom of expression is not enough; it should also be recognised that the existing definitions, both American and European, are insufficient to fully protect online freedom of expression. In other words, online freedom of expression requires not only a structure but also substantial legal adjustments. One of the fundamental aspects of the protection of online freedom of expression today is its heavy reliance on free access to social media platforms. These platforms, however, are owned by private companies, which raises new challenges for freedom of expression. Some authors thus refer to the ‘platformisation of the public sphere’.Footnote 69 Digital public spaces are governed by algorithms that present and manage the flow of information, and the huge potential of these algorithms to influence public opinion has been demonstrated. The 2023 Declaration on Digital Rights and Principles expressly mentions this issue, stating that:
Online platforms, particularly very large online platforms, should support free democratic debate online. Given the role of their services in shaping public opinion and discourse, very large online platforms should mitigate the risks stemming from the functioning and use of their services, including in relation to misinformation and disinformation campaigns, and protect freedom of expression.Footnote 70
Furthermore, it has also been argued that the digitalisation of the public space entails ‘antagonistic and networked-individualistic flows of populist communication’,Footnote 71 and indeed, the bubble effect of social media is nowadays well documented.Footnote 72
One of these issues is the need for the horizontal application of freedom of expression. Here, ‘horizontal effect’ refers to the ability of end users to initiate private litigation against an online platform for violating their human rights, specifically by filing a complaint when their freedom of expression has been infringed owing to abusive moderation. In the US, no horizontal effect of freedom of expression is recognised. For example, in the FAN v. Facebook case before the District Court in California,Footnote 73 it was ruled that private internet service providers such as Facebook are entitled to shut down users’ accounts, as social media does not operate as a public forum.
By contrast, the EU tradition allows greater scope for the horizontal effect of freedom of expression (see, for instance, the Viking Line judgement,Footnote 74 where the ECJ adopted the German theory of the indirect horizontal effect of human rights). Furthermore, the landmark Egenberger and Bauer cases acknowledge a direct horizontal effect of some provisions of the Charter (the principle of non-discrimination and the right to effective judicial protection).Footnote 75 These two cases are of substantial importance for EU constitutional law,Footnote 76 and their reasoning could be seen as opening the door to further recognition of the horizontal effect. However, it is uncertain, in the absence of relevant secondary EU legislation, whether the reasoning employed in these two cases can be extended to freedom of expression.
Therefore, in the US without any doubt, and in the EU most probably, online freedom of expression cannot currently be enforced against private actors, even those managing very large social platforms and acting as de facto gatekeepers of public spaces. This has very practical consequences: how can a platform be prevented from banning an account or imposing a ‘shadow ban’ on a communication (using algorithmic moderation to restrict the reach of a publication), thus manipulating opinion and enforcing private censorship without any external control? Shadow banning, as a conscious form of moderation, might be a myth.Footnote 77 Nonetheless, an automatic consequence of algorithmic information presentation is that certain content, based on predefined criteria, may experience reduced visibility. In the EU, the DSA constitutes a first step towards the regulation of this practice.Footnote 78
Consequently, the protection of online freedom of expression currently relies on an indirect and imperfect legal framework combining contract law with consumer protection. For example, in Germany, in the Facebook Terms of Service case,Footnote 79 it was determined that Facebook’s terms of service on deleting user posts and blocking accounts for violations of its Community Standards were invalid. Similarly, in France, in the ‘nombril du monde’ case,Footnote 80 the court found that, by removing a posted painting without explanation, Facebook had violated the general duty of good faith in the performance of the contract, as mandated by the Civil Code.
This situation is far from satisfactory. The philosophy and methodology of contract law cannot be regarded as an ideal means of guaranteeing freedom of expression. It is time for a re-evaluation of online freedom of expression that takes into consideration the practical reality of some social media platforms as digital public spaces. As a result, no obstacles should stand in the way of enforcing freedom of expression in these digital public spaces.
12.5 Conclusion
The term ‘imperialism’ typically carries a historical and negative connotation. However, in the context of this chapter, it does not imply a deliberate intent of hegemony on the part of European institutions. Rather, it is an automatic consequence of safeguarding digital sovereignty in a globalised environment. This sheds light on fundamental conceptual differences regarding the interpretation of freedom of expression. Nevertheless, these differences give rise to political struggles with economic implications, highlighting the need for a solution to bridge these divides.
It must be accepted, then, that the EU institutions indeed endorse a certain amount of ‘digital imperialism’. While the thorny issue of the governance of a globalised network is as old as the internet itself, the quest for the digital sovereignty of the EU is a novel concept that encompasses a will to promote the defence of European values online more actively.Footnote 81
The argument here is that while this policy is justified from a democratic perspective, it may result in increasing confrontations that could hinder internet growth and potentially lead to network fragmentation. Instead, this chapter suggests that international negotiations with trusted partners, based on shared core elements of the concept of freedom of expression, should be prioritised. Such a solution could find support within the framework of the WTO, necessitating a re-evaluation of online freedom of expression. One of the main dangers of this clash between the US and the EU over the legitimate boundaries of freedom of expression may not be the potential fragmentation of the internet, but the obscuring of a debate on the need to re-evaluate this human right in a way that enhances its scope in digital public spaces.