
Part I - Adapting Human Rights to a Digital World

Published online by Cambridge University Press:  24 October 2025

Tiina Pajuste
Affiliation:
Tallinn University



Introduction to Part I What Difference Does It Make to Move Online?

As individuals, communities, and governments increasingly move online, long-standing legal frameworks face unprecedented challenges. The transition to the digital realm introduces new dimensions to established rights, amplifies existing vulnerabilities, and raises questions about whether current frameworks can adequately address the complexities of digitalisation. This first part of the book offers a comprehensive exploration of how the online context makes a profound difference in the application, governance, and enforcement of human rights. The seven chapters challenge readers to rethink traditional legal frameworks, adapt to the rapid pace of technological change, and embrace interdisciplinary solutions to ensure that human rights remain robust and relevant in the digital age. The chapters aim to answer a core question: What difference does it make to move online? The groundwork is then laid for understanding the digital transformation of rights and the need for innovative approaches to their protection. Through theoretical insights, case studies, and practical examples, the chapters collectively address key challenges related to governance, accountability, harm, and innovation in the digital sphere.

Chapter 2. Is There a Need for New Digital Human Rights in AI Governance?

Chapter 2 explores the evolving landscape of digital human rights. Wolfgang Benedek evaluates whether the emergence of digital technologies necessitates entirely new human rights (e.g., the right to digital self-determination) or whether existing rights can be extended to address new challenges (e.g., the right to privacy, freedom of expression). He focuses on artificial intelligence (AI) governance as a case study, examining regulatory initiatives by global, regional, and national actors. The discussion includes the role of the United Nations, the Council of Europe (CoE), and the European Union (EU) in addressing gaps in digital rights through frameworks such as the CoE Framework Convention on Artificial Intelligence and the EU Artificial Intelligence Act. The chapter also examines the interplay between state actors and private platforms in regulating AI and protecting digital rights. In response to the core question of the first part of the volume, the chapter notes the following: (a) human rights take on new dimensions in the digital context, requiring reinterpretation and the extension of traditional rights to address these challenges; (b) digital technologies introduce novel threats (e.g., disinformation, mass surveillance), which necessitate innovative legal responses; (c) moving online requires protection through binding international commitments, while non-binding soft law approaches are insufficient to enforce rights effectively; and (d) the borderless digital environment requires international co-ordination to avoid regulatory fragmentation.

Chapter 3. Why and How the State Should Regulate the Internet

Chapter 3 by Cathleen Powell examines the philosophical and practical foundations for state regulation of the internet, focusing on the relationship between individual rights and societal interests. It critiques the traditional view of rights as individualistic and argues for a more community-focused approach, emphasising that human rights should serve the common good. Powell argues that the digital realm introduces unique challenges – such as disinformation, manipulation, and hate speech – that require state intervention to preserve the integrity of public discourse and democratic values. Drawing on legal theory, particularly the ideas of Lon L. Fuller, she emphasises the importance of fostering trust, maintaining the rule of law, and balancing power between states, private actors, and users in internet governance. With regard to this part’s core question (What difference does it make to move online?), the chapter identifies several transformative effects of moving online: (a) the digital realm magnifies the reach and impact of harmful content (e.g., disinformation and hate speech), making existing social and legal challenges more acute; (b) the internet’s susceptibility to manipulation and false information erodes trust in public discourse and democratic institutions, making it harder for communities to reach shared understandings and maintain the rule of law; (c) online spaces, often privately owned, function as public forums, demanding state oversight to ensure that private governance aligns with public interests and fundamental rights; and (d) the state’s regulatory power must be exercised in collaboration with non-state actors to prevent abuse and ensure responsiveness to public concerns. The chapter advocates a three-tiered model of internet governance involving self-regulation by platforms, oversight by independent regulatory bodies, and state intervention for serious threats such as hate speech or disinformation during elections.

Chapter 4. How to Tame the ‘Digital’ Shrew: Constitutional Rights Going Online

Chapter 4 examines the horizontal application of constitutional rights in the digital environment. It focuses on the accountability of social platforms, the limitations of current regulatory frameworks, and the growing role of private actors in regulating online spaces. Violeta Beširević analyses judicial practices in jurisdictions such as Germany, Canada, and the EU, and demonstrates how constitutional rights – traditionally enforceable only against the state – are being extended to regulate private actors, particularly social platforms, which now wield significant power over public discourse and individual freedoms. The chapter critiques the lack of adequate regulatory mechanisms for holding platforms accountable for human rights violations, emphasising their quasi-regulatory role in online governance. It advocates for a model of governance that enforces constitutional principles in the digital sphere, promoting accountability, transparency, and democratic legitimacy. The chapter responds to the core question by drawing attention to the following aspects: (a) the lines between public and private spaces have blurred (as private platforms such as Facebook and X function as public forums), and constitutional protections accordingly need to be extended to interactions governed by private actors; (b) as social platforms have quasi-regulatory roles, they should be recognised as duty bearers under constitutional rights; (c) the reach and permanence of digital actions amplify the potential harm caused by rights violations, necessitating stronger accountability mechanisms and legal protections; and (d) the online environment forces courts to reinterpret constitutional principles to address unique digital challenges (e.g., algorithmic decision-making and platform moderation).

Chapter 5. How Do We Decide Whether Moving Online Makes a Difference?

Johanas Baltrimas continues the exploration of changes that occur in the digital environment in Chapter 5 by examining the legal reasoning behind determining whether the digital nature of an action or object requires different legal treatment to its offline equivalent. Through an analysis of court cases from different jurisdictions, he identifies two primary criteria that courts use to evaluate whether moving online makes a difference: (a) the purpose and function of the disputed object or action; and (b) the extent of harm caused by the disputed behaviour. Baltrimas highlights how courts compare digital cases with analogous offline situations to assess whether the online factor is legally relevant or whether traditional rules suffice. He provides a structured framework for evaluating the legal significance of moving online, addressing the following ways in which the digital realm impacts legal reasoning: (a) the online context often amplifies the extent of harm (e.g., the speed and scale of disinformation), which necessitates different legal responses; (b) moving online forces courts to reinterpret traditional laws and occasionally establish new rules when existing frameworks prove inadequate (e.g., the right to be forgotten); and (c) the difference made by moving online is not static – ongoing technological advancements continuously reshape the relevance of the online factor in legal disputes, requiring courts to remain adaptable.

Chapter 6. Some Reflections on the Non-coherence Theory of Digital Human Rights

Chapter 6 by Mart Susi analyses the challenges of applying human rights frameworks to the digital realm through the lens of non-coherence theory. This theory posits that human rights in the digital domain differ fundamentally from their offline counterparts owing to shifts in meaning, scope, and application. Susi critically examines the assumption that offline human rights norms can be seamlessly transposed into the digital environment, highlighting the distortions and variances that arise in this process. He calls for a reimagining of human rights as adaptable, context-dependent principles that can address the unique dynamics of the digital age. In response to this part’s core question – What difference does it make to move online? – the chapter demonstrates that moving online fundamentally alters the structure, interpretation, and implementation of human rights as follows: (a) the digital environment transforms the meaning and scope of established rights (e.g., privacy online often involves the desire not to be left alone); (b) moving online shifts human rights from being absolute to relative, as digital contexts require rights to be balanced against one another in ways that differ significantly from offline contexts; (c) the digital realm creates competing governance systems (e.g., private versus public regulation), leading to non-coherence between rights frameworks and fragmenting their application; and (d) the digital environment’s fast-paced nature leaves little time for traditional legal processes (e.g., judicial balancing), compromising the effectiveness of rights protection.

Chapter 7. Internet Addiction as a Human Rights Issue

After addressing broad conceptual and theoretical issues regarding human rights in the digital realm, Chapter 7 introduces a specific harm – internet addiction – that uniquely arises in the online environment. Vygantė Milašiūtė examines internet addiction through the lens of human rights, highlighting how the online environment amplifies the risks of addiction and introduces new legal and policy challenges. She explores the interplay between medical research and legal frameworks, analysing how internet addiction affects rights such as health, non-discrimination, and the rights of vulnerable groups such as children. The chapter also evaluates international and regional responses, including public health interventions, consumer protection laws, and the emerging concept of the ‘right to disconnect’. With regard to the core question, the chapter provides a comprehensive analysis of how moving online changes the nature of addiction and introduces new challenges for human rights law: (a) the online environment’s accessibility, anonymity, and engaging design features make addiction more prevalent and severe than in offline contexts, affecting vulnerable groups disproportionately; (b) digital tools blur the boundaries between leisure and harm, turning everyday internet use into a potential health risk that requires public health and regulatory interventions; (c) online platforms increasingly function as de facto regulators of user behaviour, raising questions about their responsibilities in mitigating addiction and the state’s role in holding them accountable; and (d) the online context requires new approaches to existing rights (e.g., the right to health) and the creation of novel legal concepts such as the ‘right to disconnect’, which respond specifically to digital harms.

Chapter 8. Just Don’t Get Caught!

Chapter 8, the final chapter in the first part of the book, provides a different perspective, critically examining the limitations of legal regulations in governing the digital realm. Laws struggle to keep pace with technological advancements, leading to vague, inconsistent, or unenforceable rules in the digital space. Barbora Baďurová explores the potential of digital ethics and education as complementary solutions. She highlights the challenges of regulating digital technologies, including the slow pace of legal development, the limitations of law enforcement, and the unique features of the online environment, such as anonymity and global reach. Baďurová emphasises the role of intrinsic motivation, individual responsibility, and moral education in addressing the ethical dilemmas posed by digital technologies. The chapter responds to the core question by noting the following: (a) the anonymity and lack of accountability online (making it difficult to detect and address violations effectively) amplify the need for internal regulation and moral responsibility; (b) as digital activities transcend national borders and create regulatory challenges and inconsistencies, there is a need for universal ethical principles to address these gaps; and (c) as legal regulation lags behind technological advancements, it becomes essential to complement laws with flexible and forward-looking ethical education. The chapter argues for a multidimensional approach, combining legal regulation with the promotion of digital ethics and education, to effectively address the complexities of the digital realm.

Shared Themes and Interconnections

The seven chapters collectively provide a nuanced exploration of how moving online transforms the nature, application, and governance of human rights. Each chapter addresses distinct facets of this transformation, but they converge around several overarching themes that highlight the fundamental differences introduced by the digital realm. Listed here are the key parallels across the chapters and their shared contributions to answering the question: What difference does it make to move online?

A. Transformation of Rights

All chapters acknowledge that moving online reshapes the scope, interpretation, and enforcement of human rights, shifting their application from static, state-centred frameworks to dynamic, context-dependent interpretations:

  • Chapter 2 on new digital rights emphasises that existing rights, such as privacy and freedom of expression, require reinterpretation in the context of AI and digital governance.

  • Chapter 3 on state regulation of the internet shows how the internet amplifies risks such as disinformation and hate speech, necessitating reimagined frameworks to preserve public trust and democratic values.

  • Chapter 4 on constitutional rights online explores how private platforms function as public forums, requiring constitutional rights to extend beyond state obligations to regulate platform behaviour.

  • Chapter 5 on legal reasoning in digital contexts argues that courts must adapt their reasoning to address new harms and purposes unique to the online world, such as the permanence of digital information.

  • Chapter 6 on non-coherence of digital rights argues that moving online disrupts the coherence of traditional human rights, leading to the relativisation of rights and the need for novel theoretical approaches.

  • Chapter 7 on internet addiction discusses how digital environments amplify behavioural risks, such as addiction, which require integrating health and human rights approaches to governance.

  • Chapter 8 on digital ethics and law critiques the limits of legal frameworks in addressing the unique challenges of the online space, emphasising the need for ethics and education.

B. Amplification of Harm and Vulnerability

Each chapter highlights how the digital environment magnifies risks and harms that were less prominent offline:

  • Chapter 2 shows how AI exacerbates risks such as bias, discrimination, and surveillance, amplifying existing inequalities.

  • Chapter 3 illustrates how disinformation and manipulation online distort public discourse and erode trust at a scale unprecedented in offline contexts.

  • Chapter 4 demonstrates that the privatisation of public spaces online amplifies the power of platforms to influence discourse, often to the detriment of individual freedoms.

  • Chapter 5 notes that the reach and permanence of digital actions, such as online defamation or data breaches, heighten harm compared with their offline equivalents.

  • Chapter 6 draws attention to the fact that the fast-paced and opaque nature of digital systems increases the likelihood of rights violations, complicating traditional legal safeguards.

  • Chapter 7 shows how digital environments intensify the risk of addiction, particularly for vulnerable groups such as children, and require targeted public health interventions.

  • Chapter 8 highlights how anonymity and the global reach of digital spaces amplify unethical behaviour, making it harder to enforce accountability.

C. Shifting Duty Bearers and Accountability

Most of the chapters discuss how moving online partly shifts responsibilities for rights protection from states to private actors, creating gaps in accountability:

  • Chapter 2 notes the role of private AI developers in adhering to human rights standards, emphasising the need for public oversight.

  • Chapter 3 advocates for shared responsibility between states, platforms, and independent regulators to address disinformation and other digital harms.

  • Chapter 4 argues that constitutional rights must be horizontally applied to hold private platforms accountable for human rights violations.

  • Chapter 6 points out the fragmented governance of digital rights, with overlapping responsibilities between states, platforms, and international bodies.

  • Chapter 7 critiques platforms for failing to mitigate addictive design features, placing the burden on individuals and states to address the issue.

  • Chapter 8 stresses the importance of ethical self-regulation alongside legal frameworks to address the accountability gaps created by anonymity and global reach.

D. Fragmentation and Globalisation

Several of the chapters highlight the tension between the global nature of the internet and fragmented legal and regulatory frameworks, and advocate for harmonised regulatory frameworks based on international cooperation:

  • Chapter 2 discusses the need for the harmonised international governance of AI to address cross-border rights violations.

  • Chapter 3 emphasises the difficulty of regulating global platforms using national laws, advocating for international cooperation.

  • Chapter 4 highlights how platform governance transcends national boundaries, necessitating transnational constitutional approaches.

  • Chapter 5 illustrates how courts struggle to apply consistent reasoning when addressing global digital cases using local legal norms.

  • Chapter 6 critiques the lack of coherence in global digital governance, arguing for more integrated frameworks.

E. The Need for Complementary Approaches

Several chapters also advocate for combining legal regulation with other mechanisms, such as ethics, education, and public awareness. Legal regulation alone cannot address the complexities of the digital realm, so complementary tools and cross-sectoral collaboration are essential:

  • Chapter 2 stresses the need for enforceable legal obligations alongside ethical AI guidelines.

  • Chapter 3 proposes a multi-tiered regulatory model combining self-regulation, independent oversight, and state intervention.

  • Chapter 7 advocates for public health initiatives and digital literacy campaigns to mitigate addiction risks.

  • Chapter 8 highlights the role of digital ethics and education in addressing gaps left by slow legal responses.

The seven chapters collectively argue that moving online fundamentally alters the nature of governance, accountability, and the protection of human rights. They draw attention to the amplified harms, shifting responsibilities, and global complexities of the digital environment, emphasising the need for innovative, multidisciplinary, and collaborative approaches to address the challenges posed by digital technologies.

2 Is There a Need for New Digital Human Rights in AI Governance?

2.1 The Emergence of (New) Digital Human Rights
2.1.1 Introduction

Several recent initiatives have proposed new human rights for the digital sphere in response to new challenges to human rights in an increasingly digitalised world. Recent efforts to strengthen the governance of artificial intelligence (AI) have contributed to this debate. However, is there really a need for genuinely new digital human rights, or would it suffice to adjust or extend existing rights by interpretation to deal with the new threats from cyberspace? Given that the proposals claim the emergence of new principles and rights, at what stage of development are these new digital human rights? How have European institutions reacted to the proposals, and which regulatory efforts have they undertaken? And if the new rights are to be protected at the European level, what about the universal level of a global cyberspace, where increasing fragmentation is a threat?

Specific attention will be given to the regulation of AI from a human and fundamental rights perspective. In a dramatic move, a large number of renowned experts and scientists, among them developers of AI, called in an open letter in March 2023 for a pause – a moratorium on the training of AI systems more powerful than GPT-4 – in order to provide time to deal with shortcomings in reliability and other neglected aspects of the new technology. The letter also called for quicker regulation by governments to provide a proper framework for the development and use of AI. It was complemented by concerns expressed by leading AI developers and even companies. The erratic Elon Musk welcomed the suggested pause and, in view of the fact that ChatGPT cannot always distinguish between truth and falsehood, announced the development of a 'TruthGPT' as an alternative. Prompted by data protection concerns, Italy even temporarily prohibited the use of ChatGPT in order to obtain assurances. The White House and the US Senate have called on the leaders of major companies to report on their activities. The Council of Europe (CoE) and the European Union (EU) have presented new regulatory approaches to AI, which will be analysed comparatively with regard to their contribution to the protection of human and fundamental rights.

2.1.2 Protection of Human Rights on the Internet

The question of how to protect human rights online was first raised by civil society at the World Summit on the Information Society of the United Nations (UN), which took place in Geneva in 2003 and in Tunis in 2005. While the final documents of those conferences made only a few references to human rights,Footnote 1 the topic became a major concern in their follow-up in the form of the annual Internet Governance Forum (IGF).Footnote 2 For example, on the suggestion of the Association for Progressive Communications, which in 2006 produced an Internet Rights Charter, the Dynamic Coalition on Internet Rights and Principles, established at the IGF in Rio in 2007, began elaborating the Charter of Human Rights and Principles for the Internet. The Charter was drafted in a broad process, mainly by civil society and academia; a draft version was presented at the IGF in Vilnius in 2010 and the final version at the IGF in Belgrade in 2011. The methodology was oriented towards the Universal Declaration of Human Rights, interpreting it and other key UN human rights documents – such as the International Covenants on Civil and Political Rights and on Economic, Social and Cultural Rights, and the UN Conventions on the Rights of the Child and on the Rights of Persons with Disabilities – for the purposes of the internet. The only new right identified was the right of access to the internet, which is formulated in Article 1 of the Charter.Footnote 3 The Charter contributed to the general debate on internet rights, which produced numerous proposals.Footnote 4

The UN human rights bodies had thus far been largely absent from the topic. Only in 2011 did the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, produce a report with a focus on the internet.Footnote 5 In 2012, the UN Human Rights Council adopted its first resolution on human rights and the internet, in which it used the famous formula that 'the same rights people have offline must also be protected online'.Footnote 6 This clarified that all human rights are, in principle, also applicable online, but it does not exclude taking the particularities of the internet into account. Since that time, the UN has taken up internet-related issues in several fora and recently prepared the Global Digital Compact, which was agreed at its Summit of the Future in September 2024. The compact deals with digital cooperation, including the application of human rights online.Footnote 7

Even before the UN began to deal with the issue, the CoE had engaged with the question of how to apply human rights, in particular the European Convention on Human Rights (ECHR), to the internet, and it has since taken the lead among international organisations in the study and regulation of the new challenges to human rights brought by the internet.Footnote 8 Inspired by the Charter on Human Rights and Principles for the Internet, the CoE elaborated a guide on human rights for internet users, which contains a catalogue of the main digital human rights.Footnote 9 The European Court of Human Rights (ECtHR) has developed its case law on the issue, related mainly to access to the internet, freedom of expression,Footnote 10 and the right to privacy and data protection with regard to the new technologies.Footnote 11 The tech community has also shown awareness of the need to give adequate attention to human rights in technological development by coining the concept of 'digital humanism'.Footnote 12

2.1.3 Proposals for New Digital Human Rights

In recent years, several proposals for new digital human rights have been launched, such as the Charter of Digital Fundamental Rights of the EU, elaborated by a group of mainly German experts and launched with the help of the Zeit Foundation in 2016, then updated in 2018.Footnote 13 The idea was to complement the EU Charter of Fundamental Rights, drafted by a Convention in 2000, which became binding as part of the Treaty of Lisbon in 2009. The proposal contains, for example, a right to net neutrality, to be provided in a non-discriminatory way (Article 11). Of particular interest are the rights related to automated systems and decisions: for example, the criteria for automated decisions, such as those used in digital profiling, must be transparent, and such decisions must be taken by natural or legal persons. Every person must have a right to the independent review of such decisions by a human being (Article 5). The rights should apply to the EU, state actors, and non-state actors, including internet platforms.

Another initiative came in early 2021 from the author and lawyer Ferdinand von Schirach, who suggested six new fundamental rights to complement the EU Charter of Fundamental Rights, among them two new digital human rights: 'Everyone has the right to know that any algorithms imposed on them are transparent, verifiable and fair. Major decisions must be taken by a human being.' The other proposed fundamental right was the right to digital self-determination, according to which 'excessive profiling or the manipulation of people is forbidden'. The proposal was supported by the WeMoveEurope Foundation, which has gathered more than 270,000 supporters.Footnote 14 While these definitions lack the precision necessary to be directly legally applicable, they may serve as principles or be concretised depending on the respective context. The proposal included the idea of convening a new Convention to discuss enlarging the EU Charter of Fundamental Rights on the basis of these rights; however, this proved not to be realistic. A right to informational self-determination with a focus on data protection is already part of German law.

These proposals are addressed in the first place to the bodies of the EU, such as the European Parliament and the European Commission, and to the governments of European states. However, there has been no known direct response to them, although the European Commission and Parliament have developed their own proposals in the field of digital human rights, which are presented later in this chapter.

2.1.4 Methodology for ‘Creating’ New Digital Human Rights

The various initiatives for the progressive development of digital human rights and principles raise the question of how such new rights are created. There has been an abundance of declarations and recommendations by various institutions and by non-governmental and intergovernmental organisations, sometimes developed through a multi-stakeholder approach.Footnote 15 With few exceptions, they are all of a soft law nature. Today, this is the norm in the progressive development of international legal obligations: if the authority of the proposed rules is high, they might be respected even without being legally binding. Therefore, the process of creation – whether by state initiatives or through a multi-stakeholder approach, whether by non-governmental initiatives or by regulatory bodies at the regional or universal level – makes a major difference. Regulation may come not only from public institutions but also from private platforms; for example, by way of self-regulation, which might follow recommendations from the public sphere, thus providing horizontal protection of human rights – that is, protection of individuals against private companies. For example, in the case of the Oversight Board of Facebook/Meta, individuals can launch appeals against company decisions that limit their freedom of expression.Footnote 16

There is an obvious danger of inflation in the use of the concept of 'rights', which requires appropriate standards for their recognition. Today, we are faced with many claims for new human rights in various fields, but whether they find their way to general recognition and finally into regulatory norm-setting is a process with many factors. As early as 1984, Alston proposed criteria for quality control in the creation of new human rights, such as added social value, non-repetition of existing rights, the ability to achieve a high degree of international consensus, and sufficient precision to produce identifiable rights and obligations.Footnote 17 Rights might also be recognised only at the national or regional level, which raises the question of whether human rights need universal standing to be recognised as human rights.Footnote 18 For example, the prohibition of the death penalty, a key human right in Europe, applies only among the members of the CoE and a number of other states globally, and is therefore not a universal human right. However, this does not affect its character as a human right. The Spanish government, for example, has drafted a Charter on Digital Rights, which contains a comprehensive set of rights for the digital environment, including the right to digital identity, the right to public participation by digital means, and the right of access to digital environments for older persons. In several cases, the details of the rights identified are to be specified by law.Footnote 19

Much depends on the protection needs identified by relevant actors. For example, while the focus in the past was on data protection, leading to various regulatory efforts such as the modernised Convention 108+ of the CoE or the General Data Protection Regulation (GDPR) of the EU, more recently the main concern has been with illicit content such as hate speech and disinformation on the internet, while today the protection of human rights in the development and use of AI takes centre stage. AI is also used for the Internet of Things, which produces non-personal data that are not covered by the GDPR but may still raise protection issues. Generally, rights are meant to address protection needs, which in the digital environment can be structured into three groups: issues related to identity and access, such as digital self-determination and protection from blocking, filtering, and internet shutdowns; issues of protection against illicit digital content, such as hate speech, disinformation, defamation, or online violence; and issues of protection against technological threats, such as surveillance, the use of biometric data for facial recognition, the use of AI to interfere with human rights, and the harassment of bloggers and digital human rights defenders.

Not all the digital rights proposed qualify as human rights. European Digital Rights (EDRi), an advocacy non-governmental organisation (NGO), has established a network of NGOs from different countries to protect and promote digital rights.Footnote 20 In 2014, EDRi also produced a ten-point Charter on Digital Rights aimed mainly at members of the European Parliament preparing for (re-)election. In practice, those rights were rather commitments in the form of principles covering a wide range of issues, such as the promotion of encryption or of free software.Footnote 21

Various questions related to the emergence of new human rights have been studied in detail in a recent handbook on new human rights, which also looks at some examples from the digital world.Footnote 22 The development of new human rights is a process that starts with the identification of new protection needs, as mentioned, continues with their articulation in the form of rights, and ends with their recognition and implementation. Mart Susi identifies several elements of inadequate protection on the basis of existing human rights as a reason for developing new human rights. For rights derived from other rights, he observes a decrease in abstractness.Footnote 23 The same could be said of rights derived from principles, such as the right to informational self-determination, which goes back to a decision of the German Constitutional Court in 1983.Footnote 24 In most cases, an evolutionary interpretation as applied by the ECtHR will suffice. For example, the secrecy of correspondence by letter can be extended to personal communication using the internet. However, additional provisions on data protection, as in CoE Convention 108+, were also needed to address new privacy needs.Footnote 25

In view of the particularities of the online domain, Susi developed the non-coherence theory of digital human rights, which claims a change of meaning and scope when human rights are transposed from the physical to the digital world. Accordingly, digital human rights are different by nature from the respective offline human rights.Footnote 26

New human rights may be derived from parent rights (‘implied rights’) or be stand-alone rights. For example, the right of access to the internet has been derived by some from the freedom of expression and information, but others claim that, because the relevance of meaningful access to the internet goes far beyond freedom of expression, it should be considered a stand-alone right.Footnote 27 This is also supported by a report by the Office of the UN High Commissioner for Human Rights on Internet Shutdowns from a human rights perspective, which shows that many more rights are affected than just freedom of expression.Footnote 28 However, while the report refers to international commitments ensuring universal internet access, it does not speak of internet access as a human right, which shows that the right is not yet fully recognised as such at the UN level.Footnote 29

Furthermore, the ‘right to be forgotten’ has been claimed to be a stand-alone right, although its formation is related to the right to privacy and data protection.Footnote 30 Based on a judgment of the European Court of Justice, it has been introduced as part of the GDPR rules of the EU.Footnote 31 In response, Google has received thousands of requests for the deletion of links to personal information, and it was largely left to the private company to decide how to deal with them. However, whether a new digital human right has really been created is also being questioned.Footnote 32 So far, it is legally codified only in Article 17 of the GDPR and a few national constitutions, and is thus not generally recognised. However, in Biancardi v. Italy, the ECtHR upheld the right to be forgotten, without referring to it in that way, against the freedom of expression of a journalist: the delayed de-indexing of an internet article on criminal proceedings was found to have damaged the reputation of the person concerned.Footnote 33

Human rights can be individual and collective. Most experts will agree that a human right should be individually enforceable; otherwise, it would be better to speak of principles. But there can also be collective enforcement, as in the context of social rights. Some universally recognised human rights, such as the right to self-determination, as well as the solidarity rights to peace, development, and the environment, can only be realised collectively. They are therefore sometimes called ‘peoples’ rights’. The African Charter on Human and Peoples’ Rights contains several such rights. It might therefore be worth distinguishing between individual and collective digital human rights as well. For example, some – such as the right to cybersecurity – are quite abstract and need concretisation to be applied to individuals. However, this is nothing unusual if we consider, for example, the right to water and sanitation or the right to a clean, healthy, and sustainable environment, recognised respectively in 2010 and 2022 by the General Assembly of the UN.Footnote 34 Most human or fundamental rights also need concretisation. This is why one function of the various human rights treaty bodies is to elaborate interpretations, mostly in the form of general comments, while the human rights courts can give binding interpretations in individual cases that may provide general directions.

2.2 Governance of AI and Human Rights
2.2.1 Introduction

The impact of the development and application of AI (systems) on human and fundamental rights has become a priority concern for many actors, in particular since OpenAI, backed by Microsoft, made ChatGPT freely available.Footnote 35 The new technology opens new opportunities, but also new threats, such as the possibility of further facilitating the assessment of people according to their social behaviour (social scoring), as is already in use in China. While it produces astonishing results, which have generated heightened attention and expectations for its potential, there have also been warnings about its disruptive potential for democracy and society at large if used for disinformation. This concern culminated in the open letter of 22 March 2023, signed by key AI developers and thousands of concerned scientists, asking for a six-month moratorium on the training of AI systems more powerful than GPT-4. The letter was motivated by what it calls an out-of-control race, with AI labs ‘developing and deploying ever more powerful digital minds that no one – not even their creators – can understand, predict and control’. One example given is the ‘flooding of information channels with propaganda and untruth’. The uncontrolled race may lead to a ‘loss of control of our civilization’ and consequently creates new challenges to democracy and human rights. Time should be taken to introduce much-needed regulation and to address planning and management needs, including the development of shared safety protocols for advanced AI design to be overseen by independent outside experts. AI developers should work with policymakers ‘to dramatically accelerate the development of robust AI governance systems’, including oversight of highly capable AI systems, auditing and certification systems, and liability for harm caused.Footnote 36

The Human Rights Council of the UN also reacted by indicating the potential and risks of the emerging technologies and recommending certain measures to protect the human rights of individuals throughout the life cycle of AI systems. It also recommended strengthening the capacities of the Office of the High Commissioner for Human Rights (OHCHR) to advance human rights in the context of new and emerging technologies, and asked it to prepare a mapping report identifying challenges and gaps in this respect.Footnote 37 The report, presented in 2024, identified several gaps, including the need for the OHCHR to provide advice to Member States and all stakeholders to support them in integrating human rights from the design to the regulation of digital technologies.Footnote 38

This development shows the limitations of the old approach whereby industry leads on new technological developments and regulation comes later, in cases where economic considerations lead to the neglect of societal concerns. The corresponding approaches of self-regulation and declarations of ethical principles, while important, are insufficient in circumstances where the largest tech companies compete with each other for economic opportunities, while states pursue only their national interests. The Asilomar Principles on Beneficial AI, elaborated as early as 2017 with the purpose of ensuring the compatibility of the development of AI with human dignity, rights, and freedoms, were already concerned that AI systems should remain under human control.Footnote 39 The Organisation for Economic Co-operation and Development (OECD) adopted a set of principles on AI in 2019,Footnote 40 while UNESCO adopted a consensus-based Recommendation on the Ethics of Artificial Intelligence in 2021.Footnote 41 OpenAI on its website expresses its commitment to ensuring that artificial general intelligence benefits all humanity, and also commits to the principle of transparency.Footnote 42 However, the training of AI systems presently takes place behind closed doors without state or societal control. The danger of serious harm created by these developments led one of the AI pioneers at Google to resign from his position in order to be free to speak out.Footnote 43

The call for the regulation of the development and application of AI has accelerated ongoing efforts in Europe and beyond.Footnote 44 Besides the EU and the CoE, the US has responded with a non-binding AI Bill of Rights and a White House executive order. The AI Bill of Rights focuses on addressing challenges to democracy and certain rights, such as privacy and non-discrimination, with an emphasis on serving the American people. It also foresees a right to opt out and a right of access to a human person who can consider and remedy problems.Footnote 45 In reaction to the appeals for regulation, the heads of the main companies were called to the White House and the Senate, where even the chief executive of OpenAI called for regulation.Footnote 46 The issue is also on the agenda of the G7.

China has also adopted ‘Interim Measures for the Management of Generative AI Services’, which are to complement existing laws on network security and data security. The regulation requires that content reflect the basic values of socialism, while the law should also prevent discrimination, hatred, violence, fakes, and other content that could interfere with the economic and social order.Footnote 47

These different and partly competing approaches also carry some ideological baggage. American and European values or Chinese socialism create the danger of a further fragmentation of the internet. Accordingly, universal rules for the global problems created by the use of generative AI would be necessary to counter the trend of fragmentation and polarisation. Digital constitutionalism, aiming at a normative framework for the protection of human rights and the balancing of powers, is one approach to addressing these concerns.Footnote 48 Strengthening international cooperation and closing digital divides are the main aims of the UN Global Digital Compact. It contains several principles and objectives for international cooperation and also aims at enhancing governance in the field of digital technologies. For this purpose, new institutional proposals have been made, such as the establishment of an International Scientific Panel on AI to conduct independent risk assessments and produce annual reports, a Global Dialogue on AI Governance, and a Global Fund for AI for Sustainable Development. In support of stakeholders, the Office of the High Commissioner for Human Rights in Geneva is to provide advice on ensuring human rights.Footnote 49

As the non-binding recommendations and commitments to the self-regulation of AI were considered insufficient, and some of the developers themselves were calling for binding regulation, European regional organisations took the lead in elaborating new rules. The efforts of the CoE and the EU will therefore be studied here in greater detail using a comparative approach. Both of these efforts started long before the hype around ChatGPT.

2.2.2 The CoE on AI and Human Rights

From a human rights perspective, major work on the regulation of AI has been undertaken by the CoE, which has traditionally taken the lead in the field of the internet and human rights. The CoE’s usual methodology is to establish, by decision of the Committee of Ministers (CoM), a committee on a particular topic to produce a report with a recommendation for adoption by the CoM. The committee is supported by the competent staff of the CoE and traditionally follows a multi-stakeholder approach, bringing together experts from Member States, civil society, and academia, while the business community is also consulted. In the case of AI, the CoE has engaged in numerous activities since 2017 to study pertinent human rights issues and produce recommendations aimed at guidance and regulation in this field.

In particular, the Parliamentary Assembly of the CoE adopted a recommendation in 2017 on technological convergence, AI, and human rights,Footnote 50 calling for the drafting of guidelines. In addition, the CoE Commissioner for Human Rights published a recommendation in 2019 entitled ‘Unboxing Artificial Intelligence: 10 Steps to Protect Human Rights’.Footnote 51 It called for Human Rights Impact Assessments, information and transparency, independent oversight, non-discrimination, data protection, and remedies, to mention just the main steps. Also in 2019, the CoM established the Ad Hoc Committee on Artificial Intelligence (CAHAI), which was succeeded by the Committee on Artificial Intelligence (CAI) in 2021. After adopting a declaration in 2019 on the manipulative capabilities of algorithmic processes,Footnote 52 the CoM in 2020 adopted one of the first recommendations on the human rights impacts of algorithmic systems, with a set of guidelines attached.Footnote 53 Based on a multi-stakeholder consultation, CAHAI, with the help of three sub-groups, produced a comprehensive feasibility study on a legal framework for the development, design, and application of AI.Footnote 54 It came to the conclusion that while there were several applicable instruments, a number of substantive and procedural legal gaps existed that could best be addressed by a new legal framework, for which key elements were identified.Footnote 55

In December 2021, after broad consultation, several studies, and conferences,Footnote 56 CAHAI adopted a report entitled ‘Possible Elements of a Legal Framework for Artificial Intelligence, Based on CoE’s Standards on Human Rights, Democracy and the Rule of Law’,Footnote 57 which dealt with issues related to the development, design, and application of AI based on the CoE’s standards. For this purpose, the regulatory work of other international organisations, such as UNESCO, the OECD, and the EU, was taken into account.Footnote 58 It set out the main elements for a transversal convention creating a framework, as well as possible additional legal or soft law instruments applying generally or to specific sectors, the public sector in particular. For example, a model for a human rights, democracy, and rule of law impact assessment on a soft law basis was proposed.Footnote 59 Mantelero, who has developed his own Human Rights, Ethical and Social Impact Assessment, critically noted that the CoE approach might be too broad.Footnote 60

Between 2022 and 2024, the CAI negotiated the text for a common legal instrument on AI. The drafts were provided by the secretariat and then discussed in the committee.Footnote 61 The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law was designed as an open convention with a global vocation. Fifty-seven states participated, among them several established observer states of the CoE, including the USA, Canada, Mexico, and Japan, as well as the EU. The Framework Convention was finally adopted by the CoM in May 2024 and opened for signature in September 2024 at the meeting of ministers of justice in Vilnius. To enter into force, it requires only five ratifications, including by at least three CoE member states. All states (including the EU) that participated in its negotiation, as well as other states invited by the CoM of the CoE, can then become parties to the Convention.

The Framework Convention mainly contains principles for respecting existing human rights, but does not formulate new digital human rights. It also covers the obligations to protect the integrity of democratic processes and respect for the rule of law (Article 5).Footnote 62 The Convention contains relevant guidance for its parties such as the obligation to adopt and apply measures to protect human dignity and individual autonomy during the lifecycle of AI systems.Footnote 63 Its framework character implies that there might be additional instruments to address specific issues of AI governance. There are provisions on transparency and oversight, on accountability and responsibility, and on non-discrimination in the implementation of the Convention.Footnote 64 There is also a right to be informed when one is interacting with AI systems.Footnote 65 It also provides for a risk and impact management framework, which includes an obligation on parties to ensure that adverse impacts of AI systems on human rights, democracy, and the rule of law are adequately addressed.Footnote 66 There is no obligatory human rights, democracy, and rule of law impact assessment, but the CoE plans to elaborate a pertinent methodology. Regarding effective remedies, measures are to be foreseen that inform relevant bodies and where appropriate also affected persons about AI systems having the potential to significantly affect human rights, allowing them to contest decisions made or lodge a complaint to competent authorities.Footnote 67 Measures to ensure that AI systems are not used to undermine the integrity of the democratic process and the rule of law are to be adopted.Footnote 68

Parties should also establish effective oversight mechanisms. However, several general legal safeguards proposed in the elements for a convention by CAHAI, the predecessor of the CAI, such as the right to interaction with a human,Footnote 69 did not make it into the final version of the Convention, although they were partly taken up in the EU Artificial Intelligence Act (AIA).Footnote 70 With regard to the scope of the Convention, national security interests are excepted. The focus is on regulating public bodies. In view of resistance to including the private sector, the final version provides that parties will declare at the time of signature or ratification whether they will apply the principles and obligations to private actors as well.Footnote 71 It seems that a watering down of the Convention was the price of including the USA and other non-European states in the negotiations. States are free to use the legal tools they consider appropriate in implementing the Convention (Article 3). This includes the possibility of private self-regulation, which has been criticised by civil society, as it could undermine the binding character of the Convention. The general exception for national security has also been criticised.Footnote 72

Like the Data Protection Convention (Convention 108+),Footnote 73 the Biomedicine (Oviedo) Convention and the Cybercrime (Budapest) Convention of the CoE, the Convention will also be open by invitation to non-European states. For this purpose, the interests of future parties from outside the European region, but also in the EU, which was finalising its Artificial Intelligence Act (AIA) in parallel, had to be taken into account in the negotiations.

The CoE has therefore moved ahead with a legally binding approach, whereas the field of AI and human rights has so far primarily been the subject of soft law recommendations and guidelines. This approach also goes beyond the ethical dimension covered by the UNESCO guidelinesFootnote 74 and aims at legally binding obligations after soft law and self-regulation have been shown to be insufficient. There are obvious advantages and disadvantages to this approach. The advantage of a legally binding obligation over a mere soft law commitment is obvious if we look at the difficulties of having the multitude of recommendations and guidelines in the field respected in practice. The disadvantage is that a convention is negotiated mainly by states, which is also reflected in the composition of the CAI; this includes CoE member and observer states and representatives of international organisations and private business, while selected members of civil society and academia can only participate if invited as observers. The outcome of the negotiation process needs to be ratified by national parliaments. As in the case of the Cybercrime Convention, only limited membership from non-European states can be expected. In order to achieve greater participation, standards had to be lowered. Whether the Convention will produce a valuable global response to the issues at stake therefore remains to be seen. It might remain a mainly European response to a global challenge, as has been the case with the other open conventions of the CoE. However, the effort to open up to the world deserves recognition.

2.2.3 The EU Act on Artificial Intelligence and Fundamental Rights

The efforts of the EU to regulate AI also aim to have an effect beyond the EU: as in the case of the GDPR, the EU aims at a ‘Brussels effect’. In a broad process, the EU has worked on different aspects of AI, including its definition and ethical principles, on which the independent High-Level Expert Group on AI produced relevant proposals, including guidelines on ethics.Footnote 75 In January 2022, the European Commission presented a European Declaration on Digital Rights and Principles for the Digital Decade, which also contains rights related to the use of AI and was jointly adopted with the European Parliament and the Council in December 2022.Footnote 76 It focuses on principles and claims that people should be at the centre of the digital transformation of Europe, but avoids saying clearly whether people should have enforceable fundamental rights. The declaration was supposed to serve as a reference work for politics and business.Footnote 77 Under freedom of choice, the rather short declaration also contains principles on AI, such as ensuring transparency about the use of AI, avoiding discrimination, and ensuring that algorithms are not used to predetermine people’s choices, which is reminiscent of some elements of the rights proposed by the WeMoveEurope campaign. This paved the way for the EU AIA.

The proposed EU AIA,Footnote 78 of April 2021, was presented by the European Commission after wide public consultation involving all interested actors. The explanatory memorandum saw only advantages for fundamental rights,Footnote 79 and indeed, compared with the present situation, improvements can be expected. Certain AI practices, such as the evaluation of the trustworthiness of a person leading to a social score, are to be prohibited. For AI that interacts with humans and uses biometric data, certain transparency obligations are to apply.Footnote 80 For non-high-risk AI, providers are encouraged to develop codes of conduct, for which the AIA sets a framework.

The Artificial Intelligence Strategy of the EU of 2018 was mainly concerned with making the EU a world-class hub for AI.Footnote 81 While committed to a ‘human-centric approach’, it also includes rules for product safety and civil liability. The resolution adopted by the European Parliament in May 2022, on ‘AI in a digital age’,Footnote 82 focused on the competitive situation of the EU in the global context and addressed various sectors, including AI and the future of democracy. It mentioned many challenges for the protection of fundamental rights, which were to be fully respected in the digital transition and the development of AI, and called for ex ante risk self-assessments, data protection impact assessments, and conformity assessments.Footnote 83 It was clear that the risk-oriented approach, for example with regard to facial recognition, surveillance, or transparency requirements, affected significant fundamental rights issues internally and human rights concerns externally. In addition, the participation of AI users and their rights were considered key concerns. In reaction to the debate on ChatGPT, the European Parliament requested stricter rules for ‘general purpose models’ of AI, which are trained on broad data at scale and can be adapted to a wide range of distinct tasks. This includes generative AI such as ChatGPT, which can produce new content.

With regard to fundamental rights, the EU Agency for Fundamental Rights provided an overview in 2020 of the main issues related to AI.Footnote 84 There were suggestions for a more specific inclusion of fundamental rights concerns in the AIA.Footnote 85 A total of 115 European civil society organisations, including EDRi and Algorithm Watch, called for a number of amendments to the AIA to strengthen its impact on fundamental rights.Footnote 86 Besides a better mechanism to deal with new risks, they called for meaningful rights and redress for people affected by AI systems, such as the right to an explanation of decisions taken, the right to a judicial remedy, and the requirement that all AI systems be accessible. NGOs such as Algorithm Watch produced studies on how to protect workers’ rights when AI is used in the workplace and on how to provide access to data for public interest research.Footnote 87

After key committees of the European Parliament, such as the Civil Liberties Committee, had adopted a position on the AIA in which most of the concerns of civil society, such as the prohibition of predictive policing systems, emotion recognition, and real-time biometric identification in public spaces, were taken into account, the European Parliament adopted its position in June 2023.Footnote 88 The trilogue (an informal inter-institutional negotiation bringing together representatives of the European Parliament, the Council of the European Union, and the European Commission) ended with an agreement in December 2023, followed by the adoption of the revised draft by the European Parliament in March 2024. After a very tough negotiation process, the AIA was finally adopted on 21 May 2024.Footnote 89 The final text of the AIA was published in the EU Official Journal in July 2024 and entered into force the following month.Footnote 90

The AIA provides for a risk-management system distinguishing between unacceptable, high, and lower risks in or from AI systems. With regard to high-risk AI systems, which pose significant threats to health, safety, or fundamental rights,Footnote 91 providers must, before placing them on the market, meet several requirements, such as testing and mitigating foreseeable risks to health, safety, fundamental rights, the environment, the rule of law, and democracy with the involvement of independent experts.Footnote 92 Article 5 defines several prohibited AI practices, such as manipulative techniques impairing the ability of persons to make an informed decision, social scoring and profiling, certain emotion recognition systems, and certain biometric identification systems, although there are numerous exceptions. Still, these prohibitions create barriers against the violation of fundamental rights.

The AIA has binding force only in the Member States of the EU, but the expectation is that, similar to the GDPR, it will set a global standard for companies and thus also have extra-territorial impact.Footnote 93 In this context, it is worth noting that the GDPR already sets some applicable standards of relevance for AI applications, such as the prohibition of automated decisions related to individuals unless they give their consent (Article 22), thus applying the principle of human intervention (human in the loop) to data controllers. The Digital Services Act of 2022, which became fully applicable in 2024, regulates online platforms with provisions relating to liability for the deletion of illicit and fake content and to remedies. In line with the competences of the EU, the objectives of the AIA are not focused on fundamental rights issues alone, but more broadly on risks related to the use of AI and their impact on economic, commercial, and consumer concerns. It deals with the use of trustworthy AI in products and services, and thus adopts a harmonising and conformity-oriented approach. Accordingly, its first objective is described as ensuring the safety and conformity of AI systems with fundamental rights during the whole AI lifecycle. For this purpose, it distinguishes different risk categories. Unacceptable risks, such as social scoring or the use of biometric data in public spaces, are prohibited in general; high risks, such as profiling and predictive analysis, are subject to risk management via assessments and transparency obligations, such as the right to be informed of the use of AI and the right to human oversight. However, there are also large exceptions for law enforcement, and the security sector is fully exempted. In addition, there are safety and liability rules as well as a right to effective remedies. Fewer obligations apply to limited-risk and lower- or minimal-risk AI systems.

The AIA became applicable according to a phased approach: the prohibition of AI practices posing unacceptable risk started to apply six months after the entry into force of the AIA (i.e., February 2025), while the obligations of providers of general-purpose AI models and the appointment of competent national authorities started to apply after twelve months. Certain obligations related to high-risk systems listed in Annex III, such as AI systems in biometrics, will become enforceable only after twenty-four months, and others only after thirty-six months. A clarification of the rules for implementation will be provided by delegated acts and guidance from the Commission, as well as codes of practice by the EU AI Office, established in May 2024.Footnote 94 For example, the EU Commission’s guidelines on the classification of high-risk AI systems are due only by early 2026. Consequently, owing to the complexities of the Act, its obligations will be phased in over several years, which raises the question of whether this might come too late given the ongoing race for ever more powerful AI systems. In view of this, the European Commission has set up the AI Pact as a framework to assist companies in preparing for the AIA on a voluntary basis.Footnote 95

2.2.4 Possible Complementarity between the Two Regulatory Approaches

While the CoE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law is fully focused on human rights and democracy standards, fundamental rights play only a limited role in the EU AIA, such as safeguarding against the undermining of existing standards as contained in the EU Charter of Fundamental Rights. While references to fundamental rights are included throughout the entire text of the AIA, they are limited to the rights contained in the Charter. This confirms the relevance of the EU Charter of Fundamental Rights for addressing AI. The EU, which was represented in the drafting process of the CoE’s AI convention, also contributed to its provisions from an EU perspective. In this way, possible differences were avoided. For example, the Convention provides that EU parties to the Convention shall, in their mutual relations, apply EU rules.Footnote 96

Accordingly, while an alignment can be observed, the CoE Convention covers much broader ground in terms of human and fundamental rights. For example, it also directly covers democracy, whereas the AIA is only of indirect relevance here, although democracy was also one of the important concerns in the EU process. In this regard, the two texts can be seen as complementary. If, as can be expected, the EU becomes a party to the Convention, this complementarity will become particularly relevant. The EU might, however, want to wait until a significant number of its member states have ratified the Convention. Yet, as the European Court of Justice clarified in its 2021 opinion, issued at the request of the European Parliament on the accession of the EU to the Istanbul Convention, there is no need for a ‘common accord’ of all EU Member States before the Council decides to join the Convention.Footnote 97

One major issue highlighted by civil society is the use of invasive applications of AI, such as biometrics for mass surveillance or in the migration context.Footnote 98 The AIA sees the remote use of biometrics in publicly accessible places for law enforcement as a red line, but has exceptions for asylum and migration.Footnote 99

One key problem is how to operationalise the right to digital self-determination in view of self-learning algorithms, which are black boxes even to their programmers. Accordingly, full transparency is technically not possible, and benefits and risks cannot always be fully determined in advance. Ensuring that the persons affected remain in control has already proven difficult in the case of data protection rules. The transparency obligation in Article 50 of the AIA therefore contains the right to be informed if one is interacting with AI systems or if content is artificially created. The principle of not predetermining people’s choices, as in the draft EU Declaration on Digital Rights and Principles, can hardly be applied in practice where decisions in penal systems or the migration context are often guided by algorithms proposing decisions based on a large number of cases. However, rules on consent, as in data protection, are missing in this context. This makes the principle that safeguards or remedies should be provided all the more important, but the EU principles lack precision as to whom those safeguards should be provided for, let alone what they should look like. The CoE, in its preliminary reflections and drafts based on Article 13 of the ECHR on the right to an effective remedy or safeguards, was more precise,Footnote 100 while the EU Declaration does not even refer to Article 47 of the EU Charter of Fundamental Rights on the same right. The AIA does contain certain mechanisms, such as a right of appeal to be provided at the national level, but progress is still needed on the issue of enforcement.

Regarding the regulation of AI by the CoE Framework Convention and the AIA, the two projects, finalised in parallel, influenced each other. For example, the definition of AI chosen for the AIA also appears in Article 2 of the Framework Convention. There was clearly an intention to ensure that, where the two processes overlapped, the negotiations achieved complementary results. However, in cases of conflict, the AIA prevails for EU Member States, as the Framework Convention allows them to give preference to their obligations under the AIA. The open letter calling for a moratorium has contributed to an acceleration of the legal regulation urgently needed to provide legal security and establish oversight institutions.

2.3 General Conclusions

The need for the protection of human rights in the regulation of the internet today is beyond doubt. The online dimension of human rights is receiving increasing attention, as is the emergence of digital human rights that may go beyond existing human and fundamental rights. Such rights may be claimed and recognised at different levels; for example, at the national, European, or universal level. Accordingly, their human rights nature does not depend on recognition at the universal level, although this might be the claim and ambition, in particular when new threats are global in nature. As the example of the right of access to the internet shows, it is widely, though not yet generally, recognised as a new human right. The right to be forgotten, by contrast, has mainly been established in the EU. The human rights related to the use of AI are presently in the process of concretisation, starting from ethical principles and aiming at concrete obligations, at least in some respects, in the form of the Framework Convention adopted by the CoE or the AIA of the EU. The focus is on the progressive development and concretisation of general principles and rights rather than on the creation of new digital human rights, although the latter may also occur in response to new challenges. For example, the AIA now contains the right to know whether one is interacting with an AI system. One could also identify a right to be protected, through prohibitions or risk-management systems, against detrimental uses of AI leading to manipulation or the biometric categorisation of natural persons. In the process of identifying or designing digital human rights in line with a multi-stakeholder approach, civil and public actors may be involved. In the case of most new and emerging threats, there is no need for new digital human rights, as existing rights can be extended by interpretation to cover the new challenges.
Altogether, the emergence of (new) digital human rights is a highly dynamic part of the progressive development of international law in all its emanations, from soft law to hard law. The regulation of the development and application of AI has accelerated the process of meeting new threats on the basis of extended and new human and fundamental rights in order to close gaps identified in human rights protection in the digital environment.

3 Why and How the State Should Regulate the Internet

3.1 Introduction

This volume explores the challenges posed to human rights by the digital environment. Grounded as they are in human rights law in general, and Western human rights law in particular, many of the contributions take the individual as their focal point. From this perspective, the broader society of which the individual forms a part is not excluded from the analysis, but it is overshadowed by the individual’s interests. This focus can be found expressly in the preamble to the Charter of Fundamental Rights of the European Union (EU), which states that it ‘places the individual at the heart of its activities’.Footnote 1

By contrast, society, community, or the common good, is treated as a competing interest; a tension suggested most strongly in the limitations clauses of human rights instruments. These clauses focus on restricting the extent to which social or general interests can be allowed to limit an individual’s rights. An example of such a clause is article 52 of the Charter of Fundamental Rights of the EU: ‘Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.’

The argument in this chapter is that the interests of society as a whole can and should play a more central role in the protection of human rights. In addition, I show that the heightened significance of societal interests has a particular implication for the digital environment.

My argument proceeds in three main parts. In Section 3.2, I explore the supposed tension between the individual and society, examining briefly how Western human rights law in particular tends to see these two entities as competing rather than interdependent units. I then set out some alternative approaches to the relationship between the individual and society, in particular the natural law-based Common Good Constitutionalism and the interactional vision of the legal philosopher Lon L. Fuller. I suggest that Fuller’s view is particularly compelling in the context of internet regulation, as it emphasises the need for (genuine) social interaction as a prerequisite for individual agency.

In Section 3.3, I use Fuller’s vision as the basis for a discussion of two seminal requirements of a healthy system of internet regulation: trust and the constraint of power. In Section 3.4, I flesh out how these requirements could best be realised to create an effective system to protect individual rights and society as a whole. Section 3.5 draws together the main points of my chapter.

I need to be clear at the outset that what I am aiming to do in this chapter is to explain why the state must regulate the internet at all; that is, to establish the baseline from which individual rights analyses could proceed. The harm that my suggested approach is aimed at countering is, broadly speaking, the use of (dis)information to manipulate users of the internet. The main examples of such harm are found in disinformation, manipulation of information, and hate speech. I set out an approach that, I submit, justifies the speedy removal of disinformation or even of accurate but misleading information from the internet, the reporting and banning of social media accounts and websites, and the creation of structures that monitor and respond to inaccurate or manipulative postings in both the short and the long term. Such forms of internet regulation are already known in parts of the globe, and some of the mechanisms I propose in my conclusion may appear familiar to the European readers of this volume.

However, it is important to set out some basic design elements of internet regulation and explore their foundation. This is for two reasons. The first is that the mechanisms need to be tailored to their specific social, political, and constitutional contexts. In states with authoritarian governments, weakened governments with less technological capacity, and societies in which the government is not trusted by the subjects of law, the main features of the European or Western system may need to be adjusted. For a successful adjustment, the purpose behind the mechanism needs to be properly understood. Having said this, my second reason for suggesting the design features that I set out in Section 3.4 is that all societies, even those with technologically advanced governments and a seemingly healthy rule of law, need ongoing engagement between internet users, internet providers, and government authorities to sustain healthy internet regulation, avoiding both the harm caused by individual users and that caused by the government suppression of information and ideas. Thus, even for societies that already implement the basic design that I am suggesting here, it is important to understand its rationale and its positive relationship with individual human rights.

3.2 Different Perspectives on the Relationship between the Individual and Society

As mentioned earlier, limitation clauses in human rights instruments are often interpreted in a way that sees the individual and society as essentially in conflict. This emerges from the formulation of the clauses themselves – they make it clear that the individual and his or her rights come first – but also from the dominant role that the limitations enquiry tends to play in rights analysis. Consequently, the wording of the limitations clause might even be said to present the general interest as a necessary evil, and it certainly functions as a counterweight to the individual right. With some variation, all limitation clauses require a compelling case to be made that the limitation is necessary (not merely convenient) to achieve an important social good.

Second, the strong focus on the individual is reinforced by the tendency to shift most of the work of rights analysis to the limitations clause. Where rights are not phrased as absolute – where they are subject to limitations – the first step of rights analysis should be to determine the scope of the right in question, to see whether it has been limited or infringed. Only if the answer to the first test is ‘yes’ should courts and commentators consider whether the limitation/infringement is legally justifiable. However, as Michael Foran has noted, modern doctrine ‘has a tendency to avoid difficult questions relating to the scope of rights, preferring instead to view virtually any interference with a claimed interest as an infringement that stands in need of legal justification’.Footnote 2 As the legal justification can only be found through the limitations clause, the court is constantly being required to weigh up the individual interest against the public interest. Such an approach assumes ongoing tension between the individual and society. It is important to note that the public interest is, in these cases, generally identified with the interests of the majority, the group purportedly represented by the legislature.Footnote 3 If this is how the public interest is to be understood, courts are repeatedly placed in the position of defending one person against a larger group. In Michael Foran’s words: ‘When the public is set up conceptually in tension with the individual, rights become the last great defence of the individual against an encroaching state demanding their sacrifice for the benefit of the rest of society’.Footnote 4

How might such a stark dichotomy between individual and society be countered using legal theory? I examine two possible approaches briefly here. The first is based on natural law and the second on the philosophy of Lon L. Fuller.Footnote 5

Common Good Constitutionalism sees the common good as ‘the set of conditions necessary for each and every member of the community to flourish’.Footnote 6 The common good is therefore not limited to the interests of the majority, but refers to the good of all individuals within the community. From this perspective, there is no tension between the individual and the broader society, or the minority and the majority:

It is a central tenet of the common good that there is no conflict between the good of the majority and the good of the minority, once both are properly understood. This is because the good of an individual cannot be separate from the good of the community: my life is better when my friends’ lives are better. My membership within a civic community grounds the bonds of a civic friendship that connects all members of a polity. It is in our shared common interest that all members of our community be capable of leading flourishing lives and that they be treated with dignity and respect. To diminish the flourishing of others, to disrespect their dignity, in the name of the common good, is to fundamentally misunderstand what makes the common good common. It also fundamentally misunderstands what it means to pursue a good life, of which membership within a flourishing political community of equals is essential.Footnote 7

A similar approach to the virtue and necessity of community is found in the African concept of Ubuntu, which, in its broadest meaning, recognises that humanity is attained through community.Footnote 8 This pre-colonial concept may lie at the root of the emphasis on peoples and community in African human rights instruments.Footnote 9 The African (Banjul) Charter expressly recognises the duties of individuals and states that these duties are owed to ‘family and society, the State and other legally recognised communities and the international community’.Footnote 10 Furthermore, the Banjul Charter expressly mandates that ‘[t]he rights and freedoms of each individual shall be exercised with due regard to the rights of others, collective security, morality and common interest’.Footnote 11 Finally, this Charter is the only human rights instrument that recognises the rights of peoples and not just of individuals, thereby including the group within its focus.Footnote 12

Common Good Constitutionalism makes particular demands of human rights analysis, in that the first stage of the analysis – determining the scope of the right – has to be taken seriously. The scope of the right must further be determined with reference to the common good. In this way, our very understanding of the right is informed by, and promotes, the values that allow members of the community to flourish.Footnote 13 The limitations analysis, with its conflict between the individual and society, will come into play less often.Footnote 14

Since its introduction in 2020,Footnote 15 Common Good Constitutionalism has been strongly criticised as dangerous and authoritarian.Footnote 16 This chapter shows that a benign application of Common Good Constitutionalism is possible by drawing on Lon L. Fuller’s legal philosophy both to suggest what the underlying values of the common good might be and to show that the understanding of that common good can, and should, be developed by the community for whose benefit it must be used, rather than by a disconnected authority.

Common Good Constitutionalism sits uneasily with liberalism to the extent that the latter excludes from a legal analysis questions ‘relating to the flourishing of individuals or of what constitutes a good life’,Footnote 17 adopting instead the neutrality principle.Footnote 18 Consequently, the nature of the good ‘is either whatever a given individual says it is for them, or it is whatever a democratic institution determines it to be’.Footnote 19 Indeed, some liberal theorists hold that governments themselves must be neutral on ‘what might be called questions of the good life … that political decisions must be, so far as is possible, independent of any particular conception of the good life or of what gives value to life’.Footnote 20 It would thus seem that the role we are prepared to afford to society, or public interest, in human rights analysis depends on whether we are prepared to accept the moral content that natural law locates within law itself.

Lon L. Fuller’s approach could be seen as a compromise between the two apparently opposing philosophies of liberalism and natural law. Fuller described his own philosophy as ‘procedural natural law’.Footnote 21 The qualifier ‘procedural’ is important. Fuller remains neutral on (most) ethical issues within the content of law,Footnote 22 which would seem to exclude any discussion of the common good from legal enquiry. For Fuller, most substantive (what he would call ‘external’) moral questions fell outside the realm of law. Nonetheless, his vision of law is particularly useful for our analysis, first, because of the minimum moral content that he suggested is internal to law itself.Footnote 23 Second, the notions of the rule of law and equality before the law underpin the human rights documents consulted for this study.Footnote 24 If we adopt Fuller’s understanding of the rule of law, then the human rights treaties themselves recognise a minimum level of common good that society needs to promote before it can meaningfully protect human rights.

What do we mean by the rule of law? Most legal philosophers agree on the basic requirements of the rule of law,Footnote 25 what Jeremy Waldron calls the ‘laundry lists’.Footnote 26 These boil down to two basic ideas: first, that law must have a form that allows its subjects to understand what it demands of them and to ensure that their behaviour complies with it; second, that the law as laid down must be the law that is applied to them.Footnote 27 To fulfil the first requirement, law must be general, publicised, understandable, consistent, and not impossible to comply with. To fulfil the second, it must be prospective, reasonably stable, and faithfully enforced.

Fuller’s understanding of the concept of the rule of law differs from that of other scholars to the extent that he saw it as giving effect to a deeper, moral function, which was to protect and promote human agency:

I have repeatedly observed that legal morality can be said to be neutral over a wide range of ethical issues. It cannot be neutral in its view of man himself. To embark on the enterprise of subjecting human conduct to the governance of rules involves of necessity a commitment to the view that man is, or can become, a responsible agent, capable of understanding and following rules, and answerable for his defaults. Every departure from the principles of law’s inner morality is an affront to man’s dignity as a responsible agent.Footnote 28

Like liberalism, the agency-centred vision of the rule of law appears to take the individual as the quintessential and foundational element of law. Furthermore, Fuller’s concept of agency maps closely onto the right to dignity and its associated notions of autonomy and freedom. However, law could protect agency in Fuller’s conception only by coming into being and being sustained through community. Fuller used the term ‘interaction’ to describe the reciprocal process that shapes the content of the law, and this interaction required the engagement of the subjects of law with one another.Footnote 29 The interactional process, and not the fiat of government, creates and sustains law, because it is built on the shared understandings of the individuals who interact with one another. As Brunnée and Toope explain, the core moral quality of law, its capacity to allow its subjects to ‘reason with law and make choices about their own lives’,Footnote 30 generates fidelity ‘to the rule of law itself and not merely to specific rules’.Footnote 31 As a result, while the ultimate purpose of law is to protect and build on human agency, its vehicle for doing so is a socially cohesive community through which the applicable norms can be created and applied.

Douglas Sturm made a similar point about Fuller’s conception of freedom:

It should be made clear that Fuller’s understanding of freedom is not individualistic in character. More than once, Fuller has indicated his strong rejection of that theory of natural law whose substance consists in the proclamation of the ‘natural, inalienable and sacred rights of man’ precisely because of its individualistic bent. To posit the value or goal or goodness of freedom in Fuller’s usage of that term is not to desire that each man attain a state of total and absolute independence. On the contrary, the choices one can make, the purposes one can pursue, without collaborative social effort and without appropriate forms of human intercourse are trivial, if any such totally independent choices are in fact possible. More definitively, the natural law of ‘keeping alive the creative, choosing, and purposive side of man’s nature’ is intrinsically societal…Footnote 32

Because community is a prerequisite for and guarantor of individual agency, it is inaccurate and misleading to treat the individual and the community as adversaries. To support human rights through regulating the internet, we should therefore encourage rules that strengthen the community and, in particular, ensure genuine engagement and communication.Footnote 33

If human agency requires community and communication, then a healthy system to regulate the internet will promote trust between internet users and accurate information. This will require the active involvement of the state but, as I argue here, the state will need both the support and the constraint that is provided by non-state actors engaging with it. I explore each of these factors in more detail in the following sections.

3.3 Seminal Features of Good Internet Regulation
3.3.1 Trust

For Fuller, communication had a pivotal moral function because he saw ‘maintaining communication with our fellows’ as the overriding aim of human aspiration.Footnote 34 Its further moral value lay in its role in creating law. In Fuller’s theory, ‘law is constructed through rhetorical activity producing increasingly influential mutual expectations or shared understandings of actors’.Footnote 35 Through interaction and rhetorical activity,Footnote 36 actors are also able to ‘generate shared knowledge and shared understandings that become the background for subsequent interactions’.Footnote 37

Communication must clearly be more than cheap talk to give rise to genuine interaction,Footnote 38 so we need to ask what kind of communication creates shared understandings and legal norms. Drawing on Habermas’s notion of communicative action, Corneliu Bjola suggests that communication moves beyond ‘instrumental bargaining on the basis of fixed preferences’ to a more reasoned process, including ‘a mode of interaction between actors based on the logic of arguing; that is, of convincing each other to change their causal or principled beliefs in order to reach a reasoned consensus’.Footnote 39 Similarly, Ian Johnstone points out that ‘[d]eliberation is not a communicative free-for-all, in which any argument is as good as any other; the felt need to offer reasons others can accept in principle sets the parameters of discourse’. Seen in this way, communication must be based on reason and principle.

Compare this standard with the kind of exchanges that result on the internet when users create and share disinformation or manipulate the processes of internet communication, whether to amplify or suppress information or ideas, or to eviscerate the expertise and authority of qualified actors. The internet is particularly vulnerable to this form of abuse owing to the difficulty of verifying the identity of the persons behind posts, or of detecting when a post is being artificially enhanced or suppressed through the manipulation of algorithms or the use of robots. Such abuse makes principled or reasoned discussion impossible, and every argument is indeed as good as the next because readers have no way to ascertain which information or actors they can rely on. As a result, the posts most likely to be believed are those that align with the views the reader already holds, a situation that reinforces existing divisions between groups in society rather than stimulating genuine interaction across boundaries of difference. This means that shared understandings cannot arise across a community as a whole. People misled by disinformation and manipulation will not have engaged in the process whereby shared understandings and norms are formed, will not be able to make informed decisions themselves, and will not trust the authorities who are attempting to make and implement informed decisions. Those governed by the law will not feel fidelity to it. Consequently, disinformation and manipulation threaten the rule of law and weaken government programmes set up to protect public health, welfare, or any other common good. It is therefore not surprising that the EU describes online disinformation practices as ‘public harms’, specifically harms to the integrity of electoral processes, and as ‘threats to our way of life’,Footnote 40 which undermine trust and confidence in democratic politics. One of the best examples was the disinformation around COVID-19, as reported by the EU,Footnote 41 and by monitoring bodies all over the globe.Footnote 42

The constitutive role of communication in the creation and maintenance of the legal and social order renders the integrity of internet communication particularly important. Expressed in terms of a rights analysis, the heightened social importance of reliable communication would suggest either that the scope of the individual’s freedom of expression should be more narrowly defined, or that, if that freedom is found to have been limited by the regulation of the internet, society’s interests should weigh more heavily in a proportionality analysis. In this regard, we can distinguish between the people who create or knowingly distribute false or misleading information and the people who unwittingly or negligently encourage or disseminate it. In the case of the former group, I would argue that the scope of freedom of expression does not extend to the right to knowingly misrepresent information or mislead. In the case of the latter group, the scope of freedom of expression can more plausibly be argued to include this group’s behaviour; that is, to include the right to unknowingly disseminate false or misleading information. However, in this case, at the point where freedom of expression has been found to have been limited, society’s interest in accurate and good faith communication should play a stronger role in justifying the limitation of that right.

The internet needs to be regulated because a lack of regulation leads to distrust and a breakdown of common values. But simply putting a regulator in charge of the internet, with no way of ensuring that the regulator itself is trustworthy, compounds the problem. In the area of COVID-19, for example, there were many instances of state actors spreading disinformation, either by denying the existence or spread of the disease or by suggesting cures with no medical efficacy.Footnote 43 Moreover, we have seen how control and censorship of the media (both social media and registered journalism) have been used to justify government abuses and war.Footnote 44 Ironically, the power to regulate cannot itself be left unregulated. It needs to be structured in an interactive manner that ensures the regulator is in dialogue with responsible stakeholders.

3.3.2 (The Constraint of) Power

States have a monopoly on legitimate violence within their territories, control over the executive arm of government, including its security apparatus, and the ability to pass new legislation. If states abuse this power, they pose a threat to democratic values and the common good. On the other hand, and despite the power they may enjoy over their subjects, states are simultaneously often weaker than powerful social media corporations, such as Meta and X. From both perspectives, states need to act in concert with non-state actors and broader society to be effective and compliant with the rule of law.

Fuller’s theory explains why the state is both too weak and too strong to act alone. The state needs the buy-in of its subjects because law is formed out of the engagement of those subjects with each other and the government. Fuller claimed that, in practice, interaction is part of all lawmaking, ‘even … those [forms] apparently dominated by enacted law and formal law-making and law-applying institutions’.Footnote 45 This is both because there is a horizontal element in apparently vertical lawmaking procedures, such as adjudication or even the drafting of legislation,Footnote 46 and because the subjects of law are engaged in a vertical process of interaction with the lawgiver.Footnote 47 Non-state actors thus help the state to create the law.

Particularly in the context of internet regulation, there is a practical reason why the state needs the help of broader society. The state often cannot enforce the law; that is, it cannot effectively regulate the internet without the engagement of non-state actors. Particularly where the government is not fully trusted by its citizenry, and where that government does not have the expertise to quickly identify disinformation, manipulation of the internet, or hate speech, non-state actors need to assist and collaborate with the government if it is to respond effectively to such misuse. Collaboration of this kind actually empowers the state meaningfully to counter the dangers posed by the internet.

On the other hand, the engagement of non-state actors also prevents the imbalance of power between state and subject from threatening the rule of law. Interaction is necessarily a reciprocal process, even when there is a power imbalance between the parties to the interaction. As Cheng explains:

To Fuller, no power relation is completely devoid of any measure of interaction between the power-holder and the subject of his power, especially over the course of time. This inevitable degree of reciprocity, in turn, imposes a constraint on the exercise of power by the power-holder, while allowing for the possibility of resistance and negotiation by those subject to the power.Footnote 48

The state bears the primary moral duty of protecting the subjects of law against threats to their lives, health, and other resources.Footnote 49 It has the authority to exercise coercion, through regulation,Footnote 50 or even criminalisation,Footnote 51 when this proves necessary to fulfil its duty. But both criminalisation and regulation can be manipulated by the state to favour a particular party, such as the government in power, and silence the input of the other actors in the internet community. In the context of the internet, a reciprocal process of interaction needs to be built into the very design of the regulatory process to prevent the coercive power enjoyed by the state from threatening the rule of law.

3.4 A Lawful Process of Regulation

In this concluding section, I suggest some basic design elements of a healthy system of internet regulation, one that permits and indeed requires of the state that it protect society against disinformation, manipulation, and hate speech, and yet constrains the power of the state in order to protect individual agency and the rule of law.

The outline of my suggested regulatory system must generally be broad, as the details of the particular mechanisms employed for regulation depend on a number of extraneous factors. These include whether and how states control the activities of transnational corporations within their own jurisdictions, the technological capacity the state and civil society can provide, and the traditional modes of civic engagement within any particular society. Nonetheless, I suggest there are common features that all well-regulated internet communities will share.

As set out in Section 3.3, we are aiming to ensure a community in which participants have a minimal level of trust in each other and the government, and in which power – both state and private – is sufficiently constrained. As argued in Section 3.2, the main tool to achieve these ends is ongoing engagement and responsiveness between the parties. This process, which we would call interaction under a Fullerian approach to the rule of law, ensures that the necessary shared understandings can arise; understandings that form the foundation of the legal system adopted by the participants. Particularly in the case of the internet, ongoing engagement will also help to monitor compliance with the norms that emerge from this foundation.

Ongoing and effective interaction is best achieved by a multi-tiered design. At the first level of such a framework, the internet and online service providers regulate themselves in dialogue with their users. At the second level, non-state actors monitor the use of the internet, engaging with both the service providers and the state. At the third level, the state plays an active role in internet regulation, co-ordinating between the first two tiers of non-state actors.

3.4.1 Self-Regulation

At the first level, service providers self-regulate. This term, ‘self-regulation’, is a slight misnomer, as I suggest that it be mandated by the state itself, and that penalties be attached to egregious failures by service providers to prevent serious harm through their users’ traffic on the internet. However, the horizontal and internal nature of the regulation is important; the platform must itself publish and enforce a code of conduct based on international best practices, and encourage debate among users of its platform on what the code of conduct should look like.

Singapore provides an example of self-regulation that is promoted by the statutory authority,Footnote 52 the Infocomm Media Development Authority.Footnote 53 This government board encourages content providers to develop industry codes of practice in order to promote self-regulation and codes that complement existing internet content regulations.Footnote 54 The dialogue between users of the platform and the platform, and among the users themselves, provides the first step in developing shared understandings that recognise the harm that can be caused by internet traffic and build consensus on how to avoid the harm.

The role that the users of the internet can play is illustrated by the process followed in South Korea whenever the Korea Communications Standards Commission (KCSC) instructs an internet provider to remove content from its platform.Footnote 55 Upon receiving a request for deletion or rebuttal of the information, the provider of information and communications services must delete the information or take a temporary or any other necessary measure.Footnote 56 This content can be hidden, but not deleted, for thirty days, to allow either the platform or the user to challenge the decision. An example of such an engagement arose in 2015, when the entire platform of an adult cartoon service was blocked because part of its content had been considered obscene. The platform operator challenged this ruling on the basis that the site used an age authentication system and therefore complied with the law. The decision to block the site was met with public outcry, and subsequently the commission removed the blocking order.Footnote 57

3.4.2 Independent Regulatory Bodies

The second tier of regulation needs to be provided by independent regulatory bodies. These bodies need to be structurally independent of the government and steered by civil society. At the same time, these independent bodies must be designed and set up in such a way as to ensure that they genuinely do represent the public interest and operate in a transparent and accessible manner.

Recognition is growing that the involvement of civil society is crucial for healthy internet regulation. Civil society can fulfil two roles: developing the rules and policies by which the internet should be governed, and monitoring compliance with the normative system. A good example of the former can be found in the creation of the African Declaration on Internet Rights and Freedoms.Footnote 58 This declaration is a pan-African initiative aimed at ‘promoting human rights standards and principles of openness in internet policy formulation and implementation’ across the African continent.Footnote 59 The idea for the declaration emerged from the African Internet Governance Forum in Nairobi, Kenya, in 2013,Footnote 60 at which participants came from government, the private sector, civil society, and regional and international organisations.Footnote 61

International bodies such as UNICEF can play a role in the constitution and functioning of internet regulation at the domestic level, and some of the domestic bodies fulfilling this function can be self-constituting, particularly those offering essential technical expertise.Footnote 62 However, non-state bodies require the backing of the government and need to remain in communication with it, for reasons explored below.

I submit that these bodies need to be independent of the state because the government should not be identifying trends or disinformation itself, particularly in states with weak democracies. If an autocratic government exercises unilateral control over the internet, the rule of law is broken. Particularly egregious examples of this form of abuse have been seen when dictatorial governments in Africa have shut down the internet completely during protests or elections.Footnote 63 By contrast, if external bodies are doing the job of regulation, they can function as an interlocutor with whom the government has to interact, and to whom it must justify its exercise of power. In this way, they help to ensure the necessary checks and balances in internet regulation.

Independence can be achieved only by the correct design of the regulatory body and of its connection with the state. First of all, the power to regulate must be given to an external body; that is, a body that is structurally separate from the state. But, second, the regulatory authority may not be subject to political interference. Singapore provides an example of a body without structural independence. Under the Protection from Online Falsehoods and Manipulation Act of Singapore, government ministers can order the publication of corrections and the retraction of content they assess to be false or against public interest.Footnote 64 The regulatory body is thus embedded within government itself. However, South Korea provides an example of an apparently independent regulatory body that is still subject to strong political interference. The KCSC is established through legislation and identifies itself as a private organisation; however, all nine members of the Commission are appointed by the president.Footnote 65 Consequently, a nominally independent ‘private’ organisation is factually ‘under the direct control of the President’.Footnote 66

In addition to politicised appointments, the KCSC has been criticised for its lack of transparency.Footnote 67 The norms that it applies when it suppresses internet communications are opaque, and it enjoys wide discretion in its decisions.Footnote 68 A similar criticism is levelled at the new power enjoyed by the Film and Publication Board in South Africa,Footnote 69 a board whose core members are also appointed directly by the relevant minister.Footnote 70

The problematic role of the government in appointing incumbents to these regulatory bodies highlights another important design element of healthy internet regulation, namely, a transparent process for the selection of office holders. Such a selection process addresses two problems at once. First, it reduces the chance that the regulatory body will be beholden to the government and thus lack independence. But second, it also helps to ensure that such bodies do, in fact, represent the public interest. These independent bodies do not have a democratic mandate, which means that some kind of public process is needed to ensure responsiveness and legitimacy for the authority that they exercise, particularly for those bodies that are formulating policy. The selection process should ideally consist of public interviews conducted by mixed panels representing both the legislature and experts in the field of internet regulation.Footnote 71 Such a process ensures the interaction from which shared understandings can be built by promoting trust in the incumbents of the regulatory bodies.

Legislation or regulation is then needed to confer on this body the power that resides, by default, in the state itself: the power to regulate the internet. These powers serve two functions. They are, on the one hand, a check on inadequate self-regulation by service providers, while on the other hand, they serve as both a check on and support for the government in its duty to prevent harm through internet usage. Particularly where the government faces challenges of technical capacity, properly constituted rapid-response information task teams need to be co-ordinated so that they can respond effectively to immediate crises and provide watchdog analysis of emerging threats. In the case of disinformation, the response needs to happen in real time, so that inaccurate and damaging information can be removed immediately where necessary.

However, immediate responses are not always necessary. Monitoring and response bodies can also be assigned to different target areas that need to be addressed, such as terrorism, incitement, hate speech, and xenophobia. Such bodies may react in a more long-term manner and even coordinate counter-campaigns against harmful internet initiatives.

Although these bodies need to be independent of the government, they need to be supported by the government to be effective, a point discussed further in the next section. They need to operate under clear rules and procedures, and remain responsive to the engagement of the public as well as the government. As seen in the example of the KCSC, public response to the rulings of this independent authority can help it to refine and clarify the norms that it is meant to be applying.

3.4.3 The Role of the State

The state is needed partly because deleting and retracting content does not eliminate the activity behind the information shared online, such as terrorism, incitement, hate speech, and xenophobia. Engaging with this reality is beyond the capacity of the task teams; this is when the involvement of the state is warranted. We have many recent examples of occasions when the state should have acted, but did not. The online trends and chatter leading up to the 6 January 2021 riots in the US made those riots foreseeable. Similarly, the Centre for Analytics and Behavioural Change picked up the indicators of the approaching riots in South Africa before they began in June 2021.Footnote 72 The presence of the state, through the police, was needed, whether or not the police themselves were gathering the primary information.

The role of the state requires clear delineation because its involvement in internet governance must be a justifiable intervention addressing credible and serious threats to the peace and security of citizens and the state as a whole. It would intervene in two main contexts. The first is the proactive suppression of a serious forthcoming danger, such as xenophobic attacks, riots, and insurrection. These threats call for a multi-departmental response that only the state has the capacity to execute. In these cases, the state’s involvement should be based on the analysis, predictions, and recommendations of the independent task teams whenever a pre-emptive response is required to maintain societal peace and security.

The second context in which the state has a role to play is in the prosecution of serious crimes committed online. These include hate speech in some jurisdictions and may also encompass the deliberate distribution of misinformation, especially during elections. And, of course, it includes all the offences that are also offences when committed offline. There is no reason why the normal law enforcement processes should be hindered because the crime is taking place in the digital space. Therefore, a component of the amendment to the internet regulation law in South Africa was that internet service providers were to inform the Film and Publication Board whether they had reported the presence of the prohibited content, as well as the particulars of the individual maintaining, hosting, distributing, or in any manner contributing to the content, to an official of the South African Police Service.Footnote 73

3.5 Conclusion

In this chapter, I have attempted to pan out from an analysis of specific rights affected by internet regulation to then focus in on the basis for that regulation. I argue that this basis is the rule of law, and that the rule of law, properly understood, requires an approach to rights analysis that recognises and fosters the community in which the rights are exercised. Under such an approach, we interpret individual rights in such a way as to benefit all the members of the community, and we are prepared to limit individual rights if this is necessary to maintain genuine interaction between the subjects of law. In Fuller’s view, this process of interaction creates the shared understandings from which a community develops a legal system, and nurtures and develops the particular norms that make communal life possible. It is thus a prerequisite for the very existence of the individual rights.

With the focus on maintaining interaction and responsiveness, I then suggest some basic design elements for internet regulation. The general model I propose is backed up by the coercive power of the state and allows for significant limitation of individual rights to prevent the manipulation of information. However, such limitations are themselves bounded by a structure in which the necessary actors remain in dialogue with one another and keep the state responsive to the views of internet users.

4 How to Tame the ‘Digital’ Shrew: Constitutional Rights Going Online

‘No man and no mind was ever emancipated

merely by being left alone.’

John Dewey, The Public and Its Problems
4.1 On Dangers and Solutions

On 11 July 2017, the Knight First Amendment Institute at Columbia University filed a lawsuit against then President Trump and his aides for blocking several people from Trump’s Twitter account following their criticism of his presidency and policies. The plaintiff asserted that the @realDonaldTrump account was a ‘public forum’ protected under the First Amendment, from which no one could be excluded based simply on their views.Footnote 1 The US Court of Appeals for the Second Circuit affirmed the district court’s holding that such blocking violated the First Amendment.Footnote 2 This judicial finding confirms ‘the key role that the Internet can play in mobilising the population to call for justice, equality, accountability […] and better respect for human rights’.Footnote 3

By way of contrast, assume now that Twitter itself had removed users’ comments. The outcome would be different, since Twitter, as a private actor, is not bound by the constitutional obligations embodied in the First Amendment. Under such a scenario, First Amendment rights become illusory, as privacy-related rights sometimes do, and online infringements of the latter can produce cascading results. Recently, a surveillance technology company and social media made what Danielle Citron calls ‘intimate privacy’Footnote 4 accessible to anyone possessing a computer or mobile phone. MIT Technology Review revealed how iRobot collected photos and videos from the homes of test users and employees and shared them with data annotation companies.Footnote 5 The investigation revealed that images of a minor and of a tester on a toilet ended up on Facebook. It was a robot vacuum that took the pictures, not a person. Nevertheless, it would be too simple to blame iRobot alone for this infringement, since it was humans who decided to steal and leak the images.

But who is to blame? When the internet launched the digital revolution in the 1990s, the prevailing view was that it should be free of regulation, that electronic commerce should be free, and that social platforms, that is, intermediaries, should not themselves be liable for content posted by third parties.Footnote 6 However, as the influence of digital technology on everyday life has grown, concerns about human rights violations have also rapidly increased, making the issue of liability for the online infringement of human rights unavoidable. Modelling the liability of social platforms has become a pressing issue, along with emerging efforts to institutionalise the accountability of digital collective actors, including human–algorithmic associations.Footnote 7 But to make social platforms liable, it is first necessary to resolve the problem of the accountability of private actors for human rights violations, actors traditionally immune to human rights challenges, because social platforms are owned by private actors who also manufacture, coordinate, and control their content. To remedy the situation, different strategies have been employed or proposed.

First, the transnational nature of digital communication and the unreadiness of states to step outside their traditional zone of control within their borders made room for social platforms to turn to self-regulation and develop what Gunther Teubner, following David Sciulli, calls ‘societal constitutionalism’.Footnote 8 By accumulating powers traditionally seen as public, and at the expense of democracy, private actors have become responsive to fundamental rights values through private action. The Facebook Oversight Board’s decision to uphold and partially revise Facebook’s decision to suspend former US President Donald Trump’s account indefinitely for the alleged influence of his posts on the violent attack on the US Capitol on 6 January 2021 clearly illustrates this trend.Footnote 9 What is more striking is that international law has encouraged such privatisation of public law, because it lacks legally binding instruments that would regulate the direct responsibility of non-state actors and profit-driven companies for human rights violations. Take, for example, the United Nations (UN) Guiding Principles on Business and Human Rights, whose greatest achievement is the suggestion that corporate responsibility implies a negative obligation to respect human rights and a positive obligation to understand and mitigate negative impacts on human rights with due diligence.Footnote 10 In the absence of state reactions, business actors, including social platforms, have become quasi-regulators.Footnote 11

Second, despite the limits of international law and the fact that the European Convention for the Protection of Human Rights and Fundamental Freedoms generally obliges only states, the European Court of Human Rights has attempted to limit the power of social platforms, by ruling that, in principle, social platforms, in the role of content providers, are liable for third-party content.Footnote 12 Although this approach has been subject to substantial criticism because it might encourage social platforms to behave like censors to avoid liability or arbitrarily remove content, it is worth noting that the Court has made an effort to model a liability regime for human rights violations in the online context through its doctrine of positive obligations.Footnote 13

Third, within the European Union (EU), a social platform’s liability was initially perceived as non-existent, provided that in a specific case it ‘has neither knowledge of nor control over the information which is transmitted or stored’.Footnote 14 Conversely, if it was established that the social platform possessed knowledge of the posted information and exercised control over it, liability existed.Footnote 15 Today, the Court of Justice of the EU (CJEU) has become a significant regulator of the digital world, finding the balancing principle extremely helpful in disputes involving, on the one hand, the right of social platforms to conduct business and, on the other, the rights of individuals. Furthermore, the recently adopted Digital Services Act, the Digital Markets Act, and the Artificial Intelligence Act represent additional EU attempts to force platforms to act more responsibly.Footnote 16

From a methodological perspective, although policymakers and judges have attempted to cope with the issue of intermediary liability for human rights violations within different fields, including data protection law, consumer law, privacy law, intellectual property law, and hate speech regulations,Footnote 17 on the theoretical level the dominant way to frame this new phenomenon is to label it in constitutional terms. Starting from the fact that constitutionalism aims to limit government in order to protect individual rights, the emerging model of digital constitutionalism strives towards the same aim: to provide an understanding of how digital technology affects human rights, which are traditional tenets of constitutional law, and to offer solutions for how to make private actors responsible for human rights infringements in the context of the digital world.Footnote 18

On this account, some scholars have proposed the recognition of new rights as an exit strategy. In addition to already available rights, the right to explanation (in the context of data processing), the right to accessibility, and the right to obtain a translation from the language of technology into the language of human beings are perceived as means to regulate the relationship among the three main participants of the information society: platforms, states, and individuals.Footnote 19 Others have examined whether the horizontal application of constitutional rights in private law may be a possible response to the unlimited powers of social platforms.Footnote 20 Several years ago, the Council of Europe’s Committee of Ministers acknowledged the significance of the horizontal effects strategy in regulating the responsibilities of online platforms and recommended that Member States ensure the horizontal effects of constitutional rights in relations between private parties.Footnote 21

Considering that human rights violations in the digital sphere are of a constitutional quality, this chapter identifies the horizontal application of constitutional rights as a possible response to human rights challenges raised by the actions of social platforms. The traditional view in constitutional law is that constitutional rights are shields only against the state. However, in this chapter, I will start from the premise that the focus should not be on the state’s obligations but the individual’s rights. Following Joseph Raz, who claims that ‘rights precede obligations and therefore there is no closed list of obligations according to a certain law [but that] … changed circumstances can lead to the creation of new obligations according to an already existing law’,Footnote 22 I will presuppose that constitutional rights correlate not only with different duties but also with different duty-bearers concerning the fulfilment of duties. In light of this conclusion, it is evident that digital technology has made social platforms a prominent duty-bearer toward constitutional rights. I intend to make progress on the issue of the liability of social platforms for individual rights violations by suggesting the horizontal application of constitutional rights as an available strategy to remedy individual rights infringements in the online environment. The issue of whether constitutional rights should be restructured to protect from all intrusions of the digital world (e.g., the digital code) and not only from the activities of social platforms is outside this discussion.Footnote 23

This chapter is divided into five parts. After this introduction, in the second part, I will more closely explain the (non-)application of constitutional rights in offline private relations. In the third part, I will discuss different approaches to the emerging authority of constitutional rights online. The fourth part will advance understanding of how the horizontal application of constitutional rights in the online environment helps establish and maintain democratic control over digital technology. In the concluding part, I will summarise why an extension of constitutional rights into the digital sphere could be a driving strategy for protecting individual rights against the intrusive power of the algorithmic society.

4.2 Rights Talk in Private Law
4.2.1 The Meaning and Relevance of the Public/Private Law Distinction

A distinction between private and public law provides a ground for the systematisation of law.Footnote 24 It first appeared in sixteenth-century legal treatises, which, largely ignoring the contrast in Roman law between jus gentium and jus civile, made a sharp distinction between jus publicum and jus privatum.Footnote 25 Ever since, what belonged to the first and what was covered by the second category, however, remained a subject of vivid discussion.

Broadly speaking, private law traditionally encompasses the law of contracts, tort, property, business associations, commercial transactions, and related fields governing relations between individuals.Footnote 26 The state here appears as a mere arbiter of the rights and duties that exist between private parties and is not the party with the interest.Footnote 27

Defining the term public law has proved more challenging, and scholars have never had complete control over its definition. For example, public law was entirely omitted in Justinian’s civil law, but Hale and Blackstone’s civil law incorporated much of what the Romans would have called public law, including the rights and duties of the monarch, members of Parliament, and other magistrates.Footnote 28 Questions of whether public law is an autonomous body founded on the autonomy of the political realm, whether it is isolated from morality, which philosophy we should turn to in order to specify its subject and tasks (e.g., functionalist legal thought, legal positivism, Dworkinian legal interpretivism, or political theory), and which values are immanent within public law have been the subject of passionate modern debate, exemplified in the work of Loughlin, Craig, Harlow, and Cane.Footnote 29 At the highest level of abstraction, one may say that public law is inseparable from government.Footnote 30 Following this argument, one can argue that constitutional law, criminal law, and administrative law are the principal tenets of public law. Yet one should bear in mind that the lack of a clear definition of public law also results from persistent differences among jurisdictions: despite emerging unifying trends, the tenets of public law are not the same in, for instance, France, the US, and England.Footnote 31

The issue of whether public law is fundamentally different from private law is slowly losing its attraction since, in contemporary times, it is often unclear whether a relevant institution, value, or principle derives from public or private law. This is why Kelsen’s claim that a distinction between private and public law is ‘useless as a common foundation for a general systematization of law’ is still valid.Footnote 32

There is much more to be said on the distinction between public and private law, but what helps approach the issue of the impact of constitutional rights on private parties is Kelsen’s view that traditionally, private law embraces norms governing relations between private parties, while public law embraces norms stipulating rights and duties between the state on the one hand, and private parties on the other.Footnote 33

4.2.2 The Riddle of Horizontality: From Natural Law to the Horizontal Enforcement of Constitutional Rights

Individual rights were first theoretically articulated in natural law theory, without any differentiation among the duty bearers responsible for their violation, and with a state obligation to protect individual rights regardless of who the perpetrator was:

The state of Nature has a law of Nature to govern it, which obliges everyone, and reason, which is that law, teaches all mankind who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty or possessions […]. The great and chief end, therefore, of men uniting into commonwealths, and putting themselves under government, is the preservation of their property […].Footnote 34

Thus, John Locke never considered natural rights to create obligations only for states, nor did he think a state’s duty to protect rights extended only to its own intrusions. His vision found expression in the first state documents stressing freedom and equality, which acknowledged various duty holders vis-à-vis individual rights and a comprehensive state obligation to protect them from different intruders, not only from the state.Footnote 35

While the idea of natural rights proved mostly short-lived, the constitutionalisation of individual rights in different forms and through different generations continued to endure after its first occurrence in the US Bill of Rights. However, over time, the focus on the state’s duties became the centre of constitutional protection as the state accumulated power and authority over its citizens.Footnote 36 As a result, constitutional rights, with few exceptions, extended only into the public law regime and not the regime of private governance. What followed on the theoretical level was the development of an animating idea that constitutional rights apply only vertically – in the relationship between a state and individuals – and not horizontally between private parties.

However, the fall of the Berlin Wall, the end of apartheid, and the return of many Latin American countries to democracy in the 1990s encouraged drafters of the new constitutions for post-communist, post-apartheid, and post-authoritarian societies, and constitutional law scholars, to rethink the nature and the purpose of the constitution, including the issue of its application in private law.Footnote 37 Furthermore, the phenomenon of globalisation, the increased activity of non-state actors in world conflicts, and the penetration of transnational corporations, non-governmental organisations, and digital platforms into the traditional public sector have raised a critical theoretical question – who are the duty bearers on whom human rights (whether constitutional or international) impose burdens and obligations?Footnote 38 Translated into this discussion, the starting position is that constitutional rights have become just as vulnerable to private action as to state action; yet, unlike the constraint they place on the state in a Lockean sense, constitutional rights, in principle, do not constrain private actors.

Now, the academic discussion on whether constitutional rights should or should not produce effects in private law has brought to light three different positions regarding the applicability of constitutional rights in private law.

The first two positions are mutually exclusive. One, verticality, expresses the traditional idea, already elaborated above, that constitutional rights protect only against the government and have no application in private law. Accordingly, they are judicially enforceable in public law but not in private law. The main justification for insisting on verticality is the protection of individual autonomy in the private sphere, along with liberty or privacy, and the need for market efficiency.Footnote 39 A frequent argument is that autonomy in the private sphere should be insulated from any state action or control.Footnote 40 A constitution's mandate is to secure a limited government, not to regulate private relations, which should be based on free individual choice.

The other position rests on the opposite, horizontal, approach: although defined and determined in constitutional (i.e., public) law, constitutional rights are directly applicable in private law, meaning that they protect not only against the government but also against private parties. Apart from insisting on the vulnerability of human rights in relation to any actor, whether state or private, some authors also assert that there is no reason to avoid the constitutional regulation of autonomy for the purpose of its protection, as autonomy is already regulated by non-constitutional law.Footnote 41 Others point to the function of a constitution, arguing that, as the supreme law of the land, it should apply to all equally. For example, Mattias Kumm's total constitution claim derives from the premise that private law also involves political choices, and that political choices are subject to a constitutional rights review using proportionality analysis; this, in turn, suggests that decisions relating to private law should not be excluded from constitutional rights scrutiny.Footnote 42 Consequently, constitutional rights should be judicially enforceable in disputes between private parties.

The third position stands between these two extremes. Known as the indirect horizontal effect, it assumes that although constitutional rights apply directly only in public law against the government, they nevertheless apply indirectly, that is, produce effects on, private law.Footnote 43 While under the direct horizontal effect position private actors are directly subjected to constitutional rights, under the indirect horizontal effect position private laws are subjected to constitutional rights. Courts play a decisive role here, as they must, in one way or another, take constitutional rights into account in deciding disputes between private parties.Footnote 44 What is indirect here is the fact that individuals are protected not directly by constitutional rights, but by the effects constitutional rights produce on private law.Footnote 45

There is a further wrinkle here. Constitutional rights can produce either strong or weak indirect horizontal effects on private law. A strong indirect horizontal effect assumes that all private law is subject to constitutional rights and may be challenged in private litigation, meaning that individuals are fully (yet indirectly, through private law) protected by constitutional rights. By contrast, a weak indirect horizontal effect means that private law is not subjected to constitutional rights, but that courts can take the constitutional values exemplified in applicable constitutional rights into account when interpreting or developing private law in litigation between private parties.Footnote 46 What stands behind the weak indirect horizontal effect is the claim that ‘the actions of private individuals can produce similar or identical effects or harms to those of governmental’.Footnote 47

4.2.3 How Horizontality Works Offline

The concept of the horizontal effects of constitutional rights is one of the basic coinages of modern constitutionalism. Although Ireland and South Africa are famously known for expanding constitutionalism into private law, horizontality is operational in other jurisdictions as well, most notably in many Latin American countries, including Argentina, Bolivia, Chile, and Colombia,Footnote 48 as well as in Malawi, Ghana,Footnote 49 and Slovenia.Footnote 50 That said, horizontality means different things in different jurisdictions.

The Irish Constitution itself contains an express commitment to direct horizontality: ‘The State guarantees in its laws to respect, and, as far as practicable, by its laws to defend and vindicate the personal rights of the citizen.’ Since 1965, the Irish courts have maintained that constitutional rights may have a direct horizontal effect and do not impose obligations on the state alone.Footnote 51 The objections to the horizontal application of constitutional rights from liberal constitutional theory have played no role in judicial reasoning.Footnote 52

South Africa has a specific approach to the horizontality issue.Footnote 53 Apart from verticality, its 1996 Constitution endorses the direct horizontal application of constitutional rights in private law ‘taking into account the nature of the right and the nature of any duty imposed by the right’.Footnote 54 This is not the end of the story. The Constitution also authorises the indirect horizontal application of rights in disputes between private parties through the courts: ‘When interpreting any legislation, and when developing the common law or customary law, every court, tribunal or forum must promote the spirit, purport and objects of the Bill of Rights.’Footnote 55

An important footnote should be added here. Direct horizontality also works in France, but not as a constitutional matter. French legal culture does not recognise the horizontal effect of constitutional rights and is, in some ways, even incompatible with it, because the state has never been perceived as a threat to rights but rather as their protector.Footnote 56 Nevertheless, rights do produce a direct effect in private law through the judicial application of the European Convention on Human Rights.Footnote 57

In Germany and Canada, constitutional rights have not a direct but an indirect horizontal effect. In the world of horizontality, Germany is best known for the Drittwirkung doctrine, meaning third-party effect, which has produced a substantial horizontal effect in jurisprudence. The doctrine was born in the jurisprudence of the German Federal Constitutional Court. By specifying its premises in the famous Lüth case, decided in 1958, the Court ended a decade-long passionate discussion among German scholars and courts about the scope of the then newly adopted Basic Law (1949).Footnote 58 The Federal Labour Court took a leading role in the discussion, asserting that the constitutional rights protected in the Basic Law were directly applicable to relations between employers and employees.Footnote 59 Mindful of the provision that ‘basic rights shall be binding for the legislative, executive and judicial powers’, the Federal Constitutional Court did not accept the position of the Federal Labour Court. Yet, more importantly, it did not fully reject the idea that fundamental rights could produce effects on relations between private parties. The Court adopted what is now called the indirect horizontal effect model, in which constitutional rights were understood as legal codifications of objective general values immanent to the whole legal order, including private law:

This value system, which centres upon human dignity and the free unfolding of personality within the social community, must be looked upon as a fundamental constitutional decision affecting the entire legal system […] It naturally influences private law as well; no rule of private law may conflict with it, and all such rules must be construed by its spirit.Footnote 60

In short, although the Drittwirkung doctrine accepts that basic rights oblige only state organs, it nevertheless holds that: (a) all private law is directly subjected to constitutional rights and is invalid if it conflicts with them, and (b) it is not for private actors to conform their actions to constitutional values; rather, it is for judges, bound by the basic rights under Article 1(3) of the Basic Law, to consider constitutional values when interpreting private law.Footnote 61

The German indirect horizontal effect model, with some variations, is also followed in Canada. Like Germany, Canada supports verticality when it comes to the effects of the Canadian Charter of Rights and Freedoms on legislation.Footnote 62 Yet the Supreme Court has distinguished between the constitutional rights and the constitutional values embodied in the Charter, allowing constitutional values to influence the entire legal system, including private law, when ‘private litigant disputes fall to be decided at common law’.Footnote 63 This means that the principal role of the rights embodied in the Charter is to protect citizens against the government, but the courts may ‘apply and develop the principles of the common law in a manner consistent with the fundamental values enshrined in the Charter’.Footnote 64 Therefore, unlike in Germany, not all private law in Canada is directly subjected to constitutional review for inconsistency with the Charter, but only the common law, whose consistency the courts must check against Charter values (not rights). This solution derives directly from the separation of powers principle in the (British) common law tradition, under which the courts are allowed to develop the common law in parallel with the Constitution.Footnote 65

Compared with Germany and Canada, the US is an outlier in the world of horizontality, although it adheres to the same starting position – that constitutional rights oblige only state actors. Influenced by a strong liberal tradition in which individual autonomy is highly cherished in all law, not only does the constitutional text refer explicitly to the obligations of states when conferring rights (‘No state shall…’), but the US Supreme Court has also established the controversial ‘state action doctrine’, which, arguably, precludes the influence of constitutional rights in private law. Thus, in the original case from 1883 – The Civil Rights Cases – in which the doctrine was born, the US Supreme Court held that, since they apply only to government actions, the Thirteenth and Fourteenth Amendments were not an appropriate basis for Congress to pass laws protecting African-Americans from discrimination.Footnote 66 The Court emphasised that the constitutional rights shielded by the Fourteenth Amendment were so designed that they had only negative effects – they imposed duties of restraint only on federal or state governments, whose duty was to protect individuals only from actions taken by the state. However, the Court made an exception regarding the Thirteenth Amendment, arguing that ‘Congress may probably pass laws directly enforcing its provisions, yet such legislative power extends only to the subject of slavery and its incidents […].’Footnote 67

The question of who counts as the state for the purposes of the state action doctrine has extended its reach to private parties, provided that their conduct can in various ways be attributed to the state.Footnote 68 However, such situations are limited, because the US Supreme Court has refused to find state action in many settings, apparently without clear criteria.Footnote 69 On the other hand, for those committed to debating the state action doctrine, a key concern is either which law is subject to constitutional scrutiny or whether courts, as state actors who enforce those laws, must be subject to constitutional rights scrutiny. Thus, focusing on the courts, the US Supreme Court itself ruled in Shelley v. Kraemer that it was unconstitutional for the courts to grant relief to enforce a racially restrictive covenant because this would constitute state action under the Fourteenth Amendment.Footnote 70 Stephen Gardbaum even claims that, viewed through a comparative lens, the US adheres to the horizontal model, as all law is fully and equally subject to constitutional rights scrutiny.Footnote 71 Nevertheless, the dominant position in the US is still that the Constitution limits the application of constitutional rights to public law. I hasten to add that in cases involving constitutional rights infringements in a digital context the US Supreme Court has shown no intention of changing this position.

4.3 The Authority of Constitutional Rights Online: Emerging Trends and Resistance

Having documented the horizontal effects of constitutional rights offline, I now turn to present similar efforts to validate the horizontal effects of constitutional rights online. This section takes the freedom of expression and the right to privacy as the focus of attention.

4.3.1 Is the Internet a ‘New Free Marketplace of Ideas’ Immune to Constitutional Review?

There are several reasons why freedom of speech deserves special status in constitutional democracies. First, freedom of speech essentially contributes to social progress and the moral and intellectual development of individuals.Footnote 72 Second, without citizens being free to express, deliberate, and accept different ideas, there is no democratic government.Footnote 73 Third, freedom of speech is indispensable for establishing the truth: in the famous words of Justice Oliver Wendell Holmes, truth should not be regulated, but determined in the ‘marketplace of ideas’.Footnote 74

Probably the world’s best-known free speech clause is embodied in the US First Amendment. It is firmly established in case law that the First Amendment does not permit the government to engage in viewpoint-based regulation of speech without a compelling governmental interest, such as averting a clear and present danger of imminent violence.Footnote 75 Compared with the US, where the First Amendment provides an almost unlimited right to freedom of speech, the German Basic Law and the Canadian Charter, despite a presumption in favour of freedom of speech, offer a mostly qualified right, subject to limitations on various grounds. These different approaches have already been transplanted online. German and Canadian case law indicates that this transplantation has followed the horizontality route, while the American case shows that the US has denied the horizontal application of constitutional rights in the online context, following its position in the offline world. Consider the following.

4.3.1.1 The American Approach: Ignoring Horizontality from LICRA to Gonzalez and Taamneh

In 2000, the High Court in Paris famously ruled against Yahoo. The dispute began when two human rights organisations (La Ligue Internationale Contre Le Racisme Et l’Antisemitisme (LICRA) and L’Union Des Etudiants Juifs De France) sued Yahoo in France for allowing its users to offer Nazi-related items for sale on Yahoo.com, as the sale, exchange, or display of Nazi-related materials or Third Reich memorabilia represented a hate crime outlawed by the French Penal Code.Footnote 76 The French Court ruled that Yahoo’s auction site violated the Penal Code and bluntly ordered Yahoo to preclude French citizens’ access to the auction site and other sites displaying Nazi-related material, and to warn its users to refrain from accessing content prohibited by French law so as to avoid legal sanctions.Footnote 77

In the US, however, the claim that everyone ought to have their rights protected against everyone, whether offline or online, is not appealing. This became apparent in the aftermath of the LICRA case. Because the dispute also involved jurisdictional issues, Yahoo asked the US District Court for the Northern District of California to intervene and declare that the French Court’s decision in LICRA was neither recognisable nor enforceable in the US.Footnote 78 The US Court specified that the lawsuit aimed to determine ‘whether a United States court may enforce the French order without running afoul of the First Amendment’.Footnote 79 It issued a declaratory judgment and ruled, inter alia, that ‘Yahoo has shown that the French order is valid under the laws of France, that it may be enforced with retroactive penalties, and that the ongoing possibility of its enforcement in the United States chills Yahoo’s First Amendment rights.’Footnote 80 In the view of the US Court, the French Court’s demand that Yahoo ‘take all necessary measures to dissuade and render impossible any access via Yahoo.com to the Nazi artifact auction service and any other site’ was too general and imprecise and amounted to censorship of protected speech.Footnote 81 The decision follows the US Supreme Court’s finding that a law may violate the First Amendment if it is so ‘overly broad’ that it reaches both protected and unprotected speech.

At this point, the basic considerations should be clear. In its landmark ruling on online freedom of expression in Reno v. ACLU, the Supreme Court declared unconstitutional two provisions of the Communications Decency Act that criminalised ‘obscene or indecent’ speech transmitted to children and the delivery of ‘patently offensive’ information to children.Footnote 82 Moreover, social platforms are exempted from liability for material posted by someone else on their sites, regardless of whether the posts violate the right to free speech. In such cases, Section 230 of the US Communications Decency Act exempts internet platforms from liability by expressly providing that they are not to be treated as ‘publishers or speakers’.Footnote 83 In addition, established case law confirms that the state action doctrine does not apply to social platforms: the US federal courts have repeatedly rejected the notion that private corporations providing services via the internet constitute a public forum for the purposes of the First Amendment. Take, for example, the cases of Dipp-Paz v. Facebook and Federal Agency of News v. Facebook.

In the Dipp-Paz case, the plaintiff asserted that Facebook violated his constitutional right to free speech by blocking his account.Footnote 84 However, the Court dismissed the claim, finding that the plaintiff did not show that Facebook ‘acted under the colour of a state “statute, ordinance, regulation, custom or usage”’.Footnote 85 The Court denied that Facebook is a public forum to which First Amendment requirements apply, stressing that ‘Facebook is a private corporation, and Plaintiff does not allege any facts suggesting that Facebook’s actions are attributable to the state.’Footnote 86 The District Court for the Northern District of California followed the same reasoning in the Federal Agency of News (FAN) case, in which Facebook blocked and removed the account of the Russian agency (FAN) allegedly involved in the 2016 US presidential elections.Footnote 87 The Court ruled that Facebook did not operate as a public forum and that its actions did not amount to state action under the public function test, on the ground that Facebook was neither a wilful participant in joint action with the government nor a conspirator with the government to violate any constitutional rights.Footnote 88

Some hope for change arose when the US Supreme Court was recently asked, in Gonzalez v. Google and Twitter v. Taamneh, to shift the foundations of internet law by narrowing or revoking the protection that Section 230 secures for online platforms.Footnote 89 Both cases were initiated by families of victims of ISIS terrorist attacks, who alleged that Twitter and Google-owned YouTube had helped ISIS carry out the attacks. Both opponents and proponents of Section 230 anxiously awaited the decisions. In the event, the Supreme Court said nothing about Section 230 in either ruling. In Twitter v. Taamneh, it ruled exclusively on the grounds of the Justice Against Sponsors of Terrorism Act: after considering whether the defendant’s conduct constituted aiding and abetting by knowingly providing substantial assistance, the Supreme Court found that the plaintiffs had failed to establish their case. In the Gonzalez case, the Court openly declined to address the application of Section 230 and, by unanimous vote, returned the case to the lower court to be reheard in light of the decision in Twitter v. Taamneh, implying again that there was no need for Section 230 to be addressed.

The Supreme Court’s approach in these decisions can be read in different ways. One may claim that the Court purposely followed a strategy of judicial minimalism to allow the voters and Congress to decide whether they want the internet to change. In constitutional cases of high complexity, which divide people on moral or other grounds, a minimalist path makes sense because democracy then urges the legislature to decide.Footnote 90 Any limitation of the freedom of speech, particularly in the context of new or changing circumstances, is such a case in the US. The narrow rulings could also suggest that the nine justices chose to postpone ruling on Section 230 because ‘other cases presenting different allegations and different records may lead to different conclusions’, as Justice Jackson observed, concurring, in Twitter v. Taamneh. This may be so, particularly in light of yet another possible reading: strictly speaking, in Gonzalez and Taamneh, the Supreme Court did not say that online platforms were protected under Section 230, but rather found no direct link between the terrorist attacks and the online posts and videos. Whichever of these readings holds promise from a constitutional rights perspective, the Supreme Court’s decision to avoid considering Section 230 alone represents a victory for online platforms, at least for the time being.

4.3.1.2 The German and Canadian Approaches: Horizontality Matters in Online Speech

Because social networks have taken over a significant portion of the public sphere, the constitutional dimension of their responsibility has attracted profound attention among scholars in Germany. An initiative calling for intermediary responsibility under the same standards as the state was not taken seriously, but the discussion on the horizontal effects of the freedom of speech and the connected right of platforms to delete content gained significance.Footnote 91

The approach insisting on the transplantation of the Drittwirkung doctrine into the digital sphere has obtained judicial recognition. In a decision delivered in 2021 in a case involving online hate speech, the German Federal Court of Justice took a more balanced approach than the French court in LICRA and resolved the issue on the basis of the indirect horizontal effect of constitutional rights.Footnote 92

The Court was faced with two cases involving Facebook’s decision to delete posts and partially block users’ accounts on the ground that hostile remarks about migrants amounted to hate speech. Unlike the French Court, which in LICRA paid no attention to the interests of Yahoo, the German Federal Court of Justice was more cautious. It balanced two constitutionally protected and conflicting rights: users’ freedom of expression, protected under Article 5, and Facebook’s occupational freedom, covered by Article 12 of the German Basic Law.Footnote 93 The Court found that, based on its right to occupational freedom, Facebook was, in principle, entitled to require users to respect specific communication standards and to block the accounts of users responsible for possible breaches. At the same time, it emphasised that Facebook’s right to occupational freedom is not unlimited. Facebook’s terms of business, including standards for deleting posts or blocking users for a breach of those standards, must, under Article 307 of the German Civil Code (the requirement of reasonable business terms), take into account the fundamental rights involved, in this case the freedom of expression.Footnote 94 The Court therefore ruled that Facebook’s business terms on deleting users’ posts and blocking accounts in case of violation were invalid. Before blocking users or deleting content amounting to hate speech, Facebook should have consulted the affected users, informed them about the deletion, and made redress opportunities available after the deletion.Footnote 95

In the end, the German Federal Court of Justice did not rule that Facebook, as a private company, had a constitutional obligation to respect freedom of expression. Rather, it used the Drittwirkung doctrine to prescribe how Facebook may delete posts and block users’ accounts for posting content with hate speech implications. The portion of the Court’s decision requiring information obligations and complaint mechanisms mirrors the solution embedded in the German Network Enforcement Act (NetzDG).Footnote 96 It also corresponds to the solutions adopted in the EU Digital Services Act, which requires online platforms to comply with obligations related to transparency, information, and complaint mechanisms in relation to the removal of illegal content and the protection of users’ fundamental rights online.Footnote 97

The situation in Canada, meanwhile, is intriguing. The 2020 United States–Mexico–Canada Agreement limits the civil liability of online platforms for third-party content and does not treat them as content providers.Footnote 98 However, as early as 2005, in the defamation context, the Canadian courts ruled that online platforms could, under certain circumstances, be liable for defamatory comments posted by third parties.Footnote 99 Yet a breakthrough has recently been announced thanks to the availability of the horizontality doctrine. I will explain this in more detail by tracing the decision in the ongoing lawsuit Cool World Technologies Inc. v. Twitter Inc.

Who governs online content in Canada is the major issue in this case, initiated against Twitter for refusing to promote posts relating to a Canadian documentary film.Footnote 100 The applicant (the publicity firm Cool World Technologies) alleged that Twitter wrongfully refused to sell advertising space on its social media platform, resulting in a violation of the applicant’s freedom of speech.Footnote 101 To support its claim, the applicant relied on the influence of constitutional values related to the freedom of speech in Canadian contract law. Because Twitter has such political significance in Canada as to represent the ‘town hall’, the applicant asserted that the public policy of Canada, ‘informed by Charter values related to freedom of expression, preclude Twitter from enforcing contract terms to exclude high value, non-harmful speech from its self-proclaimed town hall’.Footnote 102

Twitter, for its part, claimed that it had absolute and unfettered discretion to refuse any advertising posts and that its user account terms did not have the nature of a governing contract.Footnote 103 In addition, Twitter urged that the applicant was not entitled to allege a breach of Charter values, pointing out that no right exists for one private party to sue another private party for breach of the Charter or of analogous Charter values.Footnote 104

At the preliminary stage, the Court allowed the lawsuit to proceed, finding that the applicant could base its case on the effects that the constitutional guarantees of freedom of speech produce in contract law, given Twitter’s central role in the public life of Canada.Footnote 105 Whether Twitter’s unlimited power to control content on its platform in Canada will remain untouched by the end of the judicial proceedings remains to be seen. In any event, the indirect horizontal effect of constitutional values has opened the door to the judicial protection of freedom of speech on social platforms in Canada.

4.3.2 Horizontality in Service of Privacy-Related Rights in Online Contexts

Privacy-related rights are also frequently exposed to gross infringements in the digital environment. For the time being, what we know about the horizontal effect of privacy and privacy-related rights in the digital context comes mostly from the EU and the pioneering practice of the CJEU.

The constitutional nature of EU primary law and the constitutional value of rights in its legal order were emphasised quite some time ago.Footnote 106 The horizontal effects of certain equality-related rights were announced already in the Rome Treaty, adopted in 1957. This came to the surface when the CJEU ruled in 1976 that Article 119 of the Rome Treaty, ensuring the principle of equal pay for male and female workers for work of equal value, obliged not only the Member States to whom the provision was directed but also private employers.Footnote 107

In EU law, considerable attention has also been given to the horizontal effects of the EU Charter of Fundamental Rights. Although the Charter has the same legal effect as the EU Treaties, the horizontal effect of its provisions has provoked much debate. The CJEU has implicitly confirmed that, when applied within the scope of EU law, the Charter can create obligations for private parties if its provisions grant an individual a legal right and not merely a principle.Footnote 108 This finding seems reasonable, considering that the Charter has the status of EU primary law. The ultimate confirmation came in 2018 when, in four decisions, the CJEU established the direct horizontal effect of several Charter rights in disputes between private parties, specifically the right to non-discrimination, certain rights related to fair and just working conditions, and the right to an effective remedy and a fair trial.Footnote 109

However, even before its revolutionary 2018 offline case law, the CJEU had opened the door to the indirect horizontal application of fundamental rights online, in particular of the privacy-related rights embodied in Articles 7 and 8 of the Charter. In the Google Spain case, it articulated the right to be forgotten, relevant in the framework of the right to data protection and the freedom of expression and information.Footnote 110 The case involved the interpretation of the Data Protection Directive in a dispute between Google and the Spanish data protection agency and concerned the removal (delisting) of personal data available online. Among the several questions addressed to the CJEU, the one important for this discussion is whether, under the Directive, an individual who does not wish to make personal data available to internet users has the right to approach a search engine directly and ask it to delist personal information published on third parties’ web pages.Footnote 111

The CJEU resolved this question by famously concluding that if the activity of a search engine significantly affects the fundamental rights to privacy and the protection of personal data, the operator of the search engine ‘must ensure […] that the guarantees laid down by the directive may have full effect and that effective and complete protection of data subjects, in particular of their right to privacy […]’.Footnote 112 Although the said Directive did not explicitly create a right for an individual to request the data processor to remove their personal data, the Court stressed that the Directive had to be interpreted as if it included such a right because of the effects that the rights to privacy and the protection of personal data, guaranteed in Articles 7 and 8 of the Charter, produced on the Directive.Footnote 113 Consequently, the CJEU concluded that ‘the data subject may, in the light of his fundamental rights under Articles 7 and 8 of the Charter, request that the information in question no longer be made available to the general public by its inclusion in such a list of results’.Footnote 114

While confirming that the Charter produced an indirect horizontal effect through EU secondary legislation, the CJEU nevertheless recognised that, under the given circumstances, a legitimate general interest of the public in accessing information also existed. Balancing thus became necessary: ‘A fair balance should be sought in particular between that interest [the legitimate interest of the public in accessing information] and the data subject’s fundamental rights under Articles 7 and 8 of the Charter.’Footnote 115 The burden of balancing was placed on the search engines, which, according to the CJEU, qualified as personal data controllers within the meaning of the Data Protection Directive, each acting ‘within its responsibilities, powers, and capabilities’.Footnote 116 On this view, the CJEU acknowledged that these rights override, as a rule, the economic interest of the search engine operator and the general public’s interest in having access to that information.Footnote 117

Therefore, the ultimate outcome of the case encompasses the particular interest of the individual in having personal data delisted, the general interest of the public in accessing information, and the obligation of the search engine (e.g., Google) to balance the relevant rights when assessing users’ requests to delist personal data from search results. The broader point here is that the Google Spain case exemplifies how consequential the indirect horizontal application of the Charter can be in practice, as the Court made both the Charter and the said Directive applicable in disputes between private parties.Footnote 118 Moreover, through its readiness to vest the Charter’s privacy-related rights with horizontal effects, the CJEU reaffirmed another landmark decision on digital privacy, Schrems I – that the Data Protection Directive, since it regulates the processing of personal data and is liable to infringe fundamental freedoms, in particular the right to respect for private life, must always be interpreted in light of the Charter’s rights.Footnote 119

4.4 How the Horizontality Doctrine Helps Prevent Digital Threats to Democracy

The examples from German, Canadian, and CJEU constitutional jurisprudence show that the horizontal application of constitutional rights can accommodate both the concerns of those who object to any limitations on what is termed ‘internet governance’ and those of others who insist on protecting rights online in the same manner as they are protected offline. Horizontality can also increase the democratic legitimacy of the online world. I will now turn my attention to this conclusion.

There is good reason to believe in the potential of the internet to upgrade democracy. Online communications and deliberations could help develop an ideal, internet-facilitated public sphere, with free discourse that could legitimise democratic government in Habermas’s sense.Footnote 120 Illustrating this potential, the UN Human Rights Council concluded that ‘facilitating access to the internet for all individuals, with as little restriction to online content as possible, should be a priority for all States’.Footnote 121

However, there is also good reason for concern about digital threats to democracy, ranging from the weakening of election and referendum integrity (e.g., in the 2016 US elections and the Brexit referendum campaign) to disinformation or hate speech in the online public sphere and interference with opinion formation in social networks, especially when they produce extremism.Footnote 122 In the digital world, the claims that democracy implies only majority rule and that a democratic system should be highly responsive to popular will are more easily sold to the public than in the offline world. Consequently, social practices that shape preferences may call into question the legitimacy of decision-making processes. In particular, as Stephen Holmes suggests, individuals whose activities are left unconstrained exercise more significant influence than those responsible for making decisions.Footnote 123 In the presence of an unconstrained majority, garden-variety examples from both the offline and online worlds show that individual rights are the first to suffer in these circumstances.

Now, what a democratic constitution tends to achieve is to minimise the tension between democracy and individual rights, as people are prone to overstate this tension.Footnote 124 Moreover, apart from the usual argument that human rights are undemocratic, some took comfort from the observation that individual rights could be reconciled with democracy only if perceived as serving majorities.Footnote 125 Yet, on this account, Cass Sunstein essentially denies that democracy is an antagonist to rights. On the contrary, a democratic constitution, he claims, protects rights and thus constrains ‘what majorities can do to individuals or groups’.Footnote 126

Following Sunstein, it is not hard to conclude that no other strategy to tame the power of the online platforms brings the online world closer to democracy than the radiating effect of constitutional rights on the internet. The explanation of why democratic control matters here is almost self-evident. On the one hand, in a functional constitutional democracy, human rights are subject to effective protection, while sanctions for their violations are pre-conditioned by the government’s democratic legitimacy, the rule of law, transparency requirements, and accountability under constitutional rules. On the other hand, the idea that self-binding could be a strategy for the digital world was not initially comprehended. Those who maintained that some regulation was needed advocated the creation of a new system with no clear parallel in the offline world.Footnote 127 When internet governance emerged, it was established exclusively within the domain of private powers.

Thus, it turned out that those who make laws in the digital environment (a) determine who can and who cannot participate in online communication, (b) design protocols and procedures for violations of digital rules, and (c) are private entities, including online platforms and their self-regulating bodies. How they make choices relating, for example, to freedom of speech, the protection of privacy, or the collection of personal data, even when they do so under formally observed human rights law, as with Facebook’s Oversight Board, is left without democratic oversight. In other words, the power of the online platforms and their self-regulating bodies to design rules and control cyberspace is out of all proportion to their accountability, which is minimal if not non-existent. The digital space’s private order suffers from a democratic deficit, which, as interpreted by Haggart and Keller, exists because ‘private companies make the choices that set norms and directly influence the behavior of billions of users’.Footnote 128

Adjusting the constitutional system to the horizontal effects of constitutional rights is a reactive strategy to what happens online, but it legitimises the rules affecting individual rights and delivers results grounded in citizens’ perceptions of what ‘the correct outcome is’ whenever rights must be balanced against general interests. In principle, when applied horizontally, the right to privacy or freedom of speech does not automatically prevail over the right of the online platform to conduct business but requires balancing, a process legitimised in constitutional discourse whenever a court is asked to set aside a regulation of whatever kind on the grounds of its incompatibility with some constitutionally protected right.Footnote 129 On balance, it seems that under the horizontal model, no one loses; only democracy gains.

4.5 Conclusions

I have arrived at the end of a long trail of arguments offered to show why constitutional rights should be extended into the regime of internet governance, in particular to social platforms.

Social platforms have a proven track record as to their capacity to harm constitutional rights, a capacity which, according to the European Court of Human Rights, is even greater than that of the press.Footnote 130 Although the view that constitutional rights protect all private persons but oblige only the state has been abandoned in some jurisdictions under the doctrine of the horizontal effects of constitutional rights, and although this doctrine is a ready-made vehicle for making social platforms responsible for intrusions upon constitutional rights, the claim that everyone ought to have his or her rights protected against everyone is still disputable. To remind the reader: the division between public and private law still dominates constitutional systems across the globe, and it mirrors the position that individual rights impose obligations only on the state and not on private actors. On this view, rights do not regulate relations between private parties, whose autonomy should remain free from the compulsory regime created by constitutions. However, knowing that the state is not the only bearer of political and economic power and that individual rights are also threatened by private actors, including those operating in the digital world, the rhetoric must change. This step does not require the recognition of new rights but the recognition of new duty holders in relation to existing rights, such as social platforms. The examples from Germany, Canada, and the EU, jurisdictions traditionally open to the horizontal enforcement of constitutional rights, illustrate the promising potential of this doctrine to remedy human rights abuses that happen online.

5 How Do We Decide Whether Moving Online Makes a Difference?

5.1 Introduction

One of the best ways to comprehend what difference it makes to move online is to compare cases that differ in only one essential respect – one of them takes place online.

In some cases, moving online might make a difference to how the law works. Identifying those particular cases (as well as those where the digital factor does not make a difference) is a worthwhile theoretical task. However, there is another related question of similar significance: How can courts decide whether, in a particular case, moving online should make a difference to how the law works? This chapter presents a study exploring this question.

Becoming acquainted with this process can provide greater clarity on this relatively new phenomenon and stimulate new theoretical insights. Knowledge of the criteria relevant to such decisions can also influence legal practice by making judicial decisions more predictable and better substantiated. Rich and comprehensive reasoning is an attractive alternative to legal documents that offer only scarce citations of laws without clear logical links, leaving the conclusion to rest on the lawyer’s intuition.

5.2 Before We Begin

Legal theory proposes several ways to determine whether a new case requires different treatment. In this context, the assessment would require deciding whether a new fact requires a departure. One of the most famous tests, which requires taking into account only the material facts, was devised by Arthur L. Goodhart for the application of precedents.Footnote 1 Julius Stone explained this method as directing the interpreter to seek a reason that explains whether the later case provides grounds for the same outcome as the precedent.Footnote 2 However, the more prevalent approach to precedents is the rule model, sometimes described as the ratio decidendi, perceived as a legal rule that the judge used as necessary to justify the conclusion of the precedential judgment.Footnote 3 It is a widespread view, aptly worded by Karl N. Llewellyn, that we have to keep in mind the reason for the rule (accordingly, applying it where the reason extends and rethinking the rule where the reason becomes irrelevant or is found to be wrong).Footnote 4 The concept of purpose is also discussed in the application of precedents. Purpose can be understood as the goal of a particular judicial precedent – the reasoning on which the precedent was based – or as the purpose of relevant legal categories, such as particular individual rights or obligations.Footnote 5 The purpose of a judicial precedent can serve as a reason in the assessment of whether its ratio decidendi is applicable (according to the purpose that justifies it).Footnote 6 However, it can be hypothesised that evaluating the legal relevance of a factual difference is not chaotic beyond the limits of the aforementioned criteria, and a deeper examination reveals certain patterns, especially in the context of moving online.

A specific test to evaluate whether a digital factor affects the decision to apply a particular legal rule can be carried out only after there are general indications that the considered rule could be applicable. This chapter presents research on cases where a rule designed for legal relations in non-digital space can be found, and a new situation occurs that differs in that it takes place in a digital environment. However, before acknowledging that this is the only difference, a regular evaluation of the rule’s applicability must first be conducted.

5.2.1 Similar Aspects in Dissimilar Cases

The initial evaluation discussed here could be regarded as a typical process in everyday legal practice. However, it is worth remembering that these issues should not be confused with the novel legal challenges posed by the cases of focus here. Otherwise, the conventional tools of legal reasoning risk being neglected where they are actually appropriate.

On 19 April 2018, the Court of Appeal of Ireland delivered a judgment in Muwema v. Facebook Ireland Limited that serves as an important reminder in the context of the initial evaluation.Footnote 7 The case involved the standard of proof required to issue a disclosure order. The order was requested against the operator of a social network – Facebook – to disclose the identity of one of its users. The respondent opposed this request, arguing that the user in question would be exposed to arrest and ill-treatment at the hands of the authorities in Uganda whom he opposed. The court grounded its solution to this issue in earlier precedents, which not only lacked the element of digital space but were also established in rather different cases. The court mentioned Foley v. Sunday Newspapers Ltd (2005), concerning an injunction to prevent the publication by the defendant of an article that, the plaintiff considered, would if published place his right to life and/or right to bodily integrity at real and serious risk.Footnote 8 Another case was even more distant from the relevant circumstances, having been decided in the context of the surrender of a person to Poland under a European arrest warrant. However, all these cases involved the same sort of question: balancing one person’s right to life and bodily integrity against other concepts guarded by law (e.g., another person’s freedom of expression). The court relied on the aforementioned precedents and stated that ‘there is no reason why the court should not require the same standard of proof in relation to an assertion by the plaintiff seeking a Norwich Pharmacal order where the court is required to carry out a balancing exercise between competing constitutional rights’.
This common denominator can be formulated as follows – where different cases involve the balancing of the same individual rights, this connection can be sufficient to make these cases comparable both in the digital and physical context.

The most important takeaway from this example is that legal provisions might be relevant even if they do not contain a typically applicable legal rule. There are undoubtedly numerous examples confirming that, in legal practice, judicial precedents are used not only in cases where all material facts coincide but also where the cases are quite different. The Muwema case confirms that the explored class of cases cannot be held as an absolute exception to this tendency – going online does not cancel the applicability of earlier rulings.

5.2.2 Sometimes: Not Merely a Difference but a Wholly Different Affair

At the preparatory stage, the reasoning must also consider a probable scenario in which no applicable rule can be found in statutory law or judicial precedents. This type of crossroads leads down one of two paths: courts either (a) recognise that there is no regulation, or (b) somehow justify the application of an old rule that was not initially intended for such cases.

Naturally, courts can be inclined towards the second path, since the first can require the creation of new rules, which is within the power of parliaments alone, not the judiciary. However, the first path may not require this and can lead to an outcome that strongly depends on one of these principles: ‘everything that is not forbidden is allowed’ or ‘everything that is not allowed is forbidden’. The particular outcome might vary according to the type of legal issue. If it is a matter of criminal liability, the defendant is acquitted. Similarly, a new type of commercial business cannot be treated as requiring a prior licence (and be seen as illegal) if it conceptually differs from the licensed form of commercial activity. However, in a civil dispute between two private persons, different interests must be balanced – the court cannot refuse to resolve the dispute simply because there is no legal rule.

The last scenario usually applies in cases where courts might have to create a new rule. One case whose factual circumstances have no analogous counterparts in the non-online environment is Google LLC v. Oracle America, Inc.Footnote 9 It was resolved by the US Supreme Court in its judgment of 5 April 2021. The case concerned copyright infringement through the copying of a certain type of computer code: the copied lines were part of a user interface that allows programmers to access prewritten computer code through simple commands. This code therefore differs from many other types of code. The copied lines are part of a tool called an Application Programming Interface. The Court found applicable law in general provisions on copyright infringement, but it decided to distinguish the case from others concerning the copying of computer code – this unique type of code was treated differently. One reason was that it was widely used by programmers and, given how much programmers had invested in learning it, enforcement of copyright would limit the future creativity of new programs. The enforcement would therefore interfere with rather than further copyright’s basic creativity objectives – the ‘copyright supplies the economic incentive to [both] create and disseminate ideas […] and the reimplementation of a user interface allows creative new computer code to more easily enter the market’. This shows that the US Supreme Court took into consideration the social impact of the decision and the objectives of the law; most importantly, this case demonstrates that even though judicial precedents could be found that, at first sight, appeared applicable, the Court decided to distinguish the case and, in a way, create a new rule.

5.3 Purpose and Function of the Disputed Object or Actions

Among the criteria used in the analysed cases, the purpose of the disputed object was used very frequently. One such example can be found in Magyar Jeti Zrt v. Hungary of the European Court of Human Rights (ECtHR).Footnote 10 In this case, a violation of the European Convention on Human Rights (ECHR) was found after domestic authorities held an online news portal liable for posting a hyperlink leading to defamatory content. The domestic courts established the objective liability of the news portal. Although domestic practice exempted publishers from civil liability for the reproduction of statements made at press conferences (provided that they reported on a matter of public interest in an unbiased and objective manner, distinguished themselves from the source of the statement, and gave the person concerned an opportunity to comment on the statement), the domestic courts decided that in this case of posting a hyperlink to defamatory information the standard of objective liability applied, irrespective of whether the author or publisher acted in good or bad faith and in compliance with their journalistic duties and obligations.

The ECtHR noted that:

the very purpose of hyperlinks is, by directing to other pages and web resources, to allow Internet users to navigate to and from material in a network characterised by the availability of an immense amount of information. Hyperlinks contribute to the smooth operation of the Internet by making information accessible through linking it to each other. Hyperlinks, as a technique of reporting, are essentially different from traditional acts of publication in that, as a general rule, they merely direct users to content available elsewhere on the Internet. They do not present the linked statements to the audience or communicate its content, but only serve to call readers’ attention to the existence of material on another website.Footnote 11

Taking into account this and other reasons, the ECtHR decided that the domestic authorities violated the applicant’s freedom of expression by applying the standard of objective liability, which was not used in similar non-digital circumstances.

In the C-264/14 case,Footnote 12 the European Court of Justice (ECJ) was faced with an issue regarding the taxation of transactions in which traditional currency was exchanged for a virtual currency – Bitcoin. The ECJ had to decide whether, in this context, virtual currency could be characterised as a security or tangible property, or whether it fell into the same category as traditional currency. In this discussion, the purpose of the virtual currency was also recognised as a relevant factor. The ECJ noted that:

[t]ransactions involving non-traditional currencies, that is to say, currencies other than those that are legal tender in one or more countries, in so far as those currencies have been accepted by the parties to a transaction as an alternative to legal tender and have no purpose other than to be a means of payment, are financial transactions […] the ‘bitcoin’ virtual currency has no other purpose than to be a means of payment and that it is accepted for that purpose by certain operators.

This led the court to conclude that this virtual currency is neither a security conferring a property right nor a security of a comparable nature.

Case No. C-360/13 before the same court concerned the legality of making digital cached copies of copyrighted material on the internet.Footnote 13 Among other things, the court considered the application of the EU law provision according to which an act of reproduction is exempted from the reproduction right on condition that it is temporary, that it is transient or incidental, that it is an integral and essential part of a technological process, that its sole purpose is to enable either a transmission in a network between third parties by an intermediary or a lawful use of a work or other subject-matter, and that it has no independent economic significance. Although the purpose of a copy is a separate condition under this rule, the court took the purpose of the disputed copies into account when considering the other conditions. The judgment states:

As regards the other criterion mentioned in paragraph 39 above, an act of reproduction can be regarded as ‘incidental’ if it neither exists independently of, nor has a purpose independent of, the technological process of which it forms part […] the technological process in question wholly determines the purpose for which those copies are created and used, although, as is apparent from paragraph 34 above, that process can function, albeit less efficiently, without such copies being made. Secondly, it is apparent from the documents before the Court that internet users employing the technological process at issue in the main proceedings cannot create the cached copies outside of that process […] It follows that the cached copies neither exist independently of, nor have a purpose independent of, the technological process at issue in the main proceedings and must, for that reason, be regarded as ‘incidental’. [emphasis added]

Another example is case No. C-390/18,Footnote 14 which revolved around the domestic law applicable to a new sort of business operated by the Airbnb Ireland company. The issue was determined by answering the question: Can this new sort of economic activity be treated as the profession of a real estate agent? Doubts arose, among other things, from the fact that apart from the service of connecting hosts and guests via an electronic platform, Airbnb Ireland offers hosts a number of other services, such as a format for setting out the content of their offer, an optional photography service, an optional tool for estimating the rental price against market averages taken from that platform, and many other supplemental services. The ECJ took into account the purpose of those services – according to the court:

…it follows that an intermediation service such as the one provided by Airbnb Ireland cannot be regarded as forming an integral part of an overall service, the main component of which is the provision of accommodation. None of the other services […] above, taken together or in isolation, call into question that finding. On the contrary, such services are ancillary in nature, given that, for the hosts, they do not constitute an end in themselves, but rather a means of benefiting from the intermediation service provided by Airbnb Ireland or of offering accommodation services in the best conditions [emphasis added].

Case No. C-62/19 involved questions about the legality of a smartphone application, ‘STAR TAXI – driver’, used in the field of transportation services.Footnote 15 The dilemma in this case was similar to the one in the Airbnb Ireland case: Is the law governing a traditional sort of business applicable to a new type of business? Accordingly, the similarity of this application to traditional taxi services had to be assessed. The ECJ emphasised the purpose of the disputed services from more than one perspective. First, it noted that the direct contractors of the disputed Star Taxi App were legally authorised professional taxi drivers and that the purpose of the contracts with the drivers was:

…to provide the drivers with an IT application, called STAR TAXI – driver, a smartphone on which the application has been installed, and a SIM card including a limited amount of data, in exchange for a monthly subscription fee […] Star Taxi App does not exercise any control over the quality of the vehicles or their drivers, or over the drivers’ conduct.

The court recognised this as a material difference distinguishing the case from an earlier, similar dispute regarding the intermediation of a taxi transportation service – in that precedent, the purpose of the intermediation service was to connect, by means of a smartphone application and for remuneration, non-professional drivers using their own vehicles with persons wishing to make urban journeys. In the precedent, the ECJ decided that the service was to be classified as a ‘service in the field of transport’, but the Star Taxi App was classified differently. Furthermore, the ECJ concluded that the disputed (restrictive) decision of the domestic authorities related to:

…intermediation services, the purpose of which is to put persons wishing to make urban journeys in touch, by means of a smartphone application and in exchange for remuneration, with authorised taxi drivers, it does no more, in broadening the scope of the term ‘dispatching’ […] so as to encompass that type of service, than to extend to that information society service a pre-existing requirement for prior authorisation applicable to the activities of taxi reservation centres [emphasis added].

5.4 Actual Harm: Does Going Online Affect the Extent of Harm/Damages Made by the Disputed Act?

It is natural that a lawmaker chooses to forbid certain actions or inactions when they violate someone’s individual rights, cause damage to values protected by law, or raise the risk of such occurrences. The line of lawfulness separates permissible interference with someone’s interests from an illegal violation of their individual rights. There are multiple examples of this criterion, such as violating property rights by damaging private property, violating freedom of expression by imposing a fine for a published opinion, and so on. Illegal damage to values protected by law can sometimes involve no interference with an individual’s rights but instead violate what is often called the public interest. Examples of the latter are violations of environmental law, animal cruelty, tax evasion, and so on. But sometimes the law forbids actions that do none of these: the prohibitions on driving under the influence of alcohol or without a seatbelt, on disregarding safety requirements on building construction sites, and so on. Although such behaviour does not necessarily violate anyone’s individual rights or cause damage to values protected by law, it raises such risks; this sort of legal regulation can therefore be justifiable.

Although these considerations are usually found in the process of lawmaking in parliaments, it is hard to find strong reasons to neglect them in the judicial interpretation of the law. Naturally, they are most relevant when the law is unclear and there are no precedents in similar cases, or when distinguishing the cases is under consideration. Disputes with a digital element are no exception.

In Savva Terentyev v. Russia,Footnote 16 the ECtHR found a violation of Article 10 of the ECHR because the domestic authorities had convicted the applicant and imposed a suspended prison sentence on him for an offensive internet comment about police officers. Perhaps the most important reason the ECtHR relied on was that the domestic courts had not evaluated the actual consequences of the comment but had merely relied on its text:

…applicant posted his comment on an individual blog of his acquaintance, […] courts, however, do not appear to have ever attempted to assess whether Mr B.S.’s blog was generally highly visited, or to establish the actual number of users who had accessed that blog during the period when the applicant’s comment remained available […] the applicant’s comment had remained online for one month before the applicant, who found out the reasons for a criminal case against him, removed it […]. Although the access to the impugned statement had not been restricted, it drew seemingly very little public attention. Indeed, even a number of the applicant’s acquaintances remained unaware of it, and, it appears it was only the criminal prosecution of the applicant for his online publication that prompted the interest of the public towards his comment […]. It is also important to note that, at the time of the events under examination, the applicant does not appear to have been a well-known blogger or a popular user of social media […], let alone a public or influential figure […], which fact could have attracted public attention to his comment and thus have enhanced the potential impact of the impugned statements. 
In such circumstances the Court considers that the potential of the applicant’s comment to reach the public and thus to influence its opinion was very limited […] although the wording of the impugned statements was, indeed, offensive, insulting and virulent […], they cannot be seen as stirring up base emotions or embedded prejudices in an attempt to incite hatred or violence against the Russian police officers […] The Court furthermore discerns no other elements, either in the domestic courts’ decisions or in the Government’s submission, which would enable it to conclude that the applicant’s comment had the potential to provoke any violence with regard to the Russian police officers, and thus posed a clear and imminent danger which required the applicant’s criminal prosecution and conviction.

Beizaras and Levickas v. Lithuania is another example of this sort of reasoning.Footnote 17 In this case, the ECtHR found a violation of Article 14 of the Convention because the domestic authorities refused, without a prior effective investigation, to prosecute the authors of serious homophobic comments on Facebook. The applicants were two young men, one of whom had posted a photograph of the couple kissing on his Facebook page. The post went viral and received hundreds of virulent homophobic comments (containing, for example, calls to ‘castrate’, ‘kill’, and ‘burn’ the applicants).

At the applicants’ request, a non-governmental organisation upholding the rights of LGBTQ+ people (of which they were members) lodged a complaint with the prosecutor’s office against thirty-one of these comments, asking the prosecution service to open an investigation for incitement to homophobic hatred and violence. The ECtHR agreed with the aforementioned NGO’s position that ‘the number of comments could constitute a circumstance determining the gravity of the crime or the extent of the culprit’s criminal liability, but that it did not constitute an indispensable element of the crime under the above-mentioned provision of the Criminal Code’. The ECtHR went on to note that the comments were in the public sphere, observing the ‘potential reach of comments on the Internet, as well as the danger they may cause, especially when published on popular Internet websites’, and that ‘the photograph had “gone viral” online and received more than 800 comments’. It was mentioned that ‘the potential impact of the medium concerned is an important factor’. Besides these circumstances, which can be found in the digital space, the judgment also includes a notion that ‘the comments on the first applicant’s Facebook page […] affected the applicants’ psychological well-being and dignity, thus falling within the sphere of their private life’. Although the ECtHR mentioned that ‘the posting of even a single hateful comment, let alone a comment that such persons should be “killed,” on the first applicant’s Facebook page was sufficient to be taken seriously’, the reasoning as a whole does not eliminate the real possibility that the court’s final conclusion could have been different had the comments been less serious and their reach significantly lower.

The judgment by the Grand Chamber of the ECtHR of 20 January 2020 in Magyar Kétfarkú Kutya Párt v. Hungary concerns a curious example of a novel form of political campaigning enabled by internet technologies.Footnote 18 The ECtHR found that the Hungarian authorities violated the freedom of expression of the applicant (the political party Magyar Kétfarkú Kutya Párt (MKKP)) by holding it liable for making available a mobile application allowing voters to share anonymous photographs of their ballot papers. The events occurred in a referendum related to the European Union’s (EU’s) migration relocation plan. The referendum was initiated by the Hungarian government, and the applicant, an opposition party, campaigned against the referendum, encouraging voters to cast invalid ballots. One of the ways the MKKP conducted the campaign was by making available a mobile application called ‘Cast an invalid ballot’, which enabled users to upload and share with other users, anonymously, photographs of their ballots or a photograph of the activity they were engaged in instead of voting. The ECtHR found that the domestic authorities imposed penalties for this act without a clearly prescribed law. The law on which the courts relied established a principle that rights must be exercised in accordance with their purpose. The ECtHR stated that:

having regard to the particular importance of the foreseeability of the law when it comes to restricting the freedom of expression of a political party in the context of an election or a referendum […], the Court takes the view that the considerable uncertainty about the potential effects of the impugned legal provisions applied by the domestic authorities exceeded what is acceptable under Article 10 § 2 of the Convention.

Here, the ECtHR decided on a matter that at first sight might seem to concern only the text of the statute – liability for an offence can be deemed unprescribed by law when the statute does not expressly prohibit such behaviour. On the one hand, this can be said about the discussed case; on the other hand, the law could also be regarded as abstract, and the ECtHR admits that in other cases some level of vagueness in the law can be tolerated, as long as it is clarified in domestic judicial case law. With that in mind, the quotation shows that the context in which the freedom of expression was exercised involved political activities with the potential for significant societal impact. This impact might even concern the fundamental values of the democratic order. An indirect offline equivalent of the acts under consideration (the release of a mobile application) might have been some sort of physical display of analogous photographs, perhaps communicated through less modern channels. But in this case, the online factor was not decisive. The court followed a traditional path of legal reasoning, which happened to lead to an unregulated field; therefore, a new rule needed to be established by judicial precedent, drafted with regard to the extent of the damage caused by the interference with the applicant’s freedom of expression.

On 13 May 2014, the ECJ adopted a famous decision in Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González (case No. C-131/12),Footnote 19 regarding the right to be forgotten. This decision established that individuals have the right to request that search engines, such as Google, remove certain links from their search results if the information is outdated, irrelevant, or infringes their privacy rights. The decision includes the following statement: ‘the effect of the interference with those rights of the data subject is heightened on account of the important role played by the internet and search engines in modern society, which render the information contained in such a list of results ubiquitous’.

In a non-digital environment, this precedent contrasts with the legal status of newspapers’ paper archives. If the right-to-be-forgotten rule were universal, there would be very little reason not to enforce it against public libraries and other archives. However, it was constructed and recognised only when the traditional paper archives attained a counterpart in the digital environment. This counterpart – internet search engines – differs mostly in terms of the ease with which a non-professional domestic user can access the requested data, which means that interference with a person’s right to privacy in digital archives (i.e., search engines) is much greater than the same interference in non-digital archives, such as public libraries. This ECJ case is an example in which the extent of the harm/damage caused by the disputed act in a digital environment carried enormous significance compared with other judicial cases.

5.5 After We Finish
5.5.1 Established Theory

It is an understatement to say that some research has been conducted on legal reasoning or the impact of the internet: there is a plethora of academic publications tackling particular legal issues in the digital space. These include, among other topics, artificial intelligence (AI) liability,Footnote 20 data protection and privacy,Footnote 21 and copyright law.Footnote 22 However, very few address the issue of legal reasoning in this context. Naturally, a substantial number of findings on this topic are applicable to disputes in the field of modern internet technologies. It would be beyond the format of this publication to list all of the relevant insights on legal reasoning, but some of them are noteworthy because they focus on the matter of refreshing the law.

In legal disputes, the question of what difference (if any) going online makes can be particularly hard to answer when judicial precedents can be found and they appear to come from similar cases. Often most of the material facts are similar, except for the online factor. Legal literature provides some insights into when precedents must be followed and when they may be departed from.

Changes in the social, moral, and economic contexts are among the most widely mentioned reasons to overrule a judicial precedent.Footnote 23 Other, more detailed, justifications for a departure from precedents by overturning them include: subsequent changes or developments in the law that undermine the rationale of the earlier decision; a need to bring a decision into agreement with experience and with facts newly ascertained; the precedent can be shown to have become a detriment to coherence and consistency in the law; a mistake exists in the precedent; the precedent is unworkable or badly reasoned; the departure can be justified by the possible significance of intervening events or the possible impact of settled expectations;Footnote 24 and where there are serious and objective reasons and where ‘the new solution better reflects the ratio legis, changed circumstances or altered legal views’.Footnote 25 In summary, overruling a precedent is warranted when deciding the case as the earlier one was resolved suits the former but not the present situation, and when, in light of new circumstances, the empirical results reached in the precedent case cannot be achieved in the present case using the same measures.

Every time we consider a departure from a precedent, it is useful to evaluate the role of the judicial precedent’s prerequisites in the particular case – how strong could the legal expectations have been, could the departure have been predicted, and so on.Footnote 26 Chris Reed is one of the few authors who have analysed in depth the issue of the legal significance of the internet (as a factor requiring different legal treatment).Footnote 27 In his paper ‘Online and offline equivalence: aspiration and achievement’, Reed lays down insightful observations on when the online factor is sufficient for a different legal treatment and proposes a methodology for drafting rules regulating behaviour: (a) identifying the various interests that the rule needs to take into account; (b) analysing the ways in which the new rule is likely to affect those interests; and (c) evaluating the resultant balance of interests to decide if it is equivalent to the offline situation.Footnote 28 Although this interest-balancing test is undoubtedly valuable for legal reasoning, the judicial judgments adopted after Reed’s paper was published display a large set of supplementary modes of reasoning. In addition, the question of whether going online makes a difference is sometimes followed by the question of whether going online still makes a difference (or not). This issue is presented in the following section.

5.5.2 Differences That Vanish

Sometimes ‘going online’ makes a difference that is not permanent. Established precedents on a certain online phenomenon can become outdated if their underlying reasons are no longer applicable because of technological developments. An important question to keep in mind, therefore, is how judicial precedents on this matter function. As already mentioned, important changes might render precedents obsolete, which is partly illustrated by most of the cases presented in this chapter.

The presented cases show the significance of the purpose of the disputed object or actions and the significance of the actual extent of the harm caused by the act under consideration. But the theory on precedents provides an important caveat in many of these cases – transferring to a digital environment is not the last step. Going online often does make a difference, but once we are there, the development of modern technologies can continue to bring changes.

In the Pihl v. Sweden case,Footnote 29 the ECtHR found inadmissible an application concerning an alleged failure by the domestic authorities to hold a service provider responsible for the content of third-party comments on a blog. The applicant complained, among other things, that although the defamatory comment had been removed and the association had added a new post on the blog stating that the earlier post had been wrong and based on inaccurate information, it was still possible to find the old post and the comment on the internet via search engines. The court responded to this by pointing out that ‘the applicant is entitled to request that the search engines remove any such traces of the comment (see the ECJ judgment of 13 May 2014, Google Spain and Google, no. C-131/12, EU:C:2014:317)’. This leaves room to speculate about what the ECtHR’s view would have been if this case had arisen before the ECJ established the right to be forgotten. There cannot be a definitive answer, but if we strictly follow the rules of formal logic, this alternative hypothetical scenario should turn out differently (absent other decisive circumstances).

An even stronger example of the relevant developments is Ashcroft v. American Civil Liberties Union, 535 U.S. 564 (2002), decided by the US Supreme Court. In this case the constitutionality of the Child Online Protection Act (COPA) was reviewed. This act restricted the publication on the internet of indecent material that can be harmful to children. One of the reasons mentioned in this case was the Court of Appeals’ statement that prior community standards jurisprudence ‘has no applicability to the Internet and the Web’ because ‘Web publishers are currently without the ability to control the geographic scope of the recipients of their communications’. It was argued that ‘COPA is “unconstitutionally overbroad” because it requires Web publishers to shield some material behind age verification screens that could be displayed openly in many communities across the Nation if Web speakers were able to limit access to their sites on a geographic basis’. The Court noted that ‘given Internet speakers’ inability to control the geographic location of their audience, expecting them to bear the burden of controlling the recipients of their speech, as we did in Hamling and Sable, may be entirely too much to ask, and would potentially suppress an inordinate amount of expression’. In today’s world, however, there are numerous cases in which courts impose restrictions on internet content that confine it to certain geographic locations. Contemporary technology is far more capable of controlling the geographic scope of the recipients, which renders the Supreme Court’s reasoning on this point in Ashcroft v. American Civil Liberties Union largely obsolete.

5.6 The Implications

In legal practice, it is very common to follow a collection of universally recognised legal principles, such as the principle of proportionality, nullum crimen sine lege, pacta sunt servanda, in dubio pro reo, ignorantia legis neminem excusat, and many others. These are taken as given, and it is rarely asked who created those principles, and when. The usual first answer might be that they were crafted by ancient Roman legal scholars and/or forged in judicial practice. Should we not be encouraged to keep raising the questions: Are we done with drafting legal principles? Have they all already been discovered? Perhaps some of them already exist between the lines of judicial case law and the only step left is to give them the title of ‘principles’.

If we do not require the sanction of the lawmaker to confer the title ‘principle’, there is no more reliable source for discovering legal principles than the tendencies in judicial case law. With proper respect to David Hume’s guillotine, it cannot be denied that, when the statutory law is not amended, there is no better way to predict future judicial decisions than on the basis of past ones – to predict what will be done by recognising what is being done. Articulating those tendencies can help initiate a discussion about whether what the courts are doing should be amended. If we come to the conclusion that there is no such need, more extensive reasoning can better reflect the reasoning process that took place in the judge’s intuition, or maybe even help her arrive at a better solution overall. This consideration is reminiscent of a big-picture question concerning the goals of methods for legal reasoning overall: What are they? What should they be? If those goals include a more representative articulation of the judge’s thought process and help in achieving a more just and beneficial judgment, then the search for principles that are yet to be given the title of ‘principles’ should be carried out.

In this chapter, two main tendencies of judicial practice were found – the relevance of the purpose or function of the disputed object or actions, and the extent of harm. Judicial practice in the future has the tools to keep ‘polishing’ these and, perhaps, one day grant them the title of principles. The first has the potential to be worded as a principle that ‘similar actions or objects in different contexts receive a similar legal treatment if their purpose and function do not differ’. The second – ‘an act that is incompatible with the text of the law but causes no harm or any risk of it will be treated as lawful’. The latter might also be worded a contrario – ‘an act which causes harm or the risk of it will be treated as unlawful, even if its incompatibility with the text of the law is vague’. Of course, these preliminary wordings are imperfect, and legal professionals should be able to come up with significantly improved versions to describe the tendencies presented in this chapter.

5.7 Conclusions

The analysed cases illustrate that, when faced with the task of resolving whether going online makes a difference, the courts consider, among other things, (a) the purpose and function of the disputed object or actions, and (b) the extent of harm/damage caused by the disputed behaviour or act. These criteria might be helpful when analogous objects or actions can be found in a non-digital environment that is already regulated by clearly established legal rules. The evaluation of similarities between cases according to these criteria focuses on material facts and can be useful in concluding whether differences between similar cases in digital and non-digital environments are legally relevant or have no more significance than the hair colour of the litigants.

The purpose and function of the disputed objects or actions can be assessed in comparison with the purpose and function of analogous objects or actions in a non-digital environment. If they differ, the ‘online’ case may receive a different legal treatment from its non-digital counterpart; conversely, when they do not differ, the cases may receive the same legal treatment.

The extent of harm/damage caused by the disputed act influences the conclusion about whether going online makes a difference in an intuitive way, which can be typical in all sorts of legal disputes – when there is no discernible harm, the behaviour can be treated as lawful. Correspondingly, when an act balances on the boundaries of legality according to the text of the law, it can be recognised as unlawful when it causes considerable harm.

These findings also prompt a theoretical hypothesis: Can going online ever make a difference by itself? Perhaps a different legal treatment based solely on this circumstance is a rarity and can be found only when going online has other consequences (e.g., a different level of harm because illegal content is easier to access through the internet). However, since the present study was not quantitative, further research would be necessary to confirm this statement.

The pure evaluation of whether going online makes a difference in itself is only part of the process arising from disputes in the context of technological innovations. There are important considerations both before and after we begin this evaluation. At the start, it is important to determine whether this test is appropriate at all. Afterwards, it must not be forgotten that developments in modern technologies can keep making differences perpetually. Accordingly, enforcers of the law and legal regulators must remain alert so as not to miss important developments, which might require the system of the law to be supplemented through new precedents.

6 Some Reflections on the Non-coherence Theory of Digital Human Rights

6.1 Introduction

The non-coherence theory of digital human rights was introduced in a Cambridge University Press monograph in 2024,Footnote 1 and has thereafter been presented and discussed at various conferences and book launches. Academic discussion is at its inaugural stage. This chapter has three goals: first, to summarise the main elements of the theory with some illustrative examples; second, to discuss some points raised during oral discussions when the theory has been presented; and third, to point to a major doubt about the relativity of human rights as opposed to their universality, which seems to follow logically from the theory.

6.2 The Main Elements of Non-coherence Theory of Digital Human Rights

The non-coherence theory of digital human rights focuses partly upon the transposition of established human rights principles from the non-digital domain to the digital realm. The normative and practice landscapes of digital and non-digital human rights appear non-coherent; that is, the meaning and scope of well-established ideas, concepts, and the content of norms appear at variance to a certain degree. These are to some extent incompatible images. This observation raises significant ontological and epistemological concerns. The main concern – expressed in an abstract manner – is the mistaken assumption that ‘A’ means the same in both domains, whereas on closer inspection it appears that upon transposition the meaning of ‘A’ has undergone a transformation. ‘A’ stands here for any human right norm or principle that is encountered in both domains.

6.2.1 The Scenarios of Normative Transformation

The transfer of judicial ideas and norms is itself a phenomenon detectable wherever and whenever there is a confrontation of two or more (more or less) established socio-judicial environments.Footnote 2 Various scenarios are possible. The first concerns a situation where a certain normative-ideological regulatory framework is carried to another socio-geographic environment, but is countered by an existing framework of normative ideas and practices. This scenario can be labelled the socio-geographic transposition of law and can be illustrated by the process of imposing ‘colonial’ supremacy upon territories of indigenous or more established but different cultures. Saliha Belmessous has shown that the colonisation of indigenous people in the Americas, Africa, Australia, and New Zealand was countered not only by force, but also by ideas and understandings of a law that should apply between people. She uses the expression of indigenous legal opposition to the European legislative framework and its justifications.Footnote 3 From the perspective of legal scholarship, this opposition means contestation between ideas and procedures; that is, mechanisms for how the ideas are made to govern. Non-coherence theory assumes that there is no, or only minor, contestation, because contestation usually leads to the disappearance of one competing ideology. Yet such contestation may appear in time, when the normative power of the digital domain grows through self-normativity.

The second scenario concerns regime change, leading to the ideological transposition of law. Legal scholarship dominated by comparative methodology does not conclusively show the possibility of the full replacement of established legal ideologies and structures by an incompatible set of ideas, leading to an entirely different understanding of how the law should govern society.Footnote 4 The overthrow of the Russian tsarist regime by the Bolsheviks led to the enactment of socialist ideology, but the administrative and court systems were largely functionally retained. The French Revolution perhaps led, at least initially, to even broader chaos in terms of laws and procedures.Footnote 5 Regime change may go hand in hand with the adoption of legal norms from a regime falling into the same ideological paradigm.Footnote 6 The common characteristic of regime change is the generic variance of the legal framework before and after the change, where the variance can always be characterised by degree. Such a scenario would not make sense in the initial stages after the appearance of the digital domain, for the simple reason that there was no regime to be changed. But this does not exclude the possibility that a regime may have appeared over time, either in disguise or openly, which is then subject to the aspiration of regime change from the offline domain. Yet the reverse is also possible, namely the aspiration of the digital domain to trigger the legal regime change of the offline domain.

The third scenario concerns a situation where people and institutions enter unknown territory with their own discursive history, ideas, and values, but for one reason or another note that their regulatory and ideological equipment does not fully work. We can term this scenario the transposition of law into a normative carte blanche. This position might take for granted that the digital domain could have been characterised – in the early years – as such an empty sheet open to the transposition of normative regulation; that is, there were no countering normative forces, and the offline regulatory and conceptual framework should have appeared welcome in the online context. A deeper examination questions this predisposition on the basis of the inadequacy of protection thesis, which I have developed elsewhere to explain the element of novelty in human rights development.Footnote 7 This thesis claims that the development of new human rights is explainable through the recognition of the incapability of established human rights to provide adequate protection for certain groups in comparison with others, or that novel contemporary conditions challenge the capability of an established human right to provide sufficient protection of an important social value. The common element of both reasons leading to the articulation of new human rights according to the inadequacy of protection thesis is contestation. Applying this thesis to the internet raises the questions of whether it is possible to provide protection in the online domain comparable to the offline by using concepts entirely placed in and originating from the offline domain, and whether offline remedies can be effectively applied online.

6.2.2 Reflections on the Inadequacy of Protection Thesis

The inadequacy of protection thesis, when applied to the internet, is offline-morphic and practice-dependent. Offline-morphic means the view from the offline domain, which is analogous to an animal rights lawyer projecting her understandings onto other species and concluding that they exhibit behavioural patterns comparable to those of humans.Footnote 8

The inadequacy of protection thesis offers a practical lens through which to examine non-coherence theory. This thesis posits a dichotomy between online and offline protection, where frameworks deemed adequate in the offline world may prove inadequate online. This offline-morphic perspective, analogous to projecting human understanding onto non-human contexts, highlights the subjective nature of assessing the adequacy of protection in the digital environment. The practice-dependent aspect of this thesis emphasises empirical observation. The presence of contestation – detailed arguments regarding the compatibility of offline human rights norms with online realities – indicates a non-coherent image. The theory distinguishes mere disagreement from actual contestation, requiring a critical mass of actors recognising the inadequacy of the protection. This contestation is manifested in various ways, including conflicting categorisations of legal transposition processes (e.g., the direct/indirect and receptive/unreceptive models). This contestation also reveals the evolving and dynamic nature of the non-coherent image of digital human rights, as the discrepancy between online and offline frameworks is not a static phenomenon but is subject to change over time.

6.2.3 The Dual Nature Thesis

Discussions about the non-coherence theory of digital human rights have raised the question of how to position this theory in the framework of existing approaches to legal transformations. What is the novel aspect of this theory – when the topic of transfers has been known for as long as different cultures have collided – can be, and has been, asked. My response operates through the dual nature of human rights thesis. It holds that, unlike legal transformation approaches in which one domain has to yield to another, the digital and non-digital domains coexist in parallel, using similar human rights words that carry meanings which vary between the domains. The non-coherent image of human rights in the digital domain does not appear immediately but develops over time. It becomes evident once attempts to find in the digital domain an almost analogous image of the non-digital human rights system have proven illusory. The theory of non-coherence can explain this through the multi-stakeholderist veil thesis and the weakness of the sameness of rights doctrine.

6.2.4 The Multi-stakeholderist Veil Thesis

Multi-stakeholderism of internet governance, as formulated by Joanna Kulesza, is ‘a distributed policymaking model based on the voluntary cooperation of key actors, usually identified as: states, business, and civil society, operating “in their respective roles” through “rough consensus and running code”’.Footnote 9 For more than one party to connect, genuine willingness from all sides is needed; otherwise the logical sequence of rhetorical functions comes to a standstill. In the digital human rights landscape, multi-stakeholderism can be viewed as an attempt to incrementally transpose the offline human rights framework into the digital domain. As an end in itself, multi-stakeholderism carries primarily non-regulatory aspirations, serving, for example, as an advocacy tool for civil society or as a means of ‘whitewashing’ the aspirations of stakeholders who are more equal than others.Footnote 10

This thesis is inspired by the various transposition scenarios described earlier. The rise of the internet in the late 1990s and early 2000s can be characterised by the expansion of technological horizons, which overshadowed the absence of internet-specific regulation, thereby fostering the assumption that offline law as we know it could easily be transposed to the digital domain. This was the period of transposition into a normative carte blanche. The recognition of the unclear image led to ideological transposition through the multi-stakeholderism scenario, in which co-creation and dialogue were viewed as the path to a clear image, while still relying on the conceptual and procedural building blocks of the offline domain. In parallel, the online community construed its own understandings of human rights in the digital domain, either implicitly or explicitly intending to conceptually counter the offline framework. By the time the limitations of the multi-stakeholder approach became evident, the only scenario left was rivalry between competing regulatory frameworks; that is, between the horizontal and vertical governance models. This condition of regulatory uncertainty and proliferation has an axiomatic effect on human rights in the digital domain, which can be viewed as an element of non-coherence. It must suffice to say at this point that multi-stakeholderism as a holistic approach applied to reconcile the non-digital and digital domains has failed. Non-coherence has become a permanent condition of the coexistence of human rights in the digital and non-digital domains. Multi-stakeholderism is no longer needed.

6.2.5 About the Sameness of Rights Approach

With some imagination, it is possible to cite the claim of sameness from the Geneva Principles of 2003.Footnote 11 The Principles are the result of an effort for which there are no travaux préparatoires; they lack argumentation in support of their claims and thus represent a clear example of an instrument of faith. They reaffirm ‘the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms […]’.Footnote 12 One can say that such a statement is not a full acknowledgement of the idea of sameness, but it conveys the impression that human rights are not domain-dependent. It then took almost ten years to reach an expressis verbis statement of sameness in an international instrument: the United Nations (UN) Human Rights Council resolution of 29 June 2012, which ‘[a]ffirms that the same rights that people have offline must also be protected online, […]’.Footnote 13 Sometimes the idea of sameness does not appear as a direct statement but is deducible from the context. Take, for example, the UN Human Rights Committee’s General Comment No. 34 on Article 19 of the International Covenant on Civil and Political Rights, from 2011.Footnote 14 The comment acknowledges in paragraph 15 the substantial change in communication practices around the world owing to information and communication technologies, and yet it is implicitly based on accepting a full transposition of human rights principles from offline to online.

It may be sufficient here to point to this idea’s ubiquity on the basis of the commendable work of others. For example, Dror-Shpoliansky and Shany propose the normative equivalency paradigm, having shown how, in a series of resolutions, the Human Rights Committee and the General Assembly ‘have reiterated the notion that human rights apply in the digital “online world” as they apply in the “offline world”’.Footnote 15 Normative equivalency, as they assert, contains a major flaw: it considers ‘digital technology as a new tool or arena for exercising offline rights or governmental powers, as opposed to a conceptualization of digital space as giving rise to new human condition and governance domain’.Footnote 16 This view represents non-coherence between the digital and non-digital domains in a narrow sense: it does not encompass well-established rights from the offline domain, such as the right to privacy, and covers only the so-called second-generation rights.

I have raised two main objections against the doctrine of the sameness of rights: the first from relativity and the second from practicality. The objection from relativity concerns the basic or fundamental building blocks of human rights. Certain ideas are difficult to justify through explication, and even then an element of belief remains relevant. Take, for example, the idea of the universality of human rights. Is this a premise that can be verified through rational argumentation, or is it the result of discursive, political, and judicial practices? Its importance is evident from asking what would happen to the suggestion of the omnipotence of human rights if the idea were given up. The sameness idea carries similar connecting and rhetorical force. In today’s human rights discourses, policies, judicial judgments, and civil society approaches, in other words, everywhere, any answer to the question of whether human rights offline and online are the same has far-reaching consequences. An affirmative answer means coherence of content and leads to the subsequent question of the breadth of non-coherence in enforcement instruments. However, if the negation of the sameness idea could claim validity as a general principle, then we can say that human rights online do not exist in a fashion similar to that which we are accustomed to in the offline domain. This leads us to say that the theory of non-coherence is one of relativity.

The objection from practicality is articulated by the advocacy group Article 19, which translated the reaffirmation that ‘the same rights that people have offline must also be protected online’ into questions of practicality. It says that unless greater priority is placed on addressing violations and on changes to law and practice, this statement of sameness is no more ‘than words just written on paper’.Footnote 17 The non-coherence theory explains this image by suggesting that the sameness of rights remains at the highest level of generality, expressed in words only, and starts to evaporate once we enter the sphere of more detailed content and enforcement. To use a figurative expression, the two domains continue along parallel roads with occasional crossings.

6.2.6 Double Nature of Privacy

To give a robust example of the coexistence of non-coherent concepts of rights, it can be shown that privacy offline exhibits different features from privacy online. The contemporary formulation of the right to privacy is widely credited to Warren and Brandeis, who in 1890 described it as an interest ‘to be let alone’.Footnote 18 Online privacy exhibits the opposite, namely an interest not to be let alone. I have explained these opposites through the fundamental differences in how one enters the non-digital and digital domains. We enter the non-digital domain involuntarily. This is the existentialist viewpoint, Jean-Paul Sartre’s view that we are thrown into freedom yet do not choose to be free.Footnote 19 It can be said that the right to privacy in its original manifestation is part of the existentialist ‘package’, which we may or may not choose to utilise. Entry into the digital domain is different and carries both voluntary and involuntary elements. Voluntary entry results from the conscious choice to use the digital domain for social communication, obtaining information, interacting with official government structures, or accessing vital services from private providers, such as the financial or medical sectors. Involuntary entry happens regardless of one’s position on the digital domain and is connected to e-state and blockchain technologies, which are widely used to maintain official records of events and persons. A few more sentences are in order to express these differences in more detail.

Voluntary entry is related to the interest in exhibiting oneself in the digital domain in the most suitable manner. It is not accompanied by any positive validation of one’s image but is driven by what one wishes to put forward and, at the same time, to hide. Such a digital image has been termed the digital identity. Michalkiewicz-Kadziela and Milczarek have noted that ‘human identity on the internet functions on completely different terms than outside it. The internet gives unlimited possibilities of creating subjective elements of one’s own identity, but it also allows a change of objective elements’.Footnote 20 Policy organisations have warned that digital identity does not fit into the framework of human rights and that it entails a threat leading to totalitarianism.Footnote 21

Involuntary entry into the digital domain is caused by the growing reliance on e-solutions by public and private authorities offering essential services. It is practically impossible to avoid being recorded by the various digital surveillance mechanisms that feed Big Data, or to prevent financial and medical institutions from recording our personal information. Here it is impossible to assert the interest in being let alone; the only available claim relates to the accuracy of the data, through the newly emerged right to erasure.Footnote 22 Once someone enters the blockchain system involuntarily, their interest in being let alone becomes meaningless. As Sullivan puts it eloquently, this is because of ‘considerable challenges in establishing not only that “I am who I say I am” but in establishing “I am not who the identity register says I am”’.Footnote 23

At this point there is a noticeable prima facie variance between the meanings of privacy offline and online. The variance is of such a degree that it would be more appropriate to refrain from using the expression ‘privacy’ for the online environment. Non-coherence takes centre stage here. Because the digital and non-digital domains are interconnected and one leads parallel lives in both, the right to privacy has meaning in the digital reality only insofar as it enables the realisation of the interest to be let alone in the non-digital reality.

6.2.7 Concerns Related to Time

Among epistemological concerns, the one related to time is of particular significance. In the non-digital domain, time can be regarded as a quality assurance for knowledge development and, as such, is considered to have a positive effect upon human rights adjudication practice, because it allows for reflection and discourse. In the digital domain, however, the element of time becomes a negative factor, because the time window for resolving online human rights conflicts is immensely narrow. Liu articulates this aspect as follows: ‘Instead of valuing enduring or permanent truths (the temporal version of “high” knowledge), the digital age is preoccupied with information of much shorter durations – time spans plunging down to the diurnal rhythm of blog posts, the microseconds of a data packet’s TTL (time to live) […]’.Footnote 24

Elsewhere, I have shown that shortness of time weakens one of the basic instruments for offline rights conflict resolution: balancing.Footnote 25 Within any framework of judicial balancing, the courts have as much time as they need to assess the concrete importance of the conflicting rights and to establish a proper balance. On the internet, by contrast, that time is simply not available: taking it would lead only to the ineffective, retrospective recognition that rights were violated and a proper balance was not achieved. Effective balancing online depends on swift decisions about whether certain published information overrides someone else’s concern about an intrusion into his or her private domain or other rights. Time here is a different concern from the understanding that, with its passage, the balance of interests may reverse, for example when processing personal data.Footnote 26 The shortness of time for reasoned decision-making can ultimately lead to the rejection of the rationality of online balancing altogether.

A related doubt about whether balancing as we know it from the offline domain is at all possible in the digital domain concerns transparency. In online balancing by private entities, the transparency deficit of portals has been raised as a concern. The matter has been noted for more than a decade and there have been repeated calls to address it, but nothing has changed. For instance, the Council of Europe has stated that ‘Internet service providers should put in place appropriate, clear, open and efficient procedures to respond within reasonable time limits to complaints from internet users alleging breaches of the principles included in the foregoing provisions […]’.Footnote 27 The UN calls upon ‘all States to consider formulating, through transparent and inclusive processes with all stakeholders, and adopting national internet-related public policies that have the objective of universal access and enjoyment of human rights at their core’.Footnote 28 Similarly, La Rue encourages ‘corporations to establish clear and unambiguous terms of service in line with international human rights norms and principles […]’.Footnote 29

The transparency deficit leads to a phenomenon that can be termed the legitimacy deficit in balancing conflicting rights online, and there is a growing academic discourse on the topic. Enguerrand Marique and Yseult Marique search for the source of legitimacy when private platforms set rules, monitor compliance with those rules, and, finally, apply sanctions.Footnote 30 Noting the unilateral character of the ‘contractual’ horizontal relationship, they highlight that online portals offer a ‘take it or leave it’ option without any possibility of negotiation, which leaves the legitimacy of private content assessment undefined. They suggest that new forms of legitimacy may develop based on public–private or hybrid interactions in setting and enforcing the regulatory order of the internet.Footnote 31 The legitimacy deficit is also apparent in mathematical solutions with practice-affecting aspirations proposed by a multi-country scholarly community. For instance, a simplified model by Zufall, Kimura, and Peng uses the elements of privacy of information and nature of information, with the relevance of the intrusion affected by time.Footnote 32 Such a solution isolates and polarises just one element from each of the conflicting rights to be balanced: privacy of information is only one aspect of the level of intrusion into privacy, and nature of information is only one aspect relevant to determining the scope of freedom of expression. This algorithmic simplicity was to be expected and possibly characterises many of the solutions in practical use today.
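The criticism of algorithmic simplicity can be made concrete with a toy sketch. The following is a purely illustrative construction, not the actual formula of Zufall, Kimura, and Peng: the function name, the [0, 1] score ranges, and the exponential time-decay with its half-life parameter are all assumptions introduced here solely to show how such a reductive scheme operates.

```python
# Purely illustrative toy -- NOT the Zufall/Kimura/Peng model itself.
# It mimics the structure described in the text: one scalar for "privacy
# of information", one for "nature of information", with the relevance of
# the intrusion decaying over time.

import math


def balance(privacy_of_information: float,
            nature_of_information: float,
            days_since_publication: float,
            half_life_days: float = 365.0) -> str:
    """Return which interest prevails in this simplified scheme.

    All inputs are hypothetical scores in [0, 1]; the exponential decay
    and the half-life value are assumptions made for illustration only.
    """
    # The weight of the intrusion halves every `half_life_days`.
    decay = math.exp(-math.log(2) * days_since_publication / half_life_days)
    privacy_weight = privacy_of_information * decay
    expression_weight = nature_of_information
    if privacy_weight > expression_weight:
        return "privacy prevails"
    return "expression prevails"


# Fresh, highly private information outweighs low public-interest speech...
print(balance(0.9, 0.2, days_since_publication=10))    # privacy prevails
# ...but after years the same intrusion no longer overrides expression.
print(balance(0.9, 0.2, days_since_publication=2000))  # expression prevails
```

The toy makes the text's point visible: each right is collapsed into a single scalar, so every contextual aspect of intrusion and expression that does not fit the chosen variables is simply invisible to the outcome.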

These brief observations convey the perception that balancing online stands at significant variance from balancing offline owing to legitimacy and transparency deficits. This is a clear phenomenon of non-coherence.

6.2.8 Some Additional Theses

Space available for this chapter also permits me to give a brief introduction to some additional theses connected to the non-coherence theory.

While exploring the notion of digital dignity, I have formulated the relativisation of core values thesis. It is a logical consequence of the recognition that in the digital domain there are no absolute human rights (the space allocated to this chapter allows me to state this only as an assertion), which means that everything becomes relative. This is because the basic norm which, offline, stands at the base of the structure of human rights norms and principles itself becomes relative. Online dignity does not possess the feature of absoluteness that we are accustomed to accepting offline, where it is the core value written into the preambles of various human rights instruments. In the digital domain, the key notion of dignity is measured against ‘competing’ rights and values, which no longer corresponds to the place of dignity in the offline human rights realm. The weight of dignity in the offline and online domains thus becomes non-coherent. This lies in the simple recognition that while dignity is a constant offline, it is relative online.

A novel aspect of the transfer of human rights images and ideas can be described through the ‘comfort in non-coherence’ thesis. This refers to the simple perception that we prefer to operate in a realm that is at least to some extent familiar to us, rather than confront a reality that is completely new, unfamiliar, and unpredictable. In such a reality, human rights ideas and concepts would not be meaningful, and the only reason for using human rights language would be to approach the new reality with tools familiar to us from the physical realm. The high degree of variance observed between the non-digital and digital domains has become part of mainstream academic discourse. For instance, Kettemann, Peukert, and Spiecker write in their 2022 edited volume on law in the digital space: ‘The book reveals that the number of specifically digital and global phenomena, which raise unique regulatory questions, is increasing, and that the applicable digital law is characterised by private ordering, transnational standardisation, economisation but at the same time by persistent and possibly deepening legal fragmentation’.Footnote 33 Magarin lists structural concerns connected to limited free speech online that need regulatory attention, among them inequalities of access, power relationships among governments, private speech intermediaries and internet users, and the ways in which the internet’s architecture complicates effective regulation.Footnote 34 All these observations concern non-coherence between the two domains under review. The comfort thesis says that we prefer to live in an environment of double standards and fragmentation rather than in an environment where one sphere is devoid of any normativity, be it rational or non-rational.

6.3 Concluding Remarks
6.3.1 Conclusion by the Artificial Intelligence ChatGPT

For the first time in my writing experience, I asked ChatGPT to summarise my writing. I submitted the manuscript of the non-coherence theory of digital human rights, which is around 250 pages, and asked for a short one-page summary. The resulting text follows and, for clarity, is given in quotation marks. The one curiosity of the text, which I endorse for the purposes of knowledge sharing, is its choice of a single reference. Why ChatGPT chose Merleau-Ponty remains unknown.

‘The theory’s core premise is that the transposition of offline human rights law and practice to the online environment often results in a distorted, “non-coherent” image. This non-coherence stems from inherent discrepancies between the offline and online contexts, rendering the simple transferability of established norms problematic. The resulting lack of clarity regarding the application, realization, and enforcement of human rights online necessitates a re-evaluation of existing frameworks.

‘The non-coherence theory emerges from the observation that established human rights principles, their implementation, related obligations, and remedies for violations, as understood in the “offline” world, lack clear analogues in the digital sphere. The central question becomes: Do offline human rights rules and principles maintain their integrity online, or do they undergo modifications, distortions, or even cease to apply altogether? This uncertainty, termed “non-coherence”, arises when the transposition from the offline to the online context results in significant differences. While the terms “variance” and “distortion” are used interchangeably in this context, the critical distinction lies in the degree of difference. Minor variances may be acceptable, but substantial divergences indicate a distortion that may render the offline framework inapplicable or limit its scope in the digital context. This core concept challenges the assumption of the straightforward transposability of human rights from the offline to the online world. The theory anticipates potential objections, particularly regarding its theoretical underpinnings and the empirical evidence supporting its claims.

‘These models differ fundamentally in their approach: vertical governance relies on the state’s coercive power, while horizontal governance involves a partnership of diverse actors. Non-coherence theory is not concerned with the content of rights defined by these models but rather with the processes by which rights are established, enforced, and remedied. The simultaneous operation of these differing models illustrates the inherent unpredictability and inconsistency in the digital domain, undermining the notion of a unified and coherent human rights framework online. The coexistence and interaction between these models highlight the complex and often conflicting nature of online regulation.

‘Non-coherence theory also explores the ontological and epistemological dimensions of digital human rights. Ontologically, the theory questions the very existence of human rights in the digital space, posing the fundamental question: Do human rights exist intrinsically for sentient beings entering the virtual realm or are they conferred by some external authority? The theory argues that even under ideal discursive conditions, the transposition of human rights concepts from the offline to the online domain leads to a loss of meaning and clarity, akin to Merleau-Ponty’s concept of the subjective experience of colour.Footnote 35 The theory’s focus is on how the offline image of human rights is reflected online, implying that even distorted reflections are meaningful for understanding the non-coherence. The inherent limitations of the practice-independent approach raise the question of the continued validity of applying non-positivist categories such as “human rights” to the digital domain, particularly given their possible transformation into merely “legal rights”. This analysis requires an acknowledgement of the inherent subjectivity and context-dependence of human rights interpretations, and the theory’s emphasis on how different frameworks interact within the digital environment highlights this further.

‘Epistemologically, the theory examines how knowledge about human rights is created, articulated, and enforced in the digital sphere. The theory identifies a polarisation of views regarding the coherence of online and offline knowledge creation processes. It argues that even if the means of knowledge creation differ, this does not necessarily invalidate the claim of normative validity, highlighting the role of processes such as communicative rationality and the “wisdom of the crowd” in generating credible knowledge. However, the theory suggests a relativisation of truth in digital human rights. What is considered truthful in one domain may not be considered truthful in the other, highlighting the paradigm-dependent nature of truth claims and questioning the universal applicability of non-positivist concepts. The theory concludes that the non-coherence of digital human rights is not simply a hypothesis but an inherent feature of the digital environment, requiring a reconceptualisation of human rights frameworks in this space. Further research would need to elaborate on the specifics of how competing online and offline normative systems interact and the implications for the future of digital human rights.’

6.3.2 Conclusion by the Author

The central conclusion of non-coherence theory is relativity. It shows how the division into absolute and relative human rights evaporates in the digital domain, where the human rights landscape becomes uni-layered. Basic norms that stand at the base of the human rights architecture in the physical world, such as dignity, become relativised and subject to balancing against other principles of a lower degree of generality. Balancing, which constrains and directs human rights adjudication in the physical world, assumes a completely different nature owing to transparency and legitimacy deficits. Relativity deprives human rights of their universal power. Competing systems of human rights can claim equal legitimacy, which leads to the validation of conflicting contents of concrete human rights bearing similar labels.

Yet there can be some commonalities that apply in any domain where human rights can exist, independently of their concrete content. These commonalities would have a universal nature, which would at least partially restore the idea of universality to the human rights landscape. Building on the analogy with quantum mechanics, it can be asked whether certain human rights exist in a shared state: whether, for instance, the right to privacy exists in a shared state with the right to freedom of expression, or whether the right to reputation (leaving aside the issue of whether such a self-standing right can be justified) exists in a shared state with the right of access to information. If the premise of such a shared state is correct, it would follow that a change in the scope and meaning of one right (particle) in this shared state leads to a simultaneous, opposite change in the other right (particle). To give a concrete example, if the scope of the right to privacy broadens, then the scope of the right to freedom of expression has to narrow; such a correlation would be assumed. To give an abstract example from the other side, when we can measure the broadening of the right to freedom of expression, say in an online portal, we know without additional measurement that the scope of the right to privacy is narrowing. For human rights, the matter concerns, first and foremost, the relative weight of rights against one another. A new thesis can be advanced, which I would term the equilibrium of relative rights thesis: for rights existing in a shared state, when the relative weight of one right increases, the relative weight of the other has to decrease by the respective amount. Validation of the universality of the equilibrium of relative rights thesis could partly restore universality to the human rights discourse.
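The equilibrium of relative rights thesis can be stated compactly. The notation below is introduced purely for exposition and is not drawn from the source text:

```latex
% Two rights in a shared state, with relative weights w_1 and w_2.
% The thesis asserts conservation of total relative weight:
w_1 + w_2 = C \quad (C \text{ constant}),
\qquad\text{hence}\qquad
\Delta w_1 = -\,\Delta w_2 .
% Example: if the relative weight of privacy (w_1) grows by \delta,
% the relative weight of freedom of expression (w_2) shrinks by \delta.
```

On this reading, the thesis is a conservation claim: measuring a change in one entangled right fixes, without further measurement, the change in the other.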

7 Internet Addiction as a Human Rights Issue

7.1 Introduction

This chapter examines internet addiction as a threat related to the use of the internet, which in a sense is not a new problem. Research shows that throughout history there have been recurring waves of concern over pathological media use; we have witnessed afflictions such as radio addiction, television (TV) addiction, internet addiction, and smartphone addiction. Concern about these afflictions especially centres on young people, who are presumed to be more reckless and irresponsible in their media use but also more vulnerable to its effects. Up to a third of US parents and half of US teens believe they spend too much time online, and half of teens and more than one in four parents in the US describe themselves as addicted to their smartphone. In other words, even though clinical diagnoses are exceptionally rare, internet addiction as a cultural phenomenon appears to be widespread. What is new is that the advent of information and communication technologies (ICTs) marked a shift from mass media to a ‘personal communication society’. In particular, smartphones render us permanently online and connected.Footnote 1 This suggests that internet addiction may be more acute than any other technology-related media addiction and may raise new challenges for law and policymakers.

The phenomenon of internet addiction is global. Whereas abnormal internet-related behaviours previously appeared to concern mainly Asian countries, it is now established that internet addiction is an emerging global public health issue with significant societal costs.Footnote 2 For example, one study showed that 31 per cent of undergraduate students in northern Tanzania were addicted to the internet; internet addiction was associated with using the internet at college, lengthy daily periods of internet use, and social networking.Footnote 3

Although the problem of internet addiction is perceived as important, it has so far attracted limited attention from legal scholars. Law and policy responses to internet addiction have also been somewhat underdeveloped. Partly, this can be related to the lack of robust medical evidence of internet addiction as a diagnosis and a health issue that requires the attention of law and policymakers. There is, however, a rich and growing body of medical research into internet addiction issues.Footnote 4 This chapter will rely on medical research in order to identify concrete human rights issues that need to be addressed by law.

7.2 Interplay of Medical and Legal Research

From the medical perspective, issues of addiction to the internet or to content accessed via the internet are examined as an issue of problematic usage of the internet (PUI). This umbrella term used by mental health researchers encompasses all potentially problematic internet-related behaviours, including those relating to gaming, gambling, buying, pornography viewing, social networking, cyberbullying, and cyberchondria among others. PUI may have mental and physical health consequences.Footnote 5 A common element across all expressions of PUI is the excessive time spent online, which contributes to significant functional impairment with negative consequences for the daily life of the subjects involved and their relatives.Footnote 6 In medical terms, PUI covers conditions considered to be behavioural addictions and also a spectrum of conditions going beyond behavioural addictions and covering problematic, hazardous, and harmful usage.Footnote 7 In this chapter, we are interested in behavioural addictions and have therefore chosen the term ‘internet addiction’.

In 1995, Ivan Goldberg coined the term ‘Internet Addiction Disorder’ and formulated a list of symptoms drawn from the characteristics of pathological gambling. The term remains widely used to indicate internet-related psychopathology. Around the same time (1995–9), Mark Griffiths proposed a conceptual distinction between addictions on the internet (affecting individuals who simply use the internet as a medium to engage in a specific behaviour that could also be conducted offline) and addictions to the internet (where individuals are addicted primarily to content generated solely inside the World Wide Web).Footnote 8 Many studies have used the umbrella term ‘Internet Addiction’. Other terms, such as ‘Smartphone Addiction’, are also used but have been criticised for their lack of specificity, as users of mobile devices tend rather to show problematic use of certain apps.Footnote 9

For medical doctors, the term ‘addiction to the internet’ may be problematic as it is only specific behavioural patterns (‘addictions’), such as ‘internet gaming disorder’, that have been officially recognised as requiring further study (i.e., having the potential to be recognised as a disorder) rather than addiction to the internet itself.Footnote 10

From the legal perspective, there are more problematic aspects related to the usage of the internet in addition to those characterised by addiction. Cyberbullying, hate crime, the digital divide, and privacy threats are just some examples. This chapter, however, focuses solely on the aspect of addiction, viewing it primarily as a threat to health. Both aspects of addiction to the internet and addiction to specific content on the internet are covered. In human rights terms, this threat can affect the right to health, the right not to be discriminated against, and rights of specific vulnerable groups, such as the rights of the child. In certain contexts, this threat also affects consumer rights to the safety of products and services. Issues of internet addiction may also be relevant in the context of exercising other human rights, such as work or education-related human rights.

Robust mental health research data indicating that persons suffering from internet addiction actually suffer from a medical condition and need healthcare could have implications both for how law and policymakers should respond to the needs of such persons and for how courts should take the characteristics of such individuals into account in applying the law to them.

Currently, however, there are limits to what psychiatry and neuroscience can prove. Individuals with indications of PUI do not necessarily meet the criteria for a psychiatric disorder. There is little evidence to suggest that they experience a high degree of behavioural dyscontrol and thus need to be exempted from certain requirements of the law. Where neuroscience suggests sexual internet addiction in an individual, it is not yet clear whether the diagnosis of a form of sexual disorder (e.g., paedophilia), regardless of the medium used, would be more appropriate than a PUI diagnosis. Although PUI has been the subject of growing research interest, this diagnosis has not been widely accepted among psychiatrists. PUI is not yet a valid diagnostic entity. Clear, operationalised criteria for PUI are still lacking. Given the lack of consensus regarding PUI, courts and forensic experts should examine claims of internet addiction critically. Specific attention should be paid to the issue of whether the internet is the source of an individual’s problems, or whether the root cause lies elsewhere.Footnote 11

Domestic courts have so far been cautious in accepting internet addiction arguments. In the US legal system, arguments based on the concept of internet addiction have been an issue in both civil and criminal litigation. In civil cases, individuals have brought lawsuits against video game developers owing to harm allegedly caused by their products (specifically, in-game ‘loot boxes’ – virtual items containing random rewards that can be purchased or earned). Notably, one complaint used emerging neuroscientific data to argue that adolescents are especially prone to addiction and risk-taking behaviour (such as gambling). Additionally, at least one plaintiff has contended that his internet addiction was a disability entitling him to protection under the Americans with Disabilities Act. In criminal cases, internet addiction arguments have been raised as a potential defence or mitigating factor to charges of sexual crimes, particularly those involving online child pornography or the sexual solicitation of minors. In only one case, however, has the concept of internet addiction overcome a Daubert challenge (i.e., a hearing on the scientific validity and admissibility of expert testimony).Footnote 12

Hence, this chapter looks into the issue of how medical research can inform legal research, and also how legal research into internet addiction should proceed in the absence of evidence from medical research.

7.3 Types of Internet Addiction

Medical research provides sufficient evidence to identify the following subtypes of internet addiction. Internet-related gaming disorder is the excessive use of the internet for playing online games, associated with a loss of control over gaming behaviour and the prioritisation of gaming over other important everyday activities. Internet-related gambling disorder is a pattern of persistent or recurrent gambling behaviour, online or offline, which results in impaired control over gambling, increased priority given to gambling over other interests and activities, and the continuation or escalation of gambling despite negative consequences. Internet-related buying or shopping disorder involves extreme preoccupation with and craving for buying or shopping and irresistible urges to possess consumer goods. Cyberchondria is an individual’s excessive or repeated online searching for medical information, driven by a need to alleviate distress or anxiety regarding their health. Cyberpornography addiction is the excessively time-consuming, distressing, and difficult-to-resist use of the internet to view or interact with pornographic material. Cyberbullying is the use of digital technology to seek to harm, intimidate, or coerce other people online. Internet social media/forum addiction is an excessive preoccupation with social media use, together with increased amounts of time spent on social media, resulting in detrimental consequences for an individual’s functioning and especially affecting the young. Limited scientific evidence exists for presenting excessive web surfing, mail checking, cyberhoarding, and cyberstalking as specific internet addiction subtypes.Footnote 13 Other subtypes, such as binge-watching (watching multiple episodes of a TV programme in one sitting or in rapid succession), have been suggested by researchers,Footnote 14 but have not so far acquired universal acceptance.

Researchers have noted that one type of internet addiction can lead to another, notably where video games are problematically designed with elements such as loot boxes, so that excessive online gaming can lead to excessive online gambling. The phenomenon of overlapping internet addictions is also noticeable in fantasy sports, where gaming often includes gambling and is accompanied by additional time spent online seeking relevant information or using internet forums for further discussions concerning the game.

Fantasy sports consist of selecting an online team of real-world players based on the rules of the particular fantasy sport. Participants are then awarded points based on the real-world statistics of those players. Gambling can play a major role in fantasy sports, especially in the form of Daily Fantasy Sports (DFS). DFS is an accelerated version of fantasy sports in which participants can bet on the performance of their players and win a proportion of their opponents’ entry fees. DFS participants have been shown to have psychological and emotional characteristics similar to those of their traditional fantasy sports counterparts but have been associated with increased problem gambling behaviours. Gambling is common within traditional fantasy sports too, with one US study reporting that 43.5 per cent of participants gambled on fantasy sports. Fantasy sports participants seek out online information to help them research and participate in fantasy sports, increasing their time spent online and the potential for developing an internet addiction. Participants using forums for additional discussions and information on fantasy sports are likely to be more avid users and may be more susceptible to developing an internet addiction owing to the increased amount of internet-based content consumed.Footnote 15

More generally, researchers have found that internet addiction often co-occurs with other psychiatric disorders, such as anxiety disorders, mood disorders, obsessive-compulsive and related disorders, substance-related and addictive disorders, disruptive impulse-control and conduct disorders, personality disorders, and sleep/wake disorders. The relationship between internet addiction and other psychiatric disorders is mutual. People who suffer from internet addiction are at higher risk of developing anxiety, depression or other psychiatric conditions, and the occurrence of internet addiction in people with psychiatric problems is higher than in the general population.Footnote 16

7.4 Negative Consequences of Internet Addiction

Researchers have suggested that the problems caused by excessive internet use can be divided into four areas: mental health problems, behavioural safety/accidents, physical health problems, and malfunctioning. In the area of mental health, depression, suicidal ideation, impulsiveness, attention deficit hyperactivity disorder (ADHD), smoking, and drinking are all increased in association with internet addiction. In the area of physical health, eye disease, musculoskeletal disorders, sleep problems, and sudden death are reported to be related to excessive internet use. In the area of behavioural safety/accidents, increases in aggression and violence, accidents, cyberbullying, and risky sexual behaviours have been reported. In the area of malfunctioning, learning, intellectual abilities, and functioning in the family are all reported to decline.Footnote 17 These problems, or threats, can also be divided into the broader categories of individual and social threats.

Some examples of the findings on threats from specific studies into various aspects of internet addiction are presented here.

Studies of adolescents affected by internet addiction found increased rates of obesity to be linked with diminished physical activity or exercise, poor sleep quality, an irregular diet, eating snacks instead of regular meals, and inadequate or insufficient sleep. Importantly, obesity itself adversely affects self‐esteem, further contributing to symptoms associated with anxiety and depression.Footnote 18

As the internet is very often accessed via smartphones, findings related to the risk of smartphone addiction may also be relevant. Research shows that problematic smartphone use is moderately but robustly associated with both anxiety and depression. Problematic smartphone use can be viewed as an indicator of symptoms of anxiety and depression and a possible manifestation of these mental health problems in modern society. Furthermore, problematic smartphone use as a maladaptive coping behaviour may contribute by worsening these symptoms.Footnote 19

As the excessive use of social media is one subtype of internet addiction, research findings related to social media use risks are also relevant. The use of multiple social media platforms was found to be independently associated with symptoms of depression and anxiety, even when controlling for overall time spent in their use.Footnote 20

In regard to social threats, it is noteworthy that one study conducted in Italy shows that adolescents who reported the problematic use of electronic media communication (mobile phone use and social media use) or internet addiction showed an increased risk of cyber-victimisation.Footnote 21

In terms of the use of digital tools in the workplace, an exploratory qualitative study of professionals working in various industries in France showed that the vast majority of survey participants (93.4 per cent) either agreed or strongly agreed that digital tools are highly addictive. The findings of the study confirm that the use of digital tools at work is widespread. Heavy users of digital tools indicated that these tools provided fertile ground for the development of professional exhaustion and burnout.Footnote 22

More generally, addictive internet use can result in a goal conflict (i.e., a condition where two or more goals compete against each other and cause conflict in a person’s mind) between entertainment and obligations, and negatively affect the social functioning of a person. In the case of binge-watching, for instance, the goal of getting enough sleep is offset by the goal of watching ‘one more episode’ before bedtime. The available evidence suggests that excessive binge-watching might lead to the impairment of daily functioning and disturbance in the quality of social life and sleep cycles.Footnote 23

In addition to the problems described here, it should be kept in mind that internet addiction also carries economic costs. The costs and burden of PUI accrue not only to the individuals suffering from the condition but also to others, such as the individual’s family or society as a whole, notably through the impact on healthcare and other social security systems. The extent of such costs is currently unclear, owing to the breadth of conditions related to digital technology use and insufficient data, but researchers are working on this issue and have proposed methodological requirements for assessing the cost burden of PUI, which includes but is not limited to internet addiction.Footnote 24

7.5 Specific Features of Addiction Online
7.5.1 Lack of Internet Specificity?

One issue that arises in addressing the need for a law and policy response to internet addiction is whether a specific response is needed at all. The problem can be viewed as part of a broader phenomenon of addiction to technologies. Alternatively, it can be understood as inaccurately named: it is not the medium of the internet but its problematic content (online gaming, gambling, pornography) that a person is addicted to, which would call for law and policy measures regarding specific content, but not the internet in general.

Research suggests that internet addiction can be seen as one manifestation of so-called technopanics, where every new technology is met with fear about its pathological usage. As the internet is often accessed via smartphones, the analysis of internet addiction could benefit from the analysis of smartphone addiction. As smartphone addiction has similarities with TV addiction (most visible when both are discussed as medical conditions),Footnote 25 the problem of internet addiction can usefully be compared to TV addiction. As the excessive use of a mobile phone (irrespective of whether its internet features are used) has similarities with the excessive use of the internet (specifically, the paradox of technology facilitating communication yet eroding social well-being),Footnote 26 the analysis of mobile phone addiction could inform the analysis of internet addiction. In practical terms, law and policy responses to other technological addictions can have an indirect effect on internet addiction, which would suggest the need for a general rather than a specific approach. The same would be true when addressing specific problematic content: as online gambling is similar to gambling generally, and viewing online pornography is similar to viewing pornography via other media, addressing gambling addiction and pornography addiction would also tackle their online variants.

Research also shows, however, that certain features of the internet create favourable conditions for developing an addiction.

7.5.2 Features of the Internet Which Contribute to Addiction

The internet as a medium has features that reinforce problematic behaviour potentially leading to addiction. For example, features of internet-related gaming include the ease of accessing a game via portable or handheld devices, the possibility of engaging in competitions with other gamers, the perception of oneself in a manner that is more rewarding and less impacted by real-world issues, and the specific genres, designs, and content of the games that are played, including the possibility of financial rewards or achieving winning status. In online gambling, new types of gambling (in addition to the traditional games such as poker, casino games, and sports betting) are available online, and online gambling is more accessible than gambling in the real world. Online shopping can be considered more addictive than shopping in the real world because e-commerce provides a range of potential addictive features, such as immediate product availability, anonymity, easy access, and affordability.Footnote 27 Pornography viewing has expanded substantially in the era of the internet, likely because of specific aspects of the internet (relating to anonymity, affordability, and availability).Footnote 28

7.5.3 Increased Vulnerability Online

Everybody using the internet is subject to increased vulnerability. Two aspects are particularly important. First, every internet user is vulnerable to internet platforms, especially large ones, because of the power the platforms often have in defining, interpreting, and applying the rules governing online matters. Second, everybody is exposed to technological vulnerabilities owing to the large amounts of personal data available online, the lack of clarity regarding the algorithms used by platforms, the possibility of targeting individuals based on their profile, and so on. These two features make everybody vulnerable and in need of protection against platforms that are interested in gaining attention for their products and design them accordingly, without necessarily warning users of the danger of excessive use or addiction.

At the same time, some groups of internet users are particularly vulnerable (such as children) and require special protection, which suggests that the law and policy response should combine measures aimed at the general protection of all internet users and specific protection of particular vulnerable groups.

7.5.4 Specific Vulnerable Groups

Only a minority of internet users develop an addiction, but belonging to certain groups increases the likelihood of doing so.

European studies show that significant proportions of the population suffer from internet addiction, particularly in young age groups. Internet addiction is associated with male gender, younger age, mental health problems, and unfavourable social conditions.Footnote 29 Research confirms the importance of gender, age, mental health issues, personality, and neuroscientific traits, as well as social factors, as risk factors for developing an internet addiction.

In terms of gender, the male gender is generally the bigger risk factor, as shown, for example, in a study on the relationship between internet addiction and obesity.Footnote 30 In some cases, however, females are more vulnerable: a study on cyber-victimisation and internet addiction found that thirteen-year-old girls reported the highest proportions of both cyber-victimisation and problematic use of electronic media communication.Footnote 31

Severe forms of internet addiction are more likely in young people.Footnote 32 Older persons are not immune to internet addiction, however, although the risk factors for older and younger people differ. Internet addiction in older individuals is strongly associated with obsessive-compulsive disorder and generalised anxiety disorder, whereas internet addiction in the young is strongest among those with ADHD and social anxiety disorder.Footnote 33

In terms of mental health problems, research has confirmed that the occurrence of internet addiction in people with psychiatric problems is higher than in the general population. Substance use, early alcohol use, and smoking all strongly indicate a high risk of developing internet addiction. Abuse of multiple substances is even more strongly associated with internet addiction. There is a high rate of internet addiction in people suffering from ADHD. Obsessive-compulsive personality traits such as rigidity, perfectionism, dependence on others, and harm avoidance are common in problematic internet users, suggesting that an obsessive temperament may also predispose an individual to developing an internet addiction.Footnote 34

More generally, personality traits can be important. Research findings show that depression seems to be the strongest risk factor for internet addiction, while the trait of optimism has a protective effect. In addition, the personality factor of extraversion was found to be negatively associated with internet addiction.Footnote 35

Research suggests that neuroscientific traits can also be important. Recent studies suggest a neurobiological component to internet addiction: anatomical and functional changes, genetic polymorphisms, and impairment of neurotransmitter systems have been found in the brains of individuals with an internet addiction.Footnote 36

Social factors, such as family life and parent–child relationships, play a strong role in the development of persistent behaviours (e.g., gaming to escape parental conflict).Footnote 37 Maternal depression is related to children’s internet addiction; the relationship is strongest where the mother is educated to graduate level or above, the child is male, and the child’s academic performance is normal or better.Footnote 38

Furthermore, we should not overlook the legal requirement to be connected and the prevailing culture of connectivity as risks that may contribute to the development of internet addiction.

This outline of risk factors enables us to identify vulnerable groups characterised by one or more such factors. Law and policymakers should adopt evidence-based measures targeted at the most vulnerable groups, primarily children, especially where other risk factors co-occur.

7.5.5 Specific Challenges: Conflicting Findings regarding the Harms and Benefits of Internet Usage

One specific challenge in trying to prevent and fight internet addiction is related to conflicting findings regarding the overall assessment of the harms and benefits of internet usage. The internet paradox – that the technology provides conditions for better communication but using it leads to worsening social interactions – is well known. Some researchers have suggested that online social interaction and ICT use are likely to undermine social bonds as well as decrease social capital at both individual and societal levels, arguing that the more time one spends online, the less one can spend socialising with others. On the other hand, others have found that digital media use is associated with an increase in interpersonal communication and community participation, and in turn may provide both bridging and bonding of social capital.Footnote 39 The internet provides an important source of information and a precious channel for communication. More generally, it is considered to be a medium where human rights are exercised. Therefore, notwithstanding the risks associated with the excessive use of the internet, limiting access to the internet is frowned upon, as it limits opportunities to exercise human rights.

These differing viewpoints on the harms and benefits of the internet result, in practice, in divergent attitudes among government agencies, depending on their functions and their relative interest in public health or economic value.Footnote 40 This makes it difficult to promote a consistent approach to, for example, limiting the supply of online products containing addictive features or limiting screen time in education.

Another challenge related to the use of online media is the lack of knowledge about how to use the internet safely. Knowledge gaps in the domain of digital media use may be more severe than gaps relating to traditional media, given that meaningful internet use requires new skill sets, such as refined search strategies and critical approaches to evaluating content credibility, which are less associated with using traditional media.Footnote 41 Recognising the addictive design of online products, applying protective strategies, and knowing where to turn for professional help when everything else fails are skills that need to be taught in order to diminish the threat of internet addiction.

7.6 Law and Policy Responses and Solutions

Law and policy responses and solutions vary across legal systems and are not necessarily very elaborate or comprehensive. Four approaches are presented here by way of example.

7.6.1 Public Health Policies: Comprehensive Approach

Given the predominance of health aspects in addressing PUI, one could expect health policy responses to internet addiction. At present, there have been limited public health interventions with regard to many forms of PUI, with the partial exception of online gambling disorder and gaming disorder. Public health approaches have been attempted in countries particularly affected by online gaming disorder, including South Korea, China, Japan, and Germany. Broadly in line with existing approaches to substance use disorders, these interventions have largely focused on (a) limiting supply, (b) reducing demand, and (c) reducing harms.Footnote 42

Researchers note that the substance use disorders approach is suitable because internet addiction has similar clinical characteristics to, for example, an alcohol use disorder.

The public health model of alcoholism, which comprises three risk factors (host, agent, and environment), can also be applied to excessive (or addictive) internet use. This differs from the medical model, which explains a disease mainly on the basis of individual vulnerability. Under the public health model, public health policy interventions are necessary to reduce all the relevant internet addiction risk factors.

Furthermore, people using the internet can be grouped as regular users, users at the initial stage of the addiction process, and an addicted group experiencing clear addiction symptoms with various impairments in a wide range of areas. Similar to substance addiction, internet addiction should also be understood as a problem on a continuum. Therefore, public health policy interventions need to be adjusted to the severity of the internet addiction. Such a model has already been applied to alcohol and gambling.

Preventative education, in combination with the public health promotion campaigns proposed for gambling addiction, can be proposed as a primary intervention targeting the general population. Secondary interventions would implement screening and intensive prevention programmes using screening tools. Educating the individuals most likely to encounter others with internet addiction problems, so that they can help them cope with their addiction, is another form of secondary intervention. Tertiary interventions involve constructing an infrastructure to deliver evidence-based treatment programmes, fostering addiction experts, and building a community-based aftercare system for relapse prevention.

The Korean public health policy response to internet addiction has largely followed the public health model described here, as it addresses various risk factors and is adjusted to different levels of the severity of internet use.

In response to the internet addiction problem, Korea established the Third Internet Addiction Prevention & Resolution Comprehensive Plan in 2016, led by the Ministry of Science, ICT, and Future Planning in cooperation with the Ministry of Culture, Sports, and Tourism (governing game contents), and the Department of Education, the Ministry of Gender Equality and Family, and the Ministry of Health and Welfare (governing youth protection services). Measures applied include education and group counselling at schools aimed at prevention, screening students and providing counselling for those identified as at risk, and psychiatric treatment for excessive gamers. Help is provided inter alia via Internet Addiction Response Centres in sixteen metropolitan cities, internet addiction counsellors in the forty-five Education District Offices, Excessive Gaming Healing Centres, the Rescue School (a short-term residential treatment programme), community-based Addiction Management Centres, and social service institutions. One of the institutional challenges is to ensure better collaboration and avoid overlap of activities conducted by different agencies. Owing to conflicting perspectives in regard to the public health risks and economic value of the digital media industry, there are different attitudes to internet addiction-related problems even in government agencies.Footnote 43

The European Union (EU) is also considering a comprehensive public health policy approach to the problem of internet addiction. The public health policy options that researchers have offered for the consideration of the European Parliament (EP) include health promotion, strengthening the health services available to internet users who engage in harmful use, and creating units to address the harmful use of the internet within various EU Commission Directorates-General and Member State ministries. Information campaigns can create awareness and help users to develop skills that prevent harm. Public recognition of internet addiction as a disorder is believed to encourage people to seek help. Strengthening health services means providing support for health professionals so that they are able to recognise cases of harmful internet use and offer clinical services. If internet addiction were to be recognised as a mental disorder, this would (a) enhance the psychological and pharmacological treatment options (‘digital detox’) available to individuals affected by the condition, (b) facilitate reimbursement by insurance companies, and (c) increase the screening that could be undertaken for children with preliminary symptoms of internet addiction.Footnote 44

7.6.2 Consumer Protection: Product Safety Standards

Loot boxes have been identified as an example of a problematic element of computer game design that is akin to gambling and can potentially lead to an online gambling addiction. Some game designs resemble the addictive conditioning designs known, for example, from slot machines.

Loot boxes are subject to general national legislation on contracts and consumer protection. In addition, several national authorities have investigated the conditions under which loot boxes may qualify as gambling. With the exception of Belgium, the Netherlands, and Slovakia, no EU country has come to the conclusion that loot boxes fulfil their national gambling criteria. As a result, only those countries have so far taken or are considering taking regulatory steps to ban loot boxes. In those countries that have already banned them, this has led to the withdrawal of the loot box feature from games in these markets. In other countries, less invasive action has been taken including awareness raising and developing guidelines for parents and players.

As gambling belongs to the sphere of national competence, it has been suggested that the EU could approach the issue of loot boxes and problematic game design more generally from the consumer protection perspective.Footnote 45

Qualifying loot boxes as gambling may also fail where loot boxes are just one element of chance in an otherwise skill-based game. This was the verdict of the Dutch Administrative Jurisdiction Division in overturning a fine imposed on the developers of the FIFA game,Footnote 46 who had previously been fined €10 million for failing to obtain a gambling licence to operate ‘Ultimate Team’ packs, which gave players who paid for the packs access to an unspecified group of in-game players that they could add to their squad or trade onwards.Footnote 47

The practice of marking products as suitable for a certain age based, inter alia, on their loot box features is in line with the consumer protection rationale. Germany introduced such marking as of 2023.Footnote 48

In 2023, the EP adopted a resolution on consumer protection in online video games in line with a European single market approach.Footnote 49 The resolution calls for gamers to be better protected from addiction and other manipulative practices and stresses that children’s games must take into account their age, rights, and vulnerabilities. At the same time, it notes the enormous potential for growth and innovation in the video game sector and its need of support.Footnote 50 The resolution contains specific proposals on how to enhance consumer protection in this field. It notes that aggressive commercial practices used in the manipulative design of some games are already prohibited under EU law (para. 15). It calls on the Commission to assess whether the current consumer law framework is sufficient to address all the consumer law issues raised by loot boxes and in-game purchases and, if not, to present the necessary legislative proposals. ‘These proposals should assess whether an obligation to disable in-game payments and loot box mechanisms by default or a ban on paid loot boxes should be proposed to protect minors, avoid the fragmentation of the single market and ensure that consumers benefit from the same level of protection, no matter their place of residence’ (para. 27). It suggests an ex-ante child impact assessment might be required from providers of online video games (para. 28). The resolution also calls on the Commission and the Member States’ consumer protection authorities to ensure that consumer law is fully respected and enforced (para. 48).

7.6.3 Vulnerable Groups Approach: Rights of the Child

Taking a consumer protection perspective and protecting children from the harm of loot boxes and other problematic video game design features is one example of the vulnerable group protection approach. Other examples of activities aimed at protecting children from internet addiction abound.

At the global level, the World Health Organization has defined standards for healthy sedentary screen time for children of various age groups. Under its recent guidelines, screen time is not recommended for children under one year of age. For one-year-olds, sedentary screen time (such as watching TV or videos or playing computer games) is not recommended. For two-year-olds, sedentary screen time should be no more than one hour; less is better.Footnote 51 On the other hand, medical research also suggests that, with twenty-four-hour connectivity and digital and social media technologies forming an integral part of the experiences and identities of children and young people, it is the quality and nature of engagement, rather than its quantity, that determines inappropriate use of digital media or the internet.Footnote 52

The need for an approach that enables the reaping of the benefits of internet use while avoiding the risks of its overuse has been highlighted by international human rights bodies both at the global and regional level.

The United Nations (UN) Committee on the Rights of the Child (CRC) in its General Comment on children’s rights in relation to the digital environment recognises that the digital environment affords new opportunities for the realisation of children’s rights,Footnote 53 but also poses a risk of their violation or abuse (para. 3). It recognises health risks related to the use of digital tools but does not specifically identify internet addiction as a health issue. It does, however, acknowledge the importance of a healthy balance of digital and non-digital activities and sufficient rest (para. 98), which resembles the rationale of public health policy interventions aimed at limiting screen time for those suffering from internet addiction. The CRC also recognises the need to address other risks related to internet addiction. Notably, it accepts that measures may be needed to prevent unhealthy engagement in digital games or social media, such as regulating against digital designs that undermine children’s development and rights (para. 96). Furthermore, the General Comment mentions the increasing importance of children gaining an understanding of the digital environment, including its infrastructure, business practices, and persuasive strategies (para. 105). These two provisions echo the internet addiction research into problematic video game design features, such as loot boxes, through which online gaming can lead to online gambling and develop into an addiction.

The Council of Europe, in its guidelines to respect, protect and fulfil the rights of the child in the digital environment,Footnote 54 recognises that access to and use of the digital environment is important for the realisation of children’s rights and fundamental freedoms. Where children do not have access to the digital environment or where this access is limited owing to poor connectivity, their ability to fully exercise their human rights may be affected (para. 10). The Guidelines recognise some health risks related to the use of the internet – excessive use, sleep deprivation, and physical harm (para. 51) – but do not specifically identify internet addiction as a health risk. They do, however, recognise the importance of safety by design as a guiding principle for product and service features and functionalities addressed to or used by children (para. 54), and, despite the strong protection of the right of access to the internet, recognise the legitimacy of and encourage the development of parental controls installed in various products as a measure to mitigate risks for children in the digital environment (para. 54). The Guidelines also specifically recognise the problem of premature exposure to the internet (para. 55). States are expected to require the use of effective systems of age verification to ensure children are protected from products, services, and content in the digital environment that are legally restricted with reference to specific ages (para. 56) and to take measures to ensure that children are protected from commercial exploitation in the digital environment, including exposure to age-inappropriate forms of advertising and marketing (para. 57).

7.6.4 Right to Disconnect and Its Insufficiency

There is growing attention to ‘disconnection’ as a solution to digital technology overuse. Disconnection is a new area of focus for both tech and health and wellness industries, which develop and sell a wide range of digital well-being interventions, ranging from digital detox programmes to screen time monitoring apps to products that create physical barriers to use. These interventions are intended to help individuals achieve a ‘healthier’ relationship to their technology, and therefore stand in opposition to conceptualisations of ‘unhealthy’ overuse.Footnote 55 Disconnection also features as a research question in new media studies.Footnote 56 Researchers have acknowledged the existence of the so-called digital divide of disconnection as certain individuals and social groups lack the privilege to disconnect.Footnote 57 Importantly, the right to disconnect has also become an object of law and policymaking.

According to Eurofound (the EU Agency for the Improvement of Living and Working Conditions), the right to disconnect refers to a worker’s right to be able to disengage from work and refrain from engaging in work-related electronic communications, such as emails or other messages, during non-work hours.Footnote 58 This applies in the context of work and refers to one aspect of safe and healthy working conditions.

In January 2017, France passed a new employment law allowing workers in organisations with more than fifty employees to negotiate the conditions of a ‘right to disconnect’ from work after working hours. Article 55 under Chapter II ‘Adapting the Labour Law to the Digital Age’ of the Labour Code was introduced, aiming to protect workers against the problems associated with the increasing use of digital technology in the workplace.Footnote 59 Scholars do not necessarily see this law as a solution to the problem of being required to be connected. Some consider that with the passing of this law, it would seem that the employee can no longer invoke this ‘right’ under the traditional conditions of waged labour, where it would be on a par with one’s unpaid or free time. Instead of protecting the employee, the law runs the risk of turning all our available hours into the time of (unwaged) labour, thus feeding into the very problematic it tries to oppose, a problematic that characterises our current ‘culture of connectivity’.Footnote 60

Notwithstanding the lack of clear evidence that legislation on the right to disconnect effectively contributes to solving the problem of the requirement to be connected, some other EU Member States have also adopted or debated the need to adopt similar legislation. According to a 2021 Eurofound report, to date, Belgium, France, Italy, and Spain have legislation that includes the right to disconnect and in five other EU Member States (Finland, Germany, Lithuania, Slovenia, Sweden) policy debate on the right to disconnect was in progress.Footnote 61 In various EU Member States, the right to disconnect was also an object of company-level initiatives. In Germany, Volkswagen was reportedly the first company to implement a company-wide freeze on out-of-hours emails in 2012.Footnote 62 Moreover, the German Labour Ministry itself has also adopted policies regarding after-hours communication, in order to encourage other employers to follow suit. The ministry has banned any communication with staff outside working hours, except in emergencies, and implemented rules that do not allow managers to take disciplinary action against employees who switch off their mobile devices or fail to respond to communication after working hours.Footnote 63

The EP adopted a resolution in 2021 on the right to disconnect,Footnote 64 inter alia calling on the Commission to prepare a directive ‘that enables those who work digitally to disconnect outside their working hours’. This directive ‘should also establish minimum requirements for remote working and clarify working conditions, hours and rest periods’. Members of the EP believe that ‘workers’ right to disconnect is vital to protecting their physical and mental health and well-being and to protecting them from psychological risks’, and requested the Commission to submit a proposal for an act on the right to disconnect. In its EU strategic framework on health and safety at work, the Commission explains that it will ‘ensure appropriate follow-up’. The Commission admits that full-time remote working, which increased during the pandemic, together with other remote-working trends, such as permanent connectivity, a lack of social interaction, and the increased use of ICT, has further increased psychosocial and ergonomic risks.Footnote 65 Importantly, among the risks posed by the excessive use of technological devices, the EP resolution mentions (in recital E) techno-addiction (along with isolation, sleep deprivation, emotional exhaustion, anxiety, and burnout), which implies that the right to disconnect might also be instrumental in addressing concerns related specifically to internet addiction.

The right to disconnect is also included in the Declaration on Digital Rights and Principles for the Digital Decade signed on 15 December 2022 by the presidents of the EU Commission, the EP, and the Council.Footnote 66 The preamble (para. 4) mentions that the EP called for a strengthened protection of ‘workers’ rights and a right to disconnect’. The main body of the declaration (para. 6) contains a commitment to ‘ensuring that everyone is able to disconnect and benefit from safeguards for work–life balance in a digital environment’.

Academic debate on the right to disconnect in the EU focuses on whether there is a need to redefine the rest period under the Working Time Directive to ensure that a person is entitled to be not only outside the workplace but also beyond the employer’s reach, whether Member States should be urged to consider introducing the right to disconnect into their domestic law,Footnote 67 or whether nothing needs to be done because the issue can already be resolved using existing EU legislation or settled case law.Footnote 68 The EU has the necessary competence to pass legal acts on the right to disconnect. Even if there is no enacted or proposed EU regulation that directly addresses the right to disconnect, Articles 153 and 154 TFEU could be the basis for the adoption of directives setting out minimum requirements, as well as supporting and complementing the activities of the Member States in the area of working conditions.Footnote 69

Domestic legal standards related to the right to disconnect were also adopted or debated beyond Europe. On 2 December 2021, the Ontario government in Canada introduced the ‘Right to Disconnect’ policy in the Employment Standards Act of 2000. Employers that employ twenty-five or more employees are required to have a written policy on disconnecting from work in place for all employees. The Employment Standards Act of 2000 defines ‘disconnecting from work’ as ‘not engaging in work-related communications, including emails, telephone calls, video calls or the sending or reviewing of other messages, so as to be free from the performance of work’. This is not an exhaustive list, meaning that other forms of work-related communications can also fall under this definition. The goal of the right to disconnect law is to allow employees to disconnect from work and enjoy downtime. Ideally, the ability to disconnect from work will protect an employee’s mental well-being and avoid burnout.Footnote 70 The Working for Workers Act was proposed by the provincial government in 2021 in response to concerns around burnout, particularly during the pandemic, when working from home meant lines between work and home blurred even further.Footnote 71

Although the right to disconnect may be seen as contributing to the prevention of internet addiction, it is of limited value for this purpose.

First, for those addicted to the internet, merely having a right to disconnect would not suffice to change their addictive behaviour patterns. In an exploratory qualitative study of professionals working in various industries in France, some participants indicated setting barriers to protect themselves from the negative side of being always hyperconnected. However, they felt it was not easy to overcome a technology addiction as it required willpower and wisdom to regain control of the technology. As not everyone manages to regulate it by themselves, some interviewees pinned their hope on the law as a way to help protect workers from the overuse of digital tools. Although almost every survey participant declared being in favour of the ‘right to disconnect’ legislation, they were not sure how the law could make them more productive. The study also revealed the importance of educative actions. Awareness campaigns and training sessions advising on how to use digital tools more productively and reminding workers of the benefits of disconnecting could help significantly.Footnote 72

Second, research shows that being connected when required to do so (notably, for the purpose of studying) is less likely to result in an internet addiction than being connected during leisure time. Research carried out in Lithuania shows that for schoolchildren of all ages it is the length of screen time, as well as internet activities specifically aimed at amusement, that increases the risk of PUI.Footnote 73 In north Tanzania, undergraduate students using the internet at the college were less likely to be addicted to the internet compared with those using it both at a hostel or home and college.Footnote 74 Therefore, the right to disconnect, if its application were limited to the work context, would not be of use in those situations where internet addiction manifests itself when digital tools are used in other contexts.

This suggests that to combat the overuse of digital tools, more proactive methods, such as raising awareness of the dangers and the safe use of the internet, are necessary, and even being required to disconnect could be considered.

Arguably, the requirement to disconnect has already become a cultural requisite and a lifestyle component for some. Media researchers note the prevalence of the self-responsibility narrative, which now takes the form of the ‘responsibility to disconnect’ as a precondition to keeping up with digital society. This expectation of self-discipline also comes with a heavy moral component: Societal assumptions about what is healthy or good feed into a moral superiority for those who can dutifully disconnect and shame for those who cannot. By framing ‘failure to disconnect’ as a problem of willpower, feelings of shame and guilt are legitimised and even capitalised on as mechanisms that could be used to effect change. In doing so, disconnection is implicitly framed as an act of self-improvement.Footnote 75 In the legal context, the requirement to disconnect has been attempted for schools in some countries where a ban on the use of cell phones was introduced, notably in France, the province of Ontario in Canada, and the state of Victoria in Australia.

Research shows that for most children across Europe, smartphones are now the preferred means of going online. This often means that they have ‘anywhere, anytime’ connectivity, with the majority of children reporting using their smartphones daily or almost all the time.Footnote 76 One study has shown that the restrictions on the use of smartphones can have a positive impact on academic performance. Restricting mobile phone use can also be a low-cost policy to reduce educational inequalities.Footnote 77

In France, the ban was introduced at the level of national legislation. Under the Code of Education as amended in 2018, schoolchildren are not allowed to use mobile phones or any other electronic communication device at school and during all school activities, even if they are held outside school. Exceptions are allowed for teaching purposes and for children with special needs. The phone can be confiscated for a certain period if the rule is not followed.Footnote 78 At the time of the ban, the French Education Minister mentioned the problem of screen addiction as a phenomenon of bad mobile phone use and talked about the role of the state in protecting children by means of education.Footnote 79 In the state of Victoria in Australia, the ban on the use of mobile phones at schools was introduced in 2020,Footnote 80 as a policy of the Minister of Education,Footnote 81 aimed at improving student performance and managing risks related to the use of technology, without specifically identifying the problem of addiction but mentioning that the use of mobile phones requires more and more of the user’s time. In Canada, the ban was introduced in the province of Ontario in 2019,Footnote 82 but not in Quebec, where, after a debate, the decision was left to the management of individual schools.Footnote 83 Similarly, in the UK and the US it is up to individual schools to set their own rules. In the UK, parents seem to support the idea of such a ban. In the US, the situation is different. Owing to the spate of school shootings, parents tend to want the reverse: for the opportunity to contact their children to remain open. Reportedly, this was one of the reasons why a school cell phone ban in New York was overturned in 2015.Footnote 84 This variety of approaches shows that, although the problematic use of the internet among schoolchildren and the value of being temporarily disconnected are recognised, bans on internet use are not necessarily regarded as a necessary response; even when such bans are introduced, they apply only in certain situations, to certain people, and not without exceptions. Finally, the role of education and educators in creating a culture of better managed use of the internet is acknowledged, which suggests that the responsibility to disconnect, or disconnection as a lifestyle, can be taught.

7.7 How International Human Rights Law Can Help

International and European human rights law provide the necessary preconditions for protecting a person against the human rights harms caused by internet addiction, but those preconditions have not been spelled out in the context of concrete cases. Nevertheless, international human rights law is slowly growing as soft law instruments on internet addiction-related matters are adopted.

7.7.1 Existing Norms at the Global Level

International human rights law already contains norms relevant for preventing and addressing internet addiction. The best interests of the child, which is one of the underlying principles of the UN system of the protection of the rights of the child, encompasses interests related to the safe usage of the internet. The fact that the UN Committee on the Rights of the Child has already addressed the issue of rights in the digital realm is important as it provides a basis for further interpretations of what it takes to ensure that children do not develop unhealthy habits when using the internet.

The right of everyone to the enjoyment of the highest attainable standard of physical and mental health provided for in Article 12 of the International Covenant on Economic, Social and Cultural Rights is a basis for interpretations requiring states to take measures to prevent internet addiction and to provide medical services for those affected. As demonstrated in this chapter, efforts by the state should be directed at the development of a comprehensive public health model aimed inter alia at the problem of internet addiction. The challenge in this respect is that, under the surviving dichotomy between civil and political rights on the one hand and socio-economic rights on the other, the right to health enjoys relatively weak protection as its implementation is dependent on the available resources of the state. One way to diminish this weakness is to rely either on the core of the right to health, which under the concept of core obligations needs to be guaranteed in any circumstances,Footnote 85 or on the non-discrimination element of the right to health, which is required irrespective of existing limitations as to available resources.Footnote 86

Protection from discrimination in the context of internet addiction is possible on various grounds. On the grounds of young age and of sex, it can be claimed that children and, depending on the context, boys or girls require special protection against internet addiction compared with the general population. On the ground of disability, it could be claimed that internet addiction, because of the obstacles it creates, amounts to a disability (an approach inspired by Canadian experience: the Ontario Human Rights Commission takes an expansive and flexible approach to defining the mental health disabilities and addictions that are protected by the Ontario Human Rights Code, considering that the Code protects people with mental health disabilities and addictions from discrimination and harassment on the ground of disability).Footnote 87 On this reading, it is under the UN Convention on the Rights of Persons with Disabilities that states should take measures to provide the required services for those afflicted by internet addiction.

7.7.2 Avenues to be Explored Based on Regional Experience

Regional human rights law often acts as a laboratory in which new definitions and interpretations of human rights are developed and tested. In regard to internet addiction, the Council of Europe approach towards the protection of children in the digital realm is an example of how this process of identifying existing threats and developing the required standards gradually progresses (even though at present internet addiction has not yet been clearly identified by the Council of Europe as a human rights issue).

EU law has tools that enable it to expand into new areas faster than would be possible under international (global or regional) human rights law. The EU fundamental rights catalogue – the Charter of Fundamental Rights – is not limited to any specific type of human rights and, notably, contains provisions on health (as an element of working conditions and in Article 35 on healthcare) and consumer protection (Article 38). Even though official explanations of the Charter state that both Article 35 and Article 38 contain principles rather than rights,Footnote 88 Charter provisions have the potential to be transformed from vaguely described principles to specifically defined rights. This potential derives from the fact that the EU can concretise its commitments by adopting, within the scope of its competencies, secondary legal acts. Moreover, the EU’s fundamental rights can be a trump card in a political discussion on the need to adopt further legal acts on a certain issue. If the Charter applies in a certain type of legal relationship, this indicates that a fundamental right is at stake, and in this manner raises the importance of the question at issue.

When we specifically consider internet addiction, the EU approach of addressing problematic video game design via the consumer rights perspective is novel and instructive, as consumer rights protection is included in the catalogue of human (fundamental) rights only in the EU Charter of Fundamental Rights but not in international human rights law documents. Similarly, the right to disconnect, which first became the subject matter of certain domestic laws, entered the rights debate in the EU, where, even though it is not contained in the Charter, it was included in the declaration on digital rights; there it will serve as an indicator for EU institutions that the protection of this right will have to be further developed. As international human rights law follows a similar path of putting ideas into soft law and then gradually transforming these into hard law, it is advisable to consider including the ideas tested in the EU in the texts of non-binding resolutions of relevant international institutions (such as the UN Human Rights Council and the UN human rights treaty bodies) to spark further debate on the required action at the global level. Reflecting on the experience of regional human rights systems, such as that of the Council of Europe, should also continue to be a source of inspiration in developing the global system for the protection of human rights and, in particular, in addressing the problem of internet addiction.

7.8 Conclusion

This chapter has shown that a multidisciplinary approach is useful for the purpose of identifying human rights issues related to the phenomenon of internet addiction and, ultimately, for improving human rights law. Mental health research, in particular, can provide lawyers with evidence necessary to assess the severity of internet addiction-related threats, to identify the most vulnerable groups and the required legal response. Understanding that children are the most vulnerable group but that other age groups are not immune to internet addiction is one example where medical research findings can lead to legal research on which human rights instruments can provide protection for persons of different ages afflicted by internet addiction. Importantly, mental health research also shows that moving online makes certain addictions more likely, which proves that internet addiction is a novel type of addiction that requires a novel human rights law response.

Within the discipline of law, the interaction of various legal systems is necessary to transplant best practices from one legal system to another. Whether or not ‘the right to be disconnected’ will continue its journey from domestic legal systems to international human rights law via its most recent appearance in EU law remains to be seen, but the fact that there are domestic practices related to disconnecting seems to invite further dialogue between legal systems.

Finally, as we see the limits of the available medical evidence to validate diagnoses of internet addiction and realise that the law may remain uninformed by medical research for some time to come, human rights law and its interpretation can be influenced by cultural factors. Where the connectivity (or ‘always on’) culture is perceived as increasing the risk of burnout, and disconnection for a short time is seen as an escape, further research into the right to disconnect as a human rights issue seems to be required. Disconnection as a lifestyle can possibly alleviate the severity of internet addiction problems, but whether it is states or internet platforms that can act as better influencers promoting such practices, and what international human rights law minimally requires from states in this regard (e.g., can the state leave it to the platforms to promote healthy internet use?), are questions for further research.

8 Just Don’t Get Caught!

8.1 Introduction

We can observe various forms of harm, wrongdoing, and crime in the digital environment, violating traditional or newly modified digital human rights.Footnote 1 ‘Globalisation, digitalisation and smart technologies have escalated the propensity and severity of cybercrime.’Footnote 2 A serious increase in cybercrime and cyberattacks has been reported,Footnote 3 and estimates for 2020 predicted that cybercrime would cost the global economy around US$1 trillion, a 50 per cent increase compared with 2018.Footnote 4 However, the financial impact is not the only problem resulting from the unregulated possibilities of the digital world. For instance, as reported in 2018, around 45 per cent of adult internet users in the UK had experienced some form of online harm;Footnote 5 potential harm affects not just adults but also children,Footnote 6 and not just private individuals but also larger entities such as companies and organisations. The harm can also be psychological, social, ethical, and medical.

Many experts, leaders, stakeholders, and even laypeople claim that legal restrictions would help. At first glance, legal regulations have several advantages:

  • they provide clear guidelines for everyone,

  • they set national/international standards,

  • the legal system is based on institutions,

  • having an authoritative institution facilitates reporting violations of rights.

The legal system clearly has the advantage that rules, when well written, are transparent and clear to everyone, and provide a set of standards and guidelines about what is prohibited and permitted in the digital realm.Footnote 7 Legal rules provide national and international standards of conduct, which can help orientate agents, citizens, and other legal persons. Rules can be guaranteed by institutions, which represent the distribution of power between the legislature, executive, and judiciary. As pointed out by many social contract theorists (e.g., John Locke),Footnote 8 it is reasonable to transfer certain duties (for example, judging and punishing wrongdoers) to an entity that stands above individuals and is thus independent, acting in a just and commensurate manner. Violations of rights can then be reported to the authority, and the authority provides protection for individuals. For instance, internet fraud, such as identity theft, can be reported to the police, who can then act accordingly to enforce the law.

8.2 Problems Associated with the Legal Regulation of the Digital Realm

However, there is another side to the coin: legal restrictions are not an ideal and flawless solution. The problems associated with legal rules include, for instance:

  • the development of legal regulations is too slow,Footnote 9

  • legal standards are not yet developed because the cause of human rights violations in the online realm is not yet adequately understood,Footnote 10

  • the field and the legal context is country specific,Footnote 11

  • there can be vagueness, limited scope, and conflicting interpretations of the law,Footnote 12

  • law enforcement can be problematic,Footnote 13

  • some crimes go unreported.Footnote 14

The problem of the regulation of digital issues stems from the rapid exponential growth of what is possible with digital technology and the comparatively slow development of legal restrictions. ‘As technological developments take place faster than regulation can catch up, the guidelines and rules that have been adopted have not provided solutions to many of the issues, including the human rights implications of digital activities.’Footnote 15

A related problem is that what should be restricted in certain areas, for instance in the regulation of digital human rights, is not yet fully understood, and thus in some areas regulation has not yet been developed at all. As an example, we could mention ‘a threat of cancel culture and social engineering in the context of elections where the harm of manipulated voting is obvious but how exactly the prohibited acts should be defined or how they could be prevented is an open question. Similarly, a question to what extent the risk of addiction to the Internet could provide entitlements to health care services.’Footnote 16

A more general problem of law and legal restrictions is that situations are often country specific, and it is difficult to find common ground for legal norms in the international, global context. Countries have citizens and inhabitants with various beliefs, representing a plurality of diverse cultures. Other problems related to legal rules stem from how they are written and presented, namely vagueness, limited scope, or conflicting interpretations of the law. Another interesting aspect of the law, the legal system, and the state distribution of power is law enforcement. The problem is thus not only whether the legal rules are well made, timely, and target relevant issues, but also whether violations of the law are dealt with appropriately and the criminals punished.

Similarly, violations of the law need to be detected through effective monitoring in order to catch the criminal. If the system fails, for instance if victims do not report a bully because they are not aware of their rights or are afraid to do so, legal regulation does not do much to diminish the problem. In the digital world, the problem of unreported violations and harms is particularly acute owing to the potential for anonymity. According to the US Federal Bureau of Investigation (FBI), only approximately one in seven cybercrimes was reported in 2017.Footnote 17

This prompts a question: How can we achieve better regulation of the digital realm?

It seems there are multiple possible solutions, which, within the framework of a better legal system, can be summed up as follows:

  • better regulation and better laws,

  • developing better legal standards,

  • better law enforcement,

  • stricter regulation and control.

There are many emerging developments in national and international contexts aimed at better regulating the digital environment.Footnote 18 However, as ideal as they may seem at first sight, these approaches also have their Achilles heels, including:

  • not all societies and citizens are developed enough to be able to follow the law and manage proper law enforcement,Footnote 19

  • the motivation for compliance is mostly simply not to get caught.

It is obvious that some societies suffer from greater problems of law enforcement. This is related, for instance, to high levels of corruption, which affect the impartiality, justness, and fairness of the decision-making of responsible agents.Footnote 20

8.3 The Problem of Surveillance

One option would be to say, all right, we can still make it work and focus on the following:

  • monitoring agents,

  • even stricter regulation: cameras and tracking devices everywhere, the use of facial recognition,Footnote 21

  • providing better control and data about who is doing what, so that responsible institutions can protect citizens and other stakeholders.

There is currently quite a lively debate regarding surveillance systems, for instance in China, but also in other countries. As pointed out by Peter Königs, non-democratic states nowadays and in recent history ‘known for their extensive surveillance systems include the GDR, the People’s Republic of China, the Soviet Union, and North Korea. In the past few decades, however, established liberal democracies have been increasingly ready to monitor their citizens on a massive scale. One notorious surveillance program run by an alliance of democracies was uncovered by Edward Snowden.’Footnote 22 The proponents of surveillance present arguments regarding greater safety and crime prevention, and the resulting benefits for everyone involved. Unfortunately, such Big Brother concepts are not ideal.Footnote 23 ‘Government surveillance and the erosion of privacy it is associated with are being discussed as a cause of distrust and feelings of vulnerability, as a potential source of discrimination and unjust domination, and as a threat to democracy and the integrity of the public sphere, to name but a few concerns.’Footnote 24 Therefore, as mentioned earlier, reducing harms in the digital realm by increasing surveillance also has its downsides, which stem above all from its threat to privacy.

The protection of privacy is a central concept in relation to human rights, and relates to the threat of the abuse of information: its manipulation, its use in inappropriate contexts, and even identity theft. At the same time, the idea of transparency helps ensure that the protection of privacy is not misused to mask other inappropriate actions, such as cheating, stealing, and other forms of wrongdoing.

8.4 Motivations

Another aspect of this problem relates to what motivates a person or agent in the digital world: the motivation not to get caught.

If agents are mostly motivated not to do certain things (e.g., cheat online) by the fear of getting caught, they may try to find ways to avoid punishment. The problem lies in the system but also in human character. The ancient philosopher Aristotle, for instance, already sketched the problem of human character in relation to the state.Footnote 27

Even with controlling devices and greater surveillance, there is a danger that the people who work with them can manipulate the data or evidence, and can be bribed to do so. So there is the question of who controls those who are in power and those who are supposed to control others. Here, of course, the solution of the division of power emerges, but, as we can observe, even that can be abused if individuals with an unsuitable character end up in these roles.

When we consider the legal restrictions, we may ask: What is the motivation to act legally?

It seems that the motivation is, as already sketched out, not to get caught. We can, of course, speculate and argue that in the end it is not important to act on certain motives, since the consequences are what matter. Even if we act in compliance with the law merely out of a fear of punishment, we can still act well and the consequences can be good. Nevertheless, this does not seem optimal either, as there is still room to act in ways that are not beneficial for society if we expect that no one can catch us.

Therefore, we may go further and ask: What if no one can see us?

The digital world presents us with several potential threats that are often less prevalent offline, related to anonymity, the idea of ‘no identity’, or the creation of a ‘new’ identity.Footnote 28 It seems easier to remain anonymous using a fake identity online than offline. Of course, even in the pre-digital era we could send anonymous (paper) letters, wear masks, or modulate our voices; nowadays, however, the possibilities in the online world are more numerous and can have a much greater impact. It is possible to identify an IP address or other digital traces; at the same time, however, there are ways to avoid being spotted.

8.5 Individual Regulation

There are various forms of regulation online, and they are related to different motivations. Since the forms of regulation presented so far are not flawless, we should consider other options. Therefore, the issue of individual regulation emerges.

Our motivation should be more than just not to get caught and could also include:

  • individual responsibility,

  • internal/intrinsic regulation and motivation.

By individual responsibility, we mean the responsible behaviour of an individual who considers the consequences of his or her actions and is aware of, and able to be held accountable for, them. A responsible person is aware of his or her own actions and makes qualified judgements and decisions. He or she relies not only on external forms of authority that dictate what to do and what not to do, but also on a rational understanding of the situation.

The idea of internal or intrinsic regulation and motivation stands in contrast with external, extrinsic regulation and motivation. An example of external, extrinsic regulation is a police officer directing traffic at an intersection: the officer’s presence affects motivation, and people act in the required manner in order not to incur a fine or end up in jail.

Intrinsic or internal regulation, on the other hand, comes not from outside but from inside. To put it simply, there is no police officer on the street, but our internal police officer, our conscience, tells us we should not exceed the speed limit. The reason is, for instance, that speeding would very likely endanger passengers, pedestrians, or even ourselves, and it is not good to take such a risk without sufficient justification.

To imagine a situation in the digital realm, we can think of an authority checking the identity of a user by validating their credentials (e.g., password, biometric data, phone number) as a form of external regulation preventing users from accessing another person’s account. The motivation for not trying to access someone else’s account is the fear of punishment. Internal regulation, on the other hand, operates when the user is aware that it is not right to hack into another person’s account and steal his or her identity or finances.

However, we may ask whether there is a necessary connection between internal regulation and internal motivation, or external regulation and external motivation. Of course, it seems it is not so straightforward, since it is possible to have, for instance, external regulation and at the same time internal motivation to act in compliance with the law.

At the same time, we also need to be aware that internal or intrinsic motivation and the related regulations also have some limits; for instance, they:

  • need time and effort to develop,

  • rely on people’s conscience.

There are also problems with regulations set and adopted by private companies: they can set their own standards and produce divergent responses, as many (often mostly financial) interests are at play. Such rules are then often referred to as ‘ethics washing’: the company pretends to regulate itself for the benefit of society, while in fact the benefits accrue only to a narrow group of people. Although this is an interesting topic, in this chapter I will focus on the ethical regulation of individuals.

There have been quite fruitful discussions about the nature of intrinsic and extrinsic motivation.Footnote 29 The debate is not just at the conceptual level but also explores the emerging problem of how to deal with this in practice. The obvious question, therefore, is how we can implement internal regulation.

One plausible answer is that agents internalise existing rules and act according to them.Footnote 30 This seems quite a good idea at first sight: citizens and agents become informed and educated about legal norms relating to the digital environment, understand their importance for themselves and others, understand that violating them will be punished, and consequently internalise such rules and act accordingly.

There are many interesting ideas regarding regulation and the motivation to comply.Footnote 31 Unfortunately, however, there are also some problems with this approach, as mentioned at the beginning of the chapter, such as the fact that:

  • legal norms regarding the digital realm are evolving slowly,

  • the legal system is evolving, but the legal norms are not always right,

  • people can manipulate the rules,

  • there is a problem with freedom and autonomy,

  • internalising the rules may not be effective or efficient.

Therefore, even this approach is problematic and is not a flawless solution.

It seems that we need something more. One idea is to use various nudges to motivate people to act in a desirable way; however, this can also be understood as manipulation and can be abused by the authorities or certain people.Footnote 32

8.6 Digital Ethics and Education

It seems that we need something else, and that could be ethics and moral norms.Footnote 33 What do they add to legal regulation? They are often oriented towards internal regulation, and can therefore help even in situations where no one can see the agents, where there is no surveillance, and even where no existing legal rules regulate their actions.

When digital ethics is mentioned as a solution, the question can therefore emerge of whether it should fully replace the legal regulation of the digital realm.Footnote 34 Of course not: both have their pros and cons, and their mutual enrichment could therefore be beneficial. They should be understood as complementing each other, with internal regulation complementing external regulation.

We could also mention Lawrence Kohlberg and his theory of moral development here, in which he associates the lower stages with the external regulation and the higher stages with the internal regulation of moral action.Footnote 35 For instance, the pre-conventional level is oriented towards avoiding punishment and obtaining praise, the conventional stages are oriented towards compliance with rules and the law for the benefit of society, and the highest stages are related to greater competence in moral judgement. How can we put ideas about digital ethics into practice?

There seems to be quite a rich debate regarding digital ethics as a means of regulating agents online, and many companies point to codes of conduct dealing with such issues or to the existence of ethics committees; however, there is a risk that this is used merely as marketing, washing the company’s hands so that consumers do not question its actions.Footnote 36 At the same time, as mentioned earlier, although I think that well-made corporate self-regulation can supplement the overall regulation of digital issues, here I will focus more on the regulation of individuals. So how can we make digital ethics work?

There is room for digital ethics education to help implement these ideas. It should focus on:

  • the development of internal regulation,

  • critical thinking and moral judgement,

  • a deeper understanding of ethical dilemmas in the digital realm.

There are various approaches to digital education and digital ethics education,Footnote 37 and various challenges emerge related to the following:

  • targeting individuals at different levels: children and adults,Footnote 38

  • the means of teaching,

  • dealing with disagreements,

  • having qualified teachers,

  • having good teaching materials.

It seems reasonable to focus not just on children but also on adult education in the area of digital ethics. There are several approaches to teaching ethics,Footnote 39 but I propose that ethics education be oriented towards the development of critical thinking and not merely towards training about ethics, that is, indoctrination in rules without any critical examination and reflection. Reflection and critical analysis have the advantage of being flexible in seeking the best solution to moral dilemmas. Digital ethics education should be oriented towards critical thinking, where the focus is not primarily on what to think but rather on method, on how to think.Footnote 40 It is also about understanding why. The potential for disagreement about what is right and what is wrong also needs to be taken into account; this approach, however, should also help us learn how to communicate clearly, effectively, and logically, and how to evaluate arguments.

However, what if critical thinking collides with legal norms? It seems that in open societies it can even be desirable to have citizens who are able to critically evaluate the existing legal system and its norms.Footnote 41

Of course, the problem may lie in finding qualified teachers and good teaching materials and manuals. Shortcomings in these areas are being addressed by various initiatives and projects. Digital education is often oriented towards the technical or psychological aspects of using digital technologies, which is important: empirical findings from Slovakia, for example, show that 20 per cent of children do not know that they can block a contact on social media and 40 per cent do not know that they can report someone who is bullying them online.Footnote 42 At the same time, it is also important to address digital ethics, that is, what is right and what is wrong and why, and what we should do in a given situation. The problem of digital ethics education has been addressed by various projects, for instance Erasmus+ PLATO’S EU, in which the team has developed teaching materials and manuals for teachers and organised workshops for students in order to promote philosophical methods of thinking as a tool for dealing with issues online.Footnote 43

8.7 Conclusion

This chapter has explored the problem of digital regulation and proposed digital ethics and education as a complementary solution.Footnote 44 The main idea presented is that we need to focus not just on external regulation and the motivation to comply with legal norms out of fear of being caught and punished, but also on developing an internal moral compass in our citizens. Various advantages and disadvantages of the different approaches to regulation have been presented.

Legal norms are not considered in contrast with digital ethical regulation but as complementary. As mentioned in an earlier publication of the COST Global Digital Human Rights Network project: ‘there is the inherently multidimensional nature of compliance with human rights in the digital realm, which requires a multidimensional response, including in the context of awareness raising and training. Law should be complemented by ethics and the training on cyber ethics should be provided along with training on digital human rights law.’Footnote 45

Internal regulation and intrinsic, autonomous motivation seem to be an important direction for the further development of the regulation of the digital realm. The topic of internal regulation is very fruitful and has been investigated by philosophers and ethicists since ancient times, also in terms of human flourishing and the realisation of full human potential.

Footnotes

2 Is There a Need for New Digital Human Rights in AI Governance?

1 See the outcome documents at ITU, World summit on the information society, Geneva 2003, www.itu.int/net/wsis/documents/doc_multi.asp?lang=en&id=1161|1160|2266|2267; see also W. Benedek, ‘International organizations and digital human rights’, in B. Wagner, M. C. Kettemann, and K. Veith (eds.), Research Handbook on Human Rights and Digital Technology. Global Politics, Law and International Relations, 2nd ed. (Cheltenham: Edward Elgar, 2024), pp. 311–26.

2 W. Benedek, ‘Internet governance and human rights’, in W. Benedek, V. Bauer, and M. C. Kettemann (eds.), Internet Governance and the Information Society (The Hague: Eleven, 2008), pp. 31–50, at 36 et seq.

3 See the Charter on the webpage of the Internet Rights and Principles Coalition, https://internetrightsandprinciples.org.

4 See A. Pettrachin, ‘Towards a universal declaration on internet rights and freedoms?’ (2018) 80 International Communication Gazette 4, 337–53, who analysed fifty-eight documents proclaiming internet-related human rights.

5 UNHRC, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank LaRue, 16 May 2011, UN Doc. A/HRC/17/27.

6 UNHRC, ‘Resolution on the promotion, protection and enjoyment of human rights on the internet’, 16 July 2012, UN Doc. A/HRC/RES/20/8, para. 1. Since the resolution was renewed about every second year, see latest UNHRC Resolution 47/16 of 13 July 2021 with the same title, which repeats the principle in its para. 1.

7 See UN Office of the Secretary-General’s Envoy on Technology, www.un.org/techenvoy/global-digital-compact.

8 Benedek, ‘International organizations and digital human rights’, 370 et seq.; see also W. Benedek and M. C. Kettemann, ‘The Council of Europe and the information society’, in R. Kicker (ed.), The Council of Europe, Pioneer and Guarantor for Human Rights and Democracy (Strasbourg: Council of Europe Publishing, 2010), p. 109 et seq.

9 See the guide, Council of Europe, ‘Guide to human rights for internet users’, www.coe.int/en/web/freedom-expression/guide-to-human-rights-for-internet-users.

10 See W. Benedek and M. C. Kettemann, Freedom of Expression and the Internet, 2nd ed. (Strasbourg: Council of Europe Publishing, 2020).

11 See ECtHR, ‘Factsheet – Access to internet and freedom to receive and impart information and ideas’ (June 2024), www.echr.coe.int/documents/d/echr/FS_Access_Internet_ENG; and ECtHR, ‘Factsheet – New technologies’ (October 2024) www.echr.coe.int/documents/d/echr/FS_New_technologies_ENG.

12 See the Vienna Manifesto on Digital Humanism, May 2019, https://caiml.org/dighum//dighum-manifesto/.

14 See YouMove Europe, ‘For new fundamental rights in Europe’, https://you.wemove.eu/campaigns/for-new-fundamental-rights-in-europe.

15 See Pettrachin, ‘Towards a universal declaration’, and J. Kulesza, ‘Multistakeholderism – meaning and implications’, in M. Susi (ed.), Human Rights, Digital Society and the Law: A Research Companion (London: Routledge, 2019), pp. 117–31; W. Benedek, ‘Multi-stakeholderism in the development of international law’, in U. Fastenrath et al. (eds.), From Bilateralism to Community Interest: Essays in Honour of Judge Bruno Simma (Oxford: Oxford University Press, 2011), pp. 201–10.

16 See D. Wong and L. Floridi, ‘Meta’s oversight board: a review and critical assessment’ (2023) 33 Minds and Machines 2, 261–84.

17 P. Alston, ‘Conjuring up new human rights: a proposal for quality control’ (1984) 78 American Journal of International Law 3, 607–21.

18 M. Susi, ‘Novelty in new human rights: the decrease in universality and abstractness thesis’, in A. von Arnauld, K. von der Decken, and M. Susi (eds.), The Cambridge Handbook on New Human Rights: Recognition, Novelty, Rhetoric (Cambridge: Cambridge University Press, 2020), pp. 21–33.

20 See on its composition and activities, https://edri.org.

22 A. von Arnauld, K. von der Decken, and M. Susi (eds.), The Cambridge Handbook on New Human Rights: Recognition, Novelty, Rhetoric (Cambridge: Cambridge University Press, 2020).

23 Susi, ‘Novelty in new human rights’.

24 O. Puccinelli, ‘The right to be forgotten 2.0’, in von Arnauld, von der Decken, and Susi (eds.), The Cambridge Handbook on New Human Rights, pp. 300–10.

25 See CoE, ‘Privacy and data protection – explanatory memorandum’, www.coe.int/en/web/freedom-expression/privacy-and-data-protection-explanatory-memo.

26 M. Susi, The Non-Coherence Theory of Digital Human Rights (Cambridge: Cambridge University Press, 2024).

27 See B. Cali, ‘The right to meaningful access to the internet’, in von Arnauld, von der Decken, and Susi (eds.), The Cambridge Handbook on New Human Rights, pp. 276–84.

28 OHCHR, ‘Internet shutdowns: trends, causes, legal implications and impacts on a range of human rights’, 13 May 2022, UN Doc. A/HRC/50/55.

29 Footnote Ibid., para. 2.

30 M. Susi, ‘The right to be forgotten’, in von Arnauld, von der Decken, and Susi (eds.), The Cambridge Handbook on New Human Rights, pp. 287–99, esp. 297.

31 T. Pajuste, ‘The protection of personal data in the digital society: the role of the GDPR’, in Susi (ed.), Human Rights, Digital Society and the Law, pp. 303−15.

32 Susi, ‘The right to be forgotten’, p. 287 et seq.

33 Biancardi v. Italy, Application no. 77419/16, Judgment of 25 November 2021.

34 See UNGA, ‘The human right to water and sanitation’, 28 July 2010, UN Doc. A/RES/64/292; UNGA, ‘The human right to a clean, healthy, and sustainable environment’, 1 August 2022, UN Doc. A/RES/76/300.

35 GPT stands for Generative Pretrained Transformer. First assessments of the new technology and its commercial successor GPT-4 show its strengths and weaknesses; see K. Roose, ‘The brilliance and weirdness of ChatGPT’, New York Times, 5 December 2022; C. Metz, ‘OpenAI plans to up the ante in tech’s A.I. race’, New York Times, 14 March 2023.

36 See Future of Life Institute, ‘Pause giant AI experiments: an open letter’, 22 March 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

37 See HRC, ‘New and emerging digital technologies and human rights’, 12 July 2023, UN Doc. A/HRC/RES/53/29.

38 OHCHR, ‘Mapping report: new and emerging technologies’, 24 June 2024, UN Doc. A/HRC/56/45.

39 The Asilomar Principles are part of another open letter; see Future of Life Institute, ‘Asilomar AI Principles’, 11 August 2017, https://futureoflife.org/open-letter/ai-principles/.

40 See the Recommendation of the OECD Council on Artificial Intelligence of 22 May 2019, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

41 See the UNESCO Recommendation, https://unesdoc.unesco.org/ark:/48223/pf0000381137.

42 See the Statement on Artificial General Intelligence, 24 February 2023, https://openai.com/blog/planning-for-agi-and-beyond.

43 See ‘“The Godfather of AI” leaves Google and warns of danger ahead’, New York Times, 1 May 2023, www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html?te=1&nl=from-the-times&emc=edit_ufn_20230501.

44 W. Benedek, ‘Digital human rights and artificial intelligence’ (2023) 14 Pravni Zapisi, 2, 227–37.

45 See White House, ‘Blueprint for an AI Bill of Rights – making automated systems work for the American people’, www.whitehouse.gov/ostp/ai-bill-of-rights/; White House, ‘Executive Order on the safe, secure and trustworthy development and use of artificial intelligence’, 30 October 2023, www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

46 See ‘Open AI’s Sam Altmann urges regulation at Senate hearing’, New York Times, 16 May 2023, www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html?te=1&nl=the-evening&emc=edit_ne_20230516.

47 See the Chinese ‘Interim measures’, www.pwccn.com/en/tmt/interim-measures-for-generative-ai-services-implemented-aug2023.pdf; M. Sheehan, ‘China’s AI regulations and how they get made’, Carnegie Foundation, 10 July 2023, https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en.

48 See G. De Gregorio and R. Radu, ‘Digital constitutionalism in the new era of internet governance’ (2022) 30 International Journal of Law and Technology, 1, 68–87.

49 See the developments on the global digital compact at the website of the UN Envoy for technology, www.un.org/techenvoy/global-digital-compact.

50 See Parliamentary Assembly of the Council of Europe, Recommendation 2102 (2017), ‘Technological convergence, artificial intelligence and human rights’, 28 April 2017, http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=23726.

51 See CoE Commissioner for Human Rights, ‘Unboxing artificial intelligence: 10 steps to protect human rights’ (2019), https://rm.coe.int/unboxing-artificial-intelligence-10-steps-to-protect-human-rights-reco/1680946e64.

52 See CoE, ‘Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes’, 13 February 2019, https://search.coe.int/cm/pages/result_details.aspx?objectid=090000168092dd4b.

53 See CoE, ‘Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems’, 8 April 2020, https://search.coe.int/cm/pages/result_details.aspx?objectid=09000016809e1154.

54 See CAHAI, Feasibility study, 17 December 2020, https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da.

55 See Footnote ibid., paras. 83–90 and paras. 95 et seq.

56 See CoE, ‘Artificial Intelligence, Council of Europe’s work in progress’, November 2024, www.coe.int/en/web/artificial-intelligence/work-in-progress.

57 See CAHAI, ‘Possible elements of a legal framework on artificial intelligence, based on the CoE’s standards on human rights, democracy and the rule of law’, 3 December 2021, https://rm.coe.int/cahai-2021-09rev-elements/1680a6d90d.

58 See for UNESCO its Recommendation on the Ethics of Artificial Intelligence, 23 November 2021, https://unesdoc.unesco.org/ark:/48223/pf0000381137.

59 See CAHAI, ‘Possible elements of a legal framework on artificial intelligence’, paras. 45 et seq.

60 A. Mantelero, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI (New York: Springer, 2022), p. 165.

61 See for the work of CAI and CoE, ‘Artificial Intelligence. Committee on Artificial Intelligence’, www.coe.int/en/web/artificial-intelligence/cai.

62 See on CAI, and the text of the Framework Convention, CoE, ‘Artificial Intelligence. Committee on Artificial Intelligence’, www.coe.int/en/web/artificial-intelligence/cai.

63 CoE Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, CoE Treaty Series No. 225, Vilnius, 5 September 2024, Article 7.

64 Footnote Ibid., Articles 8, 9, and 17.

65 Footnote Ibid., Article 15, para. 2.

66 Footnote Ibid., Article 16.

67 Footnote Ibid., Article 14.

68 Footnote Ibid., Article 5.

69 See on the work of CAHAI, CoE, ‘Artificial Intelligence. CAHAI – Ad hoc Committee on Artificial Intelligence’, www.coe.int/en/web/artificial-intelligence/cahai.

70 See, e.g., Article 13 AIA on transparency and provision of information to deployers and in particular Article 14 on human oversight for high-risk AI systems and Article 50 – natural persons to be informed that they interact with certain AI systems or that content has been artificially created or manipulated (deep fakes).

71 See Article 3 of the CoE Framework Convention.

72 See the open letter by Algorithm Watch and numerous other NGOs of 5 March 2024 at: https://algorithmwatch.org/de/wp-content/uploads/2024/03/Open_letter_Council_of_Europe_AI_Convention.pdf.

73 Convention for the Protection of Individuals with regard to the Automatic Processing of Personal Data, Strasbourg, 28 January 1981, ETS 108, and its protocol of 2001, Additional Protocol to the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data regarding supervisory authorities and transborder data flows, Strasbourg, 8 November 2001, ETS 181.

74 See the UNESCO Recommendation on the Ethics of Artificial Intelligence, 23 November 2021, https://unesdoc.unesco.org/ark:/48223/pf0000381137.

75 See on ethics guidelines for trustworthy AI by the Independent High-Level Group of Experts of EU at European Commission, ‘Shaping Europe’s digital future. Ethics guidelines for trustworthy AI’, 8 April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

76 See European Commission, ‘Shaping Europe’s digital future. European Declaration on Digital Rights and Principles’, 15 December 2022, https://digital-strategy.ec.europa.eu/en/library/european-declaration-digital-rights-and-principles.

77 Communication by the Commission, COM (2022) 27 final of 26 January 2022.

78 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) of 21 April 2021, COM (2021) 206 final, https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF; see also the Communication of the European Commission, ‘Fostering a European approach to Artificial Intelligence’, 21 April 2021, COM(2021) 205 final, https://eur-lex.europa.eu/resource.html?uri=cellar:01ff45fa-a375-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF.

79 Draft AIA, Footnote ibid., 3.5.

80 Footnote Ibid., 5.2.4.

81 See the Communication of the Commission on Artificial Intelligence for Europe, 25 April 2018, Com. (2018) 237, https://digital-strategy.ec.europa.eu/en/library/communication-artificial-intelligence-europe.

82 See European Parliament resolution on AI in a Digital Age, 3 May 2022, www.europarl.europa.eu/doceo/document/TA-9-2022-0140_EN.html.

83 Footnote Ibid., para. 141.

84 European Union Agency for Fundamental Rights, Getting the Future Right – Artificial Intelligence and Fundamental Rights (Luxembourg: Publications Office of the EU, 2020) and its study on facial recognition technology, FRA, ‘Facial recognition technology: fundamental rights considerations in the context of law enforcement’ (2020), https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf.

85 See, e.g., D. Onitsiu, ‘How a human rights perspective could complement the EU’s AI Act’, LSE blog, 31 January 2022, https://blogs.lse.ac.uk/europpblog/2022/01/31/how-a-human-rights-perspective-could-complement-the-eus-ai-act/.

86 See Civil Society statement, ‘EU Artificial Intelligence Act for fundamental rights’, 30 November 2021, https://algorithmwatch.org/en/eu-artificial-intelligence-act-for-fundamental-rights/.

87 See AlgorithmWatch, ‘Publications’, https://algorithmwatch.org/en/publications/.

88 European Parliament, ‘AI Act: a step closer to the first rules on Artificial Intelligence’, 11 May 2023, www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence.

90 See Regulation 2024/1689, 13 June 2024, OJ L of 12 July 2024.

91 See Footnote ibid., Article 6, para. 3.

92 See L. Bertuzzi, ‘AI Act: MEPs close in on rules for general purpose AI, foundation models’, Euractiv, 24 April 2023, www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-close-in-on-rules-for-general-purpose-ai-foundation-models/.

93 See the report by A. Engler, ‘The EU AI Act will have global impact, but a limited Brussels Effect’, Brookings, 8 June 2022, www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/.

94 See preambular para. 179 of the AIA.

95 See European Commission, ‘Shaping Europe’s digital future. AI Pact’, https://digital-strategy.ec.europa.eu/en/policies/ai-pact.

96 See Article 27, para. 2 of the CoE Convention.

97 European Court of Justice, Opinion 1/19 of the Court (Grand Chamber) of 6 October 2021.

98 See Access Now, ‘The EU AIA: how to (truly) protect people on the move’, 13 May 2022, https://edri.org/our-work/the-eu-ai-act-how-to-truly-protect-people-on-the-move/.

99 EU Commission, Proposal for AIA, 5.5.2.

100 See CAHAI, Feasibility study, paras. 91 and 113, and CAHAI, ‘Possible elements of a legal framework on artificial intelligence’, paras. 34 and 39 et seq.

3 Why and How the State Should Regulate the Internet

I am deeply grateful to the team of disinformation internet experts, lawyers and general thinkers who helped me tackle the intellectual, technical and practical challenges of this area. Thank you, especially, to Chris Newby, Navjothi Raju, Elisa Galgut, William Bird, Jesse Cann, Camaren Peter, Stewart Jones, Khomotso Moshikaro, Nurina Ally, and Tomoe Watashiba. And thank you to my doctoral committee at the University of Toronto (Jutta Brunnée, David Dyzenhaus, and Karen Knop), as I draw on my SJD thesis for much of my discussion of Lon L. Fuller.

1 Charter of Fundamental Rights of the EU, OJ C 326, 26.10.2012, 391–407, preamble, para. 2.

2 M. Foran, ‘Rights, common good, and the separation of powers’ (2023) 86 The Modern Law Review 3, 599–628, at 605, citing N. E. Simmonds, ‘Constitutional rights, civility and artifice’ (2019) 78 The Cambridge Law Journal 1, 175–99, at 175; J. Tasioulas, ‘Saving human rights from human rights law’ (2021) 52 Vanderbilt Law Review 5, 1167–1207.

3 Foran, ‘Rights, common good, and the separation of powers’, p. 605.

4 Ibid., p. 606.

5 L. L. Fuller, The Morality of Law: Revised Edition (New Haven: Yale University Press, 1969). See also J. Klabbers, ‘Constitutionalism and the making of international law: Fuller’s procedural natural law’ (2008) 5 No Foundations: An Interdisciplinary Journal of Law and Justice, 84–112; D. Sturm, ‘Lon Fuller’s multidimensional natural law theory’ (1965–6) 18 Stanford Law Review 3, 612–39. Fuller’s procedural natural law is explored more fully later in this chapter.

6 Foran, ‘Rights, common good, and the separation of powers’, p. 628.

7 Ibid., p. 606. The citation within the text refers to John Finnis: J. Finnis, Natural Law and Natural Rights (Oxford: Oxford University Press, 1980), chapter 6.

8 T. W. Bennett, A. Munro, and P. J. Jacobs, Ubuntu: An African Jurisprudence (Cape Town: Juta, 2018), chapter 3.

9 OAU, African Charter on Human and Peoples’ Rights (‘Banjul Charter’), 27 June 1981, CAB/LEG/67/3 rev. 5, 21 ILM 58 (1982).

11 Ibid., Art. 27(2).

12 Ibid., Arts 19–24.

13 Foran, ‘Rights, common good, and the separation of powers’, p. 605.

14 Ibid., p. 625.

15 Adrian Vermeule, who introduced the term to current legal discourse, expressly draws on the classical legal tradition when he uses the concept. See A. Vermeule, Common Good Constitutionalism: Recovering the Classical Legal Tradition (Cambridge: Polity Press, 2022); C. Casey and A. Vermeule, ‘Myths of common good constitutionalism’ (2022) 45 Harvard Journal of Law and Public Policy 1, 103–46.

16 See, e.g., G. Epps, ‘Common-good constitutionalism: an idea as dangerous as they come’, 3 April 2020, The Atlantic, www.theatlantic.com/ideas/archive/2020/04/common-good-constitutionalism-dangerous-idea/609385/; D. Dyzenhaus, ‘Schmitten in the US’, 4 April 2020, Verfassungsblog, https://verfassungsblog.de/schmitten-in-the-usa; M. D. Kelly, ‘Challenging common good constitutionalism’ (2024) 15 Jurisprudence 1, 418–40; L. C. McClain and J. E. Fleming, ‘Toward a liberal common good constitutionalism for polarized times’ (2023) 46 Harvard Journal of Law and Public Policy, 1123–48.

17 Ibid., pp. 1–2.

18 R. Dworkin, ‘Liberalism’, in S. Hampshire (ed.), Public and Private Morality (Cambridge: Cambridge University Press, 1978), pp. 113–43.

19 Foran, ‘Rights, common good, and the separation of powers’, p. 600.

20 Dworkin, ‘Liberalism’, p. 127. See also P. Neal, ‘Liberalism & neutrality’ (1985) 17 Polity 4, 664–84; R. J. Arneson, ‘Liberal neutrality on the good: an autopsy’ in S. Wall and G. Klosko (eds.), Perfectionism and Neutrality: Essays in Liberal Theory (Lanham, MD: Rowman & Littlefield Publishers, 2003), pp. 191–208.

21 Fuller, The Morality of Law, pp. 96–7. See also Klabbers, ‘Constitutionalism and the making of international law’; Sturm, ‘Lon Fuller’s multidimensional natural law theory’.

22 K. Rundle, ‘The impossibility of an exterminatory legality: law and the Holocaust’ (2009) 59 University of Toronto Law Journal 1, 65–125, citing the last chapter of the original edition of Fuller’s Morality of Law (L. L. Fuller, The Morality of Law (New Haven: Yale University Press, 1964)).

23 Ibid., p. 69.

24 The Charter of Fundamental Rights of the EU (OJ C 326, 26.10.2012, 391–407) expressly acknowledges the rule of law in its Preamble. A range of other human rights instruments require equality before the law and protection of the law in various formulations. See OAU, African Charter on Human and Peoples’ Rights (‘Banjul Charter’), 27 June 1981, CAB/LEG/67/3 rev. 5, 21 ILM 58 (1982), Art. 3; UN General Assembly, Universal Declaration of Human Rights, 10 December 1948, 217 A (III), Preamble; International Convention on the Elimination of All Forms of Racial Discrimination, 21 December 1965, 660 UNTS 195, at 214; International Covenant on Civil and Political Rights, 16 December 1966, 999 UNTS 171, Arts 14 and 26; Convention on the Elimination of All Forms of Discrimination Against Women, 18 December 1979, 1249 UNTS 13, Art. 15.

25 C. Murphy, ‘Lon Fuller and the moral value of the rule of law’ (2005) 24 Law and Philosophy 3, 239–62, at 240.

26 J. Waldron, ‘Is the rule of law an essentially contested concept (in Florida)?’ (2002) 21 Law and Philosophy 2, 137–64, at 154. The ‘laundry list’ is generally that there be (a) (general) rules, which are (b) publicised, (c) understandable, (d) not retroactive, and (e) internally consistent (that is, not contradictory). The rules must also be (f) relatively consistent over time; that is, they may not change so frequently that the legal subjects can no longer orient their conduct in compliance with the rules. In addition, (g) compliance must not be physically impossible; that is, the law cannot demand that legal subjects act beyond their powers. Finally, the (h) administration of law must reflect the rules as announced.

27 Fuller, The Morality of Law, chapter 2.

28 Rundle, ‘The impossibility of an exterminatory legality’, citing the last chapter of the original edition of Fuller’s Morality of Law.

29 See Postema’s discussion of the different aspects of interaction in G. Postema, ‘Implicit law’, in W. J. Witteveen and W. van der Burg (eds.), Rediscovering Fuller: Essays on Implicit Law and Institutional Design (Amsterdam: Amsterdam University Press, 1999), pp. 253−75, at 255.

30 J. Brunnée and S. Toope, Legitimacy and Legality in International Law (Cambridge: Cambridge University Press, 2010), pp. 29–30.

31 Footnote Ibid., p. 53.

32 Sturm, ‘Lon Fuller’s multidimensional natural law theory’, p. 616.

33 J. C. Barker, ‘The politics of international law-making: constructing security in response to global terrorism’ (2007) 3 Journal of International Law and International Relations 1, 5–24, at 24. For the form that this communication should take, see Brunnée and Toope, Legitimacy and Legality in International Law, p. 31, and I. Johnstone, ‘Legislation and adjudication in the Security Council: bringing down the deliberative deficit’ (2008) 102 American Journal of International Law 2, 275–308, at 279.

34 Fuller, The Morality of Law, p. 185.

35 J. Brunnée and S. Toope, ‘International law and constructivism: elements of an interactional theory of international law’ (2000) 39 Columbia Journal of Transnational Law 1, 19–74, at 65.

36 Brunnée and Toope, Legitimacy and Legality in International Law, p. 31.

37 Ibid., p. 13.

38 Barker, ‘The politics of international law-making’, p. 27.

39 C. Bjola, ‘Legitimating the use of force in international politics: a communicative action perspective’ (2005) 11 European Journal of International Relations 2, 266–303, cited by Barker, ‘The politics of international law-making’, p. 24.

40 European Commission, ‘Shaping Europe’s digital future. The 2022 Code of Practice on disinformation’, https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.

42 C. Cortés and L. F. Isaza, ‘The new normal? Disinformation and content control on social media during Covid-19’ (2021), CELE, Palermo University, www.palermo.edu/Archivos_content/2021/cele/papers/Disinformation-and-Content-Control.pdf.

43 UNHRC, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression – Disinformation and freedom of opinion and expression’, 13 April 2021, UN Doc. A/HRC/47/25.

44 M. Susi et al., Governing Information Flows During War: A Comparative Study of Content Governance and Media Policy Responses After Russia’s Attack on Ukraine (Hamburg: Verlag Hans-Bredow-Institut, 2022).

45 Postema, ‘Implicit law’, p. 259.

46 D. E. Wueste, ‘Fuller’s processual philosophy of law’ (1986) 71 Cornell Law Review 6, 1205–30.

47 See Postema’s discussion of the different aspects of interaction in Postema, ‘Implicit law’, pp. 259–65.

48 H. H. L. Cheng, ‘Beyond forms, functions and limits: the interactionism of Lon L. Fuller and its implications for alternative dispute resolution’ (2013) 26 The Canadian Journal of Law and Jurisprudence 2, 257–92, at 268.

49 R. A. Duff and S. E. Marshall, ‘“Abstract endangerment”, two harm principles, and two routes to criminalisation’ (2015) 3 Bergen Journal of Criminal Law and Criminal Justice 2, 131–61, at 148.

50 Ibid., p. 133, citing J. S. Mill, On Liberty (1859), chapter 1, para. 9.

51 J. Feinberg, Harm to Others (New York: Oxford University Press, 1984), p. 26.

53 Infocomm Media Development Authority, ‘Who we are’, www.imda.gov.sg/About-IMDA/Who-We-Are.

54 Infocomm Media Development Authority, ‘Internet regulatory framework’.

55 The operation of the KCSC is prescribed in chapter 5 of the Act on the Establishment and Operation of Korea Communications Commission (KCCA), last amended by Act No. 11711, 23 March 2013, www.law.go.kr/LSW/lsInfoP.do?lsiSeq=137296#0000.

56 Article 44-2(2) of Act on Promotion of Information and Communications Network Utilization and Information Protection, https://elaw.klri.re.kr/eng_service/lawDownload.do?hseq=38422&type=PDF.

57 Freedom House, ‘South Korea: freedom on the Net 2021 country report’, https://freedomhouse.org/country/south-korea/freedom-net/2021, citing Sung-won Yoon, ‘Watchdog hit for excessive digital censorship’, 30 March 2015, Korean Times, www.koreatimes.co.kr/www/news/tech/2015/04/133_176155.html.

58 African Declaration on Internet Rights and Freedoms, ‘About the initiative’, https://africaninternetrights.org/en/about.

59 Association for Progressive Communications, ‘African Declaration on Internet Rights and Freedoms Coalition: promotion of freedom of expression a priority for Southern Africa’, www.apc.org/en/news/african-declaration-internet-rights-and-freedoms-coalition-promotion-freedom-expression.

60 African Declaration on Internet Rights and Freedoms, ‘About the initiative’.

61 In all, 195 participants attended from twenty-nine countries. See the African Internet Governance Forum – AfIGF 2013, 23 September 2013, ‘Final draft report’, www.intgovforum.org/en/filedepot_download/7508/1620.

62 An effective South African example of such a body is the Centre for Analytics & Behavioural Change, https://cabc.org.za/. See also Real411, ‘Report digital disinformation’, www.real411.org/.

63 I. Gagliardone and A. Brhane, ‘Ethiopia digital rights landscape report – digital rights in closing civic space: lessons from ten African countries’ (2021), https://opendocs.ids.ac.uk/opendocs/bitstream/handle/20.500.12413/15964/Ethiopia_Report.pdf; C. H. Powell and T. Schonwetter, ‘Africa, the Internet and human rights’, in M. Susi (ed.), Human Rights, the Digital Society and Law: A Research Companion (New York: Routledge Publishing, 2019), pp. 316−34.

64 Tech Against Terrorism, ‘The Online Regulation series | Singapore – tech against terrorism’, 5 October 2020, www.techagainstterrorism.org/2020/10/05/the-online-regulation-series-singapore/.

65 J. Lee, ‘A private organization directly under the president? Structural contradiction of the Korea Communications Commission’, www.mediatoday.co.kr/news/articleView.html?idxno=97350.

67 Freedom House, ‘South Korea: freedom on the Net 2021 country report’, https://freedomhouse.org/country/south-korea/freedom-net/2021, citing Sung-won Yoon, ‘Watchdog hit for excessive digital censorship’, 30 March 2015, Korean Times, www.koreatimes.co.kr/www/news/tech/2015/04/133_176155.html.

70 South African Government, ‘Films and Publications Amendment Act, Act No. 11 of 2019’, section 4.

71 While these bodies themselves may be susceptible to political influence, the open nature of their interviews does at least provide some level of a check in the form of public responses to the interview process.

72 The Centre for Analytics & Behavioural Change, ‘2021 June riots’, https://cabc.org.za/search/2021Juneriots/.

73 Business Tech, ‘South Africa’s internet censorship laws are now in full effect – and legal notices are going out’, 31 October 2022, https://businesstech.co.za/news/government/639087/south-africas-internet-censorship-laws-are-now-in-full-effect-and-legal-notices-are-going-out/.

4 How to Tame the ‘Digital’ Shrew: Constitutional Rights Going Online

1 Knight First Amendment Institute at Columbia University, ‘Knight Institute v. Trump: a lawsuit challenging President Trump’s blocking of critics on Twitter’, https://knightcolumbia.org/cases/knight-institute-v-trump.

2 Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-5205 (SDNY) No. 18-1691(2d Cir.).

3 Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, Human Rights Council, para. 2, U.N. Doc. A/HRC/17/27 (16 May 2011), www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

4 See D. K. Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age (New York: W.W. Norton & Company, 2022).

5 E. Guo, ‘A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?’ (2022) MIT Technology Review, www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/.

6 A. Savin, ‘The EU Digital Services Act: toward a more responsible Internet’ (2021) 24 Journal of Internet Law 7, 15–25.

7 See, e.g., A. Beckers and G. Teubner, ‘Human-algorithm hybrids as (quasi-)organizations? On the accountability of digital collective actors’ (2023) 50 Journal of Law and Society 1, 100–19.

8 G. Teubner, ‘Societal constitutionalism: alternatives to state-centred constitutional theory?’, in C. Joerges, I.-J. Sand, and G. Teubner (eds.), Transnational Governance and Constitutionalism (Oxford: Hart, 2004), pp. 3–28.

9 The Facebook Oversight Board, Case decision no. 2021-001-FB-FBR (2021).

10 See the 2011 United Nations Guiding Principles on Business and Human Rights (UNGP).

11 For more, see S. Deva and D. Bilchitz (eds.), Human Rights Obligations of Business: Beyond the Corporate Responsibility to Respect (Cambridge: Cambridge University Press, 2013).

12 See Delfi AS v. Estonia, Application no. 64569/09, Judgment of 16 June 2015; Magyar Tartalomszolgáltatók Egyesülete and Index.Hu Zrt v. Hungary (MTE v. Hungary), Application no. 22947/13, Judgment of 2 February 2016.

13 For more see M. Maroni, ‘The liability of internet intermediaries and the European Court of Human Rights’, in B. Petkova and T. Ojanen (eds.), Fundamental Rights Protection Online: The Future Regulation of Intermediaries (Cheltenham: Edward Elgar Publishing, 2020), pp. 255–79.

14 Joined Cases C-236/08 to C-238/08, Google France SARL and Google Inc. v. Louis Vuitton Malletier SA (C-236/08), Google France SARL v. Viaticum SA and Luteciel SARL (C-237/08) and Google France SARL v. Centre national de recherche en relations humaines (CNRRH) SARL and Others (C-238/08) [2010] EU:C:2010:159, para. 113.

15 Ibid., para. 114; see also Case C-291/13 Sotiris Papasavvas v. O Fileleftheros Dimosia Etaireia Ltd and Others, [2014] EU:C:2014:2209, para. 45.

16 See, e.g., A. Savin, ‘Digital sovereignty and its impact on EU policymaking’, (2022) CBS LAW Research Paper 22–02.

17 S. Stalla-Bourdillon and R. Thorburn, ‘The scandal of intermediary: acknowledging the both/and dispensation for regulating hybrid actors’, in B. Petkova and T. Ojanen (eds.), Fundamental Rights Protection Online: The Future Regulation of Intermediaries (Cheltenham: Edward Elgar Publishing, 2020), pp. 141–74, at 145–6.

18 The literature on digital constitutionalism is growing. See, e.g., G. De Gregorio, Digital Constitutionalism in Europe (Cambridge: Cambridge University Press, 2022); G. De Gregorio, ‘Digital constitutionalism across the Atlantic’ (2022) 11 Global Constitutionalism 2, 297–324; E. Celeste, ‘Digital constitutionalism: a new systematic theorization’ (2019) 33 International Review of Law, Computers & Technology 1, 76–99; D. Redeker, L. Gill, and U. Gasser, ‘Towards digital constitutionalism? Mapping attempts to craft an Internet Bill of Rights’ (2018) 80 International Communication Gazette 4, 302–19. Despite the growing popularity, some argue that the digital constitutionalist approach may encounter specific problems owing to the technological embeddedness of governance mechanisms and the discrepancy between jurisdictional borders and digital processes of a transnational nature. See N. Palladino, ‘The role of epistemic communities in the “constitutionalization” of Internet governance: the example of the European Commission High-Level Expert Group on Artificial Intelligence’ (2021) 45 Telecommunications Policy 6, Article 102149, 1.

19 O. Pollicino, Judicial Protection of Fundamental Rights on the Internet: A Road Towards Digital Constitutionalism? (Oxford: Hart Publishing, 2021), pp. 204–6.

20 This approach has been debated in the works of G. De Gregorio and O. Pollicino. See De Gregorio, ‘Digital constitutionalism across the Atlantic’; Pollicino, Judicial Protection of Fundamental Rights on the Internet.

21 Recommendation CM/Rec(2018)2 of the Committee of Ministers to Member States on the roles and responsibilities of internet intermediaries, adopted by the Committee of Ministers on 7 March 2018.

22 J. Raz, The Morality of Freedom (Oxford: Clarendon Press, 1986), p. 171.

23 On this account, see G. Teubner, ‘Horizontal effects of constitutional rights in the Internet: a legal case on the digital constitution’ (2017) 3 The Italian Law Journal 1, 193–205.

24 H. Kelsen, General Theory of Law and State, trans. Andreas Wedberg (Cambridge, MA: Harvard University Press, 1945), p. 201.

25 See H. J. Berman, Law and Revolution, II: The Impact of the Protestant Reformations on the Western Legal Tradition (Cambridge, MA, and London: Harvard University Press, 2003), pp. 156, 439, fn.1.

26 Ibid., p. 439, fn.1; M. Rosenfeld, ‘Rethinking the boundaries between public law and private law for the twenty-first century: an introduction’ (2013) 11 International Journal of Constitutional Law 1, 125–8.

27 Kelsen, General Theory of Law and State, p. 202.

28 Berman, Law and Revolution, II, p. 298.

29 See M. Loughlin, ‘Theory and values in public law: an interpretation’ (2005) Public Law, 48–66; P. Cane, ‘Theory and values in public law’, in P. Craig and R. Rawlings (eds.), Law and Administration in Europe: Essays in Honour of Carol Harlow (Oxford: Oxford University Press, 2003), pp. 3–21; P. Craig, ‘Theory, “pure theory” and values in public law’ (2005) Public Law, 440–7; C. Harlow and R. Rawlings, Law and Administration, 2nd ed. (London: Butterworths, 1997); C. Harlow, ‘Public law and popular justice’ (2002) 65 Modern Law Review 1, 1–18.

30 Rosenfeld, ‘Rethinking the boundaries’, p. 125.

31 V. Beširević, ‘Introduction’, in Violeta Beširević (ed.), Public Law in Serbia: Twenty Years After (London: European Public Law Organization and Esperia Publications Ltd, 2012), pp. 15–19, at 16.

32 Kelsen, General Theory of Law and State, p. 207.

33 Ibid., pp. 201–2.

34 J. Locke, Two Treatises of Government (Vermont: Everyman, 1997), pp. 107, 116–22, 159.

35 Thus, the American Declaration of Independence proclaimed that ‘[…] all men are created equal, that they are endowed by their Creator with certain unalienable rights; that among these are life, liberty, and the pursuit of happiness… [T]o secure these rights, governments are instituted among men.’ The French Declaration of the Rights of Man and the Citizen stressed that ‘The aim of all political association is the preservation of the natural and imprescriptible rights of man.’

36 S. R. Ratner, ‘Corporations and human rights: a theory of legal responsibility’ (2001) 111 The Yale Law Journal 3, 468–9.

37 S. Gardbaum, ‘The structure and scope of constitutional rights’, in R. Dixon and T. Ginsburg (eds.), Research Handbook in Comparative Constitutional Law (Cheltenham: Edward Elgar Publishing, 2011), pp. 387–403, at 393–4; See also W. Rivera-Perez, ‘What the constitution got to do with it: expanding the scope of constitutional rights into the private sphere’ (2012) 3 Creighton International and Comparative Law Journal 1, 189–214.

38 V. Beširević, ‘“Uhvati me ako možeš”: o (ne)odgovornosti transnacionalnih korporacija zbog kršenja ljudskih prava’ [‘“Catch me if you can”: reflections on legal (un)accountability of transnational corporations for human rights violations’] (2018) Pravni zapisi 1, 22–45.

39 Gardbaum, ‘The structure and scope of constitutional rights’, p. 392.

40 See, e.g., A. Sajó and R. Uitz, The Constitution of Freedom: An Introduction to Legal Constitutionalism (Oxford, Oxford University Press, 2017), pp. 399–401; J. Thomas, Public Rights, Private Relations (Oxford: Oxford University Press, 2015), pp. 34–5.

41 See E. Chemerinsky, ‘Rethinking state action doctrine’ (1985) 80 Northwestern University Law Review, 503–57.

42 M. Kumm, ‘Who is afraid of the total constitution? Constitutional rights as principles and the constitutionalization of private law’ (2006) 7 German Law Journal 4, 341–69.

43 Gardbaum, ‘The structure and scope of constitutional rights’, p. 394; see also S. Gardbaum, ‘The “horizontal effect” of constitutional rights’ (2003) 102 Michigan Law Review 2, 387–459, at 436.

44 Thomas, Public Rights, Private Relations, p. 28.

45 Gardbaum, ‘The structure and scope of constitutional rights’, p. 394.

46 G. Phillipson, ‘The Human Rights Act, “horizontal effect” and the common law: a bang or a whimper?’ (1999) 62 The Modern Law Review 6, 824–49, at 830.

48 Fourteen Latin American countries have adopted some form of direct horizontal effect: Argentina, Bolivia, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, Guatemala, Honduras, Paraguay, Peru, Puerto Rico, Uruguay, and Venezuela. See Rivera-Perez, ‘What the constitution got to do with it’, p. 198.

49 A. Nolan, ‘Holding non-state actors to account for constitutional economic and social rights violations: experiences and lessons from South Africa and Ireland’ (2014) 12 International Journal of Constitutional Law 1, 66–93.

50 P. Weingerl, ‘The influence of fundamental rights in Slovene private law’, in V. Trstenjak and P. Weingerl (eds.), The Influence of Human Rights and Basic Rights in Private Law (Cham, Heidelberg, New York: Springer, 2016), pp. 535−58, at 541–2.

51 See Ryan v. The Attorney General, [1965] I.R. 294; Thomas, Public Rights, Private Relations, p. 29.

52 Nolan, ‘Holding non-state actors to account’, pp. 69–71.

53 Ibid., pp. 76–86.

54 Section 8 (2) of the 1996 Constitution.

55 Ibid., Section 39 (2).

56 M. Troper, ‘Who needs a third party effect doctrine? – The case of France’, in A. Sajó and R. Uitz (eds.), The Constitution in Private Relations: Expanding Constitutionalism (Utrecht: Eleven International Publishing, 2005), pp. 115−28, at 119.

57 For more see M. Hunter-Henin, ‘Horizontal application of human rights in France: the triumph of the European Convention on Human Rights’, in O. Dawn and J. Fedtke (eds.), Human Rights and the Private Sphere – a Comparative Study (Cavendish, London, and New York: Routledge, 2007), pp. 98–124.

58 U. Preuß, ‘The German Drittwirkung doctrine and its socio-political background’, in A. Sajó and R. Uitz (eds.), The Constitution in Private Relations: Expanding Constitutionalism (Utrecht: Eleven International Publishing, 2005), pp. 23−32, at 23.

60 Bundesverfassungsgericht, Lüth, BVerfGE 7, 198–230.

61 For more see Thomas, Public Rights, Private Relations, p. 32; Gardbaum, ‘The “horizontal effect” of constitutional rights’, pp. 404–6; D. Looschelders and M. Makowsky, ‘The impact of human rights and basic rights in German private law’, in V. Trstenjak and P. Weingerl (eds.), The Influence of Human Rights and Basic Rights in Private Law (Cham, Heidelberg, New York: Springer, 2016), pp. 295–317, at 299.

62 C. Saunders, ‘Constitutional rights and the common law’, in A. Sajó and R. Uitz (eds.), The Constitution in Private Relations: Expanding Constitutionalism (Utrecht: Eleven International Publishing, 2005), pp. 183−216, at 195–200.

63 RWDSU v. Dolphin Delivery Ltd. [1986] 2 SCR 573, para. 39.

65 Saunders, ‘Constitutional rights and the common law’, p. 200.

66 The Civil Rights Cases, 109 US 3 (1883).

67 Ibid., 4. The Court also ruled that the denial of equal accommodations in inns, public conveyances, and places of public amusement, prohibited under the federal legislation under review, did not amount either to slavery or involuntary servitude, ‘but at most, infringes rights which are protected from State aggression by the XIV Amendment’. Ibid.

68 Gardbaum, ‘The “horizontal effect” of constitutional rights’, 412–14; J. Miller, ‘The influence of human rights and basic rights in private law in the United States’, in V. Trstenjak and P. Weingerl (eds.), The Influence of Human Rights and Basic Rights in Private Law (Cham, Heidelberg, New York: Springer, 2016), pp. 473–86, at 481.

69 For the criticism, see Chemerinsky, ‘Rethinking state action doctrine’; M. Kumm and V. Ferreres Comella, ‘What is so special about constitutional rights in private litigation? A comparative analysis of the function of state action requirements and indirect horizontal effect’, in A. Sajó and R. Uitz (eds.), The Constitution in Private Relations: Expanding Constitutionalism (Utrecht: Eleven International Publishing, 2005), pp. 241–86; Gardbaum, ‘The “horizontal effect” of constitutional rights’, pp. 412–14.

70 Shelley v. Kraemer, 334 US 1 (1948). For comments, see Miller, ‘The influence of human rights’, p. 485.

71 Gardbaum, ‘The structure and scope of constitutional rights’, p. 396.

72 Handyside v. UK, Application no. 5493/72, Judgment of 7 December 1976, para. 49.

73 For more, see V. Beširević, ‘A short guide to militant democracy: some remarks on the Strasbourg jurisprudence’, in W. Benedek et al. (eds.), European Yearbook of Human Rights 2012 (Antwerp: Intersentia; Vienna: NW Verlag, 2012), pp. 243–58, at 248–52.

74 See Abrams v. US, 250 US 616, 630–1 (1919).

75 See R.A.V. v. City of St. Paul, 505 US 377, 112 S.Ct 2538, 120 L.Ed.2d 305 (1992); Simon & Schuster, Inc. v. Members of New York State Crime Victims Board, 502 US 105, 112 S.Ct 501, 116 L.Ed.2d 476 (1991); Boos v. Barry, 485 US 312, 108 S.Ct 1157, 99 L.Ed.2d 333 (1988); Police Dept. v. Mosley, 408 US 92, 92 S.Ct 2286, 33 L.Ed.2d 212 (1972); Brandenburg v. Ohio, 395 US 444, 89 S.Ct 1827, 23 L.Ed.2d 430 (1969); Kingsley Int’l Pictures Corp. v. Regents, 360 US 684, 79 S.Ct 1362, 3 L.Ed.2d 1512 (1959).

76 Tribunal de Grande Instance de Paris, Ligue contre le racisme et l’antisémitisme et Union des étudiants juifs de France c. Yahoo! Inc. et Société Yahoo! France, RG 05308 (2000). See also Yahoo!, Inc. v. La Ligue Contre Le Racisme Et L’antisemitisme, 169 F.Supp.2d 1181 (N.D.Cal.2001).

77 Ibid., 1184–5.

79 Ibid., 1192.

80 Ibid., 1194.

81 Ibid., 1189.

82 Reno v. ACLU, 521 US 844 (1997).

83 Section 230 provides: ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’ 47 USC § 230(c) (2018).

84 Dipp-Paz v. Facebook, No. 18-CV-9037, WL 3205842 (SDNY 2019).

87 Federal Agency of News LLC v. Facebook, Inc., 395 F. Supp. 3d 1295 (ND Cal 2019).

88 Ibid., 1309–13.

89 See Gonzalez v. Google LLC, 598 US 617 (2023) and Twitter Inc. v. Taamneh, 598 US 471 (2023). The Court delivered decisions on 18 May 2023.

90 C. Sunstein, One Case at a Time: Judicial Minimalism on the Supreme Court (Cambridge, MA: Harvard University Press, 1999), p. 5.

91 T. Wischmeyer, ‘What is illegal offline is also illegal online: the German Network Enforcement Act 2017’, in B. Petkova and T. Ojanen (eds.), Fundamental Rights Protection Online: The Future Regulation of Intermediaries (Cheltenham: Edward Elgar Publishing, 2020), pp. 28−56, at 34.

92 BGH, Urteil vom 29. Juli 2021 – III ZR 179/20.

96 C. Etteldorf, ‘[DE] Federal Supreme Court finds Facebook terms of use ineffective in relation to hate speech’, IRIS 2021-8:1/20, https://merlin.obs.coe.int/article/9273. For a discussion on NetzDG, see Wischmeyer, ‘What is illegal offline is also illegal online’, pp. 28–57.

97 Ibid. See also Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance).

98 See Trade Agreement at https://ustr.gov/trade-agreements/free-trade-agreements/united-states-mexico-canada-agreement/agreement-between. For comments, see, e.g., V. Krishnamurthy and J. Fjeld, ‘CDA 230 goes North American? Examining the impacts of the USMCA’s intermediary liability provisions in Canada and the United States’, http://dx.doi.org/10.2139/ssrn.3645462.

99 See, e.g., Carter v. B.C. Federation of Foster Parent Assn., 2005 BCCA 398 and Pritchard v. Van Nes, 2016 BCSC 686.

100 See Cool World Technologies Inc. v. Twitter Inc., 2022 ONSC 7156.

101 Ibid., paras. 7, 10–13.

102 Ibid., para. 13.

103 Ibid., paras. 10–11.

104 Ibid., para. 9.

105 Ibid., para. 18.

106 In 1986, the CJEU ruled that the Community treaties constituted the constitutional charter of the Community, based on the rule of law (see Case C-294/83, Parti écologiste ‘Les Verts’ v. European Parliament [1986] ECLI:EU:C:1986:166). Theoretically, several constitutional theories have explained the foundations of the EU’s uncodified constitutional structure, including constitutional pluralism, constitutional synthesis, multilevel constitutionalism, and constitutional tolerance. For more about the constitutional nature of the EU and its primary law, see, e.g., J. Habermas, ‘The crisis of the European Union in the light of a constitutionalization of international law’ (2012) 23 European Journal of International Law 2, 335–48; V. Beširević, ‘The constitution in the European Union: the state of affairs’, in A. Dupeyrix and G. Raulet (eds.), European Constitutionalism: Historical and Contemporary Perspectives (Brussels: Peter Lang, 2014), pp. 15–35.

107 Case C-43/75, Defrenne v. Sabena [1976] ECLI:EU:C:1976:56.

108 See Case C-176/12, Association de médiation sociale v. Union locale des syndicats CGT and Others [2014] ECLI:EU:C:2014:2.

109 See Case C-414/16, Vera Egenberger v. Evangelisches Werk für Diakonie und Entwicklung e.V. [2018] ECLI:EU:C:2018:257; Case C-68/17, IR v. JQ [2018] ECLI:EU:C:2018:696; Joined Cases C-569/16 and C-570/16, Stadt Wuppertal and Volker Willmeroth als Inhaber der TWI Technische Wartung und Instandsetzung Volker Willmeroth e.K. v. Maria Elisabeth Bauer and Martina Broßonn [2018] ECLI:EU:C:2018:871; Case C-684/16, Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. v. Tetsuji Shimizu [2018] ECLI:EU:C:2018:874. For a discussion, see A. C. Ciacchi, ‘The direct horizontal effect of EU fundamental rights: ECJ 17 April 2018, Case C-414/16, Vera Egenberger v Evangelisches Werk für Diakonie und Entwicklung e.V. and ECJ 11 September 2018, Case C-68/17, IR v JQ’ (2019) 15 European Constitutional Law Review 2, 294–305.

110 Case C-131/12, Google Spain SL and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González [2014] ECLI:EU:C:2014:317, hereafter Google Spain. For general commentary on the case, see, e.g., F. Fabbrini and E. Celeste, ‘The right to be forgotten in the digital age: the challenges of data protection beyond borders’ (2020) 21 German Law Journal S1, 55–65.

111 Google Spain, para. 20.

112 Ibid., para. 38.

113 Ibid., paras. 68–9.

114 Ibid., para. 97. In subsequent rulings, the CJEU determined both the territorial and the material scope of the right to be forgotten. See Case C-507/17, Google LLC v. CNIL [2019] EU:C:2019:772 and Case C-136/17, G.C. and others v. CNIL [2019] EU:C:2019:773.

115 Google Spain, para. 81.

116 Ibid., para. 83.

117 Ibid., para. 99.

118 See more in De Gregorio, ‘Digital constitutionalism across the Atlantic’; E. Frantziou, ‘The horizontal effect of the Charter: towards an understanding of horizontality as a structural constitutional principle?’ (2020) 22 Cambridge Yearbook of European Legal Studies, 208–32.

119 Case C-362/14, Maximillian Schrems v. Data Protection Commissioner [2015] ECLI:EU:C:2015:650, para. 38. For more see Pollicino, Judicial Protection of Fundamental Rights on the Internet, pp. 132–41.

120 J. Habermas, The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (Cambridge, MA: The MIT Press, 1991).

121 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue, Human Rights Council, para. 2, UN Doc. A/HRC/17/27 (16 May 2011), www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

122 For more see M. L. Miller and C. Vaccari, ‘Digital threats to democracy: comparative lessons and possible remedies’ (2020) 25 The International Journal of Press/Politics 3, 333–56.

123 S. Holmes, Passions and Constraint: On the Theory of Liberal Democracy (Chicago and London: The University of Chicago Press, 1995), p. 160.

124 C. Sunstein, Designing Democracy: What Constitutions Do (Oxford: Oxford University Press, 2001), p. 7.

125 See S. Moyn, ‘On human rights and majority politics’ (2019) 52 Vanderbilt Law Review 5, 1135–66.

126 Sunstein, Designing Democracy, p. 7.

127 D. R. Johnson and D. Post, ‘Law and borders: the rise of law in cyberspace’ (1996) 48 Stanford Law Review 5, 1367–402.

128 B. Haggart and C. I. Keller, ‘Democratic legitimacy in global platform governance’ (2021) 45 Telecommunications Policy 6, Article 102152.

129 Balancing is at the core of the proportionality doctrine that has origins in German and Canadian constitutional jurisprudence. For a discussion see, e.g., V. Jackson and M. Tushnet (eds.), Proportionality: New Frontiers, New Challenges (Cambridge: Cambridge University Press, 2017).

130 Editorial Board of Pravoye Delo and Shtekel v. Ukraine, Application no. 33014/05, Judgment of 5 May 2011, para. 63.

5 How Do We Decide Whether Moving Online Makes a Difference?

1 A. L. Goodhart, ‘Determining the ratio decidendi of a case’ (1930) 40 Yale Law Journal 2, 161–83; A. L. Goodhart, ‘The ratio decidendi of a case’ (1959) 22 Modern Law Review 2, 117–30; T. Bustamante et al., On the Philosophy of Precedent: Proceedings of the 24th World Congress of the International Association for Philosophy of Law and Social Philosophy, Beijing, 2009 (Stuttgart: Franz Steiner Verlag, 2012); I. McLeod, Legal Method (London: Palgrave Macmillan, 2007); J. L. Montrose, ‘Ratio decidendi and the House of Lords’ (1957) 20 Modern Law Review 2, 124–30.

2 J. Stone, Precedent and Law: Dynamics of Common Law Growth (Sydney: Butterworths, 1985), p. 123.

3 R. Cross and J. W. Harris, Precedent in English Law (Oxford: Clarendon Press, 2004), p. 72; W. M. Landes and R. A. Posner, ‘Legal precedent: a theoretical and empirical analysis’ (1976) 19 Journal of Law and Economics 2, 249–307, at 250; L. Alexander and E. Sherwin, ‘Judges as rulemakers’ (2004) 15 University of San Diego Public Law and Legal Theory Research Paper Series, 1–36; L. Alexander and E. Sherwin, Demystifying Legal Reasoning (Cambridge: Cambridge University Press, 2008).

4 K. N. Llewellyn, Jurisprudence: Realism in Theory and Practice (New Brunswick: Transaction Publishers, 2008), p. 117; K. N. Llewellyn, The Bramble Bush: On Our Law and Its Study (New York: Oceana Publications, 1991), p. 189.

5 G. Lamond, ‘Do precedents create rules?’ (2005) 11 Legal Theory 1, 1–26, at 7; J. Raz, The Authority of Law: Essays on Law and Morality (Oxford: Clarendon Press, 1979), p. 203.

6 J. F. Horty, ‘Rules and reasons in the theory of precedent’ (2011) 17 Legal Theory 1, 1–33, at 6.

7 Muwema v. Facebook Ireland Limited [2018] IECA 104.

8 Foley v. Sunday Newspapers Ltd [2005] IEHC 14.

9 Google LLC v. Oracle America, Inc., 593 US 1 (2021).

10 Magyar Jeti Zrt v. Hungary, Application no. 11257/16, Judgment of 4 December 2018.

11 Ibid., paras. 73–4.

12 Case C-264/14, Skatteverket v. David Hedqvist [2015] ECLI:EU:C:2015:718.

13 Case C-360/13, Public Relations Consultants Association Ltd. v. Newspaper Licensing Agency Ltd and Others [2014] ECLI:EU:C:2014:1195.

14 Case C-390/18, Airbnb Ireland UC [2019] ECLI:EU:C:2019:1112.

15 Case C-62/19, Star Taxi App SRL v. Unitatea Administrativ Teritorială Municipiul Bucureşti prin Primar General and others [2020] ECLI:EU:C:2020:980.

16 Savva Terentyev v. Russia, Application no. 10692/09, Judgment of 28 August 2018.

17 Beizaras and Levickas v. Lithuania, Application no. 41288/15, Judgment of 14 January 2020.

18 Magyar Kétfarkú Kutya Párt v. Hungary, Application no. 201/17, Judgment of 20 January 2020.

19 Case C-131/12, Google Spain SL, Google Inc. v. Agencia Española de Protección de Datos, Mario Costeja González [2014] ECLI:EU:C:2014:317.

20 E.g., G. Borges and C. Sorge (eds.), Law and Technology in a Global Digital Society: Autonomous Systems, Big Data, IT Security and Legal Tech (Cham: Springer Nature, 2022); G. Borges, ‘Liability for AI systems under current and future law: an overview of the key changes envisioned by the Proposal of an EU Directive on Liability for AI’ (2023) 24 Computer Law Review International 1, 1–8; P. H. Padovan, C. M. Martins, and C. Reed, ‘Black is the new orange: how to determine AI liability’ (2023) 31 Artificial Intelligence and Law 1, 136–67.

21 E. K. Cortez (ed.), Data Protection Around the World: Privacy Laws in Action (The Hague: T.M.C. Asser Press, 2021); S. O’Leary, ‘Balancing rights in a digital age’ (2018) 59 Irish Jurist, 59–92; D. J. Solove, Understanding Privacy (Cambridge, MA: Harvard University Press, 2008).

22 L. Bently et al., Intellectual Property Law, sixth edition (Oxford: Oxford University Press, 2022); P. Goldstein and B. Hugenholtz, International Copyright: Principles, Law, and Practice (Oxford: Oxford University Press, 2019).

23 See, e.g., M. Jacob, Precedents and Case-Based Reasoning in the European Court of Justice (Cambridge: Cambridge University Press, 2014), p. 175; A. Ross, On Law and Justice (Clark, NJ: The Lawbook Exchange Ltd, 2004), p. 86.

24 S. Brenner and H. J. Spaeth, Stare Indecisis: The Alteration of Precedent on the Supreme Court, 1946–1992 (Cambridge: Cambridge University Press, 1995), p. 8.

25 C. Baudenbacher and S. Planzer (eds.), International Dispute Resolution: The Role of Precedent (Stuttgart: German Law Publishers, 2010), p. 17.

26 J. Baltrimas, ‘Judicial precedent: authority and functioning’, Summary of Doctoral Dissertation (2017), pp. 28–9.

27 C. Reed, ‘Online and offline equivalence: aspiration and achievement’ (2010) 18 International Journal of Law and Information Technology 3, 248–73.

28 Ibid., p. 256.

29 Pihl v. Sweden [ECHR], Application no. 74742/14, Judgment of 9 March 2017.

6 Some Reflections on the Non-coherence Theory of Digital Human Rights

1 M. Susi, The Non-Coherence Theory of Digital Human Rights (Cambridge: Cambridge University Press, 2024).

2 For an overview of how legal systems around the world have been affected by legal transfers, see J. Gillespie and P. Nicholson, ‘Taking the interpretation of legal transfers seriously: the challenge for law and development’, in J. Gillespie and P. Nicholson (eds.), Law and Development and the Global Discourses of Legal Transfers (Cambridge: Cambridge University Press, 2012), pp. 1–26.

3 S. Belmessous (ed.), Native Claims: Indigenous Law against Empire, 1500–1920 (Oxford: Oxford University Press, 2012).

4 Peter Reich has explored the impact of regime change upon legal change and has identified various outcomes. For example, he writes, regime change in the Second Empire and in ancient Rome resulted in hybridization, in Canada regime change led to legal conundrums for decades, and in California to the gradual supplanting of the civil law system. See P. L. Reich, ‘Regime change and legal change – the legacy of Mexico’s Second Empire’ (2015) Oxford University Comparative Law Forum 1.

5 R. G. Fuchs, Contested Paternity: Constructing Families in Modern France (Baltimore: Johns Hopkins University Press, 2008), pp. 34–41.

6 P. Varul and H. Pisuke, ‘Louisiana’s Contribution to the Estonian Civil Code’ (1999) 73 Tulane Law Review 4, 1027–34.

7 M. Susi, ‘Novelty in new human rights: the decrease of universality and abstractness thesis’, in A. von Arnauld, K. von der Decken, and M. Susi (eds.), The Cambridge Handbook of New Human Rights: Recognition, Novelty, Rhetoric (Cambridge: Cambridge University Press, 2020), pp. 21–33, at 21–2.

8 M. C. Nussbaum, ‘Animal rights: the need for a theoretical basis’ (2001) 114 Harvard Law Review 5, 1506–52.

9 For a discussion of multi-stakeholderism, see J. Kulesza, ‘Multistakeholderism – meaning and implications’, in M. Susi (ed.), Human Rights, Digital Society and the Law: A Research Companion (London: Routledge, 2019), pp. 117–31.

10 A. Kovacs, ‘Moving multistakeholderism forward: lessons from the NETmundial’, Internet Policy Review, 12 May 2014, https://policyreview.info/articles/news/moving-multistakeholderism-forward-lessons-netmundial/281.

11 World Summit on the Information Society, Geneva 2003 – Tunis 2005, Document WSIS-03/GENEVA/DOC/4-E, 12 December 2003, Declaration of Principles, Building the Information Society: a global challenge in the new millennium, https://digitallibrary.un.org/record/533621?v=pdf.

12 Ibid., para. 3.

13 UN Human Rights Council, ‘The promotion, protection and enjoyment of human rights on the Internet’, HRC 20th Session, UN Doc. A/HRC/20/L.13, 29 June 2012.

14 Ibid., para. 1.

15 D. Dror-Shpoliansky and Y. Shany, ‘It’s the end of the (offline) world as we know it: from human rights to digital human rights – a proposed typology’ (2021) 32 The European Journal of International Law 4, 1249–82, at 1253.

16 Ibid., 1265.

17 ARTICLE 19 statement at the thirty-fifth session of the UN Human Rights Council on 14 June 2017, as part of the Item 3 General Debate, see www.article19.org/resources/article-19-at-the-unhrc-the-same-rights-that-people-have-offline-must-also-be-protected-online/.

18 S. D. Warren and L. D. Brandeis, ‘The right to privacy’ (1890) 4 Harvard Law Review 5, 193–220.

19 Sartre writes: ‘Man is condemned to be free. Condemned, because he did not create himself, yet, in other respects is free; because, once thrown into the world, he is responsible for everything he does.’ See J-P. Sartre, Existentialism and Human Emotions (New York: Philosophical Library, 1957), p. 15.

20 E. Michalkiewicz-Kadziela and E. Milczarek, ‘Legal boundaries of digital identity creation’ (2022) 11 Internet Policy Review 1, 1–13, at 10.

21 See, e.g., panel conclusions put forward by Geneva Internet platform digwatch in November 2018: S. Grottola, ‘The future of digital identity and human rights’, Geneva Internet platform digwatch, 13 November 2018, https://dig.watch/event/13th-internet-governance-forum/future-digital-identity-and-human-rights.

22 E.g., under the GDPR article 17. It is worth mentioning here that the heading of this article ‘right to erasure (“right to be forgotten”)’ is an example of how the various entitlements related to the reflection of offline privacy get mixed up.

23 C. Sullivan, Digital Identity: An Emergent Legal Concept. The Role and Legal Nature of Digital Identity in Commercial Transactions (Adelaide: University of Adelaide Press, 2011), p. 140.

24 A. Liu, ‘Theses on the epistemology of the digital: advice for the Cambridge Centre for Digital Knowledge’, 14 August 2014, https://liu.english.ucsb.edu/theses-on-the-epistemology-of-the-digital-page/.

25 M. Susi, ‘Internet balancing formula’ (2019) 25 European Law Journal, Special Issue: Internet and Human Rights Law 2, 198–212.

26 G. Sartor, ‘The right to be forgotten: balancing interests in the flux of time’ (2016) 24 International Journal of Law and Information Technology 1, 72–98.

27 Council of Europe, Appendix to Recommendation CM/Rec(2016)1, para. 6.1; see also Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries.

28 UN Human Rights Council, ‘The promotion, protection and enjoyment of human rights on the Internet’, HRC 32nd Session, UN Doc A/HRC/32/L.20 (2016).

29 F. La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, HRC 17th session, UN Doc. A/HRC/17/27 (2011), para. 48; see also the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, submitted to the United Nations General Assembly on 8 September 2015, whose Summary contains the following passage: ‘In many situations, sources of information and whistle-blowers make access to information possible, for which they deserve the strongest protection in law and in practice.’ See D. Kaye, Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, UN Doc. A/70/361 (2015), p. 2.

30 E. Marique and Y. Marique, ‘Sanctions on digital platforms: beyond the public-private divide’ (2019) 8 Cambridge University Law Journal 2, 258–81.

31 Ibid., p. 281.

32 F. Zufall, R. Kimura, and L. Peng, ‘A simple mathematical model for the legal concept of balancing of interests’, in ICAIL ’21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, June 2021, pp. 270–1.

33 M. Kettemann and A. Peukert, ‘Conclusion’, in M. Kettemann, A. Peukert, and I. Spiecker gen. Döhmann (eds.), The Law of Global Digitality (London: Routledge, 2022), pp. 250–5.

34 G. P. Magarian, ‘The internet and social media’, in A. Stone and F. Schauer (eds.), The Oxford Handbook of Freedom of Speech (Oxford: Oxford University Press, 2021), pp. 350–70.

35 M. Merleau-Ponty, The Visible and the Invisible, edited by C. Lefort, translated by A. Lingis, Northwestern University Studies in Phenomenology and Existential Philosophy (Evanston, IL: Northwestern University Press, 1968), p. 130.

7 Internet Addiction as a Human Rights Issue

1 M. M. Vanden Abeele and V. Mohr, ‘Media addictions as Apparatgeist: what discourse on TV and smartphone addiction reveals about society’ (2021) 27 Convergence 6, 1536–57.

2 B. Dell’Osso and N. Fineberg, COST (European Cooperation for Science and Technology) Action CA16207, ‘Learning to deal with problematic usage of the internet’, www.cost.eu/publication/learning-to-deal-with-problematic-usage-of-the-internet/, p. 48.

3 I. B. Mboya et al., ‘Internet addiction and associated factors among medical and allied health sciences students in northern Tanzania: a cross-sectional study’ (2020) 8 BMC Psychology, Article 73.

4 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’.

5 N. A. Fineberg et al., ‘Manifesto for a European research network into problematic usage of the internet’ (2018) 28 European Neuropsychopharmacology 11, 1232–46.

6 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, p. 11.

7 H.-J. Rumpf, T. Effertz, and C. Montag, ‘The cost burden of problematic internet usage’ (2022) 44 Current Opinion in Behavioral Sciences, 101107.

8 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, pp. 8–9.

9 Rumpf, Effertz, and Montag, ‘The cost burden of problematic internet usage’.

10 C. Megele and A. Longfield, Safeguarding Children and Young People Online: A Guide for Practitioners (Bristol: Bristol University Press, 2017), p. 98.

11 A. W. Blum and J. E. Grant, ‘Legal aspects of problematic internet usage’ (2022) 45 Current Opinion in Behavioral Sciences, 101142.

13 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, pp. 10–17.

14 K. T. Rahman and Z. U. Arif, ‘Impact of binge-watching on Netflix during the Covid-19 pandemic’ (2021) 2 South Asian Journal of Marketing 1, 97–112, at 98.

15 D. Columb, M. D. Griffiths, and C. O’Gara, ‘Fantasy football (soccer) playing and internet addiction among online fantasy football participants: a descriptive survey study’ (2022) 20 International Journal of Mental Health and Addiction 2, 1200–11.

16 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, p. 27.

17 H. K. Lee and S. Chung, ‘Conceptualization of internet addiction based on the public health perspective’, in M. N. Potenza, K. A. Faust, and D. Faust (eds.), The Oxford Handbook of Digital Technologies and Mental Health (Oxford: Oxford University Press, 2020), pp. 87–96, at 89.

18 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, p. 29.

19 C. Augner et al., ‘The association between problematic smartphone use and symptoms of anxiety and depression – a meta-analysis’ (2023) 45 Journal of Public Health 1, 193–201.

20 H. Allam et al., ‘Prevalence of problematic social media use among residents and teaching assistants in Ain Shams University Hospitals and faculty of medicine and its relationship to emotional distress’ (2021) 114 (Supplement 1) QJM: An International Journal of Medicine, i216.

21 N. Marengo et al., ‘Cyberbullying and electronic media communication problematic use in Piedmont. Data from HBSC study’ (2020) 30 (Supplement 5) European Journal of Public Health, v869.

22 L. Pansu, ‘Evaluation of “right to disconnect” legislation and its impact on employee’s productivity’ (2018) 5 International Journal of Management and Applied Research 3, 99–119.

23 Rahman and Arif, ‘Impact of binge-watching on Netflix’, p. 100.

24 Rumpf, Effertz, and Montag, ‘The cost burden of problematic internet usage’.

25 Vanden Abeele and Mohr, ‘Media addictions as Apparatgeist’, pp. 1536–57.

26 M. Seo, J.-H. Kim, and P. David, ‘Always connected or always distracted? ADHD symptoms and social assurance explain problematic use of mobile phone and multicommunicating’ (2015) 20 Journal of Computer-Mediated Communication 6, 667–81.

27 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, pp. 12–14.

28 Lee and Chung, ‘Conceptualization of internet addiction’, pp. 477–9.

29 H.-J. Rumpf, ‘General population-based studies of problematic internet use: data from Europe’, in M. N. Potenza, K. A. Faust, and D. Faust (eds.), The Oxford Handbook of Digital Technologies and Mental Health (Oxford: Oxford University Press, 2020), pp. 57–64.

30 A. K. Tsitsika et al., ‘Association between problematic internet use, socio-demographic variables and obesity among European adolescents’ (2016) 26 European Journal of Public Health 4, 617–22.

31 Marengo et al., ‘Cyberbullying and electronic media communication’.

32 Dell’Osso and Fineberg, ‘Learning to deal with problematic usage of the internet’, p. 5.

33 Ibid., p. 28.

34 Ibid., pp. 27–8.

35 J. Benka, O. Orosova, and L. Hricova, ‘Risk and protective factors of problematic internet use in the context of prevention: Jozef Benka’ (2016) 26 (Supplement 1) European Journal of Public Health, 367–8.

36 R. Pezoa-Jares and I. Espinoza-Luna, ‘629 – neurobiological findings associated with internet addiction: a literature review’ (2013) 28 (Supplement 1) European Psychiatry, 1.

37 D. L. King and P. H. Delfabbro, ‘The natural history of problematic internet use and gaming: recent findings, challenges, and future directions’, in M. N. Potenza, K. A. Faust, and D. Faust (eds.), The Oxford Handbook of Digital Technologies and Mental Health (Oxford: Oxford University Press, 2020), pp. 65–74.

38 D. W. Choi et al., ‘The association between parental depression and adolescent’s internet addiction in South Korea’ (2018) 17 Annals of General Psychiatry, Article 15.

39 E. Hargittai and Y. P. Hsieh, ‘Digital inequality’, in W. H. Dutton (ed.), The Oxford Handbook of Internet Studies (Oxford: Oxford University Press, 2013), pp. 129–50.

40 Lee and Chung, ‘Conceptualization of internet addiction’, pp. 94–5.

41 Hargittai and Hsieh, ‘Digital inequality’, pp. 129–50.

42 D. J. Stein and A. Hartford, ‘Health-policy approaches for problematic internet use: lessons from substance use disorders’ (2022) 45 Current Opinion in Behavioral Sciences, 101151.

43 Lee and Chung, ‘Conceptualization of internet addiction’, pp. 87–96.

44 G. Quaglio and S. Millar, ‘Potentially negative effects of internet use’ (2020), European Parliamentary Research Service, www.europarl.europa.eu/RegData/etudes/IDAN/2020/641540/EPRS_IDA(2020)641540_EN.pdf.

45 A. Cerulli-Harms et al., ‘Loot boxes in online games and their effect on consumers, in particular young consumers’ (2020), European Parliament Research Service, www.europarl.europa.eu/RegData/etudes/STUD/2020/652727/IPOL_STU(2020)652727_EN.pdf.

46 T. Phillips, ‘EA’s €10m Dutch FIFA loot box fine overturned’ (2020), www.eurogamer.net/eas-10m-dutch-fifa-loot-box-fine-has-been-overturned.

47 G. Van Mansfeld, ‘Consumer law arguments raised against gaming “loot boxes”’ (2022), www.pinsentmasons.com/out-law/news/consumer-law-arguments-gaming-loot-boxes.

48 D. O’Boyle, ‘Germany to add loot boxes to video game age-rating criteria’ (2022), https://igamingbusiness.com/esports/video-gaming/germany-to-add-loot-boxes-to-video-game-age-rating-criteria/.

49 European Parliament resolution of 18 January 2023 on consumer protection in online video games: a European single market approach (2022/2014(INI)).

50 European Parliament, ‘Protecting gamers and encouraging growth in the video games sector’, 18 January 2023, www.europarl.europa.eu/news/en/press-room/20230113IPR66646/protecting-gamers-and-encouraging-growth-in-the-video-games-sector.

51 World Health Organization, ‘To grow up healthy, children need to sit less and play more’, 24 April 2019, www.who.int/news/item/24-04-2019-to-grow-up-healthy-children-need-to-sit-less-and-play-more.

52 Megele and Longfield, Safeguarding Children and Young People Online, p. 116.

53 UN Committee on the Rights of the Child (CRC), ‘General comment No. 25 (2021) on children’s rights in relation to the digital environment’ (2 March 2021) CRC/C/GC/25.

54 CoE, ‘Guidelines to respect, protect and fulfil the rights of the child in the digital environment’, Recommendation CM/Rec(2018)7 of the Committee of Ministers.

55 Vanden Abeele and Mohr, ‘Media addictions as Apparatgeist’, pp. 1536–57.

56 S. Lomborg and B. Ytre-Arne, ‘Advancing digital disconnection research: introduction to the special issue’ (2021) 27 Convergence 6, 1529–35.

57 Vanden Abeele and Mohr, ‘Media addictions as Apparatgeist’.

58 European Observatory of Working Life, ‘Right to disconnect’ (2021), www.eurofound.europa.eu/observatories/eurwork/industrial-relations-dictionary/right-to-disconnect.

59 Eurofound, Right to Disconnect: Exploring Company Practices (Luxembourg: Publications Office of the European Union, 2021), p. 18.

60 P. Hesselberth, ‘Discourses on disconnectivity and the right to disconnect’ (2018) 20 New Media & Society 5, 1994–2010.

61 Eurofound, Right to Disconnect.

62 European Observatory of Working Life, ‘Right to disconnect’.

63 K. Müller, ‘The right to disconnect’ (2020) European Parliamentary Research Service, p. 4.

64 European Parliament resolution of 21 January 2021 with recommendations to the Commission on the right to disconnect, OJ C 456, 10.11.2021, pp. 161–76.

65 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘EU strategic framework on health and safety at work 2021–2027, Occupational safety and health in a changing world of work’, COM(2021) 323 final.

66 European Commission, ‘Digital rights and principles: Presidents of the Commission, the European Parliament and the Council sign European Declaration’, 15 December 2022, https://ec.europa.eu/commission/presscorner/detail/en/ip_22_7683.

67 L. Mitrus, ‘Potential implications of the Matzak judgment (quality of rest time, right to disconnect)’ (2019) 10 European Labour Law Journal 4, 386–97.

68 M. Glowacka, ‘A little less autonomy? The future of working time flexibility and its limits’ (2021) 12 European Labour Law Journal 2, 113–33.

69 Müller, ‘The right to disconnect’.

70 Monkhouse Law, ‘Right to disconnect Ontario – the law in Ontario 2022’, 13 September 2022, www.monkhouselaw.com/right-to-disconnect-ontario/.

71 S. Ho, ‘Ontario’s “right to disconnect” law: who qualifies and what are the loopholes?’, 7 June 2022, www.ctvnews.ca/business/ontario-s-right-to-disconnect-law-who-qualifies-and-what-are-the-loopholes-1.5936773.

72 L. Pansu, ‘Evaluation of “right to disconnect” legislation and its impact on employee’s productivity’ (2018) 5 International Journal of Management and Applied Research 3, 99–119.

73 R. Jusienė et al., ‘Ilgalaikis ekranų poveikis vaikų fizinei ir psichikos sveikatai: mokslinio projekto ataskaita’ (Valstybinis visuomenės sveikatos stiprinimo fondas, 2022) [‘Long-term effects of screen exposure on children’s physical and mental health: scientific project report’ (State Public Health Strengthening Fund)], https://lt.mediavaikai.lt/copy-of-results, p. 39.

74 Mboya et al., ‘Internet addiction and associated factors among medical and allied health sciences students in northern Tanzania’, p. 8.

75 Vanden Abeele and Mohr, ‘Media addictions as Apparatgeist’.

76 D. Smahel et al., ‘EU Kids Online 2020: Survey results from 19 countries’ (2020), DOI: 10.21953/lse.47fdeqj01ofo, p. 6.

77 L.-P. Beland and R. Murphy, ‘Ill communication: technology, distraction & student performance’ (2015), https://cep.lse.ac.uk/pubs/download/dp1350.pdf.

78 République française, Code de l’éducation, Article L511-5, www.legifrance.gouv.fr/codes/article_lc/LEGIARTI000037286581.

79 R. Smith, ‘France bans smartphones from schools’, CNN, 31 July 2018, https://edition.cnn.com/2018/07/31/europe/france-smartphones-school-ban-intl/index.html.

80 Government of Victoria, Australia, ‘Mobile phones in schools’, www.vic.gov.au/mobile-phones-schools.

81 Government of Victoria, Australia, ‘Mobile phones – student use, a ministerial policy formally issued by the Minister for Education under section 5.2.1(2)(b) of the Education and Training Reform Act 2006 (Vic)’, last updated in 2022, www2.education.vic.gov.au/pal/students-using-mobile-phones/policy.

82 The Canadian Press, ‘Cellphone ban in Ontario classrooms comes into effect today’, CTV News, 4 November 2019, https://toronto.ctvnews.ca/cellphone-ban-in-ontario-classrooms-comes-into-effect-today-1.4668824.

83 S. Parent, ‘Québec rejette l’interdiction des cellulaires à l’école comme en Ontario’ [‘Quebec rejects an Ontario-style ban on cell phones in schools’], 14 March 2019, www.rcinet.ca/fr/2019/03/14/quebec-rejette-linterdiction-des-cellulaires-a-lecole-comme-en-ontario/.

84 A. Ledsom, ‘The mobile phone ban in French schools, one year on. Would it work elsewhere?’, 30 August 2019, www.forbes.com/sites/alexledsom/2019/08/30/the-mobile-phone-ban-in-french-schools-one-year-on-would-it-work-elsewhere/.

85 UN Committee on Economic, Social and Cultural Rights (CESCR), ‘General Comment No. 3: The Nature of States Parties’ Obligations (Art. 2, Para. 1 of the Covenant)’ (14 December 1990) E/1991/23, para. 10.

86 Ibid., para. 1.

87 Ontario Human Rights Commission, ‘Policy on preventing discrimination based on mental health disabilities and addictions’ (2014), www3.ohrc.on.ca/en/policy-preventing-discrimination-based-mental-health-disabilities-and-addictions, Chapter 4.

88 Explanations relating to the Charter of Fundamental Rights, OJ C 303, 14.12.2007, pp. 17–35.

8 Just Don’t Get Caught!

1 Department for Digital, Culture, Media and Sport, ‘Online harms feasibility study’ (2021), https://assets.publishing.service.gov.uk/media/639b0511e90e072180b2a7c6/DCMS_Online_Harm_Feasibility_study_v2.pdf.

2 F. Cremer et al., ‘Cyber risk and cybersecurity: a systematic review of data availability’ (2022) 47 The Geneva Papers on Risk and Insurance – Issues and Practice, 698–736, at 698.

3 E. Decker, ‘Full count? Crime rate swings, cybercrime misses and why we don’t really know the score’ (2018) 5 Journal of National Security Law & Policy, 583–604.

4 Cremer et al., ‘Cyber risk and cybersecurity’, p. 698.

5 Kantar Media, ‘Internet users’ experience of harm online: summary of survey research’ (2018), www.ofcom.org.uk/siteassets/resources/documents/research-and-data/online-research/online-harms/2018/internet-harm-research-2018-report.pdf?v=323453, 66.

6 J. Holdoš (ed.), ‘EU kids online Slovensko – správy z výskumu’ [‘EU Kids Online Slovakia – research reports’] (2022), https://euko.ku.sk/wp-content/uploads/2022/04/EU-Kids-Online-Slovensko-spravy-z-vyskumu_ed_J_Holdos.pdf; V. Gill, L. Monk, and L. Day, ‘Qualitative research project to investigate the impact of online harms on children’ (2022), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1167838/Online_Harms_Study_Final_report_updated_51222_updated_290623.pdf; N. Hudson et al., ‘Content and activity that is harmful to children within scope of the Online Safety Bill: a rapid evidence assessment’, 27 May 2022, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1123421/Content_and_Activity_that_is_Harmful_to_Children_within_Scope_of_the_Online_Safety_Bill__REA__accessible_.pdf.

7 Of course, this also depends on the legal system: some systems place greater weight on precedent and treat the constitution as a living tree.

8 J. Locke, Two Treatises of Government (Cambridge: Cambridge University Press, 1967), p. 525.

9 See, e.g., L. B. Moses, ‘Adapting the law to technological change: a comparison of common law and legislation’ (2003) 26 UNSW Law Journal 2, 394–417, at 396; S. Greenstein, ‘Preserving the rule of law in the era of artificial intelligence’ (2022) 30 Artificial Intelligence and Law, 291–323.

10 T. Pajuste (ed.), ‘Specific threats to human rights protection from the digital reality: international responses and recommendations to core threats from the digitalised world’ (2022), https://graphite.page/gdhrnet-threats-to-human-rights-protection/, p. 9; A. Strowel and W. Vergote, ‘Digital platforms: to regulate or not to regulate? Message to regulators: fix the economics first, then focus on the right regulation’ (2016), https://ec.europa.eu/information_society/newsroom/image/document/2016-7/uclouvain_et_universit_saint_louis_14044.pdf.

11 See, e.g., Y. Shany, ‘Digital rights and the outer limits of international human rights law’ (2023) 24 German Law Journal 3, 461–72, at 464; J. Kleijssen and P. Perri, ‘Cybercrime, evidence and territoriality: issues and options’ (2017) 47 Netherlands Yearbook of International Law, 147–73, at 155.

12 Pajuste (ed.), ‘Specific threats to human rights protection’, p. 9; M. Greenberg, ‘Legal interpretation’ (2021), Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/legal-interpretation/.

13 A. Ivanov, D. Gorelik, and K. Prokofiev, ‘Law enforcement in the context of digitalization: problems and prospects for improving efficiency’ (2020), Proceedings of the 1st International Scientific Conference ‘Legal Regulation of the Digital Economy and Digital Relations: Problems and Prospects of Development’ (LARDER 2020), Advances in Economics, Business and Management Research, vol. 171; V. Ceccato, ‘Special issue: crime and control in the digital era’ (2019) 20 Criminal Justice Review 10, 1–6, at 3.

14 US Department of Justice, ‘Report of the Attorney General’s cyber digital task force’ (2018), www.justice.gov/archives/ag/page/file/1076696/dl.

15 Pajuste (ed.), ‘Specific threats to human rights protection’, p. 4.

16 Ibid., p. 9.

17 FBI, ‘Internet crime: IC3 a virtual complaint desk for online fraud’ (2017), www.fbi.gov/news/stories/ic3-virtual-complaint-desk-for-online-fraud.

18 See, e.g., M. L. Chiarella, ‘Digital Markets Act (DMA) and Digital Services Act (DSA): New rules for the EU digital environment’ (2023) 9 Athens Journal of Law 1, 33–58; European Commission, Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts (2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206; Council of the EU, Press Release, ‘DSA: Council gives final approval to the protection of users’ rights online’ (2022), www.consilium.europa.eu/en/press/press-releases/2022/10/04/dsa-council-gives-final-approval-to-the-protection-of-users-rights-online/.

19 This can be illustrated by the differences in the trust that citizens place in the police in different countries; see, e.g., D. Schaap and P. Scheepers, ‘Comparing citizens’ trust in the police across European countries: an assessment of cross-country measurement equivalence’ (2014) 24 International Criminal Justice Review 1, 82–98; R. I. Mawby, ‘Comparing police systems across the world’, in G. Bruinsma and D. Weisburd (eds.), Encyclopedia of Criminology and Criminal Justice (New York, NY: Springer, 2014), pp. 478–88.

20 See, e.g., Transparency International, ‘Corruption Perceptions Index’ (2022), www.transparency.org/en/cpi/2022.

21 See, e.g., A. M. Jansen et al., ‘The influence of the presentation of camera surveillance on cheating and pro-social behavior’ (2018) 9 Frontiers in Psychology, Article 1937; T. Golda et al., ‘Perception of risks and usefulness of smart video surveillance systems’ (2022) 12 Applied Sciences 20, Article 10435; C. Ziller and M. Helbling, ‘Public support for state surveillance’ (2021) 60 European Journal of Political Research 4, 994–1006.

22 P. Königs, ‘Government surveillance, privacy, and legitimacy’ (2022) 35 Philosophy & Technology 1, Article 8.

23 For educational aspects regarding surveillance, see the workshop proposal related to the Erasmus+ project PLATO’S EU, which I had the chance to test at the University of Skövde, Sweden, and Frederick University, Cyprus.

24 Königs, ‘Government surveillance, privacy, and legitimacy’, section 1.

25 See, e.g., S. Quach et al., ‘Digital technologies: tensions in privacy and data’ (2022) 50 Journal of the Academy of Marketing Science 6, 1299–323.

26 See, e.g., Königs, ‘Government surveillance, privacy, and legitimacy’.

27 Aristotle, The Nicomachean Ethics (London: Penguin Books, 2004).

28 See, e.g., A. Lareki et al., ‘Fake digital identity and cyberbullying’ (2022) 45 Media, Culture & Society 2, 338–53.

29 See, e.g., F. Gomez and M. Artigot i Golobardes, ‘Intrinsic and extrinsic motivations to comply with legal rules: comment on B. Deffains and D. Demougin, “Class Actions, Compliance, and Moral Cost”’, in S. Grundmann, F. Möslein, and K. Riesenhuber (eds.), Contract Governance: Dimensions in Law and Interdisciplinary Research (Oxford: Oxford Academic, online edn., 2015), pp. 241–8; L. S. Morris et al., ‘On what motivates us: a detailed review of intrinsic v. extrinsic motivation’ (2022) 52 Psychological Medicine 10, 1801–16; R. Ryan and E. L. Deci, ‘Intrinsic and extrinsic motivations: classic definitions and new directions’ (2000) 25 Contemporary Educational Psychology 1, 54–67; R. Bénabou and J. Tirole, ‘Intrinsic and extrinsic motivation’ (2003) 70 Review of Economic Studies 3, 489–520; A. Van den Broeck et al., ‘Beyond intrinsic and extrinsic motivation: a meta-analysis on self-determination theory’s multidimensional conceptualization of work motivation’ (2021) 11 Organizational Psychology Review 3, 240–73; A. Turner, ‘How does intrinsic and extrinsic motivation drive performance culture in organizations?’ (2017) 4 Cogent Education 1, Article 1337543.

30 See, e.g., S. Larcom et al., ‘Follow the leader? Testing for the internalization of law’ (2019) 48 The Journal of Legal Studies 1, 217–44; J. Benka et al., ‘Internalization of rules and risk-behavior among early adolescents and relevance for public health’ (2019) 29 European Journal of Public Health, Supplement 4, 367–8; M. Kim, ‘Legalization and norm internalization: an empirical study of international human rights commitments eliciting public support for compliance’ (2019) 7 Journal of Law and International Affairs 2, 337–81.

31 See, e.g., P. J. May, ‘Regulation and compliance motivations: examining different approaches’ (2005) 65 Public Administration Review 1, 31–44; P. J. May, ‘Compliance motivations: affirmative and negative bases’ (2004) 38 Law & Society Review 1, 41–68.

32 L. Congiu and I. Moscati, ‘A review of nudges: definitions, justifications, effectiveness’ (2022) 36 Journal of Economic Surveys 1, 188–213.

33 ‘Ethics is generally understood to be the study of “living well as a human being”. This is the topic of works such as Aristotle’s Nicomachean Ethics, in which the aim of human beings is to exemplify human excellence of character.’ – J. Driver, ‘Moral theory’ (2022), Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/moral-theory/. However, it is often understood more broadly as a discipline of practical philosophy dealing with right and wrong conduct.

34 Digital ethics can be understood as a branch of practical or applied ethics that deals with ethical issues related to digital technologies. As in other areas of practical ethics, it is highly interdisciplinary and overlaps with or includes other areas of investigation as well as other areas of practical ethics (e.g., ethics of technology, ethics of AI, environmental ethics, ethics of law). Digital ethics is sometimes wrongly interpreted as dealing only with etiquette online, so-called netiquette. However, ethics cannot be conflated with etiquette: both deal with questions of right conduct, but ethics is centrally concerned with harm, which plays little role in etiquette.

35 L. Kohlberg, The Philosophy of Moral Development (New York, NY: Harper and Row, 1981).

36 In relation to the digital ethics of AI, see, e.g., B. Wagner, ‘Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping?’, in E. Bayamlioğlu, I. Baraliuc, and L. Janssens (eds.), Being Profiled: Cogitas Ergo Sum. 10 Years of Profiling the European Citizen (Amsterdam: Amsterdam University Press, 2018), pp. 84–8.

37 See, e.g., B. O’Brien, ‘Digital ethics in higher education’ (2020) Educause Review, https://er.educause.edu/articles/2020/5/digital-ethics-in-higher-education-2020; E. Zvereva, ‘Digital ethics in higher education: modernizing moral values for effective communication in cyberspace’ (2023) 13 Online Journal of Communication and Media Technologies 2, e202319; E. Wulandari and W. Triyanto, ‘Digital citizenship education: shaping digital ethics in society 5.0’ (2021) 9 Universal Journal of Educational Research 5, 948–56; D. Olcott, Jr et al., ‘Ethics and education in the digital age: global perspectives and strategies for local transformation in Catalonia’ (2015) 12 RUSC 2 Special Issue.

38 See, e.g., K. Moore, ‘The three-part harmony of adult learning, critical thinking, and decision-making’ (2010) 39 Journal of Adult Education 1, 1–10; H. Boxler, ‘Quest for the grail? Searching for critical thinking in adult education’ (2002) Adult Education Research Conference, Conference Proceedings; D. R. Garrison, ‘Critical thinking and adult education: a conceptual model for developing critical thinking in adult learners’ (1991) 10 International Journal of Lifelong Education 4, 287–303; H. Tinmaz et al., ‘A systematic review on digital literacy’ (2022) 9 Smart Learning Environments, Article 21; T. Ott and M. Tiozza, ‘Digital media ethics: benefits and challenges in school education’ (2022) 14 International Journal of Mobile and Blended Learning 2, 1–8.

39 See, e.g., the forthcoming book that I am editing with J. Irwin, Ethical Education Across European Systems – Concepts, Practices, Dilemmas (Lausanne: Peter Lang Publishing, 2025).

40 See, e.g., L. M. Murawski, ‘Critical thinking in the classroom … and beyond’ (2014) 10 Journal of Learning in Higher Education 1, 25–30; K. Larsson, ‘Understanding and teaching critical thinking – a new approach’ (2017) 84 International Journal of Educational Research, 32–42; O. L. U. Enciso et al., ‘Critical thinking and its importance in education: some reflections’ (2017) 19 Rastros Rostros 34, 78–88; Y. E. L. Zuluaga et al., ‘A study of critical thinking in higher education students’ (2020) 16 Revista Latinoamericana de Estudios Educativos (Colombia) 2, 256–79; L. Campo et al., ‘Methodologies for fostering critical thinking skills from university students’ points of view’ (2023) 13 Education Sciences 2, Article 132; J. Huang and G. Sang, ‘Conceptualising critical thinking and its research in teacher education: a systematic review’ (2022) 29 Teachers and Teaching 6, 638–60.

41 See, e.g., B. Sheehy et al., ‘Shifting from soft to hard law: motivating compliance when enacting mandatory corporate social responsibility’ (2023) 24 European Business Organization Law Review 4, 693–719; A. S. Rosenberg, ‘Motivational law’ (2008) 56 Cleveland State Law Review 1, 111–36.

42 Holdoš, ‘EU kids online Slovensko’, pp. 120, 137.

43 Plato’s EU website, https://platos-eu.org.

44 As also presented in earlier material on the topic under GDHR: Pajuste (ed.), ‘Specific threats to human rights protection’, pp. 20–2.

45 Pajuste (ed.), ‘Specific threats to human rights protection’, pp. 9–10.
