In the digital age, the proliferation of illegal content online remains a significant public concern for which no ideal solution has yet been found. In recent years, the European Union (EU) has developed a complex legal framework that relies on social media companies to implement both EU and Member State laws in addressing the issue. This paper examines this development through the lens of regulation studies, showing how the EU is mandating social media companies to act as regulatory intermediaries. On this basis, it argues that the EU is institutionalising the content moderation processes created by these companies, effectively integrating them into the legal enforcement landscape. The paper highlights how this institutionalisation raises important questions about the tension between the self-interest of private actors and the public interest in combating illegal content online. It further underlines how, by adopting such a legal framework, the EU continues to pursue governance frameworks grounded in market-based principles. From this perspective, the paper makes two main contributions to the literature on digital law in the EU. First, at a conceptual level, it demonstrates the value of regulation studies as a lens for interpreting recent legal developments. Second, from a critical standpoint, it questions the EU’s legal framework in light of the tension between companies’ self-interest and their intermediation mandate, as well as the persistence of market-based governance approaches within the EU. The paper also contributes to the literature on regulation studies by proposing a new case study of regulatory intermediation in the digital context.
In this paper, I argue that the 2021 update to Facebook’s Community Standards on hate speech, which distinguishes between “attacks on people” and “attacks against concepts and institutions”, marked a shift in Meta’s content moderation policies from a consequentialist, United States First Amendment-influenced view of free speech to a constitutive approach that is responsive to the “real-world” harm of the virality of hate speech online in contexts such as India. To illustrate this argument, I focus on Meta’s response to “attacks against Islam and The Prophet” that are regularly used by Hindu nationalists and their supporters, in India and abroad, to attack Muslims. The weaponisation by Hindu nationalists and their supporters of hate speech, and of the virality of speech-acts afforded by Facebook and other platforms, to subordinate minorities and dissenting voices is one of the many contemporary practices that have contributed to the production of majoritarian legality in India.
This Article discusses China’s content moderation in the age of artificial intelligence. It first introduces two long-overlooked features of China’s content moderation: the medium-based model and the “No-Dispute” Policy. The former emphasizes that content moderation in China varies based on different media, while the latter argues that China’s content moderation is often content-neutral rather than being driven by ideology or having an official stance. The Article then summarizes the three main challenges artificial intelligence presents to content moderation: a shift in structure from the traditional “state v. citizen” dichotomy to the “platform–government–citizen” triangle; a transition in means from human review to algorithm-based and machine-based moderation; and a reimagining of traditional theories and doctrines of freedom of speech in terms of standards and classification. Finally, the Article takes online violence, one of the most prominent issues in contemporary Chinese content moderation, as a case study to examine specific issues in China’s content moderation in the era of artificial intelligence.
This article examines the transformative impact of large language models (LLMs) on online content moderation, revealing a critical gap between platforms’ rule-based policies and their AI-driven enforcement mechanisms. Using Facebook’s hate speech moderation policies and practices as a case study, we identify a paradox: while content policies are increasingly rule-oriented, AI-driven enforcement seems to operate in a standard-like manner. This disconnect creates transparency, consistency and accountability challenges relating to the delineation of online freedom of expression that are not addressed in the literature and that require attention and mitigation. In this specific context, we introduce the concept of ‘rules by the millions’ to describe how AI systems actually operate by generating vast networks of micro-rules that evade traditional regulatory oversight. This phenomenon disrupts the conventional rules-versus-standards framework used in legal theory, raising urgent questions about the adequacy of current AI governance mechanisms. Indeed, the rapid adoption of LLMs in content moderation has outpaced the human capacity to monitor them, creating a pressing need for adaptive frameworks capable of managing the evolving capacities of AI.
Chapter 5 focuses on the regulation of social media platforms and platform architecture, and on changes in EU perceptions regarding the reliability of these platforms and the values of their owners. It examines the shift in these sectors from economically motivated self-regulatory regimes grounded in a logic of efficiency to a digital sovereignty-motivated logic of security in regulation. It identifies the explicit linkage between economic and security concerns, particularly as it relates to disinformation and political advertising, with the promotion of co-regulatory regimes subject to significant levels of oversight by the Commission. It explores the approach to regulatory export adopted in these initiatives, with an emphasis on control of platforms regardless of where they are based, so long as they offer services in the EU.
Incels (short for “involuntarily celibate”) have recently gained notoriety for their aggressive, often violent, misogyny, yet incels were not always an antidemocratic social group. They thus pose a challenge for thinking about democracy and identity in (anonymous) digital environments: how can we create spaces for marginalized social groups while ensuring the resulting identities remain democratic? While many scholars point to technological affordances or corporate content moderation policies as providing some solutions, in this article I propose a more democratic approach. Drawing from incel wikis and archived forum posts from two early incel communities—IncelSupport and LoveShy—I argue that a community's social norms, and the moderation practices required to sustain them, are user-directed interventions that have outsized effects in shaping group identities in democratic ways.
Since the Internet’s mainstream inception in the mid-1990s, the global telecommunications network has transformed from one that offered egalitarian promise to a network that often compromises democratic norms. And, as its conceptual linchpin, Internet “openness” provided the potential for technological innovation that could revolutionize both communication and commerce. The regulatory schemes introduced in the early Internet era thus sought to advance openness and innovation in the fledgling online world. But, while the times and technologies have since changed, the regulatory frameworks have largely remained the same. Accordingly, this review essay examines the Internet’s regulatory and cultural history to explore how the open values of the information age gave way to our current era of online disinformation. To do so, I reflect upon two early studies of the digital realm that have advanced discourse and scholarship on Internet openness: Lawrence Lessig’s Code: and Other Laws of Cyberspace, Version 2.0, and Christopher Kelty’s Two Bits: The Cultural Significance of Free Software. Informed by these works, along with related digital scholarship, this essay argues that remembering the history of Internet openness reveals how the free access ideals of the Internet’s foundational age have been transformed by the renewed proprietary conventions of our current disinformation era.
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host, and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
As Chapter 1 discusses, one of the most consistent conservative critiques of social media platforms is that social media is biased against conservative content. A common policy proposal to address this is to regulate such platforms as common carriers. Doing so would require social media platforms to host, on a nondiscriminatory basis, all legal user content and to permit all users to access platforms on equal terms. While this seems an attractive idea – after all, who could object to nondiscrimination – it is not. For one thing, the Supreme Court has now recognized that social media platforms possess "editorial rights" under the First Amendment to control what content they carry, block, and emphasize in their feeds. So, regulating platforms as common carriers, as Texas and Florida have sought to do, is unconstitutional. It is also a terrible idea. Requiring platforms to carry all content on a nondiscriminatory basis, even if limited to legal content (which would be hard to do), would flood user feeds with such lawful-but-awful content as pornography, hate speech, and terrorist propaganda. This in turn would destroy social media as a usable medium, to the detriment of everyone.
Killing the Messenger is a highly readable survey of the current political and legal wars over social media platforms. The book carefully parses attacks against social media coming from both the political left and right to demonstrate how most of these critiques are overblown or without empirical support. The work analyzes regulations directed at social media in the United States and European Union, including efforts to amend Section 230 of the Communications Decency Act. It argues that many of these proposals not only raise serious free-speech concerns, but also likely have unintended and perverse public policy consequences. Killing the Messenger concludes by identifying specific regulations of social media that are justified by serious, demonstrated harms, and that can be implemented without jeopardizing the profoundly democratizing impact social media platforms have had on public discourse. This title is also available as open access on Cambridge Core.
This article offers a Baradian–Butlerian reading of Arendse & 42 Others v Meta, a landmark Kenyan case on outsourced content moderation. Moving beyond structural and subjection-centred framings, it theorises law as a site of ontological reconfiguration – where labour, harm and personhood are co-constituted through intra-action. Drawing on diffraction as an onto-epistemological method, the paper examines how the Kenyan courts reclassified digital labour, pierced jurisdictional separability and temporarily unsettled transnational corporate insulation. Yet, this legal aperture also generated recursive violence: moderators lost employment, residency and psychiatric care, even as their trauma became juridically legible. The paper challenges linear emancipatory or subjection-based accounts of such cases, arguing instead that law functions as a diffractive apparatus – producing patterns of recognition and exclusion without closure. It contributes to scholarship on the governance of content moderation by showing how Kenya’s legal system intra-acts with global capital to generate contradictory but generative juridical formations.
The Digital Services Act (Regulation 2022/2065, “DSA”) creates a new national administrative authority to enforce the DSA across Member States: the Digital Services Coordinator (“DSC”). DSCs perform a linchpin role in DSA enforcement. DSCs have a number of tasks that interact with the content moderation process, such as certifying trusted flaggers or participating in drafting codes of conduct. They also have significant investigatory and sanctioning powers to enforce the DSA vis-à-vis platforms, shaping content moderation processes and protecting users’ rights against platform misconduct. These interactions with content moderation affect users’ freedom of expression. This contribution scrutinises the role of the DSC in light of that freedom, describing how DSCs shape freedom of expression online through their powers under the DSA, and identifying instances where the exercise of DSA powers can lead to different levels of protection for freedom of expression across Member States in the decentralised enforcement network. Finally, it proposes avenues in the DSA to anchor protection of freedom of expression in the application of the DSA by DSCs, through pursuing centralisation in cases with significant fundamental rights impact and encouraging better use of guideline competencies.
This article explores the human rights standards relevant to the human involvement requirements in EU legislation related to automated content moderation. The opinions given by different experts and human rights bodies emphasise the human rights relevance of the way in which platforms distribute automated and human moderators in their services. EU secondary legislation establishes basic requirements for these structures, which are to be read from a human rights perspective. This article examines the justifications given for incorporating human involvement in content moderation, the different types of human involvement in content moderation, and the specific requirements for such involvement under EU secondary law. Additionally, it analyses the human rights principles concerning procedural safeguards for freedom of expression within this legal framework.
Reading or writing online user-reviews of places like a restaurant or a hair salon is a common information practice. Through its Local Guides Platform, Google calls on users to add reviews of places directly to Google Maps, as well as to edit store hours and report fake reviews. Based on a case study of the platform, this chapter examines the governance structures that delineate the role Local Guides play in regulating the Google Maps information ecosystem and how it frames useful information vs. bad information. We track how the Local Guides Platform constructs a community of insiders who make Google Maps better vs. the misinformation that the platform positions as an exterior threat infiltrating Google Maps’ universally beneficial global mapping project. Framing our analysis through Kuo and Marwick’s critique of the dominant misinformation paradigm, one often based on hegemonic ideals of truth and authenticity, we argue that review and moderation practices on Local Guides further standardize constructions of misinformation as the product of a small group of outlier bad actors in an otherwise convivial information ecosystem. Instead, we consider how the platform’s governance of crowdsourced moderation, paired with Google Maps’ project of creating a single, universal map, helps to homogenize narratives of space that then further normalize the limited scope of Google’s misinformation paradigm.
In the digital age, “commercial sharenting” refers to parents excessively sharing their children’s images and data on social media for profit. Initially motivated by parental pride, this practice is now driven by child-to-child marketing, where young influencers shape their peers’ consumption habits. While regulations protect child influencers’ privacy, a significant gap remains regarding the rights of child viewers. We argue that commercial sharenting threatens children’s right to health under Article 24(1) of the UNCRC, potentially leading to harmful consumer behaviors and identity confusion. In response, China has adopted a fragmented regulatory approach to platform liability. This article advocates for a comprehensive legal framework incorporating content filtering, moderation, and review to regulate commercial sharenting and safeguard children’s rights and interests in China.
This chapter discusses how AI technologies permeate the media sector. It sketches opportunities and benefits of the use of AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on ethical and legal risks raised by AI-driven media applications: lack of data availability, poor data quality, and bias in training datasets, lack of transparency, risks for the right to freedom of expression, threats to media freedom and pluralism online, and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework which aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
The conditional legal immunity for hosting unlawful content (safe harbour) provided by Section 79 of the Information Technology Act, 2000 (IT Act) is central to the regulation of online platforms in India for two reasons. First, absent this immunity, platforms in India risk being secondarily liable for a wide range of civil and criminal offences. Second, the Indian Government has recognised that legal immunity for user-generated content is key to platform operations and has sought to regulate platform behaviour by prescribing several pre-conditions to safe harbour. This chapter examines the different obligations set out in the Intermediary Guidelines and evaluates the efforts of the Indian government to regulate platform behaviour in India through the pre-conditions for safe harbour. This chapter finds that the obligations set out in the Intermediary Guidelines are enforced in a patchwork and inconsistent manner through courts. However, the Indian Government retains powerful controls over content and platform behaviour by virtue of its power to block content under Section 69A of the IT Act and the ability to impose personal liability on platform employees within India.
This paper considers the goals of regulators in different countries working on regulating online platforms and how those varied motivations influence the potential for international coordination and cooperation on platform governance, surveying the different policy debates and goals surrounding online platform responsibility. The analysis identifies different policy goals related to three different types of obligations that regulators may impose on online platforms: responsibilities to target particular categories of unwanted content, responsibilities for platforms that wield particularly significant influence, and responsibilities to be transparent about platform decision-making. Reviewing the proposals that have emerged in each of these categories across different countries, the paper examines which of these three policy goals present the greatest opportunities for international coordination and agreement and which of them actually require such coordination in order to be effectively implemented. Finally, it considers what lessons can be drawn from existing policy efforts for how to foster greater coordination around areas of common interest related to online platforms.
This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses U.S. efforts to impose mandatory transparency obligations on Internet “platforms.”
Like information disseminated through online platforms, infectious diseases can cross international borders as they track the movement of people (and sometimes animals and goods) and spread globally. Hence, their control and management have major implications for international relations and international law. Drawing on this analogy, this chapter looks to global health governance to formulate suggestions for the governance of online platforms. Successes in global health governance suggest that the principle of tackling low-hanging fruit first to build trust and momentum towards more challenging goals may extend to online platform governance. Progress beyond the low-hanging fruit appears more challenging: for one, disagreement on the issue of resource allocation in the online platform setting may lead to “outbreaks” of disinformation being relegated to regions of the world that may not be at the top of online platforms’ market-priority lists. Secondly, while there may be wide consensus on the harms of infectious disease outbreaks, the harms from the spread of disinformation are more contested. Relying on national definitions of disinformation would hardly yield coherent international cooperation. Global health governance would thus suggest that an internationally negotiated agreement on standards as they relate to disinformation may be necessary.