The Digital Services Act (Regulation 2022/2065, “DSA”) creates a new national administrative authority to enforce the DSA across Member States: the Digital Services Coordinator (“DSC”). DSCs perform a linchpin role in DSA enforcement. They have a number of tasks that interact with the content moderation process, such as certifying trusted flaggers or participating in drafting codes of conduct. They also have significant investigatory and sanctioning powers to enforce the DSA vis-à-vis platforms, shaping content moderation processes and protecting users’ rights against platform misconduct. These interactions with content moderation affect users’ freedom of expression. This contribution scrutinises the role of the DSC in light of that freedom, describing how DSCs shape freedom of expression online through their powers under the DSA and identifying instances where the exercise of those powers can lead to different levels of protection for freedom of expression across Member States in the decentralised enforcement network. Finally, it proposes avenues within the DSA to anchor the protection of freedom of expression in DSCs’ application of the DSA: pursuing centralisation in cases with significant fundamental rights impact and encouraging better use of guideline competences.
This article explores the human rights standards relevant to human involvement requirements in EU legislation on automated content moderation. Opinions given by various experts and human rights bodies emphasise the human rights relevance of the way platforms distribute automated and human moderators across their services. EU secondary legislation establishes basic requirements for these structures, which are to be read from a human rights perspective. This article examines the justifications given for incorporating human involvement in content moderation, the different types of human involvement in content moderation, and the specific requirements for such involvement under EU secondary law. It also analyses the human rights principles concerning procedural safeguards for freedom of expression within this legal framework.
Reading or writing online user reviews of places like a restaurant or a hair salon is a common information practice. Through its Local Guides Platform, Google calls on users to add reviews of places directly to Google Maps, as well as to edit store hours and report fake reviews. Based on a case study of the platform, this chapter examines the governance structures that delineate the role Local Guides play in regulating the Google Maps information ecosystem and how the platform frames useful information versus bad information. We track how the Local Guides Platform constructs a community of insiders who make Google Maps better, set against the misinformation that the platform positions as an exterior threat infiltrating Google Maps’ universally beneficial global mapping project. Framing our analysis through Kuo and Marwick’s critique of the dominant misinformation paradigm, which is often based on hegemonic ideals of truth and authenticity, we argue that review and moderation practices on Local Guides further standardize constructions of misinformation as the product of a small group of outlier bad actors in an otherwise convivial information ecosystem. Instead, we consider how the platform’s governance of crowdsourced moderation, paired with Google Maps’ project of creating a single, universal map, helps to homogenize narratives of space that further normalize the limited scope of Google’s misinformation paradigm.
In the digital age, “commercial sharenting” refers to parents excessively sharing their children’s images and data on social media for profit. Initially motivated by parental pride, this practice is now driven by child-to-child marketing, where young influencers shape their peers’ consumption habits. While regulations protect child influencers’ privacy, a significant gap remains regarding the rights of child viewers. We argue that commercial sharenting threatens children’s right to health under Article 24(1) of the UNCRC, potentially leading to harmful consumer behaviors and identity confusion. In response, China has adopted a fragmented regulatory approach to platform liability. This article advocates a comprehensive legal framework incorporating content filtering, moderation, and review to regulate commercial sharenting and safeguard children’s rights and interests in China.
This chapter discusses how AI technologies permeate the media sector. It sketches the opportunities and benefits of using AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on the ethical and legal risks raised by AI-driven media applications: lack of data availability, poor data quality, and bias in training datasets; lack of transparency; risks for the right to freedom of expression; threats to media freedom and pluralism online; and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework that aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
The conditional legal immunity for hosting unlawful content (safe harbour) provided by Section 79 of the Information Technology Act, 2000 (IT Act) is central to the regulation of online platforms in India for two reasons. First, absent this immunity, platforms in India risk being secondarily liable for a wide range of civil and criminal offences. Second, the Indian Government has recognised that legal immunity for user-generated content is key to platform operations and has sought to regulate platform behaviour by prescribing several pre-conditions to safe harbour. This chapter examines the different obligations set out in the Intermediary Guidelines and evaluates the efforts of the Indian Government to regulate platform behaviour in India through the pre-conditions for safe harbour. It finds that the obligations set out in the Intermediary Guidelines are enforced by the courts in a patchwork and inconsistent manner. However, the Indian Government retains powerful controls over content and platform behaviour by virtue of its power to block content under Section 69A of the IT Act and its ability to impose personal liability on platform employees within India.
This paper considers the goals of regulators in different countries working on regulating online platforms and how those varied motivations influence the potential for international coordination and cooperation on platform governance, mapping the different policy debates and goals surrounding online platform responsibility. The analysis identifies policy goals related to three types of obligations that regulators may impose on online platforms: responsibilities to target particular categories of unwanted content, responsibilities for platforms that wield particularly significant influence, and responsibilities to be transparent about platform decision-making. Reviewing the proposals that have emerged in each of these categories across different countries, the paper examines which of these three policy goals present the greatest opportunities for international coordination and agreement and which of them actually require such coordination in order to be effectively implemented. Finally, it considers what lessons can be drawn from existing policy efforts for fostering greater coordination around areas of common interest related to online platforms.
This paper summarizes the United States’ legal framework governing Internet “platforms” that publish third-party content. It highlights three key features of U.S. law: the constitutional protections for free speech and press, the statutory immunity provided by 47 U.S.C. § 230 (“Section 230”), and the limits on state regulation of the Internet. It also discusses U.S. efforts to impose mandatory transparency obligations on Internet “platforms.”
Like information disseminated through online platforms, infectious diseases can cross international borders as they track the movement of people (and sometimes animals and goods) and spread globally. Hence, their control and management have major implications for international relations and international law. Drawing on this analogy, this chapter looks to global health governance to formulate suggestions for the governance of online platforms. Successes in global health governance suggest that the principle of tackling low-hanging fruit first, to build trust and momentum towards more challenging goals, may extend to online platform governance. Progress beyond the low-hanging fruit appears more challenging. For one, disagreement on resource allocation in the online platform setting may lead to “outbreaks” of disinformation being relegated to regions of the world that are not at the top of online platforms’ market priorities. Secondly, while there may be wide consensus on the harms of infectious disease outbreaks, the harms from the spread of disinformation are more contested. Relying on national definitions of disinformation would hardly yield coherent international cooperation. Global health governance thus suggests that an internationally negotiated agreement on standards relating to disinformation may be necessary.
In order to manage the issue of diversity of regulatory vision, States may, to some extent, harmonize substantive regulation, thereby eliminating diversity. This is less likely than States determining, unilaterally or multilaterally, to develop manageable rules of jurisdiction, so that their regulation applies only in limited circumstances. The fullest realization of this “choice of law” solution would involve geoblocking or other technology that divides up regulatory authority according to a specified, and perhaps agreed, principle. Geoblocking may be costly and ultimately porous, but it would allow different communities to effectuate their different visions of the good in the platform context. To the extent that the principles of jurisdiction are agreed, and are structured to be exclusive, platforms would have the certainty of knowing the requirements under which they must operate in each market. Of course, different communities may remain territorial states, but given the a-territorial nature of the internet, it may be possible for other divisions of authority and responsibility to develop. Cultural affinity, or political perspective, may be more compelling as an organizational principle to some than territorial co-location.
On October 27, 2022, the Digital Services Act (DSA) was published in the Official Journal of the European Union (EU). The DSA, which has been portrayed as Europe’s new “Digital Constitution”, sets out a cross-sector regulatory framework for online services and regulates the responsibility of online intermediaries for illegal content. Against this background, this chapter provides a brief overview of recent regulatory developments regarding platform responsibility in the EU and seeks to add a European perspective to the global debate about platform regulation. Section 3.1 provides an overview of the regulatory framework in the EU and recent legislative developments. Section 3.2 analyses different approaches to the enforcement of rules on platform responsibility. Section 3.3 takes a closer look at the regulation of content moderation by digital platforms in the EU. Finally, Section 3.4 adds some observations on the international effects of EU rules on platform responsibility.
While social media and digital platforms started with the objective of enhancing social connectivity and information sharing, they also present a significant content moderation challenge that results in the spread of disinformation. The Disinformation Paradox is a phenomenon whereby an attempt to regulate harmful content online can inadvertently amplify it. Social media platforms often serve as breeding grounds for disinformation. This chapter discusses the inherent difficulties of moderating content at a large scale, the different responses of these platforms, and potential solutions.
This chapter examines China’s approach to platform responsibility for content moderation. It notes that China’s approach is rooted in its overarching goal of public opinion management, which requires platforms to proactively monitor, moderate, and sometimes censor content, especially politically sensitive content. Despite its patchy and iterative approach, China’s platform regulation is consistent and marked by distinct characteristics, embodied in its definition of illegal and harmful content, its heavy platform obligations, and its strong reliance on administrative enforcement measures. China’s approach reflects its authoritarian nature and the asymmetrical power relations between the government and private platforms. This chapter also provides a nuanced understanding of China’s approach to platform responsibility, including Chinese platforms’ "conditional liability" for tort damages and regulators’ growing emphasis on user protection and personal information privacy. The chapter includes a case study on TikTok that shows the interplay between the Chinese approach, overseas laws and regulations, and the Chinese online platform’s content moderation practices.
Platform governance and regulation have been salient political issues in Brazil for years, particularly as part of Congress’ response to the democratic threats posed by former President Bolsonaro. The question became even more important after the January 8th attempted insurrection in Brasília, which many, including the newly installed Lula administration, blame on social media. In a letter read at the February 2023 UNESCO “Internet for Trust” global conference, the President, now in his third (non-consecutive) term in office, wrote that the attack on the nation’s seats of power was “the culmination of a campaign, initiated much before, and that used, as ammunition, lies and disinformation,” which “was nurtured, organized, and disseminated through several digital platforms and messaging apps.” The new administration has made platform regulation a policy priority, with regulatory and administrative pushes across the board. Brazil has been a battleground where proposals for platform responsibility have been advanced — and disputed.
Fifteen years ago, in All Politics Is Global, I developed a typological theory of global economic governance, arguing that globalization had not transformed international relations but merely expanded the arenas of contestation to include policy arenas that had previously been the exclusive province of domestic politics. In my model, what truly mattered to global governance was the distribution of preferences among the great powers. When great power coordination was achieved, effective governance would be the outcome; when it was not, global governance would exist in name only. Demands for greater content moderation across platforms have increased as the modern economy has become increasingly data-driven. Can any standards be negotiated at the global level? The likeliest result will be a hypocritical system of “sham governance.” Under this system, a few token agreements might be negotiated at the global level. Even these arrangements, however, will lack enforcement mechanisms and will likely be honored only in the breach. The regulatory center of gravity will remain at the national level. Changes at the societal and global levels over the past fifteen years only reinforce the dynamics that lead to such an outcome.
Global platforms present novel challenges. They serve as powerful conduits of commerce and global community. Yet their power to influence political and consumer behavior is enormous. Their responsibility for the use of this power – for their content – is statutorily limited by national laws such as Section 230 of the Communications Decency Act in the US. National efforts to demand and guide appropriate content moderation, and to avoid private abuse of this power, are in tension with the concern in liberal states to avoid excessive government regulation, especially of speech. Diverse and sometimes contradictory national rules responding to these tensions threaten to splinter platforms and reduce their utility to both wealthy and poor countries. This edited volume sets out to respond to the question of whether a global approach can be developed to address these tensions while maintaining or even enhancing the social contribution of platforms.
There is a potentially correct analogy between international tax regulation and platform content regulation because there is a homology between capital and information. On this basis, this chapter foregrounds three resemblances between tax regulation and content moderation. First, non-State actors use platforms to access, manage, and regulate flows of capital and, similarly, flows of information, exploiting regulatory differentials, so that there is a need for regulatory alignment in both cases. Second, since both capital and information escape the regulatory reach of States, a common standard must be achieved in both cases. Third, such a common standard can be achieved only if the home States of the Global Actors owning platforms jointly assume the obligation to moderate profit diversion as well as immoderate use of platform content through procedural accountability. The chapter explains the scope of the global tax problem, details the process by which policies have been developed, and describes the tax implications of platforms. It concludes by suggesting lessons that can be learned from tax regulation for platform responsibility rules: the homology between capital and information points to regulatory structures that reduce excessive opportunism and immoderation in the use of computational capital by platforms.
Global platforms present novel challenges. They are powerful conduits of commerce and global community, and their potential to influence behavior is enormous. Defeating Disinformation explores how to balance free speech against dangerous online content in order to reduce the societal risks of digital platforms. The volume offers an interdisciplinary approach, drawing upon insights from different geographies and the parallel challenges of managing global phenomena with national policies and regulations. Chapters also examine the responsibility of platforms for their content, which is limited by national laws such as Section 230 of the Communications Decency Act in the US. This tension between national rules and the need for appropriate content moderation threatens to splinter platforms and reduce their utility across the globe. Timely and expansive, Defeating Disinformation develops a global approach to address these tensions while maintaining, and even enhancing, the social contribution of platforms. This title is also available as open access on Cambridge Core.
Despite the attention devoted to the issue of misinformation in recent years, few studies have examined citizens' support for measures aimed at addressing it. Using data collected during the 2022 Quebec elections and block-recursive models, this article shows that support for interventions against misinformation is generally high, but that individuals with a right-wing ideology, those who support the Parti conservateur du Québec, and those who do not trust the media and scientists are more likely to oppose them. Those who are unconcerned about the issue, who prioritize protecting freedom of expression, or who endorse false information are also less supportive. The results suggest that depoliticizing the issue of misinformation and working to strengthen trust in institutions could increase the perceived legitimacy and effectiveness of our response to misinformation.
Small claim. Regulation 861/2007 (European Small Claims Procedure Regulation). Unfair term in a terms-of-service agreement. Breach of contract through shadowbanning. Infringement of Articles 12 and 17 of Regulation 2022/2065 (Digital Services Act).