On October 27, 2022, the Digital Services Act (DSA) was published in the Official Journal of the European Union (EU). The DSA, which has been portrayed as Europe’s new “Digital Constitution”, sets out a cross-sector regulatory framework for online services and regulates the responsibility of online intermediaries for illegal content. Against this background, this chapter provides a brief overview of recent regulatory developments regarding platform responsibility in the EU. The chapter seeks to add a European perspective to the global debate about platform regulation. Section 3.1 provides an overview of the regulatory framework in the EU and recent legislative developments. Section 3.2 analyses different approaches regarding the enforcement of rules on platform responsibility. Section 3.3 takes a closer look at the regulation of content moderation by digital platforms in the EU. Finally, Section 3.4 adds some observations on the international effects of EU rules on platform responsibility.
While social media and digital platforms began with the objective of enhancing social connectivity and information sharing, they also present a significant challenge for content moderation, one that results in the spread of disinformation. The Disinformation Paradox is a phenomenon whereby an attempt to regulate harmful content online can inadvertently amplify it. Social media platforms often serve as breeding grounds for disinformation. This chapter discusses the inherent difficulties of moderating content at scale, the different responses of these platforms, and potential solutions.
This chapter examines China’s approach to platform responsibility for content moderation. It notes that China’s approach is rooted in its overarching goal of public opinion management, which requires platforms to proactively monitor, moderate, and sometimes censor content, especially politically sensitive content. Despite its patchy and iterative approach, China’s platform regulation is consistent and marked by distinct characteristics, embodied in its definition of illegal and harmful content, its heavy platform obligations, and its strong reliance on administrative enforcement measures. China’s approach reflects its authoritarian nature and the asymmetrical power relations between the government and private platforms. This chapter also provides a nuanced understanding of China’s approach to platform responsibility, including Chinese platforms’ “conditional liability” for tort damages and the regulators’ growing emphasis on user protection and personal information privacy. The chapter includes a case study on TikTok that shows the interplay between the Chinese approach, overseas laws and regulations, and the Chinese online platform’s content moderation practices.
There is growing concern about the evolution of violent extremism in the digital era. This chapter presents the historical progression and current state of how extremists have used digital advancements to increase their reach and influence for their own nefarious purposes. It also discusses the challenges posed by encryption and the need for strategic collaboration and a comprehensive whole-of-society approach to combat these threats effectively.
Platform governance and regulation have been salient political issues in Brazil for years, particularly as part of Congress’ response to democratic threats posed by former President Bolsonaro. The question became even more important after the January 8th attempted insurrection in Brasília, which many, including the newly installed Lula administration, blame on social media. In a letter read at the February 2023 UNESCO “Internet for Trust” global conference, the President, now in his third (non-consecutive) term in office, wrote that the attack on the nation’s seats of power was “the culmination of a campaign, initiated much before, and that used, as ammunition, lies and disinformation,” which “was nurtured, organized, and disseminated through several digital platforms and messaging apps.” The new administration has made platform regulation a policy priority, with regulatory and administrative pushes across the board. Brazil has been a battleground where proposals for platform responsibility have been advanced — and disputed.
Fifteen years ago in All Politics is Global, I developed a typological theory of global economic governance, arguing that globalization had not transformed international relations but merely expanded the arenas of contestation to include policy arenas that had previously been the exclusive province of domestic politics. In my model, what truly mattered to global governance was the distribution of preferences among the great powers. When great power coordination was achieved, effective governance would be the outcome; when it was not, global governance would exist in name only. Demands for greater content moderation across platforms have increased as the modern economy has become increasingly data-driven. Can any standards be negotiated at the global level? The likeliest result will be a hypocritical system of “sham governance.” Under this system, a few token agreements might be negotiated at the global level. Even these arrangements, however, will lack enforcement mechanisms and will likely be honored only in the breach. The regulatory center of gravity will remain at the national level. Changes at the societal and global levels over the past fifteen years only reinforce the dynamics that lead to such an outcome.
Global platforms present novel challenges. They serve as powerful conduits of commerce and global community. Yet their power to influence political and consumer behavior is enormous. Their responsibility for the use of this power – for their content – is statutorily limited by national laws such as Section 230 of the Communications Decency Act in the US. National efforts to demand and guide appropriate content moderation, and to avoid private abuse of this power, are in tension with the concern in liberal states to avoid excessive government regulation, especially of speech. Diverse and sometimes contradictory national rules responding to these tensions on a national basis threaten to splinter platforms and reduce their utility to both wealthy and poor countries. This edited volume sets out to respond to the question of whether a global approach can be developed to address these tensions while maintaining or even enhancing the social contribution of platforms.
The world has muddled through with limited and ambiguous understandings of the scope of national jurisdiction in a number of private and public law areas. In order to reduce the barriers posed by legal difference in the field of platform responsibility, states may begin by reducing areas of overlapping application of law, by agreeing on rules of exclusive jurisdiction. They may also agree on rules of national treatment, most favored nation treatment, and proportionality, or they may agree to harmonize rules. These incursions on national regulatory autonomy will require detailed, sector-specific negotiations, recognizing both the importance of global communications and the importance of national regulatory autonomy.
There is a potentially apt analogy between international tax regulation and platform content regulation because there is a homology between capital and information. On this basis, this chapter foregrounds three resemblances between tax regulation and content moderation. First, non-State actors use platforms to access, manage, and regulate flows of capital and, similarly, flows of information, exploiting regulatory differentials, so that there is a need for regulatory alignment in both cases. Second, since both capital and information escape the regulatory reach of States, a common standard must be achieved in both cases. Third, such a common standard can be achieved only if the home States of the Global Actors owning platforms jointly assume the obligation to moderate profit diversion, as well as immoderate use of platform content, through procedural accountability. The chapter explains the scope of the global tax problem, then details the process by which policies have been developed and describes the tax implications of platforms. The chapter concludes by suggesting lessons that can be learned from tax regulation for platform responsibility rules: the homology between capital and information points to regulatory structures that reduce excessive opportunism and immoderation in the use of computational capital by platforms.
Global platforms present novel challenges. They are powerful conduits of commerce and global community, and their potential to influence behavior is enormous. Defeating Disinformation explores how to balance free speech and dangerous online content to reduce societal risks of digital platforms. The volume offers an interdisciplinary approach, drawing upon insights from different geographies and parallel challenges of managing global phenomena with national policies and regulations. Chapters also examine the responsibility of platforms for their content, which is limited by national laws such as Section 230 of the Communications Decency Act in the US. This balance between national rules and the need for appropriate content moderation threatens to splinter platforms and reduce their utility across the globe. Timely and expansive, Defeating Disinformation develops a global approach to address these tensions while maintaining, and even enhancing, the social contribution of platforms. This title is also available as open access on Cambridge Core.
To the extent that internet cures are not online health misinformation, they resist the logic of intervention as problem-solving precisely because they provide resolution to problems that otherwise remain indeterminate: the digital inscription of miracle cures as record-making and record-keeping, the transnational networked sociality that emerges out of an increasingly datafied environment of executable text, the reconfiguration of downtime into connectedness and belonging, and the creation of an alternative miraculous space for therapy as a playful activity. The crowdsourcing of miracle cures happened organically, via social media acting as an intermediary matching community needs with community capacity. That the longevity of these online groups enables the post hoc creation of datasets that can be explored computationally, and that the dynamic knowledge-making processes unfolding in these groups become fully open to view thanks to platform affordances, are secondary to the pre-digital social dynamics that drove these practices forward. These secondary utilities, however, came to solidify and legitimize these practices in an ecology of datafied behaviour; in the process, they also transformed expectations around what it means to engage with miracle cures. If seeking herbal cures for cancer, for example, used to mean coming to a lương y (‘doctor of good conscience’) for advice, or going to a herbal store to purchase thuốc gia truyền (‘family transmission’) recipes and coming away with instructions based on socially sanctioned expertise, increasingly people are taking to social media to work out the details of these bodies of knowledge, both in response to emergent health concerns and to enact the work of care.
That it became acceptable and even desirable to carry out this kind of work in such a digital context is a by-product of the historical continuation of practices that never quite ceased to exist in the first place – and also of emerging forms of sociality as compositions of meaning via digital platform affordances.
Miracle cures proliferate at the digital edge in ways that are vital to their survival: in languages that skirt the technical capabilities and political will of regimes of automated platform content moderation, as esoteric discourse that defies easy categorization, in formats that are prioritized by the imperative of platform profit, and at a temporality constantly recalibrated to accommodate self-time (eigenzeit).