Misinformation on social media is a recognized threat to societies. Research has shown that social media users play an important role in the spread of misinformation, so it is crucial to understand how misinformation affects users’ online interaction behavior and the factors that contribute to it. In this study, we employ an AI deep learning model to analyze emotions in users’ online social media conversations about misinformation during the COVID-19 pandemic. We further apply the Stimulus–Organism–Response framework to examine the relationship between the presence of misinformation, emotions, and social bonding behavior. Our findings highlight the usefulness of AI deep learning models for analyzing emotions in social media posts and enhance the understanding of online social bonding behavior around health-related misinformation.
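The abstract does not name the model used, but the workflow it describes can be sketched. The snippet below is a minimal, hypothetical illustration in Python: it runs a publicly available pretrained emotion classifier (the checkpoint name is an assumption, standing in for whatever model the study actually used) over example social media posts.

```python
# Minimal sketch of deep-learning emotion analysis on social media posts.
# Assumption: the study's own model is not named, so a publicly available
# pretrained emotion classifier stands in for it here.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed stand-in checkpoint
    top_k=None,  # return a score for every emotion label, not just the top one
)

posts = [
    "This so-called cure is a hoax, stop sharing it!",
    "Thank you all for the support, this community keeps me going.",
]

for post, scores in zip(posts, classifier(posts)):
    # Each result is a list of {label, score} dicts, one per emotion label.
    top = max(scores, key=lambda s: s["score"])
    print(f"{top['label']:>8} ({top['score']:.2f})  {post}")
```

Per-post emotion scores like these could then feed into a Stimulus–Organism–Response analysis, with the presence of misinformation as the stimulus, emotion as the organism state, and social bonding behavior as the response.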
In Chapter 2, the classification of data processed by MDTs under the General Data Protection Regulation (GDPR) is examined. While the data processed by MDTs is typically linked to the category of biometric data, accurately classifying the data as special category biometric data is complex. As a result, substantial amounts of data lack the special protections afforded by the GDPR. Notably, data processed by text-based MDTs falls entirely outside the realm of special protection unless associated with another protected category. The book advocates for a shift away from focusing on the technological or biophysical parameters that render mental processes datafiable. Instead, it emphasises the need to prioritise the protection of the information itself. To address this, Chapter 2 proposes the inclusion of a new special category of ‘mind data’ within the GDPR. The analysis shows that classifying mind data as a sui generis special category aligns with the rationale and tradition of special category data in data protection law.
Generative AI based on large language models (LLMs) currently faces serious privacy-leakage issues due to the models’ vast number of parameters and diverse training data sources. When using generative AI, users inevitably share data with the system. Personal data collected by generative AI may be used for model training and leaked in future outputs. The risk of private information leakage is closely tied to the inherent operating mechanism of generative AI, and this indirect leakage is difficult for users to detect because of the high complexity of that mechanism. By focusing on the private information exchanged during interactions between users and generative AI, we identify the privacy dimensions involved and develop a model of privacy types in human–generative AI interactions. This model can serve as a reference for keeping private data out of model training and can help generative AI systems provide clear explanations about the types of privacy users are concerned about.
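As a rough illustration of the interaction the abstract describes, consider screening a user prompt for private information before it reaches a generative AI system. The privacy categories and regular expressions below are illustrative assumptions for the sketch, not the paper’s actual privacy-type model.

```python
# Hypothetical sketch: detect and redact common categories of private
# information in a prompt before it is sent to a generative AI system.
# The categories and patterns are illustrative heuristics, not the
# paper's privacy taxonomy.
import re

PRIVACY_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s()-]{7,}\d\b"),
    "ssn_like_id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace matched private spans with placeholders and report
    which privacy types were detected."""
    found = []
    for ptype, pattern in PRIVACY_PATTERNS.items():
        if pattern.search(prompt):
            found.append(ptype)
            prompt = pattern.sub(f"[{ptype.upper()} REDACTED]", prompt)
    return prompt, found

clean, types = redact("Email me at jane.doe@example.com or call +1 555 123 4567.")
print(types)  # ['email', 'phone']
print(clean)  # placeholders instead of the raw identifiers
```

A screening step of this kind addresses only the direct channel (what the user types in); the indirect leakage the abstract emphasizes, via model training and later outputs, would have to be handled on the provider’s side.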
Use case 1 in Chapter 4 explores the regulation of MDTs in the context of mental health and well-being under the General Data Protection Regulation (GDPR), the Medical Devices Regulation (MDR), the Artificial Intelligence Act (AIA), and the European Health Data Space (EHDS) Regulation. The analysis reveals that data protection issues in this sector are not primarily due to deficiencies in the law, but rather stem from significant compliance weaknesses, particularly in applications extending beyond the traditional medical sector. Consumer mental health and well-being devices could greatly benefit from co-regulatory measures, such as a sector-specific data protection certification. Additionally, legislators need to tackle the issue of manufacturers circumventing MDR certification due to ambiguities in the classification model. The EU’s regulatory approach to non-medical Brain–Computer Interfaces (BCIs) within medical devices legislation is highlighted as a potential blueprint and should be advocated in ongoing global policy discussions concerning neurotechnologies.
As generative AI technologies continue to advance at a rapid pace, they are fundamentally transforming the dynamics of human–AI interaction and collaboration, a phenomenon once confined to the realm of science fiction. These developments not only present unprecedented opportunities but also introduce a range of complex challenges. Key factors such as trust, transparency, and cultural sensitivity have emerged as essential considerations in the successful adoption and efficacy of these systems. Furthermore, the intricate balance between human and AI contributions, the optimization of algorithms to accommodate diverse user needs, and the ethical implications of AI’s role in society pose significant challenges that require careful navigation. This chapter delves into these multifaceted issues, analyzing both user-level concerns and the underlying technical and psychological dynamics that are critical to fostering effective human–AI interaction and collaboration.
The fast-paced evolution of emotion technology and neurotechnology, along with their commercial potential, raises concerns about the adequacy of existing legal frameworks. International organizations have begun addressing these technologies in policy papers, and initial legislative responses are underway. This book offers a comprehensive legal analysis of EU legislation regulating these technologies. It examines four key use cases frequently discussed in media, civil society, and policy debates: mental health and well-being, commercial advertising, political advertising, and workplace monitoring. The book assesses current legal frameworks, highlighting the gaps and challenges involved. Building on this analysis, it presents potential policy responses, exploring a range of legal instruments to address emerging issues. Ultimately, the book aims to offer valuable insights for legal scholars, policymakers, and other stakeholders, contributing to ongoing governance debates and fostering the responsible development of these technologies.
This chapter examines conservative attacks on social media, and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host, and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
As Chapter 1 discusses, one of the most consistent conservative critiques of social media platforms is that social media is biased against conservative content. A common policy proposal to address this is to regulate such platforms as common carriers. Doing so would require social media platforms to host, on a nondiscriminatory basis, all legal user content and to permit all users to access platforms on equal terms. While this seems an attractive idea – after all, who could object to nondiscrimination? – it is not. For one thing, the Supreme Court has now recognized that social media platforms possess "editorial rights" under the First Amendment to control what content they carry, block, and emphasize in their feeds. So, regulating platforms as common carriers, as Texas and Florida have sought to do, is unconstitutional. It is also a terrible idea. Requiring platforms to carry all content on a nondiscriminatory basis, even if limited to legal content (which it would be hard to do), would flood user feeds with such lawful-but-awful content as pornography, hate speech, and terrorist propaganda. This in turn would destroy social media as a usable medium, to the detriment of everyone.
This brief conclusion summarizes the main thesis of the book, noting that both conservative and progressive critiques of social media lack strong empirical justifications, and that many if not most of the regulatory proposals directed at social media are not only likely to be found unconstitutional, but are also wrong-headed. It then argues that it is time we all accept that the old, pre-social media world of gatekeepers is over; and further, that this development has important, positive implications for the democratization of public discourse in ways that free speech theory supports. Finally, the Conclusion analogizes the modern hysteria over the growth of social media to earlier panics over changes in communications technology, such as the inventions of the printing press and of moving pictures. As with those earlier panics, this one too is overblown and ignores the positive potential impacts of technological change.
Critics from across the political spectrum attack social media platforms for invading personal privacy. Social media firms famously suck in huge amounts of information about individuals who use their services (and sometimes others as well), and then monetize this data, primarily by selling targeted advertising. Many privacy advocates object to the very collection and use of this personal data by platforms, even if not shared with third parties. In addition, there is the ongoing (and reasonable) concern that the very existence of Big Data creates a risk of leaks. Further, aside from the problem of Big Data, the very existence of social media enables private individuals to invade the privacy of others by widely disseminating personal information. That social media firms’ business practices compromise privacy cannot be seriously doubted. But it is also true that Big Data lies at the heart of social media firms’ business models, permitting them to provide users with free services in exchange for data which they can monetize via targeted advertising. So unless regulators want to take free services away, they must tread cautiously in regulating privacy.
After having argued against most current regulatory reform proposals directed at social media, this final chapter considers some regulatory initiatives worthy of consideration. It begins, however, with a call for caution. The principle of "First, do no harm" in medical ethics is highly relevant here. Social media is too new, and too rapidly evolving, for regulators to be able to confidently predict either the current impact of regulation or its long-term effects, so regulators must act with humility. That said, social media also is not a law-free zone. Long-standing bodies of law, such as antitrust, contract, tort, and even family law, can and should be applied to social media firms in the same way as other private actors. Furthermore, even Section 230 in its current form should not be sacrosanct, and there is also room to consider granting platform users modest procedural protections against arbitrary content moderation decisions. Finally, there are strong arguments for a federal data privacy law, not directed at social media in particular but certainly applicable to it. In short, social media should not be above the law – but nor should it be the target of lawfare.
In contrast to conservatives, progressives argue that platforms don’t block enough content. In particular, progressive critics point to the prevalence of allegedly harmful content on social media platforms, including politically manipulative content, mis- and disinformation (especially about medical issues), harassment and doxing, and hate speech. They argue that social media algorithms actively promote such content to increase engagement, resulting in many forms of social harm including greater political polarization. And they argue (along with conservatives) that social media platforms have been especially guilty of permitting materials harmful to children to remain accessible. As with conservative attacks, however, the progressive war on social media is rife with exaggerations and rests on shaky empirical grounds. In particular, there is very little proof that platform algorithms increase political polarization, or even that social media harms children. Moreover, while not all progressive attacks on social media lack a foundation, they are all rooted in an entirely unrealistic expectation that perfect content moderation is possible.
The primary progressive model for curing the perceived ills of social media – the failure to block harmful content – is to encourage or require social media platforms to act as gatekeepers. On this view, the institutional media, such as newspapers, radio, and television, historically ensured that the flow of information to citizens and consumers was "clean," meaning cleansed of falsehoods and malicious content. This in turn permitted a basic consensus to exist on facts and basic values, something essential for functional democracies. The rise of social media, however, destroyed the ability of institutional media to act as gatekeepers, and so, it is argued, it is incumbent on platforms to step into that role. This chapter argues that this is misguided. Traditional gatekeepers shared two key characteristics: scarcity and objectivity. Neither, however, characterizes the online world. And in any event, social media platforms lack both the economic incentives and the expertise to be effective gatekeepers of information. Finally, and most fundamentally, the entire model of elite gatekeepers of knowledge is inconsistent with basic First Amendment principles and should be abandoned.
The area where social media platforms have undoubtedly been most actively regulated is their data and privacy practices. While no serious critic has proposed a flat ban on data collection and use (since that would destroy the algorithms that drive social media), a number of important jurisdictions, including the European Union and California, have imposed important restrictions on how websites (including social media) collect, process, and disclose data. Some privacy regulations are clearly justified, but insofar as data privacy laws become so strict as to threaten advertising-driven business models, the result will be that social media (and search and many other basic internet features) will stop being free, to the detriment of most users. In addition, privacy laws (and related rules such as the “right to be forgotten”) by definition restrict the flow of information, and so burden free expression. Sometimes that burden is justified, but especially when applied to information about public figures, suppressing unfavorable information undermines democracy. The chapter concludes by arguing that one area where stricter regulation is needed is protecting children’s data.