The Cambridge Handbook of Behavioural Data Science offers an essential exploration of how behavioural science and data science converge to study, predict, and explain human, algorithmic, and systemic behaviours. Bringing together scholars from psychology, economics, computer science, engineering, and philosophy, the Handbook presents interdisciplinary perspectives on emerging methods, ethical dilemmas, and real-world applications. Organised into modular parts (Human Behaviour, Algorithmic Behaviour, Systems and Culture, and Applications), it provides readers with a comprehensive, flexible map of the field. Covering topics from cognitive modelling to explainable AI, and from social network analysis to the ethics of large language models, the Handbook reflects on both technical innovations and the societal impact of behavioural data, and reinforces concepts in online supplementary materials and videos. The book is an indispensable resource for researchers, students, practitioners, and policymakers who seek to engage critically and constructively with behavioural data in an increasingly digital and algorithmically mediated world.
For far too long, tech titans peddled promises of disruptive innovation, fabricating benefits and minimizing harms. The promise of quick and easy fixes overpowered a growing chorus of critical voices, driving a sea of private and public investments into increasingly dangerous, misguided, and doomed forms of disruption, with the public paying the price. But what's the alternative? Upgrades: evidence-based, incremental change. Instead of continuing to invest in untested, high-risk innovations, constantly chasing outsized returns, upgraders seek a more proven path to proportional progress. This book dives deep into some of the most disastrous innovations of recent years (the metaverse, cryptocurrency, home surveillance, and AI, to name a few) while highlighting some of the unsung upgraders pushing real progress each day. Timely and corrective, Move Slow and Upgrade pushes us past the baseless promises of innovation, towards realistic hope.
The core topics at the intersection of human-computer interaction (HCI) and US law – privacy, accessibility, telecommunications, intellectual property, artificial intelligence (AI), dark patterns, human subjects research, and voting – can be hard to understand without a deep foundation in both law and computing. Every member of the author team of this unique book brings expertise in both law and HCI to provide an in-depth yet understandable treatment of each topic area for professionals, researchers, and graduate students in computing and/or law. Two introductory chapters explaining the core concepts of HCI (for readers with a legal background) and US law (for readers with an HCI background) are followed by in-depth discussions of each topic.
Being Human in the Digital World is a collection of essays by prominent scholars from various disciplines exploring the impact of digitization on culture, politics, health, work, and relationships. The volume raises important questions about the future of human existence in a world where machine readability and algorithmic prediction are increasingly prevalent and offers new conceptual frameworks and vocabularies to help readers understand and challenge emerging paradigms of what it means to be human. Being Human in the Digital World is an invaluable resource for readers interested in the cultural, economic, political, philosophical, and social conditions that are necessary for a good digital life. This title is also available as Open Access on Cambridge Core.
In recent years, the use of AI has skyrocketed. The introduction of widely available generative AI, such as ChatGPT, has reinvigorated concerns about harm caused to users. Yet so far government bodies and scholarly literature have failed to determine a governance structure to minimize the risks associated with AI and big data. Despite the recent consensus among tech companies and governments that AI needs to be regulated, there has been no agreement regarding what a framework of functional AI governance should look like. This volume assesses the role of law in governing AI applications in society. While exploring the intersection of law and technology, it argues that getting the mix of AI governance structures correct – both inside and outside of the law – while balancing the importance of innovation with risks to human dignity and democratic values, is one of the most important legal-social determinations of our times.
This chapter examines conservative attacks on social media, and their validity. Conservatives have long accused the major social media platforms of left-leaning bias, claiming that platform content moderation policies unfairly target conservative content for blocking, labeling, and deamplification. They point in particular to events during the COVID-19 lockdowns, as well as President Trump’s deplatforming, as proof of such bias. In 2021, these accusations led both Florida and Texas to adopt laws regulating platform content moderation in order to combat the alleged bias. But a closer examination of the evidence raises serious doubts about whether such bias actually exists. An equally plausible explanation for why conservatives perceive bias is that social media content moderation policies, in particular against medical disinformation and hate speech, are more likely to affect conservative than other content. For this reason, claims of platform bias remain unproven. Furthermore, modern conservative attacks on social media are strikingly inconsistent with the general conservative preference not to interfere with private businesses.
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host, and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
As Chapter 1 discusses, one of the most consistent conservative critiques of social media platforms is that social media is biased against conservative content. A common policy proposal to address this is to regulate such platforms as common carriers. Doing so would require social media platforms to host, on a nondiscriminatory basis, all legal user content and to permit all users to access platforms on equal terms. While this seems an attractive idea – after all, who could object to nondiscrimination? – it is not. For one thing, the Supreme Court has now recognized that social media platforms possess "editorial rights" under the First Amendment to control what content they carry, block, and emphasize in their feeds. So, regulating platforms as common carriers, as Texas and Florida have sought to do, is unconstitutional. It is also a terrible idea. Requiring platforms to carry all content on a nondiscriminatory basis, even if limited to legal content (which would be hard to do), would flood user feeds with such lawful-but-awful content as pornography, hate speech, and terrorist propaganda. This in turn would destroy social media as a usable medium, to the detriment of everyone.
This brief conclusion summarizes the main thesis of the book, noting that both conservative and progressive critiques of social media lack strong empirical justifications, and that many if not most of the regulatory proposals directed at social media are not only likely to be found unconstitutional, but are also wrong-headed. It then argues that it is time we all accept that the old, pre-social media world of gatekeepers is over; and further, that this development has important, positive implications for the democratization of public discourse in ways that free speech theory supports. Finally, the Conclusion analogizes the modern hysteria over the growth of social media to earlier panics over changes in communications technology, such as the inventions of the printing press and of moving pictures. As with those earlier panics, this one too is overblown and ignores the positive potential impacts of technological change.
Critics from across the political spectrum attack social media platforms for invading personal privacy. Social media firms famously suck in huge amounts of information about individuals who use their services (and sometimes others as well), and then monetize this data, primarily by selling targeted advertising. Many privacy advocates object to the very collection and use of this personal data by platforms, even if not shared with third parties. In addition, there is the ongoing (and reasonable) concern that the very existence of Big Data creates a risk of leaks. Further, aside from the problem of Big Data, the very existence of social media enables private individuals to invade the privacy of others by widely disseminating personal information. That social media firms’ business practices compromise privacy cannot be seriously doubted. But it is also true that Big Data lies at the heart of social media firms’ business models, permitting them to provide users with free services in exchange for data which they can monetize via targeted advertising. So unless regulators want to take free services away, they must tread cautiously in regulating privacy.
After having argued against most current regulatory reform proposals directed at social media, this final chapter considers some regulatory initiatives worthy of consideration. It begins, however, with a call for caution. The principle of "First, do no harm" in medical ethics is highly relevant here. Social media is too new, and too rapidly evolving, for regulators to be able to confidently predict either the current impact of regulation or its long-term effects, so regulators must act with humility. That said, social media is also not a law-free zone. Long-standing bodies of law, such as antitrust, contract, tort, and even family law, can and should be applied to social media firms in the same way as to other private actors. Furthermore, even Section 230 in its current form should not be sacrosanct, and there is also room to consider granting platform users modest procedural protections against arbitrary content moderation decisions. Finally, there are strong arguments for a federal data privacy law, not directed at social media in particular but certainly applicable to it. In short, social media should not be above the law – but nor should it be the target of lawfare.
In contrast to conservatives, progressives argue that platforms don’t block enough content. In particular, progressive critics point to the prevalence of allegedly harmful content on social media platforms, including politically manipulative content, mis- and disinformation (especially about medical issues), harassment and doxing, and hate speech. They argue that social media algorithms actively promote such content to increase engagement, resulting in many forms of social harm including greater political polarization. And they argue (along with conservatives) that social media platforms have been especially guilty of permitting materials harmful to children to remain accessible. As with the conservative attacks, however, the progressive war on social media is rife with exaggerations and rests on shaky empirical grounds. In particular, there is very little proof that platform algorithms increase political polarization, or even proof that social media harms children. Moreover, while not all progressive attacks on social media lack a foundation, they are all rooted in an entirely unrealistic expectation that perfect content moderation is possible.
The primary progressive model for curing the perceived ills of social media – the failure to block harmful content – is to encourage or require social media platforms to act as gatekeepers. On this view, the institutional media, such as newspapers, radio, and television, historically ensured that the flow of information to citizens and consumers was "clean," meaning cleansed of falsehoods and malicious content. This in turn permitted a basic consensus to exist on facts and basic values, something essential for functional democracies. The rise of social media, however, destroyed the ability of institutional media to act as gatekeepers, and so, it is argued, it is incumbent on platforms to step into that role. This chapter argues that this is misguided. Traditional gatekeepers shared two key characteristics: scarcity and objectivity. Neither, however, characterizes the online world. And in any event, social media platforms lack both the economic incentives and the expertise to be effective gatekeepers of information. Finally, and most fundamentally, the entire model of elite gatekeepers of knowledge is inconsistent with basic First Amendment principles and should be abandoned.
The area where social media platforms have undoubtedly been most actively regulated is their data and privacy practices. While no serious critic has proposed a flat ban on data collection and use (since that would destroy the algorithms that drive social media), a number of important jurisdictions including the European Union and California have imposed important restrictions on how websites (including social media) collect, process, and disclose data. Some privacy regulations are clearly justified, but insofar as data privacy laws become so strict as to threaten advertising-driven business models, the result will be that social media (and search and many other basic internet features) will stop being free, to the detriment of most users. In addition, privacy laws (and related rules such as the “right to be forgotten”) by definition restrict the flow of information, and so burden free expression. Sometimes that burden is justified, but especially when applied to information about public figures, suppressing unfavorable information undermines democracy. The chapter concludes by arguing that one area where stricter regulation is needed is protecting children’s data.
This brief introduction argues that the current, swirling debates over the ills of social media are largely a reflection of larger forces in our society. Social media is accused of creating political polarization, yet polarization long predates social media and pervades every aspect of our society. Social media is accused of a liberal bias and “wokeness”; but in fact, conservative commentators accuse every major institution of our society, including academia, the press, and corporate America, of the same sin. Social media is said to be causing psychological harm to young people, especially young women. But our society’s tendency to impose image-consciousness on girls and young women, and to sexualize girls at ever younger ages, pervades not just social but also mainstream media, the clothing industry, and our culture more generally. And as with polarization, this phenomenon long predates the advent of social media. In short, the supposed ills of social media are in fact the ills of our broader culture. It is just that the pervasiveness of social media makes it the primary mirror in which we see ourselves; and apparently, we do not much like what we see.
The fast-paced evolution of emotion technology and neurotechnology, along with their commercial potential, raises concerns about the adequacy of existing legal frameworks. International organizations have begun addressing these technologies in policy papers, and initial legislative responses are underway. This book offers a comprehensive legal analysis of EU legislation regulating these technologies. It examines four key use cases frequently discussed in media, civil society, and policy debates: mental health and well-being, commercial advertising, political advertising, and workplace monitoring. The book assesses current legal frameworks, highlighting the gaps and challenges involved. Building on this analysis, it presents potential policy responses, exploring a range of legal instruments to address emerging issues. Ultimately, the book aims to offer valuable insights for legal scholars, policymakers, and other stakeholders, contributing to ongoing governance debates and fostering the responsible development of these technologies.