This conclusion weaves together the wide-ranging contributions of this volume by considering data-driven personalisation as an internally self-sustaining (autopoietic) system. It observes that, like other self-sufficient social systems, personalisation incorporates and processes new data and thereby redefines itself. In doing so it redefines the persons who participate in it, transforming them into ‘digital’ components of this new system, as well as influencing social arrangements more broadly. The control that elite corporate and governmental entities have over systems of personalisation – described in diverse ways by contributors to this volume – reveals challenges in the taming of personalisation, specifically the limits of the traditional means by which free persons address new phenomena: through consent as individuals, and through democratic process collectively.
The development of data-driven personalisation in medicine, as exemplified by the ‘P4’ approach formulated by Leroy Hood and colleagues, may be viewed as consistent with a particular understanding of law’s role in respect of health, and with the dominant ethical principle of autonomy which underpins this. This chapter maps the direction of travel of health law in the UK in recent times against the evolution of personalised medicine. It notes, however, that this offers merely a partial account of the function of law in this context, as well as of the reach of this sub-discipline as a scholarly endeavour.
In the European Union, regulatory analysis of artificial intelligence in general, and of personalisation in particular, often starts with data protection law, specifically the General Data Protection Regulation (GDPR). This is unsurprising, given that training data often contains personal data and that the output of these systems can also take the form of personal data. There are, however, limits to data protection’s ability to function as a general AI law. This chapter highlights the importance of being realistic about the GDPR’s opportunities and limitations in this respect. It examines the application of certain elements of the GDPR to data-driven personalisation and shows that, whereas the Regulation indeed applies to the processing of personal data, it would be erroneous to frame it as a general ‘AI law’ capable of addressing all normative concerns around personalisation.
Data-driven personalisation is emerging as a central force in political communication. Political micro-targeting has the potential to enhance political engagement and to make it easier and more effective for political parties and movements to communicate with potential voters and supporters. However, the collection and use of personal information about voters also affects their privacy rights and can interfere with the personal autonomy that is essential for democracy.
This chapter argues that the rise of data-driven communications requires a re-evaluation of the role of information privacy in political campaigns. Data protection laws have an important role to play in limiting the processing of personal data and in requiring data practices to be designed in a manner that balances privacy and competing rights. In our view, there is no longer a good case for the retention in data protection laws of exemptions for political parties or actors, or of overly broad provisions permitting data processing in political contexts.
Subjecting political parties and digital intermediaries to the general requirements of fair, transparent and lawful processing would go some way towards moderating political micro-targeting. The imposition of any privacy-based restrictions on political actors would enhance voter privacy, engender more trust in political communication and, ultimately, protect democratic discourse.
Drawing upon Foucauldian ideas, this chapter explores how the ‘datafication’ of modern life shifts the modes of power acting upon the individual and social body. Through a brief exploration of three banal everyday social practices (driving, health, gambling), it argues that the construction of the data-self marks an emergent algorithmic governmentality centred simultaneously upon intimate knowledge of the individual (subjectivities) and of the population. The intersection of technology, data and subjectivation reproduces a ‘neoliberal subject’ – one closely monitored and policed to freely perform ‘correct’ forms of action or behaviour, and one increasingly governed by the imperatives of private capital. This chapter explores how this nexus between power and knowledge is central to debates about the relocation (or appropriation) of personal and population data from state to non-state institutions, with private corporations increasingly managing the health and wellbeing of individuals and society. It argues for critical engagement with the complex interactions, intersections, effects and unintended consequences of the multiple technologies that, through the use of data, make the simultaneous government of individuals and populations their target of action.
Should we regulate artificial intelligence? Can we? From self-driving cars and high-speed trading to algorithmic decision-making, the way we live, work, and play is increasingly dependent on AI systems that operate with diminishing human intervention. These fast, autonomous, and opaque machines offer great benefits – and pose significant risks. This book examines how our laws are dealing with AI, as well as what additional rules and institutions are needed – including the role that AI might play in regulating itself. Drawing on diverse technologies and examples from around the world, the book offers lessons on how to manage risk, draw red lines, and preserve the legitimacy of public authority. Though the prospect of AI pushing beyond the limits of the law may seem remote, these measures are useful now – and will be essential if it ever does.
The most fascinating and profitable subject of predictive algorithms is the human actor. Analysing big data through learning algorithms to predict and pre-empt individual decisions gives a powerful tool to corporations, political parties and the state. Algorithmic analysis of digital footprints, as an omnipresent form of surveillance, has already been used in diverse contexts: behavioural advertising, personalised pricing, political micro-targeting, precision medicine, and predictive policing and prison sentencing. This volume brings together experts to offer philosophical, sociological, and legal perspectives on these personalised data practices. It explores common themes such as choice, personal autonomy, equality, privacy, and corporate and governmental efficiency against the normative frameworks of the market, democracy and the rule of law. By offering these insights, this collection on data-driven personalisation seeks to stimulate an interdisciplinary debate on one of the most pervasive, transformative, and insidious socio-technical developments of our time.
It is fitting that the last example we introduced in the book was about the Internet Research Agency’s (IRA) use of social media, analytics, and recommendation systems to wage disinformation campaigns and sow anger and social discord on the ground. At first glance, it seems odd to think of that as primarily an issue of technology. Disinformation campaigns are ancient, after all; the IRA’s tactics are old wine in new bottles. That, however, is the point. What matters most is not the particular features of technologies. Rather, it is how a range of technologies affect things of value in overlapping ways. The core thesis of our book is that understanding the moral salience of algorithmic decision systems requires understanding how such systems relate to an important value, viz., persons’ autonomy. Hence, the primary through line of the book is the value itself, and we have organized the book to emphasize distinct facets of autonomy, using algorithmic systems as case studies.
A little after 2 a.m. on February 11, 2013, Michael Vang sat in a stolen car and fired a shotgun twice into a house in La Crosse, Wisconsin. Shortly afterward, Vang and Eric Loomis crashed the car into a snowbank and fled on foot. They were soon caught, and police recovered spent shell casings, live ammunition, and the shotgun from the stolen and abandoned car. Vang pleaded no contest to operating a motor vehicle without the owner’s consent, attempting to flee or elude a traffic officer, and possession of methamphetamine. He was sentenced to ten years in prison.
This chapter connects our arguments about agency and autonomy in chapters 2-4 to conceptions of freedom and its value. We argue that freedom has two fundamental conditions: that persons be undominated by others and that they have an adequate degree of autonomy and agency. We then explain that algorithmic systems can threaten both the domination-based and the agency-based requirements, either by facilitating domination or by exploiting weaknesses in human agency. We explicate these types of threats as three sorts of challenges to freedom. The first are “affective challenges,” which involve the role of affective, nonconscious processes (such as fear, anger, and addiction) in human behavior and decision-making. These processes, we argue, interfere with our procedural independence, thereby threatening persons’ freedom by undermining autonomy. The second are “deliberative challenges.” These involve strategic exploitation of the fact that human cognition and decision-making are limited. These challenges also relate to our procedural independence, but they do not so much interfere with it as exploit its natural limits. The third sort of challenge, which we describe as “social challenges,” involves toxic social and relational environments. These threaten our substantive independence and thus our freedom.
This chapter outlines the conception of autonomy that grounds the arguments throughout the book. We begin with a basic definition of autonomy as self-government, distinguish global and local autonomy, and explain how autonomy may be understood as a capacity, as the exercise of that capacity, as successful self-government, and as a right. We then describe a key split in the philosophical literature between psychological autonomy and personal autonomy. We offer an ecumenical view of autonomy that incorporates facets of both psychological and personal autonomy. Finally, we rehearse some key objections to traditional conceptions of autonomy, and explain how contemporary accounts address those criticisms.
In this chapter, we address some distinctively epistemic problems that algorithms pose in the context of social media and argue that, in some cases, these epistemic problems warrant paternalistic interventions. Our proposed paternalistic response to these problems is compatible with respect for freedom and autonomy; in fact, we argue that freedom and autonomy demand some kinds of paternalistic interventions. The chapter proceeds as follows. First, we discuss an intervention that Facebook has run in hopes of demoting the spread of fake news on the site. We explain why the intervention is paternalistic and then, using the framework of this book, defend it. We argue that while Facebook’s intervention is defensible, it is limited: it may pop some epistemic bubbles but will likely be powerless against echo chambers. We then discuss heavier-handed interventions that might be effective enough to dismantle some echo chambers, and we argue that at least some of these heavier-handed epistemically paternalistic interventions are permissible.