This chapter examines how existing laws can and should apply to emerging technology through attribution of responsibility. Legal systems typically seek to deter identifiable persons – natural or juridical – from certain forms of conduct, or to allocate losses to those persons. Responsibility may be direct or indirect: key questions are how the acts and omissions of AI systems can and should be understood. Given the complexity of those systems, novel approaches to responsibility have been proposed, including special applications of product liability, agency, and causation. More important and less studied is the role that insurance can play not only in compensating harm but also in structuring incentives for action. Another approach is to limit the ability to avoid responsibility, drawing on the literature on outsourcing and the prohibition on transferring certain forms of responsibility – most notably the exercise of discretion in the public sector.
Since computers entered the mainstream in the 1960s, the efficiency with which data can be processed has raised regulatory questions. This is well understood with respect to privacy. Data that was notionally public – divorce proceedings, say – had long been protected through the ‘practical obscurity’ of paper records. When such material was available in a single hard copy in a government office, the chances of one’s acquaintances or employer finding it were remote. Yet when it was computerized and made searchable through what ultimately became the Internet, practical obscurity disappeared. Today, high-speed computing poses comparable threats to existing regulatory models in areas from securities regulation to competition law, merely by enabling lawful activities – trading in stocks, or comparing and adjusting prices, say – to be undertaken more quickly than was previously thought possible. Many of these questions are practical rather than conceptual and apply to technologies other than AI. Nevertheless, current approaches to slowing down decision-making – through circuit-breakers to stop trading, for example – will not address all of the problems raised by the speed of AI systems.
Data about consumers has long been a prized asset of organizations. As Paul Schwartz has observed, the “monetary value” of consumer data continues to grow significantly, and companies eagerly profit from it.1 The IoT will foster exponential growth in the volume, quality, and variety of consumer-generated data. As a result, more of our data will be available for companies to analyze, exploit, and extract value from. As we have seen in previous chapters, several legal scholars have highlighted the limits of companies’ privacy policies and conditions of use, and the role of these documents in enabling data disclosures.
Data-driven algorithms are increasingly used by penal systems across western jurisdictions to predict risks of recidivism. This chapter draws on Foucauldian analysis of the epistemic power of discourse to demonstrate how the algorithms are operating as truth or knowledge producers through the construction of risk labels that determine degrees of penal governance and control. Some proponents emphasise the technical fairness of the algorithms, highlighting their predictive accuracy. But in its exploration of the algorithms and their design configurations as well as their structural implications, this chapter unravels the distinctions between a criminal justice and a social justice perspective on algorithmic fairness. It argues that whilst the former focuses on the technical, the latter emphasises broader structural consequences. These include impositions of algorithmic risk labels that operate as self-fulfilling prophecies, triggering future criminalisation and consequently undermining the perceived legitimacy of risk assessment and prediction. Through its theorisation of these issues, the chapter expands the parameters of current scholarship on the predictive algorithms applied by penal systems.
As we have seen so far in this book, the IoT comprises various connected devices, services, and systems. Connecting regular devices to the Internet has made it much easier for companies to protect their interests in consumer transactions. New technologies allow companies to continue to wield significant control over us and our devices beyond the point of sale, license, or lease. As Aaron Perzanowski and Jason Schultz have observed, the IoT “threatens our sense of control over the devices we purchase.”1 Of chief concern is companies’ use of technology to control our devices and actions and digitally restrain our activities in lending transactions.
Cities around the world are trying to set standards for their digital agendas and constructing smart city ‘roadmaps’. This chapter takes stock of emerging approaches that seek to apply or supplement existing rules on privacy and data protection, and to sustain public confidence and support in the face of innovation and change, and explores the challenges they raise. Drawing on a growing critical literature in law, planning, and other fields that seeks to identify the nature and implications of these developments beyond the promotional language of ‘smart’, it assesses Sidewalk’s masterplan for a site in Toronto and the Los Angeles Department of Transportation’s new and influential approach to mobility data (data associated with ridesharing, ‘micromobility’ such as e-scooters, and in time autonomous vehicles). Two facets of urban technology that relate to (and ultimately enable) the delivery of personalised services by public authorities and others are then considered: ratings and reputation (highlighting Chinese cities deploying aspects of the emerging ‘social credit’ systems) and facial recognition (noting that the ability to recognise individuals in this way, without a conventional and more deliberate form of identification such as supplying a name, entering a password, or older biometric systems like fingerprint scanning, is a key part of many proposed personalised services).
Unlike privacy law discourse, which has primarily explored questions related to others’ knowledge, access, and use of information about us, commercial law’s central focus has been on issues related to trade involving persons, merchants, and entities. In the commercial law context, questions about knowledge and information are primarily connected to the exchange and disclosure of information needed to facilitate transactions between parties.1 This distinct historical focus has likely contributed to commercial law’s failure to adequately account for and address privacy, security, and digital domination harms. In some cases, commercial law also defers to corporate commercial practices.
Most of the existing privacy and security legal frameworks at both the federal and state level provide incomplete safeguards against many of the privacy and information security harms highlighted in earlier chapters. Many of these frameworks have long been critiqued by privacy law experts for their lack of effectiveness. The IoT amplifies these inadequacies as it compounds existing privacy and security challenges.
At the state level, the patchwork of privacy and security legislation creates varying obligations for businesses without consistently ensuring that individuals receive adequate privacy and cybersecurity protection. State legislation also suffers from several shortcomings and is often replete with gaping privacy and security holes. Even the CCPA, the first privacy statute of its kind in the United States, has several limitations. Further, variation in state privacy and security legislation also results in unequal access to privacy and security protections for citizens of different states.
The legal approach to regulating data-driven personalisation has relied heavily on extending and reusing legal categories and concepts – in particular, the idea of privacy of personal information and the legitimating role of consent in permitting the use of personal information – that were originally devised to deal with a very different problem. This chapter argues that this approach is fundamentally flawed for two reasons. Firstly, data-driven personalisation – unlike the traditional core of privacy – is deeply enmeshed in contractual relationships, and both the gathering and the use of data are mediated by contractual terms. As this chapter shows, the result is that ‘privacy’ and ‘consent’ do not provide an adequate evaluative framework to model or mitigate the deleterious impact of data-driven personalisation on individuals. Secondly, consent derives its normative force from the presumption that it is necessarily autonomy-enhancing. As this chapter shows, however, data-driven personalisation has a strong derelationalising effect which erodes rather than enhances the data subject’s autonomy, calling into question the assumptions underpinning privacy-based approaches. The chapter concludes by arguing that dealing with these problems requires adopting a new, more substantive approach, which works to explicitly restrict the processes, structures, and purposes through which and for which personalisation is used.
In this chapter, I discuss the role of personalisation in a wider narrative of the development of democratic societies, in terms of digital modernity, driven by a vision of data-driven innovation over networked structures facilitating socio-environmental control. The chapter deals with narratives of how modernity plays out and is implemented by institutions and technologies, narratives that are inevitably partial and selective in what they foreground and ignore. It begins with a discussion of digital modernity, showing how data-driven personalisation is central to it, and how privacy not only loses its traditional role as a support for individuality but becomes a blocker for the technologies that will realise the digitally modern vision. The chapter develops the concept of the subjunctive world, in which individuals’ choices are replaced by what they would have chosen if only they had sufficient data and insight. Furthermore, the notions of what is harmful to the individual, and the remedies that can be applied, become detached from the individual’s lived experience and reconnected, in the policy space, to the behaviour and evolution of models of the individual and his or her environment.
A core claim of big-data-algorithm enthusiasts – producers, champions, consumers – is that big-data algorithms are able to deliver insightful and accurate predictions about human behaviour. This chapter challenges this claim. I make three contributions. First, I perform a conceptual analysis and argue that big-data analytics is by design a-theoretical and does not provide process-based explanations of human behaviour, making it unfit to support insight and deliberation that are transparent to both legal experts and non-experts. Second, I review empirical evidence from dozens of data sets suggesting that the predictive accuracy of mathematically sophisticated algorithms is not consistently higher than that of simple rules (rules that draw on available domain knowledge or observed human decision-making); rather, big-data algorithms are less accurate across a range of problems, including predicting election results and criminal profiling (the work presented here refers to understanding and predicting human behaviour in legal and regulatory contexts). Third, I synthesize the above points to conclude that simple, process-based, domain-grounded theories of human behaviour should be put forth as benchmarks that big-data algorithms, if they are to be considered tools for personalization, should match in terms of transparency and accuracy.
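Although the chapter’s argument is conceptual and empirical rather than computational, a small sketch may help make the ‘simple rules versus sophisticated algorithms’ comparison concrete. The following Python snippet is purely illustrative and is not the chapter’s own analysis: it compares a unit-weight tallying rule (which involves no fitting at all) with a logistic regression on synthetic, invented data; every variable, feature, and number in it is an assumption made here for illustration only.

```python
# Illustrative sketch only: compares a unit-weight "tallying" rule with a
# logistic regression on synthetic data. The data and cues are invented for
# illustration; this is not the chapter's own evaluation or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary cues (hypothetical yes/no indicators) and an outcome
# driven by a simple additive structure plus noise.
n, k = 500, 5
X = rng.integers(0, 2, size=(n, k))
y = ((X.sum(axis=1) + rng.normal(0, 1, n)) > k / 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Simple rule: count the cues pointing towards the outcome and predict
# "yes" when a majority do (unit weights, no parameters estimated).
tally_pred = (X_test.sum(axis=1) > k / 2).astype(int)

# More sophisticated model: logistic regression fitted to the training data.
model = LogisticRegression().fit(X_train, y_train)
model_pred = model.predict(X_test)

print("tallying rule accuracy:      ", accuracy_score(y_test, tally_pred))
print("logistic regression accuracy:", accuracy_score(y_test, model_pred))
```

On data with a simple additive structure such as this, the transparent tallying rule can perform on a par with the fitted model, which loosely illustrates the kind of benchmark comparison the chapter advocates.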
Predictive technologies are now used across the criminal justice system to inform risk-based decisions regarding bail, sentencing and parole as well as offender-management in prisons and in the community. However, public protection and risk considerations also provoke enduring concerns about ensuring proportionality in sentencing and about preventing unduly draconian, stigmatising and marginalising impacts on particular individuals and communities. If we are to take seriously the principle of individualised justice as desert in the liberal retributive sense, then we face serious (potentially intractable) difficulties in justifying any sort of role for predictive risk profiling and assessment, let alone sentencing based on automated algorithms drawing on big data analytics. In this respect, predictive technologies present us, not with genuinely new problems, but merely a more sophisticated iteration of established actuarial risk assessment (ARA) techniques. This chapter describes some of the reasons why principled and social justice objections to predictive, risk-based sentencing make any genuinely synthetic resolution or compromise so elusive. The fundamental question as regards predictive technologies, therefore, is how it might even be possible to conceive of such a thing without seriously undermining fundamental principles of justice and fairness.
There are various definitions of privacy, and for some time now, privacy harms have been characterized as intractable and ambiguous. In this chapter, I argue that regardless of how one conceptualizes privacy, the ubiquitous nature of IoT devices and the data they generate, together with corporate data business models and programs, creates significant privacy concerns for all of us. The brisk expansion of the IoT has increased “the volume, velocity, variety and value of data.”1 The IoT has made new types of data, which were never before widely available to organizations, more easily accessible. IoT devices and connected mobile apps and services observe and collect many types of data about us, including health-related and biometric data.
The IoT allows corporate entities to colonize and obtain access to traditionally private areas and activities while simultaneously reducing our public and private anonymity.