Chapter 3 shows why the contracts model doesn’t work: consent is absent in the information economy. Privacy harm can’t be seen as a risk that people accept in exchange for a service. Inferences, relational data, and de-identified data aren’t captured by consent provisions. Consent is unattainable in the information economy more broadly because the dynamic between corporations and users is plagued by uneven knowledge, inequality, and a lack of choices. Data harms are collective and unknowable, making individual choices to reduce them impossible. Worse, privacy has a moral hazard problem: corporations have incentives to behave against our best interests, creating profitable harms after obtaining agreements. Privacy’s moral hazard leads to informational exploitation. One manifestation of valid consent in the information economy is the consent refusal. We can take such refusals seriously by treating people’s data as part of them, just as their bodies are.
Social welfare has long been a priority area for digitisation and, more recently, for ADM. Digitisation and ADM can either advance or threaten the socio-economic rights of the marginalised. Current Australian examples include the roll-out of online and app-based client interfaces and compliance technologies in Centrelink. Others include work within the National Disability Insurance Scheme (NDIS) on the development of virtual assistants and the use of AI to leverage existing data sets to aid or displace human decision-making. Drawing on these examples and other recent experience, this chapter reviews the adequacy of traditional processes of public policy development, public administration, and legal regulation/redress in advancing and protecting the socio-economic rights of the marginalised in the rapidly emerging automated welfare state. It is argued that protections are needed against the power of ADM to collapse program design choices so that outliers, individualisation, complexity, and discretion are excluded or undervalued. It is suggested that innovative new processes may be needed, such as genuine co-design and collaborative fine-tuning of ADM initiatives, new approaches to (re)building citizen trust and empathy in an automated welfare state, and creative new ways of ensuring both equal protection of the socio-economic rights of the marginalised in social services and responsiveness to user interests.
Chapter 2 shows the falseness of two ideas that underlie the central elements of privacy law: that people make fully rational privacy choices and that they don’t care about their privacy. These notions create a dissonance between law and reality, which prevents laws from providing meaningful privacy protections. Contrary to the rationality assumption, context has an outsized impact on our privacy decisions, and we can’t understand what risks are involved in our privacy “choices,” particularly with AI inferences. The notion that we’re apathetic is prevalent in popular discourse about how much people share online and in the academic literature on “the privacy paradox.” Dismantling the myth of apathy shows there’s no privacy paradox: people simply face uncertainty and unknowable risks. People make privacy choices in a context of anti-privacy design, such as dark patterns. In this process, we’re manipulated by corporations, who are more aware of our biases than regulators are.
Chapter 1 ties together the problems with the central elements of privacy law: the individual choice-based system, the fair information principles that originated it, the view that privacy is about secrecy, and dichotomies such as public versus private. We don’t have actual choices about our data beyond mechanically agreeing to privacy policies because we lack outside options and information, such as what the choice means and what risk we’re taking on by agreeing. The choice-based approach creates a false binary of secret and open information when, in reality, privacy is a spectrum. The idea that someone, at any given time, has either total privacy or no privacy at all is unfounded. Additionally, data are bundled: you can’t reveal just one thing without letting companies infer other things. Reckoning with this reality defeats the popular “I have nothing to hide” argument, which traces back to Joseph Goebbels.
There is a broad consensus that human supervision holds the key to sound automated decision-making: if a decision-making policy uses the predictive outputs of a statistical algorithm, but those outputs form only part of a decision that is made ultimately by a human actor, use of those outputs will not (per se) fall foul of the requirements for due process in public and private decision-making. Thus, the focus in academic and judicial spheres has been on making sure that humans are equipped and willing to wield this ultimate decision-making power. Yet, proprietary software obscures the reasons for any given prediction; this is true both for machine learning and deterministic algorithms. And without these reasons, the decision-maker cannot accord appropriate weight to that prediction in their reasoning process. Thus, a policy of using opaque statistical software to make decisions about how to treat others is unjustified, however involved humans are along the way.
This chapter closes Part 1 by analysing how the opacity surrounding the use of AI and ADM tools by financial corporations is enabled, and even encouraged, by the law. As other chapters in the book demonstrate, such opacity brings about significant risks to fundamental rights, consumer rights, and the rule of law. Analysing examples from jurisdictions including the US, UK, EU, and Australia, Bednarz and Przhedetsky unpack how financial entities often rely on rules and market practices protecting corporate secrecy, such as complex credit scoring systems, proprietary rights to AI models and data, and the carve-out of ‘non-personal’ information from data and privacy protection laws. The authors then focus on the rules incentivising the use of AI and ADM tools by financial entities, showing how they provide a shield behind which corporations can hide their consumer scoring and rating practices. The authors also explore potential regulatory solutions that could break this opacity and ensure transparency, introduce direct accountability and scrutiny of ADM and AI tools, and reduce the control of financial corporations over people’s data.
Chapter 7 proposes how the liability framework should be implemented. Harm liability can flow from a statutory standard or from local tort law. This focus allows liability to complement, rather than replicate, public enforcement. The quantum of liability should depend on the harm incurred by the victim, rather than on the wrongfulness of the perpetrator’s conduct or the consequences the perpetrator foresaw. Privacy liability is most effective as part of a mechanism of collective redress, such as class actions. A robust notion of loss and harm can address problems of insufficient compensation and uncertainties in class certification. Because privacy problems occur at scale, regulators and courts need a framework that recognizes mass privacy effects.
Chapter 7 analyses the legal challenges that the incorporation of AI systems in the Automated State will bring. The starting point is that legal systems have coped relatively well so far with the use of computers by public authorities. The critical disruption of the Automated State predicted by Robert McBride in 1967 has not materialised and, therefore, we have not been forced to substantively rethink the adequacy of how administrative law deals with machines. However, the incorporation of AI in automation may be that disruption. In this chapter, Bello y Villarino offers a counterpoint to those who believe that existing principles and rules can be easily adapted to address the use of AI in the public sector. He discusses the distinct elements of AI through an exploration of the dual role of public authorities: a state that executes policy and a state that designs policy. The use of AI systems in these two contexts is of a different regulatory order. Until now, there has been an assumption that policy design should be allowed a broad margin of discretion, especially when compared to the state as an executor of policies and rules. Yet the automation of policy design will require public authorities to make explicit decisions about objectives, boundary conditions, and preferences. Discretion for humans can remain, but AI systems analysing policy choices may suggest that certain options are superior to others. This could justify employing different legal lenses to approach the regulation of automated decision-making and decision-support systems used by the State. The reasoning, to some extent, could also be extrapolated to Automated Banks. Each perspective is analysed in reference to the activity of modern states. The main argument is that the AI-driven Automated State is not suited to the one-size-fits-all approach often claimed to apply to administrative law. The final part of the chapter explores some heuristics that could facilitate the regulatory transition.
Artificial intelligence (AI) and automated decision-making (ADM) tools promise money and unmatched power to banks and governments alike. The promise is that they will know everything about their citizens and customers and will also be able to predict their behaviour, preferences, and opinions. Global consulting firm McKinsey estimates that AI technologies will unlock $1 trillion in additional value for the global banking industry every year.1 Governments around the world are getting on the AI bandwagon, expecting increased efficiency, reduced costs, and better insights into their populations.
Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people’s privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to tackling digital harms because they work with tools unfit to deal with the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can’t anticipate, interfaces that manipulate them, and digital harms they can’t escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society’s most pressing challenges is addressing power discrepancies between the companies that profit from personal data and the people whose data produces that profit. Doing so requires creating accountability for the consequences of corporate data practices, not the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering the dynamics between individual and collective digital harms.
The potential of AI solutions to enhance effective decision-making, reduce costs, personalise offers and products, and improve risk management has not gone unnoticed by the financial industry. On the contrary, the characteristics of AI systems seem perfectly suited to the features of financial services and to addressing their most distinctive and challenging needs. Thus, the financial industry provides a receptive and conducive environment for the growing application of AI solutions in a variety of tasks, activities, and decision-making processes. The aim of this chapter is to examine the current state of the legal regime applicable in the European Union to the use of AI systems in the financial sector and to reflect on the need to formulate principles and rules that ensure responsible automation of decision-making and that serve as a guide for widely and extensively implementing AI solutions in banking activity.
This chapter offers a synthesis of the role the law has to play in Automated States. Arguing for a new research and regulatory agenda on AI and ADM beyond the artificial ‘public’ and ‘private’ divide, it seeks to identify the new approaches and safeguards necessary to make AI companies and the Automated States accountable to their customers, citizens, and communities. I argue that emphasis on procedural safeguards alone – or what I call procedural fetishism – is not enough to counter the unprecedented levels of AI power in the Automated States. Only by shifting our perspective from procedural to substantive can we search for new ways to regulate the future in the Automated States. The chapter concludes the collection with an elaboration of what more substantive regulation should look like: creating a global instrument on data privacy; redistributing wealth and power by breaking up and taxing AI companies, increasing public scrutiny, and adopting prohibitive laws; and democratizing AI companies by making them public utilities and giving people a say in how these companies should be governed. Crucially, we must also decolonize future AI regulation by recognizing colonial practices of extraction and exploitation and by paying attention to the voices of Indigenous peoples and communities of the so-called Global South. With all these mutually reinforcing efforts, the new AI regulation will debunk the corporate and state agenda of procedural fetishism and establish a new social contract in the age of AI.