Chapter 5 examines traditional data protection law’s regulatory outcomes. It shows why data rights and rules, while desirable, don’t address the core problems of the contracts model and can’t work well without the liability model. Data rights unintentionally impose administrative burdens on the very people they protect. Mandatory rules address power asymmetries and manipulation better than defaults do. But our procedural rules overregulate while they underprotect: they entrench large players by burdening new entrants, and they allow companies to comply through box-ticking exercises. Against this backdrop, laws legitimize exploitation that can be carried out while remaining compliant. A risk-reduction approach based on standards would reduce informational exploitation.
Chapter 3 shows why the contracts model doesn’t work: consent is absent in the information economy. Privacy harm can’t be seen as a risk that people accept in exchange for a service. Inferences, relational data, and de-identified data aren’t captured by consent provisions. Consent is unattainable in the information economy more broadly because the dynamic between corporations and users is plagued with uneven knowledge, inequality, and a lack of choices. Data harms are collective and unknowable, making individual choices to reduce them impossible. Worse, privacy has a moral hazard problem: corporations have incentives to behave against our best interests, creating profitable harms after obtaining agreements. Privacy’s moral hazard leads to informational exploitation. One manifestation of valid consent in the information economy is the consent refusal. We can take such refusals seriously by treating people’s data as part of them, as their bodies are.
Chapter 2 shows the falseness of two ideas that underlie the central elements of privacy law: that people make fully rational privacy choices and that they don’t care about their privacy. These notions create a dissonance between law and reality that prevents laws from providing meaningful privacy protections. Contrary to the rationality assumption, context has an outsized impact on our privacy decisions, and we can’t understand what risks are involved in our privacy “choices,” particularly with AI inferences. The notion that we’re apathetic is prevalent in popular discourse about how much people share online and in the academic literature on “the privacy paradox.” Dismantling the myth of apathy shows there’s no privacy paradox: people simply face uncertainty and unknowable risks. People make privacy choices in a context of anti-privacy design, such as dark patterns. In this process, we’re manipulated by corporations, which are more aware of our biases than regulators are.
Chapter 1 ties together the problems of the central elements of privacy law: the individual choice-based system, the fair information principles that originated it, the view that privacy is about secrecy, and dichotomies such as public versus private. We don’t have actual choices about our data beyond mechanically agreeing to privacy policies because we lack outside options and information, such as what the choice means and what risk we’re taking on by agreeing. The choice-based approach creates a false binary of secret and open information when, in reality, privacy is a spectrum. The idea that someone, at any given time, has either total privacy or no privacy at all is unfounded. Additionally, data are bundled: you can’t reveal just one thing without letting companies infer other things. Reckoning with this reality defeats the popular “I have nothing to hide” argument, which traces back to Joseph Goebbels.
Chapter 7 proposes how the liability framework should be implemented. Harm liability can flow from a statutory standard or from local tort law. This focus allows liability to complement, rather than replicate, public enforcement. The quantum of liability should depend on the harm incurred by the victim, rather than on the wrongfulness of the perpetrator’s conduct or the consequences that the perpetrator foresaw. Privacy liability is most effective as part of a mechanism of collective redress, such as class actions. A robust notion of loss and harm can address problems of insufficient compensation and uncertainties in class certification. Because privacy problems occur at scale, regulators and courts need a framework that recognizes mass privacy effects.
Tech companies bypass privacy laws daily, creating harm for profit. The information economy is plagued with hidden harms to people’s privacy, equality, finances, reputation, mental wellbeing, and even to democracy, produced by data breaches and data-fed business models. This book explores why this happens and proposes what to do about it. Legislators, policymakers, and judges are trapped in ineffective approaches to tackling digital harms because they work with tools unfit for the unique challenges of data ecosystems that leverage AI. People are powerless against inferences about them that they can’t anticipate, interfaces that manipulate them, and digital harms they can’t escape. Adopting a cross-jurisdictional scope, this book describes how laws and regulators can and should respond to these pervasive and expanding harms. In a world where data is everywhere, one of society’s most pressing challenges is addressing the power discrepancies between the companies that profit from personal data and the people whose data produces that profit. Doing so requires creating accountability for the consequences of corporate data practices, not the practices themselves. Laws can achieve this by creating a new type of liability that recognizes the social value of privacy, uncovering the dynamics between individual and collective digital harms.
Chapter 4 delves into two efforts to reinforce consent: opt-in and informed choice. It illustrates why, in the information economy, they also fail. Power asymmetries enable systemic manipulation in the design of digital products and services. Manipulation by design thwarts improved consent provisions, interfering with people’s decision-making. People’s choices about their privacy are determined by the designs of the systems with which they interact. European and American attempts to regulate manipulation by changing tracking from ‘opt-out’ to ‘opt-in’ and by reinforcing information founder on the illusion of consent. Contract law doctrines that aim to reduce manipulation are unsuitable because they assume mutually beneficial agreements, and privacy policies are neither. Even the best efforts to strengthen meaningful consent and choice, including policies specifically intended to protect users, are ultimately insufficient because of the environment in which privacy “decisions” take place.
Chapter 6 explores a different path: building privacy law on liability. Liability for material and immaterial privacy harm would improve the protection system. To achieve meaningful liability, though, laws must compensate privacy harm itself, not just the material consequences that stem from it. Compensation for financial and physical harms produced by the collection, processing, or sharing of data is important but insufficient. The proposed liability framework would address informational exploitation by making companies internalize risk. It would deter and remedy socially detrimental data practices, rather than chasing elusive individual-control aims. Courts can distinguish harmful losses from benign ones by examining them against contextual and normative social values. By focusing on harm, privacy liability would overcome its current problems of causation quagmires and frivolous lawsuits.