The IoT raises several questions germane to traditional products liability law and the UCC’s warranty provisions. These include how best to evaluate and remedy consumer harms related to insecure devices, malfunctioning devices, and the termination of services and software integral to a device’s operations. Consider that the modern IoT vehicle with an infotainment system generates massive quantities of data about drivers, and that mobile applications can be used to affect the operation of these vehicles.
Over recent years, economists, lawyers and regulators have become increasingly interested in the role played by ‘network effects’ in the digital economy: namely, the phenomenon whereby a platform becomes increasingly valuable to its users, the more users it succeeds in recruiting. Whether through user-generated content on YouTube and Facebook, proprietary messaging services such as WhatsApp, or two-sided markets such as Uber and Airbnb, it is now widely recognised that many of today’s most successful technology businesses enjoy a dominance based upon achieving a critical mass of users, which makes it near-impossible for less well-used platforms to compete. What is less widely recognised is that data-driven personalisation operates in a comparable (albeit not identical) manner: as the volume of users increases, personalisation becomes ever more sophisticated, generating a ‘second-order’ network effect that can also have significant implications for the viability of competition. This paper unpacks the distinction between first-order and second-order network effects, showing how both can create significant barriers to competition. It analyses what second-order network effects imply for how governments can and should regulate data-driven personalisation, and how states might help their citizens to regain control over the value that they create.
In 2015, the US Senate passed a resolution recommending the adoption of a national strategy for IoT development (IoT Resolution).1 Currently, the proposed Developing Innovation and Growing the Internet of Things Act (DIGIT) would establish a federal working group and a steering committee within the Department of Commerce.2 If the act is adopted, the working group, under the guidance of the steering committee, would be charged with evaluating and providing a report containing recommendations to Congress on multiple aspects of the IoT.3 These include identifying federal statutes and regulations that could inhibit IoT growth and impact consumer privacy and security.4
The argument set out in this chapter is that personalisation technologies are fundamentally inimical to the way we have built our legal and political traditions: the building blocks, or the raw materials if you will, that make up the sources of the ‘self’. Advances in the use of personalisation technologies, and their implications for how we understand our political and social lives through law (constitutionalism), hinge on the importance of language and on the risks posed by personalisation technologies to the building of personality and forms of social solidarity. This chapter explores the centrality of language to agency – how this relationship builds our legal and political traditions – and the risks posed by personalisation technologies.
Privacy and information security are distinct but related fields.1 Security focuses on questions surrounding the extent to which related products, systems, and processes can effectively defend against “attacks on confidentiality, integrity and availability of code and information.”2 The field of information security often involves inquiries about the legal consequences of security failures.3 In 2018, The Economist reported that “more than ninety percent of the world’s data appeared in just the past two years.”4 In the last decade there have been multiple large-scale data breaches and inadvertent data exposures that have resulted in the disclosure of the data of millions of people.
We now live in a world where we can obtain current information about a global pandemic from our smartphones and Internet of Things (IoT) devices.1 The recent novel coronavirus (COVID-19) outbreak is not just a public health emergency. The pandemic has forced us to further evaluate the extent to which privacy should give way to public health threats and resulting technological innovations.2 It directly raises questions about whether legal frameworks governing our privacy should be relaxed to address public health concerns, and whether any such relaxation will continue post-pandemic, permanently undermining our privacy.3
As we have seen, the law wields considerable influence over the rights and remedies available to us as consumers. Several areas of commercial law are ill-equipped to sufficiently protect our consumer interests in the IoT age. This is because various legal frameworks governing commercial practices have not been sufficiently reformulated to account for the growing connections between the world of privacy and the world of commercial law. As earlier sections of this book have demonstrated, there are multiple legal frameworks impacting commercial practices at the federal and state level that are ripe for significant legal reform. These sources of law include contract law, the FAA, products liability law, the CDA, debt collection law, the Bankruptcy Code, and secured financing laws.
Personalisation can provide notable efficiencies and economic gains, but it can also produce unintended negative effects. Most accounts focus on potential negative impacts on individuals or categories of individuals and not on the broader consequences or ripple effects of incorporating AI into existing social systems. This chapter explores such issues via an ‘AI ethics’ perspective, the dominant overarching discourse for ‘regulating’ AI for the good of society, commonly characterised as self-policing of AI system use by private corporate actors, sanctioned by government. The discussion critiques that self-policing by locating AI ethics within established traditions of corporate social responsibility and institutional ethical frameworks whose shortcomings translate into a systemic inability to be truly Other-regarding. It shows, referencing the recent EU AI ethics initiative, that even well-intentioned initiatives may miss their target by assuming the desirability of AI applications, regardless of their wider impacts. This approach simply tinkers with system details of minor consequence compared to the broader impacts of AI within social systems, captured by the idea of ‘algorithmic assemblage’.
Credit-score models provide one of the many contexts through which the big data micro-segmentation or ‘personalisation’ phenomenon can be analysed and critiqued. This chapter approaches the issue through the lens of anti-discrimination law, and in particular the concept of indirect discrimination. The argument presented is that, despite its initial promise based on its focus on impact, ‘indirect discrimination’ is after all unlikely to deliver a mechanism to intervene and curb the excesses of the personalised service model. The reason for its failure does not lie in its inherent weaknesses but rather in the ‘shortcomings’ (entrenched biases) of empirical reality itself, which any ‘accurate’ (or useful) statistical analysis cannot but reflect. Still, the anti-discrimination context offers insights that are valuable beyond its own disciplinary boundaries. For example, the opportunities for oversight and review based on correlations within outputs rather than analysis of inputs are fundamentally at odds with the current trend that demands greater transparency of AI, but may after all be more practical and realistic considering the ‘natural’ opacity of learning algorithms and businesses’ ‘natural’ secrecy. The credit risk score context also provides a low-key yet powerful illustration of the oppressive potential of a world in which individual behaviour from ANY sphere or domain may be used for ANY purpose; where a bank, insurance company, employer, health care provider, or indeed any government authority can tap into our social DNA to pre-judge us, should it be considered appropriate and necessary for their manifold objectives.
This is the introductory chapter to the edited collection ‘Data-Driven Personalisation in Markets, Politics and Law’ (Cambridge University Press, 2021), which explores the emergent pervasive phenomenon of algorithmic prediction of human preferences, responses and likely behaviours in numerous social domains – ranging from personalised advertising and political microtargeting to precision medicine, personalised pricing and predictive policing and sentencing. This chapter reflects on such human-focused use of predictive technology, first, by situating it within a general framework of profiling and defending data-driven individual and group profiling against some critiques of stereotyping, on the basis that our cognition of the external environment is necessarily reliant on relevant abstractions or non-universal generalisations. The second set of reflections centres around the philosophical tradition of empiricism as a basis of knowledge or truth production, and uses this tradition to critique data-driven profiling and personalisation practices in their numerous manifestations.
An online seller or platform is technically able to offer every consumer a different price for the same product, based on the information it has about that customer. Such online price discrimination exacerbates concerns regarding the fairness and morality of price discrimination, and the possible need for regulation. In this chapter, we discuss the underlying basis of price discrimination in economic theory and its popular perception. Our surveys show that consumers are critical and suspicious of online price discrimination. A majority consider it unacceptable and unfair, and are in favour of a ban. When stores apply online price discrimination, most consumers think they should be informed about it. We argue that the General Data Protection Regulation (GDPR) applies to the most controversial forms of online price discrimination, and that it not only requires companies to disclose their use of price discrimination but also requires them to ask customers for their prior consent. Industry practice, however, shows no adoption of either principle.