If law is to promote justice and welfare, it must respond to changes in society. In much the same way, the tools that government uses to make, implement, and enforce laws must also adapt, both to societal changes and to changes in technology. In this spirit, governments around the world increasingly look to the promise of one of the newest technological innovations made possible by modern computing power: machine-learning algorithms.
Technological advances continue to produce massive amounts of information from a variety of sources about our everyday lives. The simple use of a smartphone, for example, can generate data on individuals through telephone records (including location data), social media activity, Internet browsing, e-commerce transactions, and email communications. Much attention has been given to expectations of privacy in light of this data collection, especially consumer privacy. Much attention has also been given to how and when government agencies collect and use this data to monitor the activities of individuals.
Software-related inventions have had an uneasy relationship with the patent-eligible subject matter requirement of Section 101 of the Patent Act. In applying the requirement, the Supreme Court has historically characterized mathematical algorithms and formulas simpliciter as sufficiently analogous to laws of nature to warrant judicial exclusion as abstract ideas. The Court has also found “the mere recitation of a generic computer” in a patent claim tantamount to “adding the words ‘apply it with a computer,’” a mere drafting effort that does not relieve “the pre-emption concern that undergirds our § 101 jurisprudence.” Lower courts, patent counsel, and commentators have struggled to apply these broad principles to specific software-related inventions, a difficulty largely rooted in the many forms and levels of abstraction in which mathematical algorithms can be situated, both in the computing context and in the terms of a patent claim. Consequently, widely varying approaches to claiming inventions that employ algorithms have perennially complicated efforts to develop a coherent doctrine of unpatentable abstract ideas.
The development of a policy framework for the sustainable and ethical use of artificial intelligence (AI) techniques has gradually become one of the top policy priorities in developed countries as well as in the international context, including the G7 and G20 and work within the Organisation for Economic Co-operation and Development (OECD), the World Economic Forum, and the International Telecommunication Union. Interestingly, this mounting debate has taken place with very little attention to the definition of what AI is, its phenomenology in the real world, and its expected evolution. Politicians evoke the imminent takeover of smart autonomous robots; entrepreneurs announce the end of mankind, or the achievement of immortality through brain upload; and academics fight over the prospects of Artificial General Intelligence, which appears inevitable to some and preposterous to others. In all this turmoil, governments have developed the belief that, as both Vladimir Putin and Xi Jinping recently put it, the country that leads in AI will, as a consequence, come to dominate the world. As AI rises in the ranking of top government priorities, a digital arms race has also emerged, in particular between the United States and China. This race bears far-reaching consequences when it comes to earmarking funds for research, innovation, and investment in AI technologies: gradually, AI becomes an end rather than a means, and military and domestic security applications are given priority over civilian use cases, which may contribute more extensively to social and environmental sustainability. As of today, one could argue that the top priority in US AI policy is countering the rise of China, and vice versa.1
Rapid, recent technological change has brought forward a new form of “algorithmic competition.” Firms can and do draw on supercharged connectivity, mass data collection, algorithmic processing, and automated pricing to engage in what can be called “robo-selling.” But algorithmic competition can also produce results that harm consumers. Notably, robo-selling may make anticompetitive collusion more likely, all things being equal. Additionally, new forms of algorithmic price discrimination may also cause consumers to suffer. There are no easy solutions, particularly because algorithmic competition also promises significant benefits to consumers. As a result, this chapter sets forth some necessarily tentative approaches to each of these issues to address the changes that algorithmic competition is likely to bring.
This chapter addresses whether and when content generated by an algorithm should be considered “expression” deserving of legal protection. Free expression has been described as “the matrix, the indispensable condition, of nearly every other form of freedom.”1 It receives extensive protection in many countries through legislation, constitutional rights, and the common law.2 Despite its deep roots, however, freedom of expression has unsettled boundaries. At their cutting edge lies the problem of “speech” produced by algorithms, a phenomenon that challenges traditional accounts of freedom of expression and impacts the balance of power between producers and consumers of algorithmically generated content.3
As technology continues to advance, and specifically as algorithms proliferate throughout society, the law is increasingly confronted with the task of determining who is responsible when property is damaged or people are harmed. From a historical perspective, the Industrial Revolution resulted in machines that were able to automate tasks previously performed manually by humans. However, despite the superiority of these early automated machines, their use could cause physical damage due to, for example, machine malfunction, poor machine design, or misuse by their users. The legal framework traditionally applied to machine-induced damages comprises two doctrines: general negligence and product liability. In this chapter, I focus primarily on how the reasonable person standard applies to actors in the context of algorithm-based entities.
A body of law is currently being developed in response to algorithms that are designed to control increasingly smart machines or to replace humans in the decision-making loops of systems, and to account for the actions of algorithms whose decisions affect people’s legal rights. Such algorithms are ubiquitous: they guide commercial transactions, evaluate credit and housing applications, assist courts in the criminal justice system, and control self-driving cars and robotic surgeons. However, while the automation of decisions typically made by humans has resulted in numerous benefits to society, the use of algorithms has also challenged established areas of law. For example, algorithms may exhibit the same human biases in decision-making that have led to violations of people’s constitutional rights, and algorithms may collectively collude to fix prices, thus violating antitrust law.
In historical context, the design and use of algorithms pre-date the current proliferation of algorithms throughout society. For example, automated machines and tools have for centuries assisted humans in performing physical tasks, a development that culminated in the Industrial Revolution. Drills, engravers, weaving machines, and the like employed machines’ physical advantages to free human beings from repetitive physical labor. In a similar manner, algorithms are now taking on decision-making and supervisory control in systems that require human cognitive skills.
In recent years, algorithms have been incorporated into practically every aspect of our lives. They have come to determine whether you will be approved for a mortgage and at what interest rate, how much you will pay for insurance, the likelihood that you will commit a crime, the terms of your sentencing, and the number of police patrols in your neighborhood. It is therefore difficult to imagine a more important consideration than the extent to which we are comfortable with algorithms making these decisions. As their presence in our daily lives grows, attention must be paid to their influence on society. Without these conversations, we will increasingly rely on this technology, and it will become more difficult to disentangle the legal and ethical pitfalls from systems that have become necessary in our daily lives.