Could robots be recognized as legal persons? Should they? Much of the discussion of these topics is distorted by fictional representations of what form true artificial intelligence (AI) might take – in particular that it would be of human-level intellect and embodied in humanoid form. Such robots are the focus of this volume, with the possibility that external appearance and its echoes in science fiction may shape debate over their ‘rights’. Most legal systems would be able to grant some form of personality, yet early considerations of whether they should do so conflate two discrete rationales. The first is instrumental, analogous to the economic reasons why corporations are granted personality. The second is inherent, linked to the manner in which human personality is recognized. Neither is sufficient to justify legal personality for robots today. A third reason, which may become more pressing in the medium term, is tied to the possibility of AI systems that far surpass humans in terms of ability. In the event that such entities are created, the question may shift from whether we recognize them under the law, to whether they recognize us.
The answers that each political community finds to the law reform questions posed by AI may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction – indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, but divergent national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy, and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as from moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of states. To co-ordinate those activities and enforce global ‘red lines’, this chapter posits a hypothetical International Artificial Intelligence Agency (IAIA), modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponization and other harmful effects.
The increasing autonomy of AI systems is exposing gaps in regulatory regimes that assume the centrality of human actors. Yet surprisingly little attention is given to what is meant by ‘autonomy’ and its relationship to those gaps. Driverless vehicles and autonomous weapon systems are the most widely studied examples, but related issues arise in algorithms that allocate resources or determine eligibility for programmes in the private or public sector. This chapter develops a novel typology that distinguishes three lenses through which to view the regulatory issues raised by autonomy: the practical difficulties of managing risk associated with new technologies, the morality of certain functions being undertaken by machines at all, and the legitimacy gap that is created when public authorities delegate their powers to algorithms.
The rule of law is the epitome of anthropocentrism: humans are the primary subject and object of norms that are created, interpreted, and enforced by humans – made manifest in government of the people, by the people, for the people. Though legal constructs such as corporations may have rights and obligations, these in turn are traceable back to human agency in their acts of creation, their daily conduct overseen to varying degrees by human agents. Even international law, which governs relations among states, begins its foundational text with the words ‘We the peoples…’. The emergence of fast, autonomous, and opaque AI systems forces us to question this assumption of our own centrality, though it is not yet time to relinquish it.
As AI systems operate with greater autonomy, the idea that they might themselves be held responsible has gained credence. On its face, the idea of giving those systems a form of independent legal personality may seem attractive. Yet this chapter argues that this is both too simple and too complex. It is simplistic in that it lumps a wide range of technologies together in a single, ill-suited legal category; it is overly complex in that it implicitly or explicitly embraces the anthropomorphic fallacy that AI systems will eventually assume full legal personality in the manner of the ‘robot consciousness’ arguments mentioned earlier in the book. Though the emergence of general AI is a conceivable future scenario – and one worth taking precautions against – it is not a sound basis for regulation today.
This chapter turns to the possibility that the AI systems challenging the legal order may also offer at least part of the solution. Here China, which has among the least developed rules to regulate conduct by AI systems, is at the forefront of using that same technology in the courtroom. This is a double-edged sword, however, as its use implies a view of law that is instrumental, with parties to proceedings treated as means rather than ends. That, in turn, raises fundamental questions about the nature of law and authority: at base, whether law is reducible to code that can optimize the human condition, or if it must remain a site of contestation, of politics, and inextricably linked to institutions that are themselves accountable to a public. For many of the questions raised, the rational answer will be sufficient; but for others, what the answer is may be less important than how and why it was reached, and whom an affected population can hold to account for its consequences.
Transparency has been embraced as a means of limiting the risks associated with AI. This chapter considers the manner in which transparency and the related concept of ‘explainability’ are being elaborated, notably the ‘right to explanation’ in the European Union and a move towards explainable AI (XAI) among developers. These are more promising than the arguments for legal personality, but the limits of transparency are already beginning to show as AI systems demonstrate abilities that even their programmers struggle to understand. That is leading regulators to cede ground and settle for explanations of adverse decisions rather than transparency of decision-making processes themselves. Such a backward-looking approach relies on individuals knowing that they have been harmed – which will not always be the case – and should be supplemented with forward-looking mechanisms like impact assessments, audits, and an ombudsperson.
As computer programs become ever more complex, the ability of non-specialists to understand them diminishes. Opacity may also be built into programs by companies seeking to protect proprietary interests. Both kinds of system are capable of being explained, albeit with recourse to experts or an order to reveal their internal workings. Yet a third kind of system may be naturally opaque: some machine learning techniques are difficult or impossible to explain in a manner that humans can comprehend. This raises concerns when the process by which a decision is made is as important as the decision itself. For example, a sentencing algorithm might produce a ‘just’ outcome for a class of convicted persons. Unless the justness of that outcome for an individual defendant can be explained in court, however, it is, quite rightly, subject to legal challenge. Separate concerns are raised by the prospect that AI systems may mask or reify discriminatory practices or outcomes.
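To make the distinction between explainable and naturally opaque systems concrete, the sketch below illustrates one common post-hoc technique – a ‘surrogate model’ – in which an interpretable decision tree is trained to mimic the predictions of an opaque ensemble classifier. It is a minimal illustration only, assuming Python with scikit-learn and wholly synthetic data; the feature names are hypothetical and do not correspond to any real sentencing or risk-assessment tool.

    # Minimal sketch of a post-hoc 'surrogate model' explanation.
    # Assumes Python with scikit-learn; data and feature names are synthetic.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for a risk-assessment dataset (hypothetical features).
    X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
    feature_names = ['prior_convictions', 'age', 'employment',
                     'education', 'residence_stability']

    # The 'naturally opaque' model: hundreds of trees, no single
    # human-readable rule governs any individual decision.
    black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

    # The surrogate: a shallow tree trained on the black box's *predictions*
    # (not the ground truth), yielding an approximate, human-readable account.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how often the surrogate agrees with the black box. The
    # 'explanation' is trustworthy only to the extent of this agreement.
    fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
    print(f'Surrogate fidelity to black box: {fidelity:.2%}')
    print(export_text(surrogate, feature_names=feature_names))

The design choice matters for the argument above: a surrogate approximates rather than reveals the decision-making process, so an explanation of an adverse decision produced this way is a reconstruction after the fact – precisely the backward-looking posture that, as the preceding chapter summary suggests, should be supplemented by forward-looking mechanisms of oversight.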