In this book, I examined how public authorities’ reliance on algorithmic regulation can affect the rule of law and erode its protective role. I conceptualised this threat as algorithmic rule by law and evaluated the EU legal framework’s safeguards to counter it. In this chapter, I summarise my findings, conclude that this threat is insufficiently addressed (Section 6.1), and provide a number of recommendations (Section 6.2). Finally, I offer some closing remarks (Section 6.3). Algorithmic regulation promises simplicity and a route around the complex tensions of legal rules that remain continuously open to multiple interpretations. Yet the same promise also threatens liberal democracy today, as illiberal and authoritarian tendencies seek to eliminate plurality in favour of simplicity. The threat of algorithmic rule by law is thus the same threat that faces liberal democracy: the elimination of normative tensions by essentialising a single view. The antidote is hence to accept not only the normative tensions inherent in law but also those inherent in a pluralistic society. We should not essentialise the law’s interpretation, but embrace its normative complexity.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in a way that raises significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying as such, in this book I argue that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, this book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, argues that it is inadequate to counter this threat, and identifies new pathways forward.
In Chapter 3, I developed this book’s normative analytical framework by concretising the six principles that can be said to constitute the rule of law in the EU legal order. Drawing on this framework, in this chapter I now revisit each of these principles and carry out a systematic assessment of how public authorities’ reliance on algorithmic regulation can adversely affect them (Section 4.1). I then propose a theory of harm that conceptualises this threat, by juxtaposing the rule of law to algorithmic rule by law (Section 4.2). Finally, I summarise my findings and outline the main elements that should be considered when evaluating the aptness of the current legal framework to address this threat (Section 4.3).
The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the rule of law’s protection (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains consists of a broad range of legislation, including not only primary and secondary EU law, but also soft law. In what follows, I confine my investigation to those areas of legislation that are most relevant for the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I respectively examine safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3) and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
In this chapter, I first examine how the rule of law has been defined in legal theory, and how it has been distinguished from the rule by law, which is a distortion thereof (Section 3.1). Second, I assess how the rule of law has been conceptualised in the context of the European Union, as this book focuses primarily on the EU legal order (Section 3.2). In this regard, I also draw on the acquis of the Council of Europe. The Council of Europe is a distinct jurisdictional order, yet it heavily influenced the ‘EU’ conceptualisation of the rule of law, and the EU regularly relies on Council of Europe sources in its own legal practices. Finally, I draw on these findings to identify the rule of law’s core principles and to distil the concrete requirements that public authorities must fulfil to comply therewith (Section 3.3). Identifying these requirements – and the inherent challenges to achieve them – will subsequently allow me to build a normative analytical framework that I can use as a benchmark in Chapter 4 to assess how algorithmic regulation impacts the rule of law.
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
The use of care robots can reduce manpower demands in long-term care facilities, and care robots serve the needs of both the elders residing in such facilities and their staff. This chapter considers the following issues for care robots. First, should long-term care robots be required to meet the high standards that current regulations impose on medical devices? Second, how should standards of use be developed for care robots based on the characteristics of the robots? On this question, I note that in Japan a public–private partnership has shown success in the regulation of care robots. Third, how should we protect the privacy of elders, and of the relatives or friends in contact with care robots, given that the elderly may have reduced cognitive ability? And lastly, what legal and ethical concerns apply to the design of the interfaces between care robots and elders?
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot has the ability to form, act on, and explain its moral decisions. If such a robot falls short of the basic moral standards expected by society, labeling it as a criminal can serve criminal law’s function of censuring wrongful conduct and ease the emotional harm suffered by human victims. Moreover, imposing criminal liability on robots could have significant instrumental value in certain cases, such as in identifying culpable humans. However, this does not exempt the manufacturers, trainers, or owners of the robots from any potential criminal liability.