To answer the question of what responsible AI means, the authors, Jaan Tallinn and Richard Ngo, propose a framework for the deployment of AI which focuses on two concepts: delegation and supervision. The framework aims at building ‘delegate AIs’ which lack goals of their own but can perform any task delegated to them. However, AIs trained with hardcoded reward functions, or even with human feedback, often learn to game their reward signal instead of accomplishing their intended tasks. Tallinn and Ngo therefore argue that it will be important to develop more advanced techniques for continuous high-quality supervision – for example, by evaluating the reasons which AIs give for their choices of actions. These supervision techniques might be made scalable by training AIs to generate reward signals for more advanced AIs. Given the current limitations of such techniques, however, Tallinn and Ngo call for caution when developing new AI: we must be aware of the risks and overcome self-interest and dangerous competitive incentives in order to avoid them.
The philosopher Wilfried Hinsch focuses on statistical discrimination by means of computational profiling. He defines statistical profiling as estimating what individuals will do on the basis of the groups to which they can be assigned. The author explores which criteria of fairness and justice are appropriate for the assessment of computational profiling. According to Hinsch, grounds of discrimination such as gender or ethnicity do not explain when or why it is wrong to discriminate. Hinsch therefore argues that discrimination constitutes a rule-guided social practice that imposes unreasonable burdens on specific people. On the one hand, he argues, statistical profiling is part of human nature and not by itself wrongful discrimination. On the other hand, even statistically correct profiles can be unacceptable for reasons of procedural fairness or substantive justice. Because of this, Hinsch suggests a fairness index for profiles to determine procedural fairness, and he argues that because AI systems do not rely on human stereotypes or on rather limited data, computational profiling may safeguard fairness better than human profiling.
The chapter aims to serve as a conceptual sketch of the intricacies involved in autonomous algorithmic collusion, including the notion of concerted practices for cases that would otherwise elude the cartel prohibition. Stefan Thomas, a law scholar, starts by assessing how algorithms can influence competition in markets before dealing with the traditional criteria for distinguishing between explicit and tacit collusion, which might reveal a potential gap in the existing legal framework regarding algorithmic collusion. Finally, he analyses whether the existing cartel prohibition can be construed in a way that captures the phenomenon appropriately. The chapter shows how enforcement paradigms that hinge on descriptions of the inner sphere and conduct of human beings may collapse when applied to the effects precipitated by independent AI-based computer agents.
In this chapter the law scholars Haksoo Ko, Sangchul Park, and Yong Lim analyse the way South Korea has been dealing with the COVID-19 pandemic and its legal consequences. Instead of enforcing strict lockdowns, South Korea imposed several other measures, such as a robust AI-based contact tracing scheme. The chapter provides an overview of the legal framework and the technology which allowed South Korea to employ its technology-based contact tracing scheme. Additionally, the authors describe the information system South Korea implemented, as well as the actual use of the data. The authors argue that South Korea has a rather stringent data-protection regime, which proved to be the biggest hurdle in implementing the contact tracing scheme. However, after its bruising encounter with the Middle East Respiratory Syndrome (MERS) in 2015, the country had introduced a separate legal framework for extensive contact tracing, which was reactivated and provided government agencies with extensive authority to process personal data for epidemiological purposes. The AI-based technology built in the process of creating smart cities also proved handy, as it was repurposed for contact tracing purposes.
The law scholar Dustin Lewis explores the requirements of international law with regard to the employment of AI-related tools and techniques in armed conflict. The scope of this chapter is not limited to lethal autonomous weapons systems (LAWS) but also encompasses other AI-related tools and techniques related to warfighting, detention, and humanitarian services. After providing an overview of international law applicable to armed conflict, the author outlines some preconditions necessary to respect international law. According to Lewis, current international law essentially presupposes humans – and not artificial, non-human entities – as legal agents. From that premise, the author argues that any employment of AI-related tools or techniques in an armed conflict needs to be susceptible to being administered, discerned, attributed, understood, and assessed by human agents.
In the past decade, Artificial Intelligence (AI) as a general-purpose tool has become a disruptive force globally. By leveraging the power of artificial neural networks, deep learning frameworks can now translate text between hundreds of languages, enable real-time navigation for everyone, recognise pathologies in medical images, and enable many other applications across all sectors of society. However, the enormous potential for innovation and technological advances, and the opportunities that AI systems provide, come with hazards and risks that are not yet fully explored, let alone fully understood. One can stress the opportunities of AI systems to improve healthcare, especially in times of a pandemic, provide automated mobility, support the protection of the environment, protect our security, and otherwise support human welfare. Nevertheless, we must not neglect that AI systems can pose risks to individuals and societies, for example by disseminating biases, by undermining political deliberation, or by enabling the development of autonomous weapons. This means that there is an urgent need for responsible governance of AI systems. This Handbook is intended to provide a basis for spelling out in more detail what could become relevant features of Responsible AI and how we can achieve and implement them at the regional, national, and international level. Hence, the aim of this Handbook is to address some of the most pressing philosophical, ethical, legal, and societal challenges posed by AI.
In this chapter, the philosophers Oliver Mueller and Boris Essmann address AI-supported neurotechnology, especially Brain–Computer Interfaces (BCIs) that may in the future supplement and restore functioning in agency-limited individuals or even augment or enhance capacities for natural agency. The authors propose a normative framework for the evaluation of neurotechnological and AI-assisted agency based on ‘cyberbilities’. These are capabilities that emerge from human–machine interactions in which agency is distributed across human and artificial elements. The authors conclude by providing a list of cyberbilities that is meant to support the well-being of individuals.
The chapter by the philosopher Catrin Misselhorn provides an overview of the most central debates in artificial morality and machine ethics. Artificial moral agents are AI systems which are able to recognise the morally relevant aspects of a situation and take them into account in their decisions and actions. Misselhorn shows that artificial morality is not just a matter of science fiction scenarios but rather an issue that has to be considered today. She lays the conceptual foundations of artificial morality and discusses the ethical issues that arise. She addresses questions such as: Which morality should be part of an AI system? Can AI systems be aligned with human morality, or do they need a machine-specific morality? Are there decisions which should never be transferred to machines? Could artificial morality affect human morality if it becomes more pervasive? These and other questions relating to AI are discussed and answered.
In this chapter, the law scholar Ralf Poscher sets out to show how AI challenges the traditional understanding of the right to data protection and presents an outline of an alternative conception that better deals with emerging AI technologies. Firstly, Poscher explains how the traditional conceptualisation of data protection as an independent fundamental right collides with AI’s technological development, given that AI systems do not provide the kind of transparency required by the traditional approach. Secondly, the author proposes an alternative model, a ‘no-right thesis’, which shifts the focus from data protection as an independent right to other existing fundamental rights, such as liberty and equality. He argues that this allows us to step back from the idea that each and every instance of personal data processing concerns a fundamental right. Instead, it is important to assess how an AI system ‘behaves’, what type of risks it generates, and which substantive fundamental rights are affected.
In this chapter, the law scholar Jan von Hein analyses and evaluates the European Parliament’s proposal on a civil liability regime for artificial intelligence against the background of the already existing European regulatory framework on private international law, in particular the Rome I and II Regulations. The draft regulation (DR) proposed by the European Parliament is noteworthy from a private international law perspective because it introduces new conflicts rules for AI. In this regard, the proposed regulation distinguishes between a rule delineating the spatial scope of its autonomous rules on strict liability for high-risk AI systems (Article 2 DR) on the one hand, and a rule on the law applicable to fault-based liability for low-risk systems (Article 9 DR) on the other hand. The latter rule refers to the domestic laws of the Member State in which the harm or damage occurred. In sum, compared with Rome II, the conflicts approach of the draft regulation would be a regrettable step backwards in many ways.
The law scholars Weixing Shen and Yun Liu focus on China’s efforts in the field of AI regulation and spell out recent legislative actions. While there is no unified AI law in China today, many provisions of Chinese data protection law are in part applicable to AI systems. The authors particularly analyse the rights and obligations arising from the Chinese Data Security Law, the Chinese Civil Code, the E-Commerce Law, and the Personal Information Protection Law, and explain the relevance of these regulations with regard to responsible AI and algorithm governance. The authors also introduce the Draft Regulation on Internet Information Service Based on Algorithm Recommendation Technology, which adopts many AI-specific principles such as transparency, fairness, and reasonableness. Regarding the widely discussed field of facial recognition by AI systems, they introduce a Draft Regulation and a judicial Opinion by the Supreme People’s Court of China. Finally, Weixing Shen and Yun Liu refer to the AI Act proposed by the European Commission, which could also inspire future Chinese regulatory approaches.
In this chapter, the law scholar Christine Wendehorst analyses the different potential risks posed by AI in two main categories: safety risks and fundamental rights risks. Based on this, the author considers why AI challenges existing liability regimes. She spells out the main solutions put forward so far and evaluates them. The chapter highlights the fact that liability for fundamental rights risks is largely uncharted while being AI-specific. Such risks are now being addressed at the level of AI safety law, by way of prohibiting certain AI practices and by imposing strict legal requirements concerning data governance, transparency, and human oversight. Wendehorst nevertheless argues that a number of changes have to be made if the emerging AI safety regime is to serve as a ‘backbone’ for a future AI liability regime that helps address liability for fundamental rights risks. As a result, she suggests that further negotiations on the AI Act proposed by the European Commission should be closely aligned with the preparatory work on a future AI liability regime.
In this chapter, the philosopher Thomas Metzinger lists five main problem domains related to AI systems. For each problem field, he proposes several measures which should be taken. Firstly, there should be worldwide safety standards concerning the research and development of AI; otherwise, Metzinger fears a ‘race to the bottom’ in safety standards. Secondly, a possible AI arms race must be prevented as early as possible. Thirdly, he stresses that any creation of artificial consciousness should be avoided, as it is highly problematic from an ethical point of view. He argues that synthetic phenomenology could lead to non-biological forms of suffering and, since AI can be copied rapidly, might lead to a vast increase of suffering in the universe. While AI might improve different kinds of governance, there is the risk of unknown risks, the ‘unknown unknowns’. Accordingly, as a fourth problem domain, the author proposes allocating resources to researching and preparing for unexpected and long-term risks. Finally, Metzinger highlights the need for a concrete code of ethical conduct for anyone researching AI.