Artificial intelligence (AI) is becoming increasingly important in our daily lives, and so is academic research on its impact on various legal domains. One field that has attracted much attention is extra-contractual or tort liability, as AI will inevitably cause damage; consider, for example, accidents involving autonomous vehicles. In this chapter, we will discuss some major and general challenges that arise in this context. We will thereby illustrate the continued importance of national law in tackling these challenges and focus on procedural elements, including disclosure requirements and rebuttable presumptions. We will also show how existing tort law concepts are being challenged by the characteristics of AI and provide an overview of regulatory answers.
Can we develop machines that exhibit intelligent behavior? And how can we build machines that perform a task not by being explicitly programmed, but by learning from examples or experience? These are central questions in the domain of artificial intelligence. In this chapter, we introduce this domain from a technical perspective and dive deeper into machine learning and reasoning, which are essential for the development of AI.
There are several reasons to be ethically concerned about the development and use of AI. In this contribution, we focus on one specific theme of concern: moral responsibility. In particular, we consider whether the use of autonomous AI causes a responsibility gap and put forward the thesis that it does not. Our argument proceeds as follows. First, we provide some conceptual background by discussing what autonomous systems are, how the notion of responsibility can be understood, and what the responsibility gap is about. Second, we explore to what extent it could make sense to assign responsibility to artificial systems. Third, we argue that the use of autonomous systems does not necessarily lead to a responsibility gap. In the fourth and last section of this chapter, we set out why the responsibility gap – even if it were to exist – is not necessarily problematic.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates the GDPR’s scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
AI has the potential to support many of the solutions proposed to address sustainability concerns. However, AI itself is also unsustainable in many ways: its development and use are linked, for example, with high carbon emissions, discrimination based on biased training data, surveillance practices, and attempts to influence elections through microtargeting. Addressing the long-term sustainability of AI is crucial, as it impacts social, personal, and natural environments for future generations. The “sustainable” approach is one that is inclusive in both time and space, in which the past, present, and future of human societies, the planet, and the environment are considered equally important to protect and secure, including the integration of all countries in economic and social change. Furthermore, our use of the concept “sustainable” demands that we ask which practices in the current development and use of AI we want to maintain, and which we want to repair or change. This chapter explores the ethical dilemma of AI for sustainability: AI’s potential to address many sustainable development challenges must be balanced against the harm it causes to the environment and society.
This chapter discusses the interface of artificial intelligence (AI) and intellectual property (IP) law. It focuses on the protection of AI technology, the contentious qualification of AI systems as authors and/or inventors, and the question of ownership of AI-assisted and AI-generated output. The chapter also treats a number of miscellaneous topics, including the question of liability for IP infringement committed by or through the intervention of an AI system. More generally, it notes the ambivalent relationship between AI and the IP community, which appears to waver between apparent enthusiasm for the use of AI in IP practice and a clear hesitancy toward creating additional incentives in the AI sphere by amending existing IP laws.
The integration of AI into business models and workplaces has a profound impact on society, legal systems, and organizational structures. AI has also become intrinsically intertwined with the concepts of work and worker, and with the assignment of jobs, the measurement of performance, the evaluation of tasks, and decisions related to disciplinary measures or dismissals. The objective of this chapter is to provide an overview of the multifaceted aspects of AI and labor law, focusing on the profound legal questions arising from this intersection, including its implications for employment relationships, the exercise of labor rights, and social dialogue.
This chapter discusses how AI technologies permeate the media sector. It sketches the opportunities and benefits of using AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on the ethical and legal risks raised by AI-driven media applications: lack of data availability; poor data quality and bias in training datasets; lack of transparency; risks to the right to freedom of expression; threats to media freedom and pluralism online; and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework that aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
In virtually all societal domains, algorithmic systems, and AI more particularly, have made a grand entrance. Their growing impact renders it increasingly important to understand and assess the challenges and opportunities they raise – an endeavor to which this book aims to contribute. In this chapter, I start by putting the current “AI hype” into context. I emphasize the long history of human fascination with artificial beings; the fact that AI is but one of many powerful technologies that humanity has grappled with over time; and the fact that its uptake is inherently enabled by our societal condition. Subsequently, I introduce the chapters of this book, dealing with AI, ethics and philosophy (Part I); AI, law and policy (Part II); and AI across sectors (Part III). Finally, I discuss some conundrums faced by all scholars in this field, concerning the relationship between law, ethics and policy and their roles in AI governance; the juxtaposition between protection and innovation; and law’s (in)ability to regulate a continuously evolving technology. While their solutions are far from simple, I conclude there is great value in acknowledging the complexity of what is at stake and the need for more nuance in the AI governance debate.
The central aim of this book is to provide an accessible and comprehensive overview of the legal, ethical, and policy implications of AI and, more broadly, of algorithmic systems. As these technologies have a growing impact on all domains of our lives, it is increasingly important to map, understand, and assess the challenges and opportunities they raise. This requires an interdisciplinary approach, which is why this book brings together contributions from a stellar set of authors from different disciplines, with the goal of advancing the understanding of AI’s impact on society and how that impact is and should be regulated. Beyond covering theoretical insights and concepts, the book also provides practical examples of how AI systems are used in society today and the questions this raises, covering both horizontal and sectoral themes. Finally, the book offers an introduction to the various legal and policy instruments that govern AI, with a particular focus on Europe.
Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding of their meaning can be found at an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting from an exploration of Rawls’ conception of justice as fairness, we position distributive approaches to issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and of attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must be paid not only to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.
The rules of war, formally known as international humanitarian law, have been developing for centuries, reflecting society’s moral compass, the evolution of its values, and technological progress. While humanitarian law has been successful in prohibiting the use of certain methods and means of warfare, it is nevertheless destined to remain in a constant catch-up cycle with the atrocities of war. Nowadays, the widespread development and adoption of digital technologies in warfare, including AI, are leading to some of the biggest changes in human history. Is international humanitarian law up to the task of addressing the threats those technologies can present in the context of armed conflicts? This chapter provides a basic understanding of the system, principles, and internal logic of this legal domain, which is necessary to evaluate the actual or potential role of AI systems in (non-)international armed conflicts. The chapter aims to contribute to the discussion of the ex-ante regulation of AI systems used for military purposes beyond the scope of lethal autonomous weapons, as well as to recognize the potential that AI carries for improving the applicability of the basic principles of international humanitarian law, if used in an accountable and responsible way.
Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools are on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some takeaways for public administrations to consider when seeking to deploy algorithmic regulation.
Firms use algorithms for important decisions in areas ranging from pricing strategy to product design. Increased price transparency and the availability of personal data, combined with ever more sophisticated machine learning algorithms, have turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – for example, when algorithms collude autonomously – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).
The actors active in the financial world process vast amounts of information, ranging from customer data and account movements, through market trading data, to credit underwriting and money-laundering checks. It is one thing to collect and store these data, and quite another to interpret and make sense of them. AI helps with both, for example by checking databases or crawling the Internet in search of relevant information, and by sorting that information according to predefined categories or by finding its own sorting parameters. It is hence unsurprising that AI has started to fundamentally change many aspects of finance. This chapter takes AI scoring and creditworthiness assessments as an example of how AI is employed in financial services (Section 16.2), of the ethical challenges this raises (Section 16.3), and of the legal tools that attempt to adequately balance the advantages and challenges of this technique (Section 16.4). It closes with a look at scoring beyond the credit situation (Section 16.5).