This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so we suggest that “AI philosophy” provides a new method for philosophy.
The availability of data is a condition for the development of AI. This is no different in the context of healthcare-related AI applications. Healthcare data are required in the research, development, and follow-up phases of AI. In fact, data collection is also necessary to establish evidence of compliance with legislation. Several legislative instruments, such as the Medical Devices Regulation and the AI Act, impose data collection obligations to establish (evidence of) the safety of medical therapies, devices, and procedures. Increasingly, such health-related data are collected in the real world from individual data subjects. The relevant legal instruments therefore explicitly state that they apply without prejudice to other legal acts, including the GDPR. Following an introduction to real-world data, evidence, and electronic health records, this chapter considers the use of AI for healthcare from the perspective of healthcare data. It discusses the role of data custodians, especially when confronted with a request to share healthcare data, as well as the impact of concepts such as data ownership, patient autonomy, informed consent, and privacy- and data protection-enhancing techniques.
Artificial intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility, and privacy, but they present themselves in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but often fail to suffice due to the context-sensitivity of ethical challenges. Second, the chapter discusses methods to tackle these challenges. The main ethical theories (such as virtue ethics, consequentialism, and deontology) are shown to provide a starting point, but they often lack the detail needed for actionable AI ethics. Instead, we argue that mid-level philosophical theories, coupled with design approaches such as “design for values” and interdisciplinary working methods, offer the best way forward. The chapter aims to show how these approaches can lead to an ethics of AI that is actionable and that can be proactively integrated into the design of AI systems.
In spring 2024, the European Union formally adopted the AI Act, aimed at creating a comprehensive legal regime to regulate AI systems. In so doing, the Union sought to maintain a harmonized and competitive single market for AI in Europe while demonstrating its commitment to protect core EU values against AI’s adverse effects. In this chapter, we question whether this new regulation will succeed in translating its noble aspirations into meaningful and effective protection for people whose lives are affected by AI systems. By critically examining the conceptual vehicles and regulatory architecture upon which the AI Act relies, we argue there are good reasons for skepticism, as many of its key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms. Despite its laudable intentions, the AI Act may deliver far less than it promises.
Artificial intelligence (AI) is becoming increasingly important in our daily lives, and so is academic research on its impact on various legal domains. One of the fields that has attracted much attention is extra-contractual or tort liability, as AI will inevitably cause damage; accidents involving autonomous vehicles are an obvious example. In this chapter, we will discuss some major and general challenges that arise in this context. We will thereby illustrate the continuing importance of national law in tackling these challenges and focus on procedural elements, including disclosure requirements and rebuttable presumptions. We will also illustrate how existing tort law concepts are being challenged by the characteristics of AI and provide an overview of regulatory answers.
Can we develop machines that exhibit intelligent behavior? And how can we build machines that perform a task without being explicitly programmed, but instead learn from examples or experience? These are central questions for the domain of artificial intelligence. In this chapter, we introduce this domain from a technical perspective and dive deeper into machine learning and reasoning, which are essential for the development of AI.
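To make the contrast with explicit programming concrete, the following is a minimal sketch in Python, not drawn from the chapter itself, of what “learning from examples” means in practice. It assumes the scikit-learn library is available and uses its bundled iris dataset purely for illustration:

```python
# A minimal sketch of learning from examples (illustrative; assumes scikit-learn).
# No classification rules are written by hand: the model induces them from labeled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labeled examples: flower measurements (features X) and species (labels y).
X, y = load_iris(return_X_y=True)

# Hold out part of the data to check behavior on examples the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The programmer chooses only the model family and its capacity (max_depth);
# the decision rules themselves are learned from the training examples.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the division of labor: the programmer specifies the data and a model family, while the task-specific rules are inferred from the examples rather than coded explicitly.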
There are several reasons to be ethically concerned about the development and use of AI. In this contribution, we focus on one specific theme of concern: moral responsibility. In particular, we consider whether the use of autonomous AI causes a responsibility gap and put forward the thesis that this is not the case. Our argument proceeds as follows. First, we provide some conceptual background by discussing, respectively, what autonomous systems are, how the notion of responsibility can be understood, and what the responsibility gap is about. Second, we explore to what extent it could make sense to assign responsibility to artificial systems. Third, we argue that the use of autonomous systems does not necessarily lead to a responsibility gap. In the fourth and last section of this chapter, we set out why the responsibility gap – even if it were to exist – is not necessarily problematic.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
AI has the potential to support many of the proposed solutions to sustainability concerns. However, AI itself is also unsustainable in many ways: its development and use are linked with, for example, high carbon emissions, discrimination based on biased training data, surveillance practices, and influence over elections through microtargeting. Addressing the long-term sustainability of AI is crucial, as it impacts social, personal, and natural environments for future generations. A “sustainable” approach is one that is inclusive in both time and space: the past, present, and future of human societies, the planet, and the environment are considered equally important to protect and secure, including the integration of all countries in economic and social change. Furthermore, our use of the concept “sustainable” demands that we ask which practices in the current development and use of AI we want to maintain, and which practices we want to repair and/or change. This chapter explores the ethical dilemma of AI for sustainability: AI’s potential to address many sustainable development challenges must be balanced against the harm it causes to the environment and society.
This chapter discusses the interface of artificial intelligence (AI) and intellectual property (IP) law. It focuses on the protection of AI technology, the contentious qualification of AI systems as authors and/or inventors, and the question of ownership of AI-assisted and AI-generated output. The chapter also addresses a number of miscellaneous topics, including liability for IP infringement that takes place by or through the intervention of an AI system. More generally, it notes the ambivalent relationship between AI and the IP community, which appears to waver between apparent enthusiasm for the use of AI in IP practice and a clear hesitancy toward creating additional incentives in the AI sphere by amending existing IP laws.
The integration of AI into business models and workplaces has a profound impact on society, legal systems, and organizational structures. AI has also become intrinsically intertwined with the concepts of work and worker, and with the assignment of jobs, the measurement of performance, the evaluation of tasks, and decisions related to disciplinary measures or dismissals. The objective of this chapter is to provide an overview of the multifaceted aspects of AI and labor law, focusing on the profound legal questions arising from this intersection, including its implications for employment relationships, the exercise of labor rights, and social dialogue.
This chapter discusses how AI technologies permeate the media sector. It sketches the opportunities and benefits of the use of AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on the ethical and legal risks raised by AI-driven media applications: lack of data availability; poor data quality and bias in training datasets; lack of transparency; risks to the right to freedom of expression; threats to media freedom and pluralism online; and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework that aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
In virtually all societal domains, algorithmic systems, and AI more particularly, have made a grand entrance. Their growing impact renders it increasingly important to understand and assess the challenges and opportunities they raise – an endeavor to which this book aims to contribute. In this chapter, I start by putting the current “AI hype” into context. I emphasize the long history of human fascination with artificial beings; the fact that AI is but one of many powerful technologies that humanity has grappled with over time; and the fact that its uptake is inherently enabled by our societal condition. Subsequently, I introduce the chapters of this book, dealing with AI, ethics and philosophy (Part I); AI, law and policy (Part II); and AI across sectors (Part III). Finally, I discuss some conundrums faced by all scholars in this field, concerning the relationship between law, ethics and policy and their roles in AI governance; the tension between protection and innovation; and law’s (in)ability to regulate a continuously evolving technology. While their solutions are far from simple, I conclude there is great value in acknowledging the complexity of what is at stake and the need for more nuance in the AI governance debate.
The central aim of this book is to provide an accessible and comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems more broadly. As these technologies have a growing impact on all domains of our lives, it is increasingly important to map, understand, and assess the challenges and opportunities they raise. This requires an interdisciplinary approach, which is why this book brings together contributions from a stellar set of authors from different disciplines, with the goal of advancing the understanding of AI’s impact on society and how such impact is and should be regulated. Beyond covering theoretical insights and concepts, the book also provides practical examples of how AI systems are used in society today and the questions this raises, covering both horizontal and sectoral themes. Finally, the book offers an introduction to the various legal and policy instruments that govern AI, with a particular focus on Europe.
Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting with an exploration of Rawls’ conception of justice as fairness, we then position distributive approaches toward issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must be paid not only to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.