Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
In virtually all societal domains, algorithmic systems, and AI more particularly, have made a grand entrance. Their growing impact renders it increasingly important to understand and assess the challenges and opportunities they raise – an endeavor to which this book aims to contribute. In this chapter, I start by putting the current “AI hype” into context. I emphasize the long history of human fascination with artificial beings; the fact that AI is but one of many powerful technologies that humanity has grappled with over time; and the fact that its uptake is inherently enabled by our societal condition. Subsequently, I introduce the chapters of this book, dealing with AI, ethics and philosophy (Part I); AI, law and policy (Part II); and AI across sectors (Part III). Finally, I discuss some conundrums faced by all scholars in this field, concerning the relationship between law, ethics and policy and their roles in AI governance; the juxtaposition between protection and innovation; and law’s (in)ability to regulate a continuously evolving technology. While their solutions are far from simple, I conclude there is great value in acknowledging the complexity of what is at stake and the need for more nuance in the AI governance debate.
The central aim of this book is to provide an accessible and comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems more broadly. As these technologies have a growing impact on all domains of our lives, it is increasingly important to map, understand, and assess the challenges and opportunities they raise. This requires an interdisciplinary approach, which is why this book brings together contributions from a stellar set of authors from different disciplines, with the goal of advancing the understanding of AI’s impact on society and how such impact is and should be regulated. Beyond covering theoretical insights and concepts, the book also provides practical examples of how AI systems are used in society today and which questions are raised thereby, covering both horizontal and sectoral themes. Finally, the book also offers an introduction into the various legal and policy instruments that govern AI, with a particular focus on Europe.
Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting with an exploration of Rawls’ conception of justice as fairness, we then position distributive approaches toward issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must not only be paid to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.
The rules of war, formally known as international humanitarian law, have been developing for centuries, reflecting society’s moral compass, the evolution of its values, and technological progress. While humanitarian law has been successful in prohibiting the use of certain methods and means of warfare, it is nevertheless destined to remain in a constant catch-up cycle with the atrocities of war. Nowadays, the widespread development and adoption of digital technologies in warfare, including AI, are leading to some of the biggest changes in human history. Is international humanitarian law up to the task of addressing the threats those technologies can present in the context of armed conflicts? This chapter provides a basic understanding of the system, principles, and internal logic of this legal domain, which is necessary to evaluate the actual or potential role of AI systems in (non-)international armed conflicts. The chapter aims to contribute to the discussion of the ex-ante regulation of AI systems used for military purposes beyond the scope of lethal autonomous weapons, as well as to recognize the potential that AI carries for improving the applicability of the basic principles of international humanitarian law, if used in an accountable and responsible way.
Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools are on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some takeaways for public administrations to consider when seeking to deploy algorithmic regulation.
Firms use algorithms for important decisions in areas from pricing strategy to product design. Increased price transparency and availability of personal data, combined with ever more sophisticated machine learning algorithms, has turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – when algorithms collude autonomously for example – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).
Actors in the financial world process vast amounts of information, ranging from customer data and account movements to market trading data, credit underwriting, and money-laundering checks. It is one thing to collect and store these data, yet another challenge to interpret and make sense of them. AI helps with both, for example, by checking databases or crawling the Internet in search of relevant information, by sorting it according to predefined categories, or by finding its own sorting parameters. It is hence unsurprising that AI has started to fundamentally change many aspects of finance. This chapter takes AI scoring and creditworthiness assessments as an example for how AI is employed in financial services (Section 16.2), for the ethical challenges this raises (Section 16.3), and for the legal tools that attempt to adequately balance the advantages and challenges of this technique (Section 16.4). It closes with a look at scoring beyond the credit situation (Section 16.5).
The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns this raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After discussing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society.
The Cambridge Handbook of Emerging Issues at the Intersection of Commercial Law and Technology is a timely and interdisciplinary examination of the legal and societal implications of nascent technologies in the global commercial marketplace. Featuring contributions from leading international experts in the field, this volume offers fresh and diverse perspectives on a range of topics, including non-fungible tokens, blockchain technology, the Internet of Things, product liability for defective goods, smart readers, liability for artificial intelligence products and services, and privacy in the era of quantum computing. This work is an invaluable resource for academics, policymakers, and anyone seeking a deeper understanding of the social and legal challenges posed by technological innovation, as well as the role of commercial law in facilitating and regulating emerging technologies.
This informative Handbook provides a comprehensive overview of the legal, ethical, and policy implications of AI and algorithmic systems. As these technologies continue to impact various aspects of our lives, it is crucial to understand and assess the challenges and opportunities they present. Drawing on contributions from experts in various disciplines, the book covers theoretical insights and practical examples of how AI systems are used in society today. It also explores the legal and policy instruments governing AI, with a focus on Europe. The interdisciplinary approach of this book makes it an invaluable resource for anyone seeking to gain a deeper understanding of AI's impact on society and how it should be regulated. This title is also available as Open Access on Cambridge Core.
This chapter presents the current state of research in multimodal Construction Grammar with a focus on co-speech gestures. We trace the origins of the idea that constructions may have to be (re-)conceptualized as multimodal form–meaning pairs, deriving from the inherently multimodal nature of language use and the usage-based model, which attributes to language use a primordial role in language acquisition. The issue of whether constructions are actually multimodal is contested. We present two current positions in the field. The first one argues that a construction should only count as multimodal if gestures are mandatory parts of that construction. Other, more meaning-centered, approaches rely less on the obligatoriness and frequency of gestural (co-)occurrences and either start from a recurrent gesture to explore the verbal constructions it combines with or focus on a given meaning, for example, negation, and explore its multimodal conceptualization in discourse. The chapter concludes with a plea for more case studies and for the need to develop large-scale annotated corpora and apply statistical methods beyond measuring mere frequency of co-occurrence.
This chapter discusses the implications of the inclusion of emission allowances within the scope of EU capital markets legislation, and its interrelations with other relevant sources of EU law, namely the Emissions Trading Schemes and REMIT. The process leading to the comprehensive treatment of emission allowances by EU capital markets legislation began with MiFID I and reached its peak with MiFID II, which includes EUAs within the definition of financial instruments. This inclusion also implies that EUAs are subject to the provisions of the Market Abuse Regulation. This phenomenon raises several issues, first and foremost those that concern coordination between different legal texts. It also raises the question as to whether capital markets legislation is indeed capable of supporting the ultimate goal that the entire regulation of EUAs pursues, that is, the reduction of emissions. While there is still insufficient empirical evidence as to whether this is effectively the case, EU legislation appears to rest on an assumption that would need to be better verified over time.
This chapter discusses the integration of sustainability risks and factors into insurance regulation. According to the European Commission, sustainability considerations should be placed at the heart of the financial system. In its Action Plan, the European Commission announced its intention to clarify the integration of sustainability in so-called fiduciary duties in sectoral legislation. The objective of the European Commission is to direct financial and capital flows to green investment and to avoid stranded assets, which could be facilitated if sustainability is more clearly integrated in such duties of financial undertakings. This chapter describes the wide range and variety of developments in this area, which reflect some of the unique characteristics of the insurance sector and provide opportunities to contribute to the EC's sustainability agenda. This contribution is not limited to channeling considerable financial resources to the sustainable investment agenda, which is closely related to the fiduciary duties of insurers and the application of the prudent person principle; it extends as well to other elements of the sustainability agenda and the resilience of the European economy, for which the insurance and reinsurance sector is well positioned to provide meaningful support, for instance by addressing the protection gap.