AI brings risks but also opportunities for consumers. When it comes to consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination, the increased use of AI also poses major challenges. This chapter discusses both the challenges and opportunities of AI in the consumer context (Sections 10.2 and 10.3) and provides a brief overview of some of the relevant consumer protection instruments in the EU legal order (Section 10.4). A case study on dark patterns illustrates the shortcomings of the current consumer protection framework more concretely (Section 10.5).
This chapter discusses how AI technologies permeate the media sector. It sketches the opportunities and benefits of the use of AI in media content gathering and production, media content distribution, fact-checking, and content moderation. The chapter then zooms in on the ethical and legal risks raised by AI-driven media applications: lack of data availability, poor data quality, and bias in training datasets; lack of transparency; risks to the right to freedom of expression; threats to media freedom and pluralism online; and threats to media independence. Finally, the chapter introduces the relevant elements of the EU legal framework that aim to mitigate these risks, such as the Digital Services Act, the European Media Freedom Act, and the AI Act.
Despite their centrality within discussions on AI governance, fairness, justice, and equality remain elusive and essentially contested concepts: even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their relation and realization. In this chapter, we aim to clear up some uncertainties concerning these notions. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction between procedural and substantive conceptions of fairness (Section 4.2). We then discuss the relationship between fairness, justice, and equality (Section 4.3). Starting with an exploration of Rawls’ conception of justice as fairness, we then position distributive approaches toward issues of justice and fairness against socio-relational ones. In a final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design (Section 4.4). Throughout this chapter, we illustrate how the design and regulation of fair AI systems is not an insular exercise: attention must not only be paid to the procedures by which these systems are governed and the outcomes they produce, but also to the social processes, structures, and relationships that inform, and are co-shaped by, their functioning.
The rules of war, formally known as international humanitarian law, have been developing for centuries, reflecting society’s moral compass, the evolution of its values, and technological progress. While humanitarian law has been successful in prohibiting the use of certain methods and means of warfare, it is nevertheless destined to remain in a constant catch-up cycle with the atrocities of war. Nowadays, the widespread development and adoption of digital technologies in warfare, including AI, are leading to some of the biggest changes in human history. Is international humanitarian law up to the task of addressing the threats those technologies can present in the context of armed conflicts? This chapter provides a basic understanding of the system, principles, and internal logic of this legal domain, which is necessary to evaluate the actual or potential role of AI systems in (non-)international armed conflicts. The chapter aims to contribute to the discussion of the ex-ante regulation of AI systems used for military purposes beyond the scope of lethal autonomous weapons, as well as to recognize the potential that AI carries for improving the applicability of the basic principles of international humanitarian law, if used in an accountable and responsible way.
The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns this raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After discussing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society.
Imagine that you are given access to an AI chatbot that compellingly mimics the personality and speech of a deceased loved one. If you start having regular interactions with this “thanabot,” could this new relationship be a continuation of the relationship you had with your loved one? And could a relationship with a thanabot preserve or replicate the value of a close human relationship? To the first question, we argue that a relationship with a thanabot cannot be a true continuation of your relationship with a deceased loved one, though it might support your continuing bonds with the dead. To the second question, we argue that, in and of themselves, relationships with thanabots cannot benefit us as much as rewarding and healthy intimate relationships with other humans, though we explain why it is difficult to make reliable comparative generalizations about the instrumental value of these relationships.
Methods for mapping agricultural crops have to date been developed predominantly for a small number of the most important and widely grown crops. These methods are often based on remote sensing data and on scarce information about the location and boundaries of fields of a particular crop, and they involve analyzing phenological changes throughout the growing season using vegetation indices, e.g., the normalized difference vegetation index (NDVI). However, this approach encounters challenges when attempting to distinguish fields with different crops growing in the same area, or crops that share similar phenology, which complicates the reliable identification of target crops based solely on vegetation index patterns. This paper investigates the potential of advanced techniques for crop mapping using satellite data and qualitative information; these approaches interpret features in satellite images in conjunction with cartographic, statistical, and climate data. The study focuses on data collection and mapping for three specific crops: lavender, almond, and barley. It relies on various sources of information for crop detection, including satellite image characteristics, regional statistical data detailing crop areas, and phenological information such as flowering dates and the end of the growing season in specific regions. As an example, the study attempts to visually identify lavender fields in Bulgaria and almond orchards in the USA. We test several state-of-the-art methods for semantic segmentation (U-Net, UNet++, ResUnet); the best result was achieved by a ResUnet model (96.4%). Furthermore, the paper explores how vegetation indices can be leveraged to enhance the precision of crop identification.
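For readers unfamiliar with the vegetation-index approach mentioned above, the standard NDVI formula is NDVI = (NIR - Red) / (NIR + Red), computed per pixel from the near-infrared and red reflectance bands. The sketch below is a minimal illustration of that formula only, not code from the study; the band arrays and the zero-denominator handling are assumptions.

```python
# Minimal NDVI sketch (illustrative; not from the study).
# Assumes `red` and `nir` are co-registered reflectance bands as NumPy arrays.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Guard against division by zero (e.g., nodata pixels); an assumed convention here.
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Example: dense vegetation reflects strongly in NIR, so NDVI approaches +1.
print(ndvi(np.array([0.05]), np.array([0.60])))  # ~0.846
```

A time series of such per-pixel values across the growing season yields the phenological curves that the mapping methods described above rely on.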
Language is the natural currency of most social communication. Until the emergence of more powerful computational methods, it simply was not feasible to measure its use in mainline social psychology. We now know that language can reveal behavioral evidence of mental states and personality traits, as well as clues to the future behavior of individuals and groups. In this chapter, we first review the history of language research in social and personality psychology. We then survey the main methods for deriving psychological insights from language (ranging from data-driven to theory-driven, naturalistic to experimental, qualitative to quantitative, holistic to granular, and transparent to opaque) and describe illustrative findings from each approach. Finally, we present our view of the new capabilities, real-world applications, and ethical and psychometric quagmires on the horizon as language research continues to evolve.
Nigeria has a significant gender gap in financial inclusion, with women disproportionately represented among the financially excluded. Artificial intelligence (AI)-powered financial technologies (fintech) present distinctive advantages for enhancing women’s inclusion, including efficiency gains, reduced transaction costs, and personalized services tailored to women’s needs. Nonetheless, AI harbours a paradox: while it promises to advance financial inclusion, it can also inadvertently perpetuate and amplify gender bias. The critical question is thus: how can AI effectively address the challenges of women’s financial exclusion in Nigeria? Using publicly available data, this research undertakes a qualitative analysis of AI-powered fintech services in Nigeria. Its objective is to understand how innovations in financial services correspond to the needs of potential users such as unbanked or underserved women. The research finds that introducing innovative financial services and technology is insufficient to ensure inclusion. Financial inclusion requires the availability, accessibility, affordability, appropriateness, and sustainability of services, their alignment with the needs of potential users, and policy-driven strategies that aid inclusion.
In the literature, there are polarized views regarding the capacity of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values do not inhere in inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies are designed and developed to foster social justice. Drawing insights from prior studies, the paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use-case analysis along with the Discovery, Translation, and Verification (DTV) framework, validated via focus group discussion, this study reveals three novel findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two remain relatively understudied, as do, more importantly, the conditions under which the quality of trustworthiness might reliably lead to the placing of ‘well-directed’ trust. In this paper, we argue that this challenge for the creation of ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and that its resolution can be informed by a multidisciplinary approach drawing on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling theory paradigm offers an effective solution to the TRP, one that we believe will be foundational to whether and how rapidly improving technologies are integrated in the healthcare space. Solving the TRP will not be possible without such an interdisciplinary approach, and we close by proposing further avenues of inquiry that we believe will be fruitful.
The Conclusion provides a very brief recap of the issues discussed in the preceding chapters. It reflects on the larger context of regulatory change, and touches upon contemporary challenges of regulation such as the role of gender, race, sustainability, and future generations in the regulatory process.
This chapter offers an introduction to the book. It defines regulation, distinguishing it from other concepts such as governance. We define regulation as ‘intentional, organised attempts to manage or control risk or the behaviours of a different party through the exercise of authority, usually through the use of mechanisms of standard-setting, monitoring and information-gathering and behaviour modification to address a collective tension or problem’. The Introduction reflects upon the most important changes in regulation over the last two decades and the growing relevance of regulation in society. The chapter explains significant changes in the practice and context of regulation that have occurred since the first edition was published.
Technological change often prompts calls for regulation. Yet formulating regulatory policy in relation to rapidly changing technology is complex: it requires an understanding of the politics of technology, the complexity of the innovation process, and technology's general impact on society. Chapter 3 introduces a variety of academic literatures across the humanities, law, and the social sciences that offer insights on technological change with direct relevance to the challenges of regulating new and emerging technology. The chapter discusses different strands of scholarship, ranging from the history of technology and innovation studies to the growing field of law and technology, strands that have until now remained largely fragmented and siloed, focusing primarily on digital technologies.
This study focuses on the practicalities of establishing and maintaining AI infrastructure, as well as the considerations for responsible governance, by investigating the integration of a pre-trained large language model (LLM) with an organisation’s knowledge management system via a chat interface. The research adopts the concept of “AI as a constituted system” to emphasise the social, technical, and institutional factors that contribute to AI’s governance and accountability. Through an ethnographic approach, this article details the iterative processes of negotiation, decision-making, and reflection among organisational stakeholders as they develop, implement, and manage the AI system. The findings indicate that LLMs can be effectively governed and held accountable to stakeholder interests within specific contexts, namely when clear institutional boundaries facilitate innovation while navigating the risks related to data privacy and AI misbehaviour. Effective constitution and use can be attributed to distinct policy creation processes that guide AI’s operation, clear lines of responsibility, and localised feedback loops that ensure accountability for actions taken. This research provides a foundational perspective for better understanding algorithmic accountability and governance within organisational contexts. It also envisions a future where AI is not universally scaled but consists of localised, customised LLMs tailored to stakeholder interests.
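The integration the study describes, a pre-trained LLM answering questions over an organisational knowledge base through a chat interface, is commonly realised as a retrieval-augmented loop. The sketch below is a hypothetical minimal version under that assumption and does not reproduce the system the paper examines; `retrieve`, `answer`, and the `llm_generate` callable are illustrative placeholders.

```python
# Hypothetical sketch of an LLM/knowledge-management chat loop; every
# identifier here is illustrative, not drawn from the study.
from typing import Callable

def retrieve(query: str, documents: dict[str, str], k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval over an in-memory document store."""
    terms = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query: str, documents: dict[str, str],
           llm_generate: Callable[[str], str]) -> str:
    """Compose retrieved context into a prompt and delegate to the model.

    `llm_generate` stands in for whatever hosted or local model an
    organisation has constituted; a real deployment would add the policy
    checks and feedback loops the study emphasises.
    """
    context = "\n\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only this internal context:\n"
        f"{context}\n\nQuestion: {query}"
    )
    return llm_generate(prompt)
```

The institutional questions the study foregrounds (who writes the policies governing what the prompt may contain, who is responsible when the model misbehaves, and how user feedback reaches maintainers) sit around this loop rather than inside it.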
Contemporary life relies on regulation. The quality and safety of the water we drink, the food we eat, and the social media applications we use are all governed by multiple regulatory regimes. Although rooted in law, regulation is a multidisciplinary endeavour. Debates about regulation, particularly in the face of rapid change and the emergence of new 'risks', are now commonplace. Despite extensive scholarship, regulation is often poorly understood, even by policy-makers, with unintended and sometimes disastrous consequences. This book offers a critical introduction to core theories, concepts, methods, tools, and techniques of regulation, including regulatory policy, instruments, enforcement, compliance, accountability, and legitimacy. Weaving extracts from texts drawn from many disciplines with accessible commentary, it introduces this important field to students, scholars, and practitioners in a scholarly yet accessible and engaging manner, with discussion questions and additional readings for those seeking to deepen their knowledge.
This chapter introduces social scientific perspectives and methods applicable to observing the relationship between artificial intelligence (AI) and religion. It discusses the contributions that anthropological and sociological approaches can make to this entanglement of two modern social phenomena, while also drawing attention to the inherent biases and perspectives that both fields bring with them due to their histories. Examples of research on religion and AI are highlighted, especially where they demonstrate agile and new methodologies for engaging with AI in its many applications, including but not limited to online worlds, multimedia formats, games, social media, and the new spaces made by technological innovations such as the platforms underpinning the gig economy. All these AI-enabled spaces can be entangled with religious and spiritual conceptions of the world. The chapter also aims to expand upon the relationship between AI and religion as it is perceived as a general concept or object within human society and civilisation. It explains how both anthropology and sociology can provide frameworks for conceptualising that relationship and give us ways to account for our narratives of secularisation – informed by AI development – that see religion as a remnant of a prior, less rational stage of human civilisation.
Artificial intelligence (AI) as an object and term remains enmeshed in our imaginaries, narratives, institutions and aspirations. AI has that in common with the other object of discussion in this Cambridge Companion: religion. But beyond such similarities in form and reception, we can also speak to how entangled these two objects have been, and are yet still becoming, with each other. This introductory chapter explores the difficulty of definitions and the intricacies of the histories of these two domains and their entanglements. It initially explores this relationship through the religious narratives and tropes that have had a role to play in the formation of the field of AI, in its discursive modes. It examines the history of AI and religion through the language and perspectives of some of the AI technologists and philosophers who have employed the term ‘religion’ in their discussions of the technology itself. Further, this chapter helps to set the scene for the larger conversation on religion and AI of this volume by demonstrating some of the tensions and lacunae that the following chapters address in greater detail.
The global and historical entanglements between artificial intelligence (AI)/robotic technologies and Buddhism, as a lived religion and philosophical tradition, are significant. This chapter sets out three key sites of interaction between Buddhism and AI/robotics. First, Buddhism offers an ontological model of mind (and body) that describes the conditions for what constitutes artificial life. Second, Buddhism defines the boundaries of moral personhood and thus the nature of interactions between human and non-human actors. Finally, Buddhism can be used as an ethical framework to regulate and direct the development of AI/robotics technologies. The chapter argues that Buddhism provides an approach to technology that is grounded in the interdependence of all things, one that gives rise to both compassion and an ethical commitment to alleviate suffering.