The entangled relations of humanity’s natural and digital ecosystems are discussed in terms of the risk-uncertainty conundrum. The discussion focuses on global warming from the perspective of the small world of geoengineering, with a particular focus on geothermal energy, marine geoengineering, and the political economy of mitigation and adaptation (section 1). It inquires into the large world of the biosphere, Anthropocene, and uncertainties created by the overlay of human and geological time (section 2). And it scrutinizes the technosphere, consciousness, and language as humanity’s arguably most important cultural technology (section 3).
Technological disruption leads to discontent in the law over the limited remedies available under private law. The source of the problem is a ‘private law’ model that assumes the function of law is to correct wrongs by compensating the individuals who are harmed; the model is therefore based on (i) individual claimants and (ii) financial redress. If we copy this private law model into our regulatory regimes for new technologies, our governance remedies will fall short. On the one hand, a single use of AI can affect a large number of people; on the other hand, not all offences can be cured by awarding money damages. It is therefore necessary to rethink private remedies in the face of AI wrongs if the law is to remain effective. To achieve this, the mantra of individual compensation has to be overcome in favor of a social perspective, including the use of non-pecuniary measures to provide effective remedies for AI wrongs.
This chapter examines some ways in which human agency might be affected by a transition from legal regulation to regulation by AI. To do that, it elucidates an account of agency, distinguishing it from related notions like autonomy, and argues that this account of agency is both philosophically respectable and fits common sense. With that account of agency in hand, the chapter then examines two different ways – one beneficial, one baleful – in which agency might be impacted by regulation by AI, focussing on some agency-related costs and benefits of transforming private law from its current rule-based regulatory form to an AI-enabled form of technological management. It concludes that there are few grounds to be optimistic about the effects of such a transition and good reason to be cautious.
Being Human in the Digital World is a collection of essays by prominent scholars from various disciplines exploring the impact of digitization on culture, politics, health, work, and relationships. The volume raises important questions about the future of human existence in a world where machine readability and algorithmic prediction are increasingly prevalent and offers new conceptual frameworks and vocabularies to help readers understand and challenge emerging paradigms of what it means to be human. Being Human in the Digital World is an invaluable resource for readers interested in the cultural, economic, political, philosophical, and social conditions that are necessary for a good digital life. This title is also available as Open Access on Cambridge Core.
Interest in the use of chatbots powered by large language models (LLMs) to support women and girls in conflicts and humanitarian crises, including survivors of gender-based violence (GBV), appears to be increasing. Chatbots could offer a last-resort solution for GBV survivors who are unable or unwilling to access relevant information and support in a safe and timely manner. With the right investment and guard-rails, chatbots might also help treat some symptoms related to mental health and psychosocial conditions, extending mental health and psychosocial support (MHPSS) to crisis-affected communities. However, the use of chatbots can also increase risks for individual users – for example, generating unintended harms when a chatbot hallucinates or produces errors. In this paper, we critically examine the opportunities and limitations of using LLM-powered chatbots that provide direct care and support to women and girls in conflicts and humanitarian crises, with a specific focus on GBV survivors. We find some evidence in the global North to suggest that the use of chatbots may reduce self-reported feelings of loneliness for some individuals, but we find less evidence on the role and effectiveness of chatbots in crisis counselling and in treating depression and post-traumatic or somatic symptomatology, particularly as they relate to GBV in emergencies or other traumatic events that occur in armed conflicts and humanitarian crises. Drawing on key expert interviews as well as evidence and research from adjacent scholarship – such as feminist AI, trauma treatment, GBV, and MHPSS in conflicts and emergencies – we conclude that the potential benefits of GBV-related, AI-enabled talk therapy chatbots do not yet outweigh their risks, particularly when deployed in high-stakes scenarios and contexts such as armed conflicts and humanitarian crises.
Two modern trends in insurance are data-intensive underwriting and behaviour-based insurance. Data-intensive underwriting means that insurers analyse more data for estimating the claim cost of a consumer and for determining the premium based on that estimation. Insurers also offer behaviour-based insurance. For example, some car insurers use artificial intelligence (AI) to follow the driving behaviour of an individual consumer in real time and decide whether to offer that consumer a discount. In this paper, we report on a survey of the Dutch population (N = 999) in which we asked people’s opinions about examples of data-intensive underwriting and behaviour-based insurance. The main results include: (i) If survey respondents find an insurance practice unfair, they also find the practice unacceptable. (ii) Respondents find almost all modern insurance practices that we described unfair. (iii) Respondents find practices for which they can influence the premium fairer. (iv) If respondents find a certain consumer characteristic illogical for basing the premium on, then respondents find using the characteristic unfair. (v) Respondents find it unfair if an insurer offers an insurance product only to a specific group. (vi) Respondents find it unfair if an insurance practice leads to the poor paying more. We also reflect on the policy implications of the findings.
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host, and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
The integration of artificial intelligence (AI)-driven technologies into peace dialogues offers both innovative possibilities and critical challenges for contemporary peacebuilding practice. This article proposes a context-sensitive taxonomy of digital deliberation tools designed to guide the selection and adaptation of AI-assisted platforms in conflict-affected environments. Moving beyond static typologies, the framework accounts for variables such as scale, digital literacy, inclusivity, security, and the depth of AI integration. By situating digital peace dialogues within broader peacebuilding and digital democracy frameworks, the article examines how AI can enhance participation, scale deliberation, and support knowledge synthesis, while also highlighting emerging concerns around algorithmic bias, digital exclusion, and cybersecurity threats. Drawing on case studies involving the United Nations (UN) and civil society actors, the article underscores the limitations of one-size-fits-all approaches and makes the case for hybrid models that balance AI capabilities with human facilitation to foster trust, legitimacy, and context-responsive dialogue. The analysis contributes to peacebuilding scholarship by engaging with the ethics of AI, the politics of digital diplomacy, and the sustainability of technological interventions in peace processes. Ultimately, the study argues for a dynamic, adaptive approach to AI integration, continuously attuned to the ethical, political, and socio-cultural dimensions of peacebuilding practice.
The chapter will help you to describe computerised CBT and its evidence base to date, and to weigh up the benefits and costs of computerised CBT to both the provider and the client.
The nexus of artificial intelligence (AI) and memory is typically theorized as a ‘hybrid’ or ‘symbiosis’ between humans and machines. The dangers related to this nexus are subsequently imagined as tilting the power balance between its two components, such that humanity loses control over its perception of the past to the machines. In this article, I propose a new interpretation: AI, I posit, is not merely a non-human agency that changes mnemonic processes, but rather a window through which the past itself gains agency and extends into the present. This interpretation holds two advantages. First, it reveals the full scope of the AI–memory nexus. If AI is an interactive extension of the past, rather than a technology acting upon it, every application of it constitutes an act of memory. Second, rather than locating AI’s power along familiar axes – between humans and machines, or among competing social groups – it reveals a temporal axis of power: between the present and the past. In the article’s final section, I illustrate the utility of this approach by applying it to the legal system’s increasing dependence on machines, which, I claim, represents not just a technical but a mnemonic shift, where the present is increasingly falling under the dominion of the past – embodied by AI.
The recent phenomenon of anthropomorphizing artificial intelligences (AIs) is uniquely provocative for philosophy of religion because of its tendency to place AIs in an analogous position to divinity vis-à-vis humans, in spite of AIs being human artefacts. In the case of divinity, intelligent mental capacities are not just equivalent but in fact superior to their realization in humans; in the case of AIs, they are sometimes presented as inevitably becoming so. Philosophers of religion would do well to learn from discussions of anthropomorphism in AI, in conversation with the historical debates over anthropomorphizing divinity, and to remember that evolved cognitive biases may lose their adaptive functions as the cultural context shifts, and may even become maladaptive.
Chapter 9 draws on the evidence outlined earlier in the book to evaluate a range of possible legal interventions. Structured according to the five potential equality objectives outlined earlier, the measures include steps to increase the visibility of people with disfigurements in daily life, methods of motivating employers to become appearance-inclusive and changes to influential institutions outside the employment context. They also include a range of legislative reforms to replace the severe disfigurement provision with a better remedial mechanism, such as the creation of a new protected characteristic of disfigurement or the reformulation of the definition of disability.
AI-based autocontouring products claim to segment organs with accuracy comparable to that of humans. We compare the geometric and dosimetric performance of three AI-based autocontouring packages (Autocontour 2.5.6 (“RF”), Annotate 2.3.1 (“TP”), and RT-Mind_AI 1.0 (“MM”)) in the head and neck region.
Methods:
We generated 14 organ-at-risk (OAR) autocontours on 13 computed tomography (CT) image sets and compared them with clinical (human-generated) contours. The geometric differences were quantified by calculating Dice coefficients and Hausdorff distances. The autocontours were also compared visually with the clinical contours by an expert physician, and the autocontour sets were ranked for accuracy by two physicians. The dosimetric effects were evaluated by recalculating treatment plans on the autocontoured CT sets.
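For illustration only (this is not the study’s code), the following minimal Python sketch shows how the two geometric metrics can be computed for a pair of binary OAR masks, assuming numpy and scipy are available; the masks and array sizes are hypothetical examples.

```python
# Minimal sketch: Dice coefficient and symmetric Hausdorff distance
# between a clinical contour and an autocontour, both given as boolean masks.
# Hypothetical example data; distances are in voxel units, not millimetres.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    points_a = np.argwhere(mask_a)
    points_b = np.argwhere(mask_b)
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Hypothetical 3D masks standing in for a clinical contour and an autocontour.
clinical = np.zeros((10, 64, 64), dtype=bool)
auto = np.zeros((10, 64, 64), dtype=bool)
clinical[3:7, 20:40, 20:40] = True
auto[3:7, 22:42, 20:40] = True

print(f"Dice: {dice_coefficient(clinical, auto):.3f}")
print(f"Hausdorff (voxels): {hausdorff_distance(clinical, auto):.1f}")
```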
Results:
RF and TP slightly outperformed MM on the geometric metrics (the percentage of OARs with mean Dice coefficients > 0.7 was 57.1% for RF, 64.3% for TP and 50.0% for MM). The physician judged RF and TP contours to be more anatomically accurate, on average, than the manual contours (manual contour mean accuracy score 2.49, RF 2.28, MM 3.24, TP 1.93). The mean scores given to the autocontours by the two physicians were better for RF and TP than for MM (RF 1.86, MM 2.36, TP 1.77). The dosimetric differences were similar for all three programs and were not strongly correlated with the geometric differences.
Conclusions:
The performance of the three autocontouring packages in the head and neck region is similar, with TP and RF slightly outperforming MM. The correlation between geometric and dosimetric metrics is not strong, and dosimetric evaluation is therefore recommended before clinical use of autocontouring software.
This Element brings work from the philosophy of technology into conversation with media, religion, culture studies, and work in digital religion studies to explore examples of how popular media and emerging technologies are increasingly framed and understood through a distinct range of spiritual myths, metaphors, images, and representations of God. Working with three case studies about how internet memes, popular films, and media coverage of public philosophy link ideas about God and technology, this Element draws attention to common conceptions that describe a perceived relationship between religion and technology today. It synthesizes these discussions and categories and presents them in four distinct models, showing a range of ways in which the relationship between God and technology is commonly depicted. The Element seeks to create a platform for scholarly study and critical discourse on technology's religious and spiritual representation in digital and emerging media cultures and contexts through this work.
At the London Tech Week event in early June, Nvidia CEO Jensen Huang praised the UK as the ‘envy of the world’ when it comes to AI researchers, but he also criticised it as the largest AI ecosystem in the world without its own infrastructure. The criticism is somewhat self-serving: when the UK does get around to building out that infrastructure, it’s certain to consist largely of chips sold by Huang’s company. It’s also unsurprising: Huang has been pitching the idea of ‘sovereign AI’ since at least 2023, conscious that nation states are the next deep pockets to target after the hyperscalers and generously funded model builders. In a world where the only real contenders in the race for AI supremacy are the US and China, we look at how the pursuit of AI sovereignty is playing out across the rest of the planet.
Recent developments in artificial intelligence (AI) in general, and Generative AI (GenAI) in particular, have brought about changes across the academy. In applied linguistics, a growing body of work is emerging dedicated to testing and evaluating the use of AI in a range of subfields, spanning language education, sociolinguistics, translation studies, corpus linguistics, and discourse studies, inter alia. This paper explores the impact of AI on applied linguistics, reflecting on the alignment of contemporary AI research with the epistemological, ontological, and ethical traditions of applied linguistics. Through this critical appraisal, we identify areas of misalignment regarding perspectives on knowing, being, and evaluating research practices. The question of alignment guides our discussion as we address the potential affordances of AI and GenAI for applied linguistics as well as some of the challenges that we face when employing AI and GenAI as part of applied linguistics research processes. The goal of this paper is to attempt to align perspectives in these disparate fields and forge a fruitful way ahead for further critical interrogation and integration of AI and GenAI into applied linguistics.
The advent of new technologies, particularly artificial intelligence (AI), has expanded the array of options and enhanced performance in addressing biothreats. This article provides a comprehensive overview of the specific applications of AI in addressing biothreats, aiming to inform and enhance future practices. Research indicates that AI has significantly contributed to infectious disease surveillance and emergency responses, as well as bioterrorism mitigation; despite its limitations, it merits ongoing attention for further study and exploration. The effective deployment of next-generation AI in mitigating biothreats will largely hinge on our ability to engage in continuous experiential learning, acquire high-quality data, refine algorithms, and iteratively update practices. Meanwhile, it is essential to assess the operational risks associated with AI in the context of biothreats and develop robust solutions to mitigate potential risks.
The chapter explores the intricate relationship between sex, gender, science, and technology within STS, examining historical and contemporary intersections. Early STS studies, influenced by second-wave feminism, initially addressed gender inequalities in science and technology, emphasizing women’s underrepresentation. Over time, research expanded to encompass various ways sex and gender interact with these domains. One central theme is social constructivism, questioning Western science’s objectivity and universality. Researchers argue that science and technology aren’t value-neutral, reflecting societal norms and biases. Gender imbalances persist in science and technology jobs, influenced by stereotypes, bias, and limited role models. Work–life challenges, preference differences, and ability disparities contribute to the gender gap. The chapter delves into technology-gendering, examining how certain technologies, such as home appliances, crash dummies, and digital assistants, are associated with specific genders. These design choices either reinforce or challenge traditional gender norms. The discussion extends to gender’s impact on science communication, technological embodiment, and cyberspaces. Online spaces raise concerns about gendered harassment and cyberbullying. The passage also addresses gender imbalances in tech entrepreneurship and leadership, emphasizing women’s underrepresentation in startup ventures. The intersection of gender and AI reveals biases in algorithmic decision-making.
From an “infrastructural gaze,” this chapter examines the penetration of artificial intelligence (AI) in capital markets as a blend of continuity and change in finance. The growing infrastructural dimension of AI originates first from the evolution of algorithmic trading and governance, and second, from its ascent as a “general-purpose technology” within the financial domain. The text discusses the consequences of this “infrastructuralization” of financial AI, considering the micro–macro tension typical of capital accumulation and crisis dynamics. Challenging the commonly espoused notion of AI as a stabilizing force, the analysis underscores its connections with volatile, crisis-prone financialized dynamics. It concludes by outlining potential consequences (unpredictability, operational inefficiency, complexity, further concentration) and (systemic) risks arising from the emergence of AI as a “new” financial infrastructure, particularly those related to biases in data and data commodification, lack of explanation of underlying models, algorithmic collusion, and network effects. The text asserts that a thorough understanding of these hazards can be attained by adopting a perspective that considers the macro/meso/micro connections inherent in infrastructures.