This chapter explores the interplay of playfulness and precarity in AI-mediated languaging. Drawing on examples from social media users experimenting with generative AI, the chapter illustrates how AI reshapes communication, linguistic practices, and social interaction. These playful engagements demonstrate AI’s capacity to expand linguistic creativity and produce novel forms of meaning, while simultaneously revealing its fragility and the ethical tensions inherent in its use. By exposing the cultural assumptions, power relations, and value judgments embedded in AI systems, such moments highlight the non-neutral and unpredictable nature of AI technologies. The chapter argues that while AI opens up new possibilities for expression, it also demands critical reflection on issues of power, identity, and social norms. Ultimately, these examples point to a stance of (dis)engagement: embracing AI’s potentiality while recognising and addressing the risks it poses to language and society.
‘Using Generative AI in Historical Practice’ argues that generative models are reshaping historical scholarship. Rejecting medium- and long-term speculation, it focuses on near-term practice: how historians can use AI now to augment their research through context-aware dialogue, semantic search, network visualization, multimodal source analysis, and code-assisted workflows. It details methods for context management, task design, and response structure, while warning against cognitive offloading and model bias. While it offers a variety of novel methodologies, the book insists on the indispensability of human agency and taste. Case studies range from Augustine of Hippo to early cinematography, demonstrating the possibilities and limits of generative AI. It concludes with a call to historians to engage with the technology critically and productively, reimagining AI-assisted scholarship without surrendering disciplinary standards and aims.
Young people are experiencing worsening mental health and a growing reliance on online tools and services to address mental health difficulties. At the same time, next-generation large language models (LLMs) deployed through ‘chatbot-style interfaces’, whose deep-learning-driven interactions feel akin to conversing with a human, appear to mark an opportunity for mental health therapeutics when designed specifically for clinical intervention. However, emergent evidence suggests the use of more generic LLM chatbots may pose a risk of misinformation, bias, or over-reliance for some individuals when used outside of clinical contexts for mental health. This perspective paper examines the intersection of youth mental health and the rapid adoption of LLM chatbots. It first contextualises rising mental health challenges among young people alongside their increasing reliance on digital solutions. The paper then explores the potential benefits of LLM chatbot-style interfaces in clinical mental health interventions. Following this, we discuss the evidence surrounding adverse mental health outcomes from the use of generic LLMs to support mental health at population level, describing complex system-level and human-level factors noted in the evidence. Finally, we outline considerations for public health and youth mental health discourse, purpose-built LLM platform design, and a supporting research agenda. While current evidence on benefits and risks from generic LLMs is emergent and not youth-specific, this perspective highlights a need for research focused on young people to ensure safe and effective use of widely available LLMs for mental health support.
Though coverage denials and delays impose substantial administrative burdens on physicians and patients (especially marginalized patients), the practice persists. Drawing on interviews with patients and former health insurance executives, this chapter reflects on harms caused by prior authorization and offers a menu of state and federal solutions to expand access to care, while also reflecting on how the 2024 election results affect their likelihood. A growing complication is major insurers’ increasing reliance on AI tools to process prior authorizations and claims in seconds. Though many states have sought to lessen prior authorization burdens in targeted ways, their reach is limited because the Employee Retirement Income Security Act preempts state policies that “relate to” much of employer-sponsored health insurance. Despite some appetite for reform in Congress, legislative efforts have stalled. The 2024 election results signal a likely acceleration of America’s reliance on privatization (especially Medicare Advantage), so it is especially important to understand the impact of these managed care practices and ways to mitigate their burdens.
This paper examines how aesthetics are constructed in technology-mediated musical practice, focusing on the interplay between cultural expectations of AI-generated sounds and the technical structures determining the behaviour of AI algorithms. Through a reconstruction of events in the Surfing Hyperparameters project, we capture how the sonic aesthetics of the system were constructed by negotiating between our sonic expectations (informed by cultural narratives of ghosts in machines) and the sound produced by the system. We argue that the aesthetics of AI-generated sound are often inspired rather than directly caused by the technology itself. While existing research has identified how tools embed ‘paths of least resistance’ towards certain sonic aesthetics, our work reveals a complementary force: how aesthetic expectations rooted in cultural narratives – from science fiction’s stories of autonomous machines to sonic hauntology’s spectral presences – actively shape design decisions and sonic outcomes. Through a radically transparent approach to documenting mismatches between expectation and reality, we show that the stories practitioners tell while building and making music with technology are performative, constructing rather than merely describing aesthetic realities. Addressing these interplays between imagination, expectation and material reality constitutes an important step towards addressing the complex sociotechnical assemblages in which technology-mediated musical practices come into being.
The adoption of Artificial Intelligence (AI) in the maritime sector marks a significant technological advancement with broad implications for operational efficiency, crewing, and regulatory frameworks. While these innovations are expected to enhance safety, reduce operating costs, and promote environmental sustainability, they are also likely to introduce challenges related to workforce displacement, cybersecurity, and evolving labor regulations at sea. This chapter examines the impact of AI on the maritime workforce, more specifically seafarers. It explores how AI may affect crew size, the emergence of new roles, and the skills required in the future. It also offers an analysis of the significant impact of AI on working conditions and labor rights at sea under international maritime regulations, particularly the Maritime Labour Convention, 2006 (MLC) and the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers, 1978 (STCW Convention), as amended. The chapter further explores the areas of intersection between AI and maritime law, focusing on emerging regulatory frameworks, including the EU AI Act and the International Maritime Organization’s Maritime Autonomous Surface Ships (MASS) Code. The findings indicate that while AI presents opportunities for improving the working conditions of seafarers, its use must adhere to acceptable labor standards, legal clarity, and robust cybersecurity measures.
Drawing on the fields of memory studies and digital humanities, this article argues that there has been a shift from more collective and social memory to more personalised and individual memory. This shift, it is argued here, can be conceptualised through the psychoanalytic concept of ‘psychosis’. While the causes of the changes in our patterns of memory have been located in capitalist and neoliberal principles, the effects of the changes in our memory habits might be found in psychosis. From falling in love with machinic AI replicas to indulging in conspiracy theories to acting as if we are social media influencers or backing ourselves to win out in impossible job markets, we are inclined towards personal fantasy, often at the expense of participating in social life. But why do we do this? Why is it easier to believe a farfetched conspiracy theory or wild personal dream than it is to participate socially and collectively in the world we live in? Part of the reason, at least, is found in our increasing habitual reliance on new and emergent technologies. Often presented to us as a brand-new form of Artificial Intelligence, these generative tools are the latest update to a longer pattern in our digital world: the trend of developing ‘relationships’ with algorithms that, to larger and smaller degrees, we come to rely on for habits of cognition and recognition. By affecting our patterns of memory, these technologies produce a kind of isolation that lends itself to individual and fantastical – rather than shared and realist – thinking.
Background:
Manual contouring (MC) is time-consuming work in radiotherapy planning for rectal cancer. Artificial intelligence (AI) can reduce the time required for clinical target volume (CTV) and organs-at-risk (OARs) delineation. In this study, we evaluated the quality of auto-segmented CTVs and OARs.
Methods:
Dose-planning data were collected from ten patients who underwent preoperative radiotherapy for locally advanced rectal cancer in 2024. Auto-segmented structures from the AI-Rad and Contour+ software tools were added. The constructed AI-CTVs (based on the Contour+ segmentations) and the AI-OARs (bladder, femoral heads, and bowel bag, from both AI tools) were compared with their MC counterparts using quantitative metrics: the volumetric and surface Dice similarity coefficients (vDSC/sDSC) and the maximum and average Hausdorff distances (HD/aHD). The constructed AI-CTVs and their MC counterparts were also graded by two radiotherapists using two qualitative methods.
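For reference, these overlap and distance metrics follow their standard definitions (restated here for convenience; nothing tool-specific is assumed). For voxel sets (or surfaces, in the case of sDSC) $A$ and $B$:

\[
\mathrm{DSC}(A,B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert},
\qquad
\mathrm{HD}(A,B) = \max\Bigl\{ \max_{a \in A} \min_{b \in B} d(a,b),\ \max_{b \in B} \min_{a \in A} d(a,b) \Bigr\},
\]

where $d(\cdot,\cdot)$ is the Euclidean distance between boundary points. The average Hausdorff distance (aHD) replaces the outer maxima with means over the boundary points, making it less sensitive to single outlying deviations than the maximum HD.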
Results:
The median vDSC and sDSC values of our constructed AI-CTVs compared with the MC-CTVs were 0.86 and 0.61, respectively, with median HD and aHD values of 23.19 mm and 0.62 mm. For both AI tools, the agreement in the OAR metrics was overall good but less similar for the bowel bag. The qualitative evaluations of the AI-CTVs, compared to the MC-CTVs, were clearly in favour of the MC-CTVs. The cranial-anterior nodal levels, where the contouring guidelines differed, were the anatomical areas with poorer coverage.
Conclusion:
The quality of our constructed AI-CTVs was inferior to that of the MC-CTVs. Thus, the auto-segmentation methods need further development in this respect before use in the clinical setting. In contrast, the agreement of the quantitative metrics for the OARs was overall good, except for the bowel bag.
This article examines how Artificial Intelligence (AI) is imagined and narrated in relation to terrorism and counterterrorism through two policy reports published jointly by the United Nations Counter-Terrorism Centre and the UN Interregional Crime and Justice Research Institute. Drawing on the concept of Sociotechnical Imaginaries (SIs) and bridging Science and Technology Studies with Critical Security and Terrorism Studies, the article unpacks how AI, terrorism, and counterterrorism are discursively co-constructed. It argues that the reports contribute to the construction of a specific emerging SI: one in which AI is framed as inevitable and transformative, terrorism as increasingly technological, and AI-enabled counterterrorism as both necessary and morally imperative. Through this imaginary, speculative futures and imminent threats are mobilised to legitimise precautionary and potentially exceptional responses. By invoking scientific authority, expert consensus, and the language of technical neutrality, these UN organs perform as a technocratic authority, presenting their guidance as apolitical while reinforcing a particular vision of global security governance. The article thus contributes to the literature by showing how imaginaries of AI are produced, stabilised, and circulated within international security institutions, and by revealing their wider political effects, including the depoliticisation of technological choices and the normalisation of AI-enabled counterterrorism as an inevitable future.
In core government functions, and in public tasks highly sensitive for citizens such as migration governance in Europe, the use of AI systems developed and provided under contract gives migration authorities and private providers the power to keep their deployment secret and to refuse access to their key components. This article explores the complex interplay between public and private drivers of secrecy surrounding procured AI technologies in European migration governance, demonstrating how these forces converge and reinforce one another. It examines AI secrecy as an institutional framework shaped by both public and private law forms of secrecy, applying the spectrum of secrecy from deep secrecy (unknown unknowns) to shallow secrecy (known unknowns). Specifically, the paper looks into the sources of ‘political secrecy’ in migration and security authorities, highlighting how migration agencies and authorities keep the existence and use of AI systems hidden from civil society and affected persons (such as third-country nationals). Next, it analyses the legal frameworks that sustain AI vendors’ private secrecy, such as trade secrecy and secrecy in procurement procedures and agreements. In examining the various secrecy points, it scrutinises the limited impact of the AI Act in addressing the entrenched secrecy of AI deployment and development in governing migration. Finally, we condense the main takeaways and examine the broader repercussions of AI secrecy beyond the context of border control, touching upon its implications for a new equilibrium transcending the conventional public/private divide.
Like other areas of law and legal practice, the arbitration world is beginning to grapple with how to harness the potential of artificial intelligence (AI) while managing its risks. Analogizing to existing AI tools for analyzing case law and judicial behavior, as well as to algorithmic hiring applications, this chapter explores how similar technology could be used to improve the process of selecting investment arbitrators. As criticisms of investment arbitration continue to mount, a new selection tool could help to address systemic concerns about fairness, diversity, and legitimacy. Such a tool could level the playing field for parties in terms of access to information about prospective arbitrators as well as expand and diversify the pool of viable candidates. In addition to providing guidance for the parties making their own selections, the suggested tool could be used by arbitral institutions to help with appointing the tribunal president or even, with the parties’ consent, the entire panel. The chapter provides a framework for thinking through questions of design and implementation and concludes by addressing potential challenges and objections.
This chapter explores the accuracy of private transcription services when transcribing Black English and Standard American English. Courts and lawyers in the US regularly rely on transcripts from such services, but third-party verification of their accuracy, especially with respect to their quality when faced with nonstandard language varieties, is lacking. This study draws on experimental methods to compare the quality of transcription services offered via human transcribers and via AI. The quantitative results show that transcription by humans and AI alike resulted in more mistakes when transcribing Black English than when transcribing Standard American English. Furthermore, a qualitative analysis reveals that these mistakes often changed or obscured meaning in legally relevant ways. If these results are generalizable, many transcripts currently in circulation, and crucial both to justice at the trial level and to appellate review, contain disproportionately more legally important mistakes for Black English speakers. Given that Black English speakers are a highly overrepresented population in the US criminal system, the chapter proposes ways of redressing these transcription shortcomings.
Richard Powers’ latest novel, Playground (2024), foregrounds two realms of scientific issues in the public sphere—the oceans and AI. In that sense, it looks in large part like a follow-up to two of his previous works: The Overstory (2018), an analogous treatment of the environment as represented by forests, and Galatea 2.2 (1995), which describes the creation of a very early version of an AI. Here, the focus appears to be an elaborate scheme to fill the South Pacific with floating cities, to allow wealthy people to escape the tribulations of the real world, with the development of a Facebook-like app as a secondary theme. The exposition proceeds via three narrative streams: the stories of a software engineer, a pioneering deep ocean explorer, and a tiny Polynesian island designated as the home base for the floating cities project. As the plotlines evolve, the reader is led to anticipate how the pieces will connect up—until a totally unexpected plot reversal near the end forces a drastic reappraisal of what the book is primarily about, suggesting that despite the elaborate portrayal of risks to the oceans, AI may, in fact, well prove to be the greater threat.
Experimental legal regimes, notably regulatory sandboxes, seek to promote technological innovation while at the same time ensuring consumer protection against unsafe or unsuitable products and services. But in doing so, they may not always be able to prevent harm to consumers. This chapter explores the relationship between regulatory sandboxes and private law. Given that within such sandboxes the participating firms may benefit from regulatory relief, it considers whether, and if so to what extent, traditional private law nevertheless remains and should remain applicable to their activities during the experiment. It develops three models of the relationship between regulatory sandboxes and private law – separation, substitution, and complementarity – and considers their key characteristics, manifestations, and implications in the context of European private law. The chapter reveals the tension between, on the one hand, fostering technology-enabled innovation, legal certainty, and uniformity and, on the other hand, realising interpersonal justice and individual fairness while leaving room for diversity. It also assesses each model in terms of its potential to reconcile these competing considerations and draws lessons from this assessment for EU and national legislators and courts.
This paper analyzes the 2024 Mellichamp Mind and Machine Initiative at UCSB, a pioneering AI-inclusive literary competition accepting human, AI, and hybrid works. As Head Judge, I explore key questions confronting the panel: Will AI render human writing obsolete? Can machine-generated literature exhibit creativity, or does it remain merely mechanical? Are AI and human writing distinguishable? What does authorship mean in an era of AI collaboration? The paper examines judges’ interpretive frameworks, biases, and expectations, contrasting them with outcomes observed in submissions. It also considers how AI challenges traditional notions of the author and fosters new creative possibilities.
The consensus on the need to regulate artificial intelligence is clear, but the how remains elusive. Private regulation, as proposed by the tech industry itself, and state regulation, as embodied in the recent EU Artificial Intelligence Act, are two common forms of governance. We advance a third option that has received very little attention to date: professional regulation. Professional regulation is modeled after hybrid public-private regulatory structures found in medicine, such as those put forth by the American Medical Association. Such governance schemes develop both technical and ethical standards, shaping professional training, continuing education, and conduct. We contend that professional regulation is the most practical means of ensuring the development of human-centered AI in an era of rapid technological change and intensely opposing views of what regulation ought to do. This article places the responsibility of acting ethically on the group that knows the technology best and can anticipate its effects: AI developers. But unlike other voluntary standards, professional regulation articulates and enforces standards to certify individuals. Professional licensing is an alternative that provides public protections based on privately developed standards that ensure the safety of AI systems prior to their release.
Concerns around misinformation and disinformation have intensified with the rise of AI tools, with many claiming this is a watershed moment for truth, accuracy and democracy. In response, numerous laws have been enacted in different jurisdictions. Addressing Misinformation and Disinformation introduces this new legal landscape and charts a path forward. The Element identifies avoidance or alleviation of harm as a central legal preoccupation, outlines technical developments associated with AI and other technologies, and highlights social approaches that can support long-term civic resilience. Offering an expansive interdisciplinary analysis that moves beyond narrow debates about definitions, Addressing Misinformation and Disinformation shows how law can work alongside other technical and social mechanisms, as part of a coherent policy response.
For far too long, tech titans peddled promises of disruptive innovation – fabricating benefits and minimizing harms. The promise of quick and easy fixes overpowered a growing chorus of critical voices, driving a sea of private and public investments into increasingly dangerous, misguided, and doomed forms of disruption, with the public paying the price. But what’s the alternative? Upgrades – evidence-based, incremental change. Instead of continuing to invest in untested, high-risk innovations, constantly chasing outsized returns, upgraders seek a more proven path to proportional progress. This book dives deep into some of the most disastrous innovations of recent years – the metaverse, cryptocurrency, home surveillance, and AI, to name a few – while highlighting some of the unsung upgraders pushing real progress each day. Timely and corrective, Move Slow and Upgrade pushes us past the baseless promises of innovation, towards realistic hope.
This talk examines how corpus linguistics and artificial intelligence hold the potential to reshape contemporary language learning ecologies. It argues that the rapid normalisation of generative AI has intensified the need for pedagogical models that combine low-friction access to language support with transparent methods grounded in attested usage. Drawing on ecological perspectives and recent empirical research, the talk shows how AI-driven environments expand opportunities for language learning while creating risks related to opacity and over-reliance. Corpus linguistics, data-driven learning (DDL) and corpus literacy offer a complementary foundation by providing traceable evidence, reproducible analyses, and practices that foster learners’ critical judgement. Two convergence scenarios are proposed: AI as an extension of DDL, and corpus literacy as the operational core of critical AI literacy. Together, these scenarios illustrate how open-box pedagogies can reconcile responsiveness and accountability, ensuring that AI-mediated learning remains anchored in transparent processes and empirically grounded language knowledge.
This article critically examines the integration of artificial intelligence (AI) into nuclear decision-making processes and its implications for deterrence strategies in the Third Nuclear Age. While realist deterrence logic assumes that the threat of mutual destruction compels rational actors to act cautiously, AI disrupts this by adding speed, opacity and algorithmic biases to decision-making processes. The article focuses on the case of Russia to explore how different understandings of deterrence among nuclear powers could increase the risk of misperceptions and inadvertent escalation in an AI-influenced strategic environment. I argue that AI does not operate in a conceptual vacuum: the effects of its integration depend on the strategic assumptions guiding its use. As such, divergent interpretations of deterrence may render AI-supported decision-making more unpredictable, particularly in high-stakes nuclear contexts. I also consider how these risks intersect with broader arms race dynamics. Specifically, the pursuit of AI-enabled capabilities by global powers is not only accelerating military modernisation but also intensifying the security dilemma, as each side fears falling behind. In light of these challenges, this article calls for greater attention to conceptual divergence in deterrence thinking, alongside transparency protocols and confidence-building measures aimed at mitigating misunderstandings and promoting stability in an increasingly automated military landscape.