Khanssa Lagdami, Artificial Intelligence and the Human Element in the Maritime Sector
The adoption of Artificial Intelligence (AI) in the maritime sector marks a significant technological advancement with broad implications for operational efficiency, crewing, and regulatory frameworks. While these innovations are expected to enhance safety, reduce operating costs, and promote environmental sustainability, they are also likely to introduce challenges related to workforce displacement, cybersecurity, and evolving labor regulations at sea. This chapter examines the impact of AI on the maritime workforce, more specifically seafarers. It explores how AI may affect crew size, the emergence of new roles, and the skills seafarers will need in the future. It also offers an analysis of the significant impact of AI on working conditions and labor rights at sea under international maritime regulations, particularly the Maritime Labour Convention (MLC), 2006, and the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers, 1978 (STCW Convention), as amended. The chapter further explores the intersection of AI and maritime law, focusing on emerging regulatory frameworks, including the EU AI Act and the International Maritime Organization’s Maritime Autonomous Surface Ships (MASS) Code. The findings indicate that while AI presents opportunities for improving the working conditions of seafarers, its use must adhere to acceptable labor standards, legal clarity, and robust cybersecurity measures.
Like other areas of law and legal practice, the arbitration world is beginning to grapple with how to harness the potential of artificial intelligence (AI) while managing its risks. Analogizing to existing AI tools for analyzing case law and judicial behavior, as well as to algorithmic hiring applications, this chapter explores how similar technology could be used to improve the process of selecting investment arbitrators. As criticisms of investment arbitration continue to mount, a new selection tool could help to address systemic concerns about fairness, diversity, and legitimacy. Such a tool could level the playing field for parties in terms of access to information about prospective arbitrators as well as expand and diversify the pool of viable candidates. In addition to providing guidance for the parties making their own selections, the suggested tool could be used by arbitral institutions to help with appointing the tribunal president or even, with the parties’ consent, the entire panel. The chapter provides a framework for thinking through questions of design and implementation and concludes by addressing potential challenges and objections.
This chapter explores the accuracy of private transcription services when transcribing Black English and Standard American English. Courts and lawyers in the US regularly rely on transcripts from such services, but third-party verification of their accuracy, especially with respect to their quality when faced with nonstandard language varieties, is lacking. This study draws on experimental methods to contrast the quality of transcriptions produced by human transcribers with those produced by AI. The quantitative results show that both human and AI transcription resulted in more mistakes when transcribing Black English than when transcribing Standard American English. Furthermore, a qualitative analysis reveals that these mistakes often changed or obscured meaning in legally relevant ways. If these results are generalizable, many transcripts currently in circulation, crucial both to justice at the trial level and to appellate review, contain disproportionately more legally important mistakes for Black English speakers. Given that Black English speakers are a highly overrepresented population in the US criminal system, the chapter proposes ways of redressing these transcription shortcomings.
Richard Powers’ latest novel, Playground (2024), foregrounds two realms of scientific issues in the public sphere—the oceans and AI. In that sense, it looks in large part like a follow-up to two of his previous works: The Overstory (2018), an analogous treatment of the environment as represented by forests, and Galatea 2.2 (1995), which describes the creation of a very early version of an AI. Here, the focus appears to be an elaborate scheme to fill the South Pacific with floating cities, to allow wealthy people to escape the tribulations of the real world, with the development of a Facebook-like app as a secondary theme. The exposition proceeds via three narrative streams: the stories of a software engineer, a pioneering deep ocean explorer, and a tiny Polynesian island designated as the home base for the floating cities project. As the plotlines evolve, the reader is led to anticipate how the pieces will connect up—until a totally unexpected plot reversal near the end forces a drastic reappraisal of what the book is primarily about, suggesting that despite the elaborate portrayal of risks to the oceans, AI may, in fact, well prove to be the greater threat.
Experimental legal regimes, notably regulatory sandboxes, seek to promote technological innovation while at the same time ensuring consumer protection against unsafe or unsuitable products and services. But in doing so, they may not always be able to prevent harm to consumers. This chapter explores the relationship between regulatory sandboxes and private law. Given that within such sandboxes the participating firms may benefit from regulatory relief, it considers whether, and if so to what extent, traditional private law nevertheless remains and should remain applicable to their activities during the experiment. It develops three models of the relationship between regulatory sandboxes and private law – separation, substitution, and complementarity – and considers their key characteristics, manifestations, and implications in the context of European private law. The chapter reveals the tension between, on the one hand, fostering technology-enabled innovation, legal certainty, and uniformity and, on the other hand, realising interpersonal justice and individual fairness while leaving room for diversity. It also assesses each model in terms of its potential to reconcile these competing considerations and draws lessons from this assessment for EU and national legislators and courts.
Drawing on Umberto Eco’s epistemological metaphor of the open work, this paper explores the intersection of two open forms of notation, inherent scores and text scores, with generative AI. Building on the notion of inherent scores, in which the interface merges with the notation, we introduce embodied sketching, a notational approach that streamlines composition and performance with real-time neural audio synthesis (NAS). We then examine text scores in text-to-audio NAS, presenting Mouja+, a work combining real-time NAS with embodied sketches and AI-generated audio from Fluxus scores. Based on the experience of composing and performing Mouja+, we show how AI’s statistical processing of language introduces interpretative gaps between the human understanding of the scores and the model’s output and propose prompting strategies to streamline the use of text scores with text-to-audio generative AI. We continue by discussing how NAS adds to the open work through algorithmic processes that coalesce into an elusive and deferring sense of presence. Through Derrida’s notion of hauntology, we thus extend the open work into what we term the ‘haunted work’, an epistemological metaphor encompassing a growing corpus of works engaging with the tension between presence and absence as a source of openness.
This paper analyzes the 2024 Mellichamp Mind and Machine Initiative at UCSB, a pioneering AI-inclusive literary competition accepting human, AI, and hybrid works. As Head Judge, I explore key questions confronting the panel: Will AI render human writing obsolete? Can machine-generated literature exhibit creativity or remain mechanical? Are AI and human writing distinguishable? What does authorship mean in an era of AI collaboration? The paper examines judges’ interpretive frameworks, biases, and expectations, contrasting them with outcomes observed in submissions. It also considers how AI challenges traditional notions of the author and fosters new creative possibilities.
The consensus on the need to regulate artificial intelligence is clear, but the how remains elusive. Private regulation, as proposed by the tech industry itself, and state regulation, as embodied in the recent EU Artificial Intelligence Act, are two common forms of governance. We advance a third option that has received very little attention to date: professional regulation. Professional regulation is modeled after hybrid public-private regulatory structures found in medicine, such as those put forth by the American Medical Association. Such governance schemes develop both technical and ethical standards, shaping professional training, continuing education, and conduct. We contend that professional regulation is the most practical means of ensuring the development of human-centered AI in an era of rapid technological change and intensely opposing views of what regulation ought to do. This article places the responsibility of acting ethically on the group that knows the technology best and can anticipate its effects: AI developers. But unlike other voluntary standards, professional regulation articulates and enforces standards to certify individuals. Professional licensing is an alternative that provides public protections based on privately developed standards, ensuring the safety of AI systems prior to their release.
Concerns around misinformation and disinformation have intensified with the rise of AI tools, with many claiming this is a watershed moment for truth, accuracy and democracy. In response, numerous laws have been enacted in different jurisdictions. Addressing Misinformation and Disinformation introduces this new legal landscape and charts a path forward. The Element identifies avoidance or alleviation of harm as a central legal preoccupation, outlines technical developments associated with AI and other technologies, and highlights social approaches that can support long-term civic resilience. Offering an expansive interdisciplinary analysis that moves beyond narrow debates about definitions, Addressing Misinformation and Disinformation shows how law can work alongside other technical and social mechanisms, as part of a coherent policy response.
For far too long, tech titans peddled promises of disruptive innovation - fabricating benefits and minimizing harms. The promise of quick and easy fixes overpowered a growing chorus of critical voices, driving a sea of private and public investments into increasingly dangerous, misguided, and doomed forms of disruption, with the public paying the price. But what's the alternative? Upgrades - evidence-based, incremental change. Instead of continuing to invest in untested, high-risk innovations, constantly chasing outsized returns, upgraders seek a more proven path to proportional progress. This book dives deep into some of the most disastrous innovations of recent years - the metaverse, cryptocurrency, home surveillance, and AI, to name a few - while highlighting some of the unsung upgraders pushing real progress each day. Timely and corrective, Move Slow and Upgrade pushes us past the baseless promises of innovation, towards realistic hope.
This talk examines how corpus linguistics and artificial intelligence hold the potential to reshape contemporary language learning ecologies. It argues that the rapid normalisation of generative AI has intensified the need for pedagogical models that combine low-friction access to language support with transparent methods grounded in attested usage. Drawing on ecological perspectives and recent empirical research, the talk shows how AI-driven environments expand opportunities for language learning while creating risks related to opacity and over-reliance. Corpus linguistics, data-driven learning and corpus literacy offer a complementary foundation by providing traceable evidence, reproducible analyses, and practices that foster learners’ critical judgement. Two convergence scenarios are proposed: AI as an extension of DDL, and corpus literacy as the operational core of critical AI literacy. Together, these scenarios illustrate how open-box pedagogies can reconcile responsiveness and accountability, ensuring that AI-mediated learning remains anchored in transparent processes and empirically grounded language knowledge.
This article critically examines the integration of artificial intelligence (AI) into nuclear decision-making processes and its implications for deterrence strategies in the Third Nuclear Age. While realist deterrence logic assumes that the threat of mutual destruction compels rational actors to act cautiously, AI disrupts this by adding speed, opacity and algorithmic biases to decision-making processes. The article focuses on the case of Russia to explore how different understandings of deterrence among nuclear powers could increase the risk of misperceptions and inadvertent escalation in an AI-influenced strategic environment. I argue that AI does not operate in a conceptual vacuum: the effects of its integration depend on the strategic assumptions guiding its use. As such, divergent interpretations of deterrence may render AI-supported decision making more unpredictable, particularly in high-stakes nuclear contexts. I also consider how these risks intersect with broader arms race dynamics. Specifically, the pursuit of AI-enabled capabilities by global powers is not only accelerating military modernisation but also intensifying the security dilemma, as each side fears falling behind. In light of these challenges, this article calls for greater attention to conceptual divergence in deterrence thinking, alongside transparency protocols and confidence-building measures aimed at mitigating misunderstandings and promoting stability in an increasingly automated military landscape.
This article investigates the profound impact of artificial intelligence (AI) and big data on political and military deliberations concerning the decision to wage war. By conceptualising AI as part of a broader, interconnected technology ecosystem – encompassing data, connectivity, energy, compute capacity and workforce – the article introduces the notion of “architectures of AI” to describe the underlying infrastructure shaping contemporary security and sovereignty. It demonstrates how these architectures concentrate power within a select number of technology companies, which increasingly function as national security actors capable of influencing state decisions on the resort to force. The article identifies three critical factors that collectively alter the calculus of war: (i) the concentration of power across the architectures of AI, (ii) the diffusion of national security decision making, and (iii) the role of AI in shaping public opinion. It argues that, as technology companies amass unprecedented control over digital infrastructure and information flows, most nation states – particularly smaller or less technologically advanced ones – experience diminished autonomy in decisions to use force. The article specifically examines how technology companies can coerce, influence or incentivise the resort-to-force decision making of smaller states, thereby challenging traditional notions of state sovereignty and international security.
As globalization spreads, English has become a lingua franca. Emerging technologies (e.g., Artificial Intelligence) now make learning English more accessible, affordable, and tailored to each learner. Social media and digital platforms immerse users in English, offering interactive, personalized, and engaging experiences that fuel Informal Digital Learning of English (IDLE). Research spanning more than ten regions has found that IDLE brings a wide range of benefits, including greater motivation, higher academic achievement, and stronger speaking skills. Today, IDLE is being woven into schools and local communities through partnerships among teachers, NGOs, and industry leaders. This volume seeks to (a) showcase the latest research on IDLE, (b) highlight examples of IDLE in educational and community settings, and (c) chart future pathways for practice, research, and collaboration.
The Council of Europe has recently adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This article provides an initial analysis of the CoE AI Convention. It emphasises the necessity of understanding the CoE AI Convention within the context of its adoption as an international treaty negotiated within the Council of Europe. This context has affected its scope in terms of how the treaty covers the regulation of the use of AI systems by both public authorities and private actors. A detailed review of the available negotiation documents reveals that the concrete level of protection offered by the Convention has been lowered. This includes the risk-based approach, which shapes the obligations undertaken by States under the treaty. This approach is explained and contrasted with the approach under the EU AI Act. The argument that emerges is that the absence of categorisation of risk levels in the treaty is related to its higher level of abstraction, which does not necessarily imply less robust obligations. The content of these obligations is also clarified in light of the requirement imposed by the treaty of consistency with human rights law. An argument is advanced that the principles formulated in the treaty – human dignity and individual autonomy, transparency and oversight, accountability, non-discrimination, data protection, reliability, risk management – can offer interpretative guidance for the development of human rights standards.
The entangled relations of humanity’s natural and digital ecosystems are discussed in terms of the risk-uncertainty conundrum. The discussion focuses on global warming from the perspective of the small world of geoengineering, with a particular focus on geothermal energy, marine geoengineering, and the political economy of mitigation and adaptation (section 1). It inquires into the large world of the biosphere, Anthropocene, and uncertainties created by the overlay of human and geological time (section 2). And it scrutinizes the technosphere, consciousness, and language as humanity’s arguably most important cultural technology (section 3).
Taking the child’s perspective means looking at the world through the eyes of the infant or the child. This can help us to better understand play practices and better plan for children’s learning and development. But how do we do this in practice? In this chapter we explore these ideas and help you design programs that draw on children’s own views, showing the importance of documenting infants’ and young children’s perspectives on their play and identifying a range of practical ways to find out those perspectives.
If the history of human rights shows anything, it shows that claim-making has no predetermined agents, and this volume nods to the rights of the non-human in a chapter by Jim Davies, who analyzes what might be at stake in the recognition of artificial intelligence not just as an instrumental tool, but a rights-bearing claimant in its own right. Indeed, Davies pursues this possibility through an analogy with the rise of entitlements of non-human nature, especially non-human animals.
Technological disruption breeds discontent with the law, particularly with the limited remedies that are available under private law. The source of the problem is a ‘private law’ model that assumes that the function of law is to correct wrongs by compensating individuals who are harmed. The model is thus based on (i) individual claimants and (ii) financial redress. If we copy this private law model into our regulatory regimes for new technologies, our governance remedies will fall short. On the one hand, a single use of AI can affect a large number of people; on the other hand, not all wrongs can be cured by awarding money damages. It is therefore necessary to rethink private remedies in the face of AI wrongs if the law is to remain effective. To achieve this, the mantra of individual compensation must give way to a social perspective, including the use of non-pecuniary measures to provide effective remedies for AI wrongs.
This chapter examines some ways in which human agency might be affected by a transition from legal regulation to regulation by AI. To do that, it elucidates an account of agency, distinguishing it from related notions like autonomy, and argues that this account of agency is both philosophically respectable and fits common sense. With that account of agency in hand, the chapter then examines two different ways – one beneficial, one baleful – in which agency might be impacted by regulation by AI, focussing on some agency-related costs and benefits of transforming private law from its current rule-based regulatory form to an AI-enabled form of technological management. It concludes that there are few grounds to be optimistic about the effects of such a transition and good reason to be cautious.