Edited by
Daniel Naurin, University of Oslo; Urška Šadl, European University Institute, Florence; Jan Zglinski, London School of Economics and Political Science
This chapter explores the application of large language models (LLMs) in empirical legal studies, with a focus on their potential to advance research on EU law at scale. The chapter provides a non-technical introduction to LLMs and the role they can play in legal information retrieval, including the classification of case characteristics and outcomes, which constitutes one of the most common research tasks in legal scholarship. The chapter stresses the importance of validation – researchers cannot treat the output of LLMs as automatically correct and instead must demonstrate the relevance and reliability of measures and results obtained through the use of LLMs in the context of their research topic. While LLMs are capable of significantly reducing the cost of doing legal research, their use will place growing demands on scholars to ensure the integrity of their findings. The chapter also reflects on the distinction between closed- and open-source models and how ethical and replicability imperatives might influence model choices in an increasingly crowded field.
Arguably, recent and prospective developments within artificial intelligence are a fascination within contemporary technoculture. The dawning of a new era that is characterised by the various impacts of these technological and scientific advances leads to questions about the type of subject that will inherit and inhabit the consequences of these developments. This paper will examine the role that speculative fiction plays as a site of critical engagement in investigating some of the more urgent questions posed by the intersection between humans and technology, such as the social consequences of projected technologies and the possibilities of changing embodiment, and particularly how these issues prove to be of immense importance for the gendered subject. The essays contained within Jeanette Winterson’s non-fictional publication 12 Bytes: How We Got Here. Where We Might Go Next (2021) provide a perceptive insight into both the promises and the pitfalls of AI technology for the future female and embodied experience. Winterson’s thought-provoking contemplations will be read alongside her fictional novels, The Stone Gods (2007) and Frankissstein (2019), to consider how she utilises the genre of speculative fiction to explore existing representations of gender whilst working to define new transhuman subjects. A recurring theme throughout these novels is the way in which AI, despite its liberating and transcendent potential, is imagined as the inevitable perpetuation of female subjugation.
The rapid expansion of artificial intelligence has accelerated its adoption across organizational functions. However, existing reviews often adopt sectoral or technology-focused perspectives, limiting understanding of its implementation within core firm activities. This study addresses this gap through a systematic review of articles indexed in Web of Science and Scopus up to December 2025, following established methodological guidelines. A total of 160 peer-reviewed articles met the inclusion criteria. Findings reveal convergent patterns of adoption in human resources, marketing and customer services, logistics, and finance. Artificial intelligence enhances analytics, automates routine tasks, personalizes interactions, and supports decision-making. Human resources applications focus on recruitment and workforce planning; marketing relies on predictive analytics and conversational interfaces; logistics improves forecasting and supply chain resilience; finance strengthens risk assessment and process efficiency. The study proposes an integrative conceptual model and research propositions, highlighting cross-functional challenges in governance, organizational capabilities, socio-technical alignment, and responsible implementation.
Innovation in paediatric and adult congenital cardiology increasingly depends on collaboration among academia, industry, and professional communities. From this perspective, the author argues that clinical prediction represents a natural convergence point for these stakeholders, aligning safe, personalised care with economic incentives. The author discusses emerging evidence highlighting the promise of artificial intelligence-driven prediction across various cardiovascular domains, while noting current limitations related to narrow scope, static design, and weak integration into clinical decision-making. Medicine-based evidence and a high-quality, inclusive data infrastructure may help address these gaps. Together, these approaches, along with stakeholders upholding their responsibilities, define a path towards predictive innovation.
This study investigates employees’ perceptions of artificial intelligence (AI) in the workplace, using data from 1,224 working adults across two samples. Drawing from an extended version of the Technology Acceptance Model, we examine how employees’ trust in AI and their perceptions of AI’s usefulness and ease-of-use at work shape their affective attitudes toward using AI, which in turn influence their intentions to adopt AI in their job. Perceived usefulness and trust in AI predicted employees’ intentions to adopt it at work via affective attitudes toward using AI. The findings for perceived ease-of-use were inconsistent, suggesting potential workplace-specific implications of this pathway. None of the relationships differed by gender, education, or leadership status. The findings bridge the technology adoption and organizational science literature to offer theoretical insights, practical implications, and future research directions for facilitating employees’ intentions to adopt AI at work.
The global demand for artificial intelligence (AI) is fuelling a rapid expansion of data infrastructure, an industry that is notoriously water-intensive. This growth creates a critical, yet understudied, nexus between digital expansion and hydrological systems, particularly in ecologically vulnerable regions. This study applies a spatially explicit framework to quantify the water footprint of AI data centres in Brazil, a nation heavily reliant on drought-sensitive hydropower. Our method integrates datasets on data centre locations, regional hydrological cycles, power generation sources and watershed-level water stress indices to model both direct (cooling) and indirect (energy generation) water consumption. Our key finding is that the AI infrastructure cluster in the São Paulo metropolitan region, with an operational IT load of ~550 MW, has an estimated annual water footprint of 16.1 million cubic metres. A significant portion of this, over 46%, is indirect “virtual water” consumed through hydropower generation, establishing a direct feedback loop where data centre demand stresses water and energy systems already compromised by climate change. This article concludes that the environmental cost of AI extends beyond carbon to include water, a cost disproportionately borne by biodiverse regions. We call for a paradigm shift in tech policy and corporate sustainability to include metrics of water neutrality and watershed resilience, in alignment with global sustainability goals.
This article examines the transformative impact of large language models (LLMs) on online content moderation, revealing a critical gap between platforms’ rule-based policies and their AI-driven enforcement mechanisms. Using Facebook’s hate speech moderation policies and practices as a case study, we identify a paradox: while content policies are increasingly rule-oriented, AI-driven enforcement seems to operate in a standard-like manner. This disconnect creates transparency, consistency and accountability challenges relating to the delineation of online freedom of expression that are not addressed in the literature, and require attention and mitigation. In this specific context, we introduce the concept of ‘rules by the millions’ to describe how AI systems actually operate through generating vast networks of micro-rules that evade traditional regulatory oversight. This phenomenon disrupts the conventional rules-versus-standards framework used in legal theory, raising urgent questions about the adequacy of current AI governance mechanisms. Indeed, the rapid adoption of LLMs in content moderation has outpaced the human capacity to monitor them, creating a pressing need for adaptive frameworks capable of managing the evolving capacities of AI.
This study investigates the use of large language models (LLMs) to classify question utterances within verbal design protocols according to Eris’ (2004) taxonomy. We evaluate the performance of two proprietary LLMs – OpenAI’s GPT-4.1 and Anthropic’s Claude Sonnet 4.5 – across experiments designed to assess classification accuracy, sensitivity to prompt configuration and in-context learning (ICL), and generalization across datasets and models. Using two human-coded datasets of differing size and quality, we measure alignment between LLM-generated labels and human judgments at both question category and subcategory levels. Results show that both LLMs achieved moderate to strong alignment rates at the category level (up to 85.7% for GPT-4.1 and 82.9% for Claude Sonnet 4.5), with substantially lower alignment at the more granular subcategory level. Performance differences across prompt configurations and ICL conditions were small, indicating robust generalization across datasets and transferability of prompt designs. While these results suggest that LLMs can effectively support scalable question classification, human judgment and oversight remain essential. Future research should explore the development and evaluation of alternative hybrid human–LLM workflows in protocol analysis, as well as the use of smaller or open-source models to address data privacy concerns.
Chapter 5 analyzes contemporary societal transformations through the lens of emerging technologies, political trends, and cultural shifts. It emphasizes how social media and artificial intelligence (AI), especially large language models, are reshaping communication, public perception, and decision-making processes. Social media amplify discontent, promote self-organization, and facilitate both progressive movements and misinformation. A concerning trend is the apparent societal shift from rational, collective discourse toward more intuitive, individualistic, and emotionally driven communication. This is evidenced by linguistic analyses of books, search trends, and journalistic styles. The chapter also explores the effects of neoliberal economic policies, which have fueled inequality and stress, potentially impacting cognitive function and social cohesion. Concurrently, a rise in populism and democratic backsliding is observed, driven by perceived grievances, xenophobia, and manipulation of public opinion. Together, these interconnected developments suggest humanity is at a critical juncture.
When foundation models analyze political content, do they use demographic characteristics as shortcuts for ideological attribution? We conducted detailed experiments with GPT-4o-mini and validated key findings across GPT-4o and LLaVA, using identical, ideologically neutral campaign advertisements with systematically varied candidate demographics. All models consistently attributed more liberal ideologies to women than men. These effects exceeded real-world gender differences from a nationally representative survey. However, racial associations differed by model: strong in GPT-4o-mini (where Black candidates received substantially more liberal attributions), attenuated in GPT-4o, and insignificant in LLaVA. These demographic effects persisted across temperature settings, prompt variations, and even explicit debiasing instructions in GPT-4o-mini. Our findings reveal that visual demographic features can shape AI outputs in ways that vary across models, with implications for applications such as content classification.
Background:
Manual submission of clinical trial data to the ClinicalTrials.gov registry is labor-intensive and error-prone, contributing to variability in the completeness and consistency of registry entries. To explore whether recent advances in large language models could support this process, we developed ChatCT, a pilot retrieval-augmented system that drafts ClinicalTrials.gov registry elements.
Methods:
We evaluated ChatCT-generated registry elements across three dimensions: 1. semantic similarity to the public ClinicalTrials.gov record, 2. formatting compliance with ClinicalTrials.gov requirements, and 3. coverage of key trial biomedical concepts.
Results:
ChatCT-generated registry elements were highly semantically similar to human-authored ClinicalTrials.gov records (median BERTScore F1 ≈ 0.82). Formatting compliance was high for structured elements, including Study Design (91% of required fields present; mean completeness 0.897) and Arms/Interventions (75%; 0.772), while narrative sections showed greater variability, including Outcome Measures (79%; 0.929) and Study Description (57%; 0.784). Ontology-based concept extraction and matching demonstrated consistently high precision, with scores ranging from 90% to 100%.
Conclusions:
A retrieval-augmented large language model can generate ClinicalTrials.gov registry drafts that preserve essential protocol details and adhere to most formatting requirements. However, light post-processing (e.g., automated schema validation) remains necessary for full submission readiness. This proof-of-concept evaluation suggests that ChatCT-assisted drafting could support registry reporting by improving consistency between protocol documents and publicly reported trial information.
The development of artificial intelligence and machine learning is leading to a revolution in the way we think about economic decisions. The Economics of Language explores how the use of generative AI and large language models (LLMs) can transform the way we think about economic behaviour. It introduces the LENS framework (Linguistic content triggers Emotions and suggests Norms, which shape Strategy choice) and presents empirical evidence that LLMs can predict human behaviour in economic games more accurately than traditional outcome-based models. It draws on years of research to provide a step-by-step development of the theory, combining accessible examples with formal modelling. Offering a roadmap for future research at the intersection of economics, psychology, and AI, this book equips readers with tools to quantify the role of language in decision-making and redefines how we think about utility, rationality, and human choice.
Business management education is increasingly making use of artificial intelligence as an emerging technology that will lead to major societal changes in learning and knowledge endeavours. This editorial article focuses on the link between business management and artificial intelligence as an enabler of social policy changes. This involves considering the history of artificial intelligence and how business management education has evolved in recent years. In doing so, the article encourages greater focus on creative uses of social policy in discussions of educational initiatives, offering insight into the novel and entrepreneurial ways business management education can embed artificial intelligence and improve overall learning outcomes.
Specialised AI hardware becomes economically obsolete much faster than conventional capital, so maintaining a given stock of compute requires high replacement investment. This paper studies the implications for growth, adjustment dynamics, and policy in a two-asset growth model in which AI capacity both raises productivity and produces digital services at low marginal cost. Calibrated to advanced economies, the model delivers two distinct adjustment speeds. AI capacity reverts relatively quickly, with a half-life of about seven quarters, while conventional capital adjusts over roughly a decade. When hardware is short-lived, even modest changes in gross spending can produce large swings in measured AI investment, despite only limited movements in the underlying stock. This helps explain the volatility often seen in specialised AI hardware investment cycles. Hardware durability also has first-order welfare effects. In the baseline calibration, a two-percentage-point fall in quarterly depreciation raises welfare by 0.36% in consumption-equivalent terms, while an equal-sized compute tax reduces the steady-state AI stock by around one-fifth.
The vast majority of researchers, actuaries, and demographers use standard time series analysis techniques to project time-varying parameters of popular mortality forecasting methods such as the Lee–Carter and Li–Lee models. However, spatial dependence can be as significant as temporal autocorrelation in these time series, and the underlying panel structure of the data is often neglected. We draw on techniques from panel and spatial econometrics, including ordinary and spatial dynamic panel linear models, spatiotemporal autoregressive integrated moving average processes, and spatial eigenvector filters, to capture such dependence and improve projections. We present a methodology to estimate the parameters of these techniques from spatial multipopulation mortality series, select their optimal hyperparameters, and use them for forecasting. We propose a tailor-made robust selection framework to identify the best model–technique combinations for each country, as well as a bootstrap-based procedure to quantify projection uncertainty with accurate nominal coverage on a separate validation period and a strategy for assessing the quality of the resulting prediction intervals. We test these methods on mortality data from 22 European countries. The results show that the proposed techniques yield a clear advantage in both point and interval forecasts for several populations, and these findings are corroborated by a robust selection design and additional robustness checks. These improvements have the potential to deliver meaningful gains for life insurance, pensions, and other contexts involving longevity risk.
This study presents a systematic review of peer-reviewed academic literature to explore the current landscape of artificial intelligence (AI) applications in sustainable aviation operations. Using a qualitative content analysis approach, four main thematic domains were identified, encompassing emission and fuel efficiency, maintenance reliability, infrastructure sustainability and education- or policy-related applications. In addition to thematic synthesis, the study mapped the annual publication frequency, the AI methods employed and the aviation domains targeted. The results reveal an increasing interest in hybrid and deep learning models, such as long short-term memory (LSTM), convolutional neural networks (CNN) and attention-based architectures, particularly in flight optimisation and delay prediction tasks. AI-based flight optimisation techniques, such as trajectory prediction and adaptive fuel management, contribute to reducing CO2 emissions through more efficient flight planning and operations. Moreover, predictive maintenance supported by AI-driven digital twin systems has gained prominence due to its potential to reduce downtime and increase safety. The discussion further addresses regulatory challenges, the importance of explainable AI and integration barriers within complex aviation ecosystems. Findings are derived from a focused corpus of 27 peer-reviewed studies, which, although limited in number, offer representative insights into current sectoral trends. This review makes a significant contribution to both academia and industry by offering a comprehensive framework that categorises AI applications and highlights future research directions. Key implications include the need for regulatory harmonisation, real-time decision-support tools, and interdisciplinary approaches that integrate AI with behavioural sciences and sustainability goals.
Poor public understanding of artificial intelligence (AI) systems has become a matter of acute concern. Even where the public lacks expert technical knowledge, there are good democratic, economic and other societal reasons for ensuring that the public right to know operates effectively in the AI era. Yet, the trade-secret claims of AI providers and deployers are widely seen as a potential barrier to information disclosure rights and duties, which has provoked calls for areas of significant public interest to be carved out from the protections of trade-secrets law. Such transparency carve-outs are, however, likely to lead to uncertainty, over-inclusion and ineffectiveness. In this article, we argue that the dynamic, public-driven character of the right to know can be better secured through third-party participation and public-interest stewardship innovations in AI transparency.
Like other areas of law and legal practice, the arbitration world is beginning to grapple with how to harness the potential of artificial intelligence (AI) while managing its risks. Analogizing to existing AI tools for analysing case law and judicial behavior, as well as to algorithmic hiring applications, this chapter explores how similar technology could be used to improve the process of selecting investment arbitrators. As criticisms of investment arbitration continue to mount, a new selection tool could help to address systemic concerns about fairness, diversity, and legitimacy. Such a tool could level the playing field for parties in terms of access to information about prospective arbitrators as well as expand and diversify the pool of viable candidates. In addition to providing guidance for the parties making their own selections, the suggested tool could be used by arbitral institutions to help with appointing the tribunal president or even, with the parties’ consent, the entire panel. The chapter provides a framework for thinking through questions of design and implementation and concludes by addressing potential challenges and objections.
There are all sorts of dilemmas when it comes to technology and education. How much should be allowed in schools? Do teachers have to worry about students’ data security and privacy? Is it OK to ask a computer to write your essay for you? Are we ruining the eyesight and attention spans of an entire generation thanks to excessive screen time? This chapter looks at the debates that exist when it comes to digital technology and education. It will be argued here that the interplay between technology and education is highly complex, and that it is changing at a pace that is almost unimaginable.
In this introduction to Pragmatism Revisited, Robert Lane summarizes the book’s fifteen chapters. Those chapters apply classical and newer pragmatist ideas to a wide range of issues, including the imagination, conceptual change, ignorance, religious fundamentalism, truth in political discourse, authoritarian populism, academic freedom, criminal punishment and mass incarceration, environmental philosophy, bioethics, artificial intelligence, the Black intellectual tradition, feminism, gender, and social construction; the final chapter examines the future of pragmatism itself.