This chapter introduces a bold proposition: reframing mathematics anxiety as a potential catalyst for building student resilience. It delves into the emerging role of Artificial Intelligence (AI) in addressing mathematics anxiety, offering a perspective on technology’s place in mathematics education. Drawing on the author’s unique contributions to mathematics education, the chapter presents evidence-based recommendations for preventing mathematics anxiety and fostering more inclusive, emotionally intelligent learning environments. It is an indispensable resource for educators, researchers, and all those dedicated to reshaping the future of mathematics education.
Xiaolan Fu, University of Oxford; George Yip, Imperial College Business School; Xuechen Ding, Beijing Technology and Business University; Wei Wei, University of Sussex
This chapter describes Tencent’s development history, the transformation of its corporate strategies, its core business distribution, the adjustments it has made to its organisational structure, and its innovative developments in both products and technologies, situating all of this in the context of the growth of China’s internet industry over the past twenty-five years. The chapter summarises the innovation milestones of Tencent’s products and services, such as the multiple iterations of the instant messaging services QQ and WeChat, and shows how the company developed its businesses through major strategic investments and venture capital activity in many fields. It also describes Tencent’s R&D endeavours and provides a detailed picture of its technological innovation over the past two decades. Finally, the chapter compares Tencent’s patent applications with those of other domestic and foreign companies, showing Tencent’s leading position in this category both globally and at home.
This chapter analyses how trade law conceptualises data and AI. It shows that trade law applies long-established concepts to these novel phenomena while experimenting with new categories in preferential agreements. For data, these categories include data as a good, as a service, as a digital product, intellectual property, electronic transmissions, and as a regulatory object. For AI, the chapter distinguishes between the trade regulation of AI components, AI products, and AI governance. It concludes by suggesting that trade law can be understood as a form of AI/data law, which may help in recognising and addressing the challenges that the digital economy poses for trade law.
Generative artificial intelligence (AI) is becoming an integral part of children's lives, in forms ranging from voice assistants and social robots to AI-generated storybooks. As children increasingly interact with these technologies, it is essential to consider their implications for developmental outcomes. This Element examines these implications across three interconnected domains: interaction, perception, and learning. A recurring theme across these domains is that children's engagement with AI both mirrors and diverges from their engagement with humans, positioning AI as a distinct yet potentially complementary source of experience, enrichment, and knowledge. Ultimately, the Element advances a framework for understanding the complex interplay among technology, children, and the social contexts that shape their development. This title is also available as Open Access on Cambridge Core.
Edited by Daniel Naurin, University of Oslo; Urška Šadl, European University Institute, Florence; Jan Zglinski, London School of Economics and Political Science
This chapter explores the application of large language models (LLMs) in empirical legal studies, with a focus on their potential to advance research on EU law at scale. The chapter provides a non-technical introduction to LLMs and the role they can play in legal information retrieval, including the classification of case characteristics and outcomes, which constitutes one of the most common research tasks in legal scholarship. The chapter stresses the importance of validation – researchers cannot treat the output of LLMs as automatically correct and instead must demonstrate the relevance and reliability of measures and results obtained through the use of LLMs in the context of their research topic. While LLMs are capable of significantly reducing the cost of doing legal research, their use will place growing demands on scholars to ensure the integrity of their findings. The chapter also reflects on the distinction between closed- and open-source models and how ethical and replicability imperatives might influence model choices in an increasingly crowded field.
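The validation imperative stressed here can be made concrete with a simple agreement check: before trusting LLM-assigned case labels, a researcher can compare them against a hand-coded sample using a chance-corrected statistic such as Cohen's kappa. The sketch below is illustrative only; the label names and data are hypothetical and not drawn from the chapter.

```python
from collections import Counter

def cohens_kappa(human, model):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(human) == len(model)
    n = len(human)
    # Observed proportion of items on which the two raters agree
    observed = sum(h == m for h, m in zip(human, model)) / n
    # Expected agreement if the raters labelled independently
    ph, pm = Counter(human), Counter(model)
    labels = set(human) | set(model)
    expected = sum((ph[l] / n) * (pm[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical case-outcome labels: human coding vs. LLM output
human = ["annul", "dismiss", "annul", "dismiss", "annul", "annul"]
model = ["annul", "dismiss", "annul", "annul", "annul", "annul"]
print(round(cohens_kappa(human, model), 2))  # prints 0.57
```

A kappa well below conventional reliability thresholds would signal that the LLM's output cannot simply be treated as correct for that task, which is precisely the chapter's point about demonstrating relevance and reliability in context.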
Large Language Models (LLMs) like OpenAI’s ChatGPT or Google’s Gemini are the new sensation in artificial intelligence (AI) research. These systems exhibit impressive conversational abilities and have even managed to convince some people of their possible sentience. But do LLMs actually speak our language, or do they merely appear as if they do? Do they really reason and think, or are they simply good at superficially imitating these abilities? In this chapter, I argue that Wilfrid Sellars’s functionalist-pragmatist approach to language and concept learning might be especially useful in the context of answering the questions above. In particular, conceiving the process of learning and mastering language as analogous to the process of learning to play a game within a set of normative social practices can shed light on the kind of abilities LLMs possess and what we can expect them to do in the future, including becoming genuine members of our linguistic community rather than mere “stochastic parrots.”
The increasing presence of artificial intelligence (AI), electronic patient-reported outcomes (ePROMs), and digital infrastructures in palliative care is transforming how clinical encounters are organized and how suffering is interpreted. These technological shifts heighten the risk of relational compression and a reduction of dignity to measurable outputs. This paper proposes the DiRePal model (Relational–Temporal Dignity in Palliative Care) as a philosophical framework to re-examine dignity beyond coherent narrative identity or autonomy-centered ethics, emphasizing relational presence, temporal sensitivity, and structural conditions of care.
Methods
A philosophical–ethical analysis was conducted, informed by narrative identity (P. Ricoeur), ethics of alterity (E. Levinas), capabilities theory (M. Nussbaum), and care ethics (J. Tronto). Critical readings of dignity frameworks, AI ethics, and digital health literature were synthesized to develop a relational–temporal account of dignity and two operational concepts: the temporal dignity indicator and the architecture of prudence.
Results
While digital tools can enhance communication and support anticipatory care, they also risk reducing patients to data profiles, narrowing listening practices, and eroding opportunities for narrative, silence, and relational presence. The DiRePal model reframes dignity as a fluctuating, co-constructed achievement that depends on temporal attentiveness, ethical listening, institutional conditions, and prudent integration of AI and ePROMs. It further expands dignity to include post-biographical dimensions such as memory, grievability, and digital legacy.
Significance of results
End-of-life care in the algorithmic age requires an ethics that recognizes dignity as relational, temporal, and structurally mediated. The DiRePal model offers clinicians and institutions a conceptual grammar to resist technological reductionism, protect time for presence, and safeguard the narrative and post-biographical continuity of persons whose voices may be fragmented, vulnerable, or digitally extended.
The World Health Organization has declared 2021–2030 the “Decade of Healthy Ageing”, aiming to secure the best possible quality of life through health as the population ages. Beyond healthy ageing, scientists are adopting artificial intelligence technologies for longevity science, which could foreseeably enable humans to routinely live to 120 years and beyond. With such breakthroughs within reach, the challenges associated with longevity need to be considered, from the impact on the social system to the possibility of an international law right to longevity, along with associated considerations such as sustainability. This article asks whether there already is, or should be, an international human right to facilitate considerably extended lifespans, and examines other relevant legal frameworks.
Since the public release of ChatGPT in November 2022, the artificial intelligence (AI) landscape has been undergoing a rapid transformation. To date, consumers’ use of AI chatbots has largely been limited to image generation and question-answering language models. The next generation of AI systems, AI agents that can plan and execute complex tasks with only limited human involvement, will be capable of a much broader range of actions. In particular, consumers could soon be able to delegate purchasing decisions to AI agents acting as “Custobots.” Against this background, the Article explores whether EU consumer law, as it currently stands, is ready for the rise of the Custobot Economy. In doing so, the Article makes three contributions. First, it outlines how the advent of AI agents could change the existing e-commerce landscape. Second, it explains how AI agents challenge the premises of a human-centric consumer law built on the assumption that consumption decisions are made by humans. Third, the Article presents some initial considerations on what a future consumer law that works for both humans and machines could look like.
An enduring access-to-justice crisis leaves most low- and middle-income people without meaningful assistance for civil legal problems. In response, several U.S. jurisdictions have experimented with licensing legal paraprofessionals—such as Limited License Legal Technicians (LLLTs)—to provide a circumscribed set of services directly to the public. Using Washington State’s pioneering LLLT program and its successors as a case study, this Article argues that paraprofessional reforms have under-delivered because they replicate key features of the traditional professional model: substantial educational prerequisites, supervised practice requirements, and high-stakes examinations that raise entry costs, limit supply, and constrain scalability.
The Article contends that modern AI changes the production function of routine legal work—particularly client intake, document preparation, and the translation of facts into legally relevant narratives—yet AI deployed directly to consumers poses serious risks, including error, bias, confidentiality threats, and jurisdictional mismatch, and it cannot reliably identify when a matter requires escalation to a lawyer. The Article therefore proposes an “AI–paraprofessional fusion” model: purpose-built, jurisdiction-specific AI tools paired with lightly trained human paraprofessionals who provide process guidance, verify and quality-control outputs, and triage cases for escalation when warranted.
Finally, because unauthorized-practice rules are state-created constraints that helped produce today’s scarcity, the Article argues that the AI infrastructure enabling this model should be developed and maintained as a public good—auditable, updateable, and broadly accessible—rather than left solely to private market incentives. This approach offers a scalable path for United States jurisdictions—and potentially others—to expand competent, lower-cost legal assistance while preserving safety through human oversight and clear escalation channels.
We stand at a curious moment in the history of law and technology. Nations around the world are scrambling to regulate or deregulate artificial intelligence, each convinced they are in a “race”—for dominance, for values, for the future itself. Brussels votes on comprehensive AI Acts. Beijing issues the world’s first copyright ruling on AI-generated content. Washington debates whether chatbots should have First Amendment rights. The underlying premise of this volume is that this framing as a zero-sum competition fundamentally misunderstands both the nature of AI and the task before us. The truth is more sobering and more hopeful: We are not racing against each other but experimenting together, trying to govern technologies that respect neither borders nor traditional legal categories. The real question is not who will “win” the AI race, but how we can learn from each other’s experiments fast enough to keep pace with systems that evolve by the microsecond. This Special Issue of the German Law Journal brings together fifteen contributions that demonstrate why comparative law has never been more essential—or more challenging. The authors span continents and legal traditions, from Beijing to Brussels, from Silicon Valley to Sydney.
This Article discusses China’s content moderation in the age of artificial intelligence. It first introduces two long-overlooked features of China’s content moderation: the medium-based model and the “No-Dispute” Policy. The former emphasizes that content moderation in China varies based on different media, while the latter argues that China’s content moderation is often content-neutral rather than being driven by ideology or having an official stance. The Article then summarizes the three main challenges artificial intelligence presents to content moderation: a shift in structure from the traditional “state v. citizen” dichotomy to the “platform–government–citizen” triangle; a transition in means from human review to algorithm-based and machine-based moderation; and stimulating a reimagination of traditional theories and doctrines of freedom of speech in terms of standards and classification. Finally, the Article takes online violence, one of the most prominent issues in contemporary Chinese content moderation, as a case study to examine specific issues in China’s content moderation in the era of artificial intelligence.
Systematic reviews (SRs) are critical for evidence-based research but are time-consuming and labor-intensive. The rapid expansion of academic publications further challenges the performance and applicability of existing screening and classification methods. While large language models (LLMs) present new opportunities for automation, limited research has examined whether they can achieve classification performance comparable to human reviewers in large-scale, multi-class settings. To improve classification performance, we propose an LLM-based framework that leverages full-text key-insight extraction to enhance literature classification. We constructed a manually curated dataset of 900 articles from 17 published SRs to quantitatively evaluate the classification capabilities of LLMs. Empirical results showed that key-insight-based classification (KBC) significantly outperforms abstract-based classification (ABC). To improve robustness, we implemented a confidence-weighted voting (CWV) mechanism using multiple LLMs; the CWV method achieved the highest macro F1-score of 0.796, substantially exceeding KBC (0.732), ABC (0.676), and unsupervised K-means clustering (0.446). By employing zero-shot LLMs, our approach demonstrated adaptability across diverse domains and classification tasks without requiring fine-tuning, showing that a carefully designed pipeline can enable LLMs to achieve classification performance comparable to human reviewers. These results provide empirical evidence of LLMs’ potential to support large-scale SRs and introduce a practical pathway for improving the efficiency and reliability of evidence synthesis.
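The confidence-weighted voting idea described in this abstract can be sketched in a few lines: each model casts a vote for a label, votes are weighted by the model's stated confidence, and the label with the highest total weight wins. The labels and confidence values below are hypothetical stand-ins, not the study's data.

```python
from collections import defaultdict

def confidence_weighted_vote(predictions):
    """predictions: list of (label, confidence) pairs, one per LLM.
    Returns the label with the largest confidence-weighted vote total."""
    scores = defaultdict(float)
    for label, confidence in predictions:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Hypothetical outputs from three LLMs for two candidate articles
per_article = [
    [("include", 0.9), ("exclude", 0.6), ("include", 0.7)],
    [("exclude", 0.8), ("exclude", 0.9), ("include", 0.5)],
]
print([confidence_weighted_vote(p) for p in per_article])
# prints ['include', 'exclude']
```

In the first article, "include" wins despite one dissent because its combined confidence (1.6) exceeds the dissenting vote (0.6); this is how weighting can let a confident minority or majority dominate rather than a bare head count.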
Traditional perceptual models are ill-equipped for the high-dimensional data, such as text embeddings, central to modern psychology and AI. We introduce the double machine learning lens model, a framework that utilizes machine learning to handle such data. We applied this model to analyze how a modern AI and human perceivers judge social class from 9,513 aspirational essays written by 11-year-olds in 1969. A systematic comparison of 45 analytical approaches revealed that regularized linear models using dimensionality-reduced language embeddings significantly outperformed traditional dictionary-based methods and more complex non-linear models. Our top model accurately predicted human $(R^{2}_{CV} =0.61)$ and AI $(R^{2}_{CV} =0.56)$ social class perceptions, capturing over 85% of the total accuracy. These results suggest that “unmodeled knowledge” in perception may be an artifact of insufficient measurement tools rather than an unmeasurable intuitive process. We find that both AI and humans use many of the same textual cues (e.g., grammar, occupations, and cultural activities), only a subset of which are valid. Both appear to amplify subtle, real-world patterns into powerful, yet potentially discriminatory heuristics, where a small difference in actual social class creates a large difference in perception.
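The winning analytical recipe reported here, a regularized linear model on dimensionality-reduced embeddings scored by cross-validated R², can be illustrated end to end. The sketch below uses synthetic stand-in data with a planted low-dimensional structure (not the essay corpus), a plain SVD for the reduction, closed-form ridge regression, and a single holdout fold standing in for full cross-validation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "essays" as 50-dim embeddings generated from a
# 5-dim latent structure, and a rating that is linear in the latent factors.
latent = rng.normal(size=(200, 5))
loadings = rng.normal(size=(5, 50))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 50))
y = latent @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

def pca_directions(X_train, k):
    """Center the training data; return top-k principal directions and mean."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return Vt[:k], mu

def fit_ridge(Z, y, alpha=1.0):
    """Closed-form ridge regression: w = (Z'Z + alpha*I)^-1 Z'y."""
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ y)

train, test = slice(0, 150), slice(150, 200)
Vt, mu = pca_directions(X[train], k=10)          # fit reduction on train only
w = fit_ridge((X[train] - mu) @ Vt.T, y[train])  # ridge in reduced space
pred = (X[test] - mu) @ Vt.T @ w                 # apply both maps to holdout
r2 = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(f"holdout R^2 = {r2:.2f}")
```

Note that the reduction is fit on the training fold only and then applied to the holdout, which is what makes the resulting R² an honest out-of-sample estimate in the spirit of the paper's cross-validated figures.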
Recent and prospective developments in artificial intelligence are, arguably, a central fascination of contemporary technoculture. The dawning of a new era characterised by the various impacts of these technological and scientific advances raises questions about the type of subject that will inherit and inhabit the consequences of these developments. This paper examines the role that speculative fiction plays as a site of critical engagement with some of the more urgent questions posed by the intersection between humans and technology, such as the social consequences of projected technologies and the possibilities of changing embodiment, and particularly how these issues prove to be of immense importance for the gendered subject. The essays contained within Jeanette Winterson’s non-fictional publication 12 Bytes: How We Got Here. Where We Might Go Next (2021) provide a perceptive insight into both the promises and the pitfalls of AI technology for the future female and embodied experience. Winterson’s thought-provoking contemplations will be read alongside her fictional novels, The Stone Gods (2007) and Frankissstein (2019), to consider how she utilises the genre of speculative fiction to explore existing representations of gender whilst working to define new transhuman subjects. A recurring theme throughout these novels is the way in which AI, despite its liberating and transcendent potential, is imagined as the inevitable perpetuation of female subjugation.
The rapid expansion of artificial intelligence has accelerated its adoption across organizational functions. However, existing reviews often adopt sectoral or technology-focused perspectives, limiting understanding of its implementation within core firm activities. This study addresses this gap through a systematic review of articles indexed in Web of Science and Scopus up to December 2025, following established methodological guidelines. A total of 160 peer-reviewed articles met the inclusion criteria. Findings reveal convergent patterns of adoption in human resources, marketing and customer services, logistics, and finance. Artificial intelligence enhances analytics, automates routine tasks, personalizes interactions, and supports decision-making. Human resources applications focus on recruitment and workforce planning; marketing relies on predictive analytics and conversational interfaces; logistics improves forecasting and supply chain resilience; finance strengthens risk assessment and process efficiency. The study proposes an integrative conceptual model and research propositions, highlighting cross-functional challenges in governance, organizational capabilities, socio-technical alignment, and responsible implementation.
Innovation in paediatric and adult congenital cardiology increasingly depends on collaboration among academia, industry, and professional communities. From this perspective, the author argues that clinical prediction represents a natural convergence point for these stakeholders, aligning safe, personalised care with economic incentives. The author discusses emerging evidence highlighting the promise of artificial intelligence-driven prediction across various cardiovascular domains, while noting current limitations related to narrow scope, static design, and weak integration into clinical decision-making. Medicine-based evidence and a high-quality, inclusive data infrastructure may help address these gaps. Together, these approaches, along with stakeholders upholding their responsibilities, define a path towards predictive innovation.
This study investigates employees’ perceptions of artificial intelligence (AI) in the workplace, using data from 1,224 working adults across two samples. Drawing from an extended version of the Technology Acceptance Model, we examine how employees’ trust in AI and their perceptions of AI’s usefulness and ease-of-use at work shape their affective attitudes toward using AI, which in turn influence their intentions to adopt AI in their job. Perceived usefulness and trust in AI predicted employees’ intentions to adopt it at work via affective attitudes toward using AI. The findings for perceived ease-of-use were inconsistent, suggesting potential workplace-specific implications of this pathway. None of the relationships differed by gender, education, or leadership status. The findings bridge the technology adoption and organizational science literature to offer theoretical insights, practical implications, and future research directions for facilitating employees’ intentions to adopt AI at work.