Like other areas of law and legal practice, the arbitration world is beginning to grapple with how to harness the potential of artificial intelligence (AI) while managing its risks. Analogizing to existing AI tools for analysing case law and judicial behavior, as well as to algorithmic hiring applications, this chapter explores how similar technology could be used to improve the process of selecting investment arbitrators. As criticisms of investment arbitration continue to mount, a new selection tool could help to address systemic concerns about fairness, diversity, and legitimacy. Such a tool could level the playing field for parties in terms of access to information about prospective arbitrators as well as expand and diversify the pool of viable candidates. In addition to providing guidance for the parties making their own selections, the suggested tool could be used by arbitral institutions to help with appointing the tribunal president or even, with the parties’ consent, the entire panel. The chapter provides a framework for thinking through questions of design and implementation and concludes by addressing potential challenges and objections.
There are all sorts of dilemmas when it comes to technology and education. How much should be allowed in schools? Do teachers have to worry about students’ data security and privacy? Is it OK for you to ask a computer to write your essay for you? Are we ruining the eyesight and attention spans of an entire generation thanks to excessive screen time? This chapter looks at the debates that exist when it comes to digital technology and education. It will be argued here that the interplay between technology and education is highly complex, and changing at a pace that is almost unimaginable.
In this introduction to Pragmatism Revisited, Robert Lane summarizes the book’s fifteen chapters. Those chapters apply classical and newer pragmatist ideas to a wide range of issues, including the imagination, conceptual change, ignorance, religious fundamentalism, truth in political discourse, authoritarian populism, academic freedom, criminal punishment and mass incarceration, environmental philosophy, bioethics, artificial intelligence, the Black intellectual tradition, feminism, gender, and social construction; the final chapter examines the future of pragmatism itself.
International investment law faces a paradigm shift with the rise of the digital economy. Emerging technologies such as blockchain, artificial intelligence, and the platform economy redefine investment dynamics while challenging traditional regulatory frameworks. Digitalisation expands cross-border investment opportunities in areas like AI, genomics, and smart infrastructure, while also complicating traditional jurisdictional and territorial considerations. The shift from physical to digital assets necessitates a re-evaluation of the classic definitions of an ‘investor’ and ‘investment’. Meanwhile, states increasingly regulate strategic digital assets under national security concerns, introducing measures ranging from data localization mandates to investment screening mechanisms. These changes raise geopolitical and geoeconomic tensions and highlight disparities in digital governance models between major powers. Investor-state dispute settlement (ISDS) may have to adapt to address disputes over digital assets and data, as well as leverage AI and other digital technologies for efficiency while safeguarding due process. This chapter, along with the broader volume, examines these themes, emphasising balanced frameworks that promote innovation while safeguarding public interests in the evolving digital economy.
Corruption remains a pervasive global challenge, undermining trust, governance, and economic stability. Despite increased regulation, arbitral tribunals have struggled to address corruption effectively, often due to the high evidentiary threshold and associated procedural complexities. Artificial intelligence presents an opportunity to enhance efficiency and accuracy in detecting corruption by analysing evidence supplied by the parties to a dispute or amici curiae for red flags of illicit activities, similarly to other fields like anti-money laundering. The chapter examines the procedural implications of using artificial intelligence in arbitration, including data acquisition, party consent, and the potential impact on due process. It underscores the need for arbitrators to collaborate with parties to design protocols that ensure fairness, transparency, and accountability. By carefully addressing these challenges, artificial intelligence has the potential to become a transformative tool, balancing innovation with procedural integrity.
Edited by
Latika Chaudhary, Naval Postgraduate School, Monterey, California; Tirthankar Roy, London School of Economics and Political Science; Anand V. Swamy, Williams College, Massachusetts
This concluding chapter looks ahead to the challenges facing the economies of Bangladesh, India and Pakistan. The three countries have several challenges in common, including the threat of climate change, the low status of women and the potential of artificial intelligence and robotics to undermine badly needed jobs for their young and growing populations. There are some differences: women’s participation in the labour force is rising in Bangladesh, falling in India and lowest in Pakistan. Bangladesh and India have been on good growth trajectories for the last few decades, but Pakistan’s growth has been slower and less stable. We see many opportunities for research. On the one hand there are neglected areas, such as the princely states of colonial India. On the other hand, the digital revolution has made historical and present-day sources, including data, far more accessible.
Artificial intelligence is increasingly interwoven with design thinking (DT), yet comparative, stage-by-stage syntheses across canonical DT models remain scarce. This literature review maps how AI augments and challenges the major stages of widely used models and relates these effects to five illustrative domains. Following the SPAR-4-SLR protocol, we searched the Web of Science (2005–August 2025), screened records in two stages and assembled a corpus of 205 eligible studies for comparative synthesis. Across models, AI scales early-stage evidence work through large-N text and behavioral analytics, widens ideation via generative systems and accelerates prototyping and testing through simulation and predictive evaluation; at the same time, risks include bias, privacy and sovereignty concerns, evaluation opacity and homogenization of creative output. The weight of evidence supports hybrid intelligence: allocate divergent exploration primarily to AI while retaining human judgment for convergent selection and ethical decision-making. A complementary AI-native “Stingray” model highlights concurrent train–develop–iterate workflows that treat AI as a co-designer, while underscoring governance needs around interpretability and auditability. Overall, the review offers a model-by-model, stage-specific map of AI’s roles in DT, along with practical guidance for responsible deployment and research priorities for assessing boundary conditions and external validity.
Feedback is integral to second language (L2) writing instruction. However, large class sizes and limited teacher time often challenge the delivery of personalized feedback, prompting interest in AI-powered solutions such as ChatGPT (Escalante et al., 2023; Huete-García & Tarp, 2024; Steiss et al., 2024; Yoon et al., 2023; Zhang, 2024). This study evaluates a task-customized GPT model, “Belinda,” trained to assess A1-level Spanish learners’ writing and provide feedback. Two research questions guided the investigation: (1) Can Belinda accurately score beginner Spanish writing using a provided rubric? (2) Can Belinda deliver constructive qualitative feedback? Human and GPT-generated scores were compared for inter- and intrarater reliability, and qualitative analyses categorized the feedback for usability in the classroom. Results revealed moderate alignment between Belinda’s scores and human raters, though the GPT’s reliability fell short of calibration benchmarks. Feedback quality varied, with Belinda often providing vague, incomplete, or inaccurate suggestions. Despite iterative training, the GPT struggled to balance error correction with encouragement, a critical need for novice learners. Additionally, inconsistencies across identical GPT versions raised concerns about reliability. While Belinda showed potential in automating feedback, its limitations in accuracy, contextual understanding, and positivity suggest it is not yet a viable substitute for human evaluation by itself. These findings emphasize the challenges of integrating AI into L2 instruction and call for extensive datasets, robust training, and human–AI collaboration to achieve pedagogically sound outcomes. Future research should explore hybrid feedback models and scalable solutions to enhance AI’s role in language education without compromising learner progress or confidence.
This article presents a proposal for personalised learning itineraries in the field of Ancient History, designed specifically for neurodivergent university students, particularly those with autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), or dyslexia. Drawing on the flipped classroom model, the approach shifts the centre of gravity of learning to the students, who are asked to engage with historical content through autonomous preparation, artificial intelligence (AI)-supported tasks, and collaborative in-class activities. Two differentiated learning paths are outlined: one structured and linear and one open and creative. Both incorporate digital tools, such as Google Forms and Canva, and regulation strategies to support executive functioning and autonomy. The article discusses the inclusive potential of this model and reflects on how the integration of AI reshapes historical learning and critical engagement.
Pragmatism originated in the United States in the 1870s, and since then it has been influential on numerous areas of philosophical thought. This volume of new essays demonstrates pragmatism's continuing vitality and relevance to epistemology, social and political philosophy, applied ethics, metaphysics, and more. Drawing upon the thought of classical pragmatists including Peirce, James, Dewey, Addams, and du Bois, as well as upon that of more recent pragmatists such as Rorty, the essays address a diverse set of topics including artificial intelligence, authoritarianism, feminism, criminal punishment, the value of the environment, the black intellectual tradition, religious fundamentalism, academic freedom, and the moral status of prenatal humans. Concluding with leading contemporary pragmatist Cheryl Misak's reflections on the future of the tradition, the volume demonstrates that pragmatism continues to be a source of valuable ideas and methods for philosophy today.
The rapid evolution of artificial intelligence (AI) has profoundly reconfigured the contemporary workplace, redefining the interactions among human employees, AI systems, and organizational processes. Yet most research adopts a tool-centric view, overlooking how AI’s emergence as an alternative working agent reshapes managerial attention and employee welfare. Drawing on the attention-based view (ABV) and a dual-agent model, we theorize that AI adoption activates two opposing mechanisms: a human attention gain mechanism, in which collaboration needs heighten focus on employees and increase employee-related corporate social responsibility (ECSR), and an AI attention shift mechanism, in which deep AI embedding redirects attention toward AI, suppressing ECSR. Using panel data from 2,575 Chinese listed firms (2013–2023), we find an inverted U-shaped relationship between AI adoption and ECSR. Moreover, industry AI substitution risk sharpens and left-shifts this curve, while top management team (TMT) functional diversity and employee stock ownership flatten and right-shift it. These findings advance research on AI adoption, managerial attention, and employee-focused CSR by illuminating how attention allocation in dual-agent contexts shapes ethical and strategic outcomes, offering actionable insights for balancing human–AI integration with sustained employee welfare.
The incorporation of algorithmic systems into organizations is reconfiguring decision-making processes and raising new ethical challenges related to transparency, impartiality, and accountability. This study maps the field of algorithmic ethics in organizational contexts through a co-citation–based bibliometric analysis of 1,437 Web of Science publications (search conducted on August 20, 2025). The analysis identifies 12 thematic clusters and reveals a robust intellectual structure, with high modularity (Q = 0.726) and a high weighted mean silhouette value (S = 0.894). The findings highlight the centrality of algorithmic management, responsible artificial intelligence, and explainability, as well as bridging works that connect technical, normative, and management-oriented perspectives. The study advances an integrative conceptual model and a future research agenda that point to the emergence of algorithmic ethics as an institutional logic of organizational governance. For managers, the results underscore the need to embed algorithmic ethics within organizational decision-making and control systems.
Governing AI is about getting AI right. Building upon AI scholarship in science and technology studies, technology law, business ethics, and computer science, it documents potential risks and actual harms associated with AI, lists proposed solutions to AI-related problems around the world, and assesses their impact. The book presents a vast range of theoretical debates and empirical evidence to document how and how well technical solutions, business self-regulation, and legal regulation work. It is a call to think inside and outside the box. Technical solutions, business self-regulation, and especially legal regulation can mitigate and even eliminate some of the potential risks and actual harms arising from the development and use of AI. However, the long-term health of the relationship between technology and society depends on whether ordinary people are empowered to participate in making informed decisions to govern the future of technology – AI included.
Chapter 1 introduces basic terminology. Terms such as artificial intelligence, data, algorithm, machine learning, neural networks, deep learning, large language models, generative AI and symbolic AI are presented to develop a sense of what AI is, how it has evolved, and what it does. This chapter also introduces some of the major conceptual disagreements in the field. Different ideas about how to develop AI in the best way drive disagreements, as well as philosophical differences over what intelligence means and whether machines can develop human-like intelligence.
Chapter 6 considers the Commission’s intent to reduce critical dependencies on third-country-based data servers and computing capabilities. It explores the Brussels effect as a motivator for further action on exporting regulatory norms concerning the protection of personal data and non-personal data, coupled with concerns over lack of competitiveness in data-derived commercial activity, particularly in the field of AI. Given concerns that weak competitiveness could undermine Europe’s security, and the risks posed by unauthorised access to industrial or sensitive data by governments in third countries, the Commission desires increased regulatory control. This is facilitated by industrial policy aimed at both promoting European data server infrastructure and encouraging switching to the Common European Data Spaces as a means of building up a European data economy, while placing strict limitations on the export of non-personal data outside of Europe’s borders. The Commission combines this with an attempt to utilise the Brussels effect to ensure that European standards concerning the safe use of AI technologies become global standards shaped by the EU and its values.
This paper investigates a specific culture of interdisciplinarity that has gained traction at the intersection of applied AI and ethics. To address social and ethical harms of AI applications, scholars have suggested importing norms, methodologies and governance frameworks from established disciplines such as the social sciences or medicine. I show how this importation presupposes and endorses a framing of applied AI as a domain separate from established disciplines. Yet, such separation is what initially allows AI practitioners to operate outside those disciplinary norms that have evolved to prevent harms now associated with AI applications. Conversely, if AI applications were understood as situated firmly within these disciplines, practitioners would already be accountable to their norms and standards. Paradoxically, this culture of interdisciplinarity might thus reinforce a problematic disciplinary isolation of applied AI underlying the very ethical issues it seeks to mitigate – fighting symptoms while playing into their cause. In response, I outline three paths forward.
Linnaean taxonomy, which imposes hierarchical classifications based on morphological characteristics, has become deeply embedded in modern data architecture, from databases to metadata schemas to AI training datasets. With its hierarchical structure and rigid categorization, Linnaean taxonomy privileges one type of knowledge while marginalizing alternative taxonomies that offer more fluid, contextual, and relational understandings of the natural world. This paper examines how the legacy of Linnaean taxonomy continues to shape contemporary classification systems and artificial intelligence (AI). Indigenous knowledge systems, which include spiritual, cultural, and ecological dimensions, view entities not as isolated objects but as nodes in dynamic, interconnected networks. We draw from the French naturalist the Comte de Buffon, who, in line with Indigenous knowledge systems, viewed nature as continuous and contextual rather than discretely compartmentalized. The dominance of Linnaean-style classification in AI and data systems perpetuates colonial power dynamics and contributes both to knowledge homogenization and to the loss of Indigenous languages and classification systems crucial for addressing contemporary environmental challenges, particularly in agriculture and biodiversity conservation. In this Age of AI, we call for a holistic and ecological approach to archives. Therefore, we propose applying ‘rhizomatic hylomorphism,’ an ethnobiological, alternative classification that transcends hierarchical taxonomies to embrace multiplicity, relationality, and contextual meaning.
In this editorial, three technological laggards reflect on the potential promises and pitfalls of artificial intelligence in psychiatry, as explored in recent BJPsych publications. We argue that the rapid changes driven by artificial intelligence are revealing underlying vulnerabilities within our profession. More importantly, we believe these shifts pose a fundamental challenge to our specialty: can we, as psychiatrists, evolve beyond mere intelligence to practice with genuine wisdom?