Objective:
To assess the feasibility of using large language models (LLMs) to develop research questions about changes to the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) food packages.
Design:
We conducted a controlled experiment using ChatGPT-4 and its plugin, MixerBox Scholarly, to generate research questions based on a section of the USDA summary of the final public comments on the WIC revision. Each week for three weeks, five questions were generated by the LLMs under each of two conditions: fed with relevant literature or not fed. The experiment produced 90 questions, which were evaluated using the FINER criteria (Feasibility, Innovation, Novelty, Ethics, and Relevance). T-tests and multivariate regression examined differences in scores by feeding status, AI model, evaluator, and criterion (a minimal analysis sketch follows this abstract).
Setting:
The United States.
Participants:
Six WIC expert evaluators from academia, government, industry, and non-profit sectors.
Results:
Five themes were identified: administrative barriers, nutrition outcomes, participant preferences, economics, and other topics. There was no significant difference between the feeding and non-feeding conditions (Coeff. = 0.03, P = 0.52). MixerBox-generated questions received significantly lower scores than ChatGPT-generated questions (Coeff. = –0.11, P = 0.02). Ethics scores were significantly higher than feasibility scores (Coeff. = 0.65, P < 0.001). Significant differences were also found among the evaluators (P < 0.001).
Conclusions:
LLM applications can assist in developing research questions of acceptable quality related to the WIC food package revisions. Future research is needed to compare research questions developed by LLMs with those developed by human researchers.
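As a rough illustration of the analysis described in the Design section above, the sketch below fits a multiple regression of evaluation scores on feeding status, AI model, evaluator, and criterion, alongside a simple t-test by feeding status. The data layout, column names, and synthetic scores are assumptions for illustration only; they do not reproduce the study's dataset or code.

```python
# Hypothetical sketch: variable names, data layout, and synthetic scores are assumptions,
# not the authors' actual analysis code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic long-format ratings: one row per (question, evaluator, criterion) score on a 1-5 scale.
rows = []
for q in range(90):                                     # 90 generated questions
    fed = "fed" if q % 2 == 0 else "not_fed"            # literature provided or not (assumed split)
    model = "ChatGPT-4" if q % 3 else "MixerBox"        # generating model (assumed split)
    for evaluator in [f"E{i}" for i in range(1, 7)]:    # six expert evaluators
        for criterion in ["Feasibility", "Innovation", "Novelty", "Ethics", "Relevance"]:
            rows.append({"question": q, "fed": fed, "model": model,
                         "evaluator": evaluator, "criterion": criterion,
                         "score": rng.integers(1, 6)})  # placeholder 1-5 rating
df = pd.DataFrame(rows)

# Simple t-test comparing mean scores by feeding condition.
fed_scores = df.loc[df.fed == "fed", "score"]
not_fed_scores = df.loc[df.fed == "not_fed", "score"]
print(stats.ttest_ind(fed_scores, not_fed_scores))

# Multiple regression of score on feeding status, model, evaluator, and criterion.
fit = smf.ols("score ~ C(fed) + C(model) + C(evaluator) + C(criterion)", data=df).fit()
print(fit.summary())
```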
Understanding service users’ knowledge of and attitudes towards the rapidly progressing field of mental health technology (MHT) is an important endeavour in clinical psychiatry.
Methods:
To evaluate current use of and attitudes towards MHT (mobile apps, online therapy and counselling, telehealth, web-based programmes, chatbots, social media), a 5-point Likert-scale survey was designed based on previous studies and distributed to attendees of an adult community mental health service in Ireland. Chi-square tests were used, with correction for multiple comparisons (a minimal analysis sketch follows this abstract).
Results:
107 mental health service users completed the survey (58% female, aged 18–80). 86% of respondents owned a smartphone. 27.1% reported using a mental health application, while 33.6% expressed interest in using one in the future. 61.7% reported they had not used and were not interested in using AI for their mental health, and 51.4% indicated they would not feel comfortable using it. 46.8% were not comfortable with psychiatrists utilising AI in their care. The majority (86.9%) preferred face-to-face appointments, while 52.6% would consider using MHT while on a waiting list. Younger participants reported significantly greater comfort using mental health apps and higher self-rated knowledge of AI.
Conclusion:
There were low levels of knowledge about and comfort using MHT, accompanied by concerns about confidentiality and privacy. Younger service users tended to be more comfortable with and knowledgeable about MHT. Despite the growing interest in digital approaches, there remains a clear preference for face-to-face appointments, underscoring the importance of addressing privacy and safety concerns, together with training and education.
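As a rough illustration of the Methods above, the sketch below runs chi-square tests on a few hypothetical cross-tabulations and then applies a multiple-comparison correction. The item names, counts, and the choice of Holm correction are assumptions for illustration only; the study's actual data and correction method are not reported here.

```python
# Hypothetical sketch: contingency counts and item names are invented for illustration;
# they do not reproduce the study's survey data.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests

# Example: each survey item cross-tabulated by age group (rows) and response (columns).
contingency_tables = {
    "comfort_with_apps": np.array([[30, 10],    # younger: yes, no (made-up counts)
                                   [20, 25]]),  # older:   yes, no
    "self_rated_AI_knowledge": np.array([[28, 12],
                                         [15, 30]]),
    "prefers_face_to_face": np.array([[35, 5],
                                      [40, 5]]),
}

raw_p = []
for item, table in contingency_tables.items():
    chi2, p, dof, expected = chi2_contingency(table)
    raw_p.append(p)
    print(f"{item}: chi2={chi2:.2f}, p={p:.4f}")

# Holm correction across the family of tests (an assumed choice of correction method).
reject, corrected_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
print("corrected p-values:", corrected_p.round(4))
```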
Are nuclear weapons useful for coercion, and, if so, what factors increase the credibility and effectiveness of nuclear threats? While prominent scholars like Thomas Schelling argue that nuclear brinkmanship, or the manipulation of nuclear risk, can effectively coerce adversaries, others contend nuclear weapons are not effective tools of coercion, especially when designed to achieve offensive and revisionist objectives. Simultaneously, there is broad debate about the incorporation of automation via artificial intelligence into military systems, especially nuclear command and control. We develop a theoretical argument that nuclear threats implemented with automated nuclear launch systems are more credible compared to those implemented via non-automated means. By reducing human control over nuclear use, leaders can more effectively tie their hands and thus signal resolve, even if doing so increases the risk of nuclear war and thus is extremely dangerous. Preregistered survey experiments on an elite sample of United Kingdom Members of Parliament and two public samples of UK citizens provide support for these expectations, showing that in a crisis scenario involving a Russian invasion of Estonia, automated nuclear threats can increase credibility and willingness to back down. From a policy perspective, this paper highlights the dangers of countries adopting automated nuclear systems for malign purposes, and contributes to the literatures on coercive bargaining, weapons of mass destruction, and emerging technology.
Integrating AI into military decision processes on the resort to force raises new moral challenges. A key question is: How can we assign responsibility in cases where AI systems shape the decision-making process on the resort to force? AI systems do not qualify as moral agents, and due to their opaqueness and the “problem of many hands,” responsibility for decisions made by a machine cannot be attributed to any one individual. To address this socio-technical responsibility gap, I suggest establishing “proxy responsibility” relations. Proxy responsibility means that an actor takes responsibility for the decisions made by another actor or synthetic agent who cannot be attributed with responsibility for their decisions. This article discusses the option to integrate an AI oversight body to establish proxy responsibility relations within decision-making processes regarding the resort to force. I argue that integrating an AI oversight body creates the preconditions necessary for attributing proxy responsibility to individuals.
The integration of AI systems into the military domain is changing the way war-related decisions are made. It binds together three disparate groups of actors – developers, integrators, and users – and creates a relationship between these groups and the machine, embedded in the (pre-)existing organisational and system structures. In this article, we focus on the important, but often neglected, group of integrators within such a socio-technical system. In complex human–machine configurations, integrators carry responsibility for linking the disparate groups of developers and users in the political and military system. To act as the mediating group requires a deep understanding of the other groups’ activities, perspectives and norms. We thus ask which challenges and shortcomings emerge from integrating AI systems into resort-to-force (RtF) decision-making processes, and how to address them. To answer this, we proceed in three steps. First, we conceptualise the relationship between different groups of actors and AI systems as a socio-technical system. Second, we identify challenges within such systems for human–machine teaming in RtF decisions. We focus on challenges that arise (a) from the technology itself, (b) from the integrators’ role in the socio-technical system and (c) from the human–machine interaction. Third, we provide policy recommendations to address these shortcomings when integrating AI systems into RtF decision-making structures.
What shapes military attitudes of trust in artificial intelligence (AI) used for strategic-level decision-making? When used in concert with humans, AI is thought to help militaries maintain lethal overmatch of adversaries on the battlefield as well as optimize leaders’ decision-making in the war room. Yet it is unclear what shapes servicemembers’ trust in AI used for strategic-level decision-making. In October 2023, I administered a conjoint survey experiment among an elite sample of officers attending the US Army and Naval War Colleges to assess what shapes servicemembers’ trust in AI used for strategic-level deliberations. I find that their trust in AI used for strategic-level deliberations is shaped by a tightly calibrated set of technical, operational, and oversight considerations. These results provide the first experimental evidence for military attitudes of trust toward AI during crisis escalation, which have important research, policy, and modernization implications.
In this article, I consider the potential integration of artificial intelligence (AI) into resort-to-force decision-making from a Just War perspective. I evaluate two principles from this tradition: (1) the jus ad bellum principle of “reasonable prospect of success” and (2) the more recent jus ad vim principle of “the probability of escalation.” More than any other principles of Just War, these prudential standards seem amenable to the probabilistic reasoning of AI-driven systems. I argue, however, that this optimism in the potential of AI-optimized decision-making is largely misplaced. We need to cultivate a tragic sensibility in war – a recognition of the inescapable limits of foresight, the permanence of uncertainty and the dangers of unconstrained ambition. False confidence in the efficacy of these systems will blind us to their technical limits. It will also, more seriously, obscure the deleterious impact of AI on the process of resort-to-force decision-making; its potential to suffocate the moral and political wisdom so essential to the responsible exercise of violence on the international stage.
As artificial intelligence (AI) plays an increasing role in operations on battlefields, we should consider how it might also be used in the strategic decisions that happen before a military operation even occurs. One such critical decision that nations must make is whether to use armed force. There is often only a small group of political and military leaders involved in this decision-making process. Top military commanders typically play an important role in these deliberations around whether to use force. These commanders are relied upon for their expertise. They provide information and guidance about the military options available and the potential outcomes of those actions. This article asks two questions: (1) how do military commanders make these judgements? and (2) how might AI be used to assist them in their critical decision-making processes? To address the first, I draw on existing literature from psychology, philosophy, and military organizations themselves. To address the second, I explore how AI might augment the judgment and reasoning of commanders deliberating over the use of force. While there is already a robust body of work exploring the risks of using AI-driven decision-support systems, this article focuses on the opportunities, while keeping those risks firmly in view.
Artificial intelligence (AI) is increasingly being incorporated into military decision making in the form of decision-support systems (DSS). Such systems may offer data-informed suggestions to those responsible for making decisions regarding the resort to force. While DSS are not new in military contexts, we argue that AI-enabled DSS are sources of additional complexity in an already complex resort-to-force decision-making process that – by its very nature – presents the dual potential for both strategic stability and harm. We present three categories of complexity relevant to AI – interactive and nonlinear complexity, software complexity, and dynamic complexity – and examine how such categories introduce or exacerbate risks in resort-to-force decision-making. We then provide policy recommendations that aim to mitigate some of these risks in practice.
In a resort-to-force setting, what standard of care must a state follow when using AI to avoid international responsibility for a wrongful act? This article develops three scenarios based around a state-owned autonomous system that erroneously resorts to force (the Flawed AI System, the Poisoned AI System, and the Competitive AI System). It reveals that although we know what the substantive jus ad bellum and international humanitarian law rules are, international law says very little about the standards of care to which a state must adhere to meet its substantive obligations under those bodies of law. The article argues that the baseline standard of care under the jus ad bellum today requires a state to act in good faith and in an objectively reasonable way, and it describes measures states should consider taking to meet that standard when deploying AI or autonomy in their resort-to-force systems. It concludes by explaining how clarifying this standard of care will benefit states by reducing the chance of unintended conflicts.
In Western democracies the decision to go to war is made in ways that ensure decision-makers can be held accountable. In particular, bureaucracies rely on the production of a range of documents, such as records of meetings, to ensure accountability. Inserting AI into the decision-making process means finding ways to make sure that AI can also be held accountable for decisions to resort to force. But problems of accountability arise in this context because AI does not produce the type of documents associated with bureaucratic accountability: it is this gap in documentary capacity that lies at the core of the troubling search for accountable AI in the context of the decision to go to war. This paper argues that the search for accountable AI is essentially an attempt to solve problems of epistemic uncertainty via documentation. The paper argues that accountability can be achieved in other ways. It adopts the example of new forms of evidence in the International Criminal Tribunal for the former Yugoslavia (ICTY) to show that epistemic uncertainty can be resolved and accountability apportioned without absolute epistemic certainty and without documentation in the sense commonly associated with accountability in a bureaucratic context.
The use of Generative Artificial Intelligence (GenAI) by teenagers is increasing rapidly. GenAI is a form of artificial intelligence that creates new text, images, video and audio, using models based on huge amounts of training data. However, using GenAI can also create misinformation and biased, inappropriate and harmful outputs. Teenagers are increasingly using GenAI in daily life, including in mental healthcare, and may not be aware of the limitations and risks. GenAI may also be used for malicious purposes that may have long-term, negative impacts on mental health. There is a need to increase awareness of how GenAI may have a negative impact on the mental health of teenagers.
In the framework of the common objective of this volume, this chapter focuses on the technological element, expressed in AI, which is usually part of the definition of remote work. This chapter discusses how AI tools shape the organization and performance of remote work, how algorithms impact remote workers' rights, and how trade unions and workers can harness these powerful instruments to improve working and living conditions. Three hypotheses are considered. First, that AI systems and algorithmic management generate a de facto deepening of the subordinate position of the worker. Second, that this process does not represent technological determinism but instead reflects the impact of human and institutional elements. And finally, that technological resources are usually more present in remote work than in traditional work done at the workplace. These hypotheses and concerns are addressed in several ways: by contextualizing the issue over time, through a multi-level optic centered on the interactions of different levels of regulation, by examining practical dimensions, and finally by exploring the implications for unions and worker agency.
Since October 2023, residents of Gaza have been subjected to artificial intelligence (AI) target-generation systems deployed by Israel. This article scrutinises the deployment of these technologies through an understanding of Israel’s settler-colonial project, racial-capitalist economy, and delineation of occupied Palestinian territories as carceral geographies. Drawing on the work of Andy Clarno, which demonstrates how Israel’s decreasing reliance on Palestinian labour rendered Palestinians inessential to exploitation, this article argues that Palestinians are valuable to Israel for another purpose: experimentation. For over fifty years, Palestinians have been rendered test subjects for the development of surveillance and warfare technologies, in what Antony Loewenstein calls “the Palestine Laboratory.” AI introduces a dual paradigm in which both Palestinian lives and deaths are turned into sites of data dispossession. This duality demands keeping Palestinians alive to generate constantly updating data for the lethal algorithmic systems that target them, while their deaths generate further data to refine and market those systems as “battle-tested.” The article describes this state as an algorithmic death-world, adapted from Achille Mbembe’s conception of necropolitics. It concludes that as Israel exports its lethal AI technologies globally, it also exports a model of racialised disposability.
This paper presents a novel artificial intelligence-based autopilot control system designed for the longitudinal motion of the Cessna Citation X (CCX) aircraft during cruise. In this control methodology, the unknown aircraft dynamics in the state-space representations of the vertical speed (VS) and altitude hold (AH) modes were approximated by two multilayer fuzzy recurrent neural networks (MFRNNs) trained online using a novel approach based on particle swarm optimisation and backpropagation algorithms. These MFRNNs were used with two sliding mode controllers to guarantee the robustness of both the VS and AH modes. In addition, a novel fuzzy logic-based transition algorithm was proposed to switch efficiently between these autopilot modes. The performance of the controllers was evaluated with a nonlinear simulation platform developed for the CCX based on data from a Level D research aircraft flight simulator certified by the FAA. System stability and robustness were proved using the Lyapunov theorem. Simulations across 925 flight conditions demonstrated the controllers’ exceptional tracking capability under a variety of uncertainties, including turbulent and non-turbulent flight scenarios. In addition, the design ensured that the smoothness of the control input signals was maintained in order to preserve the mechanical integrity of the elevator actuation system.
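As a rough illustration of the fuzzy logic-based mode transition idea, the sketch below blends a vertical-speed demand with an altitude-hold demand according to a membership function of altitude error, so that AH gradually takes over as the aircraft nears the target altitude. The membership shape, gains, and command laws are illustrative assumptions only; they are not the controller, MFRNN, or sliding mode design described in the paper.

```python
# Hypothetical sketch of a fuzzy blend between vertical-speed (VS) and altitude-hold (AH) demands.
# Thresholds, gains, and command laws are placeholder assumptions for illustration.
import numpy as np

def membership_near_target(alt_error_ft, inner=200.0, outer=1000.0):
    """Degree (0..1) to which the aircraft is 'near' the target altitude:
    1 within +/-inner ft, 0 beyond +/-outer ft, linear in between."""
    e = abs(alt_error_ft)
    if e <= inner:
        return 1.0
    if e >= outer:
        return 0.0
    return (outer - e) / (outer - inner)

def vs_command(alt_error_ft, max_vs_fpm=2000.0, gain=2.0):
    """VS mode: proportional vertical-speed demand (ft/min), saturated (placeholder law)."""
    return float(np.clip(gain * alt_error_ft, -max_vs_fpm, max_vs_fpm))

def ah_command(alt_error_ft, gain=0.5):
    """AH mode: gentle vertical-speed demand (ft/min) used near the target (placeholder law)."""
    return gain * alt_error_ft

def blended_climb_command(target_alt_ft, current_alt_ft):
    """Fuzzy transition: weight the AH demand more heavily as the altitude error shrinks."""
    error = target_alt_ft - current_alt_ft
    w_ah = membership_near_target(error)   # weight of altitude-hold mode
    w_vs = 1.0 - w_ah                      # weight of vertical-speed mode
    return w_vs * vs_command(error) + w_ah * ah_command(error)

# Example: approaching a 35,000 ft cruise altitude from below.
for alt in (30000, 34000, 34900, 35000):
    print(alt, round(blended_climb_command(35000, alt), 1))
```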
This Element provides an overview of FinTech branches and analyzes the associated institutional forces and economic incentives, offering new insights for optimal regulation. First, it establishes a fundamental tension between addressing existing financial inefficiencies and introducing new economic distortions. Second, it demonstrates that today's innovators have shifted from pursuing incremental change through conventional FinTech applications to AI × crypto, now the fastest-growing segment. The convergence of previously siloed areas is creating an open-source infrastructure that reduces entry costs and enables more radical innovation, further amplifying change. Yet this transformation introduces legal uncertainty and risks related to liability, cybercrime, taxation, and adjudication. Through case studies across domains, the Element shows that familiar economic tradeoffs persist, suggesting opportunities for boundary-spanning regulation. It offers regulatory solutions, including RegTech frameworks, compliance-incentivizing mechanisms, collaborative governance models, proactive enforcement against mischaracterizations, and alternative legal analogies for AI × crypto.
Against a backdrop of rapidly expanding health artificial intelligence (AI) development, this paper examines how the European Union’s (EU) stringent digital regulations may incentivise the outsourcing of personal health data collection to low- and middle-income countries (LMICs), fuelling a new form of AI ethics dumping. Drawing on parallels with the historical offshoring of clinical trials, we argue that current EU instruments, such as the General Data Protection Regulation (GDPR), Artificial Intelligence Act (AI Act) and Medical Devices Regulation, impose robust internal safeguards but do not prevent the use of health data collected unethically beyond EU borders. This regulatory gap enables data colonialism, whereby commercial actors exploit weaker legal environments abroad without equitable benefit-sharing. Building on earlier EU responses to ethics dumping in clinical trials, we propose legal and policy pathways to prevent similar harms in the context of AI.
The book concludes by offering a discussion of how the investigation of nuclear status contributes to nuclear policy and the future of technological change in world politics. The 2017 Treaty on the Prohibition of Nuclear Weapons represents a promising, though limited, attempt at moving beyond the NPT’s legal categories by challenging the state-centrism of the global nuclear regime. And from a policy perspective, I argue that when diplomats and policymakers focus entirely on nuclear capability, they miss opportunities to engage with and address a state’s status anxiety. Negotiating with Iran and North Korea requires understanding not only their material pursuits but also the status anxieties that motivate those pursuits. Finally, the conclusion discusses how the theoretical framework of nuclear status presented in this book could be applied to understanding burgeoning technological advances in artificial intelligence.
Artificial intelligence is increasingly being used in medical practice to complete tasks that were previously completed by the physician, such as visit documentation, treatment plans and discharge summaries. As artificial intelligence becomes a routine part of medical care, physicians increasingly trust and rely on its clinical recommendations. However, there is concern that some physicians, especially those younger and less experienced, will become over-reliant on artificial intelligence. Over-reliance on it may reduce the quality of clinical reasoning and decision-making, negatively impact patient communications and raise the potential for deskilling. As artificial intelligence becomes a routine part of medical treatment, it is imperative that physicians recognise the limitations of artificial intelligence tools. These tools may assist with basic administrative tasks but cannot replace the uniquely human interpersonal and reasoning skills of physicians. The purpose of this feature article is to discuss the risks of physician deskilling based on increasing reliance on artificial intelligence.
In an era of accelerating ecological degradation, how might experimental art practices help audiences foster deeper, more empathetic engagement with the intelligence of living systems? This paper explores the potential of contemporary art, when aligned with ecological science, to reframe forest regeneration as a site of aesthetic and ethical inquiry — by regarding the forest as a primary composer within artistic and ecological frameworks. It asks: how might this approach underpin a novel form of “Critical Forest Pedagogy” capable of deepening our understanding of the collective natural intelligence of the living world and encouraging long-term conservation?
To test these ideas, a new art-science project, Forest Art Intelligence, was initiated, framing a regenerating forest as an evolving, living artwork. Because forests evolve through stages mediated by life, death, regeneration and human influence, those stages of growth can also be framed as “process art” — a practice that values each stage of an artwork’s transformation. Collectively, therefore, this approach proposes a form of art-led “Critical Forest Pedagogy” suited to engaging communities traditionally unaligned with conservation, while remaining relevant to ecologically cognate audiences. It further asks whether this framing might promote a rethinking of the restrictive, human-centred definitions of intelligence that underpin generative AI.