The use of AI by government agencies in guiding important decisions (for example, on policing, welfare, and education) has triggered backlash and demands for greater public input in AI regulation. Yet it remains unclear what such input would reflect: general attitudes towards new technologies, personal experience with AI, or learning about its implications. We study this question experimentally by tracking the attitudes of over 1,500 workers whose task assignments were randomly determined by either a human or an AI ‘boss’, with task content and valence also randomized. Across a three-wave panel, we find that personal experience with AI-as-boss affected workers’ job performance but not their attitudes towards using AI in public decision-making. In contrast, exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants’ prior disposition or direct experience. The results highlight the promise of incorporating public input into AI governance.
The consensus on the need to regulate artificial intelligence is clear, but the how remains elusive. Private regulation, as proposed by the tech industry itself, and state regulation, as embodied in the recent EU Artificial Intelligence Act, are two common forms of governance. We advance a third option that has received very little attention to date: professional regulation. Professional regulation is modeled after hybrid public-private regulatory structures found in medicine, such as those put forth by the American Medical Association. Such governance schemes develop both technical and ethical standards, shaping professional training, continuing education, and conduct. We contend that professional regulation is the most practical means of ensuring the development of human-centered AI in an era of rapid technological change and intensely opposing views of what regulation ought to do. This article places the responsibility for acting ethically on the group that knows the technology best and can anticipate its effects: AI developers. But unlike other voluntary standards, professional regulation articulates and enforces standards to certify individuals. Professional licensing is an alternative that provides public protections based on privately developed standards, ensuring the safety of AI systems prior to their release.
Military decision-making institutions face new challenges and opportunities from increasing artificial intelligence (AI) integration. Military AI adoption is incentivized by competitive pressures and expanding national security needs; we can therefore expect increased complexity as AI proliferates. Governing this complexity is urgent but lacks clear precedents. This discussion critically re-examines key concerns introduced by AI integration into resort-to-force decision-making organizations. Besides these concerns, the article also draws attention to new, positive affordances that AI proliferation may bring. I then propose a minimal AI governance standard framework, adapting private-sector insights to the defence context. I argue that adopting AI governance standards (e.g., based on this framework) can foster an organizational culture of accountability, combining technical know-how with the cultivated judgment needed to navigate contested governance concepts. Finally, I hypothesize some strategic implications of the adoption of AI governance programmes by military institutions.
The rapid development of data analytics, computational power, and machine/deep learning algorithms has driven artificial intelligence (AI) applications into every sphere of society, with significant economic, legal, ethical, and political ramifications. A growing body of literature has explored critical dimensions of AI governance, yet few works touch upon issue areas that directly resonate with the diverse contexts and dynamics of the non-Western world, particularly Asia. This chapter therefore aims to fill the gap by offering a contextual discussion of how Asian jurisdictions perceive and respond to the challenges posed by AI, as well as how they interact with each other through regulatory cross-referencing, learning, and competition. Premised upon an analysis of the diverse regulatory approaches shaped by the respective political, legal, and socioeconomic contexts of these jurisdictions, this chapter identifies how Inter-Asian Law has emerged in AI governance in the form of regulatory cross-referencing, joint efforts, and cooperation through regional forums, and points to potential venues for normative interaction, dialogue, best-practice exchanges, and the co-development of AI governance.
Modern elections can be conceived as a socio-technical system, as the electoral process in many ways relies on technological solutions: voter information, identification and registration, and the collection, verification and counting of votes are, in some countries, conducted using innovative technologies. But how do those devices and processes actually become part of official legislation, and how can they finally be deployed during this sensitive and important democratic procedure? Over time, the State of California has developed a robust regulatory ecosystem for integrating innovative technology into the electoral process, and it is also able to change and modernize its rules and regulations. Although the technologies currently in use are more static, hardware-based, and usually do not include algorithmic systems, the overall structure of the process may also serve as a blueprint for regulating more dynamic algorithm-based or even AI-based technologies.
The establishment of artificial intelligence regulatory sandboxes (AIRSs) poses both policy and technical challenges, especially in how to reconcile support for innovation with regulatory oversight. AIRSs are based on dynamic regulatory feedback mechanisms that allow for a deeper examination of legal norms with a view to their future evolution. These structures facilitate engagement between regulators and innovators, enabling business learning and regulatory adaptation. However, their proliferation across the European Union under the Artificial Intelligence Act (AI Act) may raise issues of coordination between competent authorities, cross-border regulatory alignment and consistency with overlapping (sectoral) rules. In view of these potential complexities, this paper makes two distinct recommendations. First, AIRSs would benefit from cross-border cooperation – efforts should therefore be made to pursue the establishment of joint AIRSs among different Member States in order to reduce regulatory fragmentation, lower the risk of forum shopping, and optimise administrative resources. Second, integrating AI and cybersecurity compliance within the same sandbox environment would be beneficial in terms of providing clearer and more structured compliance pathways. A well-designed regulatory sandbox regime would make regulation more effective, encourage responsible AI development and secure Europe’s leadership in digital regulation.
Farman Saeed Sedeeq and Percem Arman’s article aims to develop a framework of AI governance that avoids shortcomings in existing models, such as limited enforceability and rigid data-sharing rules. The goal of this weighty undertaking is to develop a “structured yet flexible approach” to balancing AI advancements in public health with ethical imperatives. Three core “pillars” are used for evaluation: ethical accountability, regulatory adaptability, and transparency. This commentary briefly explores the concept of ethical accountability.
States are reshaping the global digital economy to assert control over the artificial intelligence (AI) value chain. Operating outside multilateral institutions, they pursue measures such as export controls on advanced semiconductors, infrastructure partnerships, and bans on foreign digital platforms. This digital disintegration reflects an elite-centered response to the infrastructural power that private firms wield over critical AI inputs. A handful of companies operate beyond the reach of domestic regulation and multilateral oversight, controlling access to technologies that create vulnerabilities existing institutions struggle to contain. As a result, states have asserted strategic digital sovereignty: the exercise of authority over core digital infrastructure, often through selective alliances with firms and other governments. The outcome is an emergent form of AI governance in techno-blocs: coalitions that coordinate control over key inputs while excluding others. These arrangements challenge the liberal international order by replacing multilateral cooperation with strategic—and often illiberal—alignment within competing blocs.
This study investigates unintended information flow in large language models (LLMs) by proposing a computational linguistic framework for detecting and analyzing domain anchorage. Domain anchorage is a phenomenon potentially caused by in-context learning or latent “cache” retention of prior inputs, which enables language models to infer and reinforce shared latent concepts across interactions, leading to uniformity in responses that can persist across distinct users or prompts. Using GPT-4 as a case study, our framework systematically quantifies the lexical, syntactic, semantic, and positional similarities between inputs and outputs to detect these domain anchorage effects. We introduce a structured methodology to evaluate the associated risks and highlight the need for robust mitigation strategies. By leveraging domain-aware analysis, this work provides a scalable framework for monitoring information persistence in LLMs, which can inform enterprise guardrails to ensure response consistency, privacy, and safety in real-world deployments.
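By way of illustration only, the snippet below is a minimal sketch, not the authors' framework, and every function and variable name in it is hypothetical. It scores pairwise response similarity using two simple stand-ins for the paper's richer lexical, syntactic, semantic, and positional metrics: lexical overlap (Jaccard on token sets) and semantic similarity (TF-IDF cosine via scikit-learn). Unusually high similarity between responses to deliberately unrelated prompts would be one coarse signal of anchorage-like uniformity.

```python
# Hypothetical sketch: flag possible "domain anchorage" by measuring how
# similar a model's responses to *distinct* prompts are. Jaccard overlap
# and TF-IDF cosine are crude stand-ins for the paper's full metric suite.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def jaccard(a: str, b: str) -> float:
    """Lexical overlap between the token sets of two responses."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def anchorage_scores(responses: list[str]) -> list[tuple[int, int, float, float]]:
    """Pairwise similarity of responses; high scores across unrelated
    prompts would be one signal of anchorage-like uniformity."""
    tfidf = TfidfVectorizer().fit_transform(responses)
    sem = cosine_similarity(tfidf)
    return [
        (i, j, jaccard(responses[i], responses[j]), float(sem[i, j]))
        for i, j in combinations(range(len(responses)), 2)
    ]


if __name__ == "__main__":
    # Toy responses to deliberately unrelated prompts.
    outs = [
        "Cloud cost optimization starts with rightsizing compute instances.",
        "To reduce cloud spend, begin by rightsizing your compute instances.",
        "Bake the loaf at 220C for 35 minutes until the crust is golden.",
    ]
    for i, j, lex, sem in anchorage_scores(outs):
        flag = "ANCHOR?" if sem > 0.5 else ""
        print(f"pair ({i},{j}): jaccard={lex:.2f} tfidf_cos={sem:.2f} {flag}")
```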
Synthetic datasets, artificially generated to mimic real-world data while maintaining anonymization, have emerged as a promising technology in the financial sector, attracting support from regulators and market participants as a solution to data privacy and scarcity challenges limiting machine learning (ML) deployment. This article argues that synthetic data’s effects on financial markets depend critically on how these technologies are embedded within existing ML infrastructural ‘stacks’ rather than on their intrinsic properties. We identify three key tensions that will determine whether adoption proves beneficial or harmful: (1) data circulability versus opacity, particularly the ‘double opacity’ problem arising from stacked ML systems, (2) model-induced scattering versus model-induced herding in market participant behavior, and (3) flattening versus deepening of data platform power. These tensions directly correspond to core regulatory priorities around model risk management, systemic risk, and competition policy. Using financial audit as a case study, we demonstrate how these tensions interact in practice and propose governance frameworks, including a synthetic data labeling regime to preserve contextual information when datasets cross organizational boundaries.
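The article describes its proposed labeling regime at the level of principle only. Purely as a hypothetical sketch of what such a label might record so that contextual information survives when a dataset crosses organizational boundaries, consider the following Python dataclass; all field names and example values are illustrative assumptions, not the authors' specification.

```python
# Illustrative sketch only: a minimal machine-readable provenance label
# that could travel with a synthetic dataset across organizations.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class SyntheticDataLabel:
    """Provenance metadata attached to a synthetic dataset (hypothetical schema)."""
    dataset_id: str
    generator_model: str           # generative technique used to produce the data
    source_description: str        # nature of the real data being imitated
    privacy_mechanism: str         # anonymization guarantee, if any
    intended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


# Example label (all values invented for illustration).
label = SyntheticDataLabel(
    dataset_id="txn-synth-2024-q1",
    generator_model="tabular GAN (hypothetical choice)",
    source_description="retail card transactions, EU, 2023",
    privacy_mechanism="DP-SGD, epsilon=3.0",
    intended_uses=["fraud-model prototyping"],
    known_limitations=["rare fraud patterns under-represented"],
)
print(label.to_json())
```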
We examine the arguments made by Onitiu and colleagues concerning the need to adopt a “backward-walking logic” to manage the risks arising from the use of Large Language Models (LLMs) adapted for a medical purpose. We examine what lessons can be learned from existing multi-use technologies and applied to specialized LLMs, notwithstanding their novelty, and explore the appropriate respective roles of device providers and regulators within the ecosystem of technological oversight.
This chapter provides an outline analysis of the evolving governance framework for Artificial Intelligence (AI) in the island city-state of Singapore. In broad terms, Singapore’s signature approach to AI governance reflects its governance culture more broadly, which harnesses the productive energy of free-market capitalism contained within clear guardrails, as well as the dual nature (as both regulator and development authority) of Singapore’s lead public agency in AI policy formulation. Singapore’s approach is interesting for other jurisdictions in the region and around the world, and it can already be observed to have influenced the recent Association of South East Asian Nations (ASEAN) Guide on AI Governance and Ethics, which was promulgated in early 2024.
Drawing on the extensive history of study of the terms and conditions (T&Cs) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&Cs were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study looked at providers of multiple modes of output (text, image, etc.), of small and large sizes, and from varying countries of origin. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, much like search and social media platforms, but without the governance increasingly imposed on those actors, and in contradiction to their function as content generators rather than mere hosts for third-party content.
Amid the rapidly evolving landscape of artificial intelligence (AI) regulation, a significant concern has emerged regarding the predominant focus on preemptive measures aimed at preventing or mitigating potential AI-related harms. While these preemptive measures are undeniably important, they must be complemented by effective redress mechanisms to address and remedy materialized harms. This paper highlights the crucial role of redress in empowering individuals to challenge and rectify the adverse effects of AI systems, emphasizing that access to redress is, in fact, access to justice. We critically evaluate whether current AI governance frameworks sufficiently address the need for remedies for AI-related harms, arguing that they fall short in protecting individuals’ rights. To address this gap, we outline four key steps in the redress process: (1) initiating the redress process, (2) determining appropriate avenues for redress, (3) collecting evidence to support claims, and (4) receiving and responding to decisions. Each step is explored in detail, presenting distinct challenges and requirements, illustrated with real-world examples. Our findings underscore the urgent need to integrate robust redress pathways into AI governance frameworks to safeguard individual rights as AI technologies become increasingly embedded in society.
Humanity’s increasing reliance on AI and robotics is driven by compelling narratives of efficiency in which the human is a poor substitute for the extraordinary computational power of machine learning, the creative competences of generative AI, and the speed, accuracy and consistency of automation in so many spheres of human activity. Indeed, AI is increasingly becoming the core technological foundation of many contemporary societies. Most thinking on how to manage the downside risks to humanity of this seismic societal shift is framed in terms of direct, fault-based relationships, as in the innovative EU AI Act, which is by far the most comprehensive political attempt to locate (or deter) those directly responsible for AI-generated harm. I argue that while such approaches are vital for combating injustice exacerbated by AI and robotics, too little thought goes into political approaches to the structural dynamics of AI’s impact on society. By way of example, I examine the UK ‘pro-innovation’ approach to AI governance and explore how it fails to address the structural injustices inherent in increasing AI usage.
This article focuses on the G7’s Hiroshima AI Process (HAIP) and its flagship document, the Hiroshima Code of Conduct, as pivotal elements in shaping global artificial intelligence (AI) governance. By conducting a comprehensive analysis of AI regulations in G7 member states, the article demonstrates a high degree of interoperability between these national frameworks and the Code of Conduct’s principles. The article proposes concrete steps to translate these principles into actionable policies at the G7 level and develops strategic adjustments to incorporate them into national standards. The article then proposes enhancements to the Code of Conduct, including the development of a common AI governance vocabulary, robust risk management frameworks, life cycle standards harmonization, effective stakeholder engagement mechanisms, specific redress mechanisms for AI harms and guidelines for government AI use to ensure democratic principles and human rights are upheld. Ultimately, this research aims to strengthen the G7’s role in leading a global AI landscape characterized by the rule of law, democracy, and human rights.
This paper provides an outline analysis of the evolving governance framework for artificial intelligence (AI) in Singapore. Across the Singapore government, AI solutions are being adopted in line with Singapore’s “Smart Nation Initiative” to leverage technology to make impactful changes across society and the economy. In tandem, Singaporean authorities have been assiduous in releasing a growing number of governance documents, which we analyse together to chart the city-state’s approach to AI governance in international comparison. The characteristics of Singapore’s AI governance approach include an emphasis on consensus-building between stakeholders (particularly government and industry, but also citizens) and voluntary or “quasi” regulation, lately with an emphasis on promulgating standards (AI Standards, n.d.) and audit-like frameworks. Singaporean regulators have also been early movers (globally, and especially in the region) in the promulgation of normative instruments on AI governance, including the world’s first AI Governance Testing Framework and Toolkit, AI Verify. The Singapore approach may be compelling for other jurisdictions in the region and around the world with an interest in a collaborative, balanced and consensual approach to governing AI outside of strict regulatory mechanisms. However, any jurisdiction adopting aspects of its evolving model would have to duly account for relevant differences in social and institutional conditions.
This commentary explores MENA’s AI governance, addressing gaps, showcasing successful strategies, and comparing national approaches. It emphasizes current deficiencies, highlights regional contributions to global AI governance, and offers insights into effective frameworks. The study reveals distinctions and trends in MENA’s national AI strategies, serving as a concise resource for policymakers and industry stakeholders.
This analysis provides a critical account of AI governance in the modern “smart city” through a feminist lens. Evaluating the case of Sidewalk Labs’ Quayside project—a smart city development that was to be implemented in Toronto, Canada—it is argued that public–private partnerships can create harmful impacts when corporate actors seek to establish new “rules of the game” regarding data regulation. While the Quayside project was eventually abandoned in 2020, it yields key observations about the state of urban algorithmic governance both within Canada and internationally. The analysis articulates the need for a revitalised and participatory smart city governance programme that prioritizes meaningful engagement in the form of transparency and accountability measures. Taking a feminist lens, it argues for a two-pronged approach to governance: integrating collective engagement from the outset of the design process and ensuring civilian data protection through a robust yet localized rights-based privacy regulation strategy. Engaging with feminist theories of intersectionality in relation to technology and data collection, this framework articulates the need to understand the broader histories of social marginalization when implementing governance strategies regarding artificial intelligence in cities.
Ethical guidelines and policy documents intended to guide AI innovations have been heralded as the solution to guard against harmful effects and to increase public value. However, these guidelines and policy documents face persistent challenges. They are often criticized for their abstraction and disconnection from real-world contexts, but stakeholders may also influence them for political or strategic reasons. While this last issue is frequently acknowledged, a means or method for exploring it is seldom provided. To address this gap, the paper employs a combination of social constructivist and science & technology studies perspectives, along with desk research, to investigate whether prior research has examined the influence of stakeholder interests, strategies, or agendas on guidelines and policy documents. The study contributes to the discourse on AI governance by proposing a theoretical framework and methodologies to better analyze this underexplored area, aiming to enhance comprehension of the policymaking process within the rapidly evolving AI landscape. The findings underscore the need for a critical evaluation of the methodologies identified and further exploration of their utility. In addition, the results aim to stimulate ongoing critical debates on this subject.