This chapter takes the distinctive materiality of the modern stage, the homely table, as a way to place two very different productions into conversation: Forced Entertainment’s Table Top Shakespeare and Annie Dorsen’s Prometheus Firebringer. Although these two productions might trace the arc from the residual (telling a story at a table using small household items) to the emergent (a dialogue between an AI-generated reconstruction of a lost Aeschylus play and a narrative composed of citations), they also dramatize an increasing absorption of the human into the apparatus of performance, a possibly fearsome absorption that the chapter traces through Dorsen’s work while touching on a range of other contemporary performances, including Mona Pirnot’s I Love You So Much I Could Die.
This article is a proof of concept showing that archaeologists can now disseminate archaeological topics to the public easily and cheaply through video games, whether in teaching situations or in museum and heritage communication. We argue that small but realistic, interactive, and immersive closed- or open-world 3D video games about cultural heritage, featuring unscripted (but guardrailed) oral conversation, can now be created by beginners with free software such as Unreal Engine, Reality Capture, and Convai. Developing tailor-made “archaeogames” is thus becoming extremely accessible, empowering heritage specialists and researchers to control audiovisual dissemination in museums and education. This unlocks new uses for 3D photogrammetry, currently used mostly for documentation, and could make learning about the past more engaging for a wider audience. Our case study is a small game with two levels, the first built around 3D-scanned Neolithic long dolmens in a forest clearing and populated by two conversational AI characters, an archaeologist and a prehistoric person. We later added a more open level with autonomous animals, a meadow, and a cave in which a shaman guides the player around specific cave paintings. We tested the first level on players from different backgrounds, whose feedback showed great promise. Finally, we discuss ethical issues and future perspectives for this format.
Modern elections can be conceived of as a socio-technical system, as the electoral process in many ways relies on technological solutions: voter information, identification and registration, and collecting, verifying and counting the votes – in some countries these steps are conducted using innovative technologies. But how do those devices and processes actually become part of the official legislation, and how can they finally be deployed during this sensitive and important democratic procedure? Over time, the State of California has developed a robust regulatory ecosystem for integrating innovative technology into the electoral process and is also able to change and modernize its rules and regulations. Although the technologies currently used are more static and hardware-based and usually do not include algorithmic systems, the overall structure of the process may also function as a blueprint for regulating more dynamic algorithm-based or even AI-based technologies.
How can existing experiences of regulatory experimentation inform AI sandbox design in Europe? This paper explores the ‘responsible AI’ sandbox of the Norwegian Data Protection Agency (DPA), a GDPR-oriented regulatory experiment created in 2020 with four projects per annum. Through an interpretive policy analysis of documents (exit reports and workshop transcripts) and semi-structured interviews with officials, we explore how the Norwegian DPA approached its mandate of ‘helping with responsible innovation’, where it identified role conflicts, and what scope conditions and challenges it perceived around sandbox work. Sandboxing represented a ‘new way of working’ for the regulatory authority: in an idea-based intervention mode, the DPA moves from rule-based interventions as a watchdog to becoming a dialogue-oriented partner in solution-finding, a concretiser of ambiguous GDPR rules, and a keen learner from sectoral and technical experts. Critical engagement with our data suggests that sandbox design should not be reduced to technical and procedural questions. It requires regulators’ critical reflexivity on their ambivalent role and power relations in the regulatory experiment: how to strategically select relevant projects and issues, how to navigate budgetary constraints and the lack of follow-ups, and how sandboxing affects more interventionist regulatory duties.
This comment interrogates the methods and conclusions of Working with AI, a recent report conducted under the auspices of Microsoft, which identified historians as the profession with the second-highest ‘AI applicability’. It finds that the authors’ conclusions are based on an erroneous simplification and misrepresentation of a historian’s typical professional tasks, which have been publicly amplified by extensive media coverage. This comment then offers a wider provocation about the report’s conception of a professional historian, and whether it is related to the public application of ‘historian’ to a number of different practitioners with varied training and qualifications. In particular, it seeks to highlight a paradox which the report exposes: that we cannot defend the specialist training and expertise of professional historians against the encroachment of AI without also separating the academic skills and qualifications of historians from those engaged in more popular forms of historical writing and communication. The comment questions how we might grapple with this paradox without reverting to academic elitism.
Regulatory sandboxes for Artificial Intelligence (AI) are designed to address challenges of rapid technological change. AI innovations create an acute need for learning about what regulation is suitable for enabling innovation while dealing with technological risks. This article argues that regulatory sandboxes should be analyzed primarily as mechanisms for enhancing policymakers’ understanding of technologies such as AI, rather than solely as spaces for experimentation that promote innovation. It discusses the role of regulatory sandboxes in facilitating policy learning that can complement the long-term learning processes of the traditional policy cycle. Six case studies serve to illustrate sandbox elements for enabling collaborative experiential learning in contexts in which the absence of AI regulation makes accelerated policy learning particularly valuable. Looking at the design and governance of regulatory sandboxes from Brazil, Colombia, Mauritius, Mexico, Rwanda, and Thailand, learning elements related to the technology and consequences for closing legal lags emerge as critical components.
The establishment of artificial intelligence regulatory sandboxes (AIRSs) poses both policy and technical challenges, especially in how to reconcile support for innovation with regulatory oversight. AIRSs are based on dynamic regulatory feedback mechanisms that allow for a deeper examination of legal norms with a view to their future evolution. These structures facilitate engagement between regulators and innovators, enabling business learning and regulatory adaptation. However, their proliferation across the European Union under the Artificial Intelligence Act (AI Act) may raise issues of coordination between competent authorities, cross-border regulatory alignment and consistency with overlapping (sectoral) rules. In view of these potential complexities, this paper makes two distinct recommendations. First, AIRSs would benefit from cross-border cooperation – efforts should therefore be made to pursue the establishment of joint AIRSs among different Member States in order to reduce regulatory fragmentation, lower the risk of forum shopping, and optimise administrative resources. Second, integrating AI and cybersecurity compliance within the same sandbox environment would be beneficial in terms of providing clearer and more structured compliance pathways. A well-designed regulatory sandbox regime would make regulation more effective, encourage responsible AI development and secure Europe’s leadership in digital regulation.
This article explores how AI-generated music challenges traditional theological understandings of creativity, spirituality, and the soul. By engaging the theological traditions of analogy and participation developed by Thomas Aquinas, Thomas de Vio Cajetan, and Francisco Suárez, the article reconsiders whether AI-generated music evokes emotions and spiritual significance in listeners and whether it might disclose something meaningful about the nature of divine creativity. Rather than arguing that AI music is either a technological innovation or an artistic threat, the article draws on frameworks of analogy, participation, and pneumatology to support a more careful theological discernment of how divine creativity works through secondary causes within creation. The exploration concludes by proposing a ‘theology of digital transcendence’ – a framework for understanding how computational creativity participates in the broader economy of divine creation.
Initially, an attempt is made to provide a precise definition of channel functions, which are so vital to the firm. The tough challenge of achieving acceptable performance of work activities in all the firm’s channels is explained. Then, an analysis is presented of how new technologies can affect the processing and delivery of customer orders. Acknowledgment is made of the impact of brand positioning and value propositions on channel functions. It follows that superior performance of critical channel functions is vital to delighting targeted end-customers, and a thorough explanation of this is given. To conclude, the role of supply chain management in the firm is discussed, along with the main steps to be taken in the order management cycle.
Artificial intelligence ambient voice technology (AI AVT), which uses a large language model to summarise clinical dialogue into electronic notes and GP letters, has emerged. We conducted a mixed-methods, pre–post (manual versus AVT-assisted documentation) service development pilot to evaluate its use in a child and adolescent out-patient clinic.
Results
The median administration time per clinical encounter was reduced from 27 min (manual) to 10 min (AVT) (P < 0.001). On average, AVT-assisted documentation required only 45% of the time needed for manual documentation (P < 0.001). Clinician-rated accuracy, quality and efficiency were significantly higher for AVT-assisted documentation. Patient acceptance was high, with 97% reporting that clinicians were not distracted by note-taking. Thematic analysis of focus groups identified positive effects of AVT (improved productivity and clinician well-being), balanced by barriers (technological limitations).
Clinical implications
Integration of AVT into clinical workflows can significantly alleviate documentation burden, reduce cognitive strain and free up clinical capacity.
In recent years, evidence for extraterrestrial life has been sought mainly in four areas: meteorites, space probes, radio telescopes, and claims of extraterrestrial intelligence and civilization. Biochemical studies of meteorites have tried to trace fossilized microorganisms or organic molecules associated with living structures. Images and atmospheric information obtained from various planets by space probes have been used to assess the habitability of other celestial bodies in the solar system. Observations with radio telescopes, which receive and display the waves emitted by cosmic objects, have paved the way to estimating the habitability of heavenly bodies. Finally, claims related to extraterrestrial intelligence and civilization have been repeatedly reported in different periods of history. All of this evidence points to the possibility of extraterrestrial life, but how close we are to confirming or disproving this hypothesis is still debatable. However, recent advancements in artificial intelligence, particularly in machine learning, have significantly enhanced the ability to analyze complex astrobiological data. This technology optimizes the processing of meteoritic data, differentiates astronomical signals, and reinterprets historical evidence, opening new frontiers in the search for extraterrestrial life. In this review, we attempt to present the above-mentioned evidence in detail to provide a suitable understanding of the current level of our extraterrestrial knowledge.
Chapter 10 predicts the “future” of chilling effects – which today looks darker and more dystopian than ever in light of the proliferation of new forms of artificial intelligence, machine learning, and automation technologies in society. The author introduces a new term, “superveillance”, to describe new forms of AI-driven, automated legal and social norm enforcement that will likely cause mass societal chilling effects at an unprecedented scale. The author also argues that chilling effects today enable this more oppressive future and proposes comprehensive law and public policy reforms and solutions to stop it.
States are reshaping the global digital economy to assert control over the artificial intelligence (AI) value chain. Operating outside multilateral institutions, they pursue measures such as export controls on advanced semiconductors, infrastructure partnerships, and bans on foreign digital platforms. This digital disintegration reflects an elite-centered response to the infrastructural power that private firms wield over critical AI inputs. A handful of companies operate beyond the reach of domestic regulation and multilateral oversight, controlling access to technologies that create vulnerabilities existing institutions struggle to contain. As a result, states have asserted strategic digital sovereignty: the exercise of authority over core digital infrastructure, often through selective alliances with firms and other governments. The outcome is an emergent form of AI governance in techno-blocs: coalitions that coordinate control over key inputs while excluding others. These arrangements challenge the liberal international order by replacing multilateral cooperation with strategic—and often illiberal—alignment within competing blocs.
This chapter explores bias and fairness in Swedish employment testing from legal, historical, and practical perspectives. Swedish labor laws, influenced by trade unions and the welfare state, emphasize non-discrimination under the Discrimination Act. The law prohibits bias based on sex, gender identity, ethnicity, religion, disability, sexual orientation, and age, and requires preventive action. It is enforced by the Equality Ombudsman and Labour Court. Although validity evidence is not explicitly required, selection decisions should be based on a job analysis. No proof of intent is required in discrimination claims, and the burden of proof is shared. Quotas are banned, but positive action is allowed for gender balance when qualifications are equal. Psychological test certification is voluntary in Sweden; the Psychological Association offers guidelines on validity, reliability, and fairness. However, these are not mandatory, and many employers develop their own policies. International standards offer best-practice guidance for fair assessments, including for emerging artificial intelligence tools.
The intelligible world of machines and predictive modelling is an omnipresent and almost inescapable phenomenon. It is an evolution in which human intelligence is being supported, supplemented or superseded by artificial intelligence (AI). Decisions once made by humans are now made by machines, which learn faster and more accurately through algorithmic calculations. Jurisprudential scholarship has begun to debate the role of AI as a decision-making mechanism in Australian criminal jurisdictions. This paper explores this proposition through predictive modelling of 101 bail decisions made in three criminal courts in the State of New South Wales (NSW), Australia. The models, based on nine predictor variables, performed well in terms of statistical performance and accuracy. The more accurate logistic regression model achieved 78% accuracy and a performance value of 0.845 (area under the curve; AUC), while the classifier model achieved 72.5% accuracy and a performance value of 0.702 (AUC). These results provide the groundwork for AI-generated bail decisions to be piloted in the NSW jurisdiction and possibly other Australian jurisdictions.
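To make the kind of modelling described above concrete, here is a minimal sketch in Python. It is purely illustrative, not the authors’ pipeline: synthetic data and an arbitrary outcome coding stand in for the 101 NSW bail decisions and their nine predictor variables, and the reported figures (78% accuracy, 0.845 AUC) are not reproduced.

```python
# Illustrative sketch only: synthetic data stands in for the paper's
# 101 NSW bail decisions and nine predictor variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_predictors = 101, 9                     # matches the scale reported in the abstract
X = rng.normal(size=(n_cases, n_predictors))
# Toy outcome: 1 = bail granted, 0 = bail refused, generated from a random linear rule.
y = (X @ rng.normal(size=n_predictors) + rng.normal(scale=0.5, size=n_cases) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)                       # hard class predictions for accuracy
prob = model.predict_proba(X_test)[:, 1]           # probabilities for the AUC

print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"AUC:      {roc_auc_score(y_test, prob):.3f}")
```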
Information is a key variable in International Relations, underpinning theories of foreign policy, inter-state cooperation, and civil and international conflict. Yet IR scholars have only begun to grapple with the consequences of recent shifts in the global information environment. We argue that information disorder—a media environment with low barriers to content creation, rapid spread of false or misleading material, and algorithmic amplification of sensational and fragmented narratives—will reshape the practice and study of International Relations. We identify three major implications of information disorder for international politics. First, information disorder distorts how citizens access and evaluate political information, creating effects that are particularly destabilizing for democracies. Second, it damages international cooperation by eroding shared focal points and increasing incentives for noncompliance. Finally, information disorder shifts patterns of conflict by intensifying societal cleavages, enabling foreign influence, and eroding democratic advantages in crisis bargaining. We conclude by outlining an agenda for future research.
This chapter explores bias and fairness in employment testing in Türkiye across governmental and private sectors. It distinguishes fairness – equal opportunity, transparency, and uniform outcomes – from bias, especially in relation to predictive validity. The chapter situates these issues within Türkiye’s cultural, ethnic, and socioeconomic landscape, examining how historical and regional factors shape perceptions and practices. Key legal and regulatory frameworks, such as Turkish Labor Law and constitutional mandates, are reviewed to highlight protections for equal treatment. It also evaluates bias detection methods, including differential item functioning, sensitivity reviews, and predictive bias analyses, and discusses challenges from emerging technologies such as the use of artificial intelligence in personnel selection. The chapter underscores the need for strong validity evidence and proactive strategies to promote fair and equitable hiring in Türkiye.
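Among the bias-detection methods the chapter names, predictive bias analysis is commonly operationalised as a moderated regression (a Cleary-style model). The sketch below is purely illustrative and uses synthetic data with hypothetical variable names, not material from the chapter: it tests whether a group indicator shifts the intercept or the slope of the test-score/performance relationship.

```python
# Minimal sketch of a Cleary-style predictive bias analysis on synthetic data;
# variable names and data are illustrative, not drawn from the chapter.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "test_score": rng.normal(size=n),
    "group": rng.integers(0, 2, size=n),            # 0/1 protected-group indicator
})
# Synthetic criterion: performance depends on test score only (no built-in bias).
df["performance"] = 0.5 * df["test_score"] + rng.normal(scale=1.0, size=n)

# Significant 'group' or 'test_score:group' coefficients would indicate
# intercept or slope differences between groups, i.e. potential predictive bias.
model = smf.ols("performance ~ test_score * group", data=df).fit()
print(model.summary().tables[1])
```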
The human brain makes up just 2% of body mass but consumes closer to 20% of the body’s energy. Nonetheless, it is significantly more energy-efficient than most modern computers. Although these facts are well-known, models of cognitive capacities rarely account for metabolic factors. In this paper, we argue that metabolic considerations should be integrated into cognitive models. We distinguish two uses of metabolic considerations in modeling. First, metabolic considerations can be used to evaluate models. Evaluative metabolic considerations function as explanatory constraints. Metabolism limits which types of computation are possible in biological brains. Further, it structures and guides the flow of information in neural systems. Second, metabolic considerations can be used to generate new models. They provide: a starting point for inquiry into the relation between brain structure and information processing, a proof-of-concept that metabolic knowledge is relevant to cognitive modeling, and potential explanations of how a particular type of computation is implemented. Evaluative metabolic considerations allow researchers to prune and partition the space of possible models for a given cognitive capacity or neural system, while generative considerations populate that space with new models. Our account suggests cognitive models should be consistent with the brain’s metabolic limits, and modelers should assess how their models fit within these bounds. Our account offers fresh insights into the role of metabolism for cognitive models of mental effort, philosophical views of multiple realization and medium independence, and the comparison of biological and artificial computational systems.
Systematic reviews play a critical role in evidence-based research but are labor-intensive, especially during title and abstract screening. Compact large language models (LLMs) offer the potential to automate this process, balancing time and cost requirements against accuracy. The aim of this study is to assess the feasibility, accuracy, and workload reduction of three compact LLMs (GPT-4o mini, Llama 3.1 8B, and Gemma 2 9B) in screening titles and abstracts. Records were sourced from three previously published systematic reviews, and the LLMs were asked to rate each record from 0 to 100 for inclusion, using a structured prompt. Predefined rating thresholds of 25, 50, and 75 were used to compute performance metrics (balanced accuracy, sensitivity, specificity, positive and negative predictive value, and workload saving). Processing time and costs were recorded. Across the systematic reviews, the LLMs achieved high sensitivity (up to 100%) but low precision (below 10%) for records included at full text. Specificity and workload savings improved at higher thresholds, with the 50- and 75-rating thresholds offering optimal trade-offs. GPT-4o mini, accessed via an application programming interface, was the fastest model (~40 minutes max.) and incurred usage costs ($0.14–$1.93 per review). Llama 3.1 8B and Gemma 2 9B were run locally, with longer processing times (~4 hours max.), and were free to use. The LLMs were highly sensitive tools for the title/abstract screening process, and high specificity values were reached, allowing for significant workload savings at reasonable cost and processing time. Conversely, we found them to be imprecise. However, high sensitivity and workload reduction are key factors for their usage in the title/abstract screening phase of systematic reviews.
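To illustrate the threshold logic described above, here is a small Python sketch. It uses made-up ratings and inclusion labels rather than the study’s records, and simply shows how sensitivity, specificity, and workload saving can be computed at the 25, 50, and 75 cut-offs.

```python
# Illustrative only: made-up LLM ratings and inclusion labels stand in for the
# study's records; demonstrates threshold-based screening metrics.
import numpy as np

rng = np.random.default_rng(42)
n_records = 500
included = rng.random(n_records) < 0.05               # ~5% truly relevant records
ratings = np.where(included,
                   rng.integers(40, 101, n_records),  # relevant records tend to score high
                   rng.integers(0, 90, n_records))    # irrelevant records score lower

for threshold in (25, 50, 75):
    screened_in = ratings >= threshold
    tp = np.sum(screened_in & included)
    tn = np.sum(~screened_in & ~included)
    fp = np.sum(screened_in & ~included)
    fn = np.sum(~screened_in & included)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    workload_saving = np.mean(~screened_in)           # share of records excluded automatically
    print(f"threshold {threshold}: sensitivity={sensitivity:.2f}, "
          f"specificity={specificity:.2f}, workload saving={workload_saving:.2%}")
```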