This review examines the legal, voluntary, and technical mechanisms that govern the ownership of nonpersonal agricultural data generated by IoT-enabled farm machinery, sensors, and related systems. Given that this data is not subject to personal data protection legislation such as the General Data Protection Regulation (GDPR), its governance presents distinct challenges that require alternative approaches. Drawing on 63 peer-reviewed studies published over the last decade, this review proposes an integrated conceptual framework comprising legal enforcement, voluntary governance, and technical enforcement mechanisms. A distinctive contribution of the study is to show that data ownership in agriculture becomes meaningful at the moment of data sharing, where rights claims are made visible, contested, or constrained, and that these three governance pathways must be understood jointly rather than in isolation. The analysis demonstrates that although farmers generate vast quantities of nonpersonal data, no existing legal framework explicitly grants them ownership, leaving ownership to be ambiguously allocated or de facto transferred through contracts in ways that limit their ability to contest access or downstream use. Technical mechanisms promise automated enforcement and accountability but risk codifying existing power asymmetries when the encoded rules reflect opaque or exclusionary terms. We argue for a shift from “ownership” to “data sovereignty” understood as the sustained capacity to define, monitor, and revoke conditions of data use. Achieving this requires three interlinked pillars: enforceable baseline access and use rights for farmers, accessible and preferably open-source technical infrastructure, and participatory governance arrangements.
The emergence of large language models, exemplified by ChatGPT, has garnered growing attention for their potential to generate feedback in second language writing, particularly automated written corrective feedback (AWCF). In this study, we examined how prompt design – a generic prompt and two domain-specific prompts (zero-shot and one-shot) enriched with comprehensive domain knowledge about written corrective feedback (WCF) – influences ChatGPT’s ability to provide AWCF. The accuracy and coverage of ChatGPT’s feedback across these three prompts were benchmarked against Grammarly, a widely used traditional automated writing evaluation (AWE) tool. We find that ChatGPT’s ability to flag language errors improved considerably with prompt sophistication driven by the integration of domain-specific knowledge and examples. While the generic prompt resulted in substantially lower performance than Grammarly, the zero-shot prompt achieved results comparable to it, and the one-shot prompt surpassed it considerably in error detection. Notably, the most pronounced improvement in ChatGPT’s performance was observed in its detection of frequent error categories, including word choice or expression, direct translation, sentence structure, and pronoun errors. Nonetheless, even with the most sophisticated prompt, ChatGPT still displayed certain limitations when compared to Grammarly. Our study has both theoretical and practical implications. Theoretically, it lends empirical evidence to Knoth et al.’s (2024) proposition to separate domain-specific AI literacy from generic AI literacy. Practically, it sheds light on the pedagogical application and technical development of AWE systems.
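The three prompt conditions compared in the study can be illustrated with hypothetical templates. These are illustrative sketches, not the study's actual prompts; only the error categories are taken from the abstract.

```python
# Hypothetical templates for the three prompt conditions (generic,
# domain-specific zero-shot, domain-specific one-shot). The WCF framing
# and the worked example are invented for illustration.
GENERIC = "Please correct the grammar errors in the following essay:\n{essay}"

ZERO_SHOT = (
    "You are an expert in written corrective feedback (WCF) for second "
    "language writers of English.\n"
    "Identify every language error, classify it (e.g. word choice or "
    "expression, direct translation, sentence structure, pronoun), and "
    "provide a metalinguistic hint rather than rewriting the sentence.\n\n"
    "Essay:\n{essay}"
)

# One-shot = zero-shot plus a single worked example of the expected format
ONE_SHOT = ZERO_SHOT + (
    "\n\nExample:\n"
    "Sentence: 'He suggested me to go.'\n"
    "Error type: sentence structure (verb complementation).\n"
    "Hint: 'suggest' does not take an object + infinitive; "
    "consider 'suggested that I go'."
)

def build_prompt(template: str, essay: str) -> str:
    """Fill the shared {essay} slot of a prompt template."""
    return template.format(essay=essay)
```

The one-shot template differs from the zero-shot one only by the appended worked example, which isolates the effect of the example itself in the comparison.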
The use of passive exoskeletons in industrial settings has gained growing interest as a means to reduce muscle fatigue and prevent work-related musculoskeletal disorders. However, translating laboratory methods into realistic occupational environments remains a challenge. This study presents a modular and wearable-sensor-based experimental protocol designed to bridge this gap by enabling the evaluation of exoskeletons in both static (STC) and dynamic (DYN) tasks while preserving natural movement variability. A total of 52 participants, including both men and women, completed tasks with and without two different passive exoskeletons, while their motor activity was assessed using surface electromyography (sEMG) and inertial motion sensors. The protocol incorporates key EMG-based metrics – Root Mean Square (RMS) and Hilbert Median Frequency (MDF) – that effectively quantify muscle activation and fatigue, along with subjective Perceived Fatigue Scores (PFS) and a task performance metric (Screwing Velocity, SV). The results confirm that the exoskeletons significantly reduce muscle activation and perceived fatigue without impairing task performance. The proposed methodology, combining rigorous metrics with wearable and non-invasive instrumentation, offers a robust framework for evaluating fatigue in both STC and DYN tasks and usability in both laboratory and field settings. This protocol represents a valuable tool for both research and industrial evaluation, facilitating the evidence-based integration of exoskeletons into real-world industrial workflows.
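The two EMG-based metrics named above are standard and straightforward to compute. A minimal sketch follows, using the conventional spectral definition of median frequency (the protocol uses a Hilbert-based MDF variant, so this is an approximation of that metric, and the synthetic signal stands in for real sEMG data):

```python
import numpy as np
from scipy.signal import welch

def emg_rms(signal):
    """Root Mean Square amplitude of an sEMG window (proxy for activation level)."""
    return np.sqrt(np.mean(np.square(signal)))

def emg_mdf(signal, fs):
    """Median frequency: the frequency that splits the power spectrum into
    two halves of equal power. A downward drift of MDF across successive
    windows is a standard marker of muscle fatigue."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(256, len(signal)))
    cum = np.cumsum(psd)
    return freqs[np.searchsorted(cum, cum[-1] / 2)]

# Synthetic example: 1 s of broadband noise sampled at 1 kHz
rng = np.random.default_rng(0)
fs = 1000
sig = rng.standard_normal(fs)
print(emg_rms(sig), emg_mdf(sig, fs))
```

In a fatigue protocol these two values would be computed per task window per muscle, with RMS tracking activation level and MDF tracking spectral compression over time.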
This paper presents an innovative hybrid approach that integrates traditional control strategies with deep reinforcement learning for robotic assembly. By fusing multimodal information from visual and force feedback, the method leverages admittance control to ensure safe force feedback while using deep reinforcement learning to process visual input, enabling precise control and real-time correction of assembly actions. This multi-sensor feedback mechanism not only enhances the stability and accuracy of the assembly process but also improves the robot’s robustness and adaptability in uncertain environments. Additionally, a twin-delayed deep deterministic policy gradient (TD3) algorithm based on residual reinforcement learning is proposed. The design of a task-specific reward function, which simultaneously considers visual goals, force compliance, and contact stability, effectively addresses challenges such as the difficulty of acquiring state information and sparse rewards in assembly tasks. This improves the robot’s interaction capabilities and task execution efficiency in real-world environments. Experimental results demonstrate that the method designed in this paper effectively reduces the training time for reinforcement learning from 400 epochs to 100 epochs, significantly decreases the magnitude of contact forces during the assembly process, and shortens the contact time.
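The two building blocks named in the abstract, an admittance filter for compliant force feedback and residual composition of a learned correction with a conventional controller, can be sketched as follows. The parameter values, the 1-DOF simplification, and the `scale` factor are illustrative assumptions, not the paper's design.

```python
import numpy as np

class AdmittanceFilter:
    """Discrete 1-DOF admittance model: M*x'' + B*x' + K*x = f_ext.
    Maps a measured contact-force error to a compliant position offset,
    so the robot yields to unexpected contact instead of fighting it."""
    def __init__(self, m=1.0, b=50.0, k=200.0, dt=0.002):
        self.m, self.b, self.k, self.dt = m, b, k, dt
        self.x = 0.0   # compliant position offset
        self.v = 0.0   # velocity of the offset

    def step(self, f_ext):
        # Semi-implicit Euler integration of the admittance dynamics
        a = (f_ext - self.b * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

def residual_action(base_action, policy_residual, scale=0.1):
    """Residual RL: the learned policy only corrects a conventional base
    controller. Exploration stays near a reasonable baseline, which is
    one reason residual schemes can cut training time sharply."""
    return base_action + scale * policy_residual
```

A positive contact force produces a small positive offset on the first step, and the offset settles toward the static compliance `f_ext / k` if the force is held.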
Governments across the world are leveraging artificial intelligence (AI) to render services to citizens. Emerging economies are part of this transformation but lag far behind in integrating AI into public-sector service delivery compared to the private sector. To ensure the effective integration of AI services by government agencies to serve citizens, it is necessary to understand the constellation of factors driving user adoption of AI. Therefore, this study answers the question: how can government-initiated AI services be successfully accepted by citizens? Leveraging non-probability sampling, a snowball sample of 245 tertiary student-workers from across Ghana was surveyed to solicit their knowledge, attitudes, readiness, and use intentions towards AI-enabled government services. The data were analysed using FsQCA and complemented by PLS-SEM to confirm the findings. The findings reveal four unique configurations, summarised into two broad groups, AI enthusiasts and AI sceptics, that drive AI adoption in government services. Positive readiness factors, such as knowledge of AI and optimism towards AI, characterise AI enthusiasts. In contrast, those described as AI sceptics still adopt government AI services despite their reservations and general distrust. AI sceptics are a delicate group that sits at the boundary between adoption and rejection, requiring special attention and strategies to orient them towards adoption. The study recommends effective education and trust-building strategies to foster AI adoption in government services. The findings are essential for driving the efficient implementation of AI-enabled services among working-class citizens in emerging economies.
This paper explores the mathematical connections between the algebraic and relational semantics of Lewis’s logics for counterfactual conditionals. Specifically, we introduce topological variants of Lewis’s well-known possible-worlds semantics—based on spheres, selection functions, and orders—and establish duality results with respect to varieties of Boolean algebras equipped with a counterfactual operator, which serve as the equivalent algebraic semantics of Lewis’s main systems. These results aim to provide a solid mathematical foundation for the study of Lewis’s logics, and offer a new perspective on the most well-known possible worlds-based models. In particular, we write explicit proofs for several results that are often assumed without proof in the literature. Leveraging these duality results, we also derive alternative proofs of strong completeness for Lewis’s variably strict conditional logics with respect to their intended models, and clarify the role of the limit assumption in sphere semantics.
This introduction to quantum computing from a classical programmer's perspective is meant for students and practitioners alike. More than 50 quantum techniques and algorithms are explained with mathematical derivations and code for simulation, using an open-source code base in Python and C++. New material throughout this fully revised and expanded second edition includes new chapters on Quantum Machine Learning, State Preparation, and Similarity Tests. Coverage includes algorithms exploiting entanglement, black-box algorithms, the quantum Fourier transform, phase estimation, quantum walks, and foundational QML algorithms. Readers will find detailed, easy-to-follow derivations and implementations of Shor's algorithm, Grover's algorithm, SAT3, graph coloring, the Solovay-Kitaev algorithm, Moettoenen's algorithm, quantum mean, median, and minimum finding, Deutsch's algorithm, Bernstein-Vazirani, quantum teleportation and superdense coding, the CHSH game, and, from QML, the HHL algorithm, Euclidean distance, and PCA. The book also discusses practical issues like quantum noise, error correction, quantum programming languages, compilers, and techniques for transpilation.
Designed for educators, researchers, and policymakers, this insightful book equips readers with practical strategies, critical perspectives, and ethical insights into integrating AI in education. First published in Swedish in 2023, and here translated, updated, and adapted for an English-speaking international audience, it provides a user-friendly guide to the digital and AI-related challenges and opportunities in today's education systems. Drawing upon cutting-edge research, Thomas Nygren outlines how technology can be usefully integrated into education, not as a replacement for humans, but as a tool that supports and reinforces students' learning. Written in accessible language, topics covered include AI literacy, source awareness, and subject-specific opportunities. The central role of the teacher is emphasized throughout, as is the importance of thoughtful engagement with technology. By guiding the reader through the fast-evolving digital transformation in education globally, it ultimately enables students to become informed participants in the digital world.
Narratives shape public perceptions and policymaking around emerging technologies like quantum technologies (QTs), yet what narratives develop across different societal domains remains underexplored. This study analyzes narratives about QTs in 36 government documents, 163 business reports, and 2,023 media articles published over the past 23 years, using a mixed-methods approach that combines topic modeling with qualitative thematic analysis. We find that the dystopian or utopian extremes associated with technologies such as artificial intelligence are largely absent from discourse about QTs. Media coverage spans a broad range of topics, while business and government narratives emphasize technical milestones, economic competitiveness, and national security, frequently at the expense of questions about ethics, equity, and accessibility. We discuss the implications of this focus, particularly the risk that an emphasis on zero-sum geopolitical competition could foster a more closed and fragmented innovation ecosystem.
This paper proposes an energy-efficient walking generation method utilizing limit cycles generated by nonlinear model predictive control (NMPC). Conventional limit cycle walking methods rely on strong feedback, such as output zeroing control, to attract the robot’s state toward a predefined periodic trajectory. However, we hypothesize that employing feedback control that better leverages the robot’s dynamics can improve energy efficiency during walking. Our previous work confirmed that using limit cycles generated by NMPC can produce energy-efficient walking patterns. This study builds upon this foundation and proposes a new method for generating walking in a general five-link bipedal robot. Through extensive numerical simulations, we demonstrate that the proposed method achieves highly energy-efficient walking while exhibiting excellent convergence to periodic trajectories.
This study examines the role of gakushū manga, or educational Japanese comics, in shaping collective memory narratives of World War II. It explores whether these works diverge from or perpetuate Japan-centric interpretations of World War II by analysing thematic trends, representational strategies, and selective memory frameworks. The findings reveal a dominant emphasis on Japanese victimhood, mainly through graphic depictions of civilian suffering, while representations of foreign victims, such as Chinese and Korean civilians, remain abstract or marginalised. The responsibility of those in positions of leadership is selectively portrayed, often exonerating figures like Emperor Hirohito, and the actions of such militaristic leaders are contextualised within broader systemic ideologies.
These manga replicate postwar narratives by foregrounding societal complicity, deliberate omission, and the relegation of the ‘Other’ to the periphery, in line with broader patterns of media-driven nationalism. They provide nuanced critiques of Japan’s wartime conduct but simultaneously maintain a selective focus that minimises Japan’s responsibilities as an aggressor. This research underscores the need for a balanced collective memory to foster reconciliation and a more inclusive understanding of wartime legacies in East Asia.
The lower limb exoskeleton is a typical wearable robot designed to assist human motion. However, its system stability and performance are often compromised due to unknown model parameters and inadequate control strategies. Therefore, it is crucial to explore the parametric identification of the exoskeleton and the design of corresponding control strategies for human-exoskeleton cooperative motion. First, an exoskeleton platform is developed to provide experimental validation. Simultaneously, a two-degree-of-freedom (2-DOF) exoskeleton model is constructed using the Lagrange method. The neighborhood field optimization (NFO) technique is then applied to identify the unknown model parameters of the exoskeleton. Additionally, the excitation trajectories for the exoskeleton are developed by the NFO method, incorporating several motion constraints to enhance the accuracy of model identification. An admittance controller is implemented to enable active control of the exoskeleton, allowing it to align with human intention and thereby improving the wearability and comfort of the device. Finally, both simulation and experimental results are compared and verified on the platform. These results demonstrate that the NFO method achieves superior identification accuracy compared to particle swarm optimization (PSO) and genetic algorithm (GA).
What defines a correct program? What education makes a good programmer? The answers to these questions depend on whether programs are seen as mathematical entities, engineered socio-technical systems or media for assisting human thought. Programmers have developed a wide range of concepts and methodologies to construct programs of increasing complexity. This book shows how those concepts and methodologies emerged and developed from the 1940s to the present. It follows several strands in the history of programming and interprets key historical moments as interactions between five different cultures of programming. Rooted in disciplines such as mathematics, electrical engineering, business management or psychology, the different cultures of programming have exchanged ideas and given rise to novel programming concepts and methodologies. They have also clashed about the nature of programming; those clashes remain at the core of many questions about programming today. This title is also available as Open Access on Cambridge Core.
Motivated by the astonishing capabilities of large language models (LLMs) in text-generation, reasoning, and simulation of complex human behaviors, in this paper, we propose a novel multi-component LLM-based framework, namely LLM4ACOE, that fully automates the collaborative ontology engineering (COE) process using role-playing simulation of LLM agents and retrieval-augmented generation (RAG) technology. The proposed solution enhances the LLM-powered role-playing simulation with RAG ‘feeding’ the LLM with three different types of external knowledge. This knowledge corresponds to the knowledge required by each of the COE roles (agents), using a component-based framework, as follows: (a) domain-specific data-centric documents, (b) OWL documentation, and (c) ReAct guidelines. The aforementioned components are evaluated in combination, with the aim of investigating their impact on the quality of generated ontologies. The aim of this work is twofold: (a) to identify the capacity of LLM-based agents to generate acceptable (by human experts) ontologies through agentic collaborative ontology engineering (ACOE) role-playing simulation, at specific levels of acceptance (accuracy, validity, and expressiveness of ontologies) without human intervention, and (b) to investigate whether and/or to what extent the selected RAG components affect the quality of the generated ontologies. The evaluation of this novel approach is performed using ChatGPT-o in the domain of search and rescue (SAR) missions. To assess the generated ontologies, quantitative and qualitative measures are employed, focusing on coverage, expressiveness, structure, and human involvement.
To address the poor path quality and blind random search of the traditional Q-RRT* algorithm, this paper proposes an improved APF-QRRT* algorithm. The improved algorithm first obtains a set of discrete critical path points connecting the start and end points with the Q-RRT* algorithm, and then fine-tunes the path using the local optimization capability of the artificial potential field (APF) to improve its smoothness and safety. The traditional Q-RRT* algorithm itself is improved by introducing a bidirectional search strategy with greedy node expansion, in which two random trees expand alternately and each tree's nearest node serves as the reference for its expansion during path-node generation. The experimental results show that the improved APF-QRRT* algorithm reduces the path planning time by 20.3%, the path length by 1.8%, the number of path nodes by 33.3%, and the number of sampling points by 23.6% compared with the standard APF-QRRT* algorithm in a complex environment. A system test platform is also constructed and used to carry out multi-AGV path planning experiments in real environments; these results confirm that the proposed hybrid algorithm performs well in practice.
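The APF refinement stage described above can be sketched as a simple iterative update over the waypoints returned by the sampling planner. All gains, the safety radius, and the force clipping are illustrative assumptions rather than the paper's parameters:

```python
import numpy as np

def apf_refine(path, obstacles, r_safe=1.0, k_att=1.0, k_rep=5.0,
               step=0.05, iters=100):
    """Fine-tune discrete waypoints with an artificial potential field:
    each interior point is pulled toward its neighbours (smoothing) and
    pushed away from obstacles closer than r_safe (safety)."""
    path = np.asarray(path, dtype=float).copy()
    obstacles = np.asarray(obstacles, dtype=float)
    for _ in range(iters):
        for i in range(1, len(path) - 1):          # endpoints stay fixed
            # Attractive term: pull toward the midpoint of the neighbours
            f = k_att * ((path[i - 1] + path[i + 1]) / 2 - path[i])
            # Repulsive term: push away from obstacles inside r_safe
            for obs in obstacles:
                d = np.linalg.norm(path[i] - obs)
                if 0 < d < r_safe:
                    f += k_rep * (1 / d - 1 / r_safe) * (path[i] - obs) / d**3
            # Bound the force so the update stays stable near obstacles
            f = np.clip(f, -10.0, 10.0)
            path[i] += step * f
    return path
```

Run on a three-point path passing close to an obstacle, the interior waypoint is pushed away from the obstacle while the endpoints remain fixed, which is the smoothing-plus-safety effect the hybrid algorithm relies on.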
This chapter explores collection analysis as a tool for proving some simple invariants about Horn clause logic programs. This static analysis of Horn clauses employs higher-order quantification and linear logic. This chapter introduces different types of collection approximations, such as multiset, set, and list approximations. The chapter also briefly mentions the automation of this analysis. Bibliographic notes provide pointers to relevant research in program analysis.
This chapter introduces linear logic, highlighting its unique approach to resource management. It presents sequent calculus proof systems for linear logic. The chapter discusses the polarity of logical connectives in linear logic and the concept of multi-zone sequents. It provides an informal semantics of resource consumption to illustrate the meaning of linear logic connectives. The chapter also touches upon the implementation of proof search in linear logic, mentioning techniques like lazy splitting of multisets. Bibliographic notes guide the reader to key literature on linear logic and its proof theory.