In the fields of meal-assisting robotics and human–robot interaction (HRI), real-time and accurate mouth pose estimation is critical for ensuring interaction safety and improving user experience. The task is complicated by the diverse opening degrees of mouths, variations in orientation, and external factors such as lighting conditions and occlusions. To address these challenges, this paper proposes a novel method for point cloud fitting and posture estimation of mouth opening degrees (FP-MODs). The proposed method leverages RGB and depth images captured from a single viewpoint, integrating geometric modeling with advanced point cloud processing techniques to achieve robust and accurate mouth posture estimation. The innovation of this work lies in the hypothesis that different mouth-opening states can be effectively described by distinct geometric shapes: closed mouths are modeled by spatial quadratic surfaces, half-open mouths by spatial ellipses, and fully open mouths by spatial circles. Based on these hypotheses, we develop algorithms that fit the corresponding geometric model to the point cloud extracted from the mouth region. Specifically, for the closed mouth state, we employ least squares optimization to fit a spatial quadratic surface to the point cloud data. For the half-open and fully open mouth states, we combine inverse projection methods with least squares fitting to model the contour as a spatial ellipse or circle, respectively. Finally, to evaluate the effectiveness of the proposed FP-MODs method, extensive real-world experiments were conducted under varying conditions, including different orientations and various types of mouths. The results demonstrate that the proposed FP-MODs method achieves high accuracy and robustness.
This study can provide a theoretical foundation and technical support for improving HRI and food delivery safety in the field of robotics.
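The fully open mouth state above is modeled as a spatial circle fitted by least squares. As a minimal sketch of that kind of fit (illustrative only, not the authors' implementation), one common approach fits a plane to the 3D contour points by SVD, projects the points into that plane, and solves a linear algebraic circle fit:

```python
import numpy as np

def fit_spatial_circle(points):
    """Fit a circle to (N, 3) points: fit a plane by SVD, express the
    points in 2D plane coordinates, then solve an algebraic
    least-squares circle fit. Returns (center_3d, radius)."""
    centroid = points.mean(axis=0)
    # Right singular vectors: first two span the best-fit plane,
    # the last is the plane normal (smallest singular value).
    _, _, vt = np.linalg.svd(points - centroid)
    u, v, normal = vt
    # 2D coordinates of each point within the plane.
    xy = (points - centroid) @ np.stack([u, v]).T
    # Circle (x-a)^2 + (y-b)^2 = r^2 rewritten linearly:
    # x^2 + y^2 = 2a*x + 2b*y + c,  with c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * xy, np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    center = centroid + a * u + b * v
    return center, radius
```

The ellipse case in the abstract is analogous but needs a general conic fit instead of the three-parameter circle model.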
This up-to-date introduction to type theory and homotopy type theory will be essential reading for advanced undergraduate and graduate students interested in the foundations and formalization of mathematics. The book begins with a thorough and self-contained introduction to dependent type theory. No prior knowledge of type theory is required. The second part gradually introduces the key concepts of homotopy type theory: equivalences, the fundamental theorem of identity types, truncation levels, and the univalence axiom. This prepares the reader to study a variety of subjects from a univalent point of view, including sets, groups, combinatorics, and well-founded trees. The final part introduces the idea of higher inductive types by discussing the circle and its universal cover. Each part is structured into bite-size chapters, each the length of a lecture, and over 200 exercises provide ample practice material.
In dynamic environments, moving objects pose a great challenge to the accuracy and robustness of visual simultaneous localization and mapping (VSLAM) systems. Traditional dynamic VSLAM methods rely on hand-designed feature frames and usually fail to fully exploit feature information in dynamic regions. To this end, this paper proposes a SLAM system (GAF-SLAM) that combines gray area feature points, weighted static probabilities, and spatio-temporal constraints. The method achieves an efficient fusion of key point detection and target detection by introducing YOLO-Point to extract gray area feature points from dynamic regions. These feature points lie within the detection frame and have potentially static properties. By combining the reprojection error with epipolar geometry constraints, potential static feature points are effectively identified and the classification of these gray area feature points is further refined. Subsequently, a novel static probability computation framework is designed to assign weights to these gray area feature points and dynamically adjust their influence on the optimization results during pose estimation. By combining static probability with temporal continuity and spatial smoothness constraints, the system achieves significantly improved localization accuracy and robustness in dynamic environments. Finally, the proposed method was evaluated on the TUM RGB-D dataset. The experimental results demonstrate that GAF-SLAM significantly improves pose estimation accuracy and exhibits strong robustness and stability in dynamic indoor environments.
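Screening candidate static points by epipolar consistency, as described above, can be illustrated with the standard Sampson distance: matches consistent with camera-only motion lie close to their epipolar lines. This is a generic sketch, not GAF-SLAM's code, and the function names and threshold are assumptions:

```python
import numpy as np

def sampson_distance(F, pts1, pts2):
    """First-order geometric (Sampson) distance of point matches to the
    epipolar constraint x2^T F x1 = 0. pts1, pts2: (N, 2) pixel coords."""
    x1 = np.column_stack([pts1, np.ones(len(pts1))])
    x2 = np.column_stack([pts2, np.ones(len(pts2))])
    Fx1 = x1 @ F.T    # row i is F @ x1_i
    Ftx2 = x2 @ F     # row i is F^T @ x2_i
    num = np.einsum('ij,ij->i', x2, Fx1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

def screen_static(F, pts1, pts2, thresh=1.0):
    """Flag matches whose Sampson distance falls below `thresh`
    as candidate static points (threshold is illustrative)."""
    return sampson_distance(F, pts1, pts2) < thresh
```

A full system would combine this geometric test with the detector output and temporal weighting, as the abstract describes.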
Model-based systems engineering (MBSE) is increasingly used across industries for the integrated modeling of complex systems to support model-based development and provide enhanced traceability between requirements and verification and validation of the system. This paper seeks to strengthen the function modeling methodology in MBSE by introducing an approach based on flow heuristics guided by the System State Flow Diagram schema. This provides integrated function architectures with an enhanced integrity in MBSE. The approach is illustrated with a case study of an electric bicycle implemented in the MathWorks System Composer environment.
This article investigates “livingness” at the convergence of design, human–computer interaction (HCI) and synthetic biology, emphasising the evolving role of materialism. It examines living artefacts – objects designed with life-like qualities that utilise natural, engineered or programmable materials. The study thoroughly reviews theoretical underpinnings, highlighting new materialism’s focus on the agency of matter and HCI’s material turn, underscoring the value of physical interaction with digital systems. It also discusses recent advancements in living organisms as integral elements in design, aimed at reducing environmental impact and creating new user experiences. Through a systematic literature review and an in-depth analysis of case studies, the article proposes an extended definition of “livingness” across the three disciplines, advancing the understanding of the functions of living artefacts, how life-like capabilities can be integrated into them, and the implications for regenerative design. The findings invite a reimagined relationship between humans, materials and technology, fostering sustainable and interactive design practices.
Australian public sector agencies want to improve access to public sector data to support better-informed policy analysis and research, and legislation has been passed to that end. Much of this public sector data also contains personal or health information and is therefore governed by state and federal privacy law, which places conditions on the use of such information. This paper analyses how these data sharing laws compare with one another, and whether they substantially change the grounds on which public sector data can be shared. It finds that data sharing legislation, by itself, does not substantially change the norms embedded in privacy and health information management law governing the sharing of personal and health information. However, breaches of social licence can still occur even where data sharing is lawful. Further, there are several inconsistencies between data sharing legislation across Australia. This paper therefore proposes reform, policy, and technical strategies to address the impact of these inconsistencies.
European Union (EU) public opinion research is a rich field of study. However, as citizens often have little knowledge of the EU, the question remains to what extent their attitudes are grounded in coherent, ideologically informed belief systems. As survey research is not well equipped to study this question, this paper explores the value of the method of cognitive mapping (CM) for public opinion research by studying the cognitive maps of 504 Dutch citizens regarding the Eurozone crisis. The paper shows that respondents perceive the Eurozone crisis predominantly as a governmental debt crisis. Moreover, the concept of bureaucracy unexpectedly plays a key role in their belief systems, exerting an ambiguous but overall negative effect on the Eurozone and on trust in the EU. Contrary to expectations, the attitudes of the respondents are more solidly grounded in (ordoliberal) ideology than those of the Dutch elite. Finally, the paper introduces new ways to measure ambivalence, prompting a reevaluation of the significance of different forms of ambivalence and their impact on political behavior. Overall, the results of this study suggest that CM forms a promising addition to the toolbox of public opinion research.
Improving public policies, creating the next generation of AI systems, reducing crime, making hospitals more efficient, addressing climate change, controlling pandemics, and reducing disruption in supply chains are all problems where big picture ideas from analytics science have had large-scale impact. What are those ideas? Who came up with them? Will insights from analytics science help solve even more daunting societal challenges? This book takes readers on an engaging tour of the evolution of analytics science and how it brought together ideas and tools from many different fields – AI, machine learning, data science, OR, optimization, statistics, economics, and more – to make the world a better place. Using these ideas and tools, big picture insights emerge from simplified settings that get at the essence of a problem, leading to superior approaches to complex societal issues. A fascinating read for anyone interested in how problems can be solved by leveraging analytics.
This focused textbook demonstrates cutting-edge concepts at the intersection of machine learning (ML) and wireless communications, providing students with a deep and insightful understanding of this emerging field. It introduces students to a broad array of ML tools for effective wireless system design, and supports them in exploring ways in which future wireless networks can be designed to enable more effective deployment of federated and distributed learning techniques in support of AI systems. Requiring no previous knowledge of ML, this accessible introduction includes over 20 worked examples demonstrating the use of theoretical principles to address real-world challenges, and over 100 end-of-chapter exercises to cement student understanding, including hands-on computational exercises using Python. Accompanied by code supplements and solutions for instructors, this is the ideal textbook for a single-semester senior undergraduate or graduate course for students in electrical engineering, and an invaluable reference for academic researchers and professional engineers in wireless communications.
Design Neurocognition, a field bridging Design Research and Cognitive Neuroscience, offers new insights into the cognitive processes underlying creative ideation. This study adopts a micro-perspective on design ideation by examining convergent and divergent thinking as its core components. Using 32-channel EEG recordings, it investigates how educational background (Industrial Design Engineering vs. Engineering Design) influences designers’ neural activity (alpha, beta, and gamma frequency bands), behavioral responses, and perceived stress during ideation tasks. Data from forty participants reveal a consistent and meaningful interaction between brain activity, behavior, and self-reported stress, highlighting that educational background significantly modulates cognitive and neural patterns during ideation. Importantly, perceived stress shows strong negative correlations with neural power across all frequency bands, suggesting a close alignment between subjective experience and physiological measures. By integrating neural, behavioral, and psychological data, this study advances the understanding of the neurocognitive mechanisms driving design ideation and establishes a methodological foundation for bridging Design and Cognitive Neuroscience. These findings contribute to building a unified evidence base for future human-centred and neuro-informed design research.
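The band-power measures analysed above (alpha, beta, gamma) are conventionally derived from each EEG channel's power spectrum. A minimal sketch, assuming a plain periodogram and conventional band edges rather than whatever estimator and bands the study actually used:

```python
import numpy as np

# Conventional EEG band edges in Hz (an assumption, not the study's).
BANDS = {'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}

def band_power(signal, fs):
    """Average periodogram power of one EEG channel in each band.
    signal: 1-D samples; fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    # Periodogram: squared FFT magnitude, normalized by fs * N.
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```

In practice a windowed estimator such as Welch's method is usually preferred over the raw periodogram for noisy recordings.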
This article argues that the environmental contexts of memory are vulnerable to Artificial Intelligence (AI)-generated distortions. By addressing the broader ecological implications for AI’s integration into society, this article looks beyond a sociotechnical dimension to explore the potential for AI to complicate environmental memory and its role in shaping human–environment relations. First, I address how the manipulation and falsification of memory risks undermining intergenerational transmission of environmental knowledge. Second, I examine how AI-generated blurring of boundaries between real and unreal can lead to collective inaction on environmental challenges. By identifying memory’s central role in addressing environmental crisis, this article places emerging debates on memory in the AI era in direct conversation with environmental discourse and scholarship.
Since 2017, Digital Twins (DTs) have gained prominence in academic research, with researchers actively conceptualising, prototyping, and implementing DT applications across disciplines. The transformative potential of DTs has also attracted significant private sector investment, leading to substantial advancements in their development. However, their adoption in politics and public administration remains limited. While governments fund extensive DT research, their application in governance is often seen as a long-term prospect rather than an immediate priority, hindering their integration into decision-making and policy implementation. This study bridges the gap between theoretical discussions and practical adoption of DTs in governance. Using the Technology Readiness Level (TRL) and Technology Acceptance Model (TAM) frameworks, we analyse key barriers to adoption, including technological immaturity, limited institutional readiness, and scepticism regarding practical utility. Our research combines a systematic literature review of DT use cases with a case study of Germany, a country characterised by its federal governance structure, strict data privacy regulations, and strong digital innovation agenda. Our findings show that while DTs are widely conceptualised and prototyped in research, their use in governance remains scarce, particularly within federal ministries. Institutional inertia, data privacy concerns, and fragmented governance structures further constrain adoption. We conclude by emphasising the need for targeted pilot projects, clearer governance frameworks, and improved knowledge transfer to integrate DTs into policy planning, crisis management, and data-driven decision-making.
Important concepts from the diverse fields of physics, mathematics, engineering and computer science coalesce in this foundational text on the cutting-edge field of quantum information. Designed for undergraduate and graduate students with any STEM background, and written by a highly experienced author team, this textbook draws on quantum mechanics, number theory, computer science technologies, and more, to delve deeply into learning about qubits, the building blocks of quantum information, and how they are used in quantum computing and quantum algorithms. The pedagogical structure of the chapters features exercises after each section as well as focus boxes, giving students the benefit of additional background and applications without losing sight of the big picture. Recommended further reading and answers to select exercises further support learning. Written in approachable and conversational prose, this text offers a comprehensive treatment of the exciting field of quantum information while remaining accessible to students and researchers within all STEM disciplines.
An improved identification algorithm is adopted to calibrate the kinematic parameters of a serial-parallel robot, improving the motion accuracy of the end-effector. Firstly, a kinematic model of the serial-parallel robot is constructed based on the closed-loop vector method. Secondly, a kinematic error model is established by combining geometric error analysis with the vector differential method. Then, with compensable and non-compensable error sources effectively separated, an identification model of the kinematic parameters is constructed. Finally, an improved pivot element weighted iterative algorithm is used to identify the geometric error parameters. Using actual pose measurements, the calibration process is simulated in MATLAB. The simulation and experimental results show that, after kinematic calibration, the improved identification algorithm significantly reduces the end-effector pose error of the serial-parallel robot compared with the traditional least squares method, thus effectively improving the motion accuracy of the end-effector.
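Parameter identification of this kind typically solves a linearized error model e ≈ J·Δp for the parameter deviations Δp. As a generic sketch of a weighted iterative identification loop (not the authors' pivot element weighted algorithm; the Huber-style reweighting is an assumption), outlying measurements can be down-weighted across iterations:

```python
import numpy as np

def identify_params(J, e, n_iter=20):
    """Iteratively reweighted least squares for e ≈ J @ dp.
    J: (m, n) identification Jacobian; e: (m,) measured pose errors.
    Measurements with large residuals are down-weighted (Huber-style)."""
    w = np.ones(len(e))
    dp = np.zeros(J.shape[1])
    for _ in range(n_iter):
        # Weighted solve: minimizes || diag(w) (e - J dp) ||^2.
        dp, *_ = np.linalg.lstsq(np.diag(w) @ J, w * e, rcond=None)
        r = np.abs(e - J @ dp)
        # Huber-style cutoff tied to the residual scale.
        k = 1.345 * np.median(r) + 1e-12
        w = np.minimum(1.0, k / (r + 1e-12))
    return dp
```

The identified Δp would then be fed back to compensate the compensable error sources, as in the calibration pipeline described above.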
The escalating complexity of global migration patterns makes evident the limitations of traditional reactive governance approaches and the urgent need for anticipatory, forward-thinking strategies. This Special Collection, “Anticipatory Methods in Migration Policy: Forecasting, Foresight, and Other Forward-Looking Methods in Migration Policymaking,” brings together scholarly and practitioner contributions on the state of the art of anticipatory approaches. It showcases significant methodological evolution, highlighting innovations ranging from advanced quantitative forecasting using machine learning to predict displacement, irregular border crossings, and asylum trends, to rich, in-depth insights generated through qualitative foresight, participatory scenario building, and hybrid methodologies that integrate diverse knowledge forms. The contributions collectively emphasize the power of methodological pluralism, address a spectrum of migration drivers, including conflict and climate change, and critically examine the opportunities, ethical imperatives, and governance challenges associated with novel data sources, such as mobile phone data. By focusing on translating predictive insights and foresight into actionable policies and humanitarian action, this collection aims both to advance academic discourse and to provide tangible guidance for policymakers and practitioners. It underscores the importance of navigating inherent uncertainties and strengthening ethical frameworks to ensure that innovations in anticipatory migration policy enhance preparedness and resource allocation and uphold human dignity in an era of increasing global migration.
This study presents an innovative system for upper limb rehabilitation, combining a variable stiffness device, the ReHArm prototype, with a dynamic and engaging user interface, known as Arms Rehabilitation Management System. The proposed system offers a highly customisable approach to rehabilitation, ensuring real-time adaptability to patients’ specific needs while maintaining compactness and ease of use. Key features include a modular design allowing precise stiffness adjustments, a robust control architecture, and interactive rehabilitation phases designed to enhance user engagement. Extensive multidisciplinary analyses, including kinematic, dynamic, and structural evaluations, demonstrate the system’s ability to improve therapy effectiveness through tailored interaction and feedback. Validation tests demonstrated the prototype’s reliability and robustness, and initial usability assessments suggest its potential to improve rehabilitation outcomes. Further clinical studies involving patients will be necessary to fully evaluate its therapeutic effectiveness.
Product configuration is a successful application of answer set programming (ASP). However, challenges are still open for interactive systems to effectively guide users through the configuration process. The aim of our work is to provide an ASP-based solver for interactive configuration that can deal with large-scale industrial configuration problems and that supports intuitive user interfaces (UIs) via an application programming interface (API). In this paper, we focus on improving the performance of automatically completing a partial configuration. Our main contribution enhances the classical incremental approach for multi-shot solving by four different smart expansion functions. The core idea is to determine and add specific objects or associations to the partial configuration by exploiting cautious and brave consequences before checking for the existence of a complete configuration with the current objects in each iteration. This approach limits the number of costly unsatisfiability checks and reduces the search space, thereby improving solving performance. In addition, we present a UI that uses our API and is implemented in ASP.
Answer Set Programming (ASP) is a successful method for solving a range of real-world applications. Despite the availability of fast ASP solvers, computing answer sets demands significant computational resources, since the problem tackled lies on the second level of the polynomial hierarchy. Answer set computation can be accelerated if the program is split into two disjoint parts, bottom and top: the bottom part is evaluated independently of the top part, and the results of this evaluation are used to simplify the top part. Lifschitz and Turner introduced the concept of a splitting set, that is, a set of atoms that defines such a split.
In a previous paper, the notion of a g-splitting set, which generalizes the concept of splitting sets for disjunctive logic programs, was introduced. In this paper, we further investigate splitting sets and g-splitting sets. We show that the set inclusion problem for splitting sets can be reduced to a classic search problem and solved in polynomial time. We also show that computing g-splitting sets with desirable properties is straightforward. Finally, we show that stable models can be decomposed into models of rules inspired by g-splitting sets and models of the rest of the program. This property can assist in the incremental computation of stable models.
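The Lifschitz–Turner condition is simple to check directly: a set U of atoms splits a program if every rule whose head mentions an atom of U has all of its atoms inside U. A minimal sketch over propositional rules (representation is illustrative; g-splitting sets relax this condition in ways not shown here):

```python
def is_splitting_set(rules, U):
    """Check the Lifschitz–Turner splitting condition.
    rules: iterable of (head_atoms, body_atoms) pairs, each a set of
    atom names; U: candidate splitting set."""
    U = set(U)
    for head, body in rules:
        # If the head touches U, the whole rule must lie within U.
        if head & U and not (head | body) <= U:
            return False
    return True
```

Rules with empty heads (integrity constraints) trivially satisfy the condition, which matches the standard definition.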
With increasing age, many elderly individuals lose the ability to stand up normally. To address this problem, a knee exoskeleton with a variable stiffness knee joint is designed. The joint can adjust its stiffness according to the body’s movement state, ensuring precise assistance while also enhancing human comfort. The variable stiffness mechanism consists of an elastic output actuator, mainly responsible for the output of the joint torque, and a stiffness-adjusting actuator, mainly responsible for adjusting the joint stiffness. These two mechanisms are analysed separately, and, based on their relationship to the whole mechanism, a stiffness model of the entire knee joint is established. Experiments are then conducted to evaluate the variable stiffness joint. The stiffness identification experiment indicates that the actual stiffness of the knee joint is essentially consistent with the theoretical value. The trajectory tracking experiment demonstrates that the joint exhibits excellent trajectory tracking capability, although stiffness affects tracking performance to some extent. The assistance experiment demonstrates the exoskeleton’s ability to assist in standing. Additionally, an experiment with subjects wearing exoskeletons of different stiffnesses determines the impact of stiffness on human comfort.
We prove that determining the weak saturation number of a host graph $F$ with respect to a pattern graph $H$ is computationally hard, even when $H$ is the triangle. Our main tool establishes a connection between weak saturation and the shellability of simplicial complexes.