In certain scenarios, the large footprint of a robot is not conducive to multi-robot cooperative operations. This paper presents a generalized single-loop parallel manipulator with remote center of motion (GSLPM-RCM) that addresses this issue by incorporating a reconfigurable base: the footprint of the RCM manipulator can be adjusted by varying the parameters of the base. First, utilizing configuration evolution, a reconfigurable base is constructed based on the principle of forming RCM motion. Then, following the modular analysis method, the inverse kinematics and the workspace of this parallel RCM manipulator are analyzed. Subsequently, the motion/force transmissibility of the manipulator is analyzed, taking into account its single-loop, multi-degree-of-freedom characteristics. Leveraging the workspace index and the transmissibility indices, a dimensional optimization of the manipulator is implemented. Finally, the influence of the reconfigurable base on the workspace and the transmissibility performance of the optimized manipulator is studied.
In this paper, the model of bisexual branching processes affected by viral infectivity, with random control functions, in independent and identically distributed (i.i.d.) random environments is established, and its Markov property is given first. Then the relations among the probability generating functions of this model are studied, and sufficient conditions for extinction of the process under common mating functions are presented. Finally, the limiting behavior of the model after proper normalization, including sufficient conditions for convergence in L1 and L2 and for almost-everywhere convergence, is investigated under the condition that the random control functions are superadditive.
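For orientation, the one-generation recursion of a controlled bisexual branching process can be sketched as follows; the notation here is assumed for illustration and is not taken from the paper.

```latex
% One generation of a bisexual branching process with control functions:
% \phi_n (random) controls how many mating units reproduce; L is the mating function.
(F_{n+1}, M_{n+1}) = \sum_{i=1}^{\phi_n(Z_n)} (f_{n,i},\, m_{n,i}),
\qquad
Z_{n+1} = L(F_{n+1}, M_{n+1}).
% Superadditivity of a function g (the abstract's condition on the control
% functions) means  g(x + y) \ge g(x) + g(y).
```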
Traditional bulky and complex control devices, such as remote controls and ground stations, cannot meet the requirement of fast and flexible control of unmanned aerial vehicles (UAVs) in complex environments. Therefore, a data glove based on multi-sensor fusion is designed in this paper. To achieve gesture control of UAVs, the method accurately recognizes various gestures and converts them into corresponding UAV control commands. First, the wireless data glove fuses flexible fiber-optic sensors and inertial sensors to construct a gesture dataset. Then, the trained neural network models are deployed to the STM32 microcontroller-based data glove for real-time gesture recognition: a convolutional neural network with an attention mechanism (CNN-Attention) is used for static gesture recognition, and a convolutional neural network with bidirectional long short-term memory (CNN-Bi-LSTM) is used for dynamic gesture recognition. Finally, the gestures are converted into control commands and sent to the vehicle terminal to control the UAV. In UAV simulation tests on the simulation platform, the average recognition accuracy reaches 99.7% over 32 static gestures and 99.9% over 13 dynamic gestures, indicating excellent gesture recognition performance. A task test in a scene constructed in a real environment shows that the UAV responds to gestures quickly and that the proposed method achieves real-time, stable control of the UAV on the terminal side.
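As a rough illustration of the static-gesture branch, here is a minimal PyTorch sketch of a CNN-Attention classifier over multi-channel glove readings. All layer sizes, channel counts, and names are assumptions for illustration, not the paper's architecture (which runs on an STM32).

```python
import torch
import torch.nn as nn

class CNNAttention(nn.Module):
    """Hypothetical CNN-Attention static-gesture classifier (illustrative only)."""
    def __init__(self, n_channels=10, n_classes=32, d_model=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.fc = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (batch, sensor channels, time)
        h = self.conv(x).transpose(1, 2)       # -> (batch, time, d_model)
        h, _ = self.attn(h, h, h)              # self-attention over time steps
        return self.fc(h.mean(dim=1))          # temporal pooling, then classify

logits = CNNAttention()(torch.randn(8, 10, 50))  # 8 windows, 10 channels, 50 samples
```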
Vibration-based structural health monitoring (SHM) of (large) infrastructure through operational modal analysis (OMA) is a commonly adopted strategy. This is typically a four-step process, comprising estimation, tracking, data normalization, and decision-making. These steps are essential to ensure structural modes are correctly identified and results are normalized for environmental and operational variability (EOV). Other challenges, such as nonstructural modes in the OMA (for example, rotor harmonics in offshore wind turbines, OWTs), further complicate the process. Typically, these four steps are considered independently, making the method simple and robust, but rather limited in challenging applications, such as OWTs. Therefore, this study aims to combine tracking, data normalization, and decision-making in a single machine learning (ML) model. The presented SHM framework starts by identifying a “healthy” training dataset, representative of all relevant EOV, for all structural modes. Subsequently, operational and weather data are used for feature selection and a comparative analysis of ML models, leading to the selection of tree-based learners for natural frequency prediction. Uncertainty quantification (UQ) is introduced to identify out-of-distribution instances, which is crucial to guarantee low modeling error and to ensure that only high-fidelity structural modes are tracked. This study implements UQ with virtual ensembles, using the variance across multiple truncated submodel predictions. Practical application to monopile-supported OWT data demonstrates the tracking abilities, separating structural modes from rotor dynamics. Control charts show improved decision-making compared to traditional reference-based methods. A synthetic dataset further confirms the approach’s robustness in identifying relevant natural frequency shifts. Overall, this study presents a comprehensive data-driven approach for vibration-based SHM.
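The virtual-ensemble idea can be illustrated with truncated boosting stages: sub-models of increasing length act as an ensemble whose prediction spread flags out-of-distribution inputs. A toy scikit-learn sketch on synthetic data (not the authors' pipeline; all names are placeholders):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # stand-ins for weather/operational features
y = X @ [0.5, -0.2, 0.1, 0.3] + rng.normal(0, 0.1, 500)   # stand-in natural frequency

model = GradientBoostingRegressor(n_estimators=200).fit(X, y)

X_new = rng.normal(size=(10, 4)) * 3     # deliberately out-of-distribution inputs
stages = np.array(list(model.staged_predict(X_new))[100:])  # truncated sub-models
mean_pred = stages.mean(axis=0)          # virtual-ensemble prediction
uncertainty = stages.var(axis=0)         # high variance -> distrust the tracked mode
```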
Fenwick trees, also known as binary indexed trees, are a clever solution to the problem of maintaining a sequence of values while allowing both updates and range queries in sublinear time. Their implementation is concise and efficient—but also somewhat baffling, consisting largely of nonobvious bitwise operations on indices. We begin with segment trees, a much more straightforward, easy-to-verify, purely functional solution to the problem, and use equational reasoning to explain the implementation of Fenwick trees as an optimized variant, making use of a Haskell EDSL for operations on infinite two’s complement binary numbers.
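For concreteness, here is the classic imperative rendering of the structure in Python (the paper itself develops a purely functional Haskell derivation); the nonobvious `i & -i` expression isolates the lowest set bit of an index.

```python
class Fenwick:
    def __init__(self, n):
        self.tree = [0] * (n + 1)        # 1-based internal array

    def update(self, i, delta):          # a[i] += delta, for 1 <= i <= n
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & -i                  # climb to the next responsible node

    def prefix(self, i):                 # sum of a[1..i]
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i                  # strip the lowest set bit
        return s

    def range_sum(self, lo, hi):         # sum of a[lo..hi]
        return self.prefix(hi) - self.prefix(lo - 1)

ft = Fenwick(8)
ft.update(3, 5)                          # a[3] += 5
ft.update(7, 2)                          # a[7] += 2
assert ft.range_sum(1, 7) == 7
```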
Design provides innovative solutions to problems in the medical field. Collaboration between design and medicine can be fostered in several ways; however, educational programs linking these two academic fields are limited, and their frameworks and effectiveness are unknown. Hence, we launched an educational project to address medical problems through design. The framework and creative outcomes reported here are based on the results of two consecutive one-year programs. The research subjects were 35 participants from three departments. The majority (22/35, 63%) were master’s and doctoral students in design. Eight participants were doctoral students and researchers who volunteered from the surgery, oral surgery, neurology and nursing departments at the Graduate School of Medicine and Hospital. The impact of the program on creativity was evaluated by the quality of ideas and the participants’ assessments. In total, 424 problems were identified and 387 ideas were created. Nine prototypes with mock-ups and functional models of products, games or service designs were created and positively evaluated for novelty, workability and relevance. Participants benefitted from the collaboration and gained new perspectives. Career expectations increased after the class, whereas motivation and skills remained high. A framework for a continuing educational program is suggested.
Displacement continues to increase at a global scale and is increasingly happening in complex, multicrisis settings, leading to more complex and deeper humanitarian needs. Humanitarian needs are therefore increasingly outgrowing the available humanitarian funding. Responding to vulnerabilities before disaster strikes is thus crucial, but anticipatory action is contingent on the ability to accurately forecast what will happen in the future. Forecasting and contingency planning are not new in the humanitarian sector, where scenario-building continues to be an exercise conducted in most humanitarian operations to strategically plan for coming events. However, the accuracy of these exercises remains limited. To address this challenge, and with the objective of providing the humanitarian sector with more accurate forecasts to enhance the protection of vulnerable groups, the Danish Refugee Council has developed several machine learning models. The Anticipatory Humanitarian Action for Displacement model uses machine learning to forecast displacement in subdistricts of the Liptako-Gourma region in the Sahel, covering Burkina Faso, Mali, and Niger. The model is mainly built on data related to conflict, food insecurity, vegetation health, and the prevalence of underweight. In this article, we detail how the model works, its accuracy and limitations, and how we translate the forecasts into action by using them for anticipatory action in South Sudan and Burkina Faso, including concrete examples of activities that can be implemented ahead of displacement in the place of origin, along routes, and in places of destination.
The pervasive use of media at present-day festivals thoroughly impacts how these live events are experienced, anticipated, and remembered. This empirical study examined eventgoers’ live media practices – taking photos, making videos, and in-the-moment sharing of content on social media platforms – at three large cultural events in the Netherlands. Taking a practice approach (Ahva 2017; Couldry 2004), the author studied online and offline event environments through extensive ethnographic fieldwork: online and offline observations, and interviews with 379 eventgoers. Analysis of this research material shows that through their live media practices eventgoers are continuously involved in mediated memory work (Lohmeier and Pentzold 2014; Van Dijck 2007), a form of live storytelling that revolves around how they want to remember the event. The article focuses on the impact of mediated memory work on the live experience in the present. It distinguishes two types of mediatised experience of live events: live as future memory and the experiential live. The author argues that memory is increasingly incorporated into the live experience in the present, so much so that, for many eventgoers, mediated memory-making is crucial to having a full live event experience. The article shows how empirical research in media studies can shed new light on key questions within memory studies.
In this paper, we explore the crucial role and challenges of computational reproducibility in the geosciences, drawing insights from the 2023 Climate Informatics Reproducibility Challenge (CIRC). The competition aimed at (1) identifying common hurdles to reproducing computational climate science and (2) creating interactive reproducible publications for selected papers of the Environmental Data Science journal. Based on lessons learned from the challenge, we emphasize the significance of open research practices, mentorship, and transparency guidelines, as well as the use of technologies such as executable research objects, for the reproduction of published geoscientific research. We propose a supportive framework of tools and infrastructure for evaluating reproducibility in geoscientific publications, with a case study for the climate informatics community. While the recommendations focus on future CIRCs, we expect them to be beneficial for a wider umbrella of reproducibility initiatives in the geosciences.
Machine learning models have been used extensively in hydrology, but issues persist with regard to their transparency, and there is currently no identifiable best practice for choosing forcing variables in streamflow or flood modeling. In this paper, using data from the Centre for Ecology & Hydrology’s National River Flow Archive and from the European Centre for Medium-Range Weather Forecasts, we present a study that focuses on the input variable set for a neural network streamflow model and demonstrates how certain variables can be internalized, leading to a compressed feature set. By highlighting this capability to learn effectively from proxy variables, we demonstrate a more transferable framework that minimizes sensing requirements and enables a route toward generalizing models.
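The internalization idea can be illustrated with a toy regression: when one forcing variable is strongly correlated with another, a model trained without it can recover much of its signal through the proxy. A hypothetical scikit-learn sketch with synthetic data and invented variable names (not the paper's model or dataset):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
precip = rng.gamma(2.0, 2.0, n)                 # synthetic forcing variables
temp = rng.normal(10, 5, n)
radiation = 0.8 * temp + rng.normal(0, 1, n)    # strongly correlated with temp
flow = 0.6 * precip + 0.1 * radiation + rng.normal(0, 0.5, n)

full = np.column_stack([precip, temp, radiation])
proxy = np.column_stack([precip, temp])         # radiation internalized via temp

for name, X in [("full", full), ("proxy", proxy)]:
    Xtr, Xte, ytr, yte = train_test_split(X, flow, random_state=0)
    r2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                      random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(name, round(r2, 3))                   # similar scores despite fewer inputs
```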
Parameter learning is a crucial task in the field of Statistical Relational Artificial Intelligence: given a probabilistic logic program and a set of observations in the form of interpretations, the goal is to learn the probabilities of the facts in the program such that the probabilities of the interpretations are maximized. In this paper, we propose two algorithms to solve such a task within the formalism of Probabilistic Answer Set Programming, both based on the extraction of symbolic equations representing the probabilities of the interpretations. The first solves the task using an off-the-shelf constrained optimization solver, while the second is based on an implementation of the Expectation Maximization algorithm. Empirical results show that our proposals often outperform existing approaches based on projected answer set enumeration in terms of both solution quality and execution time.
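A minimal sketch of the first approach, assuming a toy two-fact program and invented symbolic equations for the interpretation probabilities (the real equations are extracted from the probabilistic answer set program):

```python
import sympy as sp
from scipy.optimize import minimize

# Hypothetical two-fact program with learnable probabilities p1, p2.
p1, p2 = sp.symbols('p1 p2')

# Invented symbolic equations for the probability of each observed
# interpretation (illustrative only).
interp_probs = [p1 * p2, p1 * (1 - p2), 1 - p1]
counts = [30, 50, 20]                        # observation counts per interpretation

log_lik = sum(c * sp.log(q) for c, q in zip(counts, interp_probs))
neg_ll = sp.lambdify((p1, p2), -log_lik, 'numpy')

res = minimize(lambda v: neg_ll(*v), x0=[0.5, 0.5],
               bounds=[(1e-6, 1 - 1e-6)] * 2)   # keep probabilities in (0, 1)
print(res.x)                                 # learned fact probabilities
```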
This paper describes a semantics for pure Prolog programs with negation that provides meaning to metaprograms. Metaprograms are programs that construct and use data structures as programs. In Prolog, a primary metaprogramming construct is the use of a variable as a literal in the body of a clause. The traditional Prolog three-line metainterpreter is another example of a metaprogram. The account given here also supplies a meaning for clauses that have a variable as head, even though most Prolog systems do not support such clauses. This semantics naturally includes such programs, giving them their intuitive meaning. Ideas from Denecker and his colleagues form the basis of this approach. The key idea is to notice that if we give meanings to all propositional programs and treat Prolog rules with variables as the set of their ground instances, then we can give meanings to all programs. We must treat Prolog rules (which may be metarules) as templates for generating ground propositional rules, and not as first-order formulas, which they may not be. We use parameterized inductive definitions to give propositional models to Prolog programs, in which the propositions are expressions. The set of expressions of a propositional model then determines a first-order Herbrand model, providing a first-order logical semantics for all (pure) Prolog programs, including metaprograms. We give examples to show the applicability of this theory. We also demonstrate how this theory makes proofs of some important properties of metaprograms very straightforward.
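The "rules as templates" idea can be made concrete with a small Python sketch that enumerates the ground instances of a rule over a finite Herbrand universe; the string-based atom encoding is a hypothetical illustration, not the paper's formalism.

```python
from itertools import product

def ground_instances(head, body, variables, universe):
    """All ground instances of a rule template over a finite Herbrand universe.
    Atoms are strings with {X}-style placeholders (a hypothetical encoding)."""
    for vals in product(universe, repeat=len(variables)):
        sub = dict(zip(variables, vals))
        yield head.format(**sub), [b.format(**sub) for b in body]

# The rule  p(X) :- q(X, Y).  grounded over the universe {a, b}:
for inst in ground_instances("p({X})", ["q({X},{Y})"], ["X", "Y"], ["a", "b"]):
    print(inst)   # ('p(a)', ['q(a,a)']), ('p(a)', ['q(a,b)']), ...
```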
This paper proposes a new methodology for early validation of high-level requirements on cyber-physical systems, with the aim of improving their quality and thus lowering the chances of specification errors propagating into later stages of development, where they are much more expensive to fix. The paper presents a transformation of a real-world requirements specification of a medical device—the Patient-Controlled Analgesia (PCA) Pump—into an Event Calculus model that is then evaluated using Answer Set Programming and the s(CASP) system. The evaluation under s(CASP) allowed deductive as well as abductive reasoning about the specified functionality of the PCA pump on the conceptual level, with minimal implementation- or design-dependent influences, and led to the fully automatic detection of nuanced violations of critical safety properties. Further, the paper discusses the scalability and non-termination challenges faced in the evaluation and the techniques proposed to (partially) solve them. Finally, ideas are presented for improving s(CASP) to overcome its remaining evaluation limitations and to increase its expressiveness.
The dominating set reconfiguration problem asks, for a given dominating set problem and two of its feasible solutions, whether one is reachable from the other via a sequence of feasible solutions subject to a certain adjacency relation. This problem is PSPACE-complete in general. The concept of a dominating set is known to be quite useful for analyzing wireless networks, social networks, and sensor networks. We develop an approach to solving the dominating set reconfiguration problem based on answer set programming (ASP). Our declarative approach relies on a high-level ASP encoding, and both the grounding and solving tasks are delegated to an ASP-based combinatorial reconfiguration solver. To evaluate the effectiveness of our approach, we conduct experiments on a newly created benchmark set.
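To fix intuition, here is a brute-force Python sketch of the problem for tiny graphs, assuming a token-jumping adjacency relation (swap one vertex for another); the authors' approach instead delegates this search to an ASP-based solver.

```python
from itertools import combinations

def is_dominating(graph, s):
    """graph: dict vertex -> set of neighbours; s dominates if every
    vertex is in s or adjacent to some member of s."""
    return all(v in s or graph[v] & s for v in graph)

def adjacent(s, t):
    """Token jumping: exactly one vertex replaced by another."""
    return len(s - t) == 1 and len(t - s) == 1

def reachable(graph, start, goal, k):
    """Breadth-first search over all size-k dominating sets (tiny inputs only)."""
    sols = {frozenset(c) for c in combinations(graph, k)
            if is_dominating(graph, set(c))}
    frontier, seen = [frozenset(start)], {frozenset(start)}
    while frontier:
        s = frontier.pop(0)
        if s == frozenset(goal):
            return True
        for t in sols:
            if t not in seen and adjacent(s, t):
                seen.add(t)
                frontier.append(t)
    return False
```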
Recent efforts in interpreting convolutional neural networks (CNNs) focus on translating the activations of CNN filters into stratified answer set program (ASP) rule-sets. CNN filters are known to capture high-level image concepts, so each predicate in the rule-set is mapped to the concept that its corresponding filter represents. The rule-set thus exemplifies the decision-making process of the CNN with respect to the concepts it learns for an image classification task. These rule-sets help reveal the biases in CNNs, although correcting the biases remains a challenge. We introduce a neurosymbolic framework called NeSyBiCor for bias correction in a trained CNN. Given symbolic concepts, as ASP constraints, that the CNN is biased toward, we convert the concepts to their corresponding vector representations. Then, the CNN is retrained using our novel semantic similarity loss, which pushes the filters away from learning the undesired concepts (and toward the desired ones). The final ASP rule-set obtained after retraining satisfies the constraints to a high degree, thus showing the revision in the knowledge of the CNN. We demonstrate that our NeSyBiCor framework successfully corrects the biases of CNNs trained with subsets of classes from the Places dataset, while sacrificing minimal accuracy and improving interpretability.
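One plausible reading of such a loss, sketched in PyTorch with cosine similarity (the paper's exact formulation may differ; all tensor shapes and names are assumptions):

```python
import torch
import torch.nn.functional as F

def semantic_similarity_loss(filter_acts, undesired, desired):
    """Hypothetical rendering of the idea: push filter activation vectors away
    from undesired concept embeddings and toward desired ones.
    filter_acts: (n_filters, d); undesired/desired: (n_concepts, d)."""
    sim_undes = F.cosine_similarity(filter_acts.unsqueeze(1),
                                    undesired.unsqueeze(0), dim=-1)
    sim_des = F.cosine_similarity(filter_acts.unsqueeze(1),
                                  desired.unsqueeze(0), dim=-1)
    # Penalize similarity to undesired concepts, reward similarity to desired.
    return sim_undes.mean() - sim_des.mean()

# During retraining this term would be added to the usual task loss, e.g.:
# loss = F.cross_entropy(logits, labels) + lam * semantic_similarity_loss(...)
```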
The development of large language models (LLMs), such as GPT, has enabled the construction of several socialbots, like ChatGPT, that are receiving a lot of attention for their ability to simulate a human conversation. However, the conversation is not guided by a goal and is hard to control. In addition, because LLMs rely more on pattern recognition than on deductive reasoning, they can give confusing answers and have difficulty integrating multiple topics into a cohesive response. These limitations often lead the LLM to deviate from the main topic to keep the conversation interesting. We propose AutoCompanion, a socialbot that uses an LLM to translate natural language into predicates (and vice versa) and employs commonsense reasoning based on answer set programming (ASP) to hold a social conversation with a human. In particular, we rely on s(CASP), a goal-directed implementation of ASP, as the backend. This paper presents the framework design and how an LLM is used to parse user messages and to generate a response from the s(CASP) engine output. To validate our proposal, we describe (real) conversations in which the chatbot’s goal is to keep the user entertained by talking about movies and books, and in which s(CASP) ensures (i) correctness of answers, (ii) coherence (and precision) during the conversation—which it dynamically regulates to achieve its specific purpose—and (iii) no deviation from the main topic.
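A minimal sketch of such a neurosymbolic loop, assuming an `scasp` executable on the PATH and hypothetical `llm_parse`/`llm_render` helpers (none of this is the paper's actual code):

```python
import subprocess
import tempfile

def ask_scasp(program: str, query: str) -> str:
    """Run a goal against an s(CASP) program; assumes an `scasp` binary."""
    with tempfile.NamedTemporaryFile('w', suffix='.pl', delete=False) as f:
        f.write(program + f"\n?- {query}.\n")   # embed the query in the file
        path = f.name
    return subprocess.run(['scasp', path], capture_output=True, text=True).stdout

def chat_turn(user_msg, kb, llm_parse, llm_render):
    """One turn: natural language -> predicates -> s(CASP) -> natural language."""
    goal = llm_parse(user_msg)       # hypothetical: LLM maps text to an ASP goal
    answer = ask_scasp(kb, goal)     # goal-directed commonsense reasoning
    return llm_render(answer)        # hypothetical: LLM verbalizes the answer
```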