The assist-as-needed (AAN) controller is effective in robot-assisted rehabilitation. However, variations in subject engagement under a fixed controller often lead to unsatisfactory results. Therefore, an adaptive AAN that adjusts control parameters based on individualized engagement is essential to further enhance the training effect. Nevertheless, current approaches mainly focus on within-trial real-time engagement estimation, where measurement noise may cause improper evaluation of engagement. In addition, most studies on human-in-the-loop optimization strategies modulate the controller with greedy strategies, which are prone to falling into local optima. These shortcomings could significantly limit the efficacy of AAN. This paper proposes an adaptive AAN that promotes engagement by providing subjects with a subject-adaptive assistance level based on trial-wise engagement estimation and performance. First, engagement is estimated from energy information, which assesses the work done by the subject over a full trial to reduce the influence of measurement outliers. Second, the AAN controller is adapted by Bayesian optimization (BO) to maximize the subject’s performance according to historical trial-wise performance; the BO algorithm handles noisy signals well within a limited number of steps. Experiments with ten healthy subjects resulted in a 34.59% decrease in average trajectory error and a 9.71% reduction in energy consumption, verifying the superiority of the proposed method over prior work. These results suggest that the proposed method could improve the effect of upper limb rehabilitation.
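The trial-wise adaptation loop described above can be sketched in a few lines. Note this is a deliberately simplified stand-in — a UCB bandit over a discretized assistance level rather than the paper’s Gaussian-process Bayesian optimization — and the performance function, levels, and noise model below are all hypothetical:

```python
import math
import random

random.seed(0)
levels = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate assistance levels
sums, counts = [0.0] * len(levels), [0] * len(levels)

def performance(level):
    # Hypothetical noisy trial-wise performance, peaked at level 0.5.
    return 1.0 - (level - 0.5) ** 2 + random.gauss(0, 0.05)

def ucb(i, trial):
    # Upper confidence bound: mean performance plus exploration bonus.
    if counts[i] == 0:
        return float('inf')              # try each level at least once
    return sums[i] / counts[i] + math.sqrt(2 * math.log(trial) / counts[i])

for trial in range(1, 101):              # 100 training trials
    i = max(range(len(levels)), key=lambda j: ucb(j, trial))
    sums[i] += performance(levels[i])    # observe noisy trial outcome
    counts[i] += 1

best = max(range(len(levels)), key=lambda i: sums[i] / counts[i])
print(levels[best])                      # assistance level with best mean
```

The exploration bonus plays the role that the GP posterior variance plays in full BO: levels with few trials are revisited even if their current mean looks poor, which is what guards against the local optima the abstract attributes to greedy strategies.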
Orthomodular logic is a weakening of quantum logic in the sense of Birkhoff and von Neumann. Orthomodular logic is shown to be a nonlinear noncommutative logic. Sequents are given a physically motivated semantics that is consistent with exactly one semantics for propositional formulas that use negation, conjunction, and implication. In particular, implication must be interpreted as the Sasaki arrow, which satisfies the deduction theorem in this logic. As an application, this deductive system is extended to two systems of predicate logic: the first is sound for Takeuti’s quantum set theory, and the second is sound for a variant of Weaver’s quantum logic.
Artificial Intelligence plays a major role in supporting and improving smart manufacturing and Industry 4.0 by enabling the automation of different types of tasks manually performed by domain experts. In particular, assessing the compliance of a product with its schematic is a time-consuming and error-prone process. In this paper, we address this problem in a specific industrial scenario: we define a Neuro-Symbolic approach for automating the compliance verification of electrical control panels. Our approach is based on the combination of Deep Learning techniques with Answer Set Programming (ASP) and allows for identifying possible anomalies and errors in the final product even when a very limited amount of training data is available. The experiments conducted on a real test case provided by an Italian company operating in electrical control panel production demonstrate the effectiveness of the proposed approach.
Adaptation-relevant predictions of climate change are often derived by combining climate model simulations in a multi-model ensemble. Model evaluation methods used in performance-based ensemble weighting schemes have limitations in the context of high-impact extreme events. We introduce a locally time-invariant method for evaluating climate model simulations with a focus on assessing the simulation of extremes. We explore the behavior of the proposed method in predicting extreme heat days in Nairobi and provide comparative results for eight additional cities.
Robust feature selection is vital for creating reliable and interpretable machine-learning (ML) models. When designing statistical prediction models in cases where domain knowledge is limited and underlying interactions are unknown, choosing the optimal set of features is often difficult. To mitigate this issue, we introduce a multidata (M) causal feature selection approach that simultaneously processes an ensemble of time series datasets and produces a single set of causal drivers. This approach uses the causal discovery algorithms PC$_1$ or PCMCI that are implemented in the Tigramite Python package. These algorithms utilize conditional independence tests to infer parts of the causal graph. Our causal feature selection approach filters out causally spurious links before passing the remaining causal features as inputs to ML models (multiple linear regression and random forest) that predict the targets. We apply our framework to the statistical intensity prediction of Western Pacific tropical cyclones (TCs), for which it is often difficult to accurately choose drivers and their dimensionality reduction (time lags, vertical levels, and area-averaging). Using more stringent significance thresholds in the conditional independence tests helps eliminate spurious causal relationships, thus helping the ML model generalize better to unseen TC cases. M-PC$_1$ with a reduced number of features outperforms M-PCMCI, noncausal ML, and other feature selection methods (lagged correlation and random), even slightly outperforming feature selection based on explainable artificial intelligence. The optimal causal drivers obtained from our causal feature selection help improve our understanding of underlying relationships and suggest new potential drivers of TC intensification.
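The PC-style pruning the abstract describes rests on conditional independence tests. A minimal sketch of one such test — partial correlation of two series given a single conditioning series, computed by regressing out the confounder — is shown below in plain Python; the synthetic data, noise levels, and function names are illustrative only, not the Tigramite implementation:

```python
import random

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def residual(a, z):
    """Residual of a after least-squares regression on z."""
    n = len(a)
    ma, mz = sum(a) / n, sum(z) / n
    beta = (sum((x - ma) * (w - mz) for x, w in zip(a, z))
            / sum((w - mz) ** 2 for w in z))
    return [x - ma - beta * (w - mz) for x, w in zip(a, z)]

def partial_corr(x, y, z):
    """Correlation of x and y with the influence of z regressed out."""
    return corr(residual(x, z), residual(y, z))

# x and y are both driven by a common cause z, so they correlate
# strongly in the marginal but the link vanishes conditional on z.
random.seed(1)
z = [random.gauss(0, 1) for _ in range(200)]
x = [2 * w + random.gauss(0, 0.5) for w in z]
y = [-w + random.gauss(0, 0.5) for w in z]
print(round(corr(x, y), 2), round(partial_corr(x, y, z), 2))
```

A stringent significance threshold, as the abstract recommends, corresponds to only keeping a link when the partial correlation remains clearly nonzero after conditioning.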
Engineered products have economic, environmental, and social impacts, which comprise the major dimensions of sustainability. This paper seeks to explore interactions between design parameters when social impacts are incorporated into the concept development phase of the systems design process. Social impact evaluation is growing in importance, much as environmental impact consideration has in recent years in the design of engineered products. Concurrently, research into new airship design has increased. Airships have yet to be reintroduced at a large scale or for a range of applications in society. Although airships have the potential for positive environmental and economic impacts, the social impacts are still rarely considered. This paper presents a case study of the hypothetical introduction of airships in the Amazon region of Brazil to help local farmers transport their produce to market. It explores the design space in terms of both engineering parameters and social impacts using a discrete-event simulation to model the system. The social impacts are found to be dependent not only on the social factors and airship design parameters but also on the farmer-airship system, suggesting that socio-technical systems design will benefit from integrated social impact metric analysis.
This paper presents a language, Alda, that supports all of logic rules, sets, functions, updates, and objects as seamlessly integrated built-ins. The key idea is to support predicates in rules as set-valued variables that can be used and updated in any scope, and support queries using rules as either explicit or implicit automatic calls to an inference function. We have defined a formal semantics of the language, implemented a prototype compiler that builds on an object-oriented language that supports concurrent and distributed programming and on an efficient logic rule system, and successfully used the language and implementation on benchmarks and problems from a wide variety of application domains. We describe the compilation method and results of experimental evaluation.
This paper explores finite-time formation control of multi-agent systems (MASs) with high-order nonaffine nonlinear dynamics and saturated input. Based on active disturbance rejection control theory, an extended state observer is employed to identify the unknown nonaffine nonlinear functions in the MASs. The proposed control law, consisting of backstepping control, a tracking differentiator, and a finite-time performance function, is adopted for the MASs to achieve the desired formation while meeting performance requirements. An auxiliary dynamic compensator is introduced to correct the control deviation caused by input saturation. Lyapunov stability theory is utilized to analyze the stability of the closed-loop system, which guarantees that the formation tracking error converges to an arbitrarily small neighborhood around zero in finite time. Finally, the simulation results show that, compared to the adaptive, cooperative learning, and virtual structure methods, the proposed control algorithm has stronger tracking ability and a faster settling time (1.8 s) under the influence of nonaffine nonlinear uncertainties. The integral square error of the proposed formation control strategy is 0.16, much smaller than that of the abovementioned methods, demonstrating the validity and feasibility of the proposed control strategy.
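The key mechanism here — an extended state observer (ESO) estimating an unknown lumped disturbance — can be illustrated on a toy first-order plant. The plant, gains, and constant disturbance below are our own illustrative choices (standard bandwidth parameterization), not the paper’s high-order MAS model:

```python
# Linear ESO for the plant x' = u + d, where d is unknown to the
# observer. The extended state z2 estimates d; z1 tracks x.
dt, w = 0.001, 20.0            # Euler step size and observer bandwidth
l1, l2 = 2 * w, w * w          # gains placing both observer poles at -w
x, z1, z2 = 0.0, 0.0, 0.0
u, d = 0.5, 1.5                # known input, unknown disturbance

for _ in range(5000):          # 5 s of simulated time
    x += dt * (u + d)          # true plant evolution
    e = x - z1                 # observation error drives both states
    z1 += dt * (z2 + u + l1 * e)
    z2 += dt * (l2 * e)

print(round(z2, 2))            # prints 1.5: z2 has converged to d
```

In an ADRC loop, the controller would subtract the estimate z2 from its control effort, canceling the unknown nonlinearity — the role the observer plays in the formation controller above.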
Automatic differentiation (AD) is a range of algorithms to compute the numeric value of a function’s (partial) derivative, where the function is typically given as a computer program or abstract syntax tree. AD has become immensely popular as part of many learning algorithms, notably for neural networks. This paper uses Prolog to systematically derive gradient-based forward- and reverse-mode AD variants from a simple executable specification: evaluation of the symbolic derivative. Along the way we demonstrate that several Prolog features (DCGs, co-routines) contribute to the succinct formulation of the algorithm. We also discuss two applications in probabilistic programming that are enabled by our Prolog algorithms. The first is parameter learning for the Sum-Product Loop Language and the second consists of both parameter learning and variational inference for probabilistic logic programming.
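The forward mode that the paper derives in Prolog can be compactly illustrated with dual numbers — sketched here in Python rather than the paper’s Prolog, with a `Dual` class of our own devising:

```python
class Dual:
    """A number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (a + b eps)(c + d eps) = ac + (ad + bc) eps
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def grad(f, x):
    """Derivative of f at x: seed the dual part with 1 and evaluate."""
    return f(Dual(x, 1.0)).der

# f(x) = 3x^2 + 2x, so f'(4) = 6*4 + 2 = 26
print(grad(lambda x: 3 * x * x + 2 * x, 4.0))   # prints 26.0
```

Each arithmetic operation propagates the derivative alongside the value, which is exactly the evaluation-order view of forward-mode AD; reverse mode instead records the computation and sweeps it backwards.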
This work presents a new formulation to holistically control four cooperative multi-rotor drones arranged in two pairs. This approach uses a modular relative Jacobian whose components are the Jacobians of the individual drones. This type of controller relies mainly on the relative motion between the drones, consequently releasing unnecessary constraints inherent to controlling the drones in absolute motion. We present the derivations of all the equations of the modular relative Jacobian needed to control the four multi-rotor drones, as well as the derivation of the Jacobian for each drone. We implement our proposed method in the Gazebo RotorS simulator using four hexa-rotor drones modeled after the Ascending Technologies Firefly. We present the simulation results and analyze them to show the effectiveness of our proposed approach.
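The modular construction rests on a simple identity: the relative task velocity between two vehicles is the difference of their absolute velocities, so the relative Jacobian is the individual Jacobians composed blockwise as [-J1 | J2]. A toy 2-DOF sketch (matrices and rates here are illustrative numbers, not the paper’s hexa-rotor model):

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def relative_jacobian(J1, J2):
    # J_rel @ [q1_dot; q2_dot] = J2 @ q2_dot - J1 @ q1_dot,
    # so the block form is [-J1 | J2].
    return [[-a for a in r1] + list(r2) for r1, r2 in zip(J1, J2)]

J1 = [[1.0, 0.0], [0.0, 1.0]]    # task Jacobian of drone 1
J2 = [[2.0, 0.0], [0.0, 2.0]]    # task Jacobian of drone 2
Jr = relative_jacobian(J1, J2)

q_dot = [0.1, 0.2, 0.3, 0.4]     # stacked rates of both drones
print(matvec(Jr, q_dot))         # relative task-space velocity
```

Because the blocks are just the per-drone Jacobians, adding or swapping a drone only replaces one block — the modularity the abstract emphasizes.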
There has been a rapid rise of interest in the potential of digital twins to transform a vast range of Cyber-Physical System (CPS) applications, from national infrastructure to surgical robots. But what frameworks, methods and tools are needed to create and maintain digital twins on which we can depend?
Atmospheric aerosols influence the Earth’s climate, primarily by affecting cloud formation and scattering visible radiation. However, aerosol-related physical processes in climate simulations are highly uncertain. Constraining these processes could help improve model-based climate predictions. We propose a scalable statistical framework for constraining the parameters of expensive climate models by comparing model outputs with observations. Using the C3.AI Suite, a cloud computing platform, we use a perturbed parameter ensemble of the UKESM1 climate model to efficiently train a surrogate model. A method for estimating a data-driven model discrepancy term is described. The strict bounds method is applied to quantify parametric uncertainty in a principled way. We demonstrate the scalability of this framework with 2 weeks’ worth of simulated aerosol optical depth data over the South Atlantic and Central African region, written from the model every 3 hr and matched in time to twice-daily MODIS satellite observations. When constraining the model using real satellite observations, we establish constraints on combinations of two model parameters using much higher time-resolution outputs from the climate model than previous studies. This result suggests that within the limits imposed by an imperfect climate model, potentially very powerful constraints may be achieved when our framework is scaled to the analysis of more observations and for longer time periods.
Time is a crucial factor in modelling dynamic behaviours of intelligent agents: activities have a determined temporal duration in a real-world environment, and previous actions influence agents’ behaviour. In this paper, we propose a language for modelling concurrent interaction between agents that also allows the specification of temporal intervals in which particular actions occur. Such a language exploits a timed version of Abstract Argumentation Frameworks to realise a shared memory used by the agents to communicate and reason on the acceptability of their beliefs with respect to a given time interval. An interleaving model on a single processor is used for basic computation steps, with maximum parallelism for time elapsing. Following this approach, only one of the enabled agents is executed at each moment. To demonstrate the capabilities of the language, we also show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents. Lastly, we present an implementation of the language that can be accessed via a web interface.
Given $\alpha > 0$ and an integer $\ell \geq 5$, we prove that every sufficiently large $3$-uniform hypergraph $H$ on $n$ vertices in which every two vertices are contained in at least $\alpha n$ edges contains a copy of $C_\ell ^{-}$, a tight cycle on $\ell$ vertices minus one edge. This improves a previous result by Balogh, Clemen, and Lidický.
Answer Set Programming with Quantifiers ASP(Q) extends Answer Set Programming (ASP) to allow for declarative and modular modeling of problems from the entire polynomial hierarchy. The first implementation of ASP(Q), called QASP, was based on a translation to Quantified Boolean Formulae (QBF) with the aim of exploiting the well-developed and mature QBF-solving technology. However, the QBF encoding employed in QASP is very general and might produce formulas that are hard to evaluate for existing QBF solvers because of the large number of symbols and subclauses. In this paper, we present a new implementation that builds on the ideas of QASP and features both a more efficient encoding procedure and new optimized encodings of ASP(Q) programs in QBF. The new encodings produce smaller formulas (in terms of the number of quantifiers, variables, and clauses) and result in a more efficient evaluation process. An algorithm selection strategy automatically combines several QBF-solving back-ends to further increase performance. An experimental analysis, conducted on known benchmarks, shows that the new system outperforms QASP.
Robinson’s unification algorithm can be identified as the underlying machinery both of C. Meredith’s rule D (condensed detachment) in propositional logic and of the construction of principal types in lambda calculus and combinatory logic. In combinatory logic, it also plays a crucial role in the construction of Meyer, Bunder & Powers’ Fool’s model. This paper considers pattern matching, the unidirectional variant of unification, as a basis for logical inference, typing, and a very simple and natural model for untyped combinatory logic. An analysis of the new typing scheme enables us to characterize a large class of terms of combinatory logic that do not change their principal type under weak reduction. We also consider the question of whether the major or the minor premise should be used as the fixed pattern.
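Pattern matching differs from full unification in that variables occur only on one side: the pattern is instantiated, the term is left untouched. A minimal one-way matcher (the term representation — tuples for applications, `'?'`-prefixed strings for variables — is our own illustrative choice, not the paper’s notation):

```python
def match(pattern, term, subst=None):
    """Return a substitution mapping pattern variables to subterms of
    term, or None if the pattern does not match."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:                  # variable bound earlier:
            return subst if subst[pattern] == term else None
        subst[pattern] = term                 # fresh binding
        return subst
    if (isinstance(pattern, tuple) and isinstance(term, tuple)
            and len(pattern) == len(term)):
        for p, t in zip(pattern, term):       # match argument-wise
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None # constants must agree

# The term K x y matches the pattern (K, ?a, ?b):
print(match(('K', '?a', '?b'), ('K', 'x', 'y')))
```

Unlike unification, no occurs check or variable-variable case is needed, which is part of what makes matching a simpler basis for inference and typing.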
Community detection is one of the most important methodological fields of network science, and one which has attracted a significant amount of attention over the past decades. This area deals with the automated division of a network into fundamental building blocks, with the objective of providing a summary of its large-scale structure. Despite its importance and widespread adoption, there is a noticeable gap between what is arguably the state-of-the-art and the methods which are actually used in practice in a variety of fields. The Elements attempts to address this discrepancy by dividing existing methods according to whether they have a 'descriptive' or an 'inferential' goal. While descriptive methods find patterns in networks based on context-dependent notions of community structure, inferential methods articulate a precise generative model, and attempt to fit it to data. In this way, they are able to provide insights into formation mechanisms and separate structure from noise. This title is also available as open access on Cambridge Core.
Cirrus clouds are key modulators of Earth’s climate. Their dependencies on meteorological and aerosol conditions are among the largest uncertainties in global climate models. This work uses 3 years of satellite and reanalysis data to study the link between cirrus drivers and cloud properties. We use a gradient-boosted machine learning model and a long short-term memory network with an attention layer to predict the ice water content and ice crystal number concentration. The models show that meteorological and aerosol conditions can predict cirrus properties with R² = 0.49. Feature attributions are calculated with SHapley Additive exPlanations to quantify the link between meteorological and aerosol conditions and cirrus properties. For instance, the minimum concentration of supermicron-sized dust particles required to cause a decrease in the predicted ice crystal number concentration is 2 × 10⁻⁴ mg/m³. Conditions in the 15 hr preceding an observation are predictive of all cirrus properties.
This work aims to classify catchments through the lens of causal inference and cluster analysis. In particular, it uses causal effects (CEs) of meteorological variables on river discharge, relying only on easily obtainable observational data. The proposed method combines time series causal discovery with CE estimation to develop features for a subsequent clustering step. Several ways to customize and adapt the features to the problem at hand are discussed. In an application example, the method is evaluated on 358 European river catchments. The resulting clusters are analyzed in terms of the causal mechanisms that drive them and their environmental attributes.