Cyber-Physical Systems (CPSs) combine cyber, physical and human activities through computing and network technologies, creating opportunities for benign and malign actions that affect organisations in both the physical and computational spheres. The US National Cyber Security Strategy (US White House, 2023) warns that this exposes crucial systems to disruption over a wide CPS attack surface. The UK National Cyber Security Centre Annual Review (UK National Cyber Security Centre, 2023) acknowledges that, although some organisations are evolving ‘a more holistic view of critical systems rather than purely physical assets’, this is not reflected in governance structures that still tend to treat cyber and physical security separately.
The complex physical processes inherent to rainfall make its prediction a challenging task. To contribute to the improvement of rainfall prediction, artificial neural network (ANN) models were developed using a multilayer perceptron (MLP) approach to predict monthly rainfall 2 months in advance for six geographically diverse weather stations across the Benin Republic. For this purpose, 12 lagged values of atmospheric data were used as predictors. The models were trained using data from 1959 to 2017 and tested over 4 years (2018–2021). The proposed method was compared to long short-term memory (LSTM) and climatology forecasts (CFs). Prediction performance was evaluated using five statistical measures: root mean square error, mean absolute error, mean absolute percentage error, coefficient of determination, and the Nash–Sutcliffe efficiency (NSE) coefficient. Furthermore, Taylor diagrams, violin plots, error box plots, and the Kruskal–Wallis test were used to assess the robustness of the models’ forecasts. The results revealed that MLP gives better results than LSTM and CF. The NSE obtained with the MLP, LSTM, and CF models during the test period ranges from 0.373 to 0.885, 0.297 to 0.875, and 0.335 to 0.845, respectively, depending on the weather station. Rainfall prediction was more accurate at higher latitudes across the country, with an NSE improvement of 0.512 using MLP, showing the effect of geographic region on prediction model results. In summary, this research has revealed the potential of ANN techniques for predicting monthly rainfall 2 months ahead, supplying valuable insights for decision-makers in the Republic of Benin.
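As a rough illustration of the setup this abstract describes, the sketch below (plain Python, all names invented; the actual MLP training is omitted) builds 12 lagged predictors to forecast a monthly series 2 months ahead:

```python
# Hypothetical sketch: constructing 12 lagged values as predictors for a
# target 2 months ahead, as in the abstract. The series here is a toy
# stand-in for monthly rainfall observations.

def make_lagged_dataset(series, n_lags=12, horizon=2):
    """Return (X, y): each row of X holds n_lags consecutive past values,
    and y is the value `horizon` months after the last lag."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])     # 12 past months of predictors
        y.append(series[t + horizon - 1])  # target 2 months ahead
    return X, y

series = list(range(20))  # toy monthly series
X, y = make_lagged_dataset(series)
```

In practice, X and y would be split chronologically (here, 1959–2017 for training and 2018–2021 for testing) before fitting the MLP.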
With the rise of deep reinforcement learning (RL) methods, many complex robotic manipulation tasks are being solved. However, harnessing the full power of deep learning requires large datasets. Online RL does not lend itself readily to this paradigm due to costly and time-consuming agent–environment interaction. Therefore, many offline RL algorithms have recently been proposed for learning robotic tasks. However, most such methods focus on single-task or multitask learning, which requires retraining whenever a new task must be learned. Continuously learning tasks without forgetting previous knowledge, combined with the power of offline deep RL, would allow us to scale the number of tasks by adding them one after another. This paper investigates the effectiveness of regularisation-based methods, such as synaptic intelligence, for sequentially learning image-based robotic manipulation tasks in an offline-RL setup. We evaluate the performance of this combined framework against common challenges of sequential learning: catastrophic forgetting and forward knowledge transfer. We performed experiments with different task combinations to analyse the effect of task ordering, and also investigated the effect of the number of object configurations and the density of robot trajectories. We found that learning tasks sequentially helps in the retention of knowledge from previous tasks, thereby reducing the time required to learn a new task. Regularisation-based approaches for continual learning, like the synaptic intelligence method, help mitigate catastrophic forgetting but show only limited transfer of knowledge from previous tasks.
Payroll management is a critical business task that is subject to a large number of rules, which vary widely between companies, sectors, and countries. Moreover, the rules are often complex and change regularly. Therefore, payroll management systems must be flexible in design. In this paper, we suggest an approach based on a flexible answer set programming (ASP) model and an easy-to-read tabular representation based on the Decision Model and Notation (DMN) standard. It allows HR consultants to represent complex rules without the need for a software engineer and, ultimately, to design payroll systems for a variety of different scenarios. We show how the multi-shot solving capabilities of the clingo ASP system can be used to reach the performance necessary to handle real-world instances.
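To give a flavour of the tabular rule representation the abstract mentions, here is a minimal sketch (plain Python, not the authors' ASP/DMN system; the rule contents and the "first" hit policy are illustrative) of evaluating a DMN-style decision table for a payroll rule:

```python
# Illustrative DMN-style decision table: each row is (conditions, output).
# The first row whose conditions all hold wins ("first" hit policy).

def first_hit(table, facts):
    """Return the output of the first matching rule, or None."""
    for conditions, output in table:
        if all(cond(facts) for cond in conditions):
            return output
    return None

# Invented overtime-rate table: hourly multiplier by hours worked / shift type.
overtime_rate = [
    ([lambda f: f["hours"] > 48],                       1.5),
    ([lambda f: f["hours"] > 38, lambda f: f["night"]], 1.3),
    ([lambda f: True],                                  1.0),  # default
]

rate = first_hit(overtime_rate, {"hours": 42, "night": True})
```

In the paper's setting, such tables are translated into ASP rules so that clingo can solve whole payroll instances, rather than being interpreted row by row as here.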
We have developed probabilistic models to estimate the likelihood of harmful algae presence and outbreaks along the Norwegian coast, which can help optimize the national monitoring program and the planning of mitigation actions. We employ support vector machines to calibrate probabilistic models for estimating the presence and harmful abundance (HA) of eight toxic algae found along the Norwegian coast, including Alexandrium spp., Alexandrium tamarense, Dinophysis acuta, Dinophysis acuminata, Dinophysis norvegica, Pseudo-nitzschia spp., Protoceratium reticulatum, and Azadinium spinosum. The inputs are sea surface temperature, photosynthetically active radiation, mixed layer depth, and sea surface salinity. The probabilistic models are trained with data from 2006 to 2013 and tested with data from 2014 to 2019. The presence models demonstrate good statistical performance across all taxa, with R (observed presence frequency vs. predicted probability) ranging from 0.69 to 0.98 and root mean squared error ranging from 0.84% to 7.84%. Predicting the probability of HA is more challenging, and the HA models achieve skill for only four taxa (Alexandrium spp., A. tamarense, D. acuta, and A. spinosum). There are large differences between taxa in seasonal and geographical variability and in sensitivity to the model inputs, which are presented and discussed. The models estimate geographical regions and periods with relatively higher risk of toxic species presence and HA, and may help optimize harmful algae monitoring. The method can be extended to other regions, as it relies only on remote sensing and model data as input and on data from national toxic algae monitoring programs.
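Calibrating an SVM to output probabilities is typically done by fitting a sigmoid to the decision values (Platt scaling). The sketch below illustrates only that final mapping step, with invented parameters A and B; in practice they are fitted on held-out data, and this is not the authors' calibration code:

```python
# Platt-style mapping from an SVM decision value to a presence probability.
# A and B are illustrative constants; real calibration fits them to data.
import math

def platt_probability(decision_value, A=-1.0, B=0.0):
    return 1.0 / (1.0 + math.exp(A * decision_value + B))

p = platt_probability(0.0)  # a point on the margin maps to probability 0.5
```

Larger (more confidently positive) decision values map monotonically to higher presence probabilities.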
This paper presents a compliant variable admittance adaptive fixed-time sliding mode control (SMC) algorithm for trajectory tracking of robotic manipulators. Specifically, a compliant variable admittance algorithm and an adaptive fixed-time SMC algorithm are combined to construct a double-loop control structure. In the outer loop, the variable admittance algorithm is developed to adjust admittance parameters during a collision to minimize the collision time, which gives the robot a compliance property and reduces the influence of rigid collisions. Then, by employing Lyapunov theory and fixed-time stability theory, a new nonsingular sliding mode manifold is proposed and an adaptive fixed-time SMC algorithm is presented in the inner loop. More precisely, this approach enables rapid convergence, enhanced steady-state tracking precision, and a settling time that is independent of the system's initial states. Finally, the effectiveness and improved performance of the proposed algorithm are demonstrated through extensive simulations and experimental results.
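A standard outer-loop admittance law maps a contact force to a compliant reference motion via a virtual mass–damper–spring model. The sketch below (illustrative constants, fixed parameters; the paper's scheme additionally varies M, D, K during contact) integrates such a model with semi-implicit Euler steps:

```python
# Minimal admittance law sketch: M*xdd + D*xd + K*x = f_ext,
# integrated with semi-implicit Euler. All parameter values are invented.

def admittance_step(x, xd, f_ext, M=1.0, D=10.0, K=100.0, dt=0.001):
    xdd = (f_ext - D * xd - K * x) / M   # virtual dynamics
    xd_new = xd + xdd * dt
    x_new = x + xd_new * dt
    return x_new, xd_new

x, xd = 0.0, 0.0
for _ in range(1000):                    # 1 s of a constant 1 N contact force
    x, xd = admittance_step(x, xd, f_ext=1.0)
# x settles toward the static deflection f_ext / K = 0.01 m
```

The inner-loop SMC would then track the reference x produced by this outer loop.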
In a Model Predictive Control (MPC) setting, the precise simulation of the behavior of the system over a finite time window is essential. This application-oriented benchmark study focuses on a robot arm that exhibits various nonlinear behaviors. For this arm, we have a physics-based model with approximate parameter values and an open benchmark dataset for system identification. However, the long-term simulation of this model quickly diverges from the actual arm’s measurements, indicating its inaccuracy. We compare the accuracy of black-box and purely physics-based approaches with several physics-informed approaches. These involve different combinations of a neural network’s output with information from the physics-based model or feeding the physics-based model’s information into the neural network. One of the physics-informed model structures can improve accuracy over a fully black-box model.
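One of the combinations the abstract describes adds a learned correction to the physics-based model's output. The sketch below shows only that residual structure, with a hand-set linear function standing in for the neural network (all numbers invented):

```python
# Residual physics-informed structure: hybrid prediction = physics-based
# model output + learned correction. The "NN" here is a trivial stand-in.

def physics_model(x):
    return 2.0 * x        # physics model with an approximate parameter

def residual_nn(x):
    return 0.5 * x        # learned correction (stand-in for a neural network)

def hybrid_model(x):
    return physics_model(x) + residual_nn(x)

y = hybrid_model(2.0)     # physics (4.0) plus correction (1.0)
```

The alternative structure mentioned in the abstract instead feeds the physics model's output in as an extra input to the network.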
This book is designed to provide in-depth knowledge on how search plays a fundamental role in problem solving. Meant for undergraduate and graduate students pursuing courses in computer science and artificial intelligence, it covers a wide spectrum of search methods. Readers will be able to begin with simple approaches and gradually progress to more complex algorithms applied to a variety of problems. It demonstrates that search is all-pervasive in artificial intelligence and equips the reader with the relevant skills. The text starts with an introduction to intelligent agents and search spaces. Basic search algorithms like depth-first search and breadth-first search are the starting points. Then, it proceeds to discuss heuristic search algorithms, stochastic local search, algorithm A*, and problem decomposition. It also examines how search is used in playing board games, deduction in logic, and automated planning. The book concludes with coverage of constraint satisfaction.
This paper deals with generally routed, pre-bent cable-driven continuum robots (CCR). A CCR consists of a flexible backbone to which multiple disks are attached. Cables are passed through holes in the disks and, when pulled, deform the flexible backbone so that the CCR can attain different shapes based on the cable routing and backbone configuration. An optimization-based approach, using minimization of strain energy, is shown to give good results for the pose and motion of the CCR and to determine contact with external objects. The pose, motion, and contact obtained from the model are shown to match very well with experimental results obtained from a 3D-printed CCR. An algorithm is proposed to generate the pre-bent backbone for a CCR which, on actuation, can attain a desired shape. Using the algorithm, three 3D-printed CCRs with pre-bent backbones are fabricated, and these are used to demonstrate a compliant gripper that can grip a spherical object in a manner similar to tentacles; another three-fingered gripper with straight-backbone CCRs is used to orient a square object gripped at the end.
In this paper, a method of planning the expanded S-curve trajectory of robotic manipulators is proposed to minimize the execution time as well as to achieve smoother trajectory generation in the deceleration stage for point-to-point motions. An asymmetric parameter is added to the piecewise sigmoid function for an improved jerk profile. This asymmetric profile is continuous and infinitely differentiable. Based on this profile, two analytical algorithms are presented. One determines the suitable time intervals of the trajectory satisfying time optimality under the kinematic constraints, and the other determines the asymmetric parameter generating the minimum execution time. Also, a calculation procedure for time-scaled synchronization of all joints is given to reduce unnecessary loads on the actuators. The velocity, acceleration, jerk, and snap (the derivative of jerk) of the joints and the end-effector are equal to zero at the two end points of the motion. Simulation results on 3-DOF and 6-DOF robotic manipulators show that our approach effectively reduces the jerk and snap of the deceleration stage while decreasing the total execution time. Also, analysis of a single-DOF mass-spring-damper system indicates that the residual vibration could be reduced by 10% more than with the benchmark techniques when velocity, acceleration, and jerk are limited to 1.24 m/s, 6 m/s2, and 80 m/s3, respectively, and the displacement is set to 0.8 m. These results show that the proposed profile reduces residual vibrations well, demonstrating an important characteristic that makes it suitable for point-to-point motion.
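As a simplified stand-in for the asymmetric piecewise sigmoid profile described above, the sketch below uses a generalized logistic (Richards) curve, which is likewise infinitely differentiable and has an exponent `a` acting as an asymmetry parameter (all constants invented; this is not the paper's exact profile):

```python
# Illustrative asymmetric S-curve velocity profile based on the generalized
# logistic function. `a` skews the curve; a = 1 recovers the symmetric case.
import math

def s_curve_velocity(t, v_max=1.0, k=10.0, t0=0.5, a=1.0):
    return v_max / (1.0 + math.exp(-k * (t - t0))) ** a

v_start = s_curve_velocity(0.0)          # near 0 at motion start
v_end = s_curve_velocity(1.0)            # near v_max at motion end
v_mid_skewed = s_curve_velocity(0.5, a=2.0)  # asymmetry shifts the midpoint
```

Because the profile is smooth to all orders, velocity, acceleration, jerk, and snap can all be driven close to zero at the motion end points, which is the property the abstract exploits.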
This article is on algorithmically generated memories: data on past events that are stored and automatically ranked and classified by digital platforms, before they are presented to the user as memories. By mobilising Henri Bergson's philosophy, I centre my analysis on three of their aspects: the spatialisation and calculation of time in algorithmic systems, algorithmic remembrance, and algorithmic perception. I argue that algorithmically generated memories are a form of automated remembrance best understood as perception, and not recollection. Perception never captures the totality of our surroundings but is partial, and the parts of the world we perceive are the parts that are of interest to us. When conscious beings perceive, our perception is always coupled with memory, which allows us to transcend the immediate needs of our body. I argue that algorithmic systems based on machine learning can perceive, but that they cannot remember. As such, their perception operates only in the present. The present they perceive in is characterised by immense amounts of data that are beyond human perceptive capabilities. I argue that perception relates to a capacity to act, as an extended field of perception involves a greater power to act within what one perceives. As such, our memories are increasingly governed by a perception that operates in a present beyond human perceptual capacities, motivated by interests and needs that lie somewhat beyond the interests and needs formulated by humans. Algorithmically generated memories are not only trying to remember for us; they are also perceiving for us.
Given a graph $F$, we consider the problem of determining the densest possible pseudorandom graph that contains no copy of $F$. We provide an embedding procedure that improves a general result of Conlon, Fox, and Zhao which gives an upper bound on the density. In particular, our result implies that optimally pseudorandom graphs with density greater than $n^{-1/3}$ must contain a copy of the Petersen graph, while the previous best result gives the bound $n^{-1/4}$. Moreover, we conjecture that the exponent $1/3$ in our bound is tight. We also construct the densest known pseudorandom $K_{2,3}$-free graphs that are also triangle-free. Finally, we give a different proof that the densest known construction of clique-free pseudorandom graphs, due to Bishnoi, Ihringer, and Pepe, contains no large clique.
This paper discusses the challenges and opportunities in accessing data to improve workplace relations law enforcement, with reference to minimum employment standards such as wages and working hours regulation. Our paper highlights some innovative examples of government and trade union efforts to collect and use data to improve the detection of noncompliance. These examples reveal the potential of data science as a compliance tool but also suggest the importance of realizing a data ecosystem that is capable of being utilized by machine learning applications. The effectiveness of using data and data science tools to improve workplace law enforcement is impacted by the ability of regulatory actors to access useful data they do not collect or hold themselves. Under “open data” principles, government data is increasingly made available to the public so that it can be combined with nongovernment data to generate value. Through mapping and analysis of the Australian workplace relations data ecosystem, we show that data availability relevant to workplace law compliance falls well short of open data principles. However, we argue that with the right protocols in place, improved data collection and sharing will assist regulatory actors in the effective enforcement of workplace laws.
We investigate here the behaviour of a large typical meandric system, proving a central limit theorem for the number of components of a given shape. Our main tool is a theorem of Gao and Wormald that allows us to deduce a central limit theorem from the asymptotics of large moments of our quantities of interest.
When people are asked to recall their social networks, theoretical and empirical work tells us that they rely on shortcuts, or heuristics. Cognitive social structures (CSSs) are multilayer social networks where each layer corresponds to an individual’s perception of the network. With multiple perceptions of the same network, CSSs contain rich information about how these heuristics manifest, motivating the question: can we identify people who share the same heuristics? In this work, we propose a method for identifying cognitive structure across multiple network perceptions, analogous to how community detection aims to identify social structure in a network. To simultaneously model the joint latent social and cognitive structure, we study CSSs as three-dimensional tensors, employing low-rank nonnegative Tucker decompositions (NNTuck) to approximate the CSS—a procedure closely related to estimating a multilayer stochastic block model (SBM) from such data. We propose the resulting latent cognitive space as an operationalization of the sociological theory of social cognition by identifying individuals who share relational schema. In addition to modeling cognitively independent, dependent, and redundant networks, we propose a specific model instance and related statistical test for testing when there is social-cognitive agreement in a network: when the social and cognitive structures are equivalent. We use our approach to analyze four different CSSs and give insights into the latent cognitive structures of those networks.
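To make the tensor view of a CSS concrete, the toy sketch below (invented 3-person example, plain Python; the actual NNTuck decomposition is not shown) encodes each perceiver's reported network as a slice of a 3-way binary tensor:

```python
# A CSS as a 3-way tensor: X[i][j][k] = 1 if perceiver k reports a tie
# from person i to person j. Toy data for 3 people, all values invented.
n = 3
X = [[[0] * n for _ in range(n)] for _ in range(n)]
X[0][1][0] = 1   # person 0 perceives a tie 0 -> 1
X[0][1][1] = 1   # person 1 agrees
X[0][1][2] = 0   # person 2 does not perceive it

# A simple aggregate across perceivers: ties reported by a strict majority.
consensus = [[int(sum(X[i][j][k] for k in range(n)) > n / 2)
              for j in range(n)] for i in range(n)]
```

The paper's method goes beyond such simple aggregation: the Tucker decomposition's perceiver-mode factor groups individuals whose slices look alike, operationalizing "shared relational schema."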
In this paper, an online adaptive super-twisting sliding mode controller is proposed for a non-linear system. The adaptive controller is designed to deal with unknown dynamic uncertainties and achieve the best trajectory tracking. The adaptation is based on a Particle Swarm Optimization (PSO) algorithm that tunes the controller parameters online by minimizing an objective function. The novelty of this study is the online tuning of the parameters of the conventional super-twisting algorithm, which bypasses heavy offline calculation and avoids instability and abrupt changes in the controller’s parameters, extending actuator lifetime. This novel approach has been applied to an upper limb exoskeleton robot for arm rehabilitation. Despite changes in the dynamic model of the system, which differs from one patient to another due to the direct interactions between the wearer and the exoskeleton, this control technique preserves its robustness with respect to bounded external disturbances. The effectiveness of the proposed adaptive controller has been proved in simulation and then in real-time experiments with two human subjects. A comparison between the proposed approach and the classic super-twisting algorithm has been conducted. The obtained results show the performance and efficiency of the proposed controller.
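For readers unfamiliar with PSO, here is a minimal 1-D sketch of the optimizer itself (generic textbook PSO with invented hyperparameters; it is not the paper's controller tuner, which runs online against a tracking objective):

```python
# Minimal particle swarm optimization: each particle is pulled toward its
# personal best and the global best. Hyperparameters (0.7, 1.5, 1.5) invented.
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # personal bests
    gbest = min(pbest, key=f)           # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=f)
    return gbest

x_opt = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the paper's setting, the swarm would instead search over the super-twisting gains, with the objective measuring tracking error on the exoskeleton.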
We establish here an integral inequality for real log-concave functions, which can be viewed as an average monotone likelihood property. This inequality is then applied to examine the monotonicity of failure rates.
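For context on the application mentioned above, a standard background fact (not the paper's new inequality) connects log-concavity to failure rates:

```latex
% Standard fact (Bagnoli--Bergstrom style), stated for background only:
% a log-concave density has a nondecreasing failure (hazard) rate.
If $f$ is a log-concave density with distribution function $F$, then the
failure rate
\[
  h(x) \;=\; \frac{f(x)}{1 - F(x)}
\]
is nondecreasing in $x$ on the support of $f$.
```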
The $d$-process generates a graph at random by starting with an empty graph with $n$ vertices, then adding edges one at a time uniformly at random among all pairs of vertices which have degrees at most $d-1$ and are not mutually joined. We show that, in the evolution of a random graph with $n$ vertices under the $d$-process with $d$ fixed, with high probability, for each $j \in \{0,1,\dots,d-2\}$, the minimum degree jumps from $j$ to $j+1$ when the number of steps left is on the order of $\ln (n)^{d-j-1}$. This answers a question of Ruciński and Wormald. More specifically, we show that, when the last vertex of degree $j$ disappears, the number of steps left divided by $\ln (n)^{d-j-1}$ converges in distribution to the exponential random variable of mean $\frac{j!}{2(d-1)!}$; furthermore, these $d-1$ distributions are independent.
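The $d$-process itself is easy to simulate at small scale. The toy sketch below (illustrative, not from the paper; it enumerates candidate pairs naively, so it is only suitable for small $n$) runs the process to completion:

```python
# Toy simulation of the d-process: starting from an empty graph on n
# vertices, repeatedly add a uniformly random edge between two distinct,
# non-adjacent vertices that both still have degree < d.
import random

def d_process(n, d, seed=0):
    rng = random.Random(seed)
    edges = set()          # edges stored as (u, v) with u < v
    deg = [0] * n
    while True:
        candidates = [(u, v) for u in range(n) for v in range(u + 1, n)
                      if deg[u] < d and deg[v] < d and (u, v) not in edges]
        if not candidates:             # process is stuck: no legal pair left
            return edges, deg
        u, v = rng.choice(candidates)
        edges.add((u, v))
        deg[u] += 1
        deg[v] += 1

edges, deg = d_process(n=10, d=2)
```

At termination, at most two vertices can have degree below $d$ (any two such vertices must already be adjacent, or the process would continue), which is why nearly all vertices end with degree exactly $d$.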