In this work, we consider two sets of dependent variables $\{X_{1},\ldots,X_{n}\}$ and $\{Y_{1},\ldots,Y_{n}\}$, where $X_{i}\sim EW(\alpha_{i},\lambda_{i},k_{i})$ and $Y_{i}\sim EW(\beta_{i},\mu_{i},l_{i})$, for $i=1,\ldots, n$, which are coupled by Archimedean copulas with different generators. We then establish several inequalities between the extremes, namely between $X_{1:n}$ and $Y_{1:n}$ and between $X_{n:n}$ and $Y_{n:n}$, in terms of the usual stochastic, star, Lorenz, hazard rate, reversed hazard rate and dispersive orders. Several examples and counterexamples are presented to illustrate the results established here. Some of these results extend those of [5] (Barmalzan, G., Ayat, S.M., Balakrishnan, N., & Roozegar, R. (2020). Stochastic comparisons of series and parallel systems with dependent heterogeneous extended exponential components under Archimedean copula. Journal of Computational and Applied Mathematics 380: Article No. 112965).
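The copula-based setup above can be sketched numerically. The snippet below is a minimal illustration with made-up parameter values: it draws dependent uniform pairs from a Clayton copula (one Archimedean family) via the conditional-distribution method, maps them through exponentiated Weibull inverse CDFs, and forms the extremes $X_{1:2}$ and $X_{2:2}$ corresponding to series and parallel system lifetimes.

```python
import math
import random

def ew_ppf(u, alpha, lam, k):
    """Inverse CDF of the exponentiated Weibull EW(alpha, lambda, k):
    F(x) = (1 - exp(-(lam*x)**k))**alpha."""
    return (-math.log(1.0 - u ** (1.0 / alpha))) ** (1.0 / k) / lam

def clayton_pair(theta, rng):
    """Draw (u1, u2) from a Clayton copula (an Archimedean family)
    using the conditional-distribution method."""
    v1, v2 = rng.random(), rng.random()
    u2 = (v1 ** (-theta) * (v2 ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return v1, u2

rng = random.Random(42)
# Two dependent EW lifetimes with illustrative heterogeneous parameters.
pairs = [clayton_pair(3.0, rng) for _ in range(5000)]
x1 = [ew_ppf(u1, 2.0, 1.0, 1.5) for u1, _ in pairs]
x2 = [ew_ppf(u2, 1.5, 0.8, 2.0) for _, u2 in pairs]
series_life = [min(a, b) for a, b in zip(x1, x2)]    # X_{1:2}: series system
parallel_life = [max(a, b) for a, b in zip(x1, x2)]  # X_{2:2}: parallel system
```

Stochastic-order comparisons such as those in the paper would then be checked by comparing the empirical distributions of `series_life` and `parallel_life` across parameter and generator choices.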
Thermohaline staircases are a widespread stratification feature that impacts the vertical transport of heat and nutrients, and they are consistently observed throughout the Canada Basin of the Arctic Ocean. Observations of staircases from the same time period and geographic region form clusters in temperature–salinity (T–S) space. Here, for the first time, we use an automated clustering algorithm, hierarchical density-based spatial clustering of applications with noise (HDBSCAN), to detect and connect individual well-mixed staircase layers across profiles from ice-tethered profilers. Our application requires only an estimate of the typical layer thickness and the expected salinity range of staircases. We compare this method to two previous studies that used different approaches to detect layers and reproduce several results, including the mean lateral density ratio $ {R}_L $ and the finding that the difference in salinity between neighboring layers is an order of magnitude larger than the salinity variance within a layer. We find that we can accurately and automatically track individual layers in coherent staircases across time and space between different profiles. In evaluating the algorithm’s performance, we find evidence of different physical features, namely splitting or merging layers and remnant intrusions. Further, we find a dependence of $ {R}_L $ on pressure, whereas previous studies have reported a constant $ {R}_L $. Our results demonstrate that clustering algorithms are an effective and parsimonious method of identifying staircases in ocean profile data.
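The layer-detection idea can be illustrated with a toy sketch. HDBSCAN itself requires an external library, so the stand-in below clusters one-dimensional salinity samples by splitting at gaps wider than `eps` and discarding small clusters as noise (interface points between well-mixed layers); the data and thresholds are invented for illustration.

```python
def detect_layers(salinity, eps=0.01, min_size=5):
    """Toy density-based clustering in 1-D: group sorted salinity samples
    separated by gaps <= eps, and discard clusters smaller than min_size
    as noise. A simplified stand-in for HDBSCAN on T-S profile data."""
    pts = sorted(salinity)
    clusters, current = [], [pts[0]]
    for a, b in zip(pts, pts[1:]):
        if b - a <= eps:
            current.append(b)
        else:
            clusters.append(current)
            current = [b]
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_size]

# Synthetic profile: two well-mixed layers plus sparse interface points.
layer1 = [30.100 + 0.0002 * i for i in range(20)]  # tight salinity cluster
layer2 = [30.300 + 0.0002 * i for i in range(20)]
interface = [30.15, 30.22, 30.27]                  # isolated noise points
layers = detect_layers(layer1 + layer2 + interface)
```

Connecting layers across profiles would then amount to matching clusters with similar mean salinity between neighbouring casts.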
Based on the theories of radical education, this article discusses the education of electronic music in Venezuela. After a historiographical review of the state of music education in the country, which shows that there is little information on the subject, the institutional life that has promoted electroacoustic music in Venezuela is approached from a critical perspective. This documentary research gathers and analyses data provided by a bibliographic review and unstructured interviews with experts in the field. Among the most salient findings is the discontinuity in the teaching of electroacoustic music, as well as a critical review of the notions of radical education in the case of Venezuela, where the educational system shows stagnation in the face of the global context.
Autonomous navigation has been a long-standing research topic, and researchers have worked on many challenging problems in indoor and outdoor environments. One application area of navigation solutions is material handling in industrial environments. With Industry 4.0, the simple material-transport problem of traditional factories has evolved into the use of autonomous mobile robots that make their own decisions within flexible production islands. Two main stages of such a navigation system are the safe transportation of the vehicle from one point to another and reaching destinations to industrial standards. The main concern in the former is roughly determining the vehicle’s pose to follow the route, while the latter aims to reach the target with high accuracy and precision. Often, satisfying both requirements with a single localization method is impossible or requires extra effort. Therefore, a multi-stage localization approach is proposed in this study. Particle filter-based large-scale localization approaches are utilized during the vehicle’s movement from one point to another, while scan-matching-based methods are used in the docking stage. The localization system selects the appropriate approach based on the vehicle’s status and task through a decision-making mechanism. The decision-making mechanism uses a similarity metric obtained through the correntropy criterion to decide when and how to switch from large-scale localization to precise localization. The feasibility and performance of the developed method are corroborated through field tests. These evaluations demonstrate that the proposed method accomplishes tasks with sub-centimeter and sub-degree accuracy and precision without affecting the real-time operation of the navigation algorithms.
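The correntropy-based similarity metric mentioned above can be sketched as follows. The kernel bandwidth, scan values, and switching threshold here are illustrative assumptions, not values from the paper.

```python
import math

def correntropy(x, y, sigma=0.1):
    """Sample estimate of correntropy between two equal-length signals:
    V = (1/N) * sum_i exp(-(x_i - y_i)^2 / (2 * sigma^2)).
    Values near 1 indicate strong similarity; sigma is the Gaussian
    kernel bandwidth (a tuning choice)."""
    n = len(x)
    return sum(math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2))
               for a, b in zip(x, y)) / n

# Hypothetical switching rule: move from large-scale localization to
# precise scan-matching once the current laser scan matches the stored
# docking-station scan closely enough.
reference_scan = [1.0, 1.2, 1.1, 0.9, 1.0]

def should_dock(scan, threshold=0.9):
    return correntropy(scan, reference_scan) >= threshold
```

A scan far from the reference yields a correntropy near zero, so the system stays in large-scale localization until the docking region is reached.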
Addressing the question of how to deliver time-sensitive software for cyber-physical systems (CPS) requires a range of modelling and analysis techniques to be developed and integrated. A number of these required techniques are unique to time-sensitive software, where timeliness is a correctness property rather than a performance attribute. This paper focuses on how to obtain worst-case estimates of the software’s execution time; in particular, it considers how workload models are derived from assumptions about the system’s run-time behaviour. The specific contribution of this paper is the exploration of the notion that a system can be subject to more than one workload model. Examples illustrate how such multi-models can lead to improved schedulability and hence more efficient CPS. An important property of the approach is that the derived analysis exhibits model-bounded behaviour. This ensures that the maximum load on the system is never higher than that implied by the individual models.
Artificial intelligence and cognitive science are two core research areas in design. Artificial intelligence shows the capability of analysing massive amounts of data, which supports making predictions, uncovering patterns and generating insights in varying design activities, while cognitive science provides the advantage of revealing the inherent mental processes and mechanisms of humans in design. Both artificial intelligence and cognitive science in design research are focused on delivering more innovative and efficient design outcomes and processes. Therefore, this thematic collection on “Applications of Artificial Intelligence and Cognitive Science in Design” brings together state-of-the-art research in artificial intelligence and cognitive science to showcase the emerging trend of applying artificial intelligence techniques and neurophysiological and biometric measures in design research. By analysing the collected research papers, three promising future research directions are suggested: (1) human-in-the-loop AI for design, (2) multimodal measures for design, and (3) AI for design cognitive data analysis and interpretation. A framework for the integration of artificial intelligence and cognitive science in design, incorporating these three research directions, is proposed to inspire and guide design researchers in exploring human-centred design methods, strategies, solutions, tools and systems.
Cyber-Physical Systems (CPSs) combine cyber, physical and human activities through computing and network technologies, creating opportunities for benign and malign actions that affect organisations in both the physical and computational spheres. The US National Cyber Security Strategy (US White House, 2023) warns that this exposes crucial systems to disruption over a wide CPS attack surface. The UK National Cyber Security Centre Annual Review (UK National Cyber Security Centre, 2023) acknowledges that, although some organisations are evolving ‘a more holistic view of critical systems rather than purely physical assets’, this is not reflected in governance structures that still tend to treat cyber and physical security separately.
Complex physical processes inherent to rainfall make its prediction a challenging task. To contribute to the improvement of rainfall prediction, artificial neural network (ANN) models were developed using a multilayer perceptron (MLP) approach to predict monthly rainfall 2 months in advance for six geographically diverse weather stations across the Benin Republic. For this purpose, 12 lagged values of atmospheric data were used as predictors. The models were trained using data from 1959 to 2017 and tested for 4 years (2018–2021). The proposed method was compared to long short-term memory (LSTM) and climatology forecasts (CFs). The prediction performance was evaluated using five statistical measures: root mean square error, mean absolute error, mean absolute percentage error, coefficient of determination, and Nash–Sutcliffe efficiency (NSE) coefficient. Furthermore, Taylor diagrams, violin plots, box error plots, and the Kruskal–Wallis test were used to assess the robustness of the models’ forecasts. The results revealed that MLP gives better results than LSTM and CF. The NSE obtained with the MLP, LSTM, and CF models during the test period ranges from 0.373 to 0.885, 0.297 to 0.875, and 0.335 to 0.845, respectively, depending on the weather station. Rainfall predictability was more accurate at higher latitudes across the country, with a 0.512 improvement in NSE using MLP, showing the effect of geographic region on prediction model results. In summary, this research has revealed the potential of ANN techniques in predicting monthly rainfall 2 months ahead, supplying valuable insights for decision-makers in the Republic of Benin.
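The lagged-predictor setup and the NSE metric used above can be sketched as follows; the toy "rainfall" series and helper names are illustrative, not from the study.

```python
def make_lagged_samples(series, n_lags=12, lead=2):
    """Build (predictors, target) pairs: n_lags consecutive monthly values
    predicting the value `lead` months after the last predictor month,
    mirroring the 12-lag, 2-months-ahead setup of the abstract."""
    X, y = [], []
    for t in range(n_lags, len(series) - lead + 1):
        X.append(series[t - n_lags:t])   # months t-12 .. t-1
        y.append(series[t + lead - 1])   # month t+1: two months ahead
    return X, y

def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_obs

series = [float(i % 12) for i in range(60)]  # toy seasonal "rainfall"
X, y = make_lagged_samples(series)
```

An MLP (or LSTM) regressor would then be trained on `X`, `y`, with NSE = 1 for a perfect forecast and NSE = 0 for the climatological mean.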
With the rise of deep reinforcement learning (RL) methods, many complex robotic manipulation tasks are being solved. However, harnessing the full power of deep learning requires large datasets. Online RL does not lend itself readily to this paradigm due to costly and time-consuming agent–environment interaction. Therefore, many offline RL algorithms have recently been proposed for learning robotic tasks. However, most such methods focus on single-task or multi-task learning, which requires retraining whenever a new task needs to be learned. Continually learning tasks without forgetting previous knowledge, combined with the power of offline deep RL, would allow us to scale the number of tasks by adding them one after another. This paper investigates the effectiveness of regularisation-based methods, such as synaptic intelligence, for sequentially learning image-based robotic manipulation tasks in an offline-RL setup. We evaluate the performance of this combined framework against the common challenges of sequential learning: catastrophic forgetting and forward knowledge transfer. We performed experiments with different task combinations to analyse the effect of task ordering. We also investigated the effect of the number of object configurations and the density of robot trajectories. We found that learning tasks sequentially helps in the retention of knowledge from previous tasks, thereby reducing the time required to learn a new task. Regularisation-based approaches for continual learning, like the synaptic intelligence method, help mitigate catastrophic forgetting but show only limited transfer of knowledge from previous tasks.
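A minimal scalar sketch of the synaptic intelligence idea, with invented loss functions and hyperparameters: the path importance accumulated while training task A penalises moving that parameter when training task B, mitigating catastrophic forgetting.

```python
# Simplified synaptic-intelligence sketch: a single scalar parameter,
# quadratic task losses, and plain SGD. All values are illustrative.
xi, c = 0.01, 1.0  # damping term and regularisation strength

def train_task(theta, grad_fn, lr=0.1, steps=100, penalty=None):
    """SGD on one task, accumulating the SI path integral omega.
    `penalty` is (reference_value, importance) from an earlier task."""
    omega, start = 0.0, theta
    for _ in range(steps):
        g = grad_fn(theta)
        if penalty is not None:
            ref, big_omega = penalty
            g += 2.0 * c * big_omega * (theta - ref)  # quadratic SI penalty
        step = -lr * g
        omega += -g * step  # path importance: -grad * delta_theta
        theta += step
    importance = omega / ((theta - start) ** 2 + xi)
    return theta, importance

# Task A loss (theta - 2)^2 has optimum 2; task B loss theta^2 has optimum 0.
theta, imp = train_task(0.0, lambda t: 2.0 * (t - 2.0))
theta_b, _ = train_task(theta, lambda t: 2.0 * t, penalty=(theta, imp))
```

Without the penalty, task B would pull the parameter all the way to 0; the SI term keeps it between the two optima, trading plasticity for retention.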
Payroll management is a critical business task that is subject to a large number of rules, which vary widely between companies, sectors, and countries. Moreover, the rules are often complex and change regularly. Therefore, payroll management systems must be flexible in design. In this paper, we suggest an approach based on a flexible answer set programming (ASP) model and an easy-to-read tabular representation based on the decision model and notation standard. It allows HR consultants to represent complex rules without the need for a software engineer and to ultimately design payroll systems for a variety of different scenarios. We show how the multi-shot solving capabilities of the clingo ASP system can be used to reach the performance that is necessary to handle real-world instances.
We have developed probabilistic models to estimate the likelihood of harmful algae presence and outbreaks along the Norwegian coast, which can help optimize the national monitoring program and the planning of mitigation actions. We employ support vector machines to calibrate probabilistic models for estimating the presence and harmful abundance (HA) of eight toxic algae found along the Norwegian coast: Alexandrium spp., Alexandrium tamarense, Dinophysis acuta, Dinophysis acuminata, Dinophysis norvegica, Pseudo-nitzschia spp., Protoceratium reticulatum, and Azadinium spinosum. The inputs are sea surface temperature, photosynthetically active radiation, mixed layer depth, and sea surface salinity. The probabilistic models are trained with data from 2006 to 2013 and tested with data from 2014 to 2019. The presence models demonstrate good statistical performance across all taxa, with R (observed presence frequency vs. predicted probability) ranging from 0.69 to 0.98 and root mean squared error ranging from 0.84% to 7.84%. Predicting the probability of HA is more challenging, and the HA models reach skill with only four taxa (Alexandrium spp., A. tamarense, D. acuta, and A. spinosum). There are large differences in the seasonal and geographical variability and in the sensitivity to model input across taxa, which are presented and discussed. The models estimate geographical regions and periods with relatively higher risk of toxic species presence and HA, and may help optimize harmful algae monitoring. The method can be extended to other regions, as it relies only on remote-sensing and model data as input, together with data from national toxic-algae monitoring programs.
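Turning SVM decision scores into presence probabilities is commonly done with Platt scaling, a sigmoid fit on the scores. The sketch below, with toy data and a simple gradient-descent fit, illustrates that calibration step; the paper's actual pipeline is richer than this.

```python
import math

def platt_fit(scores, labels, lr=0.1, steps=2000):
    """Fit Platt scaling p = sigmoid(A*s + B) by gradient descent on the
    log-loss of SVM decision scores against binary presence labels."""
    A, B = 0.0, 0.0
    n = len(scores)
    for _ in range(steps):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(A * s + B)))
            gA += (p - y) * s / n
            gB += (p - y) / n
        A -= lr * gA
        B -= lr * gB
    return A, B

# Toy data: higher decision scores correspond to observed algae presence.
scores = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
A, B = platt_fit(scores, labels)
prob = lambda s: 1.0 / (1.0 + math.exp(-(A * s + B)))
```

The resulting `prob` maps a score to a calibrated presence probability, which is what the R and RMSE statistics in the abstract evaluate.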
This paper presents a compliant variable admittance adaptive fixed-time sliding mode control (SMC) algorithm for trajectory tracking of robotic manipulators. Specifically, a compliant variable admittance algorithm and an adaptive fixed-time SMC algorithm are combined to construct a double-loop control structure. In the outer loop, the variable admittance algorithm is developed to adjust admittance parameters during a collision to minimize the collision time, which gives the robot a compliance property and reduces the influence of rigid collisions. Then, by employing Lyapunov theory and fixed-time stability theory, a new nonsingular sliding mode manifold is proposed and an adaptive fixed-time SMC algorithm is presented in the inner loop. This approach enables rapid convergence, enhanced steady-state tracking precision, and a settling time that is independent of the system’s initial states. Finally, the effectiveness and improved performance of the proposed algorithm are demonstrated through extensive simulations and experimental results.
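The outer-loop admittance idea can be sketched with a one-dimensional discrete-time admittance law; the mass, damping, and stiffness values below are illustrative, and the paper's variable-admittance adjustment during collision is omitted for brevity (the parameters are simply held fixed).

```python
def admittance_step(x, v, f_ext, M, B, K, dt=0.001):
    """One semi-implicit Euler step of the admittance law
    M*a + B*v + K*x = f_ext, which maps an external contact force into
    a compliant reference motion for the inner-loop tracking controller."""
    a = (f_ext - B * v - K * x) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Constant contact force of 5 N: the compliant reference deflects toward
# the static equilibrium f/K = 0.05 m instead of resisting rigidly.
x = v = 0.0
for _ in range(20000):
    x, v = admittance_step(x, v, f_ext=5.0, M=1.0, B=40.0, K=100.0)
```

In the paper's scheme, M, B, and K would be adapted online during the collision; the inner-loop SMC then tracks the compliant reference `x`.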
In a Model Predictive Control (MPC) setting, the precise simulation of the behavior of the system over a finite time window is essential. This application-oriented benchmark study focuses on a robot arm that exhibits various nonlinear behaviors. For this arm, we have a physics-based model with approximate parameter values and an open benchmark dataset for system identification. However, the long-term simulation of this model quickly diverges from the actual arm’s measurements, indicating its inaccuracy. We compare the accuracy of black-box and purely physics-based approaches with several physics-informed approaches. These involve different combinations of a neural network’s output with information from the physics-based model or feeding the physics-based model’s information into the neural network. One of the physics-informed model structures can improve accuracy over a fully black-box model.
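One common physics-informed structure, combining a physics-based model's output with a learned residual correction, can be sketched as follows. Here a closed-form linear least-squares fit stands in for the neural network, and the toy "arm" model and its gain are invented for illustration.

```python
def physics_model(u):
    """Physics-based model with an approximate parameter value:
    the assumed gain is 0.8, while the 'true' system gain is 1.0."""
    return 0.8 * u

def fit_residual(inputs, measured):
    """Least-squares slope w for the residual r = measured - physics(u),
    modelled as r = w * u (a linear stand-in for a neural network)."""
    num = sum(u * (y - physics_model(u)) for u, y in zip(inputs, measured))
    den = sum(u * u for u in inputs)
    return num / den

inputs = [0.5, 1.0, 1.5, 2.0, 2.5]
measured = [1.0 * u for u in inputs]          # 'true' system response
w = fit_residual(inputs, measured)
hybrid = lambda u: physics_model(u) + w * u   # physics + learned residual
```

The hybrid model corrects the physics model's bias, mirroring the benchmark finding that one physics-informed structure improves on both the purely physics-based and fully black-box models.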
This book is designed to provide in-depth knowledge on how search plays a fundamental role in problem solving. Meant for undergraduate and graduate students pursuing courses in computer science and artificial intelligence, it covers a wide spectrum of search methods. Readers will be able to begin with simple approaches and gradually progress to more complex algorithms applied to a variety of problems. It demonstrates that search is all-pervasive in artificial intelligence and equips the reader with the relevant skills. The text starts with an introduction to intelligent agents and search spaces. Basic search algorithms like depth-first search and breadth-first search are the starting points. Then, it proceeds to discuss heuristic search algorithms, stochastic local search, algorithm A*, and problem decomposition. It also examines how search is used in playing board games, deduction in logic, and automated planning. The book concludes with coverage of constraint satisfaction.
This paper deals with generally routed, pre-bent cable-driven continuum robots (CCR). A CCR consists of a flexible backbone to which multiple disks are attached. Cables are passed through holes in the disks; when the cables are pulled, the flexible backbone, and hence the CCR, can attain different shapes depending on the cable routing and backbone configuration. An optimization-based approach, using minimization of strain energy, is shown to give good results for the pose and motion of the CCR and to determine contact with external objects. The pose, motion, and contact obtained from the model are shown to match experimental results from a 3D-printed CCR very well. An algorithm is proposed to generate the pre-bent backbone for a CCR which, on actuation, can attain a desired shape. Using the algorithm, three 3D-printed CCRs with pre-bent backbones are fabricated and used to demonstrate a compliant gripper that can grip a spherical object much as tentacles do; another three-fingered gripper with straight-backbone CCRs is used to orient a square object gripped at the end.
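The strain-energy-minimization approach can be illustrated with a one-segment toy model: equilibrium is found by minimizing the total potential energy, i.e. the bending strain energy minus the work done by the cable tension. All parameter values, and the assumption that the cable shortens linearly with the bend angle, are illustrative.

```python
def potential(theta, theta0=0.2, k=2.0, r=0.05, T=4.0):
    """Total potential of a one-segment CCR: bending strain energy about
    the pre-bent angle theta0 minus work by cable tension T at offset r
    (cable assumed to shorten by r*theta when the segment bends)."""
    strain = 0.5 * k * (theta - theta0) ** 2
    cable_work = T * r * theta
    return strain - cable_work

def minimize(f, lo=-3.0, hi=3.0, iters=200):
    """Ternary search for the minimum of a convex 1-D function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

theta_eq = minimize(potential)  # analytic answer: theta0 + T*r/k = 0.3
```

The paper's model does this over a fully discretized backbone with general cable routing; the same principle of minimizing strain energy determines pose, motion, and contact.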
In this paper, a method for planning the expanded S-curve trajectory of robotic manipulators is proposed to minimize the execution time and to achieve smoother trajectory generation in the deceleration stage for point-to-point motions. An asymmetric parameter is added to the piecewise sigmoid function for an improved jerk profile. This asymmetric profile is continuous and infinitely differentiable. Based on this profile, two analytical algorithms are presented: one determines the time intervals of the trajectory that satisfy time optimality under the kinematic constraints, and the other determines the asymmetric parameter that generates the minimum execution time. The calculation procedure for time-scaled synchronization of all joints is also given, to decrease unnecessary loads on the actuators. The velocity, acceleration, jerk, and snap (the derivative of jerk) of the joints and the end-effector are equal to zero at the two end points of the motion. Simulation results for 3-DOF and 6-DOF robotic manipulators show that our approach effectively reduces the jerk and snap of the deceleration stage while decreasing the total execution time. An analysis of a single-DOF mass-spring-damper system also indicates that the residual vibration can be reduced by 10% more than with the benchmark techniques when velocity, acceleration, and jerk are limited to 1.24 m/s, 6 m/s², and 80 m/s³, respectively, and the displacement is set to 0.8 m. These results show that the profile performs well in reducing residual vibrations and demonstrate an important characteristic that makes it suitable for point-to-point motion.
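The sigmoid-based profile idea can be sketched as follows. This is a simplified stand-in for the paper's piecewise asymmetric sigmoid, with made-up parameters: a single logistic sigmoid gives near-zero end-point velocity, and an `asym` exponent warps time to skew the profile.

```python
import math

def sigmoid_profile(t, T=1.0, a=12.0, asym=1.0):
    """Normalised position s(t) on [0, T] built from a logistic sigmoid.
    `asym` skews normalised time (a crude stand-in for the paper's
    asymmetric parameter); a sets the sigmoid steepness."""
    t = min(max(t, 0.0), T)
    tau = (t / T) ** asym                # asymmetric time warp
    def raw(u):
        return 1.0 / (1.0 + math.exp(-a * (u - 0.5)))
    # rescale so s(0) = 0 and s(T) = 1 exactly
    return (raw(tau) - raw(0.0)) / (raw(1.0) - raw(0.0))

def velocity(t, dt=1e-5, **kw):
    """Central-difference velocity of the normalised profile."""
    return (sigmoid_profile(t + dt, **kw) - sigmoid_profile(t - dt, **kw)) / (2 * dt)
```

Because the sigmoid's slope decays exponentially toward the end points, velocity (and its higher derivatives) are nearly zero at both ends, which is the property exploited to suppress residual vibration.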
This article is on algorithmically generated memories: data on past events that are stored and automatically ranked and classified by digital platforms before being presented to the user as memories. By mobilising Henri Bergson's philosophy, I centre my analysis on three of their aspects: the spatialisation and calculation of time in algorithmic systems, algorithmic remembrance, and algorithmic perception. I argue that algorithmically generated memories are a form of automated remembrance best understood as perception, not recollection. Perception never captures the totality of our surroundings but is partial, and the parts of the world we perceive are those that are of interest to us. When conscious beings perceive, our perception is always coupled with memory, which allows us to transcend the immediate needs of our body. I argue that algorithmic systems based on machine learning can perceive, but that they cannot remember. As such, their perception operates only in the present. The present they perceive in is characterised by immense amounts of data that are beyond human perceptive capabilities. I argue that perception relates to a capacity to act, as an extended field of perception involves a greater power to act within what one perceives. As such, our memories are increasingly governed by a perception that operates in a present beyond human perceptual capacities, motivated by interests and needs that lie somewhat beyond the interests or needs formulated by humans. Algorithmically generated memories are not only trying to remember for us; they are also perceiving for us.