Real-time systems need to be built from tasks whose worst-case execution time is known. To enable accurate estimates of worst-case execution time, some researchers propose processors that simplify this analysis. These architectures are called precision-timed machines or time-predictable architectures. But what does time predictability actually mean? This paper explores the meaning of time predictability and how it can be quantified. We show that time predictability is hard to quantify. Rather, the worst-case performance of the combination of a processor, a compiler, and a worst-case execution time analysis tool is the important property in the context of real-time systems. Note that the actual software also influences the worst-case performance. We propose to define a standard set of benchmark programs that can be used to evaluate a time-predictable processor, a compiler, and a worst-case execution time analysis tool. We define worst-case performance as the geometric mean of worst-case execution time bounds on a standard set of benchmark programs.
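As a worked form of that definition (notation ours; $B_i$ denotes the analysed worst-case execution time bound of benchmark $i$), the worst-case performance over $n$ benchmark programs would be the geometric mean

$$\mathrm{WCP} = \left( \prod_{i=1}^{n} B_i \right)^{1/n},$$

so that a constant-factor improvement on any single benchmark contributes equally regardless of that benchmark's absolute execution time.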
Glivenko’s theorem says that classical provability of a propositional formula entails intuitionistic provability of the double negation of that formula. It stood right at the beginning of the success story of negative translations, which were indeed mainly designed for converting classically derivable formulae into intuitionistically derivable ones. We now generalise this approach: simultaneously from double negation to an arbitrary nucleus; from provability in a calculus to an inductively generated abstract consequence relation; and from propositional logic to any set of objects whatsoever. In particular, we give sharp criteria for the generalisation of classical logic to be a conservative extension of that of intuitionistic logic with double negation.
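For reference, the classical propositional statement of Glivenko’s theorem that the paper generalises can be written as

$$\vdash_{\mathrm{c}} \varphi \;\Longrightarrow\; \vdash_{\mathrm{i}} \neg\neg\varphi,$$

where $\vdash_{\mathrm{c}}$ and $\vdash_{\mathrm{i}}$ denote classical and intuitionistic provability; the paper replaces $\neg\neg$ by an arbitrary nucleus and provability by an inductively generated abstract consequence relation.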
Research in decentralized computing, specifically in consensus algorithms, has focused on providing resistance to an adversary with a minority stake. This has resulted in systems that are majoritarian in the extreme, ignoring valuable lessons learned in law and politics over centuries. In this article, we first detail this phenomenon of majoritarianism and point out how minority protections in the nondigital world have been implemented. We motivate adding minority protections to collaborative systems with examples. We also show how current software deployment models exacerbate majoritarianism, highlighting the problem of monoculture in client software in particular. We conclude by giving some suggestions on how to make decentralized computing less hostile to those in the minority.
For a given graph $H$, we say that a graph $G$ has a perfect $H$-subdivision tiling if $G$ contains a collection of vertex-disjoint subdivisions of $H$ covering all vertices of $G$. Let $\delta_{\mathrm{sub}}(n, H)$ be the smallest integer $k$ such that any $n$-vertex graph $G$ with minimum degree at least $k$ has a perfect $H$-subdivision tiling. For every graph $H$, we asymptotically determine the value of $\delta_{\mathrm{sub}}(n, H)$. More precisely, for every graph $H$ with at least one edge, there are an integer $\mathrm{hcf}_{\xi}(H)$ and a constant $1 < \xi^*(H) \leq 2$, both explicitly determined by structural properties of $H$, such that $\delta_{\mathrm{sub}}(n, H) = \left(1 - \frac{1}{\xi^*(H)} + o(1)\right)n$ holds for all $n$ unless $\mathrm{hcf}_{\xi}(H) = 2$ and $n$ is odd. When $\mathrm{hcf}_{\xi}(H) = 2$ and $n$ is odd, we show that $\delta_{\mathrm{sub}}(n, H) = \left(\frac{1}{2} + o(1)\right)n$.
Personas are hypothetical representations of real-world people used as storytelling tools to help designers identify the goals, constraints, and scenarios of particular user groups. A well-constructed persona can provide enough detail to trigger recognition and empathy while leaving room for varying interpretations of users. While a traditional persona is a static representation of a potential user group, a chatbot representation of a persona is dynamic, in that it allows designers to “converse with” the representation. Such representations are further augmented by the use of large language models (LLMs), displaying more human-like characteristics such as emotions, priorities, and values. In this paper, we introduce the term “Synthetic User” to describe such representations of personas that are informed by traditional data and augmented by synthetic data. We study the effect of one example of such a Synthetic User – embodied as a chatbot – on the designers’ process, outcome, and their perception of the persona using a between-subjects study comparing it to a traditional persona summary. While designers showed comparable diversity in the ideas that emerged from both conditions, we find in the Synthetic User condition a greater variation in how designers perceive the persona’s attributes. We also find that the Synthetic User allows novel interactions such as seeking feedback and testing assumptions. We make suggestions for balancing consistency and variation in Synthetic User performance and propose guidelines for future development.
This paper introduces a novel bipedal robot model designed to transition adaptively between walking and running gaits solely through changes in locomotion speed. The model comprises two sub-components: a mechanical model of the legs that accommodates both walking and running, and a continuous state model that does not explicitly switch states. The mechanical model employs a structure combining a linear cylinder with springs, dampers, and stoppers, designed to have the mechanistic properties of both the inverted pendulum model used for walking and the spring-loaded inverted pendulum model used for running. The state model uses a virtual leg representation to abstractly describe the actual support leg, representing in a common form both the double support legs in walking and the single support leg in running. These models enable a simple gait controller to determine the kick force and the foot touchdown point from the target speed alone, allowing the robot to walk and run stably. Simulation results demonstrate that the robot adaptively transitions to an energy-efficient gait depending on locomotion speed, without explicit gait-type instructions, while maintaining stable locomotion across a wide range of speeds.
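For background (the standard textbook form, not the paper's specific leg mechanism), the spring-loaded inverted pendulum referred to above models the stance phase as a point mass $m$ on a massless spring leg of rest length $l_0$ and stiffness $k$ attached to the foot point:

$$m\,\ddot{\mathbf{r}} = k\,(l_0 - \lVert \mathbf{r} \rVert)\,\frac{\mathbf{r}}{\lVert \mathbf{r} \rVert} - m g\,\mathbf{e}_z,$$

where $\mathbf{r}$ is the position of the mass relative to the foot; the rigid inverted pendulum used for walking corresponds to the limit of an inextensible leg.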
Human activity recognition (HAR) is a vital component of human–robot collaboration. Recognizing the operational elements involved in an operator’s task is essential for realizing such collaboration, and HAR plays a key role in achieving this. However, recognizing human activity in an industrial setting differs from recognizing activities of daily living: an operator’s activity must be divided into fine-grained elements to ensure efficient task completion, yet there is relatively little related research in the literature. This study aims to develop machine learning models that classify the sequential movement elements of a task. To illustrate this, three logistics operations in an integrated circuit (IC) design house were studied, with participants wearing 13 inertial measurement units manufactured by XSENS to mimic the tasks. The kinematic data were collected to develop the machine learning models. The time series preprocessing applied two normalization methods and three different window lengths, and eleven features were extracted from the processed data to train the classification models. Model validation was carried out using the subject-independent method, with data from three participants excluded from the training dataset. The results indicate that the developed models can efficiently classify operational elements when the operator performs the activity accurately. However, misclassifications occurred when the operator missed an operation or performed the task awkwardly. RGB video clips helped identify these misclassifications and can be used by supervisors for training purposes or by industrial engineers for work improvement.
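As an illustration of the windowing and feature-extraction step, here is a minimal Python sketch; the window length, step, normalization, and feature set are placeholders rather than the study's exact choices (which used two normalization methods, three window lengths, and eleven features).

```python
import numpy as np

def sliding_windows(signal, window_len, step):
    """Split a (samples, channels) IMU time series into overlapping windows."""
    return np.stack([signal[i:i + window_len]
                     for i in range(0, len(signal) - window_len + 1, step)])

def extract_features(window):
    """Compute simple per-channel statistics used as classifier inputs."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           window.min(axis=0),
                           window.max(axis=0)])

# Example: 13 IMUs x 3 acceleration axes = 39 channels of synthetic data.
signal = np.random.randn(6000, 39)
signal = (signal - signal.mean(axis=0)) / signal.std(axis=0)   # z-score normalization
X = np.array([extract_features(w) for w in sliding_windows(signal, 200, 100)])
print(X.shape)   # (n_windows, n_features) fed to the classification models
```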
Prediction of dynamic environmental variables in unmonitored sites remains a long-standing challenge for water resources science. The majority of the world’s freshwater resources have inadequate monitoring of the critical environmental variables needed for management. Yet the need for widespread predictions of hydrological variables such as river flow and water quality has become increasingly urgent due to climate and land use change over the past decades and their associated impacts on water resources. Modern machine learning methods increasingly outperform their process-based and empirical counterparts for hydrologic time series prediction, owing to their ability to extract information from large, diverse data sets. We review relevant state-of-the-art applications of machine learning for streamflow, water quality, and other water resources prediction, and discuss opportunities to improve the use of machine learning with emerging methods for incorporating watershed characteristics and process knowledge into classical, deep learning, and transfer learning methodologies. The analysis here suggests that most prior efforts have focused on deep learning frameworks built on many sites for predictions at daily time scales in the United States, and that comparisons between different classes of machine learning methods are few and inadequate. We identify several open questions for time series prediction in unmonitored sites, including how to incorporate dynamic inputs and site characteristics, mechanistic understanding and spatial context, and explainable AI techniques in modern machine learning frameworks.
For each uniformity $k \geq 3$, we construct $k$-uniform linear hypergraphs $G$ with arbitrarily large maximum degree $\Delta$ whose independence polynomial $Z_G$ has a zero $\lambda$ with $\lvert \lambda \rvert = O\!\left(\frac{\log \Delta}{\Delta}\right)$. This disproves a recent conjecture of Galvin, McKinley, Perkins, Sarantis, and Tetali.
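For context, the independence polynomial in question is the standard one: writing $\mathcal{I}(G)$ for the family of independent sets of $G$ (vertex sets containing no edge of $G$),

$$Z_G(\lambda) = \sum_{I \in \mathcal{I}(G)} \lambda^{\lvert I \rvert},$$

and the result concerns how close to the origin a complex zero of $Z_G$ can lie as a function of the maximum degree $\Delta$.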
We discuss the emerging technology of digital twins (DTs) and the expected demands as they scale to represent increasingly complex, interconnected systems. Several examples are presented to illustrate core use cases, highlighting a progression to represent both natural and engineered systems. The forthcoming challenges are discussed around a hierarchy of scales, which recognises systems of increasing aggregation. Broad implications are discussed, encompassing sensing, modelling, and deployment, alongside ethical and privacy concerns. Importantly, we endorse a modular and peer-to-peer view for aggregate (interconnected) DTs. This mindset emphasises that DT complexity emerges from the framework of connections (Wagg et al. [2024, The philosophical foundations of digital twinning, Preprint]) as well as the (interpretable) units that constitute the whole.
The spread of information on social networks through word-of-mouth sharing by online users contributes substantially to forming opinions, social groups, and connections. This process is known as information diffusion, and its processes and models play a significant role in social network analysis. Given this importance, the present paper focuses on the process, models, deployment, and applications of information diffusion analysis. First, the article discusses the background of the diffusion process, including its components and models. Next, information deployment in social networks and its applications are discussed; a comparative analysis of the literature on applications such as influence maximization, link prediction, and community detection is presented, along with a brief description of performance evaluation metrics. Finally, current research challenges and future directions of information diffusion analysis for social network applications are discussed, and some open problems of information diffusion for social network analysis are presented.
Reliability analysis of stress–strength models usually assumes that the stress and strength variables are independent. However, in numerous real-world scenarios, stress and strength variables exhibit dependence. This paper investigates the reliability estimation in a multicomponent stress–strength model for parallel-series system assuming that the dependence between stress and strength is based on the Clayton copula. The estimators for the unknown parameters and system reliability are derived using the two-step maximum likelihood estimation and the maximum product spacing methods. Additionally, confidence intervals are constructed by utilizing asymptotically normal distribution theory and bootstrap method. Furthermore, Monte Carlo simulations are conducted to compare the effectiveness of the proposed inference methods. Finally, a real dataset is analyzed for illustrative purposes.
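For reference, the bivariate Clayton copula used to model the stress–strength dependence has the standard form (for dependence parameter $\theta > 0$; this is background, not a result of the paper)

$$C_\theta(u, v) = \left( u^{-\theta} + v^{-\theta} - 1 \right)^{-1/\theta}, \qquad u, v \in (0, 1],$$

with stronger lower-tail dependence as $\theta$ grows and independence recovered as $\theta \to 0$.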
In certain scenarios, the large footprint of a robot is not conducive to multi-robot cooperative operations. This paper presents a generalized single-loop parallel manipulator with remote center of motion (GSLPM-RCM) that addresses this issue by incorporating a reconfigurable base: the footprint of the RCM manipulator can be adjusted by varying the parameters of the base. First, utilizing configuration evolution, a reconfigurable base is constructed based on the principle of forming RCM motion. Then, according to the modular analysis method, the inverse kinematics and the workspace of this parallel RCM manipulator are analyzed. Subsequently, the motion/force transmissibility of the RCM manipulator is analyzed, considering its single-loop and multi-degree-of-freedom characteristics. Leveraging the workspace index and the transmissibility indices, dimension optimization of the manipulator is carried out. Finally, the influence of the reconfigurable base on the workspace and transmissibility performance of the optimized manipulator is studied.
In this paper, the model of bisexual branching processes affected by viral infectivity, with random control functions, in independent and identically distributed (i.i.d.) random environments is established, and its Markov property is derived first. Then the relations among the probability generating functions of this model are studied, and some sufficient conditions for extinction of the process under common mating functions are presented. Finally, the limiting behaviors of the model after proper normalization, such as sufficient conditions for convergence in $L^1$ and $L^2$ and almost everywhere convergence, are investigated under the condition that the random control functions are superadditive.
Traditional bulky and complex control devices such as remote controls and ground stations cannot meet the requirements for fast and flexible control of unmanned aerial vehicles (UAVs) in complex environments. Therefore, a data glove based on multi-sensor fusion is designed in this paper. To achieve gesture control of UAVs, the method accurately recognizes various gestures and converts them into corresponding UAV control commands. First, the wireless data glove fuses flexible fiber-optic sensors and inertial sensors to construct a gesture dataset. Then, the trained neural network models are deployed on the STM32 microcontroller-based data glove for real-time gesture recognition: a convolutional neural network with an attention mechanism (CNN-Attention) is used for static gesture recognition, and a convolutional neural network with bidirectional long short-term memory (CNN-Bi-LSTM) is used for dynamic gesture recognition. Finally, the gestures are converted into control commands and sent to the vehicle terminal to control the UAV. In UAV simulation tests on the simulation platform, the average recognition accuracy reaches 99.7% for 32 static gestures and 99.9% for 13 dynamic gestures, indicating excellent gesture recognition performance. A task test in a scene constructed in a real environment shows that the UAV responds to gestures quickly, and the proposed method achieves real-time, stable control of the UAV on the terminal side.
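As an illustration of the static-gesture branch, here is a minimal PyTorch sketch of a 1-D CNN with attention pooling; the layer sizes, channel count, and window length are illustrative assumptions, not the paper's architecture (the dynamic branch would analogously replace the pooling with a bidirectional LSTM).

```python
import torch
import torch.nn as nn

class CNNAttention(nn.Module):
    """1-D CNN over a gesture window with attention pooling across time steps."""
    def __init__(self, n_channels=9, n_classes=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.Linear(64, 1)       # scores each time step
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                   # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)    # (batch, time, 64)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        pooled = (w * h).sum(dim=1)         # weighted temporal pooling
        return self.head(pooled)

model = CNNAttention()
logits = model(torch.randn(8, 9, 100))      # 8 windows, 9 sensor channels, 100 samples
print(logits.shape)                         # torch.Size([8, 32]) gesture class scores
```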
Vibration-based structural health monitoring (SHM) of (large) infrastructure through operational modal analysis (OMA) is a commonly adopted strategy. This is typically a four-step process, comprising estimation, tracking, data normalization, and decision-making. These steps are essential to ensure structural modes are correctly identified, and results are normalized for environmental and operational variability (EOV). Other challenges, such as nonstructural modes in the OMA, for example, rotor harmonics in (offshore) wind turbines (OWTs), further complicate the process. Typically, these four steps are considered independently, making the method simple and robust, but rather limited in challenging applications, such as OWTs. Therefore, this study aims to combine tracking, data normalization, and decision-making through a single machine learning (ML) model. The presented SHM framework starts by identifying a “healthy” training dataset, representative of all relevant EOV, for all structural modes. Subsequently, operational and weather data are used for feature selection and a comparative analysis of ML models, leading to the selection of tree-based learners for natural frequency prediction. Uncertainty quantification (UQ) is introduced to identify out-of-distribution instances, crucial to guarantee low modeling error and ensure only high-fidelity structural modes are tracked. This study uses virtual ensembles for UQ through the variance between multiple truncated submodel predictions. Practical application to monopile-supported OWT data demonstrates the tracking abilities, separating structural modes from rotor dynamics. Control charts show improved decision-making compared to traditional reference-based methods. A synthetic dataset further confirms the approach’s robustness in identifying relevant natural frequency shifts. This study presents a comprehensive data-driven approach for vibration-based SHM.
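As an illustration of the virtual-ensemble idea described above, here is a minimal Python sketch using scikit-learn gradient boosting and synthetic data; the learner, features, and truncation scheme are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for (operational/weather features -> natural frequency).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))               # e.g. wind speed, rotor speed, temperature, ...
y = 0.28 + 0.01 * X[:, 0] + 0.005 * rng.normal(size=500)
model = GradientBoostingRegressor(n_estimators=300, random_state=0).fit(X, y)

def virtual_ensemble(model, X, n_members=5):
    """Mean and variance over truncated sub-models of one boosted ensemble."""
    staged = np.array(list(model.staged_predict(X)))          # (n_stages, n_samples)
    idx = np.linspace(staged.shape[0] // 2, staged.shape[0] - 1, n_members).astype(int)
    members = staged[idx]                    # truncations act as ensemble members
    return members.mean(axis=0), members.var(axis=0)

mean, var = virtual_ensemble(model, X[:5])
print(mean.round(3), var.round(6))           # large variance flags out-of-distribution inputs
```

In practice the predicted variance would feed the out-of-distribution check and control charts before a tracked natural frequency is accepted.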