Graph-based semi-supervised learning methods combine the graph structure and labeled data to classify unlabeled data. In this work, we study the effect of a noisy oracle on classification. In particular, we derive the maximum a posteriori (MAP) estimator for clustering a degree-corrected stochastic block model when a noisy oracle reveals a fraction of the labels. We then propose an algorithm derived from a continuous relaxation of the MAP, and we establish its consistency. Numerical experiments show that our approach achieves promising performance on synthetic and real data sets, even in the case of very noisy labeled data.
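The MAP derivation and its continuous relaxation are specific to the paper, but the underlying mechanism, diffusing a partially revealed (and possibly noisy) labeling over the graph structure, can be illustrated with a minimal classic label-propagation sketch. This is a hedged, generic baseline in plain numpy; the toy graph, seeds, and hyperparameters are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def label_propagation(W, y_obs, mask, alpha=0.9, iters=200):
    """Diffuse (possibly noisy) oracle labels over a graph (Zhou et al.-style propagation)."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))            # symmetric normalization D^{-1/2} W D^{-1/2}
    Y = np.zeros((W.shape[0], 2))
    Y[mask, y_obs[mask]] = 1.0                 # one-hot seeds; unlabeled rows stay zero
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y  # diffuse, but stay anchored to the seeds
    return F.argmax(axis=1)

# Toy two-block graph: two 5-node cliques joined by a single edge.
W = np.zeros((10, 10))
W[:5, :5] = 1; W[5:, 5:] = 1
np.fill_diagonal(W, 0)
W[4, 5] = W[5, 4] = 1
y = np.array([0] * 5 + [1] * 5)                # planted communities
mask = np.zeros(10, dtype=bool)
mask[[0, 5]] = True                            # oracle reveals one label per block
pred = label_propagation(W, y, mask)
```

On this toy graph one revealed label per block suffices to recover the planted partition; with a noisy oracle, more seeds per block would be needed to outvote flipped labels.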
Real-time strategy (RTS) games have provided a fertile ground for AI research, with notable recent successes based on deep reinforcement learning (RL). However, RL remains a data-hungry approach with high sample complexity. In this paper, we focus on a sample complexity reduction technique called reinforcement learning as a rehearsal (RLaR), using the RTS game MicroRTS to formulate and evaluate it. RLaR has previously been formulated in the context of action-value function based RL. Here, we formulate it for a different RL framework, actor-critic RL. We show that, on the one hand, the actor-critic framework allows RLaR to be much simpler; on the other hand, it leaves room for a key component of RLaR: a prediction function that relates a learner's observations with those of its opponent. This function, when leveraged for exploration, accelerates RL, as our experiments in MicroRTS show. Further experiments provide evidence that RLaR may reduce actor noise compared to a variant that does not utilize RLaR's exploration. This study provides the first evaluation of RLaR's efficacy in a domain with a large strategy space.
In single-zone multi-node systems (SZMNSs), temperature controls rely on a single probe near the thermostat, resulting in temperature discrepancies that cause thermal discomfort and energy waste. Augmenting smart thermostats (STs) with per-room sensors has gained acceptance by major ST manufacturers. This paper leverages additional sensory information to empirically characterize the services provided by buildings, including thermal comfort, energy efficiency, and demand response (DR). Utilizing room-level time-series data from 1000 houses, metadata from 110,000 houses across the United States, and data from two real-world testbeds, we examine the limitations of SZMNSs and explore the potential of remote sensors. We discover that comfortable DR durations (CDRDs) for rooms are typically 70% longer or 40% shorter than for the room with the thermostat. On average, rooms at the bounds of the control temperature typically deviate from the house average by around −3 °F to 2.5 °F. Moreover, in 95% of houses, we identified rooms experiencing notably higher solar gains compared to the rest of the rooms, while 85% and 70% of houses demonstrated lower heat input and poor insulation, respectively. Lastly, cooling energy consumption rises as the number of sensors increases, whereas heating usage fluctuates between −19% and +25%. This study serves as a benchmark for assessing the thermal comfort and DR services in the existing housing stock, while also highlighting the energy efficiency impacts of sensing technologies. Our approach sets the stage for more granular, precise control strategies of SZMNSs.
We develop realizability models of intensional type theory, based on groupoids, wherein realizers themselves carry non-trivial (non-discrete) homotopical structure. In the spirit of realizability, this is intended to formalize a homotopical BHK interpretation, whereby evidence for an identification is a path. Specifically, we study partitioned groupoidal assemblies. Categories of such are parameterized by “realizer categories” (instead of the usual partial combinatory algebras) that come equipped with an interval qua internal cogroupoid. The interval furnishes a notion of homotopy as well as a fundamental groupoid construction. Objects in a base groupoid are realized by points in the fundamental groupoid of some object from the realizer category; isomorphisms in the base groupoid are realized by paths in said fundamental groupoid. The main result is that, under mild conditions on the realizer category, the ensuing category of partitioned groupoidal assemblies models intensional (1-truncated) type theory without function extensionality. Moreover, when the underlying realizer category is “untyped,” there exists an impredicative universe of 1-types (the modest fibrations). This is a groupoidal analog of the traditional situation.
Eight major supply chains contribute more than 50% of global greenhouse gas (GHG) emissions. These supply chains range from raw materials to end-product manufacturing. Hence, it is critical to accurately estimate the carbon footprint of these supply chains, identify GHG hotspots, explain the factors that create the hotspots, and carry out what-if analysis to reduce the carbon footprint of supply chains. To this end, we propose an enterprise decarbonization accelerator framework with a modular structure that automates carbon footprint estimation, identification of hotspots, explainability, and what-if analysis to recommend measures to reduce the carbon footprint of supply chains. To illustrate the working of the framework, we apply it to the cradle-to-gate extent of the palm oil supply chain of a leading palm oil producer. The framework identified that the farming stage is the hotspot in the considered supply chain. As the next level of analysis, the framework identified the hotspots in the farming stage and provided explainability on factors that created hotspots. We discuss the what-if scenarios and the recommendations generated by the framework to reduce the carbon footprint of the hotspots and the resulting impact on palm oil tree yield.
In this paper, we study the approximate minimization problem of weighted finite automata (WFAs): to compute the best possible approximation of a WFA given a bound on the number of states. By reformulating the problem in terms of Hankel matrices, we leverage classical results on the approximation of Hankel operators, namely the celebrated Adamyan-Arov-Krein (AAK) theory. We solve the optimal spectral-norm approximate minimization problem for irredundant WFAs with real weights, defined over a one-letter alphabet. We present a theoretical analysis based on AAK theory and bounds on the quality of the approximation in the spectral norm and $\ell ^2$ norm. Moreover, we provide a closed-form solution, and an algorithm, to compute the optimal approximation of a given size in polynomial time.
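For context, a hedged numerical sketch of the objects involved: by the Schmidt–Mirsky theorem, truncating the SVD of a finite Hankel section gives the best unstructured rank-k approximation in spectral norm, with error exactly the (k+1)-th singular value. The subtlety AAK theory resolves is that this truncation generally destroys the Hankel structure, whereas AAK guarantees (for infinite Hankel operators) an approximant of the same spectral error that is itself Hankel. The sequence and sizes below are illustrative assumptions, not drawn from the paper:

```python
import numpy as np

def hankel_matrix(f, m):
    """m x m Hankel section H[i, j] = f[i + j] of a sequence f."""
    return np.array([[f[i + j] for j in range(m)] for i in range(m)])

def best_rank_k(H, k):
    """Schmidt-Mirsky: truncated SVD is the spectral-norm-optimal rank-k approximation."""
    U, s, Vt = np.linalg.svd(H)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

n = np.arange(16)
f = 0.5 ** n + 0.2 ** n          # sum of two exponentials => the Hankel matrix has rank 2
H = hankel_matrix(f, 8)
s = np.linalg.svd(H, compute_uv=False)
H1 = best_rank_k(H, 1)
err = np.linalg.norm(H - H1, 2)  # spectral-norm error equals the second singular value
```

Note that `H1` is, in general, no longer Hankel, i.e., it no longer corresponds to any WFA; producing a structure-preserving optimal approximant is what requires the AAK machinery.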
Classical approaches for flood prediction apply numerical methods for the solution of partial differential equations that capture the physics of inundation processes (e.g., the 2D Shallow Water equations). However, traditional inundation models are still unable to satisfy the requirements of many relevant applications, including early-warning systems, high-resolution (or large spatial domain) simulations, and robust inference over distributions of inputs (e.g., rainfall events). Machine learning (ML) approaches are a promising alternative to physics-based models due to their ability to efficiently capture correlations between relevant inputs and outputs in a data-driven fashion. In particular, once trained, ML models can be tested/deployed much more efficiently than classical approaches. Yet, few ML-based solutions for spatio-temporal flood prediction have been developed, and their reliability/accuracy is poorly understood. In this paper, we propose FloodGNN-GRU, a spatio-temporal flood prediction model that combines a graph neural network (GNN) and a gated recurrent unit (GRU) architecture. Compared to existing approaches, FloodGNN-GRU (i) employs a graph-based model (GNN); (ii) operates on both spatial and temporal dimensions; and (iii) processes the water flow velocities as vector features, instead of scalar features. We evaluate FloodGNN-GRU using a LISFLOOD-FP simulation of Hurricane Harvey (2017) in Houston, Texas. Our results, based on several metrics, show that FloodGNN-GRU outperforms several data-driven alternatives in terms of accuracy. Moreover, training our model takes 100x less time, and testing it 1000x less time, than running a comparable simulation. These findings illustrate the potential of ML-based methods to efficiently emulate physics-based inundation models, especially for short-term predictions.
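The GNN-plus-GRU combination can be sketched generically: a message-passing step aggregates each node's (e.g., grid cell's) neighbor features, and a GRU cell carries each node's hidden state across time steps. The minimal numpy sketch below shows only this wiring; the graph, sizes, weights, and inputs are random placeholders, not FloodGNN-GRU itself:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gnn_step(X, A):
    """Mean-aggregation message passing: each node averages its neighbors' features."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    return (A @ X) / deg

def gru_cell(x, h, p):
    """Standard GRU update applied to every node's hidden state."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])             # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])             # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_nodes, d_in, d_hid, T = 5, 4, 8, 6
A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)                                 # undirected mesh, no self-loops
p = {k: 0.1 * rng.standard_normal(shape) for k, shape in
     [("Wz", (d_in, d_hid)), ("Uz", (d_hid, d_hid)),
      ("Wr", (d_in, d_hid)), ("Ur", (d_hid, d_hid)),
      ("Wh", (d_in, d_hid)), ("Uh", (d_hid, d_hid))]}

H = np.zeros((n_nodes, d_hid))
for t in range(T):                                     # unroll over time steps
    X_t = rng.standard_normal((n_nodes, d_in))         # stand-in for per-cell inputs
    H = gru_cell(gnn_step(X_t, A), H, p)
```

In the actual model, the per-node inputs would include terrain/rainfall features and the water flow velocity vectors, and the weights would be trained end-to-end against simulation outputs.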
Imitation from Observation (IfO) prompts the robot to imitate tasks from unlabeled videos via reinforcement learning (RL). The performance of an IfO algorithm depends on its ability to extract task-relevant representations, since images carry both task-relevant and task-irrelevant information. Existing IfO algorithms extract image representations using either a simple encoding network or a pre-trained network. Due to the lack of action labels, it is challenging to design a supervised task-relevant proxy task to train the simple encoding network, while representations extracted by a pre-trained network such as ResNet are often task-irrelevant. In this article, we propose a new approach for robot IfO via multimodal observations. Different modalities describe the same information from different perspectives, which can be used to design an unsupervised proxy task. Our approach contains two modules: the unsupervised cross-modal representation (UCMR) module and a self-behavioral cloning (self-BC)-based RL module. The UCMR module learns to extract task-relevant representations via a multimodal unsupervised proxy task. The self-BC module collects successful experiences during RL training for further offline policy optimization. We evaluate our approach on real-robot pouring tasks: pouring water, quantitative pouring, and pouring sand. The robot achieves state-of-the-art performance.
The cable-driven parallel mechanism (CDPM) is an attractive industrial solution for pick-and-place tasks owing to advantages such as a large workspace. Alongside these advantages, several challenges remain in improving performance by accounting for constraints in different components, such as cable behavior, the shape and size of the end effector and base, and the models of pulleys and actuators. Moreover, the impact of online geometry reconfiguration must be analyzed. This paper examines the impact of these physical constraints on the performance of reconfigurable CDPMs. The methodology follows systematic review and meta-analysis guidelines to report the results. Papers were retrieved from Scopus and Google Scholar using related keywords, yielding 90 and 37 articles, respectively. After removing duplicates and unrelated papers, 88 studies that met the inclusion criteria were selected for review. Even when physical constraints are considered in modeling the mechanism, simplifications in designing a model for the reconfigurable CDPM introduce errors. There is a gap in designing high-performance controllers that track desired trajectories while reconfiguring the geometry and satisfying physical constraints. In conclusion, this review identifies several constraints that must be addressed when designing controllers to track desired trajectories and improve performance in future work, and it presents an integrated controller architecture that incorporates physical constraints and predictive control.
We derive an asymptotic expansion for the critical percolation density of the random connection model as the dimension of the encapsulating space tends to infinity. We calculate rigorously the first expansion terms for the Gilbert disk model, the hyper-cubic model, the Gaussian connection kernel, and a coordinate-wise Cauchy kernel.
Mixed Reality (MR) enables individuals to visualise and interact with artefacts and environments through a combination of physical and virtual assets. It has received increased interest from the design community as a means to accelerate, enrich and enhance prototyping activities. This article concerns MR's ability to deceive an individual through the combination of virtual and physical assets and their underlying traits (e.g., mass, size), and a user's cognitive ability to 'join the dots'. If properly implemented, MR could save time and resources by reducing the required prototype fidelity and the need to fully realise variants. However, there is a gap in understanding how the traits of physical and virtual assets combine with cognition to form a perceived reality. This article presents a study that investigated the roles mass, virtual model size and physical model size play in users' perception of an MR prototype. The relative impact of these factors was determined by varying these parameters and assessing the change perceived by the user. The key finding from this study was that the virtual model size had a far greater influence on the prototype size perceived by the user. This suggests that the required physical fidelity of an MR prototype can be lower than the virtual fidelity. Furthermore, exploring size design variants can be achieved exclusively through changes to the virtual model.
Probabilistic Answer Set Programming (PASP) under the credal semantics extends Answer Set Programming with probabilistic facts that represent uncertain information. The probabilistic facts are discrete, with Bernoulli distributions. However, several real-world scenarios require a combination of both discrete and continuous random variables. In this paper, we extend the PASP framework to support continuous random variables and propose Hybrid Probabilistic Answer Set Programming. Moreover, we discuss, implement, and assess the performance of two exact algorithms based on projected answer set enumeration and knowledge compilation and two approximate algorithms based on sampling. Empirical results, also in line with known theoretical results, show that exact inference is feasible only for small instances, but knowledge compilation has a huge positive impact on performance. Sampling allows handling larger instances but sometimes requires an increasing amount of memory.
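The sampling-based side can be sketched with a toy Monte Carlo routine over Bernoulli probabilistic facts. This is only a skeleton (a sharp query over independent facts, with an illustrative two-fact program), not the authors' algorithms; it omits what makes the credal semantics interesting, namely that worlds whose answer-set programs have multiple models yield lower and upper probability bounds rather than a point estimate:

```python
import random

def sample_query(prob_facts, query, n=100_000, seed=0):
    """Monte Carlo estimate of P(query): sample each Bernoulli fact, evaluate the query."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        world = {f: rng.random() < p for f, p in prob_facts.items()}
        hits += query(world)
    return hits / n

# Toy program:  0.4::a.  0.7::b.  q :- a.  q :- b.
facts = {"a": 0.4, "b": 0.7}
est = sample_query(facts, lambda w: w["a"] or w["b"])  # exact: 1 - 0.6 * 0.3 = 0.82
```

Extending such a sampler to continuous facts amounts to drawing from the corresponding densities when building each world, which is the intuition behind the approximate algorithms evaluated in the paper.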
Maintaining object grasp stability represents a pivotal challenge within the domain of robotic manipulation and upper-limb prosthetics. Perturbations originating from external sources frequently disrupt the stability of grasps, resulting in slippage occurrences. Moreover, if the grasping forces applied while controlling slip are not optimal, the grasped objects may deform. This study investigates the robustification of a reinforcement learning (RL) policy for implementing intelligent bionic reflex control, i.e., slip and deformation prevention of the grasped objects. RL-derived policies are vulnerable to failures in environments characterized by dynamic variability. To mitigate this vulnerability, we propose a methodology involving the incorporation of an adaptive sliding mode controller into a pre-trained RL policy. By exploiting the inherent invariance property of the sliding mode algorithm in the presence of uncertainties, our approach strengthens the robustness of the RL policies against diverse and dynamic variations. Numerical simulations substantiate the efficacy of our approach in robustifying RL policies trained within simulated environments.
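As a hedged one-dimensional sketch of the idea (not the paper's controller): take a double integrator with a bounded matched disturbance, let a simple nominal law stand in for the pretrained RL policy, and add a boundary-layer-smoothed sliding-mode term whose switching gain exceeds the disturbance bound, so the state reaches the sliding surface despite the unknown disturbance. The plant and all gains below are illustrative assumptions:

```python
import numpy as np

# Plant: double integrator x1' = x2, x2' = u + d(t), with matched disturbance |d| <= 0.8.
lam, k, phi, dt = 2.0, 2.0, 0.05, 1e-3    # surface slope, switching gain (> |d|), boundary layer
x1, x2 = 1.0, 0.0                         # start away from the goal (the origin)
for i in range(int(5.0 / dt)):
    t = i * dt
    d = 0.8 * np.sin(2 * t)               # unknown bounded disturbance
    s = x2 + lam * x1                     # sliding surface: s = 0 implies exponential decay of x1
    u_policy = -lam * x2                  # nominal law (hypothetical stand-in for the RL policy)
    u_smc = -k * np.clip(s / phi, -1, 1)  # saturated switching term (reduces chattering)
    u = u_policy + u_smc
    x1 += dt * x2                         # explicit Euler integration
    x2 += dt * (u + d)
```

The saturation (boundary layer `phi`) trades the ideal invariance of pure switching for chattering-free control, leaving a residual error of order `phi`; since the gain `k` exceeds the disturbance bound, the state still converges to a small neighborhood of the origin.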
As machine learning applications gain widespread adoption and integration in a variety of applications, including safety and mission-critical systems, the need for robust evaluation methods grows more urgent. This book compiles scattered information on the topic from research papers and blogs to provide a centralized resource that is accessible to students, practitioners, and researchers across the sciences. The book examines meaningful metrics for diverse types of learning paradigms and applications, unbiased estimation methods, rigorous statistical analysis, fair training sets, and meaningful explainability, all of which are essential to building robust and reliable machine learning products. In addition to standard classification, the book discusses unsupervised learning, regression, image segmentation, and anomaly detection. The book also covers topics such as industry-strength evaluation, fairness, and responsible AI. Implementations using Python and scikit-learn are available on the book's website.
Continuum robots offer unique advantages in performing tasks within extremely confined environments due to their exceptional dexterity and adaptability. However, their soft materials and elastic structures inherently introduce nonlinearity and shape instability, especially when the robot encounters external contact forces. To address these challenges, this paper presents a comprehensive model and experimental study to estimate the shape deformation of a switchable rigid-continuum robot (SRC-Bot). The kinematic analysis is first conducted to specify the degrees of freedom (DoF) and basic motions of SRC-Bot, including bending, rotating, and elongating motions. This analysis assumes that the curvature varies along the central axis and maps the relationship between joint space and driven space. Subsequently, an equivalence concept is proposed to unify the stiffness associated with each DoF, which is then utilized in the establishment of the dynamic model. According to the mechanical structural design, the deformed posture of SRC-Bot is discretized into five segments, corresponding to the distribution of the guiders. The dynamic model is then derived using Newton's second law and Euler's method to simulate the deformation under gravity, friction, and external forces. Additionally, the stiffness in three directions is quantified through an identification process to complete the theoretical model. Furthermore, a series of experiments are conducted and compared with simulated results to validate the response and deformed behavior of SRC-Bot. The comparative results demonstrate that the proposed model-based simulation accurately captures the deformable characteristics of the robot, encompassing both static deformed postures and dynamic time-domain responses induced by external and actuation forces.
This study focuses on the kinematic and dynamic modeling of a wheeled-legged robot (WLR), taking into account kinematic and dynamic slippage. In this regard, the Gibbs–Appell formulation was utilized to derive dynamic equations. Determining the slippage in the wheels for movement equations is a challenging task due to its dependency on factors such as the robot's postures, velocities, and surface characteristics. To address this challenge, machine vision-based pose estimation was used to quantify the slippage of the wheels relative to the body. These data served as input to the movement equations to analyze the robot's deviation from its path and posture. In the following, the robot's movement was simulated using Webots and MATLAB, followed by various experimental tests involving acceleration and changes in leg angles on the WLR. The results were then compared to the simulations to demonstrate the accuracy of the developed system modeling. Additionally, an IMU sensor was utilized to measure the robot's motion and validate it against the machine vision data. The findings revealed that neglecting the slippage of the wheels in the robot's motion modeling resulted in errors ranging from 5% to 11.5%. Furthermore, lateral slippage ranging from 1.1 to 5.2 cm was observed in the robot's accelerated movement. This highlights the importance of including lateral slippage in the equations for a more precise modeling of the robot's behavior.
Datafication—the increase in data generation and advancements in data analysis—offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing data in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates 10 core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for the public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues.
As the field of migration studies evolves in the digital age, big data analytics emerge as a potential game-changer, promising unprecedented granularity, timeliness, and dynamism in understanding migration patterns. However, the epistemic value added by this data explosion remains an open question. This paper critically appraises the claim, investigating the extent to which big data augments, rather than merely replicates, traditional data insights in migration studies. Through a rigorous literature review of empirical research, complemented by a conceptual analysis, we aim to map out the methodological shifts and intellectual advancements brought forth by big data. The potential scientific impact of this study extends into the heart of the discipline, providing critical illumination on the actual knowledge contribution of big data to migration studies. This, in turn, delivers a clarified roadmap for navigating the intersections of data science, migration research, and policymaking.
Objective: The study aims to build a comprehensive network structure of psychopathology based on patient narratives by combining the merits of both qualitative and quantitative research methodologies. Research methods: The study web-scraped data from 10,933 people who disclosed a prior DSM/ICD-11 diagnosed mental illness when discussing their lived experiences of mental ill health. The study then used Python 3 and its associated libraries to run network analyses and generate a network graph. Key findings: The results of the study revealed 672 unique experiences or symptoms that generated 30,023 links or connections. The study also identified that of all 672 reported experiences/symptoms, five were deemed the most influential: “anxiety,” “fear,” “auditory hallucinations,” “sadness,” and “depressed mood and loss of interest.” Additionally, the study uncovered some unusual connections between the reported experiences/symptoms. Discussion and recommendations: The study demonstrates that applying a quantitative analytical framework to qualitative data at scale is a useful approach for understanding the nuances of psychopathological experiences that may be missed in studies relying solely on either a qualitative or a quantitative survey-based approach. The study discusses the clinical implications of its results and makes recommendations for potential future directions.