The disassembly of end-of-life lithium-ion batteries (EOL-LIBs) is inherently complex, owing to their multi-state and multi-type characteristics. To mitigate these challenges, a human–robot collaboration disassembly (HRCD) model is developed. This model combines the cognitive abilities of humans with the advanced automation capabilities of robots, thereby substantially improving the flexibility and efficiency of the disassembly process. Consequently, this approach has become the benchmark for disassembling EOL-LIBs, given its enhanced ability to manage intricate and adaptable disassembly tasks. Furthermore, effective disassembly sequence planning (DSP) for components is crucial for guiding the entire disassembly process. Therefore, this research proposes a knowledge-graph-based approach for generating HRCD sequences for EOL-LIBs, assisting individuals who lack the relevant knowledge to complete disassembly tasks. First, a well-defined disassembly process knowledge graph integrates structural information from CAD models and disassembly operating procedures. Based on the acquired information, DSP is conducted to generate a disassembly sequence knowledge graph (DSKG), which serves as a repository in graphical form. Subsequently, knowledge graph matching is employed to align nodes in the existing DSKG, thereby reusing node sequence knowledge and completing the sequence information for the target disassembly task. Finally, the proposed method is validated using retired power LIBs as a case study product.
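A minimal sketch of the underlying data structure may help: below, a disassembly sequence knowledge graph is held as a directed graph whose nodes carry operation attributes and whose edges encode precedence, with a naive string-overlap matcher standing in for knowledge graph matching. The component names, tools, and precedence constraints are hypothetical, and this is not the paper's implementation.

    # Minimal sketch (not the paper's implementation): a disassembly sequence
    # knowledge graph (DSKG) stored as a directed graph, with naive node matching
    # used to reuse sequence knowledge for a new, similar battery pack.
    # Component names and precedence constraints below are hypothetical.
    import networkx as nx

    dskg = nx.DiGraph()
    # Nodes carry attributes used later for matching (tool, executor).
    dskg.add_node("top_cover",   tool="electric screwdriver", executor="robot")
    dskg.add_node("bus_bars",    tool="insulated wrench",     executor="human")
    dskg.add_node("BMS_board",   tool="tweezers",             executor="human")
    dskg.add_node("cell_module", tool="vacuum gripper",       executor="robot")
    # Edges encode precedence: the source must be removed before the target.
    dskg.add_edges_from([("top_cover", "bus_bars"),
                         ("top_cover", "BMS_board"),
                         ("bus_bars",  "cell_module"),
                         ("BMS_board", "cell_module")])

    # A feasible human-robot collaborative sequence is any topological order.
    print(list(nx.topological_sort(dskg)))

    # Naive node matching: reuse the stored operation of the existing node whose
    # name overlaps most with the target component (a stand-in for the semantic
    # similarity used in knowledge graph matching).
    def match(target_name, graph):
        return max(graph.nodes,
                   key=lambda n: len(set(n.split("_")) & set(target_name.split("_"))))

    print(match("front_cover", dskg))  # -> "top_cover", whose tool/executor can be reused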
Face milling is performed on aluminum alloy A96061-T6 at diverse cutting parameters proposed by the design of experiments. Surface roughness is predicted by examining the effects of cutting parameters (CP), vibrations (Vib), and sound characteristics (SC); estimating surface roughness from sound characteristics constitutes the novelty of this work. In this study, a hybrid ANN-TLBO model (Artificial Neural Network optimized by the Teaching-Learning-Based Optimization algorithm) is created to predict surface roughness from CP, Vib, and SC, and the performance of the resulting models is evaluated to ascertain their accuracy and efficacy. The CP-based hybrid model achieved an accuracy of 95.1%, demonstrating its capacity to provide reliable surface roughness predictions. The Vib-based hybrid model achieved a more modest accuracy of 85.4%; although less accurate than the CP model, it still showed promise for forecasting surface roughness. The SC-based hybrid model outperformed the other two with an accuracy of 96.2%, making it the most reliable and efficient technique for assessing surface roughness in this investigation. An analysis of error percentages confirmed the performance of the SC-based Model-3, which exhibited an average error of 3.77%, outperforming the Vib-based Model-2 (14.52%) and the CP-based Model-1 (4.75%). Given its accuracy, the SC model is the preferred option and may become the technique of choice for industrial applications requiring accurate surface roughness measurement. Its performance highlights the importance of optimization strategies in improving the predictive capacity of ANN-based models, advancing the field of surface roughness assessment and related areas. An IoT platform is developed to link the model's output with other systems; the resulting system eliminates manual, physical surface roughness measurement and allows surface roughness data to be displayed on the cloud and other platforms.
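As a rough illustration of the hybrid scheme, the sketch below uses a bare-bones TLBO loop (teacher phase plus learner phase) to tune the weights of a tiny one-hidden-layer network. The features and targets are synthetic stand-ins for the CP/Vib/SC measurements, not the study's dataset or its exact model.

    # Minimal TLBO-over-ANN sketch on synthetic data (illustrative only; the
    # cutting-parameter/vibration/sound features below are made up).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 3))          # stand-ins for CP/Vib/SC features
    y = np.sin(X @ np.array([1.0, 0.5, -0.8]))     # synthetic "surface roughness"

    H = 8                                          # hidden units
    DIM = 3 * H + H                                # weights: input->hidden and hidden->output

    def predict(w, X):
        W1 = w[:3 * H].reshape(3, H)
        W2 = w[3 * H:].reshape(H, 1)
        return (np.tanh(X @ W1) @ W2).ravel()

    def mse(w):
        return np.mean((predict(w, X) - y) ** 2)

    pop = rng.uniform(-1, 1, size=(30, DIM))       # population of candidate weight vectors
    for _ in range(200):
        fitness = np.array([mse(w) for w in pop])
        teacher = pop[fitness.argmin()]
        mean = pop.mean(axis=0)
        # Teacher phase: move learners toward the teacher, away from the mean.
        Tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
        cand = pop + rng.random(pop.shape) * (teacher - Tf * mean)
        improved = np.array([mse(w) for w in cand]) < fitness
        pop[improved] = cand[improved]
        # Learner phase: move toward a better random peer, away from a worse one.
        fitness = np.array([mse(w) for w in pop])
        partner = rng.permutation(len(pop))
        sign = np.where(fitness < fitness[partner], 1.0, -1.0)[:, None]
        cand = pop + rng.random(pop.shape) * sign * (pop - pop[partner])
        improved = np.array([mse(w) for w in cand]) < fitness
        pop[improved] = cand[improved]

    print("best MSE:", min(mse(w) for w in pop))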
In an era where artificial intelligence (AI) permeates every facet of our lives, the imperative to steer AI development toward enhancing human wellbeing has never been more critical. However, the development of such positive AI poses substantial challenges due to the current lack of mature methods for addressing the complexities that designing AI for wellbeing poses. This article presents and evaluates the positive AI design method, aimed at addressing this gap. The method provides a human-centered process for translating wellbeing aspirations into concrete interventions. First, we explain the method’s key steps: (1) contextualizing, (2) operationalizing, (3) designing, and (4) implementing, supported by (5) continuous measurement for iterative feedback cycles. We then present a multi-case study where novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the case studies’ outcomes, rating them moderately high for feasibility, desirability, and plausibility of achieving intended wellbeing benefits. Together, these studies provide preliminary validation of the method’s ability to improve AI design, while identifying opportunities for enhancement. Building on these insights, we propose adaptations for future iterations of the method, such as the inclusion of wellbeing-related heuristics, suggesting promising avenues for future work. This human-centered approach shows promise for realizing a vision of “AI for wellbeing” that does not just avoid harm, but actively promotes human flourishing.
With global wind energy capacity ramping up, accurately predicting damage equivalent loads (DELs) and fatigue across wind turbine populations is critical, not only for ensuring the longevity of existing wind farms but also for the design of new farms. However, the estimation of such quantities of interest is hampered by the inherent complexity of modeling critical underlying processes, such as the aerodynamic wake interactions between turbines that increase mechanical stress and reduce useful lifetime. While high-fidelity computational fluid dynamics and aeroelastic models can capture these effects, their computational requirements limit real-world usage. Recently, fast machine learning-based surrogates which emulate more complex simulations have emerged as a promising solution. Yet, most surrogates are task-specific and lack flexibility for varying turbine layouts and types. This study explores the use of graph neural networks (GNNs) to create a robust, generalizable flow and DEL prediction platform. By conceptualizing wind turbine populations as graphs, GNNs effectively capture farm layout-dependent relational data, allowing extrapolation to novel configurations. We train a GNN surrogate on a large database of PyWake simulations of random wind farm layouts to learn basic wake physics, then fine-tune the model on limited data for a specific unseen layout simulated in HAWC2Farm for accurate adapted predictions. This transfer learning approach circumvents data scarcity limitations and leverages fundamental physics knowledge from the source low-resolution data. The proposed platform aims to match simulator accuracy, while enabling efficient adaptation to new higher-fidelity domains, providing a flexible blueprint for wake load forecasting across varying farm configurations.
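The core idea of treating a farm as a graph can be sketched briefly: turbines become nodes, nearby turbines are connected by edges, and one message-passing step mixes each turbine's features with those of its neighbors. The layout, features, and weights below are invented and untrained; this is not the PyWake/HAWC2Farm surrogate described above.

    # Minimal sketch of a wind farm as a graph with one message-passing step
    # (illustrative; layout, features, and weights are made up, not the trained surrogate).
    import numpy as np

    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 5000, size=(6, 2))          # turbine positions [m]
    feats = np.c_[xy, np.full((6, 1), 10.0)]        # node features: x, y, free wind speed

    # Connect turbines closer than 2 km (edges carry the wake interactions).
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    adj = (d < 2000) & (d > 0)

    W_self, W_nbr = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))

    def message_passing(h, adj, W_self, W_nbr):
        deg = np.maximum(adj.sum(1, keepdims=True), 1)
        nbr_mean = (adj @ h) / deg                  # mean-aggregate neighbor features
        return np.tanh(h @ W_self + nbr_mean @ W_nbr)

    h1 = message_passing(feats, adj, W_self, W_nbr)
    # A readout head (omitted) would map h1 to per-turbine flow and DEL predictions.
    print(h1.shape)                                 # (6, 4) latent per-turbine embeddings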
Early phases of the design process require designers to selectively bring into view elements of the problem that they deem important. This exploration process is commonly referred to as problem framing and is essential to solution generation. There have recently been calls in the literature for more precise representations of framing activity and of how individual designers come to negotiate shared frames in team settings. This paper presents a novel research approach to understanding design framing activity through a systems thinking lens. Systems thinking is the process of understanding a system’s components and their interrelations in order to create interventions that move the system’s outcomes in a more favorable direction. The proposed approach is based on the observation that systems, as mental representations of the problem, bear some similarity to frames as collections of concepts implicit in the designer’s cognition. Systems mapping, a common visualization tool used to facilitate systems thinking, could then be used to model external representations of framing, made explicit through speech and sketches. We thus adapt systems mapping to develop a coding scheme for analyzing verbal protocols of design activity to retrospectively represent framing activity. The coding scheme is applied to two distinct datasets. The resulting system maps are analyzed to highlight team problem frames, individual contributions, and how the framing activity evolves over time. This approach is well suited to visualizing the framing activity that occurs in open-ended problem contexts, where designers are more focused on problem finding and analysis than on concept generation and detailed design. Several future research avenues in which this approach could be used or extended, including with new computational methods, are presented.
Human creativity originates from brain cortical networks that are specialized in idea generation, processing, and evaluation. The concurrent verbalization of our inner thoughts during the execution of a design task enables the use of dynamic semantic networks as a tool for investigating, evaluating, and monitoring creative thought. The primary advantage of using lexical databases such as WordNet for reproducible information-theoretic quantification of convergence or divergence of design ideas in creative problem solving is the simultaneous handling of both words and meanings, which enables interpretation of the constructed dynamic semantic networks in terms of underlying functionally active brain cortical regions involved in concept comprehension and production. In this study, the quantitative dynamics of semantic measures computed with a moving time window is investigated empirically on the DTRS10 dataset of design review conversations, and the detected divergent thinking is shown to predict the success of design ideas. Thus, dynamic semantic networks present an opportunity for real-time computer-assisted detection of critical events during creative problem solving, with the goal of employing this knowledge to artificially augment human creativity.
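As a hedged sketch of the general idea (not the study's exact information-theoretic measures), the snippet below scores the semantic tightness of nouns uttered within one moving time window using WordNet path similarity; the window contents are invented, and lower mean similarity within a window would suggest more divergent thinking.

    # Minimal WordNet sketch over one invented time window of verbalized nouns.
    # Requires: nltk.download("wordnet")
    from itertools import combinations
    from nltk.corpus import wordnet as wn

    window = ["handle", "grip", "lever", "ocean"]    # nouns from one time window

    def similarity(w1, w2):
        s1, s2 = wn.synsets(w1, pos=wn.NOUN), wn.synsets(w2, pos=wn.NOUN)
        if not s1 or not s2:
            return 0.0
        return max(a.path_similarity(b) or 0.0 for a in s1 for b in s2)

    pairs = list(combinations(window, 2))
    score = sum(similarity(a, b) for a, b in pairs) / len(pairs)
    print(f"mean pairwise similarity in window: {score:.3f}")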
The intersection of physics and machine learning has given rise to the physics-enhanced machine learning (PEML) paradigm, which aims to improve the capabilities and reduce the individual shortcomings of data-only or physics-only methods. In this paper, the spectrum of PEML methods, expressed across the defining axes of physics and data, is discussed through a comprehensive exploration of its characteristics, usage, and motivations. In doing so, we present a survey of recent applications and developments of PEML techniques, revealing the potency of PEML in addressing complex challenges. We further demonstrate the application of a selection of such schemes on the simple working example of a single degree-of-freedom Duffing oscillator, which allows us to highlight the individual characteristics and motivations of different “genres” of PEML approaches. To promote collaboration and transparency, and to provide practical examples for the reader, the code generating these working examples is provided alongside this paper. As a foundational contribution, this paper underscores the significance of PEML in pushing the boundaries of scientific and engineering research, underpinned by the synergy of physical insights and machine learning capabilities.
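To make one such "genre" concrete, here is a minimal physics-informed loss for the single degree-of-freedom Duffing oscillator, combining a data-fit term with the residual of m x'' + c x' + k x + k3 x^3 = 0; the coefficients and measurements are synthetic, and the paper's released code contains the actual working examples.

    # Minimal PEML-flavored sketch (synthetic data and coefficients; see the
    # paper's released code for the actual working examples).
    import numpy as np

    m, c, k, k3 = 1.0, 0.1, 1.0, 0.5
    t = np.linspace(0, 10, 400)
    x_true = np.cos(1.2 * t) * np.exp(-0.05 * t)      # surrogate "measurement"
    x_noisy = x_true + 0.01 * np.random.default_rng(2).normal(size=t.size)

    def loss(x_hat, lam=1.0):
        dt = t[1] - t[0]
        xd = np.gradient(x_hat, dt)
        xdd = np.gradient(xd, dt)
        residual = m * xdd + c * xd + k * x_hat + k3 * x_hat ** 3   # unforced Duffing equation
        data_term = np.mean((x_hat - x_noisy) ** 2)                 # fit to measurements
        physics_term = np.mean(residual ** 2)                       # consistency with physics
        return data_term + lam * physics_term                       # weighted PEML loss

    print(loss(x_noisy))   # any learned reconstruction x_hat(t) would be scored this way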
Gripping devices for harvesting fruit typically operate by cutting, tearing, or unscrewing. For apples, slicing or unscrewing is preferable, and the fruit stem should not remain attached, as it damages the apple during storage. In this article, we develop a gripper for harvesting apples. The gripper both holds the fruit and jams it, followed by unscrewing. One advantage of the proposed method of collecting apples is that no time is wasted moving the manipulator from the tree to the basket; the manipulator only grips the fruit and tears it off. The fruit enters the gripper device and then passes into the fruit collection container through a rigid or flexible pipe. The gripper device is built around a ball-screw transmission supplemented by a gear drive along a helical surface, which allows both rotation and rectilinear movement of the held fruit. The gripping device also has a ratchet mechanism that fixes the fruit in place. A mathematical model of the gripper device has been developed that determines the required motor torque as a function of finger position. The parameters of the mechanism were optimized using a genetic algorithm, and the results are presented as a Pareto set. A 3D model of the gripper device has been built, and a prototype has been produced using 3D printing. Experimental laboratory and field tests of the gripping device were carried out.
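For orientation only, the drive torque of a generic ball-screw stage follows the textbook relation T = F p / (2 π η); the numbers below are assumed, and the paper's full model additionally accounts for finger position, the gear drive, and the ratchet.

    # Back-of-the-envelope ball-screw torque (generic formula, assumed numbers;
    # not the paper's gripper model).
    import math

    F = 30.0       # axial gripping/unscrewing force on the screw [N] (assumed)
    p = 0.004      # screw lead [m] (assumed)
    eta = 0.9      # ball-screw efficiency (assumed)

    T = F * p / (2 * math.pi * eta)
    print(f"required drive torque: {T * 1000:.2f} N*mm")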
Precipitation is one of the most relevant weather and climate processes. Its formation rate is sensitive to perturbations, such as those caused by interactions between aerosols, clouds, and precipitation. These interactions constitute one of the biggest uncertainties in determining the radiative forcing of climate change. High-resolution simulations such as the ICOsahedral Non-hydrostatic large-eddy model (ICON-LEM) offer valuable insights into these interactions. However, due to exceptionally high computation costs, it can only be employed for a limited period and area. We address this challenge by developing new models powered by emerging machine learning approaches capable of forecasting autoconversion rates (the rate at which small droplets collide and coalesce to become larger droplets) from satellite observations, providing long-term global spatial coverage for more than two decades. In particular, our approach involves two phases: (1) we develop machine learning models capable of predicting autoconversion rates by leveraging high-resolution climate model data, and (2) we repurpose our best machine learning model to predict autoconversion rates directly from satellite observations. We compare the performance of our machine learning models against simulation data under several different conditions, showing from both visual and statistical inspections that our approaches are able to identify key features of the reference simulation data to a high degree. Additionally, the autoconversion rates obtained from the simulation output and those predicted from satellite data demonstrate statistical concordance. By efficiently predicting autoconversion rates, we advance our comprehension of one of the key processes in precipitation formation, crucial for understanding cloud responses to anthropogenic aerosols and, ultimately, climate change.
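Phase (1) can be sketched as a straightforward regression problem; in the toy example below, a random forest is fit to a synthetic power-law "truth" relating liquid water content and droplet number to the autoconversion rate, as a stand-in for the ICON-LEM training data used in the paper.

    # Toy stand-in for phase (1): regress autoconversion rate on cloud-state
    # variables; the data here are synthetic, generated from a
    # Khairoutdinov-Kogan-like power law, not ICON-LEM output.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    n = 5000
    lwc = rng.uniform(1e-5, 1e-3, n)          # cloud liquid water content [kg/kg]
    nd = rng.uniform(1e7, 1e9, n)             # droplet number concentration [1/m^3]
    autoconv = 1350.0 * lwc ** 2.47 * (nd * 1e-6) ** -1.79   # synthetic "truth"

    X = np.c_[lwc, nd]
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, np.log(autoconv))
    print("R^2 on training data:", model.score(X, np.log(autoconv)))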
Graph-based semi-supervised learning methods combine the graph structure and labeled data to classify unlabeled data. In this work, we study the effect of a noisy oracle on classification. In particular, we derive the maximum a posteriori (MAP) estimator for clustering a degree corrected stochastic block model when a noisy oracle reveals a fraction of the labels. We then propose an algorithm derived from a continuous relaxation of the MAP, and we establish its consistency. Numerical experiments show that our approach achieves promising performance on synthetic and real data sets, even in the case of very noisy labeled data.
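A much-simplified quadratic relaxation (not the paper's MAP estimator for the degree-corrected stochastic block model) illustrates how the two ingredients combine: a graph-smoothness term built from the Laplacian and a fidelity term on the oracle-revealed labels whose weight reflects how much the oracle is trusted.

    # Simplified quadratic relaxation for graph SSL with a noisy oracle
    # (illustrative; not the paper's MAP objective).
    import numpy as np

    rng = np.random.default_rng(4)
    n = 8
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1); A = A + A.T                      # random undirected graph
    L = np.diag(A.sum(1)) - A                           # combinatorial Laplacian

    labels = {0: +1.0, 3: -1.0, 5: +1.0}                # noisy oracle reveals a few labels
    M = np.zeros((n, n)); y = np.zeros(n)
    for i, v in labels.items():
        M[i, i] = 1.0; y[i] = v

    mu = 2.0                                            # higher mu = more trust in the oracle
    # minimize x^T L x + mu * (x - y)^T M (x - y)  =>  (L + mu*M) x = mu*M y
    x = np.linalg.solve(L + mu * M + 1e-9 * np.eye(n), mu * M @ y)
    print(np.sign(x))                                   # relaxed cluster assignment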
Real-time strategy (RTS) games have provided a fertile ground for AI research, with notable recent successes based on deep reinforcement learning (RL). However, RL remains a data-hungry approach with high sample complexity. In this paper, we focus on a sample complexity reduction technique called reinforcement learning as a rehearsal (RLaR) and use the RTS game MicroRTS to formulate and evaluate it. RLaR has previously been formulated in the context of action-value-function-based RL. Here, we formulate it for a different RL framework, actor-critic RL. We show that, on the one hand, the actor-critic framework allows RLaR to be much simpler, but, on the other hand, it leaves room for a key component of RLaR: a prediction function that relates a learner’s observations with those of its opponent. This function, when leveraged for exploration, accelerates RL, as our experiments in MicroRTS show. Further experiments provide evidence that RLaR may reduce actor noise compared to a variant that does not utilize RLaR’s exploration. This study provides the first evaluation of RLaR’s efficacy in a domain with a large strategy space.
In single-zone multi-node systems (SZMNSs), temperature control relies on a single probe near the thermostat, resulting in temperature discrepancies that cause thermal discomfort and energy waste. Augmenting smart thermostats (STs) with per-room sensors has gained acceptance among major ST manufacturers. This paper leverages this additional sensory information to empirically characterize the services provided by buildings, including thermal comfort, energy efficiency, and demand response (DR). Utilizing room-level time-series data from 1000 houses, metadata from 110,000 houses across the United States, and data from two real-world testbeds, we examine the limitations of SZMNSs and explore the potential of remote sensors. We find that comfortable DR durations (CDRDs) for individual rooms are typically 70% longer or 40% shorter than for the room containing the thermostat. On average, the rooms at the extremes of the controlled temperature range deviate from the house average by roughly −3 °F to 2.5 °F. Moreover, in 95% of houses we identified rooms experiencing notably higher solar gains than the rest of the rooms, while 85% and 70% of houses demonstrated lower heat input and poor insulation, respectively. Lastly, cooling energy consumption increases with the number of sensors, whereas heating usage fluctuates between −19% and +25%. This study serves as a benchmark for assessing thermal comfort and DR services in the existing housing stock, while also highlighting the energy efficiency impacts of sensing technologies. Our approach sets the stage for more granular, precise control strategies for SZMNSs.
We develop realizability models of intensional type theory, based on groupoids, wherein realizers themselves carry non-trivial (non-discrete) homotopical structure. In the spirit of realizability, this is intended to formalize a homotopical BHK interpretation, whereby evidence for an identification is a path. Specifically, we study partitioned groupoidal assemblies. Categories of such are parameterized by “realizer categories” (instead of the usual partial combinatory algebras) that come equipped with an interval qua internal cogroupoid. The interval furnishes a notion of homotopy as well as a fundamental groupoid construction. Objects in a base groupoid are realized by points in the fundamental groupoid of some object from the realizer category; isomorphisms in the base groupoid are realized by paths in said fundamental groupoid. The main result is that, under mild conditions on the realizer category, the ensuing category of partitioned groupoidal assemblies models intensional (1-truncated) type theory without function extensionality. Moreover, when the underlying realizer category is “untyped,” there exists an impredicative universe of 1-types (the modest fibrations). This is a groupoidal analog of the traditional situation.
Eight major supply chains contribute more than 50% of global greenhouse gas (GHG) emissions. These supply chains range from raw materials to end-product manufacturing. Hence, it is critical to accurately estimate the carbon footprint of these supply chains, identify GHG hotspots, explain the factors that create the hotspots, and carry out what-if analysis to reduce the carbon footprint of supply chains. To this end, we propose an enterprise decarbonization accelerator framework with a modular structure that automates carbon footprint estimation, hotspot identification, explainability, and what-if analysis to recommend measures for reducing the carbon footprint of supply chains. To illustrate the working of the framework, we apply it to the cradle-to-gate extent of the palm oil supply chain of a leading palm oil producer. The framework identified the farming stage as the hotspot in the considered supply chain. At the next level of analysis, the framework identified the hotspots within the farming stage and provided explainability regarding the factors that created them. We discuss the what-if scenarios and the recommendations generated by the framework to reduce the carbon footprint of the hotspots, and the resulting impact on palm oil tree yield.
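At its simplest, the accounting step of such a framework reduces to activity data multiplied by emission factors, with stages ranked to surface hotspots; the stage names and factors below are illustrative, not the producer's inventory.

    # Toy cradle-to-gate footprint calculation (illustrative stages and factors):
    # emissions = activity * emission factor, then rank stages to surface hotspots.
    activity = {            # activity data per tonne of crude palm oil (illustrative)
        "farming":        {"fertilizer_kg": 120, "diesel_l": 40, "land_use_ha": 0.02},
        "milling":        {"electricity_kwh": 95, "diesel_l": 10},
        "transportation": {"diesel_l": 25},
    }
    factor = {              # kg CO2e per unit (illustrative)
        "fertilizer_kg": 5.6, "diesel_l": 2.7, "land_use_ha": 900, "electricity_kwh": 0.7,
    }

    footprint = {stage: sum(q * factor[k] for k, q in items.items())
                 for stage, items in activity.items()}
    for stage, co2e in sorted(footprint.items(), key=lambda kv: -kv[1]):
        print(f"{stage:15s} {co2e:8.1f} kg CO2e")      # the top entry is the hotspot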
In this paper, we study the approximate minimization problem of weighted finite automata (WFAs): to compute the best possible approximation of a WFA given a bound on the number of states. By reformulating the problem in terms of Hankel matrices, we leverage classical results on the approximation of Hankel operators, namely the celebrated Adamyan-Arov-Krein (AAK) theory. We solve the optimal spectral-norm approximate minimization problem for irredundant WFAs with real weights, defined over a one-letter alphabet. We present a theoretical analysis based on AAK theory and bounds on the quality of the approximation in the spectral norm and $\ell ^2$ norm. Moreover, we provide a closed-form solution, and an algorithm, to compute the optimal approximation of a given size in polynomial time.
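In the one-letter case the Hankel matrix is simply H[i, j] = f(a^{i+j}) with f(a^n) = α^T A^n β, and its singular values govern the approximation error; the toy WFA below is invented for illustration.

    # Minimal sketch (invented WFA): over a one-letter alphabet, the Hankel
    # matrix is H[i, j] = f(a^(i+j)). The (k+1)-th singular value of H
    # lower-bounds the spectral-norm error of any rank-k approximation; AAK
    # theory makes this bound tight for the full (infinite) Hankel operator.
    import numpy as np

    alpha = np.array([1.0, 0.0])
    A = np.array([[0.5, 0.2],
                  [0.0, 0.3]])
    beta = np.array([1.0, 1.0])

    def f(n):                       # weight assigned to the word a^n
        return alpha @ np.linalg.matrix_power(A, n) @ beta

    N = 12                          # finite truncation of the infinite Hankel matrix
    H = np.array([[f(i + j) for j in range(N)] for i in range(N)])
    sv = np.linalg.svd(H, compute_uv=False)
    k = 1
    print("lower bound on spectral-norm error of any rank-1 approximation:", sv[k])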
Classical approaches for flood prediction apply numerical methods to the solution of partial differential equations that capture the physics of inundation processes (e.g., the 2D Shallow Water equations). However, traditional inundation models still cannot satisfy the requirements of many relevant applications, including early-warning systems, high-resolution (or large spatial domain) simulations, and robust inference over distributions of inputs (e.g., rainfall events). Machine learning (ML) approaches are a promising alternative to physics-based models due to their ability to efficiently capture correlations between relevant inputs and outputs in a data-driven fashion. In particular, once trained, ML models can be tested and deployed much more efficiently than classical approaches. Yet, few ML-based solutions for spatio-temporal flood prediction have been developed, and their reliability and accuracy are poorly understood. In this paper, we propose FloodGNN-GRU, a spatio-temporal flood prediction model that combines a graph neural network (GNN) and a gated recurrent unit (GRU) architecture. Compared to existing approaches, FloodGNN-GRU (i) employs a graph-based model (GNN); (ii) operates on both spatial and temporal dimensions; and (iii) processes the water flow velocities as vector features instead of scalar features. We evaluate FloodGNN-GRU using a LISFLOOD-FP simulation of Hurricane Harvey (2017) in Houston, Texas. Our results, based on several metrics, show that FloodGNN-GRU outperforms several data-driven alternatives in terms of accuracy. Moreover, our model can be trained roughly 100x, and evaluated roughly 1000x, faster than the time required to run a comparable simulation. These findings illustrate the potential of ML-based methods to efficiently emulate physics-based inundation models, especially for short-term predictions.
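A minimal sketch of the architecture's two ingredients (untrained, with a random graph and random features, so not the FloodGNN-GRU model itself): a mean-aggregation message-passing step at each time step feeding a GRU cell, with the flow velocity kept as a 2-D vector among the node features.

    # Minimal GNN+GRU sketch (random graph and features; illustrative only).
    import torch
    import torch.nn as nn

    n_nodes, n_feats, hidden = 20, 4, 16              # features: depth, vx, vy, elevation
    adj = (torch.rand(n_nodes, n_nodes) < 0.2).float()
    adj = ((adj + adj.T) > 0).float()
    deg = adj.sum(1, keepdim=True).clamp(min=1)

    lin_self = nn.Linear(n_feats, hidden)
    lin_nbr = nn.Linear(n_feats, hidden)
    gru = nn.GRUCell(hidden, hidden)
    readout = nn.Linear(hidden, 1)                    # predict next water depth per cell

    h = torch.zeros(n_nodes, hidden)
    x_seq = torch.rand(8, n_nodes, n_feats)           # 8 time steps of node features
    for x_t in x_seq:
        msg = torch.relu(lin_self(x_t) + lin_nbr((adj @ x_t) / deg))   # spatial step
        h = gru(msg, h)                                                # temporal step
    depth_next = readout(h)
    print(depth_next.shape)                           # torch.Size([20, 1])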
Imitation from Observation (IfO) prompts a robot to imitate tasks from unlabeled videos via reinforcement learning (RL). The performance of an IfO algorithm depends on its ability to extract task-relevant representations, since images carry abundant information, not all of which is relevant to the task. Existing IfO algorithms extract image representations using either a simple encoding network or a pre-trained network. Due to the lack of action labels, it is challenging to design a supervised, task-relevant proxy task with which to train the simple encoding network, while representations extracted by a pre-trained network such as ResNet are often task-irrelevant. In this article, we propose a new approach for robot IfO via multimodal observations. Different modalities describe the same information from different perspectives, which can be used to design an unsupervised proxy task. Our approach contains two modules: an unsupervised cross-modal representation (UCMR) module and a self-behavioral cloning (self-BC)-based RL module. The UCMR module learns to extract task-relevant representations via a multimodal unsupervised proxy task. The self-BC module collects successful experiences during RL training for further offline policy optimization. We evaluate our approach on real-robot water-pouring, quantitative pouring, and sand-pouring tasks, on which the robot achieves state-of-the-art performance.
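As a hedged illustration of an unsupervised cross-modal proxy task (a generic InfoNCE alignment, which may differ from the paper's UCMR objective), paired image and audio embeddings from the same time step are pulled together and mismatched pairs pushed apart, requiring no action labels.

    # Generic cross-modal InfoNCE sketch with random embeddings (illustrative;
    # in practice the embeddings would come from image and audio encoders).
    import torch
    import torch.nn.functional as F

    batch = 32
    img_emb = F.normalize(torch.randn(batch, 128), dim=1)   # from an image encoder
    aud_emb = F.normalize(torch.randn(batch, 128), dim=1)   # from an audio encoder
    logits = img_emb @ aud_emb.T / 0.07                      # cosine similarity / temperature
    targets = torch.arange(batch)                            # i-th image pairs with i-th audio
    loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
    print(loss.item())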
The cable-driven parallel mechanism (CDPM) is an attractive industrial option for picking and placing objects owing to advantages such as its large workspace. Alongside these advantages, there are challenges in improving performance when constraints in different components are considered, such as the behavior of the cables, the shape and size of the end effector and base, and the models of the pulleys and actuators. Moreover, the impact of online geometry reconfiguration must be analyzed. This paper examines the impact of these constraints on the performance of reconfigurable CDPMs, discussing how physical constraints affect system performance. The methodology follows systematic review and meta-analysis guidelines for reporting the results. Papers were retrieved from the Scopus and Google Scholar databases using related keywords, yielding 90 and 37 articles, respectively; after removing duplicates and unrelated papers, 88 studies that met the inclusion criteria were selected for review. The review finds that, even when physical constraints are considered in modeling the mechanism, simplifications in the models of reconfigurable CDPMs introduce errors. There remains a gap in designing high-performance controllers that track desired trajectories during geometry reconfiguration while satisfying physical constraints. In conclusion, this review identifies several constraints to consider when designing controllers to track desired trajectories and improve performance in future work, and it presents an integrated controller architecture that incorporates physical constraints and predictive control.