In Chapter 9 we discussed point estimation for a parameter or a vector of parameters. In Chapters 10 and 11, on confidence intervals and hypothesis testing, we needed the idea of the standard error of an estimator.
A common problem in statistics is to compare groups. Does a new drug work better at reducing the time of hospitalization from COVID? Which pop-up ad generates a higher click-rate? Which type of metal – aluminum, brass, or stainless steel – will produce the most reliable product? Usually, the question involves either the mean response or the proportion of responses.
The problem of statistical inference can be described as follows. There is a population and we would like to know certain aspects of the units that make up the population. For example, we might want to know what proportion have a certain property, or what the mean value (of some measure) of all units in the population is. The population is too large to sample in its entirety, so we rely on information from a sample taken from the population.
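The inference setting described here — estimating, say, a population proportion from a sample — can be sketched in a few lines. The population size, true proportion, and sample size below are illustrative, not from the text; the standard-error formula is the usual one for a sample proportion:

```python
import math
import random

random.seed(42)

# Hypothetical population: 100,000 units, 30% of which have the property.
population = [1] * 30000 + [0] * 70000

# The population is too large to inspect entirely, so draw a random sample.
sample = random.sample(population, 500)

# Point estimate of the proportion and its standard error.
p_hat = sum(sample) / len(sample)
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))

# An approximate 95% confidence interval for the true proportion.
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(p_hat, se, ci)
```

With a sample of 500, the estimate typically lands within a few percentage points of the true proportion of 0.3.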
In Chapter 14 we studied multiple regression and polynomial regression and how these techniques can be used to determine the relationship between an outcome and several predictor variables.
In Chapter 3 we learned about the fundamental ideas of probability, and in Chapter 4 we generalized the notion of probability from working with sets to working with random variables and distributions. In many ways, random variables and their associated distributions can simplify probability calculations and, appropriately applied, are useful models for real-world phenomena.
This paper presents the novel concept of a singularity-free tube (SFT) in the constant orientation workspace of a spatial parallel manipulator. The concept is developed and demonstrated in the context of a $6$-$6$ spatial parallel manipulator, namely, the semi-regular Stewart platform manipulator. Given two points in the said workspace, the SFT is a tubular volume which contains these points and is free of gain-type or forward-kinematic singularities. The purpose of identifying such regions in space is to allow abundant freedom to the path-planner to connect the said points by a path, which can be free of gain-type singularities simply by remaining inside the SFT at all times. To demonstrate the concept, two smooth paths obtained by formulating two different optimisation problems have been presented as examples. The SFT can be of great help in singularity-free path-planning in many similar manipulators.
In this work, we propose a novel approach to tomato pollination that utilizes visual servo control. The objective is to meet the growing demand for automated robotic pollinators arising from the decline in bee populations. The proposed method leverages deep learning to estimate the orientation and depth of detected flowers, incorporating CAD-based synthetic images to ensure dataset diversity. Using a 3D camera, the system accurately estimates flower depth for visual servoing. The robustness of the approach is validated through experiments conducted in a laboratory environment with a 3D-printed tomato flower plant. The results demonstrate a high detection rate, with a mean average precision of 91.2%, and an average depth error of only 1.1 cm when localizing the pollination target. This research presents a promising solution for tomato pollination, showcasing the effectiveness of visually guided servo control and its potential to address the challenges posed by diminishing bee populations in greenhouses.
We propose a method for generating rule sets as global and local explanations for tree-ensemble learning methods using answer set programming (ASP). To this end, we adopt a decompositional approach where the split structures of the base decision trees are exploited in the construction of rules, which in turn are assessed using pattern mining methods encoded in ASP to extract explanatory rules. For global explanations, candidate rules are chosen from the entire trained tree-ensemble models, whereas for local explanations, candidate rules are selected by only considering rules that are relevant to the particular predicted instance. We show how user-defined constraints and preferences can be represented declaratively in ASP to allow for transparent and flexible rule set generation, and how rules can be used as explanations to help the user better understand the models. Experimental evaluation with real-world datasets and popular tree-ensemble algorithms demonstrates that our approach is applicable to a wide range of classification tasks.
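As a rough illustration of the decompositional step — exploiting the split structure of a base tree to construct candidate rules — each root-to-leaf path can be read off as one rule. The tree, feature names, and thresholds below are invented, and the ASP-based pattern mining that assesses the rules is not reproduced here:

```python
# Tiny decision tree as nested dicts; a tree ensemble would contribute many
# such trees. Feature names and thresholds are illustrative only.
tree = {
    "split": ("petal_len", 2.5),
    "left": {"leaf": "setosa"},
    "right": {
        "split": ("petal_wid", 1.7),
        "left": {"leaf": "versicolor"},
        "right": {"leaf": "virginica"},
    },
}

def extract_rules(node, conds=()):
    # Decompositional step: each root-to-leaf path becomes one candidate
    # rule (a conjunction of split conditions plus a predicted label).
    if "leaf" in node:
        return [(conds, node["leaf"])]
    feat, thr = node["split"]
    return (extract_rules(node["left"], conds + ((feat, "<=", thr),))
            + extract_rules(node["right"], conds + ((feat, ">", thr),)))

for conds, label in extract_rules(tree):
    print(" AND ".join(f"{f} {op} {t}" for f, op, t in conds), "=>", label)
```

In the method described above, rules like these would then be filtered and ranked declaratively in ASP, globally over the whole ensemble or locally for one predicted instance.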
Predictive maintenance attempts to prevent unscheduled downtime by scheduling maintenance before expected failures and/or breakdowns while maximizing uptime. However, this is a non-trivial problem, which requires sufficient data-analytics knowledge and labeled data, either to design supervised fault-detection models or to evaluate the performance of unsupervised models. While most companies today collect data by adding sensors to their machinery, the majority of this data is unfortunately not labeled. Moreover, labeling requires expert knowledge and is very cumbersome. To solve this mismatch, we present an architecture that guides experts, requiring them to label only a very small subset of the data compared to today’s standard labeling campaigns used when designing predictive maintenance solutions. We use auto-encoders to highlight potential anomalies and clustering approaches to group these anomalies into (potential) failure types. An accompanying dashboard then presents the anomalies to domain experts for labeling. In this way, we enable domain experts to enrich routinely collected machine data with business intelligence via a user-friendly hybrid model that combines auto-encoder models with labeling steps and supervised models. Ultimately, the labeled failure data allows for better failure-prediction models, which in turn enable more effective predictive maintenance. More specifically, our architecture drastically reduces cumbersome labeling tasks, allowing companies to make maximum use of their data and expert knowledge and ultimately increase their profit. Using our methodology, we achieve a labeling gain of up to 90% compared to standard labeling tasks.
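The anomaly-highlighting step can be sketched as follows. As an assumption, a linear (PCA-style) autoencoder stands in for the deep auto-encoders of the architecture, and the sensor data are synthetic; the principle — flag rows with high reconstruction error for expert labeling — is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sensor" data: mostly normal operation plus a few anomalous rows.
normal = rng.normal(0.0, 1.0, size=(500, 8))
anomalies = rng.normal(6.0, 1.0, size=(10, 8))
X = np.vstack([normal, anomalies])          # rows 500-509 are the anomalies

# Linear "autoencoder": encode with the top-k principal directions of the
# (assumed mostly normal) training data, then decode. This is a stand-in
# for the deep auto-encoders used in the described architecture.
k = 3
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
recon = (X - mu) @ Vt[:k].T @ Vt[:k] + mu

# High reconstruction error flags candidate anomalies for expert labeling.
err = np.linalg.norm(X - recon, axis=1)
threshold = np.percentile(err, 97)
candidates = np.where(err > threshold)[0]
print(candidates)
```

A dashboard would then present these candidate rows (optionally clustered into potential failure types) to the domain expert, who labels only this small subset instead of the full dataset.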
The disassembly of end-of-life lithium-ion batteries (EOL-LIBs) is inherently complex, owing to their multi-state and multi-type characteristics. To mitigate these challenges, a human–robot collaboration disassembly (HRCD) model is developed. This model capitalizes on the cognitive abilities of humans combined with the advanced automation capabilities of robots, thereby substantially improving the flexibility and efficiency of the disassembly process. Consequently, this method has become the benchmark for disassembling EOL-LIBs, given its enhanced ability to manage intricate and adaptable disassembly tasks. Furthermore, effective disassembly sequence planning (DSP) for components is crucial for guiding the entire disassembly process. Therefore, this research proposes a knowledge-graph-based approach for generating HRCD sequences for EOL-LIBs, assisting individuals who lack relevant knowledge in completing disassembly tasks. First, a well-defined disassembly process knowledge graph integrates structural information from CAD models and disassembly operating procedures. Based on the acquired information, DSP is conducted to generate a disassembly sequence knowledge graph (DSKG), which serves as a repository in graphical form. Subsequently, knowledge graph matching is employed to align nodes in the existing DSKG, thereby reusing node sequence knowledge and completing the sequence information for the target disassembly task. Finally, the proposed method is validated using retired power LIBs as a case-study product.
Face milling is performed on aluminum alloy A96061-T6 at diverse cutting parameters proposed by a design of experiments. Surface roughness is predicted by examining the effects of cutting parameters (CP), vibrations (Vib), and sound characteristics (SC); estimating surface roughness from sound characteristics constitutes the novelty of this work. In this study, a unique ANN-TLBO hybrid model (an Artificial Neural Network tuned by Teaching-Learning-Based Optimization) is created to predict surface roughness from CP, Vib, and SC. The performance of these models is evaluated to ascertain their correctness and efficacy in assessing surface roughness. The CP-based hybrid model demonstrated an accuracy of 95.1%, showing its capacity to offer trustworthy forecasts of surface roughness values. The Vib-based hybrid model demonstrated a respectable accuracy of 85.4%; although not as accurate as the CP model, it nevertheless showed promise in forecasting surface roughness. The SC-based hybrid model outperformed the other two, with a remarkable accuracy of 96.2%, making it the most trustworthy and efficient technique for assessing surface roughness in this investigation. An analysis of error percentages confirmed the performance of SC-based Model-3, which exhibited an average error of 3.77%, outperforming Vib-based Model-2 (14.52%) and CP-based Model-1 (4.75%). Given its outstanding accuracy, the SC model is the best option and may become the go-to technique for industrial applications requiring accurate surface roughness measurement. Its performance highlights the importance of optimization strategies in improving the predictive capacity of ANN-based models, leading to significant advancements in surface roughness assessment and related fields. An IoT platform is developed to link the model’s output with other systems. The resulting system eliminates the need for manual, physical surface roughness measurement and allows surface roughness data to be displayed on the cloud and other platforms.
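TLBO itself is a simple population-based optimizer built around a "teacher" phase and a "learner" phase. A minimal sketch on a toy objective (a stand-in for the ANN training error actually optimized in an ANN-TLBO hybrid; all parameter values are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective; an ANN-TLBO hybrid would minimize network error instead.
    return float(np.sum(x**2))

def tlbo(obj, dim=4, pop=20, iters=100, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, size=(pop, dim))
    f = np.array([obj(x) for x in X])
    for _ in range(iters):
        # Teacher phase: pull the class toward the best learner.
        teacher = X[np.argmin(f)]
        TF = rng.integers(1, 3)              # teaching factor, 1 or 2
        mean = X.mean(axis=0)
        Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * mean), lo, hi)
        fn = np.array([obj(x) for x in Xn])
        better = fn < f
        X[better], f[better] = Xn[better], fn[better]
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop):
            j = int(rng.integers(pop))
            if j == i:
                continue
            step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
            xn = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            fi = obj(xn)
            if fi < f[i]:
                X[i], f[i] = xn, fi
    return X[np.argmin(f)], float(f.min())

best_x, best_f = tlbo(sphere)
print(best_f)
```

TLBO is attractive for hybrid models like the one above because it has no algorithm-specific tuning parameters beyond population size and iteration count.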
In an era where artificial intelligence (AI) permeates every facet of our lives, the imperative to steer AI development toward enhancing human wellbeing has never been more critical. However, developing such positive AI poses substantial challenges, owing to the current lack of mature methods for addressing the complexities of designing AI for wellbeing. This article presents and evaluates the positive AI design method aimed at addressing this gap. The method provides a human-centered process for translating wellbeing aspirations into concrete interventions. First, we explain the method’s key steps: (1) contextualizing, (2) operationalizing, (3) designing, and (4) implementing, supported by (5) continuous measurement for iterative feedback cycles. We then present a multi-case study in which novice designers applied the method, revealing strengths and weaknesses related to efficacy and usability. Next, an expert evaluation study assessed the quality of the case studies’ outcomes, rating them moderately high for feasibility, desirability, and plausibility of achieving the intended wellbeing benefits. Together, these studies provide preliminary validation of the method’s ability to improve AI design, while identifying opportunities for enhancement. Building on these insights, we propose adaptations for future iterations of the method, such as the inclusion of wellbeing-related heuristics, suggesting promising avenues for future work. This human-centered approach shows promise for realizing a vision of “AI for wellbeing” that does not just avoid harm, but actively promotes human flourishing.
With global wind energy capacity ramping up, accurately predicting damage equivalent loads (DELs) and fatigue across wind turbine populations is critical, not only for ensuring the longevity of existing wind farms but also for the design of new ones. However, the estimation of such quantities of interest is hampered by the inherent complexity of modeling critical underlying processes, such as the aerodynamic wake interactions between turbines that increase mechanical stress and reduce useful lifetime. While high-fidelity computational fluid dynamics and aeroelastic models can capture these effects, their computational requirements limit real-world usage. Recently, fast machine-learning-based surrogates that emulate more complex simulations have emerged as a promising solution. Yet most surrogates are task-specific and lack flexibility for varying turbine layouts and types. This study explores the use of graph neural networks (GNNs) to create a robust, generalizable flow and DEL prediction platform. By conceptualizing wind turbine populations as graphs, GNNs effectively capture farm-layout-dependent relational data, allowing extrapolation to novel configurations. We train a GNN surrogate on a large database of PyWake simulations of random wind farm layouts to learn basic wake physics, then fine-tune the model on limited data for a specific unseen layout simulated in HAWC2Farm to obtain accurate adapted predictions. This transfer-learning approach circumvents data-scarcity limitations and leverages fundamental physics knowledge from the low-resolution source data. The proposed platform aims to match simulator accuracy while enabling efficient adaptation to new higher-fidelity domains, providing a flexible blueprint for wake load forecasting across varying farm configurations.
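The idea of treating a farm layout as a graph can be illustrated with a single round of mean-aggregation message passing. Everything below — node features, edges, and the weight matrix — is an illustrative placeholder, not the trained surrogate; a real GNN would learn the weights from simulation data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy wind farm: 5 turbines; node features could encode position, hub
# height, rotor diameter, etc. (random placeholders here).
feats = rng.random((5, 3))

# Edges connect turbines close enough to interact through wakes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]
A = np.zeros((5, 5))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(5)                        # self-loops keep each node's own state
deg = A.sum(axis=1, keepdims=True)

# One message-passing layer: average neighbor features, apply a (here
# random, normally learned) linear map and a nonlinearity.
W = rng.normal(size=(3, 3))
h = np.tanh((A / deg) @ feats @ W)
print(h.shape)
```

Because the same weights are applied at every node, the layer works unchanged on a farm with a different number of turbines or a different layout, which is exactly the property that enables extrapolation to novel configurations.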
Early phases of the design process require designers to bring into view the elements of the problem that they deem important. This exploration process is commonly referred to as problem framing and is essential to solution generation. There have recently been calls in the literature for more precise representations of framing activity and of how individual designers come to negotiate shared frames in team settings. This paper presents a novel research approach to understanding design framing activity through a systems thinking lens. Systems thinking is the practice of understanding a system’s components and their interrelations in order to create interventions that move system outcomes in a more favorable direction. The proposed approach is based on the observation that systems, as mental representations of the problem, bear some similarity to frames, as collections of concepts implicit in the designer’s cognition. Systems mapping, a common visualization tool used to facilitate systems thinking, can then be used to model external representations of framing made explicit through speech and sketches. We thus adapt systems mapping to develop a coding scheme for analyzing verbal protocols of design activity and retrospectively representing framing activity. The coding scheme is applied to two distinct datasets. The resulting system maps are analyzed to highlight team problem frames, individual contributions, and how the framing activity evolves over time. This approach is well suited to visualizing the framing activity that occurs in open-ended problem contexts, where designers are focused more on problem finding and analysis than on concept generation and detailed design. Several future research avenues for which this approach could be used or extended, including new computational methods, are presented.
Human creativity originates from brain cortical networks that are specialized in idea generation, processing, and evaluation. The concurrent verbalization of our inner thoughts during the execution of a design task enables the use of dynamic semantic networks as a tool for investigating, evaluating, and monitoring creative thought. The primary advantage of using lexical databases such as WordNet for reproducible, information-theoretic quantification of the convergence or divergence of design ideas in creative problem solving is the simultaneous handling of both words and meanings, which enables interpretation of the constructed dynamic semantic networks in terms of the underlying functionally active brain cortical regions involved in concept comprehension and production. In this study, the quantitative dynamics of semantic measures computed with a moving time window are investigated empirically in the DTRS10 dataset of design review conversations, and detected divergent thinking is shown to predict the success of design ideas. Thus, dynamic semantic networks present an opportunity for real-time, computer-assisted detection of critical events during creative problem solving, with the goal of employing this knowledge to artificially augment human creativity.
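A hedged sketch of the underlying idea: WordNet-style path similarity over an IS-A (hypernym) hierarchy quantifies how semantically close two concepts are, which is the kind of measure a moving window can track to detect convergence or divergence. The toy taxonomy below is invented; the study uses the full WordNet database:

```python
# Toy IS-A taxonomy standing in for WordNet: each concept maps to its
# hypernym (more general concept). All entries are illustrative.
hypernym = {
    "sketch": "drawing", "drawing": "artifact", "prototype": "artifact",
    "artifact": "entity", "idea": "abstraction", "abstraction": "entity",
}

def path_to_root(concept):
    path = [concept]
    while path[-1] in hypernym:
        path.append(hypernym[path[-1]])
    return path

def path_similarity(a, b):
    # WordNet-style path similarity: 1 / (shortest IS-A path length + 1),
    # measured through the lowest common hypernym.
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    dist = min(pa.index(c) + pb.index(c) for c in common)
    return 1.0 / (dist + 1)

# Convergent ideas stay semantically close; divergent ones drift apart.
print(path_similarity("sketch", "prototype"))
print(path_similarity("sketch", "idea"))
```

Averaging such pairwise similarities over the words uttered within a time window yields a time series whose drops signal divergent episodes in the conversation.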
The intersection of physics and machine learning has given rise to the physics-enhanced machine learning (PEML) paradigm, which aims to improve the capabilities and reduce the individual shortcomings of data-only or physics-only methods. In this paper, the spectrum of PEML methods, expressed across the defining axes of physics and data, is discussed through a comprehensive exploration of its characteristics, usage, and motivations. In doing so, we present a survey of recent applications and developments of PEML techniques, revealing the potency of PEML in addressing complex challenges. We further demonstrate the application of selected schemes on the simple working example of a single-degree-of-freedom Duffing oscillator, which allows us to highlight the individual characteristics and motivations of different “genres” of PEML approaches. To promote collaboration and transparency, and to provide practical examples for the reader, the code generating these working examples is provided alongside this paper. As a foundational contribution, this paper underscores the significance of PEML in pushing the boundaries of scientific and engineering research, underpinned by the synergy of physical insights and machine learning capabilities.
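For reference, the single-degree-of-freedom Duffing oscillator used as the working example can be simulated with a standard fourth-order Runge-Kutta integrator. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Duffing oscillator:  x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)
# Illustrative parameters (a forced double-well configuration).
delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.37, 1.2

def duffing(t, s):
    x, v = s
    return np.array([v, -delta * v - alpha * x - beta * x**3
                     + gamma * np.cos(omega * t)])

def rk4(f, s0, t0, t1, n):
    # Classical 4th-order Runge-Kutta integration of s' = f(t, s).
    h = (t1 - t0) / n
    t, s = t0, np.array(s0, dtype=float)
    traj = [s.copy()]
    for _ in range(n):
        k1 = f(t, s)
        k2 = f(t + h / 2, s + h / 2 * k1)
        k3 = f(t + h / 2, s + h / 2 * k2)
        k4 = f(t + h, s + h * k3)
        s = s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        traj.append(s.copy())
    return np.array(traj)

traj = rk4(duffing, [1.0, 0.0], 0.0, 50.0, 5000)
print(traj.shape)
```

Trajectories like this one provide the synthetic "data" against which data-driven, physics-informed, and hybrid PEML variants can be compared on equal footing.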
Gripping devices for harvesting fruit operate by cutting, tearing, or unscrewing. For apples, slicing or unscrewing is preferable, and the stalk should not remain on the fruit, as it damages apples during storage. In this article, we develop a gripper for harvesting apples. The gripper both holds the fruit and jams it for subsequent unscrewing. One advantage of the proposed method is that no time is wasted moving the manipulator from the tree to the basket: the gripper only grasps the fruit and tears it off. The fruit enters the gripper device and then passes through a rigid or flexible pipe into the collection container. The gripper is built around a ball-screw transmission supplemented by a gear drive along the helical surface, which allows both rotation and rectilinear movement of the held fruit. A ratchet mechanism fixes the fruit in place. A mathematical model of the gripper has been developed that determines the required motor torque as a function of the finger positions. The parameters of the mechanism were optimized using a genetic algorithm, and the results are presented as a Pareto set. A 3D model of the gripper has been built, and a prototype has been produced using 3D printing. Experimental laboratory and field tests of the gripping device were carried out.
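The Pareto-set idea can be illustrated with a small sketch: given candidate designs scored on two objectives to be minimized, the Pareto set is the subset not dominated by any other candidate. The objective pairs below are invented placeholders, not results from the article:

```python
def pareto_front(points):
    # p dominates q if p is no worse in every objective and strictly
    # better in at least one (minimization assumed for all objectives).
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (motor torque, gripper mass) pairs for candidate designs;
# a genetic algorithm would generate and evolve such candidates.
candidates = [(3.0, 1.2), (2.5, 1.5), (4.0, 1.0), (3.5, 1.4), (2.5, 1.1)]
print(pareto_front(candidates))
```

Presenting the result as a Pareto set, rather than a single optimum, lets the designer pick the preferred trade-off between the competing objectives.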
Precipitation is one of the most relevant weather and climate processes. Its formation rate is sensitive to perturbations such as the interactions between aerosols, clouds, and precipitation. These interactions constitute one of the biggest uncertainties in determining the radiative forcing of climate change. High-resolution simulations such as the ICOsahedral non-hydrostatic large-eddy model (ICON-LEM) offer valuable insights into these interactions; however, due to exceptionally high computational costs, they can only be employed for a limited period and area. We address this challenge by developing new models, powered by emerging machine learning approaches, capable of forecasting autoconversion rates—the rate at which small droplets collide and coalesce into larger droplets—from satellite observations, providing long-term global spatial coverage for more than two decades. Our approach involves two phases: (1) we develop machine learning models capable of predicting autoconversion rates by leveraging high-resolution climate model data, and (2) we repurpose our best machine learning model to predict autoconversion rates directly from satellite observations. We compare the performance of our machine learning models against simulation data under several different conditions, showing through both visual and statistical inspection that our approaches identify key features of the reference simulation data to a high degree. Additionally, the autoconversion rates predicted from satellite data demonstrate statistical concordance with those obtained from the simulation output. By efficiently predicting autoconversion rates, we advance our comprehension of one of the key processes in precipitation formation, which is crucial for understanding cloud responses to anthropogenic aerosols and, ultimately, climate change.