The inverse dynamics model of an industrial robot can predict and control the robot’s motion and torque output, improving its motion accuracy, efficiency, and adaptability. However, existing rigid-body inverse dynamics models still contain unmodelled residuals, so their predictions differ significantly from the behaviour of actual industrial robots. To meet these needs, the bootstrap aggregating (bagging) algorithm is combined with a long short-term memory (LSTM) network, a linear layer is introduced as the network optimization layer, and a compensation method for a hybrid inverse dynamics model of robots based on the resulting BLL residual prediction algorithm is proposed. The BLL residual prediction algorithm framework is presented. Based on the rigid-body inverse dynamics of the Newton–Euler method, the BLL residual prediction network is used to compensate the errors of the inverse dynamics model of the Franka robot. The experimental results show that the hybrid inverse dynamics model based on the BLL residual prediction algorithm reduces the average residual of the robot joint torque from 0.5651 N·m to 0.1096 N·m, improving the accuracy of the inverse dynamics model compared with the rigid-body inverse dynamics model alone. This study lays the foundation for performing more accurate operation tasks with industrial robots.
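A minimal sketch of the general idea, bagging an ensemble of LSTM-plus-linear-layer residual predictors on top of a rigid-body torque estimate. All sizes, the ensemble count, and the input layout (joint positions, velocities, accelerations for the 7-joint Franka) are illustrative assumptions, not the paper's actual configuration:

```python
# Sketch of a bagging ensemble of LSTM + linear-layer residual predictors,
# in the spirit of the BLL algorithm described above. Architecture details
# are assumptions for illustration only.
import torch
import torch.nn as nn

class LSTMResidualNet(nn.Module):
    """One base learner: LSTM followed by a linear optimization layer."""
    def __init__(self, n_inputs=21, hidden=64, n_joints=7):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.linear = nn.Linear(hidden, n_joints)  # linear "optimization" layer

    def forward(self, x):                  # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.linear(out[:, -1, :])  # residual torque at last time step

def bagging_predict(models, x):
    """Average the residual predictions of the bootstrap-trained ensemble."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)

# Hybrid model: rigid-body torque (Newton–Euler) plus predicted residual.
# tau_rigid would come from a rigid-body dynamics library; a zero stub here.
models = [LSTMResidualNet() for _ in range(5)]  # 5 bootstrap replicas (assumed)
x = torch.randn(1, 50, 21)                      # 50-step window of joint states
tau_hybrid = torch.zeros(1, 7) + bagging_predict(models, x)
```

In this reading, each replica is trained on a bootstrap resample of the measured torque residuals, and averaging the replicas is what reduces prediction variance.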
The application of data analytics to product usage data has the potential to enhance engineering and decision-making in product planning. To achieve this effectively for cyber-physical systems (CPS), specialized expertise in technical products, innovation processes, and data analytics is required. An understanding of the whole process, from domain knowledge to data analysis, is critical for the successful completion of projects, even for those without expertise in these areas. In this paper, we lay the foundation for a data analytics toolbox that enables the creation of domain-specific pipelines for product planning. The toolbox includes a morphological box that covers the necessary pipeline components, based on a thorough analysis of the literature and practitioner surveys. To our knowledge, no comparably comprehensive overview exists. The toolbox built on it promises to support and enable domain experts and citizen data scientists, enhancing efficiency in product design, speeding up time to market, and shortening innovation cycles.
Evaluating the algorithmic behavior of interactive systems is complex and time-consuming. Developers increasingly recognize the importance of accountability for their algorithmic creations’ unanticipated behavior and its implications. To mitigate such behavior, developers need to attend not only to the observable inaccuracies that can be measured quantitatively but also to the more subjective outcomes that can perpetuate social bias, which are challenging to identify. We require a new approach that involves humans in scrutinizing algorithmic behavior, leveraging a combination of quantitative and qualitative methods to support ethical, value-aligned design throughout a system’s lifecycle, informed by users’ perceptions and values. To date, the literature lacks an agreed-upon framework for such an approach. Consequently, we propose an oversight framework, the Modular Oversight Methodology (MOM), which aids developers in assessing the behavior of their systems by involving a carefully crowdsourced society-in-the-loop. The framework facilitates the development and execution of an oversight process and can be adapted to the domain and application of use. Through such an oversight process, developers can assess the human perception of the algorithmic behavior under inspection and extract valuable insights for assessing its implications. We present the MOM framework as a first step toward tailoring more robust, domain-specific solutions for exercising human oversight over algorithms, as a means for software developers to keep the output of their solutions fair and trustworthy.
The design of gas turbine combustors for optimal operation at different power ratings is a multifaceted engineering task, as it requires the consideration of several objectives that must be evaluated under different test conditions. We address this challenge by presenting a data-driven approach that uses multiple probabilistic surrogate models derived from Gaussian process regression to automatically select optimal combustor designs from a large parameter space, requiring only a few experimental data points. We present two strategies for surrogate model training that differ in terms of required experimental and computational efforts. Depending on the measurement time and cost for a target, one of the strategies may be preferred. We apply the methodology to train three surrogate models under operating conditions where the corresponding design objectives are critical: reduction of NOx emissions, prevention of lean flame extinction, and mitigation of thermoacoustic oscillations. Once trained, the models can be flexibly used for different forms of a posteriori design optimization, as we demonstrate in this study.
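A minimal sketch of one such probabilistic surrogate, assuming Gaussian process regression as the abstract states. The design-space dimensions, the data, and the risk-averse selection rule at the end are placeholders, not the combustor study's actual setup:

```python
# One surrogate model for one design objective (e.g., NOx emissions),
# trained on a few measured design points and then queried over a large
# candidate design space. Data and dimensions are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(12, 3))   # a few measured design points
y_train = rng.normal(size=12)               # measured objective (placeholder)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean and uncertainty over many candidate designs; a candidate
# with a low upper confidence bound is one possible risk-averse choice.
X_cand = rng.uniform(0, 1, size=(10_000, 3))
mean, std = gp.predict(X_cand, return_std=True)
best = X_cand[np.argmin(mean + 2.0 * std)]
```

The predictive standard deviation is what makes the surrogate probabilistic: it flags regions of the design space where more experimental data would be worthwhile.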
We consider the problem of parameter estimation for the superposition of square-root diffusions. We first derive explicit formulas for the moments and auto-covariances, on which we base our moment estimators. We then establish a central limit theorem for the estimators, with explicit formulas for the asymptotic covariance matrix. Finally, we conduct numerical experiments to validate our method.
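For orientation, the standard stationary moments of a single square-root (CIR) diffusion, which a moment-matching scheme of this kind would equate to their empirical counterparts (these are textbook facts, not the paper's derivation):

```latex
% For dX_t = \kappa(\theta - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t in stationarity:
\mathbb{E}[X_t] = \theta, \qquad
\operatorname{Var}(X_t) = \frac{\sigma^2 \theta}{2\kappa}, \qquad
\operatorname{Cov}(X_t, X_{t+h}) = \frac{\sigma^2 \theta}{2\kappa}\, e^{-\kappa h}.
% For a superposition Y_t = \sum_i X_t^{(i)} of independent components,
% these quantities add across components.
```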
Motivated by the impact of emerging technologies on (toll) parks, this paper studies customers’ strategic behavior, social optimization, and revenue maximization in infinite-server queues. More specifically, we assume that a customer’s utility consists of a positive reward for receiving service minus a cost caused by the other customers in the system. In the observable setting, we establish the existence and uniqueness of, and derive expressions for, the individual equilibrium threshold, the socially optimal threshold, and the revenue-optimal threshold. We then prove that the revenue-optimal threshold is smaller than the socially optimal threshold, which in turn is smaller than the individual one. Furthermore, we extend the cost functions to arbitrary finite polynomials with nonnegative coefficients. In the unobservable setting, we derive the joining probabilities under individual equilibrium and revenue optimization. Finally, using numerical experiments, we complement our results and compare the social welfare and the revenue under these two information levels.
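As a purely illustrative special case of the observable setting (a linear cost, the simplest instance of the polynomial costs above; the paper's actual cost structure may differ):

```latex
% With reward R and cost c per other customer present, a customer who
% observes n others in the infinite-server system joins iff
R - c\,n \ge 0 \quad\Longleftrightarrow\quad n \le n_e := \lfloor R/c \rfloor,
% so the individual equilibrium strategy is a threshold policy at n_e.
```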
In this paper, we define weighted failure rates and their means from the standpoint of an application. We begin by emphasizing that a series system of n independent components having weighted failure rates, with the weight functions summing to unity, is equivalent to a mixture of n distributions. We derive some parametric and non-parametric characterization results. We discuss the form-invariance property of the baseline failure rate for a specific choice of weight function. Some bounds on the means of aging functions are obtained. We establish that, unlike the IFRA class, the weighted increasing failure rate average (IFRA) class is not closed under the formation of coherent systems. An interesting application of the present work is that the quantile version of the means of the failure rate is obtained as a special case of the weighted means of the failure rate.
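The series-system/mixture connection rests on a standard identity, stated here as a reference point (standard reliability theory, not this paper's specific construction): the failure rate of a mixture is a weighted average of the component failure rates with time-varying weights,

```latex
% For the mixture survival function \bar F(t) = \sum_{i=1}^n \pi_i \bar F_i(t):
r(t) \;=\; \sum_{i=1}^{n} w_i(t)\, r_i(t),
\qquad
w_i(t) \;=\; \frac{\pi_i \bar F_i(t)}{\sum_{j=1}^{n} \pi_j \bar F_j(t)},
\qquad
\sum_{i=1}^{n} w_i(t) = 1.
```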
This paper investigates the precise large deviations of the net loss process in a two-dimensional risk model with consistently varying tails and dependence structures, and gives asymptotic formulas that hold uniformly for all x varying in t-intervals. This study is among the first efforts to analyze potential risk via large deviation results for the net loss process of a two-dimensional risk model, and it provides novel insight for assessing operational risk in the long run by fully accounting for the premium income of the insurance company.
In this paper, a new multivariate counting process model (called Multivariate Poisson Generalized Gamma Process) is developed and its main properties are studied. Some basic stochastic properties of the number of events in the new multivariate counting process are initially derived. It is shown that this new multivariate counting process model includes the multivariate generalized Pólya process as a special case. The dependence structure of the multivariate counting process model is discussed. Some results on multivariate stochastic comparisons are also obtained.
The reorientation to remote teaching due to COVID-19 restrictions proved both challenging and compromising, particularly in the context of delivering practice-based design education. Central to the challenges faced by many design tutors was the loss of the design studio as a focal point for engagement and learning. However, delivering teaching remotely through a period of enforced separation also showed that adversity brings new insights, with the accelerated use of emergent technologies to support distributed working revealing new behaviours and opportunities for learning. In response to COVID-19 restrictions, Miro, the digital whiteboard platform, was widely adopted within the UK creative industries and universities alike to facilitate remote engagement. Following a return to campus-based delivery through the Autumn/Fall of 2021, it became evident that some of the pragmatic approaches adopted through necessity had the potential to hold lasting value beyond crisis modes of teaching. This position paper presents a series of reflective studies gathered over three academic years with the aims of (1) understanding the impacts of remote learning as experienced by design students, (2) establishing clear benefits of applying online platforms within a blended campus-based delivery, and (3) identifying emergent characteristics in students’ navigation of the post-COVID design studio.
The material removal rate (MRR) serves as a crucial indicator in the chemical mechanical polishing (CMP) process of semiconductor wafers. Currently, the mainstream method of ascertaining the MRR through offline measurement is time-inefficient and struggles to represent process variability accurately. This study proposes an efficient MRR prediction model based on stacking ensemble learning that integrates models with disparate architectures. First, the processing signals collected during wafer polishing, as available in the PHM2016 dataset, were analyzed and preprocessed to extract statistical and neighbor-domain features. Subsequently, Pearson correlation coefficient analysis (PCCA) and principal component analysis (PCA) were employed to fuse the extracted features. Finally, a random forest (RF), a light gradient boosting machine (LightGBM), and a backpropagation neural network (BPNN), with hyperparameters optimized by a Bayesian optimization algorithm, were integrated to establish the stacking-ensemble MRR prediction model. The developed model was verified on the PHM2016 benchmark test set, achieving a mean square error (MSE) of 7.72 and a coefficient of determination (R2) of 95.82%. This indicates that a stacking-ensemble model integrating base models of disparate architectures offers considerable potential for real-time MRR prediction in the CMP process of semiconductor wafers.
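A minimal sketch of the stacking arrangement described above. The feature extraction from the PHM2016 signals is omitted (X and y are placeholders), the hyperparameters are illustrative rather than Bayesian-optimized, and the linear meta-learner is an assumption, as the abstract does not name one:

```python
# Stacking ensemble of RF, LightGBM, and an MLP (standing in for the BPNN),
# preceded by a PCA feature-fusion step. Data and settings are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))   # fused statistical/neighbor-domain features
y = rng.normal(size=500)         # material removal rate (placeholder)

stack = make_pipeline(
    PCA(n_components=10),        # feature-fusion step
    StackingRegressor(
        estimators=[
            ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
            ("lgbm", LGBMRegressor(n_estimators=200, random_state=0)),
            ("bpnn", MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                                  random_state=0)),
        ],
        final_estimator=LinearRegression(),  # meta-learner (assumed)
    ),
)
stack.fit(X, y)
mrr_pred = stack.predict(X[:5])
```

The point of mixing architectures is that the meta-learner can exploit the base models' complementary error patterns, which is what the abstract credits for the accuracy gain.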
The Traveling Salesman Problem (TSP) is a central topic in discrete mathematics and theoretical computer science and has been one of the driving forces in combinatorial optimization. The design and analysis of ever-better approximation algorithms for the TSP has proved challenging but very fruitful. This is the first book on approximation algorithms for the TSP, featuring a comprehensive collection of all major results and an overview of the most intriguing open problems. Many of the presented results have been discovered only recently, and some are published here for the first time, including better approximation algorithms for the asymmetric TSP and its path version. This book consolidates and advances the state of the art and makes it accessible to a wider audience. Featuring detailed proofs, over 170 exercises, and 100 color figures, this book is an excellent resource for teaching, self-study, and further research.
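For a taste of the subject, a sketch of the classic double-tree 2-approximation for metric TSP (a textbook result, not one of the book's new algorithms): build a minimum spanning tree and shortcut a preorder walk of it.

```python
# Double-tree 2-approximation for metric TSP: the MST weighs at most as much
# as an optimal tour, and shortcutting the doubled tree (via a DFS preorder
# walk) cannot increase length under the triangle inequality.
import itertools
import networkx as nx

def double_tree_tour(G):
    """2-approximate tour of a complete weighted graph with metric weights."""
    mst = nx.minimum_spanning_tree(G, weight="weight")
    order = list(nx.dfs_preorder_nodes(mst, source=next(iter(G))))
    return order + order[:1]  # close the tour

# Example: 5 points in the plane with Euclidean (hence metric) distances.
pts = {0: (0, 0), 1: (3, 0), 2: (3, 4), 3: (0, 4), 4: (1, 1)}
G = nx.Graph()
for u, v in itertools.combinations(pts, 2):
    (x1, y1), (x2, y2) = pts[u], pts[v]
    G.add_edge(u, v, weight=((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5)

tour = double_tree_tour(G)
length = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
```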
In Chapter 3 we learned how to do basic probability calculations and even put them to use solving some fairly complicated probability problems. In this chapter and the next two, we generalize how we do probability calculations, transitioning from working with sets and events to working with random variables.
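For a first taste of that transition (a standard example, not one from the text): a random variable assigns a number to each outcome, so event probabilities become statements about its values.

```latex
% X = number of heads in two fair coin flips, mapping outcomes to numbers:
P(X = 0) = P(\{TT\}) = \tfrac14, \quad
P(X = 1) = P(\{HT, TH\}) = \tfrac12, \quad
P(X = 2) = P(\{HH\}) = \tfrac14.
```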
To do statistics you must first be able to “speak probability.” In this chapter we are going to concentrate on the basic ideas of probability. In probability, the mechanism that generates outcomes is assumed known and the problems focus on calculating the chance of observing particular types or sets of outcomes. Classical problems include flipping “fair” coins (where fair means that on one flip of the coin the chance it comes up heads is equal to the chance it comes up tails) and “fair” dice (where fair now means the chance of landing on any side of the die is equal to that of landing on any other side).
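A quick simulation of these two classical setups (illustrative, not an exercise from the text): with enough flips and rolls, the empirical frequencies approach the fair-coin and fair-die probabilities of 1/2 and 1/6.

```python
# Empirical check of the "fair coin" and "fair die" mechanisms.
import random

random.seed(1)
flips = [random.choice("HT") for _ in range(100_000)]
rolls = [random.randint(1, 6) for _ in range(100_000)]

print(flips.count("H") / len(flips))  # close to 0.5
print(rolls.count(3) / len(rolls))    # close to 1/6 ~ 0.1667
```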
In Chapter 5 we learned about a number of discrete distributions. In this chapter we focus on continuous distributions, which are useful as models of various real-world processes and phenomena. By the end of this chapter you will know nine continuous and eight discrete distributions. There are many more continuous distributions, but these nine will suffice for our purposes.
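One way to see how continuous distributions differ from discrete ones (an illustration with the standard normal, not tied to the text's particular list of nine): probability comes from areas under the density rather than from point masses.

```python
# Density vs. probability for a continuous distribution.
from scipy import stats

z = stats.norm(loc=0, scale=1)
print(z.pdf(0.0))                # density at 0, about 0.3989 -- not a probability
print(z.cdf(1.0) - z.cdf(-1.0))  # P(-1 < Z < 1), about 0.6827
```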
Sampling joke: “If you don’t believe in random sampling, the next time you have a blood test, tell the doctor to take it all.” At the beginning of Chapter 7 we introduced the ideas of population vs. sample and parameter vs. statistic. We build on this in the current chapter. The key concept in this chapter is that if we were to take different samples from a distribution and compute some statistic, such as the sample mean, then we would get different results.
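The key concept is easy to see by simulation (an illustration, not an example from the text): repeated samples from the same distribution yield different sample means.

```python
# Five samples of size 25 from the same Normal(10, 2) distribution:
# five different sample means, all near 10, spread about 2/sqrt(25) = 0.4.
import numpy as np

rng = np.random.default_rng(0)
means = [rng.normal(loc=10, scale=2, size=25).mean() for _ in range(5)]
print(means)
```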
The last two chapters have covered the basic concepts of estimation. In Chapter 9 we studied the problem of giving a single number to estimate a parameter. In Chapter 10 we looked at ways to give an interval that we believe will include the true parameter. In many applications, we want to ask some very specific questions about the parameter(s).
We begin this chapter with a review of hypothesis testing from Chapter 12. A hypothesis is a statement about one or more parameters of a model. The null hypothesis is usually a specific statement that encapsulates “no effect.” For example, if we apply one of the two treatments, A or B, to volunteers, we may be interested in testing whether the population mean outcomes are equal.
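A minimal version of that two-treatment comparison (simulated data; a two-sample t-test stands in for whichever procedure the chapter develops):

```python
# Testing H0: equal population mean outcomes under treatments A and B.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
outcome_a = rng.normal(loc=5.0, scale=1.0, size=30)
outcome_b = rng.normal(loc=5.5, scale=1.0, size=30)

t_stat, p_value = stats.ttest_ind(outcome_a, outcome_b)
print(t_stat, p_value)  # a small p-value is evidence against equal means
```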