This paper focuses on the configuration design, dimensional synthesis, and engineering application of a novel asymmetric 2R1T parallel mechanism (PM) with zero coupling degree. The analytical forward and inverse displacement solutions are derived by means of the vector method. The mathematical models relating the Euler angles to the orientational parameters (i.e., azimuth and tilt angles) of the offset output axis are established. Using screw theory as the mathematical tool, evaluation indices of motion/force transmissibility are worked out, and the definitions and calculation methods of the good transmission orientation workspace and the good transmission orientation capacity (GTOC) are presented. Furthermore, a comparative example on the kinematic performance of the asymmetric UPS-RPU-PU PM and the planar symmetric 2UPS-PU PM is carried out, and the result demonstrates that UPS-RPU-PU significantly outperforms 2UPS-PU in terms of GTOC. A constrained optimization model is constructed to formulate the problem of optimizing the dimensional parameters so as to maximize GTOC, which is then solved by a differential evolution algorithm. Finally, an engineering case demonstrates that the optimized mechanism has good application prospects for hydraulic support test beds.
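As a hedged illustration of the optimization step mentioned above, the following is a minimal sketch of a generic DE/rand/1/bin differential evolution loop for maximizing a box-constrained objective; the `gtoc` function here is a hypothetical stand-in, not the paper's actual GTOC index or dimensional-parameter model.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, F=0.6, CR=0.9,
                           iters=200, seed=0):
    """Minimal DE/rand/1/bin maximizer over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: combine three distinct population members.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection (maximization).
            f = objective(trial)
            if f > fit[i]:
                pop[i], fit[i] = trial, f
    best = np.argmax(fit)
    return pop[best], fit[best]

# Hypothetical stand-in for the GTOC objective over dimensional parameters.
gtoc = lambda x: -np.sum((x - 0.5) ** 2)
x_best, f_best = differential_evolution(gtoc, np.array([[0.0, 1.0]] * 3))
```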
This chapter aims to motivate the study of machine learning, with students and researchers from an engineering background in mind as the intended audience.
We study approximations for the Lévy area of Brownian motion which are based on the Fourier series expansion and a polynomial expansion of the associated Brownian bridge. Comparing the asymptotic convergence rates of the Lévy area approximations, we see that the approximation resulting from the polynomial expansion of the Brownian bridge is more accurate than the Kloeden–Platen–Wright approximation, whilst still only using independent normal random vectors. We then link the asymptotic convergence rates of these approximations to the limiting fluctuations for the corresponding series expansions of the Brownian bridge. Moreover, and of interest in its own right, the analysis we use to identify the fluctuation processes for the Karhunen–Loève and Fourier series expansions of the Brownian bridge is extended to give a stand-alone derivation of the values of the Riemann zeta function at even positive integers.
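The paper's approximations are series-based; as a stand-alone illustration of the quantity being approximated, the following is a minimal sketch that estimates the Lévy area $A = \tfrac{1}{2}\int_0^1 (W^1\,dW^2 - W^2\,dW^1)$ of two-dimensional Brownian motion on $[0,1]$ by a left-point Riemann sum over a fine partition. This is not the Fourier, polynomial, or Kloeden–Platen–Wright construction itself, only a baseline discretization; the step count and seed are illustrative.

```python
import numpy as np

def levy_area(n_steps=10_000, seed=0):
    """Approximate the Levy area of 2D Brownian motion on [0, 1]
    by a left-point Riemann sum over a fine partition."""
    rng = np.random.default_rng(seed)
    # Independent Brownian increments for the two coordinates.
    dW = rng.normal(scale=np.sqrt(1.0 / n_steps), size=(n_steps, 2))
    # Path values at the grid points, starting from the origin.
    W = np.vstack([np.zeros(2), np.cumsum(dW, axis=0)])
    # A = (1/2) * integral of (W1 dW2 - W2 dW1), left-point rule.
    return 0.5 * np.sum(W[:-1, 0] * dW[:, 1] - W[:-1, 1] * dW[:, 0])

print(levy_area())
```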
This final chapter covers topics that build on the material discussed in the book, with the aim of pointing to avenues for further study and research. The selection of topics is clearly a matter of personal choice, but care has been taken to present both well-established topics, such as probabilistic graphical models, and emerging ones, such as causality and quantum machine learning. The topics are distinct, and each section can be read separately. The presentation is brief, and only meant as a launching pad for exploration.
As discussed so far in this book, the standard formulation of machine learning makes the following two basic assumptions:
1. Statistical equivalence of training and testing. The statistical properties of the data observed during training match those to be experienced during testing – i.e., the population distribution underlying the generation of the data is the same during both training and testing.
2. Separation of learning tasks. Training is carried out separately for each separate learning task – i.e., for any new data set and/or loss function, training is viewed as a new problem to be addressed from scratch.
In this chapter, we use the optimization tools presented in Chapter 5 to develop supervised learning algorithms that move beyond the simple settings studied in Chapter 4 for which the training problem could be solved exactly, typically by addressing an LS problem. We will focus specifically on binary and multi-class classification, with a brief discussion at the end of the chapter about the (direct) extension to regression problems. Following Chapter 4, the presentation will mostly concentrate on parametric model classes, but we will also touch upon mixture models and non-parametric methods.
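As a minimal sketch of the kind of gradient-based training this chapter develops, the following fits a logistic-regression binary classifier by gradient descent on the average cross-entropy loss; the synthetic data, step size, and epoch count are illustrative assumptions, not the book's examples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Gradient descent on the logistic (cross-entropy) loss.
    X: (N, D) features, y: (N,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)               # predicted probabilities
        grad = X.T @ (p - y) / len(y)    # gradient of the average loss
        w -= lr * grad
    return w

# Illustrative synthetic data: two Gaussian blobs plus a bias feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.repeat([0, 1], 50)
w = train_logistic(X, y)
accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```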
This chapter focuses on three key problems that underlie the formulation of many machine learning methods for inference and learning, namely variational inference (VI), amortized VI, and variational expectation maximization (VEM). We have already encountered these problems in simplified forms in previous chapters, and they will be essential in developing the more advanced techniques to be covered in the rest of the book. Notably, VI and amortized VI underpin optimal Bayesian inference, which was used, e.g., in Chapter 6 to design optimal predictors for generative models; and VEM generalizes the EM algorithm that was introduced in Chapter 7 for training directed generative latent-variable models.
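Since VEM generalizes the EM algorithm of Chapter 7, a compact reference point is a plain EM loop. The sketch below fits a two-component one-dimensional Gaussian mixture: the E-step computes posterior responsibilities (the exact variational posterior in this tractable case) and the M-step re-estimates the parameters. The initialization and toy data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, iters=100):
    """Minimal EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])       # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibilities q(z | x).
        dens = pi * norm.pdf(x[:, None], mu, sigma)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        Nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
        pi = Nk / len(x)
    return pi, mu, sigma

# Illustrative data drawn from a known two-component mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(1, 1.0, 300)])
pi, mu, sigma = em_gmm_1d(x)
```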
The previous chapters have adopted a limited range of probabilistic models, namely Bernoulli and categorical distributions for discrete rvs and Gaussian distributions for continuous rvs. While these are common modeling choices, they clearly do not represent many important situations of interest for machine learning applications. For instance, discrete data may a priori take arbitrarily large values, making categorical models unsuitable. Continuous data may need to satisfy certain constraints, such as non-negativity, rendering Gaussian models far from ideal.
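As a small worked illustration of the point about count data, the following fits a Poisson model, a natural choice when counts are a priori unbounded, to an assumed toy data set; the ML estimate of the Poisson rate is simply the sample mean, and the numbers are purely illustrative.

```python
import numpy as np
from math import lgamma

# Illustrative count data: a priori unbounded, so a categorical model
# over a fixed alphabet is unsuitable; a Poisson model is a natural fit.
counts = np.array([0, 2, 1, 4, 0, 3, 7, 1, 2, 5])

# Poisson ML estimate of the rate is the sample mean.
lam_hat = counts.mean()

# Log-likelihood of the fitted model, including the log k! terms.
loglik = np.sum(counts * np.log(lam_hat) - lam_hat
                - np.array([lgamma(k + 1) for k in counts]))
```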
So far, this book has focused on conventional centralized learning settings in which data are collected at a central server, which carries out training. When data originate at distributed agents, such as personal devices, organizations, or factories run by different companies, this approach has two clear drawbacks:
• First, it requires transferring data from the agents to the server, which may incur a prohibitive communication load.
• Second, in the process of transferring, storing, and processing the agents’ data, sensitive information may be exposed or exploited.
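As a hedged sketch of one way to address both drawbacks, the following implements a bare-bones federated-averaging (FedAvg) round in which agents run local gradient steps on private data and the server only averages model parameters, so raw data never leaves the agents. The least-squares local objective and the synthetic data are illustrative assumptions, not this chapter's specific protocol.

```python
import numpy as np

def local_update(w, X, y, lr=0.05, steps=10):
    """Local gradient steps on an agent's private least-squares loss."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg(agents_data, rounds=50, dim=3):
    """Server averages locally updated models; raw data stays local."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_models = [local_update(w.copy(), X, y) for X, y in agents_data]
        w = np.mean(local_models, axis=0)   # FedAvg aggregation
    return w

# Illustrative private datasets for three agents sharing a common model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
agents = []
for _ in range(3):
    X = rng.normal(size=(40, 3))
    agents.append((X, X @ w_true + 0.1 * rng.normal(size=40)))
w_hat = fedavg(agents)
```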
This paper studies the open-loop equilibrium strategies for a class of non-zero-sum reinsurance–investment stochastic differential games between two insurers with a state-dependent mean expectation in an incomplete market. Both insurers are able to purchase proportional reinsurance contracts and invest their wealth in a risk-free asset and a risky asset whose price is modeled by a general stochastic volatility model. The surplus processes of the two insurers are driven by two standard Brownian motions. The objective of each insurer is to find the equilibrium investment and reinsurance strategies that balance the expected return and the variance of relative terminal wealth. Using forward–backward stochastic differential equations (FBSDEs), we derive sufficient conditions and obtain general solutions for the equilibrium controls of the two insurers. Furthermore, we apply our theoretical results to two special stochastic volatility models (the Hull–White model and the Heston model). Numerical examples are also provided to illustrate our results.
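As a minimal sketch of the second special case, the following simulates the Heston stochastic-volatility dynamics $dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW^1_t$, $dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW^2_t$ with correlated drivers via an Euler–Maruyama scheme with full truncation of the variance; all parameter values are illustrative assumptions, and this is not the paper's equilibrium computation.

```python
import numpy as np

def heston_paths(S0=1.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                 xi=0.3, rho=-0.7, T=1.0, n_steps=1000, n_paths=10, seed=0):
    """Euler-Maruyama simulation of the Heston model with full truncation
    (max(v, 0) inside the diffusion) to keep the variance non-negative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.normal(size=n_paths)
        # Correlated Brownian drivers with correlation rho.
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n_paths)
        vp = np.maximum(v, 0.0)
        S = S * (1 + mu * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return S, v

S_T, v_T = heston_paths()
```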