The optimization algorithms from Chapters 4 and 6 require only rather simple tools from Riemannian geometry, all covered in Chapters 3 and 5 for embedded submanifolds and then generalized in Chapter 8. This chapter provides additional geometric tools to gain deeper insight and help develop more sophisticated algorithms. It opens with the Riemannian distance, then discusses exponential maps as retractions which generate geodesics, paired with a careful discussion of what it means to invert the exponential map. The chapter then defines parallel transport to compare tangent vectors in different tangent spaces, and takes a deep dive into the notion of Lipschitz continuity for gradients and Hessians on Riemannian manifolds, connecting these concepts with the Lipschitz-type regularity assumptions required to analyze gradient descent and trust regions. It goes on to define transporters, which can be seen as a relaxed kind of parallel transport. The chapter closes with a discussion of how to approximate Riemannian Hessians with finite differences of gradients via transporters, and with an introduction to the differentiation of tensor fields of all orders.
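To make the last few tools concrete, here is a minimal NumPy sketch on the unit sphere: the exponential map (a retraction whose curves are geodesics), parallel transport along those geodesics, and a finite-difference approximation of a Riemannian Hessian-vector product obtained by transporting gradients. The sphere, the Rayleigh-quotient cost and the step size are illustrative assumptions, not choices prescribed by the chapter.

```python
import numpy as np

def proj(x, u):
    """Orthogonal projection onto the tangent space T_x = {v : x @ v = 0}."""
    return u - (x @ u) * x

def exp_map(x, v):
    """Exponential map on the sphere: follow the geodesic through x with
    initial velocity v (a tangent vector at x) for unit time."""
    t = np.linalg.norm(v)
    return x if t == 0 else np.cos(t) * x + np.sin(t) * v / t

def transport(x, v, u):
    """Parallel transport of u (tangent at x) along the geodesic
    s -> Exp_x(s v), evaluated at s = 1."""
    t = np.linalg.norm(v)
    if t == 0:
        return u
    w = v / t
    return u + (u @ w) * ((np.cos(t) - 1.0) * w - np.sin(t) * x)

def rgrad(egrad, x):
    """Riemannian gradient: projection of the Euclidean gradient onto T_x."""
    return proj(x, egrad(x))

def hess_fd(egrad, x, v, t=1e-6):
    """Finite-difference approximation of Hess f(x)[v]: evaluate the gradient
    at the nearby point y = Exp_x(t v), transport it back to T_x along the
    reversed geodesic, and difference with the gradient at x."""
    y = exp_map(x, t * v)
    v_at_y = transport(x, t * v, t * v)              # geodesic velocity carried to y
    g_back = transport(y, -v_at_y, rgrad(egrad, y))  # grad f(y) brought back to T_x
    return (g_back - rgrad(egrad, x)) / t

# Check against the closed-form Hessian of the Rayleigh quotient
# f(x) = x.T @ A @ x on the sphere: Hess f(x)[v] = 2 proj(x, A v) - 2 (x.T A x) v.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
egrad = lambda x: 2 * A @ x
x = rng.standard_normal(4); x /= np.linalg.norm(x)
v = proj(x, rng.standard_normal(4))
exact = 2 * proj(x, A @ v) - 2 * (x @ A @ x) * v
print(np.linalg.norm(hess_fd(egrad, x, v) - exact))  # small finite-difference error
```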
Convexity is one of the most fruitful concepts in classical optimization. Geodesic convexity generalizes that concept to optimization on Riemannian manifolds. There are several ways to carry out such a generalization; this chapter favors permissive definitions which are sufficient to retain the most important properties for optimization purposes (e.g., local optima are global optima). Alternative definitions are discussed, highlighting the fact that all coincide for the special case of Hadamard manifolds (essentially, complete, simply connected manifolds of non-positive curvature). The chapter continues with a discussion of the special properties of differentiable geodesically (strictly, strongly) convex functions, and builds on them to show global linear convergence of Riemannian gradient descent, assuming strong geodesic convexity and Lipschitz continuous gradients (via the Polyak–Łojasiewicz inequality). The chapter closes with two examples of manifolds where geodesic convexity has proved useful, namely, the positive orthant with a log-barrier metric (recovering geometric programming), and the cone of positive definite matrices with the log-Euclidean and the affine invariant Riemannian metrics.
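As a small illustration of the first closing example, the sketch below runs Riemannian gradient descent on the positive orthant equipped with the log-barrier metric; the test cost and step size are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def rgd_log_barrier(grad_f, x0, step, num_iters=200):
    """Riemannian gradient descent on the positive orthant R^n_{>0} with the
    log-barrier metric <u, v>_x = sum_i u_i v_i / x_i^2. In this geometry the
    Riemannian gradient is x^2 * grad_f(x) (elementwise) and the exponential
    map is Exp_x(v) = x * exp(v / x), so every iterate stays strictly positive."""
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        v = -step * x**2 * grad_f(x)   # negative Riemannian gradient step
        x = x * np.exp(v / x)          # exponential map of the log-barrier metric
    return x

# Hypothetical test: f(x) = sum_i (x_i + 1/x_i) is convex in the log-coordinates
# y = log(x), hence geodesically convex in this geometry; its minimizer is the
# all-ones vector.
grad_f = lambda x: 1.0 - 1.0 / x**2
print(rgd_log_barrier(grad_f, x0=[5.0, 0.1, 2.0], step=0.1))  # approaches [1, 1, 1]
```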
As an entry point to differential geometry, this chapter defines embedded submanifolds as subsets of linear spaces which can be locally defined by equations satisfying certain regularity conditions. Such sets can be linearized, yielding the notion of tangent space. The chapter further defines what it means for maps to and from a submanifold to be smooth, and how to differentiate such maps. The (disjoint) union of all tangent spaces forms the tangent bundle, which is also a manifold. That makes it possible to define vector fields (maps which select a tangent vector at each point) and retractions (smooth maps which generate curves passing through any point with any given velocity). The chapter then proceeds to endow each tangent space with an inner product (turning each one into a Euclidean space). Under some regularity conditions, this extra structure turns the manifold into a Riemannian manifold, making it possible to define the Riemannian gradient of a real function. Taken together, these concepts are sufficient to build simple algorithms in the next chapter. An optional closing section defines local frames: they are useful for proofs but can be skipped for practical matters.
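For readers who want to see these definitions in code, here is a minimal NumPy sketch for the unit sphere, a prototypical embedded submanifold; the Rayleigh-quotient cost is an illustrative assumption rather than an example taken from the chapter.

```python
import numpy as np

# The unit sphere S^{n-1} = {x in R^n : x @ x = 1}, an embedded submanifold of
# R^n defined by one equation, with the metric inherited from the embedding space.

def proj(x, u):
    """Orthogonal projection onto the tangent space T_x = {v : x @ v = 0}."""
    return u - (x @ u) * x

def rgrad(egrad, x):
    """Riemannian gradient: project the Euclidean gradient onto T_x."""
    return proj(x, egrad(x))

def retract(x, v):
    """Metric-projection retraction R_x(v) = (x + v) / ||x + v||; the curve
    t -> R_x(t v) passes through x with velocity v."""
    y = x + v
    return y / np.linalg.norm(y)

# Illustrative cost: f(x) = x.T @ A @ x restricted to the sphere.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
egrad = lambda x: 2 * A @ x

x = rng.standard_normal(4); x /= np.linalg.norm(x)
v = proj(x, rng.standard_normal(4))

# Sanity checks: the Riemannian gradient is tangent at x, and the retraction
# agrees with x + t v to first order in t.
print(abs(x @ rgrad(egrad, x)))                         # ~ 0
t = 1e-4
print(np.linalg.norm(retract(x, t * v) - (x + t * v)))  # ~ O(t^2)
```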
The main purpose of this chapter is to motivate and analyze the Riemannian trust-region method (RTR). This optimization algorithm shines brightest when it uses both the Riemannian gradient and the Riemannian Hessian. It applies to optimization on manifolds in general, and thus to embedded submanifolds of linear spaces in particular; for that setting, the previous chapters introduce the necessary geometric tools. Toward RTR, the chapter first introduces a Riemannian version of Newton's method, motivated by first developing second-order optimality conditions. Each iteration of Newton's method requires solving a linear system of equations in a tangent space. To this end, the classical conjugate gradients method (CG) is reviewed. Then, RTR is presented with a worst-case convergence analysis guaranteeing that, under some assumptions, it finds points which approximately satisfy first- and second-order necessary optimality conditions. Subproblems can be solved with a variant of CG called truncated CG (tCG). The chapter closes with three optional sections: one about local convergence, one providing simpler conditions to ensure convergence, and one about checking Hessians numerically.
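To give a feel for the subproblem solver, here is a minimal NumPy sketch of a Steihaug–Toint style truncated CG in a single tangent space identified with R^n through an orthonormal basis; the stopping rules are simplified relative to those used in the worst-case analysis, so this is a sketch under those assumptions rather than the book's algorithm.

```python
import numpy as np

def truncated_cg(H, g, delta, tol=1e-10, max_iters=None):
    """Truncated CG for the trust-region subproblem
        minimize  g @ s + 0.5 * s @ (H @ s)   subject to  ||s|| <= delta,
    where H is a symmetric matrix (the Hessian model) and g the gradient,
    both expressed in an orthonormal basis of the tangent space."""
    n = g.size
    max_iters = 2 * n if max_iters is None else max_iters
    s = np.zeros(n)
    r = g.copy()          # residual (model gradient at s)
    p = -r                # first direction: steepest descent
    for _ in range(max_iters):
        Hp = H @ p
        curvature = p @ Hp
        if curvature <= 0:                               # negative curvature:
            return s + _step_to_boundary(s, p, delta)    #   stop on the boundary
        alpha = (r @ r) / curvature
        if np.linalg.norm(s + alpha * p) >= delta:       # step would leave the
            return s + _step_to_boundary(s, p, delta)    #   region: truncate
        s = s + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            return s
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return s

def _step_to_boundary(s, p, delta):
    """The vector tau * p with the positive tau such that ||s + tau p|| = delta."""
    a, b, c = p @ p, 2 * (s @ p), s @ s - delta**2
    tau = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return tau * p
```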
Optimization on Riemannian manifolds, the result of smooth geometry and optimization merging into one elegant modern framework, spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book.
We derive a nonlinear Schrödinger equation for the propagation of three-dimensional, broader-bandwidth gravity-capillary waves, including the effect of a depth-uniform current. In this derivation the narrow-bandwidth restriction is relaxed, so that the equation is more appropriate for application to a realistic sea wave spectrum. From this equation an instability condition is obtained, and instability regions in the perturbed-wavenumber space are drawn for a uniform wave train; these are in good agreement with exact numerical results. As it turns out, the corrections to the stability properties that occur at fourth order arise from an interaction between the mean flow and the frequency-dispersion term. Since, in the absence of a depth-uniform current, the frequency-dispersion term for pure capillary waves has the opposite sign to that for pure gravity waves, the corrections to the instability properties are likewise of opposite sign.
We consider a local projection stabilization based on biorthogonal systems for convection–diffusion–reaction differential equations with mixed boundary conditions. The approach based on biorthogonal systems is numerically more efficient than other existing approaches to obtain a uniform approximation for convection-dominated problems. We prove optimal a priori error estimates for the proposed numerical technique. Numerical examples are presented to demonstrate the performance of the approach.
Bayesian optimization is a methodology for optimizing expensive objective functions that has proven success in the sciences, engineering, and beyond. This timely text provides a self-contained and comprehensive introduction to the subject, starting from scratch and carefully developing all the key ideas along the way. This bottom-up approach illuminates unifying themes in the design of Bayesian optimization algorithms and builds a solid theoretical foundation for approaching novel situations.
The core of the book is divided into three main parts, covering theoretical and practical aspects of Gaussian process modeling, the Bayesian approach to sequential decision making, and the realization and computation of practical and effective optimization policies.
Following this foundational material, the book provides an overview of theoretical convergence results, a survey of notable extensions, a comprehensive history of Bayesian optimization, and an extensive annotated bibliography of applications.