This tutorial guide introduces online nonstochastic control, an emerging paradigm in control of dynamical systems and differentiable reinforcement learning that applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. In optimal control, robust control, and other control methodologies that assume stochastic noise, the goal is to perform comparably to an offline optimal strategy. In online control, both cost functions and perturbations from the assumed dynamical model are chosen by an adversary. Thus, the optimal policy is not defined a priori and the goal is to attain low regret against the best policy in hindsight from a benchmark class of policies. The resulting methods are based on iterative mathematical optimization algorithms and are accompanied by finite-time regret and computational complexity guarantees. This book is ideal for graduate students and researchers interested in bridging classical control theory and modern machine learning.
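For reference, the regret benchmark described above can be written out explicitly; the following is a standard formulation rather than a quotation from the book, so the notation is only indicative:
\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\bigl(x_t^{\pi}, u_t^{\pi}\bigr),
\]
where the $c_t$ are adversarially chosen cost functions, $(x_t, u_t)$ is the state--control trajectory produced by the online algorithm, $\Pi$ is the benchmark class of policies, and $(x_t^{\pi}, u_t^{\pi})$ is the trajectory the comparator policy $\pi$ would have generated against the same perturbation sequence.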
With an emphasis on timeless essential mathematical background for optimization, this textbook provides a comprehensive and accessible introduction to convex optimization for students in applied mathematics, computer science, and engineering. Authored by two influential researchers, the book covers both convex analysis basics and modern topics such as conic programming, conic representations of convex sets, and cone-constrained convex problems, providing readers with a solid, up-to-date understanding of the field. By excluding modeling and algorithms, the authors are able to discuss the theoretical aspects in greater depth. Over 170 in-depth exercises provide hands-on experience with the theory, while more than 30 'Facts' and their accompanying proofs enhance approachability. Instructors will appreciate the appendices that cover all necessary background and the instructors-only solutions manual provided online. By the end of the book, readers will be well equipped to engage with state-of-the-art developments in optimization and its applications in decision-making and engineering.
In this chapter, we (a) introduce the notion of a convex problem in cone-constrained form, (b) present the Lagrange function of a cone-constrained convex problem, (c) prove the convex programming Duality Theorem in cone-constrained form, and (d) discuss conic programming and conic duality, and present the conic programming Duality Theorem.
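For orientation only (a standard formulation, not necessarily in the chapter's notation), a conic problem and its dual form the pair
\[
\text{(P)}\;\; \min_{x}\ c^{\top}x \ \ \text{s.t.}\ \ Ax - b \in K,
\qquad\qquad
\text{(D)}\;\; \max_{\lambda}\ b^{\top}\lambda \ \ \text{s.t.}\ \ A^{\top}\lambda = c,\ \lambda \in K_{*},
\]
where $K$ is a regular cone and $K_{*}$ its dual cone. Weak duality follows from $c^{\top}x - b^{\top}\lambda = \lambda^{\top}(Ax - b) \ge 0$ for any feasible pair, and the conic Duality Theorem gives equality of the optimal values under suitable strict feasibility assumptions.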
In this chapter we present convex programming optimality conditions, in both saddle point form and Karush--Kuhn--Tucker form, for mathematical programming problems, along with optimality conditions for cone-constrained convex programs and for conic problems. We conclude the chapter by revisiting linear programming duality as a special case of conic duality and reproducing the classical results on the dual of a linearly constrained convex quadratic minimization problem.
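As a reminder of the standard form (a sketch for the smooth case, not the chapter's exact statement), the Karush--Kuhn--Tucker conditions for a convex program $\min_x f(x)$ subject to $g_i(x) \le 0$, $i = 1, \dots, m$, read
\[
\nabla f(x^{*}) + \sum_{i=1}^{m} \lambda_i^{*} \nabla g_i(x^{*}) = 0, \qquad
g_i(x^{*}) \le 0, \qquad \lambda_i^{*} \ge 0, \qquad \lambda_i^{*}\, g_i(x^{*}) = 0,
\]
and in the convex case, under a Slater-type constraint qualification, they are necessary and sufficient for optimality; equivalently, $(x^{*}, \lambda^{*})$ is a saddle point of the Lagrange function $L(x, \lambda) = f(x) + \sum_{i} \lambda_i g_i(x)$.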
In this chapter, we (a) outline operations preserving convexity of functions, (b) present differential criteria for convexity, (c) establish convexity of several important multivariate functions, (d) present the gradient inequality, and (e) establish local boundedness and Lipschitz continuity of convex functions.
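For instance, the gradient inequality in (d) is the statement that a differentiable convex function lies above each of its tangent planes:
\[
f(y) \;\ge\; f(x) + \nabla f(x)^{\top}(y - x) \qquad \text{for all } x, y \text{ in the convex domain of } f.
\]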
In this chapter, we (a) present the notion of a polyhedral representation and illustrate its importance, (b) demonstrate via Fourier--Motzkin elimination that every polyhedrally representable set is polyhedral, and (c) outline the calculus of polyhedral representations. As an immediate application, we demonstrate that a bounded and feasible LP problem is solvable.
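As a sketch of the notion (standard terminology, possibly differing from the chapter's formal definition in details), a polyhedral representation describes a set $X \subset \mathbb{R}^{n}$ as the projection of a polyhedron living in an extended space:
\[
X \;=\; \bigl\{\, x \in \mathbb{R}^{n} : \exists\, u \in \mathbb{R}^{k} \ \text{such that}\ Px + Qu \le r \,\bigr\},
\]
and Fourier--Motzkin elimination removes the auxiliary variables $u$ one coordinate at a time, each step producing finitely many linear inequalities in the remaining variables, so the projection, and hence $X$ itself, is polyhedral.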
In this chapter, we (a) present the epigraph characterization of cone-convexity, (b) introduce cone-monotonicity and describe differential criteria for cone-convexity and cone-monotonicity, (c) present instructive examples of cone-convex and cone-monotone functions, and (d) outline basic operations preserving cone-convexity and cone-monotonicity. Taken together, (b)--(d) provide simple and powerful tools allowing one to detect and utilize cone-convexity and cone-monotonicity.
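For orientation (a standard formulation that may differ from the chapter's notation), a map $f : \mathbb{R}^{n} \to \mathbb{R}^{m}$ is convex with respect to a cone $K$ ($K$-convex) when
\[
f\bigl(\lambda x + (1-\lambda) y\bigr) \;\le_{K}\; \lambda f(x) + (1-\lambda) f(y)
\qquad \text{for all } x, y \text{ and } \lambda \in [0, 1],
\]
where $a \le_{K} b$ means $b - a \in K$; equivalently, the $K$-epigraph $\{(x, y) : y - f(x) \in K\}$ is a convex set, which is the epigraph characterization referred to in (a).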