In this chapter, we demonstrate that (a) substituting the vector of eigenvalues of a symmetric n × n matrix into a convex permutation-symmetric function of n real variables results in a convex function of the matrix, and (b) if g is a convex function on the real axis and G is the set of symmetric matrices of a given size with spectrum in the domain of g, then G is a convex set, and for X in G the trace of the matrix g(X) is a convex function of X; here g(X) is the matrix acting on the spectral subspace of X associated with an eigenvalue v as multiplication by g(v). Both these facts will be heavily used when speaking about cone-convexity in Chapter 21.
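Fact (b) admits a quick numerical sanity check. The sketch below (a minimal illustration of my own, not from the book; it assumes NumPy and the concrete choice g(t) = t²) verifies midpoint convexity, Tr g((X+Y)/2) ≤ (Tr g(X) + Tr g(Y))/2, on random symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_g(X):
    # Tr g(X) with g(t) = t^2: apply g to the eigenvalues of X and sum.
    return np.sum(np.linalg.eigvalsh(X) ** 2)

def random_symmetric(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

# Midpoint convexity: Tr g((X+Y)/2) <= (Tr g(X) + Tr g(Y)) / 2
midpoint_convex = all(
    trace_g((X + Y) / 2) <= (trace_g(X) + trace_g(Y)) / 2 + 1e-12
    for X, Y in ((random_symmetric(5), random_symmetric(5)) for _ in range(100))
)
```

For this particular g the claim is easy to see directly, since Tr X² is the squared Frobenius norm of X; the same check can be repeated with any convex g applied to the eigenvalues.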
In this chapter, we (a) outline the subject and the terminology of mathematical and convex programming, (b) introduce the Slater and relaxed Slater conditions and formulate the Convex Theorem on the Alternative -- the basis of Lagrange duality theory in convex programming, (c) introduce the notions of cone-convexity and of the convex programming problem in cone-constrained form, thus extending the standard mathematical programming setup of convex optimization, and (d) formulate and prove the Convex Theorem on the Alternative in cone-constrained form, justifying, as a byproduct, the standard Convex Theorem on the Alternative.
In this chapter, we derive the standard first- and second-order necessary/sufficient conditions for local optimality of a feasible solution to a (possibly nonconvex) mathematical programming problem. We conclude the chapter by illustrating these on the S-Lemma.
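The first- and second-order conditions can be checked numerically on a toy smooth function. The sketch below (my own illustration, assuming NumPy; the function x² + 3y² and the finite-difference helpers are not from the book) verifies that at the minimizer the gradient vanishes and the Hessian is positive definite:

```python
import numpy as np

def f(z):
    x, y = z
    return x**2 + 3 * y**2  # smooth, with a strict local minimum at the origin

def grad_fd(f, z, h=1e-6):
    # central-difference approximation of the gradient
    z = np.asarray(z, dtype=float)
    g = np.zeros_like(z)
    for i in range(len(z)):
        e = np.zeros_like(z)
        e[i] = h
        g[i] = (f(z + e) - f(z - e)) / (2 * h)
    return g

def hess_fd(f, z, h=1e-4):
    # central-difference approximation of the Hessian
    z = np.asarray(z, dtype=float)
    n = len(z)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(z + ei + ej) - f(z + ei - ej)
                       - f(z - ei + ej) + f(z - ei - ej)) / (4 * h * h)
    return H

z0 = np.zeros(2)
first_order_ok = np.linalg.norm(grad_fd(f, z0)) < 1e-6            # gradient = 0
second_order_ok = np.all(np.linalg.eigvalsh(hess_fd(f, z0)) > 0)  # Hessian > 0
```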
In this chapter, we (a) discuss the notion of lower semicontinuity of a function and demonstrate that functions with this property have closed epigraphs, (b) show that the pointwise supremum of a family of lower semicontinuous functions is lower semicontinuous, (c) demonstrate that a proper lower semicontinuous convex function is the pointwise supremum of the affine minorants of the function, (d) introduce the notion of a subgradient and the subdifferential of a convex function at a point and demonstrate existence of subgradients at points from the relative interior of the function’s domain, (e) outline elementary rules of subdifferential calculus, and (f) establish basic properties of the directional derivatives of convex functions and the connection between directional derivatives and subdifferentials.
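A subgradient g of f at x is characterized by the inequality f(y) ≥ f(x) + g·(y − x) for all y. The sketch below (my own minimal example, assuming NumPy) checks this for f(x) = |x| at x = 0, where the subdifferential is the whole interval [−1, 1]:

```python
import numpy as np

f = abs  # convex, nondifferentiable at 0; subdifferential at 0 is [-1, 1]

ys = np.linspace(-5, 5, 101)

# Every g in [-1, 1] satisfies the subgradient inequality at x = 0:
# f(y) >= f(0) + g * (y - 0) for all y.
subgrad_ok = all(
    np.all(np.abs(ys) >= g * ys)
    for g in np.linspace(-1, 1, 21)
)

# g = 1.5 violates the inequality at some y, so it is not a subgradient at 0.
not_subgrad = np.any(np.abs(ys) < 1.5 * ys)
```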
In this chapter, we extract from the results of Chapter 3 the basic theory of finite systems of linear inequalities: Farkas’ Lemmas, the General Theorem on the Alternative, certificates for feasibility/infeasibility of polyhedral sets, and the linear programming Duality Theorem.
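In one standard form, Farkas' Lemma says the system Ax = b, x ≥ 0 is infeasible exactly when some y satisfies Aᵀy ≥ 0 and bᵀy < 0. The sketch below (a hand-built toy instance of my own, assuming NumPy) exhibits such a certificate for x₁ + x₂ = −1, x ≥ 0:

```python
import numpy as np

# System: A x = b, x >= 0, i.e. x1 + x2 = -1 with x1, x2 >= 0 -- infeasible.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])

# Hand-picked Farkas certificate: A^T y >= 0 and b^T y < 0.
y = np.array([1.0])
cert_ok = np.all(A.T @ y >= 0) and (b @ y < 0)

# The certificate refutes feasibility: for any x >= 0 we would get
# 0 <= (A^T y) @ x = y @ (A @ x) = y @ b < 0, a contradiction.
rng = np.random.default_rng(1)
no_solution_found = all(
    not np.allclose(A @ x, b) for x in rng.uniform(0, 5, size=(1000, 2))
)
```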
In this chapter, we (a) introduce the notion of the Legendre transformation of a proper convex function, (b) establish basic properties of the Legendre transform, in particular, demonstrate that the transform of a proper lower semicontinuous convex function is itself a proper lower semicontinuous convex function whose Legendre transform is the original function, (c) demonstrate that the set of minimizers of a proper lower semicontinuous convex function is the subdifferential, taken at the origin, of the function’s Legendre transform, and (d) derive the Young, Hölder, and moment inequalities and discuss dual (a.k.a. conjugate) norms.
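The Legendre transform f*(y) = sup_x (xy − f(x)) can be approximated on a grid. The sketch below (my own illustration, assuming NumPy) checks the classical fact that f(x) = x²/2 is self-conjugate, and spot-checks Young's inequality ab ≤ aᵖ/p + bᵠ/q for p = 3, q = 3/2:

```python
import numpy as np

xs = np.linspace(-10, 10, 4001)

def conjugate(f_vals, xs, y):
    # grid approximation of the Legendre transform: f*(y) = sup_x (x*y - f(x))
    return np.max(xs * y - f_vals)

f_vals = xs**2 / 2
# f(x) = x^2/2 is its own Legendre transform: f*(y) = y^2/2
self_conjugate = all(
    abs(conjugate(f_vals, xs, y) - y**2 / 2) < 1e-3
    for y in np.linspace(-3, 3, 13)
)

# Young's inequality with p = 3, q = p/(p-1) = 3/2, for a, b >= 0:
p, q = 3.0, 1.5
ab = np.linspace(0, 5, 51)
young_ok = all(a * b <= a**p / p + b**q / q + 1e-12 for a in ab for b in ab)
```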
The main ideas are introduced in a historical context. Beginning with phase retrieval and ending with neural networks, the reader will get a sense of the book’s broad scope.
Beginning with linear programming and ending with neural network training, this chapter features seven applications of the divide-and-concur approach to solving problems with RRR.
The reflect-reflect-relax (RRR) algorithm is derived from basic principles. Local convergence is established and the flow limit is introduced to better understand the global behavior.
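One common statement of the RRR iteration for constraint sets A and B is x ← x + β (P_B(2 P_A(x) − x) − P_A(x)), where P_A, P_B are projections and β is the relaxation parameter. The sketch below (my own toy instance with two lines in the plane, not an example from the book; it assumes NumPy) runs the iteration and reads off the solution as P_A(x):

```python
import numpy as np

# Toy constraint sets: A = the x-axis, B = the line y = x; A ∩ B = {(0, 0)}.
def P_A(z):
    # project onto the x-axis
    return np.array([z[0], 0.0])

def P_B(z):
    # project onto the line y = x
    m = (z[0] + z[1]) / 2
    return np.array([m, m])

beta = 0.5  # the "relax" step size in reflect-reflect-relax
x = np.array([3.0, 2.0])
for _ in range(500):
    pa = P_A(x)
    # reflect through A, project the reflection onto B, then take a relaxed step
    x = x + beta * (P_B(2 * pa - x) - pa)

# The candidate solution is P_A(x), which should land in the intersection.
solution = P_A(x)
converged = np.linalg.norm(solution) < 1e-6
```

For two intersecting lines the iteration contracts linearly, so a few hundred steps drive P_A(x) to the intersection to machine precision; on hard nonconvex problems the same update is used but the global behavior is what the flow limit is introduced to explain.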
The size of the intersection of A and B tells us whether we should expect many solutions, or whether we should be surprised to find even one. The latter case implies a conspiracy and is the most interesting.
Trapping of the RRR algorithm on nonsolutions can be avoided by modifying the constraint sets and the metric. This chapter also covers general good practice in the use of RRR.