The notion of convexity occupies a central position in the study of optimization theory. It encompasses not only the idea of convex sets, but also that of concave and convex functions (see Section 7.1 for definitions). The attractiveness of convexity for optimization theory arises from the fact that when an optimization problem meets suitable convexity conditions, the same first-order conditions that we have shown in previous chapters to be necessary for local optima also become sufficient for global optima. Indeed, even more is true. When the convexity conditions are tightened to what are called strict convexity conditions, we get the additional bonus of uniqueness of the solution.
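The sufficiency and uniqueness claims can be illustrated numerically. The sketch below (an example of ours, not from the text) takes the strictly convex function f(x) = (x - 2)² + 1 and checks that the point satisfying the first-order condition f′(x) = 0 is also the unique global minimum over a grid of sample points:

```python
# Illustrative sketch: for the strictly convex function
# f(x) = (x - 2)^2 + 1, the first-order condition f'(x) = 0
# holds only at x = 2, and that point is the unique global minimum.

def f(x):
    return (x - 2.0) ** 2 + 1.0

def fprime(x):
    return 2.0 * (x - 2.0)

# First-order (necessary) condition: f'(2) = 0.
assert fprime(2.0) == 0.0

# Under convexity the condition is also sufficient for global optimality:
# f(2) <= f(x) at every sampled point, with strict inequality away from
# x = 2 (uniqueness, courtesy of strict convexity).
samples = [i / 10.0 for i in range(-100, 101)]
assert all(f(2.0) <= f(x) for x in samples)
assert all(f(x) > f(2.0) for x in samples if x != 2.0)
print("stationary point x = 2 is the unique global minimum (on the grid)")
```

Of course, a finite grid cannot prove global optimality; the point of the sketch is only to make the logical strengthening concrete, not to replace the proofs developed later in the chapter.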
The importance of such results, especially from a computational standpoint, is obvious. Of course, such a marked strengthening of our earlier analysis does not come free. As we show in Section 7.2, the assumption of convexity is a strong one. A function that is concave or convex must necessarily be continuous everywhere on the interior of its domain. It must also possess strong differentiability properties; for instance, all directional derivatives of such a function must exist at all points in the domain. Finally, an assumption of convexity imposes strong curvature restrictions on the underlying function, in the form of properties that must be met by its first and second derivatives.
These results indicate that an assumption of convexity is not an innocuous one, but, viewed from the narrow standpoint of this book, the restrictive picture they paint is perhaps somewhat exaggerated.