Unconstrained multivariate gradient-based minimization is introduced by means of methods that produce search directions, focusing on steepest descent and Newton's method. Issues with both methods are discussed, highlighting what happens in the case of locally nonconvex functions, particularly for Newton's method. Linesearch is introduced, reducing multidimensional optimization to a sequence of one-dimensional searches along the rays of the search directions produced. Linesearch criteria, such as the Armijo (sufficient-decrease) condition, are presented, along with efficient ways to cut the step size.
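As a minimal illustration of these ideas, the following Python sketch combines the steepest-descent direction with Armijo backtracking; the test function, starting point, and parameter values (c1, the backtracking factor, the tolerance) are illustrative assumptions, not prescriptions from the chapter.

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, alpha0=1.0, beta=0.5, c1=1e-4,
                            tol=1e-8, max_iter=500):
    """Steepest descent with Armijo backtracking linesearch (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                               # steepest-descent direction
        alpha = alpha0
        # Armijo sufficient-decrease condition:
        # f(x + alpha d) <= f(x) + c1 * alpha * grad(x)^T d
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d):
            alpha *= beta                    # cut the step size geometrically
        x = x + alpha * d
    return x

# Example: minimize the simple quadratic f(x) = x1^2 + 10 x2^2
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
print(steepest_descent_armijo(f, grad, [3.0, -2.0]))   # approaches (0, 0)
```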
Duality theory has a central role in constrained optimization, both from a theoretical point of view and as a means of understanding solution methods and problem reformulations for special classes of problems. Such applications are presented in the next chapter, on Lagrangian relaxation and Lagrangian decomposition. In this chapter, the fundamental background for duality theory is presented, along with a basic introduction to its key concepts.
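For reference, the standard construction underlying this background material can be summarized as follows; this is a generic sketch in conventional notation, not necessarily the chapter's own.

```latex
% Primal problem
\min_{x}\; f(x) \quad \text{s.t.}\quad g(x) \le 0,\; h(x) = 0
% Lagrangian and dual function
L(x,\lambda,\mu) = f(x) + \lambda^{\top} g(x) + \mu^{\top} h(x), \qquad \lambda \ge 0
q(\lambda,\mu) = \inf_{x}\, L(x,\lambda,\mu)
% Dual problem; weak duality gives q(\lambda,\mu) \le f(x) for any primal-feasible x
\max_{\lambda \ge 0,\; \mu}\; q(\lambda,\mu)
```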
Convexity is of paramount importance in optimization theory. This chapter adopts a simple and intuitive description, highlighting the importance of convexity properties in guaranteeing global optimality, and paves the way to understanding the nonconvex optimization problems of later chapters.
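The defining inequality and the global-optimality property it delivers can be stated compactly (a standard-notation sketch):

```latex
% A function f is convex on a convex set C if, for all x, y in C and t in [0,1],
f\bigl(t x + (1-t) y\bigr) \;\le\; t f(x) + (1-t) f(y)
% For convex f minimized over convex C, every local minimum is a global
% minimum -- the property that guarantees global optimality.
```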
This is the main chapter of the book introducing in detail how modern Interior Point Methods work, what they are based on, and the numerical and computational schemes involved in their implementation. The difference between primal barrier methods and primal-dual barrier methods is presented and discussed, showing why primal-dual methods are the ones mostly used in general-purpose optimization solvers nowadays.
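As a pointer to the machinery involved, a primal-dual method for linear programming follows the central path defined by the perturbed KKT system below; this is the textbook-standard formulation, sketched here in conventional notation.

```latex
% Primal-dual central path for the LP pair
% min c^T x s.t. Ax = b, x >= 0   and   max b^T y s.t. A^T y + s = c, s >= 0:
A x = b, \qquad A^{\top} y + s = c, \qquad X S e = \mu e, \qquad x, s > 0
% with X = diag(x), S = diag(s). A primal-dual method applies Newton's method
% to this system while driving the barrier parameter \mu \to 0.
```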
This chapter is a first introduction to penalty and barrier methods, as a direct way to transform generally constrained optimization problems into unconstrained ones. This is done through the appropriate choice of penalty and barrier functions; the various problems facing such methods are highlighted in intuitive and illustrative ways through discussion and graphical examples. The chapter also prepares the reader for the much more advanced material of the next chapter.
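The following Python sketch contrasts a quadratic penalty with a logarithmic barrier on a hypothetical one-dimensional problem; the functions, grids, and parameter schedules are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative problem: minimize f(x) = (x - 2)^2 subject to x <= 1,
# whose constrained solution is x* = 1.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0                       # constraint g(x) <= 0

def penalty_obj(x, rho):
    # Quadratic penalty: charges only constraint violations (g(x) > 0).
    return f(x) + rho * max(g(x), 0.0) ** 2

def barrier_obj(x, mu):
    # Logarithmic barrier: defined only strictly inside the feasible region.
    return f(x) - mu * np.log(-g(x))

grid_all = np.linspace(-1.0, 3.0, 4001)     # penalty: may leave the region
grid_feas = np.linspace(-1.0, 0.999, 4001)  # barrier: strictly feasible only

for rho, mu in [(1.0, 1.0), (10.0, 0.1), (100.0, 0.01)]:
    xp = grid_all[np.argmin([penalty_obj(x, rho) for x in grid_all])]
    xb = grid_feas[np.argmin([barrier_obj(x, mu) for x in grid_feas])]
    # As rho -> inf and mu -> 0, both minimizers approach x* = 1
    # (the penalty from outside the feasible region, the barrier from inside).
    print(f"rho={rho:6.1f}: x_pen={xp:.3f}   mu={mu:5.2f}: x_bar={xb:.3f}")
```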
This chapter covers a topic that is standard in most introductory material on optimization. It examines three basic one-dimensional optimization methods, highlighting the connections between them and leading to the one-dimensional Newton's method as the method of choice.
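A minimal Python sketch of the one-dimensional Newton iteration for minimization is given below; the test function and starting point are illustrative assumptions.

```python
# One-dimensional Newton's method for minimization:
# iterate x_{k+1} = x_k - f'(x_k) / f''(x_k).
def newton_1d(df, d2f, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)      # Newton step on the derivative
        x -= step
        if abs(step) < tol:        # stop when the update is negligible
            break
    return x

# Example: f(x) = x^4 - 3x^2 + x, with f'(x) = 4x^3 - 6x + 1 and
# f''(x) = 12x^2 - 6; from x0 = 2 the iteration converges to a local
# minimizer (f'' > 0 there).
print(newton_1d(lambda x: 4 * x**3 - 6 * x + 1,
                lambda x: 12 * x**2 - 6, 2.0))
```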
This chapter focuses on the formulation of LP problems as a means of teaching how to derive mathematical programming formulations from basic descriptions, specifications, and data for the processes to be optimized. It covers basic blending-type problems and production planning, with special focus given to network flow problems, which cover a wide range of applications. This topic is revisited in more detail in Chapter 14.
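To show what such a formulation looks like in practice, here is a small production-planning LP solved with SciPy; the products, resource limits, and profit coefficients are hypothetical data invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical two-product production-planning LP:
# maximize 3 x1 + 5 x2 subject to machine-hour limits, x >= 0.
# linprog minimizes, so the profit coefficients are negated.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],    # machine A hours: x1 <= 4
        [0.0, 2.0],    # machine B hours: 2 x2 <= 12
        [3.0, 2.0]]    # machine C hours: 3 x1 + 2 x2 <= 18
b_ub = [4.0, 12.0, 18.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal plan x = (2, 6) with maximum profit 36
```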
Decomposition of optimization problems is a fundamental technique for reducing computational cost and enabling efficient solution of very large-scale models. Key decomposition approaches are presented in this chapter, discussing primal and dual decomposition methods, Generalized Benders Decomposition, and related applications.
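The core idea of dual decomposition can be sketched as follows for a problem coupled only through shared constraints; this is a generic template in standard notation, not the chapter's specific development.

```latex
% Separable problem coupled by a linking constraint:
\min_{x_1,\dots,x_n}\; \sum_i f_i(x_i) \quad \text{s.t.}\quad \sum_i A_i x_i = b
% Relaxing the coupling constraint with multipliers \lambda separates it:
q(\lambda) = \sum_i \min_{x_i}\Bigl[f_i(x_i) + \lambda^{\top} A_i x_i\Bigr] - \lambda^{\top} b
% Each inner minimization is solved independently; \lambda is then updated,
% e.g. by a subgradient ascent step on the dual:
\lambda \leftarrow \lambda + \alpha_k \Bigl(\textstyle\sum_i A_i x_i^{\star} - b\Bigr)
```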
This is a basic and standard chapter motivating and illustrating Linear Programming problems via the geometrical construction of feasible regions and objective function contours. It establishes the context of Linear Programming and motivates the material of the next chapter via numerous illustrative examples.
Non-differentiable optimization is a topic of contemporary interest in several applications. Non-differentiability may arise from piecewise descriptions of the objective function or the constraints, and requires special handling in order to derive solutions for such problems. In this chapter the emphasis is on subgradient methods, with a basic introduction to subdifferentials and all the associated necessary concepts.
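The following Python sketch applies a subgradient method with a diminishing step size to a piecewise-linear (pointwise-maximum) function; the data defining the affine pieces are hypothetical.

```python
import numpy as np

# Nonsmooth objective: f(x) = max_i (a_i^T x + b_i), with illustrative data.
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.5, -1.0]])
b = np.array([0.0, 1.0, -0.5])

def f(x):
    return np.max(A @ x + b)

def subgrad(x):
    # Any affine piece attaining the max yields a valid subgradient of f at x.
    return A[np.argmax(A @ x + b)]

x = np.zeros(2)
best = f(x)
for k in range(1, 2001):
    x = x - (1.0 / k) * subgrad(x)   # diminishing step size alpha_k = 1/k
    best = min(best, f(x))           # track the best value seen (f need not
                                     # decrease monotonically here)
print(best)
```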
Uncertainty is ubiquitous in engineering practice and models. Parameters estimated via online measurements or experiments always carry a certain level of uncertainty, which can in fact be significant for difficult-to-measure systems. Other sources of uncertainty are fluctuations in process inputs, e.g. concentrations, flow rates, and temperatures. Finally, one may not be certain of the structure of the models themselves, e.g. the actual chemical reaction mechanism(s) may be uncertain. All of this necessitates special handling of such models; where the uncertainty can be quantified by probabilistic measures, special formulations and solution procedures can be employed to derive solutions that are robust with respect to the uncertainty involved. These topics, along with the necessary theoretical concepts, are presented in this chapter, with subsequent emphasis on the practical application of the multiple scenario approach for handling parametric uncertainty.
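A generic multiple-scenario formulation has the shape sketched below; the symbols (design variables d, per-scenario recourse variables x_s, uncertain parameters θ_s with weights w_s) follow common convention and are not the chapter's specific notation.

```latex
% Multiple-scenario formulation for parametric uncertainty: the design d is
% shared across scenarios, while x_s and \theta_s vary per scenario s with
% probability (weight) w_s.
\min_{d,\; x_1,\dots,x_S}\; \sum_{s=1}^{S} w_s\, f(d, x_s, \theta_s)
\quad \text{s.t.}\quad g(d, x_s, \theta_s) \le 0, \qquad s = 1, \dots, S
```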
The chapter aims to introduce fully the theory behind the simplex method for Linear Programming, building up the material slowly from the solution set of rectangular linear (affine) systems of equations to vertex solutions. The simplex method is introduced as a natural way to progress from one vertex to the next on the constraint polytope, always improving the objective until the optimal solution is reached. Artificial variables and the two-phase simplex method are used to find an initial feasible basis for the simplex method. Pathological cases of LP problems are also considered, highlighting how the simplex method detects them.
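The algebra behind these vertex solutions is summarized below in the standard textbook notation (a reference sketch, not the chapter's exact derivation).

```latex
% Standard-form LP and its basic (vertex) solutions:
\min_{x}\; c^{\top} x \quad \text{s.t.}\quad A x = b,\; x \ge 0
% Partitioning A = [B \;\; N] with B invertible gives the basic solution
x_B = B^{-1} b, \qquad x_N = 0,
% which is a vertex of the polytope when x_B \ge 0. The current vertex is
% optimal when all reduced costs are nonnegative:
\bar{c}_N^{\top} = c_N^{\top} - c_B^{\top} B^{-1} N \;\ge\; 0;
% otherwise a negative reduced cost identifies an entering variable for the
% next pivot.
```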
This is one of the main and key chapters in the introductory part of the book. Constrained nonlinear programming, involving both equality and inequality constraints, is introduced and related in an intuitive (at this stage) manner to Lagrange multipliers. In a later chapter on duality theory (Chapter 17), a more rigorous and theoretical introduction to Lagrangian theory is presented.
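For orientation, the Lagrangian and the first-order (KKT) conditions that the multipliers enter into are sketched below in conventional notation.

```latex
% Generic constrained NLP: min f(x) s.t. h(x) = 0, g(x) <= 0.
L(x, \lambda, \mu) = f(x) + \lambda^{\top} h(x) + \mu^{\top} g(x)
% First-order (KKT) conditions at a candidate solution:
\nabla_x L = \nabla f(x) + \nabla h(x)\,\lambda + \nabla g(x)\,\mu = 0
h(x) = 0, \qquad g(x) \le 0, \qquad \mu \ge 0, \qquad \mu_i\, g_i(x) = 0 \;\;\forall i
```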