This chapter addresses the global optimization of nonconvex NLP and MINLP problems. Convexification transformations are first introduced that allow us to transform a nonconvex NLP into a convex NLP. This is illustrated with geometric programming problems, which involve posynomials and can be convexified with exponential transformations. We next consider the more general solution approach that relies on convex envelopes, which predict rigorous lower bounds to the global optimum and are used in conjunction with a spatial branch and bound method. The case of bilinear NLP problems is addressed as a specific example for which the McCormick convex envelopes are derived. The application of the spatial branch and bound search, coupled with McCormick envelopes, is illustrated with a small example. The software BARON, ANTIGONE, and SCIP are briefly described.
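As a concrete illustration of the McCormick envelopes described above, the sketch below evaluates the four envelope planes for a bilinear term w = xy and checks that the true product lies between them (the bounding box and sample points are hypothetical, chosen only for illustration):

```python
# Minimal sketch of the McCormick envelopes for a bilinear term w = x*y
# over a box xL <= x <= xU, yL <= y <= yU (hypothetical bounds).

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Envelope values for w = x*y at the point (x, y)."""
    # Convex underestimators: w >= each of these two planes
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    # Concave overestimators: w <= each of these two planes
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

# The true product always lies between the two envelopes on the box
xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0
for x, y in [(0.5, 1.5), (1.0, 2.0), (1.8, 2.9)]:
    lb, ub = mccormick_bounds(x, y, xL, xU, yL, yU)
    assert lb <= x * y <= ub
```

In a spatial branch and bound search these envelopes replace each bilinear term, and the gap between them shrinks as the box is subdivided.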
This chapter addresses the solution of nonlinear programming (NLP) problems through algorithms whose objective is to find a point satisfying the Karush–Kuhn–Tucker (KKT) conditions through different applications of Newton's method. The algorithms considered include successive quadratic programming, the reduced-gradient method, and the interior-point method. The basic assumptions behind each method are stated and used to derive the major steps involved in these algorithms. We make brief reference to optimization software including SNOPT, MINOS, CONOPT, IPOPT, and KNITRO. Finally, general guidelines are given on how to formulate good NLP models.
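Since the algorithms in this chapter can be viewed as applications of Newton's method to the KKT conditions, the sketch below shows a Newton step on the KKT system of a hypothetical equality-constrained toy problem (because the problem is a QP, a single step lands on the exact KKT point):

```python
import numpy as np

# One Newton step on the KKT conditions of
#   min x1^2 + x2^2   s.t.   x1 + x2 = 1
# (a hypothetical toy problem chosen for illustration).

def kkt_newton_step(x, lam):
    H = 2.0 * np.eye(2)            # Hessian of the Lagrangian
    A = np.array([[1.0, 1.0]])     # constraint Jacobian
    grad_L = 2.0 * x + A.flatten() * lam   # grad f + lam * grad h
    h = np.array([x[0] + x[1] - 1.0])
    # Assemble and solve the KKT (Newton) linear system
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_L, h])
    step = np.linalg.solve(KKT, rhs)
    return x + step[:2], lam + step[2]

x, lam = kkt_newton_step(np.array([0.0, 0.0]), 0.0)
# x = (0.5, 0.5), lam = -1: the KKT point of the toy problem
```

The methods in the chapter differ mainly in how they approximate and globalize this basic Newton step in the presence of inequalities.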
This chapter first describes general approaches for anticipating uncertainty in optimization models. The strategies include optimizing the expected value, the minimax strategy, chance-constrained programming, two-stage and multistage programming, and robust optimization. The chapter focuses on the solution of two-stage stochastic MILP problems in which 0-1 variables are present in the stage-1 decisions. The discretization of the uncertain parameters is described, which gives rise to scenario trees. We then present the extended MILP formulation that explicitly considers all possible scenarios. Since this problem can become too large, the Benders decomposition method (also known as the L-shaped method) is introduced, in which a master MILP problem is defined through duality in order to predict new integer values for the stage-1 decisions, as well as a lower bound. The extension to multistage programming problems is briefly discussed, together with a brief reference to robust optimization, for which the robust counterpart is derived.
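The extended (deterministic-equivalent) MILP mentioned above can be sketched compactly for scenarios s = 1, ..., S with probabilities τ_s (generic symbols, not tied to specific data):

```latex
% Two-stage stochastic MILP over discretized scenarios s = 1,...,S
\begin{align*}
\min_{x,\, y_s} \quad & c^{T} x + \sum_{s=1}^{S} \tau_{s}\, d_{s}^{T} y_{s} \\
\text{s.t.} \quad & A x \leq b, \\
& T_{s} x + W_{s} y_{s} \leq h_{s}, \qquad s = 1,\dots,S, \\
& x \in \{0,1\}^{n}, \quad y_{s} \geq 0.
\end{align*}
```

Here x are the stage-1 (here-and-now) 0-1 decisions and y_s are the stage-2 recourse decisions; Benders (L-shaped) decomposition exploits the block structure of the constraints in s.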
Having introduced mixed-integer linear programming (MILP) models in Chapter 6 using somewhat intuitive arguments, this chapter shows that MILP models can be systematically derived using concepts of propositional logic. The chapter introduces the conjunctive normal form (CNF) as a logic form that can be used as a basis to readily formulate linear constraints with 0-1 variables. The steps required to transform logic propositions into CNF are described. Next, the concept of disjunctions is introduced, showing that these can be formulated as MILP constraints either with the big-M formulation or with the hull reformulation. It is also shown that the latter leads to stronger LP relaxations.
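For a two-term disjunction [Y: A¹x ≤ b¹] ∨ [¬Y: A²x ≤ b²] with 0 ≤ x ≤ U, the two reformulations mentioned above can be sketched as follows (generic symbols, not tied to a specific model):

```latex
% Big-M reformulation (binary y, sufficiently large M^1, M^2):
\begin{align*}
& A^{1} x \leq b^{1} + M^{1}(1 - y), \qquad
  A^{2} x \leq b^{2} + M^{2} y, \qquad y \in \{0,1\}.
\end{align*}
% Hull reformulation (disaggregated variables x = x^1 + x^2):
\begin{align*}
& x = x^{1} + x^{2}, \qquad
  A^{1} x^{1} \leq b^{1} y, \qquad
  A^{2} x^{2} \leq b^{2} (1 - y), \\
& 0 \leq x^{1} \leq U y, \qquad
  0 \leq x^{2} \leq U (1 - y), \qquad y \in \{0,1\}.
\end{align*}
```

The hull reformulation requires additional variables and constraints, but its LP relaxation is at least as tight as that of the big-M form.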
This chapter addresses the solution of mixed-integer nonlinear programming (MINLP) problems. The following methods for convex MINLP optimization are described: branch and bound, outer-approximation, generalized Benders decomposition, and extended cutting plane. The last three methods rely on decomposing the MINLP problem into a master MILP model that predicts lower bounds and new integer values, and an NLP subproblem that is solved for fixed integer variables, yielding an upper bound. It is shown that the MILP master problem of generalized Benders decomposition can be derived from a linear combination of the constraints of the master MILP for outer-approximation, yielding a weaker lower bound. The extension of these methods to nonconvex MINLP problems is discussed, and brief reference is made to software such as SBB, DICOPT, and α-ECP.
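The master MILP of outer-approximation mentioned above can be sketched as follows, where (x^k, y^k) are the points obtained from the NLP subproblems solved so far (generic notation):

```latex
\begin{align*}
\min_{x,\, y,\, \alpha} \quad & \alpha \\
\text{s.t.} \quad
& \alpha \geq f(x^{k}, y^{k}) + \nabla f(x^{k}, y^{k})^{T}
  \begin{pmatrix} x - x^{k} \\ y - y^{k} \end{pmatrix},
  \qquad k = 1,\dots,K, \\
& 0 \geq g(x^{k}, y^{k}) + \nabla g(x^{k}, y^{k})^{T}
  \begin{pmatrix} x - x^{k} \\ y - y^{k} \end{pmatrix},
  \qquad k = 1,\dots,K, \\
& x \in X, \quad y \in \{0,1\}^{m}.
\end{align*}
```

For convex f and g these linearizations are underestimators, so the master problem yields valid lower bounds; aggregating the linearizations with the optimal multipliers gives the single (and hence weaker) generalized Benders cut.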
This chapter addresses the problem of establishing the feasibility of a set of constraints given that recourse variables are involved and that the uncertainty set is specified, typically through lower and upper bounds. This problem, denoted as the feasibility test problem, is shown to correspond to a max-min-max optimization problem. It is shown that, under assumptions of convexity, the problem can be simplified through vertex searches in the parameter set. The feasibility test problem can also be reformulated as a bilevel optimization problem in which the KKT conditions of the inner problem are replaced by mixed-integer constraints; the resulting MINLP has the capability of predicting nonvertex solutions. The feasibility test is then extended to the feasibility index problem, which determines the actual parameter range that is feasible. The concept of one-dimensional convexity is introduced to provide sufficient conditions for the validity of vertex searches. The example of a heat exchanger network is used to illustrate the mathematical formulations.
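The max-min-max structure of the feasibility test can be written compactly as follows, with design d, recourse (control) variables z, uncertain parameters θ, and constraints f_j ≤ 0 (standard notation for these formulations):

```latex
% Feasibility test: design d is feasible over the set T if chi(d) <= 0
\begin{equation*}
\chi(d) \;=\; \max_{\theta \in T} \; \min_{z} \; \max_{j \in J} \;
f_{j}(d, z, \theta).
\end{equation*}
% Feasibility index: largest scaled deviation delta that remains feasible
\begin{equation*}
F \;=\; \max \,\delta \quad \text{s.t.} \quad \chi(d) \leq 0
\ \ \text{over} \ \
T(\delta) = \{\theta : \theta^{N} - \delta\,\Delta\theta^{-}
\leq \theta \leq \theta^{N} + \delta\,\Delta\theta^{+}\}.
\end{equation*}
```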
This chapter first presents basic theoretical concepts of linear programming (LP) problems. These include convexity, solution at extreme points or vertices, and the characterization of these points through systems of equations expressed in terms of basic and nonbasic variables. The KKT conditions are then applied to identify optimal vertex solutions. These theoretical concepts are then used to derive the Simplex algorithm, which is introduced as an exchange algorithm between basic and nonbasic variables so as to verify optimality at a given vertex and ensure feasible steps. A small numerical example is presented to illustrate the steps of the Simplex algorithm. Finally, a brief discussion of software such as CPLEX, GUROBI, and XPRESS is presented.
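As a small illustration with made-up data, the LP below is solved with SciPy's linprog (rather than the commercial codes named above); consistent with the theory in this chapter, the optimizer returns a vertex solution:

```python
from scipy.optimize import linprog

# Hypothetical toy LP:
#   min  -x1 - 2 x2
#   s.t.  x1 +   x2 <= 4
#         x1 + 3 x2 <= 6
#         x1, x2 >= 0
res = linprog(c=[-1, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
# Optimal vertex: x = (3, 1), objective = -5
```

The optimum lies at the vertex where both inequality constraints are active, i.e., at the intersection of the two constraint lines.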
This chapter first provides an introduction to the types of optimization problems that arise in different areas of process systems engineering. It then provides a general classification of optimization problems: linear and mixed-integer linear programming, nonlinear and mixed-integer nonlinear programming, generalized disjunctive programming, decomposition methods, stochastic programming, and flexibility analysis. Finally, it reviews the outline of the book through its different chapters.
This chapter addresses the solution of mixed-integer linear programming (MILP) problems, for which simple methods are first introduced, namely exhaustive enumeration of all 0-1 combinations, and solution through relaxation and rounding. The branch and bound method is then formally introduced, identifying major properties such as lower bounding, which is used to fathom nodes, and the generation of feasible integer solutions, which yield upper bounds. The concept of cutting planes is also introduced with Gomory's cutting plane. The combination of branch and bound and cutting planes is then discussed. Finally, a brief discussion of software such as CPLEX, GUROBI, and XPRESS is presented.
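The interplay of relaxation bounds, fathoming, and incumbent solutions can be sketched on a toy 0-1 knapsack problem (hypothetical data), using the greedy fractional relaxation as the bound for this maximization problem:

```python
# Minimal branch-and-bound sketch for a 0-1 knapsack (hypothetical data).
# Nodes whose relaxation bound is no better than the incumbent are fathomed.

def lp_bound(values, weights, cap, fixed):
    """Greedy fractional relaxation with some variables fixed to 0/1."""
    total, rem, free = 0.0, cap, []
    for i, (v, w) in enumerate(zip(values, weights)):
        if i in fixed:
            if fixed[i] == 1:
                total += v
                rem -= w
        else:
            free.append((v / w, v, w))
    if rem < 0:
        return float("-inf")          # infeasible fixing
    for _, v, w in sorted(free, reverse=True):
        take = min(1.0, rem / w)
        total += take * v
        rem -= take * w
        if rem <= 0:
            break
    return total

def branch_and_bound(values, weights, cap):
    n, best = len(values), float("-inf")
    stack = [{}]                       # each node is a partial 0/1 fixing
    while stack:
        fixed = stack.pop()
        if lp_bound(values, weights, cap, fixed) <= best:
            continue                   # fathom by bound (or infeasibility)
        if len(fixed) == n:            # leaf: feasible integer solution
            best = max(best, sum(values[i] for i in fixed if fixed[i]))
            continue
        i = len(fixed)                 # branch on the next variable
        stack.append({**fixed, i: 0})
        stack.append({**fixed, i: 1})
    return best

# e.g. branch_and_bound([10, 13, 7], [3, 4, 2], 6) returns 20
```

Cutting planes would tighten the relaxation at each node, reducing the number of nodes that must be explored.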
This chapter first introduces the idea of process modeling, both as equation-oriented and as sequential modular calculations. It then poses the analysis of process flowsheets as the solution of systems of nonlinear equations. Newton's method is first introduced for solving these systems, highlighting some of its major theoretical properties. Next, the concept of quasi-Newton methods is introduced to approximate the Jacobian matrix in Newton's method. The specific quasi-Newton method presented is Broyden's method, whose derivation is given.
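A minimal sketch of Broyden's method on a hypothetical 2x2 system is given below; the initial Jacobian approximation is taken as the exact Jacobian at the starting point, and subsequent iterations use only rank-one updates:

```python
import numpy as np

# Hypothetical system:  x1^2 + x2^2 = 2,  x1 - x2 = 0  (solution x = (1, 1))
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

def broyden(x, B, tol=1e-10, max_iter=50):
    f = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -f)    # quasi-Newton step: B s = -F(x)
        x = x + s
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            return x
        y = f_new - f
        # Broyden rank-one update of the Jacobian approximation
        B = B + np.outer(y - B @ s, s) / (s @ s)
        f = f_new
    return x

x0 = np.array([1.5, 0.8])
B0 = np.array([[2 * x0[0], 2 * x0[1]], [1.0, -1.0]])  # exact Jacobian at x0
x = broyden(x0, B0)
# x converges to approximately (1, 1)
```

The update satisfies the secant condition B_new s = y, so the method avoids re-evaluating the Jacobian at every iteration, which is the main point of quasi-Newton schemes in flowsheet calculations.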
This chapter introduces constraint programming, a modeling framework that can accommodate discrete, Boolean, and continuous variables, and in which constraints can be algebraic, or in the form of disjunctions, logic constraints, or global constraints that represent procedures. The main goal in constraint programming is to find feasible solutions to the specified model. The main solution method is a tree search that uses domain reduction and constraint propagation techniques. As an example of global constraints, "edge-finding" constraints from the area of scheduling are presented to illustrate the procedural aspect of the search. A simple example problem is presented to illustrate the tree search used in conjunction with domain reduction and constraint propagation. The software OPL is briefly described.
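The domain reduction and constraint propagation idea can be sketched on a toy model (hypothetical constraints): each constraint removes values that have no support in the other variable's domain, and the filtering is repeated until a fixpoint is reached:

```python
# Toy constraint propagation (domain reduction) sketch for
#   x, y in {1,...,5},  x + y = 8,  x < y   (hypothetical model)

def propagate(dx, dy):
    changed = True
    while changed:
        changed = False
        # x + y = 8: keep values that have a support in the other domain
        nx = {v for v in dx if any(v + w == 8 for w in dy)}
        ny = {w for w in dy if any(v + w == 8 for v in dx)}
        # x < y: same filtering for the inequality
        nx = {v for v in nx if any(v < w for w in ny)}
        ny = {w for w in ny if any(v < w for v in nx)}
        if nx != dx or ny != dy:
            dx, dy, changed = nx, ny, True
    return dx, dy

dx, dy = propagate(set(range(1, 6)), set(range(1, 6)))
# dx reduces to {3, 4} and dy to {4, 5}
```

Note that propagation alone does not always reduce the domains to singletons (here the only solution is (3, 5), yet some unsupported combinations survive per-constraint filtering), which is why it is combined with a tree search.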