An essential text on practical application, theory and simulation, written by an international coalition of experts in the field and edited by the authors of Colloidal Suspension Rheology. This up-to-date work builds upon the prior work as a valuable guide to formulation and processing, as well as fundamental rheology of colloidal suspensions. Thematically, theory and simulation are connected to industrial application by consideration of colloidal interactions, particle properties, and suspension microstructure. Important classes of model suspensions including gels, glasses and soft particles are covered so as to develop a deeper understanding of industrial systems ranging from carbon black slurries, paints and coatings, asphalt, cement, and mine tailings, to natural suspensions such as biocolloids, protein solutions, and blood. Systematically presenting the established facts in this multidisciplinary field, this book is the perfect aid for academic researchers, graduate students, and industrial practitioners alike.
This chapter introduces basic models for mixed-integer linear programming. It starts with multiple-choice constraints and implications, which are formulated as inequalities with 0-1 variables. Next, constraints with continuous variables are introduced, such as discontinuous domains and cost functions with fixed charges, which are formulated as linear mixed-integer constraints. Finally, the chapter closes by introducing classic OR problems: the assignment problem, facility location problem, knapsack problem, set covering problem, and the traveling salesman problem, all of which are formulated as linear integer and mixed-integer linear programming models.
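As a concrete illustration of one of the classic problems mentioned above, the following is a minimal sketch of a 0-1 knapsack instance solved by brute-force enumeration (all data values are hypothetical; a real MILP solver would use branch and bound instead):

```python
from itertools import product

# Hypothetical 0-1 knapsack instance:
# maximize sum(values[i] * x[i]) subject to sum(weights[i] * x[i]) <= capacity,
# with each x[i] a 0-1 variable.
values = [10, 13, 7, 8]
weights = [4, 6, 3, 5]
capacity = 9

best_obj, best_x = -1, None
for x in product([0, 1], repeat=len(values)):
    # Check the single knapsack constraint.
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:
        obj = sum(v * xi for v, xi in zip(values, x))
        if obj > best_obj:
            best_obj, best_x = obj, x

print(best_obj, best_x)  # -> 20 (0, 1, 1, 0)
```

Enumeration is exponential in the number of variables, which is why the branch-and-bound methods described later in the text are needed for instances of realistic size.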
This chapter addresses the decomposition of MILP optimization problems that involve complicating constraints. It is shown that, by dualizing the complicating constraints, one can derive the Lagrangean relaxation that yields a lower bound to the optimal solution. It is also shown that, by duplicating variables, one can dualize the corresponding equalities, yielding the Lagrangean decomposition method that can predict stronger lower bounds than the Lagrangean relaxation. The steps involved in this decomposition method are described, and can be extended to NLP and MINLP problems.
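The lower-bounding property of the Lagrangean relaxation can be seen on a toy minimization problem (all data hypothetical): minimize 3x1 + 5x2 + 4x3 subject to the complicating constraint x1 + x2 + x3 >= 2 with binary x, whose optimum is 7 (pick the two cheapest items). Dualizing the constraint decouples the inner minimization:

```python
# Lagrangean relaxation sketch for a toy MILP (hypothetical data):
#   minimize 3*x1 + 5*x2 + 4*x3  s.t.  x1 + x2 + x3 >= 2,  x binary.
# Dualizing the complicating constraint with multiplier lam >= 0 gives
#   L(lam) = min_x sum((c_i - lam) * x_i) + lam * k,
# which is a valid lower bound on the true optimum (here 7).
c = [3.0, 5.0, 4.0]
k = 2.0  # right-hand side of the dualized constraint

def lagrangean_bound(lam):
    # The inner minimization decouples: set x_i = 1 only when c_i - lam < 0.
    return sum(min(0.0, ci - lam) for ci in c) + lam * k

for lam in [0.0, 2.0, 3.0, 4.0, 5.0]:
    print(lam, lagrangean_bound(lam))  # every bound is <= 7
```

Maximizing L(lam) over the multiplier (the Lagrangean dual) yields the best such bound; here lam = 4 already attains the optimal value 7.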
This chapter first introduces basic concepts in nonlinear optimization, especially feasible regions and convexity conditions. Sufficient conditions are provided for both convex regions and convex functions. Next, optimality conditions are presented for unconstrained optimization problems (stationary conditions), and constrained problems with equality constraints (stationary condition of Lagrange function) and with inequality constraints (Fritz-John Theorem). The chapter concludes with nonlinear optimization with equality and inequality constraints that lead to the Karush–Kuhn–Tucker conditions. Finally, an active set strategy is introduced for the solution of small nonlinear programming problems, and is illustrated with a small example.
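The Karush–Kuhn–Tucker conditions can be verified numerically on a toy convex problem (hypothetical example, not one from the text): minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 >= 0, whose optimum is (0.5, 0.5) with multiplier mu = 1:

```python
# KKT verification sketch for a toy convex NLP (hypothetical example):
#   minimize x^2 + y^2  subject to  x + y - 1 >= 0.
# Candidate optimum and multiplier:
x, y, mu = 0.5, 0.5, 1.0

grad_f = (2 * x, 2 * y)  # gradient of the objective at (x, y)
grad_g = (1.0, 1.0)      # gradient of the constraint
g = x + y - 1            # constraint value (active here: g = 0)

# The four KKT conditions:
stationarity = all(abs(gf - mu * gg) < 1e-9 for gf, gg in zip(grad_f, grad_g))
primal_feasible = g >= -1e-9
dual_feasible = mu >= 0
complementarity = abs(mu * g) < 1e-9

print(stationarity, primal_feasible, dual_feasible, complementarity)  # all True
```

Because the problem is convex, satisfying these four conditions is sufficient for global optimality, which is the key fact the chapter builds on.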
This chapter addresses the global optimization of nonconvex NLP and MINLP optimization problems. The use of convexification transformations is first introduced that allow us to transform a nonconvex NLP into a convex NLP. This is illustrated with geometric programming problems that involve posynomials, and that can be convexified with exponential transformations. We consider next the more general solution approach that relies on the use of convex envelopes that can predict rigorous lower bounds to the global optimum, and which are used in conjunction with a spatial branch and bound method. The case of bilinear NLP problems is addressed as a specific example for which the McCormick convex envelopes are derived. The application of the spatial branch and bound search, coupled with McCormick envelopes, is illustrated with a small example. The software BARON, ANTIGONE, and SCIP are briefly described.
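The underestimating half of the McCormick envelope for a bilinear term w = x*y can be checked directly on a box with hypothetical bounds; the two linear cuts below never exceed x*y anywhere on the box and are tight at the corners:

```python
# McCormick underestimator sketch for the bilinear term w = x*y on the box
# xL <= x <= xU, yL <= y <= yU (hypothetical bounds):
#   w >= xL*y + x*yL - xL*yL
#   w >= xU*y + x*yU - xU*yU
xL, xU, yL, yU = 0.0, 2.0, 1.0, 3.0

def mccormick_under(x, y):
    # Pointwise maximum of the two linear underestimators.
    return max(xL * y + x * yL - xL * yL,
               xU * y + x * yU - xU * yU)

# Validity check on a coarse grid: the envelope never exceeds x*y.
ok = True
for i in range(11):
    for j in range(11):
        x = xL + (xU - xL) * i / 10
        y = yL + (yU - yL) * j / 10
        if mccormick_under(x, y) > x * y + 1e-9:
            ok = False
print(ok)  # -> True
```

In a spatial branch-and-bound method, these envelopes replace each bilinear term in the relaxation; branching on x or y shrinks the box and tightens the cuts toward x*y.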
This chapter addresses the solution of nonlinear programming (NLP) problems through algorithms whose objective is to find a point satisfying the Karush–Kuhn–Tucker conditions through different applications of Newton's method. The algorithms considered include the successive-quadratic programming, reduced-gradient, and interior-point methods. The basic assumptions behind each method are stated and used to derive the major steps involved in these algorithms. We make brief reference to optimization software including SNOPT, MINOS, CONOPT, IPOPT, and KNITRO. Finally, general guidelines are given on how to formulate good NLP models.
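The core linear algebra behind SQP-type methods is a Newton step on the KKT conditions. For an equality-constrained quadratic problem a single KKT solve gives the exact optimum, which the following sketch demonstrates on a hypothetical instance (minimize x^2 + y^2 subject to x + y = 1):

```python
# Newton/KKT step sketch (hypothetical problem, pure-Python linear solve):
#   minimize x^2 + y^2  subject to  x + y = 1.
# KKT stationarity and feasibility give the linear system
#   [2 0 1][x  ]   [0]
#   [0 2 1][y  ] = [0]
#   [1 1 0][lam]   [1]

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

KKT = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]
x, y, lam = solve(KKT, rhs)
print(x, y, lam)  # -> 0.5 0.5 -1.0
```

For a general NLP, SQP repeats such a step with the Hessian of the Lagrangian and linearized constraints updated at each iterate.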
This chapter first describes general approaches for anticipating uncertainty in optimization models. The strategies include optimizing the expected value, the minimax strategy, chance-constrained, two-stage, and multistage programming, and robust optimization. The chapter focuses on the solution of two-stage stochastic MILP programming problems in which 0-1 variables are present in stage-1 decisions. The discretization of the uncertain parameters is described, which gives rise to scenario trees. We then present the extended MILP formulation that explicitly considers all possible scenarios. Since this problem can become too large, the Benders decomposition method (also known as the L-shaped method) is introduced, in which a master MILP problem is defined through duality in order to predict new integer values for stage-1 decisions, as well as a lower bound. The extension to multistage programming problems is also briefly discussed, as well as a brief reference to robust optimization in which the robust counterpart is derived.
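The extended (deterministic-equivalent) formulation over all scenarios can be illustrated on a tiny two-stage instance with entirely hypothetical data: a binary stage-1 decision of whether to build a facility (cost 10), and a stage-2 recourse that serves an uncertain demand at unit cost 1 if built, or 3 if outsourced:

```python
# Deterministic-equivalent sketch of a tiny two-stage stochastic program
# (all data hypothetical).  Stage 1: build or not (binary x, cost 10).
# Stage 2: serve a scenario demand d at unit cost 1 if built, else 3.
scenarios = [(0.5, 5.0), (0.5, 15.0)]  # (probability, demand)

def total_cost(x):
    build = 10.0 * x
    # Expected recourse cost over all scenarios.
    recourse = sum(p * d * (1.0 if x else 3.0) for p, d in scenarios)
    return build + recourse

# With one binary variable we can enumerate stage-1 decisions directly.
best_x = min([0, 1], key=total_cost)
print(best_x, total_cost(best_x))  # -> 1 20.0
```

With many 0-1 stage-1 variables and many scenarios this extended model grows quickly, which is precisely what motivates the Benders (L-shaped) decomposition described in the chapter.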