Most algorithms in this book assume that the design variables are continuous. However, sometimes design variables must be discrete. Common examples of discrete optimization include scheduling, network problems, and resource allocation. This chapter introduces some techniques for solving discrete optimization problems.
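As a toy illustration of what a discrete problem looks like (not a method from this chapter), the following Python sketch enumerates all binary choices in a small knapsack-style resource-allocation problem; the item values, weights, and capacity are invented for the example.

```python
from itertools import product

# Toy knapsack: choose a subset of items (binary design variables)
# to maximize total value without exceeding the weight capacity.
# Values, weights, and capacity are illustrative, not from the text.
values = [10, 13, 7, 11]
weights = [4, 6, 3, 5]
capacity = 10

best_value, best_choice = 0, None
for choice in product((0, 1), repeat=len(values)):  # all 2^n discrete designs
    weight = sum(w * x for w, x in zip(weights, choice))
    value = sum(v * x for v, x in zip(values, choice))
    if weight <= capacity and value > best_value:
        best_value, best_choice = value, choice

print(best_choice, best_value)  # (1, 1, 0, 0) with value 23
```

Exhaustive enumeration is only viable for tiny problems; the cost grows as 2^n, which is what motivates dedicated discrete optimization techniques.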
In the introductory chapter, we discussed function characteristics from the point of view of the function’s output—the black-box view shown in Fig. 1.16. Here, we discuss how the function is modeled and computed. The better your understanding of the model and the more access you have to its details, the more effectively you can solve the optimization problem. We explain the errors involved in the modeling process so that we can interpret optimization results correctly.
Optimization is a human instinct. People constantly seek to improve their lives and the systems that surround them. Optimization is intrinsic in biology, as exemplified by the evolution of species. Birds optimize their wings’ shape in real time, and dogs have been shown to find optimal trajectories. Even more broadly, many laws of physics relate to optimization, such as the principle of minimum energy. As Leonhard Euler once wrote, “nothing at all takes place in the universe in which some rule of maximum or minimum does not appear.”
This chapter provides helpful historical context for the methods discussed in this book. Nothing else in the book depends on familiarity with the material in this chapter, so it can be skipped. However, this history draws connections between the various topics that enrich the big picture of optimization, so you may want to revisit this chapter as you become familiar with the rest of the book.
As mentioned previously, most engineering systems are multidisciplinary, motivating the development of multidisciplinary design optimization (MDO). The analysis of multidisciplinary systems requires coupled models and coupled solvers. We prefer the term component instead of discipline or model because it is more general. However, we use these terms interchangeably depending on the context. When components in a system represent different physics, the term multiphysics is commonly used.
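As a rough sketch of what a coupled solver does, the following Python snippet iterates a block Gauss-Seidel loop between two hypothetical components until the coupling variables stop changing; the component equations are invented for illustration.

```python
# Minimal sketch: block Gauss-Seidel iteration for two coupled
# components. Each component's output feeds the other's input;
# the coupling functions below are invented for illustration.

def component_a(y_b):
    return 0.5 * y_b + 1.0   # hypothetical component A

def component_b(y_a):
    return 0.25 * y_a + 2.0  # hypothetical component B

y_a, y_b = 0.0, 0.0  # initial guesses for the coupling variables
for _ in range(50):
    y_a_new = component_a(y_b)
    y_b_new = component_b(y_a_new)  # uses the freshly updated y_a
    if abs(y_a_new - y_a) + abs(y_b_new - y_b) < 1e-10:
        break
    y_a, y_b = y_a_new, y_b_new

print(y_a, y_b)  # converged coupled state
```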
We solve these problems using gradient information to determine a series of steps from a starting guess (or initial design) to the optimum, as shown in Fig. 4.1. We assume the objective function to be nonlinear, continuous, and deterministic. We make no assumption about unimodality or multimodality, so there is no guarantee that the algorithm finds the global optimum. Referring to the attributes that classify an optimization problem (Fig. 1.22), the optimization algorithms discussed in this chapter range from first to second order, perform a local search, and evaluate the function directly. The algorithms are based on mathematical principles rather than heuristics.
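As a minimal sketch of the idea (not the specific algorithms of this chapter), the following Python snippet takes gradient steps with a backtracking line search from a starting guess; the smooth objective function is invented for the example.

```python
import numpy as np

# Minimal sketch: steepest descent with a backtracking (Armijo)
# line search on an invented smooth function.

def f(x):
    return (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2

def grad(x):
    return np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])

x = np.array([5.0, 5.0])          # starting guess (initial design)
for _ in range(200):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:  # first-order optimality check
        break
    alpha = 1.0
    while f(x - alpha * g) > f(x) - 1e-4 * alpha * g @ g:
        alpha *= 0.5              # backtrack until sufficient decrease
    x = x - alpha * g

print(x)  # approaches the minimum at [1, -2]
```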
The gradient-based optimization methods introduced in Chapters 4 and 5 require the derivatives of the objective and constraints with respect to the design variables, as illustrated in Fig. 6.1. Derivatives also play a central role in other numerical algorithms. For example, the Newton-based methods introduced in Section 3.8 require the derivatives of the residuals.
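For instance, a Newton iteration drives a residual to zero using the residual's derivative; the following sketch uses an invented scalar residual for illustration.

```python
# Minimal sketch: Newton's method for a residual equation r(u) = 0.
# The residual is illustrative, not one from the text.

def r(u):
    return u**3 - 2.0          # residual; root at u = 2**(1/3)

def dr_du(u):
    return 3.0 * u**2          # derivative of the residual

u = 1.0                        # starting guess
for _ in range(20):
    if abs(r(u)) < 1e-12:
        break
    u -= r(u) / dr_du(u)       # Newton update

print(u)  # ~1.259921, the cube root of 2
```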
Uncertainty is always present in engineering design. Manufacturing processes create deviations from the specifications, operating conditions vary from the ideal, and some parameters are inherently variable. Optimization with deterministic inputs can lead to poorly performing designs. Optimization under uncertainty (OUU) is the optimization of systems in the presence of random parameters or design variables. The objective is to produce robust and reliable designs. A design is robust when the objective function is less sensitive to inherent variability. A design is reliable when it is less likely to violate a constraint under that variability.
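As a minimal sketch of quantifying such variability (one simple approach, not necessarily the chapter's), the following Python snippet uses Monte Carlo sampling of an invented objective with a random parameter to estimate the statistics a robust formulation might optimize.

```python
import numpy as np

# Minimal sketch: Monte Carlo estimation of the mean and standard
# deviation of an objective under a random input. The objective and
# the noise model are invented for illustration; a robust formulation
# might minimize mean + k * std instead of the deterministic value.

rng = np.random.default_rng(0)

def objective(x, p):
    return (x - 2.0) ** 2 + 0.5 * p * x   # p is an uncertain parameter

x = 1.5                                        # candidate design
p_samples = rng.normal(0.0, 0.3, size=10_000)  # assumed input variability
f_samples = objective(x, p_samples)

mean, std = f_samples.mean(), f_samples.std()
print(mean, std)            # statistics used by a robust objective
print(mean + 2.0 * std)     # e.g., a mean-plus-two-sigma robust metric
```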
Up to this point in the book, all of our optimization problem formulations have had a single objective function. In this chapter, we consider multiobjective optimization problems, that is, problems whose formulations have more than one objective function. Some common examples of multiobjective optimization include risk versus reward, profit versus environmental impact, acquisition cost versus operating cost, and drag versus noise.
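As a minimal sketch of exploring such trade-offs, the following Python snippet sweeps a weighted-sum scalarization of two invented objectives; each weight yields one Pareto-optimal point. (Weighted sums recover only the convex portion of a Pareto front, one reason dedicated multiobjective methods exist.)

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch: tracing Pareto-optimal trade-offs between two
# competing objectives with a weighted-sum scalarization. The
# objectives are invented for illustration.

def f1(x):
    return (x - 1.0) ** 2       # e.g., "risk"

def f2(x):
    return (x + 1.0) ** 2       # e.g., "cost"

for w in np.linspace(0.0, 1.0, 5):
    res = minimize_scalar(lambda x: w * f1(x) + (1 - w) * f2(x))
    print(f"w={w:.2f}  x*={res.x:+.3f}  f1={f1(res.x):.3f}  f2={f2(res.x):.3f}")
```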
Gradient-free algorithms fill an essential role in optimization. The gradient-based algorithms introduced in Chapters 4 and 5 are efficient at finding local minima for high-dimensional nonlinear problems defined by continuous smooth functions. However, the assumptions made by these algorithms are not always valid, which can render them ineffective. Also, gradients might not be available when a function is given as a black box.
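As a minimal illustration, the following snippet applies a gradient-free method (Nelder-Mead, via SciPy) to an invented nonsmooth function whose kink at the optimum would hinder a gradient-based solver.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: a gradient-free simplex method (Nelder-Mead)
# applied to a function that is not differentiable at its optimum.
# The function is invented for illustration.

def f(x):
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)   # nonsmooth at the optimum

res = minimize(f, x0=np.array([5.0, 5.0]), method="Nelder-Mead")
print(res.x, res.fun)  # close to [1, -2] with objective near 0
```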
Presenting a fresh look at process control, this new text demonstrates the state-space approach in parallel with the traditional approach to explain the strategies used in industry today. Modern time-domain and traditional transform-domain methods are integrated throughout, with the advantages and limitations of each approach explained; the fundamental theoretical concepts and methods of process control are applied to practical problems. To ensure understanding of the mathematical calculations involved, MATLAB® is included for numeric calculations and MAPLE for symbolic calculations, with the math behind every method carefully explained so that students develop a clear understanding of how and why the software tools work. Written for a one-semester course with optional advanced-level material, the text features solved examples, case studies that include several chemical reactor examples, chapter summaries, key terms and concepts, over 240 end-of-chapter problems, focused computational exercises, and solutions for instructors.
A unique text integrating numerics, mathematics, and applications to provide a hands-on approach to using optimization techniques, this mathematically accessible textbook emphasises conceptual understanding and the importance of theorems rather than elaborate proofs. It allows students to develop fundamental optimization methods before delving into MATLAB®'s optimization toolbox, and to link MATLAB's results with the results from their own code. Following a practical approach, the text demonstrates several applications, from error-free analytic examples to truss (size) optimization, and 2D and 3D shape optimization, where numerical errors are inevitable. The principle of minimum potential energy is discussed to highlight the deep relationship between engineering and optimization. MATLAB code in every chapter illustrates key concepts, and the text demonstrates the coupling between MATLAB and SOLIDWORKS® for design optimization. A wide variety of optimization problems are covered, including constrained non-linear, linear-programming, least-squares, multi-objective, and global optimization problems.