
Chapter 4: Unconstrained Gradient-Based Optimization

pp. 79-152

Authors

University of Michigan, Ann Arbor; Brigham Young University, Utah

Summary

We solve these problems using gradient information to determine a series of steps from a starting guess (or initial design) to the optimum, as shown in Fig. 4.1. We assume the objective function to be nonlinear, C2 continuous, and deterministic. We do not assume unimodality or multimodality, and there is no guarantee that the algorithm finds the global optimum. Referring to the attributes that classify an optimization problem (Fig. 1.22), the optimization algorithms discussed in this chapter range from first to second order, perform a local search, and evaluate the function directly. The algorithms are based on mathematical principles rather than heuristics.
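The iterative process described above — taking a series of gradient-informed steps from a starting guess toward a local optimum — can be sketched as follows. This is a minimal illustration of first-order steepest descent with a backtracking (Armijo) line search, not the specific algorithms of the chapter; the function, starting point, and parameter values are assumptions for the example.

```python
import numpy as np

def gradient_descent(f, grad, x0, alpha0=1.0, tol=1e-6, max_iter=500):
    """Minimize a smooth f from starting guess x0 by steepest descent
    with a simple backtracking line search (Armijo sufficient decrease)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # first-order optimality check
            break
        p = -g                        # steepest-descent direction
        alpha = alpha0
        # Shrink the step until the sufficient-decrease condition holds
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        x = x + alpha * p
    return x

# Example: a smooth quadratic bowl with minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
x_star = gradient_descent(f, grad, x0=[5.0, 5.0])
```

Because the search is local, the point returned depends on the starting guess: on a multimodal function this sketch converges to whichever local minimum the descent path leads to, which is the "no global-optimum guarantee" noted in the summary.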
