
Chapter 4: Optimization: Hard Computing

pp. 109-141

Authors

Idaho State University

Extract

In this chapter, we establish the mathematical foundation for hard computing optimization algorithms. We review classical optimization approaches and extend the discussion to iterative methods, which play a special role in machine learning. In particular, we cover the gradient descent method, Newton's method, the conjugate gradient method, and the quasi-Newton method. Alongside the discussion of each method, we provide implementations in Matlab script and considerations for use in neural network training algorithms. Finally, the Levenberg-Marquardt method is introduced, discussed, and implemented in Matlab script to compare its behavior with that of the other four iterative algorithms introduced in this chapter.
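
To make the gradient descent iteration concrete before the chapter's full treatment, here is a minimal Matlab sketch of the update x_(k+1) = x_k - alpha*grad f(x_k), applied to an assumed quadratic test function f(x) = 0.5*x'*A*x - b'*x, whose gradient is A*x - b. The matrix A, vector b, step size alpha, and stopping tolerance below are illustrative choices, not code from the chapter.

    % Illustrative gradient descent on an assumed quadratic test problem
    % (not the chapter's script). Minimizes f(x) = 0.5*x'*A*x - b'*x.
    A = [3 1; 1 2];          % symmetric positive definite matrix (assumed example)
    b = [1; 1];
    x = zeros(2, 1);         % starting point
    alpha = 0.1;             % fixed step size (assumed; needs alpha < 2/lambda_max(A))
    for k = 1:500
        g = A*x - b;         % gradient of f at the current iterate
        if norm(g) < 1e-8    % stop once the gradient is nearly zero
            break
        end
        x = x - alpha*g;     % gradient descent update
    end
    disp(x)                  % approaches the minimizer A\b = [0.2; 0.4]

On this quadratic, Newton's method would replace the step alpha*g with the Hessian solve A\g and reach the minimizer in a single step; the conjugate gradient and quasi-Newton methods discussed in the chapter trade off between these two extremes in per-iteration cost.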

Keywords

  • Gradient descent
  • Newton's method
  • Conjugate gradient method
  • Quasi-Newton method
  • Levenberg-Marquardt method
