In this chapter, we establish the mathematical foundation for hard computing optimization algorithms. We look at classical optimization approaches and extend our discussion to iterative methods, which hold a special role in machine learning. In particular, we review the gradient descent method, Newton's method, the conjugate gradient method, and the quasi-Newton method. Alongside the discussion of these optimization methods, we provide implementations in Matlab script and considerations for their use in neural network training algorithms. Finally, the Levenberg-Marquardt method is introduced, discussed, and implemented in Matlab script to compare its behavior with that of the other four iterative algorithms introduced in this chapter.
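As a flavor of the iterative updates the chapter develops, the following is a minimal Matlab sketch of the gradient descent method applied to a simple quadratic objective. The matrix A, vector b, step size eta, tolerance, and iteration limit are illustrative assumptions, not values taken from the chapter's examples.

% Minimal gradient descent sketch for the quadratic f(x) = 0.5*x'*A*x - b'*x
% (illustrative only; A, b, eta, tol and the iteration limit are assumed values)
A   = [3 1; 1 2];          % symmetric positive definite matrix
b   = [1; 1];
x   = zeros(2,1);          % starting point
eta = 0.1;                 % fixed step size (learning rate)
tol = 1e-8;                % stopping tolerance on the gradient norm

for k = 1:1000
    g = A*x - b;           % gradient of the quadratic objective
    if norm(g) < tol
        break
    end
    x = x - eta*g;         % gradient descent update
end

disp(x)                    % approaches the solution of A*x = b

The same loop structure carries over to the other iterative methods discussed in the chapter; only the computation of the search direction and step size changes.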