A dynamical system is a mechanical, electrical, chemical, or biological system that evolves in time. Dynamical systems theory provides one of the most powerful and pervasive applications of matrix methods in science and engineering. These qualitative and quantitative tools allow one to determine and characterize the number, type, and stability of solutions of complex, often nonlinear, systems. The methods include phase-plane analysis, bifurcation diagrams, stability theory, and Poincaré diagrams, illustrated using linear and nonlinear physical examples, including the Duffing equation and the Saltzman–Lorenz model.
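The Saltzman–Lorenz model mentioned above can be integrated numerically. As a minimal pure-Python sketch (an illustration, not the book's code), the following advances the system with a classical fourth-order Runge–Kutta step, assuming the standard parameter values σ = 10, ρ = 28, β = 8/3:

```python
def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Saltzman-Lorenz equations (standard parameters)."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
dt = 0.01
for _ in range(2000):  # integrate to t = 20; trajectory settles onto the attractor
    state = rk4_step(lorenz_rhs, state, dt)
```

At these parameter values the solution is chaotic: nearby initial conditions diverge exponentially, yet the trajectory remains bounded on the butterfly-shaped attractor.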
The algebraic eigenproblem is the mathematical answer to the physical questions: What are the principal stresses in a solid or fluid, and on what planes do they act? What are the natural frequencies of a system? Is the system stable to small disturbances? What is the best basis with respect to which to solve a system of linear algebraic equations with a real symmetric coefficient matrix? What is the best basis with respect to which to solve a system of linear ordinary differential equations? What is the best basis with respect to which to represent an experimental or numerical data set?
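For a real symmetric matrix, the dominant eigenpair can be estimated with the power iteration, one of the simplest approaches to the algebraic eigenproblem. A minimal sketch (illustrative only; the function name is ours):

```python
def power_iteration(A, iters=200):
    """Estimate the dominant eigenpair of a real symmetric matrix A
    (given as a list of row lists) by repeated matrix-vector products."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                                      # normalize
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(v[i] * Av[i] for i in range(n))                      # Rayleigh quotient
    return lam, v

A = [[2.0, 1.0], [1.0, 2.0]]   # symmetric; eigenvalues are 3 and 1
lam, v = power_iteration(A)
```

The iterate converges toward the eigenvector of largest-magnitude eigenvalue, and the Rayleigh quotient supplies the corresponding eigenvalue estimate.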
A central goal of scientists and engineers is obtaining solutions of the differential equations that govern their physical systems. This can be done numerically for large and/or complex systems using finite-difference methods, finite-element methods, or spectral methods. This chapter gives an introduction and the formal basis for these methods, with particular emphasis on finite-difference methods. Second-order partial differential equations are classified as elliptic, parabolic, or hyperbolic, and the numerical methods developed for such equations must be faithful to their mathematical properties.
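The building block of finite-difference methods is the replacement of derivatives by difference quotients. A brief sketch of the second-order central difference for a second derivative, checked against a function whose derivative is known:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Second-order central-difference approximation of f''(x):
    (f(x-h) - 2 f(x) + f(x+h)) / h^2, with truncation error O(h^2)."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h)

# For f = sin, the exact second derivative is -sin
approx = second_derivative(math.sin, 1.0)
exact = -math.sin(1.0)
```

Choosing `h` trades truncation error (which shrinks like h²) against roundoff error (which grows as h shrinks), a tension that recurs throughout finite-difference practice.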
Optimization and root finding are closely aligned techniques for determining the extrema and zeros, respectively, of a function. Newton's method is the workhorse of both types of algorithm for nonlinear functions, and the conjugate-gradient and GMRES methods are also covered. Optimization of linear, quadratic, and nonlinear functions is addressed with and without constraints, which may be equalities or inequalities. In the linear-programming case, emphasis is placed on the simplex method.
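Newton's method for root finding can be sketched in a few lines; near a simple root it converges quadratically. A minimal illustration (not the book's implementation):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The same iteration, applied to the gradient of an objective function, is the basis of Newton-type optimization: a zero of the gradient is a candidate extremum.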
Reduced-order modeling is an active area of research in which simplified models of experimental or numerical data are generated that remain faithful to the behavior of the underlying system. These methods are based on Galerkin projection, which is motivated by variational methods, or some other method of weighted residuals, and allow any governing differential equation to be projected onto an appropriate set of basis vectors or functions. These basis vectors or functions can be obtained using proper-orthogonal decomposition (POD) or one of its extensions or alternatives. Galerkin projection and POD are applied to continuous and discrete data sets.
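The core operation of Galerkin projection, computing the coefficients of a function with respect to an orthogonal basis, can be sketched in pure Python. Here we project f(x) = x on [0, π] onto the basis sin(nx) using midpoint quadrature (a toy example under our own assumptions, not the chapter's code; analytically the coefficients are 2(−1)ⁿ⁺¹/n):

```python
import math

def galerkin_coeffs(f, n_modes, n_quad=2000):
    """Project f on [0, pi] onto the orthogonal basis sin(n x):
    b_n = (2/pi) * integral_0^pi f(x) sin(n x) dx, by midpoint quadrature."""
    h = math.pi / n_quad
    coeffs = []
    for n in range(1, n_modes + 1):
        s = sum(f((k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
                for k in range(n_quad))
        coeffs.append(2.0 / math.pi * s * h)
    return coeffs

b = galerkin_coeffs(lambda x: x, 5)   # expect b ~ [2, -1, 2/3, -1/2, 2/5]
```

In a reduced-order model, the basis would come from POD of data rather than being prescribed, but the projection step is the same inner-product computation.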
The application of finite-difference methods to boundary-value problems is considered using the Poisson equation as a model problem. Direct and iterative methods are given that are effective for solving elliptic partial differential equations in multiple dimensions with various types of boundary conditions. Multigrid methods receive particular attention given their generality and efficiency. The treatment of nonlinear terms is illustrated using Picard and Newton linearization.
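A one-dimensional analogue of such iterative elliptic solvers is Gauss–Seidel iteration on the standard three-point stencil for u″ = f with homogeneous Dirichlet conditions. A minimal sketch (our own toy problem, chosen so the exact solution is sin(πx)):

```python
import math

def poisson_gauss_seidel(f, n=50, sweeps=5000):
    """Solve u'' = f on [0,1] with u(0) = u(1) = 0 by Gauss-Seidel sweeps
    over the second-order finite-difference stencil on n intervals."""
    h = 1.0 / n
    u = [0.0] * (n + 1)          # includes the two boundary points
    for _ in range(sweeps):
        for i in range(1, n):    # update in place, using latest neighbors
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f(i * h))
    return u

# f chosen so the exact solution is u(x) = sin(pi x)
f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
u = poisson_gauss_seidel(f)
```

Gauss–Seidel converges, but slowly as the grid is refined; this slow reduction of smooth error components is precisely what multigrid methods are designed to overcome.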
The eigenvalues and eigenfunctions of self-adjoint differential operators provide the basis functions with respect to which ordinary and partial differential equations can be solved. These methods are extensions of those used to solve linear systems of algebraic equations and ordinary differential equations. Eigenfunction expansions also provide the basis for advanced numerical methods, such as spectral methods, and data-reduction techniques, such as proper-orthogonal decomposition.
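A classic eigenfunction expansion is the series solution of the one-dimensional heat equation, where the eigenfunctions of the self-adjoint operator −d²/dx² with Dirichlet conditions are sin(nπx). A sketch for the initial condition u(x, 0) = x(1 − x), whose sine coefficients are 8/(nπ)³ for odd n (a standard textbook example, evaluated here numerically):

```python
import math

def heat_series(x, t, n_terms=200):
    """Eigenfunction-expansion solution of u_t = u_xx on [0,1] with
    u(0,t) = u(1,t) = 0 and u(x,0) = x(1-x).  Each sine mode decays
    with its eigenvalue: u = sum b_n exp(-(n pi)^2 t) sin(n pi x)."""
    total = 0.0
    for n in range(1, n_terms + 1, 2):         # even-n coefficients vanish
        b_n = 8.0 / (n * math.pi) ** 3
        total += b_n * math.exp(-(n * math.pi) ** 2 * t) * math.sin(n * math.pi * x)
    return total

u0 = heat_series(0.5, 0.0)   # should reproduce the initial condition: 0.25
```

Each mode evolves independently because the eigenfunctions diagonalize the operator, which is exactly the sense in which they form the "best basis" for the problem.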
Analysis of various data sets can be accomplished using techniques based on least-squares methods. For example, linear regression determines the best-fit line to data via a least-squares approach, as do polynomial regression and regression using other basis functions. Curve fitting determines the best-fit line or curve for a particular set of data, while interpolation determines a curve that passes through all of the data points. Polynomial and spline interpolation are discussed. State estimation is covered using techniques based on least-squares methods.
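Linear regression by least squares reduces to the normal equations, which for a straight-line fit y = a + bx have a closed-form solution. A minimal sketch with made-up data (for illustration only):

```python
def linear_fit(xs, ys):
    """Least-squares best-fit line y = a + b x via the 2x2 normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                           # intercept
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.1, 6.9]        # roughly y = 1 + 2x, with noise
a, b = linear_fit(xs, ys)
```

Replacing the monomials {1, x} with other basis functions changes only the entries of the normal equations, not the least-squares principle itself.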
Computational linear algebra builds on the methods in Part I for solving systems of linear algebraic equations and the algebraic eigenproblem, appropriate for small systems, to develop methods amenable to approximate computer solution of large systems. These include direct and iterative methods for solving systems of equations, such as LU decomposition and Gauss–Seidel iteration. A popular algorithm based on QR decomposition is described for solving large algebraic eigenproblems for the full spectrum of eigenpairs, and the Arnoldi method for a subset of eigenpairs of sparse matrices.
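LU decomposition factors A into lower- and upper-triangular parts so that A x = b reduces to two triangular solves. A minimal Doolittle-style sketch without pivoting (so it assumes nonzero pivots; illustrative only):

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U with unit
    diagonal on L.  Assumes no zero pivots arise."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Forward substitution L y = b, then back substitution U x = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
bvec = [10.0, 12.0]
L, U = lu_decompose(A)
x = lu_solve(L, U, bvec)     # exact solution is x = [1, 2]
```

The factorization cost is incurred once; subsequent right-hand sides reuse L and U through the two cheap triangular solves, which is why LU is the workhorse direct method.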
The application of finite-difference methods to initial-value problems, with emphasis on parabolic equations, is considered using the one- and two-dimensional unsteady diffusion equations as model problems. Single-step methods are introduced for ordinary differential equations, and more general explicit and implicit methods are articulated for partial differential equations. Numerical stability analysis is covered using the matrix method, the von Neumann method, and the modified-wavenumber method. These numerical methods are also applied to nonlinear convection problems. A brief introduction to numerical methods for hyperbolic equations is provided, and parallel computing is discussed.
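The simplest explicit scheme for the unsteady diffusion equation is forward-time, centered-space (FTCS), for which von Neumann analysis gives the stability bound r = α Δt/Δx² ≤ 1/2. A minimal sketch on our own toy problem (initial condition sin(πx), which decays smoothly to zero):

```python
import math

def ftcs_diffusion(u, alpha, dx, dt, steps):
    """Explicit FTCS scheme for u_t = alpha u_xx with fixed end values.
    Von Neumann analysis requires r = alpha dt / dx^2 <= 1/2 for stability."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "unstable: reduce dt or refine dx"
    u = list(u)
    for _ in range(steps):
        u = [u[0]] + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                      for i in range(1, len(u) - 1)] + [u[-1]]
    return u

n = 20
dx = 1.0 / n
u0 = [math.sin(math.pi * i * dx) for i in range(n + 1)]   # zero at both ends
u = ftcs_diffusion(u0, alpha=1.0, dx=dx, dt=0.4 * dx ** 2, steps=250)
```

With r = 0.4 the computed solution decays monotonically, closely tracking the exact factor e^(−π²t); raising r above 1/2 makes the highest-frequency grid mode grow without bound, exactly as the von Neumann analysis predicts.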
Least-squares methods provide the mathematical foundation for optimization of algebraic systems. They can be applied to overdetermined systems, having more equations than unknowns, or underdetermined systems, having fewer equations than unknowns. The optimization may involve constraints or be subject to a penalty function. Numerical methods based on least-squares optimization, namely the conjugate-gradient and GMRES methods, are discussed in detail and put into the context of other Krylov-based methods.
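For a symmetric positive-definite system, the conjugate-gradient method minimizes the associated quadratic over successively larger Krylov subspaces. A minimal pure-Python sketch (illustrative, not the chapter's implementation):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Conjugate-gradient method for A x = b, A symmetric positive definite.
    In exact arithmetic it converges in at most n iterations."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]   # initial residual
    p = list(r)                                       # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / dot(p, Ap)                   # optimal step along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is [1/11, 7/11]
```

GMRES generalizes the same Krylov-subspace idea to nonsymmetric matrices by minimizing the residual norm rather than the energy norm.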