The application of finite-difference methods to boundary-value problems is considered using the Poisson equation as a model problem. Direct and iterative methods are given that are effective for solving elliptic partial differential equations in multiple dimensions with various types of boundary conditions. Multigrid methods are given particular attention owing to their generality and efficiency. Treatment of nonlinear terms is illustrated using Picard and Newton linearization.
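As a minimal sketch of the kind of method this chapter treats (the specific problem and parameters below are illustrative, not taken from the text), consider the one-dimensional Poisson problem u''(x) = f(x) with homogeneous Dirichlet conditions, discretized with second-order central differences and solved by Jacobi iteration:

```python
import numpy as np

# Illustrative sketch: solve u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with second-order central differences and Jacobi iteration.
n = 49                               # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = -np.pi**2 * np.sin(np.pi * x)    # chosen so the exact solution is sin(pi x)

u = np.zeros(n)
for _ in range(20000):               # Jacobi sweeps to convergence
    u_new = np.empty_like(u)
    u_new[0] = 0.5 * (u[1] - h**2 * f[0])            # boundary value 0 on the left
    u_new[-1] = 0.5 * (u[-2] - h**2 * f[-1])         # boundary value 0 on the right
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] - h**2 * f[1:-1])
    u = u_new

print(np.max(np.abs(u - np.sin(np.pi * x))))         # O(h^2) discretization error
```

Jacobi converges slowly on fine grids, which is precisely the motivation for the multigrid methods the chapter emphasizes.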
The eigenvalues and eigenfunctions of self-adjoint differential operators provide the basis functions with respect to which ordinary and partial differential equations can be solved. These methods are extensions of those used to solve linear systems of algebraic equations and ordinary differential equations. Eigenfunction expansions also provide the basis for advanced numerical methods, such as spectral methods, and data-reduction techniques, such as proper-orthogonal decomposition.
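A small illustration of the idea (the function and operator below are assumptions for the example, not from the text): the eigenfunctions of -d²/dx² on (0, 1) with Dirichlet conditions are sin(kπx), and a function satisfying the boundary conditions can be expanded in them:

```python
import numpy as np

# Illustrative sketch: expand f(x) = x(1 - x) in the eigenfunctions
# sin(k*pi*x) of the self-adjoint operator -d^2/dx^2 with Dirichlet conditions.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
f = x * (1.0 - x)

approx = np.zeros_like(x)
for k in range(1, 40):
    phi = np.sin(k * np.pi * x)          # k-th eigenfunction
    c = 2.0 * np.sum(f * phi) * dx       # coefficient <f, phi>/<phi, phi>
    approx += c * phi

print(np.max(np.abs(approx - f)))        # truncation error of the expansion
```

The same projection-onto-eigenfunctions step is the core of the spectral and proper-orthogonal-decomposition methods mentioned above.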
Analysis of various data sets can be accomplished using techniques based on least-squares methods. For example, linear regression determines the best-fit line to the data via a least-squares approach. The same is true for polynomial regression and for regression using other basis functions. Curve fitting determines the best-fit line or curve for a particular set of data, while interpolation determines a curve that passes through all of the data points. Polynomial and spline interpolation are discussed. State estimation is covered using techniques based on least-squares methods.
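The distinction between fitting and interpolation can be sketched in a few lines (the data below are made up for illustration):

```python
import numpy as np

# Illustrative sketch: least-squares line fit vs. interpolation.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])      # roughly linear data with noise

# Linear regression: least-squares solution over the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# Interpolation, by contrast, passes through every data point exactly.
y_mid = np.interp(1.5, x, y)                  # piecewise-linear interpolant

print(slope, intercept, y_mid)
```

The regression line smooths the noise, while the interpolant reproduces each data point, noise included; which behavior is appropriate depends on the data.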
Computational linear algebra builds on the methods in Part I for solving systems of linear algebraic equations and the algebraic eigenproblem appropriate for small systems to methods amenable to approximate computer solution of large systems. These include direct and iterative methods for solving systems of equations, such as LU decomposition and Gauss-Seidel iteration. A popular algorithm based on QR decomposition is described for solving large algebraic eigenproblems for the full spectrum of eigenpairs, as is the Arnoldi method for obtaining a subset of eigenpairs of sparse matrices.
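As an illustrative sketch of one of the iterative methods named above (the matrix is an assumption chosen to guarantee convergence), Gauss-Seidel sweeps through the unknowns using each new value as soon as it is computed:

```python
import numpy as np

# Illustrative sketch: Gauss-Seidel iteration for Ax = b on a small
# diagonally dominant system (a sufficient condition for convergence).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])

x = np.zeros(3)
for _ in range(50):                  # outer sweeps
    for i in range(3):               # update each unknown in turn,
        s = A[i] @ x - A[i, i] * x[i]    # using the newest available values
        x[i] = (b[i] - s) / A[i, i]

print(x, np.linalg.solve(A, b))      # iterate vs. direct solution
```

A direct method such as LU decomposition would instead factor A once and solve exactly; the iterative route pays off when A is large and sparse.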
The application of finite-difference methods to initial-value problems, with emphasis on parabolic equations, is considered using the one- and two-dimensional unsteady diffusion equations as model problems. Single-step methods are introduced for ordinary differential equations, and more general explicit and implicit methods are articulated for partial differential equations. Numerical stability analysis is covered using the matrix method, the von Neumann method, and the modified-wavenumber method. These numerical methods are also applied to nonlinear convection problems. A brief introduction to numerical methods for hyperbolic equations is provided, and parallel computing is discussed.
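A minimal sketch of an explicit scheme of this kind (the problem and parameters are illustrative assumptions): the forward-time, centered-space scheme for the 1D diffusion equation, where von Neumann analysis restricts the ratio r = α·Δt/Δx² to at most 1/2 for stability:

```python
import numpy as np

# Illustrative sketch: explicit (FTCS) scheme for u_t = alpha * u_xx on (0, 1)
# with u = 0 at both ends; von Neumann analysis requires r <= 1/2.
alpha = 1.0
nx = 51
dx = 1.0 / (nx - 1)
r = 0.4                              # stable choice: r <= 0.5
dt = r * dx**2 / alpha

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                # exact solution: exp(-pi^2 t) sin(pi x)

t = 0.0
while t < 0.1:
    u[1:-1] += r * (u[:-2] - 2.0 * u[1:-1] + u[2:])   # one explicit step
    t += dt

print(np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x))))
```

Raising r above 1/2 makes the error grow without bound, which is exactly the instability the von Neumann analysis predicts; implicit methods remove this time-step restriction at the cost of a linear solve per step.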
Least-squares methods provide the mathematical foundation for optimization of algebraic systems. They can be applied to overdetermined systems, having more equations than unknowns, or underdetermined systems, having fewer equations than unknowns. The optimization may involve constraints or be subject to a penalty function. Numerical methods based on least-squares optimization, namely the conjugate-gradient and GMRES methods, are discussed in detail and put into the context of other Krylov-based methods.
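As an illustrative sketch of the first of these methods (the small matrix below is an assumption for the example), the conjugate-gradient iteration for a symmetric positive definite system converges in at most n steps in exact arithmetic:

```python
import numpy as np

# Illustrative sketch: conjugate-gradient iteration for Ax = b with A
# symmetric positive definite; each step minimizes the A-norm of the
# error over a growing Krylov subspace.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - A @ x                        # initial residual
p = r.copy()                         # initial search direction
for _ in range(2):                   # exact in at most n = 2 steps
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)       # optimal step length along p
    x += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p             # next A-conjugate direction
    r = r_new

print(x)                             # matches np.linalg.solve(A, b)
```

GMRES plays the analogous role for nonsymmetric systems, minimizing the residual over the same kind of Krylov subspace.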
Vectors and matrices provide a mathematical framework for formulating and solving linear systems of algebraic equations, which have applications in all areas of engineering and the sciences. Solution methods include Gaussian elimination and the matrix inverse. Vectors play a key role in representing various quantities in mechanics as well as providing the bases for vector spaces. Linear transformations allow one to alter such vector spaces to ones that are more convenient for the task at hand.
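Gaussian elimination can be sketched in a few lines (the system below is an illustrative assumption, and the sketch omits pivoting, so it assumes nonzero pivots as written):

```python
import numpy as np

# Illustrative sketch: Gaussian elimination with back substitution for Ax = b
# (no pivoting, so nonzero pivots are assumed).
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

M = np.hstack([A, b[:, None]])       # augmented matrix [A | b]
n = len(b)
for k in range(n - 1):               # forward elimination
    for i in range(k + 1, n):
        m = M[i, k] / M[k, k]        # multiplier for row i
        M[i, k:] -= m * M[k, k:]     # zero out entry below the pivot

x = np.zeros(n)
for i in range(n - 1, -1, -1):       # back substitution
    x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

print(x)                             # solution of this system: (2, 3, -1)
```

Partial pivoting (row interchanges) is needed in practice to keep the elimination stable; the matrix-inverse route is equivalent in principle but more expensive.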
Vector and matrix calculus provides a powerful set of tools for analyzing and manipulating scalars, vectors, and tensors in continuum mechanics. This includes transformations between coordinate systems and provides the foundation for optimization methods.
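A small illustration of a coordinate-system transformation (the vector, tensor, and angle are assumptions for the example): components change under a rotation Q as v' = Qv and T' = QTQᵀ, while invariants such as length and trace do not:

```python
import numpy as np

# Illustrative sketch: transforming a vector and a second-order tensor
# between coordinate frames with a rotation matrix Q.
theta = np.pi / 6
Q = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v = np.array([1.0, 0.0])
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])

v_prime = Q @ v                      # vector components in the rotated frame
T_prime = Q @ T @ Q.T                # tensor components in the rotated frame

# Invariants are unchanged: length of v and trace of T survive the rotation.
print(np.linalg.norm(v_prime), np.trace(T_prime))
```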
Address vector and matrix methods necessary in numerical methods and optimization of linear systems in engineering with this unified text. Treats the mathematical models that describe and predict the evolution of our processes and systems, and the numerical methods required to obtain approximate solutions. Explores the dynamical systems theory used to describe and characterize system behaviour, alongside the techniques used to optimize their performance. Integrates and unifies matrix and eigenfunction methods with their applications in numerical and optimization methods. Consolidating, generalizing, and unifying these topics into a single coherent subject, this practical resource is suitable for advanced undergraduate students and graduate students in engineering, physical sciences, and applied mathematics.