In this appendix we review some essential concepts regarding finite dimensional vector spaces, their bases, norms, and inner products, which appear repeatedly throughout the text.
The Banach contraction mapping principle is used in several parts of the text, both in its version for Banach spaces and in the case of complete metric spaces. This appendix presents this result.
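As an illustration of the contraction mapping principle (not part of the text), a minimal Python sketch: for a contraction g, the iteration x_{k+1} = g(x_k) converges to the unique fixed point from any starting guess.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k); for a contraction g the Banach
    principle guarantees convergence to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# g(x) = cos(x) is a contraction on [0, 1]; its fixed point is the
# Dottie number, approximately 0.7390851332.
root = fixed_point(math.cos, 0.5)
```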
We prove that every matrix has a singular value decomposition (SVD), and explore conditions for its uniqueness. With the help of the SVD we prove several properties of matrices, and discuss low-rank approximation, including the Eckart-Young theorem.
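As a quick numerical illustration (not from the text), the truncated SVD realizes the best low-rank approximation, and the spectral-norm error equals the first discarded singular value, as the Eckart-Young theorem asserts:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Thin SVD: A = U @ diag(s) @ Vt, singular values in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Best rank-k approximation (Eckart-Young): truncate the SVD.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The spectral-norm error equals the first discarded singular value.
err = np.linalg.norm(A - A_k, 2)
```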
This chapter begins with the study of trigonometric interpolation and the discrete Fourier transform. As a first application, the numerical integration of periodic functions is discussed. More detailed topics, like the existence and uniqueness of trigonometric interpolants, as well as aliasing and the convergence of trigonometric interpolation, are discussed. An important practical tool, the fast Fourier transform (FFT), is then introduced. With this at hand the most rudimentary ideas of signal processing are presented.
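As an illustration of the discrete Fourier transform (not from the text), sampling a trigonometric polynomial at equispaced nodes and applying the FFT recovers its Fourier coefficients exactly:

```python
import numpy as np

# Sample f(x) = 1 + 2*cos(x) + 3*sin(2x) at n equispaced nodes on [0, 2*pi).
n = 8
x = 2 * np.pi * np.arange(n) / n
f = 1 + 2 * np.cos(x) + 3 * np.sin(2 * x)

# The FFT computes the discrete Fourier coefficients in O(n log n);
# c[k] is the coefficient of exp(i*k*x) (indices past n/2 wrap to
# negative frequencies).
c = np.fft.fft(f) / n

# The equispaced average c[0] also gives the spectrally accurate
# quadrature for periodic functions: integral over [0, 2*pi] = 2*pi*c[0].
```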
In this chapter we present and analyze the stability of Galerkin methods for ordinary differential equations. We present the discontinuous Galerkin method for problems with coercive operators, and discuss its stability and convergence. Then, for problems with monotone operators, the continuous Petrov–Galerkin method is introduced and analyzed. We show how, in simple scenarios, these methods reduce to ones that have been discussed in previous chapters.
We study the solution of square systems of linear equations with a nonsingular matrix, and provide sufficient conditions for classical Gaussian elimination with pivoting to proceed without failure. We introduce the LU factorization of a matrix, and several of its variants, like the Cholesky factorization.
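As an illustration (not from the text), a compact NumPy sketch of LU factorization with partial pivoting, producing P A = L U with L unit lower triangular and U upper triangular:

```python
import numpy as np

def lu_pivot(A):
    """LU factorization with partial pivoting: P A = L U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        # Pivot: bring the largest entry in column k to the diagonal.
        p = k + np.argmax(np.abs(U[k:, k]))
        U[[k, p]] = U[[p, k]]
        P[[k, p]] = P[[p, k]]
        L[[k, p], :k] = L[[p, k], :k]
        # Eliminate below the pivot.
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
P, L, U = lu_pivot(A)
```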
We motivate the introduction of multistep schemes by recalling interpolatory quadrature rules. Then, these are presented in the general setting. Their consistency, and conditions for it (the method of C's and the log method), are discussed. The construction and consistency of Adams-Bashforth, Adams-Moulton, and backward differentiation formulas are then presented. Then, we turn our attention to the delicate issue of stability for multistep schemes: the notions of zero stability, root condition, and homogeneous zero stability are introduced and their equivalence discussed. This is achieved by developing the theory of solutions to linear difference equations. The celebrated Dahlquist equivalence theorem, stating that a linear multistep scheme is convergent if and only if it is consistent and stable, is then presented. A discussion of Dahlquist's first barrier concludes the chapter.
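As an illustration (not from the text), the two-step Adams-Bashforth scheme applied to the model problem y' = -y, with the first step bootstrapped by forward Euler:

```python
import numpy as np

def ab2(f, y0, t0, T, n):
    """Two-step Adams-Bashforth:
    y_{k+2} = y_{k+1} + h*(3/2 f_{k+1} - 1/2 f_k).
    The missing starting value is supplied by one forward Euler step."""
    h = (T - t0) / n
    t = t0 + h * np.arange(n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    y[1] = y[0] + h * f(t[0], y[0])   # starter step
    for k in range(n - 1):
        y[k + 2] = y[k + 1] + h * (1.5 * f(t[k + 1], y[k + 1])
                                   - 0.5 * f(t[k], y[k]))
    return t, y

# Model problem y' = -y, y(0) = 1, exact solution exp(-t).
t, y = ab2(lambda t, y: -y, 1.0, 0.0, 1.0, 200)
```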
The topic of this chapter is linear stability of schemes for ODEs. The notions of stiffness, linear stability domain, and A-stability are introduced. We discuss the A-stability of Runge-Kutta schemes via the amplification factor. The notions of A-acceptability and Padé approximations of the exponential, and the Hairer-Wanner-Nørsett theorem, are then presented. This allows us to show that all Gauss-Legendre-Runge-Kutta schemes are A-stable. For multistep schemes we present linear stability criteria and Dahlquist's second barrier theorem. The boundary locus method concludes the chapter.
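As an illustration (not from the text), sampling amplification factors over the left half-plane suggests which schemes are A-stable: a scheme is A-stable when |R(z)| <= 1 for all z with Re(z) <= 0.

```python
import numpy as np

# Amplification factors (stability functions) for the model problem
# y' = lambda*y, with z = h*lambda.
R_fe = lambda z: 1 + z                      # forward Euler
R_be = lambda z: 1 / (1 - z)                # backward Euler
R_tr = lambda z: (1 + z / 2) / (1 - z / 2)  # trapezoidal, the (1,1) Pade
                                            # approximation of exp(z)

# Sample the open left half-plane.
x, y = np.meshgrid(np.linspace(-5, -1e-3, 40), np.linspace(-5, 5, 40))
z = x + 1j * y

fe_stable = np.all(np.abs(R_fe(z)) <= 1)  # fails: bounded stability domain
be_stable = np.all(np.abs(R_be(z)) <= 1)  # holds: A-stable
tr_stable = np.all(np.abs(R_tr(z)) <= 1)  # holds: A-stable
```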
We present several methods for approximating the spectrum of a matrix. We start by providing coarse estimates using so-called Gershgorin disks. The stability of the spectrum, via the Bauer-Fike theorem, is then analyzed. For Hermitian matrices we then present and analyze the power iteration method and its variants. The reduction to Hessenberg form, and the QR algorithm, are then presented and analyzed. The chapter then concludes with a discussion of the Golub-Kahan algorithm to compute the SVD.
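As an illustration (not from the text), the power iteration on a small symmetric matrix, with the Rayleigh quotient as the eigenvalue estimate:

```python
import numpy as np

def power_iteration(A, num_iter=500):
    """Power iteration: for a Hermitian matrix with a simple
    largest-magnitude eigenvalue, the normalized iterates converge
    to the dominant eigenvector."""
    rng = np.random.default_rng(1)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
    # Rayleigh quotient as the eigenvalue estimate.
    return v @ A @ v, v

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # eigenvalues (7 +/- sqrt(5)) / 2
lam, v = power_iteration(A)
```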
The best polynomial approximation in a weighted least squares sense is studied in this chapter. The essential notion of orthogonal polynomials is introduced, and their properties are analyzed. These are, first of all, used to show existence, uniqueness, and convergence of a least squares best approximation. This motivates the introduction of generalized Fourier series. The issue of uniform convergence of least squares approximations for smooth functions is then studied.
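As an illustration (not from the text), the least squares best approximation of exp(x) on [-1, 1] in the span of the Legendre polynomials, which are orthogonal for the constant weight; the best-approximation coefficients are the generalized Fourier coefficients c_k = <f, P_k> / <P_k, P_k>, here computed by Gauss quadrature:

```python
import numpy as np

# Gauss-Legendre quadrature nodes and weights on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(20)
f = np.exp(nodes)

# Generalized Fourier coefficients with respect to P_0, ..., P_3.
coeffs = []
for k in range(4):
    Pk = np.polynomial.legendre.Legendre.basis(k)(nodes)
    coeffs.append((weights * f * Pk).sum() / (weights * Pk * Pk).sum())

# The degree-3 least squares best approximation of exp(x).
approx = np.polynomial.legendre.Legendre(coeffs)
xs = np.linspace(-1, 1, 201)
max_err = np.max(np.abs(approx(xs) - np.exp(xs)))
```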
This chapter is a collection of facts, ideas, and techniques regarding the analysis of boundary value, initial, and initial boundary value problems for partial differential equations. We begin by deriving some of the representative equations of mathematical physics, which then give rise to the classification of linear, second order, constant coefficient partial differential equations into elliptic, parabolic, and hyperbolic equations. For each one of these classes we then discuss the main ideas behind the problems associated with them, and the existence of solutions, both classical and weak.
We present and analyze the simplest single step schemes for the approximation of the solution to an initial value problem: forward and backward Euler, trapezoidal and midpoint rules, and Taylor's method. We discuss the notions of consistency error and convergence for these schemes.
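As an illustration (not from the text), forward Euler on the model problem y' = -y, with a halving of the step size confirming first order convergence:

```python
import numpy as np

def forward_euler(f, y0, T, n):
    """Forward Euler: y_{k+1} = y_k + h * f(t_k, y_k)."""
    h = T / n
    y = y0
    for k in range(n):
        y = y + h * f(k * h, y)
    return y

# y' = -y, y(0) = 1; exact value y(1) = exp(-1).
exact = np.exp(-1.0)
err_coarse = abs(forward_euler(lambda t, y: -y, 1.0, 1.0, 100) - exact)
err_fine = abs(forward_euler(lambda t, y: -y, 1.0, 1.0, 200) - exact)
ratio = err_coarse / err_fine   # approximately 2 for a first-order scheme
```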
We introduce the spectral radius of a matrix, and study how it relates to induced matrix norms. We prove Gelfand’s theorem on the spectral radius. We introduce the condition number of a matrix, and use it to provide error estimates for the solution of a linear system under perturbations.
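As an illustration (not from the text), the spectral radius never exceeds an induced norm, and the condition number bounds the relative change in the solution of a linear system under a perturbation of the right-hand side:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular, hence ill conditioned
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)       # solution is [1, 1]

# Spectral radius never exceeds any induced norm: rho(A) <= ||A||.
rho = max(abs(np.linalg.eigvals(A)))
nrm = np.linalg.norm(A, 2)

# The condition number governs sensitivity: a relative perturbation of
# size eps in b can change x by up to kappa(A) * eps in relative terms.
kappa = np.linalg.cond(A, 2)
db = np.array([0.0, 1e-6])
x_pert = np.linalg.solve(A, b + db)
rel_change = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
rel_pert = np.linalg.norm(db) / np.linalg.norm(b)
```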