FD approximations of increasing orders of accuracy require correspondingly large stencil sizes. The limiting case of global stencils has numerous important applications, which have been extensively treated in the literature, under the acronym of pseudospectral (PS) methods. While the accuracy in certain cases can be very high, geometric flexibility typically becomes severely compromised. Our present summary aims mainly to provide some general insights into how PS methods relate to FD methods (rather than to describe PS implementation technicalities, such as, for nonperiodic problems, the necessity to cluster nodes very strongly toward the interval end points).
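The strong endpoint clustering mentioned above can be illustrated with a minimal sketch. It assumes the standard Chebyshev–Gauss–Lobatto points (one common choice for nonperiodic PS methods; the book may use a different node set):

```python
import math

def chebyshev_nodes(n):
    """Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/n), j = 0..n, on [-1, 1]."""
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

nodes = sorted(chebyshev_nodes(16))
# The spacing near an end point is far smaller than near the center,
# illustrating the boundary clustering required by nonperiodic PS methods.
edge_gap = nodes[1] - nodes[0]
center_gap = nodes[9] - nodes[8]
print(edge_gap, center_gap)
```

The ratio between the two gaps grows roughly like the node count, which is why nonperiodic PS methods must cluster nodes so strongly toward the interval ends.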
The most common reason for approximating derivatives by finite differences is to apply them in solving ordinary and partial differential equations – ODEs and PDEs, respectively. In the case of ODEs, many of the well-established (and seemingly quite different) procedures are closely related to FD approximations – often more closely than is apparent from how these methods are customarily described. Together with some basic convergence and stability theory, this chapter surveys a variety of ODE solvers, with emphasis on their FD connection and on the computational advantages that high-order accurate approximations can provide.
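The FD connection can be made concrete with the simplest case: replacing y'(t) by the one-sided difference (y(t+h) − y(t))/h turns the ODE y' = f(t, y) into the forward Euler method. A minimal sketch (the specific test problem y' = −y is chosen here for illustration):

```python
import math

def euler(f, y0, t0, t1, n):
    """Forward Euler: replace y'(t) by the one-sided FD (y(t+h) - y(t)) / h,
    then step from t0 to t1 in n steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# y' = -y, y(0) = 1  =>  y(1) = e^{-1}
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(approx, math.exp(-1))
```

Since the underlying FD formula is first-order accurate, halving h halves the error – one motivation for the higher-order schemes this chapter emphasizes.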
Polynomial interpolation is one of the most fundamental computational procedures, underlying a large number of more specialized numerical methods. With N distinct nodes in 1-D together with associated data values, there is a unique interpolating polynomial of degree at most N-1. In this brief appendix, we summarize some different ways to arrive at and then to represent this polynomial. There are also different error formulas available (measuring the difference between a smooth function and its polynomial interpolant). One particular way to formulate this difference, in the form of a complex plane contour integral, provides the key to understanding many features of polynomial interpolants (such as the Runge phenomenon, often causing violent oscillations near interval end points).
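The Runge phenomenon is easy to reproduce numerically. A minimal sketch, using direct Lagrange evaluation and Runge's classical example 1/(1 + 25x²) on equispaced nodes:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the unique degree <= N-1 interpolant through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

# Runge's function on 13 equispaced nodes: the interpolant oscillates
# violently near the interval end points, even though |f| <= 1.
f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
n = 12
xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
ys = [f(x) for x in xs]
grid = [-1.0 + 2.0 * k / 200 for k in range(201)]
max_err = max(abs(lagrange_eval(xs, ys, x) - f(x)) for x in grid)
print(max_err)  # exceeds 1 already at this modest node count
```

Increasing n makes the maximal error grow exponentially – the behavior that the contour-integral error formula explains.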
Although a large number of approaches have been employed for numerical solutions of PDEs, quite straightforward FD-based approximations remain of great utility and importance. Just as for ODEs, high orders of accuracy often significantly increase computational efficiency. This chapter describes FD approaches for a variety of different types of PDEs, together with both basic theory and practical considerations. For many commonly occurring equations (e.g., Laplace, Poisson, and Helmholtz equations), their analytic character permits more effective FD approximations than what one obtains by simply approximating each derivative separately.
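As a minimal illustration of FD-based PDE solving, the following sketch applies the standard 5-point stencil for the Laplace equation on the unit square, with Jacobi iteration as a (deliberately simple) solver; the harmonic boundary data u = xy is chosen because the stencil reproduces it exactly:

```python
def jacobi_laplace(n, bc, iters):
    """Solve u_xx + u_yy = 0 on an n x n interior grid in the unit square,
    using the 5-point stencil (each value = average of 4 neighbors)
    and plain Jacobi iteration."""
    h = 1.0 / (n + 1)
    u = [[bc(i * h, j * h) if (i in (0, n + 1) or j in (0, n + 1)) else 0.0
          for j in range(n + 2)] for i in range(n + 2)]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                new[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
        u = new
    return u, h

# Boundary data u = x*y is harmonic and lies in the stencil's null space
# errors, so the iteration converges to the analytic solution.
u, h = jacobi_laplace(8, lambda x, y: x * y, 500)
err = max(abs(u[i][j] - (i * h) * (j * h))
          for i in range(len(u)) for j in range(len(u)))
print(err)
```

Jacobi is used here only for brevity; the chapter's point about exploiting the analytic character of the equation concerns the choice of stencil, not the iterative solver.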
A spline is a piecewise polynomial function, commonly used to interpolate or approximate one-dimensional data. If the polynomial pieces are of degree p, only the pth derivative may be discontinuous at the nodes. While polynomial interpolation can be highly unstable, cubic splines in particular often provide a very attractive combination of smoothness and absence of spurious oscillations. This appendix surveys a number of variations and features of splines. While generalizations of splines to Cartesian grids in 2-D and 3-D are quite straightforward, radial basis functions (RBFs, cf. Chapter 5) can be seen as a very powerful and flexible generalization of splines to arbitrarily scattered nodes in any number of dimensions.
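A minimal sketch of the natural cubic spline (one of several possible end conditions) on equispaced nodes, solving the standard tridiagonal system for the second derivatives M_i and returning a C² evaluator:

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline on equispaced nodes xs: M_0 = M_n = 0, and
    M_{i-1} + 4 M_i + M_{i+1} = 6 (y_{i-1} - 2 y_i + y_{i+1}) / h^2
    for the interior second derivatives, solved by the Thomas algorithm."""
    n = len(xs) - 1
    h = xs[1] - xs[0]
    rhs = [6.0 * (ys[i-1] - 2.0 * ys[i] + ys[i+1]) / h**2 for i in range(1, n)]
    b = [4.0] * (n - 1)
    for i in range(1, n - 1):          # forward elimination
        m = 1.0 / b[i-1]
        b[i] -= m
        rhs[i] -= m * rhs[i-1]
    M = [0.0] * (n + 1)
    if n > 1:                          # back substitution
        M[n-1] = rhs[-1] / b[-1]
        for i in range(n - 2, 0, -1):
            M[i] = (rhs[i-1] - M[i+1]) / b[i-1]
    def ev(x):
        i = min(max(int((x - xs[0]) / h), 0), n - 1)
        t = x - xs[i]
        return (ys[i] + ((ys[i+1] - ys[i]) / h - h / 6.0 * (2.0 * M[i] + M[i+1])) * t
                + M[i] / 2.0 * t * t + (M[i+1] - M[i]) / (6.0 * h) * t**3)
    return ev

import math
xs = [0.25 * i for i in range(9)]      # nodes on [0, 2]
s = natural_cubic_spline(xs, [math.sin(x) for x in xs])
print(abs(s(1.1) - math.sin(1.1)))     # small interior error, no oscillations
```

With only cubic pieces, the interpolant stays smooth and free of the wild oscillations that a single degree-8 polynomial through the same data could exhibit near the ends.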
Although extrapolation is intrinsically a less stable task than interpolation, it can nevertheless in many cases be highly effective. In the context of FD methods, a common situation is to have numerical results for a sequence of step sizes h approaching h = 0, together with theoretical knowledge that the computed solution (pointwise) possesses a convergent Taylor expansion (with unknown coefficients) around h = 0. Extrapolation down to h = 0 is then well posed and is known as Richardson extrapolation. Deferred correction (described in Chapter 3) is another option that can sometimes provide several additional orders of accuracy from a lower-order scheme. The two methods just mentioned can be characterized as linear. In other applications, such as accelerating slowly converging iterations and evaluations of infinite sums, nonlinear techniques are often more effective. Two such methods are briefly described here: Aitken extrapolation and conversion of truncated Taylor expansions to Padé rational form. Such methods can in certain contexts provide spectacularly effective extrapolations (although strict error analysis often is difficult or unavailable).
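Both flavors fit in a few lines. A minimal sketch of one Richardson step (linear) and of Aitken's Δ² formula (nonlinear); the test problems are chosen here for illustration:

```python
import math

def richardson(F, h, k=2.0, p=1):
    """One Richardson step: if F(h) = L + c h^p + O(h^{p+1}), combining
    F(h) and F(h/k) cancels the leading error term."""
    return (k**p * F(h / k) - F(h)) / (k**p - 1.0)

def aitken(s0, s1, s2):
    """Aitken's Delta^2 formula: nonlinear acceleration of a linearly
    convergent sequence; exact when the error decays geometrically."""
    return s2 - (s2 - s1) ** 2 / ((s2 - s1) - (s1 - s0))

# Linear example: the one-sided difference of exp at 0 has an O(h) error.
F = lambda h: (math.exp(h) - 1.0) / h
plain, extra = F(0.1), richardson(F, 0.1, k=2.0, p=1)
print(abs(plain - 1.0), abs(extra - 1.0))

# Nonlinear example: s_n = 1 + 2^{-n}; Aitken recovers the limit 1 exactly.
print(aitken(2.0, 1.5, 1.25))
```

Repeating the Richardson step on successively refined results knocks out one error term per level, which is the basis of Romberg-type schemes.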
FD approximations are easy to work with and are in some sense optimal in representing functions locally around a single point. However, in more than one dimension, approximations based on polynomial interpolants encounter severe difficulties if the node points are not regularly placed (grid based). Additional difficulties often arise at boundaries, which (above 1-D) may be irregularly shaped, and also from mixtures of scales across a computational domain that may require spatially variable resolution. It transpires that radial basis functions (RBFs) can favorably replace (or supplement) polynomials in such situations. Their use in creating spatially local and highly effective mesh-free FD-type approximations is quite recent.
The history of FD approximations goes back even further than that of calculus. The classical definition of a derivative is an example of a very simple FD formula. Although many basic FD properties follow quite immediately from Taylor expansions, numerous additional perspectives have proven very helpful both for deriving a wide range of FD formulas, and for understanding their different features (such as their accuracy near boundaries vs. in domain interiors). This chapter focuses on FD approximations on equispaced grids in one dimension, generalizing quite directly to Cartesian grids in higher dimensions. Further generalizations to mesh-free node layouts in multiple dimensions are discussed in Chapter 5.
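The classical derivative definition and its simplest symmetric refinement already show the accuracy hierarchy that Taylor expansions predict. A minimal sketch (test function sin at x = 1, chosen here for illustration):

```python
import math

f, x0 = math.sin, 1.0
exact = math.cos(x0)

def forward(h):
    return (f(x0 + h) - f(x0)) / h            # O(h): the classical definition

def centered(h):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)  # O(h^2): symmetric stencil

for h in (0.1, 0.05):
    print(h, abs(forward(h) - exact), abs(centered(h) - exact))
```

Halving h halves the one-sided error but quarters the centered one, matching the leading Taylor terms h f''/2 and h² f'''/6.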
The two main types of node sets when solving PDEs are grid based and mesh free. Grid-based discretizations were traditionally the only option in FD contexts. In contrast, mesh-free methods do not require any direct node-to-node connections. The nodes can be placed irregularly, thereby offering simple opportunities for gradual local refinement in critical spatial areas. Finite element methods are in this sense not mesh free but rather based on irregular meshes (typically triangular or tetrahedral), as they require specific node-to-node connections in forming their elements. In the context of solving PDEs with RBF-FD methods, random and Halton node sets are excessively irregular, while quasi-uniform node sets typically give near-optimal accuracies. Several methods are available for creating such node sets, allowing for problem-specific node density variations over different regions. Effective methods are also available for subsampling such node sets, as needed for scattered-node counterparts to multigrid methods for solving elliptic-type PDEs.
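For reference, the Halton construction mentioned above is very short: each coordinate is a van der Corput sequence (digit reversal in a prime base). A minimal sketch; note the text's caveat that such sets are generally too irregular for RBF-FD work without further smoothing:

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base:
    reflect the base-`base` digits of i about the radix point."""
    x, f = 0.0, 1.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

# 2-D Halton nodes in the unit square: pair the base-2 and base-3 sequences.
nodes = [(halton(i, 2), halton(i, 3)) for i in range(1, 101)]
print(nodes[:3])
```

Quasi-uniform node sets, by contrast, are typically produced by repulsion- or advancing-front-type algorithms that equalize nearest-neighbor distances while honoring a prescribed density function.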
Lagrange multipliers provide an important tool for both analytic and numeric constrained optimization. To provide background and notation, we first consider the unconstrained case. Turning then to the constrained case, we note how this provides a very useful tool for solving (in a least squares sense) overdetermined linear systems for which certain constraint equations are required to be satisfied exactly. This type of problem arises frequently in applications, one recent example being the implementation of boundary conditions for RBF-FD-based elliptic PDE solvers.
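The mechanism can be seen in the smallest possible instance: minimizing ||x||² subject to a single linear constraint a·x = b. A minimal sketch (the helper name is hypothetical; the general overdetermined case leads to a larger KKT system of the same structure):

```python
def constrained_min_norm(a, b):
    """Minimize ||x||^2 subject to a . x = b.  Setting the gradient of the
    Lagrangian L = x.x - lam * (a.x - b) to zero gives x = (lam/2) a;
    enforcing the constraint then yields x = a * b / (a.a)."""
    aa = sum(ai * ai for ai in a)
    return [ai * b / aa for ai in a]

x = constrained_min_norm([1.0, 1.0], 1.0)
print(x)  # [0.5, 0.5]: the closest point to the origin on x1 + x2 = 1
```

In the mixed least-squares/exact-constraint setting described above, the same stationarity argument produces a block (KKT) linear system in which the multipliers enforce the constraints exactly while the remaining equations are matched in the least squares sense.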
Measurable physical quantities do not directly involve complex numbers (with possible exceptions in quantum physics). However, since most of the standard and special functions in the applied sciences are analytic functions, both mathematical analysis and computational procedures can benefit greatly from exploiting this feature. While such mathematical tools have seen much use during the last couple of centuries (residue calculus, methods for asymptotic expansions, etc.), it is only recently that FD methods specialized for analytic functions in the complex plane have been recognized as remarkably effective. Since analytic functions (and, separately, their real and imaginary parts) in singularity-free regions satisfy Laplace’s equation, FD schemes for such applications need only be accurate for the very small subset of functions of two variables that satisfy this equation.
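A simple illustration of how analyticity pays off computationally (not the book's specialized schemes, just a classical building block): derivatives of an analytic function can be read off from samples on a circle via Cauchy's integral formula, with the periodic trapezoidal rule converging spectrally fast. A minimal sketch, assuming f is analytic on and inside |z| = r:

```python
import cmath, math

def deriv_at_zero(f, n, r=0.5, m=32):
    """n-th derivative of an analytic f at 0 via Cauchy's integral formula
    f^(n)(0) = n!/(2 pi i) * integral of f(z)/z^{n+1} dz over |z| = r,
    discretized by the trapezoidal rule at m equispaced angles."""
    s = 0.0 + 0.0j
    for k in range(m):
        th = 2.0 * math.pi * k / m
        s += f(r * cmath.exp(1j * th)) * cmath.exp(-1j * n * th)
    return math.factorial(n) * s / (m * r**n)

d3 = deriv_at_zero(cmath.exp, 3)
print(d3)  # the third derivative of e^z at 0 is 1
```

No differencing of nearby values occurs, so there is none of the catastrophic cancellation that plagues high-order real-line FD formulas at small h.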
The theme of this book is high-accuracy FD methods. Only for pseudospectral methods (Chapter 2) was the accuracy increased to its infinite order limit. Some compromises between order of accuracy and other features (such as handling of boundaries, numerical stability, etc.) are usually needed. This appendix focuses on cases where high orders of accuracy are readily achievable, but some of these orders can favorably be exchanged for the introduction of free parameters which then can be utilized for alternative purposes. Examples include increasing the wave number range for an FD scheme, enhanced Gregory quadrature, and accelerating ODE solvers for wave equations when using multicore processors.
Gaussian quadrature can be very effective on smooth data that is available at highly specific node locations. A more common situation is that data is equispaced (if not created explicitly for the purpose of such quadrature). The most effective quadrature methods then available relate closely to FD approximations. A particularly noteworthy method was introduced by Gregory as early as 1670 (predating the descriptions of calculus by Leibniz and Newton). In cases when the function to be integrated happens to be analytic, complex plane FD approximations can be used for highly accurate contour integration. Given the close relation between integrals and equispaced sums, FD-based methods can also be very effective for infinite sums.
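The FD flavor of Gregory's method is visible already at its lowest order: the trapezoidal rule's Euler–Maclaurin error term −h²/12 [f'(b) − f'(a)] is corrected by substituting one-sided FD estimates of f' from the same equispaced data. A minimal sketch of this first correction (higher-order Gregory formulas add further difference terms):

```python
import math

def trapezoid(fvals, h):
    """Composite trapezoidal rule on equispaced samples."""
    return h * (0.5 * fvals[0] + sum(fvals[1:-1]) + 0.5 * fvals[-1])

def gregory3(fvals, h):
    """Trapezoidal rule plus Gregory's first end correction:
    -h^2/12 (f'(b) - f'(a)) with f'(a) ~ (f1-f0)/h, f'(b) ~ (fn-f_{n-1})/h."""
    corr = -h / 12.0 * ((fvals[-1] - fvals[-2]) - (fvals[1] - fvals[0]))
    return trapezoid(fvals, h) + corr

n, h = 20, 0.05
fvals = [math.exp(i * h) for i in range(n + 1)]   # integrate e^x over [0, 1]
exact = math.e - 1.0
print(abs(trapezoid(fvals, h) - exact), abs(gregory3(fvals, h) - exact))
```

Only the end-point samples are touched, so the correction costs essentially nothing while raising the order of accuracy.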
Although the history of fractional order derivatives is nearly as long as that of regular (integer order) derivatives, their range of applications has increased dramatically in recent decades. In contrast to an integer order derivative, a fractional order derivative is not a local operator. It is most often expressed as an integral between some “base point” and an “evaluation point.” Although this integral is singular at least at the evaluation point, the FD perspectives provided in the previous chapters can still be applied and have recently led to a very high-order accurate method for their approximation. This is the case both for purely real-valued functions and for analytic functions in the complex plane. Fractional order Laplace operators are also frequently encountered in various applications, and methods for their approximation are discussed.
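The classical (first-order accurate) FD approach to such nonlocal operators is the Grünwald–Letnikov sum; it is shown here only as a baseline sketch of how the base-point integral becomes a weighted sum over a grid, not as the high-order method the chapter describes:

```python
import math

def gl_fractional_deriv(f, alpha, x, n):
    """Grunwald-Letnikov approximation (base point 0) of the order-alpha
    Riemann-Liouville derivative:
      D^alpha f(x) ~ h^{-alpha} * sum_k (-1)^k C(alpha, k) f(x - k h),
    with h = x / n and the weights generated by a simple recurrence."""
    h = x / n
    c, s = 1.0, f(x)                      # c = (-1)^k C(alpha, k), k = 0
    for k in range(1, n + 1):
        c *= (k - 1 - alpha) / k          # next GL weight
        s += c * f(x - k * h)
    return s / h**alpha

# Half-derivative of f(t) = t; the exact Riemann-Liouville result is
# Gamma(2)/Gamma(3/2) * sqrt(x) = 2 sqrt(x/pi).
x = 1.0
approx = gl_fractional_deriv(lambda t: t, 0.5, x, 2000)
print(approx, 2.0 * math.sqrt(x / math.pi))
```

Note that every sample between the base point and the evaluation point enters the sum, which makes the nonlocal character of fractional derivatives explicit.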
Fourier first developed his groundbreaking expansion methods as a tool for analyzing heat flow. From this application (and originally with little mathematical rigor), these expansions rapidly evolved into one of the foremost tools of present-day applied mathematics. The three main versions we describe in more detail are the Fourier Transform (FT), Fourier Series (FS), and Discrete Fourier Transform (DFT), together with how these relate to each other. Each case amounts to a transform pair, allowing one to move either way between physical and transform variables. The typical purpose of applying transforms is that certain operations are simpler in one of the spaces than in the other. This overview is followed by a discussion of the Fast Fourier Transform (FFT) algorithm, a computationally rapid way to carry out the DFT. This algorithm (by Cooley and Tukey, in 1965) brought about one of the greatest computational advances of all time, and its applications are far reaching.
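The core idea of the Cooley–Tukey algorithm fits in a few lines: a length-n DFT splits into two length-n/2 DFTs of the even- and odd-indexed samples, giving O(n log n) work instead of O(n²). A minimal radix-2 sketch (assuming a power-of-two length), checked against the direct DFT:

```python
import cmath, math

def fft(a):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])               # DFT of even-indexed samples
    odd = fft(a[1::2])                # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * math.pi * k / n) * odd[k]   # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def dft(a):
    """Direct O(n^2) DFT, for comparison."""
    n = len(a)
    return [sum(a[j] * cmath.exp(-2j * math.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [complex(v) for v in (1, 2, 3, 4, 0, -1, -2, -3)]
err = max(abs(u - v) for u, v in zip(fft(x), dft(x)))
print(err)  # agreement to rounding error
```

Production FFT libraries add iterative in-place scheduling and mixed radices, but the recursive split above is the entire mathematical content of the speedup.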