Periodic PS methods are almost always implemented using the FFT algorithm. For nonperiodic PS methods, direct matrix × vector multiplication is often both fast and convenient. However, in the case of Chebyshev-PS methods, a cosine-FFT approach is also effective. Following a description of the FFT concept in Section F.1, its use for periodic and Chebyshev-PS implementations is described in Sections F.2 and F.3.
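As a minimal sketch of the FFT-based implementation of a periodic PS method (not taken from the book; it assumes NumPy and equispaced samples on [0, 2π)), differentiation reduces to multiplying each Fourier mode by ik:

```python
import numpy as np

def fourier_diff(u):
    """Spectral derivative of a periodic sample vector u on [0, 2*pi).

    Sketch only: assumes u holds N equispaced samples of a smooth
    2*pi-periodic function. Each Fourier mode is multiplied by i*k,
    then transformed back.
    """
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers 0, 1, ..., -1
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# d/dx sin(x) = cos(x), recovered to near machine precision:
N = 32
x = 2 * np.pi * np.arange(N) / N
err = np.max(np.abs(fourier_diff(np.sin(x)) - np.cos(x)))
```

The O(N log N) cost of the two FFTs is what makes this competitive with (and, for large N, faster than) the O(N²) matrix × vector alternative.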
In most periodic PS contexts, what is actually needed is not Fourier expansion coefficients but rather a fast way to compute periodic convolutions. FFTs offer one way to do this. In Section F.4, we discuss convolutions and some alternative ways to calculate them effectively. In Section F.5, we find that, at four times the cost of a “basic” FFT, the scope of the algorithm can be greatly broadened. These fractional Fourier transforms apply to many problems of physical and computational interest.
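The FFT route to periodic convolutions rests on the convolution theorem: the DFT of a circular convolution is the pointwise product of the DFTs. A small illustrative sketch (assuming NumPy; not the book's code) with a direct O(N²) check:

```python
import numpy as np

def periodic_convolution(a, b):
    """Circular convolution of two equal-length vectors via the FFT.

    By the convolution theorem, this costs O(N log N) rather than the
    O(N^2) of the direct double sum.
    """
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.5, -1.0, 0.0, 2.0])

# Direct O(N^2) circular convolution for comparison
direct = np.array([sum(a[j] * b[(i - j) % 4] for j in range(4))
                   for i in range(4)])
fast = periodic_convolution(a, b)
```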
Both periodic and nonperiodic PS methods can be seen as high-accuracy limits of FD methods. This alternative approach to PS methods provides both generalizations and insights.
Orthogonal polynomials and functions lead only to a small class of possible spectral methods, whereas the FD viewpoint allows many generalizations.
For example, all classical orthogonal polynomials cluster the nodes quadratically toward the ends of the interval – this is often, but not always, best.
An FD viewpoint offers a chance to explore intermediate methods between low-order FD and PS methods.
One might consider procedures of not quite as high order as Chebyshev spectral methods, and with nodes not clustered quite as densely – possibly trading some accuracy for stability and simplicity.
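The quadratic endpoint clustering mentioned above is easy to see numerically. The following sketch (assuming NumPy; an illustration, not the book's code) uses the Chebyshev extreme points x_j = cos(jπ/N), whose spacing next to the endpoints is 1 − cos(π/N) ≈ π²/(2N²):

```python
import numpy as np

# Chebyshev extreme points on [-1, 1]: the gap adjacent to x = 1 shrinks
# like pi^2 / (2 N^2), i.e. quadratically in N.
for N in (16, 32, 64):
    x = np.cos(np.pi * np.arange(N + 1) / N)
    end_gap = x[0] - x[1]              # spacing next to the endpoint x = 1
    print(N, end_gap, end_gap * N**2)  # last column levels off near pi^2/2
```

Doubling N thus shrinks the smallest grid spacing fourfold, which is precisely what drives the severe time-step restrictions that an intermediate (less clustered, lower-order) method might relax.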
Two separate ways to view any method will always provide more opportunities for analysis, understanding, and improvement.
Many special enhancements have been designed for FD methods. Viewing PS methods as a special case of FD methods often makes it easier to carry over such ideas.
Examples include staggered grids, upwind techniques, boundary techniques, polar and spherical coordinates, etc. (see Chapters 5 and 6).
Comparisons between PS and FD methods can be made more consistent.
Sections 3.1 and 3.2 contain some general material on FD approximations, allowing us in Section 3.3 to discuss different types of node distributions. The relation between these types and the accuracy that polynomial interpolation provides (at different locations over [–1, 1]) is clarified in Section 3.4.
We are glad that the first edition of these volumes is thought to have achieved its main aim of making mathematical techniques more available, not only to mathematicians, but also to the wider scientific and engineering community.
We have been glad to take the opportunity provided by this edition to incorporate the most salient aspects of the large body of new results which have been obtained since the publication of the original edition. The incorporation of this new material has led to the need to make several significant rearrangements of the previous material.
We wish to record our gratitude for the mathematical contributions and company of Arne Magnus and Helmut Werner, both of them friends who are missed by many of us. The influence of their work is to be found in Chapter 4.
A few infelicities which have been noticed in the original edition have been corrected.
In the previous chapters, we have repeatedly referred to the exponential convergence rate of spectral methods for analytic functions. This is discussed in more detail in Section 4.1. When functions are not smooth, PS theory is much less clear. An approximation can appear very good in one norm and, at the same time, very bad in another. As illustrated in Section 4.2, PS performance can also be very impressive in many cases that are “theoretically questionable” – this is exploited in most major PS applications. Sections 4.3–4.5 describe differentiation matrices in more detail, their influence on time-stepping procedures, and linear stability conditions. A fundamentally different kind of instability, specific to nonlinear equations, is discussed in Section 4.6. Very particular distributions of the nodes yield spectacular accuracies for Gaussian quadrature formulas. PS methods are often based on such formulas, presumably with the hope of obtaining a correspondingly enhanced accuracy when approximating differential equations. In the examples of Section 4.7, we see little evidence for this.
Smoothness of a function is a rather vague concept. Increasingly severe requirements include:
a finite number of continuous derivatives;
infinitely many derivatives; and
analyticity – allowing continuation as a differentiable complex function away from the real axis.
In the limit as N (the number of nodes or gridpoints) tends to infinity, these cases give different asymptotic convergence rates for PS methods. In the first case, the rate is polynomial, with the power corresponding to the number of derivatives that are available.
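The geometric convergence in the analytic case can be observed directly. The following sketch (assuming NumPy; an illustration under our own choices, not the book's code) interpolates the Runge function 1/(1 + 25x²), which is analytic on [−1, 1], at the Chebyshev extreme points using the standard barycentric formula:

```python
import numpy as np

def cheb_interp(f, N, xx):
    """Barycentric interpolation at the Chebyshev points cos(j*pi/N).

    Sketch: uses the standard second-kind barycentric weights (-1)^j,
    halved at the two endpoints.
    """
    x = np.cos(np.pi * np.arange(N + 1) / N)
    w = (-1.0) ** np.arange(N + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    fx = f(x)
    num = np.zeros_like(xx)
    den = np.zeros_like(xx)
    exact = np.full(len(xx), np.nan)   # handle xx hitting a node exactly
    for j in range(N + 1):
        diff = xx - x[j]
        hit = np.isclose(diff, 0.0)
        exact[hit] = fx[j]
        c = w[j] / np.where(hit, 1.0, diff)
        num += c * fx[j]
        den += c
    p = num / den
    p[~np.isnan(exact)] = exact[~np.isnan(exact)]
    return p

xx = np.linspace(-1, 1, 1001)
analytic = lambda t: 1.0 / (1.0 + 25.0 * t * t)  # Runge function
for N in (8, 16, 32, 64):
    err = np.max(np.abs(cheb_interp(analytic, N, xx) - analytic(xx)))
    print(N, err)  # the error decays geometrically in N
```

For a function with only a finite number of continuous derivatives, the same experiment would instead show the algebraic (polynomial) rate described above.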