We propose in this chapter a speculative dynamical description of an abstract cognitive system that goes beyond neural networks in an attempt to take into account some features of nervous systems and, in particular, adaptations to environmental constraints. This personal viewpoint of the author is but one of several attempts to model cognitive processes mathematically. It is presented primarily to stir up reaction and prompt further research involving other techniques and other approaches to this wide field.
Before we look at the evolution of nervous systems for useful suggestions regarding the means they have used to master more and more complex cognitive faculties, we shall start from the fact that an organism must adapt to environmental constraints by perceiving them and recognizing them through “metaphors” with what we shall call “conceptual controls.” This problem of adaptation is not dealt with explicitly in most studies of neural networks. This chapter is devoted to highlighting the roles of cognitive systems in this process.
The variables of the cognitive system are described by its state and a regulatory control (conceptual control). The state of the system (henceforth called the sensorimotor state) is described by
the state and the variations of the environment on which the cognitive system acts, and
the state of cerebral motor activity of the cognitive system, which guides an individual's action on the environment.
Up to this point, we have described basic PS implementations. However, many variations are possible, offering advantages in different respects. In this chapter, we discuss a few of these variations.
Use of additional information from the governing equations
This idea (like most others) is best described through the use of examples. It requires that the problem be manipulated analytically (e.g., by repeated differentiation) to provide more information than is immediately available from its original formulation.
Example 1. Exploit additional derivative information at the boundaries when solving the eigenvalue problem u_xx = λu, u(±1) = 0.
Since u(±1) = 0, clearly also u″(±1) = 0 and u″″(±1) = 0 (we label these as “extra” boundary conditions – for this example, we ignore that this pattern continues indefinitely and that u becomes periodic). The boundary information on u″ and u″″ can be exploited in different ways.
A. Reduce the largest spurious EVs (cf. Figure 4.4-2). To each extra boundary condition (such as u″(−1) = 0) corresponds a one-sided difference stencil. From each row of the DM, like those shown in Figures 4.3-1(c) and (d), we can subtract any multiples of these stencils without compromising the spectral accuracy. The multiples can be chosen to minimize the sum of the squares of the elements of the resulting DM.
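A minimal sketch of this least-squares step, in Python with NumPy (the routine name is only illustrative, and the construction of the one-sided stencils themselves is not shown):

import numpy as np

def reduce_dm_rows(D, stencils):
    # D: n-by-n differentiation matrix (DM).
    # stencils: list of length-n vectors, each a one-sided approximation of an
    # "extra" boundary condition (e.g. u''(-1) = 0); each therefore gives
    # (approximately) zero when applied to the true solution, so multiples of
    # it can be subtracted from DM rows without losing spectral accuracy.
    S = np.column_stack(stencils)                      # n-by-k
    D_new = D.copy()
    for i in range(D.shape[0]):
        # choose the multiples c to minimize the 2-norm of the modified row
        c, *_ = np.linalg.lstsq(S, D[i], rcond=None)
        D_new[i] = D[i] - S @ c
    return D_new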
The partial differential equations (PDEs) that arise in applications can only rarely be solved in closed form. Even when they can be, the solutions are often impractical to work with and to visualize. Numerical techniques, on the other hand, can be applied successfully to virtually all well-posed PDEs. Broadly applicable techniques include finite element (FE), finite volume (FV), finite difference (FD), and, more recently, spectral methods. The complexity of the domain and the required levels of accuracy are often the key factors in selecting among these approaches.
Finite-element methods are particularly well suited to problems in very complex geometries (e.g. 3-D engineering structures), whereas spectral methods can offer superior accuracies (and cost efficiencies) mainly in simple geometries such as boxes and spheres (which can, however, be combined into more complex shapes). FD methods perform well over a broad range of accuracy requirements and (moderately complex) domains.
Both FE and FV methods are closely related to FD methods. FE methods can frequently be seen as a very convenient way to generate and administer complex FD schemes and to obtain results with relatively sharp error estimates. The connection between spectral methods – in particular the so-called pseudospectral (PS) methods, the topic of this book – and FD methods is closer still. A key theme in this book is to exploit this connection, both to make PS methods more intuitively understandable and to obtain particularly powerful and flexible PS variations.
Different variations of the “energy method” can be used to show that PDEs are well posed, to show that discrete approximations are stable, and to establish (global) convergence rates under mesh refinements. The energy approach is very broadly applicable, and can handle many situations, including boundary conditions, variable coefficients (and nonlinearities), and nonperiodic PS methods.
However, this flexibility and power come at the price of often significant technical difficulty. This appendix is intended to provide only a first flavor of this rich subject to readers who are unfamiliar with it. For this purpose, we consider here five examples, all relating to the heat equation on the interval [–1, 1]. More systematic descriptions can be found in Richtmyer and Morton (1967), Gustafsson et al. (1995), and (for PS methods) Canuto et al. (1988).
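As a small taste of what such an estimate looks like (an illustrative calculation, not one of the appendix's five examples), consider the heat equation $u_t = u_{xx}$ on $[-1, 1]$ with $u(\pm 1, t) = 0$, and let $E(t) = \int_{-1}^{1} u^2\,dx$. Then

\[
\frac{dE}{dt} \;=\; 2\int_{-1}^{1} u\,u_t\,dx \;=\; 2\int_{-1}^{1} u\,u_{xx}\,dx
\;=\; 2\bigl[u\,u_x\bigr]_{-1}^{1} \;-\; 2\int_{-1}^{1} u_x^{2}\,dx \;\le\; 0,
\]

since the boundary term vanishes. The solution norm therefore cannot grow, and a discretization is called stable in this sense when the analogous discrete inequality can be established.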
High-order FD and PS methods are particularly advantageous in cases of
high smoothness of solution (but note again the discussion in Section 4.2),
stringent error requirement,
long time integrations, and
more than one space dimension.
Because the PS methods for periodic and nonperiodic problems are quite different, the two cases are discussed separately in what follows. In both cases, we find that the PS methods compare very favorably against FD methods in simple model situations. However, in cases with complex geometries or severe irregularities in the solutions, lower-order FD (or FE) methods may be both more economical and more robust.
Especially for nonperiodic problems, it can be difficult to estimate a priori the computational expense required to solve a problem to a desired accuracy. Many implementation variations are possible, and the optimal selection of formal orders of accuracy, level of grid non-uniformity, and so forth may well turn out to depend not only on the problem type, but also on the solution regimes that are studied. Therefore, it makes sense to keep open as many of these implementation options as possible while developing application codes. One technique is to write an FD code of variable order of accuracy on a grid with variable density (using the algorithm in Section 3.1 and Appendix C). By simply changing parameter values, one can then explore (and exploit) the full range of methods from low-order FD on a uniform grid to Chebyshev (Legendre, etc.) and other PS methods. Obviously, it is also desirable to structure codes so that time stepping methods (if present) are easily interchangeable.
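As an illustration of the kind of building block involved, here is a Python sketch of a weight-generation routine of the type referred to above (the book's own versions appear in Section 3.1 and Appendix C; this transcription is only illustrative):

import numpy as np

def fd_weights(z, x, m):
    # Weights for approximating derivatives of order 0, 1, ..., m at the
    # point z, using function values at the (arbitrarily spaced) nodes x.
    # Returns an array c with c[j, k] = weight of node x[j] for derivative k.
    n = len(x)
    c = np.zeros((n, m + 1))
    c1, c4 = 1.0, x[0] - z
    c[0, 0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i, k] = c1 * (k * c[i - 1, k - 1] - c5 * c[i - 1, k]) / c2
                c[i, 0] = -c1 * c5 * c[i - 1, 0] / c2
            for k in range(mn, 0, -1):
                c[j, k] = (c4 * c[j, k] - k * c[j, k - 1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c

# Example: second-derivative weights at z = 0 on the nodes -1, 0, 1
# recover the familiar stencil [1, -2, 1].
print(fd_weights(0.0, np.array([-1.0, 0.0, 1.0]), 2)[:, 2])

With such a routine, the same code can generate anything from classical low-order stencils on uniform grids to full PS differentiation matrices on clustered nodes, simply by changing the node set and the stencil width.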
PDEs in spherical geometries arise in many important areas of application such as meteorology, geophysics, and astrophysics. A fundamental problem for most discretization techniques is that it is impossible to cover a sphere with grids that are both dense and uniform. We begin by describing some numerical approaches designed to address or bypass this problem.
Approximately uniform grids over the sphere. One might start by laying out a coarse, perfectly uniform grid based on one of the five Platonic bodies (in particular the icosahedron with 20 equilateral triangular faces), and then carry out subdivisions within each face. Variations on this theme include using grids reminiscent of
the dimple pattern on golf balls;
“buckminsterfullerenes” – the patterns of carbon atoms in “buckyballs”; or
approximations found in biology, such as the pattern of composite eyes in some insects or the silica skeletons of some radiolaria.
Grids of this type can be well suited for low-order FD and FE methods (see e.g. Bunge and Baumgardner 1995), but even their relatively minor irregularities cause considerable algebraic complications in connection with higher-order FD and PS methods.
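As a minimal sketch (Python with NumPy) of the subdivide-and-project idea described above: the routine below refines a single icosahedral face; applying it to all 20 faces, and removing duplicated edge points, would yield an approximately uniform spherical grid.

import numpy as np

def subdivide_face(v0, v1, v2, n):
    # Split one flat triangular face (corners v0, v1, v2 on the unit sphere)
    # into an n-fold barycentric grid, then project the points radially
    # back onto the sphere.
    pts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            p = (i * v0 + j * v1 + k * v2) / n
            pts.append(p / np.linalg.norm(p))
    return np.array(pts)

# One face of the icosahedron, with vertices built from the golden ratio.
phi = (1 + np.sqrt(5)) / 2
corners = [np.array(v) / np.linalg.norm(v)
           for v in [(0.0, 1.0, phi), (0.0, -1.0, phi), (phi, 0.0, 1.0)]]
grid = subdivide_face(*corners, n=4)     # 15 nearly uniformly spaced points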
Spherical (surface) harmonics. These form an infinite set of analytic basis functions with a completely uniform approximation ability over all parts of a sphere. Galerkin techniques are particularly attractive for linear constant-coefficient problems. Equations with variable coefficients and nonlinearities are best handled via (repeated) transformations to and from a grid-based physical representation. Drawbacks include algebraic complexity and lack of very fast transforms.
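For concreteness, a truncated expansion in this basis can be evaluated at arbitrary angles with standard library routines; the sketch below uses SciPy's sph_harm (the coefficient layout and the helper name are only illustrative):

import numpy as np
from scipy.special import sph_harm   # Y_n^m(theta, phi): theta = azimuth, phi = colatitude

def evaluate_expansion(coeffs, theta, phi):
    # coeffs: dictionary mapping (degree n, order m) to a complex coefficient.
    u = np.zeros(np.broadcast(theta, phi).shape, dtype=complex)
    for (n, m), a in coeffs.items():
        u = u + a * sph_harm(m, n, theta, phi)
    return u

# Example: Y_0^0 plus a small amount of Y_2^1 on a longitude-colatitude grid.
theta = np.linspace(0, 2 * np.pi, 72)[:, None]   # azimuth (longitude)
phi = np.linspace(0, np.pi, 36)[None, :]         # colatitude
u = evaluate_expansion({(0, 0): 1.0, (2, 1): 0.5}, theta, phi)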
Partial differential equations arise in almost all areas of science, engineering, modeling, and forecasting. Finite difference and finite element methods have long histories as particularly flexible and powerful general-purpose numerical solution methods. In the last two decades, spectral and in particular pseudospectral (PS) methods have emerged as intriguing alternatives in many situations – and as superior ones in several areas.
The aim of this Practical Guide is to describe when, how, and why the PS approach works, in a style that makes the transition to actual numerical implementations as straightforward as possible. For this reason, the book focuses on illustrations, examples, and heuristic explanations, and includes key code segments and references, but contains only a few rigorous theorems or technical proofs. It is written primarily for scientists and engineers who are interested in applying PS methods to real problems. However, I also hope that it will prove suitable for graduate-level study, conveying to students that PS methods form an important and rapidly developing field in which elaborate mathematical preliminaries are unnecessary. Material that is normally included in undergraduate-level mathematics and numerical analysis courses is mentioned only if customary viewpoints need to be complemented.
The paper entitled “A Review of Pseudospectral Methods”, which was co-authored by Professor David Sloan, appeared in Acta Numerica 1994. The encouragement offered by Dr. Arieh Iserles (the principal editor for Acta Numerica) was critical both for the original article and for its subsequent extension into this monograph.
Periodic PS methods are almost always implemented with use of the FFT algorithm. For nonperiodic PS methods, direct matrix × vector multiplication is often both fast and convenient. However, in the case of Chebyshev PS methods, a cosine-FFT approach is also effective. Following a description of the FFT concept in Section F.1, its use for periodic and Chebyshev PS implementations is described in Sections F.2 and F.3.
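A minimal sketch of the periodic FFT route (Python with NumPy, assuming equispaced samples of a 2π-periodic function):

import numpy as np

def periodic_derivative(u):
    # Spectral first derivative of a 2*pi-periodic function sampled at
    # n equispaced points: transform, multiply by i*k, transform back.
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers 0, 1, ..., -1
    if n % 2 == 0:
        k[n // 2] = 0.0                   # drop the unresolved Nyquist mode
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Example: the derivative of sin(x) is recovered (to rounding error) as cos(x).
x = 2 * np.pi * np.arange(32) / 32
print(np.max(np.abs(periodic_derivative(np.sin(x)) - np.cos(x))))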
In most periodic PS contexts, what is actually needed is not Fourier expansion coefficients but rather a fast way to compute periodic convolutions. FFTs offer one way to do this. In Section F.4, we discuss convolutions and some alternative ways to calculate them effectively. In Section F.5, we find that, at four times the cost of a “basic” FFT, the scope of the algorithm can be greatly broadened. These fractional Fourier transforms apply to many problems of physical and computational interest.
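For instance, a periodic (circular) convolution of two equal-length real sequences can be formed with three length-n FFTs rather than the O(n²) direct sum (a sketch):

import numpy as np

def periodic_convolution(a, b):
    # Circular convolution via the convolution theorem:
    # (a * b)[i] = sum_j a[j] b[(i - j) mod n].
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Check against the direct O(n^2) definition for a small random example.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
direct = np.array([sum(a[j] * b[(i - j) % 8] for j in range(8)) for i in range(8)])
print(np.max(np.abs(periodic_convolution(a, b) - direct)))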
Both periodic and nonperiodic PS methods can be seen as high-accuracy limits of FD methods. This alternative approach to PS methods provides both generalizations and insights.
Orthogonal polynomials and functions lead only to a small class of possible spectral methods, whereas the FD viewpoint allows many generalizations.
For example, all classical orthogonal polynomials cluster the nodes quadratically toward the ends of the interval – this is often, but not always, best (see the numerical sketch following this list).
An FD viewpoint offers a chance to explore intermediate methods between low-order FD and PS methods.
One might consider procedures of not quite as high order as Chebyshev spectral methods, and with nodes not clustered quite as densely – possibly trading some accuracy for stability and simplicity.
Two separate ways to view any method will always provide more opportunities for analysis, understanding, and improvement.
Many special enhancements have been designed for FD methods. Viewing PS methods as a special case of FD methods often makes it easier to carry over such ideas.
Examples include staggered grids, upwind techniques, boundary techniques, polar and spherical coordinates, etc. (see Chapters 5 and 6).
Comparisons between PS and FD methods can be made more consistent.
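The quadratic node clustering mentioned in the list above can be seen directly from the Chebyshev extreme points x_j = cos(jπ/n); a small numerical illustration (Python with NumPy):

import numpy as np

n = 16
x = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev extreme points on [-1, 1]
print(x[0] - x[1])                         # spacing at the end: about (pi/n)**2 / 2
print(x[n // 2 - 1] - x[n // 2])           # spacing at the center: about pi/n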
Sections 3.1 and 3.2 contain some general material on FD approximations, allowing us in Section 3.3 to discuss different types of node distributions. The relation between these types and the accuracy that polynomial interpolation provides (at different locations over [–1, 1]) is clarified in Section 3.4.