In Chapter 5 we presented basic iterative methods, both of stationary and nonstationary type. The parameters in the methods were chosen to accelerate their convergence. The ability to accelerate depends, however, on the eigenvalue distribution. As we have seen, for iteration matrices with real and positive eigenvalues, or with nearly real eigenvalues having positive real parts, the parameters can be chosen so that the rate of convergence is increased by an order of magnitude, as in the Chebyshev iteration method. Certain cases of an indefinite or more general complex spectrum can also be handled.
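To make the acceleration idea concrete, the following sketch (ours, not the book's) applies the standard three-term Chebyshev recurrence to a matrix whose real, positive eigenvalues are assumed to lie in a known interval [lmin, lmax]; the matrix A, the right-hand side b, and the spectral bounds are placeholders to be supplied by the reader.

```python
import numpy as np

def chebyshev_acceleration(A, b, x0, lmin, lmax, maxit=50):
    """Chebyshev-accelerated Richardson iteration for a matrix whose
    eigenvalues lie in [lmin, lmax], 0 < lmin; standard textbook recurrence."""
    theta = 0.5 * (lmax + lmin)          # centre of the spectral interval
    delta = 0.5 * (lmax - lmin)          # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    x = x0.copy()
    r = b - A @ x
    d = r / theta
    for _ in range(maxit):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```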
The eigenvalue distribution depends on the matrix splitting method used. Some splitting methods, such as the SOR method, lead to a dead end: an iteration matrix for which no, or only minor, polynomial acceleration of convergence is possible, because the spectrum of the iteration matrix is typically located on a circle and, hence, the spectra of its powers also lie on circles.
The purpose of this and the following chapters is to present some practically important splitting methods of A, that is, A = C − R (where C is nonsingular), which can improve the eigenvalue distribution of the iteration matrix C⁻¹A in such a way that the iterative method will converge much faster with the splitting than without it.
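For orientation, the iteration associated with such a splitting can be written out directly; the identity below is standard and is reproduced here rather than quoted from the text:

```latex
% Splitting A = C - R with C nonsingular:  Ax = b  \iff  Cx = Rx + b
x^{(k+1)} = C^{-1}\bigl(Rx^{(k)} + b\bigr)
          = x^{(k)} + C^{-1}\bigl(b - Ax^{(k)}\bigr),
\qquad
\text{iteration matrix } B = C^{-1}R = I - C^{-1}A .
```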
Let us first consider an n × n matrix A as defining a linear mapping in ℂⁿ or ℝⁿ, with respect to a fixed coordinate system. A number λ ∈ ℂ for which Ax = λx with x ≠ 0 is said to be an eigenvalue of A, and x is said to be an eigenvector corresponding to λ; hence x is a vector that is mapped by A onto its own direction. We show that there is at least one such vector for every square matrix. First, some fundamental concepts and properties in the theory of eigenvalues are presented. We prove that the eigenvalues of A are the zeros of φ(λ) = det(A − λI), a polynomial in λ called the characteristic polynomial of A. We prove that φ(A) = 0, and we consider the minimal polynomial m(λ), the polynomial of lowest degree for which m(A) = 0.
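As a small numerical illustration of φ(A) = 0 (the Cayley-Hamilton theorem), the sketch below checks the identity for an example matrix of our own choosing; it is not taken from the chapter.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],     # an arbitrary example matrix
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.poly(A) gives the coefficients of det(lambda*I - A), which differs from
# det(A - lambda*I) only by the sign (-1)^n, so phi(A) = 0 either way.
coeffs = np.poly(A)
phi_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
               for k, c in enumerate(coeffs))
print(np.allclose(phi_of_A, 0.0))   # True, up to rounding error
```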
Selfadjoint and unitary matrices play an important role in applications, and we derive properties of the eigensolutions of such matrices. If the matrix B of order n defines the same mapping as the matrix A, but with respect to another basis in ℂⁿ or ℝⁿ, we can write B as B = C⁻¹AC, where C is a nonsingular matrix. We prove that B and A have the same eigenvalues (i.e., the eigenvalues are independent of the particular basis) and consider matrices A for which there exists a matrix C such that B is a triangular or even a diagonal matrix.
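The invariance of the spectrum under a change of basis follows from a one-line computation, recalled here for convenience rather than quoted from the chapter:

```latex
\det(B - \lambda I)
  = \det\!\bigl(C^{-1}(A - \lambda I)C\bigr)
  = \det(C^{-1})\,\det(A - \lambda I)\,\det(C)
  = \det(A - \lambda I),
```

so B = C⁻¹AC and A have the same characteristic polynomial and hence the same eigenvalues.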
Various topics of matrix theory, in particular those related to nonnegative matrices (matrices with nonnegative entries), are considered in this chapter. We introduce the concepts of reducible and irreducible matrices and of matrix graph theory (the concepts of directed and strongly connected graphs), and show the equivalence between the irreducibility of a matrix and the strong connectivity of its directed graph. This enables us, among other things, to strengthen the Gershgorin theorem for estimating the location of eigenvalues of irreducible matrices. In order to determine whether a symmetric matrix is positive definite, we need information regarding the signs of its eigenvalues. Also, in order to determine the rate of convergence of certain iterative methods for solving linear systems of algebraic equations, we need to know, as we shall see in later chapters, some information regarding the location of the eigenvalues of the iteration matrix.
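As a hedged sketch of these two ingredients, the helper functions below test irreducibility through the strong connectivity of the directed matrix graph and compute the Gershgorin discs; the function names are ours, and scipy is assumed to be available.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A):
    """A is irreducible iff its directed graph (an edge i -> j whenever
    A[i, j] != 0) is strongly connected."""
    graph = csr_matrix((np.asarray(A) != 0).astype(int))
    n_components, _ = connected_components(graph, directed=True,
                                            connection='strong')
    return n_components == 1

def gershgorin_discs(A):
    """Return (centre, radius) pairs; every eigenvalue of A lies in the
    union of the discs |z - a_ii| <= sum_{j != i} |a_ij|."""
    A = np.asarray(A)
    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)
    return list(zip(centres, radii))
```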
The Perron-Frobenius theorem, which states that if A is nonnegative and irreducible then the spectral radius ρ(A) is an eigenvalue of A corresponding to a positive eigenvector, is presented. It will be seen in some of the following chapters that the concept of numerical radius can give sharper estimates of, for instance, the norm of powers of a matrix than the spectral radius can. Some results relating the numerical radius to the norm of the matrix and to the spectral radius of the symmetric part of nonnegative matrices are presented.
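A minimal sketch (ours, not the book's) of how the Perron root and a positive Perron vector of a nonnegative irreducible matrix can be approximated is the classical power iteration:

```python
import numpy as np

def perron(A, iters=200):
    """Power-iteration sketch for a nonnegative irreducible matrix A
    (for convergence the dominant eigenvalue should be well separated);
    approximates the Perron root rho(A) and a positive eigenvector."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.ones(n) / n               # positive start vector, ||x||_1 = 1
    lam = 0.0
    for _ in range(iters):
        y = A @ x
        lam = np.linalg.norm(y, 1)   # since x >= 0 and ||x||_1 = 1, this tends to rho(A)
        x = y / lam
    return lam, x
```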
Our second numerical laboratory assignment is the computation of an area under a given curve to a desired accuracy. Unlike the calculation of the areas of various polygons, the computation of the area of a circle or an ellipse, or of the area under the curve y = 1/log x from x = 2 to x = 7, say, is not at all trivial and requires methods of integral calculus or numerical approximations. Such areas arise not only in a geometrical context but also in various applications in engineering, biology, and statistics. In keeping with our policy of making the material as accessible as possible to precalculus students, without sacrificing rigor, we shall somewhat limit the generality so that results can be proved by elementary means and generalizations pointed out.
We shall henceforth be interested in the computation of the area under the graph of a positive function y = f(x), from x = a to x = b, but limit ourselves for the time being to monotonic (increasing or decreasing) or convex functions.
Rectangular approximations
Although the ensuing analysis is carried out in terms of a general, positive, monotonic function f(x), a ≤ x ≤ b, it is advisable that the laboratory participants bear in mind a concrete example such as f(x) = 1/x, 1 ≤ x ≤ 2.
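In the spirit of this assignment, the sketch below (ours, not the book's program) computes left- and right-endpoint rectangle sums that bracket the area under a decreasing positive function such as f(x) = 1/x on [1, 2]; their difference bounds the error.

```python
def rectangle_bounds(f, a, b, n):
    """For a decreasing positive f, the right-endpoint sum underestimates the
    area and the left-endpoint sum overestimates it (reversed for increasing f).
    The gap between the two bounds is h * (f(a) - f(b))."""
    h = (b - a) / n
    lower = h * sum(f(a + (i + 1) * h) for i in range(n))   # right endpoints
    upper = h * sum(f(a + i * h) for i in range(n))         # left endpoints
    return lower, upper

lower, upper = rectangle_bounds(lambda x: 1.0 / x, 1.0, 2.0, 1000)
print(lower, upper)    # both bounds approach ln 2 = 0.693... as n grows
```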
In the spirit of the mathematical laboratory, special attention is accorded to the elimination of traditional mathematical tables, which students have always used as “black boxes,” without the faintest understanding of their origin and construction. Thus, these tables were antieducational tools that the students acquired without any mathematical enlightenment and used as “cookbook recipes.” Now that calculators and microcomputers are used in mathematical education, the danger arises of replacing one set of black boxes with another. Of course, we are not advocating the introduction of these new electronic black boxes merely for using, say, the built-in logarithm function of the computer (or pressing the “log” key on a pocket calculator). What we do advocate is to teach students what is behind such built-in functions as part of the material covered in the mathematical laboratory. This subject fits naturally into the environment of the laboratory and reveals the “story behind the key.” The attainment of this objective is the subject of this chapter and the next.
We might ask whether students should be allowed to use the built-in functions before (and during) learning how they were built in. We feel that no harm can result from such a practice, so long as the students are told expressly that their “ignorant” use of the built-in functions is temporary. Before long, the secrets held by the computer keys will be revealed.
The term computer library functions refers to the collection of built-in functions – sin x, ln x, eˣ, arctan x, to name but a few – that were installed in the computer's permanent memory. These built-in functions, of course, are efficient approximations of the abstract mathematical entities they represent. By efficiency we mean that every evaluation is performed with utmost speed and yields all the correct significant figures that are available on the computing device used. The construction of such built-in functions usually entails lengthy, computationally expensive preparations, which, however, are carried out only once. The first preparatory step is to reduce, as much as possible, the interval [a, b] in which the given f(x) is to be approximated (examples of this strategy can be found in Sections 2.2 and 7.6). To guarantee the desired correct significant figures, we also must control the relative error in the approximation of f(x) in [a, b]. This issue will be discussed in Section 8.3.
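As a hedged illustration of the interval-reduction step (not the particular scheme of Sections 2.2 or 7.6), one can exploit the machine representation x = m · 2^e with m in [1/2, 1), so that only ln m on a short interval needs to be approximated:

```python
import math

def ln_via_reduction(x, ln_on_short_interval=math.log):
    """Reduce ln x to ln m with m in [0.5, 1):  ln x = ln m + e * ln 2.
    The short-interval routine is a placeholder; a real library function
    would use its own polynomial or rational approximation there."""
    m, e = math.frexp(x)               # x = m * 2**e, 0.5 <= m < 1 (for x > 0)
    return ln_on_short_interval(m) + e * math.log(2.0)
```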
We shall concentrate on polynomial approximations, making use of the results obtained in Chapter 7, and shall consider the possibility of constructing rational approximations in Sections 8.5 and 8.6. We start with polynomial approximation of trigonometric functions in the same spirit in which we treated ln x in Chapter 7.
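For instance, a minimal sketch of such a polynomial approximation (ours, and assuming the argument has already been reduced to [−π/2, π/2]) is the degree-9 Taylor polynomial for sin x evaluated in Horner form:

```python
def sin_poly(x):
    """Degree-9 Taylor polynomial for sin x, good to roughly five or six
    significant figures on [-pi/2, pi/2]; range reduction must be done first."""
    x2 = x * x
    # x - x^3/3! + x^5/5! - x^7/7! + x^9/9!, evaluated by Horner's rule
    return x * (1.0 + x2 * (-1.0/6 + x2 * (1.0/120 + x2 * (-1.0/5040 + x2/362880.0))))
```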
Accumulated experience has shown that early emphasis on algorithmic thinking, augmented by actual computing, is indispensable in mathematical education. Recognizing the cardinal importance of the individual, active involvement of every student in the computational work (as opposed to mere demonstration by the teacher), we advocate the use of mathematical laboratories equipped with microcomputers. Optimally, a special room should be set aside for the mathematical laboratory. Failing that, physics or biology laboratories can be used since they tend to create the proper atmosphere. A pair of students is assigned to each microcomputer, as to a microscope in a biology laboratory, and spends a few hours a week working with the microcomputer in the laboratory.
The mere presence of an increasing number of microcomputers in various educational institutions, even those at which a programming language such as True-Basic or Pascal is taught, in no way constitutes a new mode of teaching and learning. The full potential of microcomputers and proper courseware should be harnessed to improve the state of the art in education. Moreover, a new role will be played by the mathematics teacher when traditional “chalk-and-talk” methods are augmented by active participation as a laboratory instructor. The numerous advantages of such computer-aided teaching of mathematics are detailed in Section 1.2.
The laboratory work will center around specific assignments, or modules, to be carried out by the participants at their own pace.
Numerical Mathematics – A Laboratory Approach is a unique book that introduces the computational microcomputer laboratory as a vehicle for teaching algorithmic aspects of mathematics. This is achieved through a sequence of laboratory assignments, presupposing no previous knowledge of calculus or linear algebra, where the “chalk and talk” lecturer turns into a laboratory instructor. The computational assignments cover basic numerical topics that should be part of the mathematical education in the era of microcomputers.
In writing this book at the precalculus and pre–linear algebra level, we were mainly addressing an audience of four groups: first-year university students of mathematics, sciences, and engineering who have had no exposure to systematic calculus; students at teachers' training colleges who will be tomorrow's teachers of mathematics and computer science; superior high-school mathematics students; and scientific programmers at all levels. Various parts of this book were successfully tested on classes representative of each of these groups and subsequently modified. The material was received enthusiastically by high-school students who were members of Tel Aviv University's Math Club, some of whom are now faculty members of the School of Mathematical Sciences. The material was also welcomed by members of New York University's summer program for talented high-school students (held every summer at the Courant Institute of Mathematical Sciences and directed by Henry Mullish), and by several classes of in-service or future mathematics teachers at Tel Aviv University and at New York University.
In this chapter we present an algorithmic approach to the solution of systems of linear equations, another typical subject for the mathematical laboratory. No knowledge of matrices, vectors, and their underlying theory is presupposed, and thus the laboratory participants can handle this material even before the study of linear algebra.
After the development of an algorithm for the solution of “naive” systems of linear equations, special attention will be paid to problematic cases in which unrealistic answers with huge errors might be obtained. In particular, we shall discuss reasons for loss of accuracy, sensitivity to minor changes in the data, pivoting, scaling, and computational efficiency. By elaborating on each of these points by means of appropriate examples, we hope to present this traditionally abstract mathematical subject in a concrete, practical way that will be more meaningful to many students.
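A compact sketch of such an algorithm, with partial pivoting, is given below; this is a generic textbook version written for orientation, not the laboratory's own program, and it assumes numpy is available.

```python
import numpy as np

def solve(A, b):
    """Gaussian elimination with partial pivoting, followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))    # pivoting: largest entry in column k
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]              # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# A hypothetical 3 x 3 system, in the spirit of the examples discussed below
print(solve([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [4, 5, 6]))
```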
Coefficient tables
Systems of linear equations arise naturally in many practical areas such as mixing liquids, work and power calculations, electrical circuit computations, and marketing problems. It is particularly useful to demonstrate the subject under consideration by means of 3 × 3 systems (three equations and three unknowns). Such systems are not too large and cumbersome, but nevertheless constitute a case in which a pattern is revealed. Occasionally, when it is necessary for clarity, 4 × 4 and 2 × 2 systems will also be used.
This article is for those who already have a computer program for incompressible viscous transient flows and want to put a turbulence model into it. We discuss some of the implementation problems that can be encountered when the Finite Element Method is used with classical turbulence models, with the exception of Reynolds stress tensor models. Particular attention is given to boundary conditions and to the stability of algorithms.
Introduction
Many scientists or engineers turn to turbulence modeling after having written a Navier-Stokes solver for laminar flows.
For them, turbulence modeling is an external module to be added to the computer program. Generally, the main ingredients needed to build a good Navier-Stokes solver are known; these include tools such as mixed approximations for the velocity u and the pressure p to avoid checkerboard oscillations, and upwinding to damp high-Reynolds-number oscillations. However, the problems that one may meet while implementing a turbulence model are not so well known, because these models have not been studied much theoretically.
Judging from the literature [3, 11, 12, 15, 19, 22], the most commonly used turbulence models seem to be
All three start from a decomposition of u and p into a mean part and a fluctuating part, u = ū + u′. However, the fluctuations may be understood as oscillations in time, as oscillations in space, or even as variations due to changes in the initial conditions. In any case, the decomposition ū + u′ is substituted into the Navier-Stokes equations.
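For reference, the averaged equations that result from this substitution take the familiar form below (incompressible flow, with the overbar denoting the chosen averaging); this display is a standard textbook statement rather than a quotation from the article:

```latex
u = \bar{u} + u', \qquad p = \bar{p} + p', \qquad \nabla\cdot\bar{u} = 0,
\qquad
\frac{\partial \bar{u}}{\partial t} + (\bar{u}\cdot\nabla)\bar{u}
  = -\frac{1}{\rho}\,\nabla\bar{p} + \nu\,\Delta\bar{u}
    - \nabla\cdot\overline{u'\otimes u'} .
```

It is the last (Reynolds stress) term that the turbulence models mentioned above are designed to approximate.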
This article contains a summary account of covolume methods for incompressible flows. Covolume methods are a recently developed way to solve both compressible and incompressible flow problems on unstructured meshes. The general idea is to use complementary pairs of control volumes to discretize flux, circulation and other expressions which occur in the governing equations. These complementary volumes (covolumes for short) are related by an orthogonality property which is a basic feature of the covolume approach. One of the simplest mesh configurations which is suitable is the Delaunay-Voronoi mesh pair. This is introduced in the next section. After that we proceed through div-curl systems to the stationary Stokes equations and the Navier-Stokes equations. We will show that for uniform meshes the covolume equations for the stationary Stokes equations specialize to the MAC (staggered mesh) scheme, and that the MAC scheme itself is actually equivalent to a velocity-vorticity scheme. Some numerical results are presented in the last section.
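To fix notation for the reader, the two-dimensional model problems referred to here are, in one standard form (the right-hand sides and boundary conditions below are generic placeholders, not the article's particular choices):

```latex
% 2D div-curl system:
\nabla\cdot u = f, \qquad
\operatorname{rot} u := \partial_x u_2 - \partial_y u_1 = g \quad \text{in } \Omega,
\qquad u\cdot n \ \text{prescribed on } \partial\Omega;
% stationary Stokes equations:
-\nu\,\Delta u + \nabla p = f, \qquad \nabla\cdot u = 0 \quad \text{in } \Omega,
\qquad u = 0 \ \text{on } \partial\Omega .
```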
Since this article is intended only as an overview, we will present most of the results in a two dimensional setting. Almost all of the ideas and techniques do generalize nicely to three dimensions but are harder to visualize than in two dimensions. Given our limited aims it would be inappropriate to present proofs of most of the mathematical results. We will refer to the original sources for these and other details.
One of the reasons for introducing covolume methods is to find lower order methods for viscous flows which are free of “spurious mode” problems.