Turbulence is the state of vortex fluid motion where the velocity, pressure and other properties of the flow field vary in time and space sharply and irregularly and, it can be assumed, randomly. Turbulent fluid flows surround us, in the atmosphere, in the oceans, in engineering and biological systems. First recognized and examined by Leonardo da Vinci, for the past century turbulence has been studied by engineers, mathematicians and physicists, including such giants as Kolmogorov, Heisenberg, Taylor, Prandtl and von Kármán. Every advance in a wide collection of subjects, from chaos and fractals to field theory, and every increase in the speed and parallelization of computers is heralded as ushering in the solution of the ‘turbulence problem’, yet turbulence remains the greatest challenge of applied mathematics as well as of classical physics.
It is very discouraging that, in spite of hard work by an army of scientists and research engineers over more than a century, almost nothing has become known about turbulence from first principles, i.e. from the continuity equation and the Navier–Stokes equations (Batchelor 1967; Germain 1986; Landau and Lifshitz 1987).
For the past seven years students and faculty at the University of California at Berkeley have had the privilege of attending lectures by Professor G.I. Barenblatt on mechanics and related topics; the present book, which grew out of some of these lectures, extends the privilege to a wider audience. Professor Barenblatt explains here how to construct and understand self-similar solutions of various physical problems, i.e. solutions whose structure recurs over differing length or time scales and different parameter ranges. Such solutions are often the key to understanding complex phenomena; there is no universal recipe for finding them, but the tools that can be useful, including dimensional analysis and nonlinear eigenvalue problems, are explained here with admirable conciseness and clarity, together with some of the multifarious uses of self-similarity in intermediate asymptotics and their connection with wave propagation and the renormalization group. Whenever possible, Professor Barenblatt shuns dry and distant abstraction in favor of the telling example from his incomparable stock of such examples; with the appearance of this book, there is no longer any excuse for any scientist not to master these simple, elegant, crucial and sometimes surprising ideas.
In the scientific and even popular literature of recent times fractals have been widely used and discussed. By fractals are meant those geometric objects, curves, surfaces and three- and higher-dimensional bodies, having a rugged form and possessing certain special properties of homogeneity and self-similarity. Such geometric objects were studied intensively by mathematicians at the end of the nineteenth century and the beginning of the twentieth century, particularly in connection with the construction of examples of continuous nowhere-differentiable functions. To many pure mathematicians (starting with Hermite) and to most physicists and engineers they seemed for a long time to be mathematical monsters having no applications in the problems of natural science and technology. In fact, this is not so, and in clarifying this point the concept of intermediate asymptotics plays a decisive role.
The revival of interest in such objects and the recognition of their fundamental role in natural science and engineering is due primarily to a series of papers by Mandelbrot and, especially, to his monographs (1975, 1977, 1982). Mandelbrot coined the very term ‘fractal’ and introduced the general concept of fractality.
Applied mathematics is the art of constructing mathematical models of phenomena in nature, engineering and society. In constructing models it is impossible to take into account all the factors which influence the phenomenon; therefore some of the factors should be neglected, and only those factors which are of crucial importance should be left. So we say that every model is based on a certain idealization of the phenomenon. In constructing the idealizations the phenomena under study should be considered at ‘intermediate’ times and distances (think of the impressionists!). These distances and times should be sufficiently large for details and features which are of secondary importance to the phenomenon to disappear. At the same time they should be sufficiently small to reveal features of the phenomena which are of basic value. We say therefore that every mathematical model is based on ‘intermediate asymptotics’.
Measurement of physical quantities, units of measurement. Systems of units
We say without any particular thought that the mass of water in a glass is 200 grams, the length of a ruler is 0.30 meters (12 inches), the half-life of radium is 1600 years, the speed of a car is 60 miles per hour. In general, we express all physical quantities in terms of numbers; these numbers are obtained by measuring the physical quantities. Measurement is the direct or indirect comparison of a certain quantity with an appropriate standard, or, to put it another way, with an appropriate unit of measurement. Thus, in the examples discussed above, the mass of water is compared with a standard – a unit of mass, the gram; the length of the ruler is compared with a unit of length, the meter; the half-life of radium is compared with a unit of time, the year; and the velocity of the car is compared with a unit of velocity, the velocity of uniform motion in which a distance of one mile is traversed in a time equal to one hour.
The units for measuring physical quantities are divided into two categories: fundamental units and derived units. This means the following.
A class of phenomena (for example, mechanics, i.e. the motion and equilibrium of bodies) is singled out for study. Certain quantities are listed, and standard reference values – either natural or artificial – for these quantities are adopted as fundamental units; there is a certain amount of arbitrariness here. For example, when describing mechanical phenomena we may adopt mass, length and time standards as the fundamental units, though it is also possible to adopt other sets, such as force, length and time.
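The bookkeeping behind fundamental and derived units can be sketched in a few lines of code (the helper names and the choice of Python are my own illustration, not part of the text): represent each quantity by its exponents over the adopted fundamental units, here mass, length and time, and obtain derived units by adding or subtracting exponents.

```python
# Sketch: a derived unit as a vector of exponents over the fundamental
# units (M, L, T). Names and representation are illustrative assumptions.

def dim(mass=0, length=0, time=0):
    """Dimension vector: exponents of the fundamental units M, L, T."""
    return (mass, length, time)

def multiply(a, b):
    """Dimensions multiply by adding exponents."""
    return tuple(x + y for x, y in zip(a, b))

def divide(a, b):
    """Dimensions divide by subtracting exponents."""
    return tuple(x - y for x, y in zip(a, b))

MASS   = dim(mass=1)
LENGTH = dim(length=1)
TIME   = dim(time=1)

# Derived units are built from the fundamental ones:
VELOCITY     = divide(LENGTH, TIME)          # L T^-1
ACCELERATION = divide(VELOCITY, TIME)        # L T^-2
FORCE        = multiply(MASS, ACCELERATION)  # M L T^-2

print(FORCE)  # (1, 1, -2): exponents of M, L, T
```

Had we instead adopted force, length and time as fundamental, the same bookkeeping would express mass as a derived unit, which is the arbitrariness the text refers to.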
The methods presented in Chapter 5 attempt to close the chemical source term by making a priori assumptions concerning the form of the joint composition PDF. In contrast, the methods discussed in this chapter involve solving a transport equation for the joint PDF in which the chemical source term appears in closed form. In the literature, this type of approach is referred to as transported PDF or full PDF methods. In this chapter, we begin by deriving the fundamental transport equation for the one-point joint velocity, composition PDF. We then look at modeling issues that arise from this equation, and introduce the Lagrangian PDF formulation as a natural starting point for developing transported PDF models. The simulation methods that are used to ‘solve’ for the joint PDF are presented in Chapter 7.
Introduction
As we saw in Chapter 1, the one-point joint velocity, composition PDF contains random variables representing the three velocity components and all chemical species at a particular spatial location. The restriction to a one-point description implies the following.
The joint PDF contains no information concerning local velocity and/or scalar gradients. A two-point description would be required to describe the gradients.
All non-linear terms involving spatial gradients require transported PDF closures. Examples of such terms are viscous dissipation, pressure fluctuations, and scalar dissipation.
The one-point joint composition PDF contains random variables representing all chemical species at a particular spatial location. It can be found from the joint velocity, composition PDF by integrating over the entire phase space of the velocity components. The loss of instantaneous velocity information implies the following.
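The integration over velocity phase space mentioned above can be sketched numerically. This is a minimal illustration with one velocity component and one scalar on a discrete grid; the grid and the model joint PDF are assumptions for the example, not taken from the text.

```python
# Sketch: recovering the composition PDF from a discretized joint
# velocity-composition PDF by integrating out the velocity variable.
import numpy as np

V = np.linspace(-3.0, 3.0, 61)    # sample-space velocity
psi = np.linspace(0.0, 1.0, 41)   # sample-space composition
dV = V[1] - V[0]
dpsi = psi[1] - psi[0]

# An illustrative joint PDF: Gaussian in V, parabolic in psi, normalized
# so that it integrates to unity over the discrete grid.
f_joint = np.exp(-0.5 * V[:, None] ** 2) * psi[None, :] * (1.0 - psi[None, :])
f_joint /= f_joint.sum() * dV * dpsi

# Integrating over the entire velocity phase space gives the
# one-point composition PDF f_phi(psi):
f_phi = f_joint.sum(axis=0) * dV
print(round(float(f_phi.sum() * dpsi), 6))  # 1.0: the marginal stays normalized
```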
At first glance, to the uninitiated the subject of turbulent reacting flows would appear to be relatively simple. Indeed, the basic governing principles can be reduced to a statement of conservation of chemical species and energy ((1.28), p. 16) and a statement of conservation of fluid momentum ((1.27), p. 16). However, anyone who has attempted to master this subject will tell you that it is in fact quite complicated. On the one hand, in order to understand how the fluid flow affects the chemistry, one must have an excellent understanding of turbulent flows and of turbulent mixing. On the other hand, given its paramount importance in the determination of the types and quantities of chemical species formed, an equally good understanding of chemistry is required. Even a cursory review of the literature in any of these areas will quickly reveal the complexity of the task. Indeed, given the enormous research production in these areas during the twentieth century, it would be safe to conclude that no one could simultaneously master all aspects of turbulence, mixing, and chemistry.
Notwithstanding the intellectual challenges posed by the subject, the main impetus behind the development of computational models for turbulent reacting flows has been the increasing awareness of the impact of such flows on the environment. For example, incomplete combustion of hydrocarbons in internal combustion engines is a major source of air pollution. Likewise, in the chemical process and pharmaceutical industries, inadequate control of product yields and selectivities can produce a host of undesirable byproducts.
In Chapter 6 we reviewed the theory underlying transported PDF methods. In order to apply this theory to practical flow problems, numerical algorithms are required to ‘solve’ the PDF transport equation. In general, solving the PDF transport equation using standard finite-difference (FD) or finite-volume (FV) methods is computationally intractable for a number of reasons. For example, the velocity, composition PDF transport equation ((6.19), p. 248) has three space variables (x), three velocity variables (V), Ns composition variables (ψ), and time (t). Even for a statistically two-dimensional, steady-state flow with only one scalar, a finite-difference grid in at least five dimensions would be required! Add to this the problem of developing numerical techniques that ensure f_U,φ remains non-negative and normalized to unity at every space/time point (x, t), and the technical difficulties quickly become insurmountable.
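The dimensionality argument can be made concrete with a back-of-the-envelope sketch; the resolution of 100 grid points per dimension and the double-precision storage are assumptions chosen for illustration.

```python
# Sketch: storage cost of a uniform finite-difference grid in d dimensions,
# showing why gridding the PDF transport equation is intractable.

def grid_points(points_per_dim, n_dims):
    """Total number of nodes in a uniform d-dimensional grid."""
    return points_per_dim ** n_dims

# A statistically 2D, steady flow with one scalar still involves
# space, velocity, and composition variables; even the text's
# "at least five dimensions" is far beyond reach:
for d in (3, 5, 7):
    n = grid_points(100, d)   # 100 points per dimension (assumed)
    gb = n * 8 / 1e9          # 8 bytes per double-precision value
    print(f"{d} dims: {n:.0e} points, {gb:.0e} GB")
```

At 100 points per dimension, five dimensions already demand on the order of 10^10 grid values, and each extra independent variable multiplies the cost by another factor of 100.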
A tractable alternative to ‘solving’ the PDF transport equation is to use statistical or Monte-Carlo (MC) simulations. Unlike FV methods, MC simulations can handle a large number of independent variables, and always ensure that the resulting estimate of f_U,φ is well behaved. As noted in Section 6.8, MC simulations employ representative samples or so-called ‘notional’ particles. The principal challenge in constructing an MC algorithm is thus to define appropriate rules for the rates of change of the notional-particle variables so that they have statistical properties identical to f_U,φ(V, ψ; x, t). The reader should, however, keep in mind that the necessarily finite ensemble of notional particles provides only a (poor) estimate of f_U,φ. When developing MC algorithms, it will thus be important to consider the magnitude of the estimation errors and to develop ways to control them.
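The estimation-error point can be illustrated with a generic Monte Carlo sketch. This is a plain ensemble average over independent samples standing in for notional particles, not a transported PDF simulation, and the scalar statistics are invented for the example; it shows only the familiar N^(-1/2) decay of the statistical error.

```python
# Sketch: statistical error of an ensemble-average estimate versus the
# number of "notional particles" (here: i.i.d. Gaussian samples).
import random
import statistics

random.seed(0)  # reproducible illustration

def mc_mean(n_particles):
    """Estimate the mean of a Gaussian scalar from n notional particles."""
    samples = [random.gauss(1.0, 0.5) for _ in range(n_particles)]
    return statistics.fmean(samples)

true_mean = 1.0
for n in (100, 10_000, 1_000_000):
    err = abs(mc_mean(n) - true_mean)
    print(f"N = {n:>9}: |error| ~ {err:.1e}")
# Each 100-fold increase in N reduces the error by roughly a factor of 10,
# so halving the statistical error costs four times as many particles.
```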