Incomplete but optimal information: the natural coarsegraining of phase space
How can one characterize an invariant set of a chaotic dynamical system and the motion on that set in a way that makes basic theoretical quantities of interest available for comparison with experiment or computation? Because a formulation of the theory based upon finite resolution is needed in both cases, we generalize the results of Chapter 4 and introduce a formalism that is based upon a hierarchy of more and more refined coarsegrained descriptions of both the invariant set and the motion. Within this finite description, it is sometimes useful to introduce the f(α)-spectrum. The f(α)-spectrum, in the end, depends upon two things: the invariant set and the frequency with which the dynamical system visits different parts of the invariant set. A coarsegrained description of both properties is demanded by virtue of the fact that invariant sets of chaotic dynamical systems have, mathematically, the cardinality of the continuum.
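As a concrete illustration of how such a coarsegrained description is used in practice, the following short Python sketch (not taken from the text; the weight p = 0.3 and the range of q are arbitrary choices for illustration) computes the f(α)-spectrum of the simplest multifractal, the binomial measure on the unit interval, from its exactly known τ(q) and the Legendre transform α = dτ/dq, f(α) = qα − τ(q).

```python
import numpy as np

# Binomial multiplicative measure on [0, 1]: at each generation every interval
# is halved and its measure is split into fractions p and 1 - p.  At level n the
# coarsegrained description consists of 2^n boxes of size l = 2^(-n) carrying
# measures p^k (1-p)^(n-k).  For this measure tau(q) is known in closed form,
#     tau(q) = -log2( p^q + (1 - p)^q ),
# and the f(alpha)-spectrum follows from the Legendre transform
#     alpha = d tau / d q,    f(alpha) = q * alpha - tau(q).

p = 0.3                               # illustrative weight (any 0 < p < 1, p != 1/2)
q = np.linspace(-10.0, 10.0, 2001)

tau = -np.log2(p**q + (1.0 - p)**q)
alpha = np.gradient(tau, q)           # numerical derivative d tau / d q
f = q * alpha - tau

# The spectrum is a smooth convex curve between alpha_min = -log2(max(p, 1-p))
# and alpha_max = -log2(min(p, 1-p)), with maximum f = 1, the dimension of the
# support of the measure.
print("alpha range: %.4f .. %.4f" % (alpha.min(), alpha.max()))
print("max f(alpha) = %.4f at alpha = %.4f" % (f.max(), alpha[f.argmax()]))
```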
We have seen by the example of the Lorenz model that strange attractors can lead at least approximately to one-dimensional chaotic maps (Chapter 2). In that case, one starts with a certain time series {z(t)} that follows from plotting maxima of the z coordinate against the time, yielding a sequence of numbers z(t_n) = z_n at discrete times t_n, where n = 1, 2, 3, … In the Lorenz model, the phase space flow is three-dimensional and the points on the orbit that include the discrete time series {z_n} do not fall within a single plane, nor is there any known simple analytic pattern among the various times {t_n}.
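To see this construction explicitly, here is a minimal numerical sketch (an illustration, not the book's procedure verbatim): it integrates the Lorenz equations at the classical parameter values, records the successive maxima z_n of z(t), and pairs them as (z_n, z_{n+1}); plotted, the pairs collapse onto a nearly one-dimensional curve, the Lorenz map. The integration time, initial condition, and transient cutoff are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz equations at the classical parameters sigma = 10, b = 8/3, r = 28.
def lorenz(t, u, sigma=10.0, b=8.0 / 3.0, r=28.0):
    x, y, z = u
    return [sigma * (y - x), x * (r - z) - y, x * y - b * z]

# Integrate long enough to discard the transient and collect many maxima of z(t).
sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0],
                max_step=0.01, rtol=1e-8, atol=1e-8)
t, z = sol.t, sol.y[2]

# Successive local maxima z_n of the z coordinate (interior sample points that
# exceed both neighbours), skipping an initial transient of 20 time units.
mask = (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]) & (t[1:-1] > 20.0)
zn = z[1:-1][mask]

# Return map: the pairs (z_n, z_{n+1}) fall, to good accuracy, on a single
# tent-shaped curve -- the approximately one-dimensional Lorenz map.
pairs = np.column_stack((zn[:-1], zn[1:]))
print(pairs[:5])
```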
We use the word ‘universal’ in the sense of ‘including, pertaining to, affecting all members of a class or group’. The important idea is therefore that of a ‘universality class’ and it is necessary to have a criterion for membership.
The modern use of these phrases in physics stems from the renormalization group theory of phase transitions in classical statistical mechanics, where all models with the same symmetry and dimension yield the same critical exponents independently of other details of the Hamiltonian, so long as corrections to scaling are ignored. The predictions of Lorenz, and of Ruelle and Takens, had already been made but had generated no large following; just as broken symmetries dominated the physics of the 1960s, the 1970s were the heyday of critical phenomena and Wilson's renormalization group method. So, when Feigenbaum (1978, 1980) argued near the end of that decade that the ‘period-doubling’ transition to chaos yields universal critical exponents, many physicists became excited to learn what was meant by ‘deterministic chaos’ and then had to learn about the ‘strange’ mathematical objects called Cantor sets, because the period-doubling limit defines a particular Cantor set.
Feigenbaum developed his renormalization group theory of period doubling by starting with an iterative map of the interval, the logistic map x → f(x,D) = Dx(1 – x). He showed that all smooth maps of the interval with a quadratic maximum (Fig. 6.1) should yield the same critical exponents, so that the order of the maximum is one factor that determines the universality class.
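A minimal iteration of the logistic map makes the period-doubling sequence visible numerically (a sketch; the parameter values below are illustrative and the tolerance and iteration counts are arbitrary): as D grows, the long-time orbit doubles its period, with the doublings accumulating near D ≈ 3.5699.

```python
def logistic(x, D):
    """One step of the logistic map x -> D*x*(1 - x)."""
    return D * x * (1.0 - x)

def attracting_cycle(D, n_transient=2000, n_keep=128, tol=1e-9):
    """Iterate away the transient and return the distinct values visited by
    the long-time orbit; for a period-p attracting cycle this list has p entries."""
    x = 0.4                         # arbitrary initial condition in (0, 1)
    for _ in range(n_transient):
        x = logistic(x, D)
    orbit = []
    for _ in range(n_keep):
        x = logistic(x, D)
        orbit.append(x)
    distinct = []
    for v in sorted(orbit):         # count distinct values up to the tolerance
        if not distinct or v - distinct[-1] > tol:
            distinct.append(v)
    return distinct

# Illustrative parameter values on either side of the first few doublings;
# the accumulation point of the sequence is D_inf ~ 3.5699456.
for D in (2.8, 3.2, 3.5, 3.56, 3.567):
    print("D = %.3f  period ~ %d" % (D, len(attracting_cycle(D))))
```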
The study of deterministic chaos by iterated maps goes back to the mathematician H. Poincaré, but did not become a part of theoretical physics until after M. Feigenbaum's discovery, and analysis by a renormalization group method, of universal critical exponents at the transition to chaos in a class of one-dimensional maps (one-dimensional maps had also been studied as paradigms of chaos in higher-dimensional systems by Lorenz and Grossmann). Since the discovery of universality at transitions to chaos, and the observation of period-doubling sequences by A. Libchaber and his co-workers in fluid mechanics experiments, much has been written about deterministic chaos and fractals. However, one thing must be stated in the beginning: although this book is primarily about iterated maps, the method of analysis and choice of emphasis make it very different from all of the others. It is written for those who not only want an introduction to modern developments in nonlinear dynamics and fractals, but also want to understand the following questions: How can a deterministic trajectory be unpredictable? How can nonperiodic chaotic trajectories be computed? Is information loss avoidable or necessary in a deterministic chaotic system? Are deterministic chaotic orbits random? What are multifractals, and where do they come from? Why do we study iterated maps instead of differential equations?
There are in nature both states and processes that are described in completely different contexts as disorderly, noisy, random, chaotic, or turbulent. An interesting example of disorder is provided by the energy-eddy cascade in fully developed turbulence which appears, at least superficially, to be a very different problem from that of the solid state disorder of a glassy state. However, there is a common thread that connects all disordered phenomena in nature: they appear as noisy, complicated patterns that require a statistical description, in stark contrast with the periodic patterns of a crystalline lattice or a laminar flow. Is the assumption of randomness fundamentally necessary for the description of certain kinds of disorder in nature? Let us begin by comparing the idea of randomness with deterministic chaos.
True randomness presumes indeterminism: given the state of a random system at one time, there is in principle and in practice no formula or algorithm from which one can predict which possible state will occur at the next instant. Or, given a pattern as the end result of a random process, there is in principle no way to predict what will be the pattern in an identical repetition of the process starting from the same initial state. The resulting uncertainty gives rise to statistics that are describable empirically by probabilities.
Smooth curves, or curves where the slope is at least piecewise well defined, have a definite length. When one measures the length of such a curve to higher and higher precision, by using a sequence of shorter and shorter rulers, for example, then one obtains corrections to the previous length measurement in the form of higher-order decimals, but the leading decimals are not changed if the former length measurement was at all accurate. These curves are the foundation for Euclidean and non-Euclidean geometry and we are familiar with them from standard courses in mathematics and physics. The elliptic orbit of the earth about the sun provides only one of many possible examples of the smooth orbits of integrable dynamics.
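The convergence of such length measurements can be checked with a short numerical sketch (an illustration added here; the curve, ruler lengths, and sampling density are arbitrary choices): walking along a smooth curve, here half of the unit circle, with shorter and shorter rulers changes only the trailing decimals of the measured length.

```python
import numpy as np

def ruler_length(points, ruler):
    """Walk along a densely sampled curve with a fixed ruler length and return
    (number of steps) * ruler, the 'divider' estimate of the curve's length.
    The final partial step, shorter than one ruler, is dropped."""
    length = 0.0
    anchor = points[0]
    for p in points[1:]:
        if np.hypot(*(p - anchor)) >= ruler:
            length += ruler
            anchor = p
    return length

# Smooth test curve: half of the unit circle, exact length pi.
theta = np.linspace(0.0, np.pi, 50001)
curve = np.column_stack((np.cos(theta), np.sin(theta)))

for ruler in (0.5, 0.1, 0.02, 0.004):
    print("ruler = %.3f   measured length = %.6f" % (ruler, ruler_length(curve, ruler)))
# The leading digits stabilise near pi = 3.14159...; for a fractal curve the
# same procedure keeps growing without limit as the ruler shrinks.
```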
In stark contrast with mathematical smoothness, much of nature is made up of collections of fragments whose evolution and form seem at first glance to be beyond the possibility of description in the theory of differential equations, unless ‘random noise’ is included ‘ad hoc’ as an external force. A few examples are coastlines, mountain ranges, aggregates of soot particles, the pore space of sandstone, and the eddy cascade in turbulence.
Since the beginning of physics, the study of theoretical mechanics has been synonymous with the study of the gross regularities of nature. Regularity of occurrence either in space or in time depends implicitly upon an underlying stability in the development of chains of events whereby small changes in starting conditions do not produce the need for a statistical description: nearby starting conditions must yield patterns of trajectories that are similar in appearance at later times. At the other extreme lie phenomena where attempts at reproducibility fail. They fail in the sense that ‘repeated identical experiments’, experiments where the starting conditions are as close to one another as experimental error will permit, yield very different, apparently unrelated patterns of trajectories as time goes on. This kind of behavior we call statistical. It has been widely assumed that the cause of statistical behavior is randomness, which, if one takes the word literally, is the same as saying that there is no identifiable cause at all, because the explanation of phenomena in terms of cause and effect requires a deterministic description of chains of events.
Irregularities in the form of nonequilibrium disordered phenomena were once presumed to be describable only with the aid of assumptions of randomness: a deterministic description was thought either to be impossible or else was too complicated, in principle, to be practical. One reason why a deterministic description of disorder was thought to be impractical was the incorrect but widely held belief that apparent randomness in deterministic equations required for its appearance a large number of degrees of freedom.
Integrable conservative systems: symmetry, invariance, conservation laws, and motion on invariant tori in phase space
Systems that show instability with respect to small changes in initial data were neither studied nor taught in most physics departments until nearly two decades after Lorenz's discovery of the butterfly effect. The reason for this is in part that classical mechanics was regarded by many physicists as a dead subject and was often taught mainly as preparation for the study of quantum mechanics: dissipative systems were largely ignored and the study of Hamiltonian mechanics was restricted to a very special case, that of completely integrable (or just ‘integrable’) systems. Mechanics was typically taught as if there were no systems other than the integrable ones. One reason for this was the very strong influence of progress that had been made in quantum theory by studying systems that were completely solvable by using symmetry. Whenever a dynamics problem can be solved by symmetry, the system is called integrable, or ‘completely integrable’. This terminology has been used in quantum as well as in classical mechanics, and for the same reason: in both cases, the method proceeds by finding a (finite) complete set of commuting infinitesimal generators (complete set of commuting operators).
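Stated as a formula, and added here only as the standard textbook definition rather than a quotation from this text: a Hamiltonian system with n degrees of freedom is completely integrable in Liouville's sense when it possesses n functionally independent constants of the motion I_1 = H, I_2, …, I_n that are in involution,

\[
  \{ I_j, I_k \} \;=\; \sum_{m=1}^{n}
  \left(
    \frac{\partial I_j}{\partial q_m}\,\frac{\partial I_k}{\partial p_m}
    - \frac{\partial I_j}{\partial p_m}\,\frac{\partial I_k}{\partial q_m}
  \right) = 0,
  \qquad j, k = 1, \dots, n,
\]

in which case the bounded motion is confined to n-dimensional invariant tori in phase space and can be written in action-angle variables, with each action J_k constant and each angle advancing at a constant frequency ω_k(J).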