The exact results presented in the previous chapter allow us to obtain the scaling exponents for d = 1, and reduce the number of independent scaling exponents to one. The same results can be obtained using the dynamic renormalization group method, which we now develop and use to study the scaling properties of the KPZ equation. In particular, we analyze the ‘flow equations’ and extract the exponents describing the KPZ universality class for d = 1. We also discuss numerical results leading to the values of the scaling exponents for higher dimensions.
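For orientation, these relations can be stated compactly. The scaling relation below follows from the Galilean invariance of the KPZ equation, and the d = 1 values are the standard results for the KPZ universality class, quoted here for reference (α, β and z are the roughness, growth and dynamic exponents defined in the next section):

\[
\alpha + z = 2, \qquad \beta = \frac{\alpha}{z},
\]
\[
d = 1:\quad \alpha = \tfrac{1}{2}, \qquad \beta = \tfrac{1}{3}, \qquad z = \tfrac{3}{2}.
\]

Since α + z = 2 ties the exponents together, a single independent exponent determines the other two, which is the reduction referred to above.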
Basic concepts
So far, we have argued that we can distinguish between various growth models based on the values of the scaling exponents α, β and z. The existence of universal scaling exponents and their calculation for various systems is a central problem of statistical mechanics. A main goal for many years has been to calculate the exponents for the Ising model, a simple spin model that captures the essential features of many magnetic systems. A major breakthrough occurred in 1971, when Wilson introduced the renormalization group (RG) method to permit a systematic calculation of the scaling exponents. Since then the RG has been applied successfully to a large number of interacting systems, by now becoming one of the standard tools of statistical mechanics and condensed matter physics. Depending on the mathematical technique used to obtain the scaling exponents, the RG methods can be partitioned into two main classes: real space and k-space (Fourier space) RG.
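For reference, the exponents α, β and z are defined through the Family–Vicsek scaling form for the interface width w(L, t) of a system of size L (a standard form, quoted here for orientation):

\[
w(L,t) \sim L^{\alpha}\, f\!\left(\frac{t}{L^{z}}\right),
\qquad
f(u) \sim
\begin{cases}
u^{\beta}, & u \ll 1,\\
\text{const}, & u \gg 1,
\end{cases}
\]

so that w grows as t^β at early times and saturates at a level ∼ L^α after a crossover time t_× ∼ L^z; matching the two regimes gives β = α/z.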
It is a truism to remark that no one – not even a theoretical physicist – can predict the future. Nonetheless, after asking the beleaguered reader to indulge in the rather extensive ‘banquet’ of the preceding 27 chapters, it seems only fair to offer a light ‘dessert’ that affords some outlook and perspective on this rapidly evolving field.
What concepts loom above the details is a question worth addressing at the end of any large meal. Charles Kittel wrote the first edition of Introduction to Solid State Physics almost 50 years ago. He surely realized that solid state physics was a rapidly evolving field, so his book ran the risk of becoming dated in short order. Therefore the first chapter systematically discusses the various crystal symmetries – and the group-theoretic mathematics that describes these symmetries. The topics comprising solid state physics have changed rather dramatically, and most chapters of Kittel's 7th edition hardly resemble the chapters of the first edition. Nevertheless, the opening chapter of the first edition could serve as well today as an introduction to the essential underpinnings of the subject.
Inspired by Kittel's example, we have attempted in this short book to highlight where possible what seems to us to be the analog for disorderly surface growth of the various symmetries obeyed by crystalline materials. These newer ‘symmetries’, described using terms that may frighten the neophyte – such as scale invariance and self-affinity – are as straightforward to describe as translation, rotation, and inversion.
The increasing interest of researchers in the basic properties of growth processes has provided the initiative for a number of experimental studies designed to check the applicability of various theoretical ideas to experimental systems. While many experimental studies have been inspired by the KPZ theory, most have failed to provide support for the KPZ prediction that α = ½. Instead, most data suggest that α > ½. These experimental results initiated a closer look at the theory, and led to the discovery that quenched noise affects the scaling exponents in unexpected ways. In this chapter, we discuss some of these key experiments, including fluid-flow experiments, paper wetting, propagation of burning fronts, growth of bacterial colonies and paper tearing. Atom deposition in molecular beam epitaxy, which is one important class of experiments on kinetic roughening of interfaces, will be discussed in Chapter 12. The new theoretical ideas needed to understand the effect of atomic diffusion on the roughening process will be developed at that time.
Fluid flow in a porous medium
Two-phase fluid flow experiments have long been used to study various growth phenomena. The Hele–Shaw cell, well known from studies of growth instabilities in viscous fingering, has proved to be a useful experimental setup for the study of the growth of self-affine interfaces. A typical setup used in these experiments (Fig. 11.1) is a thin horizontal Hele–Shaw cell made from two transparent plates.
Most of this book deals with local growth processes, for which the growth rate depends on the local properties of the interface. For example, the interface velocity in the BD model depends only on the height of the interface and its nearest neighbors. However, there are a number of systems for which nonlocal effects contribute to the interface morphology and growth velocity. Such growth processes cannot be described by local growth equations, such as the KPZ equation, unless nonlocal effects are explicitly incorporated. In this chapter we discuss the phenomena that lead to nonlocal effects, as well as models that describe nonlocal growth processes.
Diffusion-limited aggregation
Probably the most famous cluster growth model is diffusion-limited aggregation (DLA). The model is illustrated in Fig. 19.1. We fix a seed particle on a central lattice site and release another particle from a random position far from the seed. The released particle moves following a Brownian trajectory until it reaches one of the four nearest neighbors of the seed, whereupon it sticks, forming a two-particle cluster. Next we release a new particle, which can stick to any of the six perimeter sites of this two-particle cluster. This process is then iterated many times. In Fig. 19.2, we show clusters resulting from the deposition of 5 × 10⁵, 5 × 10⁶, and 5 × 10⁷ particles.
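To make the growth rule concrete, here is a minimal Python sketch of the algorithm just described. The spawn and kill radii are standard efficiency devices of DLA simulations rather than part of the model's definition, and all parameter names are ours:

```python
import math
import random

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def grow_dla(n_particles, spawn_margin=5, kill_margin=20, rng_seed=0):
    """Grow a DLA cluster on the square lattice; return the occupied sites."""
    rng = random.Random(rng_seed)
    occupied = {(0, 0)}   # the fixed seed particle at the origin
    max_r = 0.0           # distance of the farthest cluster site from the seed

    added = 0
    while added < n_particles:
        # Release a walker at a random point on a circle just outside the cluster.
        theta = rng.uniform(0.0, 2.0 * math.pi)
        r = max_r + spawn_margin
        x, y = round(r * math.cos(theta)), round(r * math.sin(theta))

        while True:
            # Discard walkers that stray far beyond the cluster; re-release.
            if math.hypot(x, y) > max_r + kill_margin:
                break
            # Stick as soon as one of the four nearest-neighbor sites is occupied.
            if any((x + dx, y + dy) in occupied for dx, dy in NEIGHBORS):
                occupied.add((x, y))
                max_r = max(max_r, math.hypot(x, y))
                added += 1
                break
            # Otherwise take one step of the Brownian (random-walk) trajectory.
            dx, dy = rng.choice(NEIGHBORS)
            x, y = x + dx, y + dy

    return occupied

# A small example; the clusters in Fig. 19.2 use up to 5 × 10⁷ particles.
cluster = grow_dla(1000)
print(len(cluster), "occupied sites")
```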
As discussed in the previous chapters, we can distinguish the various growth processes based on the concept of universality. Interfaces that belong to the same universality class are described by the same scaling exponents and by the same continuum equation.
The universality class is determined by the physical processes taking place at the surface. There are three basic microscopic processes that can play a major role in this respect: deposition, desorption, and surface diffusion. In addition to these, nonlocal effects such as shadowing may play a decisive role in shaping the interface morphology. While deposition must occur, the other microscopic processes may be irrelevant or even absent altogether. For example, in many systems desorption is negligible, while at low temperatures surface diffusion may be negligible.
A number of recent experiments support the existence of kinetic roughening in various deposition processes. It is possible to measure both the roughness exponent α characterizing the interface morphology, and the exponent β quantifying the dynamics of the roughening process. However, the emerging picture is far from complete, and there is no unambiguous support for the various universality classes.
There are a number of reasons for this situation. First, it is only recently that experimental groups have initiated systematic investigations of the various roughening processes. While the results are quite encouraging, more work is needed to obtain a coherent picture. Second, due to the complicated nature of the competing effects discussed in the previous chapters, the interpretation of the data is often not straightforward.
The discovery that scaling laws and continuum theories are applicable to molecular beam epitaxy (MBE) has generated increasing interest among both experimentalists and theorists. The closer study of these deposition processes reveals the decisive role played by surface diffusion of the deposited particles. From the experimental point of view, these studies re-focus attention on a neglected aspect of MBE growth processes: roughening of a growing interface.
There are two complementary approaches to crystal growth:
(a) Atomistic approaches, in which the position of every atom is well defined. Our knowledge of the behavior of individual atoms has increased due to the high resolution of scanning tunneling microscopy (STM). STM is capable of identifying not only the structure of the lattice, but the positions of the individual atoms as well. First-principles calculations provide insight into the energetics of atomic motion on solid surfaces. Based on this detailed information, modeling of different growth processes on the atomic level is becoming a widely used tool to gain deeper insight into the collective nature of atomic motion and deposition processes.
(b) Continuum approaches view the interface on a coarse-grained scale, in which every property is averaged over a small volume containing many atoms. Neglecting the discrete nature of the growth process, continuum theories attempt to capture the essential mechanisms determining the growth morphology. Their predictive power is limited to length scales larger than the typical interatomic distance, providing information on the collective nature of the growth process, such as the variation in the interface roughness or correlation functions.
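As a concrete illustration (our choice of example, anticipating the MBE chapters), the simplest continuum equation for growth dominated by surface diffusion is the linear fourth-order equation

\[
\frac{\partial h(\mathbf{x},t)}{\partial t} = -K\,\nabla^{4} h + \eta(\mathbf{x},t),
\]

where h is the coarse-grained interface height, K a coefficient set by surface diffusion, and η an uncorrelated deposition noise; as emphasized above, predictions drawn from such an equation are meaningful only on length scales well above the interatomic distance.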
ABSTRACT. In this paper experiments performed with the one-atom maser are reviewed. Furthermore, possible experiments to test basic quantum physics are discussed.
Introduction: The One-Atom Maser
The most promising avenue to study the generation process of radiation in lasers and masers is to drive a maser consisting of a single-mode cavity by single atoms. This system, at first glance, seems to be another example of a Gedanken-experiment treated in the pioneering work of Jaynes and Cummings (1963), but such a one-atom maser (Meschede, Walther and Müller, 1985) really exists and can in addition be used to study the basic principles of radiation-atom interaction. The advantages of the system are:
it is the first maser which sustains oscillations with, on average, less than one atom in the cavity,
this setup allows one to study in detail the conditions necessary to obtain nonclassical radiation, especially radiation with sub-Poissonian photon statistics, generated in a maser system directly by a Poissonian pumping process, and
it is possible to study a variety of phenomena of a quantum field including the quantum measurement process.
What are the tools that make this device work? It was the enormous progress in constructing superconducting cavities, together with the laser preparation of highly excited atoms – Rydberg atoms – that made the realization of such a one-atom maser possible.
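For reference, the atom–field interaction in this system is described by the Jaynes–Cummings Hamiltonian, which in its standard rotating-wave form reads

\[
H = \hbar\omega\, a^{\dagger}a
  + \tfrac{1}{2}\hbar\omega_{0}\,\sigma_{z}
  + \hbar g\,\bigl(a^{\dagger}\sigma_{-} + a\,\sigma_{+}\bigr),
\]

where a is the annihilation operator of the cavity mode at frequency ω, σ_z and σ± act on the two relevant Rydberg levels separated by ω₀, and g is the atom–field coupling constant.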
ABSTRACT. During the Spring of 1966, Ed Jaynes presented a seminar course on quantum electronics that included the now famous “Jaynes-Cummings Model” and his Neoclassical Theory (NCT). As part of this seminar series, the NCT description of a two-level atom in an applied field was formulated as a formidable set of coupled nonlinear differential equations. Undaunted, Ed posted the equations on the Washington University Physics Department bulletin board and offered a prize of “a steak dinner for two” at a restaurant of the choice of whoever solved the equations. Within days, Bill Mitchell was able to present an elegant solution at one of the quantum electronics seminars. This early success of a new approach to doing theoretical physics encouraged Ed to challenge the knowledge-hungry Physics Department with Steak Dinner Problem II. This problem was a specific mathematical formulation of the exact (i.e. without the Rotating Wave Approximation) description of the interaction of a two-level atom with a single quantized electromagnetic field mode. Jaynes' formulation of the problem appears to have anticipated the use of Bargmann Hilbert space in QED. This problem has remained unsolved for 26 years in spite of the efforts of numerous researchers, most of whom were probably unaware of Jaynes' offered prize. Recent efforts to solve this problem will be described.
“If it were easy, it would already have been done.”
ABSTRACT. A reformulation of the Dirac theory reveals that iħ has a geometric meaning relating it to electron spin. This provides the basis for a coherent physical interpretation of the Dirac and Schrödinger theories wherein the complex phase factor exp(−iφ/ħ) in the wave function describes electron zitterbewegung, a localized, circular motion generating the electron spin and magnetic moment. Zitterbewegung interactions also generate resonances which may explain quantization, diffraction, and the Pauli principle.
You know, it would be sufficient to really understand the electron.
—Albert Einstein
Introduction
Edwin T. Jaynes is one of the great thinkers of twentieth century science. More than anyone else he has deepened and clarified the role of statistical inference in science and engineering. To my mind, his greatest accomplishment has been to recognize that in the evolution of statistical mechanics the principles of physics had gotten confused with principles of statistical inference, and then to show how the two can be cleanly separated to produce a simpler yet more powerful theoretical system.
I share with Ed Jaynes the belief that quantum mechanics suffers from an analogous muddle of probability with physics, which is at the root of the perennial controversy over physical interpretation.
Though a Jaynesian resolution of the “quantum muddle” remains elusive, I will report here on a promising possibility that has been overlooked.
ABSTRACT. It is pointed out that the conclusions drawn from a recent quantum interference experiment with light suggest an operational resolution of the Schrödinger cat paradox.
On this occasion in honor of Prof. E.T. Jaynes, we recall that he devoted some of his research efforts to the interpretation of quantum mechanics, which led him to propose several experimental tests. Although the ‘neoclassical’ theory he developed was ultimately not confirmed by experiment, it nevertheless played a role in encouraging us to think critically about quantum mechanics and to carry out new experiments. This short contribution is concerned with a well-known interpretational problem of quantum mechanics.
The quantum paradox known as Schrödinger's cat, in which the cat is cast in a linear superposition of the state of being alive and the state of being dead, has been debated since the beginnings of quantum mechanics. Whereas most physicists are ready to concede the existence of superposition states for a microscopic quantum system like an atom, such states appear to be ruled out for a macroscopic system like a cat by common experience. The question then arises at which level classical concepts take over from quantum mechanical ones.
In a very clear and readable discussion of this and other paradoxes Glauber has pointed out that the noise inevitably associated with the amplification process accompanying the measurement of a microscopic system leaves the cat in a mixed state rather than a pure one.
ABSTRACT. The Jaynes-Cummings Model (JCM), a soluble fully quantum mechanical model of an atom in a field, was first used (in 1963) to examine the classical aspects of spontaneous emission and to reveal the existence of Rabi oscillations in atomic excitation probability for fields with sharply defined energy (or photon number). For fields having a statistical distribution of photon numbers the oscillations collapse to an expected steady value. In 1980 it was discovered that with appropriate initial conditions (e.g. a near-classical field), the Rabi oscillations would eventually revive – only to collapse and revive repeatedly in a complicated pattern. The existence of these revivals, present in the analytic solutions of the JCM, provided direct evidence for discreteness of field excitation (photons) and hence for the truly quantum nature of radiation. Subsequent study revealed further nonclassical properties of the JCM field, such as a tendency of the photons to antibunch. Within the last two years it has been found that during the quiescent intervals of collapsed Rabi oscillations the atom and field exist in a macroscopic superposition state (a Schrödinger cat). This discovery offers the opportunity to use the JCM to elucidate the basic properties of quantum correlation (entanglement) and to explore still further the relationship between classical and quantum physics.
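For orientation, a standard textbook result (not quoted from the paper itself) exhibits the behavior described above: for an atom prepared in its excited state and resonantly coupled, with strength g, to a field with photon-number distribution p_n, the JCM atomic inversion is

\[
W(t) = \sum_{n=0}^{\infty} p_{n}\,\cos\!\bigl(2g\sqrt{n+1}\,t\bigr).
\]

For a field with sharply defined photon number only one Rabi frequency 2g√(n+1) is present; for a statistical mixture the incommensurate frequencies dephase (collapse) and, for a near-classical coherent field, later partially rephase (revival).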
ABSTRACT. In this paper it is shown how Jaynes' maximum entropy principle or, more generally, his maximum calibre principle can be cast in such a form that the stochastic process that underlies observed data can be determined under the assumption that the process is Markovian. Under suitable constraints it becomes possible to derive the Fokker-Planck equation and the Itô-Langevin equation of that process.
Introduction
Jaynes' maximum entropy principle has found widespread applications not only in systems in thermal equilibrium but also in systems far from it. In addition, Jaynes treated time-dependent processes by means of his maximum calibre principle. In the present paper we show how his principles can be cast in such a form that the underlying process, probed through certain correlation functions, can be represented by a Fokker-Planck equation and an Itô-Langevin equation. It will be assumed throughout our paper that the process is Markovian. At the end of this contribution, explicit examples treated recently by Lisa Borland and the present author will be presented.
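To fix notation for what follows (the convention chosen here is ours; others differ by factors of two), the two objects in question have the standard one-variable forms

\[
dq = K(q)\,dt + g(q)\,dW(t) \qquad \text{(Itô-Langevin)},
\]
\[
\frac{\partial P(q,t)}{\partial t}
  = -\frac{\partial}{\partial q}\bigl[K(q)\,P(q,t)\bigr]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial q^{2}}\bigl[g^{2}(q)\,P(q,t)\bigr]
  \qquad \text{(Fokker-Planck)},
\]

where K is the drift, g the noise strength, and W(t) a Wiener process.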
Derivation of the Fokker-Planck Equation
In order to express the definition of a Markov process in a rigorous mathematical form, we choose a time sequence \( t_0 < t_1 < \cdots < t_M \). We may attribute a probability distribution to the path taken by the state vectors at the corresponding times.
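Explicitly, writing q_k for the state at time t_k (our notation), the Markov property states that the conditional distribution depends only on the most recent earlier time,

\[
P(q_{M},t_{M}\mid q_{M-1},t_{M-1};\ \ldots;\ q_{0},t_{0})
  = P(q_{M},t_{M}\mid q_{M-1},t_{M-1}),
\]

so that the distribution over the whole path factorizes into two-time transition probabilities:

\[
P(q_{M},t_{M};\ \ldots;\ q_{0},t_{0})
  = \Biggl[\,\prod_{k=1}^{M} P(q_{k},t_{k}\mid q_{k-1},t_{k-1})\Biggr]\, P(q_{0},t_{0}).
\]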
ABSTRACT. E.T. Jaynes has been the central figure for over three decades in showing how maximum entropy estimation (MEE) provides an extension of logic to cases where one cannot carry out Aristotelian deductive reasoning. Here I will review two early applications of MEE which I hope will provide some insight into how these ideas were being used in quantum and statistical mechanics around 1960.
In May of ‘61 I turned in my Ph.D. thesis (Scalapino, 1961) to Stanford University and, following a well-known dictum, went on to work on other problems. Now, three decades later, on this, the occasion of Ed Jaynes’ seventieth birthday, I decided to look back to see what we were doing when I was first getting to know Ed and his special approach to problems.
Not surprisingly, my thesis contains several applications of the principle of Maximum Entropy Estimation (MEE). In 1957 Ed had written two seminal articles (Jaynes, 1957(a), 1957(b)) showing how one could use Shannon's (1948) Information Theory to construct density matrices for a variety of different problems in equilibrium statistical mechanics. I had been very much taken with this work and wanted to understand how it could be applied to other systems.