Why should special states occur as initial conditions in every physical situation in which they are needed? Half this book has been devoted to making the points that initial conditions may not be as controllable as they seem; that there may be constraints on microscopic initial conditions; that this would not have been noticed; that such constraints can arise from two-time or future conditioning; and that in our universe such future conditioning may well be present, although, as remarked, cryptic. In this chapter I will take up more detailed questions: what future conditions could give rise to the need for our ‘special’ states, and why those particular future conditions should be imposed.
Before going into this there is a point that needs to be made. Everything in the present chapter could be wrong and the thesis of Chapter 6 nevertheless correct. It is one thing to avoid grotesque states (and solve the quantum measurement problem) by means of special states; it is another to provide a rationale for their occurrence. I say this not only to highlight the conceptual dependencies of the theses in this book, but also because there is a good deal of hand waving in the coming chapter and I don't want it to reflect unfavorably on the basic proposal. As pointed out earlier, the usual thermodynamic arrow of time can be phrased as follows: initial states are arbitrary, final states special.
Pure quantum evolution is deterministic, ψ → exp(−iHt/ħ)ψ, but, as in classical mechanics, probability enters because a given macroscopic initial condition contains microscopic states that lead to different outcomes; the relative probability of those outcomes equals the relative abundance of the microscopic states for each outcome. This is the postulated basis for the recovery of the usual quantum probabilities, as discussed in Chapter 6. In this chapter we take up the question of whether the allowable microstates (the ‘special’ states) do indeed come with the correct abundance. To recap: ‘special’ states are microstates not leading to superpositions of macroscopically different states (‘grotesque’ states). For a given experiment and for each macroscopically distinct outcome of that experiment these states form a subspace. We wish to show that the dimension of that subspace is the relative probability of that outcome.
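In symbols (my notation, not the author's): writing S_α for the subspace of special states leading to outcome α, the postulate to be checked is

```latex
% Probability postulate: the relative likelihood of a macroscopic outcome
% equals the relative abundance (dimension) of its special-state subspace.
\frac{\Pr(\alpha)}{\Pr(\beta)} = \frac{\dim \mathcal{S}_\alpha}{\dim \mathcal{S}_\beta},
\qquad \text{equivalently} \qquad
\Pr(\alpha) = \frac{\dim \mathcal{S}_\alpha}{\sum_\gamma \dim \mathcal{S}_\gamma}.
```

The goal stated above is to show that these ratios reproduce the standard quantum probabilities.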
This is an ambitious goal, especially considering the effort needed to establish that there are any special states—the subject of Chapter 7. As remarked there, the special states exhibited are likely to be only a small and atypical fraction of all special states in the physical apparatus being modeled (e.g., the cloud chamber). In one example (the decay model) there is a remarkable matching of dimension and conventional probability, but I would not make too much of that. What is especially challenging about the present task is that we seek a universal distribution.
In Chapter 6 I presented a proposal for how and why grotesque states do not occur in Nature. In subsequent chapters I explored consequences and found subsidiary requirements, such as Cauchy-distributed kicks. To find out whether all or part of our scheme is the way Nature works, we turn to experiment. How to turn to experiment is not so obvious, since the basic dynamical law, ψ → exp(−iHt/ħ)ψ, is the same as for most other theories. Our basic assertion concerns not the dynamical law but the selection of states. Therefore it is that assertion that must be tested. For example, one way is to set up a situation where the states we demand, the ‘special’ states, cannot occur. Then what happens? Another part of our theory is the probability postulate, and this deals not only with the existence of special states but with their abundance. It enters in the recovery of standard probabilities but has far-reaching consequences that may well lead to the best tests of the theory. Such tests arise in the context of EPR situations.
The experimental tests fall into the following categories.
Precluding a class of special states. This should prevent a class of outcomes. If the changes made to the system in order to preclude the class of special states do not affect the predictions of the Copenhagen interpretation, then this provides a test: with the class of special states precluded, our theory forbids the associated outcome, while the Copenhagen interpretation still allows it.
Although the variational principles of classical mechanics lead to two-time boundary value problems, when dealing with the real world everyone knows you should use initial conditions. Not surprisingly, the eighteenth-century statements of classical variational principles were invested with religious notions of teleology: guidance from future paths not taken could only be divine. The Feynman path integral formulation of quantum mechanics makes it less preposterous that a particle can explore non-extremal paths; moreover, it is most naturally formulated using information at two times. In the previous chapter, use of the two-time boundary value problem was proposed as a logical prerequisite for considering arrow-of-time questions. Perhaps this is no less teleological than Maupertuis's principle, except that we remain neutral on how and why those conditions are imposed.
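Schematically, the propagator in the path integral formulation is pinned by data at two times; this standard formula is quoted here only for orientation:

```latex
% Feynman propagator: a sum over all paths with endpoints fixed at two times.
K(x_f, t_f;\, x_i, t_i) = \int_{x(t_i)=x_i}^{x(t_f)=x_f} \mathcal{D}[x(t)]\;
e^{iS[x]/\hbar},
\qquad
S[x] = \int_{t_i}^{t_f} L(x, \dot{x})\, dt .
```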
In this chapter we deal with the technical aspects of solving two-time boundary value problems. In classical mechanics you get into rich existence questions—sometimes there is no solution, sometimes many. For stochastic dynamics the formulation is perhaps easiest, which is odd considering that this is the language most suited to irreversible behavior. Our ultimate interest is the quantum two-time problem and this is nearly intractable.
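To make the classical case concrete, here is a minimal numerical sketch (my illustration, not drawn from the text): a pendulum whose angle is prescribed at two times, handed to SciPy's two-point boundary-value solver. The existence questions mentioned above show up in practice as sensitivity to the initial guess; different guesses can converge to different solutions, or to none.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Pendulum theta'' = -sin(theta), with the angle prescribed at TWO times:
# a two-time boundary value problem rather than an initial value problem.
def rhs(t, y):
    theta, omega = y
    return np.vstack([omega, -np.sin(theta)])

def bc(y_left, y_right):
    # theta(0) = 0 and theta(T) = 1: one condition at each end.
    return np.array([y_left[0] - 0.0, y_right[0] - 1.0])

T = 2.0
t = np.linspace(0.0, T, 100)
y_guess = np.zeros((2, t.size))     # other guesses may find other solutions
sol = solve_bvp(rhs, bc, t, y_guess)

print(sol.status, sol.message)      # status 0 means the solver converged
print("theta(T) =", sol.sol(T)[0])  # should be close to the prescribed value 1
```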
Later in the book I will propose that the universe is most simply described as the solution of a two-time boundary value problem. A natural reaction is to wonder whether this is too constraining. Given that our (lower entropy) past already cuts down the number of possible microstates, are there sufficient microstates to meet a future condition as well?
Things happen. That may seem obvious, but it has also been maintained that all the ‘happening’ does not signify change and that the way of the world is periodic and repetitious:
One generation passeth away, and another generation cometh…
The sun… riseth, and the sun goeth down,
And there is nothing new under the sun.
—Ecclesiastes, Chapter 1
I won't discuss the profound aspects of this passage, but I will do some post-industrial-age nitpicking. Only in the past century has humanity understood a distinction that exists among these cyclic behaviors. For the rising and setting of the sun, there is indeed little that is happening. To a good approximation this is non-dissipative. But as to the coming and going of generations, with the benefit of wisdom gained in building steam engines, we recognize that birth and death can occur only so long as there is a source of negative entropy.
I could continue in this vein and discuss how the failure to distinguish between free and frictional motion confused humanity's greatest minds as they grappled with elementary mechanics. But I wish to begin a technical discussion of irreversibility and only want to draw a lesson of humility from the historical perspective. Until the past few centuries, humanity failed to appreciate the most manifest of time's arrows, the second law of thermodynamics. Unless one realizes that Nature's dynamics are mostly time-symmetric, one does not know that there is a problem.
In this chapter we return to the discussion of quantum gravity which we began in Chapter 4. In the first section we describe some of the technical problems that are encountered in constructing a theory of quantum gravity and some of the ideas that may go into their resolution. We then give a definition of simplicial gravity in arbitrary dimensions and describe a representative sample of the numerical results that have been obtained. It is often convenient to consider the theory in a fixed dimension larger than two. We shall discuss the four-dimensional case since it is physically the most relevant, and will only occasionally consider three-dimensional gravity.
Basic problems in quantum gravity
Formulating a theory of quantum gravity in dimensions higher than two leads to a number of basic questions, some of which go beyond those encountered in dimension two. Among these are the following:
(i) What are the implications of the unboundedness from below of the Einstein–Hilbert action?
(ii) Is the non-renormalizability of the gravitational coupling a genuine obstacle to making sense of quantum gravity?
(iii) What is the relation between Euclidean and Lorentzian signatures and do there exist analogues of the Osterwalder–Schrader axioms allowing analytic continuation from Euclidean space to Lorentzian space-time?
(iv) What is the role of topology, given, for instance, that higher-dimensional topologies cannot be classified?
We do not have answers to these questions and our inability to deal with them may be an indication that there exists no theory of Euclidean quantum gravity in four dimensions or, possibly, that quantum gravity only makes sense when embedded in a larger theory such as string theory.
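To make question (i) concrete: in the Euclidean regime the Einstein–Hilbert action (cosmological term omitted, standard conventions) is

```latex
% Euclidean Einstein--Hilbert action; G is Newton's constant, R the scalar curvature.
S_{EH}[g] = -\frac{1}{16\pi G}\int_M d^4x\, \sqrt{g}\, R .
```

Under a conformal rescaling g → e^{2φ}g the conformal mode φ acquires a kinetic term with the wrong sign, so rapidly varying φ drives S_EH to −∞; this conformal-mode problem is one standard way of seeing the unboundedness.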
The idea of describing the physical world entirely in terms of geometry has a history dating back to Einstein and Klein in the early decades of the twentieth century. This approach to physics had early success in general relativity, but the appearance of quantum mechanics guided the development of theoretical physics in a different direction for a long time. During the past quarter of a century the programme of Einstein and Klein has seen a renaissance, embodied in gauge theories and, more recently, superstring theory. During this time we have also witnessed the happy marriage of statistical mechanics and quantum field theory in the subject of Euclidean quantum field theory, a development which could hardly have taken place without Feynman's path integral formulation of quantization. In this book we shall work almost exclusively in the Euclidean framework.
The unifying theme of the present work is the study of quantum field theories which have a natural representation as functional integrals or, if one prefers, statistical sums, over geometric objects: paths, surfaces and higher-dimensional manifolds. Our philosophy is to work directly with the geometry as far as possible and avoid parametrizations and discretizations that break the natural invariances. In this introductory chapter we give an overview of the subject, put it in perspective and discuss its main ideas briefly.
Lagrangian field theories whose action can be expressed entirely in terms of geometrical quantities such as volume and curvature have a special beauty and simplicity.
The role of random walks as a theoretical tool in physics dates back at least to the explanation of the origin of Brownian motion at the turn of the twentieth century. The universal features of the large-scale phenomena associated with random walks are already transparent in Einstein's derivation of the diffusion equation in his famous 1905 paper on Brownian motion. Ever since, theories of random walks have played an increasingly important role in virtually every branch of physics and now form the basis of the general statistical theory of stochastic processes. The appearance of random walks in elementary particle physics was mediated by Feynman's path integral formulation of quantum theory, and a mathematically rigorous approach to the subject was made possible by the introduction of Euclidean quantum field theory in the 1960s, building on Wiener's earlier work on random walks and the diffusion equation.
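The universality is already visible in the simplest case; the following textbook computation is included only for orientation. A symmetric walk on a line, with step a taken every time τ, obeys

```latex
% Master equation for a symmetric nearest-neighbour walk:
P(x, t+\tau) = \tfrac{1}{2} P(x-a, t) + \tfrac{1}{2} P(x+a, t).
% Expanding to first order in tau and second order in a gives
\frac{\partial P}{\partial t} = D \frac{\partial^2 P}{\partial x^2},
\qquad D = \frac{a^2}{2\tau},
```

and only the combination a²/2τ survives the continuum limit a, τ → 0; microscopically different walks therefore share the same large-scale behaviour.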
We start in the next three sections by introducing various discrete random walk models describing the propagation of scalar particles in space-time. We pay particular attention to critical behaviour in these models and their universality properties: many different discrete models give rise to the same continuum limit. As explained in the Introduction, we are primarily interested in viewing random paths as geometric objects. Hence, we focus on such aspects as reparametrization invariance, which are usually not stressed in the more standard treatments.
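As a minimal concrete instance of such a discrete model (a sketch of mine, with conventions chosen for simplicity, not code from the text): weighting each lattice walk by e^(−μ·length) turns the two-point function into a geometric series in the hopping matrix, and the decay rate of correlations vanishes at a critical μ_c, which is where a continuum limit can be taken.

```python
import numpy as np

# Two-point function of a random walk on a 1d periodic lattice:
# G(x, y) = sum over walks from x to y of exp(-mu * length)
#         = [(I - exp(-mu) K)^{-1}]_{xy},  K = nearest-neighbour hopping matrix.
L = 64
K = np.zeros((L, L))
for i in range(L):
    K[i, (i + 1) % L] = K[i, (i - 1) % L] = 1.0

def two_point(mu):
    return np.linalg.inv(np.eye(L) - np.exp(-mu) * K)

# The geometric series converges for exp(-mu) * lambda_max(K) < 1; here
# lambda_max = 2, so the critical point is mu_c = log 2. Approaching it,
# the exponential decay rate (the 'mass') of G vanishes, the signal that
# a continuum limit exists there.
mu_c = np.log(2.0)
for mu in [mu_c + 0.5, mu_c + 0.1, mu_c + 0.01]:
    G = two_point(mu)[0]
    mass = -np.log(G[2] / G[1])  # local decay rate of G(0, x) in x
    print(f"mu - mu_c = {mu - mu_c:.2f}, mass ~ {mass:.3f}")
```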
When quantizing gravity in the functional integral formalism, the metric on the base manifold M is a dynamical variable that is integrated out in the partition function. This was discussed at length in Chapters 4 and 6 in particular cases. Because of general covariance one actually integrates over equivalence classes of metrics related by diffeomorphisms of M, and expectation values of physical observables are therefore diffeomorphism invariants of M. The central problem of quantum gravity in this formalism is to attribute a mathematical meaning to, as well as a physical interpretation of, such expectation values. It is fair to say that substantial results have been obtained in only two dimensions so far.
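Schematically, and only formally (giving this expression a precise meaning is exactly the central problem just mentioned):

```latex
% Formal expectation value in Euclidean quantum gravity; dividing by the
% volume of the diffeomorphism group implements the integral over
% equivalence classes of metrics.
\langle \mathcal{O} \rangle = \frac{1}{Z} \int
\frac{\mathcal{D}[g]}{\mathrm{Vol}\,\mathrm{Diff}(M)}\; \mathcal{O}[g]\, e^{-S[g]},
\qquad
Z = \int \frac{\mathcal{D}[g]}{\mathrm{Vol}\,\mathrm{Diff}(M)}\; e^{-S[g]} .
```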
Introduction
This chapter is devoted to a discussion of certain examples of theories in which general covariance is realized in a simpler way than by averaging over metrics. For example, a theory is generally covariant if its action is a functional of a set of fields on the manifold M which does not involve a metric at all and for which the functional integration measure over the fields is also metric independent. Alternatively, it may occur that although a metric enters the expression defining the action, the energy–momentum tensor nevertheless vanishes on the physical state space. Perhaps the best-known example of the former is the three-dimensional Chern–Simons gauge theory [391]. Examples of the latter are cohomological field theories [393], which include the two-dimensional twisted N = 2 superconformal models [167].
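For reference, the Chern–Simons action on a three-manifold M illustrates the first mechanism; it is built from the gauge connection A and the wedge product alone, with no metric anywhere (standard form, level k):

```latex
% Chern--Simons action: no metric appears, so general covariance is automatic.
S_{CS}[A] = \frac{k}{4\pi} \int_M \mathrm{Tr}\left( A \wedge dA
+ \tfrac{2}{3}\, A \wedge A \wedge A \right).
```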
The main topic of this book is the work that has been carried out during the last 15 years under the general heading of random surfaces. The original motivation for the study of random surfaces came from lattice gauge theory, where one can represent various quantities of interest as weighted sums over surfaces embedded in a hypercubic lattice. A few years later, with the resurrection of string theory, random surfaces were used as a regularization of that theory and, most recently, random surface models have been applied to two-dimensional quantum gravity. There is also an impressive body of work on random surfaces carried out by membrane physicists, as well as by condensed matter physicists, so one often finds mathematically identical problems being studied in different branches of physics. Random surfaces are therefore not a physical theory but, rather, a theoretical tool and a methodology that can be applied to various physical problems, in the same way as random walks find applications in many branches of science. The formalism that has been developed to deal with random surfaces carries over to the study of higher-dimensional manifolds, which are important for quantizing gravity in higher dimensions.
We address this book primarily to advanced graduate students in theoretical physics but we hope that more experienced researchers in the field, as well as mathematicians, may find it useful.