A close scrutiny of the microlensing results towards the Magellanic Clouds reveals that the stars within the Magellanic Clouds themselves are major contributors as lenses, and that the contribution of MACHOs to dark matter is 0 to 5%. The principal results that lead to this conclusion are the following:
(i) Out of the ∼17 events detected so far towards the Magellanic Clouds, the lens location has been securely determined for one binary-lens event through its caustic-crossing timescale. In this case, the lens was found to be within the Magellanic Clouds. Although less certain, lens locations have been determined for three other events, and in each case the lens is most likely within the Magellanic Clouds.
(ii) If most of the lenses are MACHOs in the Galactic halo, the timescales would imply that the MACHOs in the direction of the LMC have masses of the order of 0.5 M⊙, and the MACHOs in the direction of the SMC have masses of the order of 2 to 3 M⊙. This is inconsistent with even the most flattened model of the Galaxy. If, on the other hand, the events are caused by stars within the Magellanic Clouds, the masses of the stars are of the order of 0.2 M⊙ for both the LMC and the SMC.
(iii) If 50% of the lenses are in binary systems similar to the stars in the solar neighborhood, ∼10% of the events are expected to show binary characteristics.
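The mass estimates in (ii) follow from the Einstein-radius crossing time of an event. As a hedged numerical sketch of that inference (all distances and the transverse velocity below are illustrative assumptions, not values from the text):

```python
import math

# Sketch: relate a microlensing event timescale to the lens mass.
# The Einstein-radius crossing time is t_E = R_E / v_perp, with
#   R_E = sqrt((4 G M / c^2) * D_l * D_ls / D_s).
# All numerical inputs below are illustrative assumptions.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

def einstein_crossing_time(M_solar, D_l_kpc, D_s_kpc, v_kms):
    """Einstein-radius crossing time in days for a point lens."""
    M = M_solar * M_SUN
    D_l, D_s = D_l_kpc * KPC, D_s_kpc * KPC
    D_ls = D_s - D_l
    R_E = math.sqrt(4 * G * M / C**2 * D_l * D_ls / D_s)
    return R_E / (v_kms * 1e3) / 86400.0

# A 0.5 M_sun halo lens halfway to the LMC, with v_perp ~ 200 km/s,
# gives an event lasting a few tens of days:
t_E = einstein_crossing_time(0.5, 25.0, 50.0, 200.0)
```

Since t_E scales only as the square root of the lens mass, the observed timescale distribution constrains the typical mass under a given assumption about where the lenses sit, which is how the 0.5 M⊙ (halo) versus 0.2 M⊙ (Magellanic Cloud stars) alternatives in (ii) arise.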
There are now two cosmological constant problems: (i) why the vacuum energy is so small and (ii) why it comes to dominate at about the epoch of galaxy formation. Anthropic selection appears to be the only approach that can naturally resolve both problems. This approach presents some challenges to particle physics models.
The problems
Until recently, there was only one cosmological constant problem and hardly any solutions. Now, within the span of a few years, we have made progress on both counts. We now have two cosmological constant problems (CCPs) and a number of proposed solutions. In this talk I am going to review the situation, focusing mainly on the anthropic approach and on its implications for particle physics models. I realize that the anthropic approach has a low approval rating among physicists. But I think its bad reputation is largely undeserved. When properly used, this approach is quantitative and has none of the mystical overtones often attributed to it. Moreover, at present it appears to be the only approach that can solve both CCPs. I will also comment on other approaches to the problems.
The cosmological constant is (up to a factor) the vacuum energy density, ρv.
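The factor can be made explicit. In one standard convention (with c = 1), the relation between the cosmological constant and the vacuum energy density is

```latex
\Lambda = 8\pi G\,\rho_v , \qquad
\Omega_\Lambda \equiv \frac{\rho_v}{\rho_c}, \qquad
\rho_c = \frac{3H_0^2}{8\pi G},
```

where ρ_c is the critical density and Ω_Λ the fractional contribution of the vacuum to it; the precise factor depends on the units and sign conventions adopted.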
For physicists, recent developments in astrophysics and cosmology present exciting challenges. We are conducting “experiments” in energy regimes some of which will be probed by accelerators in the near future, and others which are inevitably the subject of more speculative theoretical investigations. Dark matter is an area where we have hope of making discoveries both with accelerator experiments and dedicated searches. Inflation and dark energy lie in regimes where presently our only hope for a fundamental understanding lies in string theory.
Introduction
It is a truism that the development of astronomy, astrophysics, and cosmology relies on our understanding of the relevant laws of physics. It is thus no surprise that my astronomy colleagues tend to know more classical mechanics, electricity and magnetism, and atomic and nuclear physics than my colleagues in particle theory.
As we consider many of the questions which we now face in cosmology, we must confront the fact that we simply do not know the relevant laws of nature. The public often asks us “What came before the Big Bang?” We usually think of this as requiring understanding of physics at the Planck scale. But at present we can't even come close. Ignorance sets in slightly above nucleosynthesis, and becomes severe by the time we reach the weak scale. Some of the questions which trouble us will be settled by experiment over the next decades; some require new theoretical developments. Needless to say, it is possible that much will remain obscure for a long time.
By
Marc Kamionkowski, California Institute of Technology, Mail Code 130-33, Pasadena, CA 91125, USA; kamion@tapir.caltech.edu,
Andrew H. Jaffe, Center for Particle Astrophysics, University of California, Berkeley, CA 94720, USA; jaffe@cfpa.berkeley.edu
Edited by
Mario Livio, Space Telescope Science Institute, Baltimore
Recent measurements of temperature fluctuations in the cosmic microwave background (CMB) indicate that the Universe is flat and that large-scale structure grew via gravitational infall from primordial adiabatic perturbations. Both of these observations seem to indicate that we are on the right track with inflation. But what is the new physics responsible for inflation? This question can be answered with observations of the polarization of the CMB. Inflation robustly predicts the existence of a stochastic background of cosmological gravitational waves with an amplitude proportional to the square of the energy scale of inflation. This gravitational-wave background induces a unique signature in the polarization of the CMB. If inflation took place at an energy scale much smaller than that of grand unification, then the signal will be too small to be detectable. However, if inflation had something to do with grand unification or Planck-scale physics, then the signal is conceivably detectable in the optimistic case by the Planck satellite, or, if not, then by a dedicated post-Planck CMB polarization experiment. Realistic developments in detector technology, together with a proper scan strategy, could produce such a post-Planck experiment that would improve on Planck's sensitivity to the gravitational-wave background by several orders of magnitude on a decade timescale.
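The quoted proportionality can be sketched in standard slow-roll notation (a hedged summary, not taken from the talk itself): the dimensionless strain of the inflationary gravitational-wave background is set by the Hubble rate during inflation,

```latex
h \sim \frac{H}{M_{\rm Pl}}, \qquad
H^2 \simeq \frac{8\pi}{3}\,\frac{V}{M_{\rm Pl}^2}
\;\Longrightarrow\;
h \propto \frac{V^{1/2}}{M_{\rm Pl}^2}
= \left(\frac{E_{\rm inf}}{M_{\rm Pl}}\right)^{2},
\qquad E_{\rm inf} \equiv V^{1/4},
```

so the amplitude indeed scales as the square of the inflationary energy scale, and drops rapidly below detectability if E_inf is much below the grand-unification scale.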
The simplest models for the formation of large-scale structure are reviewed. On the assumption that the dark matter is cold and collisionless, LSS data are able to measure the total amount of matter, together with the baryon fraction and the spectral index of primordial fluctuations. There are degeneracies between these parameters, but these are broken by the addition of extra information such as CMB fluctuation data. The CDM models are confronted with recent data, especially the 2dF Galaxy Redshift Survey, which was the first to measure more than 100,000 redshifts. The 2dFGRS power spectrum is measured to ≲ 10% accuracy for k > 0.02 h Mpc⁻¹, and is well fitted by a CDM model with Ωmh = 0.20 ± 0.03 and a baryon fraction of 0.15 ± 0.07. In combination with CMB data, a flat universe with Ωm ≃ 0.3 is strongly favored. In order to use LSS data in this way, an understanding of galaxy bias is required. A recent approach to bias, known as the ‘halo model’, allows important insights into this phenomenon, and gives a calculation of the extent to which bias can depend on scale.
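As a quick arithmetic check (with an assumed Hubble parameter h ≈ 0.7, which is not stated in the abstract), the quoted constraint on Ωmh translates directly into the favored Ωm ≈ 0.3:

```python
# Consistency check of the quoted 2dFGRS constraint Omega_m * h = 0.20 +/- 0.03.
# The Hubble parameter h = 0.7 is an illustrative assumption, not from the text.
omega_m_h, sigma = 0.20, 0.03
h = 0.7

omega_m = omega_m_h / h        # central value, about 0.29
omega_m_err = sigma / h        # propagated uncertainty, about 0.04

print(f"Omega_m = {omega_m:.2f} +/- {omega_m_err:.2f}")
```

The result is consistent with the flat Ωm ≃ 0.3 universe favored by the combination with CMB data.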
Structure formation in the CDM model
The origin and formation of large-scale structure in cosmology is a key problem that has generated much work over the years. Out of all the models that have been proposed, this talk concentrates on the simplest: gravitational instability of small initial density fluctuations.
In the previous chapters of this book we have frequently used the concept ‘Riemannian space’ or ‘curved space’. Except in Section 14.4 on the geodesic deviation, it has not yet played any rôle whether we were dealing only with a Minkowski space with complicated curvilinear coordinates or with a genuine curved space. We shall now turn to the question of how to obtain a measure for the deviation of the space from a Minkowski space.
If one uses the word ‘curvature’ for this deviation, one most often has in mind the picture of a two-dimensional surface in a three-dimensional space; that is, one judges the properties of a two-dimensional space (the surface) from the standpoint of a flat space of higher dimensionality. This way of looking at things is certainly possible mathematically for a four-dimensional Riemannian space as well – one could regard it as a hypersurface in a ten-dimensional flat space. But this higher-dimensional space has no physical meaning and is no easier to grasp or comprehend than the four-dimensional Riemannian space. Rather, we shall describe the properties of our space-time by four-dimensional concepts alone – we shall study ‘intrinsic geometry’. In the picture of the two-dimensional surface we must therefore behave like two-dimensional beings, for whom the third dimension is inaccessible both practically and theoretically, and who can base assertions about the geometry of their surface on measurements made on the surface alone.
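A concrete intrinsic measurement of this kind, quoted here as a sketch for the two-dimensional case, is the comparison of the circumference C of a small geodesic circle with its radius r. The standard expansion reads

```latex
C(r) = 2\pi r\left(1 - \frac{K}{6}\,r^{2} + O(r^{4})\right),
```

so the Gaussian curvature K can be determined by the two-dimensional beings from measurements on the surface alone: a circumference deficit signals positive curvature, an excess negative curvature, without any reference to an embedding space.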
The picture of black holes we have drawn so far changes drastically if quantum effects are taken into account. Before we go into the details of this in Section 5 of this chapter, we want to make a few general remarks on the interplay of Relativity Theory and Quantum Theory. For a more detailed discussion we refer the reader to the literature given at the end of the chapter.
The problem
The General Theory of Relativity is completely compatible with all other classical theories. Even if the details of the coupling of a classical field (Maxwell, Dirac, neutrino or Klein–Gordon field) to the metric field are not always free of arbitrariness and cannot yet be experimentally tested with sufficient accuracy, no doubt exists as to the inner consistency of the procedure.
This optimistic picture becomes somewhat clouded when one appreciates that besides the gravitational field the only observable classical field in our universe is the Maxwell field, while the many other interactions between the building blocks of matter can only be described with the aid of Quantum Theory. A unification of Relativity Theory and Quantum Theory has not yet been achieved, however.
One of the main postulates of relativity theory is that a locally geodesic coordinate system can be introduced at every point of space-time, so that the action of the gravitational force becomes locally ineffective and the space is approximately a Minkowski space. Hence it is easily understandable why in our neighbourhood, with its relatively small space curvature, space is, to very good approximation, as it is assumed to be in quantum theory.
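In formulas, this postulate states that around any point P (taken as the origin of the coordinates) one can choose coordinates in which

```latex
g_{ab}(P) = \eta_{ab}, \qquad
\partial_c\, g_{ab}\big|_{P} = 0, \qquad
g_{ab}(x) = \eta_{ab} + O(x^{2}),
```

where η_ab is the Minkowski metric. The second derivatives of the metric, which encode the curvature, cannot in general be transformed away; they are precisely the residue that distinguishes a genuinely curved space from Minkowski space in curvilinear coordinates.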
When we are handling physical problems, symmetric systems have not only the advantage of a certain simplicity, or even beauty, but also special physical effects frequently occur then. One can therefore expect in General Relativity, too, that when a high degree of symmetry is present the field equations are easier to solve and that the resulting solutions possess special properties.
Our first problem is to define what we mean by a symmetry of a Riemannian space. The mere impression of simplicity which a metric might give is not of course on its own sufficient; thus, for example, the relatively complicated metric (31.1) in fact has more symmetries than the ‘simple’ plane wave (29.39). Rather, we must define a symmetry in a manner independent of the coordinate system. Here we shall restrict ourselves to continuous symmetries, ignoring discrete symmetry operations (for example, space reflections).
Killing vectors
The symmetry of a system in Minkowski space or in three-dimensional (Euclidean) space is expressed through the fact that under translation along certain lines or over certain surfaces (spherical surfaces, for example, in the case of spherical symmetry) the physical variables do not change. One can carry over this intuitive idea to Riemannian spaces and ascribe a symmetry to the space if there exists an s-dimensional (1 ≤ s ≤ 4) manifold of points which are physically equivalent: under a symmetry operation, that is, a motion which takes these points into one another, the metric does not change.
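In the standard formulation, such a motion is generated by a vector field ξᵃ, and the requirement that the metric does not change under it is that the Lie derivative of the metric along ξ vanishes:

```latex
\mathcal{L}_{\xi}\, g_{ab} = \xi_{a;b} + \xi_{b;a} = 0 .
```

This is Killing's equation; its solutions ξᵃ are the Killing vectors of the space, and each independent Killing vector corresponds to one continuous symmetry.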
A cosmological model is a model of our universe which, taking into account and using all known physical laws, predicts (approximately) correctly the observed properties of the universe, and in particular explains in detail the phenomena in the early universe. Such a model must also explain inter alia why the universe was so homogeneous and isotropic at the epoch of last scattering of the cosmic microwave background, and how and when inhomogeneities (galaxies and stars) arose.
In a more restricted sense cosmological models are exact solutions of the Einstein field equations for a perfect fluid that reproduce the important features of our universe. Because there is only one actual universe, the large number of known or possible cosmological models may at first seem surprising. There are, however, two reasons for this multiplicity.
Firstly, only a section of our universe is known, both in space and in time. All cosmological models which differ only near the origin of the universe must therefore remain in competition. In fact solutions are known which are initially inhomogeneous or anisotropic to a high degree, and which then increasingly come to approximate a Friedmann universe. Any cosmological model which yields a redshift and a cosmic background radiation can hardly be refuted. The possibility cannot be excluded that our universe is not homogeneous and isotropic, but has those properties only approximately in our neighbourhood. An expanding ‘dust star’, that is, a section of a Friedmann universe which is surrounded externally by a static Schwarzschild metric (the model of a collapsing star discussed in Section 36.3), may also perhaps be an excellent model of the universe.
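For the perfect-fluid solutions mentioned above, the source term in the Einstein field equations and the resulting equation of motion in the homogeneous, isotropic (Friedmann) case take the standard form (quoted here with c = 1 as a sketch; sign conventions vary between texts):

```latex
T_{ab} = (\rho + p)\,u_a u_b + p\, g_{ab},
\qquad
\left(\frac{\dot a}{a}\right)^{2}
= \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3},
```

where ρ and p are the energy density and pressure of the fluid, uᵃ its four-velocity, a(t) the scale factor, and k = 0, ±1 the spatial curvature index. Different choices of equation of state and initial conditions generate the multiplicity of models referred to above.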
The gravitational fields of the Earth and the Sun constitute our natural environment and it is in these fields that the laws of gravity have been investigated and summed up by equations. Both fields are to good approximation spherically symmetric and, as a result, suitable objects to test the Einstein theory as represented in the Schwarzschild metric.
The Einstein theory contains the Newtonian theory of gravitation as a first approximation and in this sense is of course also confirmed by Kepler's laws. What chiefly interests us here, however, are the – mostly very small – corrections to the predictions of the Newtonian theory. In very exact experiments one must distinguish carefully between the following sources of deviation from the Newtonian spherically symmetric field:
(a) Relativistic corrections to the spherically symmetric field,
(b) Newtonian corrections, due to deviations from spherical symmetry (flattening of the Earth or Sun, taking into account the gravitational fields of other planets),
(c) Relativistic corrections due to deviations from spherical symmetry and staticity.
The Newtonian corrections (b) are often larger than the relativistic effects (a) which are of interest to us here, and can be separated from them only with difficulty. Except for the influence of the rotation of the Earth (Lense–Thirring effect, see Section 27.5), one can almost always ignore the relativistic corrections of category (c).
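The spherically symmetric field referred to throughout is the Schwarzschild metric, which in standard Schwarzschild coordinates reads (in one common sign convention)

```latex
\mathrm{d}s^{2}
= -\left(1 - \frac{2GM}{c^{2} r}\right) c^{2}\,\mathrm{d}t^{2}
+ \frac{\mathrm{d}r^{2}}{1 - 2GM/(c^{2} r)}
+ r^{2}\left(\mathrm{d}\theta^{2} + \sin^{2}\theta\,\mathrm{d}\varphi^{2}\right).
```

The relativistic corrections of category (a) are the small deviations of planetary orbits and light propagation in this metric from their Newtonian counterparts; for the Sun the characteristic small parameter 2GM/(c²r) is of order 10⁻⁶ or less at planetary distances, which is why the effects are so difficult to separate from the Newtonian corrections (b).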