Under the sub-subject of cosmology, Amazon.com currently lists 5765 items. Among them there are textbooks, serious scientific discussions, popular books, books on history, philosophy, metaphysics and pseudoscience, mega-bestsellers like those by Stephen Hawking, Brian Greene and Lisa Randall, and works no one has heard of by authors as obscure as their books. It would almost seem as though the number of books on the subject is expanding faster than the Universe; that soon the nature of the missing mass will be no mystery – the dark matter is in the form of published but largely unread cosmology books. Does the world need yet another book about this subject? Why have I decided to contribute to this obvious glut on the book market? Why do I feel that I have something to add of unique value?
The idea for the current project had its dim origins in the year 2003 when I was invited to lecture on observational cosmology at a summer school on the Aegean island of Syros. I was surprised at this invitation because I am neither an observer nor a cosmologist; I have always worked on smaller-scale astrophysical problems that I considered soluble. In this career choice I was no doubt influenced by my first teachers in astrophysics, who were excellent but traditional and, to my perception at least, found cosmology to be rather fanciful and speculative (although I never heard them explicitly say so and almost certainly they would not say so now).
But I decided that this invitation was an opportunity to learn something new, so I prepared a talk on the standard cosmological tests (e.g., the Hubble diagram, the angular size–redshift relation and the number counts of faint galaxies) in the context of the current cosmological paradigm that is supported by modern observations, such as the very detailed views of tiny anisotropies in the cosmic microwave background radiation (CMB). The issue I considered was the overall consistency of these classical tests with the standard model – Lambda-CDM (ΛCDM).
I was actually more interested in finding inconsistency rather than consistency. This is because of my somewhat rebellious nature, as well as my conviction that science primarily proceeds through contradiction and conflict rather than through agreement and “concordance.”
If dark energy is the yeast in the cosmic cake, the dark matter is the flour – it provides the substance and the material content. At first thought, it would seem to be more substantial and comprehensible than dark energy, so we might expect to have a better understanding of this medium. As a concept it has been around longer and is easier to visualize: we can all imagine billiard balls bouncing around in the gravitational potential well of a galaxy.
In fact, dark matter is stranger than the billiard-ball picture. The particles that are thought to comprise dark matter are non-standard – not the usual protons and neutrons of baryonic matter. They are electrically neutral, they do not interact directly with photons, and they interact rarely with baryons and rarely with themselves; it is as though the billiard balls can pass right through each other and everything else. They are notoriously difficult to detect by any direct non-astronomical technique, which is to say, the matter is very dark indeed. The essential interaction with normal baryonic matter and photons is gravitational, and the direct long-range gravitational influence of dark matter upon observed systems – clusters of galaxies, spiral and elliptical galaxies, dwarf spheroidal galaxies – is the primary evidence for its existence. So, in fact, this medium is just as peculiar and poorly understood as dark energy, and its existence requires new and exotic physics.
Evidence for Dark Matter in Galaxies and Galaxy Systems
The very first evidence of the dynamical effects of a substantial unseen component in a bound gravitational system was discovered in 1933 by Fritz Zwicky, not in individual galaxies but in a cluster of galaxies. Here the individual galaxies are moving much too fast if the only mass is in the visible starlight of the galaxies. The system should be unbound – the galaxies should not be seen to be clustered but flying away from each other, unless there is an unseen component holding the cluster together. Zwicky first used the term “dark matter” to describe this unseen component and estimated that it must outweigh the visible matter by a factor of several hundred. Subsequently, much of the “missing mass” was found to be in the form of X-ray-emitting hot gas (baryons), which can outweigh the visible stars in galaxies by a significant factor.
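The quantitative core of Zwicky's argument is the virial theorem. As a rough, order-of-magnitude sketch (the numerical coefficient depends on the assumed mass distribution), a cluster in equilibrium with line-of-sight velocity dispersion σ and characteristic radius R implies a dynamical mass

\[
2T + U \simeq 0
\quad\Longrightarrow\quad
M_{\mathrm{dyn}} \sim \frac{\sigma^{2} R}{G}.
\]

If M_dyn greatly exceeds the mass inferred from the starlight, then either unseen matter binds the cluster or the dynamics are not Newtonian on these scales.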
When the sky above was not named,
And the earth beneath did not yet bear a name,
And the primeval Apsû, who begat them,
And chaos, Tiamat, the mother of them both,
Their waters were mingled together,
And no field was formed, no marsh was to be seen;
When of the gods none had yet been called into being.
So begins the Enuma Elish, the seven tablets of creation, describing the ancient Babylonian creation myth. In the beginning the world is without form, and fresh water (Apsû) and salt water (Tiamat) mingle together. Then, in an act of creation, there follow six generations of gods, each associated with a natural manifestation of the world, such as sky or earth. Light and darkness are separated before the creation of luminous objects: the Sun, the Moon, the stars. The sixth-generation god, Marduk, establishes his precedence over all others by killing Tiamat and dividing her body into two parts – the earth below, and the sky above. He establishes law and order – control over the movement of the stars, twelve constellations through which the Sun and the planets move – and he creates humans from mud mixed with the blood of Tiamat.
The similarity to the Hebrew creation mythology described in the book of Genesis has long been recognized: In the biblical story creation takes place in six days, corresponding to the six generations of “phenomenon” gods in Babylon, and the separation of light and dark precedes the creation of heavenly bodies. There is an initial homogeneous state in which the various constituents of the world are mixed evenly together, and an act of creation at a definite point in time – an act which separates these constituents and makes the world habitable (and more interesting).
These aspects are also evident in the Greek creation mythology in which elements of the world are initially mixed together in a formless way – Chaos. However, at some point, two children are born of Chaos – Night and Erebus, an “unfathomable depth where death dwells” (not an obvious improvement over the initial state of Chaos). But then, also in an unexplained way, something positive and truly magnificent happens: Love is born and Order and Beauty appear.
MOND (modified Newtonian dynamics) takes a drastically different approach to the problem of the observed mass discrepancies in astronomical objects. The idea is based upon a simple proposition: if the only evidence for dark matter on astronomical scales is its putative global dynamical or gravitational effects, then its presumed existence is not independent of the laws of dynamics or gravitation on those scales; so long as no candidate dark matter objects or particles have been identified, it is legitimate to look for alternative solutions to the discrepancy in modifications of Newtonian dynamics or gravity. Such a point of view is hardly radical at all but would seem to be a reasonable scientific approach. And yet, the very mention of “MOND” evokes strong reactions among astrophysicists and cosmologists; most of that reaction is not benign.
MOND is an acceleration-based modification of Newtonian dynamics or gravity. Now, many years after its proposal, it has been realized that in the deep MOND limit, for accelerations below a critical value, the theory reflects a very basic symmetry – a symmetry that is already evident in galaxy phenomenology – and that is where I begin.
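For readers who want the symmetry made explicit, here is a minimal sketch in the standard MOND notation, where a0 is the critical acceleration and g_N is the Newtonian gravitational field; the interpolating function μ is a modelling choice and is not fixed by the argument above:

\[
\mu\!\left(\frac{a}{a_{0}}\right) a = g_{N},
\qquad \mu(x) \to 1 \;\;(x \gg 1),
\qquad \mu(x) \to x \;\;(x \ll 1),
\]

so that in the deep MOND limit

\[
a \simeq \sqrt{g_{N}\, a_{0}} \qquad (a \ll a_{0}).
\]

Under the space-time scaling (t, r) → (λt, λr), velocities are unchanged while accelerations scale as 1/λ and g_N scales as 1/λ², so the deep-limit relation is preserved; this scale invariance is the basic symmetry referred to above.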
Galaxy Phenomenology Reveals a Symmetry Principle
There are three aspects of galaxy phenomenology that seem unnatural in the context of cold dark matter. The first is that the rotation velocity beyond the visible galaxy approaches a constant value; rotation curves are asymptotically flat. This flatness of rotation curves, shown so vividly by the example plotted in Figure 7.2, is a general feature of spiral galaxies – at least in those cases where there are no complications, such as nearby interacting companion galaxies or large distortions (warps) of the gas layer.
In galaxies of high surface brightness, the rotation velocity may decline gradually within the visible disk, but then, with increasing distance, it approaches its constant value. For low-surface-brightness galaxies, the rotation velocity often rises slowly throughout the visible disk and, at larger distances, settles to a fixed value. This behavior is shown by the compilation of galaxy rotation curves in Figure 8.1; it is a general phenomenon.
Rotation curves are asymptotically flat as far out as they have been measured. They do not slowly rise; they do not slowly decline.
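To see why this is surprising, consider the Newtonian expectation for a test particle on a circular orbit about the visible mass M(<r) enclosed within radius r (a simplified sketch that ignores the detailed disk geometry):

\[
\frac{V^{2}(r)}{r} = \frac{G\,M(<r)}{r^{2}}
\quad\Longrightarrow\quad
V(r) = \sqrt{\frac{G\,M(<r)}{r}}.
\]

Beyond the visible disk, where M(<r) is essentially constant, V should fall off as 1/√r, the Keplerian decline. An asymptotically flat curve instead requires, within Newtonian dynamics, that M(<r) grow in proportion to r, that is, an extended dark halo; the alternative is a modification of the dynamics at low accelerations.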
Dark matter does not have the predictive power of MOND on the scale of galaxies. Yet cold dark matter has gained almost unquestioned acceptance by the majority of the relevant communities, while MOND has languished for more than 30 years outside of the mainstream. This is not because of some grand conspiracy against the idea of modified dynamics, but rather because of strong social factors that maintain support for the prevailing paradigm. There is an overriding tendency for scientists to function within the established framework and to select data that reinforce rather than challenge it (an effect that is supported by competition for academic positions and grants). There is a very large community (thousands) of physicists, astronomers and cosmologists with vested career interests in searching for and detecting dark matter – underground, above ground, in space, underwater, in the Antarctic ice. Moreover, in this case there is also an understandable reluctance to tamper with the historically established laws of physics; this is not what astronomers do. Physicists have a different culture, but most models for dark matter involve extensions of the standard model of particle physics; this sort of new physics is of general interest to the relatively large community of high-energy physicists and is perceived as less intrusive than modifying the venerable laws of Newton.
Beyond social factors, though, there is a reductionist current in modern science in general that in this area of research assigns priority to cosmology over mere galaxy phenomenology. Although the respectability of cosmology is quite recent, the science of the entire Universe is now considered to be more fundamental than that of its individual constituents. The pattern of anisotropies in the CMB is very well explained by the standard cosmological model, albeit with a somewhat unnatural combination of six free parameters. And given the precision of the fit to the angular power spectrum of these anisotropies, then, the reasoning goes, the theory must be correct even in its perceived implications for galaxies. So most cosmologists tend to be dismissive of mere galaxy phenomenology and its wealth of regularities; these are details and are due to messy baryonic physics that will be understood someday in the context of more detailed computations of the processes of star formation and feedback. It is certainly not serious enough to require a modification of Newton's (and therefore Einstein's) laws.
Shortly after the discovery of the CMB, Jim Peebles, closely followed by Robert Wagoner, William Fowler and Fred Hoyle, realized that the measured abundances of those light elements produced in the first few minutes of the Big Bang were sensitive probes of the density of the Universe, or at least the density of ordinary protons and neutrons that comprise most of the matter that we can directly observe – baryonic matter. This is particularly true of deuterium (a hydrogen nucleus with a neutron as well as a proton) and helium-3 (a helium nucleus consisting of two protons and only one neutron, rather than the usual two). If the primordial cosmic abundances of these isotopes could be precisely measured, then this would provide a probe of the number of baryons relative to the known number of black-body photons – usually designated η; that is to say, these measured abundances would constitute a “baryometer.” The abundance of helium, the second most abundant element after hydrogen, is actually rather insensitive to the density of baryons, but is much more sensitive to the expansion rate of the Universe in the first few minutes after its origin. This depends upon the number of relativistic particle species such as neutrinos – the more species, the faster the expansion rate and the higher the helium abundance (recall that free neutrons have a lifetime of about 10 minutes; with a higher expansion rate more neutrons are available to form helium nuclei at temperatures low enough to avoid immediate photo-destruction). The expansion rate also depends upon more speculative possibilities such as deviations from standard gravity (i.e., a higher or lower constant of gravity in the early Universe). That is why it is said that the measured helium abundance is an effective chronometer.
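The chronometer argument can be made semi-quantitative with a standard back-of-the-envelope estimate (round numbers only; the precise values come from detailed nucleosynthesis calculations):

\[
\left.\frac{n}{p}\right|_{\mathrm{freeze}} \approx e^{-\Delta m\,c^{2}/kT_{f}} \approx \frac{1}{6},
\qquad
Y_{p} \approx \frac{2\,(n/p)}{1+(n/p)} \approx 0.25,
\]

where Δm c² ≈ 1.3 MeV is the neutron–proton mass difference, T_f is the temperature at which the weak interactions freeze out, and n/p has fallen to roughly 1/7 through neutron decay by the time nuclei form. A faster expansion means an earlier freeze-out at higher T_f, hence a larger n/p and a larger helium mass fraction Y_p; the baryon-to-photon ratio η, by contrast, chiefly controls the surviving deuterium and helium-3.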
Measuring the pristine abundances of helium or of the trace isotopes deuterium or helium-3 is a tricky business because of subsequent processing of these elements in stellar interiors – so-called “astration.” Helium and helium-3, for example, are produced in stars but deuterium is destroyed. So how does one determine the primordial unprocessed abundances of these isotopes?
If the creation and evolution of the Universe could be compared to baking a cake, then, in the context of ΛCDM plus inflation, the most important ingredient would be yeast – the substance causing an exponential expansion – a de Sitter phase. There are two such episodes of the dominance of yeast: the first is at the beginning, less than 10⁻³² seconds after the Big Bang. The modern creation story tells us that a fast-acting yeast causes the rapid expansion that inflates the Universe by a factor of possibly 10²⁹, wiping out any significant initial curvature (pushing the density toward the critical density), extending the causally connected region to well beyond the currently observable Universe, and creating the small fluctuations that become the observed structure. In the reheating that occurs after this inflation, the Universe is literally recreated; the initial conditions that applied before this episode, whatever they were, are irrelevant. Everything that the Universe is now observed to be, all of the existent ingredients in their present abundances, appeared out of the vacuum at the end of this initial yeast-dominated period. The microphysics of this event – the nature of the yeast – is unknown, but in a naturalistic world (one in which physical effects follow physical causes) there is substantial reason to believe that such an early de Sitter phase has actually occurred. We should keep in mind, however, that in the absence of primary evidence (such as gravitational-wave-like fluctuations) and without a specific physical mechanism (the breaking of supersymmetry, for example), this is essentially an act of faith.
The second age of yeast-dominance is now. This is the current era (beginning about five billion years ago) when dark energy (the yeast) took over from cold matter (the dough) and again is driving an accelerated exponential expansion of the world, but much more slowly than in the early inflationary epoch. These periods of vacuum-energy-dominated exponential expansion, as well as those of radiation and matter dominance, are shown schematically in the time-line of Figure 6.1. Here we see that, while inflation was a short episode (≈ 10⁻³² seconds), the Universe expanded by a large factor, possibly 10²⁹, during this period. The current de Sitter phase has lasted five billion years so far, but the Universe has only expanded by a factor of two.
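The arithmetic behind the two episodes of yeast is simple, at least as an illustrative sketch (the duration and energy scale of inflation are, as noted above, unknown). In a de Sitter phase the scale factor grows exponentially, and the expansion is conveniently counted in e-folds N:

\[
a(t) \propto e^{Ht},
\qquad N \equiv \ln\frac{a_{\mathrm{end}}}{a_{\mathrm{start}}},
\qquad 10^{29} = e^{29\ln 10} \approx e^{67}.
\]

So the inflationary episode packs roughly 67 e-folds into about 10⁻³² seconds, whereas the present episode has so far delivered only ln 2 ≈ 0.7 of an e-fold in five billion years.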
Suppose that Socrates and Glaucon were to wake up and continue their dialogue on the nature of reality and the acquisition of knowledge, all in the light of modern scientific developments. Perhaps it would go something like this:
“Well, Glaucon, you must remember our previous discussion concerning the illusionary nature of the physical world and the attainment of true knowledge of the eternal Forms.”
“Yes, indeed I do – your Allegory of the Cave is truly unforgettable. Shall I summarize it for you?”
“Please do, Glaucon, so that we can perhaps re-interpret the metaphor after all these many years. In particular, given the predictive success of modern science and the resulting fascination with the physical world, we might discuss whether or not the allegory still contains a message for us.”
“As I recall, it goes like this: A group of prisoners sits in a cave; they are constrained to face a nearby wall and look at nothing else, and they have been in this position for all of their cogent lives. Behind and above them there is a fire that dimly illuminates the cave, and between the fire and the prisoners there is a road on which puppeteers, hidden by a low wall, pass carrying various objects. Some of the objects are images of humans and animals; some are more abstract. The puppeteers are free to speak and their voices are reflected off the wall. The objects that they carry cast shadows on the wall, and the moving shadows and reflected voices are all that the prisoners can see or hear of the world; this forms their view of reality. The prisoners are permitted to speak with one another and discuss what they see. Of course, they assume that the images before them on the wall are the real world since this is all they know of it.”
“Excellent, Glaucon. Your memory is pretty good after 2500 years. And what happens next in the story?”
“We suppose that one of the prisoners is set loose and is forced to turn around – that is, away from the wall and into a position such that he can see the fire and what is happening behind the prisoners. His eyes are somewhat dazzled by the light of the fire and, at first, he cannot get a clear view of the forms passing along the road.
In 1995, before the observations of the anisotropies in the CMB offered such a precise view of the Universe at the epoch of decoupling, and before the discovery of the accelerated expansion of the Universe through observations of distant supernovae, Jerry Ostriker and Paul Steinhardt in a letter to the journal Nature discussed the existing constraints on cosmological parameters. These were constraints arising from several distinct considerations: limits on the age of the Universe from globular-cluster lifetimes, measurements of the Hubble parameter H0 using the Hubble Space Telescope study of classical Cepheid variables and supernovae, the contribution of baryons to the mass budget of the Universe based upon the measured primordial abundances of light isotopes in the context of Big Bang nucleosynthesis, the measured and assumed universal ratio of baryons to dark matter in clusters of galaxies, and the total matter density required to form the observed structure in the Universe. They assumed that the properties required by inflation were absolutely valid: a flat universe in which the total Ω, including matter, radiation and vacuum energy, is equal to one and in which the primordial fluctuations were scale-free. They concluded that the model in concordance with all of these constraints was a low-density world (Ωm ≈ 0.3) dominated by a cosmological constant (ΩΛ ≈ 0.7). In fact, it turned out that this concordance model was that which is supported, with much higher precision, by the subsequent observations of anisotropies in the CMB – altogether a remarkable success of inductive logic. The expansion and composition history of the concordance model are shown in Figure 5.1. It seems appropriate to refer to this model as the Friedmann–de Sitter model: it was pure Friedmann but is now becoming de Sitter.
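In the standard FLRW notation, the concordance amounts to satisfying a single constraint equation with the measured ingredients (a sketch; the radiation contribution is negligible at the present epoch):

\[
\Omega_{m} + \Omega_{\Lambda} + \Omega_{k} = 1,
\qquad
\Omega_{k} \equiv -\left.\frac{k c^{2}}{a^{2} H^{2}}\right|_{\mathrm{today}}.
\]

With the inflationary prior of flatness, Ωk = 0, a matter census giving Ωm ≈ 0.3 forces ΩΛ ≈ 0.7: the combination that the subsequent supernova and CMB observations confirmed.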
Now we find ourselves in the situation where the primary empirical input to cosmology arises from the observations of the CMB. Although this is to be expected given the precision of the results, it is not altogether desirable to rely upon one phenomenon arising at one cosmic epoch to set the entire cosmological scenario. It would be as though geologists and paleontologists were to rely upon the continental formations, the composition of the atmosphere, ocean and rocks, and the dominant fossils that characterize some early geologic period, the Cambrian for example, to describe the geological and biological history and subsequent evolution of the Earth.
The scientific method requires the use of empirical evidence in discerning the nature of the world; one does not rely on folklore, prejudice, tradition, superstition, theology or authority but only upon the evidence that is presented to the senses, most often enhanced by the appropriate instrumentation. Significantly, science is not only observation and classification. The scientist seeks to unify diverse phenomena by a set of rules or theories that are rational, mathematical and general (although perhaps not always obvious or belonging to the realm of common sense). Ideally, theories should make predictions that are testable by repeated experimentation or observation. In this way the theory can be supported or rejected. The underlying assumption is that the world is rational and understandable to human beings, an assumption that appears to be justified by several hundred years of experience.
In physics, the first outstanding application of the scientific method was that of Isaac Newton. Building on the empirical descriptions by Galileo in codifying the kinematics of objects locally, and by Kepler in his formulation of the laws of planetary motion, Newton was able to unify the phenomenology of falling objects on the Earth with that of the planets moving about the Sun by a single law of gravitation combined with basic laws of motion. Newtonian dynamics provides the prototypical example of the scientific method in the steps followed to derive a general theory. The most dramatic early success of Newtonian theory (prediction as opposed to explanation) was its application by Urbain Le Verrier in the mid nineteenth century to predict the existence and position of the planet Neptune, using the observed small deviations in the motion of Uranus from that calculated due to the known massive bodies in the Solar System.
Such predictions form the core of the scientific method, and here I consider the three essential predictions of modern – twentieth century – physical cosmology: the expansion of the Universe; the existence of the cosmic background electromagnetic radiation; and the presence of small angular fluctuations in the intensity of that radiation. These are true predictions. They arose from theoretical considerations before being confirmed by observations, and as such, they form the backbone of physical cosmology.
The basic scenario of the Big Bang was in place by 1970. It was generally accepted that the Universe was once much hotter and denser and more homogeneous than at present. There was an origin of the world at a definite point in the past more than 10 billion years ago, and the light chemical elements, primarily helium, were synthesized in the hot early history of the cosmos. These were the essential aspects of what was becoming the standard scientific cosmological model. There were, however, several problems in principle with the Big Bang – issues that were not so much phenomenological as aesthetic – basically problems of naturalness.
In 1970 Robert Dicke emphasized that there are fine-tuning problems in the FLRW models for the Universe. Why is it, for example, that the density in the present Universe is so nearly equal to H0²/G, where H0 is the present value of the Hubble parameter and G is the constant of gravity (H varies with time – it is not a true constant)? In other words, why should the present density be so close to the critical density for a flat universe if it is not, in fact, precisely the critical density?
This question arises because of the nature of the Friedmann equations. The measure of the average density of the Universe is typically given in terms of the critical density and is designated as Ω. This parameter also varies with cosmic time, but if Ω is greater than, less than or equal to one, it is always so (the curvature of the Universe cannot change with time). The value of the density parameter at the present epoch is Ω0 (analogous to the Hubble parameter H0), so Ω0 = 1 (implying a density of about 10⁻²⁹ grams per cubic centimeter at present) exactly corresponds to the flat FLRW universe that asymptotically expands forever. The problem is that for Ω0 to be close to one but not one requires incredible fine-tuning of the density in the early Universe. If, for example, Ω0 = 0.05, then at the time of nucleosynthesis, when the temperature of the black body radiation is on the order of 10 billion degrees, the density parameter of the Universe could not have deviated from one by more than a few parts in ten billion; for earlier epochs (higher energies) this tuning problem becomes more severe.
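The origin of the fine-tuning is visible directly in the Friedmann equation. Writing the departure from the critical density in terms of the curvature term (a standard sketch, with Ω including matter and radiation):

\[
|\Omega(t) - 1| = \frac{|k|\,c^{2}}{a^{2}H^{2}}
\;\propto\;
\begin{cases}
t & \text{radiation era},\\
t^{2/3} & \text{matter era},
\end{cases}
\]

so any deviation of Ω from one grows as the decelerating Universe expands. Running this growth backwards from a present-day value of order unity to the epoch of nucleosynthesis, and to still earlier epochs, is what demands the extraordinarily small initial deviations quoted above.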