By Antony Lewis, Institute of Astronomy and Kavli Institute for Cosmology, Madingley Road, Cambridge CB3 0HA, UK, and Sarah Bridle, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
In this chapter we assume that you have a method for calculating the likelihood of some data from a parameterized model. Using some prior on the parameters, Bayes’ theorem then gives the probability of the parameters given the data and model. A common goal in cosmology is then to find estimates of the parameters and their error bars. This is relatively simple when the number of parameters is small, but when there are more than about five parameters it is often useful to use a sampling method. Therefore in this chapter we focus mainly on finding parameter uncertainties using Monte Carlo methods.
Why do sampling?
We suppose that you have (i) some data, d, (ii) a model, M, (iii) a set θ of unknown parameters of the model, and (iv) a method for calculating the probability of these parameters from the data, P(θ|d, M). For convenience we shall mostly leave the dependence on d and M implicit, and thus write P(θ) = P(θ|d, M).
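Written out in this notation (a standard identity, stated here for completeness), Bayes' theorem reads

```latex
P(\theta \,|\, d, M) \;=\; \frac{P(d \,|\, \theta, M)\, P(\theta \,|\, M)}{P(d \,|\, M)},
```

where P(d|θ, M) is the likelihood, P(θ|M) is the prior, and the denominator P(d|M) is a normalizing constant independent of θ, and hence irrelevant for parameter estimation.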
An example we will consider throughout this chapter is the estimation of cosmological parameters from cosmic microwave background (CMB) data. For example, you could consider that (i) the data is the CMB power spectrum, (ii) the cosmological model is a Big Bang inflation model with cold dark matter and a cosmological constant, and (iii) the unknown parameters are cosmological parameters such as the matter density and expansion rate of the Universe.
Why to sample, and how: that, simply put, is what this chapter is about.
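To make this concrete, the following is a minimal Metropolis sampler sketch in Python. The Gaussian log-posterior, the six-parameter dimensionality and the fixed proposal width are illustrative assumptions only, not the pipeline used for real CMB analyses.

```python
import numpy as np

def log_posterior(theta):
    # Illustrative stand-in for log-likelihood + log-prior:
    # an uncorrelated unit Gaussian in every parameter.
    return -0.5 * np.sum(theta**2)

def metropolis(log_post, theta0, n_steps=10000, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    logp = log_post(theta)
    samples = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        # Symmetric Gaussian proposal around the current point.
        proposal = theta + step_size * rng.standard_normal(theta.size)
        logp_new = log_post(proposal)
        # Accept with probability min(1, P_new / P_old).
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples[i] = theta
    return samples

samples = metropolis(log_posterior, theta0=np.zeros(6))
# Marginal means and standard deviations of the chain give the
# parameter estimates and error bars discussed above.
print(samples.mean(axis=0))
print(samples.std(axis=0))
```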
Rational inference
Rational inference is important. By helping us to understand our world, it gives us the predictive power that underlies our technical civilization. We would not function without it. Even so, rational inference only tells us how to think. It does not tell us what to think. For that, we still need the combination of creativity, insight, artistry and experience that we call intelligence.
In science, perhaps especially in branches such as cosmology, now coming of age, we invent models designed to make sense of data we have collected. It is no accident that these models are formalized in mathematics. Mathematics is far and away our most developed logical language, in which half a page of algebra can make connections and predictions way beyond the precision of informal thought. Indeed, one can hold the view that frameworks of logical connections are, by definition, mathematics. Even here, though, we do not find absolute truth. We have conditional implication: ‘If axiom, then theorem’ or, equivalently, ‘If not theorem, then not axiom’. Neither do we find absolute truth in science.
Our question in science is not ‘Is this hypothetical model true?’, but ‘Is this model better than the alternatives?’. We could not recognize absolute truth even if we stumbled across it, for how could we tell? Conversely, we cannot recognize absolute falsity.
The study of planet formation has a long history. The idea that the Solar System formed from a rotating disk of gas and dust – the Nebula Hypothesis – dates back to the writings of Kant, Laplace, and others in the eighteenth century. A quantitative description of terrestrial planet formation was already in place by the late 1960s, when Viktor Safronov published his now classic monograph Evolution of the Protoplanetary Cloud and Formation of the Earth and the Planets, while the main elements of the core accretion theory for gas giant planet formation were developed in the early 1980s. More recently, a wealth of new observations has led to renewed interest in the problem. The most dramatic development has been the identification of extrasolar planets, first around a pulsar and subsequently in large numbers around main-sequence stars. These detections have furnished a glimpse of the Solar System's place amid an extraordinary diversity of extrasolar planetary systems. The advent of high resolution imaging of protoplanetary disks and the discovery of the Solar System's Kuiper Belt have been almost as influential in focusing theoretical attention on the initial conditions for planet formation and the role of dynamics in the early evolution of planetary systems.
My goals in writing this text are to provide a concise introduction to the classical theory of planet formation and to more recent developments spurred by new observations. Inevitably, the range of topics covered is far from comprehensive.
By Ofer Lahav, Filipe B. Abdalla and Manda Banerji, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
By M. P. Hobson, Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK; Graça Rocha, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA; and Richard S. Savage, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK, and Systems Biology Centre, University of Warwick, Coventry CV4 7AL, UK
Source extraction is a generic problem in modern observational astrophysics and cosmology. Indeed, one of the major challenges in the analysis of astronomical observations is to identify and characterize a localized signal immersed in some general background. Typical one-dimensional examples include the extraction of point or extended sources from time-ordered scan data or the detection of absorption or emission lines in quasar spectra. In two dimensions, one often wishes to detect point or extended sources in astrophysical images that are dominated either by instrumental noise or contaminating diffuse emission. Similarly, in three dimensions, one might wish to detect galaxy clusters in large-scale structure surveys. Moreover, the ability to perform source extraction with reliable, automated methods has become vital with the advent of modern large-area surveys, which produce far too much data to be inspected in detail ‘by eye’. Indeed, much of the science derived from the study of astronomical sources, or from the background in which they are immersed, proceeds directly from accurate source extraction.
In extracting sources from astronomical data, we typically face a number of challenges. Firstly, there is instrumental noise. Nonetheless, it is often possible to obtain an accurate statistical characterization of the instrumental noise, which can then be used to compensate for its effects to some extent. More problematic are any so-called ‘backgrounds’ to the observation. These can be astrophysical or cosmological in origin, such as Galactic emission, cosmological backgrounds, faint source confusion, or even simply emission from parts of the telescope itself.
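As one concrete (and deliberately simplified) illustration of how a noise characterization can be exploited, the sketch below applies a matched filter to a simulated one-dimensional scan. The Gaussian source profile and white Gaussian noise are assumptions made purely for this example; they are not the data model of the chapter itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-D scan: a Gaussian source on white noise
# (both assumed for illustration only).
n, sigma_noise, width = 512, 1.0, 5.0
x = np.arange(n)
signal = 4.0 * np.exp(-0.5 * ((x - 200) / width) ** 2)
data = signal + sigma_noise * rng.standard_normal(n)

# Matched filter: correlate with the (assumed known) source profile,
# normalized so the peak of the filtered field estimates the amplitude.
template = np.exp(-0.5 * ((x - n // 2) / width) ** 2)
template /= np.sum(template**2)
filtered = np.correlate(data, template, mode="same")

peak = np.argmax(filtered)
print(f"detected source near x = {peak}, "
      f"estimated amplitude = {filtered[peak]:.2f}")
```

For white noise the matched filter reduces to a correlation with the source profile; for correlated noise the template would first be whitened by the inverse noise covariance.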
Planets can be defined informally as large bodies, in orbit around a star, that are not massive enough to have ever derived a substantial fraction of their luminosity from nuclear fusion. This definition fixes the maximum mass of a planet to be at the deuterium burning threshold, which is approximately 13 Jupiter masses for Solar composition objects (1 M_J = 1.899 × 10^30 g). More massive objects are called brown dwarfs. The lower mass cut-off for what we call a planet is not as well defined. Currently, the International Astronomical Union (IAU) requires a Solar System planet to be massive enough that it is able to clear the neighborhood around its orbit of other large bodies. Smaller objects that are massive enough to have a roughly spherical shape but which do not have a major dynamical influence on nearby bodies are called “dwarf planets.” It is likely that some objects of planetary mass exist that are not bound to a central star, either having formed in isolation or following ejection from a planetary system. Such objects are normally called “planetary-mass objects” or “free-floating planets.”
Complementary constraints on theories of planet formation come from observations of the Solar System and of extrasolar planetary systems. Space missions to all of the planets have yielded exquisitely detailed information on the surfaces (and in some cases interior structures) of the Solar System's planets, satellites, and minor bodies.
We astronomers mostly work in the ‘discovery space’, the region where effects are statistically significant at less than three sigma or lie near boundaries in data or parameter space. Working in the discovery space is a normal astronomical activity; few published results are initially found at large confidence. This can lead to anomalous results: positive definite quantities (such as masses, fractions, star formation rates and dispersions) are sometimes found to be negative or, more generally, quantities are sometimes found at unphysical values (completeness larger than 100%, V/V_max > 1, or fractions larger than 1, for example). Working in the discovery space is typical of frontier research: almost every significant result first appears there, and a good determination of known effects or trends usually triggers searches for finer, harder-to-detect effects, which again fall mostly in the discovery space.
Many of us are very confident that commonly used statistical tools work properly in the situations in which we use them. Unfortunately, in the discovery space, and sometimes outside it, we should not take this for granted, as shown below with a few examples. We cannot avoid working in this grey region, because to move our results into the statistically significant area we often need a larger or better sample. In order to obtain this, we first need to convince the community (and the telescope time allocation committees) that an effect is probably there, by working in the discovery space.
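A minimal simulation makes the point about unphysical values. A standard moment estimator of an intrinsic dispersion subtracts the known measurement variance from the sample variance; when the true dispersion is small and the sample noisy, the resulting ‘positive definite’ quantity frequently comes out negative. All numbers below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

sigma_true = 0.1    # small intrinsic dispersion (illustrative)
sigma_meas = 1.0    # known measurement error (illustrative)
n_obj, n_trials = 20, 10000

negative = 0
for _ in range(n_trials):
    values = rng.normal(0.0, np.hypot(sigma_true, sigma_meas), size=n_obj)
    # Moment estimator: intrinsic variance =
    # sample variance - measurement variance.
    var_hat = values.var(ddof=1) - sigma_meas**2
    if var_hat < 0:
        negative += 1

print(f"{100 * negative / n_trials:.0f}% of trials give a negative "
      "'positive definite' intrinsic variance")
```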
Understanding the formation of giant planets with substantial gaseous envelopes forces us to confront once again the physics of the gas within the protoplanetary disk. Unlike the case of terrestrial planet formation, two qualitatively different theories have been proposed to account for the formation of massive planets. In the core accretion theory of giant planet formation, the acquisition of a massive envelope of gas is the final act of a story that begins with the formation of a core of rock and ice via the identical processes that we discussed in the context of terrestrial planet formation. The time scale for giant planet formation in this model – and to a large extent its viability – hinges on how quickly the core can be assembled and on how rapidly the gas in the envelope can cool and accrete on to the core. In the competing disk instability theory, giant planets form promptly via the gravitational fragmentation of an unstable protoplanetary disk – a purely gaseous analog of the Goldreich–Ward mechanism for planetesimal formation that we discussed in Chapter 4. Fragmentation turns out to require that the disk be able to cool on a relatively short time scale that is comparable to the orbital time scale, and whether these conditions are realized within disks is the main theoretical issue that remains unresolved. Drawing on our prior results on gravitational instabilities in disks and on terrestrial planet formation, the goal in this chapter is to describe the physical principles behind both models and to provide a summary of some of the relevant observational constraints.
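For reference, the two conditions alluded to in this paragraph are commonly summarized as follows; the exact numerical threshold in the cooling condition is precisely the unresolved issue noted above, with values of order a few commonly quoted (e.g. Gammie 2001):

```latex
Q \equiv \frac{c_s \kappa}{\pi G \Sigma} \lesssim 1
\quad \text{(gravitational instability)},
\qquad
t_{\rm cool} \lesssim \beta\, \Omega^{-1}, \quad \beta \sim \text{a few}
\quad \text{(fragmentation)},
```

where c_s is the sound speed, κ the epicyclic frequency, Σ the disk surface density, and Ω the orbital angular frequency.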
Just a century ago, our view of the cosmos was limited to the single window provided by optical telescopes. In the intervening years, new windows opened as cosmic rays, neutrinos, radio waves, X-rays, microwaves and gamma rays were enlisted in our quest to understand the Universe. These new messengers have revolutionized astronomy and profoundly altered our perception of the cosmos.
Despite these impressive advances, almost everything we know about the Universe beyond our own galaxy comes from observing light of various energies. Hopefully this will soon change as a new spectrum is opened by the direct detection of gravitational waves. Just as our sense of hearing complements our sense of sight, gravitational wave astronomy can extend and enrich the picture provided by electromagnetic astronomy (Hughes 2003).
Astrophysical and cosmological sources of gravitational waves are expected to produce signals that span over twenty decades in frequency, ranging from primordial signals with frequencies as low as 10^-18 Hz, to supernova explosions that reach frequencies of 10^4 Hz. A suite of detection techniques has been proposed to detect these signals, from acoustic (bar) detectors for narrow-band detection of high-frequency waves, through to polarization maps of the cosmic microwave background radiation to look for imprints of the lowest frequency waves. Pulsar timing arrays are a promising technique for detecting waves in the 10^-8 Hz range, and there is an outside chance that this technique might yield the first direct detection.
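For concreteness, the quoted limits do span over twenty decades:

```latex
\log_{10}\!\left(\frac{10^{4}\,\mathrm{Hz}}{10^{-18}\,\mathrm{Hz}}\right) = 22 .
```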
This self-contained introduction to molecular astrophysics is suitable as a text for advanced postgraduate courses on interstellar matter. It is an excellent summary of present knowledge and outstanding questions and will be valued by research astrophysicists, physical chemists, atomic and molecular physicists and atmospheric scientists who wish to become familiar with this field. Descriptions are given of the distributions and types of molecules observed in galactic and extragalactic sources, including those in the vicinity of active galactic nuclei. The chemistry of diffuse and dense clouds is also discussed, and chemical reactions in shocks and dynamically evolving clouds are considered.
The measurement of the flux of an astronomical source is a classic parameter estimation problem in which the quantity of interest must be inferred from noisy data. The Bayesian methods described in this book provide a clear, unambiguous, self-consistent and optimal method for answering this type of question, but the vast majority of flux measurements are made by applying a heuristic classical estimator to the data. This raises some immediate questions: Why is the estimator-based approach adopted in most cases? How do the resultant flux measurements differ? What is the relationship between the two techniques?
To answer these questions first requires an understanding of the astronomical measurement process itself (Section 8.2), which leads very naturally to the definition of the standard flux estimator (Section 8.3). Using a model for the source population (Section 8.4) as a prior, it is also possible to apply Bayesian inference to the problem (Section 8.5), although care is required to avoid some potential inconsistencies (Section 8.6). Even with the full Bayesian result in hand, however, the existence of databases containing billions of classically estimated fluxes and errors leads to a number of practical considerations which argue against simply reporting posterior distributions for astronomical fluxes (Section 8.7).
Photometric measurements
How is the flux of an astronomical source measured? ‘With great difficulty’ is one possible answer, especially given a history of photographic plates, dipole antennae, microdensitometers and other arcane equipment.
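In the simplest setting, the heuristic classical estimator contrasted with the Bayesian approach above is a noise-weighted least-squares fit of a known point spread function (PSF) to the pixel data. The sketch below assumes a one-dimensional Gaussian PSF and independent Gaussian pixel noise; both are illustrative simplifications rather than the chapter's actual measurement model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated pixel data: a point source with known Gaussian PSF plus
# white Gaussian noise (both assumed for illustration only).
n_pix, flux_true, sigma = 25, 10.0, 1.0
x = np.arange(n_pix) - n_pix // 2
psf = np.exp(-0.5 * (x / 2.0) ** 2)
psf /= psf.sum()
data = flux_true * psf + sigma * rng.standard_normal(n_pix)

# Least-squares (maximum-likelihood) flux estimator for a known PSF
# and uniform Gaussian noise: F_hat = sum(p_i d_i) / sum(p_i^2).
flux_hat = np.dot(psf, data) / np.dot(psf, psf)
flux_err = sigma / np.sqrt(np.dot(psf, psf))

print(f"estimated flux = {flux_hat:.2f} +/- {flux_err:.2f} "
      f"(true = {flux_true})")
```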
The classical theory of giant planet formation described in the preceding chapter predicts that massive planets ought to form on approximately circular orbits, with a strong preference for formation in the outer disk at a few AU or beyond. Most currently known extrasolar planets have orbits that are grossly inconsistent with these predictions and, irrespective of the still open question of what the typical planetary system looks like, their existence demands an explanation. Even within the Solar System the existence of a large resonant population of Kuiper Belt Objects, and the time scale problem for the formation of Uranus and Neptune, suggest that the classical theory is at best incomplete.
In this chapter we describe a set of physical mechanisms – gas disk migration, planetesimal scattering, and planet–planet scattering – that promise to reconcile the observed properties of extrasolar planetary systems with theory. The common feature of all of these mechanisms is that they result in energy and angular momentum exchange either among newly formed planets, or between planets and leftover solid or gaseous debris in the system. The exchange of energy and angular momentum drives evolution of the planetary semi-major axis and eccentricity, which can be substantial enough to make the final architecture of the system unrecognizable from its state immediately after planet formation.
By Roberto Trotta, Astrophysics, Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK, and Astrophysics Group, Imperial College London, Blackett Laboratory, London SW7 2AZ, UK; and Martin Kunz, Pia Mukherjee and David Parkinson, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK
Common applications of Bayesian methods in cosmology involve the computation of model probabilities and of posterior probability distributions for the parameters of those models. However, Bayesian statistics is not limited to applications based on existing data, but can equally well handle questions about expectations for the future performance of planned experiments, based on our current knowledge.
This is an important topic, especially with a number of future cosmology experiments and surveys currently being planned. To give a taste, they include: large-scale optical surveys such as Pan-STARRS (the Panoramic Survey Telescope and Rapid Response System), DES (the Dark Energy Survey) and LSST (the Large Synoptic Survey Telescope); massive spectroscopic surveys such as WFMOS (the Wide-Field Fibre-fed Multi-Object Spectrograph); satellite missions such as JDEM (the Joint Dark Energy Mission) and EUCLID; continental-sized radio telescopes such as SKA (the Square Kilometre Array); and future cosmic microwave background experiments such as B-Pol, searching for primordial gravitational waves. As the amount of available resources is limited, the question of how to optimize them in order to obtain the greatest possible science return, given present knowledge, will be of increasing importance.
In this chapter we address the issue of experimental forecasting and optimization, starting with the general aspects and a simple example. We then discuss the so-called Fisher Matrix approach, which allows one to compute forecasts rapidly, before looking at a real-world application. Finally, we cover forecasts of model comparison outcomes and model selection Figures of Merit.
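As a small illustration of the Fisher matrix approach named here, the sketch below forecasts marginalized parameter errors for a toy linear model with independent Gaussian errors; the model and numbers are invented for this example and are not drawn from the chapter's real-world application.

```python
import numpy as np

# Toy observable: mu(x; a, b) = a + b*x, measured at n points with
# independent Gaussian errors sigma (all numbers illustrative).
x = np.linspace(0.0, 1.0, 30)
sigma = 0.1

# Derivatives of the observable with respect to the parameters (a, b):
# d mu / d a = 1, d mu / d b = x.
dmu = np.stack([np.ones_like(x), x])        # shape (2, n_data)

# Fisher matrix: F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j) / sigma^2.
F = dmu @ dmu.T / sigma**2

# Marginalized 1-sigma forecast errors are the square roots of the
# diagonal of the inverse Fisher matrix.
cov = np.linalg.inv(F)
print("forecast errors on (a, b):", np.sqrt(np.diag(cov)))
```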