Phenomenal new observations from Earth-based telescopes and Mars-based orbiters, landers, and rovers have dramatically advanced our understanding of the past environments on Mars. These include the first global-scale infrared and reflectance spectroscopic maps of the surface, leading to the discovery of key minerals indicative of specific past climate conditions; the discovery of large reservoirs of subsurface water ice; and the detailed in situ roving investigations of three new landing sites. This is an important new overview of the compositional and mineralogic properties of Mars, the first since the last major study was published in 1992. Specialized terms are explained so that they are easily understood by those just entering the field, making this an exciting resource for all researchers and students in planetary science, astronomy, space exploration, planetary geology, and planetary geochemistry.
Cometography is a multi-volume catalog of every comet observed throughout history. It uses the most reliable orbits known to determine the distances from the Earth and Sun at the time a comet was discovered and last observed, as well as the largest and smallest angular distance to the Sun, most northerly and southerly declination, closest distance to the Earth, and other details to enable the reader to understand the physical appearance of each well-observed comet. Volume 4 provides a complete discussion of each comet seen from 1933 to 1959. It includes physical descriptions made throughout each comet's apparition. The comets are listed in chronological order, and each listing includes complete references to publications relating to the comet. This book is the most complete and comprehensive collection of comet data available, and provides amateur and professional astronomers, and historians of science, with a definitive reference on comets through the ages.
Planets form from protoplanetary disks of gas and dust that are observed to surround young stars for the first few million years of their evolution. Disks form because stars are born from relatively diffuse gas (with particle number density n ~ 10⁵ cm⁻³) that has too much angular momentum to collapse directly to stellar densities (n ~ 10²⁴ cm⁻³). Disks survive as well-defined quasi-equilibrium structures because, once gas settles into a disk around a young star, its specific angular momentum increases with radius. For gas to accrete, angular momentum must be lost from, or redistributed within, the disk, and this process turns out to require time scales that are much longer than the orbital or dynamical time scale.
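An illustrative aside (ours, not the book's): for a circular Keplerian orbit around a star of mass M, the specific angular momentum is l = √(GMr), which grows with radius as r^1/2 and so must be shed for gas to spiral inwards. A minimal numerical sketch in Python, where the solar-mass star and the radii sampled are arbitrary choices:

    import math

    G = 6.674e-8       # gravitational constant, cgs (cm^3 g^-1 s^-2)
    M_sun = 1.989e33   # solar mass in grams
    au = 1.496e13      # astronomical unit in cm

    def specific_angular_momentum(r_cm, m_star=M_sun):
        """Specific angular momentum l = sqrt(G M r) of a circular Keplerian orbit."""
        return math.sqrt(G * m_star * r_cm)

    # l increases outwards, so accreting gas must lose angular momentum.
    for r_au in (0.1, 1.0, 10.0, 100.0):
        print(f"r = {r_au:6.1f} au  ->  l = {specific_angular_momentum(r_au * au):.2e} cm^2/s")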
In this chapter we discuss the structure of protoplanetary disks. Anticipating the fact that angular momentum transport is slow, we assume here that the disk is a static structure. This approximation suffices for a first study of the temperature, density, and composition profiles of protoplanetary disks, which are critical inputs for models of planet formation. It also permits investigation of the predicted emission from disks that can be compared to a large body of astronomical observations. We defer to Chapter 3 the tougher question of how the gas and solids within the disk evolve with time.
The formation of terrestrial planets from micron-sized dust particles requires growth through at least 12 orders of magnitude in size scale. It is conceptually useful to divide the process into three main stages that involve different dominant physical processes:
Planetesimal formation. Planetesimals are defined as bodies that are large enough (typically of the order of 10 km in radius) that their orbital evolution is dominated by mutual gravitational interactions rather than aerodynamic coupling to the gas disk. With this definition it is self-evident that aerodynamic forces between solid particles and the gas disk are of paramount importance in the study of planetesimal formation, since these forces dominate the evolution of particles in the large size range that lies between dust and substantial rocks. The efficiency with which particles coagulate upon collision – loosely speaking how “sticky” they are – is also very important.
Terrestrial planet formation. Once a population of planetesimals has formed within the disk, their subsequent evolution is dominated by gravitational interactions. This phase of planet formation, which yields terrestrial planets and the cores of giant planets, is the most cleanly defined, since the basic physics (Newtonian gravity) is simple and well understood. It remains challenging due to the large number of bodies – it takes 500 million planetesimals of 10 km radius to build up the Solar System's terrestrial planets (a rough check follows this list) – and the long time scales involved.
Giant planet formation and core migration. Once planets have grown to about an Earth mass, coupling to the gas disk becomes significant once again, though now it is gravitational rather than aerodynamic forces that matter. […]
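As a rough numerical check on the planetesimal count quoted above (our back-of-envelope sketch; the assumed bulk density of 3 g cm⁻³ is our choice, not the author's):

    import math

    # Masses of the terrestrial planets in grams (standard values).
    m_terrestrial = 3.30e26 + 4.87e27 + 5.97e27 + 6.42e26  # Mercury + Venus + Earth + Mars

    r = 1.0e6   # planetesimal radius: 10 km in cm
    rho = 3.0   # assumed bulk density in g/cm^3
    m_planetesimal = (4.0 / 3.0) * math.pi * r**3 * rho

    n = m_terrestrial / m_planetesimal
    print(f"planetesimal mass ~ {m_planetesimal:.2e} g")
    print(f"number needed     ~ {n:.1e}")

With these assumptions the count comes out near 10⁹; a modestly higher density or radius brings it to the ~5 × 10⁸ quoted in the text, so the figure holds at the order-of-magnitude level.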
A revolution is underway in cosmology, with largely qualitative models of the Universe being replaced by precision modelling and the determination of the Universe's properties to high accuracy. The revolution is driven by three distinct elements: the development of sophisticated cosmological models and the ability to extract accurate predictions from them, the acquisition of large and precise observational datasets constraining those models, and the deployment of advanced statistical techniques to extract the best possible constraints from those data.
This book focuses on the last of these. In their approach to analyzing datasets, cosmologists for the most part sit resolutely within the Bayesian methodology for scientific inference. This approach is characterized by the assignment of probabilities to all quantities of interest, which are then manipulated by a set of rules, amongst which Bayes' theorem plays a central role. Those probabilities are constantly updated in response to new observational data, and at any given instant provide a snapshot of the best current understanding. Full deployment of Bayesian inference has only recently come within the capabilities of high-performance computing.
Despite the prevalence of Bayesian methods in the cosmology literature, there is no single source which collects together both a description of the main Bayesian methods and a range of illustrative applications to cosmological problems. That, of course, is the aim of this volume. Its seeds grew from a small conference ‘Bayesian Methods in Cosmology’, held at the University of Sussex in June 2006 and attended by around 60 people, at which many cosmological applications of Bayesian methods were discussed.
By
Antony Lewis, Institute of Astronomy and Kavli Institute for Cosmology, Madingley Road, Cambridge CB3 0HA, UK,
Sarah Bridle, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
In this chapter we assume that you have a method for calculating the likelihood of some data from a parameterized model. Given some prior on the parameters, Bayes' theorem then gives the probability of the parameters given the data and model. A common goal in cosmology is then to find estimates of the parameters and their error bars. This is relatively simple when the number of parameters is small, but when there are more than about five parameters it is often useful to use a sampling method. Therefore in this chapter we focus mainly on finding parameter uncertainties using Monte Carlo methods.
Why do sampling?
We suppose that you have (i) some data, d, (ii) a model, M, (iii) a set θ of unknown parameters of the model, and (iv) a method for calculating the probability of these parameters given the data, P(θ|d, M). For convenience we shall mostly leave the dependence on d and M implicit, and thus write P(θ) = P(θ|d, M).
An example we will consider throughout this chapter is the estimation of cosmological parameters from cosmic microwave background (CMB) data. For example, you could consider that (i) the data is the CMB power spectrum, (ii) the cosmological model is a Big Bang inflation model with cold dark matter and a cosmological constant, and (iii) the unknown parameters are cosmological parameters such as the matter density and expansion rate of the Universe.
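To make the sampling idea concrete, here is a minimal Metropolis sketch (our illustration, not code from this chapter; the two-parameter Gaussian target and the proposal width are stand-in assumptions for a real posterior such as one derived from CMB data):

    import numpy as np

    rng = np.random.default_rng(42)

    def log_posterior(theta):
        # Stand-in for log P(theta | d, M): an uncorrelated 2-D Gaussian.
        # A real analysis would evaluate a likelihood code (e.g. for the
        # CMB power spectrum) and add the log prior.
        return -0.5 * np.sum(theta**2)

    def metropolis(log_p, theta0, n_steps=50_000, step=0.5):
        theta = np.asarray(theta0, dtype=float)
        logp = log_p(theta)
        chain = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.size)
            logp_new = log_p(proposal)
            # Accept with probability min(1, P_new / P_old).
            if np.log(rng.random()) < logp_new - logp:
                theta, logp = proposal, logp_new
            chain[i] = theta
        return chain

    chain = metropolis(log_posterior, theta0=[3.0, -3.0])
    samples = chain[5000:]                            # discard burn-in
    print("posterior means :", samples.mean(axis=0))  # close to [0, 0]
    print("posterior sigmas:", samples.std(axis=0))   # close to [1, 1]

The marginal means and standard deviations of the chain then provide the parameter estimates and error bars discussed above.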
Why and how – simply – that's what this chapter is about.
Rational inference
Rational inference is important. By helping us to understand our world, it gives us the predictive power that underlies our technical civilization. We would not function without it. Even so, rational inference only tells us how to think. It does not tell us what to think. For that, we still need the combination of creativity, insight, artistry and experience that we call intelligence.
In science, perhaps especially in branches such as cosmology, now coming of age, we invent models designed to make sense of data we have collected. It is no accident that these models are formalized in mathematics. Mathematics is far and away our most developed logical language, in which half a page of algebra can make connections and predictions way beyond the precision of informal thought. Indeed, one can hold the view that frameworks of logical connections are, by definition, mathematics. Even here, though, we do not find absolute truth. We have conditional implication: ‘If axiom, then theorem’ or, equivalently, ‘If not theorem, then not axiom’. Neither do we find absolute truth in science.
Our question in science is not ‘Is this hypothetical model true?’, but ‘Is this model better than the alternatives?’. We could not recognize absolute truth even if we stumbled across it, for how could we tell? Conversely, we cannot recognize absolute falsity.
The study of planet formation has a long history. The idea that the Solar System formed from a rotating disk of gas and dust – the Nebula Hypothesis – dates back to the writings of Kant, Laplace, and others in the eighteenth century. A quantitative description of terrestrial planet formation was already in place by the late 1960s, when Viktor Safronov published his now classic monograph Evolution of the Protoplanetary Cloud and Formation of the Earth and the Planets, while the main elements of the core accretion theory for gas giant planet formation were developed in the early 1980s. More recently, a wealth of new observations has led to renewed interest in the problem. The most dramatic development has been the identification of extrasolar planets, first around a pulsar and subsequently in large numbers around main-sequence stars. These detections have furnished a glimpse of the Solar System's place amid an extraordinary diversity of extrasolar planetary systems. The advent of high-resolution imaging of protoplanetary disks and the discovery of the Solar System's Kuiper Belt have been almost as influential in focusing theoretical attention on the initial conditions for planet formation and the role of dynamics in the early evolution of planetary systems.
My goals in writing this text are to provide a concise introduction to the classical theory of planet formation and to more recent developments spurred by new observations. Inevitably, the range of topics covered is far from comprehensive.
By
Ofer Lahav, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK,
Filipe B. Abdalla, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK,
Manda Banerji, Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
By
M. P. Hobson, Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK,
Graça Rocha, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA,
Richard S. Savage, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK; Systems Biology Centre, University of Warwick, Coventry CV4 7AL, UK
Source extraction is a generic problem in modern observational astrophysics and cosmology. Indeed, one of the major challenges in the analysis of astronomical observations is to identify and characterize a localized signal immersed in some general background. Typical one-dimensional examples include the extraction of point or extended sources from time-ordered scan data or the detection of absorption or emission lines in quasar spectra. In two dimensions, one often wishes to detect point or extended sources in astrophysical images that are dominated either by instrumental noise or contaminating diffuse emission. Similarly, in three dimensions, one might wish to detect galaxy clusters in large-scale structure surveys. Moreover, the ability to perform source extraction with reliable, automated methods has become vital with the advent of modern large-area surveys too large to be inspected in detail ‘by eye’. Indeed, much of the science derived from the study of astronomical sources, or from the background in which they are immersed, proceeds directly from accurate source extraction.
In extracting sources from astronomical data, we typically face a number of challenges. Firstly, there is instrumental noise. Nonetheless, it is often possible to obtain an accurate statistical characterization of the instrumental noise, which can then be used to compensate for its effects to some extent. More problematic are any so-called ‘backgrounds’ to the observation. These can be astrophysical or cosmological in origin, such as Galactic emission, cosmological backgrounds, faint source confusion, or even simply emission from parts of the telescope itself.
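A toy illustration of these ideas (ours, not the chapter's): when the source profile and the noise statistics are known, a matched filter is one standard way to pull a localized signal out of a noisy background. Here a Gaussian source is recovered from white noise in one dimension; all shapes and amplitudes are invented for the example:

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated 1-D scan: a Gaussian source of known width on unit white noise.
    n, width, position, amplitude = 1024, 8.0, 400, 2.0
    x = np.arange(n)
    signal = amplitude * np.exp(-0.5 * ((x - position) / width) ** 2)
    data = signal + rng.standard_normal(n)

    # Matched filter: correlate the data with a unit-normalized source template,
    # so the filtered output is directly in units of the noise sigma.
    template = np.exp(-0.5 * (np.arange(-4 * width, 4 * width + 1) / width) ** 2)
    template /= np.sqrt(np.sum(template**2))
    filtered = np.correlate(data, template, mode="same")

    print(f"true position     : {position}")
    print(f"detected position : {int(np.argmax(filtered))}")
    print(f"peak significance : {filtered.max():.1f} sigma")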
Planets can be defined informally as large bodies, in orbit around a star, that are not massive enough to have ever derived a substantial fraction of their luminosity from nuclear fusion. This definition fixes the maximum mass of a planet to be at the deuterium burning threshold, which is approximately 13 Jupiter masses for Solar composition objects (1 M_J = 1.899 × 10³⁰ g). More massive objects are called brown dwarfs. The lower mass cut-off for what we call a planet is not as well defined. Currently, the International Astronomical Union (IAU) requires a Solar System planet to be massive enough that it is able to clear the neighborhood around its orbit of other large bodies. Smaller objects that are massive enough to have a roughly spherical shape but which do not have a major dynamical influence on nearby bodies are called “dwarf planets.” It is likely that some objects of planetary mass exist that are not bound to a central star, either having formed in isolation or following ejection from a planetary system. Such objects are normally called “planetary-mass objects” or “free-floating planets.”
Complementary constraints on theories of planet formation come from observations of the Solar System and of extrasolar planetary systems. Space missions to all of the planets have yielded exquisitely detailed information on the surfaces (and in some cases interior structures) of the Solar System's planets, satellites, and minor bodies.
We astronomers mostly work in the ‘discovery space’, the region where effects are statistically significant at less than three sigma or lie near boundaries in data or parameter space. Working in the discovery space is a normal astronomical activity; few published results are initially found at large confidence. This can lead to anomalous results: positive definite quantities (such as masses, fractions, star formation rates, and dispersions) are sometimes found to be negative or, more generally, quantities are sometimes found at unphysical values (completeness larger than 100%, V/V_max > 1, or fractions larger than 1, for example). Working in the discovery space is typical of frontier research, because almost every significant result reaches this status after having first appeared in the discovery space, and because a good determination of known effects or trends usually triggers searches for finer, harder-to-detect effects, mostly falling once more in the discovery space.
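A toy numerical illustration of how a positive definite quantity can be estimated as negative (our example, not the author's; all numbers are invented): a background-subtracted flux is positive in truth, yet its estimator scatters below zero when the signal is comparable to the noise:

    import numpy as np

    rng = np.random.default_rng(7)

    true_flux, noise_sigma, n_trials = 1.0, 2.0, 100_000
    # Estimator: a (source + background) measurement minus a separate
    # background measurement, each with Gaussian noise.
    estimates = (true_flux + noise_sigma * rng.standard_normal(n_trials)
                 - noise_sigma * rng.standard_normal(n_trials))

    print(f"fraction of negative flux estimates: {np.mean(estimates < 0):.2f}")

With an effective signal-to-noise ratio of about 0.35, roughly a third of the measurements of this genuinely positive flux come out negative, exactly the discovery-space behaviour described above.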
Many of us are very confident that commonly used statistical tools work properly in the situations in which we use them. Unfortunately, in the discovery space, and sometimes outside it, we should not take this for granted, as shown below with a few examples. We cannot avoid working in this grey region, because to move our results into the statistically significant area we often need a larger or better sample. In order to obtain this, we first need to convince the community (and the telescope time allocation committees) that an effect is probably there, by working in the discovery space.