Our knowledge of Mars has changed dramatically in the past 40 years due to the wealth of information provided by Earth-based and orbiting telescopes and spacecraft investigations. Recent observations suggest that water has played a major role in the climatic and geologic history of the planet. This interdisciplinary textbook covers our understanding of the planet's formation, geology, atmosphere, interior, surface properties, and potential for life, encompassing the fields of geology, chemistry, atmospheric sciences, geophysics, and astronomy. Each chapter introduces the necessary background information to help the non-specialist understand the topics explored. It includes results from missions through 2006, including the latest insights from Mars Express and the Mars Exploration Rovers. Containing the most up-to-date information on Mars, this textbook is essential reading for graduate courses and an important reference for researchers.
The past forty years have been a time of spectacular development in the study of general relativity and cosmology. A special role in this has been played by the influential research groups led by Dennis Sciama in Cambridge, Oxford, and Trieste. In April 1992 many of his ex-students and collaborators came to Trieste (where he is currently Professor) for a review meeting to celebrate his 65th birthday. This book consists of written versions of the talks presented which, taken together, comprise an authoritative overview of developments which have taken place during his career to date. The topics covered include fundamental questions in general relativity and cosmology, black holes, active galactic nuclei, galactic structure, dark matter, and large-scale structure.
A self-contained introduction to magnetohydrodynamics (MHD), with emphasis on nonlinear processes. Chapters 2 to 4 outline the conventional aspects of MHD theory, magnetostatic equilibrium, and linear stability theory, which form a natural basis for the topics in the subsequent chapters. The main part, chapters 5 to 7, presents nonlinear theory, starting with the evolution and saturation of individual ideal and resistive instabilities, continuing with a detailed analysis of magnetic reconnection, and concluding with the most complex nonlinear behaviour, that of MHD turbulence. The last chapters describe three important applications of the theory: disruptive processes in tokamaks, MHD effects in reversed-field pinches, and solar flares. The presentation focuses more on physical mechanisms than on special formalisms. The book is essential reading for researchers and graduate students interested in MHD processes in both laboratory and astrophysical plasmas.
This volume includes contributions by leading workers in the field given at the workshop on Numerical Relativity held in Southampton in December 1991. Numerical Relativity, or the numerical solution of astrophysical problems using powerful computers to solve Einstein's equations, has grown rapidly over the last 15 years. It is now an important route to understanding the structure of the Universe, and is the only route currently available for approaching certain important astrophysical scenarios. The Southampton meeting was notable for the first full report of the new 2+2 approach and the related null or characteristic approaches, as well as for updates on the established 3+1 approach, including both Newtonian and fully relativistic codes. The contributions range from theoretical (formalisms, existence theorems) to the computational (moving grids, multiquadrics and spectral methods).
By
D. S. Sivia, St John's College, St. Giles, Oxford OX1 3JP, UK,
S. G. Rawlings, Astrophysics, Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK
Having seen how the need for rational inference leads to the Bayesian approach for data analysis, we illustrate its use with a couple of simplified cosmological examples. While real problems require analytical approximations or Monte Carlo computation for the sums to be evaluated, toy ones can be made simple enough to be done with brute force. The latter are helpful for learning the basic principles of Bayesian analysis, which can otherwise become confused with the details of the practical algorithm used to implement them.
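The brute-force approach mentioned above can be made concrete with a short sketch. The numbers below (a coin observed to give 7 heads in 10 tosses, with a flat prior) are invented for illustration, not taken from the text; the point is simply that on a grid, the posterior is nothing more than prior times likelihood, normalized by direct summation:

```python
import numpy as np

# Brute-force Bayesian inference on a parameter grid (illustrative
# numbers): infer a coin's bias p after observing 7 heads in 10 tosses.
p = np.linspace(0.001, 0.999, 999)     # grid over the parameter
dp = p[1] - p[0]
prior = np.ones_like(p)                # flat prior
likelihood = p**7 * (1.0 - p)**3       # binomial kernel for 7 heads, 3 tails
posterior = prior * likelihood
posterior /= posterior.sum() * dp      # normalize by direct summation
p_mean = np.sum(p * posterior) * dp    # posterior mean, close to 8/12
```

In a real analysis the grid would be replaced by an analytical approximation or a Monte Carlo sampler, but the logic, prior times likelihood followed by normalization, is unchanged.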
Introduction
In science, as in everyday life, we are constantly faced with the task of having to draw inferences from incomplete and imperfect information. Laplace (1812, 1814), perhaps more than anybody, developed probability theory as a tool for reasoning quantitatively in such situations where arguments cannot be made with certainty; in his view, it was ‘nothing but common sense reduced to calculation’. Although this approach to probability theory lost favour soon after his death, giving way to a frequency interpretation and the related birth of statistics (Jaynes 2003), it has experienced a renaissance since the late twentieth century. This has been driven, in practical terms, by the rapid evolution of computer hardware and the advent of larger-scale problems. Theoretical progress has also been made with the discovery of new rationales (Skilling 2010), but most scientists are drawn to Laplace's viewpoint instinctively.
In the past few years, several introductory texts have become available on the Bayesian (or Laplacian) approach to data analysis written from the perspective of the physical sciences (Sivia 1996; MacKay 2003; Gregory 2005).
Signal separation is a common task in cosmological data analysis. The basic problem is simple to state: a number of signals are mixed together in some manner, either known or unknown, to produce some observed data. The object of signal separation is to infer the underlying signals given the observations.
A large number of techniques have been developed to attack this problem. The approaches adopted depend most crucially on the assumptions made regarding the nature of the signals and how they are mixed. Often methods are split into two broad classes: so-called blind and non-blind methods. Non-blind methods can be applied in cases where we know how the signals were mixed. Conversely, blind methods assume no knowledge of how the signals were mixed, and rely on assumptions about the statistical properties of the signals to make the separation. There are some techniques that straddle the two classes, which we shall refer to as ‘semi-blind’ methods. They assume partial knowledge of how the signals are mixed, or that the mixing properties of some signals are known and those of others are not.
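A minimal non-blind sketch, under the common assumption of linear mixing with additive Gaussian noise, shows how simple the separation becomes once the mixing is known. The matrix and signal values below are invented for illustration; with the mixing matrix given, the maximum-likelihood estimate of the signals reduces to least squares:

```python
import numpy as np

# Non-blind separation sketch: observed data d = A s + noise, with the
# mixing matrix A known. All numbers here are illustrative.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5],
              [0.3, 1.0],
              [0.8, 0.2]])             # 3 observed channels, 2 underlying signals
s_true = np.array([2.0, -1.0])         # the signals we want to recover
d = A @ s_true + 0.01 * rng.standard_normal(3)
s_hat, *_ = np.linalg.lstsq(A, d, rcond=None)   # ML solution for Gaussian noise
```

Blind and semi-blind methods face the much harder task of estimating some or all of `A` as well, which is why they must lean on statistical assumptions about the signals themselves.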
There is a large literature on signal separation in the field of signal processing, using Bayesian techniques or otherwise. For any cosmological signal separation problem, it is almost always the case that someone has already attempted to solve an analogous problem in the signal processing literature. Readers who encounter a problem of this type that is not already addressed in the cosmological literature are encouraged to look further afield for existing solutions.
Once planetesimals have formed, the dominant physical process that controls further growth is their mutual gravitational interaction. Conventionally the only further role the gas disk plays in terrestrial planet formation is to provide a modest degree of aerodynamic damping of protoplanetary eccentricity and inclination. In this limit the physics involved – Newtonian gravity – is simple and the problem of terrestrial planet formation is well posed. It is not, however, easy to solve. It would take 4 × 109 planetesimals with a radius of 5 km to build the Solar System's terrestrial planets, and it is infeasible to directly simulate the N-body evolution of such a system for long enough (and with sufficient accuracy) to watch planets form. Instead a hybrid approach is employed. For the earliest phases of terrestrial planet formation a statistical approach, similar to that used in the kinetic theory of gases, is both accurate and efficient. When the number of dynamically significant bodies has dropped to a manageable number (of the order of hundreds or thousands), direct N-body simulations become feasible, and these are used to study the final assembly of the terrestrial planets. Using this two-step approach has known drawbacks (for example, it is difficult to treat the situation where a small number of protoplanets co-exist with a large sea of very small bodies), but nevertheless it provides a reasonably successful picture for how the terrestrial planets formed.
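The quoted planetesimal count can be checked with a back-of-the-envelope calculation: divide the combined mass of the terrestrial planets by the mass of a single 5 km body. The density below is an assumed value chosen only to illustrate the order of magnitude, and the result depends directly on it:

```python
import math

# Order-of-magnitude check on the number of 5 km planetesimals needed
# to build the terrestrial planets (assumed values, for illustration).
m_terrestrial = 1.18e25   # kg: Mercury + Venus + Earth + Mars, approx.
rho = 5.5e3               # kg/m^3: assumed mean planetesimal density
r = 5.0e3                 # m: 5 km radius
m_planetesimal = rho * (4.0 / 3.0) * math.pi * r**3
n = m_terrestrial / m_planetesimal      # a few times 10^9
```

A lower, rockier density pushes the count somewhat higher, but the answer stays at a few billion bodies, far beyond direct N-body simulation over formation time scales.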
By
Andrew R. Liddle, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
Pia Mukherjee, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK,
David Parkinson, Astronomy Centre, University of Sussex, Brighton BN1 9QH, UK
One of the principal aims of cosmology is to identify the correct cosmological model, able to explain the available high-quality data. Determining the best model is a two-stage process. First, we must identify the set of parameters that we will allow to vary in seeking to fit the observations. As part of this process we need also to fix the allowable (prior) ranges that these parameters might take, most generally by providing a probability density function in the N-dimensional parameter space. This combination of parameter set and prior distribution is what we will call a model, and it should make calculable predictions for the quantities we are going to measure. Having chosen the model, the second stage is to determine, from the observations, the ranges of values of the parameters which are compatible with the data. This second step, parameter estimation, is described in the cosmological context by Lewis and Bridle in Chapter 3 of this volume. In this article, we shall concentrate on the choice of model.
Typically, there is not a single model that we wish to fit to the data. Rather, the aim of obtaining the data is to choose between competing models, where different physical processes may be responsible for the observed outcome. This is the statistical problem of model comparison, or model selection, and it is readily carried out by extending the Bayesian parameter estimation framework so that we assign probabilities to models, as well as to parameter values within those models.
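A toy version of this model comparison can again be done by brute force. The sketch below uses invented data (sample mean 1.0, Gaussian likelihood with unit variance assumed) and compares a model with the mean fixed at zero against one where the mean is free, with a uniform prior; each model's evidence is its likelihood integrated over its prior:

```python
import numpy as np

# Toy Bayesian model comparison (illustrative numbers):
#   M0: mean fixed at 0   versus   M1: mean free, uniform prior on [-5, 5].
data = np.linspace(0.5, 1.5, 20)            # fake measurements, sigma = 1 assumed

def log_like(mu):
    return -0.5 * np.sum((data - mu) ** 2)  # Gaussian log-likelihood, up to a constant

mu_grid = np.linspace(-5.0, 5.0, 2001)
dmu = mu_grid[1] - mu_grid[0]
prior = 1.0 / 10.0                          # uniform prior density over [-5, 5]
like = np.exp(np.array([log_like(m) for m in mu_grid]))
evidence_m1 = np.sum(prior * like) * dmu    # brute-force integral over the prior
evidence_m0 = np.exp(log_like(0.0))         # no free parameter: evidence = likelihood
bayes_factor = evidence_m1 / evidence_m0    # > 1 favours the free-mean model
```

Note that the evidence automatically penalizes the more flexible model for its prior volume: had the data been consistent with zero mean, the factor of 1/10 from the prior would have counted against M1, a built-in Occam penalty.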
Phenomenal new observations from Earth-based telescopes and Mars-based orbiters, landers, and rovers have dramatically advanced our understanding of past environments on Mars. These include the first global-scale infrared and reflectance spectroscopic maps of the surface, leading to the discovery of key minerals indicative of specific past climate conditions; the discovery of large reservoirs of subsurface water ice; and detailed in situ roving investigations of three new landing sites. This is an important new overview of the compositional and mineralogic properties of Mars, the first major study since 1992. An exciting resource for all researchers and students in planetary science, astronomy, space exploration, planetary geology, and planetary geochemistry; specialized terms are explained so that they can be easily understood by those just entering the field.
Cometography is a multi-volume catalog of every comet observed throughout history. It uses the most reliable orbits known to determine the distances from the Earth and Sun at the time a comet was discovered and last observed, as well as the largest and smallest angular distance to the Sun, most northerly and southerly declination, closest distance to the Earth, and other details to enable the reader to understand the physical appearance of each well-observed comet. Volume 4 provides a complete discussion of each comet seen from 1933 to 1959. It includes physical descriptions made throughout each comet's apparition. The comets are listed in chronological order, and each listing includes complete references to publications relating to the comet. This book is the most complete and comprehensive collection of comet data available, and provides amateur and professional astronomers, and historians of science, with a definitive reference on comets through the ages.
Planets form from protoplanetary disks of gas and dust that are observed to surround young stars for the first few million years of their evolution. Disks form because stars are born from relatively diffuse gas (with particle number density n ~ 105 cm−3) that has too much angular momentum to collapse directly to stellar densities (n ~ 1024 cm−3). Disks survive as well-defined quasi-equilibrium structures because once gas settles into a disk around a young star its specific angular momentum increases with radius. To accrete, angular momentum must be lost from, or redistributed within, the disk gas, and this process turns out to require time scales that are much longer than the orbital or dynamical time scale.
In this chapter we discuss the structure of protoplanetary disks. Anticipating the fact that angular momentum transport is slow, we assume here that the disk is a static structure. This approximation suffices for a first study of the temperature, density, and composition profiles of protoplanetary disks, which are critical inputs for models of planet formation. It also permits investigation of the predicted emission from disks that can be compared to a large body of astronomical observations. We defer to Chapter 3 the tougher question of how the gas and solids within the disk evolve with time.
The formation of terrestrial planets from micron-sized dust particles requires growth through at least 12 orders of magnitude in size scale. It is conceptually useful to divide the process into three main stages that involve different dominant physical processes:
Planetesimal formation. Planetesimals are defined as bodies that are large enough (typically of the order of 10 km in radius) that their orbital evolution is dominated by mutual gravitational interactions rather than aerodynamic coupling to the gas disk. With this definition it is self-evident that aerodynamic forces between solid particles and the gas disk are of paramount importance in the study of planetesimal formation, since these forces dominate the evolution of particles in the large size range that lies between dust and substantial rocks. The efficiency with which particles coagulate upon collision – loosely speaking how “sticky” they are – is also very important.
Terrestrial planet formation. Once a population of planetesimals has formed within the disk their subsequent evolution is dominated by gravitational interactions. This phase of planet formation, which yields terrestrial planets and the cores of giant planets, is the most cleanly defined since the basic physics (Newtonian gravity) is simple and well-understood. It remains challenging due to the large number of bodies – it takes 500 million 10 km radius planetesimals to build up the Solar System's terrestrial planets – and long time scales involved.
Giant planet formation and core migration. Once planets have grown to about an Earth mass, coupling to the gas disk becomes significant once again, though now it is gravitational rather than aerodynamic forces that matter. […]
A revolution is underway in cosmology, with largely qualitative models of the Universe being replaced by precision modelling and the determination of the Universe's properties to high accuracy. The revolution is driven by three distinct elements: the development of sophisticated cosmological models and the ability to extract accurate predictions from them, the acquisition of large and precise observational datasets constraining those models, and the deployment of advanced statistical techniques to extract the best possible constraints from those data.
This book focuses on the last of these. In their approach to analyzing datasets, cosmologists for the most part work resolutely within the Bayesian methodology for scientific inference. This approach is characterized by the assignment of probabilities to all quantities of interest, which are then manipulated by a set of rules, amongst which Bayes' theorem plays a central role. Those probabilities are constantly updated in response to new observational data, and at any given instant provide a snapshot of the best current understanding. Full deployment of Bayesian inference has only recently come within the capabilities of high-performance computing.
Despite the prevalence of Bayesian methods in the cosmology literature, there is no single source that collects together both a description of the main Bayesian methods and a range of illustrative applications to cosmological problems. That, of course, is the aim of this volume. Its seeds were sown at a small conference, ‘Bayesian Methods in Cosmology’, held at the University of Sussex in June 2006 and attended by around 60 people, at which many cosmological applications of Bayesian methods were discussed.