Quantum gravity by causal dynamical triangulation has over the last few years emerged as a serious contender for a nonperturbative description of the theory. It is a nonperturbative implementation of the sum over histories, which relies on few ingredients and initial assumptions, has few free parameters and – crucially – is amenable to numerical simulations. It is the only approach to have demonstrated that a classical universe can be generated dynamically from Planckian quantum fluctuations. At the same time, it allows for the explicit evaluation of expectation values of invariants characterizing the highly nonclassical, short-distance behaviour of spacetime. As an added bonus, we have learned important lessons on which aspects of spacetime need to be fixed a priori as part of the background structure and which can be expected to emerge dynamically.
Quantum gravity – taking a conservative stance
Many fundamental questions about the nature of space, time and gravitational interactions are not answered by the classical theory of general relativity, but lie in the realm of the still-searched-for theory of quantum gravity: What is the quantum theory underlying general relativity, and what does it say about the quantum origins of space, time and our universe? What is the microstructure of spacetime at the shortest scale usually considered, the Planck scale ℓPl = 10⁻³⁵ m, and what are the relevant degrees of freedom determining the dynamics there?
Observational astronomers always struggle, and often fail, to characterize celestial populations in an unbiased fashion. Many surveys are flux-limited (or, as expressed in traditional optical astronomy, magnitude-limited) so that only the brighter objects are detected. As flux is a convolution of the object's intrinsic luminosity and the (often uninteresting) distance to the observer according to Flux = L/(4πd²), this produces a sample with a complicated bias in luminosity: high-luminosity objects at large distances are over-represented and low-luminosity objects are under-represented in a flux-limited survey. This and related issues with nondetections have confronted astronomers for nearly 200 years.
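As a purely illustrative sketch (not taken from the text), the following Python snippet simulates a flux-limited survey using Flux = L/(4πd²); the luminosity and distance distributions and the flux limit are invented for the demonstration.

```python
# Minimal sketch (invented numbers): how a flux limit biases the
# luminosities recovered by a survey.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
L = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # intrinsic luminosities (arbitrary units)
d = rng.uniform(1.0, 10.0, size=n)               # distances (arbitrary units)
flux = L / (4.0 * np.pi * d**2)                  # Flux = L / (4 pi d^2)

flux_limit = 1e-3                                # hypothetical survey sensitivity
detected = flux > flux_limit

print("mean L, full population :", L.mean())
print("mean L, flux-limited set:", L[detected].mean())
print("fraction detected       :", detected.mean())
```

The mean luminosity of the detected subset comes out well above that of the full population, which is the bias described above.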
A blind astronomical survey of a portion of the sky is thus truncated at the sensitivity limit, where truncation indicates that the undetected objects, even the number of undetected objects, are entirely missing from the dataset. In a supervised astronomical survey where a particular property (e.g. far-infrared luminosity, calcium line absorption, CO molecular line strength) of a previously defined sample of objects is sought, some objects in the sample may be too faint to detect. The dataset then contains the full sample of interest, but some objects have upper limits and others have detections. Statisticians refer to upper limits as left-censored data points.
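The distinction can be made concrete with a small, hypothetical sketch: a truncated sample simply has no rows for undetected objects, while a censored sample retains every target but flags non-detections as upper limits. All values below are invented.

```python
# Illustrative sketch (hypothetical values) of truncated vs. censored data.
import numpy as np

# Truncated survey: objects below the sensitivity limit never enter the
# dataset -- even their number is unknown.
truncated_fluxes = np.array([5.2, 3.1, 8.7, 2.4])   # detections only

# Censored (supervised) survey: the full target sample is present, but
# non-detections carry upper limits ("left-censored" in statistics).
values = np.array([5.2, 0.8, 3.1, 0.5, 8.7])
is_upper_limit = np.array([False, True, False, True, False])

for v, ul in zip(values, is_upper_limit):
    print(("< %.1f (upper limit)" if ul else "  %.1f (detection)") % v)
```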
Multivariate problems with censoring and truncation also arise in astronomy. Consider, for example, a study of how the luminosity function of active galactic nuclei (AGN) depends on covariates such as redshift (as a measure of cosmic time), clustering environment, host galaxy bulge luminosity and starburst activity.
Our astronomical knowledge of planets, stars, the interstellar medium, galaxies or accretion phenomena is usually limited to a few observables that give limited information about the underlying conditions. Our astrophysical understanding usually involves primitive models of complex processes operating on complex distributions of atoms. In light of these difficulties intrinsic to astronomy, there is often little basis for assuming particular statistical distributions of and relationships between the observed variables. For example, astronomers frequently take the log of an observed variable to reduce broad ranges and to remove physical units, and then assume, with little justification, that their residuals around some distribution are normally distributed. Few astrophysical theories can predict whether the scatter in observable quantities is Gaussian in linear, logarithmic or other transformation of the variable.
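A brief, hypothetical sketch of the point: rather than assuming normality after a log transform, one can test it, here with SciPy's D'Agostino-Pearson test on a simulated lognormal sample (the distribution parameters are arbitrary).

```python
# Sketch (simulated data): check, rather than assume, whether a logged
# quantity is consistent with a normal distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
L = rng.lognormal(mean=30.0, sigma=0.8, size=400)   # hypothetical luminosities

print("linear values: p =", stats.normaltest(L).pvalue)
print("log10 values : p =", stats.normaltest(np.log10(L)).pvalue)
```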
Astronomers may commonly use a simple heuristic model in situations where there is little astrophysical foundation, or where the underlying phenomena are undoubtedly far more complex. Linear or log-linear (i.e. power-law) fits are often used to quantify relationships between observables in starburst galaxies, molecular clouds or gamma-ray bursts where the statistical model has little basis in astrophysical theory. In such cases, the mathematical assumptions of the statistical procedures are often not established, and the choice of a simplistic model may obfuscate interesting characteristics of the data.
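As an illustration only, the sketch below fits a power law by linear regression in log-log space; the "observables", their true index and the scatter are simulated, not drawn from any real dataset.

```python
# Sketch (simulated data): a log-linear (power-law) fit of the kind often
# applied to astronomical observables; names and numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 100.0, size=300)                    # e.g. cloud mass
y = 5.0 * x**2.3 * rng.lognormal(sigma=0.2, size=300)    # e.g. line luminosity

slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)
print("fitted power-law index: %.2f (true value 2.3)" % slope)
print("fitted normalization  : %.2f (true value 5.0)" % 10**intercept)
```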
Feigelson (2007) reviews one of these situations where hundreds of studies over seven decades modeled the radial starlight profiles of elliptical galaxies using simple parametric models, seeking to understand the distribution of mass within the galaxies.
The Robert C. Byrd Green Bank Telescope is the largest fully steerable filled-aperture radio telescope, with a size of 100 × 110 meters. The runners-up are the Effelsberg telescope, with a diameter of 100 meters, and the Jodrell Bank Lovell Telescope, 76 meters in diameter. The collecting areas of these telescopes are awesome, but their angular resolutions are poor: at λ = 21 cm, λ/D for a 100-m telescope is about 7 arcmin. These enormous telescope structures are difficult to keep in accurate alignment and are subject to huge forces from wind. Proposals for larger telescopes (e.g., the Jodrell Bank Mark IV and V telescopes at 305 and 122 m respectively) pose major engineering challenges for modest improvements in resolution and have also proven to press the limits of what other humans are willing to purchase for astronomers. The Arecibo dish circumvents some of these issues by being fixed in the ground and pointing by moving its feed; this concept allows a diameter of 259 m. However, the dish must be spherical, limiting the optical accuracy of the telescope, and it can reach only a fraction of the sky (roughly ±20° from the zenith). The angular resolution is still modest, a bit worse than 2 arcmin at 21 cm (1.4 GHz). This resolution is only equivalent to that of the unaided human eye!
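The arithmetic behind these quoted figures is easy to check; the short sketch below evaluates λ/D in arcminutes for the two apertures mentioned above at 21 cm.

```python
# Worked check of the numbers quoted above: angular resolution ~ lambda/D,
# converted to arcminutes.
import math

def resolution_arcmin(wavelength_m, diameter_m):
    """Approximate angular resolution lambda/D, in arcminutes."""
    radians = wavelength_m / diameter_m
    return math.degrees(radians) * 60.0

print("100 m dish at 21 cm : %.1f arcmin" % resolution_arcmin(0.21, 100.0))
print("259 m dish at 21 cm : %.1f arcmin" % resolution_arcmin(0.21, 259.0))
```

This gives roughly 7 arcmin and a bit under 3 arcmin respectively, consistent with the values in the text.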
Even if we could build larger telescopes, they would be limited by source confusion: in very deep observations, the background of distant galaxies appears so complex at the resolution of these relatively large beams that it becomes impossible to distinguish an object from its neighbors (see Section 1.5.4).
Multivariate analysis discussed in Chapter 8 seeks to characterize structural relationships among the p variables that may be present in addition to random scatter. The primary structural relations may link the subpopulations without characterizing the structure of any one population.
In such cases, the scientist should first attempt to discriminate the subpopulations. This is the subject of multivariate clustering and classification. Clustering refers to situations where the subpopulations must be estimated from the dataset alone, whereas classification refers to situations where training datasets of known populations are available independently of the dataset under study. When the datasets are very large with well-characterized training sets, classification is a major component of data mining. The efforts to find concentrations in a multivariate distribution of points are closely allied with clustering analysis of spatial distributions when p = 2 or 3; for such low-p problems, the reader is encouraged to examine Chapter 12 along with the present discussion.
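A minimal sketch of the clustering case, using simulated data and SciPy's k-means routine; the two subpopulations, their locations and scatter are invented for the example.

```python
# Minimal sketch (simulated data): separating two subpopulations in a
# two-variable dataset with k-means clustering.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
pop_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
pop_b = rng.normal(loc=[3.0, 2.0], scale=0.5, size=(150, 2))
data = np.vstack([pop_a, pop_b])

centroids, labels = kmeans2(data, k=2, minit='points', seed=4)
print("estimated centroids:\n", centroids)
print("objects per cluster:", np.bincount(labels))
```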
The astronomical context
Since the advent of astrophotography and spectroscopy over a century ago, astronomers have faced the challenge of characterizing and understanding vast numbers of asteroids, stars, galaxies and other cosmic populations. A crucial step towards astrophysical understanding was the classification of objects into distinct, and often ordered, categories which contain objects sharing similar properties. Over a century ago, A. J. Cannon examined hundreds of thousands of low-resolution photographic stellar spectra, classifying them in the OBAFGKM sequence of decreasing surface temperature.
Astronomers fit data both to simple phenomenological relationships and to complex non-linear models based on astrophysical understanding of the observed phenomenon. The first type often involves linear relationships, and is common in other fields such as social and biological sciences. Examples might include characterizing the Fundamental Plane of elliptical galaxies or the power-law index of solar flare energies. Astrophysicists may have some semi-quantitative explanations for these relationships, but they typically do not arise from a well-established astrophysical process.
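A hypothetical sketch of the first type of fit: an ordinary least-squares plane fit among three logged observables, loosely in the spirit of the Fundamental Plane. All variable names, coefficients and scatter here are invented for illustration.

```python
# Sketch (simulated data): least-squares fit of log R = a*log sigma + b*mu + c.
# Coefficients, scatter and variable ranges are invented, not real galaxy data.
import numpy as np

rng = np.random.default_rng(5)
n = 500
log_sigma = rng.normal(2.3, 0.1, n)          # log velocity dispersion
mu = rng.normal(20.0, 0.8, n)                # mean surface brightness
log_R = 1.2 * log_sigma + 0.3 * mu - 5.0 + rng.normal(0.0, 0.05, n)

A = np.column_stack([log_sigma, mu, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(A, log_R, rcond=None)
print("fitted coefficients (a, b, c):", np.round(coeffs, 2))
```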
But the second type of statistical modeling is not seen outside of the physical sciences. Here, provided the model family truly represents the underlying phenomenon, the fitted parameters give insights into sizes, masses, compositions, temperatures, geometries and other physical properties of astronomical objects. Examples of astrophysical modeling include:
Interpreting the spectrum of an accreting black hole such as a quasar. Is it a nonthermal power law, a sum of featureless blackbodies, and/or a thermal gas with atomic emission and absorption lines?
Interpreting the radial velocity variations of a large sample of solar-like stars. This can lead to discovery of orbiting systems such as binary stars and exoplanets, giving insights into star and planet formation. Is a star orbited by two planets or four planets?
Interpreting the spatial fluctuations in the Cosmic Microwave Background radiation. What are the best-fit combinations of baryonic, dark matter and dark energy components? Are Big Bang models with quintessence or cosmic strings excluded?
The goals of astronomical modeling also differ from many applications in social science or industry.
The submillimeter and millimeter-wave regime – roughly λ = 0.2 mm to 3 mm – represents a transition between infrared and radio (λ > 3 mm) methods. Because of the very small energy associated with a photon at these wavelengths, photodetectors are no longer effective and we must turn to the two alternative types described in Section 1.4.2. Thermal detectors – bolometers – are useful at low spectral resolution. For high-resolution spectroscopy (and interferometers), coherent detectors are used. Coherent detectors – heterodyne receivers – dominate the radio regime for both low and high spectral resolution (Wilson et al. 2009).
As the wavelengths get longer, the requirements for optics also change. The designs of the components surrounding bolometers and submm- and mm-wave mixers must take account of the wave nature of the energy to optimize the absorption efficiency. “Pseudo-optics” are employed, combining standard lenses and mirrors with components that concentrate energy without necessarily bringing it to a traditional focus. In the radio region, non-optical techniques are used to transport and concentrate the photon stream energy. For example, energy can be conveyed long distances in waveguides, hollow conductors designed to carry signals through resonant reflection from their walls. At higher frequencies, strip lines or microstrips can be designed to have some of the characteristics of waveguides; they consist of circuit traces on insulators and between or over ground planes.
The goal of density estimation is to estimate the unknown probability density function of a random variable from a set of observations. In more familiar language, density estimation smooths collections of individual measurements into a continuous distribution, replacing the dots of a scatterplot with a curve or surface.
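A minimal sketch of nonparametric density estimation, here with SciPy's Gaussian kernel density estimator applied to simulated one-dimensional data; the sample sizes, locations and evaluation grid are arbitrary.

```python
# Minimal sketch (simulated data): kernel density estimation with a
# Gaussian kernel, turning discrete measurements into a smooth density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
sample = np.concatenate([rng.normal(-1.0, 0.3, 200),
                         rng.normal(1.5, 0.5, 100)])

kde = gaussian_kde(sample)          # bandwidth chosen by Scott's rule
grid = np.linspace(-3.0, 4.0, 5)
print("density on a coarse grid:", np.round(kde(grid), 3))
```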
The problem arises in a wide variety of astronomical investigations. Galaxy or lensing distributions can be smoothed to trace the underlying dark matter distribution. Photons in an X-ray or gamma-ray image can be smoothed to visualize the X-ray or gamma-ray sky. Light curves from episodically observed variable stars or quasars can be smoothed to understand the nature of their variability. Star streams in the Galaxy's halo can be smoothed to trace the dynamics of cannibalized dwarf galaxies. Orbital parameters of Kuiper Belt Objects can be smoothed to understand resonances with planets.
Astronomical surveys measure properties of large samples of sources in a consistent fashion, and the objects are often plotted in low-dimensional projections to study characteristics of (sub)populations. Photometric color-magnitude and color-color plots are well-known examples, but parameters may be derived from spectra (e.g. emission-line ratios to measure gas ionization, velocity dispersions to study kinematics) or images (e.g. galaxy morphology measures). In these situations, it is often desirable to estimate the density for comparison with astrophysical theory, to visualize relationships between variables, or to find outliers of interest.
Concepts of density estimation
When the parametric form of the distribution is known, either from astrophysical theory or from a heuristic choice of some simple mathematical form, then the distribution function can be estimated by fitting the model parameters.
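For the parametric case, a short illustrative sketch: assuming a lognormal form, its parameters can be estimated by maximum likelihood with SciPy. The simulated sample and its parameters are invented for the illustration.

```python
# Sketch (simulated data): estimating an assumed parametric density,
# here a lognormal fitted by maximum likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.lognormal(mean=1.0, sigma=0.5, size=1000)

shape, loc, scale = stats.lognorm.fit(sample, floc=0.0)
print("fitted sigma of log = %.2f, fitted mean of log = %.2f"
      % (shape, np.log(scale)))
```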
Imagers capture the two-dimensional pattern of light at the telescope focal plane. They consist of a detector array along with the necessary optics, electronics, and cryogenic apparatus to put the light onto the array at an appropriate angular scale and wavelength range, to collect the resulting signal, and to hold the detector at an optimum temperature while the signal is being collected. Imaging is basic to a variety of investigations, but is also the foundation for the use of other instrument types that need to have their target sources located accurately. In this chapter we discuss the basic design requirements for imagers in the optical and the infrared and guidelines for obtaining good data and reducing it well. We finish with a section on astrometry, a particularly demanding and specialized use of images.
Optical imager design
A simple optical imager consists of a CCD in a liquid nitrogen dewar or cryostat with a window through which the telescope focuses light onto the CCD (see Figure 4.1). Broad spectral bands are isolated with filters, mounted in a wheel or slide to allow different ones to be placed conveniently over the window. Although this imager is conceptually simple, good performance requires attention to detail. For example, if the filters are too close to the CCD, any imperfections or dust on them will produce large-amplitude artifacts in the image.
For nearly a century, photography was central to huge advances in astronomy. Photographic plates were the first detectors that could accumulate long integrations and could store the results for in-depth analysis away from the telescope. They had three major shortcomings, however: (1) poor detective quantum efficiency (DQE); (2) a response that can be nonlinear and complex; and (3) the impossibility of obtaining repeated exposures with the identical detector array, an essential step toward quantitative understanding of subtle signals. The further advances with electronic detectors arise largely because they have overcome these shortcomings.
Modern photon detectors operate by placing a bias voltage across a semiconductor crystal, illuminating it with light, and measuring the resulting photo-current. There are a variety of implementations, but an underlying principle is to improve the performance by separating the region of the device responsible for the photon absorption from the one that provides the high electrical resistance needed to minimize noise. Nearly all of these detector types can be fabricated in large-format two-dimensional arrays with multiplexing electrical readout circuits that deliver the signals from the individual detectors, or pixels, in a time sequence. Such devices dominate in the ultraviolet, visible, and near- and mid-infrared. Our discussion describes: (1) the solid-state physics around the absorption process (Section 3.2); (2) basic detector properties (Section 3.3); (3) infrared detectors (Section 3.4); (4) infrared arrays and readouts (Section 3.5); and (5) charge-coupled devices (CCDs – Section 3.6). This chapter also describes image intensifiers as used in the ultraviolet, and photomultipliers (Section 3.7). Heritage detectors that operate on other principles are discussed elsewhere (e.g., Rieke 2003, Kitchin 2008).
Progress in astronomy is fueled by new technical opportunities (Harwit, 1984). For a long time, steady and ultimately spectacular advances in the optical were made in telescopes and, more recently, in detectors. In the last 60 years, continued progress has been fueled by opening new spectral windows: radio, X-ray, infrared (IR), gamma ray. We haven't run out of possibilities: submillimeter, hard X-ray/gamma ray, cold IR telescopes, multi-conjugate adaptive optics, neutrinos, and gravitational waves are some of the remaining frontiers. To stay at the forefront requires that you be knowledgeable about new technical possibilities.
You will also need to maintain a broad perspective, an increasingly difficult challenge with the ongoing explosion of information. Much of the future progress in astronomy will come from combining insights in different spectral regions. Astronomy has become panchromatic. This is behind much of the push for Virtual Observatories and the development of data archives of many kinds. To make optimum use of all this information requires you to understand the capabilities and limitations of a broad range of instruments, so that you know the strengths and weaknesses of the data you are working with.