Fringe visibility measurements and derived observables such as the power spectrum and bispectrum are subject to both systematic and random errors. Chapter 3 showed how a systematic reduction in the fringe contrast can arise from the effects of atmospheric seeing. This chapter concentrates on the main sources of random errors or noise.
Noise on fringe parameters can arise both from the effects of atmospheric seeing and also from noise sources associated with measuring the light intensity levels in the fringe pattern. The main aim of this chapter is to derive robust ‘rule-of-thumb’ estimates for the noise levels in a given observation. These rules of thumb can then be used to determine which noise sources are dominant and whether random errors or systematic errors provide the fundamental limitation to accuracy in any given case.
Atmospheric noise
5.1.1 Power spectrum
As discussed in Chapter 3, the spatial and temporal wavefront phase fluctuations which occur due to atmospheric seeing cause the mean fringe contrast to decrease as the apertures go from being point-like to being comparable in size to the Fried parameter of the seeing r0, and as the exposure times go from being infinitesimal to being comparable to the coherence time of the seeing t0. Under these conditions the fringe contrast will also fluctuate on an exposure-to-exposure basis. Exposure times and aperture sizes need to be as large as possible in order to collect more light, so it is helpful to be able to quantify the trade-off between these experimental variables and the atmospheric noise level.
An idea for how the noise varies as a function of integration time can be obtained using the ‘random-walk’ model for the visibility reduction used in Section 3.3.2. In this model, the coherent flux is given by a random walk consisting of n steps so that
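A minimal form of this sum, written here assuming n equal-amplitude phasor steps, is

$$F = F_0 \sum_{k=1}^{n} \mathrm{e}^{\mathrm{i}\Phi_k},$$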
where F0 is the coherent flux in a single ‘step’ and Φk is the fringe phase at step k. The mean power spectrum is therefore given by
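If, purely for illustration, the step phases Φk are taken to be statistically independent and uniformly distributed, the cross terms average to zero and the mean power spectrum reduces to

$$\left\langle |F|^{2} \right\rangle = F_0^{2} \sum_{j=1}^{n} \sum_{k=1}^{n} \left\langle \mathrm{e}^{\mathrm{i}(\Phi_j - \Phi_k)} \right\rangle = n F_0^{2}.$$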
The OIFITS files produced at the end of the data-reduction process can be used to provide information about the object under study. There are two ways in which this information can be extracted, either in terms of parameters of a relatively simple model, or as a model-independent image. These two forms of information are related to one another and are often used in tandem. This chapter discusses the process of deriving these end products of an interferometric observation.
Bayesian inference
In Chapter 8 the overall inverse problem of interferometry was presented as the problem of determining the parameters of a model of the object being observed from the measured data values. The data-reduction process presented in that chapter does not fundamentally change that problem: the object model parameters still need to be determined, but, as the name implies, the data-reduction process reduces the volume of data that need to be considered.
Indeed, the remaining problem is not one of data reduction but of data interpretation, as the number of data points produced by the data-reduction process may be comparable to or even less than the number of model parameters. This kind of under-constrained problem is where a form of inference known as Bayesian inference is at its best.
Bayesian inference acknowledges the non-uniqueness of the model parameters in the majority of inverse problems. Instead of selecting a single model (for example a parameterised model together with a single set of values for the model parameters) which could have produced the data, it acknowledges that there may be many possible models and assigns a probability P between 0 and 1 to all of them in parallel.
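In symbols (with θ standing for the set of model parameters and D for the measured data, a notation used here purely for illustration), this assignment of probabilities follows Bayes' theorem,

$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)},$$

where P(θ) encodes the prior state of knowledge about the parameters and P(D | θ) is the likelihood of obtaining the measured data from a given model.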
I was honoured and delighted when David Buscher invited me to write an introduction to his new book. I have long felt that there is a desperate need for an authoritative and accessible book on the techniques of optical and infrared interferometry and David has filled this major gap in the literature with this excellent piece of technical and scientific writing.
For the radio astronomer, interferometry is the bread and butter of how much of the discipline has to be undertaken. Radio, and nowadays millimetre and submillimetre, astronomers are brought up with the concepts of amplitude and phase, Fourier inversion and so on, which has always been something of a barrier to the wider appreciation of these disciplines by optical astronomers, who until recently scarcely had to bother about phase at all. The understanding of aperture synthesis imaging in all its variants became a black-belt sport for the initiates and this discouraged the typical astronomer from taking the plunge.
But this is no longer reasonable or acceptable. The possibilities opened up by optical and infrared synthesis imaging are enormous, as David makes clear in this book. Angular resolution of a milliarcsecond or better can now be routinely provided by the most advanced optical-infrared interferometers and will undoubtedly result in important new discoveries and much improved tests of theories of Galactic and extragalactic objects.
This is where David's book comes in. He offers a rigorous, but accessible, introduction to the necessary theoretical and experimental tools needed to understand and apply the techniques of optical synthesis imaging. The result is that the effort needed to understand the key concepts by those new to the field, or who are still put off by the apparent complexity of the techniques, is made very much less forbidding.
Progress in astronomy is dependent on the development of new instrumentation that can provide data which are better in some way than the data which were available before. One improvement that has consistently led to astronomical discoveries is that of seeing finer detail in objects. In astronomy, the majority of the objects under study cannot easily be brought closer for inspection, so the typical angular scales subtended by objects, which depend on the ratio of the typical sizes of the objects to their typical distances from the Earth, are a more useful indicator of how easily they can be seen than their linear sizes alone. The angular separation of two features in a scene which can just be distinguished from one another is called the angular resolution, and the smaller this scale is, the more detail can be seen.
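As a rough illustration (the numbers are indicative only), a star with the diameter of the Sun, about 1.4 × 10⁹ m, placed at a distance of 10 parsecs, about 3.1 × 10¹⁷ m, subtends an angle of

$$\theta \approx \frac{1.4\times10^{9}\,\mathrm{m}}{3.1\times10^{17}\,\mathrm{m}} \approx 4.5\times10^{-9}\,\mathrm{rad} \approx 1\ \mathrm{milliarcsecond},$$

which is far smaller than the angular resolution of the naked eye or of any single conventional telescope.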
The impact of increased angular resolution can be appreciated from comparing the important angular scales of objects of interest with the angular resolution of the instrumentation available at different times in history. Prior to the invention of the telescope, the human eye was the premier ‘instrument’ in astronomy, with an angular resolution of about 1 arcminute (about 300 microradians). With the notable exceptions of the Sun, Moon and comets, most objects visible in the night sky are ‘star-like’: they have angular sizes smaller than 1 arcminute and so appear as point-like objects. The first telescopes improved the angular resolution of the naked eye by factors of three to six: Galileo's telescopes are thought to have had angular resolutions of about 10–20 arcseconds (Greco et al., 1993; Strano, 2009) and it became possible to see that planets appear as discs or crescents and have their own moons. Subsequent improvements to telescopes have culminated in telescopes like the Hubble Space Telescope (HST) which have typical angular resolutions of around 50 milliarcseconds (about 250 nanoradians) – better by a factor of more than a thousand than the naked eye.
The interferometer described in Chapter 1 is an idealised one, where all the optical delays are known and stable. In such an interferometer the phase of the interference fringes measured for a point-like object at a chosen position on the sky (the phase centre) will be zero, and the fringe phase is a good ‘observable’. For a number of practical reasons, no existing optical interferometer even approaches this ideal.
First of all, the mechanical tolerances required to know the optical path difference (OPD) internal to the interferometer are formidable. If the position of a single mirror in the optical train is in error by a fraction of a millimetre then the phase of the fringes will be hundreds of radians from the value measured by an ideal interferometer.
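To put an illustrative number on this, a path error of ΔL = 0.1 mm at an observing wavelength of λ = 1 μm corresponds to a fringe phase error of

$$\Delta\phi = \frac{2\pi\,\Delta L}{\lambda} = 2\pi \times 100 \approx 630\ \mathrm{radians}.$$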
While these mechanical errors could in principle be overcome through increasing the precision with which the interferometer is built, there are even more serious phase errors that are introduced external to the interferometer which no amount of increased construction accuracy will overcome. These errors are induced by the passage of starlight through the Earth's atmosphere on its way to the interferometer. The atmosphere introduces phase errors, which are large (many radians) and rapidly varying (on timescales of milliseconds), and so these errors present a fundamental limitation to interferometry from the ground.
The optical effects which cause these rapidly changing phase perturbations are known in astronomy as seeing. Seeing can be observed in single telescopes as the ‘boiling’ of stellar images, which looks similar to the ‘shimmering’ of images that can be seen on a hot day.
The effects of seeing on interferometers are of a different character to those on a single telescope: whereas on a single telescope the seeing affects the angular resolution which can be obtained, in an interferometer the resolution can be relatively little affected by the presence of seeing but the sensitivity can be dramatically altered.
The ability to observe faint objects is a key requirement for an astronomical instrument. Many types of object are intrinsically rare and therefore the closest exemplars are far away and hence faint. For objects which are less rare, being able to ‘go fainter’ means that more exemplars of the class can be studied to give statistical validity to any findings.
The exposure times used in interferometry are typically much less than those used in other types of astronomical instruments – usually milliseconds instead of minutes or hours. As a result, a target which would be considered bright for many astronomical observations is considered faint in interferometric terms. This means that the majority of potential astronomical targets are likely to be in the faint-object regime for interferometry, because they will have been discovered using techniques which have intrinsically better faint-object sensitivity.
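The scale of the penalty can be indicated with a simple (and purely illustrative) comparison: going from a 1-hour exposure to a 1-millisecond exposure reduces the number of photons collected by a factor of 3.6 × 10⁶, equivalent to a loss in sensitivity of

$$2.5 \log_{10}\!\left(3.6\times10^{6}\right) \approx 16\ \mathrm{magnitudes},$$

before any other losses are taken into account.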
This chapter takes a quantitative look at the limitations of interferometry when observing faint objects. It looks at the trade-offs involved in adjusting the parameters of an interferometric observation such as the exposure time, with the aim of (a) determining the best parameter settings to use for observing faint objects and (b) determining the faintest possible object which can be observed under a given set of conditions.
Adaptive optics (AO) systems can be used to increase the signal-to-noise ratio (SNR) for an interferometric measurement, because they allow the use of a large-aperture telescope while ameliorating the negative effects of atmospheric wavefront errors on interferometric SNRs. In the same way, cophasing fringe trackers allow the use of long exposure times to improve the SNR of observations of faint objects.
Unfortunately, it is precisely on faint objects that the assumption of effective atmospheric correction by active systems such as AO and fringe trackers breaks down. These systems need a sufficient number of photons from a reference object to accurately sense the wavefront perturbations, and the reference object is typically the object under study itself. If the object being studied does not provide enough photons then the level of correction will be worse, degrading the SNR of interferometric measurements which is already low due to the faintness of the source.
The ‘raw’ data from an interferometer consist of the measurements of the fringe pattern plus auxiliary data required for calibration. These data need to be converted into calibrated power spectrum and bispectrum data, or coherently-averaged visibility data, for subsequent model-fitting and image reconstruction. The exact details of the data-reduction process vary between interferometric instruments, and typically software is provided for each instrument that can perform the major parts of the process. This chapter provides an outline of what is going on inside this software, in order to give an understanding of the processes involved and the rationale for choosing one process over another when analysing a given dataset.
Scientific inference
The data-reduction process is part of a larger process, which aims to gain some knowledge about the astronomical object under investigation based on measurement of fringe patterns, and it is helpful to consider the process as a whole to understand where data reduction fits in.
The process of gaining knowledge based on measurements is known as scientific inference. A conceptual model of scientific inference starts out with an existing state of knowledge. This can be cast in terms of a model of the object, which has a number of unknown parameters. An example model is a binary star system consisting of a pair of stars with unknown brightnesses and diameters for the constituent stars and an unknown separation between them.
A particular set of values for all the model parameters can be thought of as representing a single point in a multi-dimensional space known as the model space. For a particular point in model space, the set of fringe measurements that would be produced by a given interferometer represents a point in the data space of the instrument. This model of inference is shown diagrammatically in Figure 8.1.
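The mapping from model space to data space can be sketched in a few lines of Python, in the spirit of the NumPy/SciPy tools used elsewhere in this book. The fragment below is purely illustrative: the function names, parameter values and baselines are invented for the example and are not taken from any particular instrument or software package. It evaluates the complex visibilities of a binary-star model (two uniform discs) at a handful of spatial frequencies.

```python
import numpy as np
from scipy.special import j1

MAS = np.pi / (180 * 3600 * 1000)   # one milliarcsecond in radians

def disc_visibility(diameter, u, v):
    """Visibility of a uniform disc of angular diameter `diameter` (radians)
    at spatial frequencies u, v (cycles per radian)."""
    x = np.pi * diameter * np.hypot(u, v)
    x = np.where(x == 0, 1e-12, x)   # avoid 0/0; V -> 1 at zero frequency
    return 2 * j1(x) / x

def binary_visibility(u, v, flux_ratio, sep_ra, sep_dec, diam1, diam2):
    """Complex visibility of a binary: primary at the phase centre, secondary
    offset by (sep_ra, sep_dec) radians, with flux_ratio = F2/F1."""
    v1 = disc_visibility(diam1, u, v)
    v2 = disc_visibility(diam2, u, v) * np.exp(-2j * np.pi * (u * sep_ra + v * sep_dec))
    return (v1 + flux_ratio * v2) / (1.0 + flux_ratio)

# A single point in 'model space'...
params = dict(flux_ratio=0.3, sep_ra=5 * MAS, sep_dec=2 * MAS,
              diam1=1.0 * MAS, diam2=0.5 * MAS)
# ...mapped to a point in 'data space': visibilities on three baselines
u = np.array([1e7, 3e7, 6e7])        # spatial frequencies in cycles/radian
v = np.array([0.0, 2e7, 4e7])
print(np.abs(binary_visibility(u, v, **params)))
```

Fitting the model then amounts to searching model space for the parameter values whose predicted point in data space best matches the measured one.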
Optical interferometry uses the combination of light from multiple telescopes to allow imaging on angular scales much smaller than is possible with conventional single-telescope techniques. It is increasingly recognised as the only technique capable of answering some of the most fundamental scientific questions in astronomy, from the origin of planets to the nature of the physical environments of black holes.
Interferometry is an established technique at radio and millimetre wavelengths, with instruments such as the VLA and ALMA being the workhorses at these wavelengths. The development of interferometry in the optical (which we take here to include both visible and infrared wavelengths) has lagged behind that of radio interferometry due both to the extreme precision requirements imposed by the shorter wavelengths and to the severe effects of the Earth's atmosphere. For many years, the use of optical interferometry for scientific measurements was limited to the specialists who designed and built interferometric instruments.
At the beginning of the twenty-first century, the first ‘facility’ optical interferometers such as the VLTI, the CHARA array and the Keck Interferometer came online, with the aim of broadening the use of interferometry to the wider community of astronomers who could use it as a tool to do their science. As part of this expansion, organisations in Europe and the USA began to hold summer schools to provide an introduction to the theory and practice of interferometry to astronomers new to the topic. A number of times after giving lectures at these schools, I have had students come up to me wanting to find out more about some ‘well-known’ interferometric idea that I have mentioned in my talk. Often I have had to reply that there is no one place in the literature which provides this further information.
Interferometric observations need to be planned in advance, because observing time on an interferometer is a scarce resource. This planning is often carried out in the context of a competitive proposal scheme like that operated by most research telescopes. The typical process involved is that potential observers submit proposals to a ‘time-allocation committee’ or similar body, this body ranks the proposals and the most highly ranked proposals are given time.
The criteria for ranking the proposals can be somewhat subjective, but usually involve a combination of the technical feasibility of the proposed observations and the likely scientific value of the information that will result from them. In order to be highly ranked, proposals must address both of these aspects, so developing such a proposal requires expertise in two areas: familiarity with a relevant science field to understand where the gaps in knowledge are and an understanding of the ways in which different kinds of interferometric observation can provide critical information to fill these gaps.
This chapter looks at the interaction between the scientific and technical aspects of an interferometric observing proposal, with the aim of highlighting the most important areas to consider. The details of writing a competitive astronomy proposal are beyond the scope of this book, but an online search for ‘how to write an observing proposal’ turns up many useful links on the topic.
Example proposal
The material in this chapter will be illustrated in part through an example proposal for the VLTI, which was awarded observing time in August and September 2012. The science background to the proposal is that of Mira variables, named after the prototype star of the class, otherwise known as Omicron Ceti. The name Mira is Latin for ‘wonderful’ or ‘astonishing’, a reference to its regular appearance and disappearance from the visible sky; pulsations in the star cause large changes in brightness with periods of a few hundred days.
In a rapidly changing field such as interferometry, a static item like a book quickly goes out of date. To supplement the material in this book, some material has been placed online and can be accessed at the URL http://www.cambridge.org/9781107042179.
The online material includes links to other online material such as websites for the major interferometry projects and sites providing interferometry software.
It also includes additional tools based on the material presented in the book. Although many of the figures in this book have been taken from the literature, many had to be prepared from scratch or reworked into a format suitable for presentation in this book. The graphs were prepared using the programming language Python, using its numerical libraries NumPy and SciPy, together with the Matplotlib plotting library. The source code for these plotting programs, together with the graphical results, has been made available as part of the online material.
It is hoped that these will form a valuable technical and educational resource in themselves. In some cases, readers may want to know the exact values that were used in a given graph. Instead of taking a ruler to the graph, they can print out the values with a small modification to the relevant plotting program. In other cases, the reader may want to compare the results from different graphs and, as the software has been written in a relatively modular fashion, this should be straightforward. Access to the source code of the programs should also allow the techniques used in the preparation of the data to be studied in detail and employed elsewhere.
Included in this software is a simple interferometer simulation framework. This includes an atmospheric wavefront perturbation generator of the type described in Section 3.5.2, a means of correcting the perturbations with a simple adaptive optics system and a means of measuring the fringe parameters of spatially filtered and unfiltered fringes. This could be extended in many ways to derive new results and I encourage people to experiment with it.
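As a flavour of what such a framework involves, the short Python sketch below generates an atmospheric phase screen with an approximately Kolmogorov spectrum by Fourier filtering of white noise. It is not the code from the online material, merely an illustration of the kind of wavefront perturbation generator referred to above; the function name and default parameter values are invented for the example, and the normalisation follows the standard FFT filtering recipe, which under-represents the lowest spatial frequencies.

```python
import numpy as np

def kolmogorov_phase_screen(n=256, dx=0.02, r0=0.15, seed=0):
    """Return an n x n phase screen (in radians) with an approximately
    Kolmogorov spectrum. dx is the pixel size and r0 the Fried parameter,
    both in metres."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                      # spatial-frequency grid spacing
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = df                             # avoid division by zero below
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # rad^2 m^2
    psd[0, 0] = 0.0                          # remove the undefined piston term
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # Each frequency cell contributes variance ~ psd * df^2; the n*n factor
    # undoes the 1/n^2 normalisation of numpy's inverse FFT.
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real

phi = kolmogorov_phase_screen()
print('rms phase over the screen: %.2f rad' % phi.std())
```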
Previous chapters have assumed a rather generalised and abstract interferometer. This chapter looks at how the functionality of this abstract interferometer is implemented in reality. This exposition will make use of examples from existing interferometers, with the aim of giving an idea of the diversity and ingenuity of the implementations of this functionality.
Interferometric facilities
The following is a brief summary of the interferometric facilities which were operational at the time of writing or expected to be operational within the next few years. The systems are listed in order of the date (or expected date) of ‘first fringes’ on each of these interferometers. More information can be found in the online supplementary material (see Appendix B).
Aperture-masking instruments Masking the aperture of a single telescope to convert it into an interferometer was used in the very earliest days of interferometry, and yet it is still a competitive technique for many astronomical measurements (Tuthill, 2012). Because the implementation challenges for aperture masking are in some ways different to those for separated-element interferometry, discussion of the practical features of this technique is deferred until Section 4.10.
SUSI The Sydney University Stellar Interferometer (Davis et al., 1999) is sited near to the radio telescopes of the Australia Array in Narrabri, Australia. It operates at visible wavelengths and has baselines ranging from 5m to 640m (currently only baselines up to 80m have been commissioned).
NPOI The Navy Precision Optical Interferometer (formerly the Navy Prototype Optical Interferometer) (Armstrong et al., 1998) is sited on the Lowell Observatory Anderson Mesa Station in Arizona, USA. It operates at visible wavelengths and is capable of performing wide-angle astrometric measurements as well as interferometric imaging. It has baselines from 2m to 437m (at the time of writing only baselines from 8.8m to 79m have been commissioned).