The construction of large ground-based optical and infrared telescopes is driven by the desire to obtain astronomical measurements of both higher sensitivity and higher angular resolution. With each increase in telescope diameter the former goal, that of increased sensitivity, has been achieved. In contrast, the angular resolution of large telescopes (D > 1 m), using traditional imaging, is limited not by the diffraction limit (θ ∼ λ/D), but rather by turbulence in the atmosphere. This atmospheric limit is typically 1″, a factor of 10–20 times worse than the theoretical limit of a 4-meter telescope at near-infrared wavelengths. This angular-resolution handicap has led to both space-based and ground-based solutions. With the launch of the Hubble Space Telescope (HST), a 2.4-m telescope equipped with both optical and infrared detectors, the astronomical community has obtained diffraction-limited images. These optical images, which have an angular resolution of ~0.1″, have led to exciting new discoveries, such as the detection of a black hole in M87 (Ford et al. 1994) and protostellar disks around young stars in Orion (O'Dell et al. 1993, O'Dell and Wen 1994). However, HST has a modest-sized mirror diameter compared to the 8–10 meter mirror diameters of the largest ground-based telescope facilities.
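The factor of 10–20 quoted above can be checked with a back-of-the-envelope calculation, sketched here in Python using the order-of-magnitude relation θ ∼ λ/D; the band wavelengths (J band ~1.25 μm, K band ~2.2 μm) are illustrative values, not taken from the text.

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206 265 arcsec per radian

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Order-of-magnitude diffraction limit, theta ~ lambda/D, in arcsec."""
    return wavelength_m / diameter_m * ARCSEC_PER_RAD

# A 4-m telescope at near-infrared wavelengths:
for lam in (1.25e-6, 2.2e-6):
    theta = diffraction_limit_arcsec(lam, 4.0)
    print(f"lambda = {lam * 1e6:.2f} um -> theta ~ {theta:.3f} arcsec, "
          f"1-arcsec seeing is ~{1.0 / theta:.0f}x worse")
```

Across the near-infrared this gives roughly a factor of 9–16, consistent with the 10–20 range stated above.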
With the development of techniques to overcome the wavefront distortions introduced by the Earth's atmosphere, diffraction-limited observations from the ground have become possible. These techniques cover a wide range of complexity and hence expense. Speckle imaging, which provided the earliest and simplest approach, is described in Sections 10.1 and 23.3.1, and adaptive optics, which has more recently become scientifically productive and which is a much more powerful technique, is discussed in Section 10.2.
The launch of the Hipparcos satellite in 1989 and the Hubble Space Telescope in 1990 revolutionized astrometry. By no means does this imply that little progress had been made in the ground-based techniques used exclusively until then. On the contrary, the 1960s to 1980s saw an intense development of new or highly improved instruments, including photoelectric meridian circles, automated plate measuring machines, and the use of charge-coupled device (CCD) detectors for small-field differential astrometry (for a review of optical astrometry at the time, see Monet 1988). In the radio domain, very long baseline interferometry (VLBI) astrometry already provided an extragalactic reference frame accurate to about 1 milliarcsecond (mas) (Ma et al. 1990). Spectacular improvements were made in terms of accuracy, the faintness of the observed objects, and their numbers. However, there was a widening gulf between small-angle astrometry, where differential techniques could overcome atmospheric effects down to below 1 mas, and large-angle astrometry, where conventional instruments such as meridian circles seemed to have hit a barrier in the underlying systematic errors at about 100 mas. Though very precise, the small-angle measurements were of limited use for the determination of positions and proper motions, due to the lack of suitable reference objects in the small fields, and even for parallaxes the necessary correction for the mean parallax of background stars was highly non-trivial. Linking the optical observations to the accurate VLBI frame also proved extremely difficult.
The Earth's atmosphere imposes several limitations on our ability to perform astrometric measurements from the ground in both the optical and radio regions of the spectrum. First, we are limited to wavelengths where the absorption is not too great, i.e. the broad optical region from the ultraviolet to the near-infrared, scattered regions in the infrared, and broad regions at radio wavelengths. The fundamental limitations imposed by the atmosphere are different in the optical and radio, and in this chapter we will deal with those important for the optical; the radio part is largely dealt with in Chapter 12 on radio interferometry, except for a summary given here on the precision limitations imposed by the atmosphere. The second problem created by observing through the atmosphere is refraction of the light waves as they pass through different levels of the atmosphere. If it were only refraction through a stable medium, the problem would be very simple; however, the atmosphere is a turbulent medium that causes variations in the amount of refraction both spatially and temporally, and it therefore limits the precision and accuracy of our observations. In this chapter we will deal with both effects using the developments in Schroeder (1987, 2000) as the basic reference.
Refraction through a plane-parallel atmosphere
When we are dealing with relative positions in fields of view less than several degrees, it is sufficient to adopt a plane-parallel atmosphere for the model. In cases where we need to consider the total displacement, such as with meridian circles, it is necessary to adopt atmospheric models that are substantially more complicated, such as those developed by Garfinkel (1967) and Auer and Standish (2000). For the purposes of this chapter we can safely use the plane-parallel atmosphere.
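For a plane-parallel atmosphere the standard first-order result is R ≈ (n₀ − 1) tan z, where n₀ is the refractive index of air at the observer and z is the zenith angle. A minimal sketch follows; the sea-level optical refractivity of ~2.8 × 10⁻⁴ is a typical illustrative value, not a figure from this chapter.

```python
import math

N0_MINUS_1 = 2.82e-4  # assumed typical optical refractivity of air at sea level

def refraction_arcsec(zenith_angle_deg, refractivity=N0_MINUS_1):
    """First-order plane-parallel refraction, R ~ (n0 - 1) tan z, in arcsec."""
    z = math.radians(zenith_angle_deg)
    return refractivity * math.tan(z) * 180.0 / math.pi * 3600.0

for z in (10, 30, 45, 60):
    print(f"z = {z:2d} deg -> R ~ {refraction_arcsec(z):5.1f} arcsec")
```

The familiar rule of thumb, roughly 58″ of refraction at z = 45°, drops out directly; the tan z scaling is why large-angle instruments are so sensitive to refraction modeling errors.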
Time-delay integration (TDI), also known as drift scanning, is a mode of reading out a charge-coupled device (CCD) camera that allows a continuous image or scan of the sky to be recorded. Normally, most astronomers use CCDs in the point-and-shoot or stare mode. A telescope is pointed to a particular position of interest on the sky and made to track at that position. A shutter is opened to expose the CCD, and then closed while the electronic exposure recorded by the CCD is read out. In drift-scan mode, the telescope is parked, tracking is turned off, and the camera shutter is held open. As the sky drifts across the field, the electronic exposure recorded by the CCD is shifted across the pixel array, row by row, to match the drift rate of the sky. The time it takes a source in the field to drift across the whole array is the exposure time of the scan. Since the readout is continuous, this is the most time-efficient way to survey large areas of sky. There is no pause between exposures to wait for the readout of the camera. The largest-area photometric surveys to date have been made with drift-scanning cameras. Smaller-scale surveys have used drift scans for astrometry of faint standards, suitable for the re-calibration of the relatively imprecise positions of the large photometric and Schmidt-plate catalogs.
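The row-clocking arithmetic described above is easy to sketch: rows must be shifted at the sidereal drift rate (about 15.04″ s⁻¹ at the celestial equator, scaled by cos δ), and the exposure time is the time a source takes to cross the full array. The detector format below is purely illustrative.

```python
import math

SIDEREAL_RATE_ARCSEC_S = 15.041  # sidereal drift rate on the celestial equator

def tdi_parameters(n_rows, pixel_scale_arcsec, dec_deg):
    """Row-shift period and total exposure time for a drift scan (sketch)."""
    drift = SIDEREAL_RATE_ARCSEC_S * math.cos(math.radians(dec_deg))
    row_period_s = pixel_scale_arcsec / drift  # clock one row every this often
    exposure_s = n_rows * row_period_s         # time to cross the whole array
    return row_period_s, exposure_s

# A hypothetical 2048-row CCD at 0.5 arcsec/pixel, scanning on the equator:
row_period, exposure = tdi_parameters(2048, 0.5, 0.0)
print(f"shift a row every {row_period:.4f} s; exposure = {exposure:.1f} s")
```

Note that the effective exposure time is fixed by the array length and declination, not chosen freely; away from the equator the slower drift lengthens the exposure by 1/cos δ.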
Charge-coupled device cameras were first used in drift-scan mode on ground-based telescopes beginning in the 1980s with the CCD Transit Instrument (McGraw et al. 1980) and the Spacewatch Telescope (Gehrels et al. 1986). These early instruments consisted of CCDs with field sizes of ∼10 arcmin, capable of covering 10–20 square degrees in a single scan. The Spacewatch camera was the first to use a CCD for automated detection of near-Earth asteroids (Rabinowitz 1991). The CCD Transit Instrument had two CCDs aligned east–west allowing simultaneous scans of the same field in two different passbands. Advancements in computer speed and capacity have since allowed the construction of much larger scanning cameras made up of CCD mosaics.
Astrometry is the branch of astronomy that studies the positions and motions of celestial objects in the Solar System, in the Milky Way Galaxy, and in galaxies near the limit of the observable Universe. In radio astronomy, the study can be separated into micro-astrometry and macro-astrometry. Micro-astrometry deals with the motion of individual objects, or of a small number of associated objects, in order to determine their space motion, their distance from the Solar System, and their kinematic properties with respect to neighboring stars and planets. The observational techniques and reductions measure the separation of the target object from a nearby calibrator radio source with known and stable properties.
Macro-astrometry, on the other hand, deals with the absolute positions of radio sources, which also requires the determination of the Earth's deformations, complex rotations, and space motion. This type of astrometric experiment observes many well-known compact radio sources over the sky within a 24-hour period. From the analysis of systematic residuals in the data, the absolute positions of the sources, as well as the astrometric and geodetic properties of the Earth, are determined. From this 30+ year effort, the fundamental celestial inertial frame has been defined to an accuracy of about 0.01 milliarcsec (mas) using the positions of nearly 300 radio sources.
For nearly 30 years, the highest astrometric precision has been obtained using radio-interferometric techniques because of several properties of radio waves. First, astronomers and engineers have been able to connect arrays of radio telescopes that span the Earth (even into Earth orbit) to achieve resolutions of a few mas and obtain positional accuracies well under 1 mas.
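The resolutions quoted here follow from the usual fringe-spacing estimate θ ∼ λ/B for a baseline of length B. A small illustration for an Earth-diameter baseline; the two wavelengths are typical centimeter-band choices, assumed for illustration.

```python
import math

MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # milliarcsec per radian

def vlbi_resolution_mas(wavelength_m, baseline_m):
    """Interferometer fringe spacing, theta ~ lambda/B, in milliarcsec."""
    return wavelength_m / baseline_m * MAS_PER_RAD

# Earth-diameter baseline (~12 700 km) at two centimeter wavelengths:
for lam_cm in (13.0, 3.6):
    theta = vlbi_resolution_mas(lam_cm / 100.0, 1.27e7)
    print(f"lambda = {lam_cm:4.1f} cm -> theta ~ {theta:.2f} mas")
```

Fringe spacings of a few mas or less emerge immediately, and positional accuracies can be a small fraction of the fringe spacing when the signal-to-noise ratio is high.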
To describe the Milky Way Galaxy, it is convenient to divide the entire collection of Galactic stars into components with broadly consistent properties (e.g. age, chemical composition, kinematics). The main idea behind this division is that those components represent stars of a loosely common origin and, thus, are an important means to understanding the formation and evolution of galaxies. Thus, the Milky Way can be “partitioned” into the four main stellar populations: the young thin and the older thick disks, the old and metal-poor halo, and the old and metal-rich bulge. Each of these populations has its own characteristics such as the size, shape, stellar density distribution, and the internal velocity distribution. The latter can be as low as ~15 km/s in one coordinate (for the thin disk) and as high as ~100 km/s (for the halo). It is important to realize that along any direction in our Galaxy, there is always a juxtaposition of these populations, which can be described only in a statistical sense.
Star clusters are another distinct population of objects permeating the entire Galaxy. Our Milky Way Galaxy hosts a large number of recognized open (~1800) and globular (~160) clusters that are extremely valuable tracers of the four main Galactic populations. These stellar systems are gravitationally bound, and this property, along with virtually no dispersion in metallicity and age within an individual system, sets them apart from the other Galactic stellar populations.
By
Tsvi Piran, Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel,
Re'em Sari, Racah Institute of Physics, The Hebrew University, Jerusalem 91904, Israel,
Robert Mochkovitch, Institut d'Astrophysique de Paris, UMR 7095 Université Pierre et Marie Curie Paris 6, CNRS, 98bis Boulevard Arago, Paris, 75014
In the mid-sixties the Universe was thought to be quite bland, so the first instruments that detected gamma-ray bursts (GRBs) were intended for a very different application. Through a fortuitous similarity with the techniques needed to detect nuclear explosions, the Vela satellites had the ability to recognize that a GRB is occurring, provide enhanced telemetry for the burst, and locate it (Chapter 1; Klebesadel et al. 1973). These characteristics have been necessary for virtually all subsequent GRB instrumentation. Instrumentation for all other high-energy astrophysics applications knows what direction to look in and when. The randomness of GRBs in space and time requires specialized adaptations of the standard gamma-ray detectors used on the ground (scintillators, proportional counters, solid-state detectors, charge-coupled devices (CCDs)). In the following sections, we will discuss the principles and strategies used in GRB instrumentation in four areas that dominate a design: spectral techniques, temporal techniques, determining a direction to the burst, and networks of multiple-wavelength sensors.
Spectral techniques
The early workhorses of GRB instrumentation were gamma-ray scintillators such as NaI and CsI used in, for example, the Konus experiments (Mazets et al. 1981), the Pioneer Venus Orbiter (PVO; Klebesadel et al. 1980), the International Sun–Earth Explorer (ISEE-3; Anderson et al. 1978), the French-Soviet Venera satellites (Barat et al. 1981) and the Solar Maximum Mission (SMM; Forest et al. 1980). Hurley (1984) has a detailed review of the early instrumentation. Gamma rays interact within a crystal, producing an optical signal that is processed by a photomultiplier.
Like the phenomenon it describes, the book before you has followed a long, winding path. The first discussions of putting it all together took place around 1995, when the gamma-ray burst (GRB) field started to mushroom. Not quite a score of years later, we have still not caught up. In the last forty years, GRBs have engulfed the entire electromagnetic spectrum and encroached on supernovae, extragalactic astronomy, nucleosynthesis in the cosmos, observational cosmology, and multi-messenger astronomy. As their perspective zoomed out, the once lone observers banded into large collaborations, bringing together a huge arsenal of ground- and space-based observatories. GRB-designed missions were flown, adding large volumes of data. Observers are now faced with an embarrassment of riches and the reality of an everlasting mystery. Theories followed observations, originally dealing with the big picture, then gradually focusing on smaller pieces of the GRB puzzle, as – sometimes controversial – evidence gathered. It is still disputable what are the facts, the maybes, and the unknowns in GRBs.
GRBs exploded onto the astrophysical scene in 1973, when the discovery of a new, strange phenomenon was announced: brief, intense flashes of gamma rays that, for the most part, occur in unpredictable places in the sky at unpredictable times and are never seen again. It had taken the discoverers about half a decade of labor to gather up enough data from the Vela satellites to convince themselves that the phenomenon was indeed astrophysical.
One consequence of observing from a moving platform is that all objects exhibit parallax. The measurement of parallax yields distance, a quantity useful in astrophysics. In particular, with distance we can determine the absolute magnitude of any object, a primary parameter in two of the most useful “maps” in astronomy: the Hertzsprung–Russell diagram (e.g. Perryman et al. 1997, Fig. 3), showing the relation between absolute magnitude (luminosity) and color (temperature); and the mass–luminosity relation (e.g. Henry 2004, Fig. 3), a tool for turning luminosity into mass, a stellar attribute which determines the past and future aging process for any star. Another example of the utility of absolute magnitudes is the Cepheid period–luminosity relation (PLR). The example used here to illustrate parallax determination had the improvement of that relationship as its ultimate goal.
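The chain from parallax to absolute magnitude follows directly from d = 1/π and M = m − 5 log₁₀(d/10 pc). A minimal sketch; the star's numbers are hypothetical, and interstellar extinction is ignored.

```python
import math

def distance_pc(parallax_mas):
    """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
    return 1000.0 / parallax_mas

def absolute_magnitude(apparent_mag, parallax_mas):
    """M = m - 5 log10(d / 10 pc), neglecting extinction (assumption)."""
    return apparent_mag - 5.0 * math.log10(distance_pc(parallax_mas) / 10.0)

# A hypothetical star with m = 8.0 and a 25 mas parallax:
print(distance_pc(25.0))             # 40.0 pc
print(absolute_magnitude(8.0, 25.0)) # ~4.99
```

The logarithmic dependence is why parallax errors translate so directly into luminosity errors: a 10% parallax error alone produces roughly a 0.2-magnitude uncertainty in M.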
The technology used to generate parallaxes has proceeded from naked-eye measurements with mechanical micrometers (Bessel 1838), through hand measurements of photographic plates (Booth and Schlesinger 1922), through computer-controlled plate scanners (Auer and van Altena 1978), through computer-controlled CCD cameras (Henry et al. 2006, Harris et al. 2007), through the triumph of the Hipparcos astrometric satellite (Perryman et al. 1997), to space-borne optical interferometers (Benedict et al. 2007, 2009) and extremely long baseline radio interferometers (Reid et al. 2009). Each stage of this historical sequence is characterized by improvements in both the centering of the images of the target and reference stars and the mathematical challenge in distilling the final parallax from those centers.
As is evident from the preceding chapters, the field of astrometry requires the precise positions of stars and other celestial objects relative to one another within a chosen reference frame. In the old days, images of celestial objects were recorded on photographic plates affixed in the focal planes of telescopes, often long-focus refractors, with image positions subsequently determined with plate-measuring machines. In recent years, practically all image recording has been done with CCD and CMOS electronic detectors (see Chapter 14) instead of photographic plates, with positions determined by computer analysis and sophisticated software.
Whatever the recording medium, the telescope is an instrument to transform the positions of objects on the sky to recorded images on the detector. That is, the telescope is simply a device to gather light and image distant object space on to the detector in the telescope focal surface. This re-imaging process is usually not perfect because of aberrations associated with the telescope. In this chapter we consider the types of optical aberrations and their effects on astrometric analyses.
For the purposes of astrometry the transformation from object space to image space should be done without distortion, though in practice this is rarely the case. By definition, transformation without distortion means that the pattern of multiple objects recorded in image space is identical and in one-to-one correspondence with the pattern of these objects on the sky. Throughout this chapter the term distortion will refer to optical field-angle distortion, or OFAD.
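To lowest order, OFAD is often modeled as a cubic radial term, r′ = r(1 + d₃ r²). The sketch below illustrates that form only; the coefficient and field angles are purely illustrative, and real distortion solutions typically carry higher-order and decentering terms as well.

```python
def apply_ofad(x, y, d3):
    """Third-order radial distortion: each field angle is scaled by 1 + d3*r^2.

    x, y -- undistorted field angles (e.g. in degrees)
    d3   -- illustrative distortion coefficient (positive gives pincushion)
    """
    r2 = x * x + y * y
    scale = 1.0 + d3 * r2
    return x * scale, y * scale

# A point at the field center is unmoved; an off-axis point is displaced radially.
print(apply_ofad(0.0, 0.0, 0.5))  # (0.0, 0.0)
print(apply_ofad(0.2, 0.1, 0.5))
```

Because the displacement is purely radial and grows as r³, a pattern of objects near the field center is nearly undistorted while the pattern at the field edge is stretched or compressed, breaking the one-to-one correspondence defined above.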
The preceding fourteen chapters have been written at a good time to take stock of the field of gamma-ray bursts (GRBs). The extraordinary discoveries made over the last decade or so about a phenomenon that has been around for over four decades seem to have attained a mature state. Thousands of bursts have been observed, classified and followed up, and it is now the special and rare cases, which are extreme by some important measure, that are most likely to advance our understanding as radically new gamma-ray and X-ray observing capabilities are at least a decade away. On the theoretical front, some prescient inferences have been vindicated, phenomenological models that are usable by observers have been developed, and simulation has made great strides. The greatest challenge is to explore the underlying physical processes in much more detail and this is likely to require a new generation of high-performance computers. Nonetheless, the GRB pace of discovery, like much of contemporary astrophysics, will likely exceed that in most other subfields of physical science.
I was asked to write a critique of where we are today and what I think will be the major developments going forward. My qualifications for this task are not promising. I have probably contributed most to the study of a high-energy gamma-ray stellar phenomenon unintentionally in the context of trying to explain variability of the lowest frequency radio emission from active galaxies, and my largest attempt to work on what I thought was relevant turned out to be only applicable, at best, to X-ray bursting neutron stars.