As is evident from the preceding chapters, the field of astrometry requires the precise positions of stars and other celestial objects relative to one another within a chosen reference frame. In the old days, images of celestial objects were recorded on photographic plates affixed in the focal planes of telescopes, often long-focus refractors, with image positions subsequently determined with plate-measuring machines. In recent years, practically all image recording has been done with CCD and CMOS electronic detectors (see Chapter 14) instead of photographic plates, with positions determined by computer analysis and sophisticated software.
Whatever the recording medium, the telescope is an instrument to transform the positions of objects on the sky to recorded images on the detector. That is, the telescope is simply a device to gather light and image distant object space on to the detector in the telescope focal surface. This re-imaging process is usually not perfect because of aberrations associated with the telescope. In this chapter we consider the types of optical aberrations and their effects on astrometric analyses.
For the purposes of astrometry the transformation from object space to image space should be done without distortion, though in practice this is rarely the case. By definition, transformation without distortion means that the pattern of multiple objects recorded in image space is identical and in one-to-one correspondence with the pattern of these objects on the sky. Throughout this chapter the term distortion will refer to optical field-angle distortion, or OFAD.
The preceding fourteen chapters have been written at a good time to take stock of the field of gamma-ray bursts (GRBs). The extraordinary discoveries made over the last decade or so about a phenomenon that has been around for over four decades seem to have attained a mature state. Thousands of bursts have been observed, classified and followed up, and it is now the special and rare cases, which are extreme by some important measure, that are most likely to advance our understanding as radically new gamma-ray and X-ray observing capabilities are at least a decade away. On the theoretical front, some prescient inferences have been vindicated, phenomenological models that are usable by observers have been developed, and simulation has made great strides. The greatest challenge is to explore the underlying physical processes in much more detail and this is likely to require a new generation of high-performance computers. Nonetheless, the pace of discovery in GRB research, as in much of contemporary astrophysics, will likely continue to exceed that of most other subfields of physical science.
I was asked to write a critique of where we are today and what I think will be the major developments going forward. My qualifications for this task are not promising. I have probably contributed most to the study of a high-energy gamma-ray stellar phenomenon unintentionally in the context of trying to explain variability of the lowest frequency radio emission from active galaxies, and my largest attempt to work on what I thought was relevant turned out to be only applicable, at best, to X-ray bursting neutron stars.
Chapter 1 surveys the opportunities and challenges for astrometry in the twenty-first century (van Altena 2008) while Chapter 2 discusses space satellites primarily designed for astrometry. We now review the situation for ground-based astrometry, since it is often mistakenly stated that there is no longer any need to pursue ground-based research once satellites are operating. It is certainly true that the levels of precision and accuracy projected for Gaia and others are far beyond what can be achieved from the ground. However, there are also consequences of the fairly small aperture size and short flight durations that impose constraints on the limiting magnitudes and our ability to study long-term perturbations. In this chapter we will explore those areas of research using astrometric techniques that will be able to make important contributions to our understanding of the Universe in the coming years, even with high-accuracy satellites such as Gaia operating.
Radio astrometry
It is likely that radio astrometry observations (see Chapter 12 for a detailed discussion of radio astrometry and interferometry) will continue to be made primarily from the ground due to the difficulty and cost of launching large objects into space. Since the diffraction limit of a telescope is inversely proportional to the wavelength and radio wavelengths are about 10³ to 10⁵ times longer than those of visible light, no high-resolution imaging or high-precision angular measures can be performed with a single radio telescope.
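The scale of the problem follows from the Rayleigh criterion, θ ≈ 1.22 λ/D. A minimal sketch comparing a single radio dish with a modest optical telescope (the 100-m and 1-m apertures and the observing wavelengths are illustrative choices, not values from the text):

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion theta ~ 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# A 100-m single dish observing the 21-cm hydrogen line ...
radio = diffraction_limit_arcsec(0.21, 100.0)
# ... versus a 1-m optical telescope at 550 nm.
optical = diffraction_limit_arcsec(550e-9, 1.0)
# The single dish is thousands of times coarser, hence interferometric baselines.
```

With these numbers the 100-m dish resolves only about 9 arcminutes, while the 1-m optical telescope reaches roughly 0.14 arcsec, which is why radio astrometry relies on interferometric baselines rather than single apertures.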
Astrometry entered a new era with the advent of microarcsecond positions, parallaxes, and proper motions. Cutting-edge topics in science are now being addressed that were far beyond our grasp only a few years ago. It will soon be possible to determine definitive distances to Cepheid variables, the center of our Galaxy, the Magellanic Clouds and other Local Group members. We will measure the orbital parameters of dwarf galaxies and stellar streams that are merging with the Milky Way, define the kinematics, dynamics, and structure of our Galaxy and search for evidence of the dark matter that constitutes most of the Universe's mass. Stellar masses will be determined routinely to 1% accuracy and we will be able to make full orbit solutions and mass determinations for extrasolar planetary systems. If we are to take advantage of microarcsecond astrometry, we need to reformulate our study of reference frames, systems, and the equations of motion in the context of special and general relativity. Methods need to be developed to statistically analyze our data and calibrate our instruments to levels beyond current standards. As a consequence, our curricula must be drastically revised to meet the needs of students in the twenty-first century.
In October 2007, IAU Symposium 248 “A Giant Step: From Milli- to Micro-arcsecond Astrometry” was held in Shanghai, China. Approximately 200 astronomers attended and presented an array of outstanding talks. I was asked to present a talk on the educational needs of students who might wish to study astrometry in the era of microarcsecond astrometry and to organize a round-table discussion on the topic. In the process of preparing my talk and organizing the session, I realized that I had a unique opportunity to bring together the experts on virtually all topics that might be covered in an introductory text on astrometry. This book is the result of the advice from many individuals in the worldwide astrometric community, most of whom were present at that Shanghai meeting and in particular the 28 authors who wrote the 28 chapters.
By
Peter Mészáros, Department of Physics and Center for Particle Astrophysics, 525 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA,
Ralph A. M. J. Wijers, Astronomical Institute Anton Pannekoek, University of Amsterdam, Science Park 904, Amsterdam, The Netherlands
As we have seen in the previous chapters, observational evidence combined with elementary theoretical considerations leads to the view that gamma-ray bursts (GRBs) and their afterglows result from dissipation of energy from an ultrarelativistic flow, which in turn is generated by a catastrophic event that injects a supernova-like amount of energy into a small volume. As discussed in Chapter 7, this dissipation can be both internal (when radial or angular differences in motion lead to heat production and subsequent radiation) and external (when the outflow interacts with its environment). The prevailing view attributes the prompt gamma-ray emission to the internal dissipation, and the afterglow to external dissipation, but the phenomena may overlap in time. A case in point is the so-called reverse-shock emission seen in optical wavelengths, during and soon after the prompt emission, and in radio wavelengths within the first days (Chapters 6 and 7), which is best explained as due to the reverse shock propagating back into the ejecta when they decelerate onto the external mass. The present chapter will deal with the physics of the external interaction of the outflow, regardless of how soon after the burst onset we see its emission.
There are some basic assumptions we make about the physics that dominates the behavior of our system. First, we will here treat only the spherically symmetric case. This is generally not correct, because GRBs are known to be highly collimated.
By
Jonathan Granot, 31 Rupin Street, Rehovot 76345, Israel,
Enrico Ramirez-Ruiz, Astronomy and Astrophysics Department, University of California, Santa Cruz, CA 95064, USA
Evidence for bulk relativistic motion in gamma-ray bursts
The first line of evidence for ultrarelativistic bulk motion of the outflows that produce GRBs arises from the compactness argument. It relies on the observed short and intense pulses of gamma rays and their non-thermal energy spectrum, which often extends up to high photon energies. Together, these facts imply that the emitting region must be moving relativistically. In order to understand this better, let us first consider a source that is either at rest or moves at a Newtonian velocity, β ≡ v/c ≪ 1, corresponding to a bulk Lorentz factor Γ ≡ (1 − β²)^(−1/2) ≈ 1. For such a source the observed variability timescale (e.g., the width of the observed pulses) Δt implies a typical source size or radius R < cΔt, due to light travel time effects (for simplicity we ignore here cosmological effects, such as redshift or time dilation). GRBs often show significant variability down to millisecond timescales, implying R < 3 × 10⁷ (Δt/1 ms) cm. At cosmological distances their isotropic equivalent luminosity, L, is typically in the range of 10⁵⁰–10⁵³ erg s⁻¹. In addition, the (observed part of the) εF_ε GRB spectrum typically peaks around a dimensionless photon energy of ε ≡ E_ph/m_e c² ~ 1, so that (for a Newtonian source) a good fraction of the total radiated energy is carried by photons that can pair produce with other photons of similar energy. (F is the radiative flux and F_ε ≡ dF/dε.)
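The numbers behind the causality bound are easy to verify; a minimal sketch (the function names are ours, not from the text):

```python
import math

C_CM_S = 2.998e10  # speed of light in cm/s

def max_source_size_cm(dt_s):
    """Causality bound R < c * dt for a source varying on timescale dt."""
    return C_CM_S * dt_s

def lorentz_factor(beta):
    """Bulk Lorentz factor Gamma = (1 - beta^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - beta**2)

R_max = max_source_size_cm(1e-3)        # 1-ms variability: roughly 3e7 cm
gamma_newtonian = lorentz_factor(0.01)  # a Newtonian source has Gamma ~ 1
```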
A description of the discovery of gamma-ray bursts (GRBs) must necessarily begin with a description of the Vela Hotel satellite system, which, because of its unique capabilities, was responsible for the first confident detection of a GRB (see also Bonnell & Klebesadel (1996) for an earlier description of the discovery). In the course of international discussions toward a nuclear test ban treaty in the late 1950s it became apparent that clandestine nuclear tests could be performed beyond the Earth's atmosphere in order to avoid detection. Because of negotiations being conducted toward the treaty and concerns over the possibility of exoatmospheric testing, the Los Alamos National Laboratory (named the Los Alamos Scientific Laboratory at the time) was charged in 1959 with development of a satellite-borne system for the detection of nuclear devices detonated in space. In these early days of space experimentation the program provided planning for a total of five launches, each placing a pair of satellites in orbit, in order to assure success. The first of these launches was conducted in 1963. In fact, the program was successful beyond the most hopeful expectations, so much so that the spare hardware was assembled for a sixth launch, in 1970.
The satellites were launched in pairs and placed into a common circular orbit at a radius of 120 000 km. This orbit provided a very benign environment for instrumentation designed to detect the radiation signature of a nuclear detonation performed in the near vacuum of space.
Most of the progress in the gamma-ray burst (GRB) field over the last decade and prior to the launch of the Fermi Gamma-ray Space Telescope (FGST; Fermi henceforth) occurred in our understanding of the GRB afterglow emission and its surroundings. Classical observational astronomy, from radio to X-rays, played a vital role in this progress as it allowed the identification of GRB counterparts by drastically improving the position accuracy of the bursters down to the sub-arcsec level. Once the afterglows were identified, the full power of optical and near-infrared instrumentation came into play. This resulted in an overwhelming diversity of observational results and, consequently, in a much improved understanding of the properties of the relativistic outflows, their interaction with the circumsource medium, and the surrounding interstellar medium (ISM) and host galaxies. Here we describe the basic multiwavelength observational properties of afterglows, of both long- and short-duration GRBs, obtained with space- (Table 6.1) and ground-based instruments. The present sample consists of ~550 X-ray and ~350 optical afterglows (see http://www.mpe.mpg.de/~jcg/grbgen.html).
Early searches for transient optical emission
Over the first two decades after the discovery of GRBs (until 1996), GRB localizations were either delayed but accurate (i.e., arcmin localizations on timescales of days, provided by the Interplanetary Network (IPN); Hurley et al. 1999), or rapid but rough (i.e., minutes after the GRB trigger, but with at least 2° error circles, provided by the BATSE Coordinate Distribution Network system (BACODINE); Barthelmy et al. 1994, 1996).
Space observatories, such as Hubble Space Telescope (HST), Spitzer, and Kepler, offer unique opportunities and challenges for astrometry. The observing platform above the atmosphere allows us to reach the diffraction limit with a stable point-spread function (PSF). The downside is that there are limits to how much data can be downloaded per day, which in turn limits the number of pixels each detector can have. This imposes a natural compromise between the sampling of the detector and its field of view, and in order to have a reasonable field of view the camera designers often tolerate moderate undersampling in the detector pixels.
A detector is considered to be well sampled when the full width at half maximum (FWHM) of a point source is at least 2 pixels. If an image is well sampled, then the pixels in the image constitute a complete representation of the scene. We can interpolate between the image pixels using sinc-type functions and recover the same result that we would get if the pixels were smaller. This is because nothing in the scene that reaches the detector can have finer detail than the PSF. On the other hand, if the image is undersampled, then there can be structure in the astronomical scene that changes at too high a spatial frequency for the array of pixels to capture completely.
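The sampling claim can be illustrated with Whittaker-Shannon (sinc) interpolation: a band-limited signal sampled above the Nyquist rate can be reconstructed between its samples. A toy sketch, with a one-dimensional cosine standing in for a well-sampled image row (all names and numbers are ours):

```python
import numpy as np

def sinc_interpolate(samples, x):
    """Whittaker-Shannon reconstruction at position x from unit-spaced samples."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(x - n))

# Band-limited "scene": a cosine at 0.2 cycles/pixel, below the Nyquist limit of 0.5.
n = np.arange(256)
samples = np.cos(2 * np.pi * 0.2 * n)

x = 128.37                                   # an inter-pixel position
reconstructed = sinc_interpolate(samples, x)
exact = np.cos(2 * np.pi * 0.2 * x)
# reconstructed agrees with exact to a few parts in a thousand
# (the small residual comes from truncating the sum at the array edges)
```

If the cosine frequency were pushed above 0.5 cycles/pixel (undersampling), the same reconstruction would return an aliased value, not the true one.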
Some of the information that is lost to undersampling can be recovered. One common strategy to do this involves dithering. By taking observations that are offset from each other by sub-pixel shifts, we can achieve different realizations of the scene. These realizations can be combined to produce a single, higher-resolution representation of the scene that contains more information than any single observation. “Drizzle” is a popular tool that was produced by the Space Telescope Science Institute (STScI; Fruchter and Hook 1997, 2002) to combine multiple dithered pointings into a single composite image.
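The idea behind dithering can be sketched in one dimension: two pointings offset by half a pixel, interlaced onto a finer grid. This is only a toy illustration of sub-pixel sampling, not the Drizzle algorithm itself; all names and numbers are ours:

```python
import numpy as np

fine = np.sin(np.linspace(0, 4 * np.pi, 64))   # "true" scene on a 4x-oversampled grid
PIX = 4                                        # one detector pixel spans 4 fine cells

def observe(scene, offset):
    """Box-average the fine scene into detector pixels, shifted by `offset` fine cells."""
    usable = scene[offset:]
    n_pix = len(usable) // PIX
    return usable[:n_pix * PIX].reshape(n_pix, PIX).mean(axis=1)

a = observe(fine, 0)   # first pointing
b = observe(fine, 2)   # second pointing, dithered by half a pixel

# Interlace the two pointings: the combined samples now land every half pixel,
# doubling the effective sampling rate of the scene.
combined = np.empty(a.size + b.size)
combined[0::2] = a
combined[1::2] = b
```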
By
Gerald J. Fishman, NASA/Marshall Space Flight Center, Space Science Office, VP62, Huntsville, AL 35812, USA,
Charles A. Meegan, Universities Space Research Association, NSSTC, 320 Sparkman Drive, Huntsville, AL 35805, USA
This chapter describes advances in gamma-ray burst (GRB) research during the time of the Burst and Transient Source Experiment (BATSE), which was proposed to NASA in 1978, launched into Earth orbit by the Space Shuttle Atlantis in April 1991 as part of the Compton Gamma Ray Observatory (CGRO), and de-orbited in June 2000. The chapter focuses primarily on BATSE results, although other advances during this time are also discussed. BATSE was the first large, comprehensive experiment specifically designed to study GRBs.
State of the field in 1991
In the two decades prior to BATSE, and following the initial discovery and observations of GRBs with the Vela spacecraft (see also Chapter 1), considerable observational progress was made by means of other spacecraft. In the 1970s and 1980s, several comprehensive catalogs of GRB time profiles were obtained with the Konus instruments on the Venera spacecraft (Mazets et al. 1981a). Perhaps the most accurate spectrum of a GRB at the time was made over a wide energy range by observations from the gamma-ray spectrometer on the Apollo 16 spacecraft (Metzger et al. 1974).
Observations of GRBs were also made by detectors on many other spacecraft designed for other types of research. These included the proportional counters on the UHURU X-ray astronomy spacecraft (R. Harnden, private communication), solar instruments (OSO-7), lunar and planetary X-ray and gamma-ray instruments, and charged particle detectors designed for magnetospheric studies, such as IMP-6 and IMP-7 (e.g., Cline et al. 1973).
Hipparcos (see ESA 1997), launched in 1989, repeatedly observed the whole sky for about 3.5 years but only measured a pre-selected sample of stars (~120 000). The typical positional precision at the end of mission is of the order of one milliarcsecond (mas). Although it was primarily an astrometric mission, its on-board photometer also yielded some sparse light curves of eclipsing binaries. All astrometric observations are one-dimensional measurements, whether the observations are those of a resolved or unresolved binary, or a single star. That measurement is the separation between some reference point and the target projected along the scanning direction (which changes continuously). With a primary mirror of 30 cm operating in the visible (up to V ~ 12 mag), its resolving power was not that impressive (about 0.5″).
The Hubble Space Telescope (HST) is a pointing instrument launched by NASA in 1990. The main instrument for binary observation (and astrometry in general) is the Fine Guidance Sensor (FGS, see Benedict et al. 2008). Two out of the three FGSs lock onto guide stars while the science FGS observes the target with a single-measurement precision of 1 mas.
Gaia, yet another European Space Agency (ESA) mission, will be launched in 2013. Although it will include a spectrograph, primarily for radial-velocity determination, as well as a photometer operating in two distinct bands, Gaia will be based on the same principles as Hipparcos, namely a spinning and precessing instrument with two pointing directions with a large angular separation.
Less than 300 years after Galileo's first telescope observations of celestial objects, Fizeau (1868) suggested a way to improve the measurement of stellar diameters by masking the telescope aperture with two small sub-apertures. Light passing through these sub-apertures would then interfere in the telescope focal plane. The first successful measurement using this principle was performed on Mt. Wilson in 1920 by Michelson and Pease (1921), who determined the diameter of α Orionis to be 0.047 arcsec. This was at a time when the smallest diameter that could be measured with a full aperture was about 1 arcsec, equivalent to the angular resolution of the telescope when observing through atmospheric turbulence.
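The diameter follows from the sub-aperture separation at which the fringe contrast vanishes: for a uniform disk the first null falls at B = 1.22 λ/θ. A sketch with illustrative numbers (the 575-nm wavelength and 3.07-m separation are assumptions chosen to land near the quoted 0.047 arcsec, not values from the text):

```python
import math

ARCSEC_PER_RAD = math.degrees(1.0) * 3600.0

def disk_diameter_arcsec(wavelength_m, null_baseline_m):
    """Uniform-disk diameter from the sub-aperture separation at the
    first fringe null: theta = 1.22 * lambda / B."""
    return 1.22 * wavelength_m / null_baseline_m * ARCSEC_PER_RAD

theta = disk_diameter_arcsec(575e-9, 3.07)   # close to the quoted 0.047 arcsec
```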
Although the measurement of a stellar diameter is not the same as an image, the dramatic increase in angular resolution sparked enough interest in the new method that it was soon understood how such contrast measurements with different pairs of sub-apertures – different in separation and orientation – can be combined to form a high-resolution image not only of stars but of any type of object.
However, due to insurmountable technical problems with the mechanical stability at larger separations of the sub-apertures, optical interferometry was abandoned in the late 1920s. It was not until 1974 that Labeyrie (1975) was able to combine the light from two independent telescopes at the Observatoire de la Côte d'Azur, demonstrating that optical interferometry was feasible.
While angular resolution improves linearly with telescope diameter once atmospheric turbulence is corrected with adaptive optics, even today's largest telescopes cannot resolve features on the surfaces of individual stars. The diffraction limit is still so much larger than a star's disk that their images in the telescope focal plane are indistinguishable from point sources. For example, an angular resolution of 50 milliarcseconds (mas) on an 8-m telescope is only about the angular size of Betelgeuse.
The goal of this book is to present an introduction to the techniques of astrometry and to highlight several applications of those techniques to the solution of current problems of astrophysical interest. In some cases we require the absolute positions of objects to establish reference frames and systems, while in others we need the change in position with time to obtain the distances and tangential velocities of the objects. The astrometric procedures necessary to solve those problems have been laid out in considerable detail by Konig (1933, 1962), Smart (1931), and others, so we will summarize the methods used to transform raw coordinate measurements of celestial objects on photographic plates and CCDs to their corresponding coordinates on the celestial sphere. We recommend that the reader consult the above references where clarification is needed. Once we have the desired celestial coordinates and their changes with time that yield parallaxes and proper motions, we can then create catalogs of those quantities and apply them to the solution of problems in galactic structure, the masses of stars, membership in star clusters, dynamical studies of objects in the Solar System, and extrasolar planets, and to help to set limits to some cosmological models.
Telescope and detector alignment
We normally assume that our telescope and detector have been carefully aligned so that the image quality will be optimum over the field of view (FOV). However, even if the apparent image quality is good over the FOV, it may be that residual misalignments remain that complicate the transformation from detector to sky. Quite often, high-order polynomials are used to absorb the effects of those misalignments. Unfortunately, that procedure obscures the interpretation of the transformation terms and can lead to spurious and/or lower-accuracy results.
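The transformation itself is typically a low-order polynomial fit. A minimal sketch of a linear "plate model" solved by least squares on synthetic data (all numbers and names are illustrative, not a real calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic detector positions and a known linear mapping to a sky coordinate xi.
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
true_coeffs = np.array([0.05, 1.0002, -0.0003])  # zero point, scale, rotation-like term
xi = true_coeffs[0] + true_coeffs[1] * x + true_coeffs[2] * y
xi += rng.normal(0.0, 1e-5, x.size)              # measurement noise

# Linear plate model xi = a + b*x + c*y, solved by least squares.
A = np.column_stack([np.ones_like(x), x, y])
coeffs, *_ = np.linalg.lstsq(A, xi, rcond=None)
residual_rms = np.std(xi - A @ coeffs)
```

If the residuals of such a fit show spatial structure, the point of this section is that adding a few physically motivated terms (tilt, decentering) is preferable to absorbing the structure into blind high-order polynomials, whose coefficients resist interpretation.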
For 40 years theorists have struggled to understand gamma-ray bursts (GRBs), not only where they are and the systematics of their observed properties, but what they are and how they operate. These broad questions of origin are often referred to as the problem of the “central engine.” So far, this prime mover remains hidden from direct view, and will remain so until neutrino or gravitational-wave signatures are detected. As discussed elsewhere in this volume, there is compelling evidence that all GRBs require the processing of some small amount of matter into a very exotic state, probably not paralleled elsewhere in the modern Universe. This matter is characterized by an enormous ratio of thermal or magnetic energy to mass, and the large energy-loading drives anisotropic, relativistic outflows. The burst itself is made far away from this central source, outside the star that would otherwise obscure it, by processes that are still being debated (Chapters 7 and 8). The flow of energy is modulated by passing through the star, which also explodes as a supernova, and this modulation further obscures details of the central engine.
The study of GRBs experienced spectacular growth after 1997 when the first cosmological counterparts were localized (Chapter 4), and with that growth in data came increased diversity. Still, it is customary to segregate GRBs into “long-soft” (LSBs) and “short-hard” (SHBs) categories (Kouveliotou et al. 1993), though the distinction is not always clear (Chapters 3 and 5; Section 10.5.9).