In the last four decades of the twentieth century, the detection of neutrinos from the Sun became a reality. Using a detection scheme beginning with a chloroethylene-filled tank in the Homestake mine in South Dakota, Raymond Davis and his collaborators (Davis, Harmer, & Hoffman, 1968) established upper limits on the fluxes of neutrinos made in the Sun by the reactions 8B → 8Be* + e+ + νe and 7Be + e− → 7Li + νe and reaching the Earth as electron-flavor neutrinos. The experiment relied on the reaction 37Cl(νe, e−)37Ar, which has a threshold (at 0.814 MeV) approximately twice as large as the 0.43 MeV maximum energy of the neutrino emitted in the pp reaction, slightly smaller than the 0.861 MeV energy of the neutrino emitted in the 7Be + e− → 7Li + νe reaction, and much smaller than the maximum energy of the neutrino emitted in the 8B → 8Be* + e+ + νe reaction.
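The threshold comparison described here amounts to simple arithmetic; a minimal sketch follows, using the energies quoted in the text. The ∼14 MeV endpoint for the 8B continuum is an assumed round value, not a figure quoted above:

```python
# Sketch of the energy bookkeeping for the 37Cl(nu_e, e-)37Ar detector,
# using the threshold and neutrino energies quoted in the text (MeV).
THRESHOLD_CL37 = 0.814   # MeV, threshold of the 37Cl(nu_e, e-)37Ar reaction

# Maximum (or line) energies of the solar neutrino sources discussed;
# the 8B endpoint is an assumed round value, not quoted in the text.
sources = {
    "pp":  0.43,   # continuum endpoint
    "7Be": 0.861,  # dominant line
    "8B":  14.0,   # approximate continuum endpoint (assumed)
}

detectable = {name: e > THRESHOLD_CL37 for name, e in sources.items()}
# pp neutrinos fall below threshold; 7Be and 8B neutrinos can be captured.
```

Only the 7Be and 8B neutrinos therefore contribute to the chlorine capture rate, which is why the Homestake experiment was blind to the far more numerous pp neutrinos.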
The limits established by Davis et al. were an order of magnitude smaller than fluxes which had been predicted on the basis of solar models that incorporated the then best guesses as to the appropriate input physics. Among other consequences, the discrepancy, commonly referred to as the solar neutrino problem, led to a re-examination of available data on relevant nuclear cross sections, revisions and new measurements of these cross sections, and to a refinement over time in the solar models.
In this chapter we consider aircraft and satellites as platforms for remote sensing. There are other, less commonly used, means of holding a sensor aloft, for example towers, balloons, model aircraft and kites, but we do not discuss these. The reason for this, apart from their comparative infrequency of use, is that most remote sensing systems make direct or indirect use of the relative motion of the sensor and the target, and this is more easily controllable or predictable in the case of aircraft and spacecraft. Figure 10.1 shows schematically the range of platforms, and their corresponding altitudes above the Earth’s surface.
The spatial and temporal scales of the phenomenon to be studied will influence the observing strategy to be employed, and this in turn will affect the choice of operational parameters in the case of an airborne observation or of the orbital parameters in the case of a spaceborne observation. After a brief introduction to the use of aircraft as platforms for remote sensing, this chapter focusses on the use of artificial satellites.
The bright stars in the familiar constellations of the Milky Way have intrigued mankind for millennia. Over the past several centuries we have obtained by observations a quantitative understanding of the intrinsic global and surface characteristics of these stars, and over the past century we have learned something about their internal structure and the manner in which they change with time. An awareness that one kind of star can transform into another kind of star and an appreciation of how this transformation is achieved have been accomplishments of the last half of the twentieth century. One of the objectives of this monograph is to describe some of the transformations and to understand how they come about.
The microscopic and macroscopic physics that enters into the construction of the equations of stellar structure and evolution is described in many other monographs and texts. For highly personal reasons, this physics is nonetheless developed here in some detail. My undergraduate and graduate training was in physics, but I did not fully appreciate the beauty of physics until, just prior to my second year of college teaching, during an enforced sedentary period occasioned by a collision between myself on a bicycle and an automobile, I discovered the book Frontiers of Astronomy by Fred Hoyle and became entranced with the idea that the evolution of stars could be understood by applying the principles of physics. During my next two years of teaching, I embarked on a self-study course heavily influenced by the vivid description of physical processes in stars by Arthur S. Eddington in his book The Internal Constitution of the Stars and by the straightforward description of how to construct solutions to the equations of stellar structure by Martin Schwarzschild in his book The Structure and Evolution of the Stars. These books taught me that stars provide a context for understanding physics on many different levels.
Because faster moving particles at higher temperatures transfer energy to more slowly moving particles at lower temperatures, the very existence of a temperature gradient implies a flow of energy in the direction in which the temperature decreases. In the stellar interior, because of their small mass and consequent high velocities, free electrons are the dominant contributors to this mode of thermal energy transfer, which is called thermal or heat conduction.
Thermal conduction does not play a significant role in transporting heat in stars during most of the main sequence phase. However, towards the end of the main sequence phase, as detailed in Chapter 11 of Volume 1 (Section 11.1), low mass stars develop hydrogen-exhausted cores in which electrons become increasingly degenerate, and evolve into red giants with fully electron-degenerate helium cores. Under electron-degenerate conditions, only those electrons with energies within about kT of the Fermi energy ϵF participate in transporting heat, but their average cross section for scattering from ions and other electrons is reduced by a factor of the order of (kT/ϵF)2 relative to their average cross section under non-degenerate conditions. Hence, conduction becomes very effective in slowing the rate at which temperatures increase in the electron-degenerate cores and prevents low mass red giants from igniting helium until the degenerate core has grown to almost one-half of a solar mass. As described in Chapters 17 and 18 of this volume, electron conduction plays a similar role in both low and intermediate mass stars after they have exhausted helium at their centers and become asymptotic giant branch stars with electron-degenerate carbon–oxygen or oxygen–neon cores.
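The size of the (kT/ϵF)2 suppression factor can be estimated with a rough order-of-magnitude sketch. The density, temperature, and composition used below (ρ ≈ 10^6 g cm^−3, T ≈ 10^8 K, μe = 2) are assumed illustrative values for a degenerate helium core, not figures from the text:

```python
import math

# Order-of-magnitude sketch of the (kT/eps_F)^2 suppression factor for
# electron scattering under degenerate conditions. The core density and
# temperature below are assumed, illustrative values only.
K_BOLTZ = 8.617e-11      # MeV per K
M_E = 0.511              # electron rest energy, MeV
HBARC = 197.327e-13      # hbar*c in MeV cm
M_U = 1.6605e-24         # atomic mass unit, g

rho = 1.0e6              # g/cm^3 (assumed core density)
T = 1.0e8                # K (assumed core temperature)
mu_e = 2.0               # mean molecular weight per electron (helium)

n_e = rho / (mu_e * M_U)                                  # electrons per cm^3
p_f_c = HBARC * (3.0 * math.pi**2 * n_e) ** (1.0 / 3.0)   # Fermi momentum x c, MeV
eps_f = math.sqrt(p_f_c**2 + M_E**2) - M_E                # kinetic Fermi energy, MeV

suppression = (K_BOLTZ * T / eps_f) ** 2
# Here kT ~ 0.009 MeV and eps_F ~ 0.14 MeV, so the factor is of order 1e-3,
# illustrating how strongly degeneracy enhances conductive transport.
```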
The general direction of this book has been to follow approximately the flow of information, from the thermal or other mechanism for the generation of electromagnetic radiation, to its interaction with the surface to be sensed, thence to its interaction with the atmosphere, and finally to its detection by the sensor. It is clear that the information has not yet reached its final destination. First, it is still at the sensor and not with the data user. Second, the ‘raw’ data will in general require a significant amount of processing before they can be applied to the task for which they were acquired.
In this chapter we shall discuss the more important aspects of the processes to which the raw data are subjected. For the most part, it will be assumed that the data have been obtained from an imaging sensor so that the spatial form of the data is significant. The principal processes are transmission and storage of the data, preprocessing, enhancement and classification. The last three processes are generally regarded as aspects of image processing, a major field of study in its own right, and we shall not be able to do much more than outline its general features. There are many books on the subject to which the interested reader may be referred, for example Campbell (2008), Schowengerdt (2007), Mather and Koch (2010), Burger and Burge (2005).
The construction of large ground-based optical and infrared telescopes is driven by the desire to obtain astronomical measurements of both higher sensitivity and higher angular resolution. With each increase in telescope diameter the former goal, that of increased sensitivity, has been achieved. In contrast, the angular resolution of large telescopes (D > 1 m), using traditional imaging, is limited not by the diffraction limit (θ ∼ λ/D), but rather by turbulence in the atmosphere. This is typically 1″, a factor of 10–20 times worse than the theoretical limit of a 4-meter telescope at near-infrared wavelengths. This angular resolution handicap has led to both space-based and ground-based solutions. With the launching of the Hubble Space Telescope (HST), a 2.4-m telescope equipped with both optical and infrared detectors, the astronomical community has obtained diffraction-limited images. These optical images, which have an angular resolution of ∼0.1″, have led to exciting new discoveries, such as the detection of a black hole in M87 (Ford et al. 1994) and protostellar disks around young stars in Orion (O'Dell et al. 1993, O'Dell and Wen 1994). However, HST has a modest-sized mirror diameter compared to the 8–10 meter mirror diameters of the largest ground-based telescope facilities.
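The quoted handicap factor follows directly from θ ∼ λ/D; a sketch, assuming a 4 m aperture and illustrative near-infrared wavelengths of 1.0 and 2.2 μm:

```python
import math

# Sketch: diffraction limit theta ~ lambda/D in arcseconds, compared with
# the typical 1" seeing quoted in the text, for an assumed 4 m telescope.
# The two near-infrared wavelengths are illustrative choices.
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Angular resolution theta ~ lambda/D, expressed in arcseconds."""
    return wavelength_m / diameter_m * RAD_TO_ARCSEC

seeing = 1.0  # arcsec, typical atmospheric seeing
factors = {lam: seeing / diffraction_limit_arcsec(lam, 4.0)
           for lam in (1.0e-6, 2.2e-6)}
# The seeing is roughly 9x worse than the diffraction limit at 2.2 micron
# and roughly 19x worse at 1.0 micron, consistent with the quoted 10-20.
```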
With the development of techniques to overcome the wavefront distortions introduced by the Earth's atmosphere, diffraction-limited observations from the ground have become possible. These techniques cover a wide range of complexity and hence expense. Speckle imaging, which provided the earliest and simplest approach, is described in Sections 10.1 and 23.3.1 and adaptive optics, which has more recently become scientifically productive and which is a much more powerful technique, is discussed in Section 10.2.
The launch of the Hipparcos satellite in 1989 and the Hubble Space Telescope in 1990 revolutionized astrometry. By no means does this imply that not much progress was made in the ground-based techniques used exclusively until then. On the contrary, the 1960s to 1980s saw an intense development of new or highly improved instruments, including photoelectric meridian circles, automated plate measuring machines, and the use of charge-coupled device (CCD) detectors for small-field differential astrometry (for a review of optical astrometry at the time, see Monet 1988). In the radio domain, very long baseline interferometry (VLBI) astrometry already provided an extragalactic reference frame accurate to about 1 milliarcsecond (mas) (Ma et al. 1990). Spectacular improvements were made in terms of accuracy, the faintness of the observed objects, and their numbers. However, there was a widening gulf between small-angle astrometry, where differential techniques could overcome atmospheric effects down to below 1 mas, and large-angle astrometry, where conventional instruments such as meridian circles seemed to have hit a barrier in the underlying systematic errors at about 100 mas. Though very precise, the small-angle measurements were of limited use for the determination of positions and proper motions, due to the lack of suitable reference objects in the small fields, and even for parallaxes the necessary correction for the mean parallax of background stars was highly non-trivial. Linking the optical observations to the accurate VLBI frame also proved extremely difficult.
The Earth's atmosphere imposes several limitations on our ability to perform astrometric measurements from the ground in both the optical and radio regions of the spectrum. First, we are limited to wavelengths where the absorption is not too great, i.e. the broad optical region from the ultraviolet to the near-infrared, scattered regions in the infrared and broad regions at radio wavelengths. The fundamental limitations imposed by the atmosphere are different in the optical and radio and in this chapter we will deal with those important for the optical; the radio part is largely dealt with in Chapter 12 on radio interferometry, except for a summary given here on the precision limitations imposed by the atmosphere. The second problem created by observing through the atmosphere is refraction of the light waves as they pass through different levels of the atmosphere. If it were only refraction through a stable medium, the problem would be very simple; however, the atmosphere is a turbulent medium that causes variations in the amount of refraction both spatially and temporally and it therefore limits the precision and accuracy of our observations. In this chapter we will deal with both effects using the developments in Schroeder (1987, 2000) as the basic reference.
Refraction through a plane-parallel atmosphere
When we are dealing with relative positions in fields of view less than several degrees, it is sufficient to adopt a plane-parallel atmosphere for the model. In cases where we need to consider the total displacement, such as with meridian circles, it is necessary to adopt atmospheric models that are substantially more complicated, such as those developed by Garfinkel (1967) and Auer and Standish (2000). For the purposes of this chapter we can safely use the plane-parallel atmosphere.
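For the plane-parallel model, the refraction reduces to the familiar form R ≈ (n0 − 1) tan z, where n0 is the refractive index of air at the observer and z is the apparent zenith distance. The following sketch assumes a standard sea-level refractivity n0 − 1 ≈ 2.9 × 10^−4, a value not taken from the text:

```python
import math

# Sketch of the plane-parallel refraction formula R ~ (n0 - 1) tan(z).
# The refractivity n0 - 1 ~ 2.9e-4 (about 60" when expressed as an angle)
# is an assumed standard sea-level figure, illustrative only.
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0
N0_MINUS_1 = 2.9e-4   # assumed refractivity of air at the observer

def refraction_arcsec(zenith_deg):
    """Plane-parallel refraction (arcsec) at apparent zenith distance z."""
    return N0_MINUS_1 * math.tan(math.radians(zenith_deg)) * RAD_TO_ARCSEC

# At z = 45 deg the displacement is ~60"; the plane-parallel approximation
# degrades rapidly at large z, where curved-atmosphere models such as
# those of Garfinkel (1967) are required.
```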
Time-delay integration (TDI), also known as drift scanning, is a mode of reading out a charge-coupled device (CCD) camera that allows a continuous image or scan of the sky to be recorded. Normally, most astronomers use CCDs in the point-and-shoot or stare mode. A telescope is pointed to a particular position of interest on the sky and made to track at that position. A shutter is opened to expose the CCD, and then closed while the electronic exposure recorded by the CCD is read out. In drift-scan mode, the telescope is parked, tracking is turned off, and the camera shutter is held open. As the sky drifts across the field, the electronic exposure recorded by the CCD is shifted across the pixel array, row by row, to match the drift rate of the sky. The time it takes a source in the field to drift across the whole array is the exposure time of the scan. Since the readout is continuous, this is the most time-efficient way to survey large areas of sky. There is no pause between exposures to wait for the readout of the camera. The largest-area photometric surveys to date have been made with drift-scanning cameras. Smaller-scale surveys have used drift scans for astrometry of faint standards, suitable for the re-calibration of the relatively imprecise positions of the large photometric and Schmidt-plate catalogs.
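The row-clocking rate and effective exposure time described above follow from the sidereal drift rate of the sky. A sketch, with an assumed pixel scale and array size (illustrative values, not from the text):

```python
import math

# Sketch of the basic TDI timing arithmetic: the sky drifts past at the
# sidereal rate (~15.04" of sky per second of time at the equator, scaled
# by cos(dec)), so CCD rows must be clocked at the matching rate, and the
# exposure time equals the time a source takes to cross the array.
# The pixel scale and row count below are assumed, illustrative values.
SIDEREAL_RATE = 15.041  # arcsec of sky per second of time, at the equator

def drift_scan_timing(pixel_scale_arcsec, n_rows, dec_deg):
    rate = SIDEREAL_RATE * math.cos(math.radians(dec_deg))  # arcsec/s
    row_period = pixel_scale_arcsec / rate   # seconds between row shifts
    exposure = row_period * n_rows           # time for a source to cross
    return row_period, exposure

# e.g. a 2048-row CCD with 0.5"/pixel scanning on the celestial equator:
row_period, exposure = drift_scan_timing(0.5, 2048, 0.0)
# rows are shifted every ~0.033 s and each source is exposed for ~68 s
```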
Charge-coupled device cameras were first used in drift-scan mode on ground-based telescopes beginning in the 1980s with the CCD Transit Instrument (McGraw et al. 1980) and the Spacewatch Telescope (Gehrels et al. 1986). These early instruments consisted of CCDs with field sizes of ∼10 arcmin, capable of covering 10–20 square degrees in a single scan. The Spacewatch camera was the first to use a CCD for automated detection of near-Earth asteroids (Rabinowitz 1991). The CCD Transit Instrument had two CCDs aligned east–west allowing simultaneous scans of the same field in two different passbands. Advancements in computer speed and capacity have since allowed the construction of much larger scanning cameras made up of CCD mosaics.
Astrometry is the branch of astronomy that studies the positions and motions of celestial objects, from bodies in the Solar System and stars in the Milky Way Galaxy to galaxies near the limit of the observable Universe. In radio astronomy, the study can be separated into micro-astrometry and macro-astrometry. Micro-astrometry deals with the motion of individual objects, or of a small number of associated objects, in order to determine their space motion, their distance from the Solar System, and their kinematic properties with respect to neighboring stars and planets. The observational techniques and reductions measure the separation of the target object from a nearby calibrator radio source with known and stable properties.
Macro-astrometry, on the other hand, deals with the absolute position of radio sources, which also requires the determination of the Earth's deformations, complex rotations, and space motion. This type of astrometric experiment observes many well-known compact radio sources over the sky within a 24-hour period. From the analysis of systematic residuals in the data, the absolute positions of the sources, as well as the astrometric and geodetic properties of the Earth, are determined. From this 30+ year effort, the fundamental celestial inertial frame has been defined to an accuracy of about 0.01 milliarcsec (mas) using the positions of nearly 300 radio sources.
For nearly 30 years, the highest astrometric precision has been obtained using radio-interferometric techniques because of several properties of radio waves. First, astronomers and engineers have been able to connect arrays of radio telescopes that span the Earth (even into Earth orbit) to achieve resolutions of a few mas and obtain positional accuracies well under 1 mas.
To describe the Milky Way Galaxy, it is convenient to divide the entire collection of Galactic stars into components with broadly consistent properties (e.g. age, chemical composition, kinematics). The main idea behind this division is that those components represent stars of a loosely common origin and, thus, are an important means to understanding the formation and evolution of galaxies. Thus, the Milky Way can be "partitioned" into the four main stellar populations: the young thin and the older thick disks, the old and metal-poor halo, and the old and metal-rich bulge. Each of these populations has its own characteristics such as the size, shape, stellar density distribution, and the internal velocity distribution. The latter can be as low as ∼15 km/s in one coordinate (for the thin disk) and as high as ∼100 km/s (for the halo). It is important to realize that along any direction in our Galaxy, there is always a juxtaposition of these populations, which can be described only in a statistical sense.
Star clusters are another distinct population of objects permeating the entire Galaxy. Our Milky Way Galaxy hosts a large number of recognized open (˜1800) and globular (˜160) clusters that are extremely valuable tracers of the main four Galactic populations. These stellar systems are gravitationally bound and this property along with virtually no dispersion in metallicity and age within an individual system sets them apart from the other Galactic stellar populations.