In the space of less than ten years, starting in the early 1980s, observational astronomy was revolutionized by the appearance of the charge-coupled device (CCD) detector. During this period, the large professional observatories constructed their own CCD cameras, which immediately replaced photographic cameras in almost all areas of application.
But for amateur astronomers, doing CCD photography in the 1980s required building one's own camera, that is, mastering digital and analog electronics, computers, the science of heat… The situation was dire except for those whose profession gave them the necessary skills. A few pioneers, who were part of the latter group, set an example with their work and brought this new technology to the attention of amateurs. Little by little, a few groups began the adventure of constructing their own CCD camera.
By the end of the 1980s, the first commercial cameras intended for amateur astronomers made their appearance. Today, these cameras offer better specifications and are easier to use, and a wider selection is available at affordable prices. It is now that we are seeing the real CCD revolution for the amateur astronomer: each will be able to use this tool and thereby increase the observational possibilities tenfold.
In 1988, the Association for the Development of Large Observing Instruments (ADAGIO) established the ambitious project of building an 80 cm telescope geared toward amateur astronomers. It was decided, after initial research, that the principal equipment of this telescope would be a CCD camera.
A CCD's primary function is to produce images; at first sight, then, it rivals photography. But it also permits measurements of luminous flux, and therefore rivals photometers as well.
It is in comparing its performance with existing detectors, photographic film in particular, that we can easily see in what areas the CCD will assert itself and what new areas can be opened up.
Image quantification and linearity
By its nature, the CCD image is digitized with a regular spatial sampling. Moreover, the digital value representing each image point (after the dark and flat field corrections) is proportional to the amount of light received. Hence, the image is directly usable in digital processing, which makes it accessible to powerful information extraction tools, described in chapter 5.
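The correction chain just mentioned (dark subtraction, then flat-field division) can be sketched in a few lines. This is an illustrative numpy example with made-up synthetic frames, not code from the book:

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Dark-subtract a raw CCD frame, then divide by the
    unit-mean flat field to remove pixel-to-pixel gain variations."""
    dark_subtracted = raw - dark
    norm_flat = flat / flat.mean()
    return dark_subtracted / norm_flat

# Synthetic frames: uniform illumination seen through a gain gradient
true_sky = np.full((4, 4), 1000.0)              # photons per pixel
gain = np.linspace(0.8, 1.2, 16).reshape(4, 4)  # pixel sensitivities
dark = np.full((4, 4), 50.0)                    # thermal signal
raw = true_sky * gain + dark                    # what the CCD records
flat = gain.copy()                              # flat field exposes the gain map

calibrated = calibrate(raw, dark, flat)         # recovers the uniform sky
```

After the two corrections, the pixel values are proportional to the light received, which is exactly the property that makes the digital processing of chapter 5 possible.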
Digitally processing a photographic image is much less natural. The image must first be sampled at regular intervals: a microdensitometer placed in front of the film measures its density over an area a few micrometers wide. But density is not proportional to the quantity of light the film received, so each measurement must be converted into an amount of light using the film's characteristic response curve, which is not known precisely. Furthermore, the mechanism that moves the microdensitometer from one measurement zone to another across the film's surface must be accurate to within a micrometer, which is not easy to achieve.
The first observations of the sky relied on the naked eye. In this way, we can observe celestial objects to the 6th magnitude, with an angular resolution on the order of an arcminute. At the beginning of the 17th century, Galileo showed us that, with the use of an optical instrument, we can observe much fainter objects at better resolution. Hence, a modest 20 cm telescope allows the visual observation of 12th-magnitude stars with a resolution on the order of an arcsecond.
At the end of the 19th century, the appearance of photographic film turned our vision of the cosmos upside down. Photography, coupled with large telescopes, allowed the observation of objects of the 20th magnitude thanks to the possibility of integrating light. The general public was thus able to see for themselves superb images from the celestial world. And is this not the usual starting point for amateur astronomers?
The quality of specialized photographic films for astronomy has continued to improve, especially during the 1970s, thanks to the hypersensitization of fine-grained films. Bear in mind that the grains, whose average size is about 5 micrometers (5 thousandths of a millimeter), are the elementary points that form the photographic image.
The 1980s saw the rise of CCD cameras, which replaced photography in astronomy. CCD stands for charge-coupled device. A CCD camera takes the form of a box equipped with a transparent window, inside which the CCD chip is located.
This contribution reviews the current status of optical wide field survey astronomy and the basic techniques that have been developed to capitalize on the large volumes of data generated by modern optical survey instruments. Topics covered include: telescope design constraints on wide field imaging; the properties of CCD detectors and wide field CCD mosaic cameras; preprocessing CCD data and combining independent digitized frames; optimal detection of images and digital image centering and photometry methods. Although the emphasis is geared toward optical imaging problems, most of the techniques reviewed are applicable to any large format two-dimensional astronomical image data.
Wide Field Survey Astronomy
Background
Astronomy is basically an observational science, rather than an experimental one, and the development and advancement of the subject has relied heavily on surveys of the sky at optical wavelengths to expand our knowledge of the observable Universe. Surveys form a basic foundation of observational astronomy, and provide three generic types of information:
(a) quantitative statistical information on the distribution of objects in our own galaxy and the Universe
(b) the ability to discover radically new types of object
(c) the means of selecting representative samples of certain types of (rare) objects, particularly the brightest examples, for further study with large telescopes.
Statistical surveys are beginning to rely ever more heavily on the wide field multi-object fibre spectroscopy capabilities of large telescopes, described elsewhere in this volume.
Astronomical telescopes are devices which collect as much radiation as possible from astronomical (stellar) objects and concentrate it into as sharp (small) an image as possible. Both collecting area and angular resolution play a role. The relative merit of these two functions has changed over the years in optical astronomy: angular resolution dominated initially, and then, as the atmospheric seeing limit was reached, collecting area became the most important factor. Hence the habit these days of expressing the quality of a telescope by its (collecting) diameter rather than by its angular resolution. With the introduction of techniques which overcome the limits set by atmospheric seeing, the emphasis is shifting back to angular resolution. This time, however, it is set by the diffraction limit of the telescope, so that both the angular resolution and the collecting power of a telescope are determined by its diameter. The two telescope functions will therefore go hand in hand.
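The two regimes can be compared through the Rayleigh diffraction limit, theta = 1.22 lambda/D: once techniques remove the roughly one-arcsecond seeing floor, resolution scales with diameter just as collecting area does. A small illustrative calculation (the apertures and the 550 nm wavelength are assumed values, not from the text):

```python
import math

def diffraction_limit_arcsec(diameter_m, wavelength_m=550e-9):
    """Rayleigh criterion theta = 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

# A 4 m telescope at 550 nm resolves ~0.03", some thirty times finer
# than typical ~1" atmospheric seeing; an 8 m doubles that advantage.
for d in (4.0, 8.0):
    print(f"D = {d} m: {diffraction_limit_arcsec(d):.3f} arcsec")
```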
Although image selection and various speckle image reconstruction techniques have been successful in giving diffraction limited images (see, e.g., the paper by Oskar von der Lühe in the First Canary Island Winter School, 1989), the most powerful and promising technique for all astronomical applications is adaptive optics. That is because, for an unresolved source, it puts most of the collected photons into as small an image as possible, which benefits discrimination against the sky background, high spectral and spatial resolution spectroscopy, and interferometric imaging with telescope arrays.
The new generation of 8-10 m telescopes is opening up important possibilities for polarimetry of astrophysically interesting sources, mainly because the large collecting area is particularly advantageous in this technique, which requires high signal-to-noise ratios. This course starts by emphasizing the importance of polarimetry in astronomy and giving some examples of polarizing phenomena in everyday life. Then an introduction to the Stokes parameters and to Mueller calculus is given, with examples of how to describe the most common polarizing optical components, and the main mechanisms producing polarized light in astrophysics are reviewed. The section devoted to instruments starts with a brief overview of the classical photopolarimeter, follows with a description of an imaging polarimeter, with examples of data obtained and an analysis of the sources of error, and ends with a discussion of modern spectropolarimetry. The following section is devoted to an analysis of the gains of large 8–10 m telescopes for polarimetry and to a review of the polarimeters planned for them. The course ends with a discussion of polarimetry of AGN, as an example of a field of research where polarimetry has provided important results by disentangling unresolved geometries and mixed spectral components.
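As a taste of the Mueller calculus introduced in the course, the sketch below (an illustration, not code from the text) propagates a Stokes vector through the standard Mueller matrix of an ideal linear polarizer; a crossed pair extinguishes the beam, as expected:

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with its axis at
    angle theta (radians) from the reference direction."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return 0.5 * np.array([
        [1.0,   c,     s,   0.0],
        [c,   c * c, s * c, 0.0],
        [s,   s * c, s * s, 0.0],
        [0.0, 0.0,   0.0,   0.0],
    ])

# Stokes vector (I, Q, U, V) of an unpolarized beam of unit intensity
unpolarized = np.array([1.0, 0.0, 0.0, 0.0])

after_first = linear_polarizer(0.0) @ unpolarized            # I halved, fully polarized
after_crossed = linear_polarizer(np.pi / 2.0) @ after_first  # crossed pair: extinction
```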
The beauty of polarimetry
Astronomy is an observational science, not an experimental one in the usual sense: to understand the objects in the Universe we cannot perform controlled experiments, but have to rely on observations of what these objects do, independently of us.
An alternative title of this material could be “The Data Everyone Would Like to Get for their Research!” The first thing we seem to do in astronomy is ‘see’ something, be it simply looking in the sky, using a big telescope, or helping ourselves with sophisticated adaptive optics or space probes. But the very next thing we want to do is get that light into a spectrograph! We might get spectral information from colors, energy distributions, modest resolution or real honest high resolution spectroscopy, but we desperately need such information. Why? Well, because that's where most of the physical information is, and higher spectral resolution means access to more and better information. High resolution implies actually resolving the structure of the spectrum. Naturally we want to do this as precisely as possible, not only pushing toward good spectral resolution and high signal-to-noise, but also by understanding how the equipment has modified the true spectrum and by weeding out problems and undesirable characteristics. The main focus here will be on the machinery of spectroscopy, but oriented toward optical spectrographs and the spectral lines they are best suited to analyze. I do not concentrate on the specific instruments, but rather on the techniques and thought patterns we need. These are the fundamental things you can take with you and apply to any spectroscopic work you do. Of course, you will always have to fill in specific details for the particular machinery and tools you use.
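The claim that higher spectral resolution buys more physical information can be quantified through the resolving power R = lambda/delta-lambda and its velocity equivalent delta-v = c/R. A short sketch (the R = 100 000 and 550 nm figures are illustrative assumptions, not from the lecture):

```python
C_KM_S = 299792.458  # speed of light in km/s

def velocity_resolution_km_s(resolving_power):
    """Velocity span corresponding to one resolution element, dv = c / R."""
    return C_KM_S / resolving_power

def delta_lambda_nm(wavelength_nm, resolving_power):
    """Smallest resolvable wavelength interval at a given wavelength."""
    return wavelength_nm / resolving_power

# A high-resolution echelle with R ~ 100000 at 550 nm resolves
# ~0.0055 nm, i.e. features separated by about 3 km/s: fine enough
# to see thermal broadening and velocity structure in line profiles.
print(velocity_resolution_km_s(100_000))
print(delta_lambda_nm(550.0, 100_000))
```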
Astronomy is entering a new observational era with the advent of several large telescopes, 8 to 10 metres in size, which will shape the kind of astrophysics that will be done in the next century. Scientific focal plane instruments have always been recognized as key factors enabling astronomers to obtain the maximum performance out of the telescope in which they are installed. Building instruments is therefore not only a state-of-the-art endeavour, but the ultimate way of reaching the observational limits of the new generation of telescopes. Instruments also define the type of science that the new telescopes will be capable of addressing in an optimal way. It is clear, therefore, that whatever instruments are built in the coming years, they will influence the kind of science that is done well into the 21st century.
The goal of the 1995 Canary Islands Winter School of Astrophysics was to bring together advanced graduate students, recent postdocs and interested scientists and engineers, with a group of prominent specialists in the field of astronomical instrumentation, to make a comprehensive review of the driving science and techniques behind the instrumentation being developed for large ground based telescopes. This book is unique indeed in that it combines the scientific ideas behind the instruments, something at times not appreciated by engineers, with the techniques required to design and build scientific instruments, something that few astronomers grasp during their education.
Chapter 1 reviews the image restoration/reconstruction problem in its general setting. We first discuss linear methods for solving the problem of image deconvolution, i.e. the case in which the data is a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including Maximum-Likelihood and Maximum Entropy methods. Finally, the successes and failures of these methods are discussed along with some of the roots of these problems and the suggestion that these difficulties might be overcome by new (e.g. pixon-based) image reconstruction methods.
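As one concrete instance of the Maximum-Likelihood approach mentioned above, the classic Richardson-Lucy iteration for Poisson-noise deconvolution can be sketched as follows. This is an illustrative implementation, not the chapter's own code:

```python
import numpy as np

def fft_convolve(a, kernel):
    """Circular convolution of image a with a small kernel, via the FFT."""
    ka = np.zeros_like(a)
    kh, kw = kernel.shape
    ka[:kh, :kw] = kernel
    ka = np.roll(ka, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at origin
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(ka)))

def richardson_lucy(data, psf, iterations=50):
    """Iteratively maximize the Poisson likelihood of `data` given `psf`.
    Each step multiplies the estimate by the back-projected ratio of the
    data to the current model, preserving positivity and total flux."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(iterations):
        model = fft_convolve(estimate, psf)
        ratio = data / np.maximum(model, 1e-12)
        estimate *= fft_convolve(ratio, psf_flipped)
    return estimate
```

On a blurred point source the iteration steadily re-concentrates the flux into the central pixel while conserving the total; its well-known failure mode, noise amplification at high iteration counts, is among the difficulties that motivate the newer reconstruction methods discussed in the later chapters.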
Chapter 2 discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of Algorithmic Information Content (AIC) is introduced and shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. This motivates the selection of a multiresolution language for the reconstruction of generic images.
Chapter 3 introduces pixon-based image restoration/reconstruction methods. The relationship between image Algorithmic Information Content and the Bayesian incarnation of Occam's Razor is discussed, as well as the relationship between multiresolution pixon languages and image fractal dimension. Also discussed are the relationship of pixons to the role played by the Heisenberg uncertainty principle in statistical physics, and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for Maximum Likelihood estimation.
This paper reviews near infrared instrumentation for large telescopes. Modern instrumentation for near infrared astronomy is dominated by systems which employ state-of-the-art infrared array detectors. Following a general introduction to the near infrared wavebands and transmission features of the atmosphere, a description of the latest detector technology is given. Matching of these detectors to large telescopes is then discussed in the context of imaging and spectroscopic instruments. Both the seeing-limited and diffraction-limited cases are considered. Practical considerations (e.g. the impact of operation in a vacuum cryogenic environment) that enter into the design of infrared cameras and spectrographs are explored in more detail and specific examples are described. One of these is a 2-channel IR camera and the other is a NIR echelle spectrograph, both of which are designed for the f/15 focus of the 10-m W. M. Keck Telescope.
The Near Infrared Waveband
In the last ten years there has been tremendous growth in the field of Infrared Astronomy. This growth has been stimulated in large part by the development of very sensitive imaging devices called infrared arrays. These detectors are similar, but not identical, to the better-known silicon charge-coupled device or CCD, which is limited to wavelengths shorter than 1.1 µm. In particular, near infrared array detectors are now sufficiently sensitive that images of comparable depth to those obtained with visible-light CCDs can be achieved from 1.0 µm to 2.4 µm and high resolution IR spectrographs are now feasible.
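The 1.1 µm silicon limit quoted above follows from the photoelectric condition lambda_c = hc/E_g: a photon redder than the cutoff cannot lift an electron across the bandgap. A quick check (the 0.5 eV alloy bandgap below is an illustrative assumption for a near-IR material such as HgCdTe):

```python
HC_EV_UM = 1.23984  # h*c expressed in eV·µm

def cutoff_wavelength_um(bandgap_ev):
    """Longest wavelength that can still photo-excite a carrier."""
    return HC_EV_UM / bandgap_ev

print(cutoff_wavelength_um(1.12))  # silicon's ~1.12 eV gap: cutoff near 1.1 µm
print(cutoff_wavelength_um(0.50))  # ~0.5 eV alloy: cutoff near 2.5 µm
```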
This lecture introduces the opportunities presented by ground-based telescopes for new discoveries in the thermal infrared, and discusses techniques used to make sensitive observations in an environment with high background flux levels from atmospheric emission and from the telescope structure and mirrors.
Mid-IR astronomy—opportunities and problems
The capability now exists to observe mid-IR astronomical objects with spatial resolution of a third of an arcsecond and sensitivities reaching well below a mJy. Both imaging and spectroscopy with new array instruments on optimized large telescopes are producing new data on sources ranging from comets to active galactic nuclei. With sensitivity to emission from cool dust, diagnostic lines from ionized gas and molecular species, and the capability to look through clouds opaque in the visible, many new results are appearing, and many more can be anticipated. In particular, our understanding of the star formation process should improve significantly in the next decade. Yet all of this is achieved operating through the Earth's atmosphere, which absorbs and distorts the signals and which, together with the telescope structure itself, radiates into the beam up to a million times the power detected from the source. The problems encountered, and the techniques used to make ground-based mid-IR observations, will be discussed here.
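The size of the thermal background can be seen directly from the Planck function: a roughly 300 K telescope and atmosphere radiate most strongly near 10 µm (Wien's law gives lambda_max ≈ 2898 µm·K / 300 K ≈ 9.7 µm), precisely in the mid-IR window. An illustrative calculation with standard physical constants (not from the lecture):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength_m, temperature_k):
    """Planck spectral radiance B_lambda in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# A 300 K surface emits overwhelmingly more at 10 µm than in the
# visible, which is why the telescope itself floods a mid-IR
# detector with photons while contributing nothing at 0.55 µm.
ratio = planck(10e-6, 300.0) / planck(0.55e-6, 300.0)
```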
IRAS (Infrared Astronomical Satellite) revealed how fascinating and complex the IR sky is at wavelengths of 12, 25, 60 and 100 µm. The IRAS mission lasted for 300 days in 1983, completing an all-sky survey with a 57-cm diameter cooled telescope.
In Chapter 3, we studied how single charged particles move in specified electric and magnetic fields, and we then applied our knowledge of single particle motion to the radiation belt and ring current plasma. However, the fields in some situations depend too much on the particle distributions to be readily specified and must be found self-consistently using the charged particle distribution functions. Often, it is not necessary to have complete information about the distribution functions in a system. In fact, it is usually sufficient to know only a few of the velocity moments of the distribution function, as derived in Chapter 2. In Chapter 4, we will adopt the “fluid” picture of a plasma, introduced in Chapter 2, and further refine it to obtain an analytical tool useful for studying space plasma phenomena. This analytical tool is called magnetohydrodynamics (or MHD for short). We cannot adequately cover in one chapter all the material that would be desirable to know about this subject and so the reader is encouraged to consult one or more of the references listed in the bibliography at the end of this chapter.
Two-fluid plasma
Let us consider a plasma consisting of two species: electrons (e) with mass m_e and a single ion species (i) with mass m_i.
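To fix the scales of the two species, a short sketch (illustrative numbers, not from the chapter) of their plasma frequencies shows the mass separation that the two-fluid treatment must handle:

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
M_E = 9.1093837015e-31   # electron mass m_e, kg
M_P = 1.67262192369e-27  # proton mass (taken here as m_i), kg

def plasma_frequency(n_m3, mass_kg):
    """Angular plasma frequency omega_p = sqrt(n e^2 / (eps0 m)) in rad/s."""
    return math.sqrt(n_m3 * E**2 / (EPS0 * mass_kg))

# Solar-wind-like density of 5 particles per cm^3 (an assumed value)
n = 5.0e6  # m^-3
f_pe = plasma_frequency(n, M_E) / (2.0 * math.pi)  # electrons: ~20 kHz
f_pi = plasma_frequency(n, M_P) / (2.0 * math.pi)  # protons: ~43x lower
```

The electron-to-ion frequency ratio is just sqrt(m_i/m_e), about 43 for protons, which is why the two fluids respond to fields on very different timescales.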
Most of the visible matter in the universe exists as a fluid composed of electrically charged particles rather than as a gas made of neutral atoms or molecules. Gas mixtures of electrically charged particles, such as electrons and ions, are called plasmas. Plasmas are found in the following solar system environments: the solar atmosphere, the interplanetary medium, planetary magnetospheres, and planetary ionospheres. Most of the interstellar medium is also plasma, as are most other regions of our galaxy.
Most of the plasma found in our own solar system is accessible to in situ measurements made by instruments onboard spacecraft. Since the advent of the space age in the late 1950s, space probes have visited Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, and comets Giacobini–Zinner, Halley, and Grigg–Skjellerup. The space environment surrounding the Earth has also been extensively studied by experiments onboard rockets and satellites. The Sun and astrophysical plasma environments outside our own solar system are not subject to direct measurements but must be observed remotely with sophisticated instruments located either at ground-based observatories or on orbiting observatories. An exception is the very energetic particles called cosmic rays, which can be observed using Earth-based or balloon-borne experiments. Solar cosmic rays have energies up to about 100 million electron volts (100 MeV) and originate in the solar corona.