During the fall semester of 1984 I visited the State University of New York at Stony Brook and taught an advanced graduate course on wave equations and quantum fields in curved space-times. This book is based on the notes for that course. I am very grateful to the Mathematics Department of SUNY, particularly Professor Michael Taylor, for arranging my temporary faculty appointment there.
The audience for the course consisted of graduate students and faculty members, mostly in mathematics but some in physics. They were assumed to have some knowledge of differential geometry and general relativity, and therefore not much time was spent on expounding those subjects. (The major exception is a chapter on connections on vector bundles and the Synge–DeWitt formalism needed for curved-space renormalization.) More time was spent on establishing a background in quantum theory and certain aspects of analysis, notably eigenfunction expansions.
Addressing such a mixed audience forces two difficult decisions – the relatively superficial one of what language to adopt, and the deeper one of what background knowledge to assume. I have an easy way out of the first problem: because of my own mixed background, the terminology which comes most naturally to me is a roughly equal mixture of the standard vocabularies of mathematicians and physicists. I have tried to say things in several different ways, to keep as many readers comfortable as possible.
When the quantum mechanics of particles appeared, circa 1925, its own inventors immediately realized that it was not really adequate for physics, for several reasons:
(1) It's nonrelativistic. This shortcoming can't be overcome simply by using (special) relativistic classical particle mechanics in place of nonrelativistic mechanics as the source of the equations of motion or the Hamiltonian. Attempts to do so led to relativistic wave equations, the Klein–Gordon and Dirac equations, which were afflicted with negative probabilities or negative energies, respectively. Superficially these could be eliminated by discarding half the solutions ad hoc, but the resulting theories developed inconsistencies when interactions were included.
(2) The states describe fixed numbers of particles. Therefore, the theory can't describe processes in which particles are produced or destroyed. Such interactions are observed experimentally, as when a proton and an antiproton annihilate into an electron, a positron, and a number of pions and photons.
(3) In electromagnetism (and also in gravitation) the principal object in the classical theory is a field, not a particle. It was therefore expected that the quantization process could be extended to fields.
In a sense, all these problems are the same problem. Special relativity implies the equivalence of mass and energy, hence the possibility that new particles can be produced when the particles coming into an interaction have total kinetic energy in excess of the total rest mass of the prospective products.
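For the record (a standard kinematic statement added here only as an illustration, not a result drawn from the text), the production threshold can be written invariantly: a final state of rest masses m_f is kinematically allowed only if

\[
\sqrt{s} \;\equiv\; \sqrt{\Bigl(\sum_i E_i\Bigr)^2 - \Bigl|\sum_i \mathbf{p}_i\, c\Bigr|^2} \;\ge\; \sum_f m_f c^2 ,
\]

where the sums on the left run over the incoming particles and the sum on the right over all final-state particles. In the centre-of-momentum frame, if the incoming particles survive the collision, this reduces to the statement above: their total kinetic energy must exceed the rest energy of the newly created particles.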
Probably the most radical advance in X-ray instrumentation in the past five years has been the development of the single photon calorimeter, in which X-rays are detected via the temperature pulses they induce in a small (< 1 mm³) absorber, cooled to a fraction of a degree kelvin.
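The size of such a temperature pulse follows directly from this principle. The short sketch below uses an assumed, order-of-magnitude heat capacity for a sub-mm³ absorber at a fraction of a kelvin (not a figure quoted in the text) to show the scaling.

```python
# Order-of-magnitude sketch of a microcalorimeter temperature pulse.
# The heat capacity below is an assumed, illustrative value for a sub-mm^3
# absorber at ~0.1 K; it is not a figure quoted in the text.

E_X = 5.9e3 * 1.602e-19      # 5.9 keV photon energy converted to joules
C_absorber = 1.0e-13         # assumed absorber heat capacity, J/K

delta_T = E_X / C_absorber   # temperature pulse dT = E / C
print(f"Temperature pulse: {delta_T * 1e3:.2f} mK")
```

With these assumed numbers the pulse is of order ten millikelvin, which is why the absorber must be both tiny and very cold for single photons to be measurable.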
The detection of individual 5.9 keV X-rays (fig. 6.1) was first demonstrated by groups at NASA's Goddard Space Flight Center (GSFC) and the University of Wisconsin in 1984, using a silicon microcalorimeter operating at 0.3 K (McCammon et al, 1984). This work was specifically directed towards the production of a high-efficiency, non-dispersive focal plane spectrometer with energy resolution comparable to that of a Bragg crystal. It can, however, still be seen as the culmination of several decades’ research in fields other than X-ray astronomy, originally in nuclear physics and latterly in infrared astronomy. Andersen (1986) and Coron et al (1985a) trace calorimeter development back as far as 1903, and the radioactivity studies of Pierre Curie. They record how, by the mid-1970s, the sensitivity (in detectable watts) of IR bolometers operating at liquid helium temperatures, where heat capacities are very low, had reached the point where Niinikoski and Udo (1974) could identify the extraneous spikes seen in the output of balloon-borne bolometers with local heating produced by the passage of cosmic rays. Niinikoski and Udo appear to have been the first to suggest that it might be possible to thermally detect single photons or particles, rather than continuous fluxes.
This chapter describes the uses – past, present and proposed – of three types of solid X-ray converter in soft X-ray astronomy: scintillators and phosphors, which work by the conversion of X-ray energy into visible light, and negative electron affinity detectors (NEADs), which rely on external photoemission from a surface activated to a state of negative electron affinity. Although the terms scintillator and phosphor are formally synonyms (Thewlis, 1962), we shall adopt the usage prevalent in the detector literature and distinguish between bulk, crystalline materials such as NaI(Tl) and CsI(Na) (scintillators) and thin, granular layers of, for example, the rare earth oxysulphides (phosphors). Phosphors are often identified by a commercial ‘P-number’. A partial list of such numbers is given by Gruner et al (1982).
The use of luminescent solids in nuclear physics has a long tradition. Rutherford's nuclear model of the atom (1909), for example, had as its experimental basis the observation, by eye, of α-particle induced light flashes (scintillations) on a zinc sulphide screen. The substitution of a photomultiplier tube for the human observer, which first occurred towards the end of the second world war, produced a sensitive electronic counter for γ-rays and particles, whose operation is described in texts such as those of Curran (1953), Birks (1964) and Knoll (1979).
The first use of a scintillation counter in X-ray astronomy was in a balloon-borne observation of the Crab Nebula in 1964 (Clark, 1965). As described in section 1.2, such balloon payloads were limited, because of atmospheric opacity, to the spectral band E > 20 keV, where source fluxes decrease rapidly with increasing X-ray energy.
Microchannel plates (MCPs) are compact electron multipliers of high gain and military descent which, in their two decades as ‘declassified’ technology (Ruggieri, 1972), have been used in a wider range of particle and photon detection problems than perhaps any other detector type.
A typical MCP consists of ∼ 10⁷ close-packed channels of common diameter D, formed by the drawing, etching and firing in hydrogen of a lead glass matrix. At present, the most common values of D are 10 or 12.5 μm, although pore sizes as small as 2 μm have begun appearing in some manufacturers' literature. Each of the channels can be made to act as an independent, continuous-dynode photomultiplier. Microchannel plates (or channel multiplier arrays or multichannel plates, as they are sometimes known) are therefore used, in X-ray astronomy as in many other fields, for distortionless imaging with very high spatial resolution.
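The quoted channel count follows from simple geometry. The sketch below assumes a plate diameter and channel pitch chosen only for illustration (they are not figures from the text) and estimates the channel count and open-area fraction for hexagonally close-packed pores.

```python
import math

# Back-of-the-envelope estimate of the channel count and open-area fraction
# of an MCP. Plate diameter and centre-to-centre pitch are assumed values.

plate_diameter = 50e-3       # assumed active plate diameter, 50 mm
pore_diameter  = 10e-6       # channel diameter D = 10 um (one common value)
pitch          = 12e-6       # assumed centre-to-centre channel spacing

plate_area   = math.pi * (plate_diameter / 2) ** 2
channel_area = math.pi * (pore_diameter / 2) ** 2

# Hexagonal close packing: one channel per cell of area (sqrt(3)/2) * pitch^2
cell_area = (math.sqrt(3) / 2) * pitch ** 2
n_channels = plate_area / cell_area
open_area_fraction = channel_area / cell_area

print(f"Approximate channel count: {n_channels:.1e}")
print(f"Open-area fraction: {open_area_fraction:.0%}")
```

With these assumptions the count comes out at roughly 10⁷ channels, consistent with the figure quoted above, with an open-area fraction of about 60 per cent.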
The idea of replacing the discrete dynodes (gain stages) of a conventional photomultiplier (Knoll, 1979) with a continuous resistive surface dates from 1930 (Ruggieri, 1972). It was only in the early 1960s, however, that the first channel electron multipliers (CEMs), consisting of 0.1-1 mm diameter glass or ceramic tubes internally coated with semiconducting metallic oxide layers, were constructed in the USSR (Oshchepkov et al. 1960) and United States (Goodrich and Wiley, 1962). Somewhat later, parallel-plate electron multipliers (PPEMs) were developed with rectangular apertures more suited to the exit slits of certain types of spectrometer (Spindt and Shoulders, 1965; Nilsson et al. 1970).
Gas proportional counters have been the ‘workhorses’ of X-ray astronomy throughout the subject's entire history. The roots of proportional counter development, however, go back much further, to the pioneering counters of Rutherford and Geiger (1908), to the first quantitative gas ionisation studies of J. J. Thomson (1899) and beyond.
The physics of gas-filled particle and X-ray detectors was very intensively researched during the four decades up to 1950. The classic texts of Curran and Craggs (1949), Rossi and Staub (1949) and Wilkinson (1950) describe a highly developed field at the zenith of its importance: before first NaI scintillators (in the 1950s) and then semiconductor detectors (in the early 1960s) replaced gas detectors in many areas of nuclear physics research.
Outside X-ray astronomy, proportional counter fortunes began to revive in the late 1960s when position-sensitive variants of the single-wire proportional counter (SWPC) were introduced as focal plane detectors for magnetic spectrographs (Ford, 1979). Multi-wire detectors (first developed, but not fully exploited, at Los Alamos as part of the Manhattan Project – Rossi and Staub, 1949) then rapidly evolved to provide an imaging capability in two dimensions over large areas. Here, the impetus was provided by the particle physicists (Charpak et al, 1968) who continue to dominate the field of gaseous detector development.
This chapter does not attempt to give a complete account of gaseous electronics, nor does it describe in detail related detector developments in fields such as particle physics (Fabjan and Fischer, 1980; Bartl and Neuhofer, 1983; Bartl et al, 1986).
The first cosmic X-ray source was discovered in June, 1962, during the flight of an Aerobee sounding rocket from the White Sands missile range in New Mexico (Giacconi et al, 1962). As the rocket spun about its axis, three small gas-filled detectors scanned across a powerful source of low-energy X-rays in the constellation of Scorpius, in the southern sky. Even though the position of the source (later designated Sco X-1) could only be determined to within an area of some hundred square degrees, cosmic X-ray astronomy had begun.
As usually recounted, the story of Sco X-1 and the birth of X-ray astronomy bears a not inconsiderable resemblance to the story of X-rays themselves. The element of serendipity seems all-important in both discoveries. Wilhelm Roentgen, in 1895, had been intent on measuring the aether waves emitted by a low-pressure gas discharge tube when, by chance, he discovered his new and penetrating radiation. In 1962, the expressed aim of the American Science and Engineering (AS&E)-MIT research group led by Riccardo Giacconi was to detect the X-ray emission, not of distant stars, but from the moon.
Detailed consideration undermines this neat parallel. There is in fact a clear evolutionary line linking X-ray astronomy and the pioneering solar studies carried out in the USA by the Naval Research Laboratory (NRL) group under Herbert Friedman. Friedman's solar X-ray observations had begun with the flight of a captured German V2 rocket in September, 1949.
This chapter describes the use in X-ray astronomy of semiconductor ionisation detectors as non-dispersive spectrometers of high energy resolution. Semiconductor detectors which operate on a calorimetric principle are described separately, in Chapter 6.
The early history of semiconductor radiation detectors is comprehensively described by McKenzie (1979). The first practical ‘solid state’ detectors – the term is usually taken to exclude scintillation counters (Chapter 5) – were small germanium surface barrier devices with gold electrodes (section 4.2.1), fabricated in the late 1950s. Such devices could be regarded as solid state analogues of the gas-filled ionisation chamber (Wilkinson, 1950), in that the primary ionisation produced in the dielectric and collected, without multiplication, by its internal electric field, was found to be proportional to the energy of an incident particle.
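The proportionality between collected ionisation and deposited energy is what makes these devices good spectrometers. The sketch below uses commonly quoted values for silicon (the pair-creation energy and Fano factor are illustrative assumptions, not figures given in the text) to show the statistical limit on energy resolution.

```python
import math

# Sketch of the ionisation statistics behind semiconductor energy resolution.
# w_Si and F_Si are commonly quoted values for silicon, used here as
# illustrative assumptions rather than figures from the text.

E_X  = 5.9e3      # incident X-ray energy, eV
w_Si = 3.6        # assumed mean energy per electron-hole pair in silicon, eV
F_Si = 0.12       # assumed Fano factor for silicon

n_pairs = E_X / w_Si                     # mean number of electron-hole pairs
sigma_n = math.sqrt(F_Si * n_pairs)      # sub-Poissonian fluctuation in that number
fwhm_eV = 2.355 * sigma_n * w_Si         # Fano-limited energy resolution (FWHM)

print(f"Mean pairs created: {n_pairs:.0f}")
print(f"Fano-limited FWHM at 5.9 keV: {fwhm_eV:.0f} eV")
```

Under these assumptions the statistical limit is of order 120 eV FWHM at 5.9 keV; in practice electronic noise adds to this figure.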
Over the past 30 years, improvements in material purity (Eichinger, 1987) and advances in microelectronic process technology have given rise to an array of detector types based on electron-hole pair creation in cooled silicon or germanium or in a number of ‘room temperature’ materials, of which the most developed is currently mercuric iodide. As in the case of gas-filled detectors (Chapter 2) much of the impetus for the new semiconductor detectors comes from the particle physics community (Kemmer and Lutz, 1987). Trends in modern particle physics include the development both of large (1 m²) silicon detector arrays (Borer et al. 1987) and of integrated detectors in which at least some of the signal processing is embodied ‘on chip’.
In X-ray astronomy, however, solid state detector research is still driven primarily by the desire for high spectral resolution.
While variable star observing is a specialized branch of observational astronomy, the basic procedures of patience and care that apply to all observing also work with variables.
Plan your program in advance, but be flexible, since the sky often offers surprises. Choose your variable carefully. Is the star likely to be visible through your telescope, or is it obviously too faint? At the other extreme, is your star so bright that observing it is a waste of your precious telescope time?
Telescope
Telescope size
This is more of a consideration than most observers realize. In a sense, each variable star has its own best combination of telescope and eyepiece. The general rule is to use only enough power and magnification to see the variable clearly, but not so much that the star is too bright to estimate easily. Ideally, the variable should be about two magnitudes brighter than the faintest star you can see with your telescope. If it is much fainter than that, you will have a problem perceiving the star, and if the variable is several magnitudes brighter, so many photons will enter your eye that its sensitivity to subtle magnitude variations will be affected.
At its minimum, a star might be fair game for most telescopes smaller than 30 cm (12 inches), but as the star brightens you could use a smaller telescope. (When discussing telescope size, I refer to the diameter of the mirror or objective lens.)
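As a rough worked example of the two-magnitude rule (the limiting-magnitude formula below is a common rule of thumb for dark skies, not one given in the text, and the aperture is an assumed example), you can estimate the faintest star a telescope will show and then aim for variables about two magnitudes brighter than that:

```python
import math

# Rule-of-thumb estimate of a telescope's visual limiting magnitude, and the
# brightness suggested by the "two magnitudes brighter" guideline.
# The 7.5 + 5*log10(aperture) formula is a common approximation assumed here.

def limiting_magnitude(aperture_cm: float) -> float:
    """Approximate faintest star visible under dark skies."""
    return 7.5 + 5 * math.log10(aperture_cm)

aperture_cm = 15.0                       # assumed telescope: 15 cm (6 inch) aperture
m_limit = limiting_magnitude(aperture_cm)
m_ideal = m_limit - 2.0                  # variable ~2 magnitudes brighter than the limit

print(f"Limiting magnitude (~{aperture_cm:.0f} cm aperture): {m_limit:.1f}")
print(f"Comfortable variable-star brightness: about magnitude {m_ideal:.1f}")
```

For the assumed 15 cm telescope this suggests a limit near magnitude 13.4 and comfortable estimates around magnitude 11; a brighter variable would call for a smaller instrument, as noted above.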