Imaging with very high resolution using multimirror telescopes
Recent years have seen very large aperture telescopes constructed by piecing together smaller mirrors, carefully mounted on a frame so that the individual images interfere constructively at the focus. The two multimirror Keck telescopes on Mauna Kea are constructed in this way, each from 36 hexagonal segments which together form a paraboloidal mirror about 10 m in diameter. The frame is very rigid, but since each segment weighs half a ton it still distorts significantly as the telescope is pointed, so the mirror positions have to be actively corrected to compensate for the small movements. Then, together with adaptive optics correction for atmospheric turbulence, diffraction-limited images are obtained since, for a small enough field of view, the off-axis aberrations of the paraboloidal mirror are insignificant. However, the maximum aperture which can be operated in this way is expected to be of the order of 100 m, the size of the Overwhelmingly Large Telescope (OWL) being studied by the European Southern Observatory. The resolution achievable by a pointable direct-imaging telescope is therefore limited to a few milliarcseconds at optical wavelengths.
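As a rough check on these numbers, the resolution limits quoted here follow from the Rayleigh criterion θ ≈ 1.22λ/D. A minimal sketch, assuming an observing wavelength of 550 nm (a value not given in the text):

```python
import math

def diffraction_limit_mas(wavelength_m, aperture_m):
    """Rayleigh criterion, theta = 1.22 * lambda / D, in milliarcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0e3  # radians -> milliarcseconds

WAVELENGTH = 550e-9  # assumed visible-light wavelength, m

for aperture in (10.0, 100.0):  # Keck-class and OWL-class diameters, m
    print(f"D = {aperture:5.0f} m : {diffraction_limit_mas(WAVELENGTH, aperture):6.2f} mas")
```

For a 10 m Keck-class aperture this gives about 14 mas, and for a 100 m OWL-class aperture about 1.4 mas, consistent with the few-milliarcsecond limit quoted above.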
Interferometry, using aperture-synthesis imaging, represents one way around this problem when angular resolution rather than light-gathering power is the dominant aim. Effective apertures of hundreds of meters have been achieved, but the number of subapertures is still quite modest. As we have seen in the preceding chapters, interferometers with even tens of apertures and optical delay lines become extremely expensive and complicated to operate.
We saw in the chapter on atmospheric turbulence that the real limitation to the resolution of a ground-based telescope is not the diameter of the telescope aperture but the atmosphere. As a result, a telescope of any diameter will rarely give an angular resolution in visible light better than 1 arcsec, which is equivalent to the diffraction limit of an aperture about 10 cm in diameter (the Fried parameter, r0, defined in Section 5.4.1). This limitation was considered so fundamental that large telescope mirrors were often not even polished to an accuracy which would give a better resolution than this. The ideas behind the various methods of astronomical interferometry are all directed at exceeding it.
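The equivalence quoted here amounts to treating the turbulent atmosphere as an aperture of diameter r0. A worked estimate, assuming λ = 550 nm and r0 = 10 cm:

```latex
\theta_{\mathrm{seeing}} \simeq \frac{\lambda}{r_0}
  = \frac{550 \times 10^{-9}\,\mathrm{m}}{0.1\,\mathrm{m}}
  = 5.5 \times 10^{-6}\,\mathrm{rad} \approx 1.1\,\mathrm{arcsec}
```

which is why a mirror of any diameter, left uncorrected, rarely does better than about 1 arcsec.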
The first idea was due to Fizeau (1868), who proposed masking the aperture of a large telescope with a screen containing two apertures, each of diameter less than r0 but separated by a distance considerably greater than this. The result would be to modulate the image with Young's fringes and, from the contrast of the fringes, to glean information about the source dimensions. A few years after the publication of Fizeau's idea, Stéphan (1874) tried it out experimentally with the 1-m telescope at Marseilles and concluded (correctly) that the fixed stars were too small for their structure to be resolved by this telescope. Michelson (1891) later developed the theory needed to make the idea quantitative and was the first to use Fizeau's technique successfully, measuring the diameters of the moons of Jupiter with the 12-inch refractor at the Lick Observatory.
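In modern terms, the quantitative relation Michelson supplied is that a uniform stellar disc of angular diameter θ, observed through two apertures separated by a baseline B, produces fringes of visibility V = |2J1(x)/x| with x = πθB/λ. A minimal sketch (the diameter, wavelength and baselines below are illustrative, not values from the text):

```python
import math
from scipy.special import j1  # Bessel function of the first kind, order 1

def disc_visibility(baseline_m, diameter_rad, wavelength_m):
    """Fringe visibility of a uniform disc: V = |2 J1(x) / x|, x = pi*theta*B/lambda."""
    x = math.pi * diameter_rad * baseline_m / wavelength_m
    return abs(2.0 * j1(x) / x)

MAS = math.pi / (180.0 * 3600.0e3)  # one milliarcsecond in radians
theta = 50.0 * MAS                  # illustrative stellar angular diameter
lam = 550e-9                        # assumed observing wavelength, m

for b in (0.5, 1.0, 2.0, 2.8):      # aperture separations, m
    print(f"B = {b:3.1f} m : V = {disc_visibility(b, theta, lam):.3f}")
```

The fringe contrast falls with increasing separation and first vanishes at x ≈ 3.83, i.e. at B ≈ 1.22λ/θ, so finding the baseline at which the fringes disappear measures the angular diameter directly.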
Galaxies and clusters of galaxies are complex systems, but the aim of the cosmologist is not to explain all their detailed features. Rather, it is to explain how large-scale structures formed in the expanding Universe, in the sense that, if δρ is the enhancement in density of some region over the average background density ρ, the density contrast δρ/ρ reached amplitude 1 from initial conditions which must have been remarkably isotropic and homogeneous. Once the initial perturbations have grown in amplitude to δρ/ρ ~ 1, their growth becomes non-linear and they evolve rapidly towards bound structures in which star formation and other astrophysical processes lead to the formation of galaxies and clusters of galaxies as we know them. The cosmologist's objectives are therefore twofold: to understand how density perturbations evolve in the expanding Universe, and to derive the initial conditions necessary for the formation of structure in the Universe.
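In the linear regime, the growth referred to here is governed by a standard perturbation equation for a pressure-free expanding medium; a sketch of the key result (textbook material, assumed rather than derived in the passage above):

```latex
\ddot{\delta} + 2\frac{\dot{a}}{a}\,\dot{\delta} = 4\pi G \bar{\rho}\,\delta ,
\qquad \delta \equiv \frac{\delta\rho}{\rho}
```

In the matter-dominated case, with scale factor a ∝ t^(2/3), the growing mode is δ ∝ t^(2/3) ∝ a ∝ (1 + z)^(-1): perturbations grow only algebraically, not exponentially, which is why the initial conditions are so tightly constrained.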
Galaxies, clusters of galaxies and other large-scale structures of our local Universe must have formed relatively late in the history of the Universe. The average density of matter in the Universe today corresponds to a density parameter Ω₀ ~ 0.3. The average densities of gravitationally bound systems, such as galaxies and clusters of galaxies, are much greater than this value, typically about 10⁶ and 1000 times the mean background density, respectively. Superclusters have mean densities a few times the background density.
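To attach rough numbers to these ratios, the background density can be estimated from the critical density ρ_crit = 3H₀²/8πG. A minimal sketch, assuming H₀ = 70 km s⁻¹ Mpc⁻¹ (a value not given in the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22       # metres per megaparsec
H0 = 70e3 / MPC      # assumed Hubble constant, s^-1

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)  # critical density, kg m^-3
rho_mean = 0.3 * rho_crit                     # Omega_0 ~ 0.3 background density

print(f"critical density      : {rho_crit:.2e} kg/m^3")
print(f"mean matter density   : {rho_mean:.2e} kg/m^3")
print(f"galaxy-like (~1e6 x)  : {1e6 * rho_mean:.2e} kg/m^3")
print(f"cluster-like (~1e3 x) : {1e3 * rho_mean:.2e} kg/m^3")
```

With these assumptions the mean matter density is of order 3 × 10⁻²⁷ kg m⁻³, so even a galaxy at ~10⁶ times this is still only ~10⁻²¹ kg m⁻³, a vivid reminder of how dilute even bound structures are.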
Somewhat surprisingly, Fraunhofer's great discoveries in astronomical spectroscopy were not followed up in any detail until 1863, almost 40 years later, when a number of independent investigators, Giovanni Donati (1826–1873) in Florence, Rutherfurd in New York, George Airy (1801–1892) at the Royal Greenwich Observatory, Huggins in London and Secchi in Rome, began the systematic study of the spectra of the stars and nebulae.
William Huggins – the founder of stellar astrophysics
William Huggins (1824–1910) was inspired to take up astronomical spectroscopy on reading Kirchhoff's great papers of 1861 to 1863 on the chemical composition of the solar atmosphere. In his words,
This news came to me like the coming upon a spring of water in a dry and thirsty land. Here, at last, presented itself the very order of work for which in an indefinite way I was looking – namely, to extend his novel methods of research upon the Sun to the other heavenly bodies.
Huggins was an inspired amateur astronomer who had no formal university training in the sciences, but from 1856 until his death in 1910 he supported himself by his private income and dedicated his efforts to the advance of astrophysics. Much of his early work was carried out in collaboration with William Miller (1817–1870), who was professor of chemistry at King's College London and an expert on spectral analysis, as well as being his friend and neighbour at Tulse Hill in London.
The early history of radio astronomy was recounted in Section 7.3; that story ended in the mid 1950s, by which time the Galactic and extragalactic nature of the discrete radio sources was established. From the point of view of astrophysics, the key realisation was that, in most cases, the radio emission was the synchrotron radiation of ultra-high-energy electrons gyrating in magnetic fields within the source regions. The synchrotron radiation process began to be applied to other astronomical objects in which there was evidence for high-energy astrophysical activity.
In 1942, Rudolph Minkowski showed that the emission of the supernova remnant known as the Crab Nebula consists of two components: the filaments, which form a network defining the outer boundary of the remnant, and diffuse continuum emission originating within the nebula, which contributes most of its optical luminosity (Minkowski, 1942). The continuum emission had a featureless spectrum and could not be accounted for by any form of thermal spectrum. In 1949, John Bolton and Gordon Stanley found that the flux density of the Crab Nebula at radio wavelengths was about 1000 times greater than in the optical waveband (Bolton and Stanley, 1949). To account for the continuum emission, Iosif Shklovsky (1916–1985) proposed in 1952 that both the radio and optical continuum emission were synchrotron radiation, the energies of the electrons radiating in the optical waveband being very much greater than those of the electrons radiating in the radio waveband (Shklovsky, 1953).
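Shklovsky's inference about the electron energies follows directly from the scaling of the synchrotron critical frequency; a sketch of the standard argument (assumed here, not quoted from the text):

```latex
\nu_c \propto \gamma^{2} B
\quad\Longrightarrow\quad
\frac{\gamma_{\mathrm{opt}}}{\gamma_{\mathrm{radio}}}
\simeq \left( \frac{\nu_{\mathrm{opt}}}{\nu_{\mathrm{radio}}} \right)^{1/2}
\sim \left( \frac{10^{15}\,\mathrm{Hz}}{10^{9}\,\mathrm{Hz}} \right)^{1/2}
= 10^{3}
```

For the same magnetic field strength, the electrons radiating in the optical waveband must therefore have Lorentz factors roughly a thousand times greater than those radiating at radio wavelengths.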
By 1945, many of the physical processes involved in the evolution of stars on the main sequence were beginning to be understood, but there remained an enormous amount of detailed work to be undertaken before a precise comparison between theory and observation could be made. To build detailed models of the stars, three types of data are required: the equation of state of the material of the star, accurate nuclear reaction rates, and the opacity of stellar material for the transfer of radiation. These quantities need to be known for the wide ranges of temperature and density encountered inside the stars. Then, the problems of radiation transfer through the body of the star and its surface layers have to be solved so that meaningful comparisons can be made between theory and observation. As a result, astrophysicists needed access to a very wide range of data from nuclear, atomic and molecular physics, which began to become available with the great expansion in the funding for the physical sciences after the Second World War.
Then, there was the need to develop models for the evolution of stars from one region of the Hertzsprung–Russell diagram to another. It was a daunting task, but there was light at the end of the tunnel with the development of high-speed digital computers in the 1950s and 1960s, which was to convert the study of the structure and evolution of the stars into a precise astrophysical science. Observations in the new wavebands brought important insights into many of the key phases of stellar evolution, using techniques which could not have been imagined by the pioneers of the first half of the twentieth century.
Evidence for strong evolutionary changes in the properties of extragalactic objects with cosmic epoch was first found in the 1950s and 1960s as a result of surveys of radio sources and quasars. An excess of faint sources was found in radio source and quasar surveys, as compared with the expectations of uniform world models. The inference was that there were many more of these classes of object at early cosmic epochs than at the present epoch. During the 1980s, as the first deep counts of galaxies became available, a large excess of blue galaxies at faint apparent magnitudes was discovered. These studies culminated in the remarkable observations of the Hubble Deep Field in 1995 and the Hubble Ultra-Deep Field in 2004 by the Hubble Space Telescope.
In the 1990s, the first deep surveys of the X-ray sky were carried out by the ROSAT X-ray observatory, and evidence for an excess of faint X-ray sources was found, similar in many ways to the evolution inferred from studies of extragalactic radio sources and quasars. In the thermal infrared wavebands, the IRAS survey, although not extending to as large redshifts as the surveys mentioned above, also provided evidence for an excess of faint sources, which appear to be evolving in a manner similar to the active galaxies. Then, in the last few years of the century, evidence was found for a large population of submillimetre or far-infrared galaxies at large redshifts.
The origin of the theory of stellar structure and evolution can be traced to the understanding of the first law of thermodynamics. As a result of the experimental ingenuity of Julius Mayer (1814–1878) and, particularly, of James Prescott Joule (1818–1889), and the deep theoretical insights of Rudolph Clausius (1822–1888) and William Thomson, later Lord Kelvin (1824–1907), the two laws of thermodynamics were established in the early 1850s. In popular terms, they can be stated as follows.
Energy is conserved when heat is taken into account.
The entropy of any isolated system can only increase.
Applying the first law to the stars, the source of energy could be attributed to the heat liberated when matter is accreted onto their surfaces. The kinetic energy of infall from infinity, which is equal to the gravitational binding energy of the material at the surface, is converted into heat when the matter hits the surface. A popular version of the theory involved meteoritic bombardment of stars as the means of providing the necessary energy release. This proposal contained, however, the serious flaw that the necessary flux of meteoroids would have perturbed the orbits of the inner planets and would also have resulted in quite an unacceptably high rate of meteoroid bombardment of the Earth.
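The scale of the flaw can be appreciated with a quick estimate: the infall energy per unit mass is GM/R, so the accretion rate needed to supply a star's luminosity follows immediately. A minimal sketch with solar values (my illustrative numbers, not the text's):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.96e8     # solar radius, m
L_SUN = 3.828e26   # solar luminosity, W
YEAR = 3.156e7     # seconds per year

energy_per_kg = G * M_SUN / R_SUN   # infall energy from infinity, J/kg
mdot = L_SUN / energy_per_kg        # accretion rate needed, kg/s

print(f"infall energy : {energy_per_kg:.2e} J/kg")
print(f"required rate : {mdot:.2e} kg/s = "
      f"{mdot * YEAR / M_SUN:.1e} solar masses per year")
```

The rate works out at roughly 3 × 10⁻⁸ solar masses per year; sustained over the ~10⁹-year timescales already suggested by geology, that amounts to tens of solar masses of infalling material, and it is the corresponding flux of meteoroids through the inner Solar System that would have perturbed the planetary orbits.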
Gravity is the one long-range force which acts upon all matter. Soon after Isaac Newton had completed the unification of terrestrial and celestial physics through his discovery of the inverse square law of gravity, he appreciated that the unique form of this law has important consequences for the large-scale distribution of matter in the Universe. In 1692–1693, the cosmological problem was addressed in a remarkable exchange of letters between Newton and the young clergyman Richard Bentley (1662–1742), later to become Master of Trinity College, Cambridge. The correspondence concerned the stability of a Universe uniformly filled with stars under Newton's law of gravity. The attractive nature of the force of gravity meant that matter tends to fall together, and Newton was well aware of this problem. His first solution was to suppose that the distribution of stars extends to infinity in all directions, so that the net gravitational attraction on any star in the uniform distribution is zero. As he wrote,
The fixt Stars, everywhere promiscuously dispers'd in the heavens, by their contrary attractions destroy their mutual actions.
Newton made star counts to test the hypothesis that the stars are uniformly distributed in space and found that the numbers increased more or less as expected with increasing apparent magnitude. The problem, which was fully understood by Newton and Bentley, was that a uniform distribution of stars is dynamically unstable.
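The expectation Newton's counts were testing can be stated quantitatively in modern terms: for stars of a single luminosity distributed uniformly in Euclidean space, the counts should grow by a fixed factor per magnitude. A sketch of the standard argument (anachronistic for Newton, but it is the relation involved):

```latex
N(<d) \propto d^{3}, \qquad F \propto d^{-2}, \qquad m = -2.5 \log_{10} F + \text{const}
\;\Longrightarrow\; N(<m) \propto 10^{0.6 m}
```

that is, a factor of about four for each magnitude interval.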
While the understanding of main-sequence stars proceeded apace through the 1920s and 1930s, there remained the problem of accounting for the red giant stars, which are very much more luminous than main-sequence stars at the same effective temperatures. Russell adopted the position that matter existed in different states in the dwarf and giant stars, what he termed ‘giant stuff’ and ‘dwarf stuff’. Atkinson assumed that different nuclear processes were responsible for the luminosities of the giant stars.
The stellar models of Eddington are homogeneous, and it was assumed that homogeneity was maintained, probably by large-scale meridional circulation driven by the internal rotation of the star. It was only in the early 1950s that a number of astrophysicists, Peter Sweet (1921–2005), Martin Schwarzschild, Ernst Öpik and Leon Mestel, showed that the mixing assumption was highly implausible.
The solution to the red giant problem was discovered in 1938 by the Estonian astrophysicist Ernst Öpik (1893–1985), then working at the University of Tartu (Öpik, 1938). Öpik realised that if the stars are not well mixed, it is inevitable that they become inhomogeneous. Within the central core of the star, nuclear burning of hydrogen into helium leads to the depletion of the nuclear fuel in the core. In Öpik's model it was assumed that the central core of the star was maintained in convective equilibrium, resulting in a uniform depletion of hydrogen in this region.
The Hubble sequence of galaxy types shown in Figure 5.5 gives some impression of the diversity of forms found among the galaxies. Hubble planned to publish an atlas of galaxies illustrating the different galaxy types but, although all the plates for this project were taken with the 60-inch and 100-inch telescopes by 1948, he died in 1953, before what became the Hubble Atlas of Galaxies was published. The project was completed by Allan Sandage, who was Hubble's last research assistant, and it was published in 1961 (Sandage, 1961b). The basic Hubble sequence was preserved, including the S0 galaxies, and the irregular galaxies were placed at the end of the sequence.
The morphological classification of large samples of galaxies was pursued by Antoinette (1921–1987) and Gérard de Vaucouleurs (1918–1995), who published a series of Reference Catalogues of Bright Galaxies, in which the Hubble classification was refined, the basic linear sequence being preserved (de Vaucouleurs et al., 1991). The distinction between the normal and barred spirals was maintained, but they showed that all intermediate types between pure barred spirals and normal spirals are also observed. What gave this morphological scheme physical significance was the fact that certain physical properties of galaxies are correlated with their position along the sequence.
The second part of our history concerns the understanding of the large-scale distribution of matter in the Universe. At the beginning of the period 1900 to 1939, little was known even about the structure of our own Galaxy; by the end of it, the Universe of galaxies was established, the system was known to be expanding and general relativity provided a theory capable of describing the distribution of matter in the Universe on the very largest scales.
‘Island universes’ and the cataloguing of the nebulae
The earliest cosmologies of the modern era were speculative conjectures. The ‘island universe’ model of René Descartes (1596–1650), described in The World of 1636, involved an interlocking jigsaw puzzle of solar systems. In 1750, Thomas Wright of Durham (1711–1786) published An Original Theory or New Hypothesis of the Universe, in which the Sun was one of many stars which orbit about the ‘Divine Centre’ of the star system. Immanuel Kant (1724–1804) in 1755 and Johann Lambert (1728–1777) in 1761 took these ideas further and developed the first hierarchical, or fractal, models of the Universe. Kant made the prescient suggestion that the flattening of these ‘island universes’ was due to their rotation. The problem with these early cosmologies was that they lacked observational validation, in particular because of the lack of information about the distances of astronomical objects.
Towards the end of the eighteenth century, William Herschel (1738–1822) was one of the first astronomers to attempt to define the distribution of stars in the Universe in some detail on the basis of careful observation. To determine the structure of the Milky Way, he counted the number of stars in different directions, assuming they all have the same intrinsic luminosities. In this way, he derived his famous picture for the structure of our Galaxy, consisting of a flattened disc of stars with diameter about five times its thickness, the Sun being located close to its centre (Figure 5.1) (Herschel, 1785).
The origin of this book was a request by Brian Pippard to contribute a survey of astrophysics and cosmology in the twentieth century to the three-volume work that he edited with Laurie Brown and the late Abraham Pais, Twentieth Century Physics (Bristol: Institute of Physics Publishing and New York: American Institute of Physics Press, 1995). This turned out to be a considerable undertaking, my first draft far exceeding the required page limit. By drastic editing, I reduced the text to about half its original length and the survey appeared in that form as Chapter 23 of the third volume.
I was reluctant to abandon all the important material which had to be excised from the published survey and was delighted that the Institute of Physics agreed to my approaching Cambridge University Press about publishing the full version. The Press were keen to take on the project, with some further expansion of the text and, in particular, with a number of explanatory supplements to chapters where a little simple mathematics can make the arguments more convincing for the enthusiast. I have also made liberal use of references to my other books, where I have already given treatments of topics covered in this book. The result has been a complete rethink of the whole project and an expansion of the text by a factor of five as compared with the original published version.