Why is the Earth generally hotter near the Equator than at the poles? Why is it generally hotter in summer than in winter, especially outside the tropics? Would this be true on other planets as well? How would the pattern change over time, as features of the planet's orbit vary? Would a very slowly rotating planet lose its atmosphere to condensation on the nightside? Would a planet whose rotation axis was steeply inclined relative to the normal to the plane of the orbit, or a planet in a highly elliptical orbit, have such an extreme seasonal cycle that it would be uninhabitable? The answers to these questions are to be found in the way the geographic and temporal pattern of illumination of the planet plays off against the thermal response time of the atmosphere, ocean, and solid surface of the planet. Generally speaking, in this section we seek to understand the features of a planet that determine the magnitude and pattern of geographic and seasonal variations in temperature.
Most of the discussion of temporal variability will focus on seasonal rather than diurnal variations, but many of the same considerations apply to both cycles, and so some remarks will be offered on the diurnal cycle as well. It should be kept in mind that the distinction between the diurnal and seasonal cycles is meaningful only for bodies such as the Earth, Mars, or Titan, whose rotation period is short compared with the period of orbit about the Sun.
When it comes to understanding the whys and wherefores of climate, there is an infinite amount one needs to know, but life affords only a finite time in which to learn it; the time available before one's fellowship runs out and a PhD thesis must be produced affords still less. Inevitably, the student who wishes to get launched on significant interdisciplinary problems must begin with a somewhat hazy sketch of the relevant physics, and fill in the gaps as time goes on. It is a lifelong process. This book is an attempt to provide the student with a sturdy scaffolding upon which a deeper understanding may be built later.
The climate system is made up of building blocks which in themselves are based on elementary physical principles, but which have surprising and profound collective behavior when allowed to interact on the planetary scale. In this sense, the “climate game” is rather like the game of Go, where interesting structure emerges from the interaction of simple rules on a big playing field, rather than complexity in the rules themselves. This book is intended to provide a rapid entrée into this fascinating universe of problems for the student who is already somewhat literate in physics and mathematics, but who has not had any previous experience with climate problems. The subject matter of each individual chapter could easily fill a textbook many times over, but even the abbreviated treatment given here provides enough core material for the student to begin treating original questions in the physics of climate.
Our objective is to understand the factors governing the climate of a planet. In this chapter we will be concerned with energy balance and planetary temperature. Certainly, there is more to climate than temperature, but equally certainly temperature is a major part of what is meant by “climate,” and greatly affects most of the other processes which come under that heading.
From the preceding chapter, we know that the temperature of a chunk of matter provides a measure of its energy content. Suppose that the planet receives energy at a certain rate. If uncompensated by loss, energy will accumulate and the temperature of some part of the planet will increase without bound. Now suppose that the planet loses energy at a rate that increases with temperature. Then, the temperature will increase until the rate of energy loss equals the rate of gain. It is this principle of energy balance that determines a planet's temperature. To quantify the functional dependence of the two rates, one must know the nature of both energy loss and energy gain.
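The balance principle above can be sketched numerically. As a minimal illustration, suppose the loss is blackbody emission $\sigma T^4$ and the gain is absorbed stellar flux spread over the sphere; the solar constant and albedo used below are illustrative Earth-like values, not taken from the text.

```python
# Equilibrium temperature from energy balance: the temperature rises
# until emitted flux (sigma * T^4) equals absorbed flux (S*(1-a)/4).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(S, albedo):
    """Temperature at which sigma*T^4 balances S*(1 - albedo)/4.
    The factor of 4 spreads the intercepted beam over the whole
    spherical surface of the planet."""
    absorbed = S * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

# Earth-like values (illustrative): S = 1360 W/m^2, albedo = 0.3
print(f"{equilibrium_temperature(1360.0, 0.3):.0f} K")
```

Because the loss increases monotonically with temperature, this balance point is unique and stable: a warmer planet loses more than it gains and cools back toward equilibrium.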
The most familiar source of energy warming a planet is the absorption of light from the planet's star. This is the dominant mechanism for rocky planets like Venus, Earth, and Mars. It is also possible for energy to be supplied to the surface by heat transport from the deep interior, fed by radioactive decay, tidal dissipation, or high-temperature material left over from the formation of the planet.
Our objective in this chapter is to treat the computation of a planet's energy loss by infrared emission in sufficient detail that the energy loss can be quantitatively linked to the actual concentration of specific greenhouse gases in the atmosphere. Unlike the simple model of the greenhouse effect described in the preceding chapter, the infrared radiation in a real atmosphere does not all come from a single level; rather, a bit of emission is contributed from each level (each having its own temperature), and a bit of this is absorbed at each intervening level of the atmosphere. The radiation comes out in all directions, and the rate of emission and absorption is strongly dependent on frequency. Dealing with all these complexities may seem daunting, but in fact it can all be boiled down to a conceptually simple set of equations which suffice for a vast range of problems in planetary climate.
It was shown in Chapter 3 that there is almost invariably an order of magnitude separation in wavelengths between the shortwave spectrum at which a planet receives stellar radiation and the longwave (generally infrared) spectrum at which energy is radiated to space. This is true throughout the Solar System, for cold bodies like Titan and hot bodies like Venus, as well as for bodies like Earth that are habitable for creatures like ourselves. The separation calls for distinct sets of approximations in dealing with the two kinds of radiation.
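Wien's displacement law makes this separation concrete: the wavelength of peak blackbody emission scales as $1/T$. The temperatures below are rough representative values chosen for illustration.

```python
# Wien's displacement law: lambda_peak = b / T. Since stellar
# photosphere temperatures exceed planetary temperatures by more than
# a factor of 10, the emission spectra are separated by more than an
# order of magnitude in wavelength.
WIEN_B = 2.898e-3  # Wien displacement constant, m K

def peak_wavelength_um(T):
    """Wavelength of peak blackbody emission, in micrometres."""
    return WIEN_B / T * 1e6

# Representative temperatures (illustrative values):
for name, T in [("Sun", 5800.0), ("Venus", 735.0),
                ("Earth", 255.0), ("Titan", 95.0)]:
    print(f"{name}: {peak_wavelength_um(T):.1f} um")
```

Even for Venus, the hottest of these bodies, the peak of planetary emission lies several microns away from the half-micron solar peak, which is why the shortwave and longwave problems can be treated with distinct approximations.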
Of all the standard clinical imaging techniques ultrasound is by far the least expensive and most portable (including handheld units smaller than a laptop computer), and can acquire continuous images at a real-time frame rate with few or no safety concerns. In addition to morphological and structural information, ultrasound can also measure blood flow in real-time, and produce detailed maps of blood velocity within a given vessel. Ultrasound finds very wide use in obstetrics and gynaecology, due to the lack of ionizing radiation or strong magnetic fields. The real-time nature of the imaging is also important in measuring parameters such as foetal heart function. Ultrasound is used in many cardiovascular applications, being able to detect mitral valve and septal insufficiencies. General imaging applications include liver cysts, aortic aneurysms, and obstructive atherosclerosis in the carotids. Ultrasound imaging is also used very often to guide the path and positioning of a needle in tissue biopsies.
Ultrasound is a mechanical wave, with a frequency for clinical use between 1 and 15 MHz. The speed of sound in tissue is ∼1540 m/s, and so the range of wavelengths of ultrasound in tissue is between ∼0.1 and 1.5 mm. The ultrasound waves are produced by a transducer, as shown in Figure 4.1, which typically has an array of up to 512 individual active sources. In the simplest image acquisition scheme, small subgroups of these elements are fired sequentially to produce parallel ultrasound beams.
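The wavelength range quoted above follows directly from $\lambda = c/f$, using the speed of sound and clinical frequency range given in the text:

```python
# Wavelength of ultrasound in tissue, lambda = c / f.
C_TISSUE = 1540.0  # speed of sound in soft tissue, m/s (from the text)

def wavelength_mm(freq_hz):
    """Ultrasound wavelength in tissue, in millimetres."""
    return C_TISSUE / freq_hz * 1e3

print(f"{wavelength_mm(1e6):.2f} mm at 1 MHz")
print(f"{wavelength_mm(15e6):.2f} mm at 15 MHz")
```

Higher frequencies give shorter wavelengths and hence finer spatial resolution, at the cost of greater attenuation with depth.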
Of the four major clinical imaging modalities, magnetic resonance imaging (MRI) is the one developed most recently. The first images were acquired in 1973 by Paul Lauterbur, who shared the 2003 Nobel Prize in Medicine with Peter Mansfield for their contributions to the invention and development of MRI. Over 10 million MRI scans are prescribed every year, and as of 2010 more than 4000 scanners were in operation.
MRI provides a spatial map of the hydrogen nuclei (water and lipid) in different tissues. The image intensity depends upon the number of protons in any spatial location, as well as physical properties of the tissue such as viscosity, stiffness and protein content. In comparison to other imaging modalities, the main advantages of MRI are: (i) no ionizing radiation is required, (ii) the images can be acquired in any two- or three-dimensional plane, (iii) there is excellent soft-tissue contrast, (iv) a spatial resolution of the order of 1 mm or less can be readily achieved, and (v) images are produced with negligible penetration effects. Pathologies in all parts of the body can be diagnosed, with neurological, cardiological, hepatic, nephrological and musculoskeletal applications all being widely used in the clinic. In addition to anatomical information, MR images can be made sensitive to blood flow (angiography) and blood perfusion, water diffusion, and localized functional brain activation.
In nuclear medicine scans a very small amount, typically nanogrammes, of radioactive material called a radiotracer is injected intravenously into the patient. The agent then accumulates in specific organs in the body. How much, how rapidly and where this uptake occurs are factors which can determine whether tissue is healthy or diseased and the presence of, for example, tumours. There are three different modalities under the general umbrella of nuclear medicine. The most basic, planar scintigraphy, images the distribution of radioactive material in a single two-dimensional image, analogous to a planar X-ray scan. These types of scan are mostly used for whole-body screening for tumours, particularly bone and metastatic tumours. The most common radiotracers are chemical complexes of technetium (99mTc), an element which emits mono-energetic γ-rays at 140 keV. Various chemical complexes of 99mTc have been designed in order to target different organs in the body. The second type of scan, single photon emission computed tomography (SPECT), produces a series of contiguous two-dimensional images of the distribution of the radiotracer using the same agents as planar scintigraphy. There is, therefore, a direct analogy between planar X-ray/CT and planar scintigraphy/SPECT. A SPECT scan is most commonly used for myocardial perfusion, the so-called ‘nuclear cardiac stress test’. The final method is positron emission tomography (PET). This involves injection of a different type of radiotracer, one which emits positrons (positively charged electrons).
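A practical consequence of using a radiotracer is that its activity decays exponentially between injection and imaging. As a minimal sketch, the 6-hour half-life of 99mTc used below is a standard published value, not one stated in the text:

```python
# Radiotracer activity decays exponentially with the isotope's
# half-life: A(t) = A0 * (1/2)^(t / t_half).
T_HALF_HOURS = 6.0  # approximate half-life of 99mTc (assumed value)

def activity(a0, t_hours):
    """Activity remaining after t_hours, given initial activity a0."""
    return a0 * 0.5 ** (t_hours / T_HALF_HOURS)

# After two half-lives only a quarter of the activity remains:
print(activity(100.0, 12.0))  # 25.0
```

This short half-life is part of what makes 99mTc attractive clinically: enough signal survives the scan, but the patient's radiation dose falls off quickly afterwards.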
X-ray planar radiography is one of the mainstays of a radiology department, providing a first ‘screening’ for both acute injuries and suspected chronic diseases. Planar radiography is widely used to assess the degree of bone fracture in an acute injury, the presence of masses in lung cancer/emphysema and other airway pathologies, the presence of kidney stones, and diseases of the gastrointestinal (GI) tract. Depending upon the results of an X-ray scan, the patient may be referred for a full three-dimensional X-ray computed tomography (CT) scan for more detailed diagnosis.
The basis of both planar radiography and CT is the differential absorption of X-rays by various tissues. For example, bone and small calcifications absorb X-rays much more effectively than soft tissue. X-rays generated from a source are directed towards the patient, as shown in Figure 2.1(a). X-rays which pass through the patient are detected using a solid-state flat panel detector which is placed just below the patient. The detected X-ray energy is first converted into light, then into a voltage and finally is digitized. The digital image represents a two-dimensional projection of the tissues lying between the X-ray source and the detector. In addition to being absorbed, X-rays can also be scattered as they pass through the body, and this gives rise to a background signal which reduces the image contrast. Therefore, an ‘anti-scatter grid’, shown in Figure 2.1(b), is used to ensure that only X-rays that pass directly through the body from source-to-detector are recorded.
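The differential absorption described above is governed by the Beer-Lambert law: each detected ray records $I/I_0 = \exp(-\sum_i \mu_i \,\Delta x_i)$ along its path. The attenuation coefficients below are illustrative assumptions, not values from the text.

```python
import math

# A planar radiograph records line integrals of the X-ray linear
# attenuation coefficient along each source-to-detector ray.
MU = {"soft tissue": 0.2, "bone": 0.5}  # cm^-1 (illustrative values)

def transmitted_fraction(path):
    """path: list of (tissue, thickness_cm) segments along one ray.
    Returns I/I0 = exp(-sum(mu * dx))  (Beer-Lambert law)."""
    return math.exp(-sum(MU[t] * dx for t, dx in path))

# A ray through 10 cm of soft tissue vs. one crossing 2 cm of bone:
no_bone = transmitted_fraction([("soft tissue", 10.0)])
with_bone = transmitted_fraction([("soft tissue", 8.0), ("bone", 2.0)])
print(no_bone, with_bone)  # the bone path is attenuated more
```

The difference between the two transmitted intensities is precisely the image contrast between bone and surrounding soft tissue; scattered X-rays add a uniform background that erodes this difference, which is what the anti-scatter grid suppresses.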
A clinician making a diagnosis based on medical images looks for a number of different types of indication. These could be changes in shape, for example enlargement or shrinkage of a particular structure, changes in image intensity within that structure compared to normal tissue and/or the appearance of features such as lesions which are normally not seen. A full diagnosis may be based upon information from several different imaging modalities, which can be correlative or additive in terms of their information content.
Every year there are significant engineering advances which lead to improvements in the instrumentation in each of the medical imaging modalities covered in this book. One must be able to assess in a quantitative manner the improvements that are made by such designs. These quantitative measures should also be directly related to the parameters which are important to a clinician for diagnosis. The three most important of these criteria are the spatial resolution, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). For example, Figure 1.1(a) shows a magnetic resonance image with two very small white-matter lesions indicated by the arrows. The spatial resolution in this image is high enough to be able to detect and resolve the two lesions. If the spatial resolution were four times worse, as shown in Figure 1.1(b), then only the larger of the two lesions would be visible. If the image SNR were four times lower, as illustrated in Figure 1.1(c), then only the brighter of the two lesions would be, barely, visible.
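The CNR can be sketched as the difference in mean intensity between a lesion and its surroundings, divided by the noise standard deviation. This is one common definition; the pixel values below are invented for illustration.

```python
import statistics

# Contrast-to-noise ratio: how far apart two regions' mean intensities
# are, measured in units of the image noise.
def cnr(region_a, region_b, background):
    """CNR = |mean(a) - mean(b)| / std(background noise patch)."""
    noise = statistics.pstdev(background)
    return abs(statistics.mean(region_a) - statistics.mean(region_b)) / noise

lesion = [120, 118, 122, 121]          # bright lesion pixels (made up)
tissue = [100, 101, 99, 100]           # surrounding tissue (made up)
noise_patch = [100, 102, 98, 101, 99]  # uniform region for noise estimate

print(f"CNR = {cnr(lesion, tissue, noise_patch):.1f}")
```

This makes the trade-off in Figure 1.1 quantitative: halving the noise doubles the CNR of a lesion even though its raw contrast is unchanged.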
Ariel Lipson, Imperial College of Science, Technology and Medicine, London,Stephen G. Lipson, Technion - Israel Institute of Technology, Haifa,Henry Lipson, University of Manchester Institute of Science and Technology
Optics is the ideal subject for lecture demonstrations. Not only is the output of an optical experiment usually visible (and today, with the aid of closed circuit video, can be projected for the benefit of large audiences), but often the type of idea being put across can be made clear pictorially, without measurement and analysis being required. Recently, several institutes have cashed in on this, and offer for sale video films of optical experiments carried out under ideal conditions, done with equipment considerably better than that available to the average lecturer. Although such films have some place in the lecture room, we firmly believe that students learn far more from seeing real experiments carried out by a live lecturer, with whom they can interact personally, and from whom they can sense the difficulty and limitations of what may otherwise seem to be trivial experiments. Even the lecturer's failure in a demonstration, followed by advice and help from the audience which result in ultimate success, is bound to imprint on the student's memory far more than any video film can do.
The purpose of this appendix is to pass on a few ideas that, over the years, we have found particularly valuable in demonstrating the material covered in this book, and that can be implemented with relatively cheap and easily available equipment. Need we say that we also enjoyed developing and performing these experiments?
Most optical systems are used to create images: eyes, cameras, microscopes, telescopes, for example. These image-forming instruments use lenses or mirrors whose properties, in terms of geometrical optics, have already been discussed in Chapter 3. But geometrical optics gives us no idea of any limitations of the capabilities of such instruments and indeed, until the work of Ernst Abbe in 1873, microscopists thought that spatial resolution was only limited by their expertise in grinding and polishing lenses. Abbe showed that the basic scale is the wavelength of light, which now seems obvious. The relationship between geometrical and physical optics is like that between classical and quantum (wave) mechanics; although classical mechanics predicts no basic limitation to measurement accuracy, it arises in quantum mechanics in the form of the Heisenberg uncertainty principle.
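Abbe's result can be made quantitative with the standard diffraction-limit formula $d = \lambda / (2\,\mathrm{NA})$ for a microscope objective of numerical aperture NA; the particular wavelength and apertures below are illustrative choices, not values from the text.

```python
# Abbe diffraction limit: the smallest resolvable separation scales
# with the wavelength of light, not with the quality of the polish.
def abbe_limit_um(wavelength_um, numerical_aperture):
    """Minimum resolvable separation d = lambda / (2 * NA)."""
    return wavelength_um / (2.0 * numerical_aperture)

# Green light (0.55 um), dry objective (NA ~ 0.95) vs. oil
# immersion (NA ~ 1.4) -- both illustrative values:
print(f"dry: {abbe_limit_um(0.55, 0.95):.2f} um")
print(f"oil: {abbe_limit_um(0.55, 1.4):.2f} um")
```

However skilfully the lenses are ground, the resolution cannot be pushed much below a quarter of a micron with visible light, which is exactly the limitation geometrical optics fails to predict.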
This chapter describes the way in which physical optics is used to describe image formation by a single lens (and by extension, any optical system). The theory is based on Fraunhofer diffraction (Chapter 8) and coherence (Chapter 11) and leads naturally both to an understanding of the limits to image quality and to ways of extending them. We shall learn:
how Abbe described optical imaging in terms of wave interference;
that imaging can be formulated as a double process of diffraction;
what are the basic limits to spatial resolution;
how microscopes are constructed to achieve these limits;
Why did it take so long for the wave theory of light to be accepted, from its proposal by Huygens in about 1660 to the conclusive demonstrations by Young and Fresnel in 1803–12? In retrospect, it may be that Huygens did not take into account the wavelength; as a result the phenomenon of interference, particularly destructive interference, was missing. Only when Huygens' construction was analyzed in quantitative detail by Young and Fresnel did interference fringes and other wavelength-dependent features appear, and when these were confirmed experimentally the wave theory became generally accepted. It was because the wavelength, as measured by Young, was so much smaller than the size of everyday objects that special experiments had to be devised in order to see the effects of the waves; these are called ‘diffraction’ or ‘interference’ experiments and will be the subject of this chapter. Even so, some everyday objects, such as the drops of water that condense on a car window or the weave of an umbrella, do have dimensions commensurate with the wavelength of light, and the way they diffract light from a distant street light is clearly visible to the unaided eye (Fig. 7.1).
The distinction between the terms diffraction and interference is somewhat fuzzy. We try to use the term diffraction as a general term for all interactions between a wave and an obstacle, with interference as the case where several separable waves are superimposed.