In several earlier examples it has been shown how one can give the effect of one long exposure by stacking a number of sub-exposures. Most telescope mounts can track sufficiently well to allow exposures of up to 60 seconds without elongating the star images, so why bother to auto-guide? The fundamental reason is that longer exposures have a greater signal-to-noise ratio than short ones, so, in principle, one 60-minute exposure would be better than a stack of sixty 1-minute exposures. Stacking a number of images does increase the signal-to-noise ratio, but not quite enough to compensate. The maths relating to this is somewhat complex and will not be covered here, but it turns out that to get the equivalent of one 60-minute exposure would require perhaps eighty 1-minute exposures to be stacked.
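A back-of-the-envelope calculation shows where a figure like 'eighty 1-minute exposures' comes from. The Python sketch below assumes a simple noise model (Poisson shot noise plus a fixed read noise added once per frame); the signal rate and read-noise values are purely illustrative, not measurements from any particular camera:

```python
import math

def snr(signal_rate, t_total, n_frames, read_noise):
    """Signal-to-noise ratio of n_frames sub-exposures totalling t_total
    seconds, assuming Poisson (shot) noise plus per-frame read noise."""
    signal = signal_rate * t_total          # total electrons collected
    shot_var = signal                       # Poisson variance equals the signal
    read_var = n_frames * read_noise ** 2   # read noise is added once per frame
    return signal / math.sqrt(shot_var + read_var)

# Illustrative numbers only: 5 electrons/s from the target, 10 e- read noise.
one_long = snr(5.0, 3600, 1, 10.0)       # one 60-minute exposure
sixty_short = snr(5.0, 3600, 60, 10.0)   # sixty 1-minute exposures stacked
eighty_short = snr(5.0, 4800, 80, 10.0)  # eighty 1-minute exposures stacked
print(one_long > sixty_short)            # the single long exposure wins...
print(abs(eighty_short - one_long) < 1)  # ...and ~80 short frames match it
```

With these (hypothetical) numbers the sixty stacked frames fall short of the single long exposure, while roughly eighty frames make up the difference, in line with the figure quoted above.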
But it would be totally stupid to attempt a single 60-minute exposure: firstly, any bright stars in the field would be grossly over-exposed, causing highly unattractive ‘blooming’ of their images; secondly, the chance of a single 60-minute exposure not having a satellite or aircraft trail is quite low; and thirdly, for best results, you are going to have to take a number of 60-minute dark frames. As a result, astro-imagers have to compromise. An optimum strategy might well be to take eight or nine 10-minute exposures to give a comparable result, but care must be taken not to over-expose an important part of the image, for example, the core of M31 discussed in the preceding chapter.
Although I have been a radio astronomer all my working life, I have also greatly enjoyed observing the heavens. At the age of 12, I first observed the craters on the Moon and the moons of Jupiter with a simple telescope made from cardboard tubes and lenses given to me by my optician. I also made crystal and valve radios, and my friends and I set up our own telephone network across our village using former army telephones. Both of these activities were to have a major bearing on my later life.
As I write, I have my father’s thin, red-bound copy of Fred Hoyle’s book The Nature of the Universe on the desk beside me. It was this book that inspired me to become an astronomer.
I was able to study a little astronomy at Oxford but, continuing my interest in radios, was also in the signals unit of the Officers’ Training Corps. As I was revising for my finals I spotted an advertisement for a new course in radio astronomy at the Jodrell Bank Observatory. Because I was interested in both astronomy and radios this seemed a good idea and I began to study there in 1965.
Imaging is now becoming a major branch of the hobby, and a good feature of lunar and planetary imaging is that it can be done under light-polluted skies. It does not have to be expensive, as DSLRs and even iPhones can be used to image the Moon and, perhaps surprisingly, DSLRs can even be used to image the planets. Webcams are now routinely used for planetary imaging and can also be used to produce stunning lunar images. All of these possibilities are fully covered in this chapter.
Image Processing Programs
Virtually all images can be improved with the use of some post-processing in an image manipulation program such as Adobe Photoshop Elements or Adobe Photoshop and a freeware program such as GIMP (GNU Image Manipulation Program).
It has to be said that the majority of astro-imagers use Adobe Photoshop, and packages of ‘actions’ specifically designed to help process astro-images can be used with it. It is expensive, but bona fide students can purchase it for significantly less. Might it be worth signing on at a local college for an evening course in Photoshop use? Adobe Photoshop Elements is considerably cheaper and can carry out some of the image manipulation functions that are used, but it works only in 8-bit mode (whereas Photoshop can handle 16-bit images) and it does not have the curves function, which can be very useful for ‘stretching’ the brightness of an image to bring out detail in, for example, the spiral arms of a galaxy, which are far less bright than the central core.
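To illustrate the effect that a curves-style stretch achieves, here is a minimal Python sketch of a power-law stretch; the tiny toy ‘galaxy’ array and the exponent 0.4 are purely illustrative choices, not a recipe from any particular package:

```python
import numpy as np

def stretch(image, power=0.4):
    """Simple 'curves'-style brightness stretch: normalise to the 0..1
    range and apply a power law, which lifts faint regions (e.g. spiral
    arms) far more than the already-bright core."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min())  # normalise to 0..1
    return img ** power                                # power < 1 brightens shadows

# A toy 'galaxy': faint arms (0.1) around a bright core (1.0).
toy = np.array([[0.0, 0.1, 0.0],
                [0.1, 1.0, 0.1],
                [0.0, 0.1, 0.0]])
out = stretch(toy)
print(out[0, 1], out[1, 1])  # arms lifted from 0.1 to ~0.4; core stays at 1.0
```

The faint pixels are brightened by a factor of about four while the core is untouched, which is exactly the behaviour one wants when teasing out spiral arms.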
This is a wonderful branch of the hobby and one that can be pursued even in heavily light-polluted skies. What does one need? Well, of course, any telescope will do but, as has been discussed earlier, telescopes which have an inherently high contrast are best. Many of the planetary surface features are of low contrast, and for viewing the Moon, which will be very bright both inside and, at higher magnifications, outside the field of view, the overall contrast of the telescope becomes quite important. Refractors are best for both overall contrast and micro-contrast but are very expensive in larger apertures. If a reflecting telescope is to be used for lunar observations, mirrors with high-reflectivity coatings will prove significantly better, and these will also improve planetary observations − but to a lesser extent, as there is far less light entering the telescope and hence less that can be scattered by the mirror surfaces. Maksutov-Newtonians are a very close second to refractors in terms of their micro-contrast and greater affordability in larger apertures. Newtonians with focal ratios of f6 and above and Maksutovs come next, and even Schmidt-Cassegrains, if well collimated, can give very worthwhile views. So this is not something to get hung up about. The best telescope for observing is the one that you have!
As the angular sizes of the planets are small, high magnifications are needed: up to ×200 on a reasonable night when the seeing is good. Such magnifications would also be used for exploring lunar features. Higher magnifications can be used on nights of exceptional seeing. One useful accessory is a zoom eyepiece. This can help you find the optimum magnification to suit your telescope, eye and the seeing. Reduce the focal length until the image quality begins to fall off and then choose the nearest focal length eyepiece in your collection. Simple eyepieces, such as Plössls and orthoscopics, contain less glass and fewer air/glass interfaces than wide-field types, so they can be very good for lunar and planetary observing. It is probably best not to use eyepieces with too short a focal length, as their eye relief can be very small; instead, use a longer-focal-length eyepiece with a ×2 Barlow lens. Using a binoviewer can also add something to the visual experience.
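The arithmetic behind the Barlow advice is simple: magnification is the telescope focal length divided by the eyepiece focal length, and a Barlow multiplies the effective focal length. The sketch below uses a hypothetical 2000 mm focal-length telescope as an example:

```python
def magnification(scope_focal_mm, eyepiece_focal_mm, barlow=1.0):
    """Magnification = telescope focal length / eyepiece focal length,
    with an optional Barlow multiplying the effective focal length."""
    return scope_focal_mm * barlow / eyepiece_focal_mm

# Hypothetical 2000 mm focal-length telescope:
print(magnification(2000, 10))            # 10 mm eyepiece -> 200.0 (x200)
print(magnification(2000, 20, barlow=2))  # 20 mm + x2 Barlow -> 200.0 again,
                                          # but with the 20 mm's longer eye relief
```

Both combinations give ×200, but the second keeps the comfortable eye relief of the 20 mm eyepiece, which is the point made above.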
By definition, a discrete random medium (DRM) is a scattering object in the form of an imaginary volume V populated by a large number N of particles in such a way that the spatial distribution of the particles throughout the volume is statistically uniform or quasi-uniform. Over time, particle positions and states change randomly, thereby resulting in random changes of the state ψ of the entire object (Section 10.4). Classical examples of a DRM are clouds and particle suspensions (Plates 1.1b–1.1d). In many cases a particulate surface (Plates 1.1e and 1.1f) can also be modeled as a DRM, since even minute changes of the source-of-light → object → detector configuration during the measurement are equivalent to multi-wavelength shifts in particle positions and, in essence, result in a stochastic scattering object. The volume packing density of a DRM can vary from almost zero for a cloud to more than 50% for a particulate surface.
Given their specific morphological traits and ubiquitous presence, scattering objects in the form of a DRM deserve a detailed study. As always, the desirable way to model electromagnetic scattering by an ergodic DRM is to solve the MMEs numerically for a representative set of realizable states ψ of the object and then average the relevant optical observables or energy-budget characteristics using an appropriate probability density function ρ(ψ) (Section 10.4).
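The averaging procedure just described can be sketched in a few lines of Python. Everything below is a toy stand-in: the states ψ are drawn from a uniform probability density ρ(ψ), and the ‘observable’ is a fictitious function chosen so that the exact ensemble average (0.5) is known in advance:

```python
import random

def ensemble_average(observable, sample_state, n_samples=10000):
    """Monte Carlo estimate of <O> = integral of O(psi) rho(psi) d(psi):
    draw object states psi from rho and average the observable over them."""
    total = 0.0
    for _ in range(n_samples):
        total += observable(sample_state())
    return total / n_samples

random.seed(1)

def sample_state():
    """Toy DRM state psi: 100 particle coordinates, uniform in a unit box."""
    return [random.uniform(0.0, 1.0) for _ in range(100)]

def observable(psi):
    """Fictitious observable with known exact ensemble average 0.5."""
    return sum(psi) / len(psi)

print(ensemble_average(observable, sample_state))  # close to 0.5
```

In the real problem, each call to `observable` would be a full numerical solution of the MMEs for one realization of the object, which is what makes this averaging so computationally demanding.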
Measurements of electromagnetic energy flow are an integral part of solving various energy-budget and optical-characterization problems. For example, the physical state of a cloud of water droplets or ice crystals in the terrestrial atmosphere can be affected by an imbalance between the incoming and outgoing electromagnetic energy, while measurements of specific manifestations of electromagnetic energy flow with a suitable device can potentially be analyzed to infer useful information about the cloud. Conceptually similar problems are encountered in many other areas of science and engineering. It is therefore very important to understand clearly what specific measurement is afforded by an optical instrument and how to model this measurement theoretically.
Let us recall, for example, the energy-budget problem for a macroscopic volume element of an idealized liquid-water cloud discussed in Section 1.4. Suppose that we have at our disposal a Poynting-meter, i.e., a device that can determine both the direction and the absolute value of the time-averaged local Poynting vector. Then measuring ⟨S(r, t)⟩ at a sufficiently representative number of points densely distributed over the boundary ΔS would enable one to evaluate the integral in Eq. (1.12) numerically and thereby quantify the degree of electromagnetic energy imbalance of the volume element ΔV.
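Such a numerical evaluation is easy to sketch. The Python code below integrates the normal component of ⟨S⟩ over a spherical boundary sampled on a latitude–longitude grid; the isotropic point-source field used to check it is a toy assumption (its closed-surface integral must recover the source power P for any radius), not a model of a cloud:

```python
import math

def net_outflow(S, radius, n_theta=200, n_phi=400):
    """Numerically evaluate the closed-surface integral of <S>.n over a
    sphere of the given radius, i.e. the net electromagnetic power leaving
    the enclosed volume, as a gridded 'Poynting-meter' would."""
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * math.pi / n_theta
        for j in range(n_phi):
            phi = (j + 0.5) * 2 * math.pi / n_phi
            n = (math.sin(theta) * math.cos(phi),   # outward unit normal
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            r = tuple(radius * c for c in n)
            dA = radius**2 * math.sin(theta) * (math.pi / n_theta) * (2 * math.pi / n_phi)
            Sx, Sy, Sz = S(r)
            total += (Sx * n[0] + Sy * n[1] + Sz * n[2]) * dA
    return total

# Toy check: an isotropic point source of power P at the origin has
# <S> = P r_hat / (4 pi r^2); the surface integral must recover P.
P = 2.5
def S_point(r):
    r2 = r[0]**2 + r[1]**2 + r[2]**2
    scale = P / (4 * math.pi * r2 * math.sqrt(r2))
    return (scale * r[0], scale * r[1], scale * r[2])

print(net_outflow(S_point, radius=3.0))  # ~2.5, independent of the radius
```

A positive result means net outflow (the volume element is losing electromagnetic energy); zero means balance, exactly as in Eq. (1.12).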
Unfortunately, none of the existing photometers can, strictly speaking, be considered a Poynting-meter.
The definition of a purely monochromatic electromagnetic field given in Section 2.3 implies that the time dependence of the complex vectors E(r, t) and H(r, t) is fully described by the complex-exponential factor exp(−iωt) with a fixed angular frequency ω. This can be a good model for beams generated by certain types of laser, but not for the majority of natural and artificial electromagnetic fields. In reality, the electromagnetic field is typically polychromatic, i.e., is a superposition of a (possibly very large) number of monochromatic fields with different angular frequencies distributed over a given range [ωmin, ωmax]. Furthermore, in many cases the amplitudes of the complex electric and magnetic fields representing the component with an angular frequency ω are not constant but rather fluctuate in time, albeit much more slowly than the factor exp(−iωt). Then the resulting polychromatic field is said to consist of quasi-monochromatic components. The range of angular frequencies [ωmin, ωmax] of monochromatic or quasi-monochromatic components can be relatively narrow for some artificial sources of light. However, it can also be very wide, the solar radiation and the light produced by incandescent lamps being prime examples.
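The superposition just described can be made concrete with a short Python sketch for a scalar field; the carrier frequency, the number of components, and the 0.1% frequency spread are arbitrary illustrative choices:

```python
import cmath
import random

def field(t, components):
    """Polychromatic scalar field: superposition of monochromatic
    components a_k * exp(-i * w_k * t)."""
    return sum(a * cmath.exp(-1j * w * t) for w, a in components)

random.seed(0)
w0 = 1.0  # carrier angular frequency (arbitrary units)
# Twenty components with frequencies within 0.1% of w0 and random phases:
comps = [(w0 * (1 + 1e-3 * random.uniform(-1, 1)),
          cmath.exp(1j * random.uniform(0, 2 * cmath.pi)))
         for _ in range(20)]

# The common carrier phase drops out of |E|, so the intensity evolves only
# on the slow time scale ~1/(frequency spread), i.e. ~1000 carrier cycles:
i0 = abs(field(0.0, comps)) ** 2
i1 = abs(field(1.0, comps)) ** 2
print(abs(i0 - i1) < 1.0)  # essentially unchanged after one time unit
```

Because the component frequencies differ from ω0 by at most 0.1%, the envelope of the total field barely changes over one carrier period, which is precisely what makes the field quasi-monochromatic.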
Given the ubiquity of polychromatic electromagnetic fields in natural and artificial environments, it is essential to analyze how the results of Chapters 7 and 8 can be generalized to account for a mix of different angular frequencies and/or random fluctuations of the amplitudes of the constituent complex fields.
The diagram in Fig. 22.1 provides a schematic summary of this textbook and serves to classify the place of the microphysical theories of radiative transfer and WL within the broader context of Maxwell's electromagnetics. Although we have been using the adjective “microphysical” in order to emphasize the back-traceability of both theories to the MMEs, it can also be said that these theories have a mesoscopic origin. Indeed, the term “mesoscopic physics” refers to a size regime that is intermediate between the microscopic and macroscopic and is characteristic of a region where a large number of particles can interact in a correlated fashion. The direct computer solutions of the Maxwell equations described in Chapter 18 demonstrate indeed how the “macroscopic” regime of radiative transfer and WL emerges from the “microscopic” particle-level regime of Maxwell's electromagnetics upon averaging over random realizations of a multi-particle group. Extensive discussions of mesoscopic optical phenomena can be found in the monographs by Sheng (2006) and Akkermans and Montambaux (2007).
Besides being a one-page summary of the book, Fig. 22.1 also helps identify problems that still await solution. First of all, by using the frequency-domain MMEs as the point of departure, we have completely excluded from consideration such phenomena as emission of electromagnetic waves and frequency redistribution, as well as situations involving pulsed illumination.
The phenomena of scattering and absorption of light and other electromagnetic radiation by small particles and particle groups are central to a great variety of science and engineering fields. Owing to a large body of research, the discipline of studying these phenomena has recently undergone profound and paradigm-shifting developments. Among the most important advances are the following:
Dramatic improvements in numerical solvers of the Maxwell equations coupled with the ever-growing computer capability have enabled direct, numerically exact modeling of electromagnetic scattering by particles and particle groups of unprecedented morphological complexity.
The rigorous physical basis of monochromatic and polychromatic scattering by random particles and random particle groups has been established.
Owing to the development of a rigorous microphysical approach, the centuries-old disciplines of directional photometry and radiative transfer have become legitimate branches of physical optics.
Direct computer solutions of the Maxwell equations have confirmed the mesoscopic origin of radiative transfer and weak localization of electromagnetic waves (also known as coherent backscattering) in sparse particulate media.
The main purpose of this textbook is to provide a self-contained and accessible summary of these developments in the framework of a thorough introduction to the fundamental physical and mathematical principles of the subject. Particular attention is paid to key (and often overlooked) aspects, such as time and ensemble averaging at different scales, ergodicity of stochastic scattering objects, and the physical nature of measurements afforded by actual directional photometers and photopolarimeters.
Equation (4.24) expresses the scattered (and thus the total) monochromatic field in terms of the incident monochromatic field (we remind the reader that the incident field is the total field in the absence of the scattering object). However, neither field can be measured directly with conventional optical instruments, which obviously calls for the derivation of the corresponding relationships between observable characteristics of the total and incident fields. In view of the discussion in Chapter 8, all such relationships should be particular cases of a general expression of the PST of the total field in the presence of the scattering object in terms of that of the incident field. This general expression will be derived below.
There are two other important practical issues to consider. Indeed, our previous discussion of electromagnetic scattering has been based on the assumptions that: (i) the electromagnetic field is purely monochromatic, and (ii) the scattering object does not change with time. However, in the majority of actual applications the electromagnetic field is polychromatic and the scattering object changes in time randomly or quasi-randomly. Furthermore, the temporal variability of the object can be rapid enough to affect the result of averaging an actual optical observable over the time interval required to take a measurement.
Electromagnetic scattering by an isolated particle or a multi-particle group is a ubiquitous phenomenon central to a wide variety of science and engineering disciplines. Field—matter interactions described by macroscopic electromagnetics typically occur in a natural way. They can affect accompanying physical and chemical processes as well as the very state of the scattering object and often yield an electromagnetic signal that can be measured and analyzed with the purpose of retrieving useful information about the object. Electromagnetic scattering can also be induced artificially and used as an active means of in situ or remote diagnostics of certain physical properties of the particle(s). In order to interpret laboratory, field, and remote-sensing measurements of electromagnetic scattering by various single- and multi-particle objects, one needs a deep understanding of this phenomenon, as well as the ability to predict quantitatively its various manifestations as functions of the physical parameters of the objects.
The diversity of sizes, morphologies, and refractive indices of particles encountered in natural and artificial environments is virtually limitless, as illustrated by Fig. 1.1. This factor complicates accurate quantitative modeling of electromagnetic scattering and absorption, even by solitary particles such as those suspended individually in the trap volume of an electrostatic (as shown in Plate 1.1a) or optical levitator. The task of optical modeling of a large group of sparsely distributed particles such as a cloud (see, e.g., Plates 1.1b and 1.1c) is significantly more involved.
Solving the energy-budget and optical characterization problems formulated in Section 1.4 relies on one's ability to:
• compute the time-averaged Poynting vector and/or
• model theoretically the net signal recorded by a (polarimetric) WCR.
To accomplish either task one usually needs a direct computer solver of the MMEs. This solver may be required, for example, to calculate the spatial distribution of the Poynting vector inside a densely packed particulate medium, or to compute the extinction and phase matrices needed to analyze the reading of a far-field WCR.
Depending on the complexity and size of the scattering object (cf. Plate 1.1), direct computer solvers of the MMEs can become inefficient and may need to be replaced with a well-characterized and manageable approximate solution. For example, we will see in Chapter 19 that certain observable manifestations of scattering by a large random group of sparsely distributed particles, as well as its electromagnetic energy budget, can be quantified by solving the so-called radiative transfer equation. However, two key quantities entering this equation, the single-particle extinction and phase matrices averaged over all particle micro-physical states ξ, must still be calculated using a numerical solver of the MMEs. We have seen that the same is true of the FOSA derived in Chapter 14 for a small group of randomly and sparsely distributed particles observed from a sufficiently large distance.