Having identified the location of ongoing or recent volcanic activity and quantified the thermal emission, we are still left with many questions. Is the eruption emplacing flows? How thick are they? How far will they flow? Models of the physical processes taking place can at least constrain the answers, quantifying the eruption parameters (volumetric flux, areal coverage rate), constraining eruption behavior (total volume erupted, time taken, style of eruption), and allowing comparison with other eruptions, both on Io and on Earth.
Volcanology has been transformed over the past four decades by the consideration of the processes taking place from the perspective of applied physics, leading to the development and application of mathematical models of eruption mechanics and flow emplacement. Models are compared with remote and field observations and laboratory studies and then retained and refined – or discarded if found to be unrealistic. The laws of physics being universal, the resulting physical models derived on Earth can be used on other Solar System bodies so long as local conditions are taken into account.
The need for process modeling arose from data collected by the first generations of planetary missions during the 1960s and 1970s, when the importance of large-scale volcanism throughout the inner Solar System was realized, as was the primary role played by basalt (described in the massive Basaltic Volcanism on the Terrestrial Planets, published by the Lunar and Planetary Science Institute [BVSP, 1981]).
Volcanism: the manifestation at the surface of a planet or satellite of internal thermal processes through the emission at the surface of solid, liquid, or gaseous products.
Peter Francis (1993), Volcanoes: A Planetary Perspective
Few geological phenomena inspire as much awe as a volcanic eruption. Eruptions are, quite frankly, extremely exciting to watch and experience. Could there be any more exciting place for a volcanologist than the jovian moon Io (Plate 1), which has more active volcanoes per square kilometer than anywhere else in the Solar System? Io is the only body in the Solar System other than the Earth where current volcanic activity can be witnessed on such a wide scale. As a result of this high level of volcanic activity, Io has the most striking appearance of any planetary satellite.
The detection of an umbrella-shaped plume extending high above the surface of Io was the most spectacular discovery made by NASA's Voyager spacecraft during their encounters at Jupiter; in fact, it was one of the most important results from NASA's planetary exploration program. The discovery of active extraterrestrial volcanism meant that Earth was no longer the only planetary body where the surface was being reworked by volcanoes. With this exciting discovery a revolution in planetary sciences began, leaving behind the perception of planetary satellites as geologically dead worlds, where any dynamic process had been damped down into extinction over geologic time (billions of years).
Many aspects of volcanic activity such as thermal emission, gas emission, and the changing shape of a volcano can be studied remotely (see Mouginis-Mark et al., 2000). This chapter, however, limits discussions of remote sensing to techniques also available in the study of volcanism on Io.
Remote sensing of volcanic activity on Earth
Remote sensing has become an essential tool for terrestrial volcanologists since its origins in the data collected in the mid-1960s by the High-Resolution Infrared Radiometer (HRIR) on Nimbus 1. Those data were used to show that the Hawaiian volcano Kilauea had a higher infrared radiance than Mauna Loa, its then-inactive neighbor (Gawarecki et al., 1965). More than four decades later, Earth-orbiting platforms are now used to detect and monitor volcanic activity at different temporal, spatial, and spectral resolutions (Plate 3). The development of spacecraft, orbits, sensor capabilities, and data analysis techniques up to the launch of the first Earth Observing System (EOS) spacecraft (Earth Observing 1 [EO-1], Terra, and Aqua) is reviewed in a series of papers collected in the monograph Remote Sensing of Volcanic Activity (Mouginis-Mark et al., 2000). The reader is specifically directed to Mouginis-Mark and Domergue-Schmidt (2000) for their comprehensive appraisal of the strengths and limitations of terrestrial satellite remote sensing capabilities. More recent EOS spacecraft observations of active volcanism are described by Ramsey and Flynn (2004) and references therein, and by Davies et al. (2006a).
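The physical basis of such infrared detections is Planck's law: a surface at magmatic temperatures radiates far more strongly in the mid-infrared than ambient ground, so even a small hot area stands out. The sketch below is illustrative only; the 3.8 μm wavelength and the two temperatures are assumed values chosen to be representative of this kind of thermal sensing, not figures from the text.

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance (W m^-2 m^-1 sr^-1) at a given
    wavelength (m) and temperature (K), from Planck's law."""
    exponent = (H * C) / (wavelength_m * K * temp_k)
    return (2.0 * H * C**2) / (wavelength_m**5 * (math.exp(exponent) - 1.0))

# Compare an active lava surface (~1000 K, assumed) with ambient ground
# (~300 K, assumed) at an illustrative mid-infrared wavelength of 3.8 microns.
hot = planck_radiance(3.8e-6, 1000.0)
cold = planck_radiance(3.8e-6, 300.0)
print(f"radiance ratio (1000 K / 300 K) at 3.8 um: {hot / cold:.0f}")
```

The ratio is several thousand to one, which is why a radiometer with kilometer-scale pixels can still flag sub-pixel volcanic activity.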
Io boasts some of the most impressive topography in the Solar System, with mountains higher than Mt. Everest and volcano-tectonic depressions deeper than the Grand Canyon on Earth. The shapes of these volcanic depressions, associated shields, and volcanic cones and the morphology of lava flows on Io's surface yield important clues to the nature of the magma and the interior processes that generate the observed geomorphology.
Paterae on Io and calderas on Earth
The most common volcanic feature on Io, a patera, is currently defined by the International Astronomical Union, guardian of planetary nomenclature and feature names, as an “irregular saucer-like crater.” This may be a misnomer because few saucers are perfectly flat, as the floors of many paterae appear to be. Io's paterae have steep walls and arcuate margins and, geomorphologically, are unique to Io. However, in some respects they do resemble some calderas formed on Earth, Mars, and Venus (e.g., Radebaugh et al., 2001). In some cases, notably at Maasaw Patera (Figure 1.7c), the craters are nested and look strikingly similar to nested calderas at the summit of Mauna Loa, Hawai'i, on Earth, and Olympus Mons on Mars (Figure 15.1a, b).
Four hundred and twenty-eight paterae have been mapped within a region covering about 70% of Io's surface (Radebaugh et al., 2001). Locations are shown in Plate 13b.
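Those counts translate into an areal density with a little spherical geometry. The back-of-the-envelope sketch below assumes Io's mean radius (~1821.6 km), a value not taken from the text.

```python
import math

# Areal density of mapped paterae on Io — a back-of-the-envelope sketch.
# Assumes a mean radius of 1821.6 km; 428 paterae mapped over ~70% of the
# surface (Radebaugh et al., 2001).
IO_RADIUS_KM = 1821.6

surface_area_km2 = 4.0 * math.pi * IO_RADIUS_KM**2   # full sphere
mapped_area_km2 = 0.70 * surface_area_km2            # mapped portion
density_per_1e6_km2 = 428 / mapped_area_km2 * 1.0e6

print(f"mapped area: {mapped_area_km2:.2e} km^2")
print(f"~{density_per_1e6_km2:.0f} paterae per 10^6 km^2")
```

That works out to roughly 15 paterae per million square kilometers of mapped surface, a useful number when comparing patera distributions with caldera densities on other bodies.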
The eruptions at Pillan and Tvashtar Paterae were very different, in terms of lava emplacement, from that seen at Pele. At Pillan, in particular, the eruption began in a spectacular explosive style and then transitioned to effusive activity. As at Pele, the explosive eruptions at these locations were driven by low-viscosity silicate magmas. At Pillan and Tvashtar Paterae, however, the explosive phases were short-lived: the early stages were dominated by lava-fountain activity that fed large flows and laid down extensive pyroclastic deposits in both cases, with thermal emission rapidly building to a peak and then gradually subsiding as the eruption style changed. Finally, lava effusion came to an end and the emplaced flows cooled and solidified.
An interesting result from the study of the Pillan eruption was the indication of the presence of ultramafic magmas on Io. These magnesium-rich flows dominated volcanism on Earth aeons ago. If they were being erupted on Io, then the mechanisms of eruption and evolution of a process long extinct on Earth were being revealed. This possibility raised an intriguing question: could Io be an analogue for the early Earth (Matson et al., 1998; McEwen et al., 1998b)? The merits of the case for ultramafic magmas on Io were discussed in Chapter 9.
The volcanoes described in the previous chapters are either unique on Io or are class types of silicate eruptions. Many other volcanic centers display diverse styles of eruption, and colors and geomorphologies indicative of other lava compositions. The tour of Io's volcanoes continues with a closer look at some of these features.
Tupan Patera
Tupan Patera (141°W, 19°S) (Plate 12b) is one of Io's most colorful features. The patera is 75 km × 50 km in area and about 900 m deep (Turtle et al., 2004). Bright red material, probably short-chain sulphur allotropes, colors most of the patera floor, and diffuse deposits are seen on the surface southeast of the patera (Keszthelyi et al., 2001a; Turtle et al., 2004). Black silicates cover the floor of the eastern half of the patera and appear in patches in the western half. In NIMS data these dark areas are the warmest areas, whereas the central “island” is cold (Lopes et al., 2004). A relatively uniform black line traces the edge of the patera floor in the western half of the patera and may be a tide line like that seen at Emakong (Turtle et al., 2004; see Section 14.5). The appearance of bright material in patches on the eastern patera floor is consistent with the melting of sulphur from the patera walls and patera margins; the sulphur then flows and pools on cooling silicates on the floor of the patera.
My brief historical overview in Chapter 1 alluded to the crucial influence of the Newtonian mechanistic picture on the development of our view of the Universe. According to this, the cosmos operates like a giant machine, oblivious to whether life or any form of consciousness is present, i.e. the laws of physics and the characteristics of the Universe are independent of whether anybody actually observes them. In the last fifty years, however, the Anthropic Principle has developed [1], and this might be regarded as a reaction to the mechanistic view. This claims that, in some respects, the Universe has to be the way that it is because otherwise it could not produce life and we would not be here speculating about it. Although the term ‘anthropic’ derives from the Greek word for ‘man’, it should be stressed that most of the arguments pertain to life in general.
As a simple example of an anthropic argument, consider the following question: why is the Universe as big as it is? The mechanistic answer is that, at any particular time, the size of the observable Universe is the distance travelled by light since the Big Bang, which is about 10¹⁰ light-years. There is no compelling reason the Universe has the size it does; it just happens to be about 10¹⁰ years old.
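The mechanistic estimate is a one-line calculation: light travels one light-year per year, so in a Universe whose age is of order 10¹⁰ years,

```latex
R \;\sim\; c\,t_0 \;\approx\; \left(1\ \tfrac{\text{light-year}}{\text{yr}}\right) \times 10^{10}\ \text{yr}
  \;\approx\; 10^{10}\ \text{light-years} \;\approx\; 10^{26}\ \text{m}.
```

The anthropic move, discussed next, is to ask why the age itself should take such a value.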
Each field has a set of questions which are universally viewed as important, and these questions motivate much of the work in the field. In particle physics, several of these questions are directly related to experimental problems. Examples include questions such as: Does the Higgs boson exist and, if so, what is its mass? What is the nature of the dark matter in the Universe? What is the mechanism that generated the net number of baryons in the Universe? For these topics, there is a well posed problem related to experimental findings or theoretical predictions. These are problems that must be solved if we are to achieve a complete understanding of the fundamental theory.
There also exists a different set of questions which have a more aesthetic character. In these cases, it is not as clear that a resolution is required, yet the problems motivate a search for certain classes of theories. Examples of these are the three ‘naturalness’ or ‘fine-tuning’ problems of the Standard Model; these are associated with the cosmological constant Λ, the energy scale of electroweak symmetry-breaking ν and the strong CP-violating angle θ. As will be explained more fully below, these are free parameters in the Standard Model that seem to have values 10 to 120 orders of magnitude smaller than their natural values and smaller than the magnitude of their quantum corrections.
Modified version of summary talk at the symposium Expectations of a Final Theory at Trinity College, Cambridge, 4 September 2005
A new Zeitgeist
Our previous ‘Rees-fest’ Anthropic Arguments in Fundamental Physics and Cosmology at Cambridge in 2001 had much in common with this one, in terms of the problems discussed and the approach to them. Then, as now, the central concerns were apparent conspiracies among fundamental parameters of physics and cosmology that appear necessary to ensure the emergence of life. Then, as now, the main approach was to consider the possibility that significant observational selection effects are at work, even for the determination of superficially fundamental, universal parameters.
That approach is loosely referred to as anthropic reasoning, which in turn is often loosely phrased as the anthropic principle: the parameters of physics and cosmology have the values they do in order that intelligent life capable of observing those values can emerge. That formulation upsets many scientists, and rightly so, since it smacks of irrational mysticism.
On the other hand, it is simply a fact that intelligent observers are located only in a minuscule fraction of space, and in places with special properties. As a trivial consequence, probabilities conditioned on the presence of observers will differ grossly from probabilities per unit volume. Much finer distinctions are possible and useful; but I trust that this word to the wise is enough to make it clear that we should not turn away from straightforward logic just because it can be made to sound, when stated sloppily, like irrational mysticism.
Opening talk at the symposium Expectations of a Final Theory at Trinity College, Cambridge, 2 September 2005
Introduction
We usually mark advances in the history of science by what we learn about nature, but at certain critical moments the most important thing is what we discover about science itself. These discoveries lead to changes in how we score our work, in what we consider to be an acceptable theory.
For an example, look back to a discovery made just one hundred years ago. Before 1905 there had been numerous unsuccessful efforts to detect changes in the speed of light due to the motion of the Earth through the ether. Attempts were made by FitzGerald, Lorentz and others to construct a mathematical model of the electron (which was then conceived to be the chief constituent of all matter) that would explain how rulers contract when moving through the ether in just the right way to keep the apparent speed of light unchanged. Einstein instead offered a symmetry principle, which stated that not just the speed of light, but all the laws of nature are unaffected by a transformation to a frame of reference in uniform motion. Lorentz grumbled that Einstein was simply assuming what he and others had been trying to prove. But history was on Einstein's side. The 1905 Special Theory of Relativity was the beginning of a general acceptance of symmetry principles as a valid basis for physical theories.
After the development of inflationary cosmology, anthropic reasoning (AR) became one of the most important methods in theoretical cosmology. However, until recently it was not in the toolbox of many high-energy physicists studying 11- or 10-dimensional M/string theory and supergravity. The attitude of high-energy physicists changed dramatically in 1998, when the physics community was shocked by the new cosmological observations suggesting that we may live in a world with a tiny cosmological constant, Λ ~ 10⁻¹²⁰ M_P⁴, with a weird combination of matter and dark energy.
The recent WMAP observations seem to confirm the earlier data and also support the existence of an inflationary stage in the very early Universe. In view of the accumulating observational evidence, the level of tolerance towards AR is currently increasing. More people are starting to take it into consideration when thinking about cosmology from the perspective of M/string theory and particle physics. I belong to this group, and I recently had two rather impressive encounters with AR that I would like to discuss in this chapter.
In the first encounter, Andrei Linde and I considered a model of maximal supergravity related to the 11-dimensional M-theory, which has a 4-dimensional de Sitter (dS) solution with spontaneously broken super-symmetry [1]. We found that this model offers an interesting playground for the successful application of AR.
There are good reasons to view attempts to deduce basic laws of matter from the existence of mind with scepticism. Above all, it seems gratuitous. Physicists have done very well indeed at understanding matter on its own terms, without reference to mind. We have found that the governing principles take the form of abstract mathematical equations of universal validity, which refer only to entities — quantum fields — that clearly do not have minds of their own. Working chemists and biologists, for the most part, are committed to the programme of understanding how minds work under the assumption that it will turn out to involve complex orchestration of the building blocks that physics describes [1]; and while this programme is by no means complete, it has not encountered any show-stopper and it is supporting steady advances over a wide front. Computer scientists have made it plausible that the essence of mind is to be found in the operation of algorithms that in principle could be realized within radically different physical embodiments (cells, transistors, tinkertoys) and in no way rely on the detailed structure of physical law [2].
In short, the emergence of mind does not seem to be the sort of thing we would like to postulate and use as a basic explanatory principle. Rather, it is something we would like to understand and explain by building up from simpler phenomena. So there is a heavy burden to justify the use of anthropic reasoning in basic physics. And yet there are, it seems to me, limited, specific circumstances under which such reasoning can be correct, unavoidable and clearly appropriate.
When our measurement instruments sample from only a subspace of the domain that we are seeking to understand, or when they sample with uneven sampling density from the target domain, the resulting data will be affected by a selection effect. If we ignore such selection effects, our conclusions may suffer from selection biases. A classic example of this kind of bias is the election poll taken by the Literary Digest in 1936. On the basis of a large survey, the Digest predicted that Alf Landon, the Republican presidential candidate, would win by a large margin. But the actual election resulted in a landslide for the incumbent, Franklin D. Roosevelt. How could such a large sample size produce such a wayward prediction? The Digest, it turned out, had harvested the addresses for its survey mainly from telephone books and motor vehicle registries. This introduced a strong selection bias. The poor of the depression era — a group that disproportionately supported Roosevelt — often did not have phones or cars.
The Literary Digest suffered a major reputation loss and soon went out of business. It was superseded by a new generation of pollsters, including George Gallup, who not only got the 1936 election right, but also managed to predict what the Digest's prediction would be to within 1%, using a sample size that was only one-thousandth as large. The key to his success lay in his accounting for known selection effects. Statistical techniques are now routinely used to correct for many kinds of selection bias.
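The logic of this correction can be shown with a toy simulation. All the numbers below are hypothetical, chosen only for illustration and not drawn from the 1936 data: a sampling frame that under-represents one group skews the raw estimate, while reweighting subgroup results by their known population shares recovers the true figure.

```python
import random

# Toy simulation of poll selection bias (hypothetical numbers, not 1936 data).
random.seed(0)

N = 100_000
population = []
for _ in range(N):
    poor = random.random() < 0.60                            # 60% of electorate is poor
    votes_fdr = random.random() < (0.75 if poor else 0.40)   # poor voters favour FDR
    has_phone = random.random() < (0.10 if poor else 0.80)   # biased sampling frame
    population.append((poor, votes_fdr, has_phone))

true_share = sum(v for _, v, _ in population) / N

# Raw poll drawn only from phone owners — the Literary Digest's mistake.
phone_sample = [(poor, v) for poor, v, p in population if p]
biased_share = sum(v for _, v in phone_sample) / len(phone_sample)

# Gallup-style correction: estimate each subgroup separately, then
# reweight by the subgroups' known shares of the whole electorate.
poor_votes = [v for poor, v in phone_sample if poor]
rich_votes = [v for poor, v in phone_sample if not poor]
corrected = (0.60 * sum(poor_votes) / len(poor_votes)
             + 0.40 * sum(rich_votes) / len(rich_votes))

print(f"true FDR share:   {true_share:.2f}")
print(f"raw phone poll:   {biased_share:.2f}")
print(f"reweighted poll:  {corrected:.2f}")
```

The raw phone poll calls the election for the wrong candidate, while the reweighted estimate lands close to the true share — the essence of post-stratification, the kind of statistical technique now routinely used to correct for selection bias.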
A good point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
Bertrand Russell
Of late, there has been much interest in multiverses. What sorts could there be? And how might their existence help us to understand those life-supporting features of our own universe that would otherwise appear to be merely very fortuitous coincidences [1,2]? At root, these questions are not ultimately matters of opinion or idle speculation. The underlying Theory of Everything, if it exists, may require many properties of our universe to have been selected at random, by symmetry-breaking, from a large collection of possibilities, and the vacuum state may be far from unique.
The favoured inflationary cosmological model — that has been so impressively supported by observations of the COBE and WMAP satellites — contains many apparent ‘coincidences’ that allow our universe to support complexity and life. If we were to consider a ‘multiverse’ of all possible universes, then our observed universe appears special in many ways. Modern quantum physics even provides ways in which the possible universes that make up the multiverse of all possibilities can actually exist.
Once you take seriously that all possible universes can (or do) exist, then a slippery slope opens up before you. It has long been recognized that technical civilizations, only a little more advanced than ourselves, will have the capability to simulate universes in which self-conscious entities can emerge and communicate with one another [3]. They would have computer power that exceeded ours by a vast factor. Instead of merely simulating their weather or the formation of galaxies, like we do, they would be able to go further and watch the appearance of stars and planetary systems.
As a prescription for ascribing a priori probability weightings to the eventuality of finding oneself in the position of particular conceivable observers, the anthropic principle was originally developed for application to problems of cosmology [1] and biology [2]. The purpose of this chapter is to provide a self-contained introductory account of the motivation and reasoning underlying the recent development [3] of a more refined version of the anthropic principle that is needed for the provision of a coherent interpretation of quantum theory.
In order to describe ordinary laboratory applications, it is commonly convenient, and entirely adequate, to use a ‘Copenhagen’ type representation, in which a Hilbert state vector undergoes ‘collapse’ when an observation is made. However, from a broader perspective it is rather generally recognized that such a collapse cannot correspond to any actual physical process.
A leading school of thought on this subject was founded by Everett [4], who maintained the principle of the physical reality of the Hilbert state, and deduced that — in view of the agreement that no physical collapse process occurs — none of the ensuing branch channels can be ‘more real than the rest’. This was despite the paradox posed by the necessity that they be characterized by different (my italics) ‘weightings’, the nature of which was never satisfactorily explained. This intellectual flaw in the Everett doctrine was commonly overlooked, not so much by its adherents, who were seriously concerned about it [5], as by its opponents, who were upset by its revolutionary ‘multi-universe’ implications.
Nearly thirty years ago I wrote an article in the journal Nature with Martin Rees [1], bringing together all of the known constraints on the physical characteristics of the Universe — including the fine-tunings of the physical constants — which seemed to be necessary for the emergence of life. Such constraints had been dubbed ‘anthropic’ by Brandon Carter [2] — after the Greek word for ‘man’ — although it is now appreciated that this is a misnomer, since there is no reason to associate the fine-tunings with mankind in particular. We considered both the ‘weak’ anthropic principle — which accepts the laws of nature and physical constants as given and claims that the existence of observers then imposes a selection effect on where and when we observe the Universe — and the ‘strong’ anthropic principle — which (in the sense we used the term) suggests that the existence of observers imposes constraints on the physical constants themselves.
Anthropic claims — at least in their strong form — were regarded with a certain amount of disdain by physicists at the time, and in some quarters they still are. Although we took the view that any sort of explanation for the observed fine-tunings was better than none, many regarded anthropic arguments as going beyond legitimate science. The fact that some people of a theological disposition interpreted the claims as evidence for a Creator — attributing teleological significance to the strong anthropic principle — perhaps enhanced that reaction.
The standard models of particle physics and cosmology are both rife with numerical parameters that must have values fixed by hand to explain the observed world. The world would be a radically different place if some of these constants took a different value. In particular, it has been argued that if any one of six (or perhaps a few more) numbers did not have rather particular values, then life as we know it would not be possible [1]; atoms would not exist, or no gravitationally bound structures would form in the Universe, or some other calamity would occur that would appear to make the alternative universe a very dull and lifeless place. How, then, did we get so lucky as to be here?
This question is an interesting one because all of the possible answers to it that I have encountered or devised entail very interesting conclusions. An essentially exhaustive list of such answers follows.
(i) We just got very lucky. All of the numbers could have been very different, in which case the Universe would have been barren, but they just happened by pure chance to take values in the tiny part of parameter space that would allow life. We owe our existence to one very, very, very lucky roll of the dice.
(ii) We were not particularly lucky. Almost any set of parameters would have been fine, because life would find a way to arise in nearly any type of universe.
The parameters we call constants of nature may, in fact, be stochastic variables taking different values in different parts of the Universe. The observed values of these parameters are then determined by chance and by anthropic selection. It has been argued, at least for some of the constants, that only a narrow range of their values is consistent with the existence of life [1–5].
These arguments have not been taken very seriously and have often been ridiculed as handwaving and unpredictive. For one thing, the anthropic worldview assumes some sort of a ‘multiverse’ ensemble, consisting of multiple universes or distant regions of the same Universe, with the constants of nature varying from one member of this ensemble to another. Quantitative results cannot be obtained without a theory of the multiverse. Another criticism is that the anthropic approach does not make testable predictions; thus it is not falsifiable, and therefore not scientific.
While both of these criticisms had some force a couple of decades ago, much progress has been made since then, and the situation is now completely different. The first criticism no longer applies, because we now do have a theory of the multiverse. It is the theory of inflation. A remarkable feature of inflation is that, generically, it never ends completely. The end of inflation is a stochastic process; it occurs at different times in different parts of the Universe, and at any time there are regions which are still inflating [6,7].