Three decades after the Viking missions, which failed to detect any biorelics, or even a slight trace of organic activity, the question of whether Mars once harboured habitable conditions, if not life, has been dramatically reopened. A key ingredient, liquid water, might have covered large fractions of early Mars over sustained periods, as indicated by the ongoing space missions. This chapter presents our understanding of how the Martian water reservoirs have evolved over time.
It took centuries for Mars to evolve (in human minds) from a ‘planet of death’ to a ‘world of life’: its colour no longer referred to blood (hence its being named after the god of war) but to rust; rust: thus water; water: thus life. These latter syllogisms persisted until very recently, carrying the transcendental quest for life far beyond the scientific sphere. And yet: is Mars actually covered by ferric material? If so, is liquid water responsible for the oxidation? More importantly still, would that be sufficient for life to have emerged on Mars? Without direct means to address (and possibly answer) such questions, Mars has always been viewed as the closest and most favourable planet to have harboured extraterrestrial life. A variety of similarities between Mars and the Earth lent support to the ‘plurality of worlds’ that was conceived as the operational dogma.
Since the beginning of this century, astronomers have discovered hundreds of exoplanets, almost all of them giant planets. However, we expect that relatively smaller exoplanets, similar in size to our Earth, will soon be detectable. Among the thousands of exoplanets that will be discovered by the end of this century, some may host life. Obviously the possibility of finding life on another planet is a function not only of the number of discovered planets, but also of the stability of life on those planets: if life is only a glimpse, the search for life will be much harder! We have under our feet a marvellous example demonstrating that one kind of life may be hosted on a planet (Earth) for billions of years. This relative stability of life on Earth seems to be strongly correlated with the stable environmental conditions that have prevailed on the surface of our planet for several billion years (1 Ga = 10⁹ years). This is why this chapter is devoted to deciphering and understanding the ‘stable’ climate conditions on Earth since 3.8 Ga. Observation of our neighbouring planets in the Solar System teaches us that the conditions for the development of life (habitability) and its sustainability at the surface of a planet are not widespread – at least in the Solar System. Nowadays, Mars is a very cold desert experiencing dust storms, whereas Venus is a burning hell whose surface is totally hidden by a thick greenhouse-gas atmosphere.
The high stress resistance of the bacterium Deinococcus radiodurans
Deinococcus radiodurans (D. radiodurans), initially isolated in canned meat that had been irradiated at 4000 grays in order to achieve sterility (Anderson et al., 1956), is a bacterium belonging to a bacterial genus characterized by an exceptional ability to withstand the lethal effects of DNA-damaging agents, including ionizing radiation, ultraviolet light and desiccation (Battista and Rainey, 2001).
Initially, D. radiodurans was named Micrococcus radiodurans because of its morphological similarity to members of the genus Micrococcus. Subsequent studies led to its reclassification into a distinct phylum within the domain Bacteria, and the bacterium was renamed Deinococcus radiodurans, from the Greek adjective deinos, meaning strange or unusual. Deinococcaceae have been isolated from diverse environments after exposure to high doses of ionizing radiation. Within this family, which to date contains more than 20 identified members, D. radiodurans is by far the best characterized. D. radiodurans cells are non-motile, non-spore-forming obligate aerobes that grow optimally at 30°C in rich medium. On agar plates they are pigmented, appearing pink-orange. In liquid media, cells divide alternately in two planes, forming pairs or tetrads (Figure 22.1A).
Ionizing radiation, when applied to any living organism, leads to the formation of highly reactive radicals (e.g. hydroxyl radicals) and can cause a variety of DNA damage, such as single- and double-strand breaks and base modifications.
The Sun is something of a latecomer in our Galaxy, the Milky Way. It was born 4.6 Gyr ago (4.5685 Gyr ± 0.5 Myr, to be precise, from the decay of specific radioactive heavy elements in the most primitive meteorites – see below). This is to be compared with the age of the Universe, constrained by the best theoretical fits to the observed spatial fluctuations of the ‘cosmic background radiation’ to be 13.7 Gyr, within 2%. When galaxies formed is less certain, but current estimates give a time lapse of less than 1 Gyr after the Big Bang – implying that our own Galaxy has an age of over 12.7 Gyr and that the Sun was born over 8.1 Gyr later. So by the time the Sun formed, our Galaxy had already been sufficiently evolved by successive generations of stars that it presented no major differences from the one we observe today. Therefore, we can safely derive conclusions about the distant birth of the Sun from observations of contemporary young stars.
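The chronology above reduces to simple subtraction; a minimal sketch, using only the figures quoted in the text:

```python
# Back-of-the-envelope timeline from the figures quoted in the text.
AGE_UNIVERSE = 13.7     # Gyr, from fits to the cosmic background radiation
GALAXY_DELAY_MAX = 1.0  # Gyr, upper bound on galaxy formation after the Big Bang
AGE_SUN = 4.6           # Gyr, from meteoritic radiochronology

galaxy_age_min = AGE_UNIVERSE - GALAXY_DELAY_MAX  # Galaxy is older than this
sun_delay_min = galaxy_age_min - AGE_SUN          # Sun born at least this much later

print(f"Galaxy: over {galaxy_age_min:.1f} Gyr old; "
      f"Sun formed over {sun_delay_min:.1f} Gyr after it.")
```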
In a nutshell, from various observations we know that bright nebulae, including some famous ones like the Orion, Eagle or Carina nebulae, are ‘stellar nurseries’ (Figure 8.1), where stars like the Sun form in clusters of thousands of low- to intermediate-mass stars. A few massive stars, like those of the Orion Trapezium, for which the highest mass is of order 45 M⊙, also form in these stellar nurseries.
Introduction: some conceptual remarks on metabolism
Metabolism is the set of enzymatic reactions that allow living beings to use external energy sources to drive the building of their biochemical components from external chemical sources, and also to carry out energy-consuming functions, such as osmotic and mechanical work. The role played by gene-encoded enzymatic catalysts is one of the essential properties of life. Since their stability is finite and they need constant replacement, enzymes are themselves products of metabolism (Cornish-Bowden et al., 2004). Thus, the proteome (i.e. the totality of proteins and their concentrations that exists in a particular cell state) is a product of the metabolome (i.e. the totality of metabolites and their concentrations that exists in a particular metabolic state). This situation gives rise to ‘metabolic circularity’ or ‘recursivity’, a concept needed for a complete understanding of metabolism (Cornish-Bowden et al., 2007).
Extant metabolic networks are certainly complex, with hundreds or thousands of concatenated enzymatic reactions. Because a limited number of coenzymes (i.e. the special reactants that help enzymes to perform their catalytic functions) are used recurrently by different enzymes, and because some central metabolites are true crossroads between different lines of chemical transformation, complex networks emerge. From a topological perspective, metabolic networks show a power-law distribution of connectivity (Fell and Wagner, 2000). In other words, most metabolites are poorly connected, whereas a few of them (coenzymes and metabolic crossroads) support many connections.
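The power-law connectivity reported by Fell and Wagner can be illustrated with a toy growth model: if each new node preferentially links to already well-connected ones, a few hubs (the ‘coenzymes and crossroads’) emerge while most nodes stay sparsely linked. This is an illustrative sketch of a Barabási–Albert-style process, not a model taken from the cited works; all parameter values are arbitrary.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=42):
    """Grow a network in which each new node links to m existing nodes
    chosen proportionally to their current degree. This mechanism is
    known to produce a heavy-tailed (power-law) degree distribution."""
    random.seed(seed)
    degree = Counter({0: 1, 1: 1})  # start from a single edge 0-1
    pool = [0, 1]                   # each node id appears once per unit of degree
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(random.choice(pool))  # degree-weighted choice
        for t in targets:
            degree[new] += 1
            degree[t] += 1
            pool.extend([new, t])
    return degree

degrees = preferential_attachment(2000)
hub_degree = max(degrees.values())                        # a few strong hubs...
typical_degree = sorted(degrees.values())[len(degrees) // 2]  # ...vs a low median
print(hub_degree, typical_degree)
```

The contrast between the hub degree and the median degree is the signature described in the text: most metabolites poorly connected, a few supporting many connections.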
Ionized hydrogen or H II regions and gaseous nebulae are generally low-density objects that appear as extended and diffuse clouds. Typical electron temperatures are of the order of 10⁴ K, or ~1 eV, and densities are between 10² and 10⁶ cm⁻³. But the ionizing sources of H II regions are in general quite diverse. Among the most common variety are those found in giant molecular clouds, photoionized by newly formed hot stars with sufficient UV flux to ionize hydrogen and several other elements to low ionization states. Similar H II regions are commonplace in astronomy, as parts of otherwise unrelated objects such as active galactic nuclei (Chapter 13) and supernova remnants. Such regions are also easily observable, since they are largely optically thin. Furthermore, a number of nebular ions are commonly observed from a variety of gaseous objects. In fact, in Chapter 8 we developed the spectral diagnostics of optical emission lines, as observed from the Crab nebula in Fig. 8.3. That nebula is the remnant of a supernova explosion, in the constellation of Taurus, witnessed in AD 1054 by Arab and Chinese astronomers. The central object is a fast-spinning neutron star – a pulsar – energizing the surrounding nebula. Nebular spectroscopy therefore forms the basis of most spectral analysis in astrophysics.
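The quoted equivalence between 10⁴ K and ~1 eV is just the Boltzmann conversion kT; a minimal check:

```python
K_B_EV = 8.617e-5  # Boltzmann constant in eV per kelvin

def thermal_energy_ev(temperature_k):
    """Characteristic thermal energy kT, in electron-volts."""
    return K_B_EV * temperature_k

kT = thermal_energy_ev(1.0e4)  # typical nebular electron temperature
print(f"kT at 10^4 K = {kT:.2f} eV")  # ~0.86 eV, i.e. of order 1 eV as quoted
```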
We describe the essentials of nebular astrophysics with emphasis on spectroscopic analysis, and address the pervasive problem of atomic data sources of varying accuracy.
Most of the observable matter in the Universe is ionized plasma. The two main sources of ionization are collisional ionization due to electron impact, as discussed in Chapter 5, and photoionization due to a radiative source. Among the prominent radiation sources we discuss in later chapters are stars and active galactic nuclei. The nature of these sources, and the physical conditions in the plasma environments activated by them, vary considerably. The photoionization rate and the degree of ionization achieved depend on (i) the photon distribution of the radiation field and (ii) the cross section as a function of photon energy. In this chapter, we describe the underlying physics of photoionization cross sections, which turns out to be surprisingly full of features revealed through relatively recent experimental and theoretical studies. Theoretically, many of these features arise from channel coupling, which manifests itself most strongly as autoionizing resonances, often not considered in the past in the data used in astronomy. The discussion in this chapter focuses particularly on the nearly ubiquitous presence of resonances in the cross sections, which will later be seen to be intimately coupled to (e + ion) recombination (Chapter 7).
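The two dependencies named above combine in the standard textbook form of the photoionization rate per atom, Γ = ∫ 4πJ_ν σ(ν)/(hν) dν, integrated from the ionization threshold. The sketch below evaluates this numerically for an assumed power-law mean intensity and the simple hydrogenic approximation σ(ν) ≈ σ₀(ν₀/ν)³ (which ignores the resonance features this chapter is about); the normalization J0, spectral index and integration cutoff are illustrative assumptions, not values from the text.

```python
import math

H_CGS = 6.626e-27   # erg s, Planck constant
NU_0 = 3.288e15     # Hz, hydrogen ionization threshold frequency
SIGMA_0 = 6.30e-18  # cm^2, H ground-state cross section at threshold

def photoionization_rate(j0, alpha=1.5, n_steps=10000, nu_max=50 * NU_0):
    """Gamma = int 4*pi*J_nu * sigma(nu) / (h*nu) d(nu), midpoint rule.
    Assumes J_nu = j0*(nu0/nu)**alpha and sigma = sigma0*(nu0/nu)**3."""
    dnu = (nu_max - NU_0) / n_steps
    total = 0.0
    for i in range(n_steps):
        nu = NU_0 + (i + 0.5) * dnu
        j_nu = j0 * (NU_0 / nu) ** alpha        # assumed radiation field
        sigma = SIGMA_0 * (NU_0 / nu) ** 3      # hydrogenic cross section
        total += 4 * math.pi * j_nu * sigma / (H_CGS * nu) * dnu
    return total  # ionizations per second per atom

rate = photoionization_rate(j0=1.0e-21)
print(f"photoionization rate ~ {rate:.2e} s^-1")
```

For this steep falling integrand the analytic result is 4πJ0σ0/(h(α+3)), so the numerical integral provides a direct consistency check on the quadrature.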
The interaction of photons and atoms inducing transitions between bound states has been discussed in Chapter 4. Here we describe the extension to the bound–free transitions. We first revisit a part of the unified picture of atomic processes in Fig. 3.5.
In this part, we discuss the properties of the Galaxy center, and finish with a selection of multiwavelength all-sky images of the Galaxy.
The Galaxy is an Sbc/SBbc spiral galaxy, and in the atlas it is classified in the normal (N) category of our sample, though the additional categories interacting (I) and active? (A?) (see Table 1, page 18) are also assigned. The Galaxy (along with the LMC and SMC) is classified I owing to the existence of the H I Magellanic Stream (Wannier and Wrixon 1972; Murai and Fujimoto 1986), as shown in Figure 1.22. An A? classification is also given because the optical spectrum integrated over the central parsec of the Galaxy resembles that of a Seyfert galaxy (Mezger et al. 1996). This is consistent with results that suggest the presence of a several-million-M⊙ black hole in the nucleus.
Quiet monster – Sagittarius A*
Our Galaxy's nucleus, some 8.5 kpc distant in the constellation of Sagittarius, has long been a target for multiwavelength observations. Because of the large amounts of obscuring dust towards the nucleus and in the disk of the Galaxy, infrared observations have proved invaluable in showing the content and structure of this region. Figure 3.1 shows a 48° by 33° IRAS IR image of the central region of the Galaxy. The disk of the Galaxy is the bright band running diagonally across the image.
The bright central region contains the nucleus. The concentrated blobs of yellow are giant clouds of interstellar gas and dust heated by recently formed stars and nearby massive, hot stars.
Human common experience tells us that the individuals of a particular species reproduce among themselves to produce a progeny that tends to resemble its parents. This inheritance of characters from one generation to the next is known as ‘vertical inheritance’. Although the nature of the physical support of the genetic information transmitted through generations remained mysterious until 1944, when DNA was shown to constitute that support, Gregor Mendel had already described the laws that control this kind of inheritance in plants in the nineteenth century. The vertical inheritance of favourable modifications is one of the pillars of the Darwinian theory of evolution: natural selection can be effective only if the advantageous characters selected can be transmitted to the progeny. This central role of vertical inheritance in evolution was adopted later, in the twentieth century, by the neo-Darwinian evolutionists. For example, they proposed the ‘biological species concept’, which states that a species is defined by the capacity of its members to reproduce among themselves – a species concept that, instead of being based on morphological characters as in the classical definitions, was based on the capacity for vertical inheritance. In addition, the vertical transmission of genetic information, with changes due to selection, defines evolutionary lineages of organisms that are distinct from the rest of the lineages (for example our own Homo sapiens lineage). As Darwin stated in The Origin of Species, the evolutionary relationships between those lineages are best represented by a phylogenetic tree.
Not all photons emitted by astronomical objects are detected by ground-based telescopes. A major barrier to the photons' path is the Earth's ionosphere and upper atmosphere, which absorb or scatter most incoming radiation except for the optical (wavelengths of 3300–8000 Å, where 1 Å = 10⁻¹⁰ m), parts of the near-IR (0.8–7 μm) and radio (greater than 1 mm) regions. Absorption most strongly affects radiation with the shortest wavelengths. In general, gamma rays are absorbed by atomic nuclei, X-rays by individual atoms and UV radiation by molecules. Incoming IR and submillimeter radiation is strongly absorbed by molecules in the upper atmosphere (e.g. H₂O and carbon monoxide, CO). Observations in these regions benefit greatly from locating telescopes at high altitude. Mountain-top sites like Mauna Kea (altitude 4200 m) in Hawaii, Cerro Pachon (2700 m), Las Campanas (2500 m) and Paranal (2600 m) in Chile, and La Palma (2300 m) in the Canary Islands are used to decrease the blocking effect of the atmosphere. The Antarctic, in particular the South Pole, provides an atmosphere with low water-vapor content. Most of the continent is at high altitude, with the South Pole 2835 m above sea level, again helping to reduce the amount of obscuring atmosphere. The Antarctic has therefore also become a very useful IR and submillimeter site.
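As a compact summary, the windows quoted above can be encoded in a small lookup. This is a deliberate simplification for sea-level observing: real atmospheric transmission depends on altitude and weather, and the text notes further partial windows (e.g. submillimeter from high, dry sites) that are omitted here.

```python
# Atmospheric windows quoted in the text, as (min, max) wavelengths in metres.
WINDOWS_M = [
    (3300e-10, 8000e-10),   # optical: 3300-8000 Angstrom
    (0.8e-6, 7e-6),         # parts of the near-IR: 0.8-7 micron
    (1e-3, float("inf")),   # radio: wavelengths greater than 1 mm
]

def observable_from_ground(wavelength_m):
    """True if the wavelength falls inside one of the quoted windows."""
    return any(lo <= wavelength_m <= hi for lo, hi in WINDOWS_M)

print(observable_from_ground(5500e-10))  # visible light: True
print(observable_from_ground(1e-9))      # X-ray: False (absorbed by atoms)
```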
The transmission properties of the Earth's atmosphere (Figure 2.1) have prompted the exploration of the gamma-ray, X-ray, UV, mid- and far-IR regions of the electromagnetic spectrum via satellite and high-altitude balloon observations.
Looking for a definition of life raises various issues, the first being its legitimacy. Does seeking such a definition make sense, in particular to scientists? I will successively refute the different arguments of those who consider that looking for such a definition makes no sense, and then propose good reasons to do just that, but also add some caveats regarding what sort of definition is sought. After considering definitions proposed in the past, I will examine various present-day definitions, what they share and how they differ. I will show that the recent suggestion that viruses are alive makes no sense and obscures discussions about life. Finally, I will emphasize two important recent transformations in the way life is defined.
Philosophical and scientific legitimacy of a definition of life
Two questions immediately emerge. Are we seeking a definition of life or a definition of organisms? And what kind of definition should be sought? Two types of definition are, in fact, traditionally distinguished. A definition may aim to give the essential characteristics that causally explain the existence of the category of objects considered. Or a definition may be of more limited scope: to establish a list of properties that are necessary and sufficient to define this category of objects and to distinguish them from objects belonging to other categories. If one adopts the first kind of definition it will be possible to define life. If one opts for the second, one will look for a definition of organisms.
Would a theoretical biologist be surprised to be told that computer use and software developments should help him make substantial progress in his discipline? It is doubtful. There is a long tradition of software simulation in theoretical biology, complementing pure analytical mathematics, which is often of limited use in reproducing and understanding the self-organization phenomena that result from the non-linear and spatially grounded interactions of a huge number of diverse biological objects. Nevertheless, proponents of artificial life would bet that they could help theoretical biologists further by enabling them to transcend their daily modelling/measuring practice, using software simulations in the first instance and, to a lesser degree, robotics, in order to abstract and elucidate the fundamental mechanisms common to living organisms. They hope to do so by resolutely neglecting much materialistic and quantitative information deemed not indispensable. They want to focus on the rule-based mechanisms that make life possible, supposedly neutral with respect to their underlying material embodiment, and to replicate them in a non-biochemical substrate. In artificial life, the importance of the substrate is purposefully understated in favour of function (software should ‘supervene’ on an infinite variety of possible hardware). Minimal life begins at the intersection of a series of processes that need to be isolated, differentiated and duplicated as such in computers. Only software development and use make it possible to understand how these processes are intimately interconnected in order for life to appear at those crossroads.