The study of evolution most typically involves inferring past events from evidence preserved in extant organisms. This approach faces a number of challenges, such as uncertainty about the precise time of origin of character states, the rate of molecular evolution and the confounding effects of population processes. Accessing evolutionary information directly from the fossil and sub-fossil record – in fact, from any past period since which a measurable change has occurred – is therefore extremely useful in addressing these uncertainties. Museum, archaeology department and herbarium collections are the ‘banks’ of biomolecular information from which our scientific understanding of such processes can be extrapolated. Precautions taken to preserve biological material, such as controlled environments, tissue-specific storage materials and the conservation of depositional environments, are often conducive to the long-term survival of genetic material. Consequently, these biomolecular banks hold material with a wide geographical and temporal range, often outside the typical age range of material used in phylogenetic analyses, as well as genetic diversity that is rare or lost in the living world. The advent of ancient biomolecular analyses in the 1990s was a technological milestone in this respect: oligogenic analyses, based on one or a few genes, enabled the reconstruction of extinct stages of phylogenies, such as the renowned placement of the thylacine among dasyuroid marsupials using evidence from cytochrome b DNA sequences (Krajewski et al. 1992, 1997).
NGS allows deep sequencing of single PCR targets, thereby generating systematic data for thousands or millions of organisms (Sogin et al. 2006). It also facilitates the study of multiple PCR targets spanning exons, introns, non-coding regions, mRNA transcripts or even complete genomic organization between organisms, allowing a much greater depth of understanding in genetic phylogenies than could be gained from a handful of genes or simple morphological analysis (Horner et al. 2010). For the most part, NGS technology has been applied to extant species in systematics research. Its applicability to sub-fossil material was first demonstrated by Poinar et al. (2006) in permafrost-preserved mammoth bones. The application of NGS to generate data directly from historical, archaeological or paleontological sources therefore holds the potential to observe genomic evolution in real time.
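To make the notion of deep sequencing of a single PCR target concrete, the sketch below tallies unique sequence variants among reads from one amplicon. It is a minimal illustration only: the FASTQ file name, the read-depth threshold and the helper functions are hypothetical, and a real analysis would also involve quality filtering, alignment and explicit error modelling.

```python
from collections import Counter

def read_fastq_sequences(path):
    """Yield the sequence line from each four-line FASTQ record."""
    with open(path) as handle:
        for i, line in enumerate(handle):
            if i % 4 == 1:  # the second line of each record holds the bases
                yield line.strip().upper()

def tally_variants(path, min_reads=10):
    """Count unique amplicon variants, discarding low-coverage ones that
    are more likely sequencing error than genuine sequence diversity."""
    counts = Counter(read_fastq_sequences(path))
    return {seq: n for seq, n in counts.items() if n >= min_reads}

if __name__ == "__main__":
    # 'amplicon_reads.fastq' is a hypothetical file of deep-sequenced
    # reads covering a single PCR target.
    variants = tally_variants("amplicon_reads.fastq")
    for seq, n in sorted(variants.items(), key=lambda kv: -kv[1]):
        print(f"{n:8d}  {seq[:60]}")
```

Filtering on a minimum read count is a crude stand-in for the error-correction steps that published sequencing pipelines apply, but it captures why read depth matters: rare true variants can only be distinguished from sequencing noise when each target is sequenced many times over.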
This chapter provides an overview of Earth system models: the various model ‘flavours’; their state of development, including model evaluation, benchmarking and optimization against observational data; and their application to climate change issues.
The Earth system can be conceptualized as a suite of interacting physical, chemical, biological and anthropogenic processes that regulate the planet’s flow of matter and energy. Earth system models (ESMs; Box 5.1) are built to mirror these processes. In fact, ESMs are the only tool available to the scientific community to investigate the system properties of the Earth, as we do not have an alternative planet to manipulate that could serve as a scientist’s laboratory.
The term ‘Earth system model’ is commonly used to describe coupled land–ocean–atmosphere models that include interactive biogeochemical components. Such models have developed progressively from the physical climate models first created in the 1960s and 1970s. Conventional climate models apply physical laws to simulate the general circulation of the atmosphere and ocean. As our understanding of the natural and anthropogenic controls on climate has grown, and given the steady advances in computing power, global climate models have been extended to include more comprehensive representations of biological and geochemical processes, with the various interacting components of the Earth system each contributing its own feedback mechanisms. Figure 5.1 shows the conceptual differences between a conventional coupled atmosphere–ocean general circulation model (AOGCM) and an ESM. Because of the additional coupling between components, ESMs are more complex and have correspondingly higher computational demands.
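As a purely conceptual illustration of why interactive biogeochemistry matters, the toy model below couples a zero-dimensional energy balance to a one-parameter carbon-cycle feedback. All parameter values are invented for the sketch and correspond to no real AOGCM or ESM; the only point is that switching the feedback on changes the simulated trajectory, which is the qualitative difference Figure 5.1 depicts.

```python
import math

# Toy zero-dimensional 'climate model' with an optional carbon-cycle
# feedback. All numbers are illustrative, not taken from any real ESM.

def simulate(years=200, emissions=10.0, carbon_feedback=True):
    co2 = 400.0           # ppm, atmospheric CO2
    temp_anomaly = 0.0    # K, warming relative to a baseline
    airborne_base = 0.45  # fraction of emissions remaining airborne
    for _ in range(years):
        airborne = airborne_base
        if carbon_feedback:
            # Interactive biogeochemistry: warming weakens land/ocean
            # sinks, so a larger emission fraction stays airborne.
            airborne += 0.02 * temp_anomaly
        co2 += emissions * airborne * 0.47           # ~0.47 ppm per GtC
        forcing = 5.35 * math.log(co2 / 400.0)       # W m^-2
        temp_anomaly = 0.8 * forcing                 # K, toy sensitivity
    return co2, temp_anomaly

print("physics only  :", simulate(carbon_feedback=False))
print("with feedback :", simulate(carbon_feedback=True))
```

Even in this caricature, the coupled run warms more than the physics-only run because the carbon cycle amplifies the forcing; real ESMs represent many such feedbacks simultaneously, which is what drives their greater complexity and computational cost.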
Contingent valuation (CV) has been used by economists to value public goods for about 25 years. The approach posits a hypothetical market for an unpriced good and asks individuals to state the dollar value they place on a proposed change in its quantity, quality, or access. Development of the CV concept has been described in reviews by Cummings, Brookshire, and Schulze (1986) and Mitchell and Carson (1989). The approach is now widely used to value many different goods whose quantity or quality might be affected by the decisions of a public agency or private developer. Environmental goods have received particular attention, because they are highly valued by society and entail controversial tradeoffs (e.g., manufacturing costs vs. pollution, urban development vs. wetlands protection) but are not usually sold through markets (Bromley, 1986).
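As a simple numerical illustration of the approach, the sketch below computes mean and median willingness to pay from hypothetical open-ended CV responses and scales the mean to an affected population. The response values, population size and aggregation rule are invented for illustration; actual CV studies involve far more careful survey design and econometric modelling.

```python
import statistics

# Hypothetical open-ended CV responses: stated willingness to pay
# (in dollars) for a proposed improvement in wetland protection.
stated_wtp = [0, 0, 5, 10, 10, 15, 20, 25, 40, 75]

mean_wtp = statistics.mean(stated_wtp)
median_wtp = statistics.median(stated_wtp)

# Aggregate value = mean WTP scaled to the affected population,
# a common (and debated) final step in CV studies.
population = 50_000
print(f"mean WTP   : ${mean_wtp:.2f}")
print(f"median WTP : ${median_wtp:.2f}")
print(f"aggregate  : ${mean_wtp * population:,.0f}")
```

The gap between the mean and the median in even this tiny sample hints at a familiar CV issue: a few large stated values can dominate the aggregate, which is one reason the choice of welfare measure is itself contested.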
The visibility of CV methods has greatly increased following the 1989 interpretation of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) by the District of Columbia Circuit Court of Appeals (in Ohio v. United States Department of the Interior). This decision (a) granted equal standing to expressed and revealed preference evaluation techniques (with willingness-to-pay measures preferred in all cases), (b) accepted nonuse values as a legitimate component of total resource value, and (c) recognized a “distinct preference” in CERCLA for restoring damaged natural resources rather than simply compensating for the losses (Kopp, Portney, & Smith, 1990).