To supply a connection between the modal syllogistic developed in Aristotle's Prior Analytics and the rest of his philosophy, interpreters frequently assume that the modal syllogistic must somehow have been intended for use in Aristotle's theory of demonstration as found in the Posterior Analytics, and on this basis they try to explain some of its peculiarities. According to the most common such proposal, Aristotle's claims about necessity syllogisms are to be explained by distinguishing essential from accidental predication and supposing that the necessary premises of demonstrations are necessary because they rest on definitional or essential predications. The evidence adduced for such claims is essentially that if we suppose Aristotle to have had in mind certain notions that appear in the Posterior Analytics when he developed the modal syllogistic, then we can see why he makes the claims he does about what is or is not a necessary proposition, a valid inference, and so on. However, these are at best speculative answers to the question of why Aristotle developed the modal syllogistic as he did, given that he had already decided to develop it. They do not explain why he might have wanted to develop such a theory in the first place. Nor do they tell us what use the modal syllogistic might have been to him in his theory of demonstration – or, more precisely, what use he might have had in mind for it. In order even to speculate productively about that, we should like to identify some puzzles or difficulties concerning demonstrative science that results of the modal syllogistic might help resolve. At the very least, we would hope to find some evidence in the Posterior Analytics of the use of at least some general results of the modal syllogistic. All we find, however, is the claim that if what is demonstrated is itself necessary, then the premises from which it is demonstrated must also be necessary.
In order to link the modal syllogistic with the rest of Aristotle's philosophy, then, we must look elsewhere. It turns out that there is a passage in the Prior Analytics in which Aristotle articulates a principle about possibility that is almost word-for-word identical with a passage in book Θ of the Metaphysics.
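The passages in question are usually taken to be Prior Analytics I.15 (34a5 ff.) and Metaphysics Θ.4 (1047b14 ff.), where Aristotle holds, roughly, that if B necessarily is when A is, then if A is possible, B is possible too. As a reading aid only – the notation below is a modern reconstruction, not Aristotle's own formalism – the principle can be rendered in modal logic as follows:

```latex
% A modern formalization (a reconstruction, not Aristotle's own notation):
% if B necessarily follows from A, the possibility of A carries over to B.
\[
  \Box(A \rightarrow B) \;\rightarrow\; (\Diamond A \rightarrow \Diamond B)
\]
```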
The study of evolution most typically involves inferring past events from evidence in extant organisms. There are a number of challenges associated with this, such as uncertainty about the precise time of origin of character states, the rate of molecular evolution and the confounding effects of population processes. Accessing evolutionary information directly from the fossil and sub-fossil record – in fact, from any past period since which a measurable change has occurred – is therefore extremely useful in addressing these uncertainties. Museum, archaeology department and herbarium collections are the ‘banks’ of biomolecular information from which our scientific understanding of such processes can be extrapolated. Precautions taken to preserve biological material, such as controlled environments, tissue-specific storage materials and the conservation of depositional environments, are often conducive to the long-term survival of genetic material. Consequently, these biomolecular banks hold material with a wide geographical and temporal range, often outside the typical age range of material used in phylogenetic analyses, as well as genetic diversity that is rare or lost in the living world. The advent of ancient biomolecular analyses in the 1990s was a technological milestone in this respect: oligogenic analyses based on one or a few genes enabled the reconstruction of extinct stages of phylogenies, such as the renowned placement of the thylacine among dasyuroid marsupials using evidence from cytochrome b DNA sequences (Krajewski et al. 1992, 1997).
NGS allows deep sequencing of single PCR targets, thereby generating systematic data for thousands or millions of organisms (Sogin et al. 2006). It also facilitates the study of multiple PCR targets spanning exons, introns, non-coding regions, mRNA transcripts or even complete genomic organization across organisms, allowing a much greater depth of understanding in genetic phylogenies than could be gained from a handful of genes or from morphological analysis alone (Horner et al. 2010). For the most part, NGS technology has been applied to extant species in systematics research. Its applicability to sub-fossil material was first demonstrated by Poinar et al. (2006) in permafrost-preserved mammoth bones. The application of NGS to generate data directly from historical, archaeological or paleontological sources thus holds the potential to view genomic evolution in real time.
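By way of illustration only – this passage specifies no analysis pipeline, and the file name, read format and threshold below are hypothetical – here is a minimal Python sketch of the kind of processing that deep amplicon sequencing enables: collapsing many reads of a single PCR target into unique sequence variants with their read counts.

```python
from collections import Counter

def read_fasta(path):
    """Yield sequences from a simple FASTA file (headers start with '>')."""
    seq = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            elif line:
                seq.append(line.upper())
    if seq:
        yield "".join(seq)

def tally_variants(path, min_count=2):
    """Collapse amplicon reads into unique variants, discarding singletons
    as a crude stand-in for real sequencing-error denoising."""
    counts = Counter(read_fasta(path))
    return {s: n for s, n in counts.items() if n >= min_count}

if __name__ == "__main__":
    # 'amplicons.fasta' is a hypothetical file of reads from one PCR target.
    variants = tally_variants("amplicons.fasta")
    for seq, n in sorted(variants.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{n:8d}  {seq[:60]}")
```

Real ancient-DNA workflows would add damage-aware filtering and proper denoising; the sketch shows only the deep-sequencing tally itself.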
This chapter provides an overview of Earth system models: the various model ‘flavours’; their state of development, including model evaluation, benchmarking and optimization against observational data; and their application to climate change issues.
The Earth system can be conceptualized as a suite of interacting physical, chemical, biological and anthropogenic processes that regulate the planet’s flow of matter and energy. Earth system models (ESMs; Box 5.1) are built to mirror these processes. In fact, ESMs are the only tool available to the scientific community for investigating the system properties of the Earth, as we have no alternative planet to manipulate that could serve as a scientist’s laboratory.
The term ‘Earth system model’ is commonly used to describe coupled land–ocean–atmosphere models that include interactive biogeochemical components. Such models have developed progressively from the physical climate models first created in the 1960s and 1970s. Conventional climate models apply physical laws to simulate the general circulation of atmosphere and ocean. As our understanding of the natural and anthropogenic controls on climate has grown, and given the steady advances in computing power, global climate models have been extended to include more comprehensive representations of biological and geochemical processes, involving the addition of the various interacting components of the Earth system with their own feedback mechanisms. Figure 5.1 shows the conceptual differences between a conventional global coupled atmosphere–ocean general circulation model (AOGCM) and an ESM. In terms of the coupling between components, ESMs are more complex, and they have correspondingly higher computational demands.
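To make the coupling distinction concrete, the following is a deliberately minimal toy sketch in Python – it is not any production model, and every parameter value is an illustrative assumption. A zero-dimensional energy-balance ‘climate’ is stepped forward either with prescribed carbon uptake, as in an AOGCM-style setup, or with a crude interactive carbon component whose sink weakens as temperature rises and so feeds back on the forcing.

```python
import math

# Illustrative constants (assumptions, not tuned to any real ESM)
LAMBDA = 1.0        # climate feedback parameter, W m^-2 K^-1
HEAT_CAP = 8.0      # effective heat capacity, W yr m^-2 K^-1
F2X = 3.7           # radiative forcing per CO2 doubling, W m^-2
C0 = 280.0          # preindustrial CO2, ppm
EMISSIONS = 2.0     # constant emissions, ppm yr^-1
UPTAKE0 = 0.5       # fraction of emissions absorbed by land/ocean at T = 0
SENSITIVITY = 0.1   # uptake fraction lost per K of warming (toy feedback)

def step(temp, co2, interactive_carbon, dt=1.0):
    """Advance temperature anomaly (K) and CO2 (ppm) by one year."""
    if interactive_carbon:
        # ESM-style coupling: warming weakens the carbon sink,
        # so more of the emitted CO2 stays in the atmosphere.
        uptake = max(0.0, UPTAKE0 - SENSITIVITY * temp)
    else:
        # AOGCM-style setup: uptake fixed, no biogeochemical feedback.
        uptake = UPTAKE0
    co2 += EMISSIONS * (1.0 - uptake) * dt
    forcing = F2X * math.log(co2 / C0) / math.log(2.0)
    temp += (forcing - LAMBDA * temp) / HEAT_CAP * dt
    return temp, co2

for label, interactive in [("prescribed carbon", False),
                           ("interactive carbon", True)]:
    temp, co2 = 0.0, C0
    for _ in range(100):
        temp, co2 = step(temp, co2, interactive)
    print(f"{label}: after 100 yr, CO2 = {co2:.0f} ppm, warming = {temp:.2f} K")
```

For the same emissions, the interactive run ends warmer and with higher CO2, because the weakening sink amplifies the forcing. That is the qualitative kind of feedback loop that distinguishes an ESM from an AOGCM in Figure 5.1.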
The acquisition of a metastatic phenotype is the deadliest trait a tumor can develop. Secondary tumors compromise organ function, are refractory to standard chemotherapeutics, and ultimately lead to the demise of the patient. Although tumor progression contains stochastic elements, there is an emerging pattern of organotropism, as various cancers display a predilection to metastasize to distinct secondary sites. A common trait of highly metastatic tumors is the ability to adapt the topology of local and distant microenvironments to better aid their progression. Indeed, many metastasis-regulating genes are components of, or require interactions with, stromal cells or the extracellular matrix (ECM) to function properly [1–4]. As such, the propensity to metastasize to specific sites is controlled in part by endemic homing mechanisms that involve coordinated ligand–receptor interactions between the cancer cell and the host microenvironment. Despite large advances in our understanding of metastasis biology, the molecular mechanisms guiding these processes remain largely uncharacterized.
Combinatorial phage-display libraries are a powerful screening tool that can readily identify functional protein interactions in vivo. Their use has revealed that the stromal microenvironment – specifically the vasculature – of an organ contains a unique “molecular address” that can be modulated during inflammation, tumor growth, and metastasis [3, 5–7]. This chapter explores the role of the stroma during metastatic progression and highlights how phage display technology has been used to discover novel endothelial markers that can be targeted to disrupt tumor progression and metastasis.
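As a purely illustrative sketch – the chapter presents no code, and the peptide sequences and read counts below are invented – the core readout of such an in vivo selection can be reduced to a fold-enrichment calculation: peptide inserts whose frequency rises across biopanning rounds are candidate ligands for the vascular “address” in question.

```python
from collections import Counter

def frequencies(peptides):
    """Convert a list of recovered peptide inserts into relative frequencies."""
    counts = Counter(peptides)
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def fold_enrichment(early_round, late_round, pseudo=1e-6):
    """Frequency ratio of each peptide between an early and a late round.
    A pseudo-frequency avoids division by zero for peptides absent early."""
    f_early = frequencies(early_round)
    f_late = frequencies(late_round)
    return {p: f_late[p] / f_early.get(p, pseudo) for p in f_late}

# Hypothetical sequencing output from two rounds of in vivo selection.
round1 = ["CGRKSKTVC", "CAGALCYSC", "CGRKSKTVC", "CNVPMQASC", "CAGALCYSC"]
round3 = ["CGRKSKTVC"] * 7 + ["CAGALCYSC"] * 2 + ["CNVPMQASC"]

for pep, fe in sorted(fold_enrichment(round1, round3).items(),
                      key=lambda kv: -kv[1]):
    print(f"{pep}: {fe:.1f}x enrichment")
```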
The traumatic exposure
The dramatic effect of the pictures notwithstanding, many “Ground Zero workers” described the physical and mental effects of being at “Ground Zero” as something that could be understood only through direct experience. The site occupied 16 acres in lower Manhattan, with buildings grouped around a 5-acre central plaza. It is bounded by Vesey Street on the north, Church Street on the east, Liberty Street on the south, and West Street on the west, about three blocks north of the New York Stock Exchange. The Twin Towers were 110 stories, 1353 feet (412 meters) tall. In total, there were about 10,000,000 square feet of rentable space, and about 50,000 people occupied the buildings. There were 43,200 square feet (4020 square meters) – about an acre of rentable space – on each floor. The seven buildings were about 95% air by volume and contained 15 million square feet of space. Commercially, the seven-story mall beneath the World Trade Center (WTC) was America's third most heavily trafficked mall (Tomasky, 2003). In the aftermath of 9/11, the site continues to be an object of much interest, discussion and meaning for droves of visitors.
Whatever else “Ground Zero” may have been, it was also the workplace for a large number of workers and volunteers. In addition to the firemen and policemen whose volunteer, rescue, and recovery efforts have been chronicled in the media, “Ground Zero” also provided employment for members of at least 50 other professions, as well as for a host of volunteers.