Complexity and the Arrow of Time

Details

  • 14 b/w illus.
  • Page extent: 368 pages
  • Size: 228 x 152 mm
  • Weight: 0.73 kg

Library of Congress

  • Dewey number: 003
  • Dewey version: 23
  • LC Classification: Q175.32.C65 C64 2013
  • LC Subject headings:
    • Complexity (Philosophy)
    • Science--Philosophy
    • SCIENCE / Physics.--bisacsh



Hardback (ISBN-13: 9781107027251)

Available, despatch within 3-4 weeks

US $31.99
Singapore price US $34.23 (inclusive of GST)
9781107027251 - Complexity and the Arrow of Time - Edited by Charles H. Lineweaver, Paul C. W. Davies and Michael Ruse
Excerpt

1    What is complexity? Is it increasing?

C. H. Lineweaver, P. C. W. Davies and M. Ruse

One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.

(Gibbs, 1881)

Most people don't need to be persuaded that the physical world is bewilderingly complex. Everywhere we look, from molecules to clusters of galaxies, we see layer upon layer of complex structures and complex processes. The success of the scientific enterprise over the last 300 years largely stems from the assumption that beneath the surface complexity of the universe lies an elegant mathematical simplicity. The spectacular progress in particle and atomic physics, for example, comes from neglecting the complexity of materials and focusing on their relatively simple components. Similarly, the amazing advances in cosmology mostly ignore the complications of galactic structure and treat the universe in a simplified, averaged-out, approximation. Such simplified treatments, though they have carried us far, sooner or later confront the stark reality that many everyday phenomena are formidably complex and cannot be captured by traditional reductionist approaches. The most obvious and important example is biology. The techniques of particle physics or cosmology fail utterly to describe the nature and origin of biological complexity. Darwinian evolution gives us an understanding of how biological complexity arose, but is less capable of providing a general principle of why it arises. “Survival of the fittest” is not necessarily “survival of the most complex”.

In recent decades, partly due to the ready availability of fast and powerful computation, scientists have increasingly begun to seek general principles governing complexity. Although this program remains a work in progress, some deep issues of principle have emerged. First, how are the different forms of complexity – for example, the chaotic jumble of a pile of rocks versus the exquisite organization of a living organism – related? Second, does complexity have a general tendency to increase over time, and if so, when? Cosmologists agree that one second after the big bang the universe consisted of a simple soup of subatomic particles bathed in uniform radiation, raising the question of how the many levels of complexity that emerged as the universe evolved came to be. In biological evolution too, complexity seems to rise and rise, and yet this trend is notoriously hard to pin down. If there is a complexity "arrow of time" to place alongside the thermodynamic and cosmological arrows of time, it has yet to be elucidated precisely.

The contributors to this volume tackle the foregoing issues head-on, and grapple with the questions: “What is complexity?” and “Is it increasing?”. In the tradition of Lord Kelvin, the physical scientists tend toward the view that without a quantitative definition of complexity we won't be able to measure it precisely and a fortiori won't know if it is increasing:

…when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science, whatever the matter may be.

(Kelvin, 1883)

In the first section of the book, the physicists attempt to construct a definition of complexity – one that can be measured and is trans-disciplinary. They want to provide a recipe for measuring complexity while keeping their urges for speculative interpretation on a short leash. The problem, however, is that there are many complexity-generating processes and many forms of complexity (see, e.g., Clayton, Chapter 14). Without a unified definition that can be applied across a range of disciplines, keeping speculation on a short leash is moot. Biologists are less troubled by the lack of a unified definition. After all, shouldn't we expect complexity to be complex? As Conway Morris points out, "First there were bacteria, now there is New York". Even without a rigorous definition, or a broadly accepted way to measure it, isn't it qualitatively obvious that biological complexity has increased? Do we really need to wait for a precise definition to think about complexity and its limits?

Although advances in the study and simulation of complex systems have thrown the problems into sharp relief, Ruse's historical review reminds us that wrestling with complexity is not a new sport. He describes how evolutionary biologists from Darwin to Dawkins have tried to sharpen the intuitive notion of biological complexity. Darwin repeatedly picked at the notion like one picks at a scab. Subsequently, five generations of the world's finest evolutionary biologists have failed to agree on a precise metric to quantify biological complexity. A few plausible proxies have been explored. One is genome length. However, scientists predisposed to believe that humans are the culmination of the animal kingdom (as Darwin believed) are not favorably inclined towards this proxy since, as Davies points out, "Salamanders have genomes much larger than humans". The lack of a tight correlation between genome length and our subjective assessment of phenotypic complexity is known as the C-value enigma (Gregory, 2001). Wimsatt describes a specific kind of genomic complexity that he calls "generative entrenchment": the genomic complexity of multicellular organisms that has accumulated over hundreds of millions of years and controls ontogeny. Much of this regulatory complexity has recently been found in introns (ENCODE Project Consortium, 2012), but quantifying it is a daunting task.

Comparing the complexities of different organisms can be confusing. For example, Ruse describes a proxy for complexity based on the number of cell types in a multicellular organism. But if this proxy is used, it leaves us unsure of how to quantify the extreme diversity and complexity of unicellular extremophiles. An alternative approach is to focus on ecosystems rather than specific organisms. As Smith (Chapter 9) discusses, the metabolic complexity or the number of species in an ecosystem may be more important than the number of cell types in a single species, in which case a bacterial mat could be judged more complex than a worm or a vertebrate brain.

Both Wolpert and Conway Morris stress the importance of considering the degree of non-uniformity across different scales of size and different parts of a system when assigning a level of complexity. Thus biological complexity can be found in the increased specialization of body parts such as the duplication and subsequent differentiation of animal limbs, in the relationships between species, and in networks of ecosystems. Although the degree of specialization seems a reasonable proxy for biological complexity, as Conway Morris reminds us, there are many examples in the history of life on Earth in which specialization has led to extinction, while simplification has led to adaptive success and survival. Thus, macro-evolution displays trends towards both complexity and simplicity.

The late Stephen Jay Gould strongly cautioned against the temptation to ascribe any overall directionality to biological evolution (Gould, 1996). In a famous metaphor he compared the evolution of a species in complexity space to the random walk of a drunkard who starts out leaning against a wall, but whose movement towards the gutter is unrestricted. Gould remarked that in biology there is a "wall" of minimal complexity necessary for life to function at all. He then argued that a mere constraining wall of minimal complexity does not amount to a driving force towards increasing complexity. If there is any sort of driving force at work in evolution, it might be more accurately described as a tendency towards diversity, or an entropically driven tendency to spread out in possibility space, occupying new niches made available by a varying environment.
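
Gould's wall-and-drunkard metaphor can be made concrete with a minimal simulation – purely an illustrative sketch, with a model and parameters we are assuming rather than anything taken from the book. An unbiased random walk in a notional "complexity space" is reflected off a wall of minimal complexity; no single step favors higher complexity, yet the ensemble mean drifts steadily away from the wall.

    import random

    def drunkards_walk(n_lineages=2000, n_steps=400, wall=0.0, step=1.0, seed=1):
        """Illustrative toy model (not from the book): unbiased random walk in a
        notional 'complexity space' with a reflecting wall of minimal complexity.
        Returns the ensemble mean after n_steps."""
        rng = random.Random(seed)
        positions = [wall] * n_lineages            # every lineage starts at the wall
        for _ in range(n_steps):
            for i in range(n_lineages):
                positions[i] += step if rng.random() < 0.5 else -step
                if positions[i] < wall:            # the wall blocks further simplification
                    positions[i] = wall
        return sum(positions) / n_lineages

    # The mean grows (roughly as the square root of the number of steps)
    # even though no individual step is biased towards higher complexity.
    print(drunkards_walk(n_steps=100), drunkards_walk(n_steps=400))

The point of the sketch is Gould's: a passive lower bound plus diffusion is enough to raise the average, with no driving force towards complexity required.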

A similar position is adopted by McShea and Brandon in their 2010 book, Biology's First Law. They describe a "zero-force evolutionary law" in which "diversity and complexity arise by the simple accumulation of accidents". They argue that when you start with a clean white picket fence, it gets dirty – paint will peel here and there – increasing the "complexity" of the fence. This somewhat impoverished view regards complexity as being based on variation by random drift, and highlights an obvious question: are diversity and complexity synonymous? If our definition of biological complexity refers only to variation, without the environmental-information-inserting influence of selection, we have omitted a key factor in biological evolution, which requires both variation and selection. An increase in variation without selection merely describes an increase in entropy (a pristine fence naturally degenerating into random defects) and an approach to equilibrium, rather than an increase of complexity. This is simply the second law of thermodynamics and fails to capture the key quality of complexity as something more than mere randomness. To make progress on this issue, we need to distinguish between variation that brings the system closer to equilibrium and variation that takes it further from equilibrium.
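
That distinction can likewise be illustrated with a toy version of the picket fence – again a sketch under assumed parameters, not McShea and Brandon's own model. Unselected "accidents" accumulate, the Shannon entropy of the fence's state rises, and it saturates at the equilibrium value rather than growing without limit.

    import math
    import random

    def fence_entropy(n_slats=1000, n_accidents=5000, seed=1):
        """Illustrative toy model (not from the book): accumulate unselected
        'accidents' (a slat's paint peels, or a slat is randomly repainted) and
        track the Shannon entropy per slat of the fence's state."""
        rng = random.Random(seed)
        slats = [0] * n_slats                       # 0 = pristine, 1 = defective
        history = []
        for _ in range(n_accidents):
            slats[rng.randrange(n_slats)] = rng.randint(0, 1)
            p = sum(slats) / n_slats                # fraction of defective slats
            if p in (0.0, 1.0):
                history.append(0.0)
            else:
                history.append(-(p * math.log2(p) + (1 - p) * math.log2(1 - p)))
        return history

    h = fence_entropy()
    print(h[50], h[1000], h[-1])   # climbs towards, then hovers near, 1 bit per slat

Variation alone drives the fence towards its maximum-entropy equilibrium; it is selection against an environment that can push a system the other way.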

Ruse summarizes McShea and Brandon's views and relates them to Gould's wall of minimal complexity:

[McShea and Brandon] say that complexity is "a function only of the amount of differentiation among parts within an individual". Elsewhere they say "'complexity' just means number of part types or degree of differentiation among parts". They are very careful to specify that this has nothing to do with adaptation. Indeed they say "in our usage, even functionless, useless, part types contribute to complexity. Even maladaptive differentiation is pure complexity". How could this complexity come about? It all seems to be a matter of randomness. With Gould, and I think with Spencer, they simply believe that over time more and more things will happen and pieces will be produced and thus complexity will emerge. It is the inevitability of the drunkard falling into the gutter.

What is clear from these exchanges is that one cannot isolate the complexity of biological organisms from the complexity of their environment. Whether the inhabitants of newly available niches will become simpler or more complex depends on the complexity and simplicity of those new niches. Random walks that elevate complexity are driven by the complexity of the environment being walked in. When environments get more varied and complex, so too do the inhabitants. If the environment gets simpler (closer to equilibrium with fewer gradients) then the inhabitants get simpler as selection pressure is removed and the information in their genomes drifts away.

Part of the problem of quantifying biological complexity is that it seems impossible to put a measure on the number of new niches, let alone the number of all possible niches. How many ways of making a living are there? If the number increases with time, where does this increase come from, and is there any meaningful upper bound?

Kauffman and Conway Morris both address the issue of the potential for biological complexity afforded by an expansion of the niche space. The former opines that

we cannot know what the set of all the possibilities of the evolution of the biosphere are. Not only do we not know what WILL happen, we don't even know what CAN happen.

Also, by coining the memorable phrase "The uses of a screwdriver cannot be listed algorithmically", Kauffman emphasizes the unlimited nature of exaptations and the impossibility of listing them in advance. Conway Morris has a different view. Based on his estimation of the importance and ubiquity of convergence, he claims that the space of phenotypes is not only saturable but, in many cases, already saturated. He contends that if the tape of life were played again, much the same results would ensue. To bolster his claim, Conway Morris presents evidence that "biological systems are near, and in some cases have clearly reached, the limits of complexity". One exception, in Conway Morris's view, is human beings, who do not seem to have reached the limits of their complexity.

Kauffman contests Conway Morris's saturation claim. Unfortunately, in the absence of a measure of biological complexity it is hard to decide between these competing points of view. Davies, at least, sides with Kauffman when he writes:

At any given time there is only a finite number of species, but the number of potential species is almost limitless, as Stuart Kauffman explains so well in his chapter. Therefore random mutations are far more likely to discover a new species than to recreate old ones. No mystery there.

If the proximate origin of the increase of biological complexity lies in the adaptations that take advantage of new niches, the ultimate origin must be in the mechanism that supplies the new niches. What mechanism controls the number of new niches? Obviously changes in the environment play a key role. However, the environment of an organism includes not only the physical variables (temperature, rainfall, etc.) but the existence and activities of other organisms in the ecosystem. As a specific example of how a new niche drives the evolution of new adaptations, see the detailed genomic analysis of E. coli evolution in a new citrate-rich, glucose-limited medium (Blount et al., 2012).

Lineweaver argues that one can simplify the analysis of complexity because all the information and complexity in biological organisms ultimately derives from the physical environment. In addition, since cultural information and complexity depend on biological complexity, cultural complexity, too, ultimately depends on physical complexity. If this causal chain is correct, then we can understand the source and nature of complexity by focusing on the evolution of the complexity of the physical universe (galaxies, stars, and planets).

In his chapter, Krakauer gives an explicit description of how natural selection transfers the information from the environment to the genome. In addition, when the environmental information is changing too rapidly to be incorporated into genes (i.e. environmental changes on timescales shorter than a generation) he describes how environmental information can become incorporated into biological information storage facilities with much faster turnover times: i.e. brains. Thus both Lineweaver and Krakauer argue that biological information is not created de novo, but concentrated in organisms via evolutionary and cognitive processes. The ultimate source of biological information and complexity must then be found in the physical universe. Although it is hard to fault this line of reasoning, it carries the implicit assumption that information and complexity are in some sense passed on like a token from environment to organism. The argument shifts the explanation for biological information and complexity to the environment. But this is just passing the buck or moving the bump in the carpet unless we can answer the question of how the information and complexity got into the environment in the first place.

One aspect of the problem commands general agreement: complexity cannot increase over time without a source of free energy to generate (or transfer) it. This is possible only if the universe is not in a state of thermodynamic equilibrium (sometimes referred to as the heat death). Well-known examples of the emergence of complexity, and of the concomitant rise of entropy (or fall of free energy) that pays for it, are (i) the organized structure of a hurricane, which is made possible only by the existence (and low entropy) of pressure, temperature, and humidity gradients, and (ii) the origin of life driven by the exploitation of some form of chemical redox potential. Mineral evolution is also an example of increasing abiotic complexity that depends on the flow of free energy (Hazen et al., 2008). The largest variety of minerals is found near ore bodies, where there has been a high throughput of free energy from the condensation of hot fluids driven by a temperature gradient.
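
The bookkeeping behind this consensus is simply the second law applied to an open system. As a standard thermodynamic sketch (not a formula taken from any chapter), for a system exchanging heat with its surroundings at constant temperature T and pressure:

    % Standard thermodynamics, stated here only for orientation.
    \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{surr}} \ge 0
    \quad\Longleftrightarrow\quad
    \Delta G_{\mathrm{sys}} = \Delta H_{\mathrm{sys}} - T\,\Delta S_{\mathrm{sys}} \le 0 .

A local fall in entropy – the building of structure – is therefore permitted only when enough free energy is degraded to export at least a compensating amount of entropy to the surroundings, which is exactly the role played by the pressure and temperature gradients feeding a hurricane or the redox gradients exploited by early life.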




© Cambridge University Press