An Individual and Its Possibilities: The State Space
The first question in any scientific research is its subject matter: What are we studying? The most general answer is a certain kind of system. A system, no matter how complex, is an individual, which is logically a particular, a unit with characters describable in general terms and a unique identity that enables us to single it out and refer to it unambiguously without invoking its characters. We must delineate the system before we delve into its properties. Thus the first general concept in the formulation of a theory is that of an individual. Individuals are represented in scientific theories by the state spaces, which form the keystone of the theories by defining their topics. The notion of a composite system as an integral individual with its own state space is the synthetic conceptual framework of many-body theories.
The general concept of a concrete individual in the real world is much more complicated than that of an abstract individual, which may be merely an unanalyzable x over which predicate variables range. The complication is most apparent in many-body theories, where both the composite system and its constituents are explicitly treated as individuals. An individual is isolated if it is not encumbered by any relation and not guaranteed to be relatable; situated, if it is an interactive part of a system; composite, if analyzable into parts; simple, if not.
So far we have examined models in which large composite systems are microanalyzed into uncoupled or weakly coupled entities. In some models, the entities are modified constituents that have internalized much of their former interaction; in others, they are collectives engaging the organized behaviors of many constituents. The models focus on finding the characters and behaviors of the entities that, once known, can be readily aggregated to yield system characters, for the weak coupling among the entities can be neglected in a first approximation. Thus we can crudely view a system in these models as the sum of its parts, although the parts are specialized to it and differ from the familiar constituents of its smaller counterparts.
Not all phenomena are microanalyzable by modularization. Phenomena such as freezing and evaporation, in which the entire structure of a system changes, would be totally obscured from the viewpoint of the parts. Here the correlation among the constituents is too strong and coherent to be swept under the cloaks of individual modules, whatever they are. In such thoroughly interconnected systems, the effects of a slight perturbation on a few constituents can propagate unhampered and produce large systemwide changes. Consequently, the behaviors of these systems are more multifarious, unstable, and surprising; such behaviors are often called emergent properties and processes. Emergent characters are most interesting and yet most controversial, for they are most difficult to treat theoretically and incur the wrath of some revisionary philosophies.
Mechanical processes lack a preferred temporal direction. Directionality appears in thermodynamic processes, which nevertheless have neither past nor future. Evolution is a historical process in which past changes are frozen into the structures of present organisms. Economic agents use the capital accumulated from the past and determine their action according to their expectations of the future. As dynamic processes become more complicated, so do the corresponding concepts of time.
There are hardly any phenomena more obvious than the temporal asymmetry of almost all processes we live through and observe. Moving things slow down because of friction but do not spontaneously accelerate; we have memories of the past but only expectations for the future. Does the temporal asymmetry merely reflect our own life process and the processes familiar to us, or does it have a more objective grounding?
This section considers the objectivity of a global temporal direction. A particular kind of irreversible process has a temporal direction, but there are many kinds of irreversible processes. Physicists alone talk about at least four “arrows of time”: quantum, thermodynamic, electromagnetic, cosmic. More arrows occur in other sciences. The relations among the various temporal directions are not clear. Can they be harmonized objectively and not merely conventionally into a global temporal direction, which can be briefly called the direction of time? Failing that, can we find some kind of pervasive and irreversible process to serve as a standard direction?
Science reveals complexity unfolding in all dimensions and novel features emerging at all scales and organizational levels of the universe. The more we know the more we become aware of how much we do not know. Gone is the image of a clockwork universe. Equally untenable are the image of a clockwork science that claims to comprehend all the diversity by a single method and a single set of laws and the clockwork scientists who are absorbed in deducing the consequences of the laws by applying given algorithms. Scientific research is a highly creative activity. Scientific creativity, however, is not anything-goes arbitrariness. There are general guiding principles, which are discernable across diverse disciplines.
We have examined the general way in which theoretical reason comes to grips with complexity. This synthetic microanalytic approach is not restricted to science. Readers would hear a familiar ring in the following passage from a textbook in computer engineering: “The techniques we teach and draw upon are common to all of engineering design. We control complexity by building abstractions that hide details when appropriate. We control complexity by establishing conventional interfaces that enable us to construct systems by combining standard, well-understood pieces in a ‘mix and match’ way. We control complexity by establishing new languages for describing a design, each of which emphasizes particular aspects of the design and deemphasizes others.”
Synthetic Microanalysis of Complex Composite Systems
According to our best experimentally confirmed physical theory, all known stable matter in the universe is made up of three kinds of elementary particle coupled via four kinds of fundamental interaction. The homogeneity and simplicity at the elementary level imply that the infinite diversity and complexity of things we see around us can only be the result of that makeup. Composition is not merely congregation; the constituents of a compound interact and the interaction generates complicated structures. Nor is it mere interaction; it conveys the additional idea of compounds as wholes with their own properties. Composition is as important to our understanding of the universe as the laws of elementary particles, and far more important to our understanding of ourselves, for each of us is a complex composite system and we participate in complex ecological, political, and socioeconomic systems. How does science represent and explain the complexity of composition?
Large-scale composition is especially interesting because it produces high complexity and limitless possibility. Zillions of atoms coalesce into a material that, under certain conditions, transforms from solid to liquid. Millions of people cooperate in a national economy that, under certain conditions, plunges from prosperity into depression. More generally, myriad individuals organize themselves into a dynamic, volatile, and adaptive system that, although responsive to the external environment, evolves mainly according to its intricate internal structure generated by the relations among its constituents.
An Intermediate Layer of Structure and Individuals
There is no ambiguity about the ingredients of solids. A solid is made up of atoms, which are decomposed into ions and electrons, simple and clear. However, a chapter or two into a solid-state textbook, one encounters entities such as phonons and plasmons. Not only do they have corpuscular names, they behave like particles, they are treated as particles, and their analogues in elementary particle physics are exactly particles. What are they and where do they come from? A phonon is the concerted motion of many ions, a plasmon of many electrons. They are treated as individuals not because they resemble tiny pebbles but because they have distinctive characters and couple weakly to each other. Physicists have microanalyzed the solid afresh to define new entities that emerge from the self-organization of ions and electrons.
Salient structures describable on their own frequently emerge in large composite systems. Systemwide structures will be discussed in the following chapter. Here we examine a class of structure that is microanalyzable into novel individuals, which I call collectives. A collective arises from the coherent behavior of a group of strongly interacting constituents. It has strong internal cohesion and weak external coupling, and its characters and causal relations can be conceptualized independently of its participants. Thus it is treated as an individual. Phonons and plasmons are examples of collectives, as are firms and households, which are organized groups of people.
Einstein once said that “thinking without the positing of categories and of concepts in general would be as impossible as is breathing in a vacuum.” His remark echoes a long tradition of Western philosophy arguing that our experience and knowledge are structured by a framework of categories or general concepts. The categorical framework contains our most basic and general presuppositions about the intelligible world and our status in it. It is not imposed externally but is already embodied in our objective thoughts as oxygen is integrated in the blood of breathing organisms. Since the categories subtly influence our thinking, it is as important to examine them as to test whether the air we breathe is polluted. Philosophers from Aristotle to Kant have made major efforts to abstract them from our actual thoughts, articulate, and criticize them.
This book continues my effort to uncover the categorical framework of objective thought as it is embedded in scientific theories and common sense. Scientific theories contain some of our most refined thoughts. They do not merely represent the objective world: They represent it in ways intelligible to us. Thus while their objective contents illuminate the world, their conceptual frameworks also illustrate the general structure of theoretical reason, an important aspect of our mind.
As a physicist turned philosopher, I naturally started by examining relativity and quantum mechanics. Many general concepts, including the familiar notions of object and experience, space–time and causality, seem problematic when physics pushes beyond the form of human observation and analyzes matter to its simplest constitutive level.
“Some are brave out of ignorance; when they stop to think they start to fear. Those who are truly brave are those who best know what is sweet in life and what is terrible, then go out undeterred to meet what is to come.” The ancient Greeks are not renowned for their sense of history, but these words – put into the mouth of Pericles for a funeral speech honoring fallen warriors in the first winter of the Peloponnesian War by Thucydides some thirty years later, when their city Athens lay in total defeat – reveal the fundamental temporality of the human being: commitments made in view of past experience and future uncertainty. The awareness that we have a past and a future opens a finite temporal horizon for each moment, frees us from the grip of the immediate present, enables us to push back the frontier at either end, to study history and to develop techniques of prediction. Temporality makes possible genuine action; we know that what we choose to do makes our history and changes our destiny. Despite its obscurity, the future is not a blinding fog bank; here and there we see possibilities illuminated by experience including the knowledge of the sweet and the terrible. The past constrains, but it is not a mere dead weight that conditions the behavior of an organism. It contributes to the future by opening our consciousness to a wider range of possibilities and by shaping the character of the person who chooses the aim of his life and adheres to it through vicissitudes.
The Calculus of Probability and Stochastic Processes
The probability calculus finds application in a large and expanding class of empirical theory in the natural, social, and human sciences. What general features do the diverse topics share that make them susceptible to representation by the same mathematics?
Chance is an easy answer; probability is intuitively associated with chance. Easy answers are often wrong. Chance is not among the primary or secondary concepts of the probability calculus; it does not even have a clear definition there. The clue to the common features of the sciences that employ the probability calculus lies in the structure of the calculus, not in its name; thus it is misleading to call its application the “laws of chance” or the “empire of chance.”
The first ideas introduced in the axioms of the probability calculus are that of part and whole and that of the relative magnitudes of the parts. Probability is technically defined as a relative magnitude. The most general feature of probabilistic systems is that they are composite. Various limit theorems and laws of large numbers show that the calculus is most powerful for large or infinite composite systems. Large composite systems are usually very complex. The probability calculus is adept in treating a special class of relatively simple system, the constituents of which are independent of each other. Independence is the chief meaning of randomness in the calculus, and it is often posited as an approximation that simplifies the theories of realistic composite systems.
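The role of independence in the laws of large numbers can be made concrete with a small simulation. The following is a minimal Python sketch, not anything from the text itself: the coin-flip model, the seed, and the function name are all illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def sample_mean(n):
    """Mean of n independent fair coin flips (1 = heads, 0 = tails)."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# As the composite system grows, the aggregate of independent parts
# settles near the expected value 0.5:
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

The fluctuations of the mean shrink roughly like 1/√n, which is one way to see why the calculus is most powerful for large or infinite composite systems of independent constituents.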
Organizational and Descriptive Levels of the World
The physicist Richard Feynman closed his lecture on the relations among various sciences by contemplating a glass of wine. He said if we look at the wine closely enough we see the entire universe: the optics of crystals, the dynamics of fluids, the array of chemicals, the life of fermentation, the sunshine, the rain, the mineral nutrient, the starry sky, the growing vine, the pleasure it gives us. “How vivid is the claret, pressing its existence into the consciousness that watches it! If our small minds, for some convenience, divide this glass of wine, this universe, into parts – physics, biology, geology, astronomy, psychology, and so on – remember that nature does not know it!”
Feynman did not mention the price tag, the symbol of the market economy. The division of intellectual labor he highlighted, however, underlies the success of science just as the division of labor generates the prosperity in which common people can enjoy fine wines from around the world. How is the division of intellectual labor possible? What are the general conditions of the world that allow so many scientific disciplines to investigate the same glass of wine? What does the organization of science tell us about the general structure of theoretical reason?
Our scientific enterprise exhibits an interesting double-faced phenomenon. On the one hand, scientists generally acknowledge that everything in the universe is made up of microscopic particles.
Dynamic systems are things undergoing processes. This and the following chapter examine two classes of processes and three mathematical theories applicable to them. Deterministic processes are governed by dynamic rules represented by differential equations (§§ 29–31). Stochastic processes are treated by theories that make use of the probability calculus but do not mention dynamic rules (§§ 35–38). A dynamic process can be characterized deterministically in a fine-grained description and stochastically in a coarse-grained description. Both characterizations are included in the ergodic theory, which employs both dynamic rules and statistical concepts (§§ 32–33). The mathematics that unites deterministic and stochastic concepts in a single dynamic system exposes the irrelevancy of the metaphysical doctrines of determinism and tychism (the dominion of chance) (§§ 34, 39).
A deterministic process follows a dynamic rule that specifies a unique successor state for each state of the system undergoing the process. The rule-governed change makes the behaviors of deterministic systems predictable and controllable to a significant extent. In recent decades, high-speed digital computers have enabled scientists to study dynamic systems previously deemed too difficult, notably nonlinear systems. Some of these systems exhibit chaotic behaviors that are unpredictable in the long run, because the slightest inaccuracy in the initial state is amplified exponentially, so that the error eventually overwhelms the result of the dynamic rule.
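The exponential amplification of initial error appears in even the simplest nonlinear rules. As a sketch, consider the logistic map x → 4x(1 − x), a standard chaotic system chosen here purely for illustration (the text names no specific model):

```python
def logistic_orbit(x0, steps):
    """Return the orbit [x0, x1, ..., x_steps] of the chaotic logistic map
    x -> 4 x (1 - x), a deterministic rule with a unique successor state."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(4.0 * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

# Two initial states differing by one part in a billion:
a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-9, 60)
gaps = [abs(x - y) for x, y in zip(a, b)]
print(max(gaps))  # the separation grows to macroscopic size
```

Although every step is perfectly rule-governed, the tiny initial discrepancy roughly doubles each iteration, so long-run prediction fails despite short-run accuracy.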
Stochastic processes are represented by distribution functions that give the number of stages in a process having certain characters.
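Such a distribution function can be estimated by counting outcomes over many realizations of a process. A minimal sketch, using a symmetric random walk as an illustrative stochastic process (the model and all names are my assumptions, not the text's):

```python
import random
from collections import Counter

random.seed(1)  # fixed seed for reproducibility

def walk_endpoint(steps):
    """Final position of a symmetric random walk of the given length."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

# Count how often each final position occurs over many realizations,
# then normalize the counts into a distribution function:
counts = Counter(walk_endpoint(10) for _ in range(20_000))
total = sum(counts.values())
distribution = {pos: n / total for pos, n in sorted(counts.items())}
print(distribution)  # symmetric, peaked near 0, only even positions
```

The distribution characterizes the process statistically without mentioning any dynamic rule that generates a particular realization.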
When scattering occurs in systems of large spatial extension like solids, a relationship appears between scattering and transport. This is particularly evident in the process of transport of thermal neutrons in nuclear reactors. On the one hand, neutrons may form beams for the study of matter and, on the other hand, as soon as the target presents spatially distributed scattering centres, the scattering process becomes a diffusion process, i.e., a process of transport as studied in kinetic theory. Clearly, both processes are similar and the difference only appears in the number and distribution of scatterers. Therefore, a fundamental connection exists between scattering theory and nonequilibrium statistical mechanics. The scattering approach to diffusion is also natural since diffusion is studied in finite pieces of material in the laboratory. Diffusion is a property of bulk matter which is extrapolated from experiments on finite samples to a hypothetical infinite sample.
Classically, the scattering on a spatially distributed target may be expected to be chaotic because the collisions on spherical scatterers have a defocusing character. Chaoticity will play an important role in such a connection. In the following, we shall elaborate in this direction with the tools developed in the previous chapters to obtain the so-called escape-rate formulas for the transport coefficients, which precisely express such a relationship (Gaspard and Nicolis 1990, Gaspard and Baras 1995).
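For orientation, the escape-rate formula for diffusion takes the following form in a standard statement of the Gaspard–Nicolis result (the notation below is mine, not this text's). For a diffusive system of size L with absorbing boundaries, the escape rate γ(L) is fixed on the macroscopic side by the slowest diffusion mode and on the microscopic side by the chaotic dynamics on the fractal repeller:

```latex
% macroscopic side: slowest decay mode of the diffusion equation
\gamma(L) \simeq D \left( \frac{\pi}{L} \right)^{2} ,
% microscopic side: chaos on the fractal repeller
\gamma(L) = \sum_{\lambda_i > 0} \lambda_i(L) \; - \; h_{\mathrm{KS}}(L) ,
% equating the two and letting the system grow:
D = \lim_{L \to \infty} \left( \frac{L}{\pi} \right)^{2}
    \Bigl[ \sum_{\lambda_i > 0} \lambda_i(L) - h_{\mathrm{KS}}(L) \Bigr]
```

where the λᵢ are the positive Lyapunov exponents and h_KS is the Kolmogorov–Sinai entropy, so the transport coefficient D is expressed directly in terms of the chaotic properties of the microscopic dynamics.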
We should mention here that Lax and Phillips (1967) proposed a scattering theory of transport phenomena based on the properties of classical dynamics.