The nineteenth century saw enormous advances in electrical science, culminating in the formulation of Maxwellian field theory and the discovery of the electron. It also witnessed the emergence of electrical power and communications technologies that have transformed modern life. That these developments in both science and technology occurred in the same period and often in the same places was no coincidence, nor was it just a matter of purely scientific discoveries being applied, after some delay, to practical purposes. Influences ran both ways, and several important scientific advances, including the adoption of a unified system of units and of Maxwellian field theory itself, were deeply shaped by the demands and opportunities presented by electrical technologies. As we shall see, electrical theory and practice were tightly intertwined throughout the century.
EARLY CURRENTS
Before the nineteenth century, electrical science was limited to electrostatics; magnetism was regarded as fundamentally distinct. In the 1780s, careful measurements by the French engineer Charles Coulomb established an inverse-square law of attraction and repulsion for electric charges, and electrostatics occupied a prominent place in the Laplacian program, based on laws of force between hypothetical particles, then beginning to take hold in France. The situation was soon complicated, however, by Alessandro Volta’s invention in 1799 of his “pile,” particularly as attention shifted from the pile itself to the electric currents it produced. Much of the history of electrical science in the nineteenth century can be read as a series of attempts to come to terms with the puzzles posed, and the opportunities presented, by currents like those generated by Volta’s pile.
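Coulomb's result can be stated compactly. The following is a sketch in modern notation; the symbolic form and the constant of proportionality are later conventions, not Coulomb's own:

```latex
% Coulomb's inverse-square law in modern notation: the force
% between two point charges q_1 and q_2 at separation r is
F = k \, \frac{q_1 q_2}{r^2}
% Like charges (q_1 q_2 > 0) repel; unlike charges attract.
% The constant k depends on the system of units -- the choice of
% units being itself a live nineteenth-century question.
```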
Throughout the late eighteenth and nineteenth centuries, there were two distinctly different ways of thinking about the earth – two different evidentiary and epistemic traditions. Such men as Comte Georges de Buffon and Léonce Elie de Beaumont in France, William Hopkins and William Thomson (Lord Kelvin) in the United Kingdom, and James Dwight Dana in the United States tried to understand the history of the earth primarily in terms of the laws of physics and chemistry. Their science was mathematical and deductive, and it was closely aligned with physics, astronomy, mathematics, and, later, chemistry. With some exceptions, they spent little time in the field; to the degree that they made empirical observations, they were likely to be indoors rather than out. In hindsight, this work has come to be known as the geophysical tradition. In contrast, such men as Abraham Gottlob Werner in Germany, Georges Cuvier in France, and Charles Lyell in England tried to elucidate earth history primarily from physical evidence contained in the rock record. Their science was observational and inductive, and it was, to a far greater degree than that of their counterparts, intellectually and institutionally autonomous from physics and chemistry. With some exceptions, they spent little time in the laboratory or at the blackboard; the rock record was to be found outside. By the early nineteenth century, students of the rock record called themselves geologists. These two traditions – geophysical and geological – together defined the agenda for what would become the modern earth sciences. Geophysicists and geologists addressed themselves to common questions, such as the age and internal structure of the earth, the differentiation of continents and oceans, the formation of mountain belts, and the history of the earth’s climate.
Until about 1840, the theory of probability was used almost exclusively to describe and to manage the imperfections of human observation and reasoning. The introduction of statistical methods to physics, which began in the late 1850s, was part of the process through which the mathematics of chance and variation was deployed to represent objects and processes in the world. If this was a “probabilistic revolution,” it was a multifarious and gradual one, the vast scope of which went largely unremarked. Yet it challenged some basic scientific assumptions about explanation, metaphysics, and even morality. For this reason, it sometimes provoked searching reflection and debate within particular fields, including physics, over what, in retrospect, appears as an important new direction in science.
At the most basic level, statistical method meant replacing fundamental laws whose action was universal and deterministic with broad characterizations of heterogeneous collectives. Statistics, whether of human societies or of molecular systems, involved a shift from the individual to the population and from direct causality to mass regularity. In social writings, it was linked to bold claims for scientific naturalism. Statisticians claimed to have uncovered a lawlike social order governing human acts and decisions that had so far been comprehended by Christian moral philosophy in terms of divine intentionality and human will. Their science seemed to devalue moral agency, perhaps even to deny human freedom. In other contexts, and especially in physics, statistical principles appeared, rather, to limit the domain of scientific certainty. They directed attention to merely probabilistic regularities, the truth of which was uncertain and approximate.
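The kind of "merely probabilistic regularity" that entered physics in the late 1850s is exemplified by the distribution of molecular speeds Maxwell proposed in 1859–60; stated here in modern notation as an illustration, not in Maxwell's own symbolism:

```latex
% Maxwell's distribution of molecular speeds (modern notation):
% the fraction of molecules of mass m, in a gas at temperature T,
% with speed between v and v + dv is
f(v)\,dv = 4\pi \left( \frac{m}{2\pi k T} \right)^{3/2}
           v^2 \, e^{-m v^2 / 2kT}\, dv
% Nothing is asserted about any individual molecule; only the
% population as a whole obeys the law -- the shift from direct
% causality to mass regularity described above.
```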
For over three millennia, cosmology had closer connections to myth, religion, and philosophy than to science. Cosmology as a branch of science has essentially been an invention of the twentieth century. Because modern cosmology is such a diverse field and has ties with so many adjacent scientific disciplines and communities (mathematics, physics, chemistry, and astronomy), it is not possible to write its history in a single chapter. Although there is no complete history of modern cosmology, there exist several partial histories that describe and analyze the main developments. The following account draws on these histories and presents some major contributions to the knowledge of the universe that emerged during the twentieth century. The chapter focuses on the scientific aspects of cosmology, rather than on those related to philosophy and theology.
THE NINETEENTH-CENTURY HERITAGE
Cosmology, the study of the structure and evolution of the world at large, scarcely existed as a recognized branch of science in the nineteenth century; cosmogony, the study of the origin of the world, even less so. Yet there was, throughout the century, an interest, often of a speculative and philosophical kind, in these grand questions. According to the nebular hypothesis of Pierre-Simon de Laplace and William Herschel, some of the observed nebulae were protostellar clouds that would eventually condense and form stars and planets in a manner similar to the way in which the solar system was believed to have been formed. This widely accepted view implied that the world was not a fixed entity, but in a state of evolution.
The modern historical period from the Enlightenment to the mid-twentieth century has often been called an age of science, an age of progress or, using Auguste Comte’s term, an age of positivism.
Volume 5 in The Cambridge History of Science is largely a history of the nineteenth- and twentieth-century period in which mathematicians and scientists optimistically aimed to establish conceptual foundations and empirical knowledge for a rational, rigorous scientific understanding that is accurate, dependable, and universal. These scientists criticized, enlarged, and transformed what they already knew, and they expected their successors to do the same. Most mathematicians and scientists still adhere to these traditional aims and expectations and to the optimism identified with modern science.
By way of contrast, some writers and critics in the late twentieth century characterized the waning years of the twentieth century as a postmodern and postpositivist age. By this they meant, in part, that there is no acceptable master narrative for history as a story of progress and improvement grounded on scientific methods and values. They also meant, in part, that subjectivity and relativism are to be taken seriously both cognitively and culturally, thereby undermining claims for scientific knowledge as dependable and privileged knowledge.
When medical technology met computers in the last third of the twentieth century, the conjoining triggered changes almost as radical as the ones that followed the discovery of x rays in 1895. As in that earlier revolution, the greatest change was in the realm of vision. Whereas x rays and fluoroscopy allowed physicians to peer into the living body to see foreign objects, or tumors and lungs disfigured by tuberculosis (TB), the new digitized images located dysfunction deep inside organs, like the brain, that are opaque to x rays. The initial medical impact of these new devices, like that of the x ray before them, was in diagnosis.
Wilhelm Conrad Röntgen’s (1845–1923) announcement of the discovery of x rays in 1896 was probably the first scientific media event. Within months, x-ray apparatus was hauled into department stores, and slot machine versions were installed in the palaces of kings and tsars, and in railroad stations for the titillation of the masses. Although the phenomenon had been discovered by a physicist who had no interest in either personal profit or any practical application, it was obvious to physicians and surgeons, as well as to those who sold them instruments, how the discovery could help make diagnoses.
The advantages seemed so great that, for the most part, purveyors of x-ray machines were either oblivious to the dangers of radiation or able to find alternative explanations for the burns and ulcerating sores that kept appearing. Even so, with the exception of military medicine, exemplified in the United States by the use of x rays during the Spanish-American War, the machines were not employed routinely in American hospitals for at least a decade after their discovery.
Scientists have always expressed a strong urge to think in visual images, especially today with our new and exciting possibilities for the visual display of information. We can “see” elementary particles in bubble chamber photographs. But what is the deep structure of these images? A basic problem in modern science has always been how to represent nature, both visible and invisible, with mathematics, and how to understand what these representations mean. This line of inquiry throws fresh light on the connection between common sense intuition and scientific intuition, the nature of scientific creativity, and the role played by metaphors in scientific research.
We understand, and represent, the world about us not merely through perception but with the complex interplay between perception and cognition. Representing phenomena means literally re-presenting them as either text or visual image, or a combination of the two. But what exactly are we re-presenting? What sort of visual imagery should we use to represent phenomena? Should we worry that visual imagery can be misleading?
Consider Figure 10.1, which shows the visual image offered by Aristotelian physics for a cannonball’s trajectory. It is drawn with a commonsensical Aristotelian intuition in mind. On the other hand, Galileo Galilei (1564–1642) realized that specific motions should not be imposed on nature. Rather, they should emerge from the theory’s mathematics – in this way should the book of nature be read. Figure 10.2 is Galileo’s own drawing of the parabolic fall of an object pushed horizontally off a table. It contains the noncommonsensical axiom of his new physics that all objects fall with the same acceleration, regardless of their weight, in a vacuum.
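Galileo's figure corresponds, in modern notation, to a simple composition of two motions. The algebra below is a reconstruction for the reader's benefit, not Galileo's own (he worked with geometric proportions):

```latex
% Horizontal motion at constant speed v, combined with vertical
% fall under uniform acceleration g (the same for all bodies
% in a vacuum, regardless of weight):
x = v t, \qquad y = -\tfrac{1}{2} g t^2
% Eliminating t yields the parabola that Galileo drew:
y = -\frac{g}{2 v^2}\, x^2
```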
Hephaestus, arms maker to the gods, was the only deity with a physical disability. Lame and deformed, he caricatured what his own handiwork could do to the human body. Not until the later twentieth century, however, did his heirs and successors attain the power to inflict such damage on the whole human race. Nuclear weapons lent salience to the long history of military technology. The Cold War contest between the United States and the Soviet Union attracted the most attention and concern, but in the second half of the twentieth century, science and technology transformed conventional warfare as well. Even small states with comparatively modest arsenals found themselves stressed by the growing ties and tensions between science and war.
The relationship between science, technology, and war can be said to have a set of defining characteristics: (1) State funding or patronage of arms makers has flowed through (2) institutions ranging from state arsenals to private contracts. This patronage purchased (3) qualitative improvements in military arms and equipment, as well as (4) large-scale, dependable, standardized production. To guarantee an adequate supply of scientists and engineers, the state also underwrote (5) education and training. As knowledge replaced skill in the production of superior arms and equipment, a cloak of (6) secrecy fell over military technology. The scale of activity, especially in peacetime, could give rise to (7) political coalitions; in the United States these took the form of the military-industrial complex. The scale also imposed upon states significant (8) opportunity costs in science and engineering that were often addressed by pursuit of (9) dual-use technologies. For some scientists and engineers, participation in this work posed serious (10) moral questions.
In this chapter I shall illustrate some of the general trends in the development of mathematical analysis by considering its most basic element: the concept of function. I shall show that its development was shaped both by applications in various domains, such as mechanics, electrical engineering, and quantum mechanics, and by foundational issues in pure mathematics, such as the striving for rigor in nineteenth-century analysis and the structural movement of the twentieth century. In particular, I shall concentrate on two great changes in the concept of function: first, the change from analytic-algebraic expressions to Dirichlet’s concept of a variable depending on another variable in an arbitrary way, and second, the invention of the theory of distributions. We shall see that it is characteristic of both of the new concepts that they were initiated in a nonrigorous way in connection with various applications, and that they were generally accepted and widely used only after a new basic trend in the foundation of mathematics had made them natural and rigorous. However, the two conceptual transformations differ in one important respect: The first change had a revolutionary character in that Dirichlet’s concept of function completely replaced the earlier one. Furthermore, some of the analytic expressions, such as divergent power series, which eighteenth-century mathematicians considered as functions, were considered as meaningless by their nineteenth-century successors. The concept of distributions, on the other hand, is a generalization of the concept of function in the sense that most functions (the locally integrable functions) can be considered distributions. Moreover, the theory of distributions builds upon the ordinary theory of functions, so that the theory of functions is neither superfluous nor meaningless.
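The gulf between the two concepts of function can be illustrated by two standard examples (chosen here for illustration, not drawn from this chapter): Dirichlet's function, a legitimate function under his arbitrary-dependence concept yet given by no single analytic expression, and Dirac's delta, which motivated the theory of distributions:

```latex
% Dirichlet's function: well defined as an arbitrary dependence
% of one variable on another, but expressible by no analytic formula:
D(x) = \begin{cases} 1 & x \in \mathbb{Q} \\[2pt]
                     0 & x \notin \mathbb{Q} \end{cases}
% Dirac's delta, by contrast, is no function at all; as a
% distribution it is defined by its action on a test function:
\langle \delta, \varphi \rangle = \varphi(0)
```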
In 1967, Per-Olov Löwdin introduced the new International Journal of Quantum Chemistry in the following manner:
Quantum chemistry deals with the theory of the electronic structure of matter: atoms, molecules, and crystals. It describes this structure in terms of wave patterns, and it uses physical and chemical experience, deep-going mathematical analysis, and high-speed electronic computers to achieve its results. Quantum mechanics has rendered a new conceptual framework for physics and chemistry, and it has led to a unification of the natural sciences which was previously inconceivable; the recent development of molecular biology shows also that the life sciences are now approaching the same basis.
Quantum chemistry is a young field which falls between the historically developed areas of mathematics, physics, chemistry, and biology.
In this chapter I address the emergence and establishment of a scientific discipline that has been called at times quantum chemistry, chemical physics, or theoretical chemistry. Understanding why and how atoms combine to form molecules is an intrinsically chemical problem, but it is also a many-body problem, which is handled by means of the integration of Schrödinger’s equation. The heart of the difficulty is that the equation cannot be integrated exactly for even the simplest of all molecules. Devising semiempirical approximate methods became, therefore, a constitutive feature of quantum chemistry, at least in its formative years.
Until the end of the nineteenth century, geometry was the study of space. As such, geometrical knowledge can be found in virtually all civilizations. Ancient Sumerians, Babylonians, Chinese, Indians, Aztecs, and Egyptians surveyed their lands, constructed their pyramids, and knew the relation among the sides of a right triangle. The Western geometrical tradition dates from Euclid's (fl. 295 B.C.E.) Elements. What marks this work as seminal is not so much its content per se as how that content was known.
Two tightly interwoven characteristics marked Euclidean geometrical knowledge. First, the objective characteristic was the strict correspondence between the terms of the geometry and the objects to which those terms referred. Euclid’s geometry dealt with something that we would call space. For example, the Euclidean definition “a point is that which has no part” neither explains the concept of point nor shows how to use it nor establishes its existence. It does, however, indicate what a point is. The definition has meaning; it refers to an aspect of space that we already know.
Euclidean axioms are self-evident truths; the postulates are obvious statements that must be accepted before the rest can follow. Like the definitions, the axioms and postulates are statements about space that make explicit what we already know. Euclid's axioms and postulates do more, however. They support and structure all of the subsequent argument; all of the rest of the subject is drawn out of or built upon these basics. The adequacy of this axiomatic structure to support all legitimate geometrical conclusions is the second, rational, characteristic of Euclidean knowledge.
When we consider issues in science and religion in the nineteenth century and even in subsequent years, we naturally think first of the evolutionary controversies that have commanded public attention. However, there are important ways in which developments in physical science continued to intersect with the interests of people of all religious beliefs. Indeed, the closer one approached the end of the twentieth century, the more the interaction between science and religion was dominated by topics involving the physical sciences, and the more they became as important to non-Christian religions as to various forms of Christianity. For the nineteenth century, most issues were new versions of debates that had been introduced long before. Because these reconsiderations were frequently prompted by new developments in physical science, forcing people of religious faith into a reactive mode, the impression grew that religion was increasingly being placed on the defensive. For a variety of reasons, this form of the relationship between the two fields changed greatly over the course of the twentieth century until, at the dawn of the third millennium of the common era, the intersection between science and religion is currently being informed both by new theological perspectives and by new developments in physical science.
Religion intersects with the physical sciences primarily in questions having to do with the origin, development, destiny, and meaning of matter and the material world. At the beginning of the period under review, the origin of matter itself was not regarded as a scientific question. The development of the cosmos, however, or how it had acquired its present contours and inhabitants, was a subject that had been informed by new telescopic observations and even more by the impressive achievements of Newtonian physical scientists of the eighteenth century. The Enlightenment had also produced fresh philosophical examinations of old religious questions and even of religious reasoning itself.
Quantum mechanics is a most intriguing theory, the empirical success of which is as great as its departure from the basic intuitions of previous theories. Its history has attracted much attention. In the 1960s, three leading contributors to this history, Thomas Kuhn, Paul Forman, and John Heilbron, put together the Archive for the History of Quantum Physics (AHQP), which contains manuscripts, correspondence, and interviews of early quantum physicists. In the same period, Martin Klein wrote clear and penetrating essays on Planck, Einstein, and early quantum theory; and Max Jammer published The Conceptual Development of Quantum Mechanics, still the best available synthesis.
Since then, this subfield of the history of science has grown considerably, as demonstrated in accounts such as the five-volume compilation by Jagdish Mehra and Helmut Rechenberg; the philosophically sensitive studies by Edward MacKinnon, John Hendry, and Sandro Petruccioli; Bruce Wheaton’s work on the empirical roots of wave-particle dualism; my own book on the classical analogy in the history of quantum theory; and a number of biographies.
These sources nicely complement one another. There have been, however, a couple of bitter controversies. Historians notoriously disagree about the nature of Planck’s quantum work around 1900. Whereas Klein sees in it a sharp departure from classical electrodynamics, Kuhn denies that Planck introduced any quantum discontinuity before Einstein. Here I take Kuhn’s side, although it follows from Allan Needell’s insightful dissertation that neither Klein nor Kuhn fully identified Planck’s goals.
Few branches of the physical sciences have had more of an impact on the twentieth-century world than radioactivity and nuclear physics. From its origins in the last years of the nineteenth century, the science of radioactivity spawned the discovery of hitherto unsuspected properties of matter and of numerous new elements. Its practitioners charted a novel kind of understanding of the structure and properties of matter, their achievements gradually winning wide acceptance. With its emphasis on the internal electrical structure of matter and its explanation of atomic and molecular properties by subatomic particles and forces, radioactivity transformed both physics and chemistry. Its offspring, nuclear physics and cosmic ray physics, consolidated and extended the reductionist approach to matter, ultimately giving rise to high energy physics, the form of physical inquiry that became characteristic of late-twentieth-century science: large, expensive machines designed to produce ever smaller particles to support ever more complex and comprehensive theories of the fundamental structure of matter.
The significance of nuclear physics extends far beyond the laboratory and even science itself, however. Practiced in only a handful of places in the 1930s, nuclear physics boomed during World War II, when it provided the scientific basis for the development of nuclear weapons. During the Cold War, nuclear and thermonuclear weapons were the key elements in the precarious military standoff between the superpowers. At the same time, the development of the civil nuclear power industry, of nuclear medicine, and of many other applications brought nuclear phenomena to the attention of a large public. Nuclear physicists came to enjoy enormous prestige and to command enormous resources for their science in the context of the nuclear state.
In the past century, the state has assumed a central role in fostering the development of science. Through direct action, such as subsidies and stipends, and indirect action, such as tax incentives, the modern nation-state supports research in universities, national laboratories, institutes, and industrial firms. Political leaders recognize that science serves a variety of needs: Public health and defense are the most visible, with research on radar, jet engines, and nuclear weapons among the most widely studied. Scientists, too, understand that state support is crucial to their enterprise, for research has grown increasingly complex and expensive, involving large teams of specialists and costly apparatuses. In some countries, philanthropic organizations have underwritten expenses. In communist countries, where the state took control of private capital in the name of the worker, the government was virtually the only source of funding.
The reasons for state support of research seem universal, bridging even great differences in the ideological superstructures that frame economic and political desiderata. Some reasons are tangible, such as national security, but some are intangible, including the desire to prove the superiority of a given system and its scientists through such visible artifacts as hydropower stations, particle accelerators, and nuclear reactors. Whether we consider tangible or intangible issues, capitalist or socialist economies, authoritarian or pluralist polities, the role of the state and its ideology is crucial in understanding the genesis of modern science, its funding, institutional basis, and epistemological foundations.
Language plays a key role in shaping the identity of a scientific discipline. If we take the term "discipline" in its common pedagogical meaning, a good command of the basic vocabulary is a precondition to graduation in a discipline. When disciplines are viewed as communities of practitioners, they are also characterized by the possession of a common language, including esoteric terms, patterns of argumentation, and metaphors. The linguistic community is even stronger in research schools, as a number of studies emphasize. Sharing a language is more than understanding a specific jargon. Beyond the codified meanings and references of scientific terms, a scientific community is characterized by a set of tacit rules that guarantee a mutual understanding when the official code of language is not respected. Tacit knowing is involved not only in the understanding of terms and symbols but also in the uses of imagery, schemes, and various kinds of expository devices. A third important function of language in the construction of a scientific discipline is that it shapes and organizes a specific worldview, through naming and classifying objects belonging to its territory. This latter function is of special interest in chemistry.
According to Auguste Comte, the method of rational nomenclature is the contribution of chemistry to the construction of the positivistic or scientific method. Although earlier attempts at a systematic nomenclature were made in botany, the decision by late-eighteenth-century chemists to build up an artificial language based on a method of nomenclature played a key role in the emergence of modern chemistry.