This paper examines the relationship between the Admiralty and the Royal Observatory, Greenwich, by studying the roles of the Hydrographer and the Astronomer Royal as they worked together on the problem of communicating accurate time to ships. The collaboration between the Astronomer Royal and the Hydrographer directed the development of time balls and other visual signals throughout their period of use in Britain and its colonies. This paper focuses on the time ball and clock developed by the Astronomer Royal William Christie and the Hydrographer William Wharton as a key example of significant and productive collaboration between the two institutions. The paper also highlights the importance of the telegraph system to visual time signals. The ability to drop a time ball at a distance from an observatory created significant opportunities to improve the time signal service to mariners and stimulated further innovation in this field.
In May 2003, from the Baikonur launchpad in the Central Asian deserts of Kazakhstan, British scientists fired a Russian Soyuz-Fregat rocket to launch a probe called the Mars Express, intended to determine whether recognizable chemical signs of life could be found in the thin atmosphere and dusty rocks of the red planet. In 1971, the Soviets had been the first to land a probe on Mars, and they were followed by the American Viking missions in 1976. In January 2004, the U.S. National Aeronautics and Space Administration (NASA) landed the mobile rovers Spirit and Opportunity on Mars. These represented huge and dangerous efforts. Of thirty previous missions to Mars, twenty had gone seriously wrong. In 2003, a British probe intended to explore the Martian surface, called – significantly – Beagle-2, failed to arrive on the surface. The European mission cost 300 million euros and the American mission ten times as much. Behind all these efforts lies the necessity of securing wide political and public support. Thus, the space missions are performed in “full view of the public.” As Alan Wells, director of space research at the University of Leicester, put it, “We are breaking new ground in the public presentation of space science.” His duty, in his words, is to be a professor of public relations as well as planetary science.
Among the modern life sciences, physiology trails only the evolutionary sciences in the attention it has received from historians. Lamarck, Darwin, and Mendel may be better known than the heroes of modern physiology, but names such as François Magendie (1783–1855), Johannes Müller (1801–1858), Claude Bernard (1813–1878), Hermann von Helmholtz (1821–1894), Ivan Pavlov (1849–1936), and Charles Sherrington (1857–1952) require little introduction for those who read more than occasionally in the history of science. Physiology may have attracted such attention because it has been widely viewed as the first of the modern biological disciplines to emerge from traditional approaches to the phenomena of life embodied in medicine and natural history. Furthermore, physiology allowed historians of science of the first generation after World War II to develop a series of narratives that reflected their broader concerns about the nature and significance of modern science and about how to write its history. If the historiography of the physical sciences in the 1950s and 1960s found its normative models in the “Scientific Revolution” of the sixteenth and seventeenth centuries, so, too, in those decades did historians of the life sciences locate their normative models in nineteenth-century physiology.
Buoyed by the optimism about understanding the natural world that grew out of Isaac Newton’s version of the mechanical philosophy, combined with the excitement of discovering the artifacts of that world generated by naturalists such as Carl Linnaeus, Abraham Werner, and Georges Buffon, natural philosophers turned increasingly to studying nature in nature by the end of the eighteenth century and the beginning of the nineteenth century. Certainly the maturation of the cabinet tradition in the form of emerging national museums (Muséum d’Histoire Naturelle, British Museum) and national botanical gardens (Royal Botanical Gardens at Kew) at this same time underscores the importance of learning from the natural world. Furthermore, continued overseas expansion and exploration, especially in North America, the Indian subcontinent of Asia, and Australia, heightened European interests in this direction.
Many of these same eighteenth-century motivations continued into the nineteenth century and, moreover, may be described after the model of scientific transmission and development offered by George Basalla, which he developed by examining the early history of American science vis-à-vis science in England. It is certainly appropriate to borrow from and to expand on Basalla, for much of the eighteenth-century interest in the natural world was exhibited by Europeans who observed nature outside of Europe, primarily within their colonial holdings. They collected specimens on voyages of discovery and recruited local colonists to collect specimens that could later be sent back to European museums and universities following the return of the imperial explorers to their mother country (see MacLeod, Chapter 3, this volume).
Environmentalism is a moving target, always changing position and appearance. Some see it as a state of mind or a way of life; others assume it is a critique of contemporary society or a political platform. Even its single most widely understood meaning, concern about the state of, and human impacts on, the natural environment, has diverse implications, from merely recycling cans and bottles to rejecting industrial society. Environmental values vary across cultures: One society’s bustling, prosperous city is another’s smog-choked hell; a stagnant swamp fit for draining is also a diverse wetland worth preserving. Where consensus has formed on environmental problems, their definition as matters of personal or societal responsibility nevertheless varies across social contexts.
Clearly, a linear, sequential history of environmentalism is not possible. As a result, studies of environmentalism have often focused on specific places: the American West, New England, Canada, Britain, Sweden, or India. Conversely, historians attempting a general account have sometimes been tempted to constrain this diversity within a single narrative, grounded in a search for the “roots” or “origins” of environmentalism.
In seeking these roots, historians have most often found them in individuals such as Gilbert White, Henry David Thoreau, John Muir, and George Perkins Marsh or, more recently, Rachel Carson or Aldo Leopold. Recent studies have provided a wider view of these origins by demonstrating how ideas have emerged from colonial contexts to eventually shape European perspectives or by showing the significance of places and disciplines not usually considered central to environmentalism, such as industrial hygiene and the “workplace roots of environmentalism.”
In January 1994, the journal Scientific American published a review essay on cancer that opened with a quotation from John Bailar III, a famous epidemiologist at McGill University, claiming that the “war on cancer” had not been won. Bailar was referring to the anticancer campaign launched by President Richard Nixon in 1971 as a civilian alternative to the Vietnam War and as a Republican follow-up to President Lyndon Johnson’s War on Poverty. Bailar’s argument rested on statistical data from the National Cancer Institute suggesting that U.S. cancer death rates, adjusted for the aging population, went up seven percent during the twenty-five years of a war that was waged by means of research investments – both biological and clinical.
That article reminds us that cancer remains the visible, frightening, and “scientific” disease it has been for more than a century. More than tuberculosis or syphilis, which were considered conquered after World War II, cancer was the scourge of the twentieth century. From the late nineteenth century, the growing incidence of various types of tumors, as well as the limitations of existing therapies, has been at the center of Western medical discourses increasingly concerned with relatively wealthy and aging populations. Since then, experts have viewed the formation of tumors as a problem of unlimited multiplication of cells, a process that might be controlled by physical or chemical means derived from a better understanding of cell growth and cell division.
Science in the nineteenth century underwent major transformations. The immense growth of knowledge encouraged subdivision into increasingly narrow and self-contained areas of specialization. Science changed from an area of learning in which it was exceptional for people to be paid to pursue it into one in which large numbers were receiving instruction in schools and universities with the expectation of making their living from it. Science turned into a substantial profession, but the process of professionalization was not automatic. In most developed countries, there were conditions inimical to it, and when the change eventually took place, it did so comparatively abruptly and generated considerable tension. This compression has been a boon to historians, for it provides them with a clearly marked stratum dividing the preexisting world of science from the very different one that emerged shortly afterward.
THE PREPROFESSIONAL ERA
Until the 1880s, it is unhelpful and misleading to employ the categories “amateur” and “professional.” Whereas “amateur” has come to acquire a derogatory overtone, especially in the United States, it was the “professional” who was despised in the early nineteenth century. A professional was someone who received money to do something that others did for pleasure, and to put one’s labor up for hire placed one in the position of a servant. This aristocratic prejudice had trickled down into the upper middle class and restricted the range of occupations members of that class could follow.
For almost a century, entrepreneurs, policymakers, and scientists have used the word biotechnology to describe imminent revolutions based on the application of biology. Yet although novel clusters of techniques, products, and promises were clearly momentous to visionaries, they repeatedly failed to achieve their foreseen potential.
The old frustration seemed to have been overcome in 1980 when the U.S. Supreme Court permitted the patenting of a transgenic bacterium that could consume oil spilled at sea. Many were enthused by the new development, and foreign governments felt that this was an American challenge they could not afford to duck. Although a few quaked before this new appropriation of science, the majority of commentators assumed that finally the subdivision and exploitation of the world of primitive living beings was about to begin. The possibility of patenting new organisms made by means of modern biological techniques and, in particular, the methods of recombinant DNA that had first been developed in the early 1970s would, it seemed, open up hitherto undreamed of possibilities. Rather than relying on traditional breeding, which entailed combining genes of animals and plants within the same species, genes could now be combined from across the entire spectrum of living organisms. At this moment, when an oil crisis suggested that old energy-intensive industries had had their day, and the success of electronics had demonstrated the possibility of a new industrial revolution, every major country created its own biotechnology plan.
“Experiments,” observed French physiologist Claude Bernard in An Introduction to the Study of Experimental Medicine (1865), “may be performed on man, but within what limits?” In the nineteenth and twentieth centuries, answers to Bernard’s rhetorical question have differed as physicians, scientists, and soldiers have sought to define the appropriate conduct of human experimentation. Whereas Bernard argued that “The principle of medical and surgical morality consists in never performing on man an experiment which might be harmful to him to any extent, even though the result might be highly advantageous to science,” German and Japanese physicians in the Second World War performed experiments on concentration camp inmates and prisoners that were calculated to maim and kill their subjects. Although the limits of ethical experimentation have wide, and in some cases grotesque, variations, physicians and scientists have never been free to experiment at will and without regard for the welfare of research subjects – animal and human. In Nazi Germany, in a hideous reversal of the usual norms regarding human experimentation, Nazi doctors were able to use concentration camp inmates as experimental subjects without restraint, but they were restricted by law in their use of laboratory animals. Part of this chapter explores the ways in which the practice of human experimentation has been constrained in the last two centuries and the groups – physicians, legislators, activists, and members of the lay public – who have participated in defining and implementing limits on human subject research.
The only constant characteristics of a research area that we can, anachronistically, describe as microbiology might be the minute size of the organisms it studies and its reliance on instruments and a set of techniques that allow us to see beyond the range of what is visible to the naked eye. Stability or continuity is difficult to find elsewhere – whether in the range and classification of microorganisms, in the types of questions asked about them, in the theoretical or practical goals of research, in the institutions in which investigations were conducted, or in the composition of the group of scientists to whom these microscopic organisms were of interest.
The range of organisms encompassed by these investigations has changed many times during the last two centuries. Relatively undifferentiated infusoria gave place to protists and schizomycetes, and later to protozoa, bacteria, fungi, and algae; the invisible filterable viruses, obligate parasites, and lytic principles appeared only temporarily, to be replaced by rickettsia and viruses. These microorganisms were investigated by a heterogeneous assembly of amateurs, botanists, zoologists, biologists, pathologists, biochemists, geneticists, medical doctors, sanitary engineers, agricultural scientists, veterinarians, public health investigators, biotechnologists, and so on. Specialisms and disciplines devoted to specific groups of microorganisms – bacteriology, virology, protozoology, and mycology – have disparate though often overlapping institutional and intellectual histories, and although the term “microbiology” dates from the last decades of the nineteenth century, it did not come to designate a discipline that could claim its own sphere of concern until after the Second World War.
The relation between geology and industry remains a significant, challenging, yet overlooked topic within the history of the earth sciences. Anyone surveying the subject confronts the glaring fact that very little has been written on it either by historians or geologists themselves. Industry is nevertheless important to understanding the history of geology if for no other reason than the tremendous amount of research that scientists (and engineers) have done on mineral resources. It would have been difficult to find a prominent nineteenth- or twentieth-century geologist who was unfamiliar with coal, petroleum, iron, copper, silver, or gold, not to mention building stones, water, and salt. Practically every textbook had some description of the origin and occurrence of useful minerals, whether the author was studying them or not. On the surface, economic resources seem to occupy a central place in geology, but explaining industry’s influence on the development of the science is another matter entirely.
This chapter addresses the relation between geology and industry from four perspectives: mining schools, government surveys, private surveys, and industrial science. The first two sections discuss institutions that served as intermediaries between science and commerce. The third section addresses the settings and conditions in which geologists worked directly for private enterprise, and the last section treats the emergence of new research fields that industry encouraged. This analytical framework follows a rough chronology, beginning in the late eighteenth century and ending in the mid-twentieth, which itself reveals the increasing influence of industry on geology.
Geophysics is the branch of experimental physics concerned with the earth, atmosphere, and hydrosphere. It includes such fields as meteorology and oceanography, but attention is restricted here to geodesy, gravimetry, seismology, and geomagnetism. Geochemistry is the study of the distribution and migration of the different elements in the earth, oceans, and atmosphere and therefore involves the chemical analysis of minerals, rocks, mineral solutions, and the atmosphere. Modern geochemical research makes much use of studies of the radioisotopes of the different elements, which are also used for radiometric dating. In such work, the boundaries between geophysics, geochemistry, and geology are indistinct.
Geophysics is an important field, both practically, as in earthquake studies, and theoretically, as exemplified by geophysicists’ contributions to the establishment of plate tectonics (see Frankel, Chapter 20, this volume). Geochemistry is likewise important: practically, as in geochemical prospecting, and theoretically, especially regarding the earth’s origin and cyclic processes, some involving living organisms. Neither field has attracted the historical attention it deserves, although there are a number of useful sources that offer information and insights of relevance. Because these areas are less well known than other aspects of the earth sciences, this chapter will include outlines of the major scientific developments before indicating what is known about their history.
Universities have been important to biology not merely by providing it with a home. Particular features of the university setting had a substantial impact on both the proliferation of new fields in the nineteenth century and the central questions that came to characterize those fields. The history of biological thought and practice must therefore make room for institutional history. Moreover, writing the history of “biology” poses particular problems. Unlike many subjects in the natural sciences (e.g., chemistry, physics) or the humanities (e.g., history, philosophy), “biology” has rarely been institutionalized as a single subject. Whenever the life sciences experienced growth within the universities, they displayed a remarkable tendency to be institutionalized separately rather than to remain together as an internally differentiated whole. Just why this has occurred is not clear, but its historiographical implication is that “biology” is best conceived as a collection of loosely connected areas of inquiry (I will call them “fields”) sharing little more than their concern with living organisms.
That said, the status that these fields have occupied within the university has varied considerably. Some of them (e.g., zoology or botany) were disciplines in the sense that they were central to the curriculum and were institutionalized in separate departments (or “institutes”) at most universities. But many fields were established for long periods of time without ever acquiring disciplinary status; for convenience, I will call them specialties (e.g., morphology, embryology, or cytology).
Zoology, the study of the animal kingdom, is no longer seen as a coherent branch of science. The specialization of the twentieth century has seen zoology’s territory divided among a host of separate disciplines. But in the nineteenth century that specialization was only beginning, and many naturalists would still have called themselves “zoologists,” their primary concern being to gain an understanding of the animal kingdom as a whole, its diversity of structure and function, and the ways in which its component species were related.
Exploration and the description of new species continued to drive home the sheer diversity of nature: Zoologists searched for the “natural system” of relationships but disagreed over how to uncover it. Philosophical naturalists started from a priori assumptions and abstract principles, searching for unity and symmetry in the array of natural forms. Many were influenced by various forms of idealist philosophy proclaiming that nature was the manifestation of a rational Mind. Others adopted a more empirical approach, starting from the study of particular cases; these naturalists were more likely to include information on the habits, distribution, and ecological relationships of species. There were constant disagreements over the relative significance of “form” (internal biological constraints) and “function” (adaptation to the environment) in determining the structure of individual species. The advent of evolutionism transformed biologists’ ideas on the nature of the relationships between species, although the theory’s impact on practice is less easy to define.
The development of pathology, with all its complex roots and branches, seems harder to account for than that of most of the disciplines we take to underlie medical education and “scientific medicine.” The effort requires an array of cross-cutting bits of medical history, all working at different levels. A full account needs to include pathological museums and the collecting impulse, and the development of disease concepts as far as those concepts came to be localized in bodies (or, for that matter, generalized in bodies). It also needs to include the developing patterns of “disease itself,” however we mean that, for “the Seats and Causes of disease” that Giovanni Battista Morgagni and later doctors described in bodies depended as much on changing epidemiological patterns as on the tools with which the doctors variously confronted the diseased. New diseases such as severe acute respiratory syndrome (SARS) or acquired immune deficiency syndrome (AIDS), human ritual behaviors, family lives, Homo sapiens’ relations with other animals and with natural environments, and (especially) urbanization with its attendant concentration of illnesses in hospitals – all helped condition the “pathology” that would be “seen” by medical observers.
The earth sciences underwent a revolution during the 1960s, ending nearly sixty years of controversy over the reality of continental drift. Before 1966, few workers accepted continental drift as a working hypothesis; most earth scientists preferred fixist theories. Fixist theories maintain that the continents and oceans have not appreciably changed their positions relative to each other, whereas theories within the continental drift tradition, hereafter referred to as the mobilist tradition, maintain that relative displacement occurs. However, most earth scientists became mobilists soon after the confirmation of seafloor spreading, and plate tectonics, the modern theory of continental drift, remains the reigning theory in the earth sciences. The aim of this chapter is to outline the major historical aspects of the plate tectonics revolution.
THE CLASSICAL STAGE OF THE MOBILIST CONTROVERSY: FROM ALFRED WEGENER TO THE END OF THE SECOND WORLD WAR
The prospect of finding an overall geological theory looked promising to many earth scientists during the 1880s, for they believed that Eduard Suess (1831–1914), the great Austrian geologist, had provided them with a basic framework. His fixist theory was secular contractionism, the reigning tradition during the latter half of the nineteenth century. Suess maintained that the earth has been contracting since its initial formation as it cooled. He postulated that tensions are produced in the crustal layer because the earth’s inner layers contract more rapidly than its crust.