The Uppsala school in separation science, under the leadership of Nobel laureates The (Theodor) Svedberg and Arne Tiselius, was by all counts a half-century-long success story. Chemists at the departments of physical chemistry and biochemistry produced a number of separation techniques that were widely adopted by the scientific community and in various technological applications. Success was also commercial, and separation techniques such as gel filtration were an important factor behind the meteoric rise of the drug company Pharmacia from the 1950s. The paper focuses on the story behind the invention of gel filtration and the product Sephadex in the 1950s, and on the emergence of streamlined, commercially oriented separation science as a main activity at the department of biochemistry in the 1960s. The dynamics of this development are analyzed from the perspectives of moral economy and storytelling, framed by the larger question of the social construction of innovation. The latter point is addressed in a brief discussion of the uses of stories like the one about Sephadex in current research policy.
This article adopts a historical perspective to examine the development of Laboratory Animal Science and Medicine, an auxiliary field which formed to facilitate the work of the biomedical sciences by systematically improving laboratory animal production, provision, and maintenance in the post-Second World War period. We investigate how Laboratory Animal Science and Medicine co-developed at the local level (responding to national needs and concerns) yet was simultaneously transnational in orientation (responding to the scientific need that knowledge, practices, objects, and animals circulate freely). Adapting the work of Tsing (2004), we argue that national differences provided the creative “friction” that helped drive the formation of Laboratory Animal Science and Medicine as a transnational endeavor. Our analysis engages with the themes of this special issue by focusing on the development of Laboratory Animal Science and Medicine in Norway, which both informed wider transnational developments and was formed by them. We show that Laboratory Animal Science and Medicine can only be properly understood from a spatial perspective; whilst it developed and was structured through national “centers,” its orientation was transnational, necessitating international networks through which knowledge, practice, technologies, and animals circulated.
One of the common characteristics of science, technology, and medicine is their ambition to move, epistemologically and organizationally, beyond the confines of nation states. In practice, however, they develop differently in different countries and regions. Scientists, engineers, and physicians are constrained as well as enabled by national boundaries and specific cultures. The cultural status of such practices is, in turn, influenced by a country's history, politics, and the view of the role of science, technology, and medicine in society. It is the relation between a specific region, Scandinavia, and the history of science, technology, and medicine within this region that this issue of Science in Context sets out to explore. But what is this “Scandinavia”? To many, Scandinavia, besides being a specific geographical region of three countries (Denmark, Sweden, and Norway) with entwined histories and closely related languages, is a way of denoting a specific style or movement. “Scandinavian design” is renowned for three interrelated features: minimalism or simplicity, functionalism, and “design to the people,” i.e., functional products for the average citizen (Beer 1975; Glambek 1997; Fallan 2012).
During the 1950s it became apparent that antibiotics could not conquer all microbes, and a series of tests was developed to assess the susceptibility of microbes to antibiotics. This article explores the development and standardization of one such testing procedure, which became dominant in the Nordic region, and how the project eventually failed in the late 1970s. The standardization procedures amounted to a comprehensive scheme, standardizing not only the materials used but also the methods and the interpretation of the results. Focusing on Sweden and Norway in particular, the article shows how this comprehensive standardization procedure accounted for several co-dependent factors and demanded collaboration within and across laboratories. Whereas the literature on standardization has focused mostly on how facts and artefacts move within and across laboratories, I argue for the importance of also attending to regions and territories. More particularly, while arguing that the practices, ideals, and politics related to what has been called the “Nordic welfare state” contributed to the design of the standardized procedure in the laboratory, I also argue that Scandinavia was drawn together as a unified region with and by these very same practices.
In the 1920s there were still very few fossil human remains to support an evolutionary explanation of human origins. Nonetheless, evolution as an explanatory framework was widely accepted. This led to a search for ancestors on several continents, marked by fierce international competition. With so little fossil evidence available and the idea of a Missing Link as a crucial piece of evidence in human evolution still intact, many actors participated in the scientific race to identify the human ancestor. The curious case of Homo gardarensis serves as an example of how personal ambitions and national pride were deeply interconnected, and of how scientific concerns were sometimes slighted, in interwar palaeoanthropology.
The Scandinavian countries share a solid reputation as longstanding contributors to top-level Arctic research. This received view, however, veils some deep-seated contrasts in the ways that Sweden, Norway, and Denmark have conducted research in the Arctic and the North Atlantic. In this paper it is argued that instead of focusing on the geographical determinism of science – the fact that the Arctic is close to, indeed part of, Scandinavian territories – we should look more closely at the geopolitics of science to understand the differences and similarities between these three Nordic countries. Through case studies of, mainly, Swedish Arctic and North Atlantic glaciology from the 1920s through to the 1940s, and of Norwegian preparations in the 1950s for the International Geophysical Year 1957/58, the paper demonstrates how different styles of research – research agendas, methodological choices, collaborative patterns, international networks, availability of infrastructure, relations to politics and power – are conditioned by economic interests and by strategic and geopolitical trajectories, whether these are explicitly placed at the forefront of scientific priorities, as in the case of Norway in the 1950s, or manifestly disregarded in the name of scientific internationalism, as in the case of Swedish glaciology. The case of Danish colonial science in Greenland is drawn into this analysis only cursorily, but it corroborates the overall thesis. The analysis of this wider science politics of Scandinavian circumpolar science is set against a brief introductory backdrop of Arctic science historiography. Its chief message is that the analysis of polar science using modern theories and methods from the social studies of science is comparatively recent, and that the full potential of merging the literature on Arctic science and exploration with that on security, geopolitics, indigenous voices, and the politics of nationalism is yet to be realized.
At the turn of the twentieth century the Norwegian market flourished with milk products intended for infants. But medical doctors argued in favor of “going back to nature”: women ought to breastfeed their children. This paper explores how a re-naturalization of mother's milk emerged within experimental medicine. The prescribed “natural way” did not develop within medicine alone. The paper demonstrates how the natural developed within a relational space of different versions of milk: the free-market milk, the dirty and decaying milk, and the non-nutritional milk. But why did Norwegian mothers, in contrast to developments in, for instance, the US, continue to breastfeed their infants? Drawing on the work of the leading pediatrician Theodor Frølich, the paper suggests that this may in part be explained by the development of a distinct version of care: a matter-of-fact, pragmatic, and flexible version of care that nevertheless came to enact mother's milk as the supreme form of nutrition, to which there was hardly a competing or healthy alternative. “The natural way” became a thought style and was made integral to everyday culture.
Since the 1970s, Danish population registries have increasingly been used for research purposes, in particular in the health sciences. Linked with a large number of disease registries, these data infrastructures became laboratories for the development of both information technology and epidemiological studies. Denmark's system of population registries had been centralized in 1924 and was further automated in the 1960s, with individual identification numbers (CPR-numbers) introduced in 1968. The ubiquitous presence of CPR-numbers in administrative routines and everyday life created a continually growing data archive of the entire population. The resulting national-level database made possible unprecedented record linkage, a feature epidemiologists and biomedical scientists used as a resource for population health research. The specific assemblages that emerged with their practices of data mining were constitutive of registry-based epidemiology as a style of thought, and of a distinct relationship between science, citizens, and the state that emerged as “Scandinavian.”
Standardization provided a stamp of approval and a level of acceptance and stability. The characteristics of the language would no longer be subject to the whims of a single organization. There would be an industry-wide voice in the definition of features and the timing of their introduction.
– Martin Greenfield, “History of FORTRAN Standardization,” 1982
By the middle decades of the twentieth century, the architects and engineers of communication networks understood that standardization provided a powerful strategy and tool kit for control. In the early twentieth century, AT&T chief engineer Bancroft Gherardi and his colleagues at the top of the Bell System hierarchy nurtured a corporate culture – and an ideology of standardization – that privileged stability and caution over radical technological or organizational change. The question of control, as we have seen, was at the core of tensions between telephone engineers in local Bell operating companies and AT&T executives in New York. Even as Gherardi and his fellow executives struggled mightily to centralize control, they faced a wide range of recalcitrant obstacles, such as resistance from the Bell operating companies and competing firms, adversarial state and federal regulators, and rapid changes in the scientific and technological foundations of their industry.
From the 1930s to the 1970s, critiques of AT&T’s style of centralized control arose from a variety of sources in American society. They are noteworthy as critiques – and not mere criticisms – because they were more elaborate than ordinary gripes about high rates or other aspects of telephone service. They did not only criticize the status quo; these critiques also began to build alternatives to the status quo that would take power out of the hands of Bell System employees and put it into the hands of a more diverse group of engineers, regulators, and users.
By the mid-1990s, the Internet’s advocates had learned to cast Internet history in a most flattering light. They downplayed its autocratic and closed world origins, belittled the work undertaken by their competitors in Open Systems Interconnection (OSI), and reimagined the Internet as an open system. In their revisionist hands, the Internet standards process became a novel form of distributed control and participatory democracy that emerged organically from the interactions of Internet engineers. Internet users, dazzled and enchanted by their sublime new toy, searched for secrets of the Internet’s astonishing success that they could apply in other realms. A chorus of academics and policy makers unwittingly latched on to an origin myth – that the Internet was a meritocracy, the product of “nerds” and “hackers” who collaborated through a decentralized and participatory design process.
The Internet standards process was an especially rich source of inspiration for legal scholars who offered new interpretations of legal philosophy, the process of innovation, and the role of nongovernmental and transnational regulatory regimes in the twenty-first-century global economy. In many cases, the lessons gleaned from the Internet success story flew in the face of accepted scientific and engineering practice – evidence, it would seem, that the Internet’s emergence truly marked a new kind of technology-enabled society. For example, Harvard law professor Jonathan Zittrain generalized from his reading of the Internet’s architectural history to propose a “procrastination principle” built on the assumption that “most problems confronting a network can be solved later or by others.” This principle, Zittrain argued, “suggests waiting for problems to arise before solving them.”
Americans of all ages, all conditions, and all dispositions constantly form associations. They have not only commercial and manufacturing companies, in which all take part, but associations of a thousand other kinds, religious, moral, serious, futile, general or restricted, enormous or diminutive…. Wherever at the head of some new undertaking you see the government in France, or a man of rank in England, in the United States you will be sure to find an association.
– Alexis de Tocqueville, Democracy in America, 1840
Had he been able to visit America in 1900, Alexis de Tocqueville would have seen that Americans continued to “constantly form associations.” Americans continued to form commercial and manufacturing companies, as Tocqueville witnessed in the 1830s. Moreover, representatives of those companies formed additional associations to pursue common interests and ambitions. Through these combinations of associations, Americans developed technical standards to harmonize crucial aspects of industrial production.
Such voluntary industrial and corporate associations are by and large missing from histories of the late nineteenth-century American industrial economy. Histories of this era of American capitalism tend to focus on two general types of activity: disorganized and cutthroat competition in markets, and hierarchical command structures that developed within large corporations. Only recently have historians paid more attention to a third type of activity, which they conceptualize as hybrids of markets (which facilitate singular transactions) and hierarchies (which provide permanent structures for repeated transactions). William Cronon’s discussion in Nature’s Metropolis of the Chicago Board of Trade provides perhaps the best-known example of this hybrid form of coordination mechanism in the mid-nineteenth century. Cronon explains how the Board of Trade’s grading system decoupled grain ownership from its market price by creating standards of quality for different types of wheat. It thus functioned as an institution that created a level playing field for all grain merchants and prevented the possibility of control by a few powerful ones.
The standards elephant of today - it’s right here.
As the Internet and its community grows, how do we manage the process of change and growth?
Open process – let all voices be heard.
Closed process – make progress.
Quick process – keep up with reality.
Slow process – leave time to think.
Market driven process – the future is commercial.
Scaling driven process – the future is the Internet.
We reject: kings, presidents and voting.
We believe in: rough consensus and running code.
– David D. Clark, “A Cloudy Crystal Ball,” 1992
By all accounts, David Clark’s plenary address at the July 1992 meeting of the Internet Engineering Task Force (IETF) was a transformative moment in the history of networking. There was a palpable tension in the sessions leading up to Clark’s talk, which occurred at the end of the final session on the fourth day of a contentious five-day meeting. Clark, a senior research scientist in MIT’s Laboratory for Computer Science, had been a leading member of the Internet technical community since 1976. Clark’s title, “A Cloudy Crystal Ball,” suggested uncertainty, but his “alternate title” – “Apocalypse Now” – better encapsulated the tone of his remarks. He used the occasion to cajole Internet engineers into paying more attention to technical problems that threatened the Internet’s continued growth and success. Like many people in the audience, Clark worried about the strains that the commercialization of the early 1990s was placing on the technical capabilities of the Internet. However, he was more deeply concerned by fundamental problems with the security of the Internet, as one of his slides made plain: “Security is a CRITICAL problem. Lack of security means the END OF LIFE AS WE KNOW IT.”