Theoretical astrophysics emerged as a significant research programme with the construction of a series of stellar models by A. S. Eddington. This paper examines the controversies surrounding those models as a way of understanding the development and justification of new theoretical technologies. In particular, it examines the challenges raised against Eddington by James Jeans, and explores how the two astronomers championed different visions of what it meant to do science. Jeans argued for a scientific method based on certainty and completeness, whereas Eddington called for a method that valued exploration and further investigation, even at the expense of secure foundations. The first generation of stellar models depended on the validity of Eddington's approach – the physics and many of the basic facts of stars were poorly understood and he justified his models through their utility for future research and their robustness under challenging use. What would become theoretical astrophysics depended heavily on this phenomenological outlook, which Jeans dismissed as not even science. This was a dispute about the practice of theory, and it would be this methodological debate (rather than the emergence of new facts or the incorporation of new theory) that made theoretical astrophysics viable.
In the first half of the twentieth century, hormones took pride of place as life's master molecules and the endocrinologist took precedence over the geneticist as the scientist offering the means to control life. But, as with molecular genetics and biotechnology today, the status of endocrinology was not based solely on contemporary scientific and medical practices. To a high degree it was also reliant on expectations or visions of what endocrinologists would soon be able to do. Inspired by the approach of social studies of techno-scientific expectations, the aim of this article is to explore some of the great expectations connected to the development of endocrinology in the 1930s. The analysis is based on popular books written by the American physician and endocrinologist Louis Berman. The paper argues that Berman thought not only that it was perfectly possible to understand human nature through hormone analysis but also that endocrinologists would be able to control, design and ‘improve’ humans by using hormone replacement therapy. Furthermore, in contrast to most of the eugenics of his time, Berman suggested that the whole population of the world should be improved. As a political activist he wanted to contribute to the development of new human beings, ‘ideal normal persons’, thereby reaching an ‘ideal society’. That hormone replacement therapy could involve risks was something he seems not to have taken into account.
In 1945 Vannevar Bush proposed a machine that acted as a “supplement” to memory and met the particular information needs of its user. Because this “memex” recorded “trails” of selected documents, it has been seen as a precursor to hypertext. However, this paper considers Bush in relation to earlier concerns about memory and information, via the ideas of Robert Hooke and John Locke. Whereas Bush modeled the memex on the associative processes of natural memory, Hooke and Locke concluded that an external archive had to allow collective reason to overcome the limits of individual memory, including its tendency to freeze and repeat patterns of ideas. Moreover, they envisaged an institutional archive rather than one controlled by the interests and mental associations of an individual. From this early modern perspective, Bush's memex appears as a personal device for managing information that incorporates assumptions inimical to the strategies required for scientific analysis.
This paper argues for two claims. (1) In his biological views, Kenelm Digby tries to reconcile aspects of an Aristotelian theory of composite substance with early modern corpuscularianism. (2) From a methodological point of view, he uses the Stoic-Epicurean epistemology of common notions in order to show the adequacy of his conciliatory approach. The first claim is substantiated by an analysis of Digby's views on the role of mixture and homogeneity in the process of animal generation. The second claim is substantiated by an analysis of Digby's views on the role of the concept of quantity in the evaluation of scientific hypotheses. Both arguments make use of the context of Digby's philosophy: the first argument draws on his background in the work of early modern corpuscularian Aristotelians such as Daniel Sennert; the second argument draws on his background in the epistemology of Pierre Gassendi.
We present, and place in its context, a previously unpublished paper by Nicholas (Nils) Collin, one of the eighteenth-century pioneers of the American Philosophical Society. Collin may have been one of the first to bring eighteenth-century advances in probability theory to the United States at the very time that applied probability was gaining importance in the design of constitutions and in the design of annuities. Specifically, Collin transmitted at least some of the revolutionary social mathematics of Condorcet to the Society and therefore into American intellectual life.
This article focuses on the scientific practice of a specific class of models in neuroscience and biology that approach biological systems as computational or information-processing systems. This specific approach to biological systems has a long tradition that started with the information discourse in the 1940s. Borrowing concepts, methods, and techniques from cybernetics, information theory, and physics, these models are situated at the interface of different scientific disciplines. This article examines in detail the Hopfield model, a model of the specific brain function of auto-associative memory that is situated at the interface of neuroscience, theoretical physics, and engineering. By drawing analogies to a model of disordered magnetic systems in physics, John Hopfield constructed a model that was able to mimic auto-associative memory. The model became the focus of active research in theoretical physics and in engineering, but not in neuroscience. According to neuroscientists, in the process of construction the model had lost touch with the biological system and the function that it was supposed to model. How did this happen? As will be shown, the process of constructing the model was guided by conceptualizing the biological organism as a computational, information-processing device. This construction process gained its own momentum, involving the interrelated development of a computational concept and a redescription of the biological system or phenomenon in order to make the computational concept work. We will see how physicists and engineers tried to get models back in touch with biological systems by constructing “a synthetic model.”
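For readers unfamiliar with the model the abstract discusses, the following is a minimal sketch of Hopfield-style auto-associative recall: binary patterns are stored in a Hebbian weight matrix (analogous to the couplings of a disordered magnetic system) and a corrupted probe is relaxed back toward the nearest stored pattern. The network size, pattern count, and function names here are illustrative assumptions, not details taken from Hopfield's work or from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 5          # illustrative sizes, not from the paper

# Stored "memories": random +/-1 patterns.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian weight matrix with no self-connections,
# analogous to couplings in a spin-glass model.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(probe, steps=10):
    """Relax a noisy probe toward the nearest stored pattern
    via asynchronous sign updates (energy descent)."""
    state = probe.copy()
    for _ in range(steps):
        for i in rng.permutation(n_units):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt a stored pattern and check that it is retrieved.
noisy = patterns[0].copy()
flipped = rng.choice(n_units, size=15, replace=False)
noisy[flipped] *= -1
print(np.array_equal(recall(noisy), patterns[0]))   # usually True at this load
```

At this low storage load (5 patterns for 100 units) the corrupted probe almost always settles into the original pattern, which is the auto-associative behaviour the abstract refers to.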