Beginning graduate students in mathematical sciences and related areas in physical and computer sciences and engineering are expected to be familiar with a daunting breadth of mathematics, but few have such a background. This bestselling book helps students fill in the gaps in their knowledge. Thomas A. Garrity explains the basic points and a few key results of all the most important undergraduate topics in mathematics, emphasizing the intuitions behind the subject. The explanations are accompanied by numerous examples, exercises and suggestions for further reading that allow the reader to test and develop their understanding of these core topics. Featuring four new chapters and many other improvements, this second edition of All the Math You Missed is an essential resource for advanced undergraduates and beginning graduate students who need to learn some serious mathematics quickly.
In addition to his ground-breaking research, Nobel Laureate Steven Weinberg is known for a series of highly praised texts on various aspects of physics, combining exceptional physical insight with his gift for clear exposition. Describing the foundations of modern physics in their historical context and with some new derivations, Weinberg introduces topics ranging from early applications of atomic theory through thermodynamics, statistical mechanics, transport theory, special relativity, quantum mechanics, nuclear physics, and quantum field theory. This volume provides the basis for advanced undergraduate and graduate physics courses as well as being a handy introduction to aspects of modern physics for working scientists.
Chapter 5 described quantum mechanics in the context of particles moving in a potential. This application of quantum mechanics led to great advances in the 1920s and 1930s in our understanding of atoms, molecules, and much else. But, starting around 1930 and increasingly since then, theoretical physicists have become aware of a deeper description of matter, in terms of fields. Just as Einstein and others had much earlier recognized that the energy and momentum of the electromagnetic field are packaged in bundles, the particles later called photons, so also there is an electron field whose energy and momentum are packaged in particles, observed as electrons, and likewise for every other sort of elementary particle. Indeed, in practice this is what we now mean by an elementary particle: it is the quantum of some field that appears as an ingredient in whatever seem to be the fundamental equations of physics at any stage in our progress.
The successful uses of atomic theory described in Chapter 1 did not settle the existence of atoms in all scientists’ minds. This was in part because of the appearance in the first half of the nineteenth century of an attractive competitor, the physical theory of thermodynamics. With thermodynamics one may derive powerful results of great generality, without ever committing oneself to the existence of atoms or molecules. But thermodynamics could not do everything. This chapter describes the advent of kinetic theory, which is based on the assumption that matter consists of very large numbers of particles, and its generalization to statistical mechanics. From these, thermodynamics could be derived and, together with the atomic hypothesis, it yielded results far more powerful than could be obtained from thermodynamics alone. Even so, it was not until the appearance of direct evidence for the graininess of matter that the existence of atoms became almost universally accepted.
The serious scientific application of the atomic theory began in the eighteenth century, with calculations of the properties of gases, which had been studied experimentally since the century before. This is the topic with which we begin this chapter. Applications to chemistry and electrolysis followed in the nineteenth century, and are considered in subsequent sections. The final section of this chapter describes how the nature of atoms began to be clarified with the discovery of the electron.
This chapter covers the early years of quantum theory, a time of guesswork, inspired by problems presented by the properties of atoms and radiation and their interaction. Later, in the 1920s, this struggle led to the systematic theory known as quantum mechanics, the subject of Chapter 5. Quantum mechanics started with the problem of understanding radiation in thermal equilibrium at a non-zero temperature. It was not possible to make progress in applying quantum ideas to atoms without some understanding of what atoms are. The growth of this understanding began with the discovery of radioactivity.
Atoms were at the center of physicists’ interests in the 1920s. It was largely from the effort to understand atomic properties that modern quantum mechanics emerged in this decade. In the 1930s physicists’ concerns expanded to include the nature of atomic nuclei. The constituents of the nucleus were identified, and a start was made in learning what held them together. And as everyone knows, world history was changed in subsequent decades by the military application of nuclear physics.
Our modern understanding of atoms, molecules, solids, atomic nuclei, and elementary particles is largely based on quantum mechanics. Quantum mechanics grew in the mid-1920s out of two independent developments: the matrix mechanics of Werner Heisenberg and the wave mechanics of Erwin Schrödinger. For the most part this chapter follows the path of wave mechanics, which is more convenient for all but the simplest calculations. The general principles of the wave mechanical formulation of quantum mechanics are laid out and provide a basis for the discussion of spin, identical particles, and scattering processes. The general principles are supplemented with the canonical formalism to work out the Schrödinger equation for charged particles in a general electromagnetic field. The chapter ends with the unification of the approaches of wave and matrix mechanics by Paul Dirac, and the modern formulation in terms of Hilbert space is briefly described.
This chapter covers the Special Theory of Relativity, introduced by Einstein in a pair of papers in 1905, the same year in which he postulated the quantization of radiation energy and showed how to use observations of diffusion to measure constants of microscopic physics. Special relativity revolutionized our ideas of space, time, and mass, and it gave the physicists of the twentieth century a paradigm for the incorporation of conditions of invariance into the fundamental principles of physics.
Here we discuss how the use of artificial intelligence will change the way science is done. Deep learning algorithms can now surpass the performance of human experts, a fact that has major implications for the future of our discipline. Successful uses of AI technology all possess two ingredients for deep learning: copious training data and a clear way to classify it. When these two conditions are met, researchers working in tandem with AI technologies can organize information and solve scientific problems with impressive efficiency. The future of science will increasingly rely on human–machine partnerships, where people and computers work together, revolutionizing the scientific process. We provide an example of what this may look like. Hoping to remedy a present-day challenge in science known as the “reproducibility crisis,” researchers used deep learning to uncover patterns in papers that signal strong and weak scientific findings. By combining the insights of machines and humans, the new AI model achieves the highest predictive accuracy.
We begin by discussing the challenges of quantifying scientific impact. We introduce the h-index and explore its implications for scientists. We also detail the h-index’s strengths when compared with other metrics, and show that it bypasses all the disadvantages posed by alternative ranking systems. We then explore the h-index’s predictive power, finding that it provides an easy but relatively accurate estimate of a person’s achievements. Despite its relative accuracy, we are aware of the h-index’s limitations, which we detail here with suggestions for possible remedies.
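The h-index has a simple operational definition: a scientist has index h if h of their papers have at least h citations each, and the remaining papers have no more than h citations each. A minimal sketch of the computation (the citation record below is hypothetical, invented for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

# Hypothetical record of seven papers: four of them have >= 4 citations.
print(h_index([10, 8, 5, 4, 3, 0, 0]))  # prints 4
```

Sorting in descending order makes the definition easy to check rank by rank: the index is the last rank at which the citation count still meets or exceeds the rank.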
To describe coauthorship networks, we begin with the Erdös number, which links mathematicians to their famously prolific colleague through the papers they have collaborated on. Coauthorship networks help us capture collaborative patterns and identify important features that characterize them. We can also use them to predict how many collaborators a scientist will have in the future based on her coauthorship history. We find that collaboration networks are scale-free, following a power-law distribution. As a consequence of the Matthew effect, frequent collaborators are more likely to collaborate, becoming hubs in their networks. We then explore the small-world phenomenon evidenced in coauthorship networks, which is sometimes referred to as “six degrees of separation.” To understand how a network’s small-worldliness impacts creativity and success, we look to teams of artists collaborating on Broadway musicals, finding that teams perform best when the network they inhabit is neither too big nor too small. We end by discussing how connected components within networks provide evidence for the “invisible college.”
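An Erdös number is simply a shortest-path distance in the coauthorship graph: coauthors of Erdös have number 1, their coauthors have number 2, and so on. A minimal sketch using breadth-first search over a toy graph (the names other than Erdös are hypothetical placeholders, not real collaborators):

```python
from collections import deque

def collaboration_distance(coauthors, start, target):
    """Breadth-first search over an undirected coauthorship graph.
    Returns the number of coauthorship links separating `start` from
    `target` (the Erdös number when `target` is Erdös), or None if
    the two authors are not in the same connected component."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        person, dist = frontier.popleft()
        if person == target:
            return dist
        for other in coauthors.get(person, ()):
            if other not in seen:
                seen.add(other)
                frontier.append((other, dist + 1))
    return None

# Toy network: an edge means "wrote at least one paper together".
graph = {
    "Erdös": {"Alice", "Bob"},
    "Alice": {"Erdös", "Carol"},
    "Bob": {"Erdös"},
    "Carol": {"Alice", "Dan"},
    "Dan": {"Carol"},
}
print(collaboration_distance(graph, "Dan", "Erdös"))  # prints 3
```

The same routine, run over the giant connected component of a real coauthorship network, is what makes "six degrees of separation" a measurable claim rather than a metaphor.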
We introduce the role that productivity plays in scientific success by describing Paul Erdös’ exceptional productivity. How does Erdös’ productivity measure up to other scientists? Is the exponential increase in the number of papers published due to rising productivity rates or to the growing number of scientists working in the discipline? We find that there is an increase in the productivity of individual scientists, but that this increase is due to the growth of collaborative work in science. We also quantify the significant productivity differences between disciplines and individual scientists. Why do these differences exist? To answer this question, we explore Shockley’s work on the subject, beginning with his discovery that productivity follows a lognormal distribution. We outline his hurdle model of productivity, which not only explains why the productivity distribution is fat-tailed, but also provides a helpful framework for improving individual scientific output. Finally, we outline how productivity is multiplicative, but salaries are additive, a contradiction that has implications for science policy.
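The statistical core of the hurdle model is that publishing a paper requires clearing many hurdles in succession, so output is a *product* of per-hurdle factors; taking logs turns the product into a sum, and by the central limit theorem the log of output is approximately normal, making output approximately lognormal. A simulation sketch under assumed parameters (the number of hurdles and the uniform factor range are illustrative choices, not Shockley's values):

```python
import math
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def hurdle_productivity(n_hurdles=8, rng=random):
    """Sketch of a Shockley-style hurdle model: output is the product
    of independent per-hurdle success factors, so log-output is a sum
    of independent terms and is approximately normal."""
    rate = 1.0
    for _ in range(n_hurdles):
        rate *= rng.uniform(0.5, 1.5)  # hypothetical per-hurdle factor
    return rate

samples = [hurdle_productivity() for _ in range(10_000)]
logs = [math.log(s) for s in samples]
mean_log = sum(logs) / len(logs)
# Theoretically E[ln U(0.5, 1.5)] is about -0.045, so mean_log should
# land near 8 * (-0.045), i.e. roughly -0.36.
print(round(mean_log, 2))
```

Even with each individual factor modest, the multiplicative structure produces a fat right tail: a scientist slightly better at every hurdle ends up far more productive overall, which is exactly the asymmetry the chapter contrasts with additive salaries.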
Here we address bias and causality, beginning with the bias against failure in the existing science of science research. Because the data available to us is mostly on published papers, we necessarily disregard the role that failure plays in a scientific career. This could be framed as a survivorship bias, where the “surviving” papers are those that make it to publication. This same issue can be seen as a flaw in our current definition of impact, since our use of citation counts keeps a focus on success in the discipline. We explore the drawbacks and upsides of variants on citation counts, including altmetrics like page views. We also look at possible ways to expand the science of science to include unobservable factors, as we saw in the case of the credibility revolution in economics. Using randomized controlled trials and natural experiments, the science of science could explore causality more deeply. Given the tension between certainty and generalizability, both experimental and observational insights are important to our understanding of how science works.