Physics ultimately rules the processes in living organisms and thus is in that sense fundamental for understanding medicine. To cite some examples involving different branches of physics:
the dynamics of forces in joints, e.g. the stresses arising in an articulation → mechanics
the microstructure of bones and the role of compounds with high elasticity and high tensile strength → solid-state physics of articulations
control of blood circulation and the variable viscosity of blood and blood plasma → hydrodynamics
passive molecular transport through membranes via osmosis → thermodynamics
signal conduction in nerve cells → electrodynamics
image formation on the retina → optics and
hearing → acoustics and mechanics.
Many physical properties of tissues, substances, cells, and molecules and of their mechanisms of operation are exploited for diagnostics and therapy (e.g. ultrasonic imaging, electro- or magneto-encephalography, high-frequency electromagnetic-radiation therapy). Beyond these manifold relationships between physics and the phenomena of medicine and biology, medical physics today denotes that part of physics in which new phenomena are exploited and new techniques are developed explicitly for use in diagnostics and treatment. The largest area has to do with diagnostic methods, the most prominent being transmission radiography with X-rays, which was introduced soon after the discovery of X-rays by Röntgen in 1895. Many sections of this chapter cover the principles and recent developments of diagnostic tools, weighing their virtues as well as possible disadvantages (e.g. adverse side effects). The various techniques also provide complementary information.
We all observe phase transitions in our daily lives, with hardly a second thought. When we boil water for a cup of tea, we observe that the water is quiescent until it reaches a certain temperature (100 ℃), and then bubbles appear vigorously until all the water has turned to steam. Or after an overnight snowfall, we have watched the snow melt away when temperatures rise during the day. The more adventurous among us may have heated an iron magnet above about 760 ℃ and noted the disappearance of its magnetism.
Familiar and ubiquitous as these and many related phenomena are, a little reflection shows that they are quite mysterious and not easy to understand: indeed, the outlines of a theory did not emerge until the middle of the twentieth century, and, although much has been understood since then, active research continues. Ice and water both consist of molecules of H₂O, and we can look up all the physical parameters of a single molecule, and of the interaction between a pair of molecules, in standard reference texts. However, no detailed study of this information prepares us for the dramatic change that occurs at 0 ℃. Below 0 ℃, the H₂O molecules of ice are arranged in a regular crystalline lattice, and each H₂O molecule hardly strays from its own lattice site. Above 0 ℃, we obtain liquid water, in which all the molecules are moving freely throughout the liquid container at high speeds. Why do 10²³ H₂O molecules cooperatively “decide” to become mobile at a certain temperature, leading to the phase transition from ice to water?
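The cooperative ordering this question points at can be sketched with the standard toy model of statistical mechanics, the two-dimensional Ising model. This is not a model of water; it is a minimal illustration, in arbitrary units with J = k_B = 1, of how a sharp change of collective behavior emerges at a definite temperature from simple local rules:

```python
import math
import random

def ising_magnetization(T, L=16, sweeps=400, measure=100, seed=1):
    """Mean |magnetization| per spin of an L x L Ising model (J = k_B = 1),
    averaged over the last `measure` Metropolis sweeps at temperature T."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]              # start fully ordered
    samples = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum over the four nearest neighbours (periodic boundaries)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb                # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
        if sweep >= sweeps - measure:
            m = sum(map(sum, spins)) / (L * L)
            samples.append(abs(m))
    return sum(samples) / len(samples)

# Below the 2D critical temperature (T_c ~ 2.27) the spins order together;
# above it, thermal agitation wins and the order melts away.
m_cold = ising_magnetization(T=1.0)
m_hot = ising_magnetization(T=4.0)
```

No single spin "knows" the critical temperature; the transition is a property of the whole assembly, which is exactly the mystery posed above for ice and water.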
It is a universally acknowledged truth that any useful artifact is made of materials. A material is a substance, or combination of substances, which has achieved utility. To convert a mere substance into a material requires the imposition of appropriate structure, which is one of the subtlest of scientific concepts; indeed, the control of structure is a central skill of the materials scientist, and that skill makes new artifacts feasible.
Any structure involves two constituents: building blocks and laws of assembly. Crystal structure will serve as the key example, with wide ramifications. Any solid element, pure chemical compound, or solid solution ideally consists of a crystal, or an assembly of crystals that are internally identical. (The qualification “ideally” is necessary because some liquids are kinetically unable to crystallize and turn instead into glasses, which are simply congealed liquids.) Every crystal has a structure, which is defined when (1) the size and shape of the repeating unit, or unit cell, and (2) the position and species of each atom located in the unit cell have all been specified. At one level, the unit cells are the building blocks; at another level, the individual atoms are. The laws of assembly are implicit in the laws of interatomic force: once we know how the strength of attraction or repulsion between two atoms depends on their separation and what angles between covalent bonds issuing from the same atom are stable then, in principle, we can predict the entire arrangement of the atoms in the unit cell. Once the crystal structure has been established, many physical properties, such as the cleavage plane, melting temperature, and thermal expansion coefficient, are thereby rendered determinate. In the fullest sense, structure determines behavior.
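The recipe "unit cell plus atomic positions" can be made concrete for silicon, whose diamond-cubic structure is an FCC lattice decorated with a two-atom basis. A short sketch generating the atom positions of one conventional cell (the lattice-constant value used is the standard tabulated one):

```python
# Fractional coordinates of the four FCC lattice points of a conventional cube
fcc = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]

# Diamond cubic = FCC lattice decorated with a two-atom basis
basis = [(0.0, 0.0, 0.0), (0.25, 0.25, 0.25)]

# Every atom position is lattice point + basis offset, wrapped into the cell
atoms = sorted(
    tuple((p + b) % 1.0 for p, b in zip(point, offset))
    for point in fcc
    for offset in basis
)

a = 5.431  # silicon lattice constant in angstroms (standard tabulated value)
cartesian = [tuple(a * x for x in frac) for frac in atoms]

print(len(atoms))  # 8 atoms per conventional diamond-cubic cell
```

Once these eight positions and the cell size are specified, the structure is fully defined in the sense of the paragraph above.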
Within the lifetime of my grandparents, there lived distinguished scientists who did not believe in atoms. Within the lifetime of my children, there lived distinguished scientists who did not believe in quarks. Although we can trace the notion of fundamental constituents of matter – minimal parts – to the ancients, the experimental reality of the atom is a profoundly modern achievement. The experimental reality of the quark is more modern still.
Through the end of the nineteenth century, controversy seethed over whether atoms were real material bodies or merely convenient computational fictions. The law of multiple proportions, the indivisibility of the elements, and the kinetic theory of gases supported the notion of real atoms, but a reasonable person could resist because no one had ever seen an atom. One of the founders of physical chemistry, Wilhelm Ostwald, wrote influential chemistry textbooks that made no use of atoms. The physicist, philosopher, and psychologist Ernst Mach likened “artificial and hypothetical atoms and molecules” to algebraic symbols – tokens, devoid of physical reality – that could be manipulated to answer questions about Nature.
The word “superfluid” was coined to describe a qualitatively different state of a fluid that can occur at low temperatures, in which the resistance to flow is identically zero, so that flow round a closed path lasts for ever – a persistent current. Superfluidity can occur either for uncharged particles such as helium atoms or for charged particles such as the electrons in a metal. In the latter case the flow constitutes an electric current and we have a superconductor. Since an electric current is accompanied by a magnetic field, it is much easier to demonstrate the presence of a persistent current in a superconductor than in a neutral superfluid.
In this chapter I shall describe the properties of superfluids, starting with the simplest and working up to more complicated examples. But in this introductory section I shall depart from the historical order even further by turning to the last page of the detective story so as to catch a glimpse of the conclusion from almost a century of experimental and theoretical research; we shall then have an idea of where we are heading.
The conclusion is that superfluidity is more than just flow without resistance. It is an example of a transition to a more ordered state as the temperature is lowered, like the transition to ferromagnetism. At such a transition new macroscopically measurable quantities appear: in the case of a ferromagnet, the spontaneous magnetization. Such new measurable quantities are known as the order parameter of the low-temperature phase. The new quantities in the case of a superfluid are more subtle and more surprising: the amplitude and phase of the de Broglie wave associated with the motion of the superfluid particles. A superfluid can therefore exhibit quantum-mechanical effects on a macroscopic scale.
This chapter describes remarkable new experiments that have been made possible through recent advances in laser cooling of atoms. Cooling of atoms in a gas by laser light is in itself surprising and this technique is important in a range of applications. In particular the experiments described here take atoms that have been laser cooled and then use other methods to reach even lower temperatures, at which wonderful and fascinating quantum effects occur. As well as revealing new phenomena, these experiments explore the deep analogies between the roles played by light waves in lasers and matter waves. These techniques are widely accessible and enable skilled researchers in laboratories all over the world to explore exciting new physics. The key physics and techniques of laser cooling are described in depth in Chapter 6.
What is “temperature”?
We use the words “cold” and “hot” every day, for example to refer to the temperature of the air outside. But what do “cold” and “hot” mean for the air’s molecules? Broadly speaking, we can say that atoms in a hot gas move faster than those in a cold gas and that cooling, by taking energy out of the gas, slows the atoms down. In this description of the gas as a collection of atoms behaving like billiard balls, or little hard spheres, the lowest possible temperature occurs when the atoms stop moving but, as we shall see, this picture does not give an accurate description at very low energies, for which quantum effects become important. In this chapter we will look at the fascinating properties of atomic gases cooled to temperatures within one millionth of a kelvin of absolute zero. We shall use the term “ultra-cold” to refer to such extremely low temperatures that have only recently been achieved in experiments on atomic gases. The techniques used to cool liquids and solids have a longer history and have made available a different set of possibilities (see Chapter 8). Note that most of this chapter refers to gases in which the individual particles are atoms, but the general statements about temperature scales apply equally well to gases of molecules, e.g. air, whose major constituents, oxygen and nitrogen, exist as diatomic molecules.
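The billiard-ball picture translates into a formula of kinetic theory: the root-mean-square speed of a gas particle is v_rms = sqrt(3 k_B T / m). A minimal sketch comparing nitrogen at room temperature with a rubidium atom at one microkelvin (the choice of species is just illustrative):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053907e-27     # atomic mass unit, kg

def v_rms(T, mass_amu):
    """Root-mean-square speed sqrt(3 k_B T / m) of an ideal-gas particle."""
    return math.sqrt(3 * K_B * T / (mass_amu * AMU))

v_room = v_rms(300, 28)         # nitrogen molecule at room temperature
v_ultracold = v_rms(1e-6, 87)   # rubidium-87 atom at 1 microkelvin
```

Room-temperature air molecules move at roughly the speed of sound (hundreds of m/s), while an ultra-cold atom crawls at about a centimeter per second, which is what makes such gases experimentally tractable.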
Gravity attracts. It attracts every body in the Universe to every other, and it has attracted the interest of physicists for centuries. It was the first fundamental force to be understood mathematically in Isaac Newton’s action-at-a-distance theory, it is a center of current attention in Albert Einstein’s general relativity theory, and it promises to be the last force to be fully understood and integrated with the rest of physics.
After centuries of success, Newton’s theory was finally replaced by Einstein’s theory, which describes gravity at a deeper level as due to curvature of spacetime. General relativity is widely considered to be the most elegant physical theory and one of the most profound. It has allowed the study and understanding of gravitational phenomena ranging from laboratory scale to the cosmological scale – the entire Universe; but many mysteries remain, especially in cosmology.
An impressive understanding of the other fundamental forces of nature, electromagnetism and the weak and strong nuclear forces, is now embodied in the Standard Model of elementary particles. However, these other forces are understood in terms of quantum field theory and are not geometric in the manner of general relativity, so gravity remains apart. Many physicists believe that the final phase of understanding gravity will be to include quantum effects and form a union of general relativity and the Standard Model. We would then understand all the forces and spacetime on a fundamental quantum level. This is proving to be quite a task.
Electromagnetic interactions play a central role in low-energy physics, chemistry, and biology. They are responsible for the cohesion of atoms and molecules and are at the origin of the emission and absorption of light by such systems. They can be described in terms of absorptions and emissions of photons by charged particles or by systems of charged particles like atoms and molecules. Photons are the energy quanta associated with a light beam. Since the discoveries of Planck and Einstein at the beginning of the last century, we know that a plane light wave with frequency ν, propagating along a direction defined by the unit vector u, can also be considered as a beam of photons with energy E = hν and linear momentum p = (hν/c)u. We shall see later on that these photons also have an angular momentum along u depending on the polarization of the associated light wave.
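The relations E = hν and p = (hν/c)u can be put to work with a few lines of arithmetic. A sketch, taking the sodium D line at 589 nm as an assumed example, that estimates the energy of one photon and the recoil speed a single absorbed photon imparts to a sodium atom at rest:

```python
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt

wavelength = 589e-9      # sodium D line (assumed illustrative example)
nu = C / wavelength      # frequency of the light wave

E_photon = H * nu        # E = h*nu, in joules (~2.1 eV here)
p_photon = H * nu / C    # p = h*nu/c, equivalently h / wavelength

# Recoil of an initially resting sodium atom that absorbs one photon
m_na = 22.99 * 1.66053907e-27   # sodium atomic mass in kg
v_recoil = p_photon / m_na      # ~3 cm/s per photon
```

A few centimeters per second per photon seems tiny, but thousands of absorptions per second add up: this recoil is precisely the handle that laser cooling exploits.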
Conservation laws are very useful for understanding the consequences of atom–photon interactions. They express that the total energy, the total linear momentum, and the total angular momentum are conserved when the atom emits or absorbs a photon. Consider for example the conservation of the total energy. Quantum mechanics tells us that the energy of an atom cannot take any value. It is quantized, the possible values of the energy forming a discrete set Ea, Eb, Ec, . . . In an emission process, the atom goes from an upper energy level Eb to a lower one Ea and emits a photon with energy hν. Conservation of the total energy requires hν = Eb − Ea: the energy lost by the atom on going from Eb to Ea is carried away by the photon.
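A minimal worked example of hν = Eb − Ea, using the Bohr energy levels of hydrogen (En ≈ −13.6 eV/n²) for the n = 2 → n = 1 emission, which produces the well-known Lyman-alpha line near 121.6 nm:

```python
H = 6.62607015e-34        # Planck constant, J s
C = 2.99792458e8          # speed of light, m/s
EV = 1.602176634e-19      # joules per electronvolt
RYDBERG_EV = 13.6         # hydrogen binding energy (approximate textbook value)

def level(n):
    """Bohr energy of hydrogen level n, in joules (negative = bound)."""
    return -RYDBERG_EV * EV / n**2

# Emission n = 2 -> n = 1: energy conservation fixes the photon frequency
E_b, E_a = level(2), level(1)
nu = (E_b - E_a) / H              # h*nu = E_b - E_a
wavelength_nm = C / nu * 1e9      # Lyman-alpha, about 121.6 nm
```

The same bookkeeping in reverse gives the absorption condition: a photon can only promote the atom from Ea to Eb if its frequency satisfies the identical relation.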
Ever since physics came into existence as a scientific discipline, it has been mainly concerned with identifying a minimal set of fundamental laws connecting a minimal number of different constituents, under the more or less implicit assumption that knowledge of these rules was a sufficient condition for explaining the world we live in. Such a program has had remarkable success, to the extent that processes occurring on scales that range from that of elementary particles up to those of stellar evolution can nowadays be satisfactorily described. Nevertheless, it has meanwhile become clear that the inverse approach is a source of unexpected richness that can hardly be read off from the original microscopic equations. The existence of different phases of matter (gas, liquid, solid, and plasma, together with perhaps glasses, granular materials, and Bose–Einstein condensates) provides the most striking evidence that the adjustment of a parameter (e.g. temperature) can dramatically change the organization of the system.
Moreover, not only can the same set of microscopic laws give rise to different structures, but also the converse is true: different systems can, for example, converge toward the same crystalline configuration. One of the basic ingredients that makes the connection between different levels of description complex, though very intriguing, is the presence of nonlinearities. As long as each atom deviates slightly from its equilibrium position, the equations of motion can be linearized and the dynamics thereby decomposed into the sum of independent evolutions of so-called normal modes. In some cases, especially when disorder is present, such a decomposition cannot be easily performed (and very interesting physics can indeed arise, e.g. Anderson localization) but is, at least in principle, feasible.
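The decomposition into normal modes described here can be made concrete for a toy system: a chain of three equal masses between fixed walls, with unit masses and spring constants (an assumed illustrative setup). Each mode shape x satisfies K x = ω²x for the linearized stiffness matrix K, so each mode oscillates independently of the others:

```python
import math

N = 3  # three equal masses between fixed walls, unit masses and springs

# Stiffness matrix of the linearized chain: K[i][i] = 2, K[i][i +/- 1] = -1
K = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(N)] for i in range(N)]

def normal_mode(k):
    """Analytic normal mode k (k = 1..N): squared frequency and mode shape."""
    omega_sq = 2.0 - 2.0 * math.cos(k * math.pi / (N + 1))
    shape = [math.sin(k * math.pi * (i + 1) / (N + 1)) for i in range(N)]
    return omega_sq, shape

# Verify that every mode shape is an eigenvector of K: K x = omega^2 x,
# which is exactly the statement that the linear dynamics decouples.
for k in range(1, N + 1):
    omega_sq, x = normal_mode(k)
    Kx = [sum(K[i][j] * x[j] for j in range(N)) for i in range(N)]
    assert all(abs(Kx[i] - omega_sq * x[i]) < 1e-12 for i in range(N))
```

It is precisely this clean decoupling that nonlinear terms destroy once the atoms stray far from equilibrium, which is where the richer collective physics begins.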
The atomic nature of matter is well documented and appreciated. Solids are made from regular (in the case of crystals) or irregular (as in the case of glasses) arrangements of atoms. Nature finds a way to produce various materials by combining constituent atoms into a macroscopic substance.
Imagine the following hypothetical experiment. Suppose that one builds up, for example, a crystal of silicon by starting with a single atom and adding atoms to make a small number of unit cells, repeating the process until a large enough crystallite has formed. A fundamental question is the following: at what stage in this process, i.e. at what size of the crystallite, will it approximately acquire the “bulk” properties? The precise answer to this question may well depend on what exactly one intends to do with the crystal.
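One crude way to quantify "at what size do bulk properties emerge" is the fraction of atoms sitting on the crystallite's surface, since surface atoms have a different environment from interior ones. A sketch for an idealized simple-cubic n × n × n block (silicon is actually diamond cubic, but the scaling argument is the same):

```python
def surface_fraction(n):
    """Fraction of atoms on the surface of a simple-cubic n x n x n block."""
    total = n ** 3
    interior = max(n - 2, 0) ** 3   # atoms with all 26 neighbours present
    return (total - interior) / total

# A ~1000-atom crystallite is still roughly half "surface"; only at millions
# of atoms does the surface shrink to a few-percent correction.
for n in (3, 10, 100):
    print(n, n ** 3, round(surface_fraction(n), 3))
```

The fraction falls roughly as 6/n, so "bulk" is not a sharp threshold: a property sensitive to surfaces (catalysis, optical response) becomes bulk-like much later than one dominated by the interior, consistent with the remark that the answer depends on what one intends to do with the crystal.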
While this may sound like a purely thought experiment, remarkable modern fabrication methods are close to being able to achieve this, using at least two independent approaches. Molecular-beam epitaxy (Figure 12.1) is able to grow high-quality crystalline layers one by one with remarkable control. Atomic-resolution methods are already able to deposit atoms selectively on some surfaces (Figure 12.2).
In 1935, a decade after the invention of quantum theory by Heisenberg (1925) and Schrödinger (1926), three papers appeared, setting the stage for a new concept, quantum entanglement. The previous decade had already seen momentous discussions on the meaning of the new theory, for example by Niels Bohr (see his contribution to Albert Einstein: Philosopher-Scientist in Further reading); but these new papers opened up the gates for a new, very deep philosophical debate about the nature of reality, about the role of knowledge, and about their relation. Since the 1970s, it has become possible for experimenters to observe entanglement directly in the laboratory and thus test the counter-intuitive predictions of quantum mechanics whose confirmation today is beyond reasonable doubt. Since the 1990s, the conceptual and experimental developments have led to new concepts in information technology, including such topics as quantum cryptography, quantum teleportation, and quantum computation, in which entanglement plays a key role.
In the first of the three papers, Einstein, Podolsky, and Rosen (EPR) realized that, if two particles had interacted in their past, their properties would remain connected in the future in a novel way, namely, observation on one determines the quantum state of the other, no matter how far away. Their conclusion was that quantum mechanics is incomplete, and they expressed their belief that a more complete theory might be found. Niels Bohr replied that the two systems may never be considered independent of each other, and observation on one changes the possible predictions on the other. Finally, Erwin Schrödinger, in the paper which also proposed his famous cat paradox, coined the term “entanglement” (in German Verschränkung) for the new situation and he called it “not one, but the essential trait of the new theory, the one which forces a complete departure from all classical concepts.”
The application of basic physics ideas to the study of biological molecules is one of the major growth areas of modern physics, and emphasizes well how physics principles ultimately underpin the whole of Nature. This chapter focuses on the collective properties of biological molecules showing examples of hierarchical structures, and, in some instances, how structure and dynamics enable biological function. In addition, supramolecular biophysics casts a wide net, with contributions to a broad range of fields. Among them are, in medicine and genetics, the design of carriers of large pieces of DNA containing genes for gene therapy and for characterizing chromosome structure and function; in molecular neurosciences, elucidating the structure and dynamics of the nerve-cell cytoskeleton; and in molecular cell biology, characterizing the forces responsible for condensation of DNA in vivo, to name a few. Concepts and new materials emerging from research in the field continue to have a large impact in industries as diverse as cosmetics and optoelectronics. A separate branch of biophysics dealing with the properties of single molecules is not described here due to space limitations and the availability of excellent reviews published in the past few years.
If one looks at research in biophysics over the last few decades one finds that a large part has been dedicated to studies of the structure and phase behavior of biological membranes. Membranes of living organisms are astoundingly complex structures, with the lipid bilayer containing membrane-protein inclusions and carbohydrate-chain decorations as shown in a cartoon of a section of the plasma membrane of a eukaryotic cell, which separates the interior contents of the cell from the region outside of the cell (Figure 16.1). The common lipids in membranes are amphiphilic molecules, meaning that the molecules contain both hydrophilic (“water-liking”) polar head groups and hydrophobic (“water-avoiding”) double-tail hydrocarbon chains. Plasma membranes contain a large number of distinct membrane-associated proteins, which may traverse the lipid bilayer, be partially inserted into the bilayer, or interact with the membrane but not penetrate the bilayer.
Computation is an operation on symbols. We tend to perceive symbols as abstract entities, such as numbers or letters from a given alphabet. However, symbols are always represented by selected properties of physical objects. The binary string
10011011010011010110101101
may represent an abstract concept, such as the number 40 711 597, but the binary symbols 0 and 1 also have a physical existence of their own. It could be ink on paper (this is most likely to be how you see them when you are reading these words), glowing pixels on a computer screen (this is how I see them now when I am writing these words), or different charges or voltages (this is how my word processor sees them). If symbols are physical objects and if computation is an operation on symbols then computation is a physical process. Thus any computation can be viewed in terms of physical experiments, which produce outputs that depend on initial preparations called inputs. This sentence may sound very innocuous but its consequences are anything but trivial!
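The separation between the abstract number and its physical embodiment can be made concrete: interpreting the string above as a binary numeral (Horner's scheme) recovers the same value no matter what medium stores the bits.

```python
bits = "10011011010011010110101101"

# Horner's scheme: read the symbols left to right, doubling as we go.
# The result is the same whether the bits live in ink, pixels, or charges.
value = 0
for b in bits:
    value = value * 2 + (b == "1")

assert value == int(bits, 2)   # matches Python's built-in base-2 parser
print(value)                   # 40711597
```

The computation acts only on the symbols; the physics enters through whatever device carries out the doubling and adding.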
On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional computers. Today’s advanced lithographic techniques can etch logic gates and wires less than a micrometer across onto the surfaces of silicon chips. Soon they will yield even smaller parts and inevitably reach a point at which logic gates are so small that they are made out of only a handful of atoms. So, if computers are to become smaller in the future, new quantum technology must replace or supplement what we have now. The point is, however, that quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation, known as quantum computation, with qualitatively new algorithms based on quantum principles.
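The contrast with a classical bit can be simulated in a few lines. This is only a toy state-vector sketch, not a model of quantum hardware: a qubit is a pair of amplitudes, and the Hadamard gate (a standard one-qubit gate) turns a definite 0 into an equal superposition, then interferes it back to a definite 0 when applied twice.

```python
import math

# A qubit is a pair of amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1;
# measuring it yields 0 or 1 with probabilities |alpha|^2 and |beta|^2.
def hadamard(state):
    """Apply the Hadamard gate H = (1/sqrt(2)) [[1, 1], [1, -1]]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)             # the classical bit 0, viewed as a qubit
plus = hadamard(zero)         # equal superposition of 0 and 1
probs = tuple(abs(x) ** 2 for x in plus)   # 50/50 measurement outcomes

# Applying Hadamard again interferes the amplitudes back to a definite 0,
# something no classical coin-flip model of the 50/50 state can do.
back = hadamard(plus)
```

The second application is the key point: the intermediate state carries phases, not just probabilities, and quantum algorithms exploit exactly this kind of interference.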