What makes condensed matter “condensed”? The answer is stickiness (i.e. an attractive interaction) that exists between the particles that make up a liquid or solid. In the last two chapters we have looked at two extremes of particle arrangements in matter: ordered (crystalline) and disordered (amorphous). In this chapter, we examine the nature of the forces that form between particles and that promote the formation of a condensed phase of matter. We begin by reviewing the five major bond types (van der Waals, covalent, ionic, metallic and hydrogen) and conclude by considering the overall cohesive energy of a crystal. As thermodynamics generally favors the state of lowest energy, this cohesive energy is part of what determines why one particular crystal structure is adopted in Nature rather than another.
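As a concrete illustration of what is meant by cohesive energy (a generic sketch added here, not tied to the book's notation), the binding of a van der Waals solid is often estimated by summing a pairwise potential, such as the Lennard-Jones form, over all pairs of particles:
\[
U_{\mathrm{coh}} \approx \frac{1}{2}\sum_{i \neq j} u(r_{ij}),
\qquad
u(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],
\]
where ε sets the depth of the attractive well and σ the effective particle diameter; of two candidate structures, the one with the more negative cohesive energy is favored at low temperature.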
Survey of bond types
In order for matter to condense, there must be an attractive force between the particles to promote their mutual gathering together. Of the four fundamental forces in Nature, the two nuclear forces (strong and weak) play no role in the condensation process and the gravitational force is far too weak to drive the process at ordinary terrestrial temperatures and pressures. Instead, the fundamental force that binds particles together in condensed matter arises from electrostatic interactions.
Our discussion of critical phenomena surrounding a second-order phase transition has thus far focused only on qualitative features. We have now examined three systems (fluids, magnets and random percolation), each of which displays an abrupt, sharp transition from a less ordered to a more ordered phase as its transition point is crossed. Each shows a similar pattern of developing structure just in advance of the critical point. Fluctuations in the respective order parameter display a self-similar structure that is limited only by a single relevant length scale, the correlation length ξ, which diverges on approach to the transition point. For fluids and magnets, this structure arises from inherent fluctuations that are amplified by a diverging compressibility or susceptibility, respectively.
In this chapter, we explore more quantitative, theoretical approaches to understanding the features of second-order phase transitions. The simplest of these are the mean field theories, in which the pairwise interaction (needed to produce a phase transition) is introduced in the form of an average field. In this approach, the effects of the growing fluctuations of the order parameter near the critical point are ignored. Although the mean field approach captures many of the qualitative features and does predict divergences of certain quantities near the transition, the critical exponents it predicts do not match those seen experimentally. Obtaining correct exponents requires a more advanced approach involving renormalization techniques, which exploit the self-similar structure of the fluctuations near the critical point and allow all of the various critical exponents to be interrelated, such that knowledge of any two yields all the others.
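For later reference, the standard scaling relations that interrelate the critical exponents, together with the mean field values they must be compared against, can be summarized as follows (a conventional statement, quoted here rather than derived):
\[
\alpha + 2\beta + \gamma = 2, \qquad \gamma = \beta(\delta - 1),
\qquad
\beta_{\mathrm{MF}} = \tfrac{1}{2},\;\; \gamma_{\mathrm{MF}} = 1,\;\; \delta_{\mathrm{MF}} = 3,\;\; \alpha_{\mathrm{MF}} = 0,
\]
so that knowledge of any two exponents indeed determines the others.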
In the previous chapter, we examined the elastic nature of a crystal and replaced the notion of atoms as independent harmonic oscillators with the concept of phonons as quantized pieces of elastic waves propagating within a crystal. In this chapter we bolster our confidence in the reality of these phonons by examining two thermal properties of a crystal: its specific heat and its thermal conductivity. At low temperatures, the specific heat of a crystal decreases as the cube of the temperature. A model (attributed to Einstein) based only on independent harmonic oscillators is unable to account for this particular low temperature dependence, while the Debye model, involving a population of phonons, properly accounts for the temperature dependence. Likewise, the thermal conductivity of a crystal can only be understood using the phonon picture. The thermal conductivity exhibits a sharp division in its temperature dependence between a T³ variation at low temperatures and a 1/T dependence at high temperatures. This division stems from the nature of phonon–phonon collisions, which are only truly successful in retarding heat flow at high temperatures where so-called Umklapp processes dominate. In these collisions, the resultant phonon emerging from the collision extends beyond the boundaries of the Brillouin zone and suffers strong Bragg scattering by the lattice.
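As a quantitative anchor for the low temperature behavior (a standard result, quoted here for convenience), the Debye model predicts, for temperatures well below the Debye temperature Θ_D,
\[
C_V \simeq \frac{12\pi^{4}}{5}\, N k_B \left(\frac{T}{\Theta_D}\right)^{3},
\]
the T³ law referred to above, whereas the Einstein model of independent oscillators predicts an exponential freeze-out of the specific heat at low temperature.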
Specific heat of solids
Consider a crystal maintained at some finite temperature. Clearly the crystal contains energy in the form of lattice vibrations, for which we have now developed two self-consistent pictures. In one picture, we view this energy as stored in atoms that act as local harmonic oscillators. In the other picture, the energy is stored in a large population of phonons. The phonons appear in a variety of energies consistent with both the dispersion relation and the restricted set of allowed wave vectors imposed by the finite size of the crystal and the boundary of the first Brillouin zone.
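In the phonon picture, this stored energy is conventionally written as a sum over the allowed modes, each occupied according to the Bose-Einstein distribution (generic notation, which may differ in detail from that used below):
\[
U = \sum_{\mathbf{K},\,s} \hbar\omega_{s}(\mathbf{K})
\left[\frac{1}{e^{\hbar\omega_{s}(\mathbf{K})/k_B T} - 1} + \frac{1}{2}\right],
\]
where the sum runs over the wave vectors K permitted by the finite crystal size and over the polarization branches s.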
In contrast to the sharp, discrete scattering that occurs in crystals as a result of their perfect periodicity, amorphous materials possess a distribution of particle spacings and display scattering that is far more continuous as a function of the scattering wave vector. In this chapter, we explore in detail the relationship between the structure factor of a liquid or glass and the corresponding short-range order described by the pair distribution function introduced in Chapter 2. We demonstrate that S(q) is (mostly) a Fourier transform of the pair distribution function. Thus again, prominent features of the static structure factor point to recurrent particle spacings present in the material, and provide vital experimental clues to the short-range order.
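The relationship previewed here can be stated compactly in one common convention (prefactors vary between texts):
\[
S(q) = 1 + \rho \int \left[g(r) - 1\right] e^{-i\mathbf{q}\cdot\mathbf{r}}\, d^{3}r,
\]
where ρ is the average number density, so that a peak in S(q) near q ≈ 2π/d signals a recurrent particle spacing d in g(r).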
In this chapter, we also look at how visible light is scattered by liquids and glasses. Unlike X-rays, which probe mainly the short-range order resident over just a few coordination layers, the larger wavelength of visible light makes it sensitive to larger-scale density variations caused by thermal fluctuations. In this alternative scattering regime, the pattern of density fluctuations is described by the van Hove correlation function which, again, is related to S(q) by a Fourier transform.
In this final chapter on the subject of scattering, we examine the structure of extended, but finite-sized composite objects constructed of a very large number of individual particles. Examples include polymer molecules composed of many repeated individual chemical units, and aggregation clusters that form when many individual particles randomly assemble into a larger structure. In both instances, we will see that the amorphous structures of these macroscopic-sized objects display self-similarity – a continuous hierarchy of structures that appear identical on many alternative length scales. This self-similarity appears in the pair distribution function as a power law dependence on radial distance, quite unlike the g(r) curves we have examined thus far, and one that transforms into Fourier space as a corresponding power law variation of S(q).
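For a mass fractal of dimension d_f embedded in three dimensions, these power laws take the generic form (a standard scaling statement, valid over the self-similar range of length scales):
\[
g(r) \sim r^{\,d_f - 3}
\qquad \Longleftrightarrow \qquad
S(q) \sim q^{-d_f},
\]
so the fractal dimension can be read directly from the slope of a log-log plot of the scattered intensity versus q.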
Also in this chapter, we conclude our survey of structures and scattering with a brief look at liquid crystals and microemulsions, whose structures undergo a series of transitions with symmetries that are intermediate between those of crystals and liquids. In these materials the particles are able to spontaneously self-assemble into more ordered structures as a result of only weak inter-particle forces.
On my bulletin board I have a picture of a recent U.S. president that someone has
photoshopped to include a text balloon that says, “The ice caps are not
melting. The water is being liberated.” Although intended to be humorous,
there is an element of truth to this statement. Phase transitions are in many
respects the result of a competition or war between two opposing forces. On one
side are the attractive interactions between particles that act to bind them
together and force them into a more ordered structure – a world governed
by potential energy. On the other side is thermal energy that acts to break
these bonds and liberate the particles so that they are free to move about
– a world dominated by kinetic energy. There is then a point of
transition where one world order trumps the other and it is this phenomenon that
we consider in this final set of chapters.
There are many sorts of phase transitions, but the two prominent examples we will
consider are the gas-to-liquid transition and the paramagnet-to-ferromagnet
transition. In spite of the obvious differences between these two
systems, we will emphasize a remarkable level of similarity in how their phase
transitions proceed and how order develops in both situations.
Beginning in Chapter 15, we lay some groundwork regarding the fundamental nature
of phase transitions and explore the meanings behind various phase diagrams used
to describe the transitions between different phases. Here we examine the
competition between inter-particle interactions and thermodynamic forces in
determining the conditions for phase transitions to occur and emphasize the
special role played by thermodynamic fluctuations near so-called
“critical” points, where certain thermodynamic quantities tend to
diverge.
In this chapter we develop a general formalism to describe the scattering of waves by a large system of particles and show that the scattering pattern relates directly to the structural arrangement of the particles. We develop this formalism using the specific example of light waves, composed of oscillating electromagnetic fields. But, in principle, the waves could represent any wave-like entity including matter waves such as traveling electrons or neutrons. The characteristic scattering pattern is known as the static structure factor, and it results from the collective interference of waves scattered by particles in the system. This interference is sensitive to the relative separation between the particles, and the static structure factor is shown to be just a spatial Fourier transform of the particle structure as it is represented by the density–density correlation function.
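Stated in a generic form (the chapter develops its own notation), the static structure factor for N identical particles located at positions r_j is
\[
S(\mathbf{q}) = \frac{1}{N}\left\langle \left|\sum_{j=1}^{N} e^{\,i\mathbf{q}\cdot\mathbf{r}_j}\right|^{2}\right\rangle,
\]
where q is the scattering wave vector and the angular brackets denote a thermal (ensemble) average.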
The dipole field
All condensed matter is constructed of atoms that contain nuclei and electrons. The nuclei reside at the atom center and the electrons, while bound up in the atom, orbit about the nucleus at a relatively large distance under the attraction of a Coulomb force. In considering the interaction of an atom with an external electric field, we know that both the electron and the nucleus experience opposing forces owing to their opposite charge. However, because neutrons and protons are about two thousand times more massive than the electron, we can largely disregard any disturbances in the location of the nucleus and instead focus on the motion of electrons alone.
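For orientation (standard electrodynamics, quoted without derivation), a bound electron driven by the oscillating field behaves as an oscillating dipole of amplitude p₀, and the magnitude of its radiated field in the far zone is
\[
E_{\mathrm{rad}}(r,\theta) = \frac{p_{0}\,\omega^{2}\sin\theta}{4\pi\varepsilon_{0} c^{2} r},
\]
with θ measured from the dipole axis; it is this re-radiated field that constitutes the scattered wave discussed in this chapter.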
Up to now, we have considered only those inherent microscopic dynamics in a material that are present at equilibrium and are driven by the thermal energy content of the material itself. Here, in our last chapter dealing with dynamics, we consider instead the macroscopic, bulk dynamics of materials in non-equilibrium situations where an external force is applied or removed. Examples include the stretching or bending of a solid that results from application of a mechanical force, or the polarization of a dielectric material resulting when an external electric field is applied.
Several common features emerge in the response of a material to an external force or field. In all cases, there is some aspect of elasticity by which application of the force results in the storage of potential energy that is returned when the force is removed. In all cases, this storage of energy is accompanied by some element of viscous drag or damping by which a portion of the work done during the deformation is lost in the form of heat. Like friction, this damping is a microscopic feature inherent in the thermodynamic fluctuations, and the energy lost during the deformation is returned to the same thermal bath from which it was derived. In fact, we will show that an important theorem exists, known as the fluctuation–dissipation theorem, which relates the macroscopic dissipation of energy in these bulk, non-equilibrium processes directly to the inherent microscopic fluctuations present at equilibrium.
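In its simplest classical form (conventions differ between texts; this is one common statement), the fluctuation-dissipation theorem relates the equilibrium spectral density of a fluctuating variable x to the imaginary, lossy part of the corresponding response function χ(ω):
\[
S_{x}(\omega) = \frac{2 k_B T}{\omega}\, \chi''(\omega),
\]
so a measurement of the dissipation in a driven, non-equilibrium experiment determines the spectrum of the spontaneous equilibrium fluctuations, and vice versa.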
This textbook was designed to accompany a one-semester, undergraduate course that
itself is a hybridization of conventional solid state physics and
“softer” condensed matter physics.
Why the hybridization? Conventional (crystalline) solid state physics has been
pretty much understood since the 1960s, a time when non-crystalline physics
was still a fledgling endeavour. Some 50 years later, many of the foundational
themes in condensed matter (scaling, random walks, percolation) have now matured
and I believe the time is ripe for both subjects to be taught as one. Moreover,
for those of us teaching at smaller liberal arts institutions like my own, the
merging of these two subjects into one better accommodates a tight curriculum
that is already heavily laden with required coursework outside the physics
discipline.
Why the textbook? For some years now I have taught a one-semester course,
originally listed as “solid state physics”, which evolved through
each biannual reincarnation into a course that now incorporates many significant
condensed matter themes, as well as the conventional solid state content. In
past offerings of the course, a conventional solid state textbook was adopted
(Kittel’s Introduction to Solid State Physics) and students
were provided with handouts for the remaining material. This worked poorly.
Invariably, the notation and style of the handouts clashed with those of the
textbook and the disjointed presentation of the subject matter was not only
annoying to students, but a source of unnecessary confusion. Students were left
with the impression that solid state and condensed matter were two largely
unrelated topics being crammed into a single course. Frustrated, I opted to
spend a portion of a recent sabbatical assembling all of the material into a
single document that would better convey the continuity of these two fields by
threading both together into a seamless narrative.
We often think of crystals as the gemstones we give to a loved one, but most metals (e.g. copper, aluminum, iron) that we encounter daily are common crystals too. In this chapter, we will examine the structure of crystalline matter in which particles are arranged in a repeating pattern that extends over very long distances. This long-range order is formally described by identifying small local groupings of particles, known as a basis set, that are identically affixed to the sites of a regularly repeating space lattice. As it happens, most crystals found in nature assume one of a limited set of special space lattices known as Bravais lattices. These lattices are special by virtue of their unique symmetry properties wherein only discrete translations and rotations allow the lattice to appear unchanged. Chief among these Bravais lattices are the cubic and hexagonal lattice structures that appear most frequently in nature. We focus extra attention on both to provide a useful introduction to coordination properties and packing fractions.
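As a worked example of a packing fraction (a standard calculation, included here for concreteness): in the face-centered cubic structure the spheres touch along a face diagonal, so the cube edge is a = 2√2 r and the four spheres per conventional cell occupy
\[
f_{\mathrm{fcc}} = \frac{4 \cdot \tfrac{4}{3}\pi r^{3}}{(2\sqrt{2}\, r)^{3}}
= \frac{\pi}{3\sqrt{2}} \approx 0.74,
\]
the densest packing achievable with identical spheres.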
Crystal lattice
Crystals have a decided advantage because of the inherent repeating pattern present in their structure. In an ideal (perfect) crystal, this repeating pattern extends indefinitely. However, for real crystals found in nature, the pattern is often interrupted by imperfections known as defects that can include vacancies, in which a single particle is missing, and dislocations in which the repeating pattern is offset. These defects are important for some crystal properties, but for now we restrict ourselves to only ideal structures. Besides, even in real crystals large regions containing substantial numbers of particles exist in which a perfectly repeating pattern is maintained.
Most of the light that enters our eyes has been scattered; we see objects because of the diffuse scattering of light they produce. Even the sky is blue because of how it scatters sunlight. But scattering is also an important mechanism for observing very small objects. As a classic example, recall how Lord Rutherford unveiled the internal structure of the atom by studying the scattering pattern of alpha particles directed at gold atoms. The abnormally large number of particles backscattered by these gold atoms pointed to the existence of a small, but very dense, center which we now refer to as the nucleus.
In the next chapter, we develop the basic framework for the scattering of
waves by condensed matter by looking at how electromagnetic waves scatter
from the electrons contained in the particles. Although this is strictly relevant
only for the scattering of X-rays and visible light, much of the formalism that
develops will apply equally to other waves, including particle waves (electrons
or neutrons) that interact with things other than electrons. In the following
chapter (Chapter 6), we look at how X-rays scatter from crystals. There we will
find scattering that is reminiscent of how visible light is scattered by a diffraction
grating in that the scattered radiation exits as a set of discrete beams. This
discrete (Bragg) diffraction is contrasted in Chapter 7 by the continuous pattern
of scattering produced by glasses or liquids.
In the last chapter, we took a brash and somewhat unrealistic approach to treating the motion of electrons in a crystal. Although we know that the electron travels through a periodic potential caused by the regular arrangement of ion cores, we disregarded this “bumpy terrain” and considered instead only the barest consequences of the electron being trapped in the crystal “box” as a whole. In spite of its simplicity, this free electron model provided insightful explanations, not only for the origin of the small electronic contribution to specific heat and the temperature dependence of the electrical resistivity, but also for a host of emission phenomena, including the photoelectric effect.
However, the free electron model fails to provide any insight into additional questions regarding electrical conduction, such as (1) the anomaly of positive Hall coefficients that would imply positive charge carriers, and (2) the peculiar pattern of conductors, insulators and semiconductors that is found in the periodic table. In this chapter, we examine the nearly free electron model as a natural extension in which a weak, periodic potential is introduced. As a direct consequence of this addition, the continuum of electron energies in our free electron model now becomes separated into bands of allowed electron energy, separated by disallowed energy gaps. This separation of the electron energy into bands and gaps is key to understanding the division of materials into conductors, insulators and semiconductors, as well as providing a natural interpretation for the positive Hall coefficients.
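Two standard results anticipated here may help fix ideas (quoted in generic notation rather than derived): a weak periodic potential with Fourier component U_G opens a gap of width 2|U_G| at the zone boundary, and the Hall coefficient carries the sign of the charge carriers,
\[
E_{\pm}\!\left(k = \tfrac{G}{2}\right) = \frac{\hbar^{2}}{2m}\left(\frac{G}{2}\right)^{2} \pm \left|U_G\right|,
\qquad
R_H = \frac{1}{nq},
\]
so that hole-like conduction, with effective carrier charge q = +e, naturally accounts for positive Hall coefficients.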
Percolation theory refers to properties of a simple experiment in which
random events produce features common to second-order transitions: namely, a
continuously developing order parameter and self-similar, critical-like
fluctuations. The model itself is quite simple, yet as we will see, it has
been used extensively to interpret many phenomena found in nature, including
not only the conditions under which liquids percolate through sand (from
which the theory obtains its name), but also the manner in which stars form
in spiral galaxies.
In this chapter, we investigate the percolation process in some rigorous
detail to demonstrate how percolation clusters develop in a self-similar,
power law manner near the percolation threshold. We also take this
opportunity to introduce both the finite-size scaling and renormalization
techniques. Both of these techniques exploit the inherent self-similarity to
gain insight into the critical exponents that characterize a second-order
phase transition, and will prove useful to us in the next chapter.
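The power laws referred to here take the generic form (standard percolation notation; the exponent values depend only on dimensionality):
\[
P_{\infty} \sim (p - p_c)^{\beta} \;\; (p > p_c),
\qquad
\xi \sim |p - p_c|^{-\nu},
\qquad
n_s \sim s^{-\tau} f\!\left(s/s_{\xi}\right),
\]
where P_∞ is the fraction of sites belonging to the spanning cluster, ξ is the correlation length, and n_s is the number of clusters of size s per lattice site.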
The percolation scenario
At the heart of percolation theory is the question of how long-range
connections develop through a random process. Consider a geometrical lattice
of some arbitrary dimension such as the two-dimensional network of pipes
shown in the form of a square lattice in Fig. 16.1a. Here, the pipes are
fully connected and fluid is free to flow from one edge of the network to
the other. Suppose we now insert valves throughout this arrangement of pipes
in one or the other of two ways. In the first instance, which corresponds to
bond percolation, the valves are placed inside the
pipes (i.e. inside the “bonds” between intersections), as
shown in Fig. 16.1b. In the alternate case, referred to as site
percolation, the valves are placed at the intersection of the
pipes. Again, when all the valves are opened, the network is fully connected
and fluid can flow readily from one side to the other. But, if all the
valves are closed, the network is fully disconnected and fluid is unable to
flow anywhere.
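To make the site percolation picture concrete, the following is a minimal simulation sketch (an illustration added here, not taken from the text): sites of an L × L square lattice are opened independently with probability p, and a breadth-first search tests whether an open cluster spans from the top row to the bottom row.

```python
# Minimal site-percolation sketch on an L x L square lattice (illustrative only).
import random
from collections import deque

def spans(L, p, rng=random):
    """Return True if open sites connect the top row to the bottom row."""
    open_site = [[rng.random() < p for _ in range(L)] for _ in range(L)]
    visited = [[False] * L for _ in range(L)]
    # Seed the search with every open site in the top row.
    queue = deque((0, j) for j in range(L) if open_site[0][j])
    for _, j in queue:
        visited[0][j] = True
    while queue:
        i, j = queue.popleft()
        if i == L - 1:
            return True  # the cluster has reached the bottom row
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and open_site[ni][nj] and not visited[ni][nj]:
                visited[ni][nj] = True
                queue.append((ni, nj))
    return False

if __name__ == "__main__":
    L, trials = 64, 200
    for p in (0.50, 0.55, 0.59, 0.63):
        hits = sum(spans(L, p) for _ in range(trials))
        print(f"p = {p:.2f}: spanning fraction ~ {hits / trials:.2f}")
```

Running the sketch for a range of p shows the spanning probability climbing steeply near the literature value p_c ≈ 0.593 for site percolation on the square lattice, the behavior analyzed quantitatively in the remainder of the chapter.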
In the last chapter, we investigated the dynamics of liquids whose particles are free to wander about owing to the relatively weak inter-particle bonding. In a solid (crystal or glass), bonding between particles is stronger and the translational motion of the particles is arrested. Nevertheless, these “solid” particles continue to move and execute small, localized vibrations about a fixed point in space. In this chapter and the next, we investigate the nature of this vibrational motion and its impact on the thermal properties of a solid. Here we begin by considering a simple model of masses connected by ideal springs to demonstrate how vibrations of individual atoms are, in reality, a consequence of propagating waves traveling through the crystal lattice. In order to connect these waves with the quantum mechanical perspective of each atom behaving as a quantized harmonic oscillator, we introduce the concept of a quantum of elastic wave, known as a phonon.
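For the simplest such model, a one-dimensional chain of identical masses m connected by springs of stiffness C with equilibrium spacing a, the resulting dispersion relation is the standard result (quoted here in generic notation)
\[
\omega(K) = 2\sqrt{\frac{C}{m}}\,\left|\sin\!\left(\frac{K a}{2}\right)\right|,
\]
which is linear (sound-like) at small K and flattens as K approaches the zone boundary at K = ±π/a.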
An important outcome of our development of quantized elastic waves is a growing appreciation for a special region of reciprocal space known as the Brillouin zone, which is populated by all the wave vectors, K, corresponding to allowed phonon waves in the crystal. For phonons whose K matches the edge of this zone, significant Bragg scattering results, producing two equivalent standing wave patterns separated by an energy gap. We will revisit the Brillouin zone often in the chapters to come, and we will begin to appreciate the significance of this boundary for the motion of all waves that attempt to travel within a crystal.
We have now witnessed the similar patterns associated with second-order phase transitions in both fluid and magnetic systems. These patterns include laws of corresponding states and similarities in the critical exponents that govern how properties evolve near the transition point. Furthermore, the Landau theory provides a framework for understanding the commonality of these second-order phase transitions in terms of similarity in the functional dependence of the free energy on an appropriately chosen order parameter, and a simple expansion that can be performed near the critical point. In this chapter, we examine yet another significant phase transition found in condensed matter: the transition of a material to a state of virtually infinite conductivity, or superconductivity. Here the transition involves the sticking together of two electrons into a boson-like, superconducting charge carrier known as a Cooper pair, and we again find evidence of a second-order transition consistent with mean field theory.
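The Landau framework referred to here rests on an expansion of the free energy in a small order parameter ψ near T_c (a standard generic form, with coefficients left unspecified):
\[
F(\psi, T) \approx F_{0} + a\,(T - T_c)\,\psi^{2} + b\,\psi^{4}, \qquad a, b > 0,
\]
which yields ψ = 0 above T_c and |ψ| ~ (T_c − T)^{1/2} just below it, the mean field behavior that reappears in the superconducting transition.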
Superconducting phenomena
Discovery
In 1908, H. K. Onnes perfected the technique for cooling helium gas to its condensation point and soon after began using this new technology to investigate the properties of various elements at ultra-low temperatures. In one instance, Onnes was curious about the ultimate demise of the resistivity of an electronic conductor. As we saw in Chapter 12, the resistivity of most conductors decreases linearly with temperature at high temperatures, due to the scattering of electrons by lattice phonons, but approaches a limiting value at low temperatures, associated with a mean free path determined by macroscopic imperfections of the crystal lattice. In a series of studies, Onnes measured the resistance of gold and platinum and observed an approach to a limiting resistance at low temperatures. In an effort to eliminate the effects of imperfections, he extended the study in 1911 to include mercury, which at that time could be refined to a highly pure form. The results of this study, shown in Fig. 18.1, are quite dramatic. A roughly linear temperature dependence was observed above about 4.2 K, which decreased abruptly to an immeasurably small resistance at lower temperatures. On reheating, the resistance was identically retraced, and Onnes concluded that mercury had undergone a unique phase transition to a new state characterized by virtually zero resistance – a “superconducting” phase.