Prerequisites: Chapters 2–7, 9, 10, 12, 13, 15, and 16.
So far in the treatment of creation and annihilation operators, we have considered operators, including especially Hamiltonians, in which we are concerned with only one kind of particle (i.e., either identical bosons or identical fermions). Many important phenomena involve interactions of different kinds of particles (e.g., interactions of photons or phonons with electrons). Here, we discuss how to handle operators for such situations. As a specific, and particularly useful example, we discuss the electron–photon interaction. This leads us through perturbation theory in this operator formalism to a proper quantum mechanical treatment of absorption and stimulated and spontaneous emission.
States and commutation relations for different kinds of particles
The approach is an extension of what we have done before. We need two additions. First, though we continue to work in the occupation number representation, we must include the description of the occupied single-particle states for each different particle in the overall description of the states. Second, we need commutation relations between operators corresponding to different kinds of particles.
In considering the occupation number basis states – for example, for a system with two different kinds of particles – we simply have to list which states are occupied for each different kind of particle. Suppose that we have a set of identical electrons and a set of identical bosons (e.g., photons).
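As a minimal numerical sketch of these commutation properties (not part of the text's formal development), we can represent the operators as matrices: a single fermion mode for the electron, a boson mode truncated at a maximum photon number for the photon, and Kronecker products to place both on the combined occupation-number space. The variable names and the truncation level are our own illustrative choices:

```python
import numpy as np

# Fermion (electron) mode: annihilation operator b on a 2-dimensional space
b = np.array([[0.0, 1.0],
              [0.0, 0.0]])
bd = b.T                          # creation operator b-dagger

# Boson (photon) mode: annihilation operator a, truncated at nmax photons
nmax = 4
a = np.diag(np.sqrt(np.arange(1.0, nmax + 1)), k=1)
ad = a.T

# Operators on the combined space: list occupations for each kind of particle
B = np.kron(b, np.eye(nmax + 1))  # electron operator, identity on photon space
A = np.kron(np.eye(2), a)         # photon operator, identity on electron space
Ad = A.T

# Fermion operators obey the anticommutation relation {b, b-dagger} = 1 ...
print(np.allclose(b @ bd + bd @ b, np.eye(2)))   # True
# ... while operators for *different* kinds of particles simply commute
print(np.allclose(B @ Ad - Ad @ B, 0.0))         # True
```

The Kronecker-product construction is just the matrix version of listing which single-particle states are occupied for each kind of particle separately.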
Thus far with Schrödinger's equation, we have considered only situations where the spatial probability distribution was steady in time. In our rationalization of the time-independent Schrödinger equation, we imagined we had, for example, a steady electron beam, where the electrons had a definite energy; this beam was diffracting off some object, such as a crystal or through a pair of slits. The result was some steady diffraction pattern (at least, the probability distribution did not vary in time). We then went on to use this equation to examine some other specific problems, including potential wells, where this requirement of definite energy led to the unusual behavior that only specific, “quantized” energies were allowed.
In particular, we analyzed the problem of the harmonic oscillator and found stationary states of that oscillator. On the face of it, stationary states of an oscillator (other than the trivial one of the oscillator having zero energy) make little sense given our classical experience with oscillators – a classical oscillator with energy oscillates.
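The quantized energies of those stationary states can be checked numerically. The following sketch, which assumes units in which ħ = m = ω = 1 (so the exact energies are E_n = n + 1/2) and an arbitrary choice of grid and domain, discretizes the Hamiltonian with a simple finite-difference stencil:

```python
import numpy as np

# Harmonic oscillator H = -(1/2) d^2/dx^2 + x^2/2 in units hbar = m = omega = 1
N, L = 1000, 16.0                     # grid points; domain [-L/2, L/2]
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Second derivative as a 3-point stencil gives a tridiagonal Hamiltonian
main = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]         # lowest four eigenvalues
print(np.round(E, 3))                 # close to [0.5, 1.5, 2.5, 3.5]
```

Only the specific, "quantized" energies n + 1/2 emerge, even though nothing in the discretization singles them out in advance.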
Clearly, we must expect quantum mechanics to model situations that are not stationary. The world about us changes, and if quantum mechanics is to be a complete theory, it must handle such changes. To understand such changes, at least for the kinds of systems where Schrödinger's equation might be expected to be valid, we need a time-dependent extension of Schrödinger's equation.
Prerequisites: Chapters 2–7, including the discussion of periodic boundary conditions in Section 5.4.
One of the most important practical applications of quantum mechanics is the understanding and engineering of crystalline materials. Of course, the full understanding of crystalline materials is a major part of solid-state physics and merits a much longer discussion than we give here. We will, however, try to introduce some of the most basic quantum mechanical principles and simplifications in crystalline materials. This will also allow us to perform many quantum mechanical calculations of important processes in semiconductors.
Crystals
A crystal is a material whose measurable properties are periodic in space. We can think about it using the idea of a unit cell. If we think of the unit cell as a “block” or “brick,” then a definition of a crystal structure is one that can fill all space by the regular stacking of these unit cells. If we imagine that we marked a black spot on the same position of the surface of each block, these spots or points would form a crystal lattice. We can, if we wish, define a set of vectors, RL, that we call lattice vectors. The set of lattice vectors consists of all of the vectors that link points on this lattice; that is,

RL = n1a1 + n2a2 + n3a3

where n1, n2, and n3 run over all integers (positive, negative, and zero). Here, a1, a2, and a3 are the three linearly independent vectors that take us from a given point in one unit cell to the equivalent point in the adjacent unit cell.
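As a small illustration, we can generate a finite sample of lattice vectors from three primitive vectors. The face-centered cubic choice below, and the lattice constant, are assumed purely for the example:

```python
import numpy as np

# Primitive vectors of a face-centered cubic lattice, lattice constant a0
# (an illustrative assumption; any three linearly independent vectors work)
a0 = 1.0
a1 = 0.5 * a0 * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a0 * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a0 * np.array([1.0, 1.0, 0.0])

# Lattice vectors n1*a1 + n2*a2 + n3*a3 for a small range of integers
ns = range(-1, 2)
lattice = np.array([n1 * a1 + n2 * a2 + n3 * a3
                    for n1 in ns for n2 in ns for n3 in ns])
print(len(lattice))   # 27 lattice points in this 3 x 3 x 3 sample
```

Because the set is closed under addition, the sum of any two lattice vectors is itself a lattice vector, which is exactly the statement that the vectors link equivalent points of the structure.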
In this appendix, we summarize the core background mathematics that we presume in the rest of the book. A major purpose here is to clarify the mathematical notation and terminology. Readers coming from different backgrounds may be more used to one notation or another, and other books that the reader may consult may use different notation and terms, so we clarify the ones we use here and their relations to others. This appendix may also serve as a refresher, or to patch over some holes temporarily in the reader's knowledge until the reader has time to study the relevant mathematics in more detail. This short discussion is certainly not a complete one, and no attempt is made to give rigorous and complete mathematics.
Quantum mechanics is sometimes presented as being a very mathematical subject. It is true that many aspects of quantum mechanics can only be well defined using a mathematical vocabulary. In fact, we can assure the reader that the mathematics of quantum mechanics is not harder than that found in classical physics or any analytic branch of engineering and the required background is essentially the same as in those fields. Because quantum mechanics is very fundamentally based on linear mathematics, quantum mechanics, in practice, is arguably simpler mathematically than many other areas of science and engineering.
Prerequisites: Chapters 2–5, including a first reading of Section 2.11.
We have seen how to solve some simple quantum mechanical problems exactly, and, in principle, we know how to solve any problem governed by Schrödinger's equation. Some extensions of Schrödinger's equation are important for many problems, especially those including the consequences of electron spin. Other equations also arise in quantum mechanics, beyond the simple Schrödinger equation description, such as appropriate equations to describe photons and relativistically correct approaches. We postpone discussion of any such more advanced equations.
For all such equations, however, there are relatively few problems that are simple enough to be solved exactly. This is not a problem peculiar to quantum mechanics; there are relatively few classical mechanics problems that can be solved exactly either. Problems that involve multiple bodies or that involve the interaction between systems are often quite difficult to solve.
One could regard such difficulties as being purely mathematical, say that we have done our job of setting up the necessary principles to understand quantum mechanics, and move on, consigning the remaining tasks to applied mathematicians or possibly to some brute-force computer technique. Indeed, the standard set of techniques that can be applied, for example, to the solution of differential equations can be (and are) applied to the solution of quantum mechanical differential equation problems. The problem with such an approach is that if we blindly apply the mathematical techniques, we may lose much insight as to how such more complicated systems work.
When we think of processing information, we are typically used to a classical world in which we represent information in terms of the classical state of an object. For example, we could represent a number as the length of some particular rod in meters or the value of some electrical potential in volts; these would be analog representations. More typically in information processing, we represent numbers and other information digitally, in binary form as a sequence of “bits” that are either “1” or “0”. We can use all sorts of physical representations for the 1 and 0, such as an object being “up” or “down,” a device being “on” (e.g., passing current) or “off” (e.g., not passing current), or a voltage being “high” or “low.”
In the quantum mechanical world, we have additional options in representing information; in particular, we can use quantum mechanical superpositions, such as a superposition of “up” and “down.” Admittedly, we could easily have the equivalent of a superposition in a classical world for one physical system (a system that was half up and half down could simply be represented by it being horizontal). In quantum mechanics, however, we can have kinds of superpositions of multiple systems (so-called entangled states) that have no classical analog, and the act of measurement on a quantum mechanical system in a superposition can have quite a different result from that in a classical system (i.e., the process of “collapse” into an eigenstate).
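A minimal numerical sketch can make the distinction concrete. Here the two-component “up”/“down” vectors, and the use of matrix rank (the Schmidt rank) as the test for entanglement, are our own illustrative choices rather than notation from the text:

```python
import numpy as np

# Single two-level system: "up" and "down" basis states
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# A one-particle superposition (the classical "horizontal" analog exists)
single = (up + down) / np.sqrt(2)

# Two-particle states: an unentangled product state versus a Bell state
product = np.kron(single, single)
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Reshape the two-particle amplitudes into a 2x2 matrix: a product state has
# rank 1, while the Bell state has rank 2, so it is not a product of
# one-particle states -- it is entangled, with no classical analog.
print(np.linalg.matrix_rank(product.reshape(2, 2)))   # 1
print(np.linalg.matrix_rank(bell.reshape(2, 2)))      # 2
```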
The problem of simplifying the computation of the normal modes of vibrations of molecules and solids has been presented, over the past century, as a classic application of symmetry. It has been extensively discussed in a plethora of books on applications of group theoretical techniques. The dynamical problem of surfaces has been a relative latecomer.
A major contribution of group theoretical techniques to the dynamics of condensed matter systems has been to simplify the secular problem for the determination of normal mode eigenfrequencies and eigenvectors in the harmonic approximation. The secular matrix is found to be reducible, i.e. “block-diagonalizable”, with respect to the Irreps of the symmetry group of the system's Hamiltonian.
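The following toy sketch illustrates this block diagonalization for an assumed three-site chain with a mirror symmetry exchanging the end sites; all matrix entries are illustrative assumptions, not data for any real system:

```python
import numpy as np

# A toy "secular matrix" for a three-site chain (illustrative entries only)
D = np.array([[2.0, -1.0, 0.5],
              [-1.0, 3.0, -1.0],
              [0.5, -1.0, 2.0]])

# The mirror operation exchanging sites 0 and 2
S = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
print(np.allclose(S @ D, D @ S))   # True: D commutes with the symmetry

# Symmetry-adapted basis: two even (mirror-symmetric) vectors, one odd vector
r2 = 1.0 / np.sqrt(2.0)
U = np.column_stack([[r2, 0.0, r2],     # even
                     [0.0, 1.0, 0.0],   # even
                     [r2, 0.0, -r2]])   # odd

# In this basis the secular matrix is block-diagonal: a 2x2 even block
# and a 1x1 odd block, which can be diagonalized separately.
Db = U.T @ D @ U
print(np.round(Db, 6))
```

The eigenfrequencies of the full problem are just the union of the eigenvalues of the smaller blocks, which is the practical payoff of the symmetry reduction.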
For the sake of pedagogy, and in order to prepare the way for tackling the dynamics of the more complex condensed matter systems, we consider first the simpler dynamics of molecules.
Dynamical properties of molecules
The application of group theoretical techniques to study the dynamical properties of molecules involves the determination of symmetrized normal modes prior to the computation of the eigenfrequencies and eigenvectors. A typical example of such an approach has been presented in Chapter 6, to motivate the concept of projection operators. In that example, we were able to obtain the symmetry-adapted translation, rotation, and vibrational vectors describing the dynamics of water molecules. Here, we expand on this approach and extend it to enable the computation of corresponding eigenfrequencies and eigenvectors.
Condensed matter systems have diverse physical properties that are described by tensorial quantities. By this we mean that physical properties of a system are usually defined by, and consist of, relationships between two or more particular measurable quantities associated with the system. These measurable quantities themselves usually assume tensorial forms, so that the ensuing physical properties that characterize a physical system are represented by tensors.
Scalars, i.e. tensors of rank 0, are typified by temperature, pressure, and mass of a homogeneous system, etc., while tensors of rank 1, i.e. vectorial properties, are manifest in electric and magnetic fields and moments, temperature gradients, and currents. Examples of second rank tensors are evident in the characterization of stress and strain in material systems. All these tensors describe either some physical state of a system or some externally applied physical field. For example, the magnitude and direction of the electric polarization is specified in response to an applied external electric field. We call all these tensors, both applied and induced, physical tensors.
We also encounter second and higher rank tensors that relate applied and induced physical tensors, for example, the electric susceptibility tensor of rank 2 connects the electric polarization with the applied electric fields, and the elasticity tensor, of rank 4, relates the strain to the applied stress. In contrast to physical tensors, the latter tensors are system specific: their particular forms, i.e. the number of linearly independent components and their values, are determined by the symmetry and structure of a given system. We refer to these system-specific tensors as matter, or material, tensors.
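As a small sketch of a matter tensor at work, consider an assumed uniaxial crystal (optic axis along z), for which symmetry reduces the rank-2 susceptibility to two independent components; the numerical values here are purely illustrative:

```python
import numpy as np

# Susceptibility of an assumed uniaxial crystal: symmetry forces chi to be
# diagonal with chi_xx = chi_yy; the two values below are illustrative.
chi_perp, chi_par = 2.0, 3.0
chi = np.diag([chi_perp, chi_perp, chi_par])   # matter tensor, rank 2

E = np.array([1.0, 0.0, 1.0])   # applied field (physical tensor, rank 1)
P = chi @ E                     # induced polarization, P_i = chi_ij * E_j
print(P)                        # [2. 0. 3.]
```

Note that P is not parallel to E: because the matter tensor reflects the symmetry of the crystal rather than that of the applied field, the induced polarization is tilted toward the axis with the larger susceptibility.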