The motion of charge carriers within a semiconductor is governed by a number of useful concepts which can be understood in straightforward ways from rates of change of particles in space, time, momentum and energy. In chapter 1 it was seen that the conservation of particles leads to the continuity equations (1.2.3) and (1.2.5) relating the rate of accumulation of charge density ∂ρ/∂t and the spatial rate of dispersal, div J, of current density J.
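The conservation statement behind the continuity equation can be checked numerically. The following is a minimal sketch, not from the text, with invented grid sizes and values: a one-dimensional pulse of charge density is advected by the update ∂ρ/∂t = −∂J/∂x, and because the flux leaving one cell is exactly the flux entering the next, the total charge is conserved.

```python
import numpy as np

# Minimal sketch (grid sizes and values invented): the 1-D continuity
# equation  d(rho)/dt = -dJ/dx  conserves total charge, which a
# conservative finite-difference update reproduces exactly.
nx, dx, dt = 100, 1e-6, 1e-12        # cells, cell width (m), time step (s)
x = (np.arange(nx) - nx / 2) * dx
rho = np.exp(-(x / 5e-6) ** 2)       # initial charge-density pulse
v = 1e4                              # uniform carrier velocity (m/s)

total_before = rho.sum() * dx        # total charge on the (periodic) grid
for _ in range(200):
    J = rho * v                      # current density J = rho * v
    # upwind difference with periodic boundary: the flux out of one cell
    # is exactly the flux into its neighbour, so charge is conserved
    rho -= (dt / dx) * (J - np.roll(J, 1))
total_after = rho.sum() * dx
```

The pulse moves along the grid, but its integral stays fixed, which is the content of the continuity equation when no charge is created or destroyed.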
This section shows how rates of change of momentum and energy determine the velocity v = J/ρ of charge carriers as the electric field changes within a semiconductor. In chapter 4 a more detailed approach to carrier transport is discussed using the Boltzmann collision equation, which brings in diffusion and also develops a model relevant to the Gunn effect.
The later sections of the present chapter continue with elementary transport, discussing the rates at which semiconductors relax back to equilibrium. Chapter 3 outlines how such rates place limits on the engineering of devices for very high speed switching.
Rates of change of momentum: mobility
Quantum theory assures one that electrons behave as waves and that electron waves can travel freely through a perfectly periodic array of atoms such as is formed by a perfect crystal. The analogy is often made between electron quantum waves in a crystal and electromagnetic waves travelling through a periodic structure of inductors and capacitors. In such a filter only certain frequencies are permitted to propagate. In the crystal there is equally a limited range of frequencies for the quantum waves, and this means a limited range of energies for the mobile electrons (Fig. 2.1).
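The filter analogy can be made quantitative. As a sketch with invented component values (not taken from the text): a ladder of series inductors L and shunt capacitors C passes waves only below the cut-off angular frequency ω_c = 2/√(LC); above that frequency the waves decay rather than propagate, just as electron waves are excluded from forbidden energy ranges.

```python
import math

# Sketch of the LC-ladder filter analogy (component values invented):
# a constant-k low-pass ladder of series inductors L and shunt
# capacitors C propagates waves only in the pass band
#   0 <= omega <= omega_c = 2 / sqrt(L*C).
L = 1e-6      # inductance per section (H), assumed
C = 1e-9      # capacitance per section (F), assumed

omega_c = 2.0 / math.sqrt(L * C)      # edge of the pass band (rad/s)
f_c = omega_c / (2.0 * math.pi)       # cut-off frequency (Hz)
print(f"pass band: 0 to {f_c / 1e6:.2f} MHz")
```

The limited band of permitted frequencies for the filter corresponds to the limited band of permitted energies for electrons in the crystal.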
The motion of charge carriers within a semiconductor (or a gas) can be found by a more formal approach through the Boltzmann collision equation, which provides an elegant method by considering the rates of change of particles within ‘phase’ space – a concept to be introduced shortly. To keep the discussion clear, a one-dimensional classical approach will be considered with charge carriers having an effective mass, m*, assumed to be independent of energy or direction (not strictly valid in a semiconductor but still a most useful simplification). Extensions to three dimensions and the required corrections for quantum theory can be dealt with in later reading.
The Boltzmann collision equation for the flow of many particles is a statistical equation expressing the conservation of particles in a six-dimensional space referred to as phase space. The six dimensions consist of the three space dimensions of x combined with three additional dimensions for the momentum p. Momentum is considered here as an independent variable, with the same independence as position; it is the dynamical equations which link these six independent variables together. On first acquaintance, the concept of momentum and position as independent variables appears absurd, because it is easy to confuse the dynamical link (given through an equation such as m dx/dt = p) with a functional interdependence of p and x.
W. R. Hamilton in 1834 introduced the idea that all dynamical motion could be described in terms of a function H(p, x, t) linking the momentum p and position x in time. Coordinates of position and momentum can be defined and are treated as of equal independence. […]
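The way Hamilton's equations treat x and p as independent phase-space coordinates can be seen in a short numerical sketch (all values invented, and the harmonic oscillator is chosen only for illustration): the pair dx/dt = ∂H/∂p, dp/dt = −∂H/∂x is stepped forward, and the trajectory stays on a curve of nearly constant H.

```python
# Sketch (values invented): Hamilton's equations
#   dx/dt = dH/dp,   dp/dt = -dH/dx
# treat x and p as independent phase-space coordinates.  For the
# oscillator H = p^2/(2m) + (1/2)*K*x^2 a symplectic-Euler step keeps
# the orbit close to the ellipse of constant H.
m, K = 1.0, 1.0
x, p = 1.0, 0.0
dt = 1e-3

H0 = p * p / (2 * m) + 0.5 * K * x * x   # initial energy
for _ in range(10_000):
    p -= dt * K * x          # dp/dt = -dH/dx = -K*x
    x += dt * p / m          # dx/dt =  dH/dp =  p/m
H1 = p * p / (2 * m) + 0.5 * K * x * x   # energy after 10 time units
```

Neither x nor p is a function of the other; the dynamics alone tie them together, which is the point of the phase-space picture.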
The work here concentrates on semiconductor devices, leaving gas and atomic lasers for further reading. This chapter considers tutorial models for the diode injection laser, the semiconductor light emitting diode (LED) and photodiode (PD), all of which can be understood from the same forms of rate equations, though with different constraints and interpretation. It will be helpful to have models for these three devices before starting on their rate equations.
The light emitting diode
The LED is a p-n junction driven by a current I into forward bias. Electrons and holes recombine close to the junction between the p- and n-materials and give out radiation with a frequency determined by the photon energy, which in turn is determined by the material's impurities or band gap. Some fraction of the forward current I is then turned into useful light L (power) formed from photons with a mean energy hfm. The quantum efficiency is η = eL/(hfmI). The more efficient LEDs have a well-defined recombination region (volume Φ). For example, in GaAs LEDs (Fig. 6.1), the region Φ can be defined through the use of ‘heterojunctions’, where a p-type GaAs layer is sandwiched between n- and p-type Ga1−xAlxAs materials, which have a wider energy gap between conduction and valence band than GaAs. On forward bias, holes are driven from the p-Ga1−xAlxAs material into the GaAs, but the potential difference that arises between the valence band of p-type GaAs and n-Ga1−xAlxAs prevents the holes diffusing into the n-Ga1−xAlxAs material.
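The quantum-efficiency relation η = eL/(hfmI) is simply the ratio of photons emitted per second to electrons injected per second. A minimal numerical sketch, in which the drive current, output power and photon energy are invented illustrative values rather than measurements from the text:

```python
# Hedged sketch of eta = e*L / (h*f_m*I); all operating values assumed.
e = 1.602e-19        # electron charge (C)
h = 6.626e-34        # Planck constant (J s)

E_photon_eV = 1.42                  # mean photon energy, ~GaAs band gap (eV)
f_m = E_photon_eV * e / h           # mean photon frequency (Hz)
I = 20e-3                           # forward current (A), assumed
L = 2e-3                            # optical output power (W), assumed

photons_per_s = L / (h * f_m)       # rate of photon emission
electrons_per_s = I / e             # rate of carrier injection
eta = photons_per_s / electrons_per_s   # = e*L / (h*f_m*I)
print(f"quantum efficiency eta = {eta:.3f}")
```

With these assumed numbers only a few per cent of the injected carriers yield useful photons, which is why confining recombination to a well-defined region Φ matters.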
In the hope of avoiding analysis for its own sake, some of the ideas of the preceding chapter are applied to the dynamics of semiconductor switching devices which are important in computer and communication systems. Most texts on semiconductor devices concentrate on impurities, Fermi levels, diffusion equations and equilibrium starting conditions. In the approach here, the chief concerns are the rates at which a device can transport charge. So RC time constants, the dielectric relaxation rate, transit times and rates of recombination are the quantities that appear in approximate dynamic analyses of the selected switching devices. There is an inevitable tendency to digress from one or two themes of rate equations into standard semiconductor physics, and the forbearance of the reader is requested when the digressions into physics are too lengthy, inadequate or both.
Digital communications and computations rely on transmitting information by electromagnetic pulses (electrical, microwave or optical) which are either ‘on’ or ‘off’. In section 1.6 it was seen that such binary signals helped to maximise the probable information of a single symbol. Moreover, encoding signals into pulses leads to more accurate detection, regeneration and transmission of data through a variety of techniques such as error correcting codes, which can combat interference or noise in a transmission path. Faster switching rates lead to shorter pulses and so to higher rates of data and information processing.
An ideal switch between a load and a source would transfer power instantaneously to the load, but we remind the reader that switches and loads necessarily have lead and contact resistances, together with stray capacitances, which limit the rate of transfer of charge to the load.
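The effect of those parasitics can be put in numbers. A minimal sketch, with assumed resistance and capacitance values: the load voltage rises as V(t) = V0(1 − e^(−t/RC)), so reaching 90% of the final voltage takes RC ln 10 ≈ 2.3 time constants.

```python
import math

# Sketch of RC-limited switching (component values are assumptions):
# a source charges the load capacitance C through the total series
# resistance R, so V(t) = V0 * (1 - exp(-t / (R*C))).
R = 50.0       # total lead + contact resistance (ohms), assumed
C = 1e-12      # stray + load capacitance (farads), assumed
tau = R * C    # time constant (s)

t_90 = tau * math.log(10)   # time to reach 90% of the final voltage
print(f"tau = {tau * 1e12:.0f} ps, 90% rise in {t_90 * 1e12:.1f} ps")
```

Even a nominally ideal switch therefore cannot deliver a pulse edge faster than a few RC time constants allow.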
Collecting fresh fruits becomes ever harder as the tree of knowledge grows higher and wider. However, there are certain branches that provide surer footholds to the new growths, and teachers must search these out. The rates of change of charge, energy, momentum, photon numbers, electron densities and so on, along with their detailed balances as particles and systems interact, provide fundamental footholds on sturdy branches in physics and chemistry. This book contains a collection of such topics applied to semiconductors and optoelectronics in the belief that such analyses provide valuable tutorial routes to understanding past, present and future devices. By concentrating on rates of change one focuses attention on these devices' dynamic behaviour which is vital to the ever faster flow of digits and information. The rates of the statistical interactions between electrons and photons determine distributions of energy amongst the particles as well as determining distributions in time, so controlling the ways in which devices work.
The first chapter is meant to be a fun chapter outlining some of the breadth and ideas of rate equation approaches. It is even hoped that some of these initial ideas may be picked up by sixth form teachers. Rates of reactions are mentioned in school chemistry but the implications are much broader.
Most electronic degree courses consider electron waves, holes and electrons, along with devices such as p-n junctions, FETs and bipolar transistors. Chapters 2 and 3 are adjuncts to this work. By considering rates of change of charge and emphasising transit times and recombination rates, the dynamics of these devices can be highlighted.
An outstandingly innovative scientist, Rudi Kompfner, wrote that when his intuition was unengaged or disengaged then his creative faculties were paralysed. Although Kompfner was writing about quantum theory, his remarks apply to most aspects of science. How can one create and innovate when no understanding is present? The idea behind this book is that a useful contribution to understanding in science and engineering can be found by determining the rate at which an interactive process occurs and concentrating on the dominant features which limit the interaction rates.
Such thinking is not limited to science; it can have universal application. For example, before lending money to a client, a building society will ask how much that client is earning from any employers, and so obtain an estimate of the maximum rate at which the client can reasonably pay off the mortgage that will be advanced to buy a house. The rate of income being paid to the client determines to a first order the rate at which money can be spent! The maximum amount of traffic that can use a road may be limited not by the size of the road, but by the rate at which traffic can escape or enter from congested roundabouts that serve the road. In building electronic circuits to switch at high speeds, one may find that the speed is limited by the rate at which components can transfer charge into a capacitive load. It may alternatively be limited by the rate at which information can be transmitted from neighbouring devices, which have to be a certain distance away in order to accommodate enough devices to drive and be driven by any one single device.
This book was written as the text for a one-quarter, or one-semester, introductory course on the physics of solids. For an undergraduate majoring in physics, the associated course will usually be taken during the last two undergraduate years. However, the book is designed also to meet the needs of those with other degree majors: in chemistry, electrical engineering, materials science, etc., who may not encounter this requirement in their education until graduate school. Some topics discussed (band theory, for example) require familiarity with the language and concepts of quantum physics; the assumed level of preparation is one semester of “modern physics”. A reader who has taken a formal quantum mechanics course will be well prepared, but it is recognized that this is often not possible. Thus Schrödinger's equation is seen from time to time, but formal quantum mechanical proofs are side-stepped.
The aim is thus a reasonably rigorous – but not obscure – first exposition of solid state physics. The emphasis is on crystalline solids, proceeding from lattice symmetries to the ideas of reciprocal space and Brillouin zones. These ideas are then developed: for lattice vibrations, the theory of metals, and crystalline semiconductors, in Chapters 2, 3, and 4 respectively. Aspects of the consequences of atomic periodicity comprise some 75% of the book's 500 pages.
In this chapter we are concerned with the spectrum of characteristic vibrations of a crystalline solid. This subject leads to a consideration of the conditions for wave propagation in a periodic lattice, the energy content and specific heat of lattice waves, the particle aspects of quantized lattice vibrations (phonons), and the consequences of anharmonic coupling between atoms. These topics form a significant part of solid state physics, and their discussion additionally introduces us to the concepts of permitted and forbidden frequency ranges, concepts which will be encountered again in connection with electronic spectra of solids.
The zero-point energy and thermal energy of a solid are manifest in incessant complicated vibrations of the atoms. These vibrations have Fourier components at a variety of frequencies. Additional motion is superimposed if the solid is stimulated by some external source, and we usually assume that the principle of superposition applies to the sum of these motions, i.e., we assume that the effect of several disturbances is found by simply adding them together. This assumption sounds plausible, provided that we remain in the linear region (or region of elastic deformation) such that the restoring force on each atom is approximately proportional to its displacement (Hooke's Law). As we shall see in discussing thermal conductivity, there are some effects of nonlinearity or “anharmonicity” even for very modest atomic displacements.
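In the harmonic (Hooke's-law) regime the vibrations of a one-dimensional monatomic chain have the familiar dispersion ω(k) = 2√(K/m)|sin(ka/2)|. The sketch below uses invented values for the force constant, mass and spacing; it illustrates the permitted frequency band, the zone-boundary maximum, and the long-wavelength sound-velocity limit.

```python
import math

# Illustrative sketch (constants invented): dispersion of a 1-D
# monatomic chain in the harmonic approximation,
#   omega(k) = 2 * sqrt(K/m) * |sin(k*a/2)|,
# so only frequencies up to omega_max = 2*sqrt(K/m) can propagate.
K = 15.0        # interatomic force constant (N/m), assumed
m = 4.65e-26    # atomic mass (kg), assumed (~28 u)
a = 2.7e-10     # lattice spacing (m), assumed

def omega(k):
    return 2.0 * math.sqrt(K / m) * abs(math.sin(k * a / 2.0))

k_edge = math.pi / a            # Brillouin-zone boundary
omega_max = omega(k_edge)       # top of the permitted band (rad/s)
# long-wavelength limit: omega ~ v_s * k, with sound velocity
v_s = a * math.sqrt(K / m)
```

Frequencies above omega_max do not propagate: they decay exponentially, the first appearance of a forbidden range that recurs in the electronic spectra of Chapter 4.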
This chapter gives a brief account of some of the phenomena on an atomic scale which contribute to the macroscopically observable dielectric and magnetic properties of solids. The discussion of certain topics has been abbreviated or curtailed for two reasons, and a bibliography of more extended accounts is provided at the end of the chapter as a supplement to the limited or descriptive coverage given here.
One reason for compressing dielectric and magnetic phenomena (which are two extremely active fields of solid state research) into a single chapter is that the subjects we shall discuss here have a rather different emphasis than the subject matter of the previous four chapters. Up to this point we have been almost continuously concerned with the consequences of periodicity in real space and in k-space. To be sure, the long range order of a crystalline lattice is significant in any discussion of ferroelectric, ferromagnetic, or antiferromagnetic behavior, but most of the topics in this chapter are less directly involved with the existence of a strictly periodic k-space.
I have also tried to be brief, and thus far from comprehensive, on many aspects of dielectric and magnetic behavior in order to hold this book to a reasonable length for a one-semester course (recognizing that there will always have to be selection of topics from within each of the five chapters). Accordingly, several of the topics are dealt with on a purely descriptive basis, relying on the bibliography for further source material.
While metallic conduction has attracted considerable interest for a long time, a clear knowledge of the character of electron dynamics in metals has emerged surprisingly recently. In this chapter we shall follow the historical sequence in first examining simple free electron models, for which it was supposed that atoms in metals could liberate their outer electrons to produce an electron “gas” for random thermal motion and contributions to conduction. These models explained a number of important metallic properties, but raised a new set of questions, which remained unanswerable until it was realized that this electron gas moves through space also occupied by a periodic array of positively charged atomic cores.
The periodic nature of a crystal lattice has often been emphasized in the last two chapters. In this chapter we shall see that the periodicity of the ion core array produces an electrostatic field distribution which profoundly affects the relationship between energy and momentum for a mobile electron. We call this relationship the “band theory of solids.” In addition to permitting a more realistic picture of metallic conduction, band theory explains why many solids have insulating or semiconducting properties.
As a curious consequence, advances in the understanding of semiconducting and insulating solids were very rapid during the 1940's and 1950's, and we might well say that by the mid 1950's some simple semiconductors were better understood than any metals. The pendulum has started to swing back since that time, and with the development of highly sophisticated experimental techniques there has been a lively resurgence of interest in metals.
It was not feasible to discuss metals in Chapter 3 without mentioning, on several occasions, the companion topic of electronic conduction in insulating and semiconducting materials. Now it is time to consider these solids in some detail.
For a metal, it is possible to vary the mobility of charge carriers (by changing the temperature to change the density and spectrum of phonons, or by changing the density of defects in the crystal), but the number of charge carriers is fixed. This number of free electrons may be characterized by a fixed electrochemical potential (Fermi energy) or in terms of some other suitable invariant parameter, and most investigative techniques with metals tell us only about the properties of electrons at the Fermi energy.
Things are very different in a semiconductor. The numbers (as well as the mobilities) of charge carriers depend on temperature and on the presence of defects or impurities. At thermodynamic equilibrium it is possible to express the occupancy of all electron states at all energies in terms of a single normalizing parameter or Fermi level, but this Fermi level is a consequence of the overall electrostatic balance and does not have to coincide with the energy of any mobile electrons. Thus in a semiconductor, we have to evaluate the dependence of the Fermi level on temperature and flaw concentration.
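The steep temperature dependence of carrier numbers, in contrast with the fixed number in a metal, can be sketched with the standard intrinsic-density relation n_i = √(NcNv) exp(−Eg/2kT). The silicon parameters below are common textbook approximations, not values from this text, and the T-dependence of Nc and Nv is neglected for simplicity.

```python
import math

# Hedged sketch (standard approximate Si parameters assumed): in an
# intrinsic semiconductor the carrier density varies steeply with
# temperature,  n_i = sqrt(Nc*Nv) * exp(-Eg / (2*k*T)),
# unlike the fixed carrier number of a metal.
k = 8.617e-5               # Boltzmann constant (eV/K)
Eg = 1.12                  # silicon band gap (eV), approx.
Nc, Nv = 2.8e19, 1.04e19   # effective densities of states (cm^-3), approx.

def n_i(T):
    """Intrinsic carrier density (cm^-3) at temperature T (K)."""
    return math.sqrt(Nc * Nv) * math.exp(-Eg / (2.0 * k * T))

# near room temperature the density roughly doubles for a ~10 K rise
ratio = n_i(310.0) / n_i(300.0)
```

A measurement sensitive to carrier number in silicon therefore probes the position of the Fermi level, whereas in a metal it would reveal almost nothing new.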