Phase transition dynamics is centrally important to condensed matter physics. This 2002 book treats a wide variety of topics systematically by constructing time-dependent Ginzburg-Landau models for various systems in physics, metallurgy and polymer science. Beginning with a summary of advanced statistical-mechanical theories including the renormalization group theory, the book reviews dynamical theories, and covers the kinetics of phase ordering, spinodal decomposition and nucleation in depth. The phase transition dynamics of real systems are discussed, treating interdisciplinary problems in a unified manner. Topics include supercritical fluid dynamics, stress-diffusion coupling in polymers and mesoscopic dynamics at structural phase transitions in solids. Theoretical and experimental approaches to shear flow problems in fluids are reviewed. Phase Transition Dynamics provides a comprehensive account, building on the statistical mechanics of phase transitions covered in many introductory textbooks. It will be essential reading for researchers and advanced graduate students in physics, chemistry, metallurgy and polymer science.
Electronic excitation is a means to change materials properties. This book analyses the important features of the changes induced by electronic excitation, identifies what is critical, and provides a basis from which materials modification can be developed successfully. Electronic excitation by lasers or electron beams can change the properties of materials. In the last few years, there has been a convergence of basic science, new laser and electron-beam tools, and new needs from microelectronics, photonics and nanotechnology. This book extends and synthesises the science, addressing ideas like energy localisation and charge localisation, with detailed comparisons of experiment and theory. It also identifies the ways this understanding links to technological needs, like selective removal of material, controlled changes, altering the balance between process steps, and possibilities of quantum control. This book will be of particular interest to research workers in physics, chemistry, electronic engineering and materials science.
The conventional, single-reference, coupled-cluster method is very effective for electronic states dominated by a single determinant, such as most molecular ground states near their equilibrium geometry. Such states are predominantly closed-shell singlet states, and CC calculations on them produce pure singlet wave functions. But even these states become dominated by more than one determinant when one or more bonds are stretched close to breaking, so that single-reference CC based on RHF orbitals is then not usually appropriate for the calculation of entire potential-energy surfaces. While such problems can be partially treated by using UHF reference functions, which usually separate correctly, the UHF approach makes use of symmetry breaking and is poor in the spin-recoupling region.
Most excited, ionized and electron-attached states are open-shell states, and CC calculations on them using UHF or ROHF orbitals do not usually result in pure-spin wave functions. Furthermore, such states often involve large contributions from more than one determinant and thus do not respond well to conventional single-reference treatments.
One solution to these problems is to resort to multireference methods, such as those described in Chapters 8 and 14, but such treatments are still quite difficult to apply at a high enough level. An effective alternative in many cases is provided by the equation-of-motion coupled-cluster (EOM-CC) method (Emrich 1981, Sekino and Bartlett 1984, Comeau and Bartlett 1993, Stanton and Bartlett 1993a). A closely related approach is the coupled-cluster linear response (CCLR) method (Monkhorst 1977, Dalgaard and Monkhorst 1983, Koch and Jørgensen 1990).
In the present chapter, we discuss the Sutherland model for particles without internal degrees of freedom such as spin. We call this model the single-component Sutherland model. For two particles, the eigenenergies and eigenstates of the Sutherland model (1.3) have been obtained explicitly in the previous chapter. The most striking feature of the Sutherland model is that one can derive not only the energy spectrum but also the dynamics for the many-particle case with an exact account of interaction effects. Thus, the Sutherland model provides an ideal framework to study a one-dimensional quantum liquid in detail.
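The Hamiltonian labeled (1.3) is not reproduced in this excerpt; for reference, the Sutherland model in its commonly used form reads as follows (the units, with ħ = 2m = 1, and the symbol λ for the coupling are assumed conventions and may differ from the text):

```latex
% Sutherland model: N particles on a ring of circumference L,
% interacting through the inverse square of the chord distance
% (units \hbar = 2m = 1; coupling constant written here as \lambda)
H = -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2}
    + \sum_{i<j} \frac{2\lambda(\lambda-1)\,(\pi/L)^2}{\sin^2\!\left[\pi(x_i-x_j)/L\right]}
```

Note that the interaction term vanishes for λ = 0 and λ = 1, where the model reduces to free bosons or free fermions; intermediate couplings underlie the free-anyon interpretation of the spectrum.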
In Section 2.1, we derive the eigenenergies and eigenstates. In Section 2.2, we present different but equivalent physical pictures for the energy spectrum. Namely, the energy spectrum is naturally regarded as that of interacting bosons or fermions. The same spectrum can also be interpreted as that of free particles obeying nontrivial quantum statistics, i.e., free anyons in one dimension. The exclusion statistics proposed by Haldane will be explained on this occasion. The spectrum and statistics of elementary excitations are derived in Section 2.3. In Section 2.4, we discuss thermodynamic properties, which can be rewritten as those of free anyons. In Section 2.5, we identify the eigenfunctions with Jack symmetric polynomials, and discuss their basic properties. In Section 2.6, we consider dynamical correlation functions such as Green's functions and the density correlation function. These quantities are derived with the use of Jack polynomials, and are naturally interpreted in terms of elementary excitations with fractional charge.
The Sutherland model is the simplest model to realize the Tomonaga–Luttinger liquid.
Bipolar device design can be considered in two parts. The first part deals with designing bipolar transistors in general, independent of their intended application. In this case, the goal is to reduce as much as possible, consistent with the state-of-the-art fabrication technology, all the internal resistance and capacitance components of the transistor. The second part deals with designing a bipolar transistor for a specific circuit application. In this case, the optimal device design point depends on the application. The design of a bipolar transistor in general is covered in this chapter, and the optimization of a transistor for a specific application is discussed in Chapter 8.
Design of the Emitter Region
It was shown in Section 6.2 that the emitter parameters affect only the base current, and have no effect on the collector current. In theory, a device designer can vary the emitter design to vary the base current. In practice, this is rarely done, for two reasons. First, for digital-circuit applications, as long as the current gain is not unusually low or the base current unusually high, the performance of a bipolar transistor is rather insensitive to its base current (Ning et al., 1981). For many analog-circuit applications, once the current gain is adequate, the reproducibility of the base current is more important than its magnitude. Therefore, there is really no particular reason to tune the base current of a bipolar device by tuning the emitter design, once a low and reproducible base current is obtained. Second, as can be seen in Appendix 2, the emitter is formed towards the end of the device fabrication process. Any change to the emitter process to tune the base current could affect the doping profile of the other device regions and hence could affect the other device parameters. As a result, once a bipolar technology is ready for manufacturing, its emitter fabrication process is usually fixed. All that a device designer can do to alter the device and circuit characteristics in this bipolar technology is to change the base and the collector designs, which often can be accomplished independently of the emitter process and hence has no effect on the base current.
The metal–oxide–semiconductor field-effect transistor (MOSFET) is the building block of VLSI circuits in microprocessors and dynamic memories. Because the current in a MOSFET is transported predominantly by carriers of one polarity only (e.g., electrons in an n-channel device), the MOSFET is usually referred to as a unipolar or majority-carrier device. Throughout this chapter, n-channel MOSFETs are used as an example to illustrate device operation and derive drain-current equations. The results can easily be extended to p-channel MOSFETs by exchanging the dopant types and reversing the voltage polarities.
The basic structure of a MOSFET is shown in Fig. 3.1. It is a four-terminal device with the terminals designated as gate (subscript g), source (subscript s), drain (subscript d), and substrate or body (subscript b). An n-channel MOSFET, or nMOSFET, consists of a p-type silicon substrate into which two n+ regions, the source and the drain, are formed (e.g., by ion implantation). The gate electrode is usually made of metal or heavily doped polysilicon and is separated from the substrate by a thin silicon dioxide film, the gate oxide. The gate oxide is usually formed by thermal oxidation of silicon. In VLSI circuits, a MOSFET is surrounded by a thick oxide called the field oxide to isolate it from the adjacent devices. The surface region under the gate oxide between the source and drain is called the channel region and is critical for current conduction in a MOSFET.

The basic operation of a MOSFET device can be easily understood from the MOS capacitor discussed in Section 2.3. When there is no voltage applied to the gate or when the gate voltage is zero, the p-type silicon surface is either in accumulation or in depletion and there is no current flow between the source and drain. The MOSFET device acts like two back-to-back p–n junction diodes with only low-level leakage currents present. When a sufficiently large positive voltage is applied to the gate, the silicon surface is inverted to n-type, which forms a conducting channel between the n+ source and drain. If there is a voltage difference between them, an electron current will flow from the source to the drain. A MOSFET device therefore operates like a switch ideally suited for digital circuits. Since the gate electrode is electrically insulated from the substrate, there is effectively no dc gate current, and the channel is capacitively coupled to the gate via the electric field in the oxide (hence the name field-effect transistor).
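The switch-like behavior described above can be sketched with the textbook long-channel square-law model. This is a minimal sketch, not the drain-current derivation of this chapter; the threshold voltage and transconductance values below are illustrative assumptions, not taken from the text.

```python
def drain_current(vgs, vds, vt=0.5, k=1e-3):
    """Long-channel square-law model of nMOSFET drain current (A).

    vgs, vds: gate-source and drain-source voltages (V)
    vt: threshold voltage (V); k: transconductance factor mu_n*Cox*W/L (A/V^2)
    Parameter values are illustrative, not from the text.
    """
    if vgs <= vt:
        return 0.0                  # cutoff: surface not inverted; leakage neglected
    vdsat = vgs - vt                # drain saturation (pinch-off) voltage
    if vds < vdsat:
        # linear (triode) region: channel acts as a gate-controlled resistor
        return k * ((vgs - vt) * vds - vds**2 / 2)
    # saturation region: current independent of vds in this idealized model
    return 0.5 * k * vdsat**2

# Switch behavior: off below threshold, conducting above it
print(drain_current(0.3, 1.0))   # below threshold -> 0.0
print(drain_current(1.0, 1.0))   # on and saturated
```

The two-region structure mirrors the qualitative picture in the text: below threshold the device looks like back-to-back diodes (zero current here), while above threshold the inverted channel conducts.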
Since the invention of the bipolar transistor in 1947, there has been an unprecedented growth of the semiconductor industry, with an enormous impact on the way people work and live. In the last thirty years or so, by far the strongest growth area of the semiconductor industry has been in silicon very-large-scale-integration (VLSI) technology. The sustained growth in VLSI technology is fueled by the continued shrinking of transistors to ever smaller dimensions. The benefits of miniaturization – higher packing densities, higher circuit speeds, and lower power dissipation – have been key in the evolutionary progress leading to today’s computers, wireless units, and communication systems that offer superior performance, dramatically reduced cost per function, and much reduced physical size, in comparison with their predecessors. On the economic side, the integrated-circuit (IC) business has grown worldwide in sales from $1 billion in 1970 to $20 billion in 1984 and has reached $250 billion in 2007. The electronics industry is now among the largest industries in terms of output as well as employment in many nations. The importance of microelectronics in economic, social, and even political development throughout the world will no doubt continue to ascend. The large worldwide investment in VLSI technology constitutes a formidable driving force that will all but guarantee the continued progress in IC integration density and speed, for as long as physical principles will allow.
This chapter examines the key device design issues in a modern CMOS VLSI technology. It begins with an extensive review of the concept of MOSFET scaling. Two important CMOS device design parameters, threshold voltage and channel length, are then discussed in detail.
MOSFET Scaling
CMOS technology evolution in the past thirty years has followed the path of device scaling for achieving density, speed, and power improvements. MOSFET scaling was propelled by the rapid advancement of lithographic techniques for delineating fine lines of 1 μm width and below. In Section 3.2.1, we discussed that reducing the source-to-drain spacing, i.e., the channel length of a MOSFET, led to short-channel effects. For digital applications, the most undesirable short-channel effect is a reduction in the gate threshold voltage at which the device turns on, especially at high drain voltages. Full realization of the benefits of the new high-resolution lithographic techniques therefore requires the development of new device designs, technologies, and structures which can be optimized to keep short-channel effects under control at very small dimensions. Another necessary technological advancement for device scaling is in ion implantation, which not only allows the formation of very shallow source and drain regions but also is capable of accurately introducing a sharply profiled, low concentration of doping atoms for optimum channel profile design.
The previous chapters have considered the operation of CMOS and bipolar devices mainly in the context of logic circuits. This chapter addresses another basic functional block in modern VLSI chips – memory. A predominant majority of the VLSI devices produced today are in various forms of random-access memory (RAM).
Viewed from the operation standpoint, a RAM functional unit is usually organized into an array of memory cells (or bits) together with its supporting circuits for selecting, writing, and reading the memory cells. In an array, the bits on the same row are selected by a word signal. A schematic block diagram of a RAM unit is shown in Fig. 9.1. The array consists of W words with B bits each, for a total memory capacity of W × B bits. A random bit in the array can be accessed through signals applied to its wordline and bitline.
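The W-words-by-B-bits organization above can be illustrated with a toy model; the class and method names here are hypothetical, purely for illustration, and the model ignores the decoding and sensing circuits.

```python
class RamArray:
    """Toy model of a W-word x B-bit RAM array (names are illustrative).

    A wordline selects one row (word); a bitline selects one bit
    within the selected row.
    """
    def __init__(self, words, bits):
        self.words, self.bits = words, bits
        # W x B array of 1-bit cells, total capacity W * B bits
        self.cells = [[0] * bits for _ in range(words)]

    def write(self, word, bit, value):
        self.cells[word][bit] = value & 1   # each cell stores a single bit

    def read(self, word, bit):
        return self.cells[word][bit]

ram = RamArray(words=4, bits=8)   # capacity: 4 x 8 = 32 bits
ram.write(word=2, bit=5, value=1)
print(ram.read(2, 5))   # -> 1
print(ram.read(0, 0))   # -> 0 (unwritten cells default to 0 in this toy model)
```

Any cell is reached by a (word, bit) pair, corresponding to the wordline/bitline addressing described in the text.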
Depending on the retention of information in the cells of a memory array, random-access memories can be classified into three categories: static random-access memory (SRAM), dynamic random-access memory (DRAM), and nonvolatile random-access memory (NVRAM). NVRAM is often referred to as nonvolatile memory for short. SRAMs have fast access times. They retain data as long as they are connected to the power supply. Practically every VLSI chip contains a certain amount of SRAM which is usually built using basically the same devices as in the logic circuits. DRAMs have relatively slow access times. They require periodic refresh in order to prevent loss of data. On a per-bit basis, DRAMs have a much lower cost than SRAMs because a DRAM cell is typically only about one tenth the size of an SRAM cell. For systems that require much more SRAM than can be contained on the logic chip, stand-alone SRAM chips are often used to meet the need. However, in order to reduce system cost and size, designers often use stand-alone DRAM chips instead of stand-alone SRAM chips. In that case, some form of memory-hierarchy architecture is usually employed to minimize the impact of the relatively slow DRAM on the system performance. Both SRAMs and DRAMs are volatile in that data are lost once the power supply to the chip is disconnected.
The book focuses primarily on many-body (or better, many-electron) methods for electron correlation. These include Rayleigh–Schrödinger perturbation theory (RSPT), particularly in its diagrammatic representation (referred to as many-body perturbation theory, or MBPT), and coupled-cluster (CC) theory; their relationship to configuration interaction (CI) is included. Further extensions address properties other than the energy, and also excited states and multireference CC and MBPT methods.
The many-body algebraic and diagrammatic methods used in electronic structure theory have their origin in quantum field theory and in the study of nuclear matter and nuclear structure. The second-quantization formalism was first introduced in a treatment of quantized fields by Dirac (1927) and was extended to fermion systems by Jordan and Klein (1927) and by Jordan and Wigner (1928). This formalism is particularly useful in field theory, in scattering problems and in the study of infinite systems because it easily handles problems involving infinite, indefinite or variable numbers of particles. The diagrammatic approach was introduced into field theory by Feynman (1949a,b) and applied to many-body systems by Hugenholtz (1957) and by Goldstone (1957). Many-body perturbation theory and its linked-diagram formalism were first introduced by Brueckner and Levinson (1955) and by Brueckner (1955), and were formalized by Goldstone (1957). Other important contributions to the methodology, first in field theory and then in the theory of nuclear structure, are due to Dyson (1949a,b), Wick (1950), Hubbard (1957, 1958a,b) and Frantz and Mills (1960). Applications to the electronic structure of atoms and molecules began with the work of Kelly (1963, 1964a,b, 1968), and molecular applications using finite analytical basis sets appeared in the work of Bartlett and Silver (1974a, b).
This chapter addresses several more subtle but nevertheless important aspects of coupled-cluster MBPT theory.
Spin summations and computational considerations
The formalism described in the previous sections was presented in terms of spinorbitals, without regard to integration over spin coordinates. Even in the case of unrestricted Hartree–Fock (UHF) reference functions, in which the spatial orbitals for α and β spin are different, integration over spin is absolutely necessary to eliminate many integrals and to allow the introduction of constraints over the summation indices, achieving a computational effort of no more than three times that of comparable RHF calculations. Furthermore, all amplitudes in which the number of α and β spinorbitals is different for the hole and particle indices vanish, preserving the MS, but not the S, quantum number. In the restricted closed-shell Hartree–Fock (RHF) case, spin integration is used to combine contributions from α and β spinorbitals, deriving expressions in terms of spatial orbitals only and thus reducing the range of all indices by about a factor 2 (see Section 7.3). Restricted open-shell Hartree–Fock (ROHF) calculations are usually performed as UHF, despite double occupancy, because the most effective algorithms are still of the spin-integrated, spatial-orbital, form. The double occupancy cannot be exploited further without special effort.
The incorporation of spin integration can be done algebraically or, in some cases, diagrammatically. As an example of the diagrammatic treatment of spin summation in coupled-cluster calculations we shall consider the case of the CCD equation with an RHF reference function. The diagrammatic representation of this equation in a spinorbital basis was given in Fig. 9.2 in terms of antisymmetrized Goldstone diagrams.
Although most microelectronics products are now made of CMOS transistors, bipolar transistors remain important in microelectronics because of their superior characteristics for analog circuit applications. There are two types of bipolar devices: the n–p–n type which has a p-type base and n-type emitter and collector, and the p–n–p type which has an n-type base and p-type emitter and collector. Commonly used bipolar devices are either lateral transistors, where the active device regions are arranged horizontally adjacent to one another and the active currents flow laterally, or vertical transistors, where the active device regions are arranged vertically one on top of another and the active currents flow vertically. Practically all bipolar transistors used in modern VLSI applications are of the vertical n–p–n type.
For simplicity, only vertical n–p–n bipolar transistors will be considered explicitly here. The equations derived for vertical transistors apply to lateral transistors as well, provided that the device parameter values are adjusted accordingly. Also, the equations for an n–p–n transistor can be extended to a p–n–p transistor simply by reversing the voltage and dopant polarities and using the appropriate device parameter values.
Interactions in many-body systems bring about collective phenomena such as superconductivity and magnetism. In many cases, simple mean-field theory provides a basic understanding of these phenomena. In one-dimensional fermion systems, however, neither mean-field theory nor perturbation theory works if one starts from non-interacting fermions. This is because interaction effects in one dimension are much stronger than those in higher dimensions. Intuitively speaking, two particles on a single-lane track cannot avoid colliding, in contrast with the situation in two and three dimensions. Thus interaction effects appear in a drastic way in one dimension.
Another aspect of one dimension, which more than compensates for the failure of perturbation and mean-field theories, is that a complete account of interaction effects is possible under certain conditions. The class of systems satisfying such conditions is referred to as exactly solvable. Soon after the establishment of quantum mechanics, Bethe exactly solved the Heisenberg spin model in one dimension [28]. The basic idea of the solution is now called the Bethe ansatz. Since then, theoretical physics in one dimension has developed into a magnificent edifice, incorporating sophisticated mathematical techniques. In many cases, the eigenfunctions derived by the Bethe ansatz consist of plane waves that are defined stepwise for each spatial configuration of particles. Since the coefficients of the plane waves depend on the configuration, the properties of the wave function cannot be made explicit without detailed knowledge of these coefficients. We mention some of the recent monographs on the Bethe ansatz and its extensions [54, 118, 179]. A comprehensive account of exactly solvable models has recently been given by Sutherland [178].