Conventional electronics has ignored the spin of the electron
Spin electronics exploits the angular momentum and magnetic moment of the electron to add new functionality to electronic devices. A first generation of devices comprised magnetoresistive sensors and magnetic memory. The sensors have numerous applications, especially in digital recording. Magnetic recording uses semihard magnetic thin films as the recording media. Write heads are miniature thin-film electromagnets, while read heads are usually spin valves exhibiting giant magnetoresistance (GMR) or tunnelling magnetoresistance (TMR). Magnetic random-access memory (MRAM) is based on switchable spin-valve cells, similar in structure to the read head. New generations of spin electronic devices are under development in which the angular momentum of a spin-polarized current is used to exert spin transfer torque, or the flow of spin-polarized electrons is controlled via a third electrode in a transistor-like structure.
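The magnetoresistance of a spin valve can be sketched with the standard two-current (Mott) model, in which spin-up and spin-down electrons conduct in parallel channels. The resistance values below are illustrative assumptions, not data for any real device described in the text.

```python
# Two-current (Mott) model of a GMR spin valve: each spin channel sees a
# small resistance r when its spin is parallel to a layer's magnetization
# and a larger resistance R when antiparallel; the two channels conduct
# in parallel. Resistance values here are illustrative assumptions.

def parallel(a, b):
    """Resistance of two channels conducting in parallel."""
    return a * b / (a + b)

def gmr_ratio(r, R):
    """GMR ratio (R_AP - R_P)/R_P for a two-layer spin valve."""
    # Parallel magnetizations: one channel sees r + r, the other R + R.
    R_P = parallel(2 * r, 2 * R)
    # Antiparallel magnetizations: both channels see r + R.
    R_AP = parallel(r + R, r + R)
    return (R_AP - R_P) / R_P

# Assumed spin asymmetry R = 5r gives an 80% GMR ratio.
print(f"GMR ratio: {gmr_ratio(1.0, 5.0):.1%}")
```

With no spin asymmetry (R = r) the model correctly yields zero magnetoresistance, reproducing the closed form (R - r)^2 / 4rR.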
A hugely successful electronics technology has been built around the manipulation of electronic charge in semiconductor microcircuits. The operations needed for computation are conducted using complementary metal-oxide semiconductor (CMOS) logic. The semiconductors can be doped n- or p-type so that the charge carriers may be electrons or holes. Binary data are stored as charge on the gates of field-effect transistors (FETs). An important feature of CMOS logic (Fig. 14.1) is that it only consumes power when the transistors are switching between the on and off states. It is a scalable technology, which has been repeatedly miniaturized since its introduction in 1982.
Permanent magnets deliver magnetic flux into a region of space known as the air gap, with no expenditure of energy. Hard ferrite and rare-earth magnets are ideally suited to generate flux densities comparable in magnitude to their spontaneous polarization Js. Applications are classified by the nature of the flux distribution, which may be static or time-dependent, as well as spatially uniform or nonuniform. Applications are also discussed in terms of the physical effect exploited (force, torque, induced emf, Zeeman splitting, magnetoresistance). The most important uses of permanent magnets are in electric motors, generators and actuators. Their power ranges from microwatts for wristwatch motors to hundreds of kilowatts for industrial drives. Annual production for some consumer applications runs to tens or even hundreds of millions of motors.
The flux density Bg in the air gap (equal to µ0Hg) is the natural field to consider in permanent magnet devices because flux is conserved in a magnetic circuit, and magnetic forces on electric charges and magnetic moments all depend on B.
A static uniform field may be used to generate torque or align pre-existing magnetic moments since Γ = m × Bg. Charged particles moving freely through the uniform field with velocity v are deflected by the Lorentz force f = qv × Bg, which causes them to move in a helix, turning with the cyclotron frequency (3.26), fc = qB/2πm.
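Equation (3.26) is easy to evaluate numerically; a short sketch, using the CODATA values for the electron charge and mass:

```python
import math

# Cyclotron frequency f_c = qB / (2*pi*m), Eq. (3.26): a charge moving
# freely through a uniform field B turns in a helix at this frequency.

Q_E = 1.602176634e-19   # elementary charge (C), exact by definition
M_E = 9.1093837015e-31  # electron mass (kg), CODATA recommended value

def cyclotron_frequency(B, q=Q_E, m=M_E):
    """Cyclotron frequency (Hz) of a charge q, mass m, in flux density B (T)."""
    return q * B / (2 * math.pi * m)

# An electron in a 1 T field turns at roughly 28 GHz.
print(f"f_c = {cyclotron_frequency(1.0) / 1e9:.1f} GHz")
```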
Drop a pebble in a pond and the results are quite predictable: circular waves flow from the point of impact. Hit a point on a crystalline solid, however, and the expanding waves are highly non-spherical: the elasticity of a crystal is anisotropic. This book provides a fresh look at the vibrational properties of crystalline solids, elucidated by new imaging techniques. From the megahertz vibrations of ultrasound to the near-terahertz vibrations associated with heat, the underlying elastic anisotropy of the crystal asserts itself. Phonons are elementary vibrations that affect many properties of solids - thermal, electrical and magnetic. This text covers the basic theory and experimental observations of phonon propagation in solids. Phonon imaging techniques provide physical insights into such topics as phonon focusing, lattice dynamics and ultrasound propagation. Scattering of phonons from interfaces, superlattices, defects and electrons is treated in detail. The book includes many striking and original illustrations.
This book is devoted to the physics of electronic fluctuations (noise) in solids and covers almost all important examples of this phenomenon. It is comprehensive, intelligible and well illustrated. Emphasis is given to the main concepts, supported by many fundamental experiments which have become classics, to physical mechanisms of fluctuations, and to conclusions on the nature and magnitude of noise. The book also includes a comprehensive and complete review of flicker (1/f) noise in the literature. It will be useful to graduate students and researchers in physics and electronic engineering, and especially those carrying out research in the fields of noise phenomena and highly sensitive electronic devices, for example radiation detectors, electronic devices for low-noise amplifiers and quantum magnetometers (SQUIDS).
This book describes the properties and device applications of hydrogenated amorphous silicon. It covers the growth, the atomic and electronic structure, the properties of dopants and defects, the optical and electronic properties which result from the disordered structure and finally the applications of this technologically very important material. There is also an important chapter on contacts, interfaces and multilayers. The main emphasis of the book is on the new physical phenomena which result from the disorder of the atomic structure. The book will be of major importance to those who are researching or studying the properties and applications of a-Si:H. It will have a wider interest for anyone working in semiconductor physics and electronic engineering in general.
The aim of this chapter is to introduce the concept of the Feynman path integral. As well as developing the general construction scheme, particular emphasis is placed on establishing the interconnections between the quantum mechanical path integral, classical Hamiltonian mechanics, and classical statistical mechanics. The practice of path integration is discussed in the context of several pedagogical applications. As well as the canonical examples of a quantum particle in a single and a double potential well, we discuss the generalization of the path integral scheme to tunneling of extended objects (quantum fields), dissipative and thermally assisted quantum tunneling, and the quantum mechanical spin.
In this chapter we temporarily leave the arena of many-body physics and second quantization and, at least superficially, return to single-particle quantum mechanics. By establishing the path integral approach for ordinary quantum mechanics, we will set the stage for the introduction of field integral methods for many-body theories explored in the next chapter. We will see that the path integral not only represents a gateway to higher-dimensional functional integral methods but, when viewed from an appropriate perspective, already represents a field theoretical approach in its own right. Exploiting this connection, various concepts of field theory, namely stationary phase analysis, the Euclidean formulation of field theory, instanton techniques, and the role of topology in field theory, are introduced in this chapter.
The path integral: general formalism
Broadly speaking, there are two approaches to the formulation of quantum mechanics: the “operator approach” based on the canonical quantization of physical observables and the associated operator algebra, and the Feynman path integral.
In this chapter, the concept of path integration is generalized to integration over quantum fields. Specifically we will develop an approach to quantum field theory that takes as its starting point an integration over all configurations of a given field, weighted by an appropriate action. To emphasize the importance of the formulation that, methodologically, represents the backbone of the remainder of the text, we have pruned the discussion to focus only on the essential elements. This being so, conceptual aspects stand in the foreground and the discussion of applications is postponed to the following chapters.
In this chapter, the concept of path integration is extended from quantum mechanics to quantum field theory. Our starting point is a situation very much analogous to that outlined at the beginning of the previous chapter. Just as there are two different approaches to quantum mechanics, quantum field theory can also be formulated in two different ways: the formalism of canonically quantized field operators, and functional integration. As to the former, although much of the technology needed to efficiently implement this framework – essentially Feynman diagrams – originated in high-energy physics, it was with the development of condensed matter physics through the 1950s, 1960s, and 1970s that this approach was driven to unprecedented sophistication. The reason is that, almost as a rule, problems in condensed matter investigated at that time necessitated perturbative summations to infinite order in the non-trivial content of the theory (typically interactions).
The purpose of this chapter is to introduce and apply the method of second quantization, a technique that underpins the formulation of quantum many-particle theories. The first part of the chapter focuses on methodology and notation, while the remainder is devoted to the development of applications designed to engender familiarity with and fluency in the technique. Specifically, we will investigate the physics of the interacting electron gas, charge density wave propagation in one-dimensional quantum wires, and spin waves in a quantum Heisenberg (anti)ferromagnet. Indeed, many of these examples and their descendants will reappear as applications in our discussion of methods of quantum field theory in subsequent chapters.
In the previous chapter we encountered two field theories that could conveniently be represented in the language of “second quantization,” i.e. a formulation based on the algebra of certain ladder operators âk. There were two remarkable facts about this formulation: firstly, second quantization provides a compact way of representing the many-body space of excitations; secondly, the properties of the ladder operators were encoded in a simple set of commutation relations (cf. Eq. (1.33)) rather than in some explicit Hilbert space representation.
Apart from a certain aesthetic appeal, these observations would not be of much relevance if it were not for the fact that the formulation can be generalized to a comprehensive and highly efficient formulation of many-body quantum mechanics in general.
The method of the renormalization group (RG) provides theorists with powerful and efficient tools to explore interacting theories, often in regimes where perturbation theory fails. Motivating our discussion with two introductory examples drawn from a classical and a quantum theory, we first become acquainted with the RG as a concept whereby nonlinear theories can be analyzed beyond the level of plain perturbation theory. With this background, we then proceed to discuss the idea and practice of RG methods in more rigorous and general terms, introducing the notion of scaling, dimensional analysis, and the connection to the general theory of phase transitions and critical phenomena. Finally, to conclude this chapter, we visit a number of concrete implementations of the RG program introduced and exemplified on a number of canonical applications.
In Chapter 5, φ4-theory was introduced as an archetypal model of interacting continuum theories. Motivated by the existence of nonlinearities inherent in the model, a full perturbative scheme was developed, namely Wick contractions and their diagrammatic implementation. However, from a critical perspective, one may say that such perturbative approaches present only a limited understanding. Firstly, the validity of the φ4-action as a useful model theory was left unjustified, i.e. the φ4-continuum description was obtained as a gradient expansion of, in that case, a d-dimensional Ising model. But what controls the validity of the low-order expansion? Indeed, the same question could be applied to any one of the many continuum approximations we have performed throughout the first chapters of the text.
In this chapter, we introduce the analytical machinery to investigate the properties of many-body systems perturbatively. Specifically, employing the "φ4-theory" as an example, we learn how to describe systems that are not too far from a known reference state by perturbative means. Diagrammatic methods are introduced as a tool to efficiently implement perturbation theory at large orders. The new concepts are then applied to the analysis of various properties of the weakly interacting electron gas.
In previous chapters we have emphasized repeatedly that the majority of many-particle problems cannot be solved in closed form. Therefore, in general, one is compelled to think about approximation strategies. One promising ansatz leans on the fact that, when approaching the low-temperature physics of a many-particle system, we often have some idea, however vague, of its preferred states and/or its low-energy excitations. One may then set out to explore the system by using these prospective ground state configurations as a working platform. For example, one might expand the Hamiltonian in the vicinity of the reference state and check that, indeed, the residual “perturbations” acting in the low-energy sector of the Hilbert space are weak and can be dealt with by some kind of approximate expansion. Consider, for example, the quantum Heisenberg magnet. In dimensions higher than one, an exact solution of this system is out of the question. However, we know (or, more conservatively, “expect”) that, at zero temperature, the spins will be frozen into configurations aligned along some (domain-wise) constant magnetisation axes.
In the past few decades, the field of quantum condensed matter physics has seen rapid and, at times, almost revolutionary development. Undoubtedly, the success of the field owes much to ground-breaking advances in experiment: already the controlled fabrication of phase coherent electron devices on the nanoscale is commonplace (if not yet routine), while the realization of ultra–cold atomic gases presents a new arena in which to explore strong interaction and condensation phenomena in Fermi and Bose systems. These, along with many other examples, have opened entirely new perspectives on the quantum physics of many-particle systems. Yet, important as it is, experimental progress alone does not, perhaps, fully explain the appeal of modern condensed matter physics. Indeed, in concert with these experimental developments, there has been a “quiet revolution” in condensed matter theory, which has seen phenomena in seemingly quite different systems united by common physical mechanisms. This relentless “unification” of condensed matter theory, which has drawn increasingly on the language of low-energy quantum field theory, betrays the astonishing degree of universality, not fully appreciated in the early literature.
On a truly microscopic level, all forms of quantum matter can be formulated as a many-body Hamiltonian encoding the fundamental interactions of the constituent particles. However, in contrast with many other areas of physics, in practically all cases of interest in condensed matter the structure of this operator conveys as little information about the properties of the system as, say, the knowledge of the basic chemical constituents tells us about the behavior of a living organism! Rather, in the condensed matter environment, it has been a long-standing tenet that the degrees of freedom relevant to the low-energy properties of a system are very often not the microscopic ones.
The chapter begins with a brief survey of concepts and techniques of experimental condensed matter physics. It will be shown how correlation functions provide a bridge between concrete experimental data and the theoretical formalism developed in previous chapters. Specifically we discuss – an example of outstanding practical importance – how the response of many-body systems to various types of electromagnetic perturbation can be described in terms of correlation functions and how these functions can be computed by field theoretical means.
In the previous chapters we have introduced important elements of the theory of quantum many-body systems. Perhaps most importantly, we have learned how to map the basic microscopic representations of many-body systems onto effective low-energy models. However, to actually test the power of these theories, we need to understand how they can be related to experiment. This will be the principal subject of the present chapter.
Modern condensed matter physics benefits from a plethora of sophisticated and highly refined techniques of experimental analysis including the following: electric and thermal transport; neutron, electron, Raman, and X-ray scattering; calorimetric measurements; induction experiments; and many more (for a short glossary of prominent experimental techniques, see Section 7.1.2 below). While a comprehensive discussion of modern experimental condensed matter would reach well beyond the scope of the present text, it is certainly profitable to attempt an identification of some structures common to most experimental work in many-body physics.
In this chapter we discuss low-energy theories with non-trivial forms of long-range order. We learn how to detect the presence of topologically non-trivial structures, and to understand their physical consequences. Topological terms (θ-terms, Wess–Zumino terms, and Chern–Simons terms) are introduced as contributions to the action, affecting the behavior of low-energy field theories through the topology of the underlying field configurations. Applications discussed in this chapter include persistent currents, quantum spin chains, and the quantum Hall effects.
In the preceding chapters we encountered a wide range of long-range orders, or, to put it more technically, different types of mean-fields. Reflecting the feature of (average) translational invariance, the large majority of these mean-fields turned out to be spatially homogeneous. However, there have also been a number of exceptions: under certain conditions, mean-field configurations with long-range spatial textures are likely to form. One mechanism driving the formation of inhomogeneities is the perpetual competition of energy and entropy: being in conflict with the (average) translational invariance of extended systems, a spatially non-uniform mean-field is energetically costly. On the other hand, this very "disadvantage" implies a state of lowered degree of order, or higher entropy. (Remember, for example, instanton formation in the quantum double well: although energetically unfavorable, once it has been created it can occur at any "time," which brings about a huge entropic factor.) It then depends on the spatio-temporal extension of the system whether or not an entropic proliferation of mean-field textures is favorable.
In the previous chapter we have seen that departures from equilibrium generate a plethora of new phenomena in statistical physics. While our discussion so far has been limited to classical phenomena, in this chapter we want to ask how quantum mechanics interferes with the conditions of nonequilibrium. Experience shows that, in some cases, nonequilibrium quantum systems can be understood in terms of reasonably straightforward quantization of their classical limits. In others, an interplay of quantum coherence and nonequilibrium driving leads to entirely new phenomena – such as lasing, a paradigm for a steady state nonequilibrium quantum system. At any rate, we have every reason to anticipate that quantum nonequilibrium physics will be as diverse and colourful as its classical counterpart.
Even minor departures from equilibrium call for a theoretical description entirely different from the 'Matsubara formalism' developed in earlier chapters of this book. The applicability of the latter is rigidly tied to the existence of a quantum grand canonical density operator, the hallmark of a many-particle equilibrium system. But how, then, can a functional theory of nonequilibrium quantum systems be developed? An obvious idea would be to once more go through the different elements of classical nonequilibrium theory developed in the previous chapter, and subject every one of them to an individual quantization scheme. Luckily, there are more efficient ways to achieve our goal: some decades ago, a many-particle formalism suitable to describe nonequilibrium systems under the most general conditions was introduced by Keldysh.
Previously, we have seen how the field integral method can be deployed to formulate perturbative approximation schemes to explore weakly interacting theories. In this chapter, we will learn how elements of the perturbative approach can be formulated more efficiently by staying firmly within the framework of the field integral. More importantly, in doing so, we will see how the field integral provides a method for identifying and exploring non-trivial reference ground states – “mean-fields.” A fusion of perturbative and mean-field methods will provide us with analytical machinery powerful enough to address a spectrum of rich applications ranging from superfluidity and superconductivity to metallic magnetism and the interacting electron gas.
As mentioned in Chapter 5, the perturbative machinery is but one part of a larger framework. In fact, the diagrammatic series already contained hints indicating that a straightforward expansion of a theory in the interaction operator might not always be an optimal strategy: all previous examples that contained a large parameter “N” – and usually it is only problems of this type that are amenable to controlled analytical analysis – shared the property that the diagrammatic analysis bore structures of “higher complexity.” (For example, series of polarization operators appeared rather than series of the elementary Green functions, etc.) This phenomenon suggests that large-N problems should qualify for a more efficient and, indeed, a more physical formulation.
This chapter provides an introduction to nonequilibrium statistical (field) theory. In the following, we introduce a spectrum of concepts central to the description of many-particle systems out of statistical equilibrium. We will see that key elements of the theory - Langevin theory and the formalism of the Fokker–Planck equation - can be developed from the coherent framework of a path integral formalism. Applications discussed below include metastability, macroscopic quantum tunneling, driven diffusive lattices, and directed percolation. While the emphasis in this chapter is on classical nonequilibrium phenomena, the quantum theory of nonequilibrium systems will be discussed in the next chapter.
The world is a place full of nonequilibrium phenomena: an avalanche sliding down a sandpile, a traffic jam forming at rush hour, the dynamics of electrons inside a strongly voltage-biased electronic device, the turmoil of markets following an economic instability, the diffusion-limited reaction of chemical constituents and many others are examples of situations where a large number of correlated constituents evolves under “out-of-equilibrium” conditions. Statistical nonequilibrium physics is concerned with the identification and understanding of universal structures that characterise these phenomena. Notwithstanding the existence of powerful principles of unification, it is clear that a theory addressing the dazzling multitude of nonequilibrium phenomena must be multi-faceted. And indeed, nonequilibrium statistical physics is a strongly diversified field comprising many independent sub-disciplines. But this is not the only remarkable feature of this branch of physics.