During the last four decades, technical progress in the field of light-emitting diodes (LEDs) has been breathtaking. State-of-the-art LEDs are small, rugged, reliable, bright, and efficient. The success story of LEDs is still in full progress: great technological advances continue to be made and, as a result, LEDs play an increasingly important role in a myriad of applications. In contrast to many other light sources, LEDs have the potential to convert electricity to light with near-unit efficiency.
LEDs were discovered by accident in 1907, and the first paper on LEDs was published in the same year. LEDs were then forgotten, only to be rediscovered in the 1920s and again in the 1950s. In the 1960s, three research groups, one working at General Electric Corporation, one at MIT Lincoln Laboratories, and one at IBM Corporation, pursued the demonstration of the semiconductor laser. The first viable LEDs were by-products of this pursuit. LEDs have since become devices in their own right and today are possibly the most versatile light sources available to humankind.
The first edition of this book was published in 2003. The second edition is expanded by discussions of additional technical areas related to LEDs, including optical reflectors, the assessment of LED junction temperature, packaging, UV emitters, and LEDs used for general lighting applications.
The resonant-cavity light-emitting diode (RCLED) is a light-emitting diode that has a light-emitting region inside an optical cavity. The optical cavity has a thickness of typically one-half or one times the wavelength of the light emitted by the LED, i.e. a fraction of a micrometer for devices emitting in the visible or in the infrared. The resonance wavelength of the cavity coincides, i.e. is in resonance, with the emission wavelength of the light-emitting active region of the LED; thus the cavity is a resonant cavity. The spontaneous emission properties of a light-emitting region located inside the resonant cavity are enhanced by the resonant-cavity effect. The RCLED is the first practical device making use of the spontaneous emission enhancement occurring in microcavities.
The placement of an active region inside a resonant cavity results in multiple improvements of the device characteristics. Firstly, the light intensity emitted from the RCLED along the axis of the cavity, i.e. normal to the semiconductor surface, is higher than in conventional LEDs, typically by a factor of 2–10. Secondly, the emission spectrum of the RCLED has a higher spectral purity than that of conventional LEDs: in conventional LEDs, the spectral emission linewidth is determined by the thermal energy kT, whereas in RCLEDs the emission linewidth is determined by the quality factor (Q factor) of the optical cavity.
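As a rough numerical illustration of this contrast, one may compare the standard textbook estimates for the two linewidths: roughly 1.8 kT for a conventional LED, and λ/Q for a cavity-limited emitter. The sketch below uses hypothetical device values (650 nm emission, Q = 100); these numbers are chosen for illustration and are not taken from this text.

```python
K_B = 8.617e-5   # Boltzmann constant in eV/K
H_C = 1.2398     # h*c in eV*um, so photon energy E = H_C / wavelength_um

def led_linewidth_ev(temperature_k):
    """Approximate FWHM of a conventional LED spectrum, ~1.8 kT."""
    return 1.8 * K_B * temperature_k

def rcled_linewidth_ev(wavelength_um, q_factor):
    """Cavity-limited linewidth: delta_lambda = lambda / Q,
    expressed here as an energy width delta_E = E / Q."""
    photon_energy = H_C / wavelength_um
    return photon_energy / q_factor

# Hypothetical example: 650 nm emitter at room temperature, cavity Q = 100
conventional = led_linewidth_ev(300.0)
resonant = rcled_linewidth_ev(0.65, 100.0)
print(f"conventional LED: {conventional * 1000:.1f} meV")  # thermally broadened
print(f"RCLED (Q = 100):  {resonant * 1000:.1f} meV")      # cavity limited
```

With these (assumed) numbers the thermal linewidth comes out near 47 meV, while the cavity-limited linewidth is near 19 meV, consistent with the qualitative claim of higher spectral purity.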
Radiative transitions, i.e. transitions of electrons from an initial quantum state to a final state accompanied by the emission of a light quantum, are among the most fundamental processes in optoelectronic devices. There are two distinct ways in which the emission of a photon can occur, namely spontaneous and stimulated emission. These two processes were first postulated by Einstein (1917).
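Einstein's treatment relates the two emission channels quantitatively. For a two-level system with upper-level population N₂ in a radiation field of spectral energy density ρ(ν), the standard rate expressions and the Einstein relation between the coefficients read (quoted here in conventional notation, not taken from this excerpt):

```latex
R_{\mathrm{spon}} = A\,N_2, \qquad
R_{\mathrm{stim}} = B\,N_2\,\rho(\nu), \qquad
\frac{A}{B} = \frac{8\pi h \nu^{3}}{c^{3}}
```

The last relation shows that spontaneous and stimulated emission are not independent processes: a medium that can be stimulated to emit necessarily also emits spontaneously.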
Stimulated emission is employed in semiconductor lasers and superluminescent LEDs. It was realized in the 1960s that the stimulated emission mode could be used in semiconductors to drastically change the radiative emission characteristics. The efforts to harness stimulated emission resulted in the first room-temperature operation of semiconductor lasers (Hayashi et al., 1970) and the first demonstration of a superluminescent LED (Hall et al., 1962).
Spontaneous emission implies that the recombination process occurs spontaneously, that is, without a means of influencing it. Indeed, spontaneous emission was long believed to be uncontrollable. However, research on microscopic optical resonators, whose spatial dimensions are of the order of the wavelength of light, demonstrated the possibility of controlling the spontaneous emission properties of a light-emitting medium. The controllable emission properties include the spontaneous emission rate, the spectral purity, and the emission pattern. These changes can be employed to make more efficient, faster, and brighter semiconductor devices.
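The degree of spontaneous-emission rate enhancement in such microcavities is conventionally quantified by the Purcell factor, a standard result quoted here without derivation. For an emitter on resonance with a cavity mode of quality factor Q and mode volume V, in a medium of refractive index n,

```latex
F_{\mathrm{P}} = \frac{3}{4\pi^{2}} \left( \frac{\lambda}{n} \right)^{3} \frac{Q}{V}
```

so that a high-Q, small-volume cavity enhances the spontaneous emission rate relative to free space, in line with the controllability described above.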
The method of the renormalization group (RG) provides theorists with powerful and efficient tools to explore interacting theories, often in regimes where perturbation theory fails. Motivating our discussion with two introductory examples drawn from a classical and a quantum theory, we will first become acquainted with the RG as a concept whereby nonlinear theories can be analyzed beyond the level of plain perturbation theory. With this background, we will then proceed to discuss the idea and practice of RG methods in more rigorous and general terms, introducing the notion of scaling, dimensional analysis, and the connection to the general theory of phase transitions and critical phenomena. Finally, to conclude the chapter, we will visit a number of concrete implementations of the RG program, exemplified through canonical applications.
In Chapter 5 φ4-theory was introduced as an archetypal model of interacting continuum theories. Motivated by the existence of nonlinearities inherent in the model, a full perturbative scheme was developed, namely Wick contractions and their diagrammatic implementation. However, from a critical perspective, one may say that such perturbative approaches provide only a limited understanding. Firstly, the validity of the φ4-action as a useful model theory was left unjustified: recall that the φ4-continuum description was obtained as a gradient expansion of, in that case, a d-dimensional Ising model. But what controls the validity of the low-order expansion? Indeed, the same question could be applied to any one of the many continuum approximations we have performed throughout the first chapters of the text.
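For orientation, the φ4-action referred to here has the conventional form (written in standard notation with generic coefficients r and g, rather than in the specific conventions of the chapter in question):

```latex
S[\phi] = \int \mathrm{d}^d x \left[ \frac{1}{2} (\partial \phi)^2 + \frac{r}{2}\, \phi^2 + g\, \phi^4 \right]
```

The question raised in the text is precisely what justifies truncating the gradient expansion of the underlying lattice model at these low-order terms.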
In this chapter, we introduce the analytical machinery to investigate the properties of many-body systems perturbatively. Specifically, employing the “φ4-theory” as an example, we learn how to describe systems that are not too far from a known reference state by perturbative means. Diagrammatic methods are introduced as a tool to efficiently implement perturbation theory at large orders. The new concepts are then applied to the analysis of various properties of the weakly interacting electron gas.
In previous chapters we have emphasized repeatedly that the majority of many-particle problems cannot be solved in closed form. Therefore, in general, one is compelled to think about approximation strategies. One promising ansatz leans on the fact that, when approaching the low-temperature physics of a many-particle system, we often have some idea, however vague, of its preferred states and/or its low-energy excitations. One may then set out to explore the system by using these prospective ground state configurations as a working platform. For example, one might expand the Hamiltonian in the vicinity of the reference state and check that, indeed, the residual “perturbations” acting in the low-energy sector of the Hilbert space are weak and can be dealt with by some kind of approximate expansion. Consider, for example, the quantum Heisenberg magnet. In dimensions higher than one, an exact solution of this system is out of the question. However, we know (or, more conservatively, “expect”) that, at zero temperature, the spins will be frozen into configurations aligned along some (domain-wise) constant magnetisation axes.
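The quantum Heisenberg magnet invoked above is, in conventional notation, described by the Hamiltonian (a standard form, not a specific equation of this text)

```latex
\hat{H} = -J \sum_{\langle i j \rangle} \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j, \qquad J > 0,
```

where the sum runs over nearest-neighbor pairs and J > 0 favors ferromagnetic alignment. The strategy sketched in the text amounts to taking the fully aligned configuration as the reference state and treating deviations from it as weak residual perturbations.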
In the past few decades, the field of quantum condensed matter physics has seen rapid and, at times, almost revolutionary development. Undoubtedly, the success of the field owes much to ground-breaking advances in experiment: already the controlled fabrication of phase coherent electron devices on the nanoscale is commonplace (if not yet routine), while the realization of ultra-cold atomic gases presents a new arena in which to explore strong interaction and condensation phenomena in Fermi and Bose systems. These, along with many other examples, have opened entirely new perspectives on the quantum physics of many-particle systems. Yet, important as it is, experimental progress alone does not, perhaps, fully explain the appeal of modern condensed matter physics. Indeed, in concert with these experimental developments, there has been a “quiet revolution” in condensed matter theory, which has seen phenomena in seemingly quite different systems united by common physical mechanisms. This relentless “unification” of condensed matter theory, which has drawn increasingly on the language of low-energy quantum field theory, betrays the astonishing degree of universality, not fully appreciated in the early literature.
On a truly microscopic level, all forms of quantum matter can be formulated as a many-body Hamiltonian encoding the fundamental interactions of the constituent particles.
Previously, we have seen how the field integral method can be deployed to formulate perturbative approximation schemes to explore weakly interacting theories. In this chapter, we will learn how elements of the perturbative approach can be formulated more efficiently by staying firmly within the framework of the field integral. More importantly, in doing so, we will see how the field integral provides a method for identifying and exploring non-trivial reference ground states – “mean-fields.” A fusion of perturbative and mean-field methods will provide us with analytical machinery powerful enough to address a spectrum of rich applications ranging from superfluidity and superconductivity to metallic magnetism and the interacting electron gas.
As mentioned in Chapter 5, the perturbative machinery is but one part of a larger framework. In fact, the diagrammatic series already contained hints indicating that a straightforward expansion of a theory in the interaction operator might not always be an optimal strategy: all previous examples that contained a large parameter “N” – and usually it is only problems of this type that are amenable to controlled analytical analysis – shared the property that the diagrammatic analysis bore structures of “higher complexity.” (For example, series of polarization operators appeared rather than series of the elementary Green functions, etc.) This phenomenon suggests that large-N problems should qualify for a more efficient and, indeed, a more physical formulation.
While these remarks appear to be largely methodological, the rationale behind searching for an improved theoretical formulation is, in fact, much deeper.
The purpose of this chapter is to introduce and apply the method of second quantization, a technique that underpins the formulation of quantum many-particle theories. The first part of the chapter focusses on methodology and notation, while the remainder is devoted to the development of applications designed to engender familiarity with and fluency in the technique. Specifically, we will investigate the physics of the interacting electron gas, charge density wave propagation in one-dimensional quantum wires, and spin waves in a quantum Heisenberg (anti)ferromagnet. Indeed, many of these examples and their descendants will reappear as applications in our discussion of methods of quantum field theory in subsequent chapters.
In the previous chapter we encountered two field theories that could conveniently be represented in the language of “second quantization,” i.e. a formulation based on the algebra of certain ladder operators âk. There were two remarkable facts about this formulation: firstly, second quantization provides a compact way of representing the many-body quasi-particle space of excitations; secondly, the properties of the ladder operators were encoded in a simple set of commutation relations (cf. Eq. (1.33)) rather than in some explicit Hilbert space representation.
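The point that the operator algebra, rather than any explicit Hilbert space representation, carries the physics can be made concrete numerically. The sketch below (an illustration independent of the text's Eq. (1.33)) builds the bosonic annihilation operator in a Fock basis truncated to a finite number of states and checks the canonical commutation relation [a, a†] = 1; the truncation spoils the relation only in the artificial highest-occupation state.

```python
import numpy as np

def ladder_ops(dim):
    """Annihilation operator a in a Fock basis truncated to `dim` states:
    a|n> = sqrt(n)|n-1>, i.e. sqrt(1), ..., sqrt(dim-1) on the superdiagonal."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    return a, a.T.conj()   # (a, a_dagger)

dim = 8
a, adag = ladder_ops(dim)
comm = a @ adag - adag @ a

# [a, a_dagger] equals the identity on every state except the cutoff
# state |dim-1>, where the truncation produces a spurious entry -(dim-1).
print(np.round(np.diag(comm), 3))
```

The commutator is represented here by dim-by-dim matrices, yet the physical content, the relation [a, a†] = 1, is representation independent, which is exactly the economy of the second-quantized formulation.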
Apart from a certain aesthetic appeal, these observations would not be of much relevance if it were not for the fact that the formulation can be generalized to a comprehensive and highly efficient formulation of many-body quantum mechanics in general. In fact, second quantization can be considered the first major cornerstone on which the theoretical framework of quantum field theory was built.