Dynamical mean-field theory is designed to treat systems with local effective interactions that are strong compared with the independent-particle terms that lead to delocalized band-like states. Interactions are taken into account by a many-body calculation for an auxiliary system, a site embedded in a dynamical mean field, that is chosen to best represent the coupling to the rest of the crystal. The methods are constructed to be exact in three limits: interacting electrons on isolated sites, a lattice with no interactions, and the limit of infinite dimensions d → ∞ where mean-field theory is exact. This chapter is devoted to the general formulation, the single-site approximation where the calculation of the self-energy is mapped onto a self-consistent quantum impurity problem, and instructive examples for the Hubbard model on a d → ∞ Bethe lattice. Further developments and applications are the topics of Chs. 17–21.
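For orientation, here is a minimal sketch of the single-site self-consistency cycle for the Hubbard model on the d → ∞ Bethe lattice, written in a generic notation (the chapter develops the formulation and conventions in detail). The auxiliary impurity problem, defined by a dynamical Weiss field \mathcal{G}_0, yields an impurity Green's function G_{\mathrm{imp}} and a self-energy

\[
\Sigma(i\omega_n) = \mathcal{G}_0^{-1}(i\omega_n) - G_{\mathrm{imp}}^{-1}(i\omega_n),
\]

while for the Bethe lattice with hopping scaled as t/\sqrt{z} (coordination z \to \infty) the condition closing the loop takes the simple form

\[
\mathcal{G}_0^{-1}(i\omega_n) = i\omega_n + \mu - t^2\, G(i\omega_n), \qquad G(i\omega_n) = G_{\mathrm{imp}}(i\omega_n).
\]

The cycle is iterated until the local lattice Green's function and the impurity Green's function coincide.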
One of the most rewarding features of condensed matter theory is the ability to address difficult problems from different points of view. The preceding Chs. 9–15 present an approach based on perturbation expansions in the Coulomb interaction. In particular, the GW approximation for the self-energy has proven to be extremely successful in describing electronic spectra of many materials, as described in Ch. 13. The methods can be applied to the ordered states of materials with d and f states, for example, ferromagnetic Ni and anti-ferromagnetic NiO, as described in Secs. 13.4 and 20.7. However, the GW and related approximations have difficulties in treating cases with degenerate or nearly degenerate states and low-energy excitations. Present-day methods do not describe phenomena like the fluctuations of local moments in a ferromagnetic material above the Curie temperature or the insulating character of NiO in the paramagnetic phase; more generally, they have difficulty describing strong correlation.
The topic of this chapter and Chs. 17–21 is dynamical mean-field theory, which is also a Green's function method in which the key quantity is the self-energy. However, it is designed to treat strong interactions for electrons in localized atomic-like states, such as the d and f states in transition metals, lanthanide and actinide elements and compounds.
In no wave function of the type (1) [product of single determinants for each spin] is there a statistical correlation between the positions of the electrons with antiparallel spin. The purpose of the aforementioned generalization of (1) is to allow for such correlations. This will lead to an improvement of the wave function and, therefore, to a lowering of the energy value.
E. Wigner, Phys. Rev. 46, 1002 (1934)
Summary
Although the exact many-body wavefunction cannot be written down in general, we do know of some of its properties. For example, there are differences between the wavefunctions of insulators and metals and the cusp condition gives the behavior as any two charges approach each other. In this chapter we also discuss approximate wavefunctions, ways to judge their accuracy and how to include electronic correlation. Examples of many-body wavefunctions are the Slater–Jastrow (pair product) wavefunction and its generalization to pairing and backflow wavefunctions.
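As a concrete illustration of the pair-product form mentioned above (a standard expression, written here in one common convention), the Slater–Jastrow wavefunction is

\[
\Psi_{\mathrm{SJ}}(\mathbf{R}) \;=\; D^{\uparrow}(\mathbf{R})\, D^{\downarrow}(\mathbf{R})\,
\exp\Big(-\sum_{i<j} u(r_{ij})\Big),
\]

where D^{\uparrow} and D^{\downarrow} are Slater determinants of single-particle orbitals and u is the pair (Jastrow) function. In this convention the electron–electron cusp condition, in Hartree atomic units, requires u'(0) = -1/2 for antiparallel spins and u'(0) = -1/4 for parallel spins, so that the divergence of the Coulomb repulsion is cancelled and the local energy stays finite as two electrons approach each other.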
In other places in this book, we argue that it is not necessary to look at the many-body wavefunctions explicitly because they are unwieldy; the one- and two-body correlation functions discussed in Ch. 5 are sufficient to determine the energy and give information on the excitation spectra. However, these correlation functions do not always contain all information of interest. In principle, the ground-state energy of a many-electron system is a functional of the density, but the very derivation of the theorem invokes the many-body wavefunction, as expressed explicitly in Eq. (4.16). The effects of antisymmetry are manifest in the correlation functions, but antisymmetry is most simply viewed as a property of the wavefunction; electronic correlation is fundamentally a result of properties of the many-body wavefunction.
Studying many-body wavefunctions provides a very useful, different point of view of many-body physics from the approaches based on correlation functions. Many of the most important discoveries in physics have come about by understanding the nature of wavefunctions, such as the Laughlin wavefunction for the fractional quantum Hall effect, the BCS wavefunction for superconductors, p-wave pairing in superfluid ³He, and the Heitler–London approach for molecular binding. The role of Berry's phases has brought out the importance of the phase of the wavefunction in determining properties of quantum systems. This has led to new understanding of the classification of insulators, metals, superconductors, vortices, and other states of condensed matter.
… as suggested by Fermi, the time-independent Schrödinger equation … can be interpreted as describing the behavior of a system of particles each of which performs a random walk, i.e., diffuses isotropically and at the same time is subject to multiplication, which is determined by the value of the point function V.
N. Metropolis and S. Ulam, 1949
Summary
In the projector quantum Monte Carlo method, one uses a function of the hamiltonian to sample a distribution proportional to the exact ground-state wavefunction, and thereby computes exact matrix elements of it. An importance sampling transformation makes the algorithm much more efficient. In this chapter we introduce and develop the diffusion Monte Carlo method, which involves drifting, branching random walks. For any excited state, including any system with more than two electrons, one encounters the sign problem, limiting the direct application of these algorithms for most fermion systems. Instead, by using approximate fixed-node or fixed-phase boundary conditions, one can achieve efficiency similar to variational Monte Carlo. We also discuss the application of the projector method in a basis of Slater determinants.
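To make the importance-sampling statement concrete (a standard form in Hartree atomic units; the chapter derives it and discusses the algorithms in detail), multiplying the imaginary-time wavefunction \Phi(\mathbf{R},t) by a trial function \Psi_T gives the mixed distribution f(\mathbf{R},t) = \Psi_T(\mathbf{R})\Phi(\mathbf{R},t), which evolves according to

\[
\frac{\partial f}{\partial t} \;=\; \tfrac{1}{2}\nabla^2 f \;-\; \nabla\!\cdot\!\big[f\,\mathbf{v}_D(\mathbf{R})\big] \;-\; \big[E_L(\mathbf{R}) - E_T\big]\, f,
\]

with drift velocity \mathbf{v}_D = \nabla \ln |\Psi_T| and local energy E_L = \Psi_T^{-1}\hat{H}\Psi_T. The three terms correspond to the diffusion, drift, and branching of the random walkers, and the branching rate is controlled by the smooth local energy rather than by the bare potential.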
In this chapter, we discuss a different quantum Monte Carlo method, projector Monte Carlo (PMC). This general method was first suggested by Fermi [1049]; see the quote at the start of this chapter by two of the inventors of the Monte Carlo method. An implementation of PMC was tried out in the early days of computing [1050]. Advances in methodology, in particular importance sampling, resulted in a significant large-scale application: the exact calculation of the ground-state properties of 256 hard-sphere bosons by Kalos, Levesque, and Verlet [1051] in 1974. Calculations for electronic systems and the fixed-node approximation were introduced by Anderson [1052, 1053]. One of the most important projector MC algorithms, the diffusion Monte Carlo algorithm with importance sampling for fermions, was used to compute the correlation energy of the homogeneous electron gas by Ceperley and Alder [109] in 1980; the resulting HEG correlation energy was crucial in the development of density functional calculations.
Types and properties of projectors
In this method, a many-body projector Ĝ, with matrix elements G(R, R′) = ⟨R|Ĝ|R′⟩, is repeatedly applied to filter out the exact many-body ground state from an initial state; the operation of the projector is carried out with a random walk, hence the name of this class of methods.
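Schematically (a generic sketch; specific projectors are discussed in the following sections), choosing the exponential projector \hat{G} = e^{-\tau(\hat{H}-E_T)} and applying it repeatedly to a trial state with \langle\Psi_0|\Psi_T\rangle \neq 0 gives

\[
\hat{G}^{\,n}|\Psi_T\rangle \;=\; \sum_i e^{-n\tau(E_i - E_T)}\, |\Psi_i\rangle\langle\Psi_i|\Psi_T\rangle
\;\longrightarrow\; e^{-n\tau(E_0 - E_T)}\, |\Psi_0\rangle\langle\Psi_0|\Psi_T\rangle
\quad (n \to \infty),
\]

so every excited-state component is suppressed exponentially in the number of iterations. A linear projector such as \hat{G} = 1 - \tau(\hat{H}-E_T) filters in the same way, provided the spectrum is bounded and \tau is small enough that the ground state dominates.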
An idea that is developed and put into action is more important than an idea that exists only as an idea.
Buddha
Summary
In this chapter we sketch how GW calculations are performed in practice, touching upon approximations and numerical methods. Typical calculations are done in three steps: one has to determine the dynamically screened Coulomb interaction W, build the GW self-energy, and finally solve the quasi-particle or Dyson equation. All steps have their own difficulties. Choices have to be made, and the calculations are challenging for many materials. Computational approaches are constantly evolving, but many of the aspects contained in the chapter are expected to remain topical for quite some time.
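In compact notation (a schematic summary of the three steps; conventions and refinements are discussed in the chapter), the screened interaction is built from the RPA dielectric function, the self-energy is the product of G and W, and the quasi-particle energies are often obtained perturbatively from a Kohn–Sham starting point:

\[
W = \varepsilon^{-1} v_c, \qquad \varepsilon = 1 - v_c \chi_0, \qquad
\Sigma(1,2) = i\, G(1,2)\, W(1^+,2), \qquad
\varepsilon_i^{\mathrm{QP}} \approx \varepsilon_i^{\mathrm{KS}} + \langle \psi_i |\, \Sigma(\varepsilon_i^{\mathrm{QP}}) - v_{xc}\, | \psi_i \rangle .
\]

Each step involves its own numerical choices, as the chapter discusses.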
GW calculations have become part of the standard toolbox in computational condensed-matter physics. Many details on the foundations and on putting the method into practice can be found in overviews and reviews, such as [287, 334, 347, 408].
What does it mean to do a GWA calculation in practice? The formula for the GWA self-energy is as simple as its name, but GW calculations have a long history with continuous improvements. Modern GWA calculations continue a line of pioneering attempts to include correlations beyond Hartree–Fock using the concept of screening. Already in 1958 [409], correlation energies for the homogeneous electron gas were obtained from the study of the polarization of the gas due to an individual electron, and from the action of this polarization back on the electron, through the self-energy. These calculations, including several approximations, were limited to states close to the Fermi level. A GWA-like approach [410, 411] was applied to the electron gas in 1959, although these calculations did not cover the range of densities rₛ ~ 2–5 that is typical of simple metals. Hedin's work [43] is fundamental in that it presented the GWA as the first term of a series in terms of the screened Coulomb interaction, and it contained an extensive description of the homogeneous electron gas at the GWA level. Many more studies on the HEG followed, including detailed investigation of the spectral functions [383, 384, 412], the importance of self-consistency [351, 352, 392], the electron gas in lower dimensions [369] (see also Ch. 11), and vertex corrections beyond the GWA (e.g., [413–415]), as discussed in Ch. 15.
The many-body problem consists of two parts: the first is the non-interacting system in a materials-specific external potential; the second is the Coulomb interaction that makes the problem so hard to solve. The most straightforward idea is to use perturbation theory, with the Coulomb interaction as the perturbation. This is conceptually simple, but it turns out to be difficult in practice: the Coulomb interaction is often not small compared with typical energy differences, it is long-ranged, and in the thermodynamic limit there is an infinite number of particles, contributing an infinite number of mutual interaction processes. The present chapter outlines how one can deal with this problem. It contains an overview of facts that one can also find in many standard textbooks on the many-body problem, but that are useful to keep in mind in order to look at later chapters from a sound and well-established perspective.
The many-body problem is a tough one, and it has many facets. Sorting it out is like putting together a huge puzzle. The eight introductory chapters of this book provide pieces of the puzzle, and ideas on what one might do about it. In the present chapter we choose to go in one of the possible directions, in order to arrive at something tangible. The chapter gives the general framework and the main ideas; specific approximations are the topic of Chs. 10–15.
The idea is to start from an independent-particle problem and add the Coulomb interaction as a perturbation. This is not easy: first, the interaction is responsible for a rich variety of phenomena that are completely absent otherwise, such as the finite lifetime of quasi-particles, or additional structures in spectra due to the fact that a quasi-particle excitation may transfer its energy to other elementary excitations, for example, plasmons. Second, because of the two-body Coulomb interaction, the problem scales badly with the number of electrons, and straightforward perturbation theory for the many-body hamiltonian with the Coulomb interaction as perturbation rapidly becomes intractable or even useless, especially in large systems.
To get started, Sec. 9.1 recalls why things are not so easy. The following sections try to solve one problem after the other, starting from Sec. 9.2 where the Green's function is reformulated in a way that is appropriate for a perturbation expansion.
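To anticipate the structure that emerges from that reformulation (a schematic statement; the following sections give the derivation), the one-body Green's function satisfies a Dyson equation,

\[
G = G_0 + G_0\, \Sigma\, G, \qquad \text{i.e.,} \qquad G^{-1} = G_0^{-1} - \Sigma,
\]

so that all interaction effects not already contained in the reference G_0 are collected in the self-energy \Sigma, and the perturbation expansion is organized for \Sigma rather than for G itself.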
It is shown from first principles that, in spite of the large interatomic forces, liquid 4He should exhibit a transition analogous to the transition in an ideal Bose–Einstein gas. The exact partition function is written as an integral over trajectories, using the space-time approach to quantum mechanics.
R.P. Feynman, 1953
Summary
In this chapter we discuss imaginary-time path integrals and the path-integral Monte Carlo method for the calculation of properties of quantum systems at non-zero temperature. We discuss how Fermi and Bose statistics enter, and how to generalize the fixed-node procedure to non-zero temperature. We then discuss an auxiliary-field method for the Hubbard model. The path-integral method can be used to perform ground-state calculations, allowing calculations of properties with less bias than the projector Monte Carlo method. We also discuss the problem of estimating real-time response functions using information from imaginary-time correlation functions.
In previous chapters we described two QMC methods, namely variational QMC and projector (diffusion) QMC. Both of these methods are zero temperature, or, more properly, are formulated for single states. In this chapter we discuss the path-integral Monte Carlo (PIMC) method, which is explicitly formulated at non-zero temperature. Directly including temperature is important because many, if not most, measurements and practical applications involve significant thermal effects. One might think that to do calculations at a non-zero temperature we would have to explicitly sum over excited states. Such a summation would be difficult to accomplish once the temperature is above the energy gap, because there are so many possible many-body excitations. In addition, the properties for each excitation are more difficult to calculate than those for the ground state. As we will see, path-integral methods do not require an explicit sum over excitations. As an added bonus, they provide an interesting and enlightening window through which to view quantum systems. However, the sign problem, introduced in the previous chapter, is still present for fermion systems. The fixed-node approximation is used again.
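Concretely (a standard identity, written in a generic notation; the chapter develops the approximations and the sampling), the partition function at inverse temperature \beta = 1/k_B T is factorized into M imaginary-time slices of length \tau = \beta/M,

\[
Z = \mathrm{Tr}\, e^{-\beta \hat{H}}
= \int d\mathbf{R}_1 \cdots d\mathbf{R}_M \;
\langle \mathbf{R}_1 | e^{-\tau \hat{H}} | \mathbf{R}_2 \rangle
\langle \mathbf{R}_2 | e^{-\tau \hat{H}} | \mathbf{R}_3 \rangle \cdots
\langle \mathbf{R}_M | e^{-\tau \hat{H}} | \mathbf{R}_1 \rangle ,
\]

so that only the high-temperature density matrix \langle \mathbf{R} | e^{-\tau\hat{H}} | \mathbf{R}' \rangle needs to be approximated, and the integral over closed paths \{\mathbf{R}_1,\dots,\mathbf{R}_M\} is sampled by Monte Carlo. Quantum statistics enter through permutations of the particle labels at the closure of the paths, which is where the fermion sign problem appears.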
An advantage of PIMC is the absence of a trial wavefunction. As a result, quantum expectation values, including ones not involving the energy, can be computed directly. For the expert, the lack of an importance function may seem a disadvantage; without it one cannot push the simulation in a preferred direction.
Condensed matter is constituted by a huge number of electrons and nuclei interacting with Coulomb potentials. The topic of this chapter is a way to deal with the full, coupled problem by separating it into a part that is tractable and the rest that is approximated. This naturally leads to the appearance of dynamical fields, and to the concept of “quasi-particles” that have the same quantum numbers as non-interacting electrons. The quasi-particles obey equations where the potential and bare interactions in the hamiltonian are replaced by dynamical self-energies and screened interactions that can describe many of the effects of correlation. In this chapter we discuss the intuitive concepts behind this approach in general terms to motivate the more rigorous formulations and approximations used in the Green's function methods of the following chapter and in Part III.
The first equations of this book, Eqs. (1.1) and (1.2), express the fundamental theory of matter in terms of electrons and nuclei that interact with Coulomb potentials. For systems with only a few electrons, a solution of the problem can be obtained by exact diagonalization, using configuration interaction methods. For many-electron systems one has to resort to other methods. For example, QMC stochastic simulations (Chs. 22–25) are among the most accurate methods known to calculate certain expectation values, such as equilibrium thermodynamic properties, total energies, the density, and various static correlation functions like those in Ch. 5. Other properties such as excitation spectra are more difficult to access with QMC. Straightforward perturbation theory is, in general, not appropriate since interactions among the electrons are of the same order of magnitude as the independent-particle terms. Hence, we must develop other approaches.
One strategy that gives access to equilibrium thermodynamic as well as dynamical excitation properties is to separate the interacting many-body problem into a part that is tractable by some means and the rest of the problem that is dealt with more approximately. The present chapter is devoted to this idea. It is a prelude to many-body Green's function methods; the aim is to provide a unified framework for the developments in Chs. 8–21, along with a qualitative description of the most relevant quantities.
Here we consider auxiliary systems that are generalizations of the Kohn–Sham construction. In Green's function methods the use of a self-energy from an auxiliary system can be considered as a search in a restricted domain. Dynamical mean-field theory is an example of an interacting auxiliary system.
There are various ways that functionals can be employed to generate auxiliary systems and useful approximations by limiting the range of the functions input to the functional. Each of the functionals of the density, Green's function, or self-energy is defined over a specified domain D, e.g., Φ[G] and F[Σ] are defined for all functions that have the required analytic properties for a Green's function or self-energy (see Sec. H.1). Here we consider limiting the domain to a subset of D denoted by D′. The first examples below derive expressions that provide insights concerning the link between non-locality and frequency dependence, and that lead to useful approximations in the spirit of optimized effective potentials. This is followed by an approach to find an interacting auxiliary system where the full many-body problem can be solved with no approximations, thus providing a self-energy that is exact within the restricted domain of the auxiliary system.
Auxiliary system to reproduce selected quantities
Let us first consider an auxiliary system that reproduces exactly some quantity of interest. This is similar to the Kohn–Sham approach, but it is more general and it leads to explicit expressions for the functionals in terms of Green's functions [1163].
Suppose we are interested in a quantity P that is part of the information carried by the Green's function G, symbolically expressed as P = p{G}. An example is the density, where the “part” to be taken is the diagonal of the one-particle G: p{G} = n(r) = −iG(r, r, t′ − t = 0⁺) or −G(r, r, τ = 0⁻). Consider an auxiliary system that has the same bare Green's function G₀ as the original one, but a different effective potential, or a self-energy Σ_P that leads to an auxiliary Green's function G_P.
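Schematically (our shorthand for the construction just described), the auxiliary system is fixed by requiring that the selected quantity be reproduced within the restricted class of self-energies,

\[
p\{G_P\} = p\{G\}, \qquad G_P = G_0 + G_0\, \Sigma_P\, G_P .
\]

For example, if P is the density and \Sigma_P is restricted to be a static, local potential, this requirement recovers the Kohn–Sham construction; allowing richer forms of \Sigma_P generalizes it.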
In this chapter we examine the two-particle correlation function. Many of its properties are experimentally accessible, such as the macroscopic dielectric function and optical spectra, or the dynamic structure factor and loss function. Formally, it is determined by the Bethe–Salpeter equation and a first useful approximation is the random phase approximation. Comparison with experiment displays the strong and the weak points of the RPA. Its shortcomings motivate the search for corrections, commonly called vertex corrections. We show how better approximations to the BSE can be obtained and how the equation can be solved in practice, and we give some illustrations. A comparison with time-dependent density functional theory completes the chapter.
This chapter is dedicated to the calculation of the two-particle correlation function L, which is one of the central quantities in this book. It contains a wealth of experimentally accessible information: optical spectra, electron energy loss spectra, and the dynamic structure factor that is measured for example by inelastic X-ray scattering, the energy of doubly charged defects, and much more. Moreover, many-body perturbation theory can be formulated in terms of the dynamically screened Coulomb interaction W, which is derived directly from the electron–hole correlation function. The GW approximation is the most prominent example of an approximation based on W. Finally, the total energy can be expressed exactly in terms of L, because the Coulomb interaction vc is a two-body interaction.
The two-particle correlation function is formally given by the Dyson-like Bethe–Salpeter equation derived in Sec. 10.3. The present chapter starts in Sec. 14.1 with a brief reminder of the links between L and measurable quantities, since in this chapter we are interested in the spectra that are derived from L. Some important formal relations are recalled in Sec. 14.2. The simplest approximation for L is the random phase approximation introduced in Sec. 11.2. Section 14.3 is dedicated to the study of spectra calculated in the RPA: which physics is contained in this approximation? What are its strengths and limitations? We will see that optical spectra, for example, often require a description beyond the RPA.
The shortcomings of the RPA motivate the effort to go beyond. The two-particle correlation function contains information about the propagation of two particles, which can be electron–hole pairs, or two electrons or two holes, as explained in Sec. 5.8.
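In schematic operator form (indices, spin structure, and sign conventions suppressed), the Bethe–Salpeter equation reads

\[
L = L_0 + L_0\, K\, L ,
\]

where L_0 describes the free propagation of the particle pair, built from one-particle Green's functions. The RPA corresponds to keeping only the bare Coulomb term in the kernel, K \approx v_c, while the most widely used correction, derived from the GW self-energy, adds the screened electron–hole attraction, K \approx v_c - W; this is the kind of vertex correction discussed in the remainder of the chapter.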
In sp solids, correlation effects are not immediately striking, while for the df solids, they are glaringly present.
L. Hedin, J. Phys.: Condens. Matter 11, R489–R528 (1999)
Summary
The topics of this chapter are chosen to represent significant classes of materials and phenomena, often called “strongly correlated.” Lanthanides and actinides illustrate striking effects such as volume collapse, heavy fermions, and localized-to-delocalized transitions. Transition metals, such as Fe and Ni, are classic problems with both band-like and local moment behavior. Transition metal oxides exhibit a vast array of phenomena including metal–insulator transitions and high–low spin transitions. These are difficult problems involving competing interactions; the examples here and in Ch. 13 are not chosen to demonstrate successes, but rather to illustrate capabilities of different approaches.
This chapter is devoted to representative examples that bring out the range of phenomena observed in materials with strong local atomic-like interactions, and the types of properties that can be calculated with various methods. The foremost examples are elements and compounds of the series of transition elements with d and f states that are localized as depicted in Fig. 19.1. As emphasized in the previous chapter, the theory must deal with the complexities of many competing interactions: direct Coulomb, exchange, spin–orbit, crystal field splitting, hybridization with other orbitals, and other effects, all of which may be essential for understanding any particular material.
Many-body perturbation theory, especially the GW approximation and beyond, is a first-principles method that can be applied directly to ordered states at low temperature. Several examples of applications to transition metal systems are given in Ch. 13 and referred to here to provide a unified picture. Density functional theory and approximate static mean-field approximations also provide useful results and insights. However, many of the most striking phenomena can be understood only by taking into account strong correlation, including large renormalization of electronic states, satellites in excitation spectra, magnetic phase transitions, metal–insulator transitions, and other phenomena. Many phenomena can be understood only if temperature is taken into account. Dynamical mean-field theory brings this capability, but at a price because it is feasible to treat explicitly only a small subset of the degrees of freedom of the electrons. All the results presented here are obtained in the single-site approximation (single-site DMFA), except for the two-site cluster calculation for VO₂ in Sec. 20.6.