A functional F[f] is a mapping of an entire function f onto a value. In electronic structure, functionals play a central role, not only in density functional theory, but also in the formulation of most of the theoretical methods as functionals of the underlying variables, in particular the wavefunctions. This appendix deals with the general formulation and derivation of variational equations from the functionals.
Basic definitions and variational equations
The difference between a function f(x) and a functional F[f] is that a function is defined to be a mapping of a variable x to a result (a number) f(x), whereas a functional is a mapping of an entire function f to a resulting number F[f]. The functional F[f], denoted by square brackets, depends upon the function f over its range of definition f(x) in terms of its argument x. Here we describe some basic properties of functionals and their use in density functional theory; a more complete description can be found in [93], App. A. A review of functional derivatives or the “calculus of variations” can be found in [861] and [862].
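As a concrete illustration (not taken from the text), consider the functional F[f] = ∫ f(x)² dx, whose functional derivative is δF/δf(x) = 2f(x). The sketch below discretises f on a grid and checks the derivative numerically by perturbing f with a narrow bump; the grid, the choice of f, and all parameters are illustrative assumptions.

```python
import numpy as np

# Illustrative functional F[f] = integral of f(x)^2 dx, discretised on a grid.
# Its functional derivative is dF/df(x) = 2 f(x); we verify this by perturbing
# f with a narrow 'bump' of area eps localised at one grid point.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def F(f):
    """Map the entire function f (sampled on the grid) to a single number."""
    return np.sum(f**2) * dx

f = np.sin(np.pi * x)
i, eps = 500, 1e-8                 # perturb at x = 0.5

f_pert = f.copy()
f_pert[i] += eps / dx              # bump of area eps at x[i]
dF_numeric = (F(f_pert) - F(f)) / eps

assert abs(dF_numeric - 2.0 * f[i]) < 1e-4   # dF/df(x_i) = 2 f(x_i)
```

The same finite-perturbation test generalises to any discretised functional, which makes it a useful sanity check when deriving variational equations.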
As emphasized in the previous chapter, localized functions provide an intuitive description of electronic structure and bonding. This chapter is devoted to quantitative methods in which the wavefunction is expanded as a linear combination of localized atomic(-like) orbitals, such as gaussians, Slater-type orbitals, and numerical radial atomic-like orbitals. Such calculations can be very efficient; they can also be very accurate, as shown by the highly developed codes used in chemistry; and they provide the basis for creation of new methods, such as “order-N” (Ch. 23) and Green's function approaches. There is a cost, however: full self-consistent DFT calculations require specification of the basis, and the price paid for efficiency is loss of generality (in contrast to the “one basis fits all” philosophy of plane wave methods). Since details depend upon the basis, we can only describe general principles with limited examples.
It is instructive to note that there are important connections to localized muffin-tin orbitals (MTOs) (Ch. 16) and the linear muffin-tin orbital (LMTO) method (Ch. 17). This has led to an “ab initio tight-binding” method (Sec. 17.6) in which a minimal basis of orthogonal localized orbitals is derived from the Kohn–Sham hamiltonian.
Solution of Kohn–Sham equations in localized bases
The subject of this chapter is the class of general methods for electronic structure calculations in terms of the localized atom-centered orbitals defined in Sec. 14.1. The orbitals may literally be atomic orbitals, leading to the linear combination of atomic orbitals (LCAO) method, or they may be more general atomic-like orbitals.
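In a non-orthogonal localized basis the Kohn–Sham equations take the generalized eigenvalue form Hc = εSc, where S is the overlap matrix. A minimal numerical sketch, with all matrix elements invented for illustration, solves this by Löwdin orthogonalisation:

```python
import numpy as np

# Hypothetical two-orbital minimal basis (one s-like orbital per atom);
# the matrix elements are illustrative, not from a real calculation.
eps0 = -0.5           # on-site hamiltonian matrix element (assumed)
t    = -0.3           # inter-site ('hopping') matrix element (assumed)
s    =  0.2           # overlap of the two non-orthogonal orbitals (assumed)

H = np.array([[eps0, t], [t, eps0]])
S = np.array([[1.0,  s], [s,  1.0]])

# Loewdin orthogonalisation: transform with S^(-1/2) and diagonalise.
w, U = np.linalg.eigh(S)
S_ih = U @ np.diag(w**-0.5) @ U.T
eps, Cp = np.linalg.eigh(S_ih @ H @ S_ih)
C = S_ih @ Cp          # eigenvector coefficients in the original basis

# Bonding/antibonding energies (eps0 +/- t)/(1 +/- s):
assert np.isclose(eps[0], (eps0 + t) / (1 + s))
assert np.isclose(eps[1], (eps0 - t) / (1 - s))
```

The overlap in the denominators shows how non-orthogonality shifts the levels relative to a simple tight-binding picture, a point that recurs throughout methods based on localized orbitals.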
The Born–Oppenheimer approximation is an essential element without which the very notion of a potential energy surface would not exist (1). It also provides an example of how different coordinates can often be treated independently as a first approximation. This approach has far-reaching consequences, since it greatly simplifies the construction of partition functions in statistical mechanics. The approximation involves neglect of terms that couple together the electronic and nuclear degrees of freedom. The nuclear motion is then governed entirely by a single PES for each electronic state because the Schrödinger equation can be separated into independent nuclear and electronic parts. The simplest approach to the nuclear dynamics then leads to the normal mode approximation via successive coordinate transformations. These developments are treated in some detail, partly because the results are used extensively in subsequent chapters, and partly because they highlight important general principles, which can easily be extended to other situations. The consequences of breakdown in the Born–Oppenheimer approximation, and treatments of dynamics beyond the normal mode approach, are discussed in Section 2.4 and Section 2.5, respectively.
Independent degrees of freedom
The Schrödinger equation that we normally wish to solve in order to identify wavefunctions and energy levels is a partial differential equation if more than one coordinate is involved. The most common method of solution for such equations involves separation of variables (2).
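As a standard illustration (a textbook example, not drawn from this section), the particle in a two-dimensional box separates cleanly when a product form is assumed:

```latex
% Assume a product wavefunction for the two-dimensional problem:
\psi(x,y) = X(x)\,Y(y), \qquad
-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2}
 + \frac{\partial^2}{\partial y^2}\right)\psi = E\,\psi .
% Dividing through by X(x)Y(y) splits the equation into two
% one-dimensional problems, each depending on a single coordinate:
-\frac{\hbar^2}{2m}\,\frac{X''(x)}{X(x)} = E_x, \qquad
-\frac{\hbar^2}{2m}\,\frac{Y''(y)}{Y(y)} = E_y, \qquad
E = E_x + E_y .
```

Each term depends on only one coordinate, so both must equal a constant; the total energy is then the sum of the one-dimensional energies, which is the pattern exploited by the Born–Oppenheimer separation.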
In this chapter we consider the calculation of thermodynamic and dynamic properties using stationary points sampled from the PES. In this approach attention is focused on local minima and transition states of the PES, defined as stationary points with zero and one negative Hessian eigenvalues, respectively (Section 4.1), and theories are required for the local densities of states and minimum-to-minimum rate constants, as outlined in Section 7.1.1 and Section 7.2.1. There can be several reasons to employ such techniques. In particular, it may be possible to calculate approximate thermodynamic and dynamic properties much faster than for conventional Monte Carlo or molecular dynamics simulations. For example, the equilibrium between competing structures separated by large potential energy barriers may be difficult to treat even with techniques such as parallel tempering. This situation arises for Lennard-Jones clusters with nonicosahedral global potential energy minima (Section 6.7.1, Section 8.3), where finite size analogues of a solid–solid phase transition can be identified (1–3). Such transitions probably represent the most favourable case for application of the superposition approximation discussed in Section 7.1, because only a few low-lying minima make significant contributions to the partition functions at the temperatures of interest. Some results for these transitions are illustrated in Section 7.1.1.
Dynamical properties have been calculated using databases of minima and transition states using a master equation approach (Section 7.2.2) for a number of different systems (4–28).
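The master equation referred to above evolves the occupation probabilities of the minima as dp/dt = Wp, where the off-diagonal elements of W are minimum-to-minimum rate constants and the diagonal is fixed by probability conservation. A minimal sketch with an invented three-minimum rate matrix:

```python
import numpy as np

# Toy master equation dp/dt = W p for three 'minima' (rates invented).
# k[i, j] is the rate from minimum j to minimum i; the columns of W sum
# to zero so that total probability is conserved.
k = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 1.5],
              [0.2, 3.0, 0.0]])      # arbitrary units
W = k - np.diag(k.sum(axis=0))

p = np.array([1.0, 0.0, 0.0])        # start entirely in minimum 0
dt, nsteps = 1e-3, 20000
for _ in range(nsteps):
    p = p + dt * (W @ p)             # explicit Euler step

assert abs(p.sum() - 1.0) < 1e-8             # probability conserved
assert np.allclose(W @ p, 0.0, atol=1e-4)    # long-time steady state reached
```

In practice the equation is usually solved by diagonalising W or by matrix exponentiation rather than explicit time-stepping, but the structure of the calculation is the same.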
In Chapter 4 we discussed features of the potential energy surface such as minima, transition states, pathways and branch points. In this chapter some of the methods employed to locate and characterise these features will be outlined, while Chapter 7 will deal with techniques to calculate thermodynamic and dynamic properties that make explicit use of stationary point information. Other methods for sampling the PES and for calculating thermodynamic and dynamic quantities of interest will be summarised in the present chapter. Depending on the conditions of interest it may also be important to know what lies at the very bottom of the PES, for this is where the system will be found at low temperature if the dynamics permit equilibrium to be attained. Locating this global potential energy minimum is important in many different fields, and a diverse range of approaches have been suggested. Some of these are considered in Section 6.7, where we also discuss why certain potential energy surfaces make the global minimum relatively easy or difficult to locate.
Finding local minima
The field of geometry optimisation involves the location of stationary points on the PES, be they minima, transition states or higher index saddles with more than one negative Hessian eigenvalue (Section 4.1). Many algorithms have been suggested, even for the simplest problem of finding a local minimum, and there are even methods designed to locate branch points (Section 4.4) (1) and conical intersections (Section 2.4.2) (2–4).
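As a minimal illustration of local minimisation (plain steepest descent with a fixed step, deliberately simpler than the algorithms discussed below), consider the Lennard-Jones pair potential in reduced units, whose minimum lies at r = 2^(1/6) with well depth 1:

```python
# Steepest-descent sketch for V(r) = 4(r^-12 - r^-6) in reduced units.
# The step size and starting geometry are arbitrary choices.
def V(r):
    return 4.0 * (r**-12 - r**-6)

def dV(r):
    return 4.0 * (-12.0 * r**-13 + 6.0 * r**-7)

r, step = 1.5, 0.01
for _ in range(10000):
    g = dV(r)
    if abs(g) < 1e-10:     # converged: the gradient (force) vanishes
        break
    r -= step * g          # move downhill along the negative gradient

assert abs(r - 2.0**(1.0/6.0)) < 1e-6   # minimum at r = 2^(1/6)
assert abs(V(r) + 1.0) < 1e-9           # well depth of 1 in reduced units
```

Steepest descent converges only linearly; the quasi-Newton and conjugate-gradient schemes treated in this chapter reach the same stationary condition far more efficiently in many dimensions.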
The most interesting points of a potential energy surface are usually the stationary (or critical) points where the gradient vanishes. The geometrical definitions of local minima and transition states on a potential energy surface are given in Section 4.1. Here we also explain how the definition of a transition state is related to alternative interpretations based on free energy or dynamical considerations. Symmetry properties of steepest-descent pathways are then examined in Section 4.2, and a classification scheme for rearrangement mechanisms is presented in Section 4.3. The symmetry restrictions imposed by these results upon possible reaction pathways are illustrated with a number of examples. Branch points and quantum tunnelling are considered in Section 4.4 and Section 4.5, with emphasis on the symmetry properties of the pathways involved. The invariance of the potential energy surface, stationary points and pathways to coordinate transformations is discussed in Section 4.6. Finally, in Section 4.7, we investigate the origin of zero Hessian eigenvalues, which reveals a fundamental difference between the translational and rotational degrees of freedom (1).
Classification of stationary points
A stationary point on a PES is a nuclear configuration where all the forces vanish, i.e. every component of the gradient vector is zero, ∂V(X)/∂Xα = 0 for 1 ≤ α ≤ 3N. Here, and subsequently, we will drop the ‘e’ subscript from V, which served to remind us in Chapter 2 that the potential energy surface describes the variation of the electronic energy with the nuclear coordinates within the Born–Oppenheimer approximation.
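The classification of a stationary point by its Hessian index can be sketched numerically: build a finite-difference Hessian and count the negative eigenvalues (zero for a minimum, one for a transition state). The model surface below, with a saddle at the origin, is purely illustrative:

```python
import numpy as np

# Classify a stationary point of V(x, y) = x^2 - y^2 (saddle at the origin)
# by counting negative eigenvalues of a central finite-difference Hessian.
def V(X):
    x, y = X
    return x**2 - y**2

def hessian(V, X, h=1e-5):
    n = len(X)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            Xpp = X.copy(); Xpp[a] += h; Xpp[b] += h
            Xpm = X.copy(); Xpm[a] += h; Xpm[b] -= h
            Xmp = X.copy(); Xmp[a] -= h; Xmp[b] += h
            Xmm = X.copy(); Xmm[a] -= h; Xmm[b] -= h
            H[a, b] = (V(Xpp) - V(Xpm) - V(Xmp) + V(Xmm)) / (4 * h * h)
    return H

eigs = np.linalg.eigvalsh(hessian(V, np.zeros(2)))
index = int(np.sum(eigs < -1e-6))   # number of negative Hessian eigenvalues
assert index == 1                   # one negative eigenvalue: a transition state
```

For a molecular system the same count is made after projecting out the zero eigenvalues associated with overall translation and rotation, as discussed in Section 4.7.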
In this final chapter we consider applications of energy landscape theory to structural glasses and supercooled liquids. The ultimate objective of this approach is to understand and predict how the glass transition and associated phenomena depend upon details of the underlying potential energy surface. The large number of different models proposed to explain the glass transition must partly reflect different ways of expressing similar ideas, as well as the fundamental importance of the problem (1, 2). An overview of some of these theoretical methodologies is given in Section 10.1. Detailed comparisons between theory and experiment for properties such as dielectric loss (3, 4) or light-scattering spectra (5, 6) of a molecular glass former clearly present a significant challenge, and hence discrimination between different models is relatively hard.
The most popular systems for computer simulations of structural glass formers are described in Section 10.2. Surveys of local minima and transition states, including theoretical approaches based on the superposition framework, are treated in Section 10.3 and Section 10.4, and results for model potential energy surfaces are summarised in Section 10.5. The influence of the system size on the PES is analysed in Section 10.6, where properties of the configuration space are compared with the scaling laws expected for random networks.
This chapter discusses potential and free energy surfaces for molecules of biological interest, ranging from small peptides to proteins. Computer simulations and protein structure prediction are described in Section 9.1 and Section 9.2, respectively. Some theoretical aspects of protein folding are discussed in Section 9.3, and an introduction to the random energy model and the principle of minimal frustration is provided in Section 9.4. Two-dimensional free energy surfaces are considered in Section 9.5, with examples ranging from lattice and off-lattice bead representations to results obtained from biased sampling (Section 6.5.1) of all-atom models with explicit solvent.
Lattice models generally take a coarse-grained view of protein structure, as well as restricting the configuration space to a grid. The potential energy surface is also discretised: the catchment basins and transition states of a continuous PES are absent. These features are recovered in continuum bead models, where each amino acid is still represented by a single centre, but the configuration space is not restricted to a grid. One such model is discussed in detail in Section 9.6. Disconnectivity graphs for all-atom representations of two small molecules, IAN and NATMA, are analysed in Section 9.7 and Section 9.8, and both free energy and potential energy surfaces are considered for polyalanine peptides in Section 9.9.
While free energy surfaces have been calculated for all-atom protein representations, including explicit solvent, detailed analysis of potential energy surfaces has usually focused on smaller systems, particularly on the formation of elements of secondary structure.
For an N-atom system, including models of bulk material with N atoms in a periodically repeated supercell, the potential energy is a 3N-dimensional function. To refer to a potential energy hypersurface we must embed the function in a 3N + 1 dimensional space where the extra dimension corresponds to the ‘height’ of the surface.
There are two immediate problems with trying to use such a high-dimensional function in calculations. The first is that it is hard to visualise, and the second is that the number of interesting features, such as local minima, tends to grow exponentially with N. In this chapter we first consider how the number of stationary points grows with the size of the system (Section 5.1), and then discuss how the PES can be usefully represented in graphical terms. Simply plotting the energy as a function of one or two coordinates for a high-dimensional function is usually not very enlightening, and can be rather misleading. A very different approach to reducing the 3N + 1 dimensions down to just two uses the idea of monotonic sequences (Section 5.2), and was introduced by Berry and Kunz (1,2). Subsequently, the utility of disconnectivity graphs was recognised by Becker and Karplus (3), and a number of examples have been presented, as discussed in Section 5.3.
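The construction underlying a disconnectivity graph can be sketched directly: at each energy threshold, minima are grouped into 'superbasins' if they are mutually accessible via transition states below that threshold. The minima, transition states, and energies below are invented for illustration:

```python
# Superbasin analysis behind a disconnectivity graph (all data invented).
minima = {'A': -5.0, 'B': -4.5, 'C': -3.0}       # energies of local minima
ts = [('A', 'B', -2.0), ('B', 'C', -1.0)]        # (min1, min2, TS energy)

def superbasins(threshold):
    """Return the sets of minima mutually accessible below 'threshold'."""
    groups = {m: {m} for m, e in minima.items() if e < threshold}
    for a, b, e in ts:
        if e < threshold and a in groups and b in groups:
            merged = groups[a] | groups[b]
            for m in merged:          # relabel every member of the new group
                groups[m] = merged
    return {frozenset(g) for g in groups.values()}

assert superbasins(-2.5) == {frozenset({'A'}), frozenset({'B'}), frozenset({'C'})}
assert superbasins(-1.5) == {frozenset({'A', 'B'}), frozenset({'C'})}
assert superbasins(-0.5) == {frozenset({'A', 'B', 'C'})}
```

Evaluating the superbasins on a ladder of thresholds and drawing each group as a node, joined where groups merge, yields the tree structure of the disconnectivity graph described in Section 5.3.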
The structure and dynamics of atomic and molecular clusters, the folding of proteins (1, 2), and the complicated phenomenology of glasses are all manifestations of the underlying potential energy surface (PES) (3). In each of these fields related ideas have emerged to explain and predict chemical and physical properties in terms of the PES. In studies of clusters and glasses the PES itself is often investigated directly, whereas for proteins and other biomolecules it is also common to define free energy surfaces, which are expressed in terms of a small number of order parameters. Here, typical order parameters include the number of hydrogen-bonds in the ‘native’ (folded) state, and structural quantities such as the radius of gyration.
The term ‘energy landscape’ was probably first introduced in the context of free energy surfaces (4–13). In particular, the surfaces obtained from models based on spin glass theory are discussed further in Sections 9.3 and 9.4. This approach is one aspect of ‘energy landscape theory’, but in this book a broader view is intended, which extends from the geometrical properties of potential energy surfaces to how these features determine the observed structural, dynamic and thermodynamic properties. Characterisation of a free energy surface often represents an important intermediate step in this analysis. The energy landscapes considered in subsequent chapters therefore include both potential energy and free energy surfaces.
This chapter provides further examples of potential energy surfaces for clusters, with emphasis upon how the thermodynamic and dynamic properties observed in simulations or experiments are determined by details of the PES. First the phenomenology of thermodynamics in finite systems is discussed, followed by an analysis of stability conditions for the most popular ensembles. Some technicalities involved in cluster simulation are discussed in Section 8.2, and an example is given for the LJ7 cluster. The subsequent sections treat various different systems, including Lennard-Jones, Morse, alkali halide, and water clusters, as well as buckminsterfullerene. Among the issues that are analysed in detail are the evolution of the PES with size for Lennard-Jones clusters, and the effect of the range of the potential for Morse clusters, which may provide insight into a number of different phenomena. Dynamics, including relaxation of the total energy, have been treated using the master equation approach (Section 7.2.2) for several of these systems.
Finite size phase transitions
A first-order phase transition in a bulk system occurs when the appropriate thermodynamic potential (e.g. Helmholtz free energy or entropy for the canonical and microcanonical ensembles) exhibits a double minimum (or maximum) over some range of parameter space, with a finite barrier (or well) between the two extrema. For a solid–liquid transition the control parameter may be either temperature or pressure. For conditions of constant N, P and T the transition occurs where the chemical potentials of the solid and liquid phases are equal.
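A caricature of how such a transition sharpens with system size is a two-state model in which each phase has a fixed energy and entropy per particle; the numbers below are hypothetical, chosen only to make the crossover visible:

```python
import numpy as np

# Two-state sketch of a finite-size 'phase transition' (numbers invented):
# per-particle energies and entropies for two competing phases, k = 1.
e_s, s_s = 0.0, 0.0     # 'solid' phase
e_l, s_l = 1.0, 2.0     # 'liquid' phase (higher energy, higher entropy)

def p_liquid(T, N):
    """Canonical occupation probability of the liquid phase for N particles."""
    dF = N * ((e_l - e_s) - T * (s_l - s_s))   # free energy difference
    return 1.0 / (1.0 + np.exp(dF / T))

Tc = (e_l - e_s) / (s_l - s_s)       # free energies equal at Tc = 0.5
assert np.isclose(p_liquid(Tc, 100), 0.5)
# Below Tc the 'solid' dominates, and the crossover sharpens with N:
assert p_liquid(0.45, 1000) < p_liquid(0.45, 10) < 0.5
```

In the thermodynamic limit the occupation probability becomes a step function at Tc; for a finite cluster the two phase-like forms coexist over a measurable temperature range, which is the phenomenology analysed in this chapter.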
The motivation for writing this book was to produce a unified and reasonably self-contained account of how potential energy and free energy surfaces are used to study clusters, biomolecules, glasses and supercooled liquids. Making connections between these different fields, where the same ideas frequently resurface in different guises, will hopefully assist future research and interdisciplinary communication.
While this is essentially a theoretical book, I have tried to provide sufficient background information and references to experiments to put the objectives in a proper context. Readers are assumed to be familiar with the basic ideas of quantum mechanics, statistical mechanics and point group symmetry. Most other derivations are treated in sufficient detail to make them accessible to nonspecialists, graduate students and advanced undergraduates. A number of more peripheral topics are covered at an introductory level to provide pointers to further reading.
Some of this material has formed the basis of lecture courses on the subject of energy landscapes delivered to students at Cambridge and Harvard Universities, and at Les Houches Summer Schools, although it has all been rewritten in the current endeavour. I am particularly grateful to all the people who read initial drafts, and helped me to prepare figures.
No molecules were harmed in the writing of this book, although a number underwent significant rearrangements.
One of the fundamentally nonlinear problems we have not yet discussed is the problem of interacting quantum spins. The nonlinearity is embedded in the commutation relations of spin operators; in contrast with Bose or Fermi creation and annihilation operators, commutators of spin operators are not c-numbers, but are still operators. Therefore even apparently very simple spin models, such as the Heisenberg model, for which the Hamiltonian is quadratic in spins, may exhibit complicated dynamics. In fact the Heisenberg model describes the majority of phenomena occurring in magnets, such as various types of magnetic ordering, spin-glass transitions, etc. In the traditional approaches spins are treated as almost classical arrows weakly fluctuating around some fixed reference frame. This reference frame is defined by the existing global magnetic order. For example, in ferromagnets or antiferromagnets, where the global order specifies only one preferential direction (the direction of average magnetization or staggered magnetization), it is supposed that spins fluctuate weakly around this direction. In helimagnets there are two preferential directions describing a spiral; their spins fluctuate around the spiral configuration. When deviations of spins from average positions are small, the spin operators can be approximated by Bose creation and annihilation operators. This approach is called the spin-wave approximation. As I have already said, it is based on two assumptions: existence of a global reference frame and smallness of fluctuations. Difficulties arise when fluctuations become strong and destroy the global order.
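The nonlinearity of the spin commutation relations is easy to exhibit explicitly with small matrices (a standard check, not specific to this text): for spin-1/2 the commutator of two spin components is another operator, whereas the bosonic commutator is a c-number.

```python
import numpy as np

# Spin-1/2 operators S = sigma/2: their commutator [Sx, Sy] = i Sz is
# itself an operator, which is the nonlinearity referred to in the text.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(sx @ sy - sy @ sx, 1j * sz)

# Contrast: a boson annihilation operator truncated to n levels, where
# [a, a^dagger] = 1 is a c-number (the identity), apart from the artefact
# in the last diagonal entry introduced by the truncation.
n = 6
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
comm_b = a @ a.conj().T - a.conj().T @ a
assert np.allclose(comm_b[:-1, :-1], np.eye(n - 1))
```

The spin-wave approximation described above amounts to replacing the spin operators by such bosonic ones for small fluctuations about the ordered reference frame, which linearises the algebra at the price of the two assumptions just stated.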
In this part of the book I will primarily discuss disordered magnetic systems. However, I shall not touch on such complicated sources of disorder as quenched randomness.