Lattice gauge theories have played an important role in the theoretical description of phenomena in particle physics, and Monte Carlo methods have proven to be very effective in their study. In the lattice gauge approach a field theory is defined on a lattice by replacing partial derivatives in the Lagrangian by finite difference operators. For physical systems a quantum field theory on a four-dimensional space–time lattice is used, but simpler models in lower dimensions have also been studied in the hope of gaining some understanding of more complicated models as well as for the development of computational techniques. The present chapter is not at all intended to give a thorough treatment, but rather to convey the flavor of the subject to the non-expert.
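As a minimal illustration of the discretization step (shown here for a scalar field; the full gauge-invariant construction uses link variables instead), a partial derivative is replaced by a finite difference on a lattice of spacing a:

```latex
\partial_\mu \phi(x) \;\longrightarrow\; \frac{\phi(x + a\hat{\mu}) - \phi(x)}{a},
```

which recovers the continuum derivative in the limit a → 0.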
In a Monte Carlo simulation we attempt to follow the ‘time dependence’ of a model for which change, or growth, does not proceed in some rigorously predefined fashion (e.g. according to Newton’s equations of motion) but rather in a stochastic manner which depends on a sequence of random numbers which is generated during the simulation. With a second, different sequence of random numbers the simulation will not give identical results but will yield values which agree with those obtained from the first sequence to within some ‘statistical error’. A very large number of different problems fall into this category: in percolation an empty lattice is gradually filled with particles by placing a particle on the lattice randomly with each ‘tick of the clock’. Lots of questions may then be asked about the resulting ‘clusters’ which are formed of neighboring occupied sites. Particular attention has been paid to the determination of the ‘percolation threshold’, i.e. the critical concentration of occupied sites for which an ‘infinite percolating cluster’ first appears. A percolating cluster is one which reaches from one boundary of a (macroscopic) system to the opposite one. The properties of such objects are of interest in the context of diverse physical problems such as conductivity of random mixtures, flow through porous rocks, behavior of dilute magnets, etc. Another example is diffusion limited aggregation (DLA), where a particle executes a random walk in space, taking one step at each time interval, until it encounters a ‘seed’ mass and sticks to it. The growth of this mass may then be studied as many random walkers are turned loose. The ‘fractal’ properties of the resulting object are of real interest, and while there is no accepted analytical theory of DLA to date, computer simulation is the method of choice. In fact, the phenomenon of DLA was first discovered by Monte Carlo simulation.
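The percolation setup just described is simple enough to sketch directly. In the following minimal illustration (our own sketch, not code from this book; function names are invented), sites of an L × L square lattice are occupied with probability p, and a breadth-first search tests whether an occupied cluster spans from the top row to the bottom row:

```python
import random
from collections import deque

def spans(grid, L):
    """True if an occupied cluster connects the top row to the bottom row."""
    seen = set()
    q = deque((0, j) for j in range(L) if grid[0][j])
    seen.update(q)
    while q:
        i, j = q.popleft()
        if i == L - 1:                      # reached the bottom row
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                q.append((ni, nj))
    return False

def spanning_fraction(L, p, trials=200):
    """Fraction of random lattices at concentration p with a spanning cluster."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(L)]
                for _ in range(L)]
        hits += spans(grid, L)
    return hits / trials
```

Well below the square-lattice site percolation threshold (p_c ≈ 0.593) the spanning fraction is near zero; well above it, near one.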
In the preceding chapters we described the application of Monte Carlo methods in numerous areas that can be clearly identified as belonging to physics. Although the exposition was far from complete, it should have sufficed to give the reader an appreciation of the broad impact that Monte Carlo studies have already had in statistical physics. A more recent development is the application of these methods in non-traditional areas of physics-related research. More explicitly, we mean subject areas that are not normally considered to be physics at all but which make use of physics principles at their core. In some cases physicists have entered these arenas by introducing quite simplified models that represent a ‘physicist’s view’ of a particular problem. Often such descriptions are oversimplified, but the hope is that some essential insight can be gained, as is the case in many traditional physics studies. (A provocative perspective on the role of statistical physics outside physics has been presented by Stauffer (2004).) In other cases, however, Monte Carlo methods are being applied by non-physicists (or ‘recent physicists’) to problems that, at best, have a tenuous relationship to physics. This chapter serves as a brief glimpse of applications of Monte Carlo methods ‘outside’ physics. The number of such studies will surely grow rapidly; even now, we wish to emphasize that we will make no attempt to be complete in our treatment.
The concepts of scaling and universality presented in Chapter 2 can be given a concrete foundation through the use of renormalization group (RG) theory. The fundamental physical ideas underlying RG theory were introduced by Kadanoff (1971) in terms of a simple coarse-graining approach, and a mathematical basis for this viewpoint was completed by Wilson (1971). Kadanoff divided the system up into cells of characteristic size ba, where a is the nearest neighbor spacing and ba < ξ, where ξ is the correlation length of the system (see Fig. 9.1). The singular part of the free energy of the system can then be expressed in terms of cell variables instead of the original site variables.
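A standard way of writing the resulting relation for the singular part of the free energy per site (the textbook Kadanoff scaling form, with reduced temperature t, field h, spatial dimension d, and RG eigenvalues y_t and y_h; this is a sketch of the standard result, not necessarily the book's own equation) is

```latex
f_s(t, h) = b^{-d}\, f_s\!\left(b^{y_t} t,\; b^{y_h} h\right).
```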
Within this book we have attempted to elucidate the essential features of Monte Carlo simulations and their application to problems in statistical physics. We have tried to give the reader practical advice as well as a theoretically based background for the methodology of the simulations and for the tools of analysis. New Monte Carlo methods will be devised and will be used with more powerful computers, but we believe that the advice given to the reader in Section 4.8 will remain valid.
In the previous chapters of this text we have examined a wide variety of Monte Carlo methods in depth. Although these are exceedingly useful for many different problems in statistical physics, there are some circumstances in which the systems of interest are not well suited to Monte Carlo study. Indeed there are some problems which may not be treatable by stochastic methods at all, since the quantities of interest are time-dependent properties governed by deterministic equations of motion. The purpose of this chapter is thus to provide a very brief overview of some of the other important simulation techniques in statistical physics. Our goal is not to present a complete list of other methods, or even a thorough discussion of those methods which are included, but rather to offer sufficient background to enable the reader to compare some of the different approaches and better understand the strengths and limitations of Monte Carlo simulations.
In this chapter we shall review some of the basic features of thermodynamics and statistical mechanics which will be used later in this book when devising simulation methods and interpreting results. Many good books on this subject exist and we shall not attempt to present a complete treatment. This chapter is hence not intended to replace any textbook for this important field of physics but rather to ‘refresh’ the reader’s knowledge and to draw attention to notions in thermodynamics and statistical mechanics which will henceforth be assumed to be known throughout this book.
In most of the discussion presented so far in this book, the quantum character of atoms and electrons has been ignored. The Ising spin models have been an exception, but since the Ising Hamiltonian is diagonal (in the absence of a transverse magnetic field), all energy eigenvalues are known and the Monte Carlo sampling can be carried out just as in the case of classical statistical mechanics. Furthermore, the physical properties are in accord with the third law of thermodynamics for Ising-type Hamiltonians (e.g. entropy S and specific heat vanish for temperature T → 0, etc.) in contrast to the other truly classical models dealt with in previous chapters (e.g. classical Heisenberg spin models, classical fluids and solids, etc.) which have many unphysical low temperature properties.
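The third-law behavior mentioned here is easy to verify for a small system by exact enumeration. As an illustration (our own sketch, with k_B = 1 and J = 1), the entropy of an open Ising chain follows from S = (U − F)/T; at low T it approaches ln 2 (the twofold ground-state degeneracy), so the entropy per spin vanishes in the thermodynamic limit:

```python
import math
from itertools import product

def chain_entropy(N, T, J=1.0):
    """Exact entropy (k_B = 1) of an open N-spin Ising chain by enumeration."""
    beta = 1.0 / T
    # energies of all 2^N spin configurations
    energies = []
    for s in product((-1, 1), repeat=N):
        E = -J * sum(s[i] * s[i + 1] for i in range(N - 1))
        energies.append(E)
    E0 = min(energies)                   # shift exponents for stability
    Z = sum(math.exp(-beta * (E - E0)) for E in energies)
    U = sum(E * math.exp(-beta * (E - E0)) for E in energies) / Z
    F = E0 - T * math.log(Z)             # free energy, F = -T ln Z_full
    return (U - F) / T                   # S = (U - F)/T
```

At T → 0 the total entropy tends to ln 2 (not to N ln 2, as it does at high temperature), so S per spin indeed vanishes.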
Advances in simulational methods sometimes have their origin in unusual places; such is the case with an entire class of methods which attempt to beat critical slowing down in spin models on lattices by flipping correlated clusters of spins in an intelligent way instead of simply attempting single spin-flips. The first steps were taken by Fortuin and Kasteleyn (Kasteleyn and Fortuin, 1969; Fortuin and Kasteleyn, 1972), who showed that it was possible to map a ferromagnetic Potts model onto a corresponding percolation model. The reason that this observation is so important is that in the percolation problem states are produced by throwing down particles, or bonds, in an uncorrelated fashion; hence there is no critical slowing down. In contrast, as we have already mentioned, the q-state Potts model when treated using standard Monte Carlo methods suffers from slowing down. (Even for large q where the transition is first order, the time scales can become quite long.) The Fortuin–Kasteleyn transformation thus allows us to map a problem with slow critical relaxation into one where such effects are largely absent. (As we shall see, not all slowing down is eliminated, but the problem is reduced quite dramatically.)
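One widely used cluster algorithm built on the Fortuin–Kasteleyn mapping is Wolff's single-cluster method for the Ising model: a cluster is grown from a randomly chosen seed spin by adding aligned neighbors with probability p = 1 − exp(−2βJ), and the whole cluster is then flipped. The following is a minimal sketch for the two-dimensional Ising model with periodic boundaries (our own illustration, not code from this chapter):

```python
import math
import random

def wolff_update(spins, L, beta, J=1.0):
    """Grow one Wolff cluster from a random seed site and flip it."""
    p_add = 1.0 - math.exp(-2.0 * beta * J)   # bond activation probability
    i, j = random.randrange(L), random.randrange(L)
    s0 = spins[i][j]
    cluster = {(i, j)}
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L   # periodic boundaries
            if spins[nx][ny] == s0 and (nx, ny) not in cluster:
                if random.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
    for x, y in cluster:                          # flip the entire cluster
        spins[x][y] = -s0
    return len(cluster)
```

Deep in the ordered phase a single cluster typically covers most of the lattice, so large correlated regions are updated in one step rather than spin by spin.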
The combination of improved experimental capability, great advances in computer performance, and the development of new algorithms from computer science have led to quite sophisticated methods for the study of certain biomolecules, in particular of folded protein structures. One such technique, called ‘threading’, picks out small pieces of the primary structure of a protein whose structure is unknown and examines extensive databases of known protein structures to find similar pieces of primary structure. One then guesses that this piece will have the same folded structure as that in the known structure. Since pieces do not all fit together perfectly, an effective force field is used to ‘optimize’ the resultant structure, and Monte Carlo methods have already begun to play a role in this approach. (There are substantial similarities to ‘homology modeling’ approaches to the same, or similar, problems.) Of course, the certainty that the structure is correct comes primarily from comparison with experimental structure determination of crystallized proteins. One limitation is thus that not all proteins can be crystallized, and, even if they can, there is no assurance that the structure will be the same in vivo. Threading algorithms have, in some cases, been extraordinarily successful, but since they do not make use of the interactions between atoms it would be useful to complement this approach by atomistic simulations. (For an introductory overview of protein structure prediction, see Wooley and Ye (2007).) Biological molecules are extremely large and complex; moreover, they are usually surrounded by a large number of water molecules. Thus, realistic simulations that include water explicitly and take into account polarization effects are inordinately difficult. There have also been many attempts to handle this task by means of molecular dynamics simulations, but the necessity of performing very long runs of very large systems makes it very difficult to reach equilibrium. 
A possible advance is the use of so-called accelerated molecular dynamics (Miao et al.), and it has been suggested that this may help to understand ‘genetic engineering’ mechanisms (Palermo et al.). However, there are many phenomena that involve large spatial and temporal scales so that the use of coarse-grained models may often be necessary (Hyeon and Thirumalai).
The examination of the equation of state of a two-dimensional model fluid (the hard disk system) was the very first application of the importance sampling Monte Carlo method in statistical mechanics (Metropolis et al., 1953), and since then the study of both atomic and molecular fluids by Monte Carlo simulation has been a very active area of research. Remember that statistical mechanics can deal well analytically with very dilute fluids (ideal gases), and it can also deal well with crystalline solids (making use of the harmonic approximation and perfect crystal lattice periodicity and symmetry), but the treatment of strongly correlated dense fluids (and their solid counterparts, amorphous glasses) is much more difficult. Even the description of short range order in fluids in a thermodynamic state far away from any phase transition is a non-trivial matter (unlike the lattice models discussed in Chapter 5, where far away from phase transitions the molecular field approximation, or a variant thereof, is usually both good enough and easily worked out, and the real interest is generally in phase transition problems).
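The elementary move of such a hard-disk simulation is easy to sketch: displace a randomly chosen disk by a small random amount and reject the move if it produces any overlap, with periodic boundaries handled by the minimum-image convention. A minimal illustration (function names and parameters are our own, not from the original work):

```python
import random

def trial_move(pos, L, sigma, delta):
    """Attempt to displace one disk; reject if it overlaps any other disk.

    pos: list of (x, y) centers; L: box side; sigma: disk diameter;
    delta: maximum displacement per coordinate. Returns True if accepted.
    """
    i = random.randrange(len(pos))
    x, y = pos[i]
    xn = (x + random.uniform(-delta, delta)) % L
    yn = (y + random.uniform(-delta, delta)) % L
    for j, (xj, yj) in enumerate(pos):
        if j == i:
            continue
        dx = (xn - xj + L / 2) % L - L / 2   # minimum-image convention
        dy = (yn - yj + L / 2) % L - L / 2
        if dx * dx + dy * dy < sigma * sigma:  # overlap -> reject
            return False
    pos[i] = (xn, yn)
    return True
```

Since every accepted configuration has the same (zero) potential energy, the Metropolis acceptance criterion reduces to this simple overlap test.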
One longstanding limitation on the resolution of Monte Carlo simulations near phase transitions has been the need to perform many runs to precisely characterize peaks in response functions such as the specific heat. Dramatic improvements have become possible with the realization that entire distributions of properties, not just mean values, can be useful; in particular, they can be used to predict the behavior of the system at a temperature other than that at which the simulation was performed. There are several different ways in which this may be done. The reweighting may be done after a simulation is complete or it may become an integral part of the simulation process itself. The fundamental basis for this approach is the realization that the properties of the systems will be determined by a distribution function in an appropriate ensemble.
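The basic single-histogram idea can be sketched in a few lines: a time series of energies E_i and observable values A_i sampled at inverse temperature β0 is reweighted by exp[−(β1 − β0)E_i] to estimate ⟨A⟩ at β1. A minimal illustration (our own sketch; in practice the overlap between the sampled and target distributions limits how far β1 may be from β0):

```python
import math

def reweight(energies, values, beta0, beta1):
    """Estimate <A> at beta1 from a time series sampled at beta0."""
    # log-weights; subtract the maximum for numerical stability
    w = [-(beta1 - beta0) * e for e in energies]
    wmax = max(w)
    ws = [math.exp(x - wmax) for x in w]
    num = sum(a * wi for a, wi in zip(values, ws))
    den = sum(ws)
    return num / den
```

For β1 = β0 the weights are all equal and the plain sample average is recovered; for β1 > β0 the low-energy samples dominate, as they should.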
Modern Monte Carlo methods have their roots in the 1940s when Fermi, Ulam, von Neumann, Metropolis, and others began considering the use of random numbers to examine different problems in physics from a stochastic perspective (Cooper, 1989); this set of biographical articles about S. Ulam provides fascinating insight into the early development of the Monte Carlo method, even before the advent of the modern computer. Very simple Monte Carlo methods were devised to provide a means to estimate answers to analytically intractable problems. Much of this work is unpublished and a view of the origins of Monte Carlo methods can best be obtained through examination of published correspondence and historical narratives. Although many of the topics which will be covered in this book deal with more complex Monte Carlo methods which are tailored explicitly for use in statistical physics, many of the early, simple techniques retain their importance because of the dramatic increase in accessible computing power which has taken place during the last two decades. In the remainder of this chapter we shall consider the application of simple Monte Carlo methods to a broad spectrum of interesting problems.
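A classic example of such a simple technique is the hit-or-miss estimate of π: points drawn uniformly in the unit square fall inside the quarter circle with probability π/4. A minimal sketch (our own illustration):

```python
import random

def estimate_pi(n, seed=None):
    """Hit-or-miss Monte Carlo estimate of pi from n random points."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n
```

The statistical error decreases only as n^{-1/2}, which is exactly why the more sophisticated importance-sampling methods of later chapters are needed for statistical physics.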
Long time tails in Green-Kubo formulas for transport coefficients indicate that long range correlations in non-equilibrium fluids cause divergent transport coefficients in the Navier-Stokes equations for two dimensional fluids, and divergent higher order gradient corrections to these equations for three dimensional fluids. Possible resolutions of these difficulties are considered, for transport of momentum or energy in fluids maintained in non-equilibrium stationary states. The resolutions of the divergence difficulties depend on the particular flow under consideration. For stationary Couette flow in a gas, the divergences are resolved by including non-linear terms in the kinetic equations, leading to logarithmic terms in the velocity gradients for the equations of fluid flow for two dimensional gases, and fractional powers for three dimensional flows. Methods used for Couette flow do not resolve the divergence problems for stationary heat flow. Instead, the difficulties are resolved by taking the finite size of the system into account.
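For orientation, the standard Green-Kubo expression referred to here can be written, for the example of the shear viscosity (a textbook form; V, T, and the momentum flux J_xy are not defined in this summary), as

```latex
\eta = \frac{1}{V k_B T} \int_0^\infty \langle J_{xy}(0)\, J_{xy}(t) \rangle \, dt .
```

The long time tails are algebraic decays of the integrand, of order t^{-d/2} in d dimensions, so that for d = 2 the time integral diverges.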
Kinetic theory is defined as a branch of statistical mechanics that attempts to describe the non-equilibrium properties of macroscopic systems in terms of microscopic properties of the constituent particles or quantum excitations. The history of kinetic theory is summarized from the first understandings of the connections of temperature and pressure of perfect gases with their average kinetic energy and with the average momentum transfer to the walls by particle-wall collisions. The history continues with a discussion of the contributions of Maxwell and Boltzmann, and the development of the Boltzmann transport equation. Modern developments include extending the Boltzmann equation to moderately dense gases, formulation of kinetic theory for hard sphere systems, discovery of long time tail contributions to the Green-Kubo expressions for transport coefficients, and applications of kinetic theory to fluctuations in gases, to quantum gases, and to granular particles. The contents of each chapter are then summarized.