
Three decades of many-body potentials in materials research

Published online by Cambridge University Press: 09 May 2012

Susan B. Sinnott
Affiliation:
Department of Materials Science and Engineering, University of Florida; ssinn@mse.ufl.edu
Donald W. Brenner
Affiliation:
Department of Materials Science and Engineering, North Carolina State University; brenner@ncsu.edu

Abstract

A brief history of atomic simulation as used in chemistry, physics, and materials science is presented, starting with the seminal work of Eyring in the 1930s and continuing through to current work and future challenges. This article provides the background and perspective needed to understand how reactive many-body potentials developed over the last three decades and how they have impacted materials research. It also explains how this substantial impact has been facilitated by increases in computational resources and traces the development of reactive potentials, which have steadily grown in complexity and sophistication. Together with the other contributions in this issue of MRS Bulletin, this article will help guide and inspire the next generation of computational materials scientists and engineers as they build on current capabilities to expand atomic simulation into new and exciting areas of materials research.

Copyright © Materials Research Society 2012

Intent of this issue

The introduction of physically sound interatomic potential energy functions that go beyond simple pair-additive interactions (i.e., many-body potentials), beginning in the 1980s, opened tremendous modeling capabilities that continue to shape new directions and create critical breakthroughs in materials research [1–5]. The key to their proven usefulness is a combination of relative accuracy in reproducing important structures (including defects) across a wide range of material types and their overall computational efficiency. This combination allows atomic simulations that are large enough to explore phenomena such as the correlated dynamics associated with plastic flow in metals and accurate enough to be compared to specific materials and structures. With this capability, continuum concepts can be tested at the atomic scale, experimental results interpreted in new ways, and virtual experiments carried out at the forefront of the development of new materials.

Our intentions with this issue of MRS Bulletin are to celebrate the rapid succession of many-body potentials that were introduced in the early to mid-1980s, to review the impact that these potentials have had on research carried out by the materials community in general, and to outline where the field is headed in the next three decades. Contributions in this issue are included from some of the original potential developers, from researchers who have made seminal contributions to materials research using these potentials, and from researchers who are at the forefront of developing and applying the next generation of methods. We expect that this “continuum” of modeling concepts and applications will help guide and inspire the next generation of computational materials scientists and engineers as they expand this capability to new and exciting areas of materials research.

Brief history of atomic simulation

To understand how this field has progressed to its current state and to predict where it is going, it is useful to briefly review the history of atomic simulation using classical trajectories (see the sidebar). The first reported study using classical trajectories to model a chemical process was published in 1936 by Hirschfelder, Eyring, and Topley [6]. At this point in the development of atomic simulation, the concept of a continuous potential energy hypersurface over which atoms move was still relatively new, and they were interested in understanding how to generate useful potential energy surfaces that define atomic motion for chemical reactions and how best to estimate rates from these surfaces. As part of their studies, they used a single, hand-calculated classical trajectory to study the reaction H + H2 → H2 + H constrained to a linear configuration. This reaction was chosen because, with only three electrons, it is relatively simple. The linear configuration was used because it both reduces the number of degrees of freedom that had to be followed and corresponds to the lowest energy barrier for reaction (and hence the configuration that contributes most to the overall reaction rate). Their potential energy surface was generated from an analytic approximation to a quantum chemical calculation in which the exchange and Coulomb integrals are approximated by pair functions that depend on distance. The potential energy surface that they used included a potential energy “well” (known later as Lake Eyring [7]) that yielded a stable H3 molecule (Figure 1). One consequence of this well is that the reaction never completed the hydrogen atom exchange during the course of the published trajectory; instead, the system remained trapped as an H3 intermediate. Nonetheless, this single trajectory was a beautiful illustration of the utility of classical trajectories for understanding complex chemical dynamics.
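
For readers unfamiliar with how such early surfaces were constructed, the following is a minimal sketch of the London-equation form on which the Eyring group’s approach was built; the specific parameterization used in Reference 6 differed in its details. For three atoms A, B, and C,

    E = Q_{AB} + Q_{BC} + Q_{CA}
        - \sqrt{ \tfrac{1}{2} \left[ (J_{AB} - J_{BC})^2 + (J_{BC} - J_{CA})^2 + (J_{CA} - J_{AB})^2 \right] },

where each Coulomb integral Q and exchange integral J is approximated as a fraction of a Morse-type pair function of the corresponding interatomic distance. Because three pair functions generate the full three-body energy landscape, a surface of this kind can be evaluated at every configuration encountered along a trajectory, which is what made a hand-calculated trajectory feasible.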

Classical trajectories

The potential energy functions discussed in the articles in this issue of MRS Bulletin are sometimes referred to as “classical potentials” to distinguish them from forces and energies obtained directly from quantum-mechanical electronic structure calculations. This label can be misleading: for both types of interatomic forces, the atomic trajectories are typically obtained by treating the atoms as classical particles and numerically integrating the classical equations of motion. The distinction between classical and quantum effects is therefore more appropriately applied to the description of atomic motion than to the potential itself.
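
As a concrete illustration of what numerically integrating the classical equations of motion entails, the following is a minimal sketch of the velocity Verlet algorithm that underlies most molecular dynamics codes; it is a generic example written for this article, not code from any package discussed in this issue.

    import numpy as np

    def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
        """Integrate Newton's equations of motion for n_steps.

        pos, vel : (N, 3) arrays of positions and velocities
        force_fn : callable mapping positions to (N, 3) forces
        mass     : (N, 1) array of particle masses
        """
        forces = force_fn(pos)
        for _ in range(n_steps):
            vel += 0.5 * dt * forces / mass   # first half "kick"
            pos += dt * vel                   # "drift" at constant velocity
            forces = force_fn(pos)            # forces at the new positions
            vel += 0.5 * dt * forces / mass   # second half "kick"
        return pos, vel

Whether force_fn evaluates a classical potential or calls an electronic structure code, the trajectory itself is generated this way, which is why the classical/quantum distinction properly belongs to the atomic motion rather than to the source of the forces.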

Figure 1. Three-dimensional representation of the potential energy surface used by Eyring and co-workers to model H + H2 → H2 + H [6]. Each horizontal axis corresponds to the distance between the outer hydrogen atom and the center atom, and the vertical axis corresponds to potential energy. The different shades represent potential energy for the interacting atoms. The darker the color, the lower the potential energy. This is a representation rather than the exact surface used in Reference 6, so units are not given on the axes. “Lake Eyring” is a dip in the potential energy that corresponds to a stable (and non-physical) H3 molecule.

Since the early calculations of Eyring and co-workers, classical trajectories have proven to be a powerful tool in chemical physics, not only for calculating reaction rates but also for testing and refining theories of chemical kinetics. For example, the central concept of transition state theory (TST) is that the kinetics of a given reaction can be calculated from a single configuration along a reaction path. This configuration is usually taken to be the one at the highest intermediate energy along the reaction path (i.e., the activated complex), but this is often not the best choice because trajectories can recross the dividing surface back toward reactants. Classical trajectories have been used for a wide range of systems to test the assumptions of TST and to refine the theory by finding more suitable configurations for the transition state [8,9].
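
For context, conventional TST estimates the rate from equilibrium properties of that single configuration; in its standard textbook form (not specific to References 8 and 9),

    k_{TST} = \frac{k_B T}{h} \, \frac{Q^{\ddagger}}{Q_R} \, e^{-E_0 / k_B T},

where Q^{\ddagger} and Q_R are the partition functions of the transition-state configuration (with the reaction coordinate removed) and of the reactants, and E_0 is the barrier height. Because every trajectory crossing the dividing surface is counted as reactive, recrossings make k_{TST} an upper bound on the true classical rate, and variational TST exploits this by moving the dividing surface to the location that minimizes the computed rate.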

During the 1940s and 1950s, classical trajectories using pair interactions were used to evaluate some of the basic principles of statistical mechanics, including the rate at which equilibrium conditions are established (if at all) for relatively simple systems [10,11]. From a macroscopic viewpoint, the behavior of many-body systems is irreversible, as dictated by the second law of thermodynamics. Classical trajectories, however, are in principle deterministic and reversible, implying that routes to states of increased order (decreased entropy) are possible. Among the conclusions of these studies [10,11] is that some relatively small, simplified systems, such as short harmonic chains with weak anharmonic terms, may never fully equilibrate, and therefore traditional statistical mechanics is not generally valid for such systems. On the other hand, the numerical approximations used to solve the classical equations of motion can themselves ensure an approach to equilibrium: small errors arising from the truncation of numbers during the calculation play a role similar to the small disturbances (e.g., external temperature fluctuations) that help drive physical systems toward equilibrium.
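
The canonical example of such a numerical experiment is the Fermi-Pasta-Ulam-Tsingou chain. The sketch below, with illustrative parameter values, shows how such a setup works: energy placed in the lowest vibrational mode of a weakly anharmonic chain recurs nearly periodically rather than spreading toward equipartition.

    import numpy as np

    N, beta, dt, n_steps = 32, 0.1, 0.05, 200_000
    x = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))  # excite the lowest mode
    v = np.zeros(N)

    def forces(x):
        # Fixed ends; nearest-neighbor harmonic bonds with a weak quartic term.
        y = np.concatenate(([0.0], x, [0.0]))
        d = np.diff(y)                  # bond extensions
        f_bond = d + beta * d**3        # tension in each bond
        return f_bond[1:] - f_bond[:-1]

    f = forces(x)
    for _ in range(n_steps):            # velocity Verlet integration
        v += 0.5 * dt * f
        x += dt * v
        f = forces(x)
        v += 0.5 * dt * f
    # Projecting (x, v) onto the chain's normal modes at intervals during the
    # run reveals near-recurrences of the initial state instead of equipartition.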

While making important contributions to chemistry and physics, classical atomic trajectories did not, in general, have the same early impact on the field of materials science and engineering, due mainly to limited computational power relative to what is needed to model the structures of traditional interest to materials scientists. One exception was the simulations of Vineyard and co-workers at Brookhaven National Laboratory, first published in 1960, which used classical dynamics to examine irradiation damage in metals (Figure 2) [12]. These calculations were performed in three dimensions with continuous pair-additive potentials (as opposed, for example, to hard spheres). Soon after the Brookhaven calculations, Rahman at Argonne National Laboratory used classical trajectories with simple pair-additive potentials to characterize the structure and atomic motion in a model liquid [13]. Like the work of Eyring and co-workers on chemical dynamics, Rahman’s simulations demonstrated the utility of classical simulations for understanding dynamics in fluids and for generating quasi-experimental data that could be used to test and refine existing theories of liquid structure and correlated dynamics. As computing power continued to increase, Rahman and others extended this initial work to more complicated fluids, including water [14], as well as biological systems [15].
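
Rahman’s model liquid was argon interacting through the Lennard-Jones pair potential,

    V(r) = 4 \varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right],

with parameters for argon of approximately ε/k_B ≈ 120 K and σ ≈ 3.4 Å. That a two-parameter pair function was sufficient for meaningful comparison with experimental data on liquid structure helps explain why pair potentials remained the standard simulation tool for the following two decades.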

Figure 2. Illustration of atomic trajectories from a 500-atom simulation of radiation damage in copper carried out by Vineyard (based on figures in Reference 12). Spheres represent initial atomic positions in a crystal; the lines and dots trace atomic motion during a collision cascade started by motion of the atom represented by the red sphere.

Several common themes emerge from these early atomic simulations. First, molecular simulation was emerging as a powerful tool in physics and chemistry that complemented both theory and experiment: not only could it provide data, it could also be used to test and refine theories under exceedingly well-controlled conditions. Second, the development of atomic simulation as a tool was driven in large part by the availability of increasingly powerful computing resources, such as those at US national laboratories. Finally, it became clear that the demands placed on a potential energy function depend strongly on the application. For chemistry, well-refined potential energy surfaces are needed to provide quantitative data, but even approximate surfaces can yield useful information for exploring general phenomena. Similarly, in statistical mechanics, simple potentials can yield exceedingly useful information.

Over the same time period that simulations were emerging as important tools in chemistry and physics, new and sophisticated dislocation-based theories of materials deformation were being developed. Why, then, did atomic simulation not have the same impact on materials science over this period, in particular in understanding material deformation? There are two likely leading reasons. First, stress fields, which play a critical role in dislocation dynamics and plasticity, require relatively large systems to be completely contained, and computing resources were not sufficient to allow simulation of the necessary system sizes. Second, even if sufficient computing resources had been available to model relatively large systems, the pair potentials in use have well-established limitations for quantitatively reproducing the properties of crystalline solids that are required to satisfactorily describe dislocations. These limitations include elastic constants that are forced to obey the Cauchy relation (C12 = C44, where C12 and C44 are elastic constants) in cubic systems, generally inaccurate surface and vacancy formation energies and relaxations, and a general inability to stabilize any but close-packed lattices as the lowest-energy structure.
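
The first limitation follows directly from the form of the energy: for a cubic crystal bound by central pair forces, with every atom at a center of symmetry and at zero external stress, the elastic constants must satisfy the Cauchy relation

    C_{12} = C_{44}.

Most real metals violate this relation badly; for copper, for example, the room-temperature values are roughly C12 ≈ 121 GPa and C44 ≈ 75 GPa, so a pair potential fitted to one constant cannot simultaneously reproduce the other.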

There was a convergence of capabilities in the 1980s that led to a rapid rise in the use of atomic simulation in materials science. First, there were new theories of interatomic bonding [16], out of which came many-body potential energy expressions that did not have the inherent limitations of pair potentials for quantitatively describing materials properties, yet had computing demands only slightly higher than those of pair potentials [17]. Second, major computing resources of the kind that had driven much of the early work at the US national laboratories (and at companies such as IBM) were becoming readily available to academic and other institutions. Third, there was a pressing need for new materials with properties tailored down to the atomic scale, which could not be adequately calculated with the existing theory and modeling tools. Finally, researchers such as the developers of the embedded-atom method (EAM) [18] not only published papers, they also freely released their computer codes to other researchers (Figure 3). Further details about the EAM are given in the contribution by Foiles and Baskes in this issue.
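
The EAM expression illustrates what “many-body” means in this context. The total energy is written as

    E_{tot} = \sum_i F_i(\bar{\rho}_i) + \frac{1}{2} \sum_i \sum_{j \neq i} \phi_{ij}(r_{ij}),
    \qquad \bar{\rho}_i = \sum_{j \neq i} \rho_j(r_{ij}),

where φ_ij is a pair repulsion, ρ_j(r_ij) is the electron density contributed by neighbor j at the site of atom i, and the embedding function F_i gives the energy of placing atom i into that host density. Because F is nonlinear, the effective strength of each bond depends on the local environment, which is precisely what a pair-additive energy cannot capture.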

Figure 3. Atomic configurations from a molecular dynamics simulation of dislocations created by high strain rate compression cutting an AlCu precipitate in an Al matrix. The precipitate is about 5 nm thick. Only atoms with local symmetry different from the bulk are shown. The compression comes from the left, which drives the dislocations from left to right. The simulations were carried out using the embedded-atom method potentials and the large-scale atomic/molecular massively parallel simulator (LAMMPS) code.

An interesting aspect of this research convergence in the 1980s was that the potential functions that enabled simulations targeted at particular materials came almost simultaneously from multiple groups in different countries. The Finnis-Sinclair potentials from the United Kingdom [19], the EAM potentials from the United States [20], the glue potentials from Italy [21], the effective medium potentials from Denmark [22], and the corrected effective medium theory potentials from the United States [23] all looked very similar in form, despite different theoretical derivations and intended applications. Even the bond-order potentials developed around the same time [24–27], despite initially looking very different in form from functions such as the EAM [28], came from similar moments expansions of the local electronic density of states [5]. One exception to this common form (see the sidebar) is the Stillinger-Weber potential, which was introduced to model silicon [29].
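
The formal similarity can be made explicit. The Finnis-Sinclair potentials use an EAM-like expression with a square-root embedding function, F(ρ̄) = −A√ρ̄, which follows from a second-moment approximation to the tight-binding density of states. The Tersoff-type bond-order potentials instead write the energy as

    E = \frac{1}{2} \sum_{i \neq j} \left[ V_R(r_{ij}) - b_{ij} V_A(r_{ij}) \right],

where V_R and V_A are repulsive and attractive pair terms and the bond order b_ij decreases as the local coordination of the bond increases. A bond order that varies roughly as the inverse square root of the coordination reproduces the square-root embedding [28], so the two families are closely related despite their different appearance.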

Today’s challenges and tomorrow’s opportunities

The topics outlined in this issue were chosen to illustrate the research convergence of the 1980s in this area: the theoretical formalisms that led to new many-body potentials, the transition of these theories to potential functions that are useful for large-scale simulations, the codes that spread these potentials throughout the research community, and the materials and applications of atomic modeling that have been opened by these capabilities. In the first article, Finnis discusses some of the quantum mechanical concepts that have led to the development of effective many-body potentials as well as some of the functional forms and emerging methods being used to fit parameters for these functions. This is followed by a contribution from Foiles and Baskes that focuses on the widely used EAM and related potentials, including some contemporary applications in materials science. Next is a contribution from Pastewka et al., in which the bond-order potentials and their application to covalently bound ceramic materials are discussed in detail. This is followed by a contribution from Shin and co-workers that discusses two relatively new dynamic charge-transfer methods: the reactive force field (ReaxFF) and charge optimized many-body (COMB) potentials. The focus is on atomic-scale modeling of systems with non-uniform bonding environments, including surface chemistry between covalently bound molecules and metallic or metal-oxide surfaces. In the final contribution, Plimpton and Thompson examine the performance and application of multiple potentials, including those discussed in this issue of MRS Bulletin, that are incorporated within the highly parallelizable, open-source large-scale atomic/molecular massively parallel simulator (LAMMPS) molecular dynamics software.

The Stillinger-Weber silicon potential

Stillinger and Weber developed the original “work horse” potential function for silicon [1], a sum of two- and three-body interactions whose potential energy goes to zero as the atoms are separated. The potential function was originally developed to uncover, via simulation, hidden structures in molten silicon. However, it was so carefully parameterized to both solid and liquid properties that it was quickly adopted by a large number of researchers to model a wide range of silicon structures and properties [2–5].
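
As a sketch of its form (see Reference 1 below for the full parameterization), the Stillinger-Weber energy is

    E = \sum_{i<j} v_2(r_{ij}) + \sum_{i<j<k} v_3(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_k),

where the three-body term contains a factor proportional to (\cos\theta_{jik} + 1/3)^2 that vanishes at the tetrahedral angle (cos 109.47° = −1/3). This explicit angular penalty, rather than an environment-dependent bond strength, is what sets the potential apart from the common form described in the main text.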

1. F.H. Stillinger, T.A. Weber, Phys. Rev. B 31, 5262 (1985).

2. S. Sastry, C.A. Angell, Nature Mater. 2, 739 (2003).

3. M.J. Caturla, T.D. de la Rubia, L.A. Marques, G.H. Gilmer, Phys. Rev. B 54, 16683 (1996).

4. V.V. Bulatov, S. Yip, A.S. Argon, Phil. Mag. A 72, 453 (1995).

5. J.L. Feldman, M.D. Kluge, P.B. Allen, F. Wooten, Phys. Rev. B 48, 12589 (1993).

These articles illustrate the capabilities of state-of-the-art many-body potentials and their application to fundamental problems in materials science. As was the case in the 1980s, the computational community’s trifecta of requirements for many-body reactive potentials remains: (1) incorporation of appropriately high levels of theory, (2) implementation in a manner that enables practical application, and (3) utilization in software that takes advantage of evolving computing capabilities. As discussed in the article by Plimpton and Thompson, the computational cost of the many-body potentials developed over the last three decades varies by several orders of magnitude. This provides a wide array of new opportunities for the simulation community to take advantage of the associated advances in computing power, either to study ever larger materials systems (~10^11 atoms) with well-established and less expensive potentials, or to examine smaller systems (~10^4 atoms) with recent, more computationally intensive potentials intended for more complex systems.
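
A back-of-the-envelope sketch makes the trade-off concrete; the per-atom costs below are hypothetical round numbers chosen only to span the several-orders-of-magnitude range described above, not measured benchmarks.

    # How many atoms can a fixed computing budget reach for potentials of
    # different cost? All numbers are illustrative assumptions.
    core_seconds = 1e6 * 3600   # e.g., one million core-hours
    n_timesteps = 1e6           # length of the simulation

    for name, cost_per_atom_step in [
        ("simple pair potential", 1e-7),
        ("EAM-class potential", 1e-6),
        ("reactive charge-transfer potential", 1e-3),
    ]:
        n_atoms = core_seconds / (n_timesteps * cost_per_atom_step)
        print(f"{name}: ~{n_atoms:.0e} atoms for {n_timesteps:.0e} steps")

Under these assumptions, the cheapest potential reaches tens of billions of atoms while the most expensive reaches a few million, bracketing the range quoted above.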

New directions and capabilities have recently emerged that are expected to greatly influence atomistic materials science and engineering over the next three decades. One example of a new direction is the combination of atomic simulations with traditional engineering approaches, such as finite element analysis, which is beginning to blur the distinction between atomic and continuum concepts [30]. Among the emerging capabilities is the continuing ad hoc incorporation of electronic effects into analytic potentials, which goes significantly beyond the moments expansions of the 1980s discussed in the following contributions. Examples include charge equilibration approaches and electronic force fields that can simulate excited electronic states in complex, heterogeneous systems [31]. Given the importance of excited states to the dynamic properties of complex material systems, such as photovoltaic devices, it is expected that the latter capabilities will find increasing use in the computational materials science community.
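
As an example of one such approach, charge equilibration (QEq-type) schemes assign each atom a charge q_i by minimizing an energy of the general form

    E(\{q\}) = \sum_i \left( \chi_i q_i + \frac{1}{2} J_i q_i^2 \right) + \sum_{i<j} q_i q_j J_{ij}(r_{ij})

subject to a fixed total charge, where χ_i and J_i are atomic electronegativity and hardness parameters and J_ij is a shielded Coulomb interaction. The minimization equalizes the instantaneous atomic electronegativities, so charges flow in response to changing environments during a simulation; the ReaxFF and COMB potentials discussed by Shin and co-workers build on ideas of this kind.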

Enabling all of these advances are continuing enhancements in computing capabilities, including faster and more numerous parallel processors, larger data storage with faster access speeds, specialized visualization and data analysis hardware, and the emerging availability of these resources through cloud computing. Graphics processing units (the processors traditionally used in video cards), for example, currently provide individual researchers with a high degree of parallel processing at a cost per processor substantially lower than that of traditional central processing units. At the same time, high-end computing is looking to transition from the peta- to the exascale (i.e., from 10^15 to 10^18 floating-point operations per second), with access to these resources through the computing cloud. The next three decades will be an exciting time for atomic simulations in materials science and engineering.

References

1. Vitek, V., Srolovitz, D.J., Eds., Atomistic Simulation of Materials beyond Pair Potentials (Plenum, New York, 1989).

2. Nieminen, R.M., Puska, M.J., Manninen, M.J., Eds., Many-Atom Interactions in Solids (Springer-Verlag, Berlin, 1990).

3. Garrison, B.J., Srivastava, D., Annu. Rev. Phys. Chem. 47, 373 (1995).

4. Erkoc, S., Phys. Rep. 278, 79 (1997).

5. Brenner, D.W., Shenderova, O.A., Areshkin, D.A., in Reviews in Computational Chemistry, Lipkowitz, K.B., Boyd, D.B., Eds. (VCH Publishers, New York, 1998), pp. 213–245.

6. Hirschfelder, J., Eyring, H., Topley, B., J. Chem. Phys. 4, 170 (1936).

7. Truhlar, D.G., Wyatt, R.E., Adv. Chem. Phys. 36, 141 (1977).

8. Garrett, B.C., Truhlar, D.G., J. Phys. Chem. 83, 1052 (1979).

9. Truhlar, D.G., Wyatt, R.E., Annu. Rev. Phys. Chem. 27, 1 (1976).

10. Berman, G.P., Izrailev, F.M., Chaos 15, 015104 (2005).

11. Alder, B.J., Wainwright, T.E., J. Chem. Phys. 27, 1208 (1957).

12. Gibson, J.B., Goland, A.N., Milgram, M., Vineyard, G.H., Phys. Rev. 120, 1229 (1960).

13. Rahman, A., Phys. Rev. 136, A405 (1964).

14. Stillinger, F.H., Rahman, A., J. Chem. Phys. 60, 1545 (1974).

15. McCammon, J.A., Gelin, B.R., Karplus, M., Nature 267, 585 (1977).

16. Norskov, J.K., Lang, N.D., Phys. Rev. B 21, 2131 (1980).

17. Carlsson, A.E., in Solid State Physics: Advances in Research and Applications, Ehrenreich, H., Turnbull, D., Eds. (Academic Press, Boston, 1990), vol. 43, pp. 1–91.

18. Daw, M.S., Foiles, S.M., Baskes, M.I., Mater. Sci. Rep. 9, 251 (1993).

19. Finnis, M.W., Sinclair, J.E., Philos. Mag. A 50, 45 (1984).

20. Daw, M.S., Baskes, M.I., Phys. Rev. B 29, 6443 (1984).

21. Ercolessi, F., Tosatti, E., Parrinello, M., Phys. Rev. Lett. 57, 719 (1986).

22. Jacobsen, K.W., Norskov, J.K., Puska, M.J., Phys. Rev. B 35, 7423 (1987).

23. Kress, J.D., DePristo, A.E., J. Chem. Phys. 87, 4700 (1987).

24. Tersoff, J., Phys. Rev. Lett. 56, 632 (1986).

25. Biswas, R., Hamann, D.R., Phys. Rev. Lett. 55, 2001 (1985).

26. Dodson, B.W., Phys. Rev. B 35, 2795 (1987).

27. Khor, K.E., Das Sarma, S., Phys. Rev. B 38, 3318 (1988).

28. Brenner, D.W., Phys. Rev. Lett. 63, 1022 (1989).

29. Stillinger, F.H., Weber, T.A., Phys. Rev. B 31, 5262 (1985).

30. Miller, R.E., Tadmor, E.B., Modell. Simul. Mater. Sci. Eng. 17, 053001 (2009).

31. Su, J.T., Goddard, W.A., Phys. Rev. Lett. 99, 185003 (2007).