
A systematic analysis of the memory term in coarse-grained models: The case of the Markovian approximation

Published online by Cambridge University Press:  08 June 2022

Department of Chemical Engineering, Brunel University London, Uxbridge UB8 3PH, UK
Warwick Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK
School of Mathematical Sciences, University of Nottingham, Nottingham NG7 2RD, UK
Dipartimento di Fisica, Sapienza Università di Roma, Rome 00185, Italy
Dipartimento di Scienze Molecolari e Nanosistemi, Università Ca’ Foscari, Venezia 30123, Italy


The systematic development of coarse-grained (CG) models via the Mori–Zwanzig projection operator formalism requires the explicit description of a deterministic drift term, a dissipative memory term and a random fluctuation term. The memory and fluctuating terms are related by the fluctuation–dissipation relation and are more challenging to sample and describe than the drift term because of their complex dependence on space and time. This work proposes a rational basis for a Markovian data-driven approach to approximating the memory and fluctuating terms. We assume a functional form for the memory kernel and, under broad regularity hypotheses, derive bounds for the error committed in replacing the original term with an approximation obtained from its asymptotic expansion. These error bounds depend on the characteristic time scale of the atomistic model, representing the decay of the autocorrelation function of the fluctuating force, and on the characteristic time scale of the CG model, representing the decay of the autocorrelation function of the momenta of the beads. Using appropriate parameters to describe these time scales, we give a quantitative meaning to the observation that the Markovian approximation improves as they separate. We then show how the leading-order term of this expansion can be identified with the Markovian approximation usually considered in CG theory. We also show that, while the error of the approximation in time can be controlled, the Markovian term usually considered in CG simulations may exhibit significant spatial variation. It follows that assuming a spatially constant memory term is an uncontrolled approximation which should be carefully checked. We complement our analysis with an application to the estimation of the memory in the CG model of a one-dimensional Lennard–Jones chain with different masses and interactions, showing that, even in such a simple case, the memory term has a non-negligible spatial dependence.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
© The Author(s), 2022. Published by Cambridge University Press

1 Introduction

The problem of predicting the evolution of a non-linear dynamical system is ubiquitous in science and engineering. In many such applications, the fact that thousands or millions of degrees of freedom (DoF) must be solved for at once makes direct numerical solution infeasible, and so many methods are constrained to smaller (and hence less scientifically interesting) problems. One of the most relevant examples of this kind is Molecular Dynamics (MD) simulation [Reference Leimkuhler and Matthews40]. In an MD simulation, a set of atoms representing a particular molecular system is considered, and their positions and momenta are calculated under the assumption that they evolve according to Newton’s equations of motion [Reference Rapaport46] with prescribed interaction potentials and an appropriate thermostat.

Fully atomistic MD simulations regularly performed nowadays reach the order of a few million atoms ( $\approx\!10^6$ ) [Reference Zhao, Perilla, Yufenyuy, Meng, Chen, Ning, Ahn, Gronenborn, Schulten and Aiken55], a number which remains well below the size of a macroscopic system with Avogadro’s number of atoms ( $\approx\!10^{23}$ ). Although simulations with around $10^4$ atoms are usually large enough to extract properties of interest in systems like crystals and small macromolecules (such as polymers and peptides), in order to understand the dynamics of proteins, one must consider the environment in which they evolve. From the point of view of simulation, the problem for these systems is not only the number of DoF required to represent them, but also the relaxation times of these biomacromolecules. These can be so large that the cost of performing any atomistic simulation quickly becomes prohibitive, since billions of time steps are still required to extract meaningful information. This dual space–time problem continues to render accurate simulations of this type out of reach.

For these reasons, in recent years, a series of techniques to derive simpler coarse-grained (CG) models have been developed. The main idea behind CG models is to reduce the number of variables in the system while maintaining accuracy. In place of the detailed evolution of the atoms, we consider a smaller number of macroscopic variables, functions of the underlying atomistic structure [Reference Voth51, Reference Papoian44, Reference Di Pasquale, Marchisio and Carbone19, Reference Di Pasquale and Carbone17]. Although theoretical results concerning static equilibrium statistics for coarse-grained systems are relatively well-developed [Reference Lelievre, Rousset and Stoltz41], theoretical work on dynamical CG models is still evolving, particularly focusing on applications of the Mori–Zwanzig (MZ) projection operator formalism. In [Reference Hijón, Español, Vanden-Eijnden and Delgado-Buscalioni27], a systematic derivation of CG models based on MZ techniques was proposed. The authors of the present work contributed by putting various aspects of MZ-CG models on a more rigorous footing [Reference Di Pasquale, Hudson and Icardi18]. The idea behind the application of the MZ framework to coarse-graining is to follow the dynamics of the CG DoF by explicitly integrating over the non-coarse-grained DoF. The resulting equations represent a formidable mathematical problem, and several works have already been published analysing different aspects of the MZ equations [Reference Givon, Kupferman and Hald21, Reference Chorin and Hald7, Reference Chorin, Kast and Kupferman10]. 
The importance of the MZ equations is not limited to CG problems alone, and similar approaches have been proposed to describe many different systems, ranging from relativistic heavy-ion collisions, where Fick’s law breaks down [Reference Koide36, Reference Koide37], to the dynamics of supercooled liquids, glasses and other amorphous solids [Reference Cui, Gebbia, Romanini, Rudić, Fernandez-Perea, Bermejo, Tamarit and Zaccone12, Reference Cui, Gebbia, Tamarit and Zaccone13, Reference Cui, Milkus and Zaccone14, Reference Cui, Yang, Qiao, Jiang, Dai, Wang and Zaccone15, Reference Götze22, Reference Handle, Rovigatti and Sciortino24, Reference Voigtmann, Puertas and Fuchs50, Reference Zaccarelli, Foffi, Sciortino, Tartaglia and Dawson53, Reference Zaccone54] and the study of heat conduction [Reference Chu and Li11].

A significant challenge in handling the MZ equations arises from the need to approximate the memory kernel $\boldsymbol{\mathcal{M}}$ , which varies in both time and space. Different methods have been proposed to calculate this term, such as Krylov-subspace methods [Reference Chen, Li and Liu6] or data-driven approximations [Reference Lei, Baker and Li38]. A route often followed in the literature is to assume a functional form for either the whole memory kernel or just the fluctuating force, and to ensure that both are coupled through the Fluctuation–Dissipation Theorem. In Berkowitz et al. [Reference Berkowitz, Morgan and McCammon2], the starting point is to assume that the fluctuating force is a periodic function which can thus be developed in a Fourier series. Taking the other perspective, functional forms considered for the memory kernel include the Gauss-exponential process [Reference Berkowitz, Morgan and McCammon2, Reference Berne and Harp3, Reference Wang and Uhlenbeck52], the Gauss-Gauss process [Reference Berkowitz, Morgan and McCammon2] and combinations of exponentially decaying kernels [Reference Ayaz, Tepper, Brünig, Kappler, Daldrop and Netz1, Reference Berkowitz, Morgan and McCammon2]. In all these cited examples, however, the choice of ansatz implies a strong hypothesis: the memory kernel does not depend on the positions (i.e., it is spatially homogeneous). However, as we will show in the next sections, for most CG systems this hypothesis does not seem justified. Even in a one-dimensional chain interacting with a simple Lennard–Jones potential, which is the example we present here, we observe that the memory kernel has a strong spatial dependence. A spatial dependence for this term was also observed in [Reference Chu and Li11]. 
Although it is not yet clear how to quantify the importance of this spatial dependence, in particular for more complicated systems in higher dimensions, the spatial homogeneity of the friction should not be taken for granted without verification.

1.1 The Mori–Zwanzig equations

The MZ-CG equations are a set of integro-differential equations which take the form of Generalised Langevin Equations under appropriate assumptions. We assume that, at the microscopic level, the system under consideration is well described by classical mechanics and is composed of $N^{FG}$ particles. To derive a reduced description of this system, we consider a smaller set of functions of the phase variables $\boldsymbol\zeta_{}$ and their time evolution

(1.1) \begin{equation} \frac{\mbox{d}{\boldsymbol\zeta_{}(\textbf{r}(t),\textbf{p}(t))}}{\mbox{d}{t}} = \mathfrak{L}\,\boldsymbol\zeta_{}(\textbf{r}(t),\textbf{p}(t))\,, \quad\boldsymbol\zeta_{}(\textbf{r}(0),\textbf{p}(0)) \equiv \boldsymbol\zeta_{0}\,.\end{equation}

Here, the operator $\mathfrak{L}\equiv iL \,\,\unicode{x2254} \left\lbrace {\ldots,\mathcal{H}}\right\rbrace$ is the Liouville operator, where $\left\lbrace {\ldots,\mathcal{H}}\right\rbrace$ is the Poisson bracket with the Hamiltonian $\mathcal{H}$ , and $i=\sqrt{-1}$ .
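As a minimal illustration of the action of $\mathfrak{L}$ (an illustrative one-DoF example, not one of the systems studied in this paper), the sketch below applies the Poisson bracket with a harmonic Hamiltonian $\mathcal{H} = p^2/(2m) + kq^2/2$ symbolically and recovers Newton's equations of motion:

```python
# Sketch (illustrative one-DoF example, not from the paper): the Liouville
# operator of equation (1.1) acting via the Poisson bracket with a harmonic
# Hamiltonian H = p^2/(2m) + k q^2/2, computed symbolically with sympy.
import sympy as sp

q, p, m, k = sp.symbols('q p m k', positive=True)
H = p**2 / (2 * m) + k * q**2 / 2

def liouville(A):
    """Apply L A = {A, H} = dA/dq dH/dp - dA/dp dH/dq for one DoF."""
    return sp.diff(A, q) * sp.diff(H, p) - sp.diff(A, p) * sp.diff(H, q)

# The Liouville operator reproduces Newton's equations of motion:
print(liouville(q))   # p/m   (dq/dt)
print(liouville(p))   # -k*q  (dp/dt)
```

Applying $\mathfrak{L}$ repeatedly to any phase function generates its full time evolution, which is the content of equation (1.1).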

The Mori–Zwanzig formalism allows us to replace equation (1.1) for the functions of the phase variables $\boldsymbol\zeta_{}(\textbf{r}(t),\textbf{p}(t))$ by an equation of motion which depends on the microstate $(\textbf{r}(t),\textbf{p}(t))$ only through the CG DoF themselves, and where, as shown by Di Pasquale et al. [Reference Di Pasquale, Hudson and Icardi18], the complexity of the computation of equation (1.1) is shifted to the computation of the so-called orthogonal dynamics.

Generally speaking, a CG model can be obtained by grouping DoF of the fine-grained (FG) system into $N^{CG}$ beads. In the case of the Mori–Zwanzig projection this is done by projecting the equations for the functions of the phase variables $\boldsymbol\zeta_{}(\textbf{r}(t),\textbf{p}(t))$ onto the space of functions of the CG variables, which may be regarded as a subspace of the FG observable space. Naturally, the CG variables must be continuous and differentiable in the FG variables. The equations satisfied by these beads can be written as (for full derivations see [Reference Di Pasquale, Hudson and Icardi18, Reference Hijón, Español, Vanden-Eijnden and Delgado-Buscalioni27])

(1.2) \begin{equation} \begin{aligned} \frac{\mbox{d}{\boldsymbol Z_{\textbf{R}}}}{\mbox{d}{t}} &= \textbf{M}^{-1}\boldsymbol Z_{\textbf{P}}\,; \\[5pt] \frac{\mbox{d}{\boldsymbol Z_{\textbf{P}}}}{\mbox{d}{t}} &= - \frac{\partial{V^{\textrm{eff}}}}{\partial{\boldsymbol Z_{\textbf{R}}}} + \int_{0}^{t} \textrm{e}^{(t-s)\,\mathfrak{L}}\,\mathcal{P}\mathfrak{L}\,\textrm{e}^{s\,\mathcal{Q}\mathfrak{L}}\,\mathcal{Q}\mathfrak{L}\,\boldsymbol Z_{}\,\mbox{d} s + {\boldsymbol{\mathcal{F}}}_{\boldsymbol Z_{\textbf{P}}}(t,\cdot)\,; \end{aligned} \end{equation}


  • $\boldsymbol Z_{}=\left(\boldsymbol Z_{\textbf{R}},\boldsymbol Z_{\textbf{P}}\right)=\left(\textbf{R}_1,\ldots,\textbf{R}_{N^{CG}},\textbf{P}_1,\ldots,\textbf{P}_{N^{CG}}\right)$ are the CG DoF;

  • $\textbf{M}=\textrm{diag}(M_1,\dots,M_{N^{CG}})$ is a diagonal matrix of the masses of the beads;

  • $V^{\textrm{eff}}$ is the effective potential (for an explicit definition we refer to [Reference Di Pasquale, Hudson and Icardi18]);

  • $\mathcal{P}$ and $\mathcal{Q}$ are the MZ projection operator and its orthogonal counterpart (i.e., $\mathcal{Q}=\mathcal{I}-\mathcal{P}$ where $\mathcal{I}$ is the identity operator), respectively.

The projection operator $\mathcal{P}$ applied to a generic function $F(\textbf{z})$ acting on the phase space $X=\mathbb{R}^{6N^{FG}}$ is defined by

(1.3) \begin{equation} \mathcal{P}_{\textbf{a}}F(\textbf{z}) = \frac{1}{\Omega(\textbf{a})} \int_X F(\textbf{z}')\,\delta\left({{\boldsymbol\zeta_{}(\textbf{z}')-\textbf{a}}}\right) \rho^{eq}(\textbf{z}')\, \mbox{d}{\textbf{z}}' \,.\end{equation}

Here, $\textbf{z} = (\textbf{r}(t),\textbf{p}(t))$ are the FG DoF, $\boldsymbol\zeta_{}(\textbf{z})$ is a smaller set of phase functions, and we have introduced a set of fixed values $\textbf{a}$ such that for a given set $\textbf{z}$ we have $\boldsymbol\zeta_{} = \textbf{a}$ . The normalisation factor (also called the structure function) represents the total volume of the surface $S(\textbf{a})$ in phase space defined by $\boldsymbol\zeta_{}(\textbf{z})=\textbf{a}$ :

(1.4) \begin{equation} \Omega(\textbf{a}) = \int_X \delta\left({{\boldsymbol\zeta_{}(\textbf{z}')-\textbf{a}}}\right) \mathbb{P}(\mbox{d}{\textbf{z}}')\,,\end{equation}

where $\mathbb{P}(\mbox{d}{\textbf{z}'}) = \rho^{eq}(\textbf{z}')\, \mbox{d}{\textbf{z}}'$ is the Gibbs measure on the FG phase space X. Loosely speaking, the projection operator $\mathcal{P}_{\textbf{a}}$ ‘averages’ the function $F(\textbf{z})$ over the surface $S(\textbf{a})$ on which the observables $\boldsymbol\zeta_{}$ take the value $\textbf{a}$ . The fact that $\mathcal{P}$ is a projection operator can be easily verified by showing that it satisfies the necessary and sufficient conditions for a projection operator, i.e., it is Hermitian and $\mathcal{P}^2 = \mathcal{P}$ [Reference Zwanzig57]. Since the observed value of the phase function $F(\textbf{z})$ is the expected value $\mathbb{E}[F]$ with respect to the Gibbs measure, we can interpret the phase function as a random variable. The projection operator $\mathcal{P}$ then has a natural interpretation as a conditional expectation over the CG DoF:

(1.5) \begin{equation} \mathcal{P}_{\textbf{a}} F = \mathbb{E}[F|\boldsymbol\zeta_{}(\textbf{z}) = \textbf{a}] \,.\end{equation}
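The conditional-expectation reading of equation (1.5) can be illustrated numerically. The sketch below is a hypothetical toy example (a Gaussian ‘equilibrium’ measure, CG map $\boldsymbol\zeta(\textbf{z})=z_1+z_2$ and phase function $F(\textbf{z})=z_1^2$, none of which come from the systems in this paper): $\mathcal{P}_{\textbf{a}}F$ is estimated by averaging $F$ over samples lying in a thin slab around $\boldsymbol\zeta = \textbf{a}$.

```python
# Sketch (hypothetical toy example, not from the paper): the projection
# P_a F = E[F | zeta(z) = a] of equation (1.5), estimated by Monte Carlo
# binning.  Here z = (z1, z2) is drawn from a Gaussian "equilibrium"
# measure, the CG map is zeta(z) = z1 + z2 and F(z) = z1**2.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((200_000, 2))   # samples from rho^eq
zeta = z[:, 0] + z[:, 1]                # CG observable zeta(z)
F = z[:, 0] ** 2                        # phase function to be projected

def project(a, width=0.05):
    """Average F over the slab |zeta - a| < width, approximating E[F | zeta = a]."""
    mask = np.abs(zeta - a) < width
    return F[mask].mean()

# For this Gaussian example the exact answer is E[z1^2 | z1+z2 = a] = 1/2 + a^2/4.
print(project(0.0))   # close to 0.5
print(project(1.0))   # close to 0.75
```

The exact conditional expectation is available here because $(z_1, z_1+z_2)$ is jointly Gaussian, which makes the binning estimator easy to check.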

Finally, the term ${\boldsymbol{\mathcal{F}}}_{\boldsymbol Z_{\textbf{P}}}(t,\cdot)$ is often called fluctuating force and can be written (for the K-th bead) as

(1.6) \begin{equation} {\boldsymbol{\mathcal{F}}}_K(t,\cdot) = \,\textrm{e}^{t\,\mathcal{Q}\mathfrak{L}}\,\mathcal{Q}\mathfrak{L}\,\textbf{P}_K\,, \quad K = 1, \ldots , N^{CG}.\end{equation}

Here, we used a dot in the second argument of ${\boldsymbol{\mathcal{F}}}_K$ to stress that this term is a function of all of the variables in the atomistic system. Computing ${\boldsymbol{\mathcal{F}}}_K$ requires the calculation of the evolution of the orthogonal variables in the null space of $\mathcal{P}$ . In general, it is obtained by solving an auxiliary set of equations called the orthogonal dynamics equations [Reference Givon, Kupferman and Hald21], which (with an appropriate CG mapping) can be determined by means of constrained dynamics (see [Reference Di Pasquale, Hudson and Icardi18]). In the MZ equations (1.2), this term is usually treated as a random noise acting on the CG DoF [Reference Chorin, Hald and Kupferman8]. Making this assumption avoids the problem of its direct calculation, and so reduces the complexity of the problem. However, ensuring that the statistics of ${\boldsymbol{\mathcal{F}}}_K$ are captured accurately in a stochastic model is important to preserve the accuracy of the CG system. This is the main problem we address in this work.

The integral term in equation (1.2) is known as the ‘memory’, since it requires information coming from the elapsed time. In a sense, in order to know the next position occupied by the system, we need to ‘remember’ what happened since the initial time $t=0$ . The memory term can be rewritten (for the K-th bead) as [Reference Di Pasquale, Hudson and Icardi18, Reference Hijón, Español, Vanden-Eijnden and Delgado-Buscalioni27]

(1.7) \begin{align} \int_{0}^{t} \left[\textrm{e}^{(t-s)\mathfrak{L}}\,\mathcal{P}\mathfrak{L}\,\textrm{e}^{s\,\mathcal{Q}\mathfrak{L}}\,\mathcal{Q}\mathfrak{L}\,\boldsymbol Z_{}\right]_K\,\mbox{d} s = \sum_{J=1}^{N^{CG}} \int_{0}^{t} \mathcal{M}_{KJ}(\boldsymbol Z_{}(t-s),s) \,\frac{\textbf{P}_J(t-s)}{M_J} \,\mbox{d}{{s}} \nonumber \\[5pt] + \beta^{-1} \sum_{J=1}^{N^{CG}} \int_{0}^{t} \frac{\partial{\mathcal{M}_{KJ}(\boldsymbol Z_{}(t-s),s)}}{\partial{\textbf{P}_{J}}} \,\mbox{d}{s},\end{align}

where we introduced the KJ-th components of the memory kernel $ \boldsymbol{\mathcal{M}}\left({\boldsymbol Z_{}(t-s),s}\right)$ :

(1.8) \begin{align} \mathcal{M}_{KJ}\left({\boldsymbol Z_{}(t-s),s}\right) =& \,\beta\,\mathcal{P}\left[{ \left({ \mathcal{Q}\mathfrak{L}\,\textbf{P}_K }\right) \otimes \left({ \textrm{e}^{s\,\mathcal{Q}\mathfrak{L}}\,\mathcal{Q}\mathfrak{L}\,\textbf{P}_J }\right) }\right] {} \nonumber\\[5pt] =& \,\beta\, \mathcal{P} \left[{ {\boldsymbol{\mathcal{F}}}_K(0,\cdot) \otimes {\boldsymbol{\mathcal{F}}}_J(s,\cdot) }\right] \nonumber \\[5pt] \equiv& \,\beta\, \mathbb{E}\left[ {{ {\boldsymbol{\mathcal{F}}}_K(0,\cdot) \otimes {\boldsymbol{\mathcal{F}}}_J(s,\cdot) }} \middle\vert {{ \,\boldsymbol Z_{}(t-s) }} \right] \,.\end{align}

Here, $\otimes$ represents a tensor product and $\beta^{-1}=k_B T$ , with $k_B$ the Boltzmann constant, and ${\boldsymbol{\mathcal{F}}}_K(0,\cdot) = \mathcal{Q}\mathfrak{L}\,\textbf{P}_K\,$ (see equation (1.6)). Di Pasquale et al. have shown that (with an appropriate CG mapping) the fluctuating force ${\boldsymbol{\mathcal{F}}}_K$ is given by the difference between the total force acting on the atoms within a bead and the mean force given by the effective potential that describes the bead interactions (see [Reference Di Pasquale, Hudson and Icardi18]). Finally, in the last step, we used the Zwanzig projection $\mathcal{P}$ and the fact that it is equivalent to taking the conditional expectation given the CG (set of) variables $\boldsymbol Z_{}$ , interpreted as random variables, with respect to the equilibrium distribution [Reference Chorin, Hald and Kupferman8, Reference Di Pasquale, Hudson and Icardi18].

For a system in thermal equilibrium, the memory kernel is related to the autocovariance of the fluctuating forces (ACF) through the Fluctuation–dissipation theorem [Reference Farias, Ramos and da Silva20, Reference Hänggi and Jung25, Reference Jung, Hanke and Schmid30]:

(1.9) \begin{equation}\beta\,\mathbb{E}[{\boldsymbol{\mathcal{F}}}(0)\otimes{\boldsymbol{\mathcal{F}}}(t)] = \boldsymbol{\mathcal{M}}(t) \,,\end{equation}

where, as above, expectations are taken with respect to an appropriate ensemble. This description is usually called ‘colored noise’ fluctuation, as opposed to the ‘white noise’ represented by the Markovian behaviour (see Section 2.1) [Reference Hänggi and Jung25].
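In practice, equation (1.9) suggests estimating the memory kernel from sampled fluctuating forces via their empirical autocovariance. The sketch below does this for a surrogate scalar force signal (an Ornstein–Uhlenbeck process standing in for ${\boldsymbol{\mathcal{F}}}$; the process and its parameters are illustrative assumptions, not data from this paper):

```python
# Sketch (surrogate data, not from the paper): estimating the memory kernel
# through the fluctuation-dissipation relation (1.9), M(t) = beta E[F(0) F(t)],
# from a sampled fluctuating-force time series.  The force is modelled here
# by an Ornstein-Uhlenbeck process with unit variance and correlation time
# tau = 0.1, so the estimated kernel should decay like exp(-t / tau).
import numpy as np

rng = np.random.default_rng(1)
beta, dt, n, tau = 1.0, 0.01, 100_000, 0.1

rho = np.exp(-dt / tau)                 # one-step correlation factor
F = np.empty(n)
F[0] = rng.standard_normal()
for i in range(1, n):
    F[i] = rho * F[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

def memory_kernel(F, max_lag):
    """beta times the empirical autocovariance of F, for lags 0..max_lag-1."""
    F = F - F.mean()
    N = len(F)
    return beta * np.array([np.dot(F[:N - k], F[k:]) / (N - k) for k in range(max_lag)])

M = memory_kernel(F, 50)
print(M[0])            # ~ beta * Var(F) = 1
print(M[10] / M[0])    # ~ exp(-1), the decay over one correlation time
```

For a vector-valued force the same estimator applies entrywise, producing the tensor-valued kernel $\mathcal{M}_{KJ}$ of equation (1.8).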

Different approaches have been proposed in the literature to deal with the memory term. In [Reference Kauzlarić, Español, Greiner and Succi31], an analysis of the memory term obtained from the Mori projection operator [Reference Mori43, Reference Kawasaki32], a special case of the more general MZ projector, was presented. A more detailed analysis is reported in [Reference Zhu, Dominy and Venturi56], where some boundary analysis is proposed and the hierarchical approximation, first introduced in [Reference Stinis47], is further developed. Other approaches proposed in the literature to calculate the memory term include the use of the Galerkin decomposition [Reference Darve, Solomon and Kia16], or the assumption that the evolution of the CG DoF can be modelled using an autoregressive stochastic process [Reference Kneller and Hinsen35].

The two most commonly used approximations for the memory term are related to the behaviour of the integrand of the memory integral with respect to time. If the integrand quickly decays to zero, it can be assumed that all the information is concentrated in a very short time (i.e. we do not need the whole history of the system from $t=0$ ). The memory term then becomes local in time, and we speak of the Markovian approximation [Reference Chorin and Hald7, Reference Hijón, Español, Vanden-Eijnden and Delgado-Buscalioni27]. If, on the other hand, the decay of the integrand of the memory term is very slow compared to the characteristic time of the dynamics of the system, then we can assume that the integrand is constant in the interval [0,t] and we obtain the so-called t-model [Reference Chorin, Hald and Kupferman9, Reference Chorin and Hald7, Reference Hald and Stinis23, Reference Price and Stinis45].

In this work, we will focus on the short-time approximation by describing a model which can be interpreted as a Markovian reduction of the MZ equation. This paper is structured as follows: we start with a discussion of the memory term from the point of view of the Markovian approximation as it is usually considered in the literature, and then we present our new description of it. We show how it can be effectively approximated and give a precise definition of all the hypotheses used in the derivation. We then apply the framework to a simple example composed of a one-dimensional linear chain whose atoms interact with a non-linear potential. We show how the relevant quantities described here can be explicitly calculated, and we draw some conclusions.

2 The behaviour of the MZ CG system

2.1 Memory approximation

The physical theory of Brownian motion assumes that thermal fluctuations which occur due to the collision of the small, light particles with the larger Brownian particle happen at a time scale which is much shorter than the one describing the evolution of the larger particle. The consequence of this assumption is that the force acting on the Brownian particle at each time is completely uncorrelated with the history of its evolution. Mathematically speaking, this corresponds to the idea that the random force is a stochastic process which obeys the Markov property. We note that in the theory of Brownian motion, there is a further important assumption, which is that the behaviour of the heat bath is spatially uniform. This assumption is clearly justified for a particle within a fluid which is at rest macroscopically. However, the spatial uniformity may not be generally valid, since many coarse-grained DoF are tightly bound within a molecule or bulk solid, and so are in close contact with their particular environment, rather than a homogeneous random one.

Moreover, for coarse-graining of molecular systems, it is common to make an analogous time-scale separation assumption about the action of the neglected DoF on the coarse-grained DoF, i.e. that the discarded DoF fluctuate at a faster time-scale and so act on the CG DoF randomly with no correlations in time.

Setting aside the justification for the use of time-scale separation and spatial homogeneity, under these two assumptions the fluctuating forces ${\boldsymbol{\mathcal{F}}}$ are usually modelled as a white noise process, with an ACF which is assumed to take the form:

(2.1) \begin{equation} \beta\,\mathbb{E}[{\boldsymbol{\mathcal{F}}}_K(0)\otimes{\boldsymbol{\mathcal{F}}}_J(t)] = 2\,\Phi_{KJ} \delta\left({{t}}\right)\quad\text{where}\quad \Phi_{KJ} = \beta\int_{0}^{\infty} \mathbb{E}[{\boldsymbol{\mathcal{F}}}_K(0)\otimes{\boldsymbol{\mathcal{F}}}_J(\xi)] \mbox{d} \xi,\end{equation}

with ${\boldsymbol{\mathcal{F}}}$ being the fluctuating forces which act in the system at full resolution. Since the fluctuating forces are assumed to be stationary as a random process (in fact they are defined as averages over equilibrium ensembles, see [Reference Berne and Pecora4]), the terms $\Phi_{KJ}$ do not depend on the particular time origin [Reference Lei, Caswell and Karniadakis39, Reference Kinjo and Hyodo33], i.e. there is no time dependence in the definition of $\Phi_{KJ}$ given in (2.1). One of the problems with this description is the so-called plateau problem, first noticed by Kirkwood and Buff [Reference Kirkwood and Buff34]. The plateau problem stems from the fact that the integral definition of $\Phi_{KJ}$ is zero if the upper boundary is $+\infty$ , because of the decay of the negative part of the correlation function at infinity [Reference Helfand26]. For this reason, one approach has been to cut the domain of integration for the last integral in equation (2.1) at a certain time $\tau$ which should be large enough to include the relevant information in the ACF, but small compared to the time scale of the evolution of the CG DoF. By cutting the integral at a finite time, we are assuming that it remains constant for $t > \tau$ (i.e. the integral reaches a plateau). The problem comes from the fact that the intermediate time $\tau$ is not well defined within the theory and there is no method to estimate it a priori.
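The plateau problem can be visualised with the running integral $\Phi(\tau') = \beta\int_{0}^{\tau'} \mathbb{E}[{\boldsymbol{\mathcal{F}}}(0)\otimes{\boldsymbol{\mathcal{F}}}(t)]\,\mbox{d}t$. In the sketch below (a scalar model ACF chosen purely for illustration: a fast positive decay plus a weak, slowly decaying negative tail whose total area cancels the fast part, so that the full integral vanishes), the running integral flattens near an apparent plateau and then slowly drifts away from it, showing why the choice of cutoff $\tau$ matters:

```python
# Sketch (model ACF, not from the paper): the plateau problem for
# Phi = beta * integral of the ACF.  The fast positive part and the weak,
# slow negative tail both integrate to 1 over [0, infinity), so the full
# integral vanishes; the running integral first flattens near a plateau
# (~0.95 around t = 5) and then slowly drifts downwards.
import numpy as np

dt = 0.005
t = np.arange(0.0, 20.0 + dt, dt)
acf = np.exp(-t) - 0.01 * np.exp(-t / 100.0)

running = np.cumsum(acf) * dt    # Phi as a function of the cutoff time
i5 = int(round(5.0 / dt))        # index of the cutoff t = 5
print(running[i5])               # near the apparent plateau
print(running[-1])               # already drifting downwards at t = 20
```

Reading off $\Phi$ at the apparent plateau corresponds exactly to cutting the integral in equation (2.1) at an intermediate time $\tau$, and the slow drift illustrates why that cutoff is not well defined within the theory.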

In the following, we will show that our derivation overcomes this issue by proposing an asymptotic methodology which allows us to use data to fit a functional form for the memory, and thereby derive an explicit form for the Markovian approximation directly from data.

2.2 Memory ansatz

Our analysis starts in a similar fashion by making an ansatz on the form of the memory kernel $ \boldsymbol{\mathcal{M}}(\boldsymbol Z_{},t)$ , albeit a less restrictive one. We assume that the memory kernel for the generic pair of beads K and J takes the general form

(2.2) \begin{equation} \mathcal{M}_{KJ}(\boldsymbol Z_{},t) = \mbox{exp}\left[{-\left({\frac{t}{\tau_{KJ}}}\right)^\alpha}\right]f_{KJ}(\boldsymbol Z_{},t),\end{equation}

where $\tau_{KJ}$ is a characteristic time, $\alpha$ is a constant, and $f_{KJ}(\boldsymbol Z_{},t)$ is a sufficiently smooth function on the interval $0\leq t < \infty$ . In general, each entry of the memory kernel is specific to each pair of beads. However, in the following, in order to simplify the notation, we will write

(2.3) \begin{equation} \boldsymbol{\mathcal{M}}(\boldsymbol Z_{},t) = \textrm{e}^{- \left({\frac t\tau}\right)^\alpha}f(\boldsymbol Z_{},t),\end{equation}

i.e. we will consider the case where all the entries in the memory kernel are equal. The general case can then be easily obtained by repeating the same arguments for all the possible KJ pairs. It is interesting to note that a similar ansatz was considered when using an MZ-derived Generalised Langevin Equation to study the dynamical behaviour of amorphous solids [Reference Cui, Gebbia, Romanini, Rudić, Fernandez-Perea, Bermejo, Tamarit and Zaccone12, Reference Cui, Gebbia, Tamarit and Zaccone13, Reference Cui, Milkus and Zaccone14, Reference Cui, Yang, Qiao, Jiang, Dai, Wang and Zaccone15, Reference Zaccone54].

While equation (2.3) represents a somewhat strict ansatz for the form of the memory kernel, it is consistent with similar quantities calculated in other systems [Reference Li, Lee, Darve and Karniadakis42], and moreover, our approach includes a possibly non-uniform dependence on the spatial position, in contrast to assumptions made in other applications of CG theory discussed above. Moreover, given the connection between the memory kernel and the ACF of equation (1.9), both $\tau$ and $\alpha$ can be extracted from the ACF, for example by fitting to numerical data. In particular, $\tau$ represents the time decay of the ACF and we will show in Section 3.3 its explicit calculation in a test system. We will also make the natural physical assumptions that $\boldsymbol{\mathcal{M}}(\boldsymbol Z_{},t)$ is maximal when $t=0$ , and that the parameters satisfy $\tau>0$ and $\alpha > 0$ .
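As an illustration of how $\tau$ and $\alpha$ could be extracted in practice, the sketch below fits the stretched-exponential factor of the ansatz (2.3) to synthetic ACF data by non-linear least squares (the data, noise level, initial guess and bounds are all assumptions made for this example):

```python
# Sketch (synthetic data, not from the paper): extracting tau and alpha in
# the ansatz (2.3), M(t) = M0 * exp(-(t / tau)**alpha), by non-linear
# least-squares fitting of a sampled (noisy) autocovariance function.
import numpy as np
from scipy.optimize import curve_fit

def ansatz(t, M0, tau, alpha):
    return M0 * np.exp(-(t / tau) ** alpha)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 3.0, 200)
# Synthetic "measured" ACF generated with tau = 0.5 and alpha = 1.5.
acf = ansatz(t, 1.0, 0.5, 1.5) + 0.005 * rng.standard_normal(t.size)

popt, _ = curve_fit(ansatz, t, acf, p0=(1.0, 1.0, 1.0),
                    bounds=(1e-3, 10.0))
M0_fit, tau_fit, alpha_fit = popt
print(tau_fit, alpha_fit)    # should recover roughly 0.5 and 1.5
```

The positivity bounds on the parameters enforce the physical assumptions $\tau>0$ and $\alpha>0$ stated above.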

Under the assumptions that $\alpha>0$ and $\tau>0$ , the exponential factor makes it possible to expand the memory kernel as an asymptotic series from which the behaviour close to the origin can be explicitly obtained. Here, we argue that this asymptotic series approach leads naturally to a form of the Markovian approximation commonly considered in CG theory. We note that the case where $\alpha = 0$ corresponds to algebraic decay of the memory kernel, and will require a different approach which we leave for future work.

In the following, we consider only the first integral of the memory term (i.e., equation (1.7)) since $\boldsymbol{\mathcal{M}}$ does not depend on the momenta for an appropriate CG mapping, see [Reference Di Pasquale, Hudson and Icardi18]. Furthermore, we make the memory term dimensionless by multiplying it by suitable constants:

(2.4) \begin{align} \frac{M L_c }{\tau_P^2}\int_0^{\tilde{t}}\, \boldsymbol{\widetilde{\mathcal{M}}}\big(\widetilde{\boldsymbol Z_{}}(\tilde{t}-\tilde{s}),\tilde{s}\big) \cdot \widetilde{\textbf{P}}(\tilde{t}-\tilde{s}) \,\mbox{d}{\tilde{s}} = \int_0^{\tilde{t}} \, \textrm{e}^{-\left({\frac{\tau_P}{\tau}}\right)^\alpha \tilde{s}^\alpha} \tilde{f}\big(\widetilde{\boldsymbol Z_{}}(\tilde{t}-\tilde{s}),\tilde{s}\big) \widetilde{\textbf{P}}(\tilde{t}-\tilde{s}) \,\mbox{d}{\tilde{s}}\,,\end{align}

where:


  • M is the mean effective mass of the coarse-grained DoF;

  • $L_c$ is a characteristic length whose specific definition is not important for what is discussed here (and can be taken as the average bead size, for instance);

  • $\tau_P$ is a characteristic time, which can be interpreted as the decay of the autocovariance function of the momenta (ACM) of the beads. This quantity can also be interpreted as the characteristic time scale of the macroscopic system.

We indicate dimensionless quantities with $\widetilde{\cdot}$ and note that we can always rewrite the different terms in equation (2.4) as functions of the dimensionless time $\tilde{t} = t/\tau_P$ , by writing $\boldsymbol Z_{}(t-s)=\boldsymbol Z_{}(\tau_P(\tilde{t}-\tilde{s}))=\widetilde{\boldsymbol Z_{}}(\tilde{t}-\tilde{s})$ , where we absorb $\tau_P$ into the definition of the function. We also simplify the notation in equation (2.4) by defining the dimensionless quantity $\lambda=\left({\frac{\tau_P}{\tau}}\right)^\alpha$ . We highlight here the first result of this formulation: the concept of time-scale separation can be defined precisely and quantitatively by measuring it with the parameter $\lambda$ . From the definition of $\lambda$ , we can distinguish two cases:

  1. (i) if $\lambda \gg 1$ , then $\tau_P \gg \tau$ , i.e. the macroscopic time scale is much larger than the microscopic one. The Markovian approximation is expected to hold.

  2. (ii) if $\lambda \approx 1$ , then $\tau_P \approx \tau$ and there is no separation between the macroscopic and the microscopic time scales. The Markovian approximation is expected to fail and the full memory term must be considered.

In the next section, we will present a result which makes the qualitative explanation given here precise in terms of the validity of the Markovian approximation. The problem then becomes the explicit calculation of the two quantities, $\tau$ and $\tau_P$ , which will be presented in the following sections. In order to simplify our notation, we will drop tildes from now on and assume all quantities are dimensionless.

2.3 Rigorous approximation result

Under the assumption that the ansatz equation (2.3) holds, we now derive an error estimate for an expansion of the memory integral which we can use to define proxy dynamics, providing a theoretical guarantee of accuracy in the case where the time-scale separation is significant.

Proposition 1. Suppose that the memory kernel $\mathcal{M}$ takes the form

\begin{equation*}\mathcal{M}(\boldsymbol Z_{}(t-s),s) = \textrm{e}^{- \lambda s^\alpha}f(\boldsymbol Z_{}(t-s),s),\end{equation*}

where $\lambda>0$ , $\alpha>0$ , and the function

\begin{equation*}g(t,s) = f(\boldsymbol Z_{}(t-s),s)\textbf{P}(t-s)\end{equation*}

is of class $\textrm{C}^{k,\gamma}$ in both t and s, for some non-negative integer k and Hölder exponent $\gamma\in(0,1]$ . Then the absolute error committed by replacing the memory

\begin{equation*}\int_0^t \mathcal{M}(\boldsymbol Z_{}(t-s),s)\textbf{P}(t-s) \mbox{d}{s} = \int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s) \mbox{d}{s}\end{equation*}

by a $(k+1)$-term approximation is bounded by

\begin{equation*} \Bigg|\int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s)\mbox{d}{s} - \sum_{n=0}^{k} \frac{G_n(t)}{n!}\frac{\Gamma\big(\frac{n+1}{\alpha}\big)}{\alpha \lambda^{\frac{n+1}{\alpha}}}\Bigg| \leq \sum_{n=0}^{k}\frac{C_n}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n \mbox{d}{s}+C_{k+1}\frac{\Gamma\Big(\frac{k+\gamma+1}{\alpha}\Big)}{k!\alpha \lambda^{\frac{k+\gamma+1}{\alpha}}},\end{equation*}


where

\begin{equation*}G_i(t) = \frac{\partial^i}{\partial s^i}g(t,0)=\frac{\partial^i}{\partial s^i}\Big(f(\boldsymbol Z_{}(t-s),s)\textbf{P}(t-s)\Big)\bigg|_{s=0},\end{equation*}

$\Gamma$ is the Gamma function, and $C_n$ are appropriate positive constants which depend upon the regularity of the trajectories.

A full proof of this proposition is given in Appendix A, and we focus on the physical interpretation of the result here. The error bound for the memory matrix is composed of two terms. The first of these is

(2.5) \begin{equation} \psi(t,\lambda) \overset{\operatorname{def}}{=} \sum_{n=0}^{k}\frac{C_n}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n \mbox{d}{s},\end{equation}

and it can be shown that $\psi$ decays exponentially in time for fixed $\lambda$ (see Appendix B). Physically, this decay represents transient behaviour as the system relaxes towards equilibrium. We believe this transient behaviour is closely related to the plateau problem observed in the earlier works referred to above.

The second term

(2.6) \begin{equation} \zeta(\lambda) \overset{\operatorname{def}}{=} C_{k+1}\frac{\Gamma\Big(\frac{k+\gamma+1}{\alpha}\Big)}{k!\alpha \lambda^{\frac{k+\gamma+1}{\alpha}}}\end{equation}

does not decay with increasing time t. However, we can study its behaviour with respect to $\lambda$ . In particular, since $\frac{k+\gamma+1}{\alpha} > 0$ , it follows that $\zeta(\lambda)$ decays to zero as $\lambda\to\infty$ :

(2.7) \begin{equation}\zeta(\lambda) = \mathcal{O}\left({\lambda^{-\frac{k+\gamma+1}{\alpha}}}\right) \,, \quad \, \lambda\to\infty.\end{equation}

As already observed (see Section 2.2), $\lambda$ represents the scale separation between the underlying atomistic model and the coarse-grained one. As we will show below, the above result therefore provides both a form for a Markovian approximation of the dynamics and a theoretical guarantee backing up the commonly held belief that a Markovian approximation is more accurate when there is a large time-scale separation between the atomistic and CG models.

In particular, as a corollary of Proposition 1, we note that if the integrand function g(t,s) in the memory integral is Lipschitz, so that $k=0$ and $\gamma=1$ , then we have

\begin{equation*}\Bigg|\int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s)\mbox{d}{s} - G_0(t)\frac{\Gamma\big(\frac{1}{\alpha}\big)}{\alpha \lambda^{\frac{1}{\alpha}}}\Bigg|\leq C_0\int_t^\infty \textrm{e}^{-\lambda s^\alpha}\mbox{d}{s}+C_1\frac{\Gamma\big(\frac2\alpha\big)}{\alpha \lambda^{\frac2\alpha}}.\end{equation*}

We remark that the assumption that g(t,s) is Lipschitz is particularly mild. It is guaranteed to hold if trajectories of the system are twice continuously differentiable in time, and this will in turn be the case when the potential energy function is twice continuously differentiable in space. As such, the assumption is therefore satisfied for almost all Hamiltonian systems of interest.

We note that the function $G_0(t)$ is

\begin{equation*} G_0(t) = f(\boldsymbol Z_{}(t),0)\textbf{P}(t) = \beta\, \mathbb{E}\left[ {{ {\boldsymbol{\mathcal{F}}}(0,\cdot) \otimes {\boldsymbol{\mathcal{F}}}(0,\cdot) }} \middle\vert {{ \,\boldsymbol Z_{}(t) }} \right]\textbf{P}(t),\end{equation*}

and hence we are able to approximate equation (2.4), where we have dropped the tildes, as

(2.8) \begin{align} \int_0^t \mathcal{M}(\boldsymbol Z_{}(t-s),s)\textbf{P}(t-s) \mbox{d}{s} & \approx \frac{\Gamma\big(\frac{1}{\alpha}\big)}{\alpha \lambda^{\frac{1}{\alpha}}} \beta\, \mathbb{E}\left[ {{ {\boldsymbol{\mathcal{F}}}(0,\cdot) \otimes {\boldsymbol{\mathcal{F}}}(0,\cdot) }} \middle\vert {{ \,\boldsymbol Z_{}(t) }} \right]\textbf{P}(t), \nonumber \\[5pt] & \equiv \boldsymbol{\chi}(\boldsymbol Z_{}(t);\alpha,\lambda )\textbf{P}(t).\end{align}

In other words, we can define a spatially varying friction coefficient matrix $\boldsymbol{\chi}$ to replace the memory integral.
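The structure of this approximation can be made concrete numerically. The following sketch (in Python for illustration, although the authors' code is written in Julia; the toy Lipschitz integrand $g(t,s)=\cos(t-s)$ is our assumption) compares the memory integral with its leading-order Markovian replacement $G_0(t)\,\Gamma(1/\alpha)/(\alpha\lambda^{1/\alpha})$ for increasing $\lambda$:

```python
import math
import numpy as np

def memory_integral(t, lam, alpha, g, n=20_001):
    """Trapezoid evaluation of the memory integral int_0^t exp(-lam*s^alpha) g(t,s) ds."""
    s = np.linspace(0.0, t, n)
    y = np.exp(-lam * s**alpha) * g(t, s)
    h = s[1] - s[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def markovian_term(t, lam, alpha, g0):
    """Leading-order (Markovian) replacement: G_0(t) * Gamma(1/alpha) / (alpha * lam^(1/alpha))."""
    return g0(t) * math.gamma(1.0 / alpha) / (alpha * lam**(1.0 / alpha))

# toy Lipschitz integrand g(t, s) = cos(t - s), so G_0(t) = g(t, 0) = cos(t)
g = lambda t, s: np.cos(t - s)
g0 = math.cos

t, alpha = 2.0, 2.0
errors = [abs(memory_integral(t, lam, alpha, g) - markovian_term(t, lam, alpha, g0))
          for lam in (5.0, 50.0, 500.0)]
print(errors)  # the error shrinks as lambda grows, consistent with zeta(lambda) -> 0
```

Consistent with the corollary for $k=0$, $\gamma=1$ and $\alpha=2$, the absolute error decays roughly like $\lambda^{-1}$ in this example.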

In this case, we can use this expression to show that the relative error of the approximation is bounded by

(2.9) \begin{equation} \Bigg|\frac{\int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s)\mbox{d}{s} - G_0(t)\Gamma\big(\frac{1}{\alpha}\big)\alpha^{-1} \lambda^{-\frac{1}{\alpha}}}{G_0(t)\Gamma\big(\frac{1}{\alpha}\big)\alpha^{-1} \lambda^{-\frac{1}{\alpha}}}\Bigg| \leq \frac{\int_t^\infty \textrm{e}^{-\lambda s^\alpha}\mbox{d}{s}}{\int_0^\infty \textrm{e}^{-\lambda s^\alpha} \mbox{d}{s}}+\frac{C_1}{\lambda^{\frac1\alpha}|G_0(t)|}\frac{\Gamma\big(\frac2\alpha\big)}{\Gamma(\frac1\alpha)}.\end{equation}

As we already observed for the absolute error bound, the former term decays to zero as $t\to\infty$ , regardless of the value of $\lambda$ , while the latter decays as $\lambda\to\infty$ for fixed t.

In the next section, we will show how to compute $\chi$ in practice and discuss the error committed in replacing the memory term with its approximation shown in equation (2.8).

3 A numerical example

In this section, we demonstrate how one might apply the mathematical result of Proposition 1. To do so, we use a numerical implementation of a one-dimensional periodic chain made of $N^{FG} = 30$ atoms of two different species, which differ in mass and bond stiffness, grouped in $N^{CG} = 10$ CG beads.

3.1 Model

We assume the atoms are arranged in a repeating pattern, and a coarse-grained model is obtained by combining the single repeated units into beads. The CG variables are given by the centre of mass of each bead and by the corresponding momenta, and the repeating pattern consists of two atoms of mass $M_2$ and a single atom of mass $M_1$ arranged in the following configuration: $(M_2-M_1-M_2)$ . We considered two test cases: $M_1=M_2=1$ and $M_1=1$ , $M_2=100$ .

The bond stiffness between the two different species of atoms is described by means of an inter-atomic potential, which is a simple 12-6 Lennard–Jones potential:

\begin{equation*} U_{i,i+1}(r) = 4\,\epsilon_{i,i+1} \left[{\left({\frac{\sigma_{i,i+1}}{r}}\right)^{12} - \left({\frac{\sigma_{i,i+1}}{r}}\right)^6}\right] = \epsilon_{i,i+1} \left[{\left({\frac{r^*_{i,i+1}}{r}}\right)^{12} - 2\left({\frac{r^*_{i,i+1}}{r}}\right)^6} \right]\,.\end{equation*}

Here, $i \in \left\lbrace {1,2,\dots,N^{FG}}\right\rbrace$ and we choose $\sigma_{i,i+1}=2^{-1/6}$ so that the minimum value of the potential is obtained for $r^*_{i,i+1}=1$ . The value of the interaction strength, $\epsilon_{i,i+1}$ , is set to 1 and 10 for $M_1 - M_1$ and $M_1 - M_2$ pairs, respectively. In the following, all parameters, such as temperature and time step, are expressed in terms of Lennard–Jones reduced units [Reference Tuckerman48].
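As a sanity check on the two equivalent parameterisations above, a short illustrative snippet (in Python; the authors' production code is in Julia) can verify that with $\sigma_{i,i+1}=2^{-1/6}$ the minimum sits at $r^*_{i,i+1}=1$ with depth $-\epsilon_{i,i+1}$:

```python
def lj_sigma(r, eps=1.0, sigma=2.0**(-1.0 / 6.0)):
    """12-6 Lennard-Jones potential in the (epsilon, sigma) form."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_rstar(r, eps=1.0, rstar=1.0):
    """Equivalent (epsilon, r*) form, with r* = 2^(1/6) * sigma."""
    rr6 = (rstar / r)**6
    return eps * (rr6**2 - 2.0 * rr6)

# the two forms agree, and the minimum is at r* = 1 with depth -eps
for r in (0.9, 1.0, 1.2, 1.5):
    assert abs(lj_sigma(r) - lj_rstar(r)) < 1e-12
print(lj_sigma(1.0))  # well depth -eps = -1.0 (to floating-point accuracy)
```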

The simulations to obtain the fluctuating force (and therefore to determine the memory kernel) are implemented in the canonical (NVT) ensemble at fixed $k_B T = 1$ and friction parameter $\gamma=1$ . Furthermore, we assume periodic boundary conditions and nearest neighbour interactions.

The Hamiltonian of the atomistic system is given by

(3.1) \begin{equation} \mathcal{H}(\textbf{r},\textbf{p}) = \frac12 \,\textbf{p}^T\,\textbf{M}^{-1}\,\textbf{p}\,+\sum_{i=1}^{N^{FG}-1}U_{i,i+1}\left({r_{i+1}-r_i}\right)\,+ U_{N^{FG},1}\left({r_1-r_{N^{FG}}+r^*N^{FG}}\right), \end{equation}

where $\textbf{M}$ is the mass matrix (that is, a diagonal matrix whose entries are the masses of each atom) and $\textbf{r}$ and $\textbf{p}$ are the positions and momenta of the atoms, respectively. The final term takes into account that periodic boundary conditions are in place, i.e., that the two ends of the linear chain are connected to each other.

The results shown below were obtained through simulation of two different dynamics: (i) the Fine-Grained Dynamics (FGD) when solving the equations of motion with the Hamiltonian defined in equation (3.1),

\begin{equation*} \frac{\mbox{d}{\textbf{z}(t)}}{\mbox{d}{t}} = \mathfrak{L}\,\textbf{z} \,, \end{equation*}

where $\textbf{z}=\left({ \textbf{z}_{ \textbf{r} },\textbf{z}_{ \textbf{p} } }\right)=( \textbf{r}_1,\ldots,\textbf{r}_{N^{FG}} , \textbf{p}_1,\ldots,\textbf{p}_{N^{FG}} )$ is a state of the system characterised by the instantaneous positions and momenta of the $N^{FG}$ particles that compose it; (ii) the Orthogonal Dynamics (OD) when solving the system of equations

\begin{equation*} \frac{\mbox{d}{r_k}}{\mbox{d}{t}} = \frac{p_k}{m_{k}}- \frac{P_I}{M_I} \,; \qquad \frac{\mbox{d}{p_k}}{\mbox{d}{t}} = - \frac{\partial{U(\boldsymbol\zeta_{\textbf{r}})}}{\partial{r_k}} + \frac{m_k}{M_I}\sum_{i\in S_I}\frac{\partial{U(\boldsymbol\zeta_{\textbf{r}})}}{\partial{r_i}}\,; \end{equation*}

for $k = 1,2,\dots,N^{FG}$ , where I is the index such that $k \in S_I$ and $S_I$ is the set of particles of the FG system included within the bead of index I. We note that both FGD and OD have roughly the same numerical performance, and that for the simple system investigated here the speed-up achieved by the coarse-graining procedure is $\approx\!5$ .
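The OD vector field can be sketched in a few lines. The illustrative Python version below (the helper names `bead_of` and `grad_U` are ours; the authors' implementation is in Julia) also checks the defining property of OD, namely that the CG variables are frozen: both the bead momenta $P_I$ and the bead centres of mass have zero time derivative.

```python
def od_rhs(r, p, m, bead_of, grad_U):
    """Right-hand side of the orthogonal dynamics:
    dr_k/dt = p_k/m_k - P_I/M_I,
    dp_k/dt = -dU/dr_k + (m_k/M_I) * sum_{i in S_I} dU/dr_i,
    where I is the index of the bead containing particle k."""
    n_beads = max(bead_of) + 1
    M = [0.0] * n_beads   # bead masses M_I
    P = [0.0] * n_beads   # bead momenta P_I
    for k, I in enumerate(bead_of):
        M[I] += m[k]
        P[I] += p[k]
    g = grad_U(r)         # dU/dr_k for each particle
    G = [0.0] * n_beads   # sum of dU/dr_i over each bead
    for k, I in enumerate(bead_of):
        G[I] += g[k]
    dr = [p[k] / m[k] - P[bead_of[k]] / M[bead_of[k]] for k in range(len(r))]
    dp = [-g[k] + m[k] / M[bead_of[k]] * G[bead_of[k]] for k in range(len(r))]
    return dr, dp

# toy example: open harmonic nearest-neighbour chain, U = 0.5 * sum(bond - 1)^2
def grad_harmonic(r):
    g = [0.0] * len(r)
    for k in range(len(r) - 1):
        f = r[k + 1] - r[k] - 1.0
        g[k] -= f
        g[k + 1] += f
    return g

r = [0.0, 1.1, 1.9, 3.2, 4.0, 5.05]
p = [0.3, -1.2, 0.5, 0.7, -0.4, 1.1]
m = [100.0, 1.0, 100.0, 100.0, 1.0, 100.0]
bead_of = [0, 0, 0, 1, 1, 1]
dr, dp = od_rhs(r, p, m, bead_of, grad_harmonic)
# CG momenta and centres of mass are conserved along OD trajectories:
for I in (0, 1):
    assert abs(sum(dp[k] for k in range(6) if bead_of[k] == I)) < 1e-12
    assert abs(sum(m[k] * dr[k] for k in range(6) if bead_of[k] == I)) < 1e-12
```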

3.2 Numerical sampling of the memory kernel

All the simulations reported below were performed using the code HybridZwanzJulia [Reference Icardi, Hudson, Di Pasquale, Spinaci and Rovigatti29] written in Julia v1.4.1 [Reference Bezanson, Edelman, Karpinski and Shah5]. This code allows a relatively simple recalibration of the test system through Julia ‘types’, which are initialised by the user to fix all the parameters of the simulation, including the repeating pattern of the beads, the number of beads, the mass and stiffness of the inter-atomic potentials. Here, we give an idea of the sampling implemented in the code to compute the fluctuating force, given in equation (1.6), and therefore the memory term as given in equation (2.8).

As mentioned in the introduction, the idea is to sample using OD, since it is possible to determine the fluctuating force through constrained dynamics. First, the initial conditions for the simulation are fixed, so that the momenta are distributed according to the Maxwell–Boltzmann distribution and the positions lie in a neighbourhood of the equilibrium positions, determined by the equilibrium distance of the inter-atomic potential. Then the trajectory of the FGD is computed and samples of the positions and momenta are stored. These samples are used to determine the CG positions and momenta, which in turn serve as initial conditions for computing the trajectories of the OD. Along these trajectories, time averages of the first and second moments of the effective force between beads are stored. The first moment gives the value of the mean force acting between adjacent beads given the position of the centre of mass, while the second moment is used to compute the fluctuating force. Since the sampling steps of the OD can run at the same time without the processes having to communicate, these procedures can be run in parallel.

In order to reduce the time required to sample the fluctuating force, we exploit the translational symmetry of this simple system, the fact that the beads are identical and the assumption that the effective potential can be approximated as a sum of two-body interactions based on the distance to the nearest-neighbour beads only. We thus compute the interactions between any combination of beads and assume that the interactions computed are valid for any equivalent pair in the system. In this way, we reduce the dimensionality of the space to be sampled from the total number of CG variables to simply one, i.e., the inter-bead distance, which simplifies the exploration of the entire constrained phase space.

If we consider bead I, we can assume that the forces that act on it can be decomposed as the independent ‘stress’ contributions $\sigma_{I+1,I}$ and $-\sigma_{I,I-1}$ , which are the forces exerted on bead I by its two neighbours, $I-1$ and $I+1$ . Under these conditions, we write the conditional expectation in the equation (2.8) as

\begin{align*} \mathbb{E}\left[ {{{\boldsymbol{\mathcal{F}}}_I \otimes {\boldsymbol{\mathcal{F}}}_I}} \middle\vert {{\textbf{R}}} \right] &= \mathbb{E}\left[ {{ (\Delta\sigma_{I+1,I}-\Delta\sigma_{I,I-1}) \otimes (\Delta\sigma_{I+1,I}-\Delta\sigma_{I,I-1}) }} \middle\vert {{\textbf{R}}} \right] \\[5pt] &= \mathbb{E}\left[ {{ \Delta\sigma_{I+1,I} \otimes \Delta\sigma_{I+1,I} }} \middle\vert {{\textbf{R}}} \right] + \mathbb{E}\left[ {{ \Delta\sigma_{I,I-1} \otimes \Delta\sigma_{I,I-1} }} \middle\vert {{\textbf{R}}} \right] \,,\end{align*}

where $\Delta\sigma$ is the fluctuating part of the ‘stress’ contribution, $\textbf{R}$ is the inter-bead distance and we assume that the cross terms are negligible, i.e.,

\begin{equation*} \mathbb{E}\left[ {{ \Delta\sigma_{I+1,I} \otimes \Delta\sigma_{I,I-1} }} \middle\vert {{\textbf{R}}} \right]\approx 0 \quad\text{and}\quad \mathbb{E}\left[ {{ \Delta\sigma_{I,I-1} \otimes \Delta\sigma_{I+1,I}}} \middle\vert {{\textbf{R}}} \right]\approx 0\,.\end{equation*}

In other words, we assume that the fluctuating stresses acting between adjacent pairs of beads are not correlated, since we have made the assumption that pair interactions are sufficient to describe the behaviour of the system.

Under this assumption, we obtain

\begin{align*} \mathbb{E}\left[ {{{\boldsymbol{\mathcal{F}}}_I \otimes {\boldsymbol{\mathcal{F}}}_{I+1}}} \middle\vert {{\textbf{R}}} \right] &= \mathbb{E}\left[ {{ (\Delta\sigma_{I+1,I}-\Delta\sigma_{I,I-1}) \otimes (\Delta\sigma_{I+2,I+1}-\Delta\sigma_{I+1,I}) }} \middle\vert {{\textbf{R}}} \right] \\[5pt] &= -\mathbb{E}\left[ {{ \Delta\sigma_{I+1,I} \otimes \Delta\sigma_{I+1,I} }} \middle\vert {{\textbf{R}}} \right] \,, \\[-5pt] \\ \mathbb{E}\left[ {{{\boldsymbol{\mathcal{F}}}_I \otimes {\boldsymbol{\mathcal{F}}}_{I-1}}} \middle\vert {{\textbf{R}}} \right] &= \mathbb{E}\left[ {{ (\Delta\sigma_{I+1,I}-\Delta\sigma_{I,I-1}) \otimes (\Delta\sigma_{I,I-1}-\Delta\sigma_{I-1,I-2}) }} \middle\vert {{\textbf{R}}} \right] \\[5pt] &= -\mathbb{E}\left[ {{ \Delta\sigma_{I,I-1} \otimes \Delta\sigma_{I,I-1} }} \middle\vert {{\textbf{R}}} \right] \,.\end{align*}
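The algebra above can be spot-checked numerically. In the sketch below (illustrative Python, with synthetic unit-variance Gaussian samples standing in for the sampled $\Delta\sigma$ contributions, which are assumed independent as in the text), the self- and cross-covariances of the fluctuating forces reproduce the predicted combinations of stress variances:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
# independent, zero-mean, unit-variance surrogates for the fluctuating stresses;
# columns stand for ds_{I-1,I-2}, ds_{I,I-1}, ds_{I+1,I}, ds_{I+2,I+1}
ds = rng.standard_normal((N, 4))

F_I = ds[:, 2] - ds[:, 1]      # fluctuating force on bead I
F_prev = ds[:, 1] - ds[:, 0]   # on bead I-1
F_next = ds[:, 3] - ds[:, 2]   # on bead I+1

var_self = np.mean(F_I * F_I)      # predicted: Var(ds_{I+1,I}) + Var(ds_{I,I-1}) = 2
cov_next = np.mean(F_I * F_next)   # predicted: -Var(ds_{I+1,I}) = -1
cov_prev = np.mean(F_I * F_prev)   # predicted: -Var(ds_{I,I-1}) = -1
print(var_self, cov_next, cov_prev)
```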

Therefore, we compute a smooth approximation of the variance of the fluctuating force between beads which allows us to define

(3.2) \begin{equation} \chi_{I+1,I} = \frac{\tau}{\tau_P}\,\frac{\Gamma\left(\tfrac{1}{\alpha}\right)}{\alpha}\,\beta\,\mathbb{E}\left[ {{ \Delta\sigma_{I+1,I} \otimes \Delta\sigma_{I+1,I} }} \middle\vert {{ \textbf{R} }} \right] \,,\end{equation}

and the spatially varying friction coefficient matrix $\boldsymbol{\chi}$ (see equation (2.8)) as

\begin{equation*}\chi_{IJ}(\textbf{R}; \alpha, \tau) = \begin{cases} -\chi_{I+1,I} & I=J-1 \mod N^{CG} \,, \\[5pt] \chi_{I+1,I} + \chi_{I,I-1} & I=J \,, \\[5pt] -\chi_{I,I-1} & I=J+1 \mod N^{CG} \,, \end{cases}\end{equation*}

where $I,J \in \left\lbrace {1,2,\dots,N^{CG}}\right\rbrace$ . We note here that the memory term we consider depends on CG coordinates. The importance of this feature in this kind of models was shown in [Reference Hudson and Li28].
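The assembly of $\boldsymbol{\chi}$ from the nearest-neighbour coefficients can be written compactly. The Python sketch below (our illustration, with arbitrary positive coefficients) also checks two structural properties implied by the definition: $\boldsymbol{\chi}$ is symmetric, and its rows (hence, by symmetry, its columns) sum to zero, so that the friction does not alter the total CG momentum.

```python
import numpy as np

def friction_matrix(chi_pair):
    """Assemble the periodic matrix chi_{IJ} from nearest-neighbour
    coefficients chi_pair[I] = chi_{I+1,I} (indices mod N_CG)."""
    N = len(chi_pair)
    chi = np.zeros((N, N))
    for I in range(N):
        up = chi_pair[I]              # chi_{I+1,I}
        dn = chi_pair[(I - 1) % N]    # chi_{I,I-1}
        chi[I, I] = up + dn
        chi[I, (I + 1) % N] = -up
        chi[I, (I - 1) % N] = -dn
    return chi

chi = friction_matrix([1.0, 2.0, 3.0, 4.0])   # arbitrary illustrative coefficients
assert np.allclose(chi, chi.T)                # symmetric
assert np.allclose(chi.sum(axis=1), 0.0)      # rows sum to zero
print(chi)
```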

3.3 Fitting and results

In this section, we show the numerical results for the autocorrelation function (ACF) of the fluctuating forces and of the bead momenta.

Applying our ansatz (see equation (2.3)), we fit the decay of the ACF of the fluctuating forces obtained from a fine-grained simulation using the function

(3.3) \begin{equation} F_F(t;\alpha_F,\tau) = \mbox{exp}\left[ -\left({\frac{t}{\tau}}\right)^{\alpha_F} \right].\end{equation}

The best fit values for $\alpha_F$ and $\tau$ , obtained with the XMGRACE software [Reference Turner49], are reported in Table 1 for the two systems investigated here. In order to check the consistency of the results, we also fit the initial decay of the ACF of the momenta and of the fluctuating forces using

(3.4) \begin{equation} F_P(t;\tau_P) = \mbox{exp}\left[ -\left({\frac{t}{\tau_P}}\right)^2 \right]\,.\end{equation}

For comparison, the data and the accompanying fitted curves are shown in Figure 1.
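The fits above were performed with XMGRACE, but an equivalent fit is easy to script. The Python sketch below (using synthetic noise-free data and hypothetical parameter values, not those of Table 1) recovers $\alpha_F$ and $\tau$ in equation (3.3) by linearising $\ln(-\ln F) = \alpha_F \ln t - \alpha_F \ln\tau$:

```python
import numpy as np

tau_true, alpha_true = 0.4, 1.8   # hypothetical values for illustration only
t = np.linspace(0.05, 0.8, 40)
F = np.exp(-(t / tau_true)**alpha_true)   # synthetic stretched-exponential ACF

# linearise: ln(-ln F) = alpha * ln t - alpha * ln tau
x, y = np.log(t), np.log(-np.log(F))
alpha_fit, intercept = np.polyfit(x, y, 1)
tau_fit = np.exp(-intercept / alpha_fit)
print(alpha_fit, tau_fit)  # recovers the input parameters on noise-free data
```

In practice, a nonlinear least-squares fit on the raw ACF is preferable for noisy data, since the linearisation amplifies noise where $F$ is close to 1.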

Table 1. Results of fitting to the numerical data

Figure 1. Time autocorrelation function of the (a) fluctuating force and (b) momenta for two values of $M_2$ . Symbols represent the simulation data, whereas dotted lines and dashed lines are fits to equations (3.4) and (3.3), respectively. In the ACF of the momenta (b), only one fit is reported (solid black line) because equations (3.4) and (3.3) coincide.

For the fluctuating force, for both systems ( $M_2 = 1$ and $M_2 = 100$ ), the initial decay is quasi-Gaussian, with a characteristic decay time that does not depend on the exponent used, followed by damped oscillations that we do not take into account in our analysis. By contrast, the bead momenta ACFs of both systems exhibit a nearly perfect Gaussian decay. This distinction is likely due to the quadratic nature of the kinetic energy term in the Hamiltonian.

We can now use equation (3.2) to obtain the elements of $\boldsymbol{\chi}$ , which explicitly depend on the inter-bead distance $\textbf{R}$ and are used to build our approximation of the memory term, equation (2.8). The results for the two systems investigated here are reported in Figure 2. The numerical data show that, while the fluctuating forces ${\boldsymbol{\mathcal{F}}}(0,\cdot)$ (shown in the inset of Figure 2) depend only weakly on the atomic masses, the resulting friction term features a much stronger dependence on $M_2$ through the value of $\tau$ . This formulation demonstrates the possibility of using the result of Proposition 1 to provide a CG approximation of the full system with appropriate theoretical guarantees on the error committed in replacing the memory integral with a simpler friction matrix.

Figure 2. The friction coefficients evaluated for two values of $M_2$ . The inset reports the fluctuating force for the same systems. The curves are spline interpolations of the numerical data, which is shown with symbols in the inset.

3.4 Discussion

We can now use the numerical results reported in Table 1 to estimate the error committed in replacing the full memory term with its approximate value based on the ansatz reported in equation (2.3). In particular, this error is controlled by the two terms $\psi(t,\lambda)$ and $\zeta(\lambda)$ (see equations (2.5), (2.6)). Using the results obtained in equations (B2), (2.7), we can write for the two terms bounding the relative error in equation (2.9):

(3.5a) \begin{align} \psi^\prime(\tilde{t},\lambda) = \frac{\int_{\tilde{t}}^\infty \textrm{e}^{-\lambda \tilde{s}^\alpha}\mbox{d}{\tilde{s}}}{\int_0^\infty \textrm{e}^{-\lambda \tilde{s}^\alpha} \mbox{d}{\tilde{s}}} = \mathcal{O}\left({\textrm{e}^{-\lambda \tilde{t}^{\alpha}}}\right) = \mathcal{O}\left({\mbox{exp}\left[{-\left({\frac{t}{\tau}}\right)^{\alpha}}\right]}\right) \,, \quad \tilde{t}\to\infty\, \end{align}
(3.5b) \begin{align} \zeta^\prime(\lambda) = \frac{C_1}{\lambda^{\frac1\alpha}|G_0(\tilde{t})|}\frac{\Gamma\big(\frac2\alpha\big)}{\Gamma(\frac1\alpha)} = \mathcal{O}\left({\lambda^{-\frac{1}{\alpha}}}\right) = \mathcal{O}\left({\frac{\tau}{\tau_P}}\right) \,, \quad \lambda\to\infty\,. \end{align}

In order to check the magnitude of the leading term in equation (3.5a), we need the time t, which represents the total simulated time of our simulation. However, for the sake of estimating the order of magnitude of $\psi^\prime(\tilde{t},\lambda)$ , we can simply use a time of the order of the time scales considered in a standard CG simulation; it is therefore reasonable to take the typical time scale of the bead dynamics, $\tau_P$ . Using the numbers reported in Table 1, the leading term in equation (3.5a) is $\mbox{exp}\left[{-\left({\frac{\tau_P}{\tau}}\right)^\alpha}\right] \approx 10^{-21}$ for the system with $M_2 = 1$ , and $\mbox{exp}\left[{-\left({\frac{\tau_P}{\tau}}\right)^\alpha}\right] \approx 10^{-8}$ for the system with $M_2 = 100$ . Although the specific values of these numbers (and even their orders of magnitude) depend on the functional form used to fit the simulation data, they are in all cases much smaller than unity, demonstrating that the portion of the relative error bounded by $\psi^\prime(t)$ is extremely small. Moreover, the simulated time is usually well beyond $\tau_P$ , so in a standard simulation we can expect an even lower value of $\psi^\prime(\tilde{t},\lambda)$ .

The term $\psi^\prime(t)$ is roughly related to the replacement of the integration interval in equation (2.4) from (0,t) to $(0,\infty)$ . It is therefore expected that the error due to this replacement decreases as t increases and, given the integrand, that it decreases exponentially.

Let us now consider the portion of the relative error bounded by $\zeta^\prime(\lambda)$ , equation (3.5b). With the same values reported in Table 1, we obtain $\frac{\tau}{\tau_P} = 0.225$ for the system with $M_2 = 1$ and $\frac{\tau}{\tau_P} = 0.29$ for the system with $M_2 = 100$ . The term $\zeta^\prime(\lambda)$ represents the relative error committed by replacing the memory term with its Taylor expansion truncated at some order k. In this case, we considered $k=0$ , i.e. we replaced the whole memory term with the leading term of its Taylor expansion. We see that, in this case, the bounding value for the percentage error is $\approx$ 23% for the system with $M_2 = 1$ and $\approx$ 30% for the system with $M_2 = 100$ , meaning that the results of the simulations should be carefully checked, as it is not guaranteed that the error committed will be negligible.
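The arithmetic of this estimate can be packaged as a small helper. In the Python sketch below we use the ratios $\tau/\tau_P$ quoted above, but the stretching exponent is a placeholder (the fitted values are those of Table 1), so the transient term only illustrates the order-of-magnitude argument:

```python
import math

def relative_error_terms(tau, tau_P, alpha, t_tilde=1.0):
    """Leading behaviour of the two contributions to the relative error bound (2.9):
    a transient term ~ exp(-lambda * t~^alpha), here evaluated at t ~ tau_P (t~ = 1),
    and a truncation term ~ tau/tau_P (equation (3.5b), up to constants)."""
    lam = (tau_P / tau)**alpha
    psi = math.exp(-lam * t_tilde**alpha)
    zeta = tau / tau_P
    return psi, zeta

alpha = 2.0  # placeholder exponent, NOT the fitted value from Table 1
psi1, zeta1 = relative_error_terms(tau=0.225, tau_P=1.0, alpha=alpha)  # M2 = 1
psi2, zeta2 = relative_error_terms(tau=0.29, tau_P=1.0, alpha=alpha)   # M2 = 100
print(psi1, zeta1)  # transient term negligible; truncation term ~ 23%
print(psi2, zeta2)  # transient term negligible; truncation term ~ 29%
```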

The analysis just presented puts on more rigorous ground the qualitative statement that, because the characteristic decay time of the momenta ACF is of the same order of magnitude as that of the fluctuating-force ACF, resorting to equation (3.2) is questionable. Moreover, our results show that including more terms in the approximation (see Section 2.3) should improve such a model. Indeed, since $\zeta(\lambda)$ in equation (2.7) approaches zero faster as k increases, this error bound can be reduced, making the results of such simulations more robust. We plan to test this approach in future work.

4 Conclusions

In this work, we discussed the properties and various approximations of the dissipative memory term in the Mori–Zwanzig equations applied to CG simulations in MD. A common approach to approximating this term is to assume a priori a particular behaviour (e.g., Markovian) for the dynamics of the system and to simplify the terms involved in the memory accordingly. In this work, instead, starting from the Green–Kubo definition of the memory term involving the autocovariance of the fluctuating forces, we assume a functional form for this latter quantity. This form is chosen such that an explicit calculation of the memory is possible, so that it can be compared with and calibrated against fine-grained data.

Starting from this ansatz, we proved a rigorous result bounding the error committed by replacing the full memory term with its approximation. We also included an asymptotic analysis of the terms bounding the error and an explicit definition of the parameters which control the approximations used in the derivation.

Finally, we tested this approach on a simple system, a one-dimensional chain interacting through a Lennard–Jones type potential, in two cases with different bead masses. We explicitly calculated the friction term for both, checking the consistency of the Markovian approximation using the asymptotic analysis presented in Section 2.3.

Our results show that this approach is feasible in the one-dimensional case and demonstrate that the friction exhibits strong spatial dependence, suggesting that further investigation is required for more complex, multidimensional systems.

Moreover, our rigorous analysis clearly quantifies the range of error to be expected, showing that even for a system as simple as the one considered here the Markovian approximation should be checked carefully, as the error committed by this approximation can be significant.

This work represents a step towards a systematic data-driven analysis of the memory term, and future work will focus on the analysis of the long-time memory behaviour of the system and the application to more realistic models.


Acknowledgements

During this project, the work of T. Hudson was supported by the Leverhulme Trust through Early Career Fellowship ECF-2016-526. The authors also wish to acknowledge the comments of the anonymous referees, which helped to significantly improve this manuscript.

Conflicts of interest

The authors declare that there are no conflicts of interest relating to this work.

A. Proof of Proposition 1

We begin by using Taylor’s theorem (with remainder in Lagrange form) to expand g in s about $s=0$ up to order $k-1$ , writing

\begin{align*} g(t,s) &= \sum_{n=0}^{k} \frac{1}{n!}\frac{\partial^n}{\partial s^n}g(t,0)s^n+\frac{1}{k!}\bigg(\frac{\partial^k}{\partial s^k} g(t,\theta s)-\frac{\partial^k}{\partial s^k} g(t,0)\bigg)s^k\\[5pt] &= \sum_{n=0}^{k} \frac{1}{n!}G_n(t)s^n+\frac{1}{k!}\bigg(\frac{\partial^k}{\partial s^k} g(t,\theta s)-\frac{\partial^k}{\partial s^k} g(t,0)\bigg)s^k\end{align*}

for some $\theta\in[0,1]$ . Using this expansion and splitting the domain of integration, we have

\begin{align*}&\int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s)\mbox{d}{s} - \sum_{n=0}^{k} \frac{G_n(t)}{n!}\int_0^{\infty} \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s}\\[5pt] & \ \ \qquad \qquad \qquad \quad = \int_0^t \textrm{e}^{- \lambda s^\alpha} \frac{1}{k!}\bigg(\frac{\partial^k}{\partial s^k} g(t,\theta s)-\frac{\partial^k}{\partial s^k} g(t,0)\bigg)s^k\mbox{d}{s} - \sum_{n=0}^{k} \frac{G_n(t)}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s}.\end{align*}

Employing the triangle inequality, the regularity assumption on g, and the fact that $\theta\in[0,1]$ , we obtain

\begin{align*}&\bigg|\int_0^t \textrm{e}^{- \lambda s^\alpha} g(t,s)\mbox{d}{s} - \sum_{n=0}^{k} G_n(t)\int_0^{\infty} \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s}\bigg|\\[5pt] &\qquad\leq \int_0^t \textrm{e}^{- \lambda s^\alpha} \frac{1}{k!}\bigg|\frac{\partial^k}{\partial s^k} g(t,\theta s)-\frac{\partial^k}{\partial s^k} g(t,0)\bigg|s^k\mbox{d}{s} + \bigg|\sum_{n=0}^{k} \frac{G_n(t)}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s} \bigg|\\[5pt] &\qquad\leq \frac{G_{k,\gamma}(t)}{k!}\int_0^t \textrm{e}^{- \lambda s^\alpha} s^{k+\gamma}\mbox{d}{s} + \bigg|\sum_{n=0}^{k} \frac{G_n(t)}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s}\bigg|\\[5pt] &\qquad\leq \frac{G_{k,\gamma}(t)}{k!}\int_0^\infty \textrm{e}^{- \lambda s^\alpha} s^{k+\gamma}\mbox{d}{s} + \sum_{n=0}^{k} \frac{\big|G_n(t)\big|}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s}\,,\end{align*}

where $G_{k,\gamma}(t)$ is the Hölder $\gamma$ -semi-norm of $\frac{\partial^k}{\partial s^k}g(t,\cdot)$ on the set [0,t]. We note that, for the evolution over a given time interval [0,T], we can replace the functions $G_n(t)$ and $G_{k,\gamma}(t)$ by appropriate constants $C_n$ and $C_{k+1}$ by taking a supremum over the time interval.

To conclude, we note that by making the change of variables $u = \lambda s^\alpha$ , we can express the integrals over the interval $(0,\infty)$ in terms of the Gamma function:

\begin{equation*}\int_0^\infty \textrm{e}^{-\lambda s^\alpha} s^\gamma \mbox{d}{s} = \frac{\Gamma\Big(\frac{\gamma+1}{\alpha}\Big)}{\alpha \lambda^{\frac{\gamma+1}{\alpha}}}.\end{equation*}

Using this result, we conclude that the statement holds.

B. Proof of the decay of $\psi(t)$ defined in equation (2.5)

Let us start with the definition of $\psi(t)$

(B1) \begin{align} \psi(t) & \overset{\operatorname{def}}{=} \sum_{n=0}^{k}\frac{C_n}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n \mbox{d}{s} \nonumber \\[5pt] & \leq (k+1) \max_{n \in \{0,1,\ldots,k\}}\left[{ \frac{C_n}{n!}\int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n \mbox{d}{s}}\right].\end{align}

Let us now estimate the last integral in the previous expression. We start by making the change of variable $v = s^\alpha$ :

\begin{align*} \int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s} = \int_{t^\alpha}^\infty \frac{1}{\alpha} \textrm{e}^{-\lambda v} v^{\frac{n+1}{\alpha}-1}\mbox{d}{v}.\end{align*}

There are now two cases: either (i) $\frac{n+1}{\alpha}-1\leq 0$ , or (ii) $\frac{n+1}{\alpha}-1> 0$ .

In case (i), we can estimate the integral as being

\begin{gather*} \int_t^\infty \textrm{e}^{-\lambda s^\alpha} s^n\mbox{d}{s} = \int_{t^\alpha}^\infty \frac{1}{\alpha} \textrm{e}^{-\lambda v} v^{\frac{n+1}{\alpha}-1}\mbox{d}{v} \leq \\[5pt] \qquad \frac{t^{n+1-\alpha}}{\alpha}\int_{t^\alpha}^\infty \textrm{e}^{-\lambda v} \mbox{d}{v}= \frac{t^{n+1}}{\alpha}\frac{\textrm{e}^{-\lambda t^{\alpha}}}{\lambda t^{\alpha}} \,.\end{gather*}

This is exponentially small as long as $\lambda t^\alpha\gg 1$ .

In case (ii), we can integrate by parts to obtain

\begin{equation*} \int_{t^\alpha}^\infty \frac{1}{\alpha}\,\textrm{e}^{-\lambda v}\,v^{\frac{n+1}{\alpha}-1}\,\mbox{d}{v} = -\frac{1}{\alpha} \left( \frac{ \textrm{e}^{-\lambda t^\alpha} }{ \lambda t^\alpha } \,t^{n+1} - \left({\frac{ \tfrac{n+1}{\alpha}-1 }{\lambda}}\right)\int_{t^\alpha}^\infty \,\textrm{e}^{-\lambda v} \,v^{\frac{n+1}{\alpha}-2} \,\mbox{d}{v}\right).\end{equation*}

Integrating by parts as many times as necessary, we ultimately obtain a result similar to that obtained in case (i), i.e. this error term is exponentially small as long as $\lambda t^\alpha\gg 1$ .

In summary, we have shown that

(B2) \begin{equation} \psi(t) = \mathcal{O}\left({\textrm{e}^{-\lambda t^{\alpha}}}\right)\,, \quad t\to\infty.\end{equation}


Table 1. Results of fitting to the numerical data

Figure 1. Time autocorrelation function of the (a) fluctuating force and (b) momenta for two values of $M_2$. Symbols represent the simulation data, whereas dotted lines and dashed lines are fits to equations (3.4) and (3.3), respectively. In the ACF of the momenta (b), only one fit is reported (solid black line) because equations (3.4) and (3.3) coincide.

Figure 2. The friction coefficients evaluated for two values of $M_2$. The inset reports the fluctuating force for the same systems. The curves are spline interpolations of the numerical data, which is shown with symbols in the inset.
