
Mean-field interacting systems with sequential coalescence at future ensemble averages

Published online by Cambridge University Press:  13 October 2025

Levent Ali Mengütürk*
Affiliation:
University College London
Murat Cahit Mengütürk*
Affiliation:
Özyeğin University
*Postal address: Department of Mathematics, University College London, and Artificial Intelligence and Mathematics Research Lab. Emails: ucaheng@ucl.ac.uk, levent@aimresearchlab.com
**Postal address: Center for Financial Engineering, Özyeğin University, and Artificial Intelligence and Mathematics Research Lab. Emails: murat.menguturk@ozyegin.edu.tr, murat@aimresearchlab.com

Abstract

We introduce a new family of coalescent mean-field interacting particle systems by introducing a pinning property that acts over a chosen sequence of multiple time segments. Throughout their evolution, these stochastic particles converge in time (i.e. get pinned) to their random ensemble average at the termination point of any one of the given time segments, only to burst back into life and repeat the underlying principle of convergence in each of the successive time segments, until they are fully exhausted. Although the architecture is represented by a system of piecewise stochastic differential equations, we prove that the conditions generating the pinning property enable every particle to preserve its continuity over its entire lifetime almost surely. As the number of particles in the system increases asymptotically, the system decouples into mutually independent diffusions, which, albeit displaying progressively uncorrelated behaviour, still close in on, and recouple at, a deterministic value at each termination point. Finally, we provide additional analytics including a universality statement for our framework, a study of what we call adjourned coalescent mean-field interacting particles, a set of results on commutativity of double limits, and a proposal of what we call covariance waves.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

A family of interacting diffusions that converge to their random ensemble average at the end point of a finite time period has been studied in [Reference Mengütürk26], which was proposed as a mathematical intersection between pinned diffusions [Reference Barczy and Kern1, Reference Barczy and Pap2, Reference Brody and Hughston4, Reference Hildebrandt and Roelly14, Reference Li20, Reference Mansuy23, Reference Mengütürk25] and mean-field interacting systems [Reference Bolley, Cañizo and Carillo3, Reference Budhiraja, Dupuis and Fischer7–Reference Del Moral and Rio10, Reference Gartner12, Reference Huang, Liu and Pickl16, Reference Lacker19, Reference Sznitman32]. In this paper we shall significantly generalise the framework of [Reference Mengütürk26, Reference Mengütürk27] through a system of piecewise stochastic differential equations (SDEs) that allows the pinning property to extend to multiple time segments fixed from the outset, while ensuring that every path in the system remains continuous throughout its lifetime. In this context, a time segment is defined as a finite slice of time, or a self-contained interval of habitation, with a fixed initial and end point, which allows us to compartmentalise the evolution of stochastic particles without forfeiting their continuity, and gives us the flexibility to change the number and velocity (and other parametric dependences such as momentum or energy levels) of particles subsequent to the rebirth of a system. As such, our framework represents a dynamic structure with continual termination–renewal characteristics on a recurrent basis, whereby the distribution attained at each termination–renewal time-point is driven solely by the ensemble average of the system. Here, a termination–renewal time-point is an instant at which a system of stochastic particles collapses into a Gaussian random value (whose variance is inversely proportional to the number of particles in the system) and gets reanimated back to life, either instantaneously or after a certain suspension of time. To the best of our knowledge, there is no other work that studies interacting diffusion systems with such behaviour. As such, this paper generalises [Reference Mengütürk26] considerably from a single time segment to any number of successive time segments, while sustaining the time-continuity of the running system with multiple randomly determined collapse points – a mathematical consideration that did not arise in [Reference Mengütürk26]. We also allow an additional flexibility in defining the interaction as a weighted average of the particles, instead of solely the simple average used in [Reference Mengütürk26]; this is relevant to encapsulating systems in which the number, or the interaction-contribution, of particles may change from one time segment to another. In addition, the current framework provides a universality statement, a connection to the random n-bridges of [Reference Mengütürk and Mengütürk29], and what we call adjourned coalescent mean-field interacting particles as well as covariance waves that could not arise in [Reference Mengütürk26]. We shall also highlight that the system we define here is in general not strictly an n-dimensional pinned diffusion, since n-dimensional pinned diffusions do not interact at their pinning point (as it is a deterministic value given from the outset), whereas in our system such a point, and in fact a collection of such points, manifests randomly as a consequence of dynamic mean-field interactions.
However, as we take the limit of the number of particles asymptotically, we prove that such systems converge to n-dimensional pinned diffusions, where they get pinned to their initial values from their most recent time segment initiation (which coincides with their termination point from the last time segment).

Our motivation arises from modelling any multivariate system where each unit advances into a (possibly random) shared value that is dictated by the statistics of the whole flock of units at multiple fixed time-points, where the termination (or suspension) of the movement of every unit triggers the renewal of another in a successive manner. The question is: What sort of a system behaves in this way? An example of a composite physical system with continuous termination–renewal behaviour has been studied in [Reference Mengütürk and Mengütürk29], where an alternative mathematical approach (without mean-field interactions) is developed through so-called random n-bridges to model sequential state reduction of commutative Hamiltonians in quantum measurement theory. In [Reference Mengütürk and Mengütürk29], the authors achieve a generalised multi-period stochastic Schrödinger equation on a complex Hilbert space that consistently captures sequential collapse dynamics of eigenstates through a single wave function. In the current paper, we retain our interest in multi-period architectures that produce recurrent interchanges between future collapses and regenerations, but provide a different approach which can also embed mean-field interactions between every existing particle – a non-trivial trait that could not be addressed in [Reference Mengütürk and Mengütürk29]. Accordingly, the family of processes proposed in this paper may find use in representing coalescent interacting particle systems where the distribution of the values of members over a sequence of fixed future time-points is driven by mean-fields. The framework, though not constrained solely to phenomena that occur in particle physics, may also carve out an alternative view of the Einstein–Rosen bridges [Reference Einstein and Rosen11, Reference Raine and Thomas31], multi-particle collision (MPC) dynamics [Reference Gompper, Ihle, Kroll and Winkler13, Reference Malevanets and Kapral21, Reference Malevanets and Kapral22], interacting information flows that encapsulate noisy observations [Reference Brody, Hughston and Macrina6, Reference Hoyle, Macrina and Mengütürk15, Reference Mengütürk24, Reference Mengütürk25, Reference Mengütürk28], and mean-field games that attain sequential equilibria when optimal decisions involve infinite agents in the limit [Reference Huang, Malhamé and Caines17, Reference Jovanovic and Rosenthal18, Reference Nourian and Caines30]. Although we humbly envision these connections, we reserve this paper mainly for establishing the mathematical machinery, and leave these possible applications for future research.

In the classical literature on interacting particles, as the number of particles increases asymptotically, one typically obtains mutually independent Ornstein–Uhlenbeck processes that display uncoupled and divergent behaviour in the limit (leading to propagation of chaos, see [Reference Sznitman32]), thereby lacking any form of coalescing behaviour. On the other hand, in our framework, owing to the introduction of an additional time-convergence property through a class of strictly non-constant functions satisfying particular integration conditions that dynamically regulate the dominance of mean-field interactions, we can force mutually independent diffusions to coalesce at a single point, at the cessation of every time segment, allowing a degree of control and foresight from the outset of the entire particle evolution. Accordingly, as the number of units in the system tends to infinity, these units grow into independent diffusions that necessarily recouple and nestle at a fixed value at each successive terminus, before passing into the next time segment. As an example, in the mean-field limit, we prove that we can reach a family of $\alpha$-Wiener bridges that are continuously glued to each other across the whole span of time. This framework, being flexibly parametric in its constitution, also prompts various interesting questions around the commutativity of double limits, the adjournment of termination–renewal points, the oscillation of what we call covariance waves, and the dynamic allocation of what can be viewed as particle inventories, which will be introduced and discussed with various examples in the paper. As such, the paper also touches on the following questions, which manifest as natural by-products of the framework.

  (i) Does the system satisfy a consistency property regarding the exchangeability of double limits across time and space? In other words, is the system commutative at its limits?

  (ii) Given that the mean-field communication between particles is foundational, how do the induced covariance matrices behave in time? Can we infer the coalescence characteristics of the system through covariance trajectories?

  (iii) In the case of being constrained by a limited ‘inventory’ of particles, how many particles must one add successively into each time segment to ensure the same degree of convergence under a certain confidence interval?

These questions will be addressed rigorously in the coming sections. The paper is organised as follows. Section 2 provides the mathematical model and the main results. Section 3 provides additional mathematical analysis and numerical simulations. Section 4 concludes.

2. Mathematical framework

We work over a finite time horizon $\mathbb{T}=[0,T]$ for some fixed $T<\infty$ , and choose multiple time-points in $\mathbb{T}$ denoted as $T^{(k)}_{\mathrm{start}}$ and $T^{(k)}_{\mathrm{end}}$ for $k\in\mathcal{K}$ with $\mathcal{K}=\{1,\ldots,m\}$ such that

(2.1) \begin{align}0 = T^{(1)}_{\mathrm{start}} < T^{(1)}_{\mathrm{end}} \leq T^{(2)}_{\mathrm{start}} < T^{(2)}_{\mathrm{end}} \leq \cdots \leq T^{(m)}_{\mathrm{start}} < T^{(m)}_{\mathrm{end}} = T\end{align}

for some $m\in\mathbb{N}_+$. If $m=1$ or $m=2$, the chain of inequalities in (2.1) degenerates accordingly; for instance, $m=1$ gives $0 = T^{(1)}_{\mathrm{start}} < T^{(1)}_{\mathrm{end}} = T$. We collect each $T^{(k)}_{\mathrm{start}}$ and $T^{(k)}_{\mathrm{end}}$ in the following:

\begin{align*}\mathbb{T}^{(k)} = \bigl[T^{(k)}_{\mathrm{start}},T^{(k)}_{\mathrm{end}}\bigr) \quad \text{and} \quad \mathbb{T}^{\mathrm{start}} = \bigl\{T^{(k)}_{\mathrm{start}} \colon \forall k\in\mathcal{K} \bigr\} \quad \text{and} \quad \mathbb{T}^{\mathrm{end}} = \bigl\{T^{(k)}_{\mathrm{end}} \colon \forall k\in\mathcal{K} \bigr\}.\end{align*}

Each element of $\mathbb{T}^{\mathrm{end}}$ will be interpreted as a termination (or a suspension) time-point when all the stochastic units in the interacting system converge to their random ensemble average, and each element of $\mathbb{T}^{\mathrm{start}}$ will be interpreted as a revival time-point when the system is restored to a dynamic state with all the units picking up their stochastic movements; this remark will be clarified later on in the paper. The relationship given by $T^{(k)}_{\mathrm{end}} \leq T^{(k+1)}_{\mathrm{start}}$ for $k\in\mathcal{K}$ allows the possibility of representing non-zero durations between termination and renewal times.
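For instance, with $m=2$ and $T=3$, one admissible choice is $\mathbb{T}^{(1)}=[0,1)$ and $\mathbb{T}^{(2)}=[2,3)$, so that $\mathbb{T}^{\mathrm{start}}=\{0,2\}$, $\mathbb{T}^{\mathrm{end}}=\{1,3\}$, and the interval $[1,2]$ is a non-zero duration over which the system remains suspended at the value attained at the first termination time-point.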

2.1. Main model

The number of particles in the system is given by $n\in\mathbb{N}_+$ , where the process (i.e. path) of each $\mathbb{R}$ -valued particle $X^{(i,n)}$ over time is denoted by

(2.2) \begin{align}&\{X^{(i,n)}_t\}_{t\in\mathbb{T}} = \{X^{(i,n)}_t \colon \forall t\in\mathbb{T} \} \quad \text{with $ X^{(i,n)}_0 = x\in\mathbb{R}$.}\end{align}

We shall note that the initial points can be generalised to random variables, and each particle can be extended to take values in $\mathbb{R}^d$ for some $d\in\mathbb{N}_+$ , which we skip in this paper for parsimony. For the rest of the paper, $\{A_t^{(n)}\}_{t\in\mathbb{T}}$ models the ensemble average

(2.3) \begin{align}A_t^{(n)} = \dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)} X^{(j,n)}_t \quad \text{for all $t\in\mathbb{T}$,}\end{align}

with the conditions

\begin{align*}|\beta^{(i,n)}| <\infty \quad \text{and} \quad \sum_{i=1}^n \beta^{(i,n)} = n,\end{align*}

which dynamically quantifies the weighted empirical average state of the particle system. If $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ , then $A_t^{(n)}$ is the standard simple ensemble average for any $t\in\mathbb{T}$ . We work on a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t \leq \infty},\mathbb{P})$ , where a time-shifted $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion is denoted by

(2.4) \begin{align}&\{W^{(i,k)}_t\}_{ t \in\mathbb{T}} = \{W^{(i,k)}_t \colon \forall t \in \mathbb{T} \} \quad \text{with} \ W^{(i,k)}_s = 0 \quad \text{for all $s \leq T^{(k)}_{\mathrm{start}}$,}\end{align}

for all $i\in\mathcal{I}=\{1,\ldots,n\}$ . We can collect every $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ in the $\mathbb{R}^{n\times m}$ -valued $\{\boldsymbol{W}_t\}_{t\in\mathbb{T}}$ given by

\begin{equation*}\boldsymbol{W}_t =\begin{bmatrix}W^{(1,1)}_t & \cdots & W^{(1,k)}_t & \cdots & W^{(1,m)}_t \\\vdots & & \vdots & & \vdots \\W^{(i,1)}_t & \cdots & W^{(i,k)}_t & \cdots & W^{(i,m)}_t \\\vdots & & \vdots & & \vdots \\W^{(n,1)}_t & \cdots & W^{(n,k)}_t & \cdots & W^{(n,m)}_t\end{bmatrix}\!.\end{equation*}

We assume that each $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ is mutually independent across $i\in\mathcal{I}$ for a fixed $k\in\mathcal{K}$ , i.e. across the rows of $\{\boldsymbol{W}_t\}_{t\in\mathbb{T}}$ . On the other hand, unless stated otherwise, $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ does not have to be mutually independent across $k\in\mathcal{K}$ for a fixed $i\in\mathcal{I}$ , i.e. across the columns of $\{\boldsymbol{W}_t\}_{t\in\mathbb{T}}$ . We are now in a position to construct a stochastic system of interacting particles pinned sequentially to $A_t^{(n)}$ at every $t\in\mathbb{T}^{\mathrm{end}}$ . We shall first propose the following system of SDEs and later specify certain conditions on it for our purposes:

(2.5) \begin{align}{\mathrm{d}} X^{(i,n)}_t = f^{(k)}(t)\bigl(A^{(n)}_t - X^{(i,n)}_t \bigr)\,{\mathrm{d}} t + \sigma^{(k)}_t\Bigl(\rho^{(k)} \,{\mathrm{d}} B_t^{(k)} + \sqrt{1 -(\rho^{(k)})^2}\,{\mathrm{d}} W^{(i,k)}_t\Bigr),\end{align}

for $t\in\mathbb{T}^{(k)}$ , $ k\in\mathcal{K}$ , and

(2.6) \begin{align}X^{(i,n)}_{e} = A_{T^{(k-1)}_{\mathrm{end}}}^{(n)} \quad \text{for all}\ e\in\bigl[T^{(k-1)}_{\mathrm{end}},T^{(k)}_{\mathrm{start}}\bigr],\ k\in\mathcal{K},\end{align}

for all $i\in\mathcal{I}$ with $T^{(0)}_{\mathrm{end}}=T^{(1)}_{\mathrm{start}}=0$ , where

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\int_{T^{(k)}_{\mathrm{start}}}^t\bigl(\sigma^{(k)}_s\bigr)^2\mathrm{d} s \lt \infty.\end{align*}

Here $\rho^{(k)}\in[-1,1]$ , $\{B_t^{(k)}\}_{t\in\mathbb{T}}$ is a mutually independent time-shifted $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion (as in (2.4)) that represents common noise in the system, and $f^{(k)}\colon \mathbb{T}^{(k)} \rightarrow\mathbb{R}_{+}$ is a measurable map that is continuous. Note that we have

\begin{align*}&\int_{T^{(k)}_{\mathrm{start}}}^t\exp\biggl( -\int_{s}^t f^{(k)}(u)\,{\mathrm{d}} u\biggr)\,{\mathrm{d}} s < \infty \quad \text{for all $t\in\mathbb{T}^{(k)}, \ k\in\mathcal{K}$,} \end{align*}

since $f^{(k)}$ is non-negative. For parsimony, we shall set $\sigma^{(k)}_t = \sigma^{(k)}\neq 0$ for the rest of this work.

From (2.5)–(2.6), we see that each particle takes the value $A_e^{(n)}$ at each $e\in\mathbb{T}^{\mathrm{end}}\setminus\{T^{(m)}_{\mathrm{end}}\}$ , which are unknown prior to any of these time-points. Hence (2.6) may appear ill-defined at first since (2.5) defines the dynamics of $X_t^{(i,n)}$ only up to some t strictly smaller than $T^{(k)}_{\mathrm{end}}$ for $k\in\mathcal{K}$ . However, the system is in fact well-defined since we can show (see (2.10) below) that

(2.7) \begin{align}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} A^{(n)}_{t} = A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}, \end{align}

which is well-defined. This property of being able to write the limit of the ensemble average exogenously through $B^{(k)}_{T^{(k)}_{\mathrm{end}}}$ and $W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}$ is a key component for (2.5)–(2.6) to be well-defined for every $t\in\mathbb{T}$ . On the other hand, even if we can achieve a well-defined system, these processes are a priori not necessarily time-continuous $\mathbb{P}\otimes\mathrm{d} t$ everywhere for an arbitrary choice of $f^{(k)}$ , which possibly makes the system jump to $A_e^{(n)}$ at $e\in\mathbb{T}^{\mathrm{end}}\setminus\{T^{(m)}_{\mathrm{end}}\}$ in an ad hoc manner, rather than converge to these values in a continuous manner. To address this, we shall impose sufficiency conditions on $f^{(k)}$ to achieve time-continuity $\mathbb{P}\otimes\mathrm{d} t$ everywhere for the entire system with convergence to $A_e^{(n)}$ at every $e\in\mathbb{T}^{\mathrm{end}}$ (including $T^{(m)}_{\mathrm{end}}$ ) given by (2.7), which will make (2.5)–(2.6) a well-defined continuous system with consistent limits for each $t\rightarrow T^{(k)}_{\mathrm{end}}$ .
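To make the construction concrete, the following is a minimal Euler–Maruyama sketch of (2.5)–(2.6). It is an illustration only: the two contiguous unit-length segments, the choice $f^{(k)}(t)=1/\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)$ (admissible under the conditions imposed in Definition 2.1 below), the simple-average weights $\beta^{(i,n)}=1$, and all numerical parameter values are our own assumptions rather than quantities prescribed by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our choice, not prescribed by the model):
n = 50                                  # number of particles
x0 = 0.0                                # common initial value x
segments = [(0.0, 1.0), (1.0, 2.0)]     # [T_start^(k), T_end^(k)) with no gaps
sigma, rho = 1.0, 0.0                   # sigma^(k), rho^(k) held fixed across segments
dt = 1e-3

def f(t, t_end):
    # f^(k)(t) = 1/(T_end^(k) - t): its time integral diverges at T_end^(k)
    # (cf. Definition 2.1 below), which is what enforces the pinning.
    return 1.0 / (t_end - t)

X = np.full(n, x0)
for (t_start, t_end) in segments:
    t = t_start
    while t < t_end - dt:               # stop one step short of the singular end point
        A = X.mean()                    # simple ensemble average (beta^(i,n) = 1)
        dB = np.sqrt(dt) * rng.standard_normal()      # common-noise increment
        dW = np.sqrt(dt) * rng.standard_normal(n)     # idiosyncratic increments
        X = X + f(t, t_end) * (A - X) * dt \
              + sigma * (rho * dB + np.sqrt(1.0 - rho**2) * dW)
        t += dt
    X[:] = X.mean()                     # pin every particle to A at T_end^(k), as in (2.6)

print("spread at the final termination point:", X.std())
print("terminal ensemble average:", X.mean())
```

The final assignment in each segment enforces (2.6) exactly at the termination point; before it, the simulated paths have already contracted towards the ensemble average because the drift in (2.5) dominates as $t\rightarrow T^{(k)}_{\mathrm{end}}$.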

2.2. Temporal analysis

As part of Definition 2.1 below, we are now in a position to highlight the sufficiency conditions on $f^{(k)}$ to achieve time-continuity for the system; for example, $f^{(k)}$ cannot be constant – an uncommon restriction one would typically not be concerned about in the classical mean-field setting.

Definition 2.1. Let $F^{(k)}$ be the space of measurable, non-negative, continuous functions on $\mathbb{T}^{(k)}$ that satisfy the following properties:

  (i) $\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^{t} f^{(k)}(s)\,{\mathrm{d}} s = \infty$,

  (ii) $\int_{T^{(k)}_{\mathrm{start}}}^{\tau} f^{(k)}(s)\,{\mathrm{d}} s < \infty$ for any $\tau\in\mathbb{T}^{(k)}$.

Unless stated otherwise, $t\rightarrow T^{(k)}_{\mathrm{end}}$ should be understood as the limit from the left for $t\in\mathbb{T}^{(k)}$ .
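For instance, $f^{(k)}(t) = \alpha\big/\bigl(T^{(k)}_{\mathrm{end}} - t\bigr)$ with $\alpha>0$ belongs to $F^{(k)}$: it is continuous and non-negative on $\mathbb{T}^{(k)}$, we have $\int_{T^{(k)}_{\mathrm{start}}}^{\tau} f^{(k)}(s)\,{\mathrm{d}} s = \alpha\log\bigl(\bigl(T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}\bigr)\big/\bigl(T^{(k)}_{\mathrm{end}}-\tau\bigr)\bigr) < \infty$ for every $\tau\in\mathbb{T}^{(k)}$, and this integral diverges as $\tau\rightarrow T^{(k)}_{\mathrm{end}}$, so conditions (i) and (ii) hold; a constant $f^{(k)}$, by contrast, violates condition (i).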

Proposition 2.1. Let $f^{(k)}\in F^{(k)}$ for every $k\in\mathcal{K}$ . Then each $\{X^{(i,n)}_t\}_{t\in\mathbb{T}}$ is time-continuous $\mathbb{P}\otimes\mathrm{d} t$ everywhere, such that

(2.8) \begin{align}X^{(i,n)}_{e}=A^{(n)}_{e} \quad \text{for all}\ e\in\mathbb{T}^{\mathrm{end}},\ i\in\mathcal{I}.\end{align}

Proof. If we write (2.5) in the integral form and use (2.2) and (2.3), we have

(2.9) \begin{align}X^{(i,n)}_t &= A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \int_{T^{(k)}_{\mathrm{start}}}^t f^{(k)}(s)\bigl(A^{(n)}_s - X^{(i,n)}_s \bigr)\,{\mathrm{d}} s + \sigma^{(k)}\Bigl(\rho^{(k)} B_t^{(k)} + \sqrt{1 -(\rho^{(k)})^2} W^{(i,k)}_t\Bigr)\end{align}

for every $t\in\mathbb{T}^{(k)}$ and $k\in\mathcal{K}$ . Then, using $\sum_{j=1}^n \beta^{(j,n)} = n$ and (2.9), we can rewrite the weighted ensemble average as

(2.10) \begin{align}A^{(n)}_t = A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_t\end{align}

for $t\in\mathbb{T}^{(k)}$ , where the time-shifted Brownian motions in (2.10) satisfy

\begin{align*}B^{(k)}_{T^{(k)}_{\mathrm{start}}} = 0 \quad \text{and} \quad W^{(i,k)}_{T^{(k)}_{\mathrm{start}}} = 0\end{align*}

for all $i\in\mathcal{I}$ . Hence, in solving the system of SDEs in (2.5)–(2.6), we generalise the solution in [Reference Mengütürk26, Proposition 2.1] through the ansatz maps

\begin{align*}\dfrac{1}{n}\sum_{j=1}^n W^{(j)}_t & \mapsto \dfrac{1}{\sum_{j=1}^n \beta^{(j,n)}}\sum_{j=1}^n \beta^{(j,n)}W^{(j,k)}_t, \quad \text{$t\in\mathbb{T}^{(k)}$} \\\dfrac{1}{n}\sum_{j=1}^n \int_{0}^t \dfrac{\gamma(t)}{\gamma(s)}\,{\mathrm{d}} W^{(j)}_s & \mapsto \dfrac{1}{\sum_{j=1}^n \beta^{(j,n)}}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s, \quad \text{$t\in\mathbb{T}^{(k)}$},\end{align*}

which, by verifying against (2.5)–(2.6) using Itô’s lemma, gives us the integral representation

(2.11) \begin{align}X^{(i,n)}_t &= A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j,k)}_t\Biggr) \notag \\&\quad + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s \Biggr) \notag \\&= A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j,k)}_t\Biggr) \notag \\&\quad + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\gamma^{(k)}(t)Y^{(i,k)}_t - \dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}\gamma^{(k)}(t)Y^{(j,k)}_t \Biggr)\end{align}

for $t\in\mathbb{T}^{(k)}$ , where $\gamma^{(k)}\colon \mathbb{T}^{(k)}\rightarrow\mathbb{R}_+$ and $\{Y^{(i,k)}_t\}_{t\in\mathbb{T}^{(k)}}$ are given by

\begin{align*}\gamma^{(k)}(t) = \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t f^{(k)}(u)\,{\mathrm{d}} u \biggr), \quad Y^{(i,k)}_t = \int_{T^{(k)}_{\mathrm{start}}}^t \gamma^{(k)}(s)^{-1}\,{\mathrm{d}} W^{(i,k)}_s,\end{align*}

given $\sum_{j=1}^n \beta^{(j,n)} = n$ , and having the time-shift property of $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ from (2.4). Denoting the first derivative by

\begin{align*}(\gamma^{(k)})^{'}(t) = \dfrac{\,\mathrm{d} \gamma^{(k)}(t) }{\,\mathrm{d} t},\end{align*}

and applying Itô’s integration-by-parts formula as in [Reference Hildebrandt and Roelly14, Reference Li20], we obtain

\begin{align*}\,\mathrm{d}\bigl(\gamma^{(k)}(t)^{-1} W^{(i,k)}_t\bigr) = \mathrm{d} Y^{(i,k)}_t - W^{(i,k)}_t\dfrac{(\gamma^{(k)})^{'}(t)}{(\gamma^{(k)})^{2}(t)}\,\mathrm{d} t\end{align*}

for $t\in\mathbb{T}^{(k)}$ , where we have

\begin{align*}(\gamma^{(k)})^{'}(t) = -f^{(k)}(t)\exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t f^{(k)}(u)\,{\mathrm{d}} u \biggr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ . By integration, it thus follows that

(2.12) \begin{align}\gamma^{(k)}(t)^{-1} W^{(i,k)}_t &= Y^{(i,k)}_t - \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_s\dfrac{(\gamma^{(k)})^{'}(s)}{(\gamma^{(k)})^{2}(s)}\,{\mathrm{d}} s \notag \\&= Y^{(i,k)}_t + \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_s\dfrac{f^{(k)}(s)}{\gamma^{(k)}(s)}\,{\mathrm{d}} s. \end{align}

For the following, define the map $U^{(k)}\colon \mathbb{T}^{(k)}\times \mathbb{T}^{(k)}\rightarrow\mathbb{R}$ by

(2.13) \begin{align}U^{(k)}(s,t)=f^{(k)}(s)\dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}. \end{align}

If we rearrange (2.12) and multiply both sides by $\gamma^{(k)}(t)$, we can write

(2.14) \begin{align}Y^{(i,k)}_t &= \gamma^{(k)}(t)^{-1}W^{(i,k)}_t - \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_s\dfrac{f^{(k)}(s)}{\gamma^{(k)}(s)}\,{\mathrm{d}} s \notag \\&\Rightarrow \quad \gamma^{(k)}(t)Y^{(i,k)}_t = W^{(i,k)}_t - \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_sU^{(k)}(s,t)\,{\mathrm{d}} s. \end{align}

From (2.13), we have

(2.15) \begin{align}\int_{T^{(k)}_{\mathrm{start}}}^\tau U^{(k)}(s,t) \,{\mathrm{d}} s &= \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(\tau)} - \gamma^{(k)}(t), \end{align}

and since $f^{(k)}\in F^{(k)}$ for every $k\in\mathcal{K}$ , we have

(2.16) \begin{align}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \gamma^{(k)}(t) = \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t f^{(k)}(u)\,{\mathrm{d}} u \biggr) = 0 , \end{align}

which, using (2.15), implies

(2.17) \begin{align}&\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^\tau U^{(k)}(s,t) \,{\mathrm{d}} s = 0 \end{align}

for $\tau\in\mathbb{T}^{(k)}$ , given that $t_{-}\rightarrow T^{(k)}_{\mathrm{end}}$ indicates the limit from the left with $t\in\mathbb{T}^{(k)}$ ; we specify the directional notation of the limit for the purpose of this proof rather than just using $t\rightarrow T^{(k)}_{\mathrm{end}}$ . In addition,

(2.18) \begin{align}\int_{T^{(k)}_{\mathrm{start}}}^t U^{(k)}(s,t) \,{\mathrm{d}} s &= 1 - \gamma^{(k)}(t), \end{align}

and hence, using (2.16) and (2.18), we have

(2.19) \begin{align}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^t U^{(k)}(s,t) \,{\mathrm{d}} s = 1. \end{align}

From (2.17) and (2.19), the map $U^{(k)}$ is an approximation to the identity in the sense of [Reference Li20]. Accordingly, using [Reference Li20], if there is a continuous function $\lambda\colon [\rho_1,\rho_2] \rightarrow \mathbb{R}$ , with $0 \leq \rho_1 < \rho_2 < \infty$ , and a function $\Lambda\colon [\rho_1,\rho_2) \times [\rho_1,\rho_2) \rightarrow \mathbb{R}_+$ that satisfy

\begin{align*}&\lim_{t\rightarrow \rho_2} \int_{\rho_1}^t \Lambda(s,t) \,{\mathrm{d}} s = 1 \quad \text{and} \quad \lim_{t\rightarrow \rho_2} \int_{\rho_1}^{\rho^*} \Lambda(s,t) \,{\mathrm{d}} s = 0, \quad \rho^*\in[\rho_1,\rho_2),\end{align*}

then $\Lambda$ is an approximation to the identity, so that

(2.20) \begin{align}\lim_{t\rightarrow \rho_2} \biggl( \lambda(t) - \int_{\rho_1}^t \lambda(s) \Lambda(s,t) \,{\mathrm{d}} s \biggr) = 0.\end{align}

Since the time-shifted Brownian motion $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ has continuous sample paths $\mathbb{P}$ -a.s. on $\mathbb{T}$ , using (2.20), we thus have

(2.21) \begin{align}\mathbb{P}\biggl(\, \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \biggl( W^{(i,k)}_t - \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_sU^{(k)}(s,t) \,{\mathrm{d}} s \biggr) = 0 \biggr) = 1.\end{align}

Note that, using (2.14), we have

\begin{align*}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \gamma^{(k)}(t)Y^{(i,k)}_t = \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} W^{(i,k)}_t - \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^t W^{(i,k)}_sU^{(k)}(s,t)\,{\mathrm{d}} s,\end{align*}

which, by using (2.21), implies

\begin{align*}\mathbb{P}\biggl(\, \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \gamma^{(k)}(t)Y^{(i,k)}_t = 0 \biggr) = 1, \quad \mathbb{P}\Biggl( \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \dfrac{1}{n}\sum_{j=1}^n \gamma^{(k)}(t)Y^{(j,k)}_t = 0 \Biggr) = 1,\end{align*}

which further means

(2.22) \begin{align}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \Biggl( \gamma^{(k)}(t)Y^{(i,k)}_t - \dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}\gamma^{(k)}(t)Y^{(j,k)}_t\Biggr) = 0, \quad \text{$\mathbb{P}$-a.s.} \end{align}

Since affine transformations of Gaussian processes are Gaussian, $X^{(i,n)}_t$ is Gaussian for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ . Taking the limit from the left $t_{-}\rightarrow T^{(k)}_{\mathrm{end}}$ of $X^{(i,n)}_t$ in (2.11), and using (2.22),

(2.23) \begin{align}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t &= A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}\Biggr), \quad \text{$\mathbb{P}$-a.s.} \notag \\&= A^{(n)}_{T^{(k)}_{\mathrm{end}}},\end{align}

which provides $L^1$ -convergence due to the Gaussian property. On the other hand, since the limit from the right $t_{+}\rightarrow T^{(k)}_{\mathrm{end}}$ with $t > T^{(k)}_{\mathrm{end}}$ gives

\begin{align*}\lim_{t_{+}\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t = A^{(n)}_{T^{(k)}_{\mathrm{end}}},\end{align*}

using the SDE in (2.5)–(2.6), each $\{X^{(i,n)}_t\}_{t\in\mathbb{T}}$ must be time-continuous $\mathbb{P}\otimes\mathrm{d} t$ everywhere. Since the above arguments hold for every $k\in\mathcal{K}$ and $i\in\mathcal{I}$ , the equality in (2.8) holds.

Proposition 2.1 proves that a solution to (2.5)–(2.6) exists for all $t\in\mathbb{T}$, and in turn provides us with a system of particles with continuous paths, where each unit converges to the random ensemble average of the system at every $t\in\mathbb{T}^{\mathrm{end}}$. We shall now make sense of the distribution of the ensemble average at $t\in\mathbb{T}^{\mathrm{end}}$ recursively, as we pass along $k\in\mathcal{K}$. Let

\begin{align*}\|\boldsymbol{\beta}^{(n)}\|^{2}_{L^2} = \sum_{j=1}^n (\beta^{(j,n)})^2,\end{align*}

as the squared $L^2$ -norm of the averaging weights, and define, for every $k\in\mathcal{K}$ ,

(2.24) \begin{align}V^{(k,n)} = \sum_{l=1}^k (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)\biggl((\rho^{(l)})^2 + \dfrac{1 - (\rho^{(l)})^2}{n^2}\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}\biggr).\end{align}
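For instance, if $\sigma^{(l)}=\sigma$ and $\rho^{(l)}=\rho$ for every $l\in\mathcal{K}$ and $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ (so that $\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}=n$), then (2.24) reduces to

\begin{align*}V^{(k,n)} = \sigma^2\biggl(\rho^2 + \dfrac{1-\rho^2}{n}\biggr)\sum_{l=1}^k \bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr),\end{align*}

which separates the contribution of the common noise, persisting as $n\rightarrow\infty$, from that of the idiosyncratic noise, vanishing at rate $1/n$.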

The results for the rest of this section require the Brownian motions in (2.5) to be mutually independent.

Proposition 2.2. Let every time-shifted Brownian motion $\{B^{(k)}_t\}_{t\in\mathbb{T}}$ and $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ in the system be mutually independent for every $k\in\mathcal{K}$ and $i\in\mathcal{I}$ . Then

\begin{align*}A^{(n)}_{T^{(k)}_{\mathrm{end}}} \sim \mathcal{N}(x, V^{(k,n)} ),\end{align*}

where $\mathcal{N}(\cdot,\cdot)$ denotes the Gaussian distribution with the indicated mean and variance.

Proof. We prove this result recursively. As a start, fix $k=1$ as the base case. Using (2.10), we have

(2.25) \begin{align}A^{(n)}_{T^{(1)}_{\mathrm{end}}} = x + \sigma^{(1)}\rho^{(1)} B^{(1)}_{T^{(1)}_{\mathrm{end}}} + \dfrac{\sigma^{(1)}\sqrt{1 - (\rho^{(1)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,1)}_{T^{(1)}_{\mathrm{end}}} \overset{\mathrm{d}}{=} x + \sigma^{(1)}\sqrt{(\rho^{(1)})^2 + \dfrac{(1-(\rho^{(1)})^2)\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n^2}}\,Z^{(1)}_{T^{(1)}_{\mathrm{end}}}, \end{align}

where $Z^{(1)}_{T^{(1)}_{\mathrm{end}}}$ is a Gaussian random variable such that

\begin{align*}Z^{(1)}_{T^{(1)}_{\mathrm{end}}} &\sim \mathcal{N}\bigl(0, T^{(1)}_{\mathrm{end}} - T^{(1)}_{\mathrm{start}}\bigr) = \mathcal{N}\bigl(0, T^{(1)}_{\mathrm{end}} \bigr).\end{align*}

Now fix $k=2$ . From (2.5)–(2.6), we know that

(2.26) \begin{align}A^{(n)}_{T^{(1)}_{\mathrm{end}}}=A^{(n)}_{T^{(2)}_{\mathrm{start}}}, \end{align}

even when $T^{(1)}_{\mathrm{end}} \neq T^{(2)}_{\mathrm{start}}$ . We can now evaluate the distribution of $A^{(n)}_{T^{(2)}_{\mathrm{end}}}$ using (2.10), (2.25), and (2.26) as follows:

(2.27) \begin{align}A^{(n)}_{T^{(2)}_{\mathrm{end}}} = A^{(n)}_{T^{(1)}_{\mathrm{end}}} + \sigma^{(2)}\rho^{(2)} B^{(2)}_{T^{(2)}_{\mathrm{end}}} + \dfrac{\sigma^{(2)}\sqrt{1 - (\rho^{(2)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,2)}_{T^{(2)}_{\mathrm{end}}} \sim \mathcal{N}\bigl(x, V^{(2,n)}\bigr), \end{align}

since all Brownian motions in the system are mutually independent, including the ones across the columns of $\{\boldsymbol{W}_t\}_{t\in\mathbb{T}}$. We can see that for any $k>2$ the additive structure of the variance of the Gaussian distribution continues, and the result follows.

The following result provides the covariance structure of the system over every time segment $\mathbb{T}^{(k)}$ . For the statement below, we shall denote

\begin{align*}\mathbf{1}_{i,j}\triangleq\mathbf{1}(i=j) \quad \text{for $i,j\in\mathcal{I}$}\end{align*}

as the indicator function. Moreover, we denote the covariance between $X^{(i,n)}_t$ and $X^{(j,n)}_t$ as

\begin{align*}C^{(i,j,n)}_t&=\mathbb{E}\bigl[X^{(i,n)}_tX^{(j,n)}_t\bigr]-\mathbb{E}\bigl[X^{(i,n)}_t\bigr]\mathbb{E}\bigl[X^{(j,n)}_t\bigr]\end{align*}

for $t\in\mathbb{T}^{(k)}$ . We are now in a position to state the following.

Proposition 2.3. Keep the conditions of Proposition 2.1 and let every time-shifted Brownian motion $\{B^{(k)}_t\}_{t\in\mathbb{T}}$ and $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ in the system be mutually independent for every $k\in\mathcal{K}$ and $i\in\mathcal{I}$ . Then

(2.28) \begin{align}& C^{(i,j,n)}_t \notag \\& = (\sigma^{(k)})^2\bigl(t - T^{(k)}_{\mathrm{start}}\bigr)\biggl((\rho^{(k)})^2 + \dfrac{(1-(\rho^{(k)})^2)\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n^2}\biggr) \notag \\& \quad + (\sigma^{(k)})^2(1-(\rho^{(k)})^2)\mathbf{1}_{i,j}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s+ V^{(k-1,n)} \notag \\& \quad + \dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)}{n}\biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s - 2\dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s\biggr) \notag \\& \quad -\dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)}{n}\biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s - \dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s\biggr)\end{align}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ , where $V^{(k,n)}$ is defined as in (2.24) and $V^{(0,n)}=0$ .

Proof. First of all, for $\mathbb{E}\bigl[X^{(i,n)}_t X^{(j,n)}_t\bigr]$ , we define the following functions:

\begin{align*}\phi^{(i,j)}_t &= \Bigl(A^{(n)}_{T^{(k)}_{\mathrm{start}}}\Bigr)^2 + 2A^{(n)}_{T^{(k)}_{\mathrm{start}}}\sigma^{(k)}\rho^{(k)} B^{(k)}_t + A^{(n)}_{T^{(k)}_{\mathrm{start}}}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{p=1}^n \beta^{(p,n)}W^{(p,k)}_t\Biggr) \\&\quad + A^{(n)}_{T^{(k)}_{\mathrm{start}}}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{p=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(p,n)}\,{\mathrm{d}} W^{(p,k)}_s \Biggr) \\&\quad +A^{(n)}_{T^{(k)}_{\mathrm{start}}}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{r=1}^n \beta^{(r,n)}W^{(r,k)}_t\Biggr) \\&\quad +A^{(n)}_{T^{(k)}_{\mathrm{start}}}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(j,k)}_s - \dfrac{1}{n}\sum_{r=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(r,n)}\,{\mathrm{d}} W^{(r,k)}_s \Biggr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ , $i,j\in\mathcal{I}$ , and

\begin{align*}&\psi^{(i,j)}_t \\ & = \bigl(\sigma^{(k)}\rho^{(k)}B^{(k)}_t\bigr)^2 + \sigma^{(k)}\rho^{(k)} B^{(k)}_t\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl( \dfrac{1}{n}\sum_{p=1}^n\beta^{(p,n)} W^{(p,k)}_t\Biggr) \\& \quad + \sigma^{(k)}\rho^{(k)} B^{(k)}_t\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{p=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \beta^{(p,n)}\dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(p,k)}_s \Biggr) \\& \quad + \sigma^{(k)}\rho^{(k)} B^{(k)}_t\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl( \dfrac{1}{n}\sum_{r=1}^n \beta^{(r,n)}W^{(r,k)}_t\Biggr) \\& \quad + \sigma^{(k)}\rho^{(k)} B^{(k)}_t\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(j,k)}_s - \dfrac{1}{n}\sum_{r=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \beta^{(r,n)}\dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(r,k)}_s \Biggr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ , $i,j\in\mathcal{I}$ , and finally

(2.29) \begin{align}\kappa^{(i,j)}_t &= (\sigma^{(k)})^2\bigl(1-(\rho^{(k)})^2\bigr)\Biggl(\dfrac{1}{n}\sum_{p=1}^n \beta^{(p,n)}W^{(p,k)}_t + \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{p=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(p,n)}\,{\mathrm{d}} W^{(p,k)}_s\Biggr) \notag \\&\quad \times\Biggl(\dfrac{1}{n}\sum_{r=1}^n \beta^{(r,n)}W^{(r,k)}_t + \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(j,k)}_s - \dfrac{1}{n}\sum_{r=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(r,n)}\,{\mathrm{d}} W^{(r,k)}_s\Biggr)\end{align}

for $t\in\mathbb{T}^{(k)}$ , $i,j\in\mathcal{I}$ . Therefore we have

(2.30) \begin{align}X^{(i,n)}_t X^{(j,n)}_t = \phi^{(i,j)}_t + \psi^{(i,j)}_t + \kappa^{(i,j)}_t \quad \Rightarrow \quad \mathbb{E}\bigl[X^{(i,n)}_t X^{(j,n)}_t\bigr] = \mathbb{E}\bigl[\phi^{(i,j)}_t\bigr] + \mathbb{E}\bigl[\psi^{(i,j)}_t\bigr] + \mathbb{E}\bigl[\kappa^{(i,j)}_t\bigr].\end{align}

We shall now compute each expectation in (2.30). First, note that

(2.31) \begin{align}A^{(n)}_{T^{(k)}_{\mathrm{start}}} = A^{(n)}_{T^{(k-1)}_{\mathrm{start}}} + \sigma^{(k-1)}\rho^{(k-1)} B^{(k-1)}_{T^{(k-1)}_{\mathrm{end}}} + \dfrac{\sigma^{(k-1)}\sqrt{1 - (\rho^{(k-1)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k-1)}_{T^{(k-1)}_{\mathrm{end}}}. \end{align}

Since each time-shifted Brownian motion $\{B^{(k)}_t\}_{t\in\mathbb{T}}$ and $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ in the system is mutually independent for every $k\in\mathcal{K}$ and $i\in\mathcal{I}$ , we have

(2.32) \begin{align}\mathbb{E}\Bigl[A^{(n)}_{T^{(k)}_{\mathrm{start}}}B^{(k)}_t\Bigr] &= \mathbb{E}\Bigl[A^{(n)}_{T^{(k-1)}_{\mathrm{start}}}B^{(k)}_t\Bigr] + \sigma^{(k-1)}\rho^{(k-1)}\mathbb{E}\Bigl[B^{(k-1)}_{T^{(k-1)}_{\mathrm{end}}}\Bigr]\mathbb{E}\bigl[B^{(k)}_t\bigr] \notag \\&\quad + \dfrac{\sigma^{(k-1)}\sqrt{1 - (\rho^{(k-1)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)}\mathbb{E}\Bigl[W^{(j,k-1)}_{T^{(k-1)}_{\mathrm{end}}}\Bigr]\mathbb{E}\bigl[B^{(k)}_t\bigr] = \mathbb{E}\Bigl[A^{(n)}_{T^{(k-1)}_{\mathrm{start}}}B^{(k)}_t\Bigr]. \end{align}

Now note that (2.32) itself collapses to zero recursively at $\mathbb{E}\bigl[ x B^{(1)}_t\bigr] = 0$ , and hence

\begin{align*}&\mathbb{E}\Bigl[A^{(n)}_{T^{(k)}_{\mathrm{start}}}B^{(k)}_t\Bigr] = 0.\end{align*}

Following similar steps, we also have

\begin{align*}&\mathbb{E}\Biggl[A^{(n)}_{T^{(k)}_{\mathrm{start}}}\Biggl(\dfrac{1}{n}\sum_{p=1}^n \beta^{(p,n)}W^{(p,k)}_t\Biggr)\Biggr] = \mathbb{E}\Biggl[A^{(n)}_{T^{(k-1)}_{\mathrm{end}}}\Biggl(\dfrac{1}{n}\sum_{p=1}^n \beta^{(p,n)}W^{(p,k)}_t\Biggr)\Biggr] = 0.\end{align*}

Therefore we have

\begin{align*}\mathbb{E}\bigl[\phi^{(i,j)}_t\bigr] = \mathbb{E}\Bigl[\bigl(A^{(n)}_{T^{(k)}_{\mathrm{start}}}\bigr)^2\Bigr] = x^2 + V^{(k-1,n)},\end{align*}

which follows recursively from (2.31), and can also be verified using Proposition 2.2. In addition, since

\begin{align*}\mathbb{E}\Biggl[B^{(k)}_t\Biggl( \dfrac{1}{n}\sum_{p=1}^n\beta^{(p,n)} W^{(p,k)}_t\Biggr)\Biggr] = \mathbb{E}\bigl[B^{(k)}_t\bigr]\mathbb{E}\Biggl[\Biggl( \dfrac{1}{n}\sum_{p=1}^n\beta^{(p,n)} W^{(p,k)}_t\Biggr)\Biggr] = 0,\end{align*}

and the same for similar terms in $\psi^{(i,j)}_t$ , we have

\begin{align*}\mathbb{E}\bigl[\psi^{(i,j)}_t\bigr] &= \mathbb{E}\bigl[(\sigma^{(k)}\rho^{(k)} B^{(k)}_t)^2\bigr] \\&=\bigl(\sigma^{(k)}\rho^{(k)}\bigr)^2 \bigl(t - T^{(k)}_{\mathrm{start}}\bigr).\end{align*}

To compute the third term in (2.30), we use the mutual independence of Brownian motions and Itô isometry to get the following:

\begin{align*}\sum_{p=1}^n\beta^{(p,n)}\mathbb{E}\biggl[\int_{T^{(k)}_{\mathrm{start}}}^t \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} W^{(p,k)}_s\,{\mathrm{d}} W^{(j,k)}_s\biggr]& = \beta^{(j,n)}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s , \\\sum_{p=1}^n\sum_{r=1}^n\beta^{(p,n)}\beta^{(r,n)}\mathbb{E}\biggl[ \int_{T^{(k)}_{\mathrm{start}}}^t\int_{T^{(k)}_{\mathrm{start}}}^t\dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(p,k)}_s\,{\mathrm{d}} W^{(r,k)}_s\biggr]& = \| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s.\end{align*}

We also get

\begin{align*}&\mathbb{E}\Biggl[\sum_{r=1}^n \beta^{(r,n)}\int_{T^{(k)}_{\mathrm{start}}}^t\int_{T^{(k)}_{\mathrm{start}}}^t\dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} W^{(i,k)}_s\,{\mathrm{d}} W^{(r,k)}_s\Biggr] = \beta^{(i,n)}\int_{T^{(k)}_{\mathrm{start}}}^t\dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s .\end{align*}

Thus, taking the expectation of (2.29) provides

\begin{align*}\mathbb{E}\bigl[\kappa^{(i,j)}_t\bigr] &= \dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)}{n^2}\sum_{p=1}^n (\beta^{(p,n)})^2\mathbb{E}\bigl[W^{(p,k)}_t W^{(p,k)}_t\bigr] \\&\quad + \dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)}{n}(\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s \\&\quad - 2\dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n^2}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s \\&\quad + (\sigma^{(k)})^2(1-(\rho^{(k)})^2)\mathbf{1}(i=j)\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s \\&\quad -\dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)}{n}(\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s \\&\quad + \dfrac{(\sigma^{(k)})^2(1-(\rho^{(k)})^2)\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n^2}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s,\end{align*}

where we have the following:

\begin{align*}\mathbb{E}\bigl[W^{(p,k)}_t W^{(p,k)}_t\bigr] = t - T^{(k)}_{\mathrm{start}}.\end{align*}

Finally, from (2.5)–(2.6), (2.10) and (2.11), we get

\begin{align*}A^{(n)}_{T^{(1)}_{\mathrm{start}}} = x \quad \text{and} \quad A^{(n)}_{T^{(k-1)}_{\mathrm{end}}}=A^{(n)}_{T^{(k)}_{\mathrm{start}}} \forall k\in\mathcal{K}\setminus\{1\}\quad \Rightarrow \quad \mathbb{E}\bigl[X^{(i,n)}_t\bigr]\mathbb{E}\bigl[X^{(j,n)}_t\bigr]=x^2\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ , which can also be verified from (2.27). Putting all terms together under

\begin{align*}C^{(i,j,n)}_t&=\mathbb{E}\bigl[X^{(i,n)}_tX^{(j,n)}_t\bigr]-\mathbb{E}\bigl[X^{(i,n)}_t\bigr]\mathbb{E}\bigl[X^{(j,n)}_t\bigr] =\mathbb{E}\bigl[\phi^{(i,j)}_t\bigr] + \mathbb{E}\bigl[\psi^{(i,j)}_t\bigr] + \mathbb{E}\bigl[\kappa^{(i,j)}_t\bigr] - x^2\end{align*}

provides the result for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ .

We defined $V^{(0,n)}=0$ . For compact representation, we shall always allow expressions of the form

\begin{align*}\sum_{l=1}^{0} z^{(l)} \triangleq 0 \quad \text{for any $z^{(l)}\in\mathbb{R}$.}\end{align*}

Now, as an example, where $\sigma^{(k)}=1$ , $\rho^{(k)}=0$ for every $k\in\mathcal{K}$ and $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ , we get an interacting particle system with no common noise component, where the ensemble average is given by the equally weighted simple average. Accordingly, using Proposition 2.3, we have the covariance structure

\begin{align*}C^{(i,j,n)}_t &= \dfrac{1}{n}\sum_{l=1}^{k-1} \bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr) + \dfrac{t - T^{(k)}_{\mathrm{start}}}{n} + \mathbf{1}_{i,j}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s- \dfrac{1}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ ; we shall provide simulations of such a system in the next section due to its fundamental standing in our framework.
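As a concrete instance of the display above, consider $f^{(k)}(t)=\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)^{-1}\in F^{(k)}$, for which $\gamma^{(k)}(t) = \bigl(T^{(k)}_{\mathrm{end}}-t\bigr)\big/\bigl(T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}\bigr)$ and

\begin{align*}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s = \dfrac{\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)\bigl(t-T^{(k)}_{\mathrm{start}}\bigr)}{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}},\end{align*}

the familiar Brownian-bridge variance profile; substituting this into the covariance above makes explicit that the idiosyncratic contribution vanishes as $t\rightarrow T^{(k)}_{\mathrm{end}}$, in agreement with Corollary 2.1 below.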

Proposition 2.4. Keep the conditions of Proposition 2.3. Then

\begin{align*}&\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}C^{(i,j,n)}_t = V^{(k,n)}\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ .

Proof. Since $f^{(k)}\in F^{(k)}$ for all $k\in\mathcal{K}$, we must also have

(2.33) \begin{align}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\int_{T^{(k)}_{\mathrm{start}}}^t yf^{(k)}(s) \,{\mathrm{d}} s = \infty \quad \text{and} \quad \int_{T^{(k)}_{\mathrm{start}}}^\tau yf^{(k)}(s) \,{\mathrm{d}} s < \infty \end{align}

for any $\tau\in\mathbb{T}^{(k)}$ and $1 \leq y <\infty$ . Accordingly, we introduce a scaled map g by

(2.34) \begin{align}g^{(k)}_y(t) = yf^{(k)}(t)\end{align}

for all $t\in\mathbb{T}^{(k)}$ , and define the following:

\begin{align*}\Psi^{(k)}_y(t) &\triangleq \gamma^{(k)}_y(t) = \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t g^{(k)}_y(u)\,{\mathrm{d}} u \biggr), \\Z_t &\triangleq T^{(k)}_{\mathrm{start}} + \int_{T^{(k)}_{\mathrm{start}}}^t \Psi^{(k)}_y(s)^{-1}\,{\mathrm{d}} s.\end{align*}

We denote

\begin{align*}{\bigl(\Psi^{(k)}_y\bigr)^{'}(t) = \dfrac{\,\mathrm{d} \Psi^{(k)}_y(t) }{\,\mathrm{d} t}}\end{align*}

as the first derivative. If we apply integration by parts, then

(2.35) \begin{align}\,\mathrm{d}\biggl(\dfrac{t}{\Psi^{(k)}_y(t)}\biggr) = \mathrm{d} Z_t - t\dfrac{(\Psi^{(k)}_y)^{'}(t)}{(\Psi^{(k)}_y(t))^{2}}\,\mathrm{d} t \quad \Rightarrow \quad \dfrac{t}{\Psi^{(k)}_y(t)} = Z_t - \int_{T^{(k)}_{\mathrm{start}}}^t s\dfrac{(\Psi^{(k)}_y)^{'}(s)}{(\Psi^{(k)}_y(s))^{2}}\,\mathrm{d} s, \end{align}

which, by using (2.34), implies

\begin{align*}\dfrac{t}{\Psi^{(k)}_y(t)} = Z_t + \int_{T^{(k)}_{\mathrm{start}}}^t s\dfrac{g^{(k)}_y(s)}{\Psi^{(k)}_y(s)}\,{\mathrm{d}} s \quad \Rightarrow \quad \Psi^{(k)}_y(t)Z_t = t - \int_{T^{(k)}_{\mathrm{start}}}^t sQ^{(k)}_y(s,t)\,{\mathrm{d}} s,\end{align*}

where we defined the function $Q^{(k)}_y\colon \mathbb{T}^{(k)}\times\mathbb{T}^{(k)}\rightarrow\mathbb{R}$ as follows:

\begin{align*}Q^{(k)}_y(s,t)=g^{(k)}_y(s)\dfrac{\Psi^{(k)}_y(t)}{\Psi^{(k)}_y(s)}.\end{align*}

Taking similar steps as in Proposition 2.1, we have

(2.36) \begin{align}\int_{T^{(k)}_{\mathrm{start}}}^\tau Q^{(k)}_y(s,t) \,{\mathrm{d}} s = \dfrac{\Psi^{(k)}_y(t)}{\Psi^{(k)}_y(\tau)} - \Psi^{(k)}_y(t) \quad \Rightarrow \quad \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^\tau Q^{(k)}_y(s,t) \,{\mathrm{d}} s = 0 \end{align}

for $\tau\in\mathbb{T}^{(k)}$ , and

(2.37) \begin{align}\int_{T^{(k)}_{\mathrm{start}}}^t Q^{(k)}_y(s,t) \,{\mathrm{d}} s = 1 - \Psi^{(k)}_y(t) \quad \Rightarrow \quad \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \int_{T^{(k)}_{\mathrm{start}}}^t Q^{(k)}_y(s,t) \,{\mathrm{d}} s = 1, \end{align}

since, using (2.33)–(2.34), we have

\begin{align*}\lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \Psi^{(k)}_y(t) = \lim_{t_{-}\rightarrow T^{(k)}_{\mathrm{end}}} \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t g^{(k)}_y(u)\,{\mathrm{d}} u \biggr) = 0.\end{align*}

From (2.36)–(2.37), $Q^{(k)}_y$ approximates the identity as in [Reference Li20], and hence, using the continuity of t and (2.35), we get the following:

(2.38) \begin{align}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \biggl( t - \int_{T^{(k)}_{\mathrm{start}}}^t sQ^{(k)}_y(s,t) \,{\mathrm{d}} s \biggr) = \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \bigl( \Psi^{(k)}_y(t)Z_t \bigr) = 0.\end{align}

Using (2.38) and setting $y=1$ , we have

(2.39) \begin{align}&\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\Psi^{(k)}_1(t)}{\Psi^{(k)}_1(s)} \,{\mathrm{d}} s - 2\dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\Psi^{(k)}_1(t)}{\Psi^{(k)}_1(s)} \,{\mathrm{d}} s\biggr) \notag \\&\ \ =\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s - 2\dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)} \,{\mathrm{d}} s\biggr) \notag \\&\ \ = 0, \end{align}

and by setting $y=2$ , we have

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \mathbf{1}_{i,j}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\Psi^{(k)}_2(t)}{\Psi^{(k)}_2(s)}\,{\mathrm{d}} s = \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \mathbf{1}_{i,j}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s = 0 \end{align*}

as well as

(2.40) \begin{align}&\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\Psi^{(k)}_2(t)}{\Psi^{(k)}_2(s)} \,{\mathrm{d}} s - \dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\Psi^{(k)}_2(t)}{\Psi^{(k)}_2(s)} \,{\mathrm{d}} s\biggr) \notag \\&\quad = \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \biggl((\beta^{(i,n)}+\beta^{(j,n)})\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s - \dfrac{\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2} \,{\mathrm{d}} s\biggr) = 0. \end{align}

Using (2.39)–(2.40), and (2.28) in Proposition 2.3, we have

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}C^{(i,j,n)}_t &= (\sigma^{(k)})^2\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\bigl(t - T^{(k)}_{\mathrm{start}}\bigr)\biggl((\rho^{(k)})^2 + \dfrac{(1-(\rho^{(k)})^2)\| \boldsymbol{\beta}^{(n)}\|^{2}_{L^2}}{n^2}\biggr) + V^{(k-1,n)},\end{align*}

and the result follows.

The following corollary provides a fundamental scenario with simple ensemble average and no common noise. We omit its proof as it follows directly from Proposition 2.4.

Corollary 2.1. Keep the conditions of Proposition 2.3 with (2.28), where $\sigma^{(k)}=1$ , $\rho^{(k)}=0$ for every $k\in\mathcal{K}$ and $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ . Then

(2.41) \begin{align}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} C^{(i,j,n)}_t &= \dfrac{1}{n}\sum_{l=1}^{k} \bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr) \end{align}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ .

From (2.41), if $n=k$, the limiting covariance as $t\rightarrow T^{(k)}_{\mathrm{end}}$ becomes the average length $k^{-1}\sum_{l=1}^{k}\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)$ of the time segments up to $T^{(k)}_{\mathrm{end}}$, that is, the average distance between the revival and termination time-points.

Remark 2.1. The limiting values from Propositions 2.1, 2.2, and 2.4 do not depend on the choice of $f^{(k)}\in F^{(k)}$; rather, $f^{(k)}$ controls the speed of convergence to these values. We also see from Proposition 2.3 that $f^{(k)}\in F^{(k)}$ impacts the covariance trajectories of the system.

Regarding the effect of $f^{(k)}$ on the convergence rate, we shall leave a detailed study for future research.
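As a simple illustration of this point, for the one-parameter family $f^{(k)}(t)=\alpha\big/\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)$ with $\alpha>0$ we have $\gamma^{(k)}(t) = \bigl(\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)\big/\bigl(T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}\bigr)\bigr)^{\alpha}$, so a larger $\alpha$ makes $\gamma^{(k)}$ decay faster and lets the mean-reverting drift in (2.5) dominate earlier within the segment, without altering the limiting values above.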

2.3. Spatial limits

Thus far, we have considered the behaviour of the interacting particle system over time limits. We shall also study the limit as $n\rightarrow\infty$ , that is, as the number of particles in the system grows asymptotically.

Assumption 2.1. Kolmogorov’s strong law property holds:

(2.42) \begin{align}\mathcal{K} = \sum_{j=1}^{\infty}\dfrac{(\beta^{(j)})^2}{j^2} < \infty, \quad \text{where $\beta^{(j)}=\beta^{(j,n)}$ for all $j\leq n$.}\end{align}

As an example, for the simple ensemble average in (2.3) with $\beta^{(j)}=1$ , (2.42) holds with $\mathcal{K}=\pi^2/6$ .
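More generally, any uniformly bounded choice of weights, say $|\beta^{(j)}|\leq c<\infty$ for all $j$, satisfies (2.42), since $\sum_{j=1}^{\infty} (\beta^{(j)})^2/j^2 \leq c^2\pi^2/6 < \infty$.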

Proposition 2.5. Keep the conditions of Proposition 2.1 where (2.42) holds. Then

(2.43) \begin{align}\lim_{n\rightarrow\infty} X^{(i,n)}_t = \xi^{(i)}_t\end{align}

exists for every $t\in\mathbb{T}$ and $i\in\mathcal{I}$ , and $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$ is time-continuous $\mathbb{P}\otimes\mathrm{d} t$ everywhere such that

(2.44) \begin{align}\xi^{(i)}_t = x + \sum_{l=1}^{k-1}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s \biggr)\end{align}

for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ .

Proof. Using Kolmogorov’s strong law property in (2.42), the limit

(2.45) \begin{align}\lim_{n\rightarrow \infty}\dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_t= 0\end{align}

holds $\mathbb{P}$-a.s. due to the strong law of large numbers. Then, using (2.5)–(2.6) and (2.10) recursively, and the $\mathbb{P}\otimes\mathrm{d} t$ everywhere time-continuity of $\{X^{(i,n)}_t\}_{t\in\mathbb{T}}$, we get

(2.46) \begin{align}\lim_{n\rightarrow \infty} A^{(n)}_t &= A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t \notag \\&= A^{(n)}_{T^{(k-1)}_{\mathrm{start}}} + \sigma^{(k-1)}\rho^{(k-1)} B^{(k-1)}_{T^{(k-1)}_{\mathrm{end}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t \notag \\&\vdots \notag \\&=x + \sum_{l=1}^{k-1}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t \end{align}

for every $t\in\mathbb{T}^{(k)}$ for $k\in\mathcal{K}$ . In addition, employing Kolmogorov’s strong law property in (2.42), we further have

(2.47) \begin{align}\lim_{n\rightarrow \infty}\dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s = 0, \quad \text{$\mathbb{P}$-a.s.} \end{align}

From (2.11), (2.46), and (2.47), we have the representation in (2.44), which also proves the existence of $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$. Finally, the $\mathbb{P}\otimes\mathrm{d} t$ everywhere time-continuity of $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$ is due to the $\mathbb{P}\otimes\mathrm{d} t$ everywhere time-continuity of $\{X^{(i,n)}_t\}_{t\in\mathbb{T}}$.

The convergence in (2.43) is pointwise; we shall leave it for future research to study the convergence rate.

Corollary 2.2. Keep the conditions of Proposition 2.5. If $\rho^{(k)}=0$ for all $k\in\mathcal{K}$ , then each $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$ is mutually independent.

Proof. If $\rho^{(k)}=0$ for every $k\in\mathcal{K}$ , we have from (2.44)

\begin{align*}\xi^{(i)}_t = x + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s \biggr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$, where each $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ is mutually independent across $i\in\mathcal{I}$ for $k\in\mathcal{K}$.
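In particular, for the choice $f^{(k)}(t)=\alpha\big/\bigl(T^{(k)}_{\mathrm{end}}-t\bigr)$ with $\alpha>0$ and $\rho^{(k)}=0$, the representation above reads

\begin{align*}\xi^{(i)}_t = x + \sigma^{(k)}\int_{T^{(k)}_{\mathrm{start}}}^t \biggl(\dfrac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}-s}\biggr)^{\alpha}\,{\mathrm{d}} W^{(i,k)}_s, \quad t\in\mathbb{T}^{(k)},\end{align*}

so that, on each time segment, every limiting particle evolves as an $\alpha$-Wiener bridge started and pinned at x, which is the family of continuously glued $\alpha$-Wiener bridges referred to in the introduction.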

We are now in a position to prove an important consistency property: the time and space limits are commutative. More specifically, the limits $t\rightarrow T^{(k)}_{\mathrm{end}}$ and $n\rightarrow \infty$ are exchangeable; that is, the limiting values are the same irrespective of the order we take these limits. In Section 4 we shall discuss the interrelation of the limits $n\rightarrow \infty$ vs. $m\rightarrow \infty$ , for which commutativity may break.

Proposition 2.6. Keep the conditions of Proposition 2.1 where (2.42) holds. Then

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} X^{(i,n)}_t &= \lim_{n\rightarrow \infty} \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t \\&= x + \sum_{l=1}^{k}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}}\end{align*}

for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ , $\mathbb{P}$ -a.s.

Proof. Using Proposition 2.5 and (2.21), we have

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} X^{(i,n)}_t& = \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\xi^{(i)}_t \\& =x + \sum_{l=1}^{k-1}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}} + \sigma^{(k)}\rho^{(k)}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} B^{(k)}_t \\& \quad + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s \biggr) \\& = x + \sum_{l=1}^{k}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}} \end{align*}

for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ , $\mathbb{P}$ -a.s. Exchanging the order of the limits and using (2.23) and the strong law of large numbers in (2.45) due to (2.42), we have the following:

\begin{align*}&\lim_{n\rightarrow \infty} \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t \\&\ = \lim_{n\rightarrow \infty} A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \lim_{n\rightarrow \infty}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}\Biggr) \\&\ = \lim_{n\rightarrow \infty} A^{(n)}_{T^{(k-1)}_{\mathrm{start}}} + \sigma^{(k-1)}\rho^{(k-1)} B^{(k-1)}_{T^{(k-1)}_{\mathrm{end}}} + \sigma^{(k-1)}\sqrt{1 - (\rho^{(k-1)})^2}\lim_{n\rightarrow \infty}\dfrac{1}{n}\Biggl(\sum_{j=1}^n \beta^{(j,n)} W^{(j,k-1)}_{T^{(k-1)}_{\mathrm{end}}}\Biggr) \\&\ \quad +\sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \lim_{n\rightarrow \infty}\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}\Biggr) \\ &\ =x + \sum_{l=1}^{k}\sigma^{(l)}\rho^{(l)} B^{(l)}_{T^{(l)}_{\mathrm{end}}}\end{align*}

for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ , $\mathbb{P}$ -a.s., which proves the commutativity of the double limits.

Proposition 2.7. Keep the conditions of Proposition 2.3 with (2.28), where $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ . Then the following holds:

(2.48) \begin{align}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} C^{(i,j,n)}_t &= \lim_{n\rightarrow \infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} C^{(i,j,n)}_t \notag \\&= \sum_{l=1}^{k} (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)(\rho^{(l)})^2 \end{align}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ .

Proof. First, $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ implies $\|\boldsymbol{\beta}^{(n)}\|^{2}_{L^2}=n$ . From Proposition 2.3, (2.38), and the strong law of large numbers, we have

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} C^{(i,j,n)}_t &= \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}(\sigma^{(k)})^2\bigl(t - T^{(k)}_{\mathrm{start}}\bigr)(\rho^{(k)})^2 + \sum_{l=1}^{k-1} (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)(\rho^{(l)})^2 \\&\quad + \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}(\sigma^{(k)})^2(1-(\rho^{(k)})^2)\mathbf{1}_{i,j}\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)^2}{\gamma^{(k)}(s)^2}\,{\mathrm{d}} s \\&=\sum_{l=1}^{k} (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)(\rho^{(l)})^2\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ . Using Proposition 2.4 and the strong law of large numbers, we get

\begin{align*}\lim_{n\rightarrow \infty} \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} C^{(i,j,n)}_t &= \lim_{n\rightarrow \infty} V^{(k,n)} \\&= \sum_{l=1}^{k} (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)(\rho^{(l)})^2\end{align*}

for $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$ .

We shall highlight an important subclass of the proposed family of interacting particles: the case $\rho^{(k)}=0$ for every $k\in\mathcal{K}$, in which there is no common noise component in the system. The statements below follow directly from Propositions 2.6 and 2.7.

Corollary 2.3. Keep the conditions in Proposition 2.6 and let $\rho^{(k)}=0$ for every $k\in\mathcal{K}$ . Then the following holds:

\begin{equation*}\rho^{(k)}=0 \quad \textit{for all}\ \text{$k\in\mathcal{K}$} \quad \Rightarrow \quad \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} X^{(i,n)}_t = \lim_{n\rightarrow \infty} \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t = x\end{equation*}

for all $t\in\mathbb{T}^{(k)}$ and $i\in\mathcal{I}$ , $\mathbb{P}$ -a.s.

Corollary 2.4. Keep the conditions in Proposition 2.7 with (2.48), and let $\rho^{(k)}=0$ for every $k\in\mathcal{K}$ . Then the following holds:

\begin{equation*}\rho^{(k)}=0 \quad \textit{for all}\ \text{$k\in\mathcal{K}$} \quad \Rightarrow \quad \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \lim_{n\rightarrow \infty} C^{(i,j,n)}_t = \lim_{n\rightarrow \infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} C^{(i,j,n)}_t = 0\end{equation*}

for all $t\in\mathbb{T}^{(k)}$ and $i,j\in\mathcal{I}$.

Corollaries 2.3 and 2.4 indicate a key property of the proposed system of interacting particles: when there is no common noise, as the size of the system grows asymptotically with $n\rightarrow \infty$, each and every particle in the system converges to its initial value x at every termination time-point in $\mathbb{T}^{\mathrm{end}}$. More precisely, each particle becomes an independent diffusion that converges to x at each time-point in $\mathbb{T}^{\mathrm{end}}$, where the covariance of the system vanishes, only for the system to be reanimated back to its stochastic behaviour over each successive time segment between the points of $\mathbb{T}^{\mathrm{start}}$ and $\mathbb{T}^{\mathrm{end}}$.

Thus far, we have kept our presentation fairly abstract; we shall now provide a concrete example of such a system. First, using (2.11), we combine the piecewise representation over the entire $\mathbb{T}$ using indicator functions as follows:

\begin{align*}&X^{(i,n)}_t \\&= \sum_{k\in\mathcal{K}}\Biggl(A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j,k)}_t\Biggr)\Biggr)\mathbf{1}_k \\&\quad + \sum_{k\in\mathcal{K}}\Biggl(\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\gamma^{(k)}(t)}{\gamma^{(k)}(s)}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s \Biggr)\Biggr)\mathbf{1}_k \\&\triangleq \sum_{k\in\mathcal{K}}\Biggl(A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_t + \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\Biggl(\dfrac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j,k)}_t\Biggr) + \mathcal{R}(i,k,t)\Biggr)\mathbf{1}_k \end{align*}

for $t\in\mathbb{T}$ , where $\mathbf{1}_k \triangleq \mathbf{1}(t\in\mathbb{T}^{(k)})$ is the indicator function.

Example 2.1. Let $f^{(k)}\in F^{(k)}$ be given by

(2.49) \begin{equation}f^{(k)}(t) = \dfrac{\alpha^{(k)}}{T^{(k)}_{\mathrm{end}} - t}, \quad t\in\mathbb{T}^{(k)},\end{equation}

for some $\alpha^{(k)}\in(0,\infty)$ . Then we have

\begin{align*}\gamma^{(k)}(t) &= \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t \alpha^{(k)}\bigl(T^{(k)}_{\mathrm{end}} - u\bigr)^{-1}\,{\mathrm{d}} u \biggr)\\& =\exp\Biggl( -\alpha^{(k)}\log\Biggl( \dfrac{T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}} \Biggr) +\alpha^{(k)}\log\Biggl( \dfrac{T^{(k)}_{\mathrm{end}} - t}{T^{(k)}_{\mathrm{end}}} \Biggr) \Biggr) \\&= \Biggl( \dfrac{T^{(k)}_{\mathrm{end}} - t}{T^{(k)}_{\mathrm{end}}} \Biggr)^{\alpha^{(k)}}\Biggl( \dfrac{T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}} \Biggr)^{-\alpha^{(k)}}.\end{align*}

Therefore we have

\begin{align*}\mathcal{R}(i,k,t) & = \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\\&\quad \times\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\bigl( T^{(k)}_{\mathrm{end}} - t \bigr)^{\alpha^{(k)}}}{\bigl( T^{(k)}_{\mathrm{end}} - s \bigr)^{\alpha^{(k)}}}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\bigl( T^{(k)}_{\mathrm{end}} - t \bigr)^{\alpha^{(k)}}}{\bigl( T^{(k)}_{\mathrm{end}} - s \bigr)^{\alpha^{(k)}}}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s \Biggr).\end{align*}

In Example 2.1, if we take $n\rightarrow\infty$ and $\rho^{(k)}=0$ for all $k\in\mathcal{K}$ , each particle behaves as a collection of $\alpha$ -Wiener bridges that are continuously glued to each other across successive time segments. We shall clarify that the process $\{Q_t\}_{t\in\mathbb{T}}$ given by

\begin{align*}Q_t &= \int_0^t \dfrac{(T-t)^\alpha}{(T-s)^\alpha}\,{\mathrm{d}} W_s\end{align*}

is an $\alpha$ -Wiener bridge with initial and terminal values at zero, with $\{W_t\}_{t\in\mathbb{T}}$ a standard $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion and $\alpha\in(0,\infty)$ .
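As a brief sanity check (a standard Itô-isometry computation included here for the reader's convenience rather than a result quoted from elsewhere in the paper), the variance of $\{Q_t\}_{t\in\mathbb{T}}$ indeed vanishes at the terminal time:

\begin{align*}\mathrm{Var}\bigl(Q_t\bigr) = \int_0^t \dfrac{(T-t)^{2\alpha}}{(T-s)^{2\alpha}}\,{\mathrm{d}} s = \begin{cases}\dfrac{(T-t) - (T-t)^{2\alpha}\,T^{1-2\alpha}}{2\alpha - 1}, & \alpha\neq\tfrac{1}{2},\\[8pt](T-t)\log\Bigl(\dfrac{T}{T-t}\Bigr), & \alpha=\tfrac{1}{2},\end{cases}\end{align*}

and both expressions tend to 0 as $t\rightarrow T$ for any $\alpha\in(0,\infty)$, consistent with $\{Q_t\}_{t\in\mathbb{T}}$ being pinned to zero at T.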

3. Additional analysis

3.1. Numerical illustration

We shall provide a numerical exercise for Example 2.1 for demonstration purposes, where we choose $f^{(k)}\in F^{(k)}$ as given in (2.49) and use the Euler–Maruyama method to approximate (2.5)–(2.6) over the mesh $0 = t_0 < t_1 < \cdots < t_E = T$ for some $E\in\mathbb{N}_+$. For parsimony, we set $\alpha^{(k)}=1$, $\sigma^{(k)}=\sigma$, and $\rho^{(k)}=\rho$ for every $k\in\mathcal{K}$. Finally, we let $\{\hat{X}^{(i,n)}_{t_d}\}_{t_d\in\mathbb{T}}$ represent the discretised version of $\{X^{(i,n)}_t\}_{t\in\mathbb{T}}$ for $i\in\mathcal{I}$, where

\begin{align*}\hat{X}^{(i,n)}_{t_{d+1}} = \hat{X}^{(i,n)}_{t_d} + f^{(k)}(t_d)\bigl(A^{(n)}_{t_d} - \hat{X}^{(i,n)}_{t_d}\bigr)\delta + \sigma\Bigl(\rho\bigl(B^{(k)}_{t_{d+1}} - B^{(k)}_{t_d}\bigr) + \sqrt{1-\rho^2}\bigl(W^{(i,k)}_{t_{d+1}} - W^{(i,k)}_{t_d}\bigr)\Bigr)\end{align*}

for every $i\in\mathcal{I}$, where k indexes the time segment containing $t_d$, and where we set $\hat{X}^{(i,n)}_{t_0} = x = 0$, $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$, $\delta=T/E$, $t_d = \delta d$, and $\mathcal{K} = \{1,2,3\}$. For the simulations below, we set $\rho=0$ to exclude common noise; the case where $\rho \neq 0$ can equally be studied, but we shall omit it from this paper to focus on demonstrating how the system converges to $x = 0$ at each time-point in $\mathbb{T}^{\mathrm{end}}$ as we increase the number of particles n.
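To make the discretisation above concrete, the following is a minimal sketch of the scheme (our own illustrative implementation; the function name, the NumPy-based realisation, and the default parameter values are choices made purely for demonstration and are not part of the formal framework), restricted to the case $\rho=0$ with contiguous time segments:

\begin{verbatim}
import numpy as np

# Euler-Maruyama sketch of the coalescent system with rho = 0 (no common noise),
# beta^{(i,n)} = 1 and alpha^{(k)} = 1, so that f^{(k)}(t) = 1 / (T_end^{(k)} - t).
def simulate_pinned_system(n=100, sigma=1.0, T_end=(1.0, 2.0, 3.0), E=3000, seed=0):
    rng = np.random.default_rng(seed)
    T = T_end[-1]
    delta = T / E                            # mesh size
    t = np.linspace(0.0, T, E + 1)           # t_0, ..., t_E
    X = np.zeros((E + 1, n))                 # X[d, i] approximates X^{(i,n)}_{t_d}, x = 0
    for d in range(E):
        td = t[d]
        k = next(j for j, Te in enumerate(T_end) if td < Te)   # active time segment
        A = X[d].mean()                                        # ensemble average A^{(n)}_{t_d}
        drift = (A - X[d]) / (T_end[k] - td)                   # f^{(k)}(t_d) (A^{(n)} - X^{(i,n)})
        dW = rng.normal(0.0, np.sqrt(delta), size=n)           # idiosyncratic increments
        X[d + 1] = X[d] + drift * delta + sigma * dW
    return t, X

# At each point of T_end the paths coalesce (up to O(sqrt(delta)) discretisation noise)
# at the ensemble average, whose spread shrinks towards x = 0 as n grows.
t, X = simulate_pinned_system(n=1000)
print(X[1000].std(), X[2000].std(), X[3000].std())
\end{verbatim}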

Figure 1. (a, b) $n=10$ and $n=50$ , (c, d) $n=100$ and $n=1000$ .

For the scenarios below, we set $T=3$, $E=3000$, and $\sigma=1$. In addition, we set $\mathbb{T}^{\mathrm{start}}=\{0, 1, 2\}$ and $\mathbb{T}^{\mathrm{end}}=\{1, 2, 3\}$; hence each termination time-point coincides with the subsequent revival time-point. In Figure 1 we show samples of the system for $n=10$, $n=50$, $n=100$, and $n=1000$. At every termination time-point, the system converges to the ensemble average; that is, we get $X^{(i,n)}_{e}=A^{(n)}_{e}$ for every $e\in\mathbb{T}^{\mathrm{end}}$ and every $i\in\mathcal{I}$, as proved in Proposition 2.1. We also see that the convergence points tend to 0 as we increase the number of particles, since we have

(3.1) \begin{align}A^{(n)}_{T^{(k)}_{\mathrm{end}}} \sim \mathcal{N}(0, V^{(k,n)} ), \quad \text{where} \quad \lim_{n \rightarrow \infty} V^{(k,n)} = 0\end{align}

from Proposition 2.2, which means that the distribution in (3.1) converges weakly to the Dirac measure centred at 0. With a slight modification of the termination and revival time-points, we can produce what we call adjourned coalescent mean-field interacting particles, where the system is suspended at its ensemble average over longer time periods before it is restored back to its stochastic nature. Accordingly, in Figure 2, we set $\mathbb{T}^{\mathrm{start}}=\{0, 1.1, 2.1\}$ and $\mathbb{T}^{\mathrm{end}}=\{1, 2, 3\}$, again for $n=10$, $n=50$, $n=100$, and $n=1000$. The system sticks to its ensemble average from $T^{(1)}_{\mathrm{end}}=1$ until $T^{(2)}_{\mathrm{start}}=1.1$, and again from $T^{(2)}_{\mathrm{end}}=2$ until $T^{(3)}_{\mathrm{start}}=2.1$; the value at which it is suspended concentrates at 0 as $n \rightarrow \infty$.

Figure 2. (a, b) $n=10$ and $n=50$ , (c, d) $n=100$ and $n=1000$ .

These processes can be useful in modelling multivariate systems where each particle requires a non-zero amount of time before they get reanimated; for example, for interacting particle collision systems, the environment may need to accumulate enough energy after each collision time before the system can become active again.

3.2. Changing particles across time segments

We are interested in a scenario where we incrementally add particles into the system as we move forward from one time segment $\mathbb{T}^{(k)}$ to $\mathbb{T}^{(k+1)}$, without ever surpassing a finite total particle inventory n, while ensuring a fixed confidence of convergence within a given epsilon ball. Before clarifying this further, we shall first extend our framework to account for a transformation from n to n(k), where the latter specifies the number of particles that we have over the time segment $\mathbb{T}^{(k)}$, such that

\begin{align*}n = \sum_{k=1}^m n(k), \quad n(k)>0.\end{align*}

In this spirit, we rewrite the ensemble average process as

\begin{align*}A_t^{(n(k))} = \dfrac{1}{n(k)}\sum_{j=1}^{n(k)} \beta^{(j,n(k))} X^{(j,n(k))}_t\end{align*}

for $t\in\mathbb{T}^{(k)}$ , with $|\beta^{(i,n(k))}| <\infty$ and $\sum_{i=1}^{n(k)} \beta^{(i,n(k))} = n(k)$ , and the SDE in (2.5)–(2.6) as follows:

\begin{align*}\,{\mathrm{d}} X^{(i,n(k))}_t &= f^{(k)}(t)\bigl(A^{(n(k))}_t - X^{(i,n(k))}_t \bigr)\,{\mathrm{d}} t + \sigma^{(k)}\Bigl(\rho^{(k)} \,{\mathrm{d}} B_t^{(k)} + \sqrt{1 -(\rho^{(k)})^2}\,{\mathrm{d}} W^{(i,k)}_t\Bigr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ , $k\in\mathcal{K}$ , and

\begin{align*}X^{(i,n(k))}_{e} &= A_{T^{(k-1)}_{\mathrm{end}}}^{(n(k-1))} \quad \text{for all $e\in\bigl[T^{(k-1)}_{\mathrm{end}},T^{(k)}_{\mathrm{start}}\bigr]$, $k\in\mathcal{K}$,}\end{align*}

for all $i\in\mathcal{I}(k)=\{1,\ldots,n(k)\}$, with $T^{(0)}_{\mathrm{end}}=T^{(1)}_{\mathrm{start}}=0$ and $n(0)=n(1)$. We demonstrate how this interacting particle system behaves under different n(k) sequences, choosing $f^{(k)}\in F^{(k)}$ as in (2.49) and setting the initial value $\hat{X}^{(i,n)}_{t_0} = x = 0$ as before. In Figure 3, we see examples of different particle numbers across different time segments. Figure 3(a) (resp. Figure 3(b)) represents an interacting system that scatters into more (resp. fewer) particles after each collision. Figures 3(c) and 3(d) model situations where the number of particles changes significantly from one regime to another.

Figure 3. (a, b) [2, 25, 100] and [100, 25, 2], (c, d) [1000, 2, 1000] and [2, 1000, 2].

The following result extends Proposition 2.1 with a similar proof that we shall omit to avoid repetition. We shall note that Proposition 3.1 below provides us with a universality statement for our proposed framework.

Proposition 3.1. Let $f^{(k)}\in F^{(k)}$ for every $k\in\mathcal{K}$ . Then each $\{X^{(i,n(k))}_t\}_{t\in\mathbb{T}}$ is time-continuous $\mathbb{P}\otimes\mathrm{d} t$ everywhere, such that

\begin{align*}X^{(i,n(k))}_{e}=A^{(n(k))}_{e} \quad \textit{for all}\ e\in\mathbb{T}^{\mathrm{end}}, \ i\in\mathcal{I}(k), \ k\in\mathcal{K} ,\end{align*}

where the ensemble average at each $e\in\mathbb{T}^{\mathrm{end}}$ is given by

\begin{align*}A^{(n(k))}_{T^{(k)}_{\mathrm{end}}} = A^{(n(k))}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n(k)}\sum_{j=1}^{n(k)} \beta^{(j,n(k))} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}.\end{align*}

By virtue of Proposition 3.1, we can now take the space limits $n(k)\rightarrow\infty$ only over chosen time segments $\mathbb{T}^{(k)}$ . Surely, if we take $n(k)\rightarrow\infty$ for every $k\in\mathcal{K}$ , we recover the limiting behaviours as proved in the previous section. In essence, every result in the previous section can be generalised to account for this extended framework. For example, Proposition 2.2 becomes the following.

Proposition 3.2. Let every time-shifted Brownian motion $\{B^{(k)}_t\}_{t\in\mathbb{T}}$ and $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ in the system be mutually independent for every $i\in\mathcal{I}(k)$ and $k\in\mathcal{K}$ . Then

\begin{align*}A^{(n(k))}_{T^{(k)}_{\mathrm{end}}} \sim \mathcal{N}\bigl(x, V^{(k,n(k))} \bigr),\end{align*}

where $\mathcal{N}(\cdot\,,\cdot)$ stands for the Gaussian distribution and

\begin{align*}V^{(k,n(k))} = \sum_{l=1}^k (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)\biggl((\rho^{(l)})^2 + \dfrac{1 - (\rho^{(l)})^2}{n(l)^2}\| \boldsymbol{\beta}^{(n(l))}\|^{2}_{L^2}\biggr).\end{align*}

In this extended model, we are specifically interested in understanding whether we can construct a system in which we need to add progressively fewer additional particles $n(k+1) - n(k)$ as we move forward from $\mathbb{T}^{(k)}$ to $\mathbb{T}^{(k+1)}$, while sustaining a fixed confidence that all the particles will converge within a fixed epsilon ball around the initial value x at every $T^{(k)}_{\mathrm{end}}$ for $k\in\mathcal{K}$. That is, for $B_{\epsilon}(x)=(x-\epsilon, x+\epsilon)$ being the epsilon ball around x, with $\epsilon > 0$, we would like to achieve

\begin{align*}n_*(k) = \min\biggl\{n(k)\in\mathbb{N}_+ \colon \mathbb{P}\biggl(\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n(k))}_t \in B_{\epsilon}(x)\biggr) \geq \lambda \biggr\}, \quad \lambda\in(0,1).\end{align*}

Proposition 3.3. Let $\rho^{(l)}=0$ , $\beta^{(i,n(l))}=1$ for every $i\in\mathcal{I}(l)$ and $l=1,\ldots,k$ . Also, let $\sigma^{(l)}=\sigma^{(k)}$ , $\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)=\bigl(T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}\bigr)$ and $n(l)=n(k)$ for $l=1,\ldots,k$ . Then $n_*(k)$ exists for every $k\in\mathcal{K}$ .

Proof. The result follows from Propositions 3.1 and 3.2, since $\lambda < 1$ .

In addition, defining $\delta_*^{(k)} = n_*(k+1) - n_*(k)$ , we also aim to keep the following order:

\begin{align*}\delta_*^{(1)} \geq \cdots \geq \delta_*^{(m-1)},\end{align*}

which we shall show is possible when $\sigma^{(k)}$ is inversely proportional to a power of n(k), such that

\begin{align*}\sigma^{(l)} = \dfrac{a}{n(k)^b} \quad \text{for $l=1,\ldots,k$ and $0 < a,b < \infty$}.\end{align*}

In the analysis below, we set $x=0$, $\epsilon=0.01$, $\lambda = 0.99$, $a=50$, $b=1/2$, $\rho^{(k)}=0$, $\beta^{(i,n(k))}=1$, $\mathbb{T}^{\mathrm{start}}=\{0, 1, 2\}$, and $\mathbb{T}^{\mathrm{end}}=\{1, 2, 3\}$. In Figure 4(a), for each $\mathbb{T}^{(k)}$, we see the probabilities that the system will converge to a value within $(-0.01, 0.01)$ across different numbers of particles within that time segment, and in Figure 4(b) we see the difference between these probability curves as we move from $\mathbb{T}^{(k)}$ to $\mathbb{T}^{(k+1)}$. As expected, these probabilities increase as we increase the number of particles within that $\mathbb{T}^{(k)}$, and the convergence is slower as we move from $\mathbb{T}^{(k)}$ to $\mathbb{T}^{(k+1)}$; we need more particles in total for the system to converge within $(-\epsilon, \epsilon)$ as k increases. However, the additional number of particles we require to reach the confidence level $\lambda=0.99$ reduces as we move from $\mathbb{T}^{(k)}$ to $\mathbb{T}^{(k+1)}$, such that $\delta_*^{(1)} > \cdots > \delta_*^{(6)}$. Essentially, the more time segments we add to the system (as we increase k), the fewer additional particles $\delta_*^{(k)}$ are required in each consecutive time segment to achieve the same confidence of convergence within $B_{\epsilon}(x)$.
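As an illustration of how $n_*(k)$ can be located in practice under the assumptions of Proposition 3.3, the following sketch (our own helper names prob_in_ball and n_star, using the variance formula of Proposition 3.2 with $\rho^{(l)}=0$, $\beta^{(i,n(l))}=1$, equal segment lengths $\Delta$, $n(l)=n(k)$, and $\sigma^{(l)}=a/n(k)^b$) computes the convergence probabilities explicitly:

\begin{verbatim}
import math

# Under the stated assumptions, A^{(n(k))}_{T_end^{(k)}} ~ N(x, V) with
# V = k * (a / n^b)^2 * Delta / n, so the convergence probability is explicit.
def prob_in_ball(n, k, eps=0.01, a=50.0, b=0.5, Delta=1.0):
    V = k * (a / n**b) ** 2 * Delta / n
    return math.erf(eps / math.sqrt(2.0 * V))   # P(|N(0, V)| < eps)

def n_star(k, lam=0.99, **kwargs):
    n = 1
    while prob_in_ball(n, k, **kwargs) < lam:   # smallest n(k) achieving confidence lam
        n += 1
    return n

# The increments delta_*^{(k)} = n_*(k+1) - n_*(k) decrease in k, in line with the
# discussion surrounding Figure 4.
ns = [n_star(k) for k in range(1, 8)]
print(ns, [ns[j + 1] - ns[j] for j in range(len(ns) - 1)])
\end{verbatim}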

Figure 4. (a) Probability curves for $(-\epsilon, \epsilon)$ ; (b) difference of probability curves.

3.3. Number of particles vs. time segments

Thus far, we have analysed double limits across the number of particles n vs. time t, which we have shown to commute. We shall now direct our attention to the relationship between the number of particles n vs. the number of time segments m that define the cardinality of $\mathcal{K}$ . It turns out that the double limits between n and m are not necessarily commutative, where we may have

(3.2) \begin{align}\lim_{n\rightarrow\infty}\lim_{k\rightarrow\infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t \neq \lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t\end{align}

for $t\in\mathbb{T}^{(k)}$. To show this, we set $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$, $\rho^{(k)}=0$, $\sigma^{(k)}=\sigma>0$ for all $k\in\mathcal{K}$, and fix the length of every time segment:

\begin{align*}\bigl(T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}\bigr)=\Delta > 0\end{align*}

for any $k\in\mathcal{K}$ . Then, using Proposition 2.2, we observe

\begin{align*}&\lim_{n\rightarrow\infty}\lim_{k\rightarrow\infty} V^{(k,n)} = \lim_{n\rightarrow\infty}\lim_{k\rightarrow\infty} \sum_{l=1}^k \sigma^2\dfrac{\Delta}{n} = \infty, \\&\lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty} V^{(k,n)} = \lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty} \sum_{l=1}^k \sigma^2\dfrac{\Delta}{n} = 0,\end{align*}

and since from Proposition 2.1 we know that

\begin{align*}X^{(i,n)}_t \rightarrow A^{(n)}_{T^{(k)}_{\mathrm{end}}} \quad \text{as $t\rightarrow T^{(k)}_{\mathrm{end}}$} \quad \text{for $t\in\mathbb{T}^{(k)}$},\end{align*}

we have constructed an example where the non-commutativity in (3.2) manifests. We can still maintain a commutative relationship between the limits across n vs. m; that is, if we do not fix $\sigma$ and $\Delta$ as above, and construct a sequence where

\begin{align*}\lim_{k\rightarrow\infty} \sum_{l=1}^k (\sigma^{(l)})^2\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr) = c < \infty,\end{align*}

while keeping $\beta^{(i,n)}=1$ for every $i\in\mathcal{I}$ and $\rho^{(k)}=0$ for all $k\in\mathcal{K}$ , we can still achieve

\begin{align*}\lim_{n\rightarrow\infty}\lim_{k\rightarrow\infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t = \lim_{k\rightarrow\infty}\lim_{n\rightarrow\infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} X^{(i,n)}_t = 0.\end{align*}

3.4. Covariance waves

We shall show how our proposed framework gives rise to what we call covariance waves, which are continuous curves across time generated by the covariance function in (2.28), whose trajectories resemble wave-like behaviour. First, using Proposition 2.3, we have

\begin{align*}C^{(i,j,n)}_{T^{(k)}_{\mathrm{start}}} = V^{(k-1,n)} \quad \text{with $V^{(0,n)}=0$}, \ k\in\mathcal{K},\end{align*}

and using Proposition 2.4, we also have

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}C^{(i,j,n)}_t = V^{(k,n)} \quad \text{for $t\in \mathbb{T}^{(k)}$}, \ k\in\mathcal{K}.\end{align*}

These results confirm that the covariance function is continuous in time, where the dependence structure at any termination point becomes the dependence structure at the revival time-point of the next time segment. Therefore we naturally obtain covariance trajectories that are pinned at each time-point in $\mathbb{T}^{\mathrm{end}}$, which in turn produces wave-like covariance trajectories. We demonstrate this via Example 2.1 with $\sigma^{(k)}=\sigma$, $\rho^{(k)}=0$, $\alpha^{(k)}=\alpha$ for all $k\in\mathcal{K}$ and $\beta^{(i,n)}=1$ for all $i\in\mathcal{I}$, so that

\begin{align*}X^{(i,n)}_t &= \sum_{k\in\mathcal{K}}\Biggl(A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \dfrac{\sigma}{n}\sum_{j=1}^n W^{(j,k)}_t \Biggr)\mathbf{1}(t\in\mathbb{T}^{(k)}) \\&\quad + \sum_{k\in\mathcal{K}}\sigma\Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\bigl( T^{(k)}_{\mathrm{end}} - t \bigr)^{\alpha}}{\bigl( T^{(k)}_{\mathrm{end}} - s \bigr)^{\alpha}}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\bigl( T^{(k)}_{\mathrm{end}} - t \bigr)^{\alpha}}{\bigl( T^{(k)}_{\mathrm{end}} - s \bigr)^{\alpha}}\,{\mathrm{d}} W^{(j,k)}_s \Biggr)\mathbf{1}(t\in\mathbb{T}^{(k)})\end{align*}

for $t\in\mathbb{T}$ for $i\in\mathcal{I}$ . Using Proposition 2.3, for $\alpha \neq \frac12$ we have

\begin{align*}&C^{(i,j,n)}_t \\[3pt]& =\dfrac{\sigma^2(t-T^{(k)}_{\mathrm{start}})}{n} \\[3pt]&\quad + \mathbf{1}_{i,j}\sigma^2\!\left(\dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - t}{2\alpha -1} - \dfrac{\bigl(T^{(k)}_{\mathrm{end}} - t\bigr)^{2\alpha}}{\bigl(T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}\bigr)^{2\alpha}}\dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - T^{(k)}_{\mathrm{start}}}{2\alpha -1}\right)\\[3pt]&\quad +V^{(k-1,n)} \\[3pt]&\quad - \dfrac{\sigma^2}{n}\!\left(\dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - t}{2\alpha -1} - \dfrac{\bigl(T^{(k)}_{\mathrm{end}} - t\bigr)^{2\alpha}}{\bigl(T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}\bigr)^{2\alpha}}\dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - T^{(k)}_{\mathrm{start}}}{2\alpha -1}\right)\!,\end{align*}

and for $\alpha = \frac12$ we have

\begin{align*}C^{(i,j,n)}_t & =\dfrac{\sigma^2(t-T^{(k)}_{\mathrm{start}})}{n} \\&\quad + \mathbf{1}_{i,j}\sigma^2\Biggl(\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)\log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}\Biggr) -\bigl(t-T^{(k)}_{\mathrm{end}}\bigr) \log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}\Biggr) \Biggr) \\&\quad +V^{(k-1,n)} - \dfrac{\sigma^2}{n}\Biggl(\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)\log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}\Biggr) -\bigl(t-T^{(k)}_{\mathrm{end}}\bigr) \log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}\Biggr) \Biggr)\end{align*}

for $t\in\mathbb{T}^{(k)}$ . We shall now check the properties we expect from $C^{(i,j,n)}_t$ given above. For the case where $\alpha\neq \frac{1}{2}$ , we have the following:

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - t}{2\alpha -1} &= \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \dfrac{T^{(k)}_{\mathrm{end}}-t}{2\alpha -1} - \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\dfrac{T^{(k)}_{\mathrm{end}}\Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}}{2\alpha -1} \\&= 0.\end{align*}

For the case where $\alpha = \frac{1}{2}$ , we use L’Hôpital’s rule to get

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \bigl(t-T^{(k)}_{\mathrm{end}}\bigr)\log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}\Biggr) &= \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\dfrac{\log\Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)}{\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)^{-1}} =\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \dfrac{{\frac{\partial}{\partial t}}\log\Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)}{{\frac{\partial}{\partial t}}\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)^{-1}} \\&= \lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \dfrac{\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)^{-1}}{-\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)^{-2}} =\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}} \bigl(T^{(k)}_{\mathrm{end}}-t\bigr) = 0.\end{align*}

Hence, for any $\alpha\in(0,\infty)$ , we have the limit

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}C^{(i,j,n)}_t = \sum_{l=1}^{k-1} \sigma^2\dfrac{\bigl(T^{(l)}_{\mathrm{end}} - T^{(l)}_{\mathrm{start}}\bigr)}{n} + \sigma^2\dfrac{\bigl(T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}\bigr)}{n} = V^{(k,n)},\end{align*}

as expected. In addition, for any $\alpha\in(0,\infty)$ ,

\begin{align*}C^{(i,j,n)}_{T^{(k)}_{\mathrm{start}}} = V^{(k-1,n)},\end{align*}

as expected. Note that we also have, for $\alpha\neq \frac{1}{2}$ ,

\begin{align*}&\lim_{n\rightarrow\infty}C^{(i,j,n)}_t\\&\ =\mathbf{1}_{i,j}\sigma^2\!\left(\rule{0pt}{21pt}\right.\! \dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - t}{2\alpha -1} - \dfrac{\bigl(T^{(k)}_{\mathrm{end}} - t\bigr)^{2\alpha}}{\bigl(T^{(k)}_{\mathrm{end}} - T^{(k)}_{\mathrm{start}}\bigr)^{2\alpha}}\dfrac{T^{(k)}_{\mathrm{end}}\Bigl(1 - \Bigl({\frac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}}\Bigr)^{2\alpha}\Bigr) - T^{(k)}_{\mathrm{start}}}{2\alpha -1}\!\left.\rule{0pt}{21pt}\right)\! \!,\end{align*}

and for $\alpha = \frac{1}{2}$ ,

\begin{equation*}\lim_{n\rightarrow\infty}C^{(i,j,n)}_t =\mathbf{1}_{i,j}\sigma^2\Biggl(\bigl(t-T^{(k)}_{\mathrm{end}}\bigr)\log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-t}{T^{(k)}_{\mathrm{end}}}\Biggr) -\bigl(t-T^{(k)}_{\mathrm{end}}\bigr) \log\Biggl(\dfrac{T^{(k)}_{\mathrm{end}}-T^{(k)}_{\mathrm{start}}}{T^{(k)}_{\mathrm{end}}}\Biggr) \Biggr) .\end{equation*}

Finally, this leads us to the commutativity relation

\begin{align*}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}\lim_{n\rightarrow\infty}C^{(i,j,n)}_t = \lim_{n\rightarrow\infty}\lim_{t\rightarrow T^{(k)}_{\mathrm{end}}}C^{(i,j,n)}_t = 0,\end{align*}

as Proposition 2.7 provides. We shall now demonstrate how the covariance process studied above produces wave-like behaviour across time. In Figure 5 we set $\sigma=1$ , $\mathbb{T}^{\mathrm{start}}=\{0, 1, 2\}$ , and $\mathbb{T}^{\mathrm{end}}=\{1, 2, 3\}$ .

Figure 5. (a) $n=10\,000$ across different $\alpha$ , (b) $\alpha=1$ across different n.

In Figure 5 we see wave-like behaviour of covariance trajectories due to the multi-pinning property of the interacting system. In Figure 5(a) we see the trajectories for a fixed $n=10\,000$ across different $\alpha$ values that change the shape of the waves, and in Figure 5(b) we see the influence of the number of particles for a fixed $\alpha$. From Corollary 2.2, we know that as $n\rightarrow\infty$ the limiting particles are mutually independent, and accordingly, from Figure 5(b), we see how the end points of the covariance waves approach zero as n increases (from 20 to $10\,000$), as expected; we also know this directly from Corollary 2.4. Building on the extension given in Proposition 3.1, we can also consider a different n(k) for each time segment, which would influence the wave magnitudes.
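To make the closed-form expressions above easy to reproduce, the following sketch (our own helper functions, written for the off-diagonal case $i\neq j$ with $\rho^{(k)}=0$, $\beta^{(i,n)}=1$ and $\alpha\neq\frac12$) evaluates the covariance wave on a time grid:

\begin{verbatim}
import numpy as np

# Bridge-covariance term appearing twice in the alpha != 1/2 formula above.
def g(t, Ts, Te, alpha):
    first = (Te * (1.0 - ((Te - t) / Te) ** (2 * alpha)) - t) / (2 * alpha - 1.0)
    at_Ts = (Te * (1.0 - ((Te - Ts) / Te) ** (2 * alpha)) - Ts) / (2 * alpha - 1.0)
    return first - ((Te - t) / (Te - Ts)) ** (2 * alpha) * at_Ts

# Off-diagonal covariance C^{(i,j,n)}_t, i != j, pieced together across the segments.
def covariance_wave(t_grid, T_start, T_end, sigma=1.0, alpha=1.0, n=20):
    C, V_prev = np.zeros_like(t_grid), 0.0
    for Ts, Te in zip(T_start, T_end):
        mask = (t_grid >= Ts) & (t_grid <= Te)
        tt = t_grid[mask]
        C[mask] = sigma**2 * (tt - Ts) / n + V_prev - sigma**2 / n * g(tt, Ts, Te, alpha)
        V_prev += sigma**2 * (Te - Ts) / n       # V^{(k,n)} carried into the next segment
    return C

# The trajectory starts at 0, is pinned at V^{(k,n)} at each T_end^{(k)}, and these
# end points shrink towards 0 as n increases, as in Figure 5(b).
t = np.linspace(0.0, 3.0, 3001)
print(covariance_wave(t, (0.0, 1.0, 2.0), (1.0, 2.0, 3.0), n=20)[[1000, 2000, 3000]])
\end{verbatim}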

3.5. A view from partial differential equations

Denoting $\boldsymbol{X}^{(n)}_t = \bigl[X^{(1,n)}_t, \ldots, X^{(n,n)}_t\bigr]^{\top}$ for every $t\in\mathbb{T}$ , one can work with a system of conditional expectation problems

\begin{align*}v^{(k)}(t, \boldsymbol{x}^{(n)}) = \mathbb{E}\biggl[ \exp\biggl(-\int_t^{T^{(k)}_{\mathrm{end}}}h^{(k)}(s)\,{\mathrm{d}} s\biggr) \varsigma^{(k)}\Bigl(A^{(n)}_{T^{(k)}_{\mathrm{end}}}\Bigr) \biggm| \boldsymbol{X}_t^{(n)} = \boldsymbol{x}^{(n)} \biggr], \quad t\in\mathbb{T}^{(k)} \cup \bigl\{T^{(k)}_{\mathrm{end}}\bigr\}\end{align*}

for some $\varsigma^{(k)}\colon \mathbb{R}\rightarrow\mathbb{R}$ and integrable function $h^{(k)}\colon \mathbb{T}^{(k)}\rightarrow\mathbb{R}$ , that can be computed by solving the sequence of partial differential equations given by

\begin{align*}&\dfrac{\partial v^{(k)}(t,\boldsymbol{x}^{(n)})}{\partial t} - h^{(k)}(t)v^{(k)}(t,\boldsymbol{x}^{(n)}) + \sum_{i\in\mathcal{I}}\dfrac{\partial v^{(k)}(t,\boldsymbol{x}^{(n)})}{\partial x^{(i,n)}}f^{(k)}(t)\Biggl(\dfrac{1}{n}\sum_{j\in\mathcal{I}}\beta^{(j,n)}x^{(j,n)} - x^{(i,n)} \Biggr) \\&\quad + \dfrac{1}{2}(\sigma^{(k)})^2\sum_{i\in\mathcal{I}}\dfrac{\partial^2 v^{(k)}(t,\boldsymbol{x}^{(n)})}{\partial x^{(i,n)}\partial x^{(i,n)}} + \dfrac{1}{2}(\sigma^{(k)})^2(\rho^{(k)})^2\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{I}, j\neq i}\dfrac{\partial^2 v^{(k)}(t,\boldsymbol{x}^{(n)})}{\partial x^{(i,n)}\partial x^{(j,n)}} = 0,\end{align*}

where the sequence of boundary conditions is given by

\begin{align*}v^{(k)}\bigl(T^{(k)}_{\mathrm{end}},\boldsymbol{x}^{(n)}\bigr) = \varsigma^{(k)}\Biggl(\dfrac{1}{n}\sum_{i\in\mathcal{I}}\beta^{(i,n)}x^{(i,n)}\Biggr)\end{align*}

for $k\in\mathcal{K}$ . Also note that we have

\begin{align*}v^{(k)}\bigl(T^{(k)}_{\mathrm{start}}, \,.\, \bigr) = \mathbb{E}\biggl[ \exp\biggl(-\int_{T^{(k)}_{\mathrm{start}}}^{T^{(k)}_{\mathrm{end}}}h^{(k)}(s)\,{\mathrm{d}} s\biggr) \varsigma^{(k)}\Bigl(A^{(n)}_{T^{(k)}_{\mathrm{end}}}\Bigr) \biggm| A_{T^{(k-1)}_{\mathrm{end}}}^{(n)} \biggr], \quad k\in\mathcal{K}.\end{align*}

If we set $h^{(k)} = 0$ and $\varsigma^{(k)}(x) = x$ as the identity map, then

\begin{align*}v^{(k)}\bigl(T^{(k)}_{\mathrm{start}}, \,.\, \bigr) = \mathbb{E}\bigl[A^{(n)}_{T^{(k)}_{\mathrm{end}}} \bigm| A^{(n)}_{T^{(k-1)}_{\mathrm{end}}}\bigr]\end{align*}

for $k\in\mathcal{K}$ ; such computations are straightforward via the results presented in this paper.
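For instance (a short check rather than a new result), under the independence assumptions of Proposition 2.2 and using the ensemble-average representation in Proposition 3.1 (with $n(k)=n$), the zero-mean Gaussian terms drop out of the conditional expectation, so that

\begin{align*}v^{(k)}\bigl(T^{(k)}_{\mathrm{start}}, \,.\, \bigr) = \mathbb{E}\Biggl[A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}} \Biggm| A^{(n)}_{T^{(k-1)}_{\mathrm{end}}}\Biggr] = A^{(n)}_{T^{(k-1)}_{\mathrm{end}}},\end{align*}

since $A^{(n)}_{T^{(k)}_{\mathrm{start}}} = A^{(n)}_{T^{(k-1)}_{\mathrm{end}}}$ and the remaining terms have zero conditional mean.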

3.6. Connection to random n-bridges

A wide family of stochastic processes called random n-bridges (RnBs) was introduced in [Reference Mengütürk and Mengütürk29] and has led to a programme of producing stochastic Schrödinger dynamics on a complex Hilbert space for explaining sequential quantum reduction of commutative observables. Although RnBs and the family of interacting particle systems studied in this paper are not the same, we shall nonetheless discuss an intriguing connection between the two, since both exhibit convergence behaviour to a set of random variables over a set of predetermined time-points. First, we recall the definition of RnBs (with notational adjustments to fit this paper) and leave any missing detail for the reader to fill in from [Reference Mengütürk and Mengütürk29].

Definition 3.1. Let $\{G^{(i)}_t\}_{t\geq0}$ be an $\mathbb{R}$ -valued càdlàg stochastic process and let

\begin{align*}\textbf{X} = [X^{(1)}, \ldots, X^{(m)}]^{\top}\end{align*}

be a vector of $\mathbb{R}$-valued random variables $X^{(k)}\in\mathcal{L}^2(\Omega,\mathcal{F},\mathbb{P})$ with a joint probability law $\boldsymbol{\nu}(\mathrm{d} \boldsymbol{x})$, for $i\in\mathcal{I}$ and $k\in\mathcal{K}$. A stochastic process $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$ is said to be a random n-bridge (RnB) to $\textbf{X}$ over $\mathbb{T}^{\mathrm{end}}$ if the following hold.

  (i) The pinning values of $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$ over $\mathbb{T}^{\mathrm{end}}$ have joint law $\boldsymbol{\nu}$ for every $i\in\mathcal{I}$; that is,

    \begin{align*}\mathbb{P}\Bigl(\xi^{(i)}_{T^{(1)}_{\mathrm{end}}} \in \mathrm{d} x_1, \ldots, \xi^{(i)}_{T^{(k)}_{\mathrm{end}}} \in \mathrm{d} x_k, \ldots, \xi^{(i)}_{T^{(m)}_{\mathrm{end}}} \in \mathrm{d} x_m \Bigr) = \boldsymbol{\nu}(\mathrm{d} x_1,\ldots,\mathrm{d} x_k, \ldots, \mathrm{d} x_m).\end{align*}
  (ii) For all $m(k)\in\mathbb{N}_+$, every $T^{(k-1)}_{\mathrm{end}}<t_{k,1}<\cdots<t_{k,m(k)}<T^{(k)}_{\mathrm{end}}$, every $(y_{k,1},\ldots,y_{k,m(k)})\in\mathbb{R}^{m(k)}$ for $k\in\mathcal{K}$, and for all $\boldsymbol{x}$ such that $\boldsymbol{\nu}(\mathrm{d} \boldsymbol{x})>0$, it holds that

    \begin{align*}&\mathbb{P}\Bigl(\bigl\{\xi^{(i)}_{t_{k,1}}\leq y_{k,1},\ldots,\xi^{(i)}_{t_{k,m(k)}}\leq y_{k,m(k)} \colon k\in\mathcal{K} \bigr\} \bigm| \\&\quad \quad \xi^{(i)}_{T^{(1)}_{\mathrm{end}}} =x_1, \ldots, \xi^{(i)}_{T^{(k)}_{\mathrm{end}}} =x_k, \ldots, \xi^{(i)}_{T^{(m)}_{\mathrm{end}}} =x_m \Bigr) \nonumber \\&\quad =\mathbb{P}\Bigl(\bigl\{G^{(i)}_{t_{k,1}}\leq y_{k,1},\ldots,G^{(i)}_{t_{k,m(k)}}\leq y_{k,m(k)} \colon k\in\mathcal{K} \bigr\} \bigm| \\&\quad \quad \qquad G^{(i)}_{T^{(1)}_{\mathrm{end}}} =x_1, \ldots, G^{(i)}_{T^{(k)}_{\mathrm{end}}} =x_k, \ldots, G^{(i)}_{T^{(m)}_{\mathrm{end}}} =x_m\Bigr). \nonumber\end{align*}

In this setup, $T^{(k)}_{\mathrm{end}}=T^{(k+1)}_{\mathrm{start}}$. Note that there is no (mean-field) interaction between the processes $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$, $i\in\mathcal{I}$, but each $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$ is probabilistically conditioned to take the law of $\textbf{X}$ over $\mathbb{T}^{\mathrm{end}}$. As an example, if we choose $\{G^{(i)}_t\}_{t\geq0}$ (and thus $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$) to be a purely continuous process and let $\mathbb{P}( X^{(k)} = x) = 1$, we can interpret each $\{\xi^{(i)}_{t}\}_{t\in\mathbb{T}}$ to be continuously pinned to the value x at each $T^{(k)}_{\mathrm{end}}$ for $k\in\mathcal{K}$. In fact, if we choose

\begin{align*}X^{(k)} \triangleq A^{(n)}_{T^{(k)}_{\mathrm{end}}} = A^{(n)}_{T^{(k)}_{\mathrm{start}}} + \sigma^{(k)}\rho^{(k)} B^{(k)}_{T^{(k)}_{\mathrm{end}}} + \dfrac{\sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j,k)}_{T^{(k)}_{\mathrm{end}}}\end{align*}

and set $X^{(0)}=x$ , let $\{G^{(i)}_t\}_{t\geq0}$ be the càdlàg process given by

\begin{align*}G^{(i)}_t = \sum_{k\in\mathcal{K}} X^{(k-1)}\mathbf{1}\bigl(t=T^{(k)}_{\mathrm{start}}\bigr) + \sum_{k\in\mathcal{K}} W^{(i,k)}_t \mathbf{1}(t\in\mathbb{T}^{(k)}) + X^{(m)}\mathbf{1}\bigl(t=T^{(m)}_{\mathrm{end}}\bigr),\end{align*}

we reach a system of RnBs having certain similarities to the aforementioned family of processes studied in this paper – e.g. pinning points having the same distribution – but we still cannot recover the mean-field interaction property or the same system of SDEs as given in (2.5)–(2.6). However, the SDEs we would get from the above RnB would still have some functional similarities to (2.5), for which we shall leave a more detailed analysis for future research. Our foresight here stems from an observation made in [Reference Mengütürk26], where there is only a single time segment with $m=1$ , and where we see a discussion of an SDE of the form

(3.3) \begin{align}\,\mathrm{d} \xi^{(i)}_t &= \dfrac{1}{T^{(1)}_{\mathrm{end}}-t}\bigl(\mathbb{E}\bigl[ X^{(1)}\mid \mathcal{F}^{\xi^{(i)}}_t \bigr] - \xi^{(i)}_t \bigr)\,\mathrm{d} t + \,\mathrm{d} W^{(i,1)}_{t} \quad \text{for $t\in\mathbb{T}^{(1)}$} \notag \\&= \dfrac{1}{T^{(1)}_{\mathrm{end}}-t}\bigl(\mathbb{E}\bigl[X^{(1)} \mid \xi^{(i)}_t \bigr] - \xi^{(i)}_t \bigr)\,\mathrm{d} t + \,\mathrm{d} W^{(i,1)}_{t} \quad \text{for $t\in\mathbb{T}^{(1)}$}, \end{align}

with $\mathcal{F}^{\xi^{(i)}}_t = \sigma(\{\xi^{(i)}_s\}\colon 0\leq s \leq t)$ for the so-called Brownian random bridge $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$ for $i\in\mathcal{I}$ (see [Reference Brody and Hughston4–Reference Brody, Hughston and Macrina6]), satisfying the following anticipative representation:

(3.4) \begin{align}\xi^{(i)}_t = X^{(1)}\dfrac{t}{T^{(1)}_{\mathrm{end}}} + Z^{(i)}_{tT^{(1)}_{\mathrm{end}}},\end{align}

where each $\{Z^{(i)}_{tT^{(1)}_{\mathrm{end}}}\}_{t\in\mathbb{T}}$ is a mutually independent standard Brownian bridge. From (3.4), we can see that the following holds:

\begin{align*}\xi^{(i)}_t \rightarrow X^{(1)} \quad \text{as $t\rightarrow T^{(1)}_{\mathrm{end}}$}, \quad \text{since} \quad Z^{(i)}_{tT^{(1)}_{\mathrm{end}}} \rightarrow 0 \quad \text{as $t\rightarrow T^{(1)}_{\mathrm{end}}$}.\end{align*}

The SDE in (3.3) has similarities to the SDE in (2.5) when we choose $m=1$ , $x=0$ , $\rho^{(1)}=0$ , $\sigma^{(1)}=1$ , and $f^{(1)}(t)=1/\bigl(T^{(1)}_{\mathrm{end}}-t\bigr)$ . In this specific case, the main difference between (2.5) and (3.3) remains the presence of the ensemble average $\{A_t^{(n)}\}_{t\in\mathbb{T}}$ in (2.5) that is replaced by a marginal conditional expectation process $\{\mathbb{E}[X^{(1)} \mid \xi^{(i)}_t ]\}_{t\in\mathbb{T}}$ in (3.3), which essentially loses the mean-field interaction component. As for the case of $m>1$ , any such connection will be of a more complicated nature, which deserves a separate study.
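As a small numerical companion to (3.4) (a sketch with our own variable names, taking $X^{(1)}$ to be standard normal purely for illustration and constructing the standard Brownian bridge from a Brownian path in the usual way), one can verify the pinning $\xi^{(i)}_{T^{(1)}_{\mathrm{end}}}=X^{(1)}$ directly:

\begin{verbatim}
import numpy as np

# Anticipative representation (3.4): xi_t = X * t / T + Z_t, with Z a standard
# Brownian bridge on [0, T], so that xi_T = X.
rng = np.random.default_rng(1)
T, E = 1.0, 1000
t = np.linspace(0.0, T, E + 1)
X1 = rng.normal()                                  # illustrative pinning value X^{(1)}
dW = rng.normal(0.0, np.sqrt(T / E), size=E)
W = np.concatenate(([0.0], np.cumsum(dW)))         # Brownian path on [0, T]
Z = W - (t / T) * W[-1]                            # standard Brownian bridge
xi = X1 * t / T + Z                                # Brownian random bridge of (3.4)
print(xi[-1], X1)                                  # terminal value equals X^{(1)}
\end{verbatim}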

4. Conclusion

In this paper we have introduced and studied a family of coalescent interacting stochastic processes where each particle continuously converges to the ensemble average of the system over a successive sequence of fixed time-points. We proved numerous results that show how this random system behaves under space and time limits, and provided a universality statement for our framework when studying systems with a different number of particles across different time segments. In addition, we illustrated several intriguing properties of these systems through numerical simulations and what we call covariance waves. One of our main narratives is that, in any finite time segment, the more particles we introduce into the system, the more independent and decoupled each particle becomes in the interim, while at the same time the more assuredly all particles converge to, and recouple at, the same deterministic value, simultaneously, at each terminus.

We envision several directions for future research. For instance, one can model the end points of the time segments as an increasing sequence of random variables, which would produce anticipative SDE representations when the limiting behaviour of $f^{(k)}\in F^{(k)}$ is defined in terms of such random sequences. In addition, it would be interesting to study whether the framework can be generalised to include pure-jump processes with pinning properties. Finally, one can construct examples of such systems and study their specific properties; we have already provided Example 2.1 within the main body of this paper, and shall provide another one below to motivate the interested reader.

Example 4.1. Let $f^{(k)}\in F^{(k)}$ be given by

\begin{equation*}f^{(k)}(t) = \dfrac{\theta^t\log(\theta)}{\theta^{T^{(k)}_{\mathrm{end}}}-\theta^t}, \quad t\in\mathbb{T}^{(k)},\end{equation*}

for some $\theta\in(1,\infty)$. Then we have

\begin{align*}\gamma^{(k)}(t) &= \exp\biggl( - \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\theta^u\log(\theta)}{\theta^{T^{(k)}_{\mathrm{end}}}-\theta^u}\,{\mathrm{d}} u \biggr)\\& =\exp\bigl(-\log\bigl( \theta^{T^{(k)}_{\mathrm{end}}} - \theta^{T^{(k)}_{\mathrm{start}}}\bigr)+\log\bigl( \theta^{T^{(k)}_{\mathrm{end}}} - \theta^{t} \bigr) \bigr) \\&= \dfrac{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{t}}{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{T^{(k)}_{\mathrm{start}}}}.\end{align*}

Therefore we have

\begin{align*}\mathcal{R}(i,k,t) &= \sigma^{(k)}\sqrt{1 - (\rho^{(k)})^2}\\&\quad \times \Biggl(\int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{t}}{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{s}}\,{\mathrm{d}} W^{(i,k)}_s - \dfrac{1}{n}\sum_{j=1}^n \int_{T^{(k)}_{\mathrm{start}}}^t \dfrac{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{t}}{\theta^{T^{(k)}_{\mathrm{end}}} - \theta^{s}}\beta^{(j,n)}\,{\mathrm{d}} W^{(j,k)}_s \Biggr).\end{align*}

We emphasise that Examples 2.1 and 4.1 are certainly far from being exhaustive; many others can be constructed and studied within the proposed framework.

Acknowledgements

The authors are grateful to the anonymous referees for their very valuable suggestions and insightful comments that led to a significant improvement of this paper.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There are no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Barczy, M. and Kern, P. (2011). General $\alpha$-Wiener bridges. Commun. Stoch. Anal. 5, 8.
[2] Barczy, M. and Pap, G. (2010). $\alpha$-Wiener bridges: Singularity of induced measures and sample path properties. Stoch. Anal. Appl. 28, 447–466.
[3] Bolley, F., Cañizo, J. A. and Carillo, J. A. (2012). Mean-field limit for the stochastic Vicsek model. Appl. Math. Lett. 25, 339–343.
[4] Brody, D. C. and Hughston, L. P. (2005). Finite-time stochastic reduction models. J. Math. Phys. 46, 082101.
[5] Brody, D. C. and Hughston, L. P. (2006). Quantum noise and stochastic reduction. J. Phys. A 39, 833.
[6] Brody, D. C., Hughston, L. P. and Macrina, A. (2008). Information-based asset pricing. Internat. J. Theoret. Appl. Finance 11, 107–142.
[7] Budhiraja, A., Dupuis, P. and Fischer, M. (2012). Large deviation properties of weakly interacting processes via weak convergence methods. Ann. Prob. 40, 74–102.
[8] Carmona, R., Fouque, J.-P. and Sun, L.-H. (2015). Mean field games and systemic risk. Commun. Math. Sci. 13, 911–933.
[9] Degond, P. and Motsch, S. (2008). Continuum limit of self-driven particles with orientation interaction. Math. Models Methods Appl. Sci. 18, 1193–1215.
[10] Del Moral, P. and Rio, E. (2011). Concentration inequalities for mean field particle models. Ann. Appl. Prob. 21, 1017–1052.
[11] Einstein, A. and Rosen, N. (1935). The particle problem in the general theory of relativity. Phys. Rev. 48, 73–77.
[12] Gartner, J. (1988). On the McKean–Vlasov limit for interacting diffusions. Math. Nachr. 137, 197–248.
[13] Gompper, G., Ihle, T., Kroll, D. M. and Winkler, R. G. (2009). Multi-particle collision dynamics: A particle-based mesoscale simulation approach to the hydrodynamics of complex fluids. In Advanced Computer Simulation Approaches for Soft Matter Sciences III (Advances in Polymer Science 221). Springer, Berlin.
[14] Hildebrandt, F. and Roelly, S. (2020). Pinned diffusions and Markov bridges. J. Theoret. Prob. 33, 906–917.
[15] Hoyle, E., Macrina, A. and Mengütürk, L. A. (2020). Modulated information flows in financial markets. Internat. J. Theoret. Appl. Finance 23, 2050026.
[16] Huang, H., Liu, J. G. and Pickl, P. (2020). On the mean-field limit for the Vlasov–Poisson–Fokker–Planck system. J. Statist. Phys. 181, 1915.
[17] Huang, M. Y., Malhamé, R. P. and Caines, P. E. (2006). Large population stochastic dynamic games: Closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inform. Systems 6, 221–252.
[18] Jovanovic, B. and Rosenthal, R. W. (1988). Anonymous sequential games. J. Math. Economics 17, 77–87.
[19] Lacker, D. (2015). Mean field games via controlled martingale problems: Existence of Markovian equilibria. Stoch. Process. Appl. 125, 2856–2894.
[20] Li, X.-M. (2018). Generalised Brownian bridges: Examples. Markov Process. Relat. Fields 24, 151–163.
[21] Malevanets, A. and Kapral, R. (1999). Mesoscopic model for solvent dynamics. J. Chem. Phys. 110, 8605–8613.
[22] Malevanets, A. and Kapral, R. (2000). Solute molecular dynamics in a mesoscale solvent. J. Chem. Phys. 112, 7260–7269.
[23] Mansuy, R. (2004). On a one-parameter generalization of the Brownian bridge and associated quadratic functionals. J. Theoret. Prob. 17, 1021–1029.
[24] Mengütürk, L. A. (2016). Stochastic Schrödinger evolution over piecewise enlarged filtrations. J. Math. Phys. 57, 032106.
[25] Mengütürk, L. A. (2018). Gaussian random bridges and a geometric model for information equilibrium. Phys. A 494, 465–483.
[26] Mengütürk, L. A. (2021). A family of interacting particle systems pinned to their ensemble average. J. Phys. A 54, 435001.
[27] Mengütürk, L. A. (2023). Time-convergent random matrices from mean-field pinned interacting systems. J. Appl. Prob. 60, 394–417.
[28] Mengütürk, L. A. (2024). On Doob h-transformations for finite-time quantum state reduction. J. Math. Phys. 65, 032103.
[29] Mengütürk, L. A. and Mengütürk, M. C. (2020). Stochastic sequential reduction of commutative Hamiltonians. J. Math. Phys. 61, 102104.
[30] Nourian, M. and Caines, P. E. (2013). $\epsilon$-Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents. SIAM J. Control Optim. 51, 3302–3331.
[31] Raine, D. and Thomas, E. (2009). Black Holes: An Introduction, 2nd edn. Imperial College Press.
[32] Sznitman, A. S. (1991). Topics in propagation of chaos. In École d'Eté de Probabilités de Saint-Flour XIX–1989 (Lecture Notes in Mathematics 1464), ed. P. L. Hennequin, pp. 165–251. Springer, Berlin, Heidelberg.