The phenomenological theory proposed by Einstein for interpreting Brownian motion is described in detail, along with the alternative approaches due to Langevin and Fokker–Planck. The theory of Markov chains is presented as a basic mathematical approach to stochastic processes in discrete space and time, and several of its applications, such as the Monte Carlo method, are illustrated. The theory of stochastic equations, as a representation of stochastic processes in continuous space–time, is discussed and used to obtain a generalized, rigorous formulation of the Langevin and Fokker–Planck equations for generalized fluctuating observables. The Arrhenius formula is derived as an example of a first exit-time problem.
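To fix ideas, the objects summarized above can be written, in the standard overdamped one-dimensional setting (our notation, not necessarily the chapter's), as
$$\gamma\,\dot x = -V'(x) + \xi(t), \qquad \langle \xi(t)\,\xi(t')\rangle = 2\gamma k_B T\,\delta(t-t'),$$
with the corresponding Fokker–Planck equation
$$\partial_t P(x,t) = \partial_x\!\left[\frac{V'(x)}{\gamma}\,P\right] + \frac{k_B T}{\gamma}\,\partial_x^2 P.$$
For escape over a barrier of height $\Delta V \gg k_B T$, the mean first exit time obeys the Arrhenius law $\tau \sim e^{\Delta V/k_B T}$.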
Statistical mechanics is hugely successful when applied to physical systems at thermodynamic equilibrium; however, most natural phenomena occur in nonequilibrium conditions and more sophisticated techniques are required to address this increased complexity. This second edition presents a comprehensive overview of nonequilibrium statistical physics, covering essential topics such as Langevin equations, Lévy processes, fluctuation relations, transport theory, directed percolation, kinetic roughening, and pattern formation. The first part of the book introduces the underlying theory of nonequilibrium physics, the second part develops key aspects of nonequilibrium phase transitions, and the final part covers modern applications. A pedagogical approach has been adopted for the benefit of graduate students and instructors, with clear language and detailed figures used to explain the relevant models and experimental results. With the inclusion of original material and organizational changes throughout the book, this updated edition will be an essential guide for graduate students and researchers in nonequilibrium thermodynamics.
In this chapter, we present the microscopic (Langevin equation) and macroscopic (Fokker–Planck equation) descriptions of Brownian motion and confirm their consistency. Furthermore, we provide a detailed introduction to the Poisson process, which forms the foundation of chemical reactions. Subsequently, we introduce the chemical Langevin equation and its corresponding Fokker–Planck equation, which are used to model molecular-number fluctuations in chemical reactions. We also explain stochastic differential equations under both the Itô and Stratonovich interpretations of the stochastic integral. Exploring mechanisms arising from the presence of noise, we discuss noise-induced transitions, attractor selection, and adaptation in dynamical systems, elucidating their functional significance in cells. Finally, as an advanced topic, we introduce adiabatic elimination in stochastic systems.
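As an illustration of the Itô/Stratonovich distinction mentioned above, the following minimal sketch (our own example, not code from the chapter) integrates the multiplicative-noise SDE dx = -x dt + sigma*x dW with the Euler–Maruyama scheme (Itô) and the Heun predictor–corrector scheme (Stratonovich); with multiplicative noise the two interpretations yield genuinely different processes, with mean decay rates 1 and 1 - sigma**2/2, respectively.

import numpy as np

rng = np.random.default_rng(0)
sigma, dt, n_steps = 0.5, 1e-3, 1000   # integrate up to t = 1

def euler_maruyama(x0):
    # Ito: drift and noise coefficient evaluated at the left endpoint.
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        x = x + (-x) * dt + sigma * x * dW
    return x

def heun_stratonovich(x0):
    # Stratonovich: coefficients averaged over the step (same dW twice).
    x = x0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        xp = x + (-x) * dt + sigma * x * dW                       # predictor
        x = x + 0.5 * (-x - xp) * dt + 0.5 * sigma * (x + xp) * dW  # corrector
    return x

# Exact means at t = 1: exp(-1) ~ 0.37 (Ito) vs exp(-0.875) ~ 0.42 (Stratonovich).
print(np.mean([euler_maruyama(1.0) for _ in range(1000)]))
print(np.mean([heun_stratonovich(1.0) for _ in range(1000)]))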
This chapter quantitatively examines molecule numbers and reaction rates within a cell, along with thermal fluctuations and Brownian motion, from a mesoscopic perspective. Thermal fluctuations of molecules are pivotal in chemical reactions, protein folding, molecular motor systems, and so on. We introduce estimates of cell size and of molecule numbers within cells, highlighting the possible significance of molecules present in small numbers. Describing their behavior requires dealing with stochastic fluctuations, so we describe the Gillespie algorithm, widely employed in Monte Carlo simulations of stochastic chemical reactions. We elaborate on extrinsic and intrinsic noise in cells, and on why understanding how cells process fluctuations for sensing is crucial. To facilitate this comprehension, we revisit the fundamentals of statistics, including the law of large numbers and the central limit theorem. We derive the diffusion equation from the random walk, confirm the dimensionality dependence of random walks, elucidate Brownian motion as the continuous limit of the random walk, and explain the Einstein relation. As examples of the physiological significance of fluctuations in cell biology, we estimate the diffusion constants of proteins inside cells, discuss diffusion-limited reactions, and introduce bacterial random walks, chemotaxis, and the amoeboid movements of eukaryotic cells.
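To make the Gillespie algorithm mentioned above concrete, here is a minimal direct-method sketch (our own example, for an assumed birth–death scheme: production at constant rate k1, degradation at per-molecule rate k2), of the kind used to simulate stochastic chemical reactions:

import numpy as np

def gillespie_birth_death(k1=10.0, k2=0.1, n0=0, t_end=100.0, seed=0):
    # Direct-method SSA for 0 -> A at rate k1 and A -> 0 at rate k2 * n.
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a1, a2 = k1, k2 * n              # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # exponential waiting time
        if rng.random() < a1 / a0:       # pick which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

# The stationary copy number fluctuates around k1/k2 = 100 (Poisson statistics).
ts, ns = gillespie_birth_death()
print(ns[len(ns) // 2 :].mean())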
For the Brownian motion of a particle in a fluid, the Langevin equation for its momentum is introduced phenomenologically. The strength of the noise is shown to be related to friction, and, in a second step, to the diffusion coefficient. Excellent agreement with experiments on a levitated particle in gas is demonstrated. This phenomenological Langevin equation is then shown to follow from a general projection approach to the underlying Hamiltonian dynamics of the full system in the limit of an infinite mass ratio between Brownian particles and fluid molecules. For Brownian motion in liquids, additional time-scales enter that are discussed phenomenologically and illustrated with experiments.
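In standard notation (ours, not necessarily the chapter's), the momentum Langevin equation and the noise–friction relation described above read
$$\dot p = -\gamma p + \xi(t), \qquad \langle \xi(t)\,\xi(t')\rangle = 2\gamma m k_B T\,\delta(t-t'),$$
where equipartition, $\langle p^2\rangle = m k_B T$, fixes the noise strength in terms of the friction coefficient $\gamma$; the long-time mean-square displacement then identifies the diffusion coefficient $D = k_B T/(m\gamma)$.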
Stochastic thermodynamics has emerged as a comprehensive theoretical framework for a large class of non-equilibrium systems including molecular motors, biochemical reaction networks, colloidal particles in time-dependent laser traps, and bio-polymers under external forces. This book introduces the topic in a systematic way, beginning with a dynamical perspective on equilibrium statistical physics. Key concepts like the identification of work, heat and entropy production along individual stochastic trajectories are then developed and shown to obey various fluctuation relations beyond the well-established linear response regime. Representative applications are then discussed, including simple models of molecular motors, small chemical reaction networks, active particles, stochastic heat engines and information machines involving Maxwell demons. This book is ideal for graduate students and researchers of physics, biophysics, and physical chemistry, with an interest in non-equilibrium phenomena.
The payoff in the Chow–Robbins coin-tossing game is the proportion of heads when you stop. Stopping to maximize expectation was addressed by Chow and Robbins (1965), who proved there exist integers $k_n$ such that it is optimal to stop at n tosses when heads minus tails is $k_n$. Finding $k_n$ was unsolved except for finitely many cases by computer. We prove an $o(n^{-1/4})$ estimate of the stopping boundary of Dvoretzky (1967), which then proves $k_n = \left\lceil \alpha\sqrt{n} - \frac{1}{2} + \frac{\bigl(-2\zeta(-1/2)\bigr)\sqrt{\alpha}}{\sqrt{\pi}}\, n^{-1/4} \right\rceil$ except for n in a set of density asymptotic to 0, at a power-law rate. Here, $\alpha$ is the Shepp–Walker constant from the Brownian motion analog, and $\zeta$ is Riemann's zeta function. An $n^{-1/4}$ dependence was conjectured by Christensen and Fischer (2022). Our proof uses moments involving Catalan and Shapiro Catalan triangle numbers, which appear in a tree resulting from backward induction, and a generalized backward induction principle. It was motivated by an idea of Häggström and Wästlund (2013) to use backward induction of upper and lower value bounds from a horizon, which they used numerically to settle a few cases. Christensen and Fischer, with much better bounds, settled many more cases. We use Skorohod's embedding to get simple upper and lower bounds from the Brownian analog; our upper bound is the one found by Christensen and Fischer in another way. We use them first for many more examples and a conjecture, then algebraically in the tree, with feedback, to get much sharper value bounds near the border, and analytic results. We also give a formula that yields the exact optimal stopping rule for all n up to about a third of a billion; it uses the analytic result plus terms arrived at empirically.
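The horizon backward induction of value bounds credited above to Häggström and Wästlund can be sketched as follows (our own crude illustration; the paper's Brownian-motion bounds are much sharper). From a horizon N we seed each state (n, h), for h heads after n tosses, with the elementary lower bound max(h/N, 1/2), since stopping pays h/N and waiting until heads minus tails returns to zero guarantees 1/2, and then iterate the Bellman recursion backwards:

def chow_robbins_lower_bound(N=2000):
    # V[h] is a lower bound on the value of the state (n, h).
    V = [max(h / N, 0.5) for h in range(N + 1)]            # horizon seed
    for n in range(N - 1, 0, -1):
        V = [max(h / n, 0.5 * (V[h + 1] + V[h])) for h in range(n + 1)]
    return V                                               # bounds at n = 1

V1 = chow_robbins_lower_bound()
# Lower bound on the value of the game before the first toss; the true
# value is roughly 0.79.
print(0.5 * (V1[1] + V1[0]))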
Brownian motion is a continuous-time process obtained by taking the limit of a scaled random walk. Alternatively, a Brownian motion can be defined in an axiomatic way, using a set of fundamental properties including the normal distribution feature. We consider various transforms of Brownian motion, including scaling, shifting, and the exponential transform. The last of these gives rise to the geometric Brownian motion, which is often used to model asset prices or to build Radon–Nikodym derivative processes. We conclude the chapter by proving Girsanov's theorem. We recall that the distributions of random variables depend on the probability measure at hand; hence, the distributional properties of a stochastic process are affected by a change of measure. Consequently, a process may display different properties (e.g., different distributions) under different measures. In particular, a process may display the properties of a Brownian motion under one measure but not under another. Girsanov's theorem explains how Brownian motion properties are affected when changing the probability measure using an exponential martingale as the Radon–Nikodym derivative process.
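A minimal sketch of the constructions mentioned above (our own illustration, not the chapter's code): a simple random walk rescaled diffusively approximates a Brownian path (Donsker's construction), and the exponential transform of a drifted path yields a geometric Brownian motion.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                   # walk steps per unit time

# Scaled random walk: W_t ~ S_floor(n t) / sqrt(n) approximates Brownian motion.
steps = rng.choice([-1.0, 1.0], size=n)
W = np.concatenate(([0.0], np.cumsum(steps))) / np.sqrt(n)
t = np.linspace(0.0, 1.0, n + 1)

# Exponential transform: geometric Brownian motion with assumed drift mu
# and volatility sigma, as used for asset prices.
mu, sigma, S0 = 0.05, 0.2, 100.0
S = S0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)

print(W[-1], S[-1])   # W_1 is approximately N(0, 1); S is one GBM path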
The chapter develops probabilistic methods for pricing and dynamic replication in an arbitrage-free market, including an introduction to continuous-time methodology.
The other prime example of a fine topology is the fine topology of potential theory (in the usual sense of electromagnetism, gravitation, etc.). This is finer than the Euclidean topology but coarser than the density topology. Each of these three topologies has its σ-ideal of small sets: the meagre sets for the Euclidean case, the polar sets for the fine topology of potential theory, and the (Lebesgue-)null sets for the density topology. The polar sets have been extensively studied, not only in potential theory as above but in probabilistic potential theory; pioneers here include P.-A. Meyer and J. L. Doob. Relevant here are the links between martingales and harmonic functions (likewise their sub- and super-versions), Green functions, Green domains, Markov processes, Brownian motion, Dirichlet forms, energy and capacity. The general theory of such fine topologies involves such things as analytically heavy topologies, base operators, density operators and lifting.
This chapter introduces detailed mathematical modelling for diffusion-based molecular communication systems. Mathematical and physical aspects of diffusion are covered, such as the Wiener process, drift, first arrival time distributions, the effect of concentration, and Fick’s laws. Simulation of molecular communication systems is also discussed.
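As a hedged illustration of the first-arrival-time distributions mentioned above (our own sketch with assumed parameters, not the chapter's code): for one-dimensional diffusion with positive drift v and diffusion coefficient D, the first arrival time at distance d is inverse-Gaussian distributed, with mean d/v and variance 2*D*d/v**3, which a direct simulation reproduces.

import numpy as np

rng = np.random.default_rng(2)
d, v, D, dt = 1.0, 1.0, 0.5, 1e-3    # distance, drift, diffusion coeff., step

def first_arrival_time():
    # Simulate dx = v dt + sqrt(2 D) dW until the molecule first reaches x = d.
    x, t = 0.0, 0.0
    while x < d:
        x += v * dt + np.sqrt(2.0 * D * dt) * rng.normal()
        t += dt
    return t

samples = np.array([first_arrival_time() for _ in range(1000)])
print(samples.mean(), samples.var())   # both should be close to 1.0 here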
We present a closed-form solution to a discounted optimal stopping zero-sum game in a model based on a generalised geometric Brownian motion with coefficients depending on its running maximum and minimum processes. The optimal stopping times forming a Nash equilibrium are shown to be the first times at which the original process hits certain boundaries depending on the running values of the associated maximum and minimum processes. The proof is based on the reduction of the original game to the equivalent coupled free-boundary problem and the solution of the latter problem by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are partially determined as either unique solutions to the appropriate system of arithmetic equations or unique solutions to the appropriate first-order nonlinear ordinary differential equations. The results obtained are related to the valuation of the perpetual lookback game options with floating strikes in the appropriate diffusion-type extension of the Black–Merton–Scholes model.
We are interested in the law of the first passage time of an Ornstein–Uhlenbeck process to time-varying thresholds. We show that this problem is connected to the laws of the first passage time of the process to members of a two-parameter family of functional transformations of a time-varying boundary. For specific values of the parameters, these transformations appear in a realisation of a standard Ornstein–Uhlenbeck bridge. We provide three different proofs of this connection. The first is based on a similar result for Brownian motion, the second uses a generalisation of the so-called Gauss–Markov processes, and the third relies on the Lie group symmetry method. We investigate the properties of these transformations and study the algebraic and analytical properties of an involution operator which is used in constructing them. We also show that these transformations map the space of solutions of Sturm–Liouville equations into the space of solutions of the associated nonlinear ordinary differential equations. Lastly, we interpret our results through the method of images and give new examples of curves with explicit first passage time densities.
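To give a feel for the object studied above, here is a crude Euler–Maruyama Monte Carlo estimate (our own sketch; the process and boundary parameters are assumed for illustration) of the first passage time of a standard Ornstein–Uhlenbeck process to a time-varying threshold:

import numpy as np

rng = np.random.default_rng(3)
theta, sigma, dt = 1.0, 1.0, 1e-3        # OU: dX = -theta X dt + sigma dW

def boundary(t):
    return 1.0 + 0.5 * np.sin(t)         # an assumed time-varying threshold

def first_passage_time(x0=0.0, t_max=50.0):
    x, t = x0, 0.0
    while t < t_max:
        if x >= boundary(t):
            return t
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return np.nan                        # censored path: no crossing observed

fpts = np.array([first_passage_time() for _ in range(500)])
print(np.nanmean(fpts))                  # crude Monte Carlo estimate of E[T]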
In this paper, we consider a joint drift-rate control and two-sided impulse control problem in which the system manager adjusts the drift rate as well as instantaneous relocations of a Brownian motion, with the objective of minimizing the total average state-related cost and control cost. The system state can be negative. Assuming that instantaneous upward and downward relocations have different cost structures, each consisting of both a setup cost and a variable cost, we prove that the optimal control policy takes the form $\{(s^{\ast}, q^{\ast}, Q^{\ast}, S^{\ast}), \{\mu^{\ast}(x) : x \in [s^{\ast}, S^{\ast}]\}\}$. Specifically, the optimal impulse control policy is characterized by a quadruple $(s^{\ast}, q^{\ast}, Q^{\ast}, S^{\ast})$, under which the system state is immediately relocated upward to $q^{\ast}$ once it drops to $s^{\ast}$ and immediately relocated downward to $Q^{\ast}$ once it rises to $S^{\ast}$; the optimal drift-rate control policy depends solely on the current system state and is characterized by a function $\mu^{\ast}(\cdot)$ for system states in $[s^{\ast}, S^{\ast}]$. By analyzing an associated free-boundary problem consisting of an ordinary differential equation and several free-boundary conditions, we obtain these optimal policy parameters and show the optimality of the proposed policy using a lower-bound approach. Finally, we investigate numerically the effect of the system parameters on the optimal policy parameters and on the system's optimal long-run average cost.
We consider De Finetti’s control problem for absolutely continuous strategies with control rates bounded by a concave function and prove that a generalized mean-reverting strategy is optimal in a Brownian model. In order to solve this problem, we need to deal with a nonlinear Ornstein–Uhlenbeck process. Despite the level of generality of the bound imposed on the rate, an explicit expression for the value function is obtained up to the evaluation of two functions. This optimal control problem has, as special cases, those solved in Jeanblanc-Picqué and Shiryaev (1995) and Renaud and Simard (2021) when the control rate is bounded by a constant and a linear function, respectively.
We establish higher moment formulae for Siegel transforms on the space of affine unimodular lattices as well as on certain congruence quotients of $\mathrm {SL}_d({\mathbb {R}})$. As applications, we prove functional central limit theorems for lattice point counting for affine and congruence lattices using the method of moments.
This chapter is dedicated to the elementary problem of interactions between a single particle and the surrounding fluid. First, we explore the drag force, the most common such interaction, showing how it is derived and applied in practice. This topic is expanded upon by introducing the Basset force and the added-mass force, both crucial in unsteady cases such as accelerating particles. Next, the lift forces (Magnus and Saffman), which may drive the particle's motion in the lateral direction, are presented. To some extent, this is associated with the next issue explained in the chapter: the torque acting on a particle. The following sections pay attention to other interactions: Brownian motion, rarefied gases, and the thermophoretic force. These interactions play a role for tiny particles, perhaps of nano-size. Finally, we consider heat effects arising when the particle and the fluid have different temperatures; this last section scrutinises convective and radiative heat transfer.
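For reference, the most common of these interactions, the Stokes drag on a small sphere at low Reynolds number, reads
$$\mathbf{F}_d = -6\pi\mu r\,(\mathbf{u}_p - \mathbf{u}_f),$$
where $\mu$ is the fluid's dynamic viscosity, $r$ the particle radius, and $\mathbf{u}_p - \mathbf{u}_f$ the particle velocity relative to the fluid.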
In the classical gambler’s ruin problem, the gambler plays an adversary with initial capitals z and $a-z$, respectively, where $a>0$ and $0< z < a$ are integers. At each round, the gambler wins or loses a dollar with probabilities p and $1-p$. The game continues until one of the two players is ruined. For even a and $0<z\leq {a}/{2}$, the family of distributions of the duration (total number of rounds) of the game indexed by $p \in [0,{\frac{1}{2}}]$ is shown to have monotone (increasing) likelihood ratio, while for ${a}/{2} \leq z<a$, the family of distributions of the duration indexed by $p \in [{\frac{1}{2}}, 1]$ has monotone (decreasing) likelihood ratio. In particular, for $z={a}/{2}$, in terms of the likelihood ratio order, the distribution of the duration is maximized over $p \in [0,1]$ by $p={\frac{1}{2}}$. The case of odd a is also considered in terms of the usual stochastic order. Furthermore, as a limit, the first exit time of Brownian motion is briefly discussed.
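The duration distribution discussed above can be computed exactly by evolving the state distribution (a minimal sketch of our own, not the paper's method), which makes the likelihood-ratio comparisons easy to inspect numerically:

import numpy as np

def duration_distribution(a, z, p, n_max=2000):
    # P(duration = n) for gambler's ruin with capitals z and a - z and
    # win probability p, computed by propagating the interior distribution.
    probs = np.zeros(a + 1)
    probs[z] = 1.0
    out = []
    for _ in range(n_max):
        new = np.zeros(a + 1)
        new[2 : a + 1] += p * probs[1:a]         # wins from interior states
        new[0 : a - 1] += (1 - p) * probs[1:a]   # losses from interior states
        out.append(new[0] + new[a])              # mass absorbed this round
        new[0] = new[a] = 0.0
        probs = new
    return np.array(out)

# For z = a/2, the ratio of the p = 0.5 to the p = 0.4 duration probabilities
# should be nondecreasing in n (where both are positive).
d_fair = duration_distribution(a=10, z=5, p=0.5)
d_biased = duration_distribution(a=10, z=5, p=0.4)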
In this chapter we present dynamical systems and their probabilistic description. We distinguish between system descriptions with discrete and continuous state spaces, as well as discrete and continuous time. We formulate examples of statistical models including Markov models, Markov jump processes, and stochastic differential equations. In doing so, we describe fundamental equations governing the evolution of the probability of dynamical systems, including the master equation, the Langevin equation, and the Fokker–Planck equation. We also present sampling methods, such as the Gillespie algorithm, for simulating realizations of a stochastic dynamical process. We end with case studies relevant to chemistry and physics.
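As a minimal sketch of the master equation mentioned above (our own example for an assumed birth–death process, complementary to trajectory sampling with the Gillespie algorithm): truncate the state space and integrate the coupled probability ODEs directly.

import numpy as np
from scipy.integrate import solve_ivp

k1, k2, n_max = 10.0, 0.1, 200        # birth rate, per-capita death rate, cutoff

def master_rhs(t, P):
    # dP_n/dt = k1 P_{n-1} + k2 (n+1) P_{n+1} - (k1 + k2 n) P_n
    n = np.arange(n_max + 1)
    dP = -(k1 + k2 * n) * P
    dP[1:] += k1 * P[:-1]
    dP[:-1] += k2 * n[1:] * P[1:]
    return dP

P0 = np.zeros(n_max + 1)
P0[0] = 1.0                           # start with zero molecules
sol = solve_ivp(master_rhs, (0.0, 100.0), P0, rtol=1e-8)
P = sol.y[:, -1]
# With the cutoff well above the mean, truncation leakage is negligible and
# the stationary mean approaches k1 / k2 = 100.
print(P @ np.arange(n_max + 1))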