Having seen that random walks, chain reactions, and recurrent events are all Markov chains, i.e., correlated processes without memory, in this chapter we develop the general theory, including the classification and properties of individual states and of chains. In particular, we focus on the building blocks of the theory, namely irreducible chains, presenting and proving a number of fundamental and useful theorems. We conclude by deriving the balance equation for the limit probability and the approach to the limit at long times, developing and applying the Perron–Frobenius theory for non-negative matrices and the spectral decomposition for non-Hermitian matrices. Among the applications of the theory, we highlight the ranking of Web pages by search engines.
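The search-engine application mentioned above can be illustrated with a minimal power-iteration sketch (not the chapter's own code): the link graph, the four pages, and the damping factor `d = 0.85` are all made-up assumptions chosen so the Google matrix is positive and Perron–Frobenius applies.

```python
import numpy as np

# Hypothetical 4-page link graph: key j lists the pages that page j links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85  # d is the usual damping factor (illustrative choice)

# Column-stochastic transition matrix of the random-surfer Markov chain.
P = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        P[i, j] = 1.0 / len(outs)

# The Google matrix G is strictly positive, so by Perron-Frobenius its
# leading eigenvalue is 1 and power iteration converges to the unique
# stationary (limit) probability vector.
G = d * P + (1 - d) / n * np.ones((n, n))
pi = np.full(n, 1.0 / n)
for _ in range(100):
    pi = G @ pi

assert np.allclose(G @ pi, pi)   # balance equation: pi is stationary
assert np.isclose(pi.sum(), 1.0)
```

The entries of `pi` give the page ranking; the balance equation checked at the end is exactly the stationarity condition derived in the chapter.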
We consider the problem of sequential matching in a stochastic block model with several classes of nodes and generic compatibility constraints. When the connection probabilities do not scale with the size of the graph, we show that under the Ncond condition a simple max-weight type policy attains an asymptotically perfect matching, while no sequential algorithm attains a perfect matching otherwise. The proof relies on a specific Markovian representation of the dynamics, combined with Lyapunov techniques.
The gambler’s ruin problem for correlated random walks (CRWs), both with and without delays, is addressed using the optional stopping theorem for martingales. We derive closed-form expressions for the ruin probabilities and the expected game duration for CRWs with increments $\{1,-1\}$ and for symmetric CRWs with increments $\{1,0,-1\}$ (CRWs with delays). Additionally, a martingale technique is developed for general CRWs with delays. The gambler’s ruin probability for a game involving bets on two arbitrary patterns is also examined.
We consider a stochastic model, called the replicator coalescent, describing a system of blocks of k different types that undergo pairwise mergers at rates depending on the block types: with rate $C_{ij}\geq 0$ blocks of type i and j merge, resulting in a single block of type i. The replicator coalescent can be seen as a generalisation of Kingman’s coalescent death chain in a multi-type setting, although without an underpinning exchangeable partition structure. The name is derived from a remarkable connection between the instantaneous dynamics of this multi-type coalescent when issued from an arbitrarily large number of blocks, and the so-called replicator equations from evolutionary game theory. By dilating time arbitrarily close to zero, we see that initially, on coming down from infinity, the replicator coalescent behaves like the solution to a certain replicator equation. Thereafter, stochastic effects are felt and the process evolves more in the spirit of a multi-type death chain.
For many systems, the full information of an underlying Markovian description is not accessible due to limited spatial or temporal resolution. We first show that such an often inevitable coarse-graining implies that, rather than the full entropy production, only a lower bound can be retrieved from coarse-grained data. As a technical tool, it is derived that the Kullback–Leibler divergence decreases under coarse-graining. For a discrete time-series obtained from an underlying time-continuous Markov dynamics, it is shown how the analysis of n-tuples leads to a better estimate with increasing length of the tuples. Finally, state-lumping as one strategy for coarse-graining an underlying Markov model is shown explicitly to yield a lower bound for the entropy production. However, in general, it does not yield a consistent interpretation of the first law along coarse-grained trajectories, as exemplified with a simple model.
Consider nested subdivisions of a bounded real set into intervals defining the digits $X_1,X_2,\ldots$ of a random variable X with a probability density function f. If f is almost everywhere lower semi-continuous, there is a non-negative integer-valued random variable N such that the distribution of $R=(X_{N+1},X_{N+2},\ldots)$ conditioned on $S=(X_1,\ldots,X_N)$ does not depend on f. If also the lengths of the intervals exhibit a Markovian structure, $R\mid S$ becomes a Markov chain of a certain order $s\ge0$. If $s=0$ then $X_{N+1},X_{N+2},\ldots$ are independent and identically distributed with a known distribution. When $s>0$ and the Markov chain is uniformly geometric ergodic, there is a random time M such that the chain after time $\max\{N,s\}+M-s$ is stationary and M follows a simple known distribution.
This text examines Markov chains whose drift tends to zero at infinity, a topic sometimes labelled as 'Lamperti's problem'. It can be considered a subcategory of random walks, which are helpful in studying stochastic models like branching processes and queueing systems. Drawing on Doob's h-transform and other tools, the authors present novel results and techniques, including a change-of-measure technique for near-critical Markov chains. The final chapter presents a range of applications where these special types of Markov chains occur naturally, featuring a new risk process with surplus-dependent premium rate. This will be a valuable resource for researchers and graduate students working in probability theory and stochastic processes.
We consider the moments and the distribution of hitting times on the lollipop graph, which is the graph exhibiting the maximum expected hitting time among all graphs with the same number of nodes. We obtain recurrence relations for the moments of all orders, and we use these relations to analyze the asymptotic behavior of the hitting-time distribution as the number of nodes tends to infinity.
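For a concrete picture of the quantity studied above, expected hitting times on a small lollipop graph (a clique with a path attached) can be computed by solving the standard first-step linear system; this is a brute-force numerical check, not the paper's recurrence-relation method, and the sizes `m` and `l` are arbitrary illustrative choices.

```python
import numpy as np

# Lollipop graph: clique K_m on vertices 0..m-1, path of length l attached
# at vertex m-1, ending at vertex n-1.
m, l = 5, 5
n = m + l
A = np.zeros((n, n))
for i in range(m):                 # clique edges
    for j in range(i + 1, m):
        A[i, j] = A[j, i] = 1
A[m - 1, m] = A[m, m - 1] = 1      # attach path to clique vertex m-1
for i in range(m, n - 1):          # path edges
    A[i, i + 1] = A[i + 1, i] = 1

# Simple random walk; expected hitting times of the target solve
# h_i = 1 + sum_j P_ij h_j, with h_target = 0.
P = A / A.sum(axis=1, keepdims=True)
target = n - 1                     # end of the path: the worst-case target
keep = [i for i in range(n) if i != target]
h = np.zeros(n)
h[keep] = np.linalg.solve(np.eye(n - 1) - P[np.ix_(keep, keep)],
                          np.ones(n - 1))

# The maximum expected hitting time is attained from inside the clique.
assert np.isclose(h[0], h.max())
```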
For a continuous-time phase-type (PH) distribution, starting with its Laplace–Stieltjes transform, we obtain a necessary and sufficient condition for its minimal PH representation to have the same order as its algebraic degree. To facilitate finding this minimal representation, we transform this condition into an equivalent non-convex optimization problem, which can be effectively addressed using an alternating minimization algorithm whose convergence is also proved. Moreover, the method we develop for continuous-time PH distributions can be used directly for discrete-time PH distributions after establishing an equivalence between the minimal representation problems for continuous-time and discrete-time PH distributions.
A Markov chain with transition probabilities pij(θ) that are functions of a parameter vector θ is defined. One of t input values is delivered to the chain on each trial. Under the hypothesis H0 the parameter vector is independent of the input; under the hypothesis H1 the vector is in general different for each different input. A likelihood ratio test for a single observation on a chain of great length is given for testing H0 against H1, given that the distribution of the inputs depends at most on the previous input and the present state of the chain. The test is therefore one of stationarity of the transition probabilities against a specific alternative form of nonstationarity. Application of the approach to a statistical test of lumpability of the states of a chain is indicated. Tests for other related hypotheses are suggested. Application of the test to “within-subject effects” for individual subjects is considered. Finally, some applications of the results in psychophysical contexts are suggested.
The embedding problem of Markov chains examines whether a stochastic matrix $\mathbf{P}$ can arise as the transition matrix from time 0 to time 1 of a continuous-time Markov chain. When the chain is homogeneous, it checks if $\mathbf{P}=\exp(\mathbf{Q})$ for a rate matrix $\mathbf{Q}$ with zero row sums and non-negative off-diagonal elements, called a Markov generator. It is known that a Markov generator may not always exist or be unique. This paper addresses finding $\mathbf{Q}$, assuming that the process has at most one jump per unit time interval, and focuses on the problem of aligning the conditional one-jump transition matrix from time 0 to time 1 with $\mathbf{P}$. We derive a formula for this matrix in terms of $\mathbf{Q}$ and establish that for any $\mathbf{P}$ with non-zero diagonal entries, a unique $\mathbf{Q}$, called the ${\unicode{x1D7D9}}$-generator, exists. We compare the ${\unicode{x1D7D9}}$-generator with the one-jump rate matrix from Jarrow, Lando, and Turnbull (1997), showing which is a better approximate Markov generator of $\mathbf{P}$ in some practical cases.
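For intuition on the homogeneous embedding condition, the 2×2 case admits a closed-form check, since the principal matrix logarithm of a 2×2 stochastic matrix can be written explicitly. The matrix below is a made-up example; this sketch illustrates only the classical condition, not the paper's one-jump construction.

```python
import numpy as np

# Made-up 2x2 stochastic matrix with non-zero diagonal entries.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Its eigenvalues are 1 and t = trace(P) - 1.  If 0 < t <= 1, then
# Q = (log t / (t - 1)) * (P - I) is a Markov generator with exp(Q) = P.
t = np.trace(P) - 1.0
Q = (np.log(t) / (t - 1.0)) * (P - np.eye(2))

# Q has zero row sums and non-negative off-diagonal entries ...
assert np.allclose(Q.sum(axis=1), 0.0)
assert Q[0, 1] >= 0 and Q[1, 0] >= 0

# ... and the matrix exponential of Q (Taylor series with
# scaling-and-squaring) recovers P.
def expm(A, n_terms=30, squarings=10):
    A = A / 2.0**squarings
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, n_terms):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

assert np.allclose(expm(Q), P, atol=1e-8)
```

The identity follows from the Cayley–Hamilton relation $(\mathbf{P}-\mathbf{I})^2=(t-1)(\mathbf{P}-\mathbf{I})$, which collapses the exponential series to $\mathbf{I}+(\mathbf{P}-\mathbf{I})=\mathbf{P}$.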
We study the quasi-ergodicity of compact strong Feller semigroups $U_t$, $t > 0$, on $L^2(M,\mu)$; we assume that M is a locally compact Polish space equipped with a locally finite Borel measure $\mu$. The operators $U_t$ are ultracontractive and positivity preserving, but not necessarily self-adjoint or normal. We are mainly interested in those cases where the measure $\mu$ is infinite and the semigroup is not intrinsically ultracontractive. We relate quasi-ergodicity on $L^p(M,\mu)$ and uniqueness of the quasi-stationary measure with the finiteness of the heat content of the semigroup (for large values of t) and with the progressive uniform ground state domination property. The latter property is equivalent to a variant of quasi-ergodicity which progressively propagates in space as $t \uparrow \infty$; the propagation rate is determined by the decay of . We discuss several applications and illustrate our results with examples. This includes a complete description of quasi-ergodicity for a large class of semigroups corresponding to non-local Schrödinger operators with confining potentials.
We review criteria for comparing the efficiency of Markov chain Monte Carlo (MCMC) methods with respect to the asymptotic variance of estimates of expectations of functions of state, and show how such criteria can justify ways of combining improvements to MCMC methods. We say that a chain on a finite state space with transition matrix P efficiency-dominates one with transition matrix Q if for every function of state it has lower (or equal) asymptotic variance. We give elementary proofs of some previous results regarding efficiency dominance, leading to a self-contained demonstration that a reversible chain with transition matrix P efficiency-dominates a reversible chain with transition matrix Q if and only if none of the eigenvalues of $Q-P$ is negative. This allows us to conclude that modifying a reversible MCMC method to improve its efficiency will also improve the efficiency of a method that randomly chooses either this or some other reversible method, and that improving the efficiency of a reversible update for one component of state (as in Gibbs sampling) will improve the overall efficiency of a reversible method that combines this and other updates. It also explains how antithetic MCMC can be more efficient than independent and identically distributed sampling. We also establish conditions that can guarantee that a method is not efficiency-dominated by any other method.
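The eigenvalue criterion stated above can be checked directly on a toy example. The two-state kernels below are illustrative (both symmetric, hence reversible with respect to the uniform distribution): P is a faster-mixing version of the lazier chain Q.

```python
import numpy as np

# Two reversible chains with the same (uniform) stationary distribution.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # faster-mixing kernel
Q = np.array([[0.7, 0.3],
              [0.3, 0.7]])   # lazier version of the same chain

# Criterion: P efficiency-dominates Q iff Q - P has no negative eigenvalue.
eigs = np.linalg.eigvalsh(Q - P)
dominates = np.all(eigs >= -1e-12)
assert dominates   # P has lower-or-equal asymptotic variance for every f

# A random mixture with another reversible kernel R inherits the
# improvement: the difference of the mixed kernels is a*(Q - P), which
# is still positive semidefinite.
a, R = 0.5, np.eye(2)
mix_diff = (a * Q + (1 - a) * R) - (a * P + (1 - a) * R)
assert np.all(np.linalg.eigvalsh(mix_diff) >= -1e-12)
```

The second assertion is a numerical instance of the combination argument in the abstract: improving one reversible component improves the randomized combination.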
Edited by
R. A. Bailey, University of St Andrews, Scotland, Peter J. Cameron, University of St Andrews, Scotland, and Yaokun Wu, Shanghai Jiao Tong University, China
This is an introduction to representation theory and harmonic analysis on finite groups. This includes, in particular, Gelfand pairs (with applications to diffusion processes à la Diaconis) and induced representations (focusing on the little group method of Mackey and Wigner). We also discuss Laplace operators and spectral theory of finite regular graphs. In the last part, we present the representation theory of GL(2, Fq), the general linear group of invertible 2 × 2 matrices with coefficients in a finite field with q elements. More precisely, we revisit the classical Gelfand–Graev representation of GL(2, Fq) in terms of the so-called multiplicity-free triples and their associated Hecke algebras. The presentation is not fully self-contained: most of the basic and elementary facts are proved in detail, some others are left as exercises, while, for more advanced results with no proof, precise references are provided.
A mathematical discrete-time population model is presented, which leads to a system of two interlinked, or coupled, recurrence equations. We then turn to the general issue of how to solve such systems. One approach is to reduce the two coupled equations to a single second-order equation and solve using the techniques already developed, but there is another more sophisticated way. To this end, we introduce eigenvalues and eigenvectors, show how to find them and explain how they can be used to diagonalise a matrix.
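The diagonalisation approach described above can be sketched numerically on a made-up pair of coupled recurrences (the matrix and the initial values are illustrative, not the chapter's population model):

```python
import numpy as np

# Coupled system  x_{n+1} = 2 x_n + y_n,  y_{n+1} = x_n + 2 y_n,
# i.e. v_{n+1} = A v_n with the illustrative matrix A below.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v0 = np.array([1.0, 0.0])

# Diagonalise: A = S D S^{-1}, so A^n = S D^n S^{-1}; the coupled system
# becomes two independent geometric sequences in the eigenvector basis.
lam, S = np.linalg.eigh(A)           # A is symmetric, so eigh applies
n = 10
vn = S @ np.diag(lam**n) @ S.T @ v0  # S^{-1} = S.T since S is orthonormal

# Cross-check against brute-force iteration of the recurrence.
w = v0.copy()
for _ in range(n):
    w = A @ w
assert np.allclose(vn, w)
```

The eigenvalues here are 3 and 1, so the solution grows like $3^n$ along one eigenvector, exactly the behaviour the diagonalisation makes explicit.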
We consider a Poisson autoregressive process whose parameters depend on the past of the trajectory. We allow these parameters to take negative values, modelling inhibition. More precisely, the model is the stochastic process $(X_n)_{n\ge0}$ with parameters $a_1,\ldots,a_p \in \mathbb{R}$, $p\in\mathbb{N}$, and $\lambda \ge 0$, such that, for all $n\ge p$, conditioned on $X_0,\ldots,X_{n-1}$, $X_n$ is Poisson distributed with parameter $(a_1 X_{n-1} + \cdots + a_p X_{n-p} + \lambda)_+$. This process can be regarded as a discrete-time Hawkes process with inhibition and a memory of length p. In this paper we initiate the study of necessary and sufficient conditions of stability for these processes, which seems to be a hard problem in general. We consider specifically the case $p = 2$, for which we are able to classify the asymptotic behavior of the process for the whole range of parameters, except for boundary cases. In particular, we show that the process remains stochastically bounded whenever the solution to the linear recurrence equation $x_n = a_1x_{n-1} + a_2x_{n-2} + \lambda$ remains bounded, but the converse is not true. Furthermore, the criterion for stochastic boundedness is not symmetric in $a_1$ and $a_2$, in contrast to the case of non-negative parameters, illustrating the complex effects of inhibition.
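A direct simulation of the $p=2$ case described above is straightforward; the parameter values below are illustrative, with $a_2<0$ modelling inhibition and chosen so that the associated linear recurrence is stable.

```python
import numpy as np

# Poisson autoregression with inhibition, p = 2 (illustrative parameters):
# given the past, X_n ~ Poisson((a1*X_{n-1} + a2*X_{n-2} + lam)_+).
rng = np.random.default_rng(0)
a1, a2, lam = 0.4, -0.3, 1.0
n_steps = 10_000

X = np.zeros(n_steps, dtype=int)
X[0], X[1] = rng.poisson(lam), rng.poisson(lam)
for n in range(2, n_steps):
    rate = max(a1 * X[n - 1] + a2 * X[n - 2] + lam, 0.0)  # (.)_+ part
    X[n] = rng.poisson(rate)

# For these parameters the linear recurrence x_n = a1 x_{n-1} + a2 x_{n-2} + lam
# is stable (complex roots of modulus sqrt(0.3) < 1), so by the abstract's
# criterion the simulated path should remain stochastically bounded.
assert X.mean() < 10
```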
In this chapter the basic theory of Markov chains is developed, with a focus on irreducible chains. The transition matrix is introduced, as well as the notions of irreducibility, periodicity, recurrence (null and positive), and transience. The theory is applied to the relationship of a random walk on a group to the random walk on a finite-index subgroup induced by the "hitting measure".
We present a data-driven emulator, a stochastic weather generator (SWG), suitable for estimating probabilities of prolonged heat waves in France and Scandinavia. This emulator is based on the method of analogs of circulation, to which we add temperature and soil moisture as predictor fields. We train the emulator on a run of an intermediate-complexity climate model and show that it is capable of predicting conditional probabilities (forecasting) of heat waves out of sample. Special attention is paid to evaluating this prediction with a proper score appropriate for rare events. To accelerate the computation of analogs, dimensionality reduction techniques are applied and their performance is evaluated. The probabilistic prediction achieved with the SWG is compared with that achieved with a convolutional neural network (CNN). With the availability of hundreds of years of training data, CNNs perform better at the task of probabilistic prediction. In addition, we show that the SWG emulator trained on 80 years of data is capable of estimating extreme return times, of the order of thousands of years, for heat waves longer than several days more precisely than a fit based on the generalized extreme value distribution. Finally, the quality of the synthetic extreme teleconnection patterns obtained with the SWG is studied. We showcase two examples of such synthetic teleconnection patterns for heat waves in France and Scandinavia that compare favorably to the very long climate model control run.
We consider a discrete-time population growth system called the Bienaymé–Galton–Watson stochastic branching system. We deal with the noncritical case, in which the per capita offspring mean $m\neq1$. The famous Kolmogorov theorem asserts that in the subcritical case $m<1$ the expectation of the population size on positive trajectories of the system asymptotically stabilizes and approaches ${1}/\mathcal{K}$, where $\mathcal{K}$ is called the Kolmogorov constant. The paper is devoted to finding an explicit expression for this constant in terms of the structural parameters of the system. Our argument is essentially based on the basic lemma describing the asymptotic expansion of the probability-generating function of the number of individuals. We state this lemma for the noncritical case. Subsequently, we find an extended analogue of the Kolmogorov constant in the noncritical case. An important role in our discussion is also played by the asymptotic properties of transition probabilities of the Q-process and their convergence to invariant measures. Having obtained the explicit form of the extended Kolmogorov constant, we refine several limit theorems of the theory of noncritical branching systems, giving explicit leading terms in the asymptotic expansions.
In the benchmark New Keynesian (NK) model, I introduce the real cost channel to study government spending multipliers and provide simple Markov chain closed-form solutions. This model departs fundamentally from most previous interpretations of the nominal cost channel by flattening the NK Phillips Curve in liquidity traps. At the zero lower bound, I show analytically that following positive government spending shocks, the real cost channel can make inflation rise less than in a model without this channel. This then causes a smaller drop in real interest rates, resulting in a lower output gap multiplier. Finally, I confirm the robustness of the real cost channel’s effect on multipliers using extensions of two models.