
Stationary measures of continuous-time Markov chains with applications to stochastic reaction networks

Published online by Cambridge University Press: 14 May 2025

Mads Chr. Hansen*
Affiliation:
University of Copenhagen
Carsten Wiuf*
Affiliation:
University of Copenhagen
Chuang Xu*
Affiliation:
University of Hawai’i at Mānoa
*Postal address: University of Copenhagen, Universitetsparken 5, 2150 Copenhagen, Denmark.
**Postal address: University of Hawai’i at Mānoa, Honolulu, HI 96822, USA. Email: chuangxu@hawaii.edu

Abstract

We study continuous-time Markov chains on the nonnegative integers under mild regularity conditions (in particular, the set of jump vectors is finite and both forward and backward jumps are possible). Based on the so-called flux balance equation, we derive an iterative formula for calculating stationary measures. Specifically, a stationary measure $\pi(x)$ evaluated at $x\in\mathbb{N}_0$ is represented as a linear combination of a few generating terms, similarly to the characterization of a stationary measure of a birth–death process, where there is only one generating term, $\pi(0)$. The coefficients of the linear combination are recursively determined in terms of the transition rates of the Markov chain. For the class of Markov chains we consider, there is always at least one stationary measure (up to a scaling constant). We give various results pertaining to uniqueness and nonuniqueness of stationary measures, and show that the dimension of the linear space of signed invariant measures is at most the number of generating terms. A minimization problem is constructed in order to compute stationary measures numerically. Moreover, a heuristic linear approximation scheme is suggested for the same purpose by first approximating the generating terms. The correctness of the linear approximation scheme is justified in some special cases. Furthermore, a decomposition of the state space into different types of states (open and closed irreducible classes, and trapping, escaping and neutral states) is presented. The results are illustrated with applications to stochastic reaction networks.
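As a point of reference for the single-generating-term case mentioned in the abstract, the sketch below computes the stationary measure of a generic birth–death chain directly from the flux balance relation across the cut between $x$ and $x+1$. It illustrates only this classical special case, not the paper's general iterative formula, and the rate functions in the example are hypothetical choices.

```python
# Minimal sketch of the classical birth-death special case: flux balance across
# the cut between x and x+1 gives pi(x) * birth(x) = pi(x+1) * death(x+1),
# so the whole measure is determined by the single generating term pi(0).
# The rate functions used below are hypothetical, for illustration only.

def birth_death_stationary_measure(birth, death, x_max):
    """Return the unnormalised measure pi(0), ..., pi(x_max) with pi(0) = 1."""
    pi = [1.0]                                   # generating term, fixed up to scaling
    for x in range(x_max):
        pi.append(pi[-1] * birth(x) / death(x + 1))
    return pi

# Example with constant rates (an M/M/1-type chain): the measure is geometric.
lam, mu = 1.0, 2.0
pi = birth_death_stationary_measure(lambda x: lam, lambda x: mu, x_max=10)
total = sum(pi)
dist = [p / total for p in pi]                   # normalisation on the truncation {0, ..., 10}
```

When the total mass of the measure is finite, as in this example, normalising it yields the stationary distribution; otherwise the chain admits only a stationary measure, not a stationary distribution.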

Information

Type
Original Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust
