
6 - Time evolution and finite Markov chains

Published online by Cambridge University Press:  05 July 2014

Ubaldo Garibaldi
Affiliation:
Università degli Studi di Genova
Enrico Scalas
Affiliation:
Università degli Studi del Piemonte Orientale Amedeo Avogadro

Summary

Up to now, the index set I of a stochastic process could indifferently denote either successive draws from an urn or a time step in a time evolution. In this chapter, a probabilistic dynamics for n objects divided into g categories is defined. The simplest case is Markovian dynamics in discrete time, and so this chapter is devoted to the theory of finite Markov chains.
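As a minimal sketch of the kind of object studied in this chapter (an illustrative example, not taken from the book): a finite Markov chain in discrete time is specified by a stochastic transition matrix P, and the distribution over states evolves by right-multiplication, p_{t+1} = p_t P. The 3-state matrix below is an arbitrary choice for illustration.

```python
import numpy as np

# A minimal discrete-time Markov chain on 3 states (illustrative
# transition matrix; each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# One step of the evolution: p_{t+1} = p_t P.
p0 = np.array([1.0, 0.0, 0.0])  # start with certainty in state 0
p1 = p0 @ P
print(p1)  # [0.5 0.3 0.2], i.e. the first row of P

# Iterating many steps: for this irreducible aperiodic chain, the
# distribution approaches the invariant distribution pi = pi P.
p = p0
for _ in range(100):
    p = p @ P
print(np.allclose(p, p @ P))  # True: p is (numerically) invariant
```

Note that starting from a deterministic state, one step of the chain simply reads off the corresponding row of P; repeated iteration washes out the initial condition.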

After studying this chapter you should be able to:

  1. define Markov chains;

  2. discuss some properties of Markov chains such as irreducibility, periodicity, stationarity and reversibility;

  3. determine the invariant distribution for Markov chains either by means of the Markov equation or by using Monte Carlo simulations;

  4. understand the meaning of statistical equilibrium for aperiodic irreducible Markov chains;

  5. use the detailed balance principle to compute the invariant distribution, if possible;

  6. exemplify the above properties using the Ehrenfest urn as a prototypical Markov chain.
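Several of the objectives above can be illustrated together on the Ehrenfest urn. In this hedged numerical sketch (not code from the book; n = 10 is an arbitrary illustrative size), the state k counts the balls in one of two urns, a uniformly chosen ball moves to the other urn at each step, the invariant distribution is obtained from the Markov equation pi = pi P, and detailed balance is checked against the known binomial solution.

```python
import numpy as np
from math import comb

n = 10  # number of balls (illustrative choice)

# Ehrenfest urn: state k = balls in urn A; a uniformly chosen ball
# moves to the other urn, so P(k, k-1) = k/n and P(k, k+1) = (n-k)/n.
P = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    if k > 0:
        P[k, k - 1] = k / n
    if k < n:
        P[k, k + 1] = (n - k) / n

# Invariant distribution from the Markov equation pi = pi P, iterated
# from a uniform start. The Ehrenfest chain has period 2, so we average
# two successive steps (the "lazy" chain), which converges to the same
# invariant distribution.
pi = np.full(n + 1, 1.0 / (n + 1))
for _ in range(5000):
    pi = 0.5 * (pi + pi @ P)

# The known invariant distribution is binomial(n, 1/2).
binom = np.array([comb(n, k) / 2 ** n for k in range(n + 1)])
print(np.allclose(pi, binom, atol=1e-8))  # True

# Detailed balance: pi_k P(k, k+1) == pi_{k+1} P(k+1, k) for all k.
flows = [binom[k] * P[k, k + 1] - binom[k + 1] * P[k + 1, k]
         for k in range(n)]
print(max(abs(f) for f in flows) < 1e-12)  # True
```

The periodicity of the Ehrenfest chain is worth noting: the raw iteration pi @ P oscillates between even and odd states, which is why the sketch averages successive steps before reading off the invariant distribution.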

6.1 From kinematics to Markovian probabilistic dynamics

6.1.1 Kinematics

Consider a population composed of n elements and g categories that partition the variate under study. In economics, the n elements may represent economic agents and the g categories can be seen as strategies, but other interpretations are possible. For instance, one can speak of n individuals working in g factories or n firms active in g economic sectors, and so on. In physics, the n elements may be particles and the g categories energy levels or, following Boltzmann, one can consider n energy elements to be divided into g particles, etc.
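The two standard ways of describing such a population can be sketched in a few lines (an illustrative example with arbitrary sizes n = 6, g = 3, not code from the book): the individual description records the category of each element, while the occupation vector (n_1, ..., n_g) records only how many elements sit in each category.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed for a reproducible illustration

n, g = 6, 3  # n elements divided into g categories (illustrative sizes)

# Individual description: which category each element occupies.
X = [random.randrange(g) for _ in range(n)]

# Occupation vector (n_1, ..., n_g): how many elements per category.
counts = Counter(X)
occupation = [counts.get(j, 0) for j in range(g)]

print(X)           # e.g. a list of n category labels in 0..g-1
print(occupation)  # g occupation numbers, summing to n
assert sum(occupation) == n
```

Passing from X to the occupation vector discards the identity of the elements; this loss of information is exactly what distinguishes the two levels of description used throughout the chapter.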

Publisher: Cambridge University Press
Print publication year: 2010


References

P. Ehrenfest and T. Ehrenfest, Begriffliche Grundlagen der statistischen Auffassung in der Mechanik, in F. Klein and C. Müller (eds.), Encyclopädie der Mathematischen Wissenschaften mit Einschluß ihrer Anwendungen 4, 3-90, Teubner, Leipzig (1911). English translation: P. Ehrenfest and T. Ehrenfest, The Conceptual Foundations of the Statistical Approach in Mechanics, Dover, New York (1990).
M. Kac, Probability and Related Topics in Physical Sciences, Interscience, London (1959).
H. Wio, An Introduction to Stochastic Processes and Nonequilibrium Statistical Physics, World Scientific, Singapore (1994).
A.A. Markov, Rasprostranenie predel'nyh teorem ischisleniya veroyatnostej na summu velichin svyazannyh v cep', Zapiski Akademii Nauk po Fiziko-matematicheskomu otdeleniyu, VIII seriya, tom 25(3) (1908). English translation: A.A. Markov, Extension of the Limit Theorems of Probability Theory to a Sum of Variables Connected in a Chain, in R.A. Howard (ed.), Dynamic Probabilistic Systems, Volume 1: Markov Chains, Wiley, New York (1971).
G.P. Basharin, A.N. Langville and V.A. Naumov, The Life and Work of A. A. Markov, in A.N. Langville and W.J. Stewart (eds.), Proceedings of the 2003 Conference on the Numerical Solution of Markov Chains, Linear Algebra and its Applications, 386, 3-26 (2004).
S.N. Bernstein, Sur l'extension du théorème du calcul des probabilités aux sommes de quantités dépendantes, Mathematische Annalen, 97, 1-59 (1926).
J.G. Kemeny and J.L. Snell, Finite Markov Chains, Van Nostrand, Princeton, NJ (1960).
P.G. Hoel, S.C. Port and C.J. Stone, Introduction to Stochastic Processes, Houghton Mifflin, Boston, MA (1972).
P. Suppes, Weak and Strong Reversibility of Causal Processes, in M.C. Gallavotti, P. Suppes and D. Costantini (eds.), Stochastic Causality, CSLI Publications, Stanford, CA (2001).
S.P. Meyn and R.L. Tweedie, Markov Chains and Stochastic Stability, Springer, Berlin (1993).
S.P. Meyn and R.L. Tweedie, Markov Chains and Stochastic Stability, 2nd edition, Cambridge University Press, Cambridge, UK (2009).
O. Penrose, Foundations of Statistical Mechanics, Pergamon Press, Oxford (1970).
