
Chapter 6: Dynamics of countable-state Markov models
Summary

Markov processes are useful for modeling a variety of dynamical systems. Often questions involving the long-time behavior of such systems are of interest, such as whether the process has a limiting distribution, or whether time averages constructed using the process are asymptotically the same as statistical averages.

Examples with finite state space

Recall that a probability distribution π on S is an equilibrium probability distribution for a time-homogeneous Markov process X if π = πH(t) for all t. In the discrete-time case, this condition reduces to π = πP. We shall see in this section that under certain natural conditions, the existence of an equilibrium probability distribution is related to whether the distribution of X(t) converges as t → ∞. Existence of an equilibrium distribution is also connected to the mean time needed for X to return to its starting state. To motivate the conditions that will be imposed, we begin by considering four examples of finite state processes. Then the relevant definitions are given for finite or countably infinite state space, and propositions regarding convergence are presented.
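
For a finite state space, the equilibrium condition π = πP can be solved numerically. The sketch below is a minimal illustration, not from the text; the 3×3 transition matrix is hypothetical. It finds π by solving π = πP together with the normalization constraint Σᵢ πᵢ = 1 as a single linear system.

```python
import numpy as np

# Hypothetical one-step transition matrix P for a 3-state chain;
# each row sums to 1 (row i is the distribution of the next state from state i).
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

n = P.shape[0]

# pi = pi P is equivalent to (P^T - I) pi^T = 0.  Stacking the
# normalization constraint sum(pi) = 1 underneath gives a consistent
# overdetermined system, solved here by least squares.
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("equilibrium distribution:", pi)   # -> [0.25 0.5 0.25]
print("check pi P:", pi @ P)             # equals pi, as required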

Example 6.1 Consider the discrete-time Markov process with the one-step probability diagram shown in Figure 6.1. Note that the process can't escape from the set of states S1 = {a, b, c, d, e}, so that if the initial state X(0) is in S1 with probability one, then the limiting distribution is supported by S1. Similarly if the initial state X(0) is in S2 = {f, g, h} with probability one, then the limiting distribution is supported by S2. Thus, the limiting distribution is not unique for this process. The natural way to deal with this problem is to decompose the original problem into two problems. That is, consider a Markov process on S1, and then consider a Markov process on S2.
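
Figure 6.1 is not reproduced here, but the phenomenon is easy to demonstrate numerically. The sketch below uses a hypothetical four-state transition matrix with the same structure as the example, with two closed sets S1 = {0, 1} and S2 = {2, 3}: raising P to a high power shows that the limiting distribution depends on which set the chain starts in.

```python
import numpy as np

# Hypothetical stand-in for Figure 6.1: two closed sets of states,
# S1 = {0, 1} and S2 = {2, 3}, with no transitions between them.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.3, 0.7]])

# Raising P to a high power approximates the t-step transition
# probabilities as t -> infinity.
P_limit = np.linalg.matrix_power(P, 200)

print(P_limit[0])  # start in S1: limit supported by {0, 1}
print(P_limit[2])  # start in S2: limit supported by {2, 3}
```

The two printed rows differ, confirming that the limiting distribution is not unique: each closed set supports its own equilibrium, which is why the problem decomposes into a Markov process on S1 and one on S2.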

Does the distribution of X(t) necessarily converge if X(0) ∈ S1 with probability one? The answer is no: if the states of S1 are visited in a periodic fashion, the distribution of X(t) can oscillate forever rather than converge, as the sketch below illustrates.
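
A minimal sketch of this failure mode, using a hypothetical deterministic cycle rather than the chain of Figure 6.1: the chain 0 → 1 → 2 → 0 has the uniform distribution as its unique equilibrium, yet the distribution of X(t) never converges.

```python
import numpy as np

# A deterministic cycle on three states: 0 -> 1 -> 2 -> 0.
# Every state has period 3, so the distribution of X(t) oscillates
# and never converges, even though pi = (1/3, 1/3, 1/3) is an
# equilibrium distribution (P is doubly stochastic).
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

mu = np.array([1.0, 0.0, 0.0])   # X(0) = 0 with probability one
for t in range(6):
    print(t, mu)                 # cycles through the three unit vectors
    mu = mu @ P
```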
