• Print publication year: 2005
• Online publication date: June 2012

7 - Markov Chains and Random Walks

Summary

Markov chains provide a simple but powerful framework for modeling random processes. We start this chapter with the basic definitions related to Markov chains and then show how Markov chains can be used to analyze simple randomized algorithms for the 2-SAT and 3-SAT problems. Next we study the long-term behavior of Markov chains, explaining the classifications of states and conditions for convergence to a stationary distribution. We apply these techniques to analyzing simple gambling schemes and a discrete version of a Markovian queue. Of special interest is the limiting behavior of random walks on graphs. We prove bounds on the covering time of a graph and use this bound to develop a simple randomized algorithm for the st connectivity problem. Finally, we apply Markov chain techniques to resolve a subtle probability problem known as Parrondo's paradox.
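The randomized 2-SAT algorithm mentioned above can be viewed as a random walk on the number of variables that agree with some satisfying assignment. As a minimal illustrative sketch (the function name, clause encoding, and step bound below are choices made for this example, not the book's notation): start from an arbitrary truth assignment, and while some clause is unsatisfied, pick an unsatisfied clause and flip a uniformly random one of its two variables.

```python
import random

def randomized_2sat(clauses, n, max_steps=None):
    """Randomized 2-SAT solver sketch (random-walk style).

    clauses: list of 2-tuples of nonzero ints; literal k means variable
             abs(k) must be True if k > 0, False if k < 0.
    n: number of variables, named 1..n.
    Returns a satisfying assignment dict, or None if none was found
    within max_steps flips.
    """
    if max_steps is None:
        max_steps = 2 * n * n  # O(n^2) expected flips suffice for satisfiable formulas
    assign = {v: random.choice([True, False]) for v in range(1, n + 1)}
    for _ in range(max_steps):
        unsat = [c for c in clauses
                 if not any((lit > 0) == assign[abs(lit)] for lit in c)]
        if not unsat:
            return assign  # every clause is satisfied
        # Flip a uniformly random variable of a random unsatisfied clause.
        lit = random.choice(random.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None
```

The analysis in the chapter shows that, for a satisfiable formula, the "agreement with a fixed satisfying assignment" behaves at least as well as an unbiased random walk, so O(n²) flips succeed with high probability.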

Markov Chains: Definitions and Representations

A stochastic process X = {X(t): t ∈ T} is a collection of random variables. The index t often represents time, and in that case the process X models the value of a random variable X that changes over time.

We call X(t) the state of the process at time t. In what follows, we use X_t interchangeably with X(t). If, for all t, X_t assumes values from a countably infinite set, then we say that X is a discrete space process. If X_t assumes values from a finite set, then the process is a finite process.
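A finite process of this kind is easy to picture by simulation. The sketch below (the function name, dictionary-based transition encoding, and two-state example are choices made for illustration) draws a trajectory X_0, X_1, ..., X_t of a finite-state Markov chain from its transition probabilities:

```python
import random

def simulate_chain(P, start, steps, rng=random):
    """Simulate a finite-state Markov chain.

    P: dict mapping each state to a list of (next_state, probability)
       pairs whose probabilities sum to 1.
    start: the initial state X_0.
    steps: number of transitions to take.
    Returns the trajectory [X_0, X_1, ..., X_steps].
    """
    traj = [start]
    state = start
    for _ in range(steps):
        r = rng.random()
        cumulative = 0.0
        for nxt, p in P[state]:
            cumulative += p
            if r < cumulative:
                state = nxt
                break
        traj.append(state)
    return traj
```

For example, a two-state chain with P = {'A': [('A', 0.5), ('B', 0.5)], 'B': [('A', 1.0)]} takes values in the finite set {'A', 'B'}, so it is a finite process; its state at time t depends only on the state at time t - 1, which is the Markov property studied in the rest of the chapter.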
