Markov chains are the simplest mathematical models for random phenomena evolving in time. Their simple structure makes it possible to say a great deal about their behaviour. At the same time, the class of Markov chains is rich enough to serve in many applications. This makes Markov chains the first and most important examples of random processes. Indeed, the whole of the mathematical study of random processes can be regarded as a generalization in one way or another of the theory of Markov chains.
This book is an account of the elementary theory of Markov chains, with applications. It was conceived as a text for advanced undergraduates or master's level students, and is developed from a course taught to undergraduates for several years. There are no strict prerequisites but it is envisaged that the reader will have taken a course in elementary probability. In particular, measure theory is not a prerequisite.
The first half of the book is based on lecture notes for the undergraduate course. Illustrative examples introduce many of the key ideas. Careful proofs are given throughout. There is a selection of exercises, which forms the basis of classwork done by the students, and which has been tested over several years. Chapter 1 deals with the theory of discrete-time Markov chains, and is the basis of all that follows. You must begin here. The material is quite straightforward and the ideas introduced permeate the whole book.
In the first three chapters we have given an account of the elementary theory of Markov chains. This already covers a great many applications, but is just the beginning of the theory of Markov processes. The further theory inevitably involves more sophisticated techniques which, although having their own interest, can obscure the overall structure. On the other hand, the overall structure is, to a large extent, already present in the elementary theory. We therefore thought it worthwhile to discuss some features of the further theory in the context of simple Markov chains, namely martingales, potential theory, electrical networks and Brownian motion. The idea is that the Markov chain case serves as a guiding metaphor for more complicated processes. So the reader familiar with Markov chains may find this chapter helpful alongside more general higher-level texts. At the same time, further insight is gained into Markov chains themselves.
Martingales
A martingale is a process whose average value remains constant in a particular strong sense, which we shall make precise shortly. This is a sort of balancing property. Often, the identification of martingales is a crucial step in understanding the evolution of a stochastic process.
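To anticipate the definition made precise below, the standard statement for a discrete-time process $(M_n)_{n \ge 0}$ with finite means is

$$\mathbb{E}(M_{n+1} \mid M_0, M_1, \ldots, M_n) = M_n \quad \text{for all } n \ge 0,$$

so that in particular $\mathbb{E}(M_n) = \mathbb{E}(M_0)$ for every $n$: on average, given the whole past, the process neither rises nor falls.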
Applications of Markov chains arise in many different areas. Some have already appeared to illustrate the theory, from games of chance to the evolution of populations, from calculating the fair price for a random reward to calculating the probability that an absent-minded professor is caught without an umbrella. In a real-world problem involving random processes you should always look for Markov chains. They are often easy to spot. Once a Markov chain is identified, there is a qualitative theory which limits the sorts of behaviour that can occur – we know, for example, that every state is either recurrent or transient. There are also good computational methods – for hitting probabilities and expected rewards, and for long-run behaviour via invariant distributions.
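To give a concrete flavour of these computational methods, here is a minimal numerical sketch; the three-state transition matrix and the use of NumPy are illustrative assumptions, not taken from the text.

```python
import numpy as np

# A hypothetical transition matrix on states {0, 1, 2} (rows sum to 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Invariant distribution: solve pi P = pi with pi summing to 1,
# i.e. a left eigenvector of P for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()
print("invariant distribution:", pi)

# Probability of hitting state 2 before state 0, started from state 1:
# h(0) = 0, h(2) = 1, and h(1) = sum_j P[1, j] * h(j) for the interior state.
# With a single interior state this reduces to one linear equation.
h1 = P[1, 2] / (P[1, 0] + P[1, 2])  # condition on the first move that leaves state 1
print("P(hit 2 before 0 | start at 1):", h1)
```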
In this chapter we shall look at five areas of application in detail: biological models, queueing models, resource management models, Markov decision processes and Markov chain Monte Carlo. In each case our aim is to provide an introduction rather than a systematic account or survey of the field. References to books for further reading are given in each section.
Markov chains in biology
Randomness is often an appropriate model for systems of high complexity, such as are often found in biology. We have already illustrated some aspects of the theory by simple models with a biological interpretation. See Example 1.1.5 (virus), Exercise 1.1.6 (octopus), Example 1.3.4 (birth-and-death chain) and Exercise 2.5.1 (bacteria).
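As a reminder of the flavour of such models, the following is a minimal simulation sketch of a simple birth-and-death chain; the jump probabilities are invented for illustration and are not the parameters of the examples cited above.

```python
import random

def birth_death_step(n, p=0.3, q=0.4):
    """One step of a hypothetical birth-and-death chain on {0, 1, 2, ...}:
    0 is absorbing; from n >= 1 the chain moves up with probability p,
    down with probability q, and stays put otherwise."""
    if n == 0:
        return 0
    u = random.random()
    if u < p:
        return n + 1
    if u < p + q:
        return n - 1
    return n

# Simulate one path started from 5 and report whether it dies out within 1000 steps.
n = 5
for _ in range(1000):
    n = birth_death_step(n)
print("extinct within 1000 steps:", n == 0)
```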
The material on continuous-time Markov chains is divided between this chapter and the next. The theory takes some time to set up, but once up and running it follows a very similar pattern to the discrete-time case. To emphasise this we have put the setting-up in this chapter and the rest in the next. If you wish, you can begin with Chapter 3, provided you take certain basic properties on trust, which are reviewed in Section 3.1. The first three sections of Chapter 2 fill in some necessary background information and are independent of each other. Section 2.4 on the Poisson process and Section 2.5 on birth processes provide a gentle warm-up for general continuous-time Markov chains. These processes are simple and particularly important examples of continuous-time chains. Sections 2.6–2.8, especially 2.8, deal with the heart of the continuous-time theory. There is an irreducible level of difficulty at this point, so we advise that Sections 2.7 and 2.8 are read selectively at first. Some examples of more general processes are given in Section 2.9. As in Chapter 1 the exercises form an important part of the text.
Q-matrices and their exponentials
In this section we shall discuss some of the basic properties of Q-matrices and explain their connection with continuous-time Markov chains.
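As a numerical illustration of that connection, here is a minimal sketch computing the transition semigroup $P(t) = e^{tQ}$; the two-state Q-matrix and the use of SciPy are assumptions made for illustration only.

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical Q-matrix on two states: off-diagonal entries are
# non-negative jump rates and each row sums to zero.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

# The transition matrix at time t is the matrix exponential P(t) = exp(tQ).
for t in (0.1, 1.0, 10.0):
    P_t = expm(t * Q)
    print(f"t = {t}: row sums = {P_t.sum(axis=1)}")  # each row is a distribution

# As t grows, P(t) approaches a matrix with identical rows given by the
# invariant distribution, here (1/3, 2/3) for this particular Q.
```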
The stochastic maximum principle and dynamic programming are among the main methods of stochastic control theory. It is also possible to develop such methods for partially observable systems. This is to be expected a priori, since the stochastic control problem with partial observation can be reduced to a stochastic control problem with complete observation, with the reservation that the system with full observation (to be controlled) is not finite dimensional but infinite dimensional. With this remark in mind, we see that we are led to use the maximum principle or dynamic programming for an infinite-dimensional system. The situation is very similar to that for systems governed by partial differential equations.
We shall not attempt to cover all possible cases in one theorem. Our approach will be to reduce the problem to one of control with full observation for a stochastic PDE, namely the Zakai equation. This equation will be formulated as a differential equation in a Hilbert space, using variational techniques (see Section 4.7). In this framework, it is convenient to derive necessary conditions of optimality by variational arguments. One advantage of this approach is that it is mostly analytic. On the other hand, the case of unbounded coefficients (for instance the linear quadratic case) is not easily covered in this formulation without substantial technical transformations. However, the result remains formally applicable without difficulty.
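For orientation, one standard form of the Zakai equation (stated here only as a sketch, assuming unit observation-noise covariance; the notation $A^{*}$ for the formal adjoint of the signal generator and $h$ for the observation function is an assumption, not necessarily that used later) reads

$$ dq(x,t) = A^{*} q(x,t)\,dt + q(x,t)\,h(x)^{\top}\,dy(t), \qquad q(x,0) = p_0(x), $$

where $y$ is the observation process and $q(\cdot,t)$ is an unnormalized conditional density of the state given the observations up to time $t$; normalizing $q$ recovers the conditional density itself.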
The present developments improve previous results of mine (see Bensoussan 1983) and simplify the derivation. Besides the application to the LQG problem, we consider the situation of the separation principle and obtain it in a new way, through the stochastic maximum principle.