In this chapter we analyze the conditions under which the aggregated process constructed from a homogeneous Markov chain over a given partition of its state space is also a homogeneous Markov chain. The results of this chapter mainly come from [89] and [91] for irreducible discrete-time Markov chains, from [94] for irreducible continuous-time Markov chains, and from [58] for absorbing Markov chains. A necessary condition for obtaining a Markovian aggregated process was derived in [58]. These works build on the pioneering papers [50] and [1], and have in turn served as the basis for further extensions such as [56], which deals with infinite state spaces, and [57], in which the author derives new results for the lumpability of reducible Markov chains and obtains spectral properties associated with lumpability.
State aggregation in irreducible DTMC
Introduction and notation
In this first section we consider the aggregated process constructed from an irreducible and homogeneous discrete-time Markov chain (DTMC), X, over a given partition of the state space. We analyze the conditions under which this aggregated process is also a homogeneous Markov chain and we give a characterization of this situation.
The Markov chain X = {Xn, n ∈ N} is assumed to be irreducible and homogeneous. The state space is assumed to be finite and is denoted by S = {1, 2, …, N}. All the vectors used are row vectors unless otherwise specified. For instance, the column vector with all its entries equal to 1 is denoted by 1, and its dimension is defined by the context.
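To make the setting concrete, the following sketch checks the classical (strong) lumpability condition of Kemeny and Snell on a small example and builds the corresponding aggregated transition matrix. The matrix P, the partition, and the state labels are illustrative and not taken from the chapter; the condition verified is that, for every pair of blocks (B, C), the sum of P over columns in C is the same for every row in B.

```python
import numpy as np

# Hypothetical 4-state irreducible DTMC (values chosen for illustration).
P = np.array([
    [0.00, 0.50, 0.30, 0.20],
    [0.40, 0.10, 0.30, 0.20],
    [0.25, 0.25, 0.30, 0.20],
    [0.25, 0.25, 0.10, 0.40],
])

# Partition of S = {0, 1, 2, 3} into blocks B1 = {0, 1} and B2 = {2, 3}.
partition = [[0, 1], [2, 3]]

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Strong lumpability: for every pair of blocks (B, C), the sums
    sum_{j in C} P[i, j] must coincide for all states i in B."""
    for B in partition:
        for C in partition:
            block_sums = P[np.ix_(B, C)].sum(axis=1)
            if np.ptp(block_sums) > tol:  # max - min over rows of B
                return False
    return True

def aggregate(P, partition):
    """Transition matrix of the aggregated chain (valid when lumpable)."""
    K = len(partition)
    Q = np.empty((K, K))
    for a, B in enumerate(partition):
        for b, C in enumerate(partition):
            # The row sum is the same for every row in B, so take the first.
            Q[a, b] = P[np.ix_(B, C)].sum(axis=1)[0]
    return Q

print(is_strongly_lumpable(P, partition))  # True for this P
print(aggregate(P, partition))
```

With this P, every state sends total probability 0.5 into each block, so the aggregated chain has transition matrix [[0.5, 0.5], [0.5, 0.5]]. Strong lumpability is only one of the conditions studied in this chapter; the full characterization also involves the initial distribution.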
From the theoretical point of view, Markov chains are a fundamental class of stochastic processes. They are the most widely used tools for solving problems in a large number of domains. They allow the modeling of all kinds of systems, and their analysis allows many aspects of those systems to be quantified. We find them in many subareas of operations research, engineering, computer science, networking, physics, chemistry, biology, economics, finance, and social sciences. The success of Markov chains is essentially due to the simplicity of their use, to the large set of associated theoretical results available (that is, to the high degree of understanding of the dynamics of these stochastic processes), and to the power of the available algorithms for the numerical evaluation of a large number of associated metrics.
In simple terms, the Markov property means that, given the present state of the process, its past and future are independent. In other words, knowing the present state of the stochastic process, no information about the past can improve predictions of the future. This means that the number of parameters needed to represent the evolution of a system modeled by such a process can be reduced considerably. In fact, many random systems can be represented by a Markov chain, and certainly most of those used in practice. The price to pay for imposing the Markov property on a random system is that the present of the system, or equivalently its state space, must be defined carefully.
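Formally, for a discrete-time chain the property described above can be written as follows (a standard statement, recalled here for reference):

```latex
\Pr\bigl(X_{n+1}=j \mid X_n=i,\, X_{n-1}=i_{n-1},\,\dots,\,X_0=i_0\bigr)
  \;=\; \Pr\bigl(X_{n+1}=j \mid X_n=i\bigr),
```

and homogeneity means that this conditional probability does not depend on n, so the dynamics are entirely described by the transition matrix P = (P(i, j)).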
One of the approaches that can be followed when dealing with very large models is to try to compute the bounds of the metrics of interest. This chapter describes bounding procedures that can be used to bound two important dependability metrics. We are concerned with highly dependable systems; the bounding techniques described here are designed to be efficient in that context.
First, we address the situation where the states of the model are weighted by real numbers, the model (the Markov chain) is irreducible, and the metric is the asymptotic mean reward. The most important example of a metric of this type in dependability is the asymptotic availability, but many other metrics also fit this framework, for instance, the mean number of customers in a queuing system, or in part of a queuing network (in this case, the techniques should be applied under light-traffic conditions, the queuing analog of a highly dependable system).
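As a point of reference for this metric, the following sketch computes the asymptotic mean reward exactly on a small illustrative model: with π the stationary distribution of an irreducible chain and r the vector of state rewards, the asymptotic mean reward is the inner product π·r. The three-state up/degraded/down chain and its reward vector are assumptions for illustration; the bounding techniques of the chapter are aimed at models far too large for this direct computation.

```python
import numpy as np

# Illustrative 3-state irreducible DTMC:
# state 0 = fully operational, state 1 = degraded (still up), state 2 = failed.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.20, 0.70, 0.10],
    [0.50, 0.00, 0.50],
])
r = np.array([1.0, 1.0, 0.0])  # reward 1 on operational states, 0 when failed

# Stationary distribution: pi P = pi with pi summing to 1, obtained from
# the left eigenvector of P associated with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

availability = pi @ r  # asymptotic mean reward = asymptotic availability here
print(availability)
```

For an availability-type reward vector (1 on up states, 0 on down states), the asymptotic mean reward is exactly the long-run fraction of time the system is operational.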
Second, we consider the evaluation of the Mean Time To Failure (MTTF), that is, of the expectation of the delay from the initial instant to the occurrence of the first system failure. The situation is that of a chain in which all states but one are transient and the remaining state is absorbing; the MTTF corresponds to the mean absorption time of the chain.
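On a model small enough to solve directly, the mean absorption time can be obtained from the standard fundamental-matrix argument: restricting P to the transient states gives a substochastic matrix Q, and the vector m of mean absorption times solves (I − Q) m = 1. The matrix below is an illustrative assumption, not a model from the chapter.

```python
import numpy as np

# Illustrative absorbing DTMC: states 0 and 1 are transient (operational),
# state 2 is absorbing (system failed).
P = np.array([
    [0.95, 0.04, 0.01],
    [0.30, 0.65, 0.05],
    [0.00, 0.00, 1.00],
])
Q = P[:2, :2]  # restriction of P to the transient states

# Mean absorption times: (I - Q) m = 1 (column vector of ones);
# the MTTF from initial state i is m[i].
m = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(m[0])  # MTTF when the system starts in state 0
```

The bounding techniques of the chapter target exactly this quantity when the transient class is too large for (I − Q) m = 1 to be solved directly.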