Chapters 1 to 8 introduced optimal (and suboptimal) Bayes tracking recursions and associated approximations.
This chapter covers points that are important for the practical implementation of object tracking. It can be viewed as a collection of separate sections, each dealing with a specific practical issue. Although object existence is often mentioned in this chapter, with due care the material presented can also apply to, or provide infrastructure for, other algorithms.
Section 9.2 introduces the linear multi-target method for suboptimal multi-object tracking in clutter. As the name implies, the additional numerical complexity of the linear multi-target method is linear in both the number of targets and the number of measurements. This is followed by practical methods for clutter measurement density estimation in Section 9.3. The Bayes recursion needs to be initialized; in the absence of prior target information, tracks are initialized using available measurements. Track initialization methods and their trade-offs are discussed in Section 9.4. For various reasons, multiple tracks may end up following the same sequence of measurements; the track merging procedure of Section 9.5 detects and resolves this situation. Finally, Section 9.6 presents some (simulated) surveillance situations and automatic target tracking solutions.
Introduction
In complex situations involving a large number of objects and/or heavy clutter, algorithms based on the optimal multi-object approach (Section 5.5.4) may not be feasible due to their excessive computational requirements. The linear multi-target procedure, which efficiently converts single-object trackers into multi-object trackers, is detailed in Section 9.2.
In many practical situations, the number and existence of the objects to be tracked are a priori unknown. This information is an important part of the tracking output. In this chapter we include object existence in the track state. As in previous chapters, the track state pdf propagates between scans as a Markov process and is updated using the Bayes formula.
Object existence is particularly important in cluttered environments, where the origin of each measurement is a priori unknown. This chapter reveals the close relationship (generalization/specialization) of a number of object-existence-based target tracking filters, which share a common derivation and a common update cycle.
Some of the algorithms mentioned here also appear in other chapters of this book. These include probabilistic data association (PDA) (Section 4.3), integrated PDA (IPDA) (Sections 5.4.4 and 6.4.4) and joint IPDA (JIPDA) (Section 6.4.5). The derivations of this chapter follow a different track, and the results are more general as they also cater for non-homogeneous clutter.
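The existence-propagation-and-update cycle described above can be sketched in a few lines. This is a hypothetical IPDA-style illustration: the two-state Markov model for existence, the survival and birth probabilities, and the scalar likelihood ratio below are illustrative assumptions, not values from the text.

```python
# Hypothetical sketch of an existence-based update cycle (IPDA-style):
# the survival probability, birth probability and likelihood ratio below
# are illustrative assumptions, not values from the text.

def predict_existence(p_exist, p_survive=0.98, p_birth=0.0):
    """Markov ('prediction') step for the probability that the object exists."""
    return p_survive * p_exist + p_birth * (1.0 - p_exist)

def update_existence(p_exist, likelihood_ratio):
    """Bayes step: likelihood_ratio compares the scan's measurements under
    'object exists' versus 'all measurements are clutter'."""
    num = likelihood_ratio * p_exist
    return num / (num + (1.0 - p_exist))

p = 0.5
p = predict_existence(p)        # 0.49
p = update_existence(p, 4.0)    # measurements favour existence
print(round(p, 3))              # 0.794
```

In a full tracker the likelihood ratio would be computed from the gated measurements and the clutter density; here it is simply a number greater than one, so the updated existence probability rises.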
Introduction
Object tracking aims to estimate the states of an unknown number of (usually moving) objects, using measurements received from sensors, and based on assumptions and models of the objects and measurements.
The object tracking algorithms presented in this chapter are based on the following assumptions, unless stated otherwise:
Object:
– There are zero or more objects in the surveillance area. The number and the position of the objects are a priori unknown.
Power grids, flexible manufacturing, cellular communications: interconnectedness has consequences. This remarkable book gives the tools and philosophy you need to build network models detailed enough to capture essential dynamics but simple enough to expose the structure of effective control solutions. Core chapters assume only exposure to stochastic processes and linear algebra at undergraduate level; later chapters are for advanced graduate students and researchers/practitioners. This gradual development bridges classical theory with the state-of-the-art. The workload model at the heart of traditional analysis of the single queue becomes a foundation for workload relaxations used in the treatment of complex networks. Lyapunov functions and dynamic programming equations lead to the celebrated MaxWeight policy along with many generalizations. Other topics include methods for synthesizing hedging and safety stocks, stability theory for networks, and techniques for accelerated simulation. Examples and figures throughout make ideas concrete. Solutions to end-of-chapter exercises are available on a companion website.
Information propagation through peer-to-peer systems, online social systems, wireless mobile ad hoc networks and other modern structures can be modelled as an epidemic on a network of contacts. Understanding how epidemic processes interact with network topology allows us to predict ultimate course, understand phase transitions and develop strategies to control and optimise dissemination. This book is a concise introduction for applied mathematicians and computer scientists to basic models, analytical tools and mathematical and algorithmic results. Mathematical tools introduced include coupling methods, Poisson approximation (the Stein–Chen method), concentration inequalities (Chernoff bounds and Azuma–Hoeffding inequality) and branching processes. The authors examine the small-world phenomenon, preferential attachment, as well as classical epidemics. Each chapter ends with pointers to the wider literature. An ideal accompaniment for graduate courses, this book is also for researchers (statistical physicists, biologists, social scientists) who need an efficient guide to modern approaches to epidemic modelling on networks.
The estimation of noisily observed states from a sequence of data has traditionally incorporated ideas from Hilbert spaces and calculus-based probability theory. As conditional expectation is the key concept, the correct setting for filtering theory is that of a probability space. Graduate engineers, mathematicians and those working in quantitative finance wishing to use filtering techniques will find in the first half of this book an accessible introduction to measure theory, stochastic calculus, and stochastic processes, with particular emphasis on martingales and Brownian motion. Exercises are included. The book then provides an excellent users' guide to filtering: basic theory is followed by a thorough treatment of Kalman filtering, including recent results which extend the Kalman filter to provide parameter estimates. These ideas are then applied to problems arising in finance, genetics and population modelling in three separate chapters, making this a comprehensive resource for both practitioners and researchers.
Based on a lecture course given at Chalmers University of Technology, this 2002 book is ideal for advanced undergraduate or beginning graduate students. The author first develops the necessary background in probability theory and Markov chains before applying it to study a range of randomized algorithms with important applications in optimization and other problems in computing. Amongst the algorithms covered are the Markov chain Monte Carlo method, simulated annealing, and the recent Propp-Wilson algorithm. This book will appeal not only to mathematicians, but also to students of statistics and computer science. The subject matter is introduced in a clear and concise fashion and the numerous exercises included will help students to deepen their understanding.
The human brain contains billions of nerve cells whose activity plays a critical role in the way we behave, feel, perceive, and think. This two-volume set explains the basic properties of a neuron - an electrically active nerve cell - and develops mathematical theories for the way neurons respond to the various stimuli they receive. Volume 1 contains descriptions and analyses of the principal mathematical models that have been developed for neurons in the past thirty years. It provides a brief review of the basic neuroanatomical and neurophysiological facts that will form the focus of the mathematical treatment. Tuckwell discusses the mathematical theories, beginning with the theory of membrane potentials. He then goes on to treat the Lapicque model, linear cable theory, and time-dependent solutions of the cable equations. He concludes with a description of Rall's model nerve cell. Because the level of mathematics increases steadily upward from Chapter Two, some familiarity with differential equations and linear algebra is desirable.
When is a random network (almost) connected? How much information can it carry? How can you find a particular destination within the network? And how do you approach these questions - and others - when the network is random? The analysis of communication networks requires a fascinating synthesis of random graph theory, stochastic geometry and percolation theory to provide models for both structure and information flow. This book is the first comprehensive introduction for graduate students and scientists to techniques and problems in the field of spatial random networks. The selection of material is driven by applications arising in engineering, and the treatment is both readable and mathematically rigorous. Though mainly concerned with information-flow-related questions motivated by wireless data networks, the models developed are also of interest in a broader context, ranging from engineering to social networks, biology, and physics.
In this chapter we investigate the behaviour of two classical epidemic models on general graphs. We consider a closed population of n individuals, connected by a neighbourhood structure that is represented by an undirected, labelled graph G = (V, E) with node set V = {1, …, n} and edge set E. Each node can be in one of three possible states: susceptible (S), infective (I) or removed (R). The initial set of infectives at time 0 is assumed to be non-empty, and all other nodes are assumed to be susceptible at time 0. We will focus on two classical epidemic models: the susceptible-infected-removed (SIR) and susceptible-infected-susceptible (SIS) epidemic processes.
In what follows we represent the graph by means of its adjacency matrix A, i.e. aij = 1 if (i, j) ∈ E and aij = 0 otherwise. Since the graph G is undirected, A is a symmetric, non-negative matrix; all its eigenvalues are real, and by the Perron–Frobenius theorem the eigenvalue ρ with the largest absolute value is positive and its associated eigenvector has non-negative entries. The value ρ is called the spectral radius. If the graph is connected, as we shall assume, then this eigenvalue has multiplicity one, and the corresponding eigenvector is strictly positive and is the only eigenvector with all elements non-negative.
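As a concrete illustration of these spectral quantities, the following sketch estimates ρ by power iteration; the example graphs and the iteration budget are illustrative choices, not from the text. Iterating on A + I rather than A keeps the method convergent even for bipartite graphs (where ±ρ are both eigenvalues of A).

```python
def spectral_radius(A, iters=500):
    """Estimate the Perron eigenvalue of a connected graph's adjacency
    matrix by power iteration on A + I (the shift makes the matrix
    primitive, so the iteration converges even for bipartite graphs)."""
    n = len(A)
    B = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    x = [1.0] * n                       # positive start vector
    for _ in range(iters):
        y = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y)                   # positive for a connected graph
        x = [v / norm for v in y]
    # Rayleigh-style ratio at the (near-)fixed point, minus the shift
    y = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
    return max(y[i] / x[i] for i in range(n)) - 1.0

# 4-cycle: eigenvalues are 2, 0, 0, -2, so the spectral radius is 2
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(spectral_radius(C4))   # 2.0
```

For a d-regular graph the all-ones vector is already the Perron eigenvector, so the iteration terminates at ρ = d immediately; irregular graphs converge geometrically.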
So far we have dealt with microscopic models of interaction and epidemic propagation. If we are interested in macroscopic characteristics, such as the time before a given fraction of the population is infected, a simpler analysis is often possible in which we can identify deterministic dynamic models, specified by differential equations, that accurately reflect the dynamics of the original system at a macroscopic level. Such a macroscopic description is referred to as a mean-field approximation.
Differential equations (macroscopic models) and Markov processes (microscopic models) are the basic models of dynamical systems in deterministic and probabilistic contexts, respectively. Since the analysis, both mathematical and computational, of differential equations is often more feasible and efficient, it is of interest to understand in some generality when the sample paths of a Markov process can be guaranteed to lie, with high probability, close to the solution of a differential equation.
We shall provide generic results applicable to all such contexts. In what follows we consider families of jump Markov processes indexed by a parameter n, usually interpreted as the total population size, and approximate their behaviour as n becomes large. It is worth mentioning that the techniques presented here apply to a wide range of problems, such as epidemic models, models of chemical reactions and population genetics, as well as other processes.
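To make the two levels of description concrete, the following sketch contrasts one stochastic sample path with its mean-field limit for an SIS epidemic on a complete graph; the rates, population size and horizon are illustrative assumptions.

```python
import random

# beta (infection rate), delta (recovery rate), n and the time horizon
# are illustrative assumptions.

def sis_gillespie(n, beta, delta, t_max, seed=0):
    """One sample path of the SIS jump process on a complete graph of n
    nodes, simulated with the Gillespie algorithm; returns the fraction
    of infectives at time t_max."""
    rng = random.Random(seed)
    i, t = n // 10, 0.0                    # start with 10% infectives
    while t < t_max and i > 0:
        rate_up = beta * i * (n - i) / n   # total infection rate
        rate_down = delta * i              # total recovery rate
        total = rate_up + rate_down
        t += rng.expovariate(total)        # time to the next event
        if rng.random() * total < rate_up:
            i += 1
        else:
            i -= 1
    return i / n

def mean_field(beta, delta, steps, i0=0.1, dt=0.01):
    """Euler integration of the mean-field ODE di/dt = beta*i*(1-i) - delta*i."""
    i, path = i0, []
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - delta * i)
        path.append(i)
    return path

# the mean-field endemic equilibrium is 1 - delta/beta = 0.5 here
print(round(mean_field(2.0, 1.0, steps=2000)[-1], 3))   # 0.5
frac = sis_gillespie(500, 2.0, 1.0, t_max=20.0)
```

For moderate n the stochastic fraction fluctuates around the deterministic equilibrium; the approximation theorems alluded to above quantify how these fluctuations vanish as n grows.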
The Reed–Frost model is a particular example of an SIR (susceptible-infective-removed) epidemic process. It is one of the earliest stochastic SIR models to be studied in depth, because of its analytical tractability. In the general SIR model, the population initially consists of healthy individuals and a small number of infected individuals. Infected individuals encounter healthy individuals in a random fashion for a given period known as the infectious period and then are removed and cease spreading the epidemic. Alternatively, in the context of rumour spreading, healthy individuals correspond to nodes that ignore the rumour whereas infected individuals are nodes that initially hold the rumour and actively pass it on to others. Removed individuals correspond to nodes that cease spreading the rumour, or stiflers.
The Reed–Frost epidemic corresponds to a discrete-time version of the SIR model where the infectious period lasts one unit of time. Another commonly used model assumes that infectious periods are independent and identically distributed (i.i.d.) according to an exponential distribution, so that the system evolves as a continuous-time Markov process. This continuous-time SIR model is amenable to the analysis presented in Chapter 5 whereby the dynamics of the Markovian epidemic process is approximated by the solution of a set of differential equations.
The basic version of the Reed–Frost model is as follows. A set of n individuals is given, indexed by i ∈ {1, …, n}.
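The generation-by-generation dynamics just described can be simulated directly; the population size and infection probability below are illustrative assumptions.

```python
import random

def reed_frost(n, p, initial_infectives=1, seed=0):
    """Final size of the basic Reed–Frost epidemic: in each generation
    (one unit of time) every infective independently infects each
    susceptible with probability p, then all current infectives are
    removed."""
    rng = random.Random(seed)
    susceptible = n - initial_infectives
    infective = initial_infectives
    removed = 0
    while infective > 0:
        # a susceptible escapes all infectives with probability (1-p)**infective
        q = (1.0 - p) ** infective
        new_infective = sum(1 for _ in range(susceptible) if rng.random() > q)
        removed += infective
        susceptible -= new_infective
        infective = new_infective
    return removed

# five independent runs with n = 200 and p = 0.02
print([reed_frost(200, 0.02, seed=s) for s in range(5)])
```

Note the bimodal behaviour typical of such epidemics: with np = 4 > 1 here, a run either dies out quickly or infects a large fraction of the population.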
In Chapter 8 we tried to understand the impact of a network's topology on the behaviour of epidemics. In the present chapter, we focus on the role played by the initial condition in determining the size of the epidemic. Moreover, we adopt a different viewpoint, taking an algorithmic perspective. That is to say, we address the following question: given a set of individuals that form a network, how should one choose a subset of these individuals, of given size, to be infected initially, so as to maximise the size of an epidemic? The idea is that by carefully choosing such nodes we could trigger a cascade of infections that will result in a large number of ultimately infected individuals.
This problem finds its motivation in viral marketing. In this context, a limited advertising budget is available for convincing a small number of consumers (the initial infectives) of the merits of some product. Such consumers may in turn convince others, and the aim is to maximise the ultimate reach of the advertisement by leveraging such “contaminations”.
We address this problem by considering the following version of the Reed–Frost epidemic. We assume that the network is described by a directed graph G. The potentially infected individuals constitute the set V ≔ {1, …, n}.
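One natural baseline for this seed-selection question is greedy selection driven by Monte Carlo estimates of the expected final size. The sketch below is a hypothetical illustration, not the book's algorithm; the example graph, the infection probability p and the sampling budget are all assumptions.

```python
import random

def final_size(adj, p, seeds, rng):
    """One Reed–Frost run from the seed set on a directed graph given as
    an adjacency dict: each new infective infects each out-neighbour
    independently with probability p, then is removed."""
    infected = set(seeds)
    frontier = set(seeds)
    while frontier:
        nxt = set()
        for u in frontier:
            for v in adj.get(u, []):
                if v not in infected and rng.random() < p:
                    nxt.add(v)
        infected |= nxt
        frontier = nxt
    return len(infected)

def greedy_seeds(adj, nodes, k, p, runs=200, seed=0):
    """Greedily add, k times, the node with the best Monte Carlo
    estimate of the expected final size."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_val = None, -1.0
        for v in nodes:
            if v in chosen:
                continue
            val = sum(final_size(adj, p, chosen + [v], rng)
                      for _ in range(runs)) / runs
            if val > best_val:
                best, best_val = v, val
        chosen.append(best)
    return chosen

# small directed example: node 0 has by far the largest out-reach
adj = {0: [1, 2, 3], 1: [4], 2: [4], 3: [], 4: []}
print(greedy_seeds(adj, list(adj), k=1, p=0.5))   # picks [0]
```

Greedy selection is attractive here because, for such cascade models, the expected final size is a submodular function of the seed set, which gives the greedy heuristic a constant-factor approximation guarantee.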
The branching process model was introduced by Sir Francis Galton in 1873 to represent the genealogical descent of individuals. More generally it provides a versatile model for the growth of a population of reproducing individuals in the absence of external limiting factors. It is an adequate starting point when studying epidemics since, as we shall see in Chapter 2, it describes accurately the early stages of an epidemic outbreak. In addition, our treatment of so-called dual branching processes paves the way for the analysis of the supercritical phase in Chapter 2. Finally, the present chapter gives an opportunity to introduce large deviations inequalities (and notably the celebrated Chernoff bound), which are instrumental throughout the book.
A Galton–Watson branching process can be represented by a tree in which each node represents an individual, and is linked to its parent as well as its children. The “root” of the tree corresponds to the “ancestor” of the whole population. An example of such a tree is depicted in Figure 1.1.
In the following we consider three distinct ways of exploring the so-called Galton–Watson tree, each well suited to establishing specific properties.
In the depth-first view, we start by exploring one child of the first generation, then recursively explore, by the same method, the subtree of its descendants, before moving on to the next child of the first generation.
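This depth-first exploration can be sketched as follows; the offspring law (0 or 2 children, a subcritical choice with mean 0.8) and the truncation cap are illustrative assumptions.

```python
import random

def gw_tree_size(offspring, rng, cap=100_000):
    """Total progeny of a Galton–Watson tree, generated depth-first:
    after drawing a node's number of children we immediately descend
    into the first child's subtree, returning to its siblings later.
    `cap` truncates the (rare) very large trees."""
    size = 1                          # the root, i.e. the "ancestor"
    stack = [offspring(rng)]          # children of the root still to explore
    while stack and size < cap:
        remaining = stack.pop()
        if remaining > 0:
            stack.append(remaining - 1)   # siblings left at this level
            stack.append(offspring(rng))  # descend into this child first
            size += 1
    return size

def binary_offspring(rng):
    """0 or 2 children; mean offspring 0.8 < 1, so the tree is finite a.s."""
    return 2 if rng.random() < 0.4 else 0

rng = random.Random(1)
sizes = [gw_tree_size(binary_offspring, rng) for _ in range(1000)]
# mean total progeny: 1 / (1 - 0.8) = 5 in theory
print(sum(sizes) / len(sizes))
```

The stack here holds, at each level of the recursion, the number of siblings still to be visited, which is exactly the bookkeeping the depth-first view requires.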