This comprehensive volume on ergodic control for diffusions highlights intuition alongside technical arguments. A concise account of Markov process theory is followed by a complete development of the fundamental issues and formalisms in control of diffusions. This then leads to a comprehensive treatment of ergodic control, a problem that straddles stochastic control and the ergodic theory of Markov processes. The interplay between the probabilistic and ergodic-theoretic aspects of the problem, notably the asymptotics of empirical measures on one hand, and the analytic aspects leading to a characterization of optimality via the associated Hamilton–Jacobi–Bellman equation on the other, is clearly revealed. The more abstract controlled martingale problem is also presented, in addition to many other related issues and models. Assuming only graduate-level probability and analysis, the authors develop the theory in a manner that makes it accessible to users in applied mathematics, engineering, finance and operations research.
In this chapter we turn to the study of degenerate controlled diffusions. For the nondegenerate case the theory is more or less complete; this is no longer so once the uniform ellipticity hypothesis is dropped. Indeed, the differences between the nondegenerate and the degenerate cases are rather striking. In the nondegenerate case, the state process X is strong Feller under a Markov control, which in turn facilitates the study of the ergodic behavior of the process. In contrast, in the degenerate case the Itô stochastic differential equation (2.2.1) is not always well posed under a Markov control. From an analytical viewpoint, in the nondegenerate case the HJB equation is uniformly elliptic, and the associated regularity properties benefit its study. The degenerate case, on the other hand, is approached via a particular class of weak solutions known as viscosity solutions. This approach does not yield results as satisfactory as those available for classical solutions. In fact, ergodic control of degenerate diffusions should not be viewed as a single topic, but rather as a class of problems studied under various hypotheses. We first formulate the problem as a special case of a controlled martingale problem and then summarize those results from Chapter 6 that are useful here. Next, in Section 7.3, we study the HJB equations in the context of viscosity solutions for a specific class of problems that bears the name of asymptotically flat diffusions.
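To fix ideas, the objects under discussion can be simulated. The sketch below is a minimal, entirely hypothetical illustration (the drift b, control v, cost, and parameters are all invented for the example, not taken from the text): a two-dimensional controlled diffusion driven through a rank-deficient diffusion matrix, so that uniform ellipticity fails, simulated by Euler–Maruyama under a smooth Markov control while an empirical ergodic cost is accumulated. A smooth control is chosen deliberately; for discontinuous Markov controls the degenerate SDE need not be well posed, which is exactly the difficulty described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-d controlled diffusion dX = b(X, v(X)) dt + sigma dW.
# The diffusion matrix has rank 1: noise enters only the first
# coordinate, so the uniform ellipticity hypothesis fails.
sigma = np.array([[1.0, 0.0],
                  [0.0, 0.0]])

def v(x):
    # A smooth Markov control; smoothness keeps the degenerate SDE
    # well posed in this illustration.
    return -np.tanh(x)

def b(x, u):
    # Stable linear drift plus the control action.
    return -x + u

def cost(x, u):
    # Quadratic running cost, chosen for illustration only.
    return float(x @ x + u @ u)

dt, T = 1e-2, 200.0
n = int(T / dt)
x = np.zeros(2)
avg = 0.0
for _ in range(n):          # Euler–Maruyama time stepping
    u = v(x)
    avg += cost(x, u)
    dw = rng.normal(scale=np.sqrt(dt), size=2)
    x = x + b(x, u) * dt + sigma @ dw
avg /= n
print("empirical ergodic cost estimate:", avg)
```

Note that the second coordinate receives no noise at all: started at zero it stays there, a crude reminder that in the degenerate case the process need not spread over the whole state space the way a uniformly elliptic diffusion does.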
We conclude by highlighting a string of issues that still remain open.
In the controlled martingale problem with ergodic cost, we obtained existence of an optimal ergodic process and of an optimal Markov process separately, but not of an optimal ergodic Markov process, as one would expect from experience with the nondegenerate case. This issue remains open. In particular, it is unclear whether the Krylov selection procedure of Section 6.7, which has been used to extract an optimal Markov family for the discounted cost problem under nondegeneracy, can be similarly employed for the ergodic problem. The work in Bhatt and Borkar [22] claims such a result under very restrictive conditions, but the proof has a serious flaw.
The HJB equation was analyzed in two special cases. The general case remains open. In particular, experience with discrete state space problems gives some pointers:
(a) In the multichain case for Markov chains with finite state space S and finite action space A, a very general pair of dynamic programming equations is available due to Howard [53], viz.,

ϱ(i) = min_{a ∈ A} Σ_{j ∈ S} p(j | i, a) ϱ(j),

V(i) + ϱ(i) = min_{a ∈ A_i} [ c(i, a) + Σ_{j ∈ S} p(j | i, a) V(j) ],

for i ∈ S, where p(j | i, a) denotes the transition probability from i to j under action a, c is the running cost, and A_i ⊆ A is the set of actions attaining the minimum in the first equation. Here the unknowns are the value function V and the state-dependent optimal cost ϱ. An analog of this for the degenerate diffusion case could formally be written down as
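In the unichain special case the state-dependent optimal cost ϱ collapses to a constant, and the dynamic programming equation can then be solved numerically by relative value iteration. The sketch below does this for a hypothetical two-state, two-action average-cost chain (the transition probabilities and costs are invented for the example); it is an illustration of the unichain specialization, not of the general multichain equations.

```python
import numpy as np

# Hypothetical 2-state, 2-action average-cost MDP (unichain, so the
# optimal cost rho in the dynamic programming equation is a constant).
# P[a, i, j] = transition probability i -> j under action a,
# c[i, a]    = one-step cost of action a in state i.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
c = np.array([[2.0, 0.5],
              [1.0, 3.0]])

h = np.zeros(2)            # relative value function (V up to a constant)
for _ in range(2000):      # relative value iteration
    # (T h)(i) = min_a [ c(i, a) + sum_j p(j | i, a) h(j) ]
    Th = np.min(c + np.einsum('aij,j->ia', P, h), axis=1)
    rho = Th[0]            # normalize at reference state 0
    h = Th - rho

print("optimal average cost rho =", rho)
print("relative value function  =", h)
```

At convergence the pair (rho, h) satisfies rho + h(i) = min_a [c(i, a) + Σ_j p(j | i, a) h(j)], the unichain form of the equation above; it is precisely this reduction to a constant ϱ that fails in the multichain case, which is why the state-dependent formulation is needed there.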