This book is written with the express purpose of providing the reader with a reasonably self-contained treatise on stochastic dynamical systems, while emphasizing a class of applications that pervades several disciplines and thus has general appeal. The target audience is doctoral students, researchers and practitioners in science and engineering who wish to understand and exploit the principles of stochastic dynamics as a tool to address their scientific or applied problems. In view of the stated aim, what one should not expect is a treatment that might be deemed rigorous enough by researchers in pure and applied mathematics. Moreover, this book deals only with diffusive stochastic processes and does not address non-Markovian and/or non-diffusive stochastic dynamical systems.
With these broad goals in mind, the question naturally arises as to whether other similar books exist, or, alternatively, whether there are aspects of originality that we, the authors, could lay claim to. Indeed, there are a number of very well-written mathematical texts on stochastic processes and calculus, some of which also cover applications to areas such as finance (e.g., stock options), biology (e.g., birth–death processes) and estimation or control. Speaking of applications, there are several mathematical texts on stochastic filtering problems, even though their focus may not be so much on the applied aspects covering higher dimensional filtering and identification problems, an area given some prominence in this book. There are even monographs dedicated entirely to exploiting the theory of stochastic processes and calculus for the numerical integration of stochastic differential equations. Despite such plentiful and laudable compilations, there is, to the best of our knowledge, no book or monograph on stochastic processes that simultaneously furnishes a reasonably in-depth treatment of the problem of global optimization based on stochastic search, thereby foregrounding the role of stochastic processes and calculus in the development of robust optimization schemes. Another novel feature is the use of change of measures as the driving refrain in most of the applications covered in this book, from numerical solutions of stochastic differential equations to filtering to global optimization.
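To make the change-of-measures refrain concrete, here is a minimal, self-contained sketch of its simplest use: importance sampling for a Monte Carlo estimate of a Gaussian tail probability. This is an illustrative sketch, not code from the book; the function names and the choice of tilted proposal are ours.

```python
import math
import random

def mc_tail_naive(n, a=4.0, seed=0):
    """Naive MC estimate of P(Z > a), Z ~ N(0, 1): almost every sample misses."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > a for _ in range(n)) / n

def mc_tail_tilted(n, a=4.0, seed=0):
    """Sample under the tilted measure Q = N(a, 1) and reweight each sample
    by the Radon-Nikodym derivative dP/dQ = exp(-a*z + a*a/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(a, 1.0)  # sample where the rare event actually lives
        if z > a:
            total += math.exp(-a * z + 0.5 * a * a)
    return total / n
```

With 10^5 samples the naive estimator typically registers only a handful of hits (the true value is about 3.17e-5), whereas the reweighted estimator is accurate to within a few per cent: the measure is changed so that the event of interest is no longer rare, the same mechanism the later chapters leverage for filtering and optimization.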
Obtaining accurate numerical solutions to SDEs has been the focus of numerous studies owing to their relevance in many fields of engineering and science [Platen 1987, Kloeden and Platen 1992, Milstein 1995, Higham et al. 2002]. In the context of stochastic filtering (Chapters 6 and 7), it was seen that the process model is often a set of non-linear SDEs, and imprecise integration techniques for these SDEs may precipitate significant numerical errors in the predicted particle locations, leading to possible degradation in the filter performance. Determining solutions, strong or weak, of stochastically driven non-linear oscillators by direct numerical integration of the associated SDEs has been dealt with in Chapters 4 and 5. In particular, a universal framework for integration schemes is provided in Chapter 5 through an MC approach. There it has been shown that the Ito–Taylor expansion, which is based on an iterated application of Ito's formula, helps to construct integration schemes for SDEs. The possibility of developing higher order numerical integration schemes, e.g., the Milstein method [Milstein 1995], numeric–analytical techniques of the LTL (locally transversal linearization) type [Roy 2000, 2001, 2004] and the stochastic Newmark method [Roy 2006], has been demonstrated, along with the estimation of the order of accuracy of these schemes. However, unlike for ordinary DEs, deriving higher order numerical schemes for SDEs is generally hindered by the difficulty of computing higher order multiple stochastic integrals (MSIs). On the other hand, avoiding the higher order MSIs, which implies retaining fewer terms in the hierarchical stochastic Taylor approximation used to construct the integration scheme, naturally yields relatively lower order accuracy. Most of the lower order explicit schemes (for instance, the explicit version of the Euler–Maruyama (EM) scheme) may lose stability, e.g., in the case of stiff SDEs, thus requiring impracticably small time steps to obtain stable solutions.
Thanks to its computational expedience and ease of implementation, a lower order scheme would be ideal were it not for the loss of integration accuracy.
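As a concrete illustration of this trade-off, the sketch below integrates the geometric Brownian motion dX = mu*X dt + sigma*X dW, whose exact solution is known, by both the Euler–Maruyama scheme and the Milstein scheme (which retains one more Ito–Taylor term). This is an illustrative sketch under our own parameter choices, not code from the book.

```python
import math
import random

def simulate_gbm(mu=0.05, sigma=0.5, x0=1.0, T=1.0, n_steps=256, seed=1):
    """One path of dX = mu*X dt + sigma*X dW by Euler-Maruyama and Milstein,
    driven by the same Brownian increments as the exact solution."""
    rng = random.Random(seed)
    dt = T / n_steps
    x_em, x_mil, w = x0, x0, 0.0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        w += dw
        # Euler-Maruyama: lowest order Ito-Taylor truncation (strong order 0.5)
        x_em += mu * x_em * dt + sigma * x_em * dw
        # Milstein: add the 0.5 * b'b * (dW^2 - dt) correction, b(x) = sigma*x,
        # lifting the strong order to 1.0
        x_mil += mu * x_mil * dt + sigma * x_mil * dw \
                 + 0.5 * sigma ** 2 * x_mil * (dw * dw - dt)
    x_exact = x0 * math.exp((mu - 0.5 * sigma ** 2) * T + sigma * w)
    return x_em, x_mil, x_exact
```

Averaging the pathwise errors over many independent paths shows the single Milstein correction term buying half an order of strong accuracy, at the price of evaluating (here, trivially) the derivative of the diffusion coefficient; for multi-dimensional SDEs that extra term is exactly where the higher order MSIs enter.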
The efficacy of the concept of change of measures was demonstrated in the last few chapters in the context of non-linear stochastic filtering, and the tool also has considerable usefulness in developing numerical schemes for system identification problems. This chapter concerns an application of the same notion, leading to a paradigm [Sarkar et al. 2014] for global optimization problems wherein solutions are guided mainly through derivative-free directional information computable from the sample statistical moments of the design (state) variables within an MC setup. Before the ideas behind this approach are presented in some detail, it is advisable to first review some of the available methodologies/strategies for solving such optimization problems.
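To fix ideas on derivative-free stochastic search, the following sketch implements a generic cross-entropy-style scheme in which the only directional information comes from the sample moments of an MC population, never from gradients. This is a deliberately simple stand-in, not the paradigm of [Sarkar et al. 2014]; all names and parameter choices are ours.

```python
import math
import random

def ce_minimize(f, mu0, sigma0=1.0, pop=200, elite_frac=0.1,
                iters=60, seed=0):
    """Derivative-free stochastic search: each iteration draws an MC
    population and refits the sampling distribution to the sample moments
    (mean, standard deviation) of the best-performing candidates."""
    rng = random.Random(seed)
    mu, sigma = list(mu0), [sigma0] * len(mu0)
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        cands = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
                 for _ in range(pop)]
        cands.sort(key=f)            # rank by cost; no gradients anywhere
        elite = cands[:n_elite]
        for j in range(len(mu)):     # refit each coordinate's moments
            col = [c[j] for c in elite]
            mu[j] = sum(col) / n_elite
            var = sum((x - mu[j]) ** 2 for x in col) / n_elite
            sigma[j] = math.sqrt(var) + 1e-12
    return mu

def rastrigin(x):
    """A standard multimodal test cost with its global minimum at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0
               for xi in x)
```

Because the update direction is read off from population statistics rather than a gradient, the scheme applies unchanged to non-smooth or noisy costs; on a multimodal cost such as the Rastrigin function above, a sufficiently dispersed initial population lets it escape basins that would trap a purely local method.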
In most cases of practical interest, the cost or objective functional, whose extremization solves the optimization problem, could be non-convex, non-separable and even non-smooth. Here separability means that the cost function can be additively split into component functions, so that the optimization problem may actually be split into a set of sub-problems. An optimization problem is convex if it involves the minimization of a convex function (or the maximization of a concave function) over a convex set of admissible state variables. For a convex problem, a fundamental result is that a locally optimal solution is also globally optimal. The classical methods [Fletcher and Reeves 1964, Fox 1971, Rao 2009], which mostly use directional derivatives, are particularly useful in solving convex problems (Fig. 9.1). Non-convex problems, on the other hand, may have many local optima, and choosing the best one (i.e., the global extremum) could be an extremely hard task. In global optimization, we seek, in the design or state or parameter space, the extremal locations of non-convex functions subject to (possibly) non-convex constraints. Here the objective functional could be multivariate, multimodal and even non-differentiable, which together preclude applying a gradient-based Newton step while solving the optimization problem.
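The failure mode of local, gradient-based iterations on non-convex costs is easy to exhibit. In the hypothetical double-well cost f(x) = x^4 - 3x^2 + x below (an example entirely ours, chosen only for illustration), plain gradient descent converges to whichever basin the initial guess lies in and has no mechanism to compare the two minima.

```python
def grad_descent(df, x0, lr=0.01, iters=500):
    """Plain gradient descent: purely local, no notion of a global extremum."""
    x = x0
    for _ in range(iters):
        x -= lr * df(x)
    return x

def f(x):
    return x ** 4 - 3.0 * x ** 2 + x        # double well: two distinct minima

def df(x):
    return 4.0 * x ** 3 - 6.0 * x + 1.0     # its derivative

x_global = grad_descent(df, -2.0)  # basin of the global minimum, x ~ -1.30
x_local = grad_descent(df, 2.0)    # basin of a local minimum,   x ~  1.13
```

Started at x = -2 the iteration finds the global minimum near -1.30 (f about -3.51); started at x = 2 it settles on the shallower minimum near 1.13 (f about -1.07) and stays there, which is precisely why globally exploring stochastic schemes of the kind developed in this chapter are needed.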