Now in its second edition, this accessible text presents a unified Bayesian treatment of state-of-the-art filtering, smoothing, and parameter estimation algorithms for non-linear state space models. The book focuses on discrete-time state space models and carefully introduces fundamental aspects related to optimal filtering and smoothing. In particular, it covers a range of efficient non-linear Gaussian filtering and smoothing algorithms, as well as Monte Carlo-based algorithms. This updated edition features new chapters on constructing state space models of practical systems, the discretization of continuous-time state space models, Gaussian filtering by enabling approximations, posterior linearization filtering, and the corresponding smoothers. Coverage of key topics is expanded, including extended Kalman filtering and smoothing, and parameter estimation. The book's practical, algorithmic approach assumes only modest mathematical prerequisites, making it suitable for graduate and advanced undergraduate students. Many examples are included, with Matlab and Python code available online, enabling readers to implement the algorithms in their own projects.
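To give a flavour of the algorithms the book covers, here is a minimal Python sketch of one predict/update cycle of the linear Kalman filter, the simplest member of the Gaussian filtering family; the one-dimensional random-walk model and its matrices (A, Q, H, R) are illustrative assumptions, not an example taken from the book.

    import numpy as np

    def kalman_step(m, P, y, A, Q, H, R):
        """One predict/update cycle of the linear Kalman filter."""
        # Predict: propagate mean and covariance through the dynamic model
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Update: condition on the new measurement y
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        m_new = m_pred + K @ (y - H @ m_pred)
        P_new = P_pred - K @ S @ K.T
        return m_new, P_new

    # Illustrative 1D random-walk model (assumed, not from the book)
    A = np.array([[1.0]]); Q = np.array([[0.1]])
    H = np.array([[1.0]]); R = np.array([[0.5]])
    m, P = np.zeros(1), np.eye(1)
    for y in [0.3, 0.7, 1.1]:
        m, P = kalman_step(m, P, np.array([y]), A, Q, H, R)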
This text on the theory and applications of network science is aimed at beginning graduate students in statistics, data science, computer science, machine learning, and mathematics, as well as advanced students in business, computational biology, physics, social science, and engineering working with large, complex relational data sets. It provides an exciting array of analysis tools, including probability models, graph theory, and computational algorithms, exposing students to ways of thinking about types of data that differ from typical statistical data. Concepts are demonstrated in the context of real applications, such as relationships between financial institutions, between genes or proteins, between neurons in the brain, and between terrorist groups. Methods and models described in detail include random graph models, percolation processes, methods for sampling from huge networks, network partitioning, and community detection. In addition to static networks, the book introduces dynamic networks, such as epidemics, where time is an important component.
There is a close relationship between random graphs and percolation. In fact, percolation and random graphs have been viewed as “the same phenomenon expressed in different languages” (Albert and Barabási, 2002). Early ideas on percolation (although not under that name) in molecular chemistry can be found in the articles by Flory (1941) and Stockmayer (1943).
Percolation can be defined more generally than as a process on the integer lattice ℤ^d, d ≥ 2. In this chapter, we motivate the main ideas and theory of percolation on more general graphs by application to polymer gelation and amorphous computing.
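To make the process concrete, here is a minimal Python sketch of bond percolation on a finite piece of the square lattice, using networkx as an assumed helper library (the book's own tools may differ): each edge is retained independently with probability p, and we track the size of the largest cluster as p crosses the threshold.

    import random
    import networkx as nx  # assumed dependency, for illustration only

    def bond_percolation(G, p, seed=0):
        """Keep each edge of G independently with probability p."""
        rng = random.Random(seed)
        H = nx.Graph()
        H.add_nodes_from(G.nodes())
        H.add_edges_from(e for e in G.edges() if rng.random() < p)
        return H

    G = nx.grid_2d_graph(50, 50)   # a finite piece of the square lattice
    for p in (0.3, 0.5, 0.7):      # the bond percolation threshold on Z^2 is 1/2
        H = bond_percolation(G, p)
        giant = max(nx.connected_components(H), key=len)
        print(p, len(giant))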
In this chapter, we discuss various issues that arise when networks increase in size. What does it mean for a network to increase in size, and how would we visualize that process? Can a sequence of networks, increasing in size, converge to a limit, and what would such a limit look like? We discuss the transformation of an adjacency matrix to a pixel picture and what it means for a sequence of pixel pictures to increase in size. If a limit exists, the resulting function is called a limit graphon, but it is not itself a network. Estimation of a graphon is also discussed, and the methods described include approximation by a stochastic block model (SBM) and a network histogram.
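Here is a minimal Python sketch of the pixel-picture and network-histogram ideas, under illustrative assumptions (sorting nodes by degree as the node ordering, and a hand-picked block size h): the adjacency matrix is treated as an image and averaged over blocks, giving a crude step-function estimate of the graphon.

    import numpy as np

    def network_histogram(A, h):
        """Blockwise-average a sorted adjacency matrix into an
        h-by-h-block step function (a crude graphon estimate)."""
        n = A.shape[0]
        order = np.argsort(A.sum(axis=1))  # sort nodes by degree (one heuristic)
        A = A[np.ix_(order, order)]
        k = n // h                         # number of blocks per axis
        est = np.zeros((k, k))
        for i in range(k):
            for j in range(k):
                est[i, j] = A[i*h:(i+1)*h, j*h:(j+1)*h].mean()
        return est

    # Illustrative data: a two-block stochastic block model (assumed parameters)
    rng = np.random.default_rng(0)
    n = 200
    z = rng.integers(0, 2, n)                        # community labels
    P = np.where(z[:, None] == z[None, :], 0.6, 0.1)
    A = (rng.random((n, n)) < P).astype(int)
    A = np.triu(A, 1); A = A + A.T                   # symmetrize, no self-loops
    print(network_histogram(A, h=20))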
As we have seen, networks such as the Internet and World Wide Web, social networks (e.g., Facebook and LinkedIn), biological networks (e.g., gene regulatory networks, PPI networks, networks of the brain), transportation networks, and ecological networks are becoming larger and larger in today’s interconnected world. Some of these networks are truly huge and difficult, if not impossible, to analyze completely and efficiently. In this chapter, we discuss some of the issues involved in comparing networks for similarity or differences, including the choice of similarity measures, exchangeable random structures of networks, and property testing in networks.
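As a small illustration of one possible similarity measure (an illustrative choice, not necessarily the one the chapter emphasizes), here is a Python sketch of the Jaccard similarity of the edge sets of two networks on a shared node set.

    def edge_jaccard(E1, E2):
        """Jaccard similarity of two edge sets on a shared node set."""
        S1 = {frozenset(e) for e in E1}  # ignore edge direction
        S2 = {frozenset(e) for e in E2}
        union = S1 | S2
        return len(S1 & S2) / len(union) if union else 1.0

    print(edge_jaccard([(1, 2), (2, 3)], [(2, 1), (3, 4)]))  # -> 1/3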
In this chapter, we introduce a number of parametric statistical models that have been used to model network data. The social network literature has named them the p1, p2, and p* models, the last of which has also been referred to as an ERGM (exponential random graph model).
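As a toy illustration, the simplest exponential random graph model is the Bernoulli (Erdős–Rényi) model, whose only statistic is the edge count and whose maximum likelihood estimate is just the observed edge density; fitting the richer p1, p2, and p* models requires specialized software, which this Python sketch does not attempt.

    def edge_density_mle(A):
        """MLE of the edge probability in a Bernoulli random graph model,
        given a symmetric 0/1 adjacency matrix with no self-loops."""
        n = len(A)
        m = sum(A[i][j] for i in range(n) for j in range(i + 1, n))
        return m / (n * (n - 1) / 2)

    A = [[0, 1, 1, 0],
         [1, 0, 0, 0],
         [1, 0, 0, 1],
         [0, 0, 1, 0]]
    print(edge_density_mle(A))  # 3 edges out of 6 possible pairs -> 0.5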
When a network is too large to study completely, we sample from that network just as we would sample from any large population. The structure of network data, however, is more complicated than that of standard statistical data. The main question is, how can one sample from a network that has an enormous number of nodes and edges? Should we sample the nodes? Or should we sample the edges? The answers to these questions depend upon the complexity of the network. In this chapter, we examine various methods of sampling a network.
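A minimal Python sketch contrasting the two basic designs mentioned above, induced-subgraph (node) sampling versus edge sampling; the function names and the toy graph are illustrative assumptions, not the chapter's notation.

    import random

    def node_sample(nodes, edges, k, seed=0):
        """Induced subgraph on k uniformly sampled nodes."""
        rng = random.Random(seed)
        S = set(rng.sample(sorted(nodes), k))
        return S, [(u, v) for (u, v) in edges if u in S and v in S]

    def edge_sample(edges, k, seed=0):
        """k uniformly sampled edges, plus the nodes they touch."""
        rng = random.Random(seed)
        E = rng.sample(sorted(edges), k)
        return {u for e in E for u in e}, E

    nodes = range(10)
    edges = [(i, j) for i in nodes for j in nodes if i < j and (i + j) % 3 == 0]
    print(node_sample(nodes, edges, 4))  # may miss many edges
    print(edge_sample(edges, 4))         # biased toward high-degree nodes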
Random graphs were introduced by the Hungarian mathematicians Erdős and Rényi (1959, 1960), who imposed a probabilistic framework on classical combinatorial graph theory. At around the same time, Edgar N. Gilbert (1959) also studied the theoretical properties of random graphs.
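Gilbert's variant, usually written G(n, p), is the easiest to simulate: each of the n(n − 1)/2 possible edges appears independently with probability p. A minimal Python sketch:

    import random

    def gnp(n, p, seed=0):
        """Sample a Gilbert G(n, p) random graph as an edge list."""
        rng = random.Random(seed)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if rng.random() < p]

    edges = gnp(100, 0.05)
    print(len(edges))  # expected number of edges: p * n(n-1)/2 = 247.5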