State-space models (SSMs) are a mathematical abstraction of many real-life dynamic systems. They have proven useful in a wide variety of fields, including robot tracking, speech processing, control systems, stock prediction, and bio-informatics: basically anywhere there is a dynamic system [70–75]. These models are not only of great practical relevance, but also a good illustration of the power of factor graphs and the SPA. The central idea behind an SSM is that the system at any given time can be described by a state, belonging to a state space. The state space can be either discrete or continuous. The state changes dynamically over time according to a known statistical rule. We cannot observe the state directly; the state is said to be hidden. Instead we observe another quantity (the observation), which has a known statistical relationship with the state. Once we have collected a sequence of observations, our goal is to infer the corresponding sequence of states.
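To make this concrete, here is a minimal sketch (not from the book) of a linear-Gaussian SSM in Python; the coefficients and noise variances are arbitrary illustrative choices:

```python
import numpy as np

# Minimal linear-Gaussian state-space model (illustrative parameters).
# State:       x[t] = a * x[t-1] + process noise     (hidden)
# Observation: y[t] = c * x[t]   + observation noise (what we measure)
rng = np.random.default_rng(0)
a, c = 0.9, 1.0      # state-transition and observation coefficients (assumed)
q, r = 0.1, 0.5      # process and observation noise variances (assumed)

T = 50
x = np.zeros(T)      # hidden state sequence
y = np.zeros(T)      # observation sequence
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y[t] = c * x[t] + rng.normal(0.0, np.sqrt(r))
# Inference (e.g. Kalman filtering) would recover estimates of x from y alone.
```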
This chapter is organized as follows.
In Section 6.2 we will describe the basic concepts of SSMs, create an appropriate factor graph, and show how the sum–product and max–sum algorithms can be executed on this factor graph. Then, we will consider three cases of SSM in detail.
In Section 6.3, we will cover models with discrete state spaces, known as hidden Markov models (HMMs), where we reformulate the well-known forward–backward and Viterbi algorithms using factor graphs.
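For a flavour of the kind of algorithm Section 6.3 reformulates, here is a compact Viterbi sketch in Python; the two-state transition and emission matrices are invented toy values, not an example from the book:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for a discrete HMM.

    obs: observation indices; pi: initial state probabilities;
    A[i, j] = P(state j | state i); B[i, k] = P(obs k | state i).
    """
    T, n_states = len(obs), len(pi)
    delta = np.zeros((T, n_states))      # best log-prob of a path ending in each state
    psi = np.zeros((T, n_states), int)   # back-pointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)  # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [delta[-1].argmax()]          # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(psi[t][path[-1]])
    return path[::-1]

# Toy example: two hidden states, two observation symbols (invented values).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1, 0], pi, A, B))
```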
In the previous two chapters we have introduced estimation theory and factor graphs. Although these two topics may seem disparate, they are closely linked. In this chapter we will use factor graphs to solve estimation problems and, more generally, inference problems. In the context of statistical inference, factor graphs are important for two reasons. First of all, they allow us to reformulate several important inference algorithms in a very elegant way, with an all-encompassing, well-defined notation and terminology. As we will see in later chapters, well-known algorithms such as the forward–backward algorithm, the Viterbi algorithm, the Kalman filter, and the particle filter can all be cast in the factor-graph framework in a very natural way. Secondly, deriving new, optimal (or near-optimal) inference algorithms is fairly straightforward in the factor-graph framework. Applying the SPA on a factor graph relies solely on local computations in basic building blocks. Once we understand the basic building blocks, we need only remember one rule: the sum–product rule. In this chapter, we will go into considerable detail on how to perform inference using factor graphs. Certain aspects of this chapter were inspired by [62].
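To make the "local computations" concrete, here is a minimal sum–product sketch (not from the book) on a two-factor chain of binary variables, checked against brute-force marginalization; the factor tables are random placeholders:

```python
import numpy as np

# Sum-product on the chain x1 - f12 - x2 - f23 - x3,
# where p(x1, x2, x3) ∝ f12(x1, x2) * f23(x2, x3), all variables binary.
rng = np.random.default_rng(1)
f12 = rng.random((2, 2))
f23 = rng.random((2, 2))

# Forward messages toward x3, each a purely local computation.
m_f12_to_x2 = f12.sum(axis=0)        # sum out x1
m_f23_to_x3 = f23.T @ m_f12_to_x2    # sum out x2, weighted by the incoming message

# Marginal of x3 from messages vs. brute-force enumeration of the joint.
p_msg = m_f23_to_x3 / m_f23_to_x3.sum()
joint = np.einsum('ij,jk->ijk', f12, f23)
p_brute = joint.sum(axis=(0, 1))
p_brute /= p_brute.sum()
assert np.allclose(p_msg, p_brute)
print(p_msg)
```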
This chapter is organized as follows.
We start by explaining the various problems of statistical inference in Section 5.2, and then provide a general factor-graph-based framework for solving these problems.
In Section 5.3 we will deal with how messages should be represented, a topic we glossed over in Chapter 4. The representation has many important implications, as will become apparent throughout this book.
We investigate the computational structure of the biological kinship assignment problem by abstracting away all biological details that are irrelevant to computation. The computational structure depends on phenotype space, which we formally define. We illustrate this approach by exhibiting an approximation algorithm for kinship assignment in the case of the Simpson index, with an a priori error bound and running time that is polynomial in the bit size of the population, but exponential in the size of the phenotype space. This algorithm is based on a relaxed version of the assignment problem, in which fractional assignments (over the reals) are permitted.
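As an illustrative sketch of the relaxation idea only (the paper's actual algorithm is more involved), here is a fractional assignment LP in Python using scipy.optimize.linprog; the cost matrix is an invented placeholder:

```python
import numpy as np
from scipy.optimize import linprog

# LP relaxation of an assignment problem: instead of x[i, j] in {0, 1},
# allow fractional x[i, j] in [0, 1], each row and column summing to 1.
rng = np.random.default_rng(2)
n = 4
cost = rng.random((n, n))   # placeholder costs

# Equality constraints: row sums = 1 and column sums = 1 (variables row-major).
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1   # row i
    A_eq[n + i, i::n] = 1            # column i
b_eq = np.ones(2 * n)

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n, n).round(3))
```

For the plain assignment polytope the LP optimum happens to be integral (Birkhoff–von Neumann), but the relaxation viewpoint generalizes to settings where it is not.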
Factor graphs are a way to represent graphically the factorization of a function. The sum–product algorithm is an algorithm that computes marginals of that function by passing messages on its factor graph. The term and concept of a factor graph were originally introduced by Brendan Frey in the late 1990s, as a way to capture structure in statistical inference problems. Factor graphs form an attractive alternative to Bayesian belief networks and Markov random fields, which have been around for many years. At the same time, factor graphs are strongly linked with coding theory, as a way to represent error-correcting codes graphically. They generalize concepts such as Tanner graphs and trellises, which are the usual way to depict codes. The whole idea of seeing a code as a graph can be traced back to 1963, when Robert Gallager described low-density parity-check (LDPC) codes in his visionary PhD thesis at MIT. Although LDPC codes remained largely forgotten until fairly recently, the idea of representing codes on graphs was not, and led to the introduction of the concept of a trellis some ten years later, as well as Tanner graphs in 1981.
To get an idea of how factor graphs came about, let us take a look at the following timeline. It represents a selection of key contributions in the field.
As depicted in Fig. 11.1, in single-user, single-antenna transmission, both the receiver and the transmitter are equipped with a single antenna. There are no other transmitters. This is the most conventional and well-understood way of communicating. Many receivers for such a set-up have been designed during the past few decades. These receivers usually consist of a number of stages. The first stage is a conversion from the continuous-time received waveform to a suitable observation (to allow digital signal processing), followed by equalization (to counteract inter-symbol interference), demapping (where decisions with respect to the coded bits are taken), and finally decoding (where we attempt to recover the original information sequence). This is a one-shot approach, whereby no information flows back from the decoder to the demapper or to the equalizer. Here the terms decoder, demapper, and equalizer pertain to the more conventional receiver tasks, not to nodes in any factor graph. In a conventional mind-set it is hard to come up with a non-ad-hoc way of exploiting information from the decoder during the equalization process. In the factor-graph framework, the flow of information between the various blocks appears naturally and explicitly. These two approaches to receiver design are depicted in Fig. 11.2.
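As an architectural sketch only (the stage functions below are trivial stand-ins, not the book's notation; in practice each stage exchanges soft information such as log-likelihood ratios), the two designs differ in whether decoder output feeds back:

```python
# Placeholder stages: identity-like stand-ins for real signal processing.
def equalize(y, prior=None):
    return y if prior is None else [yi + pi for yi, pi in zip(y, prior)]

def demap(symbols, prior=None):
    return symbols

def decode(bits):
    return [0.5 * b for b in bits]   # placeholder "soft decisions"

def one_shot_receiver(y):
    """Conventional pipeline: information flows strictly forward."""
    return decode(demap(equalize(y)))

def iterative_receiver(y, n_iter=5):
    """Factor-graph style: decoder output flows back to equalizer and demapper."""
    prior = None                       # no prior information on the first pass
    for _ in range(n_iter):
        symbols = equalize(y, prior)   # equalization refined by decoder feedback
        bits = demap(symbols, prior)
        prior = decode(bits)           # extrinsic information fed back
    return prior

print(one_shot_receiver([1.0, -1.0]))
print(iterative_receiver([1.0, -1.0]))
```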
In this chapter we will see how to convert the received waveform into a suitable observation y. This conversion is exactly the same as in conventional receivers.
This book provides an introduction to Bluetooth wireless technology and Bluetooth programming, with a specific focus on the parts of Bluetooth that concern a software developer. While a host of literature about Bluetooth already exists, few of these texts are written for the programmer who is concerned primarily with creating Bluetooth software applications. Instead, they tell all about Bluetooth, when most of the time the programmer is interested in only a tiny fraction of this information.
This book purposefully and happily leaves out a great deal of information about Bluetooth. Concepts are simplified and described in ways that make sense to a programmer, not necessarily the ways they're laid out in the Bluetooth specification. The approach is to start simply, allowing the reader to quickly master the basic concepts with the default parameters before addressing a few advanced features.
Despite these omissions, this book is a rigorous introduction to Bluetooth, albeit with a narrow focus. Applications can be developed without an understanding of the radio modulation techniques or the algorithms underlying the generation of Bluetooth encryption keys. Programmers, however, do need to understand issues such as the available transport protocols, the process of establishing connections, and the mechanisms for transferring data.
We strongly believe in learning by example and have included working programs that demonstrate the concepts and techniques introduced in the text.
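In that spirit, here is a minimal example of the kind of program such texts build up to. This sketch assumes the PyBluez library; the device address and RFCOMM port are placeholders you would replace with a real service:

```python
import bluetooth  # PyBluez

# Discover nearby devices (addresses and, where available, friendly names).
for addr, name in bluetooth.discover_devices(lookup_names=True):
    print(addr, name)

# Connect to an RFCOMM service and send a few bytes.
# The address and port below are placeholders, not a real device.
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect(("01:23:45:67:89:AB", 1))
sock.send(b"hello")
sock.close()
```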
Let G be a graph with no three independent vertices. How many edges of G can be packed with edge-disjoint copies of K_k? More specifically, let f_k(n, m) be the largest integer t such that, for any graph with n vertices, m edges, and independence number 2, at least t edges can be packed with edge-disjoint copies of K_k. Turán's theorem together with Wilson's theorem assert that if . A conjecture of Erdős states that for all plausible m. For any ε > 0, this conjecture was open even if . In general, f_k(n, m) may be significantly smaller than . Indeed, for k = 7 it is easy to show that for m ≈ 0.3n². Nevertheless, we prove the following result. For every k ≥ 3 there exists γ > 0 such that if then . In the special case k = 3 we obtain the reasonable bound γ ≥ 10⁻⁴. In particular, the above conjecture of Erdős holds whenever G has fewer than 0.2501n² edges.