Channel coding lies at the heart of digital communication and data storage. Fully updated to reflect current innovations in the field, including a new chapter on polar codes, this detailed introduction describes the core theory of channel coding, decoding algorithms, implementation details, and performance analyses. This edition includes over 50 new end-of-chapter problems to challenge students, as well as numerous new figures and examples throughout.
The authors emphasize a practical approach and clearly present information on modern channel codes, including polar, turbo, and low-density parity-check (LDPC) codes, as well as detailed coverage of BCH codes, Reed–Solomon codes, convolutional codes, finite geometry codes, and product codes for error correction, providing a one-stop resource for both classical and modern coding techniques.
Assuming no prior knowledge in the field of channel coding, the opening chapters begin with basic theory to introduce newcomers to the subject. Later chapters then begin with classical codes, continue with modern codes, and extend to advanced topics such as code ensemble performance analyses and algebraic LDPC code design.
300 varied and stimulating end-of-chapter problems test and enhance learning, making this an essential resource for students and practitioners alike.
Provides a one-stop resource for both classical and modern coding techniques.
Starts with the basic theory before moving on to advanced topics, making it perfect for newcomers to the field of channel coding.
180 worked examples guide students through the practical application of the theory.
Maximise student engagement and understanding of matrix methods in data-driven applications with this modern teaching package. Students are introduced to matrices in two preliminary chapters, before progressing to advanced topics such as the nuclear norm, proximal operators and convex optimization. Highlighted applications include low-rank approximation, matrix completion, subspace learning, logistic regression for binary classification, robust PCA, dimensionality reduction and Procrustes problems. Extensively classroom-tested, the book includes over 200 multiple-choice questions suitable for in-class interactive learning or quizzes, as well as homework exercises (with solutions available for instructors). It encourages active learning with engaging 'explore' questions, with answers at the back of each chapter, and Julia code examples to demonstrate how the mathematics is actually used in practice. A suite of computational notebooks offers a hands-on learning experience for students. This is a perfect textbook for upper-level undergraduates and first-year graduate students who have taken a prior course in linear algebra basics.
Intersymbol interference (ISI) occurs for linear dispersive channels (i.e., channels where the transfer function is not flat within the transmission band). Hence, an obvious strategy to avoid ISI is to divide the transmission band into a large number of subbands, which are used individually in parallel. If these subbands are narrow enough, the fluctuations of the channel transfer function within each subband can be ignored and no linear distortions occur that would have to be equalized. In this chapter, we study this idea in the particular form of orthogonal frequency-division multiplexing (OFDM). It is shown that, even when starting from the frequency-division multiplexing idea, the key principle behind OFDM is blockwise transmission and the use of suitable transformations at the transmitter and receiver. We analyze OFDM in detail and show how the resulting parallel data transmission can be used in an optimum way. OFDM is compared with the equalization schemes discussed in the previous chapter and incorporated into the unified description framework.
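The block-transmission principle can be seen in a few lines of code. The following NumPy sketch of a single OFDM block is illustrative only (the subcarrier count, cyclic-prefix length, and channel taps are assumptions, not taken from the chapter): the IDFT at the transmitter and the DFT at the receiver, together with a cyclic prefix, turn the dispersive channel into independent one-tap subcarrier channels.

```python
import numpy as np

rng = np.random.default_rng(0)

N_sc = 64          # number of subcarriers (illustrative value)
N_cp = 16          # cyclic-prefix length, >= channel memory
h = np.array([1.0, 0.5, 0.2])   # example dispersive channel impulse response

# QPSK symbols on the subcarriers (one OFDM block)
bits = rng.integers(0, 2, size=(N_sc, 2))
X = (1 - 2 * bits[:, 0] + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# Transmitter: the IDFT turns the parallel subcarrier symbols into one
# time-domain block; the cyclic prefix makes the linear channel convolution
# act like a circular one.
x = np.fft.ifft(X) * np.sqrt(N_sc)
tx = np.concatenate([x[-N_cp:], x])

# Channel: linear convolution with the dispersive impulse response
rx = np.convolve(tx, h)[: N_cp + N_sc]

# Receiver: drop the prefix, DFT back to the subcarrier domain, then one
# complex gain per subcarrier undoes the channel (no equalizer filter needed).
Y = np.fft.fft(rx[N_cp:]) / np.sqrt(N_sc)
H = np.fft.fft(h, N_sc)
X_hat = Y / H

print(np.max(np.abs(X_hat - X)))   # ~0 up to numerical precision (noise-free)
```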
In this chapter, we consider the central issue of minimality of the state-space system representation, as well as equivalences of representations. The question introduces important new basic operators and spaces related to the state-space description. In our time-variant context, what we call the Hankel operator plays the central role, via a minimal composition (i.e., product) of a reachability operator and an observability operator. Corresponding results for LTI systems (a special case) follow readily from the LTV case. For deeper insight, a later starred section extends the theory to infinitely indexed systems; this entails some extra complications that are not essential for the main, finite-dimensional treatment and can be skipped by students interested only in finite-dimensional cases.
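For the LTI special case mentioned above, the factorization of the Hankel operator into an observability factor times a reachability factor can be checked numerically. The sketch below uses invented matrices (not taken from the chapter) and shows how the rank of a finite Hankel matrix reveals that a realization is not minimal.

```python
import numpy as np

# A deliberately non-minimal LTI realization (illustrative numbers only)
A = np.array([[0.5, 0.0], [0.0, 0.3]])
B = np.array([[1.0], [0.0]])        # the second state is unreachable
C = np.array([[1.0, 1.0]])
n = A.shape[0]

# Markov parameters C A^k B, k = 0, 1, ...
markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(2 * n)]

# Finite Hankel matrix built from the Markov parameters
H = np.block([[markov[i + j] for j in range(n)] for i in range(n)])

# The Hankel matrix factors into observability times reachability;
# its rank equals the minimal state dimension.
Obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
Reach = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

print(np.allclose(H, Obs @ Reach))   # True
print(np.linalg.matrix_rank(H))      # 1 < n = 2: this realization is not minimal
```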
The set of basic topics then continues with a major application domain of our theory: linear least-squares estimation (llse) of the state of an evolving system (aka Kalman filtering), which turns out to be an immediate application of the outer–inner factorization theory developed in Chapter 8. To complete this discussion, we also show how the theory extends naturally to cover the smoothing case (which is often considered “difficult”).
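As a point of reference, the sketch below is the textbook predict/update form of the Kalman recursion for a small, invented LTI model; it is not the outer–inner (square-root) formulation developed from Chapter 8, and all model matrices are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative constant-velocity model (assumed, not from the chapter):
#   x_{k+1} = A x_k + w_k,   y_k = C x_k + v_k
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)      # process-noise covariance
R = np.array([[0.25]])    # measurement-noise covariance

x_hat = np.zeros((2, 1))  # state estimate
P = np.eye(2)             # estimate covariance

x_true = np.array([[0.0], [0.5]])
for k in range(50):
    # simulate the evolving system
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))

    # measurement update: the Kalman gain weighs the innovation y - C x_hat
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - C @ x_hat)
    P = (np.eye(2) - K @ C) @ P

    # time update: propagate estimate and covariance through the dynamics
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q

print(x_hat.ravel(), x_true.ravel())   # estimate tracks the true state
```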
Several types of factorizations solve the main problems of system theory (e.g., identification, estimation, system inversion, system approximation, and optimal control). The factorization type depends on what kind of operator is factorized and what form the factors should have. This and the following chapter are therefore devoted to the two main types of factorization: this chapter treats what is traditionally called coprime factorization, while the next is devoted to inner–outer factorization. Coprime factorization, here called “external factorization” for greater generality, characterizes the system’s dynamics and plays a central role in system characterization and control issues. A remarkable result of our approach is the derivation of Bezout equations for time-variant and quasi-separable systems, obtained without the use of Euclidean divisibility theory. From a numerical point of view, all these factorizations reduce to QR or LQ factorizations applied recursively to appropriately chosen operators.
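For orientation, the classical LTI form of the statement reads as follows (standard notation assumed here, not necessarily the book's): a right coprime factorization of a transfer function comes with a Bezout identity certifying coprimeness.

```latex
% Right coprime factorization of a transfer function T and its Bezout certificate
% (classical LTI notation, assumed for illustration):
T(z) = N(z)\,M(z)^{-1}, \qquad X(z)\,M(z) + Y(z)\,N(z) = I,
% with N, M, X, Y all stable; the chapter derives the time-variant analogue
% of this identity without resorting to Euclidean divisibility arguments.
```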
In carrier-modulated (digital) communication, the transmit signal has spectral components in a band around a so-called carrier frequency. Here, a baseband transmit signal is upconverted to obtain the radio-frequency (RF) transmit signal and the RF receive signal is downconverted to obtain the baseband receive signal. The processing of transmit and receive signals is done as far as possible in the baseband domain. The aim of the chapter is to develop a mathematically precise compact representation of real-valued RF signals independent of the actual center frequency (or carrier frequency) by equivalent complex baseband (ECB) signals. In addition, transforms of corresponding systems and stochastic processes into the ECB domain and back are covered in detail. Conditions for wide-sense stationary and cyclic-stationary stochastic processes in the ECB domain are discussed.
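The core relation the chapter builds on can be stated compactly. The display below uses one common normalization (the book may include an additional scaling factor, e.g., a factor of the square root of two); it shows how the real-valued RF signal is obtained from the ECB signal by up-conversion with the carrier frequency.

```latex
% Up-conversion from the ECB signal to the real-valued RF signal
% (one common normalization; an extra factor such as \sqrt{2} may be used):
s_{\mathrm{RF}}(t) \;=\; \operatorname{Re}\!\bigl\{\, s_{\mathrm{ECB}}(t)\, e^{\,\mathrm{j} 2\pi f_{\mathrm{c}} t} \,\bigr\} .
% Conversely, the ECB signal follows from the analytic signal of s_RF
% shifted down in frequency by the carrier frequency f_c.
```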
This chapter starts developing our central linear time-variant (LTV) prototype environment, a class that coincides perfectly with linear algebra and matrix algebra, making the correspondence between system and matrix computations a mutually productive reality. People familiar with the classical approach, in which the z-transform or other types of transforms are used, will easily recognize the notational or graphic resemblance, but there is a major difference: everything stays in the context of elementary matrix algebra, no complex function calculus is involved, and only the simplest matrix operations, namely addition and multiplication of matrices, are needed. Appealing expressions for the state-space realization of a system appear, as well as the global representation of the input–output operator in terms of four block diagonal matrices and the (now noncommutative but elementary) causal shift Z. The consequences for and relation to linear time-invariant (LTI) systems and infinitely indexed systems are fully documented in *-sections, which can be skipped by students or readers more interested in numerical linear algebra than in LTI system control or estimation.
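To make the block-diagonal picture concrete, the display below stacks the local state-space equations into the global input–output operator using the causal shift Z; the ordering convention shown is one common choice and is assumed here, so the book's exact notation may differ in detail.

```latex
% Local state-space equations and their global form, with A, B, C, D the four
% block diagonal matrices collecting A_k, B_k, C_k, D_k and Z the causal shift
% (ordering convention assumed; the book's notation may differ in detail):
x_{k+1} = A_k x_k + B_k u_k, \qquad y_k = C_k x_k + D_k u_k
\;\;\Longrightarrow\;\;
T \;=\; D + C\,(I - Z A)^{-1} Z\, B .
```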
From this point on, main issues in system theory are tackled. The very first, considered in this chapter, is the all-important question of system identification. This is perhaps the most basic question in system theory and related linear algebra, with a long pedigree stretching from Kronecker's characterization of rational functions to the elegant solution for time-variant systems presented here. Identification, often also called realization, is the problem of deriving the system's internal equations (called state-space equations) from input–output data. In this chapter, we consider only the causal, or block-lower triangular, case, although the theory applies just as well to an anti-causal system, for which one lets time run backward and applies the same theory in dual form.
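As a companion to the chapter, the sketch below performs a classical Ho–Kalman-style realization for the LTI special case (it is not the time-variant algorithm of the chapter, and the system matrices are invented for illustration): the SVD of a Hankel matrix of Markov parameters yields a state-space model of minimal order.

```python
import numpy as np

# Ho-Kalman-style realization for the LTI special case: recover (A, B, C)
# from impulse-response (Markov parameter) data.
A_true = np.array([[0.8, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [1.0]])
C_true = np.array([[1.0, 0.0]])

# Markov parameters h_k = C A^k B, as measured from the input-output map
h = [C_true @ np.linalg.matrix_power(A_true, k) @ B_true for k in range(8)]

m = 3  # Hankel block size (assumed to exceed the true order)
H0 = np.block([[h[i + j] for j in range(m)] for i in range(m)])       # unshifted
H1 = np.block([[h[i + j + 1] for j in range(m)] for i in range(m)])   # shifted once

U, s, Vt = np.linalg.svd(H0)
n = int(np.sum(s > 1e-8))                   # numerical rank = minimal order
Obs = U[:, :n] * np.sqrt(s[:n])             # observability factor
Reach = np.sqrt(s[:n])[:, None] * Vt[:n]    # reachability factor

A_hat = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Reach)
B_hat = Reach[:, :1]
C_hat = Obs[:1, :]

# The realization reproduces the Markov parameters (up to state coordinates)
print(np.allclose(C_hat @ A_hat @ B_hat, h[1]))   # True
```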
What is a system? What is a dynamical system? Systems are characterized by a few central notions: their state and their behavior foremost, and then some derived notions such as reachability and observability. These notions pop up in many fields, so it is important to understand them in nontechnical terms. This chapter therefore offers what one might call a narrative, aimed at describing the central ideas. In the remainder of the book, the ideas presented here are made mathematically precise in concrete numerical situations. It turns out that a sharp understanding of just the notion of state suffices to develop most, if not all, of the mathematical machinery needed to solve the main engineering problems related to systems and their dynamics.
In digital frequency modulation, in particular frequency-shift keying (FSK), information is represented solely by the instantaneous frequency, whereas the amplitude of the ECB signal and thus the envelope of the RF signal are constant. Therefore, efficient power amplification is possible, an important advantage of digital frequency modulation. Even though the frequency and phase of a carrier signal are tightly related (the instantaneous frequency is given by the derivative of the phase), differentially encoded PSK and FSK fall into different families. Moreover, in FSK the continuity of the carrier phase plays an important role, resulting in continuous-phase FSK (CPFSK). A generalization of CPFSK leads to continuous-phase modulation (CPM), similar to the generalization of MSK to Gaussian MSK discussed in Chapter 4. A brief introduction to CPM is presented, and we highlight in particular the inherent coding present in CPFSK and CPM. For the characterization and analysis, the general signal space concept derived in Chapter 6 is applied.
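For reference, the display below states the standard CPM signal model in a common textbook normalization (the symbols and scaling are assumed here, not quoted from the chapter); CPFSK corresponds to a rectangular frequency pulse of one symbol duration, and the phase memory across symbols is what produces the inherent coding mentioned above.

```latex
% Standard CPM signal model (common textbook normalization; symbols assumed):
% information symbols a_k, modulation index h, frequency pulse g with phase
% pulse q(t) = \int_0^t g(\tau)\,d\tau normalized so that q(\infty) = 1/2.
s_{\mathrm{RF}}(t) = \sqrt{\tfrac{2E_s}{T}}\,
  \cos\!\bigl(2\pi f_{\mathrm{c}} t + \varphi(t)\bigr),
\qquad
\varphi(t) = 2\pi h \sum_{k} a_k\, q(t - kT).
% CPFSK is the special case of a rectangular frequency pulse of duration T;
% the constant envelope permits efficient (nonlinear) power amplification.
```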
This chapter considers the Moore–Penrose inversion of full matrices with quasi-separable specifications, that is, matrices that decompose into the sum of a block-lower triangular and a block-upper triangular matrix, each of which is given by a state-space realization. We show that the Moore–Penrose inverse of such a system again has a quasi-separable specification of the same order of complexity as the original, and we show how this representation can be computed recursively with three intertwined recursions. The procedure is illustrated on a 4 × 4 (block) example.
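The target of such a recursion can be checked by brute force on a small example. The sketch below builds a quasi-separable matrix as the sum of a lower and an upper triangular part, computes its Moore–Penrose inverse with a dense routine, and verifies the four defining conditions; it is only a numerical sanity check with a randomly generated matrix, not the recursive state-space algorithm of the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small quasi-separable matrix: the sum of a lower and an upper triangular
# part, viewed as a 4 x 4 arrangement of 2 x 2 blocks (illustrative only).
b, nb = 2, 4
L = np.tril(rng.standard_normal((b * nb, b * nb)))
U = np.triu(rng.standard_normal((b * nb, b * nb)), k=1)
T = L + U

# Brute-force Moore-Penrose inverse; the chapter instead computes a
# quasi-separable representation of the inverse recursively, at the same
# complexity order as the original realization.
T_pinv = np.linalg.pinv(T)

# The four Moore-Penrose conditions
print(np.allclose(T @ T_pinv @ T, T))
print(np.allclose(T_pinv @ T @ T_pinv, T_pinv))
print(np.allclose((T @ T_pinv).conj().T, T @ T_pinv))
print(np.allclose((T_pinv @ T).conj().T, T_pinv @ T))
```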