Erasure channel models and bounds: In Chapter 3, we introduce the memoryless erasure channel and derive its performance limits, both in terms of capacity and in the form of finite-length upper and lower bounds on the block error probability. The chapter concludes with a survey of models for erasure channels with memory.
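As a quick illustration of the capacity result, the following Python sketch (illustrative, not taken from the book) computes the capacity of a memoryless erasure channel, which for erasure probability eps is simply 1 - eps symbols per channel use:

```python
def bec_capacity(eps: float) -> float:
    """Capacity, in symbols per channel use, of a memoryless
    erasure channel with erasure probability eps."""
    if not 0.0 <= eps <= 1.0:
        raise ValueError("erasure probability must lie in [0, 1]")
    return 1.0 - eps

# A channel that erases 10% of the symbols supports rates up to 0.9.
rate_limit = bec_capacity(0.1)
```

The finite-length bounds discussed in the chapter refine this asymptotic limit for fixed blocklengths.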
Fountain codes: In Chapter 11, we address an alternative paradigm for erasure coding that exploits feedback from (multiple) receivers to the transmitter. Here, the receivers are assumed to be interested in the same content, and the transmission takes place over a broadcast channel. The feedback is employed to adapt the coding rate “on the fly” to the channel conditions. This approach, which shares several similarities with the framework of rate-compatible codes with hybrid automatic repeat request (ARQ), relies on the so-called fountain codes. Two well-established classes of fountain codes are discussed, namely LT codes (strongly related to LDPC codes) and Raptor codes.
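To make the fountain-coding idea concrete, here is a minimal, hypothetical sketch of LT-style encoding: each output symbol is the XOR of a random subset of source symbols, with the subset size drawn from a degree distribution. The toy distribution below is for illustration only; practical LT codes use the robust soliton distribution.

```python
import random

def lt_encode_symbol(source, degree_weights, rng):
    """Produce one LT-coded symbol as (chosen indices, XOR of those symbols).

    degree_weights[d-1] is the (unnormalized) probability of degree d.
    """
    d = rng.choices(range(1, len(degree_weights) + 1),
                    weights=degree_weights)[0]
    idx = rng.sample(range(len(source)), d)
    value = 0
    for i in idx:
        value ^= source[i]          # XOR the selected source symbols
    return frozenset(idx), value

rng = random.Random(0)
source = [0b1010, 0b0110, 0b1111, 0b0001]
indices, value = lt_encode_symbol(source, [0.25, 0.5, 0.25], rng)
```

A receiver that collects slightly more than len(source) such symbols can recover the source with high probability by a peeling process, mirroring the iterative LDPC decoder of Chapter 6.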
Weight distribution of LDPC codes: More advanced topics on LDPC codes are covered in Chapter 5, which focuses on structural properties of these codes related to their weight distributions. The chapter addresses the weight distributions of unstructured and partially structured LDPC code ensembles, as well as weight distribution exponents, ensemble expurgation, and minimum distance analysis for LDPC codes.
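For intuition, the weight distribution of a single small code can be computed by brute force, as in this illustrative Python sketch (the chapter works with ensemble averages and exponents rather than exhaustive enumeration, which is feasible only for tiny codes):

```python
from itertools import product

def weight_distribution(G):
    """Count codewords of each Hamming weight for the binary linear
    code generated by G (a list of k generator rows of length n)."""
    k, n = len(G), len(G[0])
    counts = [0] * (n + 1)
    for msg in product([0, 1], repeat=k):     # all 2**k messages
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        counts[sum(cw)] += 1
    return counts

# Generator matrix of the [7,4] Hamming code (one standard form);
# its weight enumerator is 1 + 7x^3 + 7x^4 + x^7.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
dist = weight_distribution(G)
```

The minimum distance is the smallest nonzero weight with a nonzero count, here 3.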
Decoding of LDPC codes on the erasure channel: In Chapter 6, we illustrate different decoding algorithms for LDPC codes over erasure channels, namely iterative (IT) and maximum likelihood (ML) decoding. Decoding on the erasure channel is greatly simplified with respect to decoding on other channels, since whenever a symbol is not erased, its value is known with full certainty. We illustrate that iterative decoding can be carried out by a peeling process that resolves one unknown per iteration, while ML decoding can be performed efficiently by solving a sparse system of equations via variants of the Gaussian elimination algorithm.
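The peeling process can be sketched in a few lines of Python. This is an illustrative toy implementation, not the book's: it scans the parity checks for one involving exactly one erased bit, recovers that bit as the XOR of the known bits in the check, and repeats until no such check remains.

```python
def peel_decode(H, y):
    """Iterative erasure decoding for a binary linear code.

    H: parity-check matrix as a list of rows of 0/1 entries.
    y: received word as a list of bits, with None marking an erasure.
    Returns the word with every peelable erasure filled in.
    """
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [j for j, h in enumerate(row) if h and y[j] is None]
            if len(unknown) == 1:           # check with a single erasure
                val = 0
                for j, h in enumerate(row):
                    if h and y[j] is not None:
                        val ^= y[j]         # XOR of the known bits
                y[unknown[0]] = val
                progress = True
    return y

# Parity-check matrix of the [7,4] Hamming code, used here as a toy example.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [None, 1, None, 0, 0, 0, 0]      # two erased positions
decoded = peel_decode(H, received)
```

When peeling stalls on a stopping set, the remaining unknowns can still be attacked by ML decoding, i.e., Gaussian elimination on the residual sparse system.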
Generalizations of LDPC codes: In Chapter 9, we present code ensembles that may be regarded as special instances, generalizations, or modifications of the LDPC code ensembles introduced in Chapter 4. The chapter starts with spatially coupled LDPC codes, introduced within a protograph-based framework, then addresses generalized LDPC codes, where some of the check nodes (CNs) impose multiple linear constraints, and finally describes low-density generator matrix codes. Erasure decoding algorithms are described for all code classes.
Polar codes: Among the most recent proposals for error correction are polar codes, which are described in Chapter 10. Polar codes are block codes designed to simplify the implementation of the decoder; specifically, they are designed assuming a successive cancellation (SC) decoder. Channel polarization and subchannel ranking are discussed in this chapter.
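On the erasure channel, the polarization recursion has a particularly simple closed form, which the following hypothetical sketch exploits: one polarization step maps two uses of a channel with erasure probability x to a degraded subchannel with erasure probability 2x - x**2 and an upgraded one with x**2. Ranking the resulting synthetic subchannels identifies where to place the information bits for SC decoding.

```python
def polarize(eps, n_levels):
    """Erasure probabilities of the 2**n_levels synthetic subchannels
    obtained by recursively polarizing an erasure channel BEC(eps)."""
    chans = [eps]
    for _ in range(n_levels):
        # each channel splits into a degraded (-) and an upgraded (+) one
        chans = [e for x in chans for e in (2 * x - x * x, x * x)]
    return chans

subch = polarize(0.5, 3)                      # 8 subchannels from BEC(0.5)
# Rank subchannels by reliability and keep the 4 best for information bits.
info_positions = sorted(range(len(subch)), key=lambda i: subch[i])[:4]
```

Note how the subchannels spread out toward the extremes: the best one has erasure probability 0.5**8, the worst 1 - 0.5**8, while the average stays at 0.5.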
Basic definitions and tools for error correction: In Chapter 2, we provide the basic elements of classical error-correcting codes: how to perform operations in finite fields, decision rules, the structure and properties of classical block codes, and finally a description of the Reed–Solomon codes, which are particularly important for the erasure channel.
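As a taste of the finite-field machinery behind Reed–Solomon codes, the sketch below implements arithmetic in GF(2^8). The reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B, the one used by AES) is chosen purely for illustration; the book's constructions may fix a different field or polynomial.

```python
def gf256_add(a: int, b: int) -> int:
    """Addition in GF(2^8) is bitwise XOR (characteristic 2)."""
    return a ^ b

def gf256_mul(a: int, b: int) -> int:
    """Carry-less (polynomial) multiplication of a and b,
    reduced modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    while b:
        if b & 1:
            p ^= a              # add current multiple of a
        b >>= 1
        a <<= 1
        if a & 0x100:           # degree reached 8: reduce
            a ^= 0x11B
    return p
```

With these two operations (plus inversion, obtainable by exponentiation since a**254 is the inverse of a), Reed–Solomon encoding and erasure decoding reduce to linear algebra over GF(2^8).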
Maximum-likelihood LDPC decoder analysis: In Chapter 8, the performance of LDPC codes under ML decoding is analyzed. ML decoding here refers to either the block-wise or the symbol-wise decoding criterion (see Section 2.2). More specifically, the asymptotic analysis of the ML decoding threshold addresses the performance under symbol-wise ML decoding, whereas finite-length bounds are provided for the block error probability under block-wise ML decoding. While the focus is on unstructured LDPC code ensembles, the results of this chapter remain valid, to a large extent, for other LDPC code ensembles.
Performance analysis for iterative decoders: In Chapter 7, we discuss the behavior of LDPC codes under iterative erasure decoding. For many LDPC codes, as the blocklength goes to infinity the symbol error probability exhibits a so-called threshold phenomenon: there exists a certain channel erasure probability below which error-free communication is possible, while this is not guaranteed above it. We discuss how to compute this threshold for ensembles of LDPC codes on memoryless erasure channels. In the finite-length setting, one may observe a flattening of the symbol error rate curve owing to stopping sets, specific structures in the code’s bipartite graph. Knowing their number and size allows predicting this so-called error floor. Based on these findings, we discuss how to design good LDPC codes for memoryless erasure channels, with extensions to channels with memory.
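The threshold computation admits a compact sketch for (dv, dc)-regular ensembles, for which the density evolution recursion on the memoryless erasure channel reads x <- eps * (1 - (1 - x)**(dc-1))**(dv-1). The following Python code (illustrative, not the book's) locates the threshold by bisection on eps:

```python
def de_converges(eps, dv, dc, iters=2000, tol=1e-10):
    """Run density evolution for a (dv, dc)-regular LDPC ensemble on a
    memoryless erasure channel; True if the erasure fraction dies out."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def threshold(dv, dc, lo=0.0, hi=1.0, steps=40):
    """Bisection for the iterative-decoding threshold eps*."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo

# Close to the known (3,6)-regular threshold of about 0.4294; the finite
# iteration budget biases the estimate very slightly downward.
eps_star = threshold(3, 6)
```

Irregular ensembles replace the regular recursion with x <- eps * lambda(1 - rho(1 - x)), where lambda and rho are the edge-perspective degree distributions.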
Applications of erasure codes: Erasure codes find a variety of applications, each of which puts different constraints on the erasure code, for example on the blocklength, code rate, decoding complexity, or number of decoding operations. This chapter discusses some of them.
Low-density parity-check codes: In Chapter 4, we introduce low-density parity-check (LDPC) codes, a class of powerful and efficient channel codes that admit a graphical representation based on sparse bipartite graphs. The chapter introduces fundamental concepts in LDPC coding theory and guides the reader through unstructured and structured LDPC code ensembles, including the important case of protograph-based LDPC code ensembles.