In Chapter 14 we first define a performance metric giving a full description of the binary hypothesis testing (BHT) problem. A key result in this theory, the Neyman–Pearson lemma, determines the form of the optimal test and at the same time characterizes the given performance metric. We then specialize to the setting of iid observations and consider two types of asymptotics: Stein’s regime (where the type-I error is held constant) and Chernoff’s regime (where errors of both types are required to decay exponentially). In this chapter we only discuss Stein’s regime and find that the fundamental limit is given by the KL divergence. Subsequent chapters will address Chernoff’s regime.
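To make the Stein-regime statement concrete, here is the standard form of Stein’s lemma (a well-known result, recalled here for orientation rather than quoted from the chapter): for iid observations drawn from either P or Q, if the type-I error is held below a fixed ε ∈ (0,1), the smallest achievable type-II error β_n(ε) decays exponentially with exponent equal to the KL divergence,

\[
\lim_{n\to\infty} -\frac{1}{n}\log \beta_n(\epsilon) \;=\; D(P\|Q), \qquad D(P\|Q) \;=\; \mathbb{E}_P\!\left[\log\frac{\mathrm{d}P}{\mathrm{d}Q}\right].
\]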
In the previous chapter we introduced the concept of variable-length compression and studied its fundamental limits (with and without the prefix-free condition). In some situations, however, one may desire that the output of the compressor always has a fixed length, say, k bits. Unless k is unreasonably large, this will require relaxing the losslessness condition. This is the focus of Chapter 11: compression in the presence of a (typically vanishingly small) probability of error. It turns out that allowing even a very small error enables several beautiful effects: the possibility to compress data via matrix multiplication over finite fields (linear compression); the possibility to reduce the compression length if side information is available at the decompressor (Slepian–Wolf); and the possibility to reduce the compression length if access to a compressed representation of side information is available at the decompressor (Ahlswede–Körner–Wyner).
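For orientation, the standard asymptotic answers (well-known results, not quoted from the chapter) for an iid source X with side information Y are: with error probability vanishing in the blocklength, the minimal compression rate equals the entropy, and with side information available at the decompressor it drops to the conditional entropy (Slepian–Wolf),

\[
R^* \;=\; H(X), \qquad R^*_{\text{SI at decompressor}} \;=\; H(X\mid Y).
\]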
In Chapter 19 we apply methods developed in the previous chapters (namely the weak converse and the random/maximal coding achievability) to compute the channel capacity. This notion quantifies the maximal number of (data) bits that can be reliably communicated per channel use in the limit of using the channel many times. Formalizing this statement will require introducing the concept of a communication channel. Then for special kinds of channels (the memoryless and the information-stable ones) we will show that computing the channel capacity reduces to maximizing the (sequence of the) mutual information. This result, known as Shannon’s noisy channel coding theorem, is very special as it relates the value of a (discrete, combinatorial) optimization problem over codebooks to that of a (convex) optimization problem over information measures. It builds a bridge between the abstraction of information measures (Part I) and practical engineering problems.
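The headline formula, in the standard statement for a stationary memoryless channel P_{Y|X} (recalled here for orientation), is

\[
C \;=\; \max_{P_X} I(X;Y),
\]

where the maximum is over input distributions P_X and I(X;Y) is the mutual information induced by P_X together with the channel P_{Y|X}.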
Chapter 1 introduces the first information measure – Shannon entropy. After studying its standard properties (chain rule, conditioning), we will briefly describe how one could arrive at its definition. We discuss axiomatic characterization, the historical development in statistical mechanics, as well as the underlying combinatorial foundation (“method of types”). We close the chapter with Han’s and Shearer’s inequalities, which both exploit the submodularity of entropy.
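For reference, the basic definition and identities mentioned above (standard facts, stated in the discrete case) are

\[
H(X) \;=\; -\sum_x P_X(x)\log P_X(x), \qquad H(X,Y) \;=\; H(X) + H(Y\mid X),
\]

and Han’s inequality, one expression of the submodularity mentioned above, reads

\[
H(X_1,\dots,X_n) \;\le\; \frac{1}{n-1}\sum_{i=1}^{n} H(X_1,\dots,X_{i-1},X_{i+1},\dots,X_n).
\]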
Chapter 2 is a study of divergence (also known as information divergence, Kullback–Leibler (KL) divergence, relative entropy), which is the first example of a dissimilarity (information) measure between a pair of distributions P and Q. Defining KL divergence and its conditional version in full generality requires some measure-theoretic acrobatics (Radon–Nikodym derivatives and Markov kernels) that we spend some time on. (We stress again that all this abstraction can be ignored if one is willing to work only with finite or countably infinite alphabets.) Besides definitions we prove the “main inequality” showing that KL divergence is non-negative. Coupled with the chain rule for divergence, this inequality implies the data-processing inequality, which is arguably the central pillar of information theory and this book. We conclude the chapter by studying the local behavior of divergence when P and Q are close. In the special case when P and Q belong to a parametric family, we will see that divergence is locally quadratic, with Hessian being the Fisher information, explaining the fundamental role of the latter in classical statistics.
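Concretely, and stated informally (standard facts, not quoted from the chapter), the divergence and its non-negativity are

\[
D(P\|Q) \;=\; \mathbb{E}_P\!\left[\log\frac{\mathrm{d}P}{\mathrm{d}Q}\right] \;\ge\; 0,
\]

and for a smooth parametric family \{P_\theta\}, under suitable regularity conditions, the local quadratic behavior takes the form

\[
D(P_\theta \,\|\, P_{\theta+\delta}) \;=\; \tfrac{1}{2}\,\delta^{\top} J_F(\theta)\,\delta + o(\|\delta\|^2), \qquad \delta \to 0,
\]

where J_F(θ) is the Fisher information matrix.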
This enthusiastic introduction to the fundamentals of information theory builds from classical Shannon theory through to modern applications in statistical learning, equipping students with a uniquely well-rounded and rigorous foundation for further study. The book introduces core topics such as data compression, channel coding, and rate-distortion theory using a unique finite blocklength approach. With over 210 end-of-part exercises and numerous examples, students are introduced to contemporary applications in statistics, machine learning, and modern communication theory. This textbook presents information-theoretic methods with applications in statistical learning and computer science, such as f-divergences, PAC-Bayes and variational principle, Kolmogorov’s metric entropy, strong data-processing inequalities, and entropic upper bounds for statistical estimation. Accompanied by additional stand-alone chapters on more specialized topics in information theory, this is the ideal introductory textbook for senior undergraduate and graduate students in electrical engineering, statistics, and computer science.
In Chapter 13 we will discuss how to produce compression schemes that do not require a priori knowledge of the generative distribution. It turns out that designing a compression algorithm able to adapt to an unknown distribution is essentially equivalent to the problem of estimating an unknown distribution, which is a major topic of statistical learning. The plan for this chapter is as follows: (1) We will start by discussing the earliest example of a universal compression algorithm (of Fitingof). It does not talk about probability distributions at all. However, it turns out to be asymptotically optimal simultaneously for all iid distributions and, with small modifications, for all finite-order Markov chains. (2) The next class of universal compressors is based on assuming that the true distribution belongs to a given class. These methods proceed by choosing a good model distribution serving as the minimax approximation to each distribution in the class. The compression algorithm for a single distribution is then designed as in previous chapters. (3) Finally, an entirely different idea is embodied by algorithms of the Lempel–Ziv type. These automatically adapt to the distribution of the source, without any prior assumptions required.
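As a concrete illustration of the Lempel–Ziv idea in point (3), here is a minimal sketch of LZ78-style dictionary parsing in Python. It only demonstrates the phrase-building mechanism; the function names and the (index, symbol) output format are illustrative choices, not the construction given in the chapter.

    def lz78_parse(s):
        """Parse s into LZ78 phrases (index of longest previously seen phrase, next symbol)."""
        dictionary = {"": 0}              # phrase -> index; the empty phrase has index 0
        phrases = []
        current = ""
        for ch in s:
            if current + ch in dictionary:
                current += ch             # keep extending while the phrase is already known
            else:
                phrases.append((dictionary[current], ch))
                dictionary[current + ch] = len(dictionary)   # learn the new phrase
                current = ""
        if current:                       # flush a possibly incomplete final phrase
            phrases.append((dictionary[current[:-1]], current[-1]))
        return phrases

    def lz78_unparse(phrases):
        """Invert lz78_parse by rebuilding the same phrase table."""
        table = [""]
        out = []
        for idx, ch in phrases:
            phrase = table[idx] + ch
            table.append(phrase)
            out.append(phrase)
        return "".join(out)

    msg = "abracadabra abracadabra"
    assert lz78_unparse(lz78_parse(msg)) == msg

The point of the sketch is that the dictionary is built from the data itself, so no probabilistic model of the source is ever specified.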
In this chapter we introduce the problem of analyzing low-probability events, known as large deviation theory. It is usually solved by computing moment-generating functions and applying Fenchel–Legendre conjugation. It turns out, however, that these steps can be interpreted information-theoretically in terms of information projection. We show how to solve the information projection in the special case of linear constraints, connecting the solution to exponential families.
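In the simplest Cramér-type setting, with X_1, ..., X_n iid from P and γ above the mean, the two viewpoints mentioned above yield the same exponent (a standard identity, recalled here informally and under mild regularity conditions):

\[
-\frac{1}{n}\log \mathbb{P}\!\left[\frac{1}{n}\sum_{i=1}^{n} X_i \ge \gamma\right] \;\to\; \sup_{\lambda \ge 0}\Bigl(\lambda\gamma - \log \mathbb{E}_P\bigl[e^{\lambda X}\bigr]\Bigr) \;=\; \min\bigl\{\, D(Q\|P) : \mathbb{E}_Q[X] \ge \gamma \,\bigr\},
\]

where the right-hand side is the information projection of P onto the set of distributions satisfying the linear constraint E_Q[X] ≥ γ.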
In Chapter 20 we study data transmission with constraints on the channel input. For example, how many bits per channel use can we transmit under constraints on the codewords? To answer this question in general, we need to extend the setup and coding theorems to channels with input constraints. After doing that we will apply these results to compute the capacities of various Gaussian channels (memoryless, with intersymbol interference and subject to fading).
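The best-known instance is the memoryless additive white Gaussian noise channel Y = X + Z with Z ~ N(0, σ²) and an average power constraint on the codewords, whose capacity in bits per channel use is the standard formula

\[
C(P) \;=\; \frac{1}{2}\log_2\!\left(1 + \frac{P}{\sigma^2}\right),
\]

where P is the allowed average power per symbol.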
This study introduces an innovative methodology for mortality forecasting, which integrates signature-based methods within the functional data framework of the Hyndman–Ullah (HU) model. This new approach, termed the Hyndman–Ullah with truncated signatures (HUts) model, aims to enhance the accuracy and robustness of mortality predictions. By utilizing signature regression, the HUts model is able to capture complex, nonlinear dependencies in mortality data, which enhances forecasting accuracy across various demographic conditions. The model is applied to mortality data from 12 countries, comparing its forecasting performance against variants of the HU models across multiple forecast horizons. Our findings indicate that, overall, the HUts model not only provides more precise point forecasts but also shows robustness against data irregularities, such as those observed in countries with historical outliers. The integration of signature-based methods enables the HUts model to capture complex patterns in mortality data, making it a powerful tool for actuaries and demographers. Prediction intervals are also constructed with bootstrapping methods.