In Chapters 4 through 9, we studied reliable communication of independent messages over noisy single-hop networks (channel coding), and in Chapters 10 through 13, we studied the dual setting of reliable communication of uncompressed sources over noiseless single-hop networks (source coding). These settings are special cases of the more general information flow problem of reliable communication of uncompressed sources over noisy single-hop networks. As we have seen in Section 3.9, separate source and channel coding is asymptotically sufficient for communicating a DMS over a DMC. Does such separation hold in general for communicating a k-DMS over a DM single-hop network?
In this chapter, we show that such separation does not hold in general. Thus in some multiuser settings it is advantageous to perform joint source–channel coding. We demonstrate this breakdown in separation through examples of lossless communication of a 2-DMS over a DM-MAC and over a DM-BC.
For the DM-MAC case, we show that joint source–channel coding can help communication by utilizing the correlation between the sources to induce statistical cooperation between the transmitters. We present a joint source–channel coding scheme that outperforms separate source and channel coding. We then show that this scheme can be improved when the sources have a common part, that is, a source that both senders can agree on with probability one.
For the DM-BC case, we show that joint source–channel coding can help communication by utilizing the statistical compatibility between the sources and the channel. We first consider a separate source and channel coding scheme based on the Gray–Wyner source coding system and Marton's channel coding scheme. The optimal rate–region for the Gray–Wyner system naturally leads to several definitions of common information between correlated sources. We then describe a joint source–channel coding scheme that outperforms the separate Gray–Wyner and Marton coding scheme.
Finally, we present a general single-hop network that includes as special cases many of the multiuser source and channel settings we discussed in previous chapters. We describe a hybrid source–channel coding scheme for this network.
So far we have studied single-hop networks in which each node is either a sender or a receiver. In this chapter, we begin the discussion of multihop networks, where some nodes can act as both senders and receivers and hence communication can be performed over multiple rounds. We consider the limits on communication of independent messages over networks modeled by a weighted directed acyclic graph. This network model represents, for example, a wired network or a wireless mesh network operated in time or frequency division, where the nodes may be servers, handsets, sensors, base stations, or routers. The edges in the graph represent point-to-point communication links that use channel coding to achieve close to error-free communication at rates below their respective capacities. We assume that each node wishes to communicate a message to other nodes over this graphical network. The nodes can also act as relays to help other nodes communicate their messages. What is the capacity region of this network?
Although communication over such a graphical network is not hampered by noise or interference, the conditions on optimal information flow are not known in general. The difficulty arises in determining the optimal relaying strategies when several messages are to be sent to different destination nodes.
We first consider the graphical multicast network, where a source node wishes to communicate a message to a set of destination nodes. We establish the cutset upper bound on the capacity and show that it is achievable error-free via routing when there is only one destination, leading to the celebrated max-flow min-cut theorem. When there are multiple destinations, routing alone cannot achieve the capacity, however. We show that the cutset bound is still achievable, but using more sophisticated coding at the relays. The proof of this result involves linear network coding in which the relays perform simple linear operations over a finite field.
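To make the linear network coding idea concrete, consider the classic two-source butterfly network (the canonical example from the network coding literature; the topology is not described above, so it is used here purely as an illustration). Two source bits must both reach two sink nodes through a shared unit-capacity bottleneck link. Routing can forward only one of the bits over the bottleneck, but sending their XOR, a linear operation over the binary field, lets each sink recover both bits. A minimal sketch in Python:

    # Butterfly network: sources emit bits b1 and b2; each sink receives one
    # bit directly and the relay's linear combination over GF(2).
    def butterfly(b1, b2):
        coded = b1 ^ b2               # relay sends b1 XOR b2 over the bottleneck
        sink1 = (b1, b1 ^ coded)      # sink 1 sees b1 directly, solves for b2
        sink2 = (b2 ^ coded, b2)      # sink 2 sees b2 directly, solves for b1
        return sink1, sink2

    for b1 in (0, 1):
        for b2 in (0, 1):
            assert butterfly(b1, b2) == ((b1, b2), (b1, b2))

With plain routing the bottleneck could carry only b1 or b2, so one sink would always be left with a single bit; the linear combination lets both sinks receive both bits per network use.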
We then consider graphical networks with multiple independent messages. We show that the cutset bound is tight when the messages are to be sent to the same set of destination nodes (multimessage multicast), and is achieved again error-free using linear network coding. When each message is to be sent to a different set of destination nodes, however, neither the cutset bound nor linear network coding is optimal in general.
The source and network models we have discussed so far capture many essential ingredients of real-world communication networks, including
• noise,
• multiple access,
• broadcast,
• interference,
• time variation and uncertainty about channel statistics,
• distributed compression and computing,
• joint source–channel coding,
• multihop relaying,
• node cooperation,
• interaction and feedback, and
• secure communication.
Although a general theory for information flow under these models remains elusive, we have seen that there are several coding techniques—some of which are optimal or close to optimal—that promise significant performance improvements over today's practice. Still, the models we discussed do not capture other key aspects of real-world networks.
• We assumed that data is always available at the communication nodes. In real-world networks, data is bursty and the nodes have finite buffer sizes.
• We assumed that the network has a known and fixed number of users. In real-world networks, users can enter and leave the network at will.
• We assumed that the network operation is centralized and communication over the network is synchronous. Many real-world networks are decentralized and communication is asynchronous.
• We analyzed performance assuming arbitrarily long delays. In many networking applications, delay is a primary concern.
• We ignored the overhead (protocol) needed to set up the communication as well as the cost of feedback and channel state information.
While these key aspects of real-world networks have been at the heart of the field of computer networks, they have not been satisfactorily addressed by network information theory, either because of their incompatibility with the basic asymptotic approach of information theory or because the resulting models are messy and intractable. There have been several success stories at the intersection of networking and network information theory, however. In this chapter we discuss three representative examples.
We first consider the channel coding problem for a DMC with random data arrival. We show that reliable communication is feasible provided that the data arrival rate is less than the channel capacity. Similar results can be established for multiuser channels and multiple data streams. A key new ingredient in this study is the notion of queue stability.
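As a toy illustration of queue stability (my own sketch, assuming Poisson packet arrivals and a fixed service rate, neither of which is specified above): the queue remains bounded when the arrival rate is below the service rate and grows without bound otherwise.

    import numpy as np

    def mean_queue_length(arrival_rate, service_rate, T=100_000, seed=0):
        """Average backlog of a discrete-time single-server queue."""
        rng = np.random.default_rng(seed)
        q, total = 0.0, 0.0
        for _ in range(T):
            q = max(q - service_rate, 0.0) + rng.poisson(arrival_rate)
            total += q
        return total / T

    print(mean_queue_length(0.8, 1.0))   # stable: small, bounded average backlog
    print(mean_queue_length(1.2, 1.0))   # unstable: backlog grows linearly with time

In the channel coding setting, the service rate plays the role of the channel capacity.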
This chapter is primarily concerned with algorithms for efficient computation of the Discrete Fourier Transform (DFT). This topic is important because the DFT plays a central role in the analysis, design, and implementation of many digital signal processing systems. Direct computation of the N-point DFT requires computational cost proportional to N². The most important class of efficient DFT algorithms, known collectively as Fast Fourier Transform (FFT) algorithms, computes all DFT coefficients as a “block” with computational cost proportional to N log₂ N. However, when we only need a few DFT coefficients, a few samples of the DTFT, or a few values of the z-transform, it may be more efficient to use algorithms based on linear filtering operations, like the Goertzel algorithm or the chirp z-transform algorithm.
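As an illustration of the linear-filtering approach, here is a minimal Goertzel algorithm in Python (a sketch of mine, not code from the text). It computes a single DFT coefficient X[k] with one second-order recursion pass over the data, which is cheaper than a full FFT when only a few coefficients are needed:

    import numpy as np

    def goertzel(x, k):
        """Single DFT coefficient X[k] via the Goertzel second-order recursion."""
        N = len(x)
        w = 2.0 * np.pi * k / N
        coeff = 2.0 * np.cos(w)
        s_prev, s_prev2 = 0.0, 0.0
        for sample in x:                  # one multiply-accumulate update per sample
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return np.exp(1j * w) * s_prev - s_prev2

    x = np.random.default_rng(0).standard_normal(64)
    assert np.allclose(goertzel(x, 5), np.fft.fft(x)[5])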
Although many computational environments provide FFT algorithms as built-in functions, the user should understand the fundamental principles of FFT algorithms to make effective use of these functions. The details of FFT algorithms are important to designers of real-time DSP systems in either software or hardware.
Study objectives
After studying this chapter you should be able to:
Understand the derivation, operation, programming, and use of decimation-in-time and decimation-in-frequency radix-2 FFT algorithms (a sketch of the decimation-in-time recursion follows this list).
Understand the general principles underlying the development of FFT algorithms and use them to make effective use of existing functions, evaluate competing algorithms, or guide the selection of algorithms for a particular application or computer architecture.
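As a sketch of the decimation-in-time recursion referred to in the first objective (an illustrative implementation, not the book's code): split the input into its even- and odd-indexed halves, compute the two half-length DFTs, and combine them using the twiddle factors e^{-j2πk/N}:

    import numpy as np

    def fft_dit(x):
        """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
        N = len(x)
        if N == 1:
            return np.asarray(x, dtype=complex)
        E = fft_dit(x[0::2])              # N/2-point DFT of the even-indexed samples
        O = fft_dit(x[1::2])              # N/2-point DFT of the odd-indexed samples
        W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
        return np.concatenate([E + W * O, E - W * O])

    x = np.random.default_rng(1).standard_normal(256)
    assert np.allclose(fft_dit(x), np.fft.fft(x))

Each level of the recursion does work proportional to N and there are log₂ N levels, which is where the N log₂ N cost comes from.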
In theory, all signal samples, filter coefficients, twiddle factors, other quantities, and the results of any computations can assume any value, that is, they can be represented with infinite accuracy. In practice, however, any number must be represented in a digital computer or other digital hardware using a finite number of binary digits (bits), that is, with finite accuracy. In most applications, where we use personal computers or workstations with floating-point arithmetic processing units, numerical precision is not an issue. However, in analog-to-digital converters, digital-to-analog converters, and digital signal processors that use fixed-point number representations, the use of finite wordlength may introduce unacceptable errors. Finite wordlength effects are caused by nonlinear operations and are very difficult, if not impossible, to analyze exactly. Thus, the most effective approach to analyzing finite wordlength effects is to simulate a specific filter and evaluate its performance. Another approach is to use statistical techniques to derive approximate results, which can be used to make educated decisions in the design of A/D converters, D/A converters, and digital filters. In this chapter we discuss several topics related to the effects of finite wordlength in digital signal processing systems.
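As a quick instance of the simulation-based approach described above (my own sketch, assuming a full-scale sinusoidal test input): uniformly quantize the signal to B bits and compare the measured signal-to-quantization-noise ratio with the familiar statistical prediction of approximately 6.02B + 1.76 dB.

    import numpy as np

    def sqnr_db(B, N=100_000):
        """Measured SQNR for a full-scale sinusoid rounded to B bits."""
        n = np.arange(N)
        x = np.sin(2 * np.pi * 0.12345 * n)      # test signal in [-1, 1]
        step = 2.0 / 2**B                        # quantizer step for the range [-1, 1)
        xq = np.clip(np.round(x / step) * step, -1.0, 1.0 - step)
        noise = x - xq
        return 10 * np.log10(np.mean(x**2) / np.mean(noise**2))

    for B in (8, 12, 16):
        print(B, round(sqnr_db(B), 1), round(6.02 * B + 1.76, 1))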
Study objectives
After studying this chapter you should be able to:
Understand the implications of binary fixed-point and floating-point representation of numbers for signal representation and DSP arithmetic operations.
Understand how to use a statistical quantization model to analyze the operation of A/D and D/A converters incorporating oversampling and noise shaping.
To use stochastic process models in practical signal processing applications, we need to estimate their parameters from data. In the first part of this chapter we introduce some basic concepts and techniques from estimation theory and then we use them to estimate the mean, variance, ACRS, and PSD of a stationary random process model. In the second part, we discuss the design of optimum filters for detection of signals with known shape in the presence of additive noise (matched filters), optimum filters for estimation of signals corrupted by additive noise (Wiener filters), and finite memory linear predictors for signal modeling and spectral estimation applications. We conclude with a discussion of the Karhunen–Loève transform, which is an optimum finite orthogonal transform for representation of random signals.
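As a small numerical illustration of the first part (my own sketch, using an assumed first-order autoregressive process as the data source): estimate the mean and the first few ACRS values from a finite record and compare them with the theoretical autocorrelation r[l] = a^|l|/(1 − a²) of an AR(1) process driven by unit-variance white noise.

    import numpy as np

    rng = np.random.default_rng(0)
    a, N = 0.8, 200_000
    w = rng.standard_normal(N)
    x = np.zeros(N)
    for n in range(1, N):                    # AR(1): x[n] = a*x[n-1] + w[n]
        x[n] = a * x[n - 1] + w[n]

    mean_hat = x.mean()                      # sample-mean estimator
    # Biased ACRS estimator: r_hat[l] = (1/N) * sum_n x[n] * x[n+l]
    r_hat = np.array([np.dot(x[:N - l], x[l:]) / N for l in range(5)])
    r_true = a**np.arange(5) / (1 - a**2)
    print(round(mean_hat, 4))                # close to the true mean of 0
    print(np.round(r_hat, 3), np.round(r_true, 3))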
Study objectives
After studying this chapter you should be able to:
Compute estimates of the mean, variance, and covariance of random variables from a finite number of observations (data) and assess their quality based on the bias and variance of the estimators used.
Estimate the mean, variance, ACRS sequence, and PSD function of a stationary process from a finite data set by properly choosing the estimator parameters to achieve the desired quality in terms of bias–variance trade-offs.
Design FIR matched filters for detection of known signals corrupted by additive random noise, FIR Wiener filters that minimize the mean squared error between the output signal and a desired response, and finite memory linear predictors that minimize the mean squared prediction error.
The term “filter” is used for LTI systems that alter their input signals in a prescribed way. Frequency-selective filters, the subject of this chapter, are designed to pass a set of desired frequency components from a mixture of desired and undesired components or to shape the spectrum of the input signal in a desired way. In this case, the filter design specifications are given in the frequency domain by a desired frequency response. The filter design problem consists of finding a practically realizable filter whose frequency response best approximates the desired ideal magnitude and phase responses within specified tolerances.
The design of FIR filters requires finding a polynomial frequency response function that best approximates the design specifications; in contrast, the design of IIR filters requires a rational approximating function. Thus, the algorithms used to design FIR filters are different from those used to design IIR filters. In this chapter we concentrate on FIR filter design techniques while in Chapter 11 we discuss IIR filter design techniques. The design of FIR filters is typically performed either directly in the discrete-time domain using the windowing method or in the frequency domain using the frequency sampling method and the optimum Chebyshev approximation method via the Parks–McClellan algorithm.
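A minimal sketch of the windowing method (illustrative, with assumed design parameters): truncate the ideal lowpass impulse response sin(ω_c n)/(πn) to a finite length, apply a Hamming window, and shift to make the filter causal.

    import numpy as np

    def fir_lowpass_hamming(M, wc):
        """Length-(M+1) linear-phase lowpass FIR filter via the window method."""
        n = np.arange(M + 1) - M / 2                       # center the ideal response
        h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)   # sin(wc*n)/(pi*n)
        return h_ideal * np.hamming(M + 1)

    h = fir_lowpass_hamming(M=50, wc=0.4 * np.pi)
    # Evaluate the magnitude response on a dense frequency grid.
    w = np.linspace(0, np.pi, 512)
    H = np.abs(np.exp(-1j * np.outer(w, np.arange(h.size))) @ h)
    print(H[w < 0.3 * np.pi].min(), H[w > 0.5 * np.pi].max())  # ~1 in passband, ~0 in stopband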
Study objectives
After studying this chapter you should be able to:
Understand how to set up specifications for design of discrete-time filters.
Understand the conditions required to ensure linear phase in FIR filters and how to use them to design FIR filters by specifying their magnitude response. […]
This chapter is primarily concerned with the definition, properties, and applications of the Discrete Fourier Transform (DFT). The DFT provides a unique representation using N coefficients for any sequence of N consecutive samples. The DFT coefficients are related to the DTFS coefficients or to equally spaced samples of the DTFT of the underlying sequences. As a result of these relationships and the existence of efficient algorithms for its computation, the DFT plays a central role in spectral analysis, the implementation of digital filters, and a variety of other signal processing applications.
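For reference, the standard analysis and synthesis equations behind this discussion, written in LaTeX:

    \[
    X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1,
    \]
    \[
    x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, e^{\, j 2\pi k n / N}, \qquad n = 0, 1, \ldots, N-1.
    \]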
Study objectives
After studying this chapter you should be able to:
Understand the meaning and basic properties of the DFT and how to use the DFT to compute the DTFS, DTFT, CTFS, and CTFT transforms.
Understand how to obtain the DFT by sampling the DTFT and the implications of this operation on how accurately the DFT approximates the DTFT and other transforms.
Understand the symmetry and operational properties of the DFT and how to use the property of circular convolution for the computation of linear convolution (see the sketch after this list).
Understand how to use the DFT to compute the spectrum of continuous-time signals and how to compensate for the effects of windowing the signal to finite-length using the proper window.
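Here is the sketch promised above (an illustration of mine): zero-padding both sequences to length at least L₁ + L₂ − 1 makes the circular convolution computed through the DFT agree with linear convolution.

    import numpy as np

    def fft_linear_convolve(x, h):
        """Linear convolution via zero-padded DFTs (circular convolution property)."""
        N = len(x) + len(h) - 1          # long enough to prevent circular wrap-around
        return np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))

    rng = np.random.default_rng(2)
    x, h = rng.standard_normal(100), rng.standard_normal(31)
    assert np.allclose(fft_linear_convolve(x, h), np.convolve(x, h))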
Computational Fourier analysis
The basic premise of Fourier analysis is that any signal can be expressed as a linear superposition, that is, a sum or integral of sinusoidal signals.
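A classical instance of this premise: a unit-amplitude square wave with fundamental frequency ω₀ is the superposition of its odd harmonics,

    \[
    x(t) = \frac{4}{\pi} \sum_{k=0}^{\infty} \frac{\sin\big((2k+1)\,\omega_0 t\big)}{2k+1},
    \]

so truncating the sum to a few terms already produces a recognizable square wave, and adding more sinusoids sharpens the edges.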
As we discussed in Chapter 2, any LTI system can be implemented using three basic computational elements: adders, multipliers, and unit delays. For LTI systems with a rational system function, the relation between the input and output sequences satisfies a linear constant-coefficient difference equation. Such systems are practically realizable because they require a finite number of computational elements. In this chapter, we show that there is a large collection of difference equations corresponding to the same system function. Each set of equations describes the same input–output relation and provides an algorithm or structure for the implementation of the system. Alternative structures for the same system differ in computational complexity, memory requirements, and behavior under finite-precision arithmetic. We discuss the most widely used discrete-time structures and their implementation using Matlab; these include direct-form, transposed-form, cascade, parallel, frequency sampling, and lattice structures.
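As an illustration of one such structure (a sketch of mine, checked against scipy.signal.lfilter): the direct-form II realization shares a single delay line between the feedback (denominator) and feedforward (numerator) parts of the difference equation.

    import numpy as np
    from scipy.signal import lfilter

    def direct_form_ii(b, a, x):
        """Direct-form II: one shared delay line for the AR and MA parts."""
        b, a = np.asarray(b, float), np.asarray(a, float)
        b, a = b / a[0], a / a[0]                 # normalize so that a[0] = 1
        M = max(a.size, b.size)
        w = np.zeros(M)                           # delay line: w[n-1], w[n-2], ...
        y = np.zeros(len(x))
        for n, xn in enumerate(x):
            wn = xn - np.dot(a[1:], w[:a.size - 1])             # feedback part
            y[n] = b[0] * wn + np.dot(b[1:], w[:b.size - 1])    # feedforward part
            w[1:] = w[:-1]
            w[0] = wn                             # shift the shared delay line
        return y

    b, a = [0.2, 0.3, 0.2], [1.0, -0.5, 0.25]
    x = np.random.default_rng(3).standard_normal(50)
    assert np.allclose(direct_form_ii(b, a, x), lfilter(b, a, x))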
Study objectives
After studying this chapter you should be able to:
Develop and analyze practically useful structures for both FIR and IIR systems.
Understand the advantages and disadvantages of different filter structures and convert from one structure to another.
Implement a filter using a particular structure and understand how to simulate and verify the correct operation of that structure in Matlab.
Block diagrams and signal flow graphs
Every practically realizable LTI system can be described by a set of difference equations, which constitute a computational algorithm for its implementation.
This chapter is primarily concerned with the conversion of continuous-time signals into discrete-time signals using uniform or periodic sampling. The theory of sampling presented here provides the conditions under which a signal can be sampled and reconstructed from a sequence of its sample values; it turns out that a properly sampled bandlimited signal can be perfectly reconstructed from its samples. In practice, the numerical value of each sample is expressed by a finite number of bits, a process known as quantization. The error introduced by quantizing the sample values, known as quantization noise, is unavoidable. The major implication of sampling theory is that it makes possible the processing of continuous-time signals using discrete-time signal processing techniques.
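To make the perfect-reconstruction claim concrete, here is a small numerical sketch (my own illustration, with an assumed sinusoidal test signal sampled well below the Nyquist limit): reconstruct off-grid values with the ideal bandlimited interpolator x_c(t) = Σ_n x[n] sinc((t − nT)/T).

    import numpy as np

    T = 1.0 / 100.0                          # sampling interval (fs = 100 Hz)
    f0 = 13.0                                # sinusoid frequency, below fs/2 = 50 Hz
    xc = lambda t: np.cos(2 * np.pi * f0 * t + 0.4)

    n = np.arange(-500, 501)                 # long record approximating an infinite one
    samples = xc(n * T)

    t = np.linspace(-0.1, 0.1, 37)           # evaluation points between the samples
    x_rec = np.array([np.dot(samples, np.sinc((ti - n * T) / T)) for ti in t])
    print(np.max(np.abs(x_rec - xc(t))))     # small error, limited only by truncation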
Study objectives
After studying this chapter you should be able to:
Determine the spectrum of a discrete-time signal from that of the original continuous-time signal, and understand the conditions that allow perfect reconstruction of a continuous-time signal from its samples.
Understand how to process continuous-time signals by sampling, followed by discrete-time signal processing, and reconstruction of the resulting continuous-time signal.
Understand how practical limitations affect the sampling and reconstruction of continuous-time signals.
Apply the theory of sampling to continuous-time bandpass signals and two-dimensional image signals.
Ideal periodic sampling of continuous-time signals
In the most common form of sampling, known as periodic or uniform sampling, a sequence of samples x[n] is obtained from a continuous-time signal xc(t) by taking values at equally spaced points in time.
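In symbols (the standard relations, written in LaTeX): with sampling period T,

    \[
    x[n] = x_c(nT), \qquad n = 0, \pm 1, \pm 2, \ldots,
    \]

and the DTFT of the sample sequence is a scaled, periodically replicated version of the continuous-time spectrum,

    \[
    X(e^{j\omega}) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X_c\!\left( j\, \frac{\omega - 2\pi k}{T} \right),
    \]

which is why the replicas overlap (aliasing) unless the signal is bandlimited and sampled fast enough.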
In Chapter 2 we discussed the representation and analysis of LTI systems in the time domain using the convolution summation and difference equations. In Chapter 3 we developed a representation and analysis of LTI systems using the z-transform. In this chapter, we use the Fourier representation of signals in terms of complex exponentials and the pole–zero representation of the system function to characterize and analyze the effect of LTI systems on input signals. The fundamental tool is the frequency response function of a system and the close relationship of its shape to the locations of the poles and zeros of the system function. Although the emphasis is on discrete-time systems, the last section explains how the same concepts can be used to analyze continuous-time LTI systems.
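As a sketch of such a computation (an illustration of mine, checked against scipy.signal.freqz): for a system described by a linear constant-coefficient difference equation, evaluate H(e^{jω}) = B(e^{jω})/A(e^{jω}) by evaluating the numerator and denominator polynomials at points on the unit circle.

    import numpy as np
    from scipy.signal import freqz

    def freq_response(b, a, w):
        """H(e^{jw}) from difference-equation coefficients b and a."""
        z = np.exp(-1j * w)                 # points e^{-jw} on the unit circle
        num = np.polyval(b[::-1], z)        # B(e^{jw}) = sum_k b[k] e^{-jwk}
        den = np.polyval(a[::-1], z)        # A(e^{jw}) = sum_k a[k] e^{-jwk}
        return num / den

    b, a = [1.0, 0.5], [1.0, -0.8]
    w = np.linspace(0, np.pi, 8)
    w_ref, H_ref = freqz(b, a, worN=w)
    assert np.allclose(freq_response(b, a, w), H_ref)

The magnitude, phase, and group-delay responses then follow from np.abs, np.angle, and numerical differentiation of the unwrapped phase.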
Study objectives
After studying this chapter you should be able to:
Determine the steady-state response of LTI systems to sinusoidal, complex exponential, periodic, and aperiodic signals using the frequency response function.
Understand the effects of ideal and practical LTI systems upon the input signal in terms of the shape of magnitude, phase, and group-delay responses.
Understand how the locations of poles and zeros of the system function determine the shape of magnitude, phase, and group-delay responses of an LTI system.
Develop and use algorithms for the computation of magnitude, phase, and group-delay responses of LTI systems described by linear constant-coefficient difference equations. […]
In this chapter we are concerned with probability models for the mathematical description of random signals. We start with the fundamental concepts of random experiment, random variable, and statistical regularity, and we show how they lead to the concepts of probability, probability distributions, and averages, and to the development of probabilistic models for random signals. Then, we introduce the concept of a stationary random process as a model for random signals, and we explain how to characterize the average behavior of such processes using the autocorrelation sequence (time domain) and the power spectral density (frequency domain). Finally, we discuss the effect of LTI systems on the autocorrelation and power spectral density of stationary random processes.
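As a numerical sketch of the last point (my own illustration, with an assumed short FIR impulse response): pass white noise with variance σ² through an LTI system and compare the estimated output autocorrelation with the theoretical r_y[l] = σ² Σ_n h[n] h[n+l].

    import numpy as np

    rng = np.random.default_rng(4)
    h = np.array([1.0, 0.5, 0.25])           # impulse response of the LTI system
    sigma2 = 2.0                             # variance of the white input process
    x = np.sqrt(sigma2) * rng.standard_normal(500_000)
    y = np.convolve(x, h)[: x.size]          # stationary filtered output

    # Theory for a white input: r_y[l] = sigma2 * sum_n h[n] * h[n+l]
    r_theory = sigma2 * np.array([np.dot(h[: h.size - l], h[l:]) for l in range(3)])
    r_est = np.array([np.dot(y[: y.size - l], y[l:]) / y.size for l in range(3)])
    print(np.round(r_theory, 3), np.round(r_est, 3))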
Study objectives
After studying this chapter you should be able to:
Understand the concepts of randomness, random experiment, statistical variability, statistical regularity, random variable, probability distributions, and statistical averages like mean and variance.
Understand the concept of correlation between two random variables, its measurement by quantities like covariance and correlation coefficient, and the meaning of covariance in the context of estimating the value of one random variable using a linear function of the value of another random variable.
Understand the concept of a random process and the characterization of its average behavior by the autocorrelation sequence (time-domain) and power spectral density (frequency-domain), develop an insight into the processing of stationary processes by LTI systems, and be able to compute mean, autocorrelation, and power spectral density of the output sequence from that of the input sequence and the impulse response.