This chapter covers digital information sources in some depth. It provides intuition on the information content of a digital source and introduces the notion of redundancy. As a simple but important example, discrete memoryless sources are described. The concept of entropy is defined as a measure of the information content of a digital information source. The properties of entropy are studied, and the source-coding theorem for a discrete memoryless source is given. In the second part of the chapter, practical data compression algorithms are studied. Specifically, Huffman coding, an optimal data-compression algorithm when the source statistics are known, is detailed, along with the Lempel–Ziv (LZ) and Lempel–Ziv–Welch (LZW) coding schemes, which are universal compression algorithms requiring no knowledge of the source statistics.
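As a minimal illustration of the Huffman procedure, the following Python sketch builds a binary code for an assumed dyadic source; the symbol names and probabilities are illustrative only.

```python
import heapq

def huffman_code(probs):
    """Build a binary Huffman code for symbol probabilities {symbol: p}."""
    # Each heap entry: (probability, tiebreak counter, {symbol: partial codeword})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least probable subtrees
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
# For dyadic probabilities the average codeword length equals the
# source entropy (1.75 bits here), as the source-coding theorem suggests.
```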
The basics of digital modulation over additive white Gaussian noise (AWGN) channels are studied. To facilitate a formal study, the concepts of signal space and signal constellations are introduced. The Gram–Schmidt orthonormalization procedure, a systematic method to obtain an orthogonal and normalized basis for a given set of signals, is described. Binary antipodal signaling is studied in detail; the maximum a posteriori (MAP) and maximum likelihood (ML) receivers are derived, and the average probability of error is computed. The concepts are then generalized to the case of M-ary signaling, and the union bound is introduced as a performance analysis tool. Correlation-type and matched filter-type receivers are described. The properties of the matched filter are summarized. Different signal constellations are compared in terms of their error rate performance through a simplified (asymptotic) analysis. As specific examples, the details of two important digital modulation schemes, pulse amplitude modulation and orthogonal signaling, are given. Finally, timing recovery techniques are briefly studied.
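The Gram–Schmidt procedure can be sketched over sampled signal vectors as follows (the signals and the tolerance for detecting linear dependence are illustrative assumptions):

```python
import numpy as np

def gram_schmidt(signals):
    """Return an orthonormal basis for the span of the given signal vectors."""
    basis = []
    for s in signals:
        r = s.astype(float).copy()
        for phi in basis:
            r -= np.dot(s, phi) * phi   # remove projection onto existing basis
        norm = np.linalg.norm(r)
        if norm > 1e-10:                # skip linearly dependent signals
            basis.append(r / norm)
    return basis

# Three sampled signals, only two of which are linearly independent
s1 = np.array([1.0, 1.0, 0.0, 0.0])
s2 = np.array([0.0, 1.0, 1.0, 0.0])
s3 = s1 + s2
basis = gram_schmidt([s1, s2, s3])
# The basis has dimension 2, and its vectors are orthonormal.
```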
Frequency-shift keying (FSK) is described as an alternative way of transmitting digital information. Specifically, orthogonal FSK with both coherent and non-coherent detection is studied. Minimum-shift keying is introduced as a special case of FSK, preserving phase continuity at the symbol boundaries. In addition, orthogonal frequency-division multiplexing (OFDM) is covered in some depth. It is shown that OFDM can be efficiently implemented using fast Fourier transform (FFT) and its inverse. The use of a cyclic prefix to avoid intersymbol interference over dispersive channels is also shown.
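The FFT-based implementation of OFDM and the role of the cyclic prefix over a dispersive channel can be illustrated with a short sketch; the subcarrier count, prefix length, and channel taps below are assumed values.

```python
import numpy as np

N, cp = 8, 3                                # subcarriers, cyclic-prefix length
rng = np.random.default_rng(0)
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK

tx = np.fft.ifft(symbols) * np.sqrt(N)      # OFDM modulation via inverse FFT
tx_cp = np.concatenate([tx[-cp:], tx])      # prepend cyclic prefix

h = np.array([1.0, 0.5, 0.25])              # dispersive channel impulse response
rx = np.convolve(tx_cp, h)[:cp + N]         # linear convolution with the channel

# Discarding the prefix turns linear convolution into circular convolution,
# so each subcarrier sees a single complex gain (no intersymbol interference).
rx_no_cp = rx[cp:cp + N]
Y = np.fft.fft(rx_no_cp) / np.sqrt(N)
H = np.fft.fft(h, N)                        # per-subcarrier channel gains
recovered = Y / H                           # one-tap equalization per subcarrier
```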
The fundamental limits of communication over a noisy channel, in particular, over an AWGN channel, are described, and channel coding is introduced as a way of approaching the ultimate information-theoretic limits of reliable communication. Linear block codes and convolutional codes are studied in some depth. Encoding and decoding algorithms, as well as basic performance analysis results, are developed. The Viterbi algorithm is introduced for both hard-decision decoding and soft-decision decoding of convolutional codes.
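Encoding and syndrome decoding of a linear block code can be sketched using the (7,4) Hamming code as an example; the message bits and error position below are illustrative.

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code in systematic form
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2

received = codeword.copy()
received[2] ^= 1                        # a single bit flipped by the channel

syndrome = received @ H.T % 2           # non-zero syndrome flags an error
# The syndrome equals the column of H at the error position, locating the flip
err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
corrected = received.copy()
corrected[err_pos] ^= 1                 # single-error correction
```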
This chapter first provides an overview of a general communication system and then shifts the focus to a digital communication system. It describes elements of a digital communication system and explains the functionalities of source coding, channel coding, and digital modulation blocks for communicating over a noisy channel. It also highlights the differences between analog and digital communication systems.
Digital transmission over bandlimited channels is studied. The concept of intersymbol interference (ISI) is described, and the Nyquist criterion for no ISI is derived. The raised cosine pulse, a widely used example of a practical communication pulse resulting in no ISI, is introduced. Both ideal and non-ideal bandlimited channels are considered. In addition, the power spectral density of digitally modulated signals is derived, and the spectral efficiencies of different digital modulation schemes are computed.
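The no-ISI property of the raised cosine pulse can be checked numerically: the pulse equals one at the origin and vanishes at all other symbol instants. The symbol period and roll-off factor below are assumed values.

```python
import numpy as np

def raised_cosine(t, T, beta):
    """Raised cosine pulse with symbol period T and roll-off 0 < beta <= 1.

    Expects an array of time instants t.
    """
    t = np.asarray(t, dtype=float)
    # Handle the removable singularity at |t| = T / (2 * beta)
    sing = np.isclose(np.abs(t), T / (2 * beta))
    safe = np.where(sing, 0.0, t)
    p = (np.sinc(safe / T) * np.cos(np.pi * beta * safe / T)
         / (1 - (2 * beta * safe / T) ** 2))
    p[sing] = (np.pi / 4) * np.sinc(1 / (2 * beta))
    return p

T, beta = 1.0, 0.35
samples = raised_cosine(np.arange(-5, 6) * T, T, beta)
# samples[5] (t = 0) equals 1; all other symbol-spaced samples vanish,
# which is exactly the Nyquist criterion for no ISI.
```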
The transmission of bandpass signals and the corresponding channel effects are introduced. Basic single-carrier bandpass modulation schemes – namely, bandpass pulse amplitude modulation, phase-shift keying, and quadrature amplitude modulation – are studied. Lowpass equivalents of bandpass signals are introduced, and the in-phase and quadrature components of a bandpass signal are described. It is shown that bandpass signals and systems can be studied through their lowpass equivalents. The π/4-QPSK and offset QPSK are presented as two practically motivated variations of quadrature phase-shift keying. Coherent, differentially coherent, and non-coherent receivers are described. Differential phase-shift keying is studied in some depth. Finally, carrier phase-synchronization methods, including the use of phase-locked loops, are described.
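The construction of a bandpass signal from its in-phase and quadrature components, and their recovery by coherent demodulation, can be sketched numerically; the carrier frequency, sample rate, component waveforms, and the moving-average lowpass filter are all illustrative choices.

```python
import numpy as np

fs, fc = 10_000.0, 1_000.0                  # sample rate and carrier frequency
t = np.arange(0, 0.1, 1 / fs)
# Slowly varying in-phase and quadrature components (the lowpass equivalent)
i_t = np.cos(2 * np.pi * 20 * t)
q_t = np.sin(2 * np.pi * 30 * t)

# Bandpass signal built from its I and Q components
s = i_t * np.cos(2 * np.pi * fc * t) - q_t * np.sin(2 * np.pi * fc * t)

def lowpass(x, n=50):
    """Crude moving-average lowpass filter (adequate for this illustration)."""
    return np.convolve(x, np.ones(n) / n, mode="same")

# Coherent demodulation: mix down with the carrier and lowpass filter;
# the double-frequency terms at 2*fc are removed by the filter.
i_hat = lowpass(2 * s * np.cos(2 * np.pi * fc * t))
q_hat = lowpass(-2 * s * np.sin(2 * np.pi * fc * t))
# Away from the window edges, i_hat and q_hat closely track i_t and q_t.
```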
Deterministic signals and linear time-invariant systems are studied. The Fourier transform is introduced, and its properties are reviewed. The concepts of probability and random variables are developed. Conditional probability is defined, and the total probability theorem and Bayes’ rule are given. Random variables are studied through their cumulative distribution functions and probability density functions, and statistical averages, including the mean and variance, are defined. These concepts are extended to random vectors. In addition, the concept of random processes is covered in depth. The autocorrelation function, stationarity, and power spectral density are studied, along with extensions to multiple random processes. Particular attention is paid to wide-sense stationary processes and their filtering by linear time-invariant systems, including the essential properties of the resulting autocorrelation function and power spectral density. Due to their significance in modeling noise in a communication system, Gaussian random processes are also covered.
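The effect of filtering a wide-sense stationary process can be illustrated empirically: for a white input, the output autocorrelation is determined entirely by the filter taps. The two-tap filter below is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(0.0, 1.0, n)            # white Gaussian noise, unit variance

h = np.array([0.5, 0.5])               # simple two-tap FIR filter
y = np.convolve(x, h, mode="valid")    # filtered process (still WSS)

# For a WSS input, R_y(k) = sum_m sum_l h[m] h[l] R_x(k + l - m).
# With white input (R_x(k) = delta(k)): R_y(0) = 0.5 and R_y(+-1) = 0.25.
r0 = np.mean(y * y)                    # empirical estimate of R_y(0)
r1 = np.mean(y[:-1] * y[1:])           # empirical estimate of R_y(1)
```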
Several issues in communication system design are highlighted. Specifically, the effects of transmission losses in a communication system and ways of addressing the related challenges are reviewed. A basic link budget analysis is performed. The use of non-ideal amplifiers to combat transmission losses is examined, and the resulting loss in the signal-to-noise ratio at the amplifier output is quantified. The use of analog and regenerative repeaters for transmission over long distances is explored. Furthermore, time-division, frequency-division, and code-division multiple-access techniques are described.
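A basic link budget of the kind described above can be sketched in dB units; all numbers below are illustrative, and free-space propagation is assumed.

```python
import math

# All parameter values are illustrative
tx_power_dbm = 30.0                     # 1 W transmit power
tx_gain_db, rx_gain_db = 15.0, 15.0     # antenna gains
freq_hz, dist_m = 2.4e9, 10_000.0       # carrier frequency and link distance
bandwidth_hz, noise_figure_db = 1e6, 5.0

# Free-space path loss: 20 log10(4 * pi * d * f / c)
fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)

# Received power: gains add and losses subtract in the dB domain
rx_power_dbm = tx_power_dbm + tx_gain_db + rx_gain_db - fspl_db

# Thermal noise floor: -174 dBm/Hz + 10 log10(B) + receiver noise figure
noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
snr_db = rx_power_dbm - noise_floor_dbm
```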
A brief coverage of amplitude modulation (AM) and angle modulation techniques is provided. The basic principles of conventional AM, double-sideband suppressed carrier AM, single-sideband AM, and vestigial sideband AM are described through both time-domain and frequency-domain techniques. Frequency and phase modulation are described, and their equivalence is argued. A comparison of different analog modulation techniques in terms of complexity, power, and bandwidth requirements is made. Conversion of analog signals into a digital form through sampling and quantization is studied. A proof of the sampling theorem is given. Scalar and vector quantizers are described. Uniform and non-uniform scalar quantizer designs are studied. The Lloyd–Max quantizer design algorithm is detailed. The amount of loss introduced by a quantizer is quantified by computing the mean square distortion and the resulting signal-to-quantization noise ratio. Pulse code modulation (PCM) as a waveform coding technique, along with its variants – including differential PCM and delta modulation – is also studied.
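The mean square distortion and signal-to-quantization-noise ratio (SQNR) of a scalar quantizer can be computed numerically; a midrise uniform quantizer and a full-scale uniformly distributed input are assumed in this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 500_000)    # full-scale uniformly distributed input

def uniform_quantize(x, bits, xmax=1.0):
    """Midrise uniform quantizer with 2**bits levels over [-xmax, xmax]."""
    delta = 2 * xmax / 2**bits
    return delta * (np.floor(x / delta) + 0.5)

sqnr_db = {}
for bits in (4, 8):
    xq = uniform_quantize(x, bits)
    mse = np.mean((x - xq) ** 2)       # mean square distortion, ~ delta^2 / 12
    sqnr_db[bits] = 10 * np.log10(np.mean(x**2) / mse)
# SQNR improves by roughly 6 dB per bit: about 24 dB at 4 bits, 48 dB at 8 bits.
```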