A waveform channel is a channel whose inputs are continuous functions of time. A passband channel is a waveform channel suitable for an input waveform that has a spectrum confined to an appropriately narrow interval of frequencies centered about a nonzero reference frequency, f0. A complex baseband channel is a waveform channel whose input waveform is a complex function of time that has a spectrum confined to an interval of frequencies containing the zero frequency. We shall see that every passband channel can be converted to or from a complex baseband channel by using standard techniques in the modulator and demodulator.
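To make the passband/complex-baseband correspondence concrete, here is a minimal numerical sketch of quadrature up- and down-conversion. The sample rate, carrier frequency, and lowpass filter are illustrative choices, not parameters from the text.

```python
# A minimal sketch of passband <-> complex baseband conversion (quadrature
# up/down-conversion); all parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000.0          # sample rate (Hz), illustrative
f0 = 10_000.0           # passband reference (carrier) frequency (Hz)
t = np.arange(0, 0.01, 1 / fs)

# Complex baseband waveform: a slow complex envelope (narrowband by design).
x = np.exp(2j * np.pi * 300.0 * t)

# Up-conversion: the passband waveform is the real part of x(t) e^{j 2 pi f0 t}.
s = np.real(x * np.exp(2j * np.pi * f0 * t))

# Down-conversion: multiply by e^{-j 2 pi f0 t}, then lowpass filter to remove
# the image at -2 f0. The factor 2 restores the envelope scale.
b, a = butter(5, 2000.0 / (fs / 2))          # lowpass, 2 kHz cutoff
x_hat = 2 * filtfilt(b, a, s * np.exp(-2j * np.pi * f0 * t))

# Away from the filter edge effects, the recovered envelope matches x(t).
print(np.max(np.abs(x_hat[200:-200] - x[200:-200])))
```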
The function of a digital modulator for a passband channel is to convert a digital datastream into a waveform representation of the data that can be accepted by the passband channel. The waveform from the modulator is designed to accommodate the spectral characteristics of the channel, to obtain high rates of data transmission, to minimize transmitted power, and to keep the bit error rate small.
A passband modulation waveform cannot be judged independently of the performance of the demodulator. To understand how a modem works, it is necessary to study both the passband modulation techniques of this chapter and the passband demodulation techniques of Chapter 6. The final test of a modem is the ability of the demodulator to recover the input datastream from the received signal in the presence of noise, interference, distortion, and other impairments.
We have studied in great detail the effect of additive gaussian noise in a linear system because of its fundamental importance. Usually the ultimate limit on the performance of a digital communication system is set by its performance in gaussian noise. For this and other reasons, the demodulators studied in Chapter 3 presume that the received waveform has been contaminated only by additive gaussian noise. However, there are other important disturbances that should be understood. The demodulators studied in Chapter 4 extend the methods to include intersymbol interference in the received waveform. While additive gaussian noise and intersymbol interference are the most important channel impairments, the demodulator designer must be wary of other impairments that may affect the received signal. The demodulator must not be so rigid in its structure that unexpected impairments cause an undue loss of performance. This chapter describes a variety of channel impairments and methods to make the demodulator robust so that the performance will not collapse if the channel model is imperfect.
Most of the impairments in a system arise for reasons that are not practical to control, and so the waveform must be designed to be tolerant of them. Such impairments include both interference and nonlinearities. Sometimes nonlinearities may be introduced intentionally into the front end of the receiver to achieve a known, desired outcome. We must then understand the nonlinearity in all its ramifications in order to anticipate undesirable side effects.
A waveform channel is a channel whose inputs are continuous functions of time. A baseband channel is a waveform channel suitable for an input waveform that has a spectrum confined to an interval of frequencies centered about the zero frequency. In this chapter, we shall study the design of waveforms and modulators for the baseband channel.
The function of a digital modulator is to convert a digital datastream into a waveform representation of the datastream that can be accepted by the waveform channel. The waveform formed by the modulator is designed to accommodate the spectral characteristics of the channel, to obtain high rates of data transmission, to minimize transmitted power, and to keep the bit error rate small.
A modulation waveform cannot be judged independently of the performance of the demodulator. To understand how a baseband communication system works, it is necessary to study both the baseband modulation techniques of this chapter and the baseband demodulation techniques of Chapter 3. The final test of a modem is the ability of the demodulator to recover the symbols of the input datastream from the channel output signal in the presence of noise, interference, distortion, and other impairments.
Baseband and passband channels
A waveform channel is a channel whose input is a continuous function of time, here denoted c(t), and whose output is another function of time, here denoted v(t).
Communication waveforms in which the received pulses, after filtering, are not Nyquist pulses cannot be optimally demodulated one symbol at a time. The pulses will overlap and the samples will interact. This interaction is called intersymbol interference. Rather than use a Nyquist pulse to prevent intersymbol interference, one may prefer to allow intersymbol interference to occur and to compensate for it in the demodulation process.
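As a small illustration of the zero-ISI property of a Nyquist pulse, the sketch below (with illustrative parameters) samples two pulses at the symbol instants kT: the sinc pulse is the classic Nyquist pulse and vanishes at every nonzero symbol instant, while a Gaussian pulse leaves nonzero tails there and so causes intersymbol interference.

```python
# A small sketch of the Nyquist zero-ISI property; parameters illustrative.
import numpy as np

T = 1.0                                  # symbol interval
k = np.arange(-5, 6)                     # symbol-spaced sample instants kT

def sinc_pulse(t, T):
    return np.sinc(t / T)                # numpy sinc is sin(pi x)/(pi x)

def gaussian_pulse(t, T):
    return np.exp(-(t / T) ** 2)         # NOT a Nyquist pulse

print(np.round(sinc_pulse(k * T, T), 3))      # 1 at k=0, 0 elsewhere: no ISI
print(np.round(gaussian_pulse(k * T, T), 3))  # nonzero tails: ISI at sampling
```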
In this chapter, we shall study ways to demodulate in the presence of intersymbol interference, ways to remove intersymbol interference, and in Chapter 9, ways to precode so that the intersymbol interference seems to disappear. We will start out in this chapter thinking of the interdependence in a sequence of symbols as undesirable, but once we have developed good methods for demodulating sequences with intersymbol interference, we will be comfortable in Chapter 9 with intentionally introducing some kinds of controlled symbol interdependence in order to improve performance.
The process of modifying a channel's response to obtain a required pulse shape is known as equalization. If the channel is not predictable, or changes slowly with time, the equalizer may be designed to adjust itself slowly by observing the channel output; in this case, it is called adaptive equalization.
This chapter studies such interacting symbol sequences, both unintentional and intentional. It begins with the study of intersymbol interference and ends with the subject of adaptive equalization.
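As a preview of adaptive equalization, here is a minimal least-mean-squares (LMS) equalizer sketch; the channel taps, step size, and training arrangement are illustrative assumptions, not taken from the text.

```python
# A minimal LMS adaptive equalizer sketch: the tap weights adjust
# themselves from the channel output and a known training sequence.
# Channel, step size, and lengths are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)      # BPSK training data
channel = np.array([1.0, 0.4, 0.2])               # assumed ISI channel
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.01 * rng.standard_normal(len(symbols))

num_taps, mu = 11, 0.01                           # equalizer length, step size
w = np.zeros(num_taps)
for n in range(num_taps, len(symbols)):
    x = received[n - num_taps:n][::-1]            # most recent samples first
    y = w @ x                                     # equalizer output
    e = symbols[n - num_taps // 2] - y            # error vs delayed training symbol
    w += mu * e * x                               # LMS tap update

print("final |error|:", abs(e))                   # small after convergence
```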
Adaptive signal processing (ASP) and iterative signal processing (ISP) are important techniques in improving receiver performance in communication systems. Using examples from practical transceiver designs, this 2006 book describes the fundamental theory and practical aspects of both methods, providing a link between the two where possible. The first two parts of the book deal with ASP and ISP respectively, each in the context of receiver design over intersymbol interference (ISI) channels. In the third part, the applications of ASP and ISP to receiver design in other interference-limited channels, including CDMA and MIMO, are considered; the author attempts to illustrate how the two techniques can be used to solve problems in channels that have inherent uncertainty. Containing illustrations and worked examples, this book is suitable for graduate students and researchers in electrical engineering, as well as practitioners in the telecommunications industry.
In Chapters 2–6 we introduced turbo, LDPC and RA codes and their iterative decoding algorithms. Simulation results show that these codes can perform extremely close to Shannon's capacity limit with practical implementation complexity. In this chapter we analyze the performance of iterative decoders, determine how close they can in fact get to the Shannon limit and consider how the design of the codes will impact on this performance.
Ideally, for a given code and decoder we would like to know for which channel noise levels the message-passing decoder will be able to correct the errors and for which it will not. Unfortunately this is still an open problem. Instead, we will consider the set, or ensemble, of all possible codes with certain parameters (for example, a certain degree distribution) rather than a particular choice of code having those parameters.
For example, a turbo code ensemble is defined by its component encoders and consists of the set of codes generated by all possible interleaver permutations while an LDPC code ensemble is specified by the degree distribution of the Tanner graph nodes and consists of the set of codes generated by all possible permutations of the Tanner graph edges.
When very long codes are considered, the extrinsic LLRs passed between the component decoders can be assumed to be independent and identically distributed. Under this assumption the expected iterative decoding performance of a particular ensemble can be determined by tracking the evolution of the probability density functions of these LLRs through the iterative decoding process, a technique called density evolution.
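As a concrete instance, for LDPC ensembles on the binary erasure channel the message densities collapse to a single erasure probability per iteration, so density evolution becomes a one-dimensional recursion (a standard specialization, sketched below with illustrative parameters for the regular (3,6) ensemble).

```python
# A minimal density-evolution sketch for a regular (dv, dc) LDPC ensemble
# on the binary erasure channel; the "density" is one erasure probability.

def de_converges(eps, dv=3, dc=6, iters=1000, tol=1e-10):
    x = eps                                   # erasure prob. of initial messages
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True                       # erasures decay to zero
    return False

# Bisect for the ensemble threshold: the largest channel erasure probability
# for which decoding succeeds in the infinite-length limit.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)
print("(3,6) BEC threshold ~", round(lo, 4))  # known value is about 0.4294
```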
In this chapter we introduce convolutional codes, the building blocks of turbo codes. Our starting point is to introduce convolutional encoders and their trellis representation. Then we consider the decoding of convolutional codes using the BCJR algorithm for the computation of maximum a posteriori message probabilities and the Viterbi algorithm for finding the maximum likelihood (ML) codeword. Our aim is to enable the presentation of turbo codes in the following chapter, so this chapter is by no means a thorough consideration of convolutional codes – we shall only present material directly relevant to turbo codes.
Convolutional encoders
Unlike a block code, which acts on the message in finite-length blocks, a convolutional code acts like a finite-state machine, taking in a continuous stream of message bits and producing a continuous stream of output bits. The convolutional encoder has a memory of the past inputs, which is held in the encoder state. The output depends on the value of this state, as well as on the present message bits at the input, but is completely unaffected by any subsequent message bits. Thus the encoder can begin encoding and transmission before it has the entire message. This differs from block codes, where the encoder must wait for the entire message before encoding.
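A minimal sketch of this finite-state behavior, using the standard rate-1/2, memory-2 feedforward encoder with octal generators (7, 5), i.e. g1 = 1 + D + D^2 and g2 = 1 + D^2 (a common textbook example, not necessarily the encoder used in this chapter):

```python
# A rate-1/2, memory-2 convolutional encoder as a finite-state machine.
# The state holds the two previous input bits; each input bit produces
# two output bits that depend only on the input and the state.

def encode(message_bits):
    s1 = s2 = 0                       # encoder state: the two previous inputs
    out = []
    for u in message_bits:
        out.append(u ^ s1 ^ s2)       # first output bit  (generator 111)
        out.append(u ^ s2)            # second output bit (generator 101)
        s1, s2 = u, s1                # shift the state register
    return out

print(encode([1, 0, 1, 1]))           # two output bits per message bit
```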
When discussing convolutional codes it is convenient to use time to mark the progression of input bits through the encoder.
The idea of concatenating two or more error correction codes in series in order to improve the overall decoding performance of a system was introduced by Forney in 1966. Applying random-like interleaving and iterative decoding to these codes gives a whole new class of turbo-like codes that straddle the gap between parallel concatenated turbo codes and LDPC codes.
Concatenating two convolutional codes in series gives serially concatenated convolutional codes (SC turbo codes). We arrive at turbo block codes by concatenating two block codes and at repeat–accumulate codes by concatenating a repetition code and a convolutional (accumulator) code.
This chapter will convey basic information about the encoding, decoding and design of serially concatenated (SC) turbo codes. Most of what we need for SC turbo codes has already been presented in Chapter 4. The turbo encoder uses two convolutional encoders, from Section 4.2, while the turbo decoder uses two copies of the log BCJR decoder from Section 4.3. The section on design principles will refer to information presented in Chapter 5 and the discussion of repeat–accumulate codes will use concepts presented in Chapter 2. A deeper understanding of SC turbo codes and their decoding process is developed in Chapters 7 and 8.
Serial concatenation
The first serial concatenation schemes concatenated a high-rate block code with a short convolutional code. The first code, called the outer code, encoded the source message and passed the resulting codeword to the second code, called the inner code, which re-encoded it to obtain the final codeword to be transmitted.
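As a concrete instance of this outer/inner structure, here is a minimal repeat–accumulate encoder sketch in the spirit of the RA construction mentioned above: a repetition code as the outer code, a random-like interleaver, and an accumulator (rate-1 recursive code) as the inner code. The repetition factor and interleaver seed are illustrative choices.

```python
# A minimal repeat-accumulate (RA) sketch of serial concatenation.
import numpy as np

rng = np.random.default_rng(1)

def ra_encode(message_bits, q=3):
    outer = np.repeat(message_bits, q)             # outer: repeat each bit q times
    pi = rng.permutation(len(outer))               # random-like interleaver
    interleaved = outer[pi]
    return np.bitwise_xor.accumulate(interleaved)  # inner: accumulator (running XOR)

msg = np.array([1, 0, 1, 1])
print(ra_encode(msg))                              # N = qK bits, code rate 1/q
```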
In the previous chapter we analyzed the performance of iterative codes by calculating their threshold and thus comparing their performance in high-noise channels with the channel's capacity. In that analysis we considered the iterative decoding of code ensembles with given component codes, averaging over all possible interleaver–edge permutations. In this chapter we will also use the concept of code ensembles but will turn our focus to low-noise channels and consider the error floor performance of iterative code ensembles. Except for the special case of the binary erasure channel, our analysis will consider the properties of the codes independently of their respective iterative decoding algorithms. In fact, we will assume maximum likelihood (ML) decoding, for which the performance of a code depends only on its codeword weight distribution. Using ML analysis we can
• demonstrate the source of the interleaver gain for iterative codes,
• show why recursive encoders are so important for concatenated codes, and
• show how the error floor performance of iterative codes depends on the chosen component codes.
Lastly, for the special case of the binary erasure channel we will use the concept of stopping sets to analyze the finite-length performance of LDPC ensembles and message-passing decoding.
Maximum likelihood analysis
Although ML decoding of the long, pseudo-random codes designed for iterative decoding is impractical, the ML decoder is the best possible decoder (assuming equiprobable source symbols) and so provides an upper bound on the performance of iterative decoders.
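For reference, a standard form of this bound for ML decoding, assuming BPSK signaling on an AWGN channel, makes the dependence on the codeword weight distribution explicit:

```latex
% Union bound on the ML word-error probability of a rate r = K/N code
% with weight distribution A_w (the number of codewords of weight w);
% Q is the Gaussian tail function and d_min the minimum codeword weight.
P_e \;\le\; \sum_{w = d_{\min}}^{N} A_w \, Q\!\left(\sqrt{\frac{2\, w\, r\, E_b}{N_0}}\right)
```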
The construction of binary low-density parity-check (LDPC) codes simply involves replacing a small number of the values in an all-zeros matrix by 1s in such a way that the rows and columns have the required degree distribution. In many cases, randomly allocating the entries in H will produce a reasonable LDPC code. However, the construction of H can affect the performance of the sum–product decoder, significantly so for some codes, and also the implementation complexity of the code.
While there is no one recipe for a “good” LDPC code, there are a number of principles that inform the code designer. The first obvious decisions are which degree distribution to choose and how to construct the matrix with the chosen degrees, i.e. pseudo-randomly or with some sort of structure. Whichever construction is chosen, the features to consider include the girth of the Tanner graph and the minimum distance of the code.
In this chapter we will discuss those properties of an LDPC code that affect its iterative decoding performance and then present the common construction methods used to produce codes with the preferred properties. Following common practice in the field we will call the selection of the degree distributions for an LDPC code "code design", and the methods for assigning the locations of the 1 entries in the parity-check matrix "code construction".
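As an illustration of pseudo-random construction, the sketch below builds a regular parity-check matrix by randomly permuting "edge sockets". It is a deliberately simplified version of the standard permutation construction: repeated edges are not removed, as a real construction would do, so the realized degrees can fall slightly short.

```python
# A minimal sketch of pseudo-random construction of a regular LDPC
# parity-check matrix H: place 1s so that every column has weight wc and
# every row has weight wr, via a random permutation of edge sockets.
import numpy as np

def regular_ldpc(n, wc, wr, rng):
    m = n * wc // wr                            # number of parity checks
    assert n * wc == m * wr, "degrees must be consistent"
    col_sockets = np.repeat(np.arange(n), wc)   # wc sockets per column
    row_sockets = np.repeat(np.arange(m), wr)   # wr sockets per row
    rng.shuffle(row_sockets)                    # random edge permutation
    H = np.zeros((m, n), dtype=int)
    H[row_sockets, col_sockets] = 1             # repeated edges collapse here
    return H

rng = np.random.default_rng(0)
H = regular_ldpc(n=12, wc=3, wr=6, rng=rng)
print(H.sum(axis=0))   # column weights (up to repeated-edge collisions)
print(H.sum(axis=1))   # row weights
```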
In this chapter we introduce our task: communicating a digital message without error (or with as few errors as possible) despite an imperfect communications medium. Figure 1.1 shows a typical communications system. In this text we will assume that our source is producing binary data, but it could equally be an analog source followed by analog-to-digital conversion.
Through the early 1940s, engineers designing the first digital communications systems, based on pulse code modulation, worked on the assumption that, while information could be transmitted usefully in digital form over noise-corrupted communication channels, the transmission would be unavoidably compromised by the noise. The effects of noise could be managed, it was believed, only by increasing the transmitted signal power enough to ensure that the received signal-to-noise ratio was sufficiently high.
Shannon's revolutionary 1948 work changed this view in a fundamental way, showing that it is possible to transmit digital data with arbitrarily high reliability, over noise-corrupted channels, by encoding the digital message with an error correction code prior to transmission and subsequently decoding it at the receiver. The error correction encoder maps each vector of K digits representing the message to a longer vector of N digits known as a codeword. The redundancy implicit in the transmission of codewords, rather than the raw data alone, is the quid pro quo for achieving reliable communication over intrinsically unreliable channels. The code rate r = K/N defines the amount of redundancy added by the error correction code.
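As a small worked example of the code rate, consider the standard (7,4) Hamming code (a classic illustration, not drawn from this text): it maps K = 4 message bits to N = 7 codeword bits, so r = 4/7.

```python
# Encoding with the (7,4) Hamming code: K = 4 message bits become an
# N = 7 codeword, giving code rate r = K/N = 4/7.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],    # generator matrix in systematic form
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

message = np.array([1, 0, 1, 1])         # K = 4 message bits
codeword = message @ G % 2               # N = 7 codeword bits
print(codeword, "rate r =", G.shape[0], "/", G.shape[1])
```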