We start with perhaps the simplest variation of the NASCC problem, namely that of lossless information multicast. This problem has attracted much interest and study in the past several years, ignited by the original work on network coding [8]. We will not review the large and exciting body of work on lossless multicast in great detail in this book. However, we will review some of the basic concepts in order to place this book in its proper historical and comparative context. We refer the reader to an array of excellent new books on network coding for further reading [57, 58].
Network coding, the multicast scenario
In the notations of this book, the network information flow problem introduced by Ahlswede et al. in [8] can be defined, for the case of a single information source, with the following elements.
A directed graph G = (V, E) with node set V and edge set E ⊆ V × V.
A function R : E → ℝ+ that assigns a capacity R(e) to each link e ∈ E.
An information source I that generates information at a server node s ∈ S ⊆ V at a rate of h bits per time unit.
A set of sink nodes T ⊆ V that are interested in receiving this information.
The multicast demand h is said to be admissible under the capacity constraint function R, or equivalently (G, S, T, R, h) is said to be admissible, if there exists a coding scheme that achieves the multicast rate h while respecting the capacity constraint on every link e ∈ E.
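Admissibility has a clean characterization in this single-source setting: the main result of [8] shows that h is admissible if and only if h is no larger than the smallest max-flow from s to any sink in T. The sketch below (in Python, purely illustrative; the butterfly topology and node names are the standard textbook example, not taken from this chapter) checks this condition with an Edmonds-Karp max-flow computation.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow: repeatedly push flow along the shortest
    augmenting path, found by BFS on the residual graph."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            residual.setdefault(v, {}).setdefault(u, 0)   # reverse edges
    flow = 0
    while True:
        parent = {s: None}                 # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                    # no augmenting path left
        path, v = [], t                    # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][w] for u, w in path)
        for u, w in path:                  # update residual capacities
            residual[u][w] -= push
            residual[w][u] += push
        flow += push

# The classic butterfly network: every link has capacity 1.
cap = {
    's': {'a': 1, 'b': 1},
    'a': {'c': 1, 't1': 1},
    'b': {'c': 1, 't2': 1},
    'c': {'d': 1},
    'd': {'t1': 1, 't2': 1},
}
h = min(max_flow(cap, 's', t) for t in ('t1', 't2'))   # largest admissible rate
```

Both sink max-flows equal 2, so h = 2 is admissible, even though routing alone cannot deliver 2 units to both sinks simultaneously; coding at the bottleneck node c is what closes the gap.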
At first glance, block diagrams such as the communication system shown in Figure 2.13 probably appear complex and intimidating. There are so many different blocks and so many unfamiliar names and acronyms! Fortunately, all the blocks can be built from six simple elements:
signal generators such as oscillators, which create sine and cosine waves,
linear time-invariant filters, which augment or diminish the amplitude of particular frequencies or frequency ranges in a signal,
samplers, which change analog (continuous-time) signals into discrete-time signals,
static nonlinearities such as squarers and quantizers, which can add frequency content to a signal,
linear time-varying systems such as mixers that shift frequencies around in useful and understandable ways, and
adaptive elements, which track the desired values of parameters as they slowly change over time.
This section provides a brief overview of these six elements. In doing so, it also reviews some of the key ideas from signals and systems. Later chapters explore how the elements work, how they can be modified to accomplish particular tasks within the communication system, and how they can be combined to create a large variety of blocks such as those that appear in Figure 2.13.
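To make three of the six elements concrete, here is a small illustrative sketch (in Python rather than the book's Matlab; all rates and the quantizer step are arbitrary choices for the example): a signal generator produces a cosine, a sampler converts it to a lower-rate discrete-time signal, and a quantizer, a static nonlinearity, rounds each sample to a coarse grid.

```python
import math

FS_ANALOG = 10_000    # stand-in "analog" rate in Hz (an assumption of the sketch)
F_TONE = 100          # oscillator frequency in Hz

def oscillator(f, fs, n):
    """Signal generator: n samples of a unit-amplitude cosine at frequency f."""
    return [math.cos(2 * math.pi * f * k / fs) for k in range(n)]

def sampler(x, m):
    """Sampler: keep every m-th value (continuous-time proxy -> discrete time)."""
    return x[::m]

def quantizer(x, q):
    """Static nonlinearity: round each value to the nearest multiple of q."""
    return [q * round(v / q) for v in x]

wave = oscillator(F_TONE, FS_ANALOG, 200)   # two full cycles of the tone
samples = sampler(wave, 10)                 # 20 samples at an effective 1000 Hz
symbols = quantizer(samples, 0.5)           # coarse 0.5-step quantization
```

The quantizer illustrates the point made above: by forcing every value onto a five-level grid it adds frequency content that was not present in the original tone.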
The elements of a communication system have inputs and outputs; the element itself operates on its input signal to create its output signal. The signals that form the inputs and outputs are functions that represent the dependence of some variable of interest (such as a voltage, current, power, air pressure, temperature, etc.) on time.
Noise generally refers to unwanted or undesirable signals that disturb or interfere with the operation of a system. There are many sources of noise. In electrical systems, there may be coupling with the power lines, lightning, bursts of solar radiation, or thermal noise. Noise in a transmission system may arise from atmospheric disturbances, from other broadcasts that are not well shielded, and from unreliable clock pulses or inexact frequencies used to modulate signals.
Whatever the physical source, there are two very different kinds of noise: narrowband and broadband. Narrowband noise consists of a thin slice of frequencies. With luck, these frequencies will not overlap the frequencies that are crucial to the communication system. When they do not overlap, it is possible to build filters that reject the noise and pass only the signal, analogous to the filter designed in Section 7.2.3 to remove certain frequencies from the gong waveform. When running simulations or examining the behavior of a system in the presence of narrowband noise, it is common to model the narrowband noise as a sum of sinusoids.
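Such a sum-of-sinusoids model is easy to generate in simulation. The sketch below (Python; the center frequency, bandwidth, and tone count are arbitrary illustrative values) draws a handful of random frequencies from a thin band, gives each a random phase, and averages the resulting cosines.

```python
import math
import random

def narrowband_noise(f_center, f_spread, n_tones, fs, n, seed=0):
    """Model narrowband noise as an average of n_tones sinusoids whose
    frequencies all lie in the thin band f_center +/- f_spread."""
    rng = random.Random(seed)
    tones = [(rng.uniform(f_center - f_spread, f_center + f_spread),  # frequency
              rng.uniform(0, 2 * math.pi))                            # phase
             for _ in range(n_tones)]
    return [sum(math.cos(2 * math.pi * f * k / fs + ph)
                for f, ph in tones) / n_tones
            for k in range(n)]

# 8 tones within 50 Hz of 2 kHz, sampled at 10 kHz:
noise = narrowband_noise(f_center=2000, f_spread=50, n_tones=8,
                         fs=10_000, n=500)
```

Because all the energy sits near 2 kHz, a notch or bandstop filter centered there would reject this noise while leaving signals elsewhere in the band untouched.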
Broadband noise contains significant amounts of energy over a large range of frequencies. This is problematic because there is no obvious way to separate the parts of the noise that lie in the same frequency regions as the signals from the signals themselves.
In Chapter 9, the Basic Black Box Impairment Generator of Figure 9.22 (on page 187) was described as a routine that transforms a Matlab script specifying the operation of the transmitter into the (received) signal that appears at the input of the receiver. This appendix opens up the black box, shining light on the internal operation of the B3IG.
The B3IG is implemented in Matlab as the routine BigTransmitter.m, and it allows straightforward modeling of any (or all) of the possible impairments discussed throughout Software Receiver Design, including carrier-frequency offset, baud-timing offsets, and frequency-selective and time-varying channels, as well as channel noise. Since many of the impairments and nonidealities that arise in a communication system occur in the channel and RF front end, B3IG is more than a transmitter: it includes the communication channel and receiver RF front end as well. An overview of the B3IG is shown in Figure H.1.
The B3IG architecture expands on the simplified communication system of Chapter 9 and has more options than the M6 transmitter of Chapter 15. Some of the additional features are as follows.
Support for multiple users. The transmitter generates a signal that may contain information intended for more than one receiver.
The projects of Chapters 15 and 16 integrate all the fixes of the fourth step into the receiver structure of the third step to create fully functional digital receivers. The well-fabricated receiver is robust with respect to distortions such as those caused by noise, multipath interference, timing inaccuracies, and clock mismatches.
After building the components, testing them, assembling them into a receiver, and testing the full design, your receiver is ready. Congratulations. You have earned the degree of Master of Digital Radio. You are now ready to conquer the world!
The fundamental principles of telecommunications have remained much the same since Shannon's time. What has changed, and is continuing to change, is how those principles are deployed in technology. One of the major ongoing changes is the shift from hardware to software, and Software Receiver Design reflects this trend by focusing on the design of a digital software-defined radio that you will implement in Matlab.
“Radio” does not literally mean the AM/FM radio in your car; it represents any through-the-air transmission such as television, cell phone, or wireless computer data, though many of the same ideas are also relevant to wired systems such as modems, cable TV, and telephones. “Software-defined” means that key elements of the radio are implemented in software. Taking a “software-defined” approach mirrors the trend in modern receiver design in which more and more of the system is designed and built in reconfigurable software, rather than in fixed hardware. The fundamental concepts behind the transmission are introduced, demonstrated, and (we hope) understood through simulation. For example, when talking about how to translate the frequency of a signal, the procedures are presented mathematically in equations, pictorially in block diagrams, and then concretely as short Matlab programs.
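As a taste of that three-way presentation, the frequency-translation example can itself be sketched in a few lines (shown here in Python rather than the book's Matlab; the rates and frequencies are arbitrary choices): multiplying a 100 Hz tone by a 1000 Hz carrier produces energy at 900 and 1100 Hz, which a direct DFT probe confirms.

```python
import cmath
import math

FS, N = 8000, 800            # sample rate and record length (assumptions)
F_SIG, F_CARRIER = 100, 1000

t = [k / FS for k in range(N)]
signal = [math.cos(2 * math.pi * F_SIG * tk) for tk in t]
# Mixing = multiplication by the carrier; cos(a)cos(b) = (cos(a+b)+cos(a-b))/2
mixed = [s * math.cos(2 * math.pi * F_CARRIER * tk) for s, tk in zip(signal, t)]

def dft_mag(x, f, fs):
    """Magnitude of the DFT of x at frequency f (f must fall on a DFT bin)."""
    n = len(x)
    k = round(f * n / fs)
    return abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                   for j in range(n))) / n
```

The probe shows amplitude 0.25 at 900 Hz and at 1100 Hz (half of the tone's 0.5 at each shifted frequency) and essentially nothing left at the original 100 Hz.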
Our educational philosophy is that it is better to learn by doing: to motivate study with experiments, to reinforce mathematics with simulated examples, to integrate concepts by “playing” with the pieces of the system.
Software Receiver Design: Build Your Own Digital Communications System in Five Easy Steps is structured like a staircase with five simple steps. The first chapter presents a naive digital communications system, a sketch of the digital radio, as the first step. The second chapter ascends one step to fill in details and demystify various pieces of the design. Successive chapters then revisit the same ideas, each step adding depth and precision. The first functional (though idealized) receiver appears in Chapter 9. Then the idealizing assumptions are stripped away one by one throughout the remaining chapters, culminating in sophisticated receiver designs in the final chapters. Section 1.3 on page 12 outlines the five steps in the construction of the receiver and provides an overview of the order in which topics are discussed.
When the signal arrives at the receiver, it is a complicated analog waveform that must be sampled in order to eventually recover the transmitted message. The timing-offset experiments of Section 9.4.5 showed that one kind of “stuff” that can “happen” to the received signal is that the samples might inadvertently be taken at inopportune moments. When this happens, the “eye” becomes “closed” and the symbols are incorrectly decoded. Thus there needs to be a way to determine when to take the samples at the receiver. In accordance with the basic system architecture of Chapter 2, this chapter focuses on baseband methods of timing recovery (also called clock recovery). The problem is approached in a familiar way: find performance functions that have their maximum (or minimum) at the optimal point (i.e., at the correct sampling instants when the eye is open widest). These performance functions are then used to define adaptive elements that iteratively estimate the sampling times. As usual, all other aspects of the system are presumed to operate flawlessly: the up and down conversions are ideal, there are no interferers, and the channel is benign.
The discussion of timing recovery begins in Section 12.1 by showing how a sampled version of the received signal x[k] can be written as a function of the timing parameter τ, which dictates when to take samples. Section 12.2 gives examples that motivate several possible performance functions (each a function of x[k]), which lead to different methods of timing recovery.
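The flavor of such a performance function can be seen in a small simulation (a Python sketch under simplifying assumptions: an isolated triangular pulse shape, no noise, and a brute-force search over offsets in place of the book's adaptive, gradient-style iteration). The average output power at sampling offset τ peaks exactly when τ hits the pulse peak, i.e., when the eye is open widest.

```python
import random

M = 20                       # oversampling factor: M samples per symbol (assumption)
rng = random.Random(1)
symbols = [rng.choice([-1, 1]) for _ in range(200)]

# Triangular pulse of support M: peak 1 at index M//2, zero at the edges.
pulse = [1 - abs(i - M // 2) / (M // 2) for i in range(M)]

# Oversampled "received" signal: each symbol scales one pulse (no overlap here).
x = [s * pulse[i] for s in symbols for i in range(M)]

def J(tau):
    """Output-power performance function: average of x^2 over samples
    taken at offset tau within each symbol period."""
    samples = x[tau::M]
    return sum(v * v for v in samples) / len(samples)

best_tau = max(range(M), key=J)   # brute-force stand-in for an adaptive element
```

An adaptive timing-recovery element would instead climb J(τ) iteratively, nudging τ in the direction that increases the measured output power.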
The next two chapters provide more depth and detail by outlining a complete telecommunication system. When the transmitted signal is passed through the air using electromagnetic waves, it must take the form of a continuous (analog) waveform. A good way to understand such analog signals is via the Fourier transform, and this is reviewed briefly in Chapter 2. The six basic elements of the receiver will be familiar to many readers, and they are presented in Chapter 3 in a form that will be directly useful when creating Matlab implementations of the various parts of the communications system. By the end of the second step, the basic system architecture is fixed; the ordering of the blocks in the system diagram has stabilized.
When the message is digital, it must be converted into an analog signal in order to be transmitted. This conversion is done by the “transmit” or “pulse-shaping” filter, which changes each symbol in the digital message into a suitable analog pulse. After transmission, the “receive” filter assists in recapturing the digital values from the received pulses. This chapter focuses on the design and specification of these filters.
The symbols in the digital input sequence w(kT) are chosen from a finite set of values. For instance, they might be binary ±1, or they may take values from a larger set such as the four-level alphabet {±1, ±3}. As suggested in Figure 11.1, the sequence w(kT) is indexed by the integer k, and the data rate is one symbol every T seconds. Similarly, the output m(kT) assumes values from the same alphabet as w(kT) and at the same rate. Thus the message is fully specified at times kT for all integers k. But what happens between these times, between kT and (k + 1)T? The analog modulation of Chapter 5 operates continuously, and some value must be used to fill in the digital input between the samples. This is the job of the pulse-shaping filter: to turn a discrete-time sequence into an analog signal.
Each symbol w(kT) of the message initiates an analog pulse that is scaled by the value of the signal.
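In simulation, this amounts to placing one scaled copy of the pulse at each symbol instant, which is the same as upsampling followed by convolution with the pulse. A sketch (in Python rather than the book's Matlab; the Hamming-shaped pulse and the oversampling factor m = 8 are arbitrary choices, not the book's design):

```python
import math

def pulse_shape(symbols, pulse, m):
    """Place one copy of `pulse`, scaled by each symbol value, every m samples
    (equivalent to upsampling by m and convolving with the pulse)."""
    out = [0.0] * ((len(symbols) - 1) * m + len(pulse))
    for k, w in enumerate(symbols):        # each symbol w(kT) ...
        for i, p in enumerate(pulse):      # ... initiates one scaled pulse
            out[k * m + i] += w * p
    return out

m = 8  # samples per symbol period T (arbitrary oversampling factor)
# A Hamming-window "blip" standing in for a real pulse shape.
pulse = [0.54 - 0.46 * math.cos(2 * math.pi * i / (m - 1)) for i in range(m)]
analog = pulse_shape([1, -3, 3, -1], pulse, m)   # symbols from {±1, ±3}
```

With a pulse no longer than the symbol spacing, the scaled pulses do not overlap; longer pulses (such as the raised cosines of this chapter) do overlap, and must be designed so that the overlaps cancel at the sampling instants.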
Telecommunications technologies using electromagnetic transmission surround us: television images flicker, radios chatter, cell phones (and telephones) ring, allowing us to see and hear each other anywhere on the planet. E-mail and the Internet link us via our computers, and a large variety of common devices such as CDs, DVDs, and hard disks augment the traditional pencil and paper storage and transmittal of information. People have always wished to communicate over long distances: to speak with someone in another country, to watch a distant sporting event, to listen to music performed in another place or another time, to send and receive data remotely using a personal computer. In order to implement these desires, a signal (a sound wave, a signal from a TV camera, or a sequence of computer bits) needs to be encoded, stored, transmitted, received, and decoded. Why? Consider the problem of voice or music transmission. Sending sound directly is futile because sound waves dissipate very quickly in air. But if the sound is first transformed into electromagnetic waves, then they can be beamed over great distances very efficiently. Similarly, the TV signal and computer data can be transformed into electromagnetic waves.
Electromagnetic Transmission of Analog Waveforms
There are some experimental (physical) facts that cause transmission systems to be constructed as they are. First, for efficient wireless broadcasting of electromagnetic energy, an antenna needs to be longer than about 1/10 of the wavelength of the signal being transmitted.
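Since wavelength is λ = c/f, this rule of thumb is easy to work out numerically (a Python sketch of the arithmetic):

```python
C = 299_792_458.0  # speed of light in m/s

def min_antenna_length_m(f_hz):
    """Rule of thumb from the text: an efficient antenna should be at least
    about 1/10 of a wavelength long, where wavelength = C / f."""
    return (C / f_hz) / 10

am = min_antenna_length_m(1e6)     # 1 MHz AM broadcast: roughly 30 m
cell = min_antenna_length_m(1e9)   # 1 GHz signal: roughly 3 cm
```

This is why low-frequency broadcasts require antenna towers tens of meters tall, while gigahertz-range signals get by with antennas a few centimeters long.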
If every signal that went from here to there arrived at its intended receiver unchanged, the life of a communications engineer would be easy. Unfortunately, the path between here and there can be degraded in several ways, including multipath interference, changing (fading) channel gains, interference from other users, broadband noise, and narrowband interference.
This chapter begins by describing some of the funny things that can happen to signals, some of which are diagrammed in Figure 4.1. More important than locating the sources of the problems is fixing them. The received signal can be processed using linear filters to help reduce the interferences and to undo, to some extent, the effects of the degradations. The central question is how to specify filters that can successfully mitigate these problems, and answering this requires a fairly detailed understanding of filtering. Thus, a discussion of linear filters occupies the bulk of this chapter, which also provides a background for other uses of filters throughout the receiver, such as the lowpass filters used in the demodulators of Chapter 5, the pulse-shaping and matched filters of Chapter 11, and the equalizing filters of Chapter 13.
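As a preview of how even the simplest linear filter can suppress an interferer, consider a moving average (a Python sketch with arbitrary illustrative frequencies; averaging over exactly one period of a fast interferer cancels it while largely passing a slow tone):

```python
import math

def moving_average(x, n):
    """Simplest lowpass FIR filter: each output is the average of the most
    recent n inputs, which smooths fast wiggles and passes slow trends."""
    out = []
    for k in range(len(x)):
        window = x[max(0, k - n + 1):k + 1]
        out.append(sum(window) / len(window))
    return out

FS = 800
slow = [math.cos(2 * math.pi * 10 * k / FS) for k in range(400)]    # desired
fast = [math.cos(2 * math.pi * 100 * k / FS) for k in range(400)]   # interferer
noisy = [a + b for a, b in zip(slow, fast)]

# The 100 Hz interferer has a period of FS/100 = 8 samples; averaging over
# exactly 8 samples nulls it while attenuating the 10 Hz tone only slightly.
smoothed = moving_average(noisy, 8)
```

Real receiver filters are designed with far more control over which frequencies pass and which are rejected, but the principle is the same: shape the frequency response so the degradation falls in the stopband.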
When Bad Things Happen to Good Signals
The path from the transmitter to the receiver is not simple, as Figure 4.1 suggests. Before the signal reaches the receiver, it may be subject to a series of strange events that can corrupt the signal and degrade the functioning of the receiver.
The underlying purpose of any communication system is to transmit information. But what exactly is information? How is it measured? Are there limits to the amount of data that can be sent over a channel, even when all the parts of the system are operating at their best? This chapter addresses these fundamental questions using the ideas of Claude Shannon (1916–2001), who defined a measure of information in terms of bits. The number of bits per second that can be transmitted over the channel (taking into account its bandwidth, the power of the signal, and the noise) is called the bit rate, and can be used to define the capacity of the channel.
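For a bandlimited channel with additive white Gaussian noise, Shannon's formula gives the capacity as C = B log₂(1 + S/N) bits per second. A one-line sketch of the calculation (in Python; the telephone-grade numbers are a standard illustration, not taken from this chapter):

```python
import math

def capacity_bps(bandwidth_hz, snr):
    """Shannon capacity of an AWGN channel: C = B * log2(1 + S/N),
    where snr is a linear power ratio (not dB)."""
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz telephone-grade channel at 30 dB SNR (linear ratio 1000)
# supports roughly 30 kbits/s, no matter how clever the modem.
c = capacity_bps(3000, 1000)
```

Note how the bit rate grows only logarithmically with signal power: doubling the power buys far less capacity than doubling the bandwidth.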
Unfortunately, Shannon's results do not give a recipe for how to construct a system that achieves the optimal bit rate. Earlier chapters have highlighted several problems that can arise in communication systems (including synchronization errors and intersymbol interference). This chapter assumes that all of these are perfectly mitigated. Thus, in Figure 14.1, the inner parts of the communication system are assumed to be ideal, except for the presence of channel noise. Even so, most systems still fall far short of the optimal performance promised by Shannon.
There are two problems. First, most messages that people want to send are redundant, and the redundancy squanders the capacity of the channel. A solution is to preprocess the message so as to remove the redundancies.