Consider a linear system y = Φx, where Φ can be taken as an m × n matrix acting on Euclidean space or, more generally, a linear operator on a Hilbert space. We call the vector x a signal or input, Φ the transform (also called the sample matrix or filter), and the vector y the sample or output. The problem is to reconstruct x from y, or more generally, to reconstruct an altered version of x from an altered y. For example, we might analyze the signal x in terms of frequency components and various combinations of time and frequency components y. Once we have analyzed the signal, we may alter some of the component parts to eliminate undesirable features or to compress the signal for more efficient transmission and storage. Finally, we reconstitute the signal from its component parts.
The three typical steps in this process are:
Analysis. Decompose the signal into basic components. This is called analysis. We will think of the signal space as a vector space and break it up into a sum of subspaces, each of which captures a special feature of a signal.
Processing. Modify some of the basic components of the signal that were obtained through the analysis. This is called processing.
Synthesis. Reconstitute the signal from its (altered) component parts. This is called synthesis. Sometimes we will want perfect reconstruction; sometimes, only perfect reconstruction with high probability. If we don't alter the component parts, we usually want the synthesized signal to agree exactly with the original signal. […]
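The three steps above can be sketched in a few lines of code. This is a minimal illustration, assuming a one-level Haar-like decomposition into averages and differences; the function names and the thresholding rule are illustrative choices, not taken from the text:

```python
# Analysis: decompose an even-length signal into (average, difference)
# components -- a one-level Haar-like transform.
def analyze(x):
    return [((a + b) / 2, (a - b) / 2) for a, b in zip(x[::2], x[1::2])]

# Processing: zero out small difference components (a crude compression).
def process(components, threshold):
    return [(avg, d if abs(d) >= threshold else 0.0)
            for avg, d in components]

# Synthesis: reconstitute the signal from its (possibly altered) parts.
def synthesize(components):
    out = []
    for avg, d in components:
        out.extend([avg + d, avg - d])
    return out

# With no processing (threshold 0), we get perfect reconstruction.
x = [1.0, 2.0, 3.0, 4.0]
x_hat = synthesize(process(analyze(x), threshold=0.0))
```

With a positive threshold, the difference components are discarded and the synthesized signal is only an approximation of the original, illustrating the trade-off between compression and reconstruction fidelity.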
One way to approach the notion of probability is through the phenomenon of statistical regularity. There are many repeating situations in nature for which we can predict in advance, from previous experiences, roughly what will happen, but not exactly what will happen. We say in such cases that the occurrences are random. The reason that we cannot predict future events exactly may be that (i) we do not have enough data about the condition of the given problem, (ii) the laws governing a progression of events may be so complicated that we cannot undertake a detailed analysis, or possibly (iii) there is some basic indeterminacy in the physical world. Whatever the reason for the randomness, a definite average pattern of results may be observed in many situations leading to random occurrences when the situation is recreated a great number of times. For example, if a fair coin is flipped many times, it will turn up heads on about half of the flips.
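The coin-flip regularity can be checked with a short simulation (a hypothetical illustration; the sample size and the seed are arbitrary choices):

```python
import random

# Simulate many flips of a fair coin and observe statistical regularity:
# the fraction of heads settles near one half as the number of flips grows.
random.seed(0)
n = 100_000
heads = sum(1 for _ in range(n) if random.random() < 0.5)
fraction = heads / n  # close to 0.5 for large n
```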
Another example of randomness is the response time of a web (i.e., World Wide Web or WWW) access request you may send over the Internet in order to retrieve some information from a certain website. The amount of time you have to wait until you receive a response will not be precisely predictable, because the total round trip time depends on a number of factors.
Access networks connect business and residential subscribers to the central offices of service providers, which in turn are connected to metropolitan area networks (MANs) or wide area networks (WANs). Access networks are commonly referred to as the last mile or the first mile, whereby the latter term emphasizes their importance to subscribers. Future first-mile solutions not only have to meet the cost-sensitivity constraints of access networks, which arise from the small number of cost-sharing subscribers, but also have to provide an ever-increasing amount of capacity for emerging bandwidth-hungry multimedia applications such as video on demand (VoD), high-definition television (HDTV), digital cinema, split-screen video, and 3D online games. These new services and applications are expected to require data rates of up to 100 Mb/s per home, which cannot be provided by traditional narrowband access solutions, e.g., dial-up connections. To meet the bandwidth requirements of emerging and future video-dominated services and applications, legacy access networks have been replaced with broadband access networks over the last few years.
Definition
The term broadband is commonly used to refer to high-speed Internet access with data rates exceeding those of traditional dial-up Internet connections, which typically offer data rates of only 64 kb/s or below. More specifically, the Federal Communications Commission (FCC) used to define broadband service as data transmission speeds exceeding 200 kb/s in at least one direction, i.e., downstream (from the Internet to the subscriber's computer) or upstream (from the user's computer to the Internet).
Wireless fidelity (WiFi) has been envisioned by the WiFi Alliance as a single worldwide adopted standard for high-speed wireless local area networking. The term WiFi denotes wireless local area network (WLAN) technology based on the IEEE 802.11 specifications. In this chapter, we provide an overview of the salient features and most important specifications of legacy and next-generation WLANs.
Legacy WLAN
WLANs based on IEEE 802.11 have become very popular in providing different data services. Figure 6.1 shows the general WLAN architecture, where an access point (AP) is connected to the Internet and/or other WLANs through a wired network infrastructure, referred to as the distribution system (DS). In this architecture, wireless stations (STAs) communicate with their associated AP using the medium access control (MAC) protocols defined in the IEEE 802.11 specifications.
Due to the use of unlicensed frequency bands (2.4 GHz, with 14 distinct channels at 5 MHz spacing) in IEEE 802.11b/g, offering data rates of up to 11 and 54 Mb/s, respectively, WiFi networks have gained much attention (Kuran and Tugcu [2007]). During the last decade, various standards and/or amendments have been approved or initiated to enhance IEEE 802.11 based WiFi technology. Table 6.1 summarizes the IEEE 802.11 WiFi standard family.
The initial IEEE 802.11 physical (PHY) layer includes: (i) frequency hopping spread spectrum (FHSS), (ii) direct sequence spread spectrum (DSSS), and (iii) infrared (IR). IEEE 802.11b uses high-rate DSSS (HR-DSSS), while IEEE 802.11g deploys orthogonal frequency division multiplexing (OFDM). The IEEE 802.11 MAC layer deploys the distributed coordination function (DCF) as a default access technique.
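As a rough sketch of how the DCF resolves contention, the binary exponential backoff at its core can be illustrated as follows (the contention-window limits below follow common 802.11b values; the helper names are ours, not from the standard, and the model omits carrier sensing and interframe spaces):

```python
import random

# Binary exponential backoff as used by the 802.11 DCF (simplified).
CW_MIN, CW_MAX = 31, 1023   # contention-window limits (802.11b values)

def next_backoff(cw):
    # Pick a uniformly random number of idle slots to wait before sending.
    return random.randint(0, cw)

def on_collision(cw):
    # Double the contention window (capped) after a failed transmission.
    return min(2 * cw + 1, CW_MAX)

def on_success():
    # Reset the contention window after a successful transmission.
    return CW_MIN

# The window grows 31 -> 63 -> 127 -> ... -> 1023 under repeated collisions.
cw = CW_MIN
for _ in range(10):
    cw = on_collision(cw)
```

Widening the window after each collision spreads retransmission attempts over more slots, which reduces the chance of repeated collisions among contending stations.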
The world has become heavily dependent on oil through the widespread use of combustion engines in gasoline cars, resulting in climate change, massive transfers of wealth to oil-producing countries, and heightened geopolitical tensions. The advent of commercially available electric vehicles (EVs) by the end of 2010 is expected to be a fundamental game changer. Not only can electricity be produced in a number of environmentally friendly ways, e.g., by hydroelectric generators, wind farms, or solar arrays, but the electric engine is also significantly more efficient than the combustion engine of traditional gasoline cars or today's gasoline–electric hybrids (Davis [2010]). A promising example of using EVs for sustainable electric mobility (e-mobility) in urban areas is the “e-mobility Berlin” project, which deploys only green electricity from renewable sources to realize a user-friendly public charging infrastructure for plug-in EVs (PEVs). The emission-free PEVs may be shared following DAIMLER's “car-to-go” idea and allow environmental zones to be set up in cities, from whose restrictions environmentally friendly, emission-free PEVs are exempt (DAIMLER [2009]). Replacing gasoline vehicles with PEVs could reduce the importation of oil by up to 52% in the United States. Despite their huge potential to create new markets, revenues, and jobs, PEVs pose severe challenges to electric utility companies. A PEV being charged at home in the evening may more than double the average household electricity load and thereby dramatically exacerbate the load-profile imbalance of power grids between off-peak and on-peak hours (Ipakchi and Albuyeh [2009]).
Radio-over-fiber (RoF) networks have been studied for many years as an approach to integrating optical fiber and wireless networks. In RoF networks, radio frequencies (RFs) are carried over optical fiber links between a central station and multiple low-cost remote antenna units (RAUs) in support of a variety of wireless applications. For instance, a distributed antenna system connected to the base station of a microcellular radio system via optical fibers was proposed in (Chu and Gans [1991]). To efficiently support time-varying traffic between the central station and its attached base stations, a centralized dynamic channel assignment method is applied at the central station of the proposed fiber optic microcellular radio system. To avoid having to equip each radio port in a fiber optic microcellular radio network with a laser and its associated circuit to control the laser parameters such as temperature, output power, and linearity, a cost-effective radio port architecture deploying remote modulation may be used (Wu et al. [1994]).
Apart from realizing low-cost microcellular radio networks, optical fibers can also be used to support a wide variety of other radio signals. RoF networks are attractive since they provide transparency against modulation techniques and are able to support various digital formats and wireless standards in a cost-effective manner. It was experimentally demonstrated in (Tang et al. [2004]) that RoF networks are well suited to simultaneously transmit wideband code division multiple access (WCDMA), IEEE 802.11a/g wireless local area network (WLAN), personal handyphone system (PHS), and global system for mobile communications (GSM) signals.
The network models we studied so far involve only one-way (feedforward) communication. Many communication systems are inherently interactive, allowing for cooperation through feedback and information exchange over multiway channels. In this chapter, we study the role of feedback in communication and present results on the two-way channel introduced by Shannon as the first multiuser channel. The role of multiway interaction in compression and secure communication will be studied in Chapters 20 and 22, respectively.
As we showed in Section 3.1.1, the capacity of a memoryless point-to-point channel does not increase when noiseless causal feedback is present. Feedback can still benefit point-to-point communication, however, by simplifying coding and improving reliability. The idea is to first send the message uncoded and then to use feedback to iteratively reduce the receiver's error about the message, the error about the error, and so on. We demonstrate this iterative refinement paradigm via the Schalkwijk–Kailath coding scheme for the Gaussian channel and the Horstein and block feedback coding schemes for the binary symmetric channel. We show that the probability of error for the Schalkwijk–Kailath scheme decays double-exponentially in the block length, which is significantly faster than the single-exponential decay of the probability of error without feedback.
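The iterative-refinement idea can be made concrete with a toy simulation. This is a simplified sketch of the Schalkwijk–Kailath flavor of feedback coding, not the full scheme: we let the transmit gain grow each round in place of the power-normalized scaling, and we ignore the power constraint entirely.

```python
import random

# Toy simulation of iterative refinement over a Gaussian channel with
# noiseless feedback (simplified; gains and seed are arbitrary choices).
random.seed(1)
theta = 0.618          # the "message point" to be conveyed
noise_std = 1.0

# First channel use: send theta uncoded; the receiver's initial estimate
# is simply the noisy observation.
est = theta + random.gauss(0.0, noise_std)

# Later uses: noiseless feedback tells the transmitter the receiver's
# current error, which it sends amplified; the receiver corrects its
# estimate, so the residual error shrinks with the gain.
for k in range(1, 6):
    gain = 10.0 ** k
    err = est - theta                        # known via noiseless feedback
    y = gain * err + random.gauss(0.0, noise_std)
    est = est - y / gain                     # residual error ~ noise / gain
```

After each round the residual error is roughly the fresh channel noise divided by the gain, which is the mechanism behind the scheme's rapidly decaying error probability.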
We then show that feedback can enlarge the capacity region in multiuser channels. For the multiple access channel, feedback enlarges the capacity region by enabling statistical cooperation between the transmitters. We show that the capacity of the Gaussian MAC with feedback coincides with the outer bound obtained by allowing arbitrary (instead of product) joint input distributions. For the broadcast channel, feedback can enlarge the capacity region by enabling the sender to simultaneously refine both receivers’ knowledge about the messages. For the relay channel, we show that the cutset bound is achievable when noiseless causal feedback from the receiver to the relay is allowed. This is in contrast to the case without feedback in which the cutset bound is not achievable in general.
Finally, we discuss the two-way channel, where two nodes wish to exchange their messages interactively over a shared noisy channel. The capacity region of this channel is not known in general. We first establish simple inner and outer bounds on the capacity region.
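For reference, these bounds take the following standard form (as commonly stated in the literature): the inner bound restricts the two inputs to be independent, while the outer bound allows them to be arbitrarily dependent.

```latex
% Shannon's inner bound: (R_1, R_2) is achievable if
\begin{align*}
R_1 &< I(X_1; Y_2 \mid X_2), \\
R_2 &< I(X_2; Y_1 \mid X_1)
\quad \text{for some product pmf } p(x_1)\,p(x_2).
\end{align*}
% The outer bound has the same form, but with an arbitrary joint pmf p(x_1, x_2).
```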
We consider the problem of generating two descriptions of a source such that each description by itself can be used to reconstruct the source with some desired distortion and the two descriptions together can be used to reconstruct the source with a lower distortion. This problem is motivated by the need to efficiently communicate multimedia content over networks such as the Internet. Consider the following two scenarios:
• Path diversity: Suppose we wish to send a movie to a viewer over a network that suffers from data loss and delays. We can send multiple copies of the same description of the movie to the viewer via different paths in the network. Such replication, however, is inefficient and the viewer does not benefit from receiving more than one copy of the description. Multiple description coding provides a more efficient means to achieve such “path diversity.” We generate multiple descriptions of the movie, so that if the viewer receives only one of them, the movie can be reconstructed with some acceptable quality, and if the viewer receives two of them, the movie can be reconstructed with a higher quality and so on.
• Successive refinement: Suppose we wish to send a movie with different levels of quality to different viewers. We can send a separate description of the movie to each viewer. These descriptions, however, are likely to have significant overlaps. Successive refinement, which is a special case of multiple description coding, provides a more efficient way to distribute the movie. The idea is to send the lowest quality description and successive refinements of it (instead of additional full descriptions). Each viewer then uses the lowest quality description and some of the successive refinements to reconstruct the movie at her desired level of quality.
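As a toy illustration of the two-description idea (not an optimal coding scheme; the even/odd split and the helper names are ours), consider splitting a signal into its even- and odd-indexed samples:

```python
# Two "descriptions" of a signal: even- and odd-indexed samples.
def describe(x):
    return x[::2], x[1::2]

# From one description alone: a coarse reconstruction by sample repetition.
def reconstruct_one(desc):
    out = []
    for s in desc:
        out.extend([s, s])
    return out

# From both descriptions: exact reconstruction by interleaving.
def reconstruct_both(even, odd):
    out = []
    for a, b in zip(even, odd):
        out.extend([a, b])
    return out
```

Either description alone yields an acceptable (coarse) reconstruction, while both together recover the signal exactly, mirroring the path-diversity scenario above.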
The optimal scheme for generating multiple descriptions is not known in general. We present the El Gamal–Cover coding scheme for generating two descriptions that are individually good but still carry additional information about the source when combined together. The proof of achievability uses the multivariate covering lemma in Section 8.4. We show that this scheme is optimal for the quadratic Gaussian case. The key to the converse is the identification of a common-information random variable. We then present an improvement on the El Gamal–Cover scheme by Zhang and Berger that involves sending an additional common description.
We resume the discussion of broadcast channels started in Chapter 5. Again consider the 2-receiver DM-BC p(y1, y2|x) with private and common messages depicted in Figure 8.1. The definitions of a code, achievability, and capacity regions are the same as in Chapter 5. As mentioned before, the capacity region of the DM-BC is not known in general. In Chapter 5, we presented the superposition coding scheme and showed that it is optimal for several classes of channels in which one receiver is stronger than the other. In this chapter, we study coding schemes that can outperform superposition coding and present the tightest known inner and outer bounds on the capacity region of the general broadcast channel.
We first show that superposition coding is optimal for the 2-receiver DM-BC with degraded message sets, that is, when either R1 = 0 or R2 = 0. We then show that superposition coding is not optimal for BCs with more than two receivers. In particular, we establish the capacity region of the 3-receiver multilevel BC. The achievability proof involves the new idea of indirect decoding, whereby a receiver who wishes to recover only the common message still uses satellite codewords in decoding for the cloud center.
We then present Marton's inner bound on the private-message capacity region of the 2-receiver DM-BC and show that it is optimal for the class of semideterministic BCs. The coding scheme involves the multicoding technique introduced in Chapter 7 and the new idea of joint typicality codebook generation to construct dependent codewords for independent messages without the use of a superposition structure. The proof of the inner bound uses the mutual covering lemma, which is a generalization of the covering lemma in Section 3.7. Marton's coding scheme is then combined with superposition coding to establish an inner bound on the capacity region of the DM-BC that is tight for all classes of DM-BCs with known capacity regions. Next, we establish the Nair–El Gamal outer bound on the capacity region of the DM-BC. We show through an example that there is a gap between these inner and outer bounds. Finally, we discuss extensions of the aforementioned coding techniques to broadcast channels with more than two receivers and with arbitrary messaging requirements.
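For reference, Marton's inner bound in its standard private-message form (as commonly stated in the literature) asserts that a rate pair (R1, R2) is achievable if

```latex
\begin{align*}
R_1 &< I(U_1; Y_1), \\
R_2 &< I(U_2; Y_2), \\
R_1 + R_2 &< I(U_1; Y_1) + I(U_2; Y_2) - I(U_1; U_2)
\end{align*}
% for some pmf p(u_1, u_2) and function x(u_1, u_2).
```

The penalty term I(U1; U2) in the sum-rate constraint reflects the cost of constructing dependent codewords for independent messages.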
Confidentiality of information is a key consideration in many networking applications, including e-commerce, online banking, and intelligence operations. How can information be communicated reliably to the legitimate users, while keeping it secret from eavesdroppers? How does such a secrecy constraint on communication affect the limits on information flow in the network?
In this chapter, we study these questions under the information theoretic notion of secrecy, which requires each eavesdropper to obtain essentially no information about the messages sent from knowledge of its received sequence, the channel statistics, and the codebooks used. We investigate two approaches to achieve secure communication. The first is to exploit the statistics of the channel from the sender to the legitimate receivers and the eavesdroppers. We introduce the wiretap channel as a 2-receiver broadcast channel with a legitimate receiver and an eavesdropper, and establish its secrecy capacity, which is the highest achievable secret communication rate. The idea is to design the encoder so that the channel from the sender to the receiver becomes effectively stronger than the channel to the eavesdropper; hence the receiver can recover the message but the eavesdropper cannot. This wiretap coding scheme involves multicoding and randomized encoding.
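For reference, the secrecy capacity of the discrete memoryless wiretap channel takes the following standard form (due to Csiszár and Körner), where U is an auxiliary random variable, Y is the legitimate receiver's channel output, and Z is the eavesdropper's:

```latex
C_S = \max_{p(u,x)} \bigl[ I(U; Y) - I(U; Z) \bigr]
```

The rate is positive exactly when some choice of U makes the channel to the legitimate receiver effectively stronger than the channel to the eavesdropper.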
If the channel from the sender to the receiver is weaker than that to the eavesdropper, however, secret communication at a positive rate is not possible. This brings us to the second approach to achieving secret communication, which is to use a secret key shared between the sender and the receiver but unknown to the eavesdropper. We show that the rate of such a secret key must be at least as high as the rate of the confidential message. This raises the question of how the sender and the receiver can agree on such a long secret key in the first place. After all, if they had a confidential channel with sufficiently high capacity to communicate the key, then why not use it to communicate the message itself!
We show that if the sender and the receiver have access to correlated sources (e.g., through a satellite beaming common randomness to them), then they can still agree on a secret key even when the channel has zero secrecy capacity. We first consider the source model for key agreement, where the sender communicates with the receiver over a noiseless public broadcast channel to generate a secret key from their correlated sources.