The network models we studied so far involve only one-way (feedforward) communication. Many communication systems are inherently interactive, allowing for cooperation through feedback and information exchange over multiway channels. In this chapter, we study the role of feedback in communication and present results on the two-way channel introduced by Shannon as the first multiuser channel. The role of multiway interaction in compression and secure communication will be studied in Chapters 20 and 22, respectively.
As we showed in Section 3.1.1, the capacity of a memoryless point-to-point channel does not increase when noiseless causal feedback is present. Feedback can still benefit point-to-point communication, however, by simplifying coding and improving reliability. The idea is to first send the message uncoded and then to use feedback to iteratively reduce the receiver's error about the message, the error about the error, and so on. We demonstrate this iterative refinement paradigm via the Schalkwijk–Kailath coding scheme for the Gaussian channel and the Horstein and block feedback coding schemes for the binary symmetric channel. We show that the probability of error for the Schalkwijk–Kailath scheme decays double-exponentially in the block length, which is significantly faster than the single-exponential decay of the probability of error without feedback.
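The iterative refinement at the heart of the Schalkwijk–Kailath scheme can be sketched in a short simulation. This is a minimal sketch, not the scheme in full generality: the 16-point PAM alphabet, unit power and noise, and 20 channel uses are illustrative assumptions. At each channel use the transmitter, which learns the receiver's current estimate through noiseless feedback, sends the scaled estimation error; the receiver applies an MMSE correction, so the error variance shrinks geometrically by the factor 1/(1 + SNR) per use, which is what produces the double-exponential decay of the error probability.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 1.0, 1.0          # power constraint and noise variance (SNR = 1)
n_uses = 20              # channel uses per message
M = 16                   # PAM alphabet size (illustrative)
trials = 2000

errors = 0
for _ in range(trials):
    # message: one of M equally spaced PAM points in [-1, 1]
    m = int(rng.integers(M))
    theta = -1 + 2 * m / (M - 1)

    # first use: send the message point itself, scaled to power P
    y = np.sqrt(P) * theta + rng.normal(0, np.sqrt(N))
    est = y / np.sqrt(P)             # receiver's initial estimate
    var = N / P                      # variance of the estimation error

    # remaining uses: transmitter (knowing est via feedback) sends the
    # scaled estimation error; receiver applies an MMSE correction
    for _ in range(n_uses - 1):
        x = np.sqrt(P / var) * (est - theta)
        y = x + rng.normal(0, np.sqrt(N))
        est -= np.sqrt(P * var) / (P + N) * y
        var *= N / (P + N)           # error variance shrinks geometrically

    # decode to the nearest PAM point
    m_hat = int(round((est + 1) * (M - 1) / 2))
    errors += (m_hat != m)

print(f"error rate over {trials} trials: {errors / trials}")
```

After 20 uses the error standard deviation is tens of times smaller than the spacing between PAM points, so decoding errors become vanishingly rare even at this modest SNR.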
We then show that feedback can enlarge the capacity region in multiuser channels. For the multiple access channel, feedback enlarges the capacity region by enabling statistical cooperation between the transmitters. We show that the capacity of the Gaussian MAC with feedback coincides with the outer bound obtained by allowing arbitrary (instead of product) joint input distributions. For the broadcast channel, feedback can enlarge the capacity region by enabling the sender to simultaneously refine both receivers’ knowledge about the messages. For the relay channel, we show that the cutset bound is achievable when noiseless causal feedback from the receiver to the relay is allowed. This is in contrast to the case without feedback in which the cutset bound is not achievable in general.
Finally, we discuss the two-way channel, where two nodes wish to exchange their messages interactively over a shared noisy channel. The capacity region of this channel is not known in general. We first establish simple inner and outer bounds on the capacity region.
We consider the problem of generating two descriptions of a source such that each description by itself can be used to reconstruct the source with some desired distortion and the two descriptions together can be used to reconstruct the source with a lower distortion. This problem is motivated by the need to efficiently communicate multimedia content over networks such as the Internet. Consider the following two scenarios:
• Path diversity: Suppose we wish to send a movie to a viewer over a network that suffers from data loss and delays. We can send multiple copies of the same description of the movie to the viewer via different paths in the network. Such replication, however, is inefficient and the viewer does not benefit from receiving more than one copy of the description. Multiple description coding provides a more efficient means to achieve such “path diversity.” We generate multiple descriptions of the movie, so that if the viewer receives only one of them, the movie can be reconstructed with some acceptable quality, and if the viewer receives two of them, the movie can be reconstructed with a higher quality and so on.
• Successive refinement: Suppose we wish to send a movie with different levels of quality to different viewers. We can send a separate description of the movie to each viewer. These descriptions, however, are likely to have significant overlaps. Successive refinement, which is a special case of multiple description coding, provides a more efficient way to distribute the movie. The idea is to send the lowest quality description and successive refinements of it (instead of additional full descriptions). Each viewer then uses the lowest quality description and some of the successive refinements to reconstruct the movie at her desired level of quality.
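The layered idea behind successive refinement can be illustrated with a toy two-layer scalar quantizer (a minimal sketch; the Gaussian source, uniform quantizers, and step sizes are illustrative assumptions, not an optimal code): a coarse base layer describes the source, and a refinement layer quantizes only the residual that the base layer leaves behind.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)          # i.i.d. Gaussian source samples

def quantize(v, step):
    """Uniform scalar quantizer with the given step size."""
    return step * np.round(v / step)

# Base layer: coarse description, sent to every viewer.
base = quantize(x, step=1.0)
# Refinement layer: quantize the residual left by the base layer.
refine = quantize(x - base, step=0.25)

mse_base = np.mean((x - base) ** 2)            # quality with base only
mse_both = np.mean((x - base - refine) ** 2)   # quality with both layers
print(f"base-layer MSE:        {mse_base:.4f}")
print(f"base + refinement MSE: {mse_both:.4f}")
```

A viewer who receives only the base layer gets an acceptable reconstruction; a viewer who also receives the refinement adds it to the base layer and obtains a much lower distortion, without the two layers duplicating each other.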
The optimal scheme for generating multiple descriptions is not known in general. We present the El Gamal–Cover coding scheme for generating two descriptions that are individually good but still carry additional information about the source when combined together. The proof of achievability uses the multivariate covering lemma in Section 8.4. We show that this scheme is optimal for the quadratic Gaussian case. The key to the converse is the identification of a common-information random variable. We then present an improvement on the El Gamal–Cover scheme by Zhang and Berger that involves sending an additional common description.
An increasing number of researchers and practitioners in Natural Language Engineering face the prospect of having to work with entire texts, rather than individual sentences. While it is clear that text must have useful structure, the nature of that structure may be less clear, making it harder to exploit in applications. This survey of work on discourse structure thus provides a primer on the bases on which discourse is structured, along with some of their formal properties. It then lays out the current state of the art in algorithms for recognizing these different structures and how these algorithms are currently used in Language Technology applications. After identifying resources that should prove useful in improving algorithm performance across a range of languages, we conclude by speculating on future discourse-structure-enabled technology.
We resume the discussion of broadcast channels started in Chapter 5. Again consider the 2-receiver DM-BC p(y1, y2|x) with private and common messages depicted in Figure 8.1. The definitions of a code, achievability, and the capacity region are the same as in Chapter 5. As mentioned before, the capacity region of the DM-BC is not known in general. In Chapter 5, we presented the superposition coding scheme and showed that it is optimal for several classes of channels in which one receiver is stronger than the other. In this chapter, we study coding schemes that can outperform superposition coding and present the tightest known inner and outer bounds on the capacity region of the general broadcast channel.
We first show that superposition coding is optimal for the 2-receiver DM-BC with degraded message sets, that is, when either R1 = 0 or R2 = 0. We then show that superposition coding is not optimal for BCs with more than two receivers. In particular, we establish the capacity region of the 3-receiver multilevel BC. The achievability proof involves the new idea of indirect decoding, whereby a receiver who wishes to recover only the common message still uses satellite codewords in decoding for the cloud center.
We then present Marton's inner bound on the private-message capacity region of the 2-receiver DM-BC and show that it is optimal for the class of semideterministic BCs. The coding scheme involves the multicoding technique introduced in Chapter 7 and the new idea of joint typicality codebook generation to construct dependent codewords for independent messages without the use of a superposition structure. The proof of the inner bound uses the mutual covering lemma, which is a generalization of the covering lemma in Section 3.7. Marton's coding scheme is then combined with superposition coding to establish an inner bound on the capacity region of the DM-BC that is tight for all classes of DM-BCs with known capacity regions. Next, we establish the Nair–El Gamal outer bound on the capacity region of the DM-BC. We show through an example that there is a gap between these inner and outer bounds. Finally, we discuss extensions of the aforementioned coding techniques to broadcast channels with more than two receivers and with arbitrary messaging requirements.
Existing sonar rings are limited in their refresh rate to the transmit echo rate; that is, they must wait for maximum-range echoes to arrive before transmitting again. This paper presents a sonar ring that refreshes at 60 Hz for a 5.7 m range, twice the transmit echo rate, leading to lower latency and denser measurements. Two custom Field Programmable Gate Array (FPGA) signal processors provide real-time continuous matched filtering with dynamic templates. A new method selects the transmit time from a random set so as to minimize interference. Experiments demonstrate the increased refresh rate, the interference rejection, and maps generated by the sonar ring.
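The matched filtering that the paper implements in FPGA hardware can be sketched in software as a cross-correlation of the received signal with the transmitted template, with the echo's arrival time read off from the correlation peak. This is a simplified illustration, not the paper's implementation: the 40 kHz tone-burst template, sample rate, delay, and noise level are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100_000                               # sample rate in Hz (assumed)
t = np.arange(0, 1e-3, 1 / fs)             # 1 ms transmit pulse
template = np.sin(2 * np.pi * 40_000 * t)  # 40 kHz tone burst (assumed)

# Synthetic received signal: one echo, delayed by 300 samples, in noise
rx = rng.normal(0, 0.2, 2_000)
delay = 300
rx[delay:delay + len(template)] += template

# Matched filtering = cross-correlation with the transmitted template
corr = np.correlate(rx, template, mode="valid")
detected = int(np.argmax(corr))
print("detected echo at sample", detected)
```

The correlation peak concentrates the template's entire energy at the echo's true delay, which is why matched filtering tolerates noise far better than simple threshold detection on the raw signal.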
Confidentiality of information is a key consideration in many networking applications, including e-commerce, online banking, and intelligence operations. How can information be communicated reliably to the legitimate users, while keeping it secret from eavesdroppers? How does such a secrecy constraint on communication affect the limits on information flow in the network?
In this chapter, we study these questions under the information theoretic notion of secrecy, which requires each eavesdropper to obtain essentially no information about the messages sent from knowledge of its received sequence, the channel statistics, and the codebooks used. We investigate two approaches to achieve secure communication. The first is to exploit the statistics of the channel from the sender to the legitimate receivers and the eavesdroppers. We introduce the wiretap channel as a 2-receiver broadcast channel with a legitimate receiver and an eavesdropper, and establish its secrecy capacity, which is the highest achievable secret communication rate. The idea is to design the encoder so that the channel from the sender to the receiver becomes effectively stronger than the channel to the eavesdropper; hence the receiver can recover the message but the eavesdropper cannot. This wiretap coding scheme involves multicoding and randomized encoding.
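For the special case of a binary symmetric wiretap channel in which the eavesdropper sees a degraded (noisier) BSC, the secrecy capacity takes the simple closed form Cs = H(p2) − H(p1), the difference of the two binary entropies. A small sketch (the crossover probabilities 0.1 and 0.3 are illustrative assumptions):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_wiretap_secrecy_capacity(p1, p2):
    """Secrecy capacity of a degraded binary symmetric wiretap channel:
    main channel BSC(p1), eavesdropper channel BSC(p2), p1 <= p2 <= 1/2.
    Cs = h2(p2) - h2(p1), positive iff the eavesdropper's channel is noisier."""
    return h2(p2) - h2(p1)

print(bsc_wiretap_secrecy_capacity(0.1, 0.3))  # noisier eavesdropper: positive rate
print(bsc_wiretap_secrecy_capacity(0.3, 0.3))  # equally noisy: zero secrecy capacity
```

The formula makes the first approach concrete: positive secret communication rate is possible exactly when the legitimate channel is statistically stronger than the eavesdropper's.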
If the channel from the sender to the receiver is weaker than that to the eavesdropper, however, secret communication at a positive rate is not possible. This brings us to the second approach to achieve secret communication, which is to use a secret key shared between the sender and the receiver but unknown to the eavesdropper. We show that the rate of such a secret key must be at least as high as the rate of the confidential message. This raises the question of how the sender and the receiver can agree on such a long secret key in the first place. After all, if they had a confidential channel with sufficiently high capacity to communicate the key, then why not use it to communicate the message itself!
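The extreme point of this shared-key approach, where the key rate exactly matches the message rate, is the classical one-time pad: XOR the message with a uniformly random key of the same length, which gives perfect secrecy. A minimal sketch (the message string is illustrative):

```python
import secrets

def one_time_pad(message: bytes, key: bytes) -> bytes:
    """XOR each message byte with a fresh key byte. Perfect secrecy
    requires a uniformly random key at least as long as the message,
    used only once -- hence the key-rate requirement in the text."""
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # uniformly random key, same length
ct = one_time_pad(msg, key)
assert one_time_pad(ct, key) == msg   # decryption is the same XOR
print(ct.hex())
```

Since XOR with a uniform key makes the ciphertext uniform and independent of the message, an eavesdropper without the key learns nothing, which is precisely why the key must carry at least as much randomness as the message itself.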
We show that if the sender and the receiver have access to correlated sources (e.g., through a satellite beaming common randomness to them), then they can still agree on a secret key even when the channel has zero secrecy capacity. We first consider the source model for key agreement, where the sender communicates with the receiver over a noiseless public broadcast channel to generate a secret key from their correlated sources.
In the first three parts of the book we investigated the limits on information flow in networks whose task is to communicate (or store) distributed information. In many real-world distributed systems, such as multiprocessors, peer-to-peer networks, networked mobile agents, and sensor networks, the task of the network is to compute a function, make a decision, or coordinate an action based on distributed information. Can the communication rate needed to perform such a task at some node be reduced relative to communicating all the sources to this node?
This question has been formulated and studied in computer science under communication complexity and gossip algorithms, in control and optimization under distributed consensus, and in information theory under coding for computing and the μ-sum problem, among other topics. In this chapter, we study information theoretic models for distributed computing over networks. In some cases, we find that the total communication rate can be significantly reduced when the task of the network is to compute a function of the sources rather than to communicate the sources themselves, while in other cases, no such reduction is possible.
We first show that the Wyner–Ziv theorem in Chapter 11 extends naturally to the case when the decoder wishes to compute a function of the source and the side information. We provide a refined characterization of the lossless special case of this result in terms of conditional graph entropy. We then discuss distributed coding for computing. Although the rate–distortion region for this case is not known in general (even when the goal is to reconstruct the sources themselves), we show through examples that the total communication rate needed for computing can be significantly lower than for communicating the sources themselves. The first example we discuss is the μ-sum problem, where the decoder wishes to reconstruct a weighted sum of two separately encoded Gaussian sources with a prescribed quadratic distortion. We establish the rate–distortion region for this setting by reducing the problem to the CEO problem discussed in Chapter 12. The second example is lossless computing of the modulo-2 sum of a doubly symmetric binary source (DSBS). Surprisingly, we find that using the same linear code at both encoders can outperform Slepian–Wolf coding.
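The same-linear-code idea, due to Körner and Marton, can be sketched with the (7,4) Hamming code. This toy example assumes the difference z = x ⊕ y has Hamming weight at most one (standing in for the sparsity of z when the DSBS correlation is high): each encoder sends only the 3-bit syndrome of its 7-bit block, and the decoder recovers z from the XOR of the two syndromes, since H·x ⊕ H·y = H·z.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# expansion of i+1, so the syndrome of a weight-1 vector directly names
# the position of its single 1.
H = np.array([[(i >> k) & 1 for i in range(1, 8)] for k in range(2, -1, -1)])

def syndrome(v):
    return H @ v % 2

def decode_sum(sx, sy):
    """Korner-Marton decoder: recover z = x XOR y from the two syndromes,
    assuming z has Hamming weight at most one."""
    s = (sx + sy) % 2                    # H x + H y = H z  (mod 2)
    z = np.zeros(7, dtype=int)
    pos = int("".join(map(str, s)), 2)   # read the syndrome as a number
    if pos:                              # nonzero syndrome names the 1-position
        z[pos - 1] = 1
    return z

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 7)
z = np.zeros(7, dtype=int)
z[rng.integers(7)] = 1                   # sparse difference between sources
y = (x + z) % 2

# Each encoder sends its 3-bit syndrome instead of its full 7-bit block.
z_hat = decode_sum(syndrome(x), syndrome(y))
print("recovered:", np.array_equal(z_hat, (x + y) % 2))
```

Each encoder spends 3 bits rather than the 7 that describing its source in full would require, yet the decoder still computes the modulo-2 sum exactly; Slepian–Wolf coding aimed at recovering both sources cannot match this total rate.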