A simple look at today's information and communication infrastructure is sufficient for one to appreciate the elegance of the layered networking architecture. As networks flourish worldwide, the fundamental problems of transmission, routing, resource allocation, end-to-end reliability, and congestion control are assigned to different layers of protocols, each with its own specific tools and network abstractions. However, the conceptual beauty of the layered protocol stack is not easily found when we turn our attention to the issue of network security. In the early days of the Internet, possibly because network access was very limited and tightly controlled, network security was not yet viewed as a primary concern for computer users and system administrators. This perception changed with the increase in network connections. Technical solutions, such as personnel access controls, password protection, and end-to-end encryption, were developed soon after. The steady growth in connectivity, fostered by the advent of electronic-commerce applications and the ubiquity of wireless communications, remains unhindered and has resulted in an unprecedented awareness of the importance of network security in all its guises.
The standard practice of adding authentication and encryption to the existing protocols at the various communication layers has led to what could be rightly classified as a patchwork of security mechanisms. Given that data security is so critically important, it is reasonable to argue that security measures should be implemented at all layers where this can be done in a cost-effective manner.
In Chapter 3, we considered the transmission of information over a noisy broadcast channel subject to reliability and security constraints; we showed that appropriate coding schemes can exploit the presence of noise to confuse the eavesdropper and guarantee some amount of information-theoretic security. It is important to note that the wiretap channel model assumes that all communications occur over the channel, so that communication is inherently rate-limited and one-way. Consequently, the results obtained do not fully capture the role of noise for secrecy; in particular, for situations in which the secrecy capacity is zero, it is not entirely clear whether this stems from the lack of any “physical advantage” over the eavesdropper or from the restrictions imposed on the communication schemes.
The objective of this chapter is to study more precisely the fundamental role of noise in information-theoretic security. Instead of studying how we can communicate messages securely over a noisy channel, we now analyze how much secrecy we can extract from the noise itself in the form of a secret key. Specifically, we assume that the legitimate parties and the eavesdropper observe the realizations of correlated random variables and that the legitimate parties attempt to agree on a secret key unknown to the eavesdropper. To isolate the role played by noise, we remove restrictions on communication schemes and we assume that the legitimate parties can distill their key by communicating over a two-way, public, noiseless, and authenticated channel at no cost.
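For orientation, the key-agreement literature (notably the works of Maurer and of Ahlswede and Csiszár) provides standard bounds for this source model; they are quoted here as background and are not derived in this excerpt. When the legitimate parties and the eavesdropper observe i.i.d. realizations of correlated random variables (X, Y, Z), and the legitimate parties may discuss over the public authenticated channel, the secret-key capacity C_K satisfies

I(X;Y) − min{I(X;Z), I(Y;Z)} ≤ C_K ≤ min{I(X;Y), I(X;Y|Z)},

so a positive key rate can be distilled from the noise alone whenever I(X;Y) exceeds either I(X;Z) or I(Y;Z).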
In the previous chapter, we reviewed the problem of lossless multicast when there is only one information source in the network. In many practical applications, however, there is more than one information source, each with its intended set of clients. One might be interested in the maximum rate of information that can be communicated from each source to its intended clients. Unlike the case with only one information source, the problem becomes involved even when each source has only one intended client, a case called the multi-unicast problem. Even then, optimum communication strategies and the set of achievable information rates are not known in general. We will review some of the known results for the multi-unicast problem.
Multi-unicast problem
We start with the multiple unicast, or multi-unicast, problem. The material in this section is mainly a summary of the results in [17, 18]; the reader is referred to these works for further reading. The multi-unicast communication problem with k sources is defined as follows: a graph G = 〈V, E〉 and k pairs of vertices {(s1, t1), (s2, t2), …, (sk, tk)} are given. Each source si is assumed to have access to an independent information source that it wishes to communicate to its intended receiver ti. A demand vector of rates r = (r1, r2, …, rk) is said to be achievable if a communication strategy can be found to simultaneously communicate information at rate ri from si to ti for all i = 1, 2, …, k.
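As a concrete illustration, an instance can be represented as a capacitated directed graph together with the source-sink pairs and the demand vector. The sketch below (in Python; the graph, node names, and rate values are hypothetical and not taken from [17, 18]) checks only the obvious necessary condition that each demand ri cannot exceed the si–ti max-flow when that pair is served in isolation; deciding whether all demands are jointly achievable is the hard part of the problem.

import networkx as nx

# Hypothetical capacitated network; capacities are in bits per source symbol.
G = nx.DiGraph()
edges = [("s1", "a", 1.0), ("s2", "a", 1.0), ("a", "b", 1.0),
         ("b", "t1", 1.0), ("b", "t2", 1.0),
         ("s1", "t2", 0.5), ("s2", "t1", 0.5)]
G.add_weighted_edges_from(edges, weight="capacity")

pairs = [("s1", "t1"), ("s2", "t2")]   # (si, ti) source-sink pairs
demands = [1.0, 1.0]                    # demand vector r = (r1, ..., rk)

# Necessary (but not sufficient) condition for achievability:
# ri must not exceed the si -> ti max-flow taken in isolation.
for (s, t), r in zip(pairs, demands):
    f, _ = nx.maximum_flow(G, s, t, capacity="capacity")
    status = "may be feasible" if r <= f else "infeasible"
    print(f"pair ({s}, {t}): demand {r}, isolated max-flow {f}, {status}")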
Stated intuitively, the question investigated in this book is the following:
How does one communicate one or more source signals over a network from nodes (servers) that observe/supply the sources to a set of sink nodes (clients) to realize the best possible reconstruction of the signals at the clients?
The above question sets the unifying theme for the problems studied in this book, and, as will be made clear in this introduction, contains some of the most important and fundamental problems in information and network communication theory.
Network representation of source coding problems
Let's start with the observation that even the simplest source coding problems have (perhaps trivial) network representations. Figure 1.1, for instance, shows network representations of arguably the three most fundamental source coding problems. Here, the goal is to communicate a single source signal X from a single server node s to one or more sinks. Each link of the network has a “capacity” assigned to it, which, when properly normalized, indicates the number of bits that can be communicated over that link, without errors, for every source symbol emitted from X.
Figure 1.1(a) depicts the simplest source coding problem. The receiver node t receives an encoding of X at R1 bits per source symbol from the source node s. If X admits a rate-distortion function DX(R), the reconstruction error at t is at best dt = DX(R1).
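As a concrete instance (a standard textbook example, not part of the discussion of Figure 1.1), if X is a memoryless Gaussian source with variance σ^2 and distortion is measured by squared error, then DX(R) = σ^2 · 2^(−2R); with R1 = 2 bits per source symbol, the best achievable reconstruction error at t is dt = σ^2/16.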
In the previous chapters, we studied the merits of using balanced multiple-description codes, along with carefully optimized routing strategies, for lossy source communication in heterogeneous networks. As discussed in the first part of this book, network coding is able to improve the network communication throughput, when compared to routing only. Starting from this chapter, we consider the merits of using network coding for lossy source communication. The problem becomes very complex in its most general form. Some of the complexities are apparent from the discussion in the following section. A practical subclass of the problem uses progressive source codes along with carefully optimized network coding strategies for efficient multicast of compressible sources, and is the subject of this chapter. But first, some general notes on using network coding for lossy communications.
Lossy source communication with network coding: an introduction
The lossy source communication strategies studied so far have been based on routing of multiple-description codes: the descriptions have been routed and duplicated in the network, without network coding. In this chapter, we turn to lossy source communication methods that may use network coding ideas for network delivery. We start by introducing a generalization of the network coding problem, called Rainbow Network Coding (RNC). Unlike the current formulation of network coding, RNC recognizes that the information communicated to different members of a multicast group can, in general, be different.
In Chapter 5, we introduced a practical approach to the NASCC problem through optimal diversity routing of MDC code streams (the RNF problem) and optimal design of the MDC codes by the PET technique. There, we briefly discussed the role of the common description rate r and the total number of possible descriptions K. The developments so far assumed communication with bounded delays, in which case the values of r and K become particularly important.
When the delay constraint is relaxed, we find that the set of all achievable distortion tuples converges to a limit independent of the description rate r. This limiting region turns out to have a simple representation through the introduction of a new form of flow we call continuous Rainbow Network Flow, or co-RNF, as opposed to the discrete version of the problem considered so far in this book. co-RNF can be viewed as the generalization of RNF to fractional flows and is the subject of Sections 7.1 and 7.2.
In di-RNF, we assumed the existence of K descriptions of equal rate r. co-RNF, in one view, relaxes the constraint on description rates and allows for an arbitrary number of descriptions. Therefore, co-RNF contains RNF as a special case. Conversely, we show that the performance achievable with arbitrary description rates can be approached arbitrarily closely using any fixed description rate, provided that the delay constraint is relaxed and the number of descriptions is left unbounded.
The methods proposed in the previous chapter rely on our ability to find a (perhaps approximate) solution to the RNF problem, and in particular the CRNF problem. Once such a solution is found, the design of the MDC codes with the PET technique is straightforward. This chapter is a more detailed algorithmic account of RNF, and in particular of the CRNF problem.
We start by proving, in Section 6.1, that the CRNF problem is NP-hard even on Directed Acyclic Graph (DAG) network topologies. In Section 6.2, we show that, for DAGs, the CRNF problem can be posed as an integer linear program. The problem can therefore be solved for moderate network sizes using existing numerical optimization packages.
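To illustrate the last point with a minimal, self-contained example (this is a generic toy integer program solved with the open-source PuLP package, not the CRNF formulation, which is developed in Section 6.2), posing an integer linear program and handing it to an off-the-shelf solver takes only a few lines:

from pulp import LpProblem, LpVariable, LpMaximize, LpStatus, value

# Toy integer program: choose integer amounts on two routes subject to a shared
# capacity, maximizing total value.  Illustrative only; the CRNF integer program
# has its own variables and constraints.
prob = LpProblem("toy_ilp", LpMaximize)
x = LpVariable("x", lowBound=0, cat="Integer")
y = LpVariable("y", lowBound=0, cat="Integer")

prob += 3 * x + 2 * y      # objective: total value
prob += 2 * x + y <= 7     # shared capacity constraint
prob += y <= 3             # per-route limit

prob.solve()
print(LpStatus[prob.status], value(x), value(y), value(prob.objective))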
For large networks, the results in Section 6.1 suggest that an exact solution to the integer linear program formulated in Section 6.2 cannot be found efficiently. While the CRNF problem is NP-hard even on DAGs, we find a polynomial-time solution for a generalized tree topology. If the network graph can be appropriately decomposed into tree components, then a dynamic programming algorithm can be applied to solve the CRNF problem exactly for a general distortion function δ(k). The development of this algorithm is the subject of Section 6.3.
Complexity results of the CRNF problem
In this section, we prove that the CRNF problem is NP-hard. The proof is constructed by reducing the well-known NP-hard problem of graph 3-colorability [42] to a special instance of CRNF on a DAG in which there is only a single server node.
Multiple-description codes are powerful tools for network-aware source coding and communication, as suggested in the previous chapters. We showed how MDC can be useful even in error-free networks. Traditionally, however, MDC has been used to combat losses in lossy packet networks, in which packets may be dropped or lost. In this chapter, we review practical techniques for the construction and optimization of MDCs.
In the most general setting, an MDC scheme generating K descriptions can be regarded as a system consisting of K encoders (also called side encoders) and 2^K − 1 decoders, one for each non-empty subset of descriptions. Figure 8.1 illustrates the block diagram of an MDC scheme for three descriptions. Each encoder generates a bit stream (description) of the same source and sends it to the receiver(s). The sender does not know how many streams are received by a particular receiver, but each receiver has this information. If only some descriptions arrive at a given destination, the decoder corresponding to that subset of descriptions is used to jointly decode them. The K decoders corresponding to individual descriptions are called side decoders, while the others are termed joint decoders. Moreover, the joint decoder corresponding to the whole set of descriptions is known as the central decoder.
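To make this structure concrete, the following sketch (in Python, with hypothetical placeholder encoders and decoders rather than any particular MDC construction) builds one decoder per non-empty subset of descriptions and shows the receiver selecting the decoder that matches whatever subset actually arrived:

from itertools import combinations

K = 3  # number of descriptions

# Placeholder side encoder: in a real MDC scheme this would produce the k-th
# description bit stream of the source.
def encode(source, k):
    return f"description_{k}({source})"

# Placeholder joint decoder for a given subset of descriptions; singleton
# subsets correspond to side decoders, the full set to the central decoder.
def make_decoder(subset):
    def decode(received):
        return f"reconstruction_from_{sorted(subset)}"
    return decode

descriptions = {k: encode("X", k) for k in range(1, K + 1)}
decoders = {frozenset(s): make_decoder(s)
            for r in range(1, K + 1)
            for s in combinations(range(1, K + 1), r)}
assert len(decoders) == 2 ** K - 1  # one decoder per non-empty subset

# Receiver: only descriptions 1 and 3 arrive, so the joint decoder for {1, 3} is used.
arrived = frozenset({1, 3})
received = {k: descriptions[k] for k in arrived}
print(decoders[arrived](received))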
Overview of MDC techniques
Practical MD coding schemes for memoryless sources have been extensively investigated. Some of the most representative approaches are PET-based MDC, MD quantization, and MD correlating transforms. This section offers a brief overview of these three approaches.
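To give a flavor of the PET-based approach, the sketch below (Python; a toy construction over the prime field GF(257) using polynomial-evaluation erasure codes, whereas practical PET systems typically use Reed-Solomon codes over GF(256) and an optimized assignment of protection levels) packs a two-layer progressive bitstream into K descriptions so that the more important layer is recoverable from any single description and the refinement layer from any two:

P = 257  # prime field size; data symbols must be integers in 0..256

def mds_encode(block, K):
    # Spread a block of m data symbols over K descriptions: the block defines a
    # polynomial of degree < m over GF(P), evaluated at x = 1, ..., K, so any m
    # of the K output symbols determine the block (Reed-Solomon-style coding).
    return [sum(c * pow(x, j, P) for j, c in enumerate(block)) % P
            for x in range(1, K + 1)]

def mds_decode(points, m):
    # Recover the m data symbols from any m (x, y) evaluation points by solving
    # the Vandermonde system modulo P with Gauss-Jordan elimination.
    rows = [[pow(x, j, P) for j in range(m)] + [y] for x, y in points[:m]]
    for col in range(m):
        piv = next(r for r in range(col, m) if rows[r][col] % P)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)
        rows[col] = [v * inv % P for v in rows[col]]
        for r in range(m):
            if r != col:
                f = rows[r][col]
                rows[r] = [(vr - f * vc) % P for vr, vc in zip(rows[r], rows[col])]
    return [rows[r][m] for r in range(m)]

def pet_encode(layers, levels, K):
    # layers[i] is the list of symbols of the i-th layer of a progressive code;
    # levels[i] = m_i means layer i must be decodable from any m_i of the K
    # descriptions (more important layers get smaller m_i).
    descriptions = [[] for _ in range(K)]
    for layer, m in zip(layers, levels):
        for start in range(0, len(layer), m):
            block = layer[start:start + m]
            block += [0] * (m - len(block))  # pad the final block
            for d, sym in enumerate(mds_encode(block, K)):
                descriptions[d].append(sym)
    return descriptions

# Two layers with K = 3 descriptions: the base layer survives any single
# description, the refinement layer needs any two.
layers, levels, K = [[10, 20], [1, 2, 3, 4]], [1, 2], 3
desc = pet_encode(layers, levels, K)

# A receiver holding only descriptions 0 and 2 rebuilds both layers.  The base
# layer occupies the first two positions of every description, the refinement
# layer the next two.
got = [0, 2]
base = [mds_decode([(got[0] + 1, desc[got[0]][i])], 1)[0] for i in range(2)]
refine = [s for i in range(2)
          for s in mds_decode([(g + 1, desc[g][2 + i]) for g in got], 2)]
print(base, refine)  # -> [10, 20] [1, 2, 3, 4]

The key design choice is the protection level m_i assigned to each layer: lowering m_i makes the layer more robust but consumes a larger share of every description, which is the rate-allocation tradeoff that PET-based MDC designs optimize.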
In Part I of this book, we investigated the problem of lossless source communication in networks. We reviewed the current literature, with an emphasis on new results in network coding. This chapter is the beginning of Part II, which deals with the case of lossy source coding and communication, where we allow imperfect reconstruction of sources at receivers.
There are two settings in which such a lossy extension of network information flow problems is necessary. For one, most multimedia signals, such as audio, video, and images, simply cannot be encoded losslessly. The digitization process is, by nature, lossy. With multimedia signals claiming the largest share of the traffic in today's Internet, the study of lossy network communication is particularly important. Second, lossless communication implies that the same information content has to be delivered to all receivers (the so-called common information multicast). For most multimedia applications, different classes of receivers with different bandwidth resources are often interested in consuming the same multimedia content. In such heterogeneous network environments, one might consider communicating different encodings of the same content to different receivers, such that the receivers with higher bandwidths are able to reconstruct the signal at a higher quality. The concept of quality, and the tradeoff between rate and distortion, are the defining characteristics of lossy network communication.
In this chapter, we introduce a number of techniques that allow one to optimize the quality of the multimedia signals delivered to receivers over a heterogeneous network.
Lossless distributed source coding is a well-studied topic in source communication, examined in both simple and general settings, with and without network coding. The Slepian-Wolf (S-W) [1] distributed source coding problem considers lossless communication of two discrete correlated sources. More precisely, let X and Y be two correlated sources on a finite alphabet. The S-W problem is to encode X and Y separately at RX and RY bits per source symbol, respectively, such that a joint decoder can recover both X and Y without error. As Slepian and Wolf proved in their seminal work [1], such encoding is possible if and only if RX ≥ H(X|Y), RY ≥ H(Y|X), and RX + RY ≥ H(X, Y), where H is the discrete entropy.
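As an illustrative example (a standard one, not taken from this text), suppose X is a uniform binary source and Y = X ⊕ N, where N is Bernoulli noise with parameter p, independent of X. Then H(X|Y) = H(Y|X) = h(p) and H(X, Y) = 1 + h(p), where h(·) denotes the binary entropy function, so separate S-W encoders need only a sum rate of 1 + h(p) bits per symbol, rather than the 2 bits required if the correlation between X and Y were ignored.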
As discussed in the introduction, the S-W distributed source coding can be considered as an instance of network source coding for a special network with two source nodes (one for X and one for Y) that are directly connected to a single client. Recently, Ho et al. [3] generalized S-W coding to distributed source coding over arbitrary networks, with an arbitrary number of sources. We start with the simple S-W problem, before reviewing the results for S-W coding on arbitrary networks.
Slepian-Wolf problem in simple networks
Slepian-Wolf theory, also known as distributed data compression or lossless separate coding of correlated random variables, is a canonical source coding problem that falls within the scope of network information theory.