In the first three parts of the book we investigated the limits on information flow in networks whose task is to communicate (or store) distributed information. In many real-world distributed systems, such as multiprocessors, peer-to-peer networks, networked mobile agents, and sensor networks, the task of the network is to compute a function, make a decision, or coordinate an action based on distributed information. Can the communication rate needed to perform such a task at some node be reduced relative to communicating all the sources to this node?
This question has been formulated and studied in computer science under communication complexity and gossip algorithms, in control and optimization under distributed consensus, and in information theory under coding for computing and the μ-sum problem, among other topics. In this chapter, we study information-theoretic models for distributed computing over networks. In some cases, we find that the total communication rate can be significantly reduced when the task of the network is to compute a function of the sources rather than to communicate the sources themselves, while in other cases, no such reduction is possible.
We first show that the Wyner–Ziv theorem in Chapter 11 extends naturally to the case when the decoder wishes to compute a function of the source and the side information. We provide a refined characterization of the lossless special case of this result in terms of conditional graph entropy. We then discuss distributed coding for computing. Although the rate–distortion region for this case is not known in general (even when the goal is to reconstruct the sources themselves), we show through examples that the total communication rate needed for computing can be significantly lower than for communicating the sources themselves. The first example we discuss is the μ-sum problem, where the decoder wishes to reconstruct a weighted sum of two separately encoded Gaussian sources with a prescribed quadratic distortion. We establish the rate–distortion region for this setting by reducing the problem to the CEO problem discussed in Chapter 12. The second example is lossless computing of the modulo-2 sum of a doubly symmetric binary source (DSBS). Surprisingly, we find that using the same linear code at both encoders can outperform Slepian–Wolf coding.
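As a rough sketch of why using the same linear code at both encoders helps (the standard Körner–Marton argument, which may differ in detail from this chapter's development): for a DSBS, write X2 = X1 ⊕ Z, where Z ~ Bern(p) is independent of X1. If both encoders apply the same binary matrix A (the parity-check matrix of a good linear code for Z) and send the syndromes A X1^n and A X2^n, the decoder can form their modulo-2 sum A X1^n ⊕ A X2^n = A Z^n and recover Z^n = X1^n ⊕ X2^n. Each syndrome requires rate only about H(Z) = H(p), so a total rate of roughly 2H(p) suffices, whereas Slepian–Wolf coding of the pair requires at least the joint entropy H(X1, X2) = 1 + H(p), which is strictly larger whenever H(p) < 1.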
In this chapter, we begin the discussion on communication of uncompressed sources over multiple noiseless links. We consider the limits on lossless compression of separately encoded sources, which is motivated by distributed sensing problems. For example, consider a sensor network for measuring the temperature at different locations across a city. Suppose that each sensor node compresses its measurement and transmits it to a common base station via a noiseless link. What is the minimum total transmission rate needed so that the base station can losslessly recover the measurements from all the sensors? If the sensor measurements are independent of each other, then the answer to this question is straightforward: each sensor compresses its measurement to the entropy of its respective temperature process, and the limit on the total rate is the sum of the individual entropies. The temperature processes at the sensors, however, can be highly correlated. Can such correlation be exploited to achieve a lower rate than the sum of the individual entropies? Slepian and Wolf showed that the total rate can be reduced to the joint entropy of the processes, that is, the limit on distributed lossless compression is the same as that on centralized compression, where the sources are jointly encoded. The achievability proof of this surprising result uses the new idea of random binning.
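A hypothetical binary example (ours, for orientation only) suggests the size of the gain: if two sensors' readings can be modeled as X1 ~ Bern(1/2) and X2 = X1 ⊕ Z with Z ~ Bern(0.11) independent of X1, then compressing each reading to its own entropy costs H(X1) + H(X2) = 2 bits per sample pair, while the joint entropy is H(X1, X2) = 1 + H(0.11) ≈ 1.5 bits, so exploiting the correlation saves about a quarter of the total rate.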
We then consider lossless source coding with helpers. Suppose that the base station in our sensor network example wishes to recover the temperature measurements from only a subset of the sensors while using the information sent by the rest of the sensor nodes to help achieve this goal. What is the optimal tradeoff between the rates from the different sensors? We establish the optimal rate region for the case of a single helper node.
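For orientation, a sketch of the standard single-helper characterization, stated in generic notation that need not match the chapter's: if the decoder must recover the source X1 losslessly while the helper observes a correlated source X2, the optimal rate region is the set of rate pairs (R1, R2) such that R1 ≥ H(X1 | U) and R2 ≥ I(X2; U) for some conditional pmf p(u | x2), where the auxiliary random variable U represents the helper's coded description of X2.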
In Chapter 20, we continue the discussion of distributed lossless source coding by considering more general networks modeled by graphs.
Distributed Lossless Source Coding for a 2-DMS
Consider the distributed compression system depicted in Figure 10.1, where two sources X1 and X2 are separately encoded (described) at rates R1 and R2, respectively, and the descriptions are communicated over noiseless links to a decoder who wishes to recover both sources losslessly. What is the set of simultaneously achievable description rate pairs (R1, R2)?
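For reference, the answer established in this chapter is the Slepian–Wolf rate region, stated here in its standard form: the set of rate pairs (R1, R2) such that R1 ≥ H(X1 | X2), R2 ≥ H(X2 | X1), and R1 + R2 ≥ H(X1, X2). In particular, the minimum achievable sum rate is the joint entropy H(X1, X2), the same as for centralized compression of the pair.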
In this chapter, we discuss models for wireless multihop networks that generalize the Gaussian channel models we studied earlier. We extend the cutset bound and the noisy network coding inner bound on the capacity region of the multimessage DMN presented in Chapter 18 to Gaussian networks. We show through a Gaussian two-way relay channel example that noisy network coding can outperform decode–forward and amplify–forward, achieving rates within a constant gap of the cutset bound while the inner bounds achieved by these other schemes can have an arbitrarily large gap to the cutset bound. More generally, we show that noisy network coding for the Gaussian multimessage multicast network achieves rates within a constant gap of the capacity region independent of network topology and channel gains. For Gaussian networks with other messaging demands, e.g., general multiple-unicast networks, however, no such constant-gap results exist in general. Can we still obtain some guarantees on the capacity of these networks?
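For orientation, the cutset bound referred to here has the following generic multimessage form (notation may differ from Chapter 18's): if node j sends an independent message to node k at rate Rjk, then any achievable rate tuple must satisfy, for some joint distribution on the channel inputs and for every cut S of the node set, ∑_{j ∈ S, k ∉ S} Rjk ≤ I(X(S); Y(S^c) | X(S^c)), where X(S) denotes the channel inputs of the nodes in S and Y(S^c) denotes the channel outputs of the nodes outside S.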
To address this question, we introduce the scaling-law approach to capacity, where we seek to find the order of capacity scaling as the number of nodes in the network becomes large. In addition to providing some guarantees on network capacity, the study of capacity scaling sheds light on the role of cooperation through relaying in combating interference and path loss in large wireless networks. We first illustrate the scaling-law approach via a simple unicast network example that shows how relaying can dramatically increase the capacity by reducing the effect of high path loss. We then present the Gupta–Kumar random network model, in which the nodes are randomly distributed over a geographical area and the goal is to determine the capacity scaling law that holds for most such networks. We establish lower and upper bounds on the capacity scaling law for the multiple-unicast case. The lower bound is achieved via a cellular time-division scheme in which the messages are sent simultaneously over simple multihop routes, with the nodes in the cells along the line from each source to its destination acting as relays. We show that this scheme achieves much higher rates than direct transmission with time division, which demonstrates the role of relaying in mitigating interference in large networks.
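As a point of reference for the scaling results (the classical Gupta–Kumar figures, which the bounds in this chapter parallel but need not match exactly): with n source–destination pairs placed at random, the cellular multihop scheme supports a per-pair rate on the order of 1/√(n log n), i.e., an aggregate throughput on the order of √(n/log n), whereas direct transmission with time division yields an aggregate throughput that remains bounded, and under high path loss may even decay, as n grows.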