In this chapter, we begin the discussion of communication of uncompressed sources over multiple noiseless links. We consider the limits on lossless compression of separately encoded sources, which is motivated by distributed sensing problems. For example, consider a sensor network for measuring the temperature at different locations across a city. Suppose that each sensor node compresses its measurement and transmits it to a common base station via a noiseless link. What is the minimum total transmission rate needed so that the base station can losslessly recover the measurements from all the sensors? If the sensor measurements are independent of each other, then the answer to this question is straightforward: each sensor compresses its measurement to the entropy of its respective temperature process, and the limit on the total rate is the sum of the individual entropies. The temperature processes at the sensors, however, can be highly correlated. Can such correlation be exploited to achieve a lower rate than the sum of the individual entropies? Slepian and Wolf showed that the total rate can be reduced to the joint entropy of the processes; that is, the limit on distributed lossless compression is the same as that on centralized compression, where the sources are jointly encoded. The achievability proof of this surprising result uses the new idea of random binning.
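To see the potential gain numerically (an illustrative sketch of ours, not an example from the chapter), suppose X1 is a fair binary source and X2 is X1 flipped with probability 0.1. Compressing the two sources separately while ignoring the correlation requires H(X1) + H(X2) = 2 bits per symbol pair, whereas the joint entropy, which by the Slepian–Wolf result is also the distributed limit, is H(X1, X2) ≈ 1.47 bits. The short Python sketch below computes both quantities from the joint pmf.

```python
import numpy as np

def entropy_bits(p):
    """Entropy in bits of a probability array (zero entries are skipped)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Doubly symmetric binary source: X1 ~ Bern(1/2), X2 = X1 XOR Bern(0.1).
eps = 0.1
joint = np.array([[0.5 * (1 - eps), 0.5 * eps],
                  [0.5 * eps, 0.5 * (1 - eps)]])   # joint[i, j] = P(X1 = i, X2 = j)

H1 = entropy_bits(joint.sum(axis=1))   # H(X1) = 1 bit
H2 = entropy_bits(joint.sum(axis=0))   # H(X2) = 1 bit
H12 = entropy_bits(joint)              # H(X1, X2) = 1 + H_b(0.1) bits

print(H1 + H2)   # 2.0    (sum rate when the correlation is ignored)
print(H12)       # ~1.469 (Slepian-Wolf sum rate = centralized limit)
```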
We then consider lossless source coding with helpers. Suppose that the base station in our sensor network example wishes to recover the temperature measurements from only a subset of the sensors while using the information sent by the rest of the sensor nodes to help achieve this goal. What is the optimal tradeoff between the rates from the different sensors? We establish the optimal rate region for the case of a single helper node.
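For reference, the single-helper region (due to Ahlswede–Körner and Wyner) can be stated as follows: with X1 the source to be recovered and X2 the helper, a rate pair (R1, R2) is achievable if and only if

R1 ≥ H(X1 | U) and R2 ≥ I(X2; U)

for some conditional pmf p(u | x2), where U is an auxiliary random variable of bounded cardinality.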
In Chapter 20, we continue the discussion of distributed lossless source coding by considering more general networks modeled by graphs.
Distributed Lossless Source Coding for a 2-DMS
Consider the distributed compression system depicted in Figure 10.1, where two sources X1 and X2 are separately encoded (described) at rates R1 and R2, respectively, and the descriptions are communicated over noiseless links to a decoder who wishes to recover both sources losslessly. What is the set of simultaneously achievable description rate pairs (R1, R2)?
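The Slepian–Wolf theorem discussed above answers this question: a rate pair (R1, R2) is achievable if and only if

R1 ≥ H(X1 | X2), R2 ≥ H(X2 | X1), R1 + R2 ≥ H(X1, X2).

In particular, the minimum achievable sum rate is the joint entropy, the same as for centralized compression.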
In this chapter, we discuss models for wireless multihop networks that generalize the Gaussian channel models we studied earlier. We extend the cutset bound and the noisy network coding inner bound on the capacity region of the multimessage DMN presented in Chapter 18 to Gaussian networks. We show through a Gaussian two-way relay channel example that noisy network coding can outperform decode–forward and amplify–forward, achieving rates within a constant gap of the cutset bound while the inner bounds achieved by these other schemes can have an arbitrarily large gap to the cutset bound. More generally, we show that noisy network coding for the Gaussian multimessage multicast network achieves rates within a constant gap of the capacity region independent of network topology and channel gains. For Gaussian networks with other messaging demands, e.g., general multiple-unicast networks, however, no such constant gap results exist in general. Can we still obtain some guarantees on the capacity of these networks?
To address this question, we introduce the scaling-law approach to capacity, where we seek to find the order of capacity scaling as the number of nodes in the network becomes large. In addition to providing some guarantees on network capacity, the study of capacity scaling sheds light on the role of cooperation through relaying in combating interference and path loss in large wireless networks. We first illustrate the scaling-law approach via a simple unicast network example that shows how relaying can dramatically increase the capacity by reducing the effect of high path loss. We then present the Gupta–Kumar random network model in which the nodes are randomly distributed over a geographical area and the goal is to determine the capacity scaling law that holds for most such networks. We establish lower and upper bounds on the capacity scaling law for the multiple-unicast case. The lower bound is achieved via a cellular time-division scheme in which the messages are sent simultaneously using a simple multihop scheme with nodes in cells along the lines from each source to its destination acting as relays. We show that this scheme achieves much higher rates than direct transmission with time division, which demonstrates the role of relaying in mitigating interference in large networks.
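In rough terms (suppressing constants and logarithmic factors, and assuming high path loss), the comparison is as follows: with n nodes and randomly chosen source–destination pairs, direct transmission with time division yields an aggregate throughput that remains bounded as n grows, whereas the cellular multihop scheme yields an aggregate throughput on the order of √n, that is, a per-pair throughput on the order of 1/√n.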
In Chapters 4 through 9, we studied reliable communication of independent messages over noisy single-hop networks (channel coding), and in Chapters 10 through 13, we studied the dual setting of reliable communication of uncompressed sources over noiseless single-hop networks (source coding). These settings are special cases of the more general information flow problem of reliable communication of uncompressed sources over noisy single-hop networks. As we have seen in Section 3.9, separate source and channel coding is asymptotically sufficient for communicating a DMS over a DMC. Does such separation hold in general for communicating a k-DMS over a DM single-hop network?
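Recall the point-to-point baseline: at a rate of one source symbol per channel use, a DMS U can be communicated losslessly over a DMC of capacity C by separate source and channel coding whenever H(U) < C, and, conversely, reliable communication is impossible when H(U) > C.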
In this chapter, we show that such separation does not hold in general. Thus in some multiuser settings it is advantageous to perform joint source–channel coding. We demonstrate this breakdown in separation through examples of lossless communication of a 2-DMS over a DM-MAC and over a DM-BC.
For the DM-MAC case, we show that joint source–channel coding can help communication by utilizing the correlation between the sources to induce statistical cooperation between the transmitters. We present a joint source–channel coding scheme that outperforms separate source and channel coding. We then show that this scheme can be improved when the sources have a common part, that is, a source that both senders can agree on with probability one.
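Without the common-part refinement, the sufficient conditions achieved by this joint scheme (due to Cover, El Gamal, and Salehi) take the form: a 2-DMS (U1, U2) can be communicated losslessly over a DM-MAC p(y | x1, x2) if

H(U1 | U2) < I(X1; Y | X2, U2),
H(U2 | U1) < I(X2; Y | X1, U1),
H(U1, U2) < I(X1, X2; Y)

for some input pmf of the form p(x1 | u1) p(x2 | u2), under which the channel inputs inherit the correlation of the sources.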
For the DM-BC case, we show that joint source–channel coding can help communication by utilizing the statistical compatibility between the sources and the channel. We first consider a separate source and channel coding scheme based on the Gray–Wyner source coding system and Marton's channel coding scheme. The optimal rate region for the Gray–Wyner system naturally leads to several definitions of common information between correlated sources. We then describe a joint source–channel coding scheme that outperforms the separate Gray–Wyner and Marton coding scheme.
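For orientation, in the Gray–Wyner system a 2-DMS (X1, X2) is described by a common description at rate R0 and private descriptions at rates R1 and R2, and the optimal rate region consists of all triples (R0, R1, R2) such that

R0 ≥ I(X1, X2; W), R1 ≥ H(X1 | W), R2 ≥ H(X2 | W)

for some conditional pmf p(w | x1, x2). Constraining the common variable W in different ways yields the notions of common information alluded to above; for example, Wyner's common information is the minimum of I(X1, X2; W) over all W that render X1 and X2 conditionally independent.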
Finally, we present a general single-hop network that includes as special cases many of the multiuser source and channel settings we discussed in previous chapters. We describe a hybrid source–channel coding scheme for this network.