Cooperation among wireless nodes has attracted significant attention as a novel networking paradigm for future wireless cellular networks. It has been demonstrated that, by using cooperation at different layers (the physical layer, the medium access control (MAC) layer, the network layer), the performance of wireless systems such as cellular networks can be significantly improved. In fact, cooperation can yield significant performance gains in terms of reduced bit error rate (BER), improved throughput, efficient packet forwarding, reduced energy consumption, and so on. To reap these benefits, efficient and distributed cooperation strategies must be devised for future wireless networks. Designing such cooperation protocols raises many challenges. On the one hand, any cooperation algorithm must account not only for the gains but also for the costs of cooperation, both of which can be challenging to model. On the other hand, wireless network users tend to be selfish in nature and aim at improving their own performance, so providing incentives for these users to cooperate is another major challenge. Hence, there is a strong need to design cooperative strategies that can be implemented by the wireless nodes in a distributed manner, while taking into account the selfish goals of each user as well as all the gains and losses from cooperation.
This chapter describes analytical tools from game theory that can be used to model the cooperative behavior in wireless cellular networks.
Cooperative communications and networking represent a new paradigm that uses distributed transmission and processing to significantly increase capacity in wireless communication networks. Current wireless networks face challenges in fulfilling users’ ever-increasing expectations and needs, mainly for the following reasons: the lack of available radio spectrum, the unreliability of the wireless radio link, and the limited battery capacity of wireless devices. The evolving cooperative wireless networking paradigm can tackle these challenges. The basic idea of cooperative wireless networking is that wireless devices work together, following a common strategy, to achieve their individual goals or one common goal. During cooperation, wireless devices share their resources (e.g., radio link, antenna) using short-range communications. The advantages of cooperation are twofold: first, the communication capability, reliability, coverage, and quality-of-service (QoS) of wireless devices can be enhanced; second, the cost of information exchange (e.g., transmission power, transmission time, spectrum) can be reduced. Cooperative communication and networking will be a key component of next-generation wireless networks. In this book we focus in particular on cooperative transmission techniques in cellular wireless networks.
Although cellular wireless systems are regarded as a highly successful technology, their potential in throughput and network coverage has not been fully realized. Cooperative communication is a key technique to harness the potential throughput and coverage gains in these networks.
Since its inception in information theory, network coding has attracted a significant amount of research attention. After theoretical explorations in wired networks, the potential of network coding to improve throughput in wireless networks has been widely recognized. In this chapter, we present a survey of advances in relay-based cellular networks with network coding. We begin with an introduction to network coding theory with a focus on wireless networks. We discuss various network-coded cooperation schemes that apply network coding to the digital bits of packets or to channel codes, evaluating them in terms of, for example, outage probability and the diversity–multiplexing tradeoff. We also consider physical-layer network coding, which operates directly on electromagnetic waves, and its application in relay-based networks. We then take a networking perspective and present in detail some scheduling and resource allocation algorithms that use network coding to improve throughput in relay-based networks, from a cross-layer point of view. Finally, we conclude the chapter with an outlook on future developments.
Network coding was first proposed for noiseless wireline communication networks as a way to achieve the multicast capacity of the underlying network graph. The essential idea of network coding is to allow coding capability at network nodes (routers, relays, etc.) in exchange for capacity gain, i.e., it offers an alternative tradeoff between computation and communication. This can be understood through the classic “butterfly” network example. In Figure 12.1, suppose the source S wants to multicast two bits a and b to two sinks D1 and D2 simultaneously.
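The butterfly example can be sketched in a few lines of code. This is our own minimal illustration of the standard construction (the node names follow the text; the topology is the usual butterfly): each sink receives one bit directly over a side link, the shared bottleneck link carries the XOR of the two bits, and each sink recovers the missing bit by XOR-ing the coded bit with the bit it already has.

```python
# Minimal sketch of the "butterfly" network-coding example:
# S multicasts bits a and b to sinks D1 and D2.
a, b = 1, 0

# Side links deliver a to D1 and b to D2 directly;
# the shared bottleneck link carries the coded bit a XOR b.
coded = a ^ b

# D1 has a and decodes b from the coded bit;
# D2 has b and decodes a the same way.
b_at_d1 = a ^ coded
a_at_d2 = b ^ coded

print(b_at_d1, a_at_d2)  # prints: 0 1
```

With routing alone, the bottleneck link could carry only one of the two bits per use; the single coded transmission serves both sinks at once, which is exactly the multicast capacity gain the text describes.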
By Dong In Kim, Sungkyunkwan University (SKKU), Korea,
Wan Choi, Korea Advanced Institute of Science and Technology (KAIST), Korea,
Hanbyul Seo, LG Electronics, Inc., Korea,
Byoung-Hoon Kim, LG Electronics, Inc., Korea
Direct transmission from source to destination often faces weak channel conditions when a mobile is moving across the cell border, because of the large propagation loss due to path-loss and shadowing, and because of the power limitation imposed to avoid causing undue interference. For this reason, attention has been given to the use of cooperative relaying to mitigate intercell interference and thereby obtain an increased rate and extended coverage at the cell edge.
There have been many proposals for cooperative relaying, such as amplify-and-forward (AF), decode-and-forward (DF), and compress-and-forward (CF). Such relaying schemes are mainly designed to exploit multipath diversity for a power gain (or increased rate) resulting from combining the direct and relayed signals. However, these schemes do not fully utilize the asymmetric link capacities of the direct (source–destination) and relay (source–relay) links, e.g., where the latter performs better in the downlink if line-of-sight (LoS) transmission is realized on the link between the base station and a fixed relay.
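The diversity-combining gain mentioned above can be made concrete with a small numerical sketch. This is our own illustration, not a scheme from the text: it uses the standard end-to-end SNR expression for AF relaying, gamma_af = g1*g2/(g1 + g2 + 1), where g1 and g2 are the source–relay and relay–destination SNRs, and assumes maximal-ratio combining of the direct and relayed signals.

```python
import math

def af_combined_snr(g0, g1, g2):
    """Combined SNR (linear scale) of the direct path (SNR g0) and an
    amplify-and-forward relayed path under maximal-ratio combining."""
    gamma_af = g1 * g2 / (g1 + g2 + 1.0)  # standard exact AF end-to-end SNR
    return g0 + gamma_af

# Example: a weak direct link (0 dB) helped by a relay with 10 dB on each hop.
g = af_combined_snr(1.0, 10.0, 10.0)
print(10 * math.log10(g), "dB combined")
```

Note how the relayed path alone is capped below the weaker of its two hops (here 100/21 ≈ 4.76, under 10), which hints at why such schemes cannot fully exploit a strongly asymmetric pair of links.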
A partial DF protocol has been proposed that aims to exploit the asymmetric link capacity more efficiently by forwarding part of the decoded information to the destination using superposition coding. Further, Popovski and de Carvalho investigated the division of power between the basic data and the superposed data that result from superposition coding, so as to maximize the overall rate.
Analyzing the behavior of complex networks is an important element in the design of new man-made structures such as communication systems and biologically engineered molecules. Because any complex network can be represented by a graph, and therefore in turn by a matrix, graph theory has become a powerful tool in the investigation of network performance. This self-contained 2010 book provides a concise introduction to the theory of graph spectra and its applications to the study of complex networks. Covering a range of types of graphs and topics important to the analysis of complex systems, this guide provides the mathematical foundation needed to understand and apply spectral insight to real-world systems. In particular, the general properties of both the adjacency and Laplacian spectrum of graphs are derived and applied to complex networks. An ideal resource for researchers and students in communications networking as well as in physics and mathematics.
This chapter provides tools for finding Hessians (i.e., second-order derivatives) in a systematic way when the input variables are complex-valued matrices. The proposed theory is useful for solving the many optimization problems in which the unknown parameter is a complex-valued matrix. When building adaptive optimization algorithms, it is important to determine whether a given value of the complex-valued parameter matrix at a stationary point is a maximum, a minimum, or a saddle point, and the Hessian can be utilized very efficiently for this purpose. The complex Hessian can also be used to accelerate the convergence of iterative optimization algorithms, to study the stability of iterative algorithms, and to study the convexity and concavity of an objective function. The methods presented in this chapter are general, and many results can be derived using the introduced framework. Complex Hessians are derived for several useful examples taken from signal processing and communications.
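As a toy illustration of classifying a stationary point with a Hessian (our own example, worked via the real and imaginary parts of a scalar variable rather than the chapter's complex Hessian machinery): consider f(z, z*) = Re(z²) = (z² + (z*)²)/2, which has a stationary point at z = 0. Writing z = x + iy gives f = x² − y², and the eigenvalues of the real Hessian at the origin decide the type.

```python
import numpy as np

# f(x, y) = x^2 - y^2: Hessian at the origin has entries
# d2f/dx2 = 2, d2f/dy2 = -2, and zero mixed partials.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eig = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
if np.all(eig > 0):
    kind = "minimum"
elif np.all(eig < 0):
    kind = "maximum"
else:
    kind = "saddle point"
print(kind)  # prints: saddle point
```

The mixed signs of the eigenvalues identify z = 0 as a saddle point, which is the kind of stationary-point test the chapter's complex Hessian framework performs directly on (Z, Z*) without splitting into real coordinates.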
The problem of finding Hessians has been treated for real-valued matrix variables in Magnus and Neudecker (1988, Chapter 10). For complex-valued vector variables, the Hessian matrix of scalar functions is treated in Brookes (2009) and Kreutz-Delgado (2009). Both gradients and Hessians of scalar functions that depend on complex-valued vectors are studied in van den Bos (1994a). The Hessian of real-valued functions of real-valued matrix variables is used in Payaró and Palomar (2009) to enhance the connection between information theory and estimation theory.
Often in signal processing and communications, problems arise in which we must find a complex-valued matrix that minimizes or maximizes a real-valued objective function under the constraint that the matrix belongs to a set of matrices with a given structure or pattern (i.e., where functional dependencies exist among the matrix elements). The theory presented in the previous chapters is not suited to the case of functional dependencies among the elements of the matrix. In this chapter, a systematic method is presented for finding the generalized derivative of complex-valued matrix functions that depend on matrix arguments with a certain structure. In Chapters 2 through 5, theory was presented for finding derivatives and Hessians of complex-valued functions F: ℂN×Q × ℂN×Q → ℂM×P with respect to the complex-valued matrix Z ∈ ℂN×Q and its complex conjugate Z* ∈ ℂN×Q. As seen from Lemma 3.1, the differential variables d vec(Z) and d vec(Z*) should be treated as independent when finding derivatives. This is the main reason why the function is denoted with two complex-valued input arguments, F(Z, Z*): Z ∈ ℂN×Q and Z* ∈ ℂN×Q must be treated independently when finding complex-valued matrix derivatives (see Lemma 3.1). In the theory presented up to this point, it has been assumed that all elements of the input matrix variable Z are independent.
The definition of a complex-valued matrix derivative was given in Chapter 3 (see Definition 3.1). In this chapter, it will be shown how complex-valued matrix derivatives can be found for all nine types of functions given in Table 2.2: the complex-valued input of a function may be a scalar, a vector, or a matrix, and the output the function returns may likewise be a scalar, a vector, or a matrix. The derivative can be identified through the complex differential by using Table 3.2. This chapter shows, through examples, how the theory introduced in Chapters 2 and 3 can be used to find complex-valued matrix derivatives. Many results are collected in tables to make them more accessible.
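Schematically, the differential-based identification step works as follows (our own notation, paraphrasing the idea; the book's precise conventions are those of its Table 3.2): once the complex differential of F(Z, Z*) has been brought into the canonical form below, the two derivatives can be read off as the coefficient matrices.

```latex
d\,\operatorname{vec}(F)
  = A \, d\,\operatorname{vec}(Z) + B \, d\,\operatorname{vec}(Z^{*})
\;\Longrightarrow\;
\mathcal{D}_{Z} F = A, \qquad \mathcal{D}_{Z^{*}} F = B,
```

where A, B ∈ ℂMP×NQ, and d vec(Z) and d vec(Z*) are treated as independent differential variables, as stated in Lemma 3.1.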
The rest of this chapter is organized as follows: The simplest case, in which the output of the function is a complex-valued scalar, is treated in Section 4.2, whose three subsections (4.2.1, 4.2.2, and 4.2.3) cover scalar, vector, and matrix input variables, respectively. Section 4.3 looks at vector functions; its Subsections 4.3.1, 4.3.2, and 4.3.3 treat the three cases of complex-valued scalar, vector, and matrix input variables, respectively. Matrix functions are considered in Section 4.4, which also contains three subsections: complex-valued matrix functions with scalar, vector, and matrix inputs are treated in Subsections 4.4.1, 4.4.2, and 4.4.3, respectively. The chapter ends with Section 4.5, which consists of 10 exercises.
To solve increasingly complicated open research problems, it is crucial to develop useful mathematical tools. Often, the task of a researcher or an engineer is to find the optimal values of unknown parameters that can be represented by complex-valued matrices. One powerful tool for finding the optimal values of complex-valued matrices is to calculate the derivatives with respect to these matrices. In this book, the main focus is on complex-valued matrix calculus because the theory of real-valued matrix derivatives has been thoroughly covered already in an excellent manner in Magnus and Neudecker (1988). The purpose of this book is to provide an introduction to the area of complex-valued matrix derivatives and to show how they can be applied as a tool for solving problems in signal processing and communications.
The framework of complex-valued matrix derivatives can be used in the optimization of systems that depend on complex design parameters in areas where the unknown parameters are complex-valued matrices with independent components, or where they belong to sets of matrices with certain structures. Many of the results discussed in this book are summarized in tabular form, so that they are easily accessible. Several examples taken from recently published material show how signal processing and communication systems can be optimized using complex-valued matrix derivatives. Note that the differentiation procedure is usually not sufficient to solve such problems completely; however, it is often an essential step toward finding the solution to the problem.
In many engineering problems, the unknown parameters are complex-valued matrices, and often the task of the system designer is to find the values of these complex parameters that optimize a certain scalar real-valued objective function.
This book is written as an engineering-oriented mathematics book. It introduces the field of finding derivatives of complex-valued functions with respect to complex-valued matrices, where the output of the function may be a scalar, a vector, or a matrix. The theory of complex-valued matrix derivatives, collected in this book, will benefit researchers and engineers working in fields such as signal processing and communications. Theories are developed for finding complex-valued derivatives with respect to both complex-valued matrices with independent components and matrices that have certain dependencies among their components, and illustrative examples show how to find such derivatives. Key results are summarized in tables. Through several research-related examples, it is shown how complex-valued matrix derivatives can be used as a tool to solve research problems in the fields of signal processing and communications.
This book is suitable for M.S. and Ph.D. students, researchers, engineers, and professors working in signal processing, communications, and other fields in which the unknown variables of a problem can be expressed as complex-valued matrices. The goal of the book is to present the tools of complex-valued matrix derivatives such that the reader is able to use these theories to solve open research problems in his or her own field. Depending on the nature of the problem, the components inside the unknown matrix might be independent, or certain interrelations might exist among the components. Matrices with independent components are called unpatterned and, if functional dependencies exist among the elements, the matrix is called patterned or structured.
A theory developed for finding derivatives with respect to real-valued matrices with independent elements was presented in Magnus and Neudecker (1988) for scalar, vector, and matrix functions. There, the matrix derivatives with respect to a real-valued matrix variable are found by means of the differential of the function. This theory is extended in this chapter to the case where the function depends on a complex-valued matrix variable and its complex conjugate, when all the elements of the matrix are independent. It will be shown how the complex differential of the function can be used to identify the derivative of the function with respect to both the complex-valued input matrix variable and its complex conjugate. This is a natural extension of the real-valued vector derivatives in Kreutz-Delgado (2008) and the real-valued matrix derivatives in Magnus and Neudecker (1988) to the case of complex-valued matrix derivatives. The complex-valued input variable and its complex conjugate should be treated as independent when finding complex matrix derivatives. For scalar complex-valued functions that depend on a complex-valued vector and its complex conjugate, a theory for finding derivatives with respect to complex-valued vectors, when all the vector components are independent, was given in Brandwood (1983). This was extended to a systematic and simple way of finding derivatives of scalar, vector, and matrix functions with respect to complex-valued matrices when the matrix elements are independent (Hjørungnes & Gesbert 2007a). In this chapter, the definition of the complex-valued matrix derivative will be given, and a procedure will be presented for how to obtain the complex-valued matrix derivative.
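The rule that the variable and its conjugate are treated as independent can be checked numerically. This is our own sketch, not the book's notation or example: for the real-valued function f(z, z*) = zᴴz = ||z||², Wirtinger-style calculus gives ∂f/∂z = zᴴ (holding z* fixed) and ∂f/∂z* = zᵀ (holding z fixed), and the first-order expansion df ≈ (∂f/∂z) dz + (∂f/∂z*) dz* should match the true change in f up to second order in the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
dz = 1e-6 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))

f = lambda v: np.real(np.vdot(v, v))  # f(z, z*) = ||z||^2, real-valued

df_dz = np.conj(z)   # derivative w.r.t. z  (row vector z^H, stored as 1-d array)
df_dzc = z           # derivative w.r.t. z* (row vector z^T, stored as 1-d array)

df_exact = f(z + dz) - f(z)
df_linear = np.real(df_dz @ dz + df_dzc @ np.conj(dz))

# The mismatch is the second-order term ||dz||^2, here around 1e-11.
print(abs(df_exact - df_linear))
```

Dropping either of the two terms (i.e., not treating z and z* as separate variables) would leave only half of the true first-order change, which is exactly why the independence convention of this chapter is needed.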