Analyzing the behavior of complex networks is an important element in the design of new man-made structures such as communication systems and biologically engineered molecules. Because any complex network can be represented by a graph, and therefore in turn by a matrix, graph theory has become a powerful tool in the investigation of network performance. This self-contained 2010 book provides a concise introduction to the theory of graph spectra and its applications to the study of complex networks. Covering a range of types of graphs and topics important to the analysis of complex systems, this guide provides the mathematical foundation needed to understand and apply spectral insight to real-world systems. In particular, the general properties of both the adjacency and the Laplacian spectrum of graphs are derived and applied to complex networks. It is an ideal resource for researchers and students in communications networking as well as in physics and mathematics.
This chapter provides the tools for finding Hessians (i.e., second-order derivatives) in a systematic way when the input variables are complex-valued matrices. The proposed theory is useful when solving numerous optimization problems in which the unknown parameter is a complex-valued matrix. When building adaptive optimization algorithms, it is important to determine whether a stationary point of the complex-valued parameter matrix is a maximum, a minimum, or a saddle point; the Hessian can be used very efficiently for this purpose. The complex Hessian might also be used to accelerate the convergence of iterative optimization algorithms, to study the stability of iterative algorithms, and to study the convexity and concavity of an objective function. The methods presented in this chapter are general, such that many results can be derived using the introduced framework. Complex Hessians are derived for some useful examples taken from signal processing and communications.
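As a hedged sketch of how the Hessian classifies stationary points (the block ordering below is one common convention for the augmented complex Hessian and may differ from the book's own notation), for a real-valued scalar function f of a complex vector z and its conjugate z*, the second-order differential at a stationary point can be written as

\[
d^2 f =
\begin{bmatrix} d\mathbf{z} \\ d\mathbf{z}^{*} \end{bmatrix}^{H}
\begin{bmatrix}
\frac{\partial^2 f}{\partial \mathbf{z}^{*}\,\partial \mathbf{z}^{T}} & \frac{\partial^2 f}{\partial \mathbf{z}^{*}\,\partial \mathbf{z}^{H}} \\
\frac{\partial^2 f}{\partial \mathbf{z}\,\partial \mathbf{z}^{T}} & \frac{\partial^2 f}{\partial \mathbf{z}\,\partial \mathbf{z}^{H}}
\end{bmatrix}
\begin{bmatrix} d\mathbf{z} \\ d\mathbf{z}^{*} \end{bmatrix},
\]

and the stationary point is a minimum, a maximum, or a saddle point according to whether this augmented Hessian is positive definite, negative definite, or indefinite. For example, f(z, z*) = zz* gives the identity as augmented Hessian, correctly classifying z = 0 as a minimum.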
The problem of finding Hessians has been treated for real-valued matrix variables in Magnus and Neudecker (1988, Chapter 10). For complex-valued vector variables, the Hessian matrix of scalar functions is treated in Brookes (2009) and Kreutz-Delgado (2009). Both gradients and Hessians of scalar functions that depend on complex-valued vectors are studied in van den Bos (1994a). The Hessian of real-valued functions depending on real-valued matrix variables is used in Payaró and Palomar (2009) to enhance the connection between information theory and estimation theory.
Often in signal processing and communications, problems appear in which we must find a complex-valued matrix that minimizes or maximizes a real-valued objective function under the constraint that the matrix belongs to a set of matrices with a structure or pattern (i.e., where there exist functional dependencies among the matrix elements). The theory presented in previous chapters is not suited to the case of functional dependencies among elements of the matrix. In this chapter, a systematic method is presented for finding the generalized derivative of complex-valued matrix functions that depend on matrix arguments with a certain structure. In Chapters 2 through 5, theory has been presented for how to find derivatives and Hessians of complex-valued functions F: ℂ^(N×Q) × ℂ^(N×Q) → ℂ^(M×P) with respect to the complex-valued matrix Z ∈ ℂ^(N×Q) and its complex conjugate Z* ∈ ℂ^(N×Q). As seen from Lemma 3.1, the differential variables d vec(Z) and d vec(Z*) should be treated as independent when finding derivatives; this is the main reason the function is denoted with two complex-valued input arguments as F(Z, Z*). In the theory presented up to this point, it has been assumed that all elements of the input matrix variable Z are independent.
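As an illustrative sketch of how such structure might be handled (the selection matrix L_d below is our own notation for illustration, not necessarily the book's construction), consider a diagonal matrix Z = diag(z) parameterized by the free vector z ∈ ℂ^N; a chain rule then relates the derivative with respect to the free parameters to the unpatterned derivative:

\[
\operatorname{vec}(\mathbf{Z}) = \mathbf{L}_d \, \mathbf{z}
\quad\Longrightarrow\quad
\mathcal{D}_{\mathbf{z}} f = \left( \mathcal{D}_{\operatorname{vec}(\mathbf{Z})} f \right) \mathbf{L}_d ,
\]

where L_d ∈ {0, 1}^(N²×N) places the entries of z in the diagonal positions of vec(Z), and an analogous relation holds for z*.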
The definition of a complex-valued matrix derivative was given in Chapter 3 (see Definition 3.1). In this chapter, it will be shown how complex-valued matrix derivatives can be found for all nine types of functions given in Table 2.2: the complex-valued input of a function may be a scalar, a vector, or a matrix, and the output the function returns may likewise be a scalar, a vector, or a matrix. The derivative can be identified through the complex differential by using Table 3.2. In this chapter, it will be shown through examples how the theory introduced in Chapters 2 and 3 can be used to find complex-valued matrix derivatives. Many results are collected in tables to make them more accessible.
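As a minimal worked example of identification through the differential (our example, not one taken from the book), take the real-valued scalar function f(z, z*) = zz* = |z|²; treating dz and dz* as independent gives

\[
df = z^{*}\, dz + z\, dz^{*}
\quad\Longrightarrow\quad
\frac{\partial f}{\partial z} = z^{*},
\qquad
\frac{\partial f}{\partial z^{*}} = z .
\]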
The rest of this chapter is organized as follows: The simplest case, when the output of a function is a complex-valued scalar, is treated in Section 4.2, which contains three subsections (4.2.1, 4.2.2, and 4.2.3) when the input variables are scalars, vectors, and matrices, respectively. Section 4.3 looks at the case of vector functions; it contains Subsections 4.3.1, 4.3.2, and 4.3.3, which treat the three cases of complex-valued scalar, vector, and matrix input variables, respectively. Matrix functions are considered in Section 4.4, which contains three subsections. The three cases of complex-valued matrix functions with scalar, vector, and matrix inputs are treated in Subsections 4.4.1, 4.4.2, and 4.4.3, respectively. The chapter ends with Section 4.5, which consists of 10 exercises.
To solve increasingly complicated open research problems, it is crucial to develop useful mathematical tools. Often, the task of a researcher or an engineer is to find the optimal values of unknown parameters that can be represented by complex-valued matrices. One powerful tool for finding the optimal values of complex-valued matrices is to calculate the derivatives with respect to these matrices. In this book, the main focus is on complex-valued matrix calculus because the theory of real-valued matrix derivatives has been thoroughly covered already in an excellent manner in Magnus and Neudecker (1988). The purpose of this book is to provide an introduction to the area of complex-valued matrix derivatives and to show how they can be applied as a tool for solving problems in signal processing and communications.
The framework of complex-valued matrix derivatives can be used in the optimization of systems that depend on complex design parameters in areas where the unknown parameters are complex-valued matrices with independent components, or where they belong to sets of matrices with certain structures. Many of the results discussed in this book are summarized in tabular form, so that they are easily accessible. Several examples taken from recently published material show how signal processing and communication systems can be optimized using complex-valued matrix derivatives. Note that the differentiation procedure is usually not sufficient to solve such problems completely; however, it is often an essential step toward finding the solution to the problem.
In many engineering problems, the unknown parameters are complex-valued matrices, and often, the task of the system designer is to find the values of these complex parameters that optimize a certain scalar real-valued objective function.
This book is written as an engineering-oriented mathematics book. It introduces the field of finding derivatives of complex-valued functions with respect to complex-valued matrices, where the output of the function may be a scalar, a vector, or a matrix. The theory of complex-valued matrix derivatives collected in this book will benefit researchers and engineers working in fields such as signal processing and communications. Theories are developed for finding complex-valued derivatives with respect to complex-valued matrices with independent components, as well as with respect to matrices that have certain dependencies among their components, and illustrative examples show how to find such derivatives. Key results are summarized in tables. Through several research-related examples, it will be shown how complex-valued matrix derivatives can be used as a tool to solve research problems in the fields of signal processing and communications.
This book is suitable for M.S. and Ph.D. students, researchers, engineers, and professors working in signal processing, communications, and other fields in which the unknown variables of a problem can be expressed as complex-valued matrices. The goal of the book is to present the tools of complex-valued matrix derivatives such that the reader is able to use these theories to solve open research problems in his or her own field. Depending on the nature of the problem, the components inside the unknown matrix might be independent, or certain interrelations might exist among the components. Matrices with independent components are called unpatterned and, if functional dependencies exist among the elements, the matrix is called patterned or structured.
A theory developed for finding derivatives with respect to real-valued matrices with independent elements was presented in Magnus and Neudecker (1988) for scalar, vector, and matrix functions. There, the matrix derivatives with respect to a real-valued matrix variable are found by means of the differential of the function. This theory is extended in this chapter to the case where the function depends on a complex-valued matrix variable and its complex conjugate, when all the elements of the matrix are independent. It will be shown how the complex differential of the function can be used to identify the derivative of the function with respect to both the complex-valued input matrix variable and its complex conjugate. This is a natural extension of the real-valued vector derivatives in Kreutz-Delgado (2008) and the real-valued matrix derivatives in Magnus and Neudecker (1988) to the case of complex-valued matrix derivatives. The complex-valued input variable and its complex conjugate should be treated as independent when finding complex matrix derivatives. For scalar complex-valued functions that depend on a complex-valued vector and its complex conjugate, a theory for finding derivatives with respect to complex-valued vectors, when all the vector components are independent, was given in Brandwood (1983). This was extended to a systematic and simple way of finding derivatives of scalar, vector, and matrix functions with respect to complex-valued matrices when the matrix elements are independent (Hjørungnes & Gesbert 2007a). In this chapter, the definition of the complex-valued matrix derivative will be given, and a procedure will be presented for how to obtain the complex-valued matrix derivative.
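As a short sketch of this identification procedure (a standard example under the common convention df = tr(Aᵀ dZ) + tr(Bᵀ dZ*) identifying ∂f/∂Z = A and ∂f/∂Z* = B, not necessarily stated in the book in this exact form), take the real-valued function f(Z, Z*) = tr(ZZᴴ):

\[
df = \operatorname{tr}\!\left(\mathbf{Z}^{H}\, d\mathbf{Z}\right) + \operatorname{tr}\!\left(\mathbf{Z}^{T}\, d\mathbf{Z}^{*}\right)
\quad\Longrightarrow\quad
\frac{\partial f}{\partial \mathbf{Z}} = \mathbf{Z}^{*},
\qquad
\frac{\partial f}{\partial \mathbf{Z}^{*}} = \mathbf{Z} ,
\]

where dZ and dZ* are again treated as independent differential variables.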
In this chapter, several examples show how the theory of complex-valued matrix derivatives can be used as an important tool to solve research problems taken from signal processing and communications. The developed theory applies in any area where the unknown parameters are complex-valued matrices, such as signal processing and communications. Often in these areas, the objective function is a real-valued function that depends on a continuous complex-valued matrix and its complex conjugate. In Hjørungnes and Ramstad (1999) and Hjørungnes (2000), matrix derivatives were used to optimize filter banks for source coding. The book by Vaidyanathan et al. (2010) contains material on how to optimize communication systems by means of complex-valued derivatives. Complex-valued derivatives were applied to find the Cramér-Rao lower bound for complex-valued parameters in van den Bos (1994b) and in Jagannatham and Rao (2004).
The rest of this chapter is organized as follows: Section 7.2 presents a problem from signal processing on how to find the derivative and the Hessian of a real-valued function that depends on the magnitude of the Fourier transform of the complex-valued argument vector. In Section 7.3, an example from signal processing is studied in which the sum of the squared absolute values of the off-diagonal elements of a covariance matrix is minimized. This problem of minimizing the off-diagonal elements has applications in blind carrier frequency offset (CFO) estimation.
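As a hedged sketch of the kind of objective studied in Section 7.3 (the function off_diag_cost, the matrix sizes, and the setup below are our own illustration; the book's CFO formulation differs in its details), the off-diagonal cost and a numerical check of its conjugate Wirtinger gradient might look like:

```python
import numpy as np

def off_diag_cost(W, R):
    """Sum of squared magnitudes of the off-diagonal entries of W R W^H."""
    C = W @ R @ W.conj().T
    off = C - np.diag(np.diag(C))          # zero out the diagonal
    return float(np.sum(np.abs(off) ** 2))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T                          # Hermitian "covariance" matrix
W = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Numerical conjugate Wirtinger gradient: dF/dW* = (dF/dRe + j dF/dIm) / 2,
# estimated entrywise with central differences.
eps = 1e-6
G = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        E = np.zeros_like(W)
        E[i, j] = 1.0
        d_re = (off_diag_cost(W + eps * E, R)
                - off_diag_cost(W - eps * E, R)) / (2 * eps)
        d_im = (off_diag_cost(W + 1j * eps * E, R)
                - off_diag_cost(W - 1j * eps * E, R)) / (2 * eps)
        G[i, j] = 0.5 * (d_re + 1j * d_im)

# For a real-valued cost, -G is a descent direction in W.
print(off_diag_cost(W, R), np.linalg.norm(G))
```

Such a finite-difference check is useful for validating closed-form derivatives obtained with the chapter's methods before using them in an iterative optimizer.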
In this chapter, most of the notation used in this book is introduced. It is not assumed that the reader is familiar with topics such as the Kronecker product, the Hadamard product, or the vectorization operator; therefore, this chapter defines these concepts and gives some of their properties. The current chapter also provides background material on the matrix manipulations that will be used later in the book. However, it contains only the minimum of material needed later, because many excellent books on linear algebra are available for the reader to consult (Gantmacher 1959a, 1959b; Horn & Johnson 1985; Strang 1988; Magnus & Neudecker 1988; Golub & van Loan 1989; Horn & Johnson 1991; Lütkepohl 1996; Harville 1997; Bernstein 2005).
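As a small numerical illustration of one property this background material enables (the identity vec(AXB) = (Bᵀ ⊗ A) vec(X) is a standard result relating the vectorization operator and the Kronecker product; the code below is our own sketch, not from the book):

```python
import numpy as np

# vec() stacks the columns of a matrix; NumPy needs column-major order for this.
def vec(M):
    return M.reshape(-1, 1, order="F")

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))

# Standard identity: vec(A X B) = (B^T kron A) vec(X)
# (note: transpose, not conjugate transpose, even for complex matrices)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```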
This chapter is organized as follows: Section 2.2 introduces the basic notation and classification used for complex-valued variables and functions. A discussion of the differences between analytic and non-analytic functions is presented in Section 2.3. Basic matrix-related definitions are provided in Section 2.4. Several results involving matrix manipulations used in later chapters are found in Section 2.5. Section 2.6 offers exercises related to the material in this chapter; these exercises involve both theoretical derivations and computer programming in MATLAB.
Over the past two decades there have been significant advances in the field of optimization. In particular, convex optimization has emerged as a powerful signal processing tool, and the variety of applications continues to grow rapidly. This book, written by a team of leading experts, sets out the theoretical underpinnings of the subject and provides tutorials on a wide range of convex optimization applications. Emphasis throughout is on cutting-edge research and on formulating problems in convex form, making this an ideal textbook for advanced graduate courses and a useful self-study guide. Topics covered range from automatic code generation, graphical models, and gradient-based algorithms for signal recovery, to semidefinite programming (SDP) relaxation and radar waveform design via SDP. It also includes blind source separation for image processing, robust broadband beamforming, distributed multi-agent optimization for networked systems, cognitive radio systems via game theory, and the variational inequality approach for Nash equilibrium solutions.
This introductory text explores the theory of graph spectra: a topic with applications across a wide range of subjects, including computer science, quantum chemistry and electrical engineering. The spectra examined here are those of the adjacency matrix, the Seidel matrix, the Laplacian, the normalized Laplacian and the signless Laplacian of a finite simple graph. The underlying theme of the book is the relation between the eigenvalues and structure of a graph. Designed as an introductory text for graduate students, or anyone using the theory of graph spectra, this self-contained treatment assumes only a little knowledge of graph theory and linear algebra. The authors include many developments in the field which arise as a result of rapidly expanding interest in the area. Exercises, spectral data and proofs of required results are also provided. The end-of-chapter notes serve as a practical guide to the extensive bibliography of over 500 items.