In this chapter we consider the optimization of scalar filters for single-input single-output (SISO) channels. A number of optimization problems that arise in different contexts will be considered. In Sec. 10.2 we begin with the digital communication system of Fig. 10.1 for a fixed channel H(jω). We consider the optimization of the continuous-time prefilter (transmitted pulse shape) and postfilter (equalizer) to minimize the mean square reconstruction error under the zero-forcing condition on the product F(jω)H(jω)G(jω). The zero-forcing condition does not uniquely determine this product. It will be shown that the optimal product (under the zero-forcing condition) is the so-called optimal compaction filter of the channel (Sec. 10.2.3). The filters that result from this optimization problem are usually ideal, unrealizable filters, which can only be approximated. The equivalent digital channel therefore requires further equalization. In Sec. 10.3 we consider the problem of jointly optimizing a digital prefilter–postfilter pair to minimize the mean square error. Both the zero-forcing (ZF) and the non-ZF cases are considered.
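To make the zero-forcing idea concrete, here is a minimal sketch, not taken from the text: for a hypothetical discrete-time equivalent channel h, a zero-forcing equalizer inverts the channel response at every frequency bin, so the equalized cascade becomes (up to numerical precision) an ISI-free impulse. The channel coefficients below are purely illustrative.

```python
import numpy as np

# Hypothetical discrete equivalent channel (illustrative, not from the text).
# Its zeros lie inside the unit circle, so 1/H is well defined on the FFT grid.
h = np.array([1.0, 0.5, 0.2])

N = 64                      # FFT length for frequency-domain design
H = np.fft.fft(h, N)        # channel frequency response on the DFT grid

# Zero-forcing equalizer: invert the channel at every frequency bin.
G = 1.0 / H

# Cascade of channel and equalizer; ideally a unit impulse (no ISI).
cascade = np.fft.ifft(H * G).real
print(np.round(cascade[:5], 6))
```

Since H · G = 1 exactly at every bin, the inverse DFT of the cascade is a unit impulse; with a non-minimum-phase or noisy channel, pure inversion would amplify noise, which is one motivation for the MSE-based designs discussed in this chapter.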
Section 10.4 revisits Fig. 10.1 for an arbitrary channel H(jω) from a more general viewpoint and formulates some general conditions on the filters F(jω) and G(jω) for optimality. The most general forms of the postfilter and prefilter for optimality are established. These forms were first derived by Ericson [1971, 1973]. Using these results we can argue that the optimization of the continuous-time filters in Fig. 10.1 can always be reformulated as the optimization of a digital prefilter–postfilter pair.
Digital communication systems have been studied for many decades, and they have become an integral part of the technological world we live in. Many excellent books in recent years have told the story of this communication revolution, and have explained in considerable depth the theory and applications. Since the late 1990s particularly, there have been a number of significant contributions to digital communications from the signal processing community. This book presents a number of these recent developments, with emphasis on the use of filter bank precoders and equalizers. Optimization of these systems will be one of the main themes in this book. Both multiple-input multiple-output (MIMO) systems and single-input single-output (SISO) systems will be considered.
The book is divided into four parts. Part 1 contains introductory material on digital communication systems and signal processing aspects. In Part 2 we discuss the optimization of transceivers, with emphasis on MIMO channels. Part 3 provides mathematical background material for optimization of transceivers. This part can be used as a reference, and will be useful for readers wishing to pursue more detailed literature on optimization. Part 4 contains eight appendices on commonly used material such as matrix theory, Wiener filtering, and so forth. Thus, while it is assumed that the reader has some exposure to digital communications and signal processing at the introductory level, there is plenty of review material at the introductory level (Part 1) and at the advanced level (Parts 3 and 4).
Are you involved in implementing wireless mesh networks? As mesh networks move towards large-scale deployment, this highly practical book provides the information and insights you need. The technology is described, potential pitfalls in implementation are identified, clear hints and tips for success are provided, and real-world implementation examples are evaluated. Moreover, an introduction to wireless sensor networks (WSN) is included. This is an invaluable resource for electrical and communications engineers, software engineers, technology and information strategists in equipment, content and service providers, and spectrum regulators. It is also a useful guide for graduate students in wireless communications and telecommunications.
Are you involved in designing the next generation of wireless networks? With spectrum becoming an ever scarcer resource, it is critical that new systems utilize all available frequency bands as efficiently as possible. The revolutionary technology presented in this book will be at the cutting edge of future wireless communications. Dynamic Spectrum Access and Management in Cognitive Radio Networks provides you with an all-inclusive introduction to this emerging technology, outlining the fundamentals of cognitive radio-based wireless communication and networking, spectrum sharing models, and the requirements for dynamic spectrum access. In addition to the different techniques and their applications in designing dynamic spectrum access methods, you'll also find state-of-the-art dynamic spectrum access schemes, including classifications of the different schemes and the technical details of each scheme. This is a perfect introduction for graduate students and researchers, as well as a useful self-study guide for practitioners.
Modern statistical methods use complex, sophisticated models that can lead to intractable computations. Saddlepoint approximations can be the answer. Written from the user's point of view, this book explains in clear language how such approximate probability computations are made, taking readers from the very beginnings to current applications. The core material is presented in Chapters 1–6 at an elementary mathematical level. Chapters 7–9 then give a highly readable account of higher-order asymptotic inference. Later chapters address areas where saddlepoint methods have had substantial impact: multivariate testing, stochastic systems and applied probability, bootstrap implementation in the transform domain, and Bayesian computation and inference. No previous background in the area is required. Data examples from real applications demonstrate the practical value of the methods. Ideal for graduate students and researchers in statistics, biostatistics, electrical engineering, econometrics, and applied mathematics, this is both an entry-level text and a valuable reference.
This rigorous and self-contained book describes mathematical and, in particular, stochastic methods to assess the performance of networked systems. It consists of three parts. The first part is a review of probability theory. Part two covers the classical theory of stochastic processes (Poisson, renewal, Markov and queuing theory), which are considered to be the basic building blocks for performance evaluation studies. Part three focuses on the relatively new field of the physics of networks. This part deals with the recently obtained insights that many very different large complex networks - such as the Internet, World Wide Web, proteins, utility infrastructures, social networks - evolve and behave according to more general common scaling laws. This understanding is useful when assessing the end-to-end quality of communications services, for example, in Internet telephony, real-time video and interactive games. Containing problems and solutions, this book is ideal for graduate students taking courses in performance analysis.
The remaining chapters of the book deal with complex-valued random processes. In this chapter, we discuss wide-sense stationary (WSS) signals. In Chapter 9, we look at nonstationary signals, and in Chapter 10, we treat cyclostationary signals, which are an important subclass of nonstationary signals.
Our discussion of WSS signals continues the preliminary exposition given in Section 2.6. WSS processes have shift-invariant second-order statistics, which leads to the definition of a time-invariant power spectral density (PSD) – an intuitively pleasing idea. For improper signals, the PSD needs to be complemented by the complementary power spectral density (C-PSD), which is generally complex-valued. In Section 8.1, we will see that WSS processes allow an easy characterization of all possible PSD/C-PSD pairs and also a spectral representation of the process itself. Section 8.2 discusses widely linear shift-invariant filtering, with an application to analytic and complex baseband signals. We also introduce the noncausal widely linear minimum mean-squared error, or Wiener, filter for estimating a message signal from a noisy measurement. In order to find the causal approximation of the Wiener filter, we need to adapt existing spectral factorization algorithms to the improper case. This is done in Section 8.3, where we build causal synthesis, analysis, and Wiener filters for improper WSS vector-valued time series.
Section 8.4 introduces rotary-component and polarization analysis, which are widely used in a number of research areas, ranging from optics, geophysics, meteorology, and oceanography to radar.
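The noncausal Wiener filter mentioned above can be illustrated with a minimal frequency-domain sketch. This is not the book's construction; it assumes a real-valued AR(1) message in white noise, with all parameters (a, the noise variances) chosen purely for illustration, and applies the classical gain W = S_ss / (S_ss + S_nn) on a DFT grid as a circular approximation of the shift-invariant filter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096

# Message: AR(1) process s[n] = a*s[n-1] + w[n] (illustrative, not from the text).
a, sigma_w = 0.9, 1.0
w = rng.normal(0.0, sigma_w, N)
s = np.zeros(N)
for n in range(1, N):
    s[n] = a * s[n - 1] + w[n]

sigma_v = 1.0                       # white measurement noise
x = s + rng.normal(0.0, sigma_v, N) # noisy measurement

# Theoretical PSDs evaluated on the DFT frequency grid.
omega = 2 * np.pi * np.arange(N) / N
S_ss = sigma_w**2 / np.abs(1.0 - a * np.exp(-1j * omega))**2
S_nn = sigma_v**2 * np.ones(N)

# Noncausal Wiener filter W = S_ss / (S_ss + S_nn), applied in the DFT domain.
W = S_ss / (S_ss + S_nn)
s_hat = np.fft.ifft(W * np.fft.fft(x)).real

mse_before = np.mean((x - s)**2)
mse_after = np.mean((s_hat - s)**2)
print(mse_before, mse_after)  # filtering reduces the mean-squared error
```

For improper complex signals, as Section 8.2 explains, the filter would additionally process the conjugate of the measurement (widely linear filtering), and the C-PSD would enter the gain expression.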
This chapter lays the foundation for the remainder of the book by introducing key concepts and definitions for complex random vectors and processes. The structure of this chapter is as follows.
In Section 2.1, we relate descriptions of complex random vectors to the corresponding descriptions in terms of their real and imaginary parts. We will see that operations that are linear when applied to real and imaginary parts generally become widely linear (i.e., linear–conjugate-linear) when applied to complex vectors. We introduce a matrix algebra that enables a convenient description of these widely linear transformations.
Section 2.2 introduces a complete second-order statistical characterization of complex random vectors. The key finding is that the information in the standard, Hermitian, covariance matrix must be complemented by a second, complementary, covariance matrix. We establish the conditions that a pair of Hermitian and complementary covariance matrices must satisfy, and show what role the complementary covariance matrix plays in power and entropy.
In Section 2.3, we explain that probability distributions and densities for complex random vectors must be interpreted as joint distributions and densities of their real and imaginary parts. We present two important distributions: the complex multivariate Gaussian distribution and its generalization, the complex multivariate elliptical distribution. These distributions depend both on the Hermitian covariance matrix and on the complementary covariance matrix, and their well-known versions are obtained for the zero complementary covariance matrix.
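As a small numerical illustration of the two covariance matrices introduced above, the following sketch (my construction, not the book's) builds an improper complex vector by correlating the real and imaginary parts, then estimates the Hermitian covariance R = E[z z^H] and the complementary covariance C = E[z z^T] from samples. For a proper vector C would vanish; here it does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10000, 3  # number of samples, vector dimension (illustrative)

# Improper complex vector z = u + j*v with correlated real/imaginary parts.
u = rng.normal(size=(n, m))
v = 0.5 * u + 0.5 * rng.normal(size=(n, m))
z = u + 1j * v   # rows are independent realizations of the random vector

# Sample estimates: Hermitian covariance R = E[z z^H],
# complementary covariance C = E[z z^T].
R = z.T @ z.conj() / n
C = z.T @ z / n

# R is Hermitian; C is symmetric and, for this improper z, clearly nonzero.
print(np.round(C, 2))
```

With this construction the diagonal of C is approximately 0.5 + 1j, reflecting the unequal real/imaginary variances and their cross-correlation; both effects are exactly the information that R alone cannot capture.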
Detection is the electrical engineer's term for the statistician's hypothesis testing. The problem is to determine which of two or more competing models best describes experimental measurements. If the competition is between two models, then the detection problem is a binary detection problem. Such problems apply widely to communication, radar, and sonar. But even a binary problem can be composite, which is to say that one or both of the hypotheses may consist of a set of models. We shall denote by H0 the hypothesis that the underlying model, or set of models, is M0 and by H1 the hypothesis that it is M1.
There are two main lines of development for detection theory: Neyman–Pearson and Bayes. The Neyman–Pearson theory is a frequentist theory that assigns no prior probability of occurrence to the competing models. Bayesian theory does. Moreover, the measure of optimality is different. To a frequentist the game is to maximize the detection probability under the constraint that the false-alarm probability is not greater than a prespecified value. To a Bayesian the game is to assign costs to incorrect decisions, and then to minimize the average (or Bayes) cost. The solution in any case is to evaluate the likelihood of the measurement under each hypothesis, and to choose the model whose likelihood is higher. Well – not quite. It is the likelihood ratio that is evaluated, and when this ratio exceeds a threshold, determined either by the false-alarm rate or by the Bayes cost, one or the other of the hypotheses is accepted.
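A minimal Monte Carlo sketch of the Neyman–Pearson recipe, under assumptions of my choosing rather than the book's: for a Gaussian mean-shift problem the log-likelihood ratio is monotone in the sample mean, so the sample mean serves as the test statistic; the threshold is then set to meet a prescribed false-alarm probability under H0, and the resulting detection probability under H1 is estimated empirically. All numerical values (n, mu, alpha) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 16, 20000
mu = 1.0      # mean under H1 (illustrative, not from the text)
alpha = 0.05  # allowed false-alarm probability

# Gaussian mean shift: the likelihood ratio is monotone in the sample mean,
# so the sample mean is an equivalent test statistic.
t0 = rng.normal(0.0, 1.0, (trials, n)).mean(axis=1)  # statistic under H0
t1 = rng.normal(mu, 1.0, (trials, n)).mean(axis=1)   # statistic under H1

# Neyman–Pearson threshold: the (1 - alpha) quantile of the statistic under H0.
eta = np.quantile(t0, 1 - alpha)

p_fa = np.mean(t0 > eta)  # false-alarm probability, approx. alpha by construction
p_d = np.mean(t1 > eta)   # detection probability under H1
print(p_fa, p_d)
```

A Bayesian test on the same statistic would differ only in how the threshold is chosen: from the priors and the assigned decision costs, rather than from a false-alarm constraint.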