Peak signal power is an important factor in the implementation of multicarrier (MC) modulation schemes, such as OFDM, in wireless and wireline communication systems. This 2007 book describes the tools necessary for analyzing and controlling the peak-to-average power ratio in MC systems, and how these techniques are applied in practical designs. The author starts with an overview of multicarrier signals and basic tools and algorithms, before discussing the properties of MC signals in detail: discrete and continuous maxima, the statistical distribution of peak power, and codes with constant peak-to-average power ratio are all covered, concluding with methods for decreasing peak power in MC systems. Current knowledge, problems, methods, and definitions are summarized using rigorous mathematics, with an overview of the tools available to the engineer. The book is aimed at graduate students and researchers in electrical engineering, computer science, and applied mathematics, and at practitioners in the telecommunications industry.
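As a toy illustration of the quantity at issue, the following sketch (ours, not from the book) computes the peak-to-average power ratio of a randomly generated OFDM symbol; the subcarrier count and QPSK alphabet are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                               # number of subcarriers
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # random QPSK symbols

# OFDM modulation: the time-domain signal is the IDFT of the subcarrier symbols.
x = np.fft.ifft(X) * np.sqrt(N)                      # scale so average power matches X

# Peak-to-average power ratio (PAPR) of the discrete-time signal.
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.2f} dB")                    # typically several dB for N = 64
```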
We consider the problem of fitting a Gaussian autoregressive model to a time series, subject to conditional independence constraints. This is an extension of the classical covariance selection problem to time series. The conditional independence constraints impose a sparsity pattern on the inverse of the spectral density matrix, and result in nonconvex quadratic equality constraints in the maximum likelihood formulation of the model estimation problem. We present a semidefinite relaxation, and prove that the relaxation is exact when the sample covariance matrix is block-Toeplitz. We also give experimental results suggesting that the relaxation is often exact when the sample covariance matrix is not block-Toeplitz. In combination with model selection criteria the estimation method can be used for topology selection. Experiments with randomly generated and several real data sets are also included.
Introduction
Graphical models give a graph representation of relations between random variables. The simplest example is a Gaussian graphical model, in which an undirected graph with n nodes is used to describe conditional independence relations between the components of an n-dimensional random variable x ∼ N(0, Σ). The absence of an edge between two nodes of the graph indicates that the corresponding components of x are independent, conditional on the other components. Other common examples of graphical models include contingency tables, which describe conditional independence relations in multinomial distributions, and Bayesian networks, which use directed acyclic graphs to represent causal or temporal relations.
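As a minimal sketch of the classical covariance selection problem mentioned above, assuming the CVXPY modeling package and an arbitrarily chosen sparsity pattern, one can maximize the Gaussian log-likelihood over precision matrices with prescribed zero entries:

```python
import cvxpy as cp
import numpy as np

# Sample covariance of n-dimensional data (random data here, for illustration).
rng = np.random.default_rng(0)
n, T = 4, 200
data = rng.standard_normal((T, n))
S = np.cov(data, rowvar=False)

# Conditional independence pattern: zeros of the precision matrix K = inv(Sigma).
# Here we (arbitrarily) require components 0 and 3 to be conditionally independent.
missing_edges = [(0, 3)]

K = cp.Variable((n, n), PSD=True)          # PSD=True also enforces symmetry
constraints = [K[i, j] == 0 for (i, j) in missing_edges]

# Maximum-likelihood covariance selection: maximize log det K - tr(S K).
objective = cp.Maximize(cp.log_det(K) - cp.trace(S @ K))
cp.Problem(objective, constraints).solve()

print(np.round(K.value, 3))                # entry (0, 3) is zero by construction
```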
This chapter concerns the use of convex optimization in real-time embedded systems, in areas such as signal processing, automatic control, real-time estimation, real-time resource allocation and decision making, and fast automated trading. By “embedded” we mean that the optimization algorithm is part of a larger, fully automated system that executes automatically with newly arriving data or changing conditions, without any human intervention or action. By “real-time” we mean that the optimization algorithm executes much faster than a typical or generic method with a human in the loop, in times measured in milliseconds or microseconds for small- and medium-sized problems, and (a few) seconds for larger problems. In real-time embedded convex optimization the same optimization problem is solved many times, with different data, often with a hard real-time deadline. In this chapter we propose an automatic code generation system for real-time embedded convex optimization. Such a system scans a description of the problem family and performs much of the analysis and optimization of the algorithm, such as choosing variable orderings used with sparse factorizations and determining storage structures, at code generation time. Compiling the generated source code yields an extremely efficient custom solver for the problem family. We describe a preliminary implementation, built on the Python-based modeling framework CVXMOD, and give timing results for several examples.
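CVXMOD itself generates custom source code, but the flavor of solving one problem family repeatedly with changing data can be sketched with the related CVXPY framework; the problem, sizes, and parameter below are our own illustrative choices.

```python
import cvxpy as cp
import numpy as np

# A small problem family: minimize ||Ax - b||^2 + ||x||_1, with b changing at run time.
m, n = 30, 10
A = np.random.default_rng(0).standard_normal((m, n))
b = cp.Parameter(m)          # run-time data: only b changes between solves
x = cp.Variable(n)

prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b) + cp.norm1(x)))

# In a real-time loop the problem structure is fixed; only the data changes,
# so each solve can reuse the symbolic analysis done once up front.
for t in range(3):
    b.value = np.random.default_rng(t).standard_normal(m)
    prob.solve(warm_start=True)
    print(f"solve {t}: objective = {prob.value:.3f}")
```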
Introduction
Advisory optimization
Mathematical optimization is traditionally thought of as an aid to human decision making.
Non-cooperative game theory is a branch of game theory for the resolution of conflicts among players (or economic agents), each behaving selfishly to optimize its own well-being subject to resource limitations and other constraints that may depend on the rivals' actions. While many telecommunication problems have traditionally been approached by using optimization, game models are being used increasingly; they seem to provide meaningful models for many applications where the interaction among several agents is by no means negligible, for example, the choice of power allocations, routing strategies, and prices. Furthermore, the deregulation of telecommunication markets and the explosive growth of the Internet pose many new problems that can be effectively tackled with game-theoretic tools. In this chapter, we present a comprehensive treatment of Nash equilibria based on the variational inequality and complementarity approach, covering the topics of existence of equilibria using degree theory, global uniqueness of an equilibrium using the P-property, local sensitivity analysis using degree theory, iterative algorithms using fixed-point iterations, and a descent approach for computing variational equilibria based on the regularized Nikaido–Isoda function. We illustrate the existence theory using a communication game with QoS constraints. The results can be used for the further study of conflict resolution among selfish agents in telecommunication.
Introduction
The literature on non-cooperative games is vast. Rather than reviewing this extensive literature, we refer the readers to the recent survey [20], which we will use as the starting point of this chapter.
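As a toy instance of the fixed-point (best-response) iterations discussed in this chapter, the following sketch runs iterative water-filling for a two-user Gaussian interference game; the channel parameters are arbitrary, and convergence here relies on the weak coupling chosen.

```python
import numpy as np

# Toy Gaussian interference game: two users share K parallel channels. Each
# user water-fills its power against noise plus the other's interference,
# a best-response (fixed-point) iteration.
K, P_budget = 4, 1.0
noise = np.array([0.1, 0.2, 0.3, 0.4])
cross_gain = 0.2                         # interference coupling between users

def waterfill(interference, budget):
    """Water-filling power allocation against given interference-plus-noise."""
    base = noise + interference
    lo, hi = 0.0, base.max() + budget    # bisection on the water level mu
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - base, 0.0)
        lo, hi = (mu, hi) if p.sum() < budget else (lo, mu)
    return np.maximum(mu - base, 0.0)

p1 = np.full(K, P_budget / K)
p2 = np.full(K, P_budget / K)
for it in range(50):                     # fixed-point iteration on best responses
    p1 = waterfill(cross_gain * p2, P_budget)
    p2 = waterfill(cross_gain * p1, P_budget)

print("user 1:", np.round(p1, 3))
print("user 2:", np.round(p2, 3))        # converges to a Nash equilibrium here
```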
This chapter presents, in a self-contained manner, recent advances in the design and analysis of gradient-based schemes for specially structured, smooth and nonsmooth minimization problems. We focus on the mathematical elements and ideas for building fast gradient-based methods and derive their complexity bounds. Throughout the chapter, the resulting schemes and results are illustrated and applied on a variety of problems arising in several specific key applications such as sparse approximation of signals, total variation-based image-processing problems, and sensor-location problems.
Introduction
The gradient method is probably one of the oldest optimization algorithms, going back as early as 1847 with the initial work of Cauchy. Nowadays, gradient-based methods have attracted a revived and intensive interest among researchers, both in theoretical optimization and in scientific applications. Indeed, the very large-scale nature of problems arising in many scientific applications, combined with an increase in the power of computer technology, has motivated a “return” to the “old and simple” methods that can overcome the curse of dimensionality; a task which is usually out of reach for the current, more sophisticated algorithms.
One of the main drawbacks of gradient-based methods is their speed of convergence, which is known to be slow. However, with proper modeling of the problem at hand, combined with some key ideas, it turns out that it is possible to build fast gradient schemes for various classes of problems arising in applications and, in particular, signal-recovery problems.
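As a concrete example of such a fast gradient scheme, here is a minimal sketch of an accelerated proximal gradient (FISTA-type) method for l1-regularized least squares, a standard sparse-approximation formulation; the variable names and problem data are ours.

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                   # gradient step on the smooth part
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t**2))
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = fista(A, b, lam=0.1)
print("nonzeros found:", np.flatnonzero(np.abs(x_hat) > 0.1))
```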
The past two decades have witnessed the onset of a surge of research in optimization. This includes theoretical aspects, as well as algorithmic developments such as generalizations of interior-point methods to a rich class of convex-optimization problems. The development of general-purpose software tools, together with insight generated by the underlying theory, has substantially enlarged the set of engineering-design problems that can be reliably solved in an efficient manner. The engineering community has greatly benefited from these recent advances, to the point where convex optimization has now emerged as a major signal-processing technique. On the other hand, innovative applications of convex optimization in signal processing, combined with the need for robust and efficient methods that can operate in real time, have motivated the optimization community to develop additional needed results and methods. The combined efforts in both the optimization and signal-processing communities have led to technical breakthroughs in a wide variety of topics due to the use of convex optimization. These include solutions to numerous problems previously considered intractable; recognizing and solving convex-optimization problems that arise in applications of interest; utilizing the theory of convex optimization to characterize and gain insight into the optimal-solution structure and to derive performance bounds; formulating convex relaxations of difficult problems; and developing general-purpose or application-specific algorithms, including those that enable large-scale optimization by exploiting the problem structure.
By
Wing-Kin Ma, Chinese University of Hong Kong,
Tsung-Han Chan, National Tsing Hua University,
Chong-Yung Chi, National Tsing Hua University,
Yue Wang, Virginia Polytechnic Institute and State University
Edited by
Daniel P. Palomar, Hong Kong University of Science and Technology, Yonina C. Eldar, Weizmann Institute of Science, Israel
In recent years, there has been growing interest in blind separation of non-negative sources, known simply as non-negative blind source separation (nBSS). Potential applications of nBSS include biomedical imaging, multi/hyper-spectral imaging, and analytical chemistry. In this chapter, we describe a rather new endeavor of nBSS, in which convex geometry is utilized to analyze the nBSS problem. Called convex analysis of mixtures of non-negative sources (CAMNS), the framework described here makes use of a special assumption called local dominance, which is reasonable for source signals exhibiting sparsity or high contrast. Under the local-dominance and some other standard nBSS assumptions, we show that the source signals can be perfectly identified by finding the extreme points of an observation-constructed polyhedral set. Two methods for practically locating the extreme points are also derived. One is analysis-based, with some appealing theoretical guarantees, while the other is heuristic in comparison but intuitively expected to provide better robustness against model mismatches. Both are based on linear programming and thus can be efficiently implemented. Simulation results on several data sets are presented to demonstrate the efficacy of the CAMNS-based methods over several other reported nBSS methods.
Introduction
Blind source separation (BSS) is a signal-processing technique whose purpose is to separate source signals from observations without knowledge of how the source signals are mixed in the observations. BSS is a technically challenging topic for the signal-processing community, and it has stimulated significant interest for many years owing to its relevance to a wide variety of applications.
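The following toy computation (ours, not the CAMNS algorithm itself) illustrates the local-dominance idea from the abstract above: when each source is the sole active one at some index, the normalized observations at those indices coincide with the normalized mixing-matrix columns, which sit at extreme points of the observation set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two non-negative sources with "local dominance": each source has at least
# one sample index where it is the only active source.
T = 100
s = rng.random((2, T)) * (rng.random((2, T)) > 0.5)   # sparse, non-negative
s[0, 3], s[1, 3] = 1.0, 0.0    # index 3: only source 0 active
s[0, 7], s[1, 7] = 0.0, 1.0    # index 7: only source 1 active

A = np.array([[0.8, 0.3],      # non-negative mixing matrix (unknown in practice)
              [0.2, 0.7]])
x = A @ s                      # observations

# Normalize each observation vector onto the unit simplex; at a locally
# dominant index the normalized observation equals the corresponding
# normalized column of A, an extreme point of the normalized data set.
mask = x.sum(axis=0) > 0
r = x[:, mask] / x[:, mask].sum(axis=0)
print("extreme values of first coordinate:", r[0].min(), r[0].max())
print("normalized columns of A:           ", (A / A.sum(axis=0))[0])
```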
By
Yongwei Huang, Hong Kong University of Science and Technology,
Antonio De Maio, Università degli Studi di Napoli – Federico II,
Shuzhong Zhang, Chinese University of Hong Kong
Edited by
Daniel P. Palomar, Hong Kong University of Science and Technology, Yonina C. Eldar, Weizmann Institute of Science, Israel
In this chapter, we study specific rank-1 decomposition techniques for Hermitian positive semidefinite matrices. Based on the semidefinite programming relaxation method and the decomposition techniques, we identify several classes of quadratically constrained quadratic programming problems that are polynomially solvable. Typically, such problems do not have too many constraints. As an example, we demonstrate how to apply the new techniques to solve an optimal code design problem arising from radar signal processing.
Introduction and notation
Semidefinite programming (SDP) is a relatively new subject of research in optimization. Its success has caused major excitement in the field. The reader is referred to Boyd and Vandenberghe [11] for an excellent introduction to SDP and its applications. In this chapter, we shall elaborate on a special application of SDP for solving quadratically constrained quadratic programming (QCQP) problems. The techniques we shall introduce are related to how a positive semidefinite matrix can be decomposed into a sum of rank-1 positive semidefinite matrices, in a specific way that helps to solve nonconvex quadratic optimization with quadratic constraints. The advantage of the method is that the convexity of the original quadratic optimization problem becomes irrelevant; only the number of constraints is important for the method to be effective. We further present a study of how this method helps to solve a radar code-design problem. Through this investigation, we aim to make a case that solving nonconvex quadratic optimization by SDP is a viable approach.
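To fix ideas, here is a minimal sketch (ours) of the basic SDP relaxation of a QCQP in CVXPY, with rank-1 recovery by eigendecomposition when the relaxation happens to be tight; it is not the specialized decomposition developed in this chapter.

```python
import cvxpy as cp
import numpy as np

# QCQP: maximize x^T C x subject to x^T x <= 1, lifted to an SDP by writing
# X = x x^T and dropping the (nonconvex) rank-1 constraint.
n = 4
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = C + C.T                                      # symmetric objective matrix

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), [cp.trace(X) <= 1])
prob.solve()

# If the optimal X is (numerically) rank 1, its leading eigenvector scaled by
# the square root of the leading eigenvalue solves the original QCQP.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
print("relaxation value:", prob.value)
print("recovered value: ", x_hat @ C @ x_hat)    # equal when the relaxation is tight
```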
A point-to-point communication system transfers a message from one point to another through a noisy environment called a communication channel. A familiar example of a communication channel is formed by the propagation of an electromagnetic wave from a transmitting antenna to a receiving antenna. The message is carried by the time-varying parameters of the electromagnetic wave. Another example of a communication channel is a waveform propagating through a coaxial cable that connects a jack mounted on an office wall to another such jack on another wall or to a central node. In these examples, the waveform as it appears at the receiver is contaminated by noise, by interference, and by other impairments. The transmitted message must be protected against such impairments and distortion in the channel. Early communication systems were designed to protect their messages from the environment by the simple expedient of transmitting at low data rates with high power. Later, message design techniques were introduced that led to the development of far more sophisticated communication systems with much better performance. Modern message design is the art of piecing together a number of waveform ideas in order to transmit as many bits per second as is practical within the available power and bandwidth. It is by the performance at low transmitted energy per bit that one judges the quality of a digital communication system. The purpose of this book is to develop modern waveform techniques for the digital transmission of information.
The field of telecommunication consists of the theory and the practice of communication at a distance, principally electronic communication. Many systems for telecommunication now take the form of large, complex, interconnected data networks with both wired and wireless segments, and the design of such systems is based on a rich theory. Communication theory studies methods for the design of signaling waveforms to transmit information from point to point, as within a telecommunication system. Communication theory is that part of information theory that is concerned with the explicit design of suitable waveforms to convey messages and with the performance of those waveforms when received in the presence of noise and other channel impairments. Digital telecommunication theory, or modem theory, is that part of communication theory in which digital modulation and demodulation techniques play a prominent role in the communication process, either because the information to be transmitted is digital or because the information is temporarily represented in digital form for the purpose of transmission.
Digital communication systems are in widespread use and are now in the process of sweeping away even the time-honored analog communication systems, such as those used in radio, television, and telephony. The main task of communication theory is the design of efficient waveforms for the transmission of information over band-limited or power-limited channels. The most sweeping conclusion of information theory is that all communication is essentially digital.
The function of a digital demodulator is to reconvert a waveform received in noise back into the stream of data symbols from the discrete data alphabet. We usually regard this datastream as a binary datastream. A demodulator is judged by its ability to recover the user datastream with low probability of bit error even when the received channel waveform is contaminated by distortion, interference, and noise. The probability of symbol error or bit error at the demodulator output is also called the symbol error rate or the bit error rate. The demodulator is designed to make these error rates small. We shall concentrate our discussion on optimum demodulation in the presence of additive noise because additive noise is the most fundamental disturbance in a communication system.
The energy per data bit is the primary physical quantity that determines the ability of the communication system to tolerate noise. For this purpose, energy (or power) always refers to that portion of the energy in the waveform that reaches the receiver. This will be only a small fraction of the energy sent by the transmitter.
The study of the optimum demodulation of a waveform in additive noise is an application of the statistical theory of hypothesis testing. The basic principles are surprisingly simple and concise. The most basic principle, and the heart of this chapter, is the principle of the matched filter. A very wide variety of modulation waveforms are demodulated by the same general method of passing a received signal through a matched filter and sampling the output of that matched filter.
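A minimal sketch of this principle, for binary antipodal signaling in additive white Gaussian noise with a rectangular pulse of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary antipodal signaling through additive white Gaussian noise, demodulated
# by a matched filter: correlate the received samples with the known pulse
# shape and compare the sampled output with a threshold of zero.
pulse = np.array([1.0, 1.0, 1.0, 1.0])          # rectangular pulse, 4 samples/bit
bits = rng.integers(0, 2, size=8)
tx = np.concatenate([(2 * b - 1) * pulse for b in bits])
rx = tx + 0.5 * rng.standard_normal(tx.size)    # additive noise

# Matched filter output, sampled once per bit at the end of each pulse.
mf = np.convolve(rx, pulse[::-1])
samples = mf[pulse.size - 1 :: pulse.size][: bits.size]
decisions = (samples > 0).astype(int)

print("sent:    ", bits)
print("decided: ", decisions)
```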
The communication problem can be given a new dimension of complexity by the introduction of an adversary. The adversary may have a variety of goals. The goal may be to interrupt communication, to detect the occurrence of communication, to determine the specific message transmitted, or to determine the location or the identity of the transmitter. The communication problem now takes on aspects of the theory of games. The transmitter and receiver comprise one team while the adversary comprises the other team.
An adversary may try to interrupt communication by falsifying the messages or by inserting noise into the channel. In the former case, the adversary is called a spoofer while in the latter case, the adversary is called a jammer. An adversary who intends to read the specific message transmitted is called a cryptanalyst. An adversary who intends to determine the location or the identity of the transmitter or to detect the occurrence of communication is called a signal exploiter.
Waveform techniques to counter a jammer or an exploiter are similar; both try to spread the waveform over a wide bandwidth. Such waveforms are called antijam waveforms or antiexploitation waveforms. Techniques to counter a spoofer or a cryptanalyst tend to be similar: these may use a secret permutation on the set of messages to represent the actual message by a surrogate message formed in an agreed, invertible way based on a secret key.
The modulator and demodulator make a waveform channel into a discrete communication channel. Because of channel noise, the discrete communication channel is a noisy communication channel; there may be errors or other forms of lost data. A data transmission code is a code that makes a noisy discrete channel into a reliable channel. Despite noise or errors that may exist in the channel output, the output of the decoder for a good data-transmission code is virtually error-free. In this chapter, we shall study some practical codes for data transmission. These codes are designed for noisy channels that have no constraints on the sequence of transmitted symbols. Then a data transmission code can be used to make the noisy unconstrained channel into a reliable channel.
For the kinds of discrete channels formed by the demodulators of Chapter 3, the output is simply a regenerated stream of channel input symbols, some of which may be in error. Such channels are called hard-decision channels. The data transmission code is then called an error-control code or an error-correcting code. More generally, however, the demodulator may be designed to qualify its output in some way. Viewed from modulator input to demodulator output, we may find a channel output that is less specific than a hard-decision channel, perhaps including erasures or other forms of tentative data such as likelihood data on the set of possible output symbols.
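As a small worked example of an error-correcting code on a hard-decision channel, consider a (7,4) Hamming code, which corrects any single bit error; the matrices below follow one standard systematic convention.

```python
import numpy as np

# (7,4) Hamming code: corrects any single bit error on a hard-decision channel.
G = np.array([[1,0,0,0,1,1,0],     # generator matrix (systematic form)
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],     # parity-check matrix, H @ G.T = 0 (mod 2)
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1, 0, 1, 1])
codeword = data @ G % 2

received = codeword.copy()
received[5] ^= 1                   # channel flips one bit

syndrome = H @ received % 2        # nonzero syndrome flags the error
# The syndrome matches the column of H at the error position.
err_pos = np.flatnonzero((H.T == syndrome).all(axis=1))[0]
corrected = received.copy()
corrected[err_pos] ^= 1

print("decoded data:", corrected[:4])   # first 4 bits carry the message
```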
The demodulation of a passband waveform or of a complex baseband waveform uses methods similar to those used to demodulate baseband signals. However, there are many new details that emerge in the larger setting of passband or complex baseband demodulation. This is because a complex baseband function (or a passband function) can be expressed either in terms of real and imaginary components or in terms of amplitude and phase. It is obvious that phase is meaningful only if there is an absolute phase reference. A new set of topics arises when the modulator and demodulator do not share a common phase reference. This is the distinction between coherent and noncoherent demodulation. When the phase reference is known to the demodulator, the demodulator is called a coherent demodulator. When the phase reference is not known to the demodulator, that demodulator is called a noncoherent demodulator.
We begin the chapter with a study of the matched filter at passband. Then we use the matched filter as a central component in the development of a variety of demodulators, both coherent and noncoherent, for the passband waveforms that were introduced in Chapter 5.
The methods for the demodulation of baseband sequences that were described in Chapter 4 can be restated in the setting of passband waveforms. We shall prefer, however, the equivalent formulation in terms of complex baseband waveforms. It becomes obvious immediately how to generalize methods of demodulation from sequences of real numbers to sequences of complex numbers, so the chapter starts out with a straightforward reformulation of the topic of demodulation.
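A small sketch of the coherent/noncoherent distinction at complex baseband: both demodulators share the complex matched-filter output, but only the coherent one uses a phase reference. The pulse and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex baseband pulse received with an unknown carrier phase. A coherent
# demodulator correlates and takes the real part using the known phase; a
# noncoherent demodulator takes the magnitude, discarding phase information.
pulse = np.ones(8, dtype=complex)                # known complex baseband pulse
theta = rng.uniform(0, 2 * np.pi)                # phase offset, unknown to receiver
rx = np.exp(1j * theta) * pulse + 0.3 * (rng.standard_normal(8)
                                         + 1j * rng.standard_normal(8))

corr = np.vdot(pulse, rx)                        # complex matched-filter output
coherent_stat = (np.exp(-1j * theta) * corr).real   # needs the phase reference
noncoherent_stat = np.abs(corr)                  # works without a phase reference

print(f"coherent:    {coherent_stat:.2f}")
print(f"noncoherent: {noncoherent_stat:.2f}")
```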
Rather than modulate one data symbol at a time into a channel waveform, it is possible to modulate the entire datastream as an interlocked unit into the channel waveform. The resulting waveform may exhibit symbol interdependence that is created intentionally to improve the performance of the demodulator. Although this symbol interdependence does have some similarity to intersymbol interference, in this situation it is designed deliberately to increase the minimum Euclidean distance between sequences, and so to reduce the probability of demodulation error.
The methods developed in Chapter 4 for demodulating interdependent sequences led us to a positive view of intersymbol interdependence. This gives us the incentive to introduce intersymbol interdependence intentionally into a waveform to make sequences more distinguishable. The digital modulation codes that result are a form of data transmission code combined with the modulation waveform. The output of the data encoder is immediately in the form of an input to the waveform channel. The modulator only needs to apply the proper pulse shape to the symbols of the code sequence.
In this chapter, we shall study trellis-coded modulation waveforms, partial-response signaling waveforms, and continuous-phase modulation waveforms. Of these various methods, trellis-coded modulation is the most developed, and it is in widespread use at the present time.
Partial-response signaling
The simplest coded-modulation waveforms are called partial-response signaling waveforms. These coded waveforms can be motivated by recalling the method of decision feedback equalization.
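A minimal sketch of duobinary (class-I) partial-response signaling with precoding, using our own toy data, shows the controlled three-level channel output and symbol-by-symbol decisions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Duobinary (class-I) partial-response signaling: the channel symbol is
# c_k = a_k + a_{k-1}, a controlled, known intersymbol interdependence.
# Precoding (b_k = d_k XOR b_{k-1}) lets the receiver decide symbol by symbol.
d = rng.integers(0, 2, size=10)          # data bits

b = np.zeros(d.size, dtype=int)          # precoded bits, with b_{-1} = 0
prev = 0
for k in range(d.size):
    b[k] = d[k] ^ prev
    prev = b[k]

a = 2 * b - 1                            # bipolar levels -1 / +1
c = a + np.concatenate(([-1], a[:-1]))   # duobinary levels: -2, 0, +2

# Receiver: |c_k| == 2 means d_k = 0; c_k == 0 means d_k = 1. No decision
# feedback is needed, thanks to the precoder.
d_hat = (c == 0).astype(int)
print("sent:   ", d)
print("decoded:", d_hat)
```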
A channel may introduce unpredictable changes into the waveform passing through it. In a passband channel, such as a radio frequency channel, unpredictable phase shifts of the carrier may occur in the atmosphere, in antennas, and in other system elements or because of uncertainty in the time of propagation. In order to demodulate a digital waveform coherently, a coherent replica of the carrier is needed in the receiver. Because the receiver does not know the carrier phase independently of the received signal, the receiver must locally regenerate a coherent replica of the carrier. Uncertainty in the phase of the received waveform introduces the task of phase synchronization in the receiver.
Uncertainty in the time of propagation also introduces problems of time synchronization. The local clock must be synchronized with the incoming datastream so that incoming symbols and words can be correctly framed and assigned their proper indices. Time synchronization may be subdivided into two tasks: symbol synchronization, and block or frame synchronization. These two kinds of time synchronization are quite different. Symbol synchronization is a fine time adjustment that adjusts the sampling instants to their correct value. It exploits the shape of the individual pulses making up the waveform to adjust the time reference. The content of the datastream itself plays no role in symbol synchronization. Block synchronization takes place on a much longer time scale. It looks for special patterns embedded in the datastream so that it can find the start of a message or break the message into constituent parts.
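A toy sketch of block synchronization by sliding correlation, using the classical 13-bit Barker sequence as the embedded marker (the stream and offset are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Block (frame) synchronization: locate a known marker pattern embedded in the
# datastream by sliding correlation, as described above.
marker = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])  # Barker-13
payload = rng.choice([-1, 1], size=40)
offset = 17
stream = np.concatenate([payload[:offset], marker, payload[offset:]])

# Correlate the stream against the marker; the peak marks the frame start.
corr = np.correlate(stream, marker, mode="valid")
start = int(np.argmax(corr))
print("frame starts at index", start)    # prints 17 here
```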
This chapter approaches the theory of modem design starting from well-accepted basic principles of inference. In particular, we will study the maximum-likelihood principle and the maximum-posterior principle. By studying the optimum demodulation of passband sequences, we shall develop an understanding of the maximum-likelihood principle and its application. In Chapter 8, we will also treat the topic of synchronization as applied to both carrier recovery and time recovery by using the maximum-likelihood principle.
It is appropriate at this point to develop demodulation methods based on the likelihood function. The maximum-likelihood principle will enable us to derive optimal methods of demodulating in the presence of intersymbol interdependence and, as a side benefit, to establish the optimality of the demodulators already discussed in Chapter 3.
The maximum-likelihood principle is a general method of inference applying to many problems of decision and estimation besides those of digital communications. The development will proceed as follows. First we will introduce the maximum-likelihood principle as a general method to form a decision under the criterion of minimum probability of decision error when given a finite set of measurements. Then we will approximate the continuous-time waveform v(t) by a finite set of discrete-time samples to which we apply the maximum-likelihood principle. Finally, we will take the limit as the number of samples of v(t) goes to infinity to obtain the maximum-likelihood principle for the waveform measurement v(t).
The likelihood function
We begin with the general decision problem, not necessarily the problem of demodulation, of deciding between M hypotheses when given a measurement.
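As a small worked example of the decision problem just posed: with equal priors and white Gaussian noise, maximizing the likelihood over M hypotheses reduces to a minimum-Euclidean-distance rule. The signals and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# M-ary decision by the maximum-likelihood principle: given a measurement v
# contaminated by white Gaussian noise, the most likely hypothesis is the
# candidate signal closest to v in Euclidean distance.
M, n = 4, 16
signals = rng.standard_normal((M, n))          # the M candidate signals
true_m = 2
v = signals[true_m] + 0.4 * rng.standard_normal(n)

# The log-likelihood of hypothesis m is, up to constants, -||v - s_m||^2 / (2 sigma^2),
# so the ML decision reduces to a minimum-distance rule.
dists = np.linalg.norm(v - signals, axis=1)
m_hat = int(np.argmin(dists))
print("decided hypothesis:", m_hat)            # typically 2 at this noise level
```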