Analog-to-digital conversion is the first step required to load multimedia signals into digital devices. It comprises two operations: sampling and quantization. The theoretical background of sampling is given by the famous sampling theorem; the first attempts to formulate and prove it date back to the beginning of the twentieth century. In this chapter we present Shannon's elegant proof of the sampling theorem. Consequences of sampling “too slowly” in the time and frequency domains are discussed. Quantization is the main operation determining the quality–compression ratio tradeoff in all lossy compression systems. We consider the different types of quantizers commonly used in modern multimedia compression systems.
Analog and digital signals
First, we introduce some definitions.
A function f(x) is continuous at a point x = a if lim_{x→a} f(x) = f(a). We say a function is continuous if it is continuous at every point in its domain (the set of its input values).
We call a set of elements a discrete set if it contains a finite or countable number of elements (elements of a countable set can be enumerated).
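As a quick numerical illustration of the continuity definition (a toy check, not part of the text): for a continuous f, the values f(a + h) approach f(a) as h shrinks.

```python
def f(x):
    return x * x  # a simple continuous function

a = 2.0
# For a continuous f, |f(a + h) - f(a)| shrinks as h -> 0.
gaps = [abs(f(a + h) - f(a)) for h in (0.1, 0.01, 0.001)]
```

A discontinuous function, such as one with a jump at a, would instead leave a fixed gap no matter how small h becomes.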
In the real world, analog signals are continuous functions of continuous arguments such as time, space, or other continuous physical variables, although we often use mathematical models of discontinuous analog signals, such as the saw-tooth signal. We mainly consider time signals, which can take on a continuum of values over a defined interval of time.
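Digitizing such an analog signal requires the two operations introduced above, sampling and quantization. The sketch below is a minimal uniform scalar quantizer (a hypothetical helper, not from the text), showing how the number of levels controls the reconstruction error:

```python
def uniform_quantize(x, x_min, x_max, n_bits):
    """Map x in [x_min, x_max] to the nearest of 2**n_bits reconstruction levels."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels            # quantization step size
    # index of the cell containing x, clipped to the valid range
    idx = min(int((x - x_min) / step), levels - 1)
    return x_min + (idx + 0.5) * step          # midpoint reconstruction

# Coarser quantization (fewer bits) gives a larger reconstruction error:
x = 0.3
err_8bit = abs(x - uniform_quantize(x, -1.0, 1.0, 8))
err_2bit = abs(x - uniform_quantize(x, -1.0, 1.0, 2))
```

This is the quality–compression tradeoff in miniature: fewer bits per sample mean a smaller representation but a larger quantization error.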
The advent of fiber optic transmission systems and wavelength division multiplexing (WDM) has led to a dramatic increase in the usable bandwidth of single-fiber systems. This book provides detailed coverage of survivability (dealing with the risk of losing large volumes of traffic data due to a failure of a node or a single fiber span) and traffic grooming (managing the increased complexity of smaller user requests over high capacity data pipes), both of which are key issues in modern optical networks. A framework is developed to deal with these problems in wide-area networks, where the topology used to service various high-bandwidth (but still small in relation to the capacity of the fiber) systems evolves toward making use of a general mesh. Effective solutions, exploiting complex optimization techniques, and heuristic methods are presented to keep network problems tractable. Newer networking technologies and efficient design methodologies are also described.
Multiple-input multiple-output (MIMO) technology constitutes a breakthrough in the design of wireless communications systems, and is already at the core of several wireless standards. Exploiting multipath scattering, MIMO techniques deliver significant performance enhancements in terms of data transmission rate and interference reduction. This 2007 book is a detailed introduction to the analysis and design of MIMO wireless systems. Beginning with an overview of MIMO technology, the authors then examine the fundamental capacity limits of MIMO systems. Transmitter design, including precoding and space-time coding, is then treated in depth, and the book closes with two chapters devoted to receiver design. Written by a team of leading experts, the book blends theoretical analysis with physical insights, and highlights a range of key design challenges. It can be used as a textbook for advanced courses on wireless communications, and will also appeal to researchers and practitioners working on MIMO wireless systems.
When is a random network (almost) connected? How much information can it carry? How can you find a particular destination within the network? And how do you approach these questions - and others - when the network is random? The analysis of communication networks requires a fascinating synthesis of random graph theory, stochastic geometry and percolation theory to provide models for both structure and information flow. This book is the first comprehensive introduction for graduate students and scientists to techniques and problems in the field of spatial random networks. The selection of material is driven by applications arising in engineering, and the treatment is both readable and mathematically rigorous. Though mainly concerned with information-flow-related questions motivated by wireless data networks, the models developed are also of interest in a broader context, ranging from engineering to social networks, biology, and physics.
The third generation (3G) cellular system UMTS is advanced, optimised and complex. The many existing books on UMTS attempt to explain all the intricacies of the system and as a result are large and equally complex. This book takes a different approach and explains UMTS in a concise, clear and readily understandable style. Written by a professional technical trainer, and based on training courses delivered on UMTS to telecommunication companies worldwide, Essentials of UMTS will enable you to grasp the key concepts quickly. It assumes no previous knowledge of mobile telecommunication theory, and is structured around the operation of the system, clearly setting out how the different components interact with each other, and how the system as a whole behaves. Engineers, project managers and marketing executives working for equipment manufacturers and network operators will find this concise guide to UMTS invaluable.
Many important engineering problems can be cast in the form of a quadratically constrained quadratic program (QCQP) or a fractional QCQP. In general, these problems are nonconvex and NP-hard. This chapter introduces a semidefinite programming (SDP) relaxation procedure for this class of quadratic optimization problems which can generate a provably approximately optimal solution with a randomized polynomial time complexity. We illustrate the use of SDP relaxation in the context of downlink transmit beamforming, and show that the SDP relaxation approach can either generate the global optimum solution, or provide an approximately optimal solution with a guaranteed worst-case approximation performance. Moreover, we describe how the SDP relaxation approach can be used in magnitude filter design and in magnetic resonance imaging systems.
Introduction
In this chapter, we consider several classes of nonconvex quadratically constrained quadratic programs (QCQPs) and a class of nonconvex fractional QCQPs. The importance of these classes of problems lies in their wide-ranging applications in signal processing and communications, which include:
the Boolean least-squares (LS) problem in digital communications [1];
the noncoherent maximum-likelihood detection problem in multiple-input multiple-output (MIMO) communications [2, 3];
the MAXCUT problem in network optimization [4];
the large-margin parameter estimation problem in automatic speech recognition [5–8];
the optimum coded waveform design for radar detection [9];
the image segmentation problem in pattern recognition [10];
the magnitude filter design problem in digital signal processing [11];
the transmit B1 shim and specific absorption rate computation in magnetic resonance imaging (MRI) systems [12, 13].
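To give a concrete feel for the first item above, the following sketch solves a tiny Boolean least-squares instance by exhaustive search over x ∈ {−1, +1}ⁿ. The instance and helper are illustrative only; the exponential cost of this search is precisely what the SDP relaxation of this chapter is designed to avoid.

```python
import itertools

def boolean_ls_bruteforce(A, b):
    """Exhaustive Boolean least squares: minimize ||A x - b||^2 over x in {-1,+1}^n.

    Exponential in n -- exactly the cost an SDP relaxation sidesteps.
    """
    n = len(A[0])
    best_x, best_val = None, float("inf")
    for x in itertools.product((-1, 1), repeat=n):
        residual = sum(
            (sum(A[i][j] * x[j] for j in range(n)) - b[i]) ** 2
            for i in range(len(A))
        )
        if residual < best_val:
            best_x, best_val = list(x), residual
    return best_x, best_val

# Tiny instance: b = A @ [1, -1, 1], so the optimal residual is 0.
A = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]
b = [0.5, -0.5, 1.5]
x_opt, val = boolean_ls_bruteforce(A, b)
```

Because A is invertible here, the planted vector [1, -1, 1] is the unique zero-residual solution, and the brute-force search recovers it exactly.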
Due to their computational efficiency and strong empirical performance, semidefinite relaxation (SDR)-based algorithms have gained much attention in multiple-input multiple-output (MIMO) detection. However, the theoretical performance of these algorithms, especially when applied to constellations other than the binary phase-shift keying (BPSK) constellation, is still not well understood. In this chapter we describe a recently developed approach for analyzing the approximation guarantees of various SDR-based algorithms in the low signal-to-noise ratio (SNR) region. Using such an approach, we show that in the case of M-ary phase-shift keying (MPSK) and quadrature amplitude modulation (QAM) constellations, various SDR-based algorithms will return solutions with near-optimal log-likelihood values with high probability. The results described in this chapter can be viewed as average-case analyses of certain SDP relaxations, where the input distribution is motivated by physical considerations. More importantly, they give some theoretical justification for using SDR-based algorithms for MIMO detection in the low SNR region.
Introduction
Semidefinite programming (SDP) has now become an important algorithm design tool for a wide variety of optimization problems. From a practical standpoint, SDP-based algorithms have proven to be effective in dealing with various fundamental engineering problems, such as control system design [1, 2], structural design [3], signal detection [4, 5], and network localization [6–8]. From a theoretical standpoint, SDP is playing an important role in advancing the theory of algorithms.
The past two decades have witnessed a surge of research in optimization. This includes theoretical aspects, algorithmic developments such as generalizations of interior-point methods to a rich class of convex-optimization problems, and many new engineering applications. The development of general-purpose software tools, as well as the insight generated by the underlying theory, has contributed to the emergence of convex optimization as a major signal-processing tool; this has made a significant impact on numerous problems previously considered intractable. Given this success, many new applications of convex optimization are continuously flourishing. This book aims at providing the reader with a series of tutorials on a wide variety of convex-optimization applications in signal processing and communications, written by worldwide leading experts, and contributing to the diffusion of these new developments within the signal-processing community. The topics included are automatic code generation for real-time solvers, graphical models for autoregressive processes, gradient-based algorithms for signal-recovery applications, semidefinite programming (SDP) relaxation with worst-case approximation performance, radar waveform design via SDP, blind non-negative source separation for image processing, modern sampling theory, robust broadband beamforming techniques, distributed multiagent optimization for networked systems, cognitive radio systems via game theory, and the variational-inequality approach for Nash-equilibrium solutions.
This chapter presents distributed algorithms for cooperative optimization among multiple agents connected through a network. The goal is to optimize a global objective function that is a combination of local objective functions, each known only to its agent. We focus on two related approaches to the design of distributed algorithms for this problem. The first approach relies on Lagrangian decomposition and dual subgradient methods; we show that this methodology leads to distributed algorithms for optimization problems with special structure. The second approach combines consensus algorithms with subgradient methods. In both approaches, our focus is on providing a convergence-rate analysis for the generated solutions that highlights the dependence on problem parameters.
Introduction and motivation
There has been much recent interest in distributed control and coordination of networks consisting of multiple agents, where the goal is to collectively optimize a global objective. This is motivated mainly by the emergence of large-scale networks and new networking applications, such as mobile ad hoc networks and wireless sensor networks, characterized by the lack of centralized access to information and by time-varying connectivity. Control and optimization algorithms deployed in such networks should be completely distributed, relying only on local observations and information; robust against unexpected changes in topology, such as link or node failures; and scalable in the size of the network.
This chapter studies the problem of distributed optimization and control of multiagent networked systems. More formally, we consider a multiagent network model, where m agents exchange information over a connected network.
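A minimal sketch of the second approach (consensus combined with subgradient steps) on a toy instance: three agents on a line graph minimize Σᵢ (x − cᵢ)², whose optimum is the average of the cᵢ. The Metropolis weights and diminishing step size here are illustrative choices, not the chapter's.

```python
# Three agents on the line graph 1-2-3 minimize sum_i (x - c_i)^2;
# the global optimum is the average of the c_i.
c = [1.0, 4.0, 7.0]                      # local targets, known only locally
W = [[2/3, 1/3, 0.0],                    # Metropolis (doubly stochastic) weights
     [1/3, 1/3, 1/3],
     [0.0, 1/3, 2/3]]
x = [0.0, 0.0, 0.0]                      # the agents' local estimates

for k in range(2000):
    step = 1.0 / (k + 10)                # diminishing step size
    # consensus step: each agent averages with its neighbours ...
    mixed = [sum(W[i][j] * x[j] for j in range(3)) for i in range(3)]
    # ... followed by a local gradient step on f_i(x) = (x - c_i)^2
    x = [mixed[i] - step * 2.0 * (mixed[i] - c[i]) for i in range(3)]

target = sum(c) / len(c)                 # the global optimum, 4.0
```

Each agent only ever sees its own cᵢ and its neighbours' current estimates, yet all three estimates approach the global optimum.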
Sampling theory has benefited from a surge of research in recent years, due in part to intense research in wavelet theory and the connections made between the two fields. In this chapter we present several extensions of the Shannon theorem that treat a wide class of input signals, as well as nonideal sampling and constrained recovery procedures. This framework is based on an optimization viewpoint that accounts both for the fit of the reconstructed signal to the given samples and for relevant prior knowledge about the original signal. Our exposition is based on a Hilbert-space interpretation of sampling techniques, and relies on the concepts of bases (frames) and projections. The reconstruction algorithms developed in this chapter lead to improvements over standard interpolation approaches in signal- and image-processing applications.
Introduction
Sampling theory treats the recovery of a continuous-time signal from a discrete set of measurements. This field has attracted significant attention in the engineering community ever since the pioneering work of Shannon [1] (also attributed to Whittaker [2], Kotelnikov [3], and Nyquist [4]) on sampling bandlimited signals. Discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Therefore, with the rapid development of digital applications, the theory of sampling has gained importance.
Traditionally, sampling theories addressed the problem of perfectly reconstructing a given class of signals from their samples.
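The classical bandlimited case can be sketched directly: sample above the Nyquist rate and reconstruct by (truncated) sinc interpolation. The signal, rates, and truncation length below are illustrative.

```python
import math

def sinc(t):
    """Normalized sinc, sinc(t) = sin(pi t) / (pi t)."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

f0 = 3.0          # signal frequency (Hz) -> bandlimited to 3 Hz
fs = 10.0         # sampling rate, above the Nyquist rate 2 * f0
T = 1.0 / fs

def x(t):         # the continuous-time signal
    return math.cos(2.0 * math.pi * f0 * t)

samples = [x(n * T) for n in range(-200, 201)]   # truncated sample set

def reconstruct(t):
    """Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)."""
    return sum(samples[n + 200] * sinc((t - n * T) / T) for n in range(-200, 201))

err = abs(x(0.123) - reconstruct(0.123))   # small, limited only by truncation
```

With infinitely many samples the recovery would be exact; the residual error here comes solely from truncating the cardinal series.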
Game theory is a field of applied mathematics that describes and analyzes scenarios with interactive decisions. In recent years, there has been a growing interest in adopting cooperative and non-cooperative game-theoretic approaches to model many communications and networking problems, such as power control and resource sharing in wireless/wired and peer-to-peer networks. In this chapter we show how many challenging unsolved resource-allocation problems in the emerging field of cognitive radio (CR) networks fit naturally into the game-theoretic paradigm. This provides us with the mathematical tools necessary to analyze the proposed equilibrium problems for CR systems (e.g., existence and uniqueness of the solution) and to devise distributed algorithms along with their convergence properties. The proposed algorithms differ in performance, level of protection of the primary users, computational effort and signaling among primary and secondary users, convergence analysis, and convergence speed, which makes them suitable for many different CR systems. We also propose a more general framework, based on the variational inequality (VI), which constitutes a very general class of problems in nonlinear analysis and is suitable for investigating and solving more sophisticated equilibrium problems in CR systems when classical game theory may fail.
Introduction and motivation
In recent years, the increasing demand for wireless services has made the radio spectrum a very scarce and precious resource. Moreover, most current wireless networks, characterized by fixed spectrum-assignment policies, are known to be very inefficient, since licensed-bandwidth demands vary widely over time and space (according to the Federal Communications Commission [FCC], only 15% to 85% of the licensed spectrum is utilized on average [1]).
The statistical bootstrap is one of the methods that can be used to calculate estimates of a certain number of unknown parameters of a random process or a signal observed in noise, based on a random sample. Such situations are common in signal processing and the bootstrap is especially useful when only a small sample is available or an analytical analysis is too cumbersome or even impossible. This book covers the foundations of the bootstrap, its properties, its strengths and its limitations. The authors focus on bootstrap signal detection in Gaussian and non-Gaussian interference as well as bootstrap model selection. The theory developed in the book is supported by a number of useful practical examples written in MATLAB. The book is aimed at graduate students and engineers, and includes applications to real-world problems in areas such as radar and sonar, biomedical engineering and automotive engineering.
Several worst-case performance optimization-based broadband adaptive beamforming techniques with an improved robustness against array manifold errors are developed. The proposed beamformers differ from the existing broadband robust techniques in that their robustness is directly matched to the amount of uncertainty in the array manifold, and the suboptimal subband decomposition step is avoided. Convex formulations of the proposed beamformer designs are derived based on second-order cone programming (SOCP) and semidefinite programming (SDP). Simulation results validate an improved robustness of the proposed robust beamformers relative to several state-of-the-art robust broadband techniques.
Introduction
Adaptive array processing has received considerable attention during the last four decades, particularly in the fields of sonar, radar, speech acquisition and, more recently, wireless communications [1,2]. The main objective of adaptive beamforming algorithms is to suppress the interference and noise while preserving the desired signal components. One of the early adaptive beamforming algorithms for broadband signals is the linearly constrained minimum variance (LCMV) algorithm developed by Frost in [3] and extensively studied in the follow-up literature [1, 4, 5]. Frost's broadband array processor includes a presteering delay front-end whose function is to steer the array towards the desired signal so that each of its frequency components appears in-phase across the array after the presteering delays. Each presteering delay is then followed by a finite impulse response (FIR) adaptive filter and the outputs of all these filters are summed together to obtain the array output.
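The distortionless-response idea behind the LCMV beamformer can be illustrated in its simplest narrowband form. In this toy sketch the noise is white (R = σ²I), so R⁻¹c is proportional to c and the optimal weights reduce to w = c / (cᴴc); the broadband presteering/FIR structure described above is not modeled.

```python
import cmath
import math

def steering_vector(theta, n=4, d=0.5):
    """Steering vector of an n-element uniform linear array; d in wavelengths."""
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

# White noise, R = sigma^2 * I, so the LCMV/MVDR weights
# w = R^{-1} c / (c^H R^{-1} c) reduce to w = c / (c^H c).
c = steering_vector(0.0)                          # look direction: broadside
w = [ck / sum(abs(v) ** 2 for v in c) for ck in c]

def gain(w, theta):
    """Magnitude response of the beamformer toward direction theta."""
    a = steering_vector(theta)
    return abs(sum(wk.conjugate() * ak for wk, ak in zip(w, a)))

g_look = gain(w, 0.0)     # distortionless constraint: unit gain at the look direction
g_side = gain(w, 0.8)     # attenuated response off the look direction
```

The constraint wᴴc = 1 preserves the desired signal exactly, while directions away from broadside are attenuated, which is the LCMV principle in its most stripped-down form.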
Peak signal power is an important factor in the implementation of multicarrier (MC) modulation schemes, like OFDM, in wireless and wireline communication systems. This 2007 book describes tools necessary for analyzing and controlling the peak-to-average power ratio in MC systems, and how these techniques are applied in practical designs. The author starts with an overview of multicarrier signals and basic tools and algorithms, before discussing properties of MC signals in detail: discrete and continuous maxima; statistical distribution of peak power; codes with constant peak-to-average power ratio are all covered, concluding with methods to decrease peak power in MC systems. Current knowledge, problems, methods and definitions are summarized using rigorous mathematics, with an overview of the tools for the engineer. The book is aimed at graduate students and researchers in electrical engineering, computer science and applied mathematics, and practitioners in the telecommunications industry.
We consider the problem of fitting a Gaussian autoregressive model to a time series, subject to conditional independence constraints. This is an extension of the classical covariance selection problem to time series. The conditional independence constraints impose a sparsity pattern on the inverse of the spectral density matrix, and result in nonconvex quadratic equality constraints in the maximum likelihood formulation of the model estimation problem. We present a semidefinite relaxation, and prove that the relaxation is exact when the sample covariance matrix is block-Toeplitz. We also give experimental results suggesting that the relaxation is often exact when the sample covariance matrix is not block-Toeplitz. In combination with model selection criteria the estimation method can be used for topology selection. Experiments with randomly generated and several real data sets are also included.
Introduction
Graphical models give a graph representation of relations between random variables. The simplest example is a Gaussian graphical model, in which an undirected graph with n nodes is used to describe conditional independence relations between the components of an n-dimensional random variable x ~ N(0, ∑). The absence of an edge between two nodes of the graph indicates that the corresponding components of x are independent, conditional on the other components. Other common examples of graphical models include contingency tables, which describe conditional independence relations in multinomial distributions, and Bayesian networks, which use directed acyclic graphs to represent causal or temporal relations.
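The defining property of a Gaussian graphical model can be checked numerically: a missing edge between nodes i and j corresponds to a zero (i, j) entry of the precision matrix Σ⁻¹. The Markov-chain covariance below is an illustrative example, not from the text.

```python
def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[v / det for v in row] for row in adj]

# Markov chain x1 - x2 - x3 with correlation rho between neighbours:
# Sigma[i][j] = rho**|i-j|.  x1 and x3 are correlated (Sigma[0][2] != 0) ...
rho = 0.6
Sigma = [[rho ** abs(i - j) for j in range(3)] for i in range(3)]
K = inv3(Sigma)          # precision (inverse covariance) matrix

# ... but conditionally independent given x2: K[0][2] == 0,
# i.e. the edge (1, 3) is absent from the graph.
```

So the sparsity pattern lives in Σ⁻¹, not in Σ itself, which is exactly why the conditional independence constraints in this chapter act on the inverse of the spectral density matrix.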
This chapter concerns the use of convex optimization in real-time embedded systems, in areas such as signal processing, automatic control, real-time estimation, real-time resource allocation and decision making, and fast automated trading. By “embedded” we mean that the optimization algorithm is part of a larger, fully automated system that executes automatically with newly arriving data or changing conditions, without any human intervention or action. By “real-time” we mean that the optimization algorithm executes much faster than a typical or generic method with a human in the loop, in times measured in milliseconds or microseconds for small and medium-sized problems, and (a few) seconds for larger problems. In real-time embedded convex optimization the same optimization problem is solved many times, with different data, often with a hard real-time deadline. In this chapter we propose an automatic code generation system for real-time embedded convex optimization. Such a system scans a description of the problem family and performs much of the analysis and optimization of the algorithm, such as choosing variable orderings used with sparse factorizations and determining storage structures, at code generation time. Compiling the generated source code yields an extremely efficient custom solver for the problem family. We describe a preliminary implementation, built on the Python-based modeling framework CVXMOD, and give some timing results for several examples.
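A cartoon of the code-generation idea (illustrative names and a deliberately tiny problem family, not the CVXMOD implementation): the structure-dependent work is done once, offline, so that each arriving instance is solved with only cheap arithmetic.

```python
# Problem family: minimize ||A x - b||^2 with A fixed, b changing at run time.
# "Generation" step: precompute everything that depends only on A,
# so the per-instance (real-time) work is a couple of dot products.
A = [[2.0, 0.0],
     [1.0, 1.0],
     [0.0, 2.0]]

def generate_solver(A):
    """Offline: precompute (A^T A)^{-1} for this fixed 3x2 matrix A."""
    n00 = sum(r[0] * r[0] for r in A)
    n01 = sum(r[0] * r[1] for r in A)
    n11 = sum(r[1] * r[1] for r in A)
    det = n00 * n11 - n01 * n01
    inv = [[n11 / det, -n01 / det], [-n01 / det, n00 / det]]

    def solve(b):
        """Online: x = (A^T A)^{-1} A^T b, no factorization at run time."""
        atb = [sum(A[i][j] * b[i] for i in range(len(A))) for j in range(2)]
        return [inv[0][0] * atb[0] + inv[0][1] * atb[1],
                inv[1][0] * atb[0] + inv[1][1] * atb[1]]

    return solve

solve = generate_solver(A)       # "compile" once for the problem family
x = solve([2.0, 2.0, 2.0])       # then solve each arriving instance cheaply
```

Real generated solvers exploit much richer structure (sparsity orderings, storage layout), but the split between generation time and solve time is the same.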
Introduction
Advisory optimization
Mathematical optimization is traditionally thought of as an aid to human decision making.
Non-cooperative game theory is a branch of game theory for the resolution of conflicts among players (or economic agents), each behaving selfishly to optimize its own well-being subject to resource limitations and other constraints that may depend on the rivals' actions. While many telecommunication problems have traditionally been approached using optimization, game models are being used increasingly; they seem to provide meaningful models for many applications where the interaction among several agents is by no means negligible, for example, the choice of power allocations, routing strategies, and prices. Furthermore, the deregulation of telecommunication markets and the explosive growth of the Internet pose many new problems that can be effectively tackled with game-theoretic tools. In this chapter, we present a comprehensive treatment of Nash equilibria based on the variational inequality and complementarity approach, covering the existence of equilibria using degree theory, global uniqueness of an equilibrium using the P-property, local sensitivity analysis using degree theory, iterative algorithms using fixed-point iterations, and a descent approach for computing variational equilibria based on the regularized Nikaido–Isoda function. We illustrate the existence theory using a communication game with QoS constraints. The results can be used for the further study of conflict resolution among selfish agents in telecommunication.
Introduction
The literature on non-cooperative games is vast. Rather than reviewing this extensive literature, we refer the readers to the recent survey [20], which we will use as the starting point of this chapter.
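The fixed-point iteration machinery discussed in this chapter can be previewed on a toy two-player quadratic game (a Cournot-style example chosen purely for illustration): each player repeatedly best-responds to the other's last action, and for b < 2 the best-response map is a contraction, so the iteration converges to the unique Nash equilibrium.

```python
# Player i maximizes u_i(x_i, x_j) = x_i * (a_i - x_i - b * x_j) over x_i >= 0.
# Setting du_i/dx_i = 0 gives the best response BR_i(x_j) = (a_i - b * x_j) / 2,
# which is a contraction in x_j whenever b < 2.
a = (10.0, 8.0)
b = 0.5

def best_response(a_i, x_other):
    return max(0.0, (a_i - b * x_other) / 2.0)

x = [0.0, 0.0]
for _ in range(100):                     # simultaneous best-response iteration
    x = [best_response(a[0], x[1]),
         best_response(a[1], x[0])]

# At a Nash equilibrium, each action is a best response to the other's action.
res = [abs(x[0] - best_response(a[0], x[1])),
       abs(x[1] - best_response(a[1], x[0]))]
```

The limit point satisfies both best-response conditions simultaneously, which is precisely the fixed-point characterization of a Nash equilibrium that the VI approach generalizes.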