In Chapters 4 through 7 we developed expressions for the expectation and variance of various performance measures of a given schedule of jobs, considering schedules for different machine configurations. For some of these performance measures, assumptions on the type of processing time distribution are necessary to enable the development of analytical expressions. Table 8.1 gives an overview of the machine configurations and performance measures that we have considered, together with the assumptions made on the processing time distributions in each case. The table also indicates the level of accuracy of the resulting expressions for the expectation and variance of a performance measure, and whether the analysis relies on Clark's method (Clark, 1961).
Clark's method for approximating the expectation and variance of the maximum of a set of random variables is based on the assumption of normal distributions for all random variables. In this chapter we relax this assumption and consider the case of general processing time distributions. Our analysis relies on the use of finite-mixture models.
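To make the construction concrete, below is a minimal Python sketch (NumPy and SciPy assumed; neither is prescribed here) of Clark's first two moment formulas for the maximum of two jointly normal random variables. For more than two variables, Clark's method is applied recursively, treating the maximum of the first pair as approximately normal.

```python
import numpy as np
from scipy.stats import norm

def clark_max_moments(mu1, var1, mu2, var2, rho=0.0):
    """Clark's (1961) approximation of E[max(X1, X2)] and Var[max(X1, X2)]
    for jointly normal X1, X2 with correlation rho (assumes a > 0 below)."""
    a = np.sqrt(var1 + var2 - 2.0 * rho * np.sqrt(var1 * var2))
    alpha = (mu1 - mu2) / a
    # First and second moments of the maximum.
    m1 = mu1 * norm.cdf(alpha) + mu2 * norm.cdf(-alpha) + a * norm.pdf(alpha)
    m2 = ((mu1**2 + var1) * norm.cdf(alpha)
          + (mu2**2 + var2) * norm.cdf(-alpha)
          + (mu1 + mu2) * a * norm.pdf(alpha))
    return m1, m2 - m1**2  # expectation, variance

# Example: two i.i.d. standard normals -> E[max] = 1/sqrt(pi) ~ 0.5642.
print(clark_max_moments(0.0, 1.0, 0.0, 1.0))
```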
Finite-Mixture Models
The use of a finite mixture of distributions provides a flexible methodology to represent a variety of random phenomena.
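As a concrete illustration, the sketch below evaluates the density of, and draws samples from, a hypothetical two-component normal mixture f(x) = w1 f1(x) + w2 f2(x); the weights, means, and standard deviations are made-up values, not ones taken from this chapter.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-component normal mixture for a processing time:
# f(x) = w1*f1(x) + w2*f2(x), with weights summing to one.
weights = np.array([0.3, 0.7])
means   = np.array([2.0, 5.0])
stds    = np.array([0.5, 1.0])

def mixture_pdf(x):
    """Density of the finite mixture: a weighted sum of component densities."""
    return sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, stds))

def mixture_sample(n, rng=np.random.default_rng(0)):
    """Draw a component index by its weight, then draw from that component."""
    k = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[k], stds[k])

# The mixture mean is the weighted sum of component means: 0.3*2 + 0.7*5 = 4.1.
print(mixture_sample(5), mixture_pdf(4.1))
```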
Principles of Optimal Design puts the concept of optimal design on a rigorous foundation and demonstrates the intimate relationship between the mathematical model that describes a design and the solution methods that optimize it. Since the first edition was published, computers have become ever more powerful, design engineers are tackling more complex systems, and the term optimization is now routinely used to denote a design process with increased speed and quality. This second edition takes account of these developments and brings the original text thoroughly up to date. The book now includes a discussion of trust region and convex approximation algorithms. A new chapter focuses on how to construct optimal design models. Three new case studies illustrate the creation of optimization models. The final chapter on optimization practice has been expanded to include computation of derivatives, interpretation of algorithmic results, and selection of algorithms and software. Both students and practising engineers will find this book a valuable resource for design project work.
Filtering and system identification are powerful techniques for building models of complex systems. This 2007 book discusses the design of reliable numerical methods to retrieve missing information in models derived using these techniques. Emphasis is on the least squares approach as applied to the linear state-space model, and problems of increasing complexity are analyzed and solved within this framework, starting with the Kalman filter and concluding with the estimation of a full model, noise statistics and state estimator directly from the data. Key background topics, including linear matrix algebra and linear system theory, are covered, followed by different estimation and identification methods in the state-space model. With end-of-chapter exercises, MATLAB simulations and numerous illustrations, this book will appeal to graduate students and researchers in electrical, mechanical and aerospace engineering. It is also useful for practitioners. Additional resources for this title, including solutions for instructors, are available online at www.cambridge.org/9780521875127.
Many important engineering problems can be cast in the form of a quadratically constrained quadratic program (QCQP) or a fractional QCQP. In general, these problems are nonconvex and NP-hard. This chapter introduces a semidefinite programming (SDP) relaxation procedure for this class of quadratic optimization problems that can generate a provably approximately optimal solution in randomized polynomial time. We illustrate the use of SDP relaxation in the context of downlink transmit beamforming, and show that the SDP relaxation approach can either generate the global optimum solution or provide an approximately optimal solution with a guaranteed worst-case approximation performance. Moreover, we describe how the SDP relaxation approach can be used in magnitude filter design and in magnetic resonance imaging systems.
Introduction
In this chapter, we consider several classes of nonconvex quadratically constrained quadratic programs (QCQPs) and a class of nonconvex fractional QCQPs. The importance of these classes of problems lies in their wide-ranging applications in signal processing and communications, which include the following (a small code sketch of the SDP relaxation for the first item appears after the list):
the Boolean least-squares (LS) problem in digital communications [1];
the noncoherent maximum-likelihood detection problem in multiple-input multiple-output (MIMO) communications [2, 3];
the MAXCUT problem in network optimization [4];
the large-margin parameter estimation problem in automatic speech recognition [5–8];
the optimum coded waveform design for radar detection [9];
the image segmentation problem in pattern recognition [10];
the magnitude filter design problem in digital signal processing [11];
the transmit B1 shim and specific absorption rate computation in magnetic resonance imaging (MRI) systems [12, 13].
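As promised above, here is a minimal sketch of the SDP relaxation for the first item, the Boolean LS problem min ||Ax − b||² over x ∈ {−1, +1}^n. The lifting of z = [x; 1] to a rank-one matrix Z = zzᵀ, with the rank constraint dropped, is the standard relaxation; CVXPY is assumed purely for illustration and is not part of the chapter.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 4
A = rng.standard_normal((m, n))
b = A @ rng.choice([-1.0, 1.0], size=n) + 0.1 * rng.standard_normal(m)

# Lift z = [x; 1] to Z ~ z z^T: Z is PSD with unit diagonal on the x block,
# and the nonconvex rank-one constraint is dropped -- that is the relaxation.
Z = cp.Variable((n + 1, n + 1), PSD=True)
X, x = Z[:n, :n], Z[:n, n]
objective = cp.Minimize(cp.trace(A.T @ A @ X) - 2 * (A.T @ b) @ x + b @ b)
constraints = [cp.diag(X) == 1, Z[n, n] == 1]
cp.Problem(objective, constraints).solve()

x_hat = np.sign(x.value)   # simple rounding back to the Boolean alphabet
print(x_hat)
```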
Due to their computational efficiency and strong empirical performance, semidefinite relaxation (SDR)-based algorithms have gained much attention in multiple-input, multiple-output (MIMO) detection. However, the theoretical performance of these algorithms, especially when applied to constellations other than the binary phase-shift keying (BPSK) constellation, is still not well understood. In this chapter we describe a recently developed approach for analyzing the approximation guarantees of various SDR-based algorithms in the low signal-to-noise ratio (SNR) region. Using such an approach, we show that in the case of M-ary phase-shift keying (MPSK) and quadrature amplitude modulation (QAM) constellations, various SDR-based algorithms will return solutions with near-optimal log-likelihood values with high probability. The results described in this chapter can be viewed as average-case analyses of certain SDP relaxations, where the input distribution is motivated by physical considerations. More importantly, they give some theoretical justification for using SDR-based algorithms for MIMO detection in the low SNR region.
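A common ingredient of these algorithms is the Gaussian randomization step used to convert the SDR solution into a feasible symbol vector. The sketch below shows a generic version for a BPSK-type alphabet; the quadratic form Q abstracts the ML objective and is an illustrative assumption rather than this chapter's exact formulation.

```python
import numpy as np

def randomized_rounding(Q, X_star, trials=100, rng=np.random.default_rng(0)):
    """Gaussian randomization: sample from N(0, X*), project each sample onto
    the BPSK alphabet {-1, +1}, and keep the candidate with the best objective.
    The detection problem is abstracted here as min_x x^T Q x over x in {-1,1}^n."""
    n = X_star.shape[0]
    best_x, best_val = None, np.inf
    samples = rng.multivariate_normal(np.zeros(n), X_star, size=trials)
    for xi in samples:
        x = np.sign(xi)
        x[x == 0] = 1.0                 # break ties deterministically
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```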
Introduction
Semidefinite programming (SDP) has now become an important algorithm design tool for a wide variety of optimization problems. From a practical standpoint, SDP-based algorithms have proven to be effective in dealing with various fundamental engineering problems, such as control system design [1, 2], structural design [3], signal detection [4, 5], and network localization [6–8]. From a theoretical standpoint, SDP is playing an important role in advancing the theory of algorithms.
The past two decades have witnessed a surge of research in optimization. This includes theoretical aspects, algorithmic developments such as generalizations of interior-point methods to a rich class of convex-optimization problems, and many new engineering applications. The development of general-purpose software tools as well as the insight generated by the underlying theory have contributed to the emergence of convex optimization as a major signal-processing tool; this has made a significant impact on numerous problems previously considered intractable. Given this success of convex optimization, many new applications are continuously flourishing. This book aims at providing the reader with a series of tutorials on a wide variety of convex-optimization applications in signal processing and communications, written by worldwide leading experts, and contributing to the diffusion of these new developments within the signal-processing community. The topics included are automatic code generation for real-time solvers, graphical models for autoregressive processes, gradient-based algorithms for signal-recovery applications, semidefinite programming (SDP) relaxation with worst-case approximation performance, radar waveform design via SDP, blind non-negative source separation for image processing, modern sampling theory, robust broadband beamforming techniques, distributed multiagent optimization for networked systems, cognitive radio systems via game theory, and the variational-inequality approach for Nash-equilibrium solutions.
This chapter presents distributed algorithms for cooperative optimization among multiple agents connected through a network. The goal is to optimize a global objective function that is a combination of local objective functions, each known only to the corresponding agent. We focus on two related approaches for the design of distributed algorithms for this problem. The first approach relies on Lagrangian decomposition and dual subgradient methods. We show that this methodology leads to distributed algorithms for optimization problems with special structure. The second approach combines consensus algorithms with subgradient methods. In both approaches, our focus is on providing a convergence-rate analysis for the generated solutions that highlights the dependence on problem parameters.
Introduction and motivation
There has been much recent interest in distributed control and coordination of networks consisting of multiple agents, where the goal is to collectively optimize a global objective. This is motivated mainly by the emergence of large-scale networks and new networking applications such as mobile ad hoc networks and wireless-sensor networks, characterized by the lack of centralized access to information and time-varying connectivity. Control and optimization algorithms deployed in such networks should be completely distributed, relying only on local observations and information, robust against unexpected changes in topology, such as link or node failures, and scalable in the size of the network.
This chapter studies the problem of distributed optimization and control of multiagent networked systems. More formally, we consider a multiagent network model, where m agents exchange information over a connected network.
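The second of the two approaches can be sketched in a few lines. Each agent holds a scalar estimate, mixes it with neighbors through a doubly stochastic weight matrix, and then takes a local subgradient step; the quadratic local objectives and the uniform weights below are illustrative assumptions, not the chapter's setup.

```python
import numpy as np

# Consensus + subgradient sketch for minimizing sum_i f_i(x), where agent i
# only knows f_i. Illustrative setup: f_i(x) = 0.5*(x - c_i)^2, so the global
# minimizer is the average of the c_i.
m = 4
c = np.array([1.0, 3.0, 4.0, 8.0])
W = np.full((m, m), 1.0 / m)          # complete graph, uniform weights (assumed)
x = np.zeros(m)                       # one scalar estimate per agent
alpha = 0.05                          # constant stepsize

for k in range(500):
    grads = x - c                     # local (sub)gradients of f_i at x_i
    x = W @ x - alpha * grads         # average neighbors, then descend locally

print(x, c.mean())                    # all agents approach the global optimum 4.0
```

With a constant stepsize the iterates converge only to a neighborhood of the optimum; a diminishing stepsize recovers exact convergence at a slower rate.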
Sampling theory has benefited from a surge of research in recent years, due in part to intense research in wavelet theory and the connections made between the two fields. In this chapter we present several extensions of the Shannon theorem, which treat a wide class of input signals, as well as nonideal-sampling and constrained-recovery procedures. This framework is based on an optimization viewpoint, which takes into account both the goodness of fit of the reconstructed signal to the given samples, as well as relevant prior knowledge on the original signal. Our exposition is based on a Hilbert-space interpretation of sampling techniques, and relies on the concepts of bases (frames) and projections. The reconstruction algorithms developed in this chapter lead to improvement over standard interpolation approaches in signal- and image-processing applications.
Introduction
Sampling theory treats the recovery of a continuous-time signal from a discrete set of measurements. This field has attracted significant attention in the engineering community ever since the pioneering work of Shannon [1] (also attributed to Whittaker [2], Kotelnikov [3], and Nyquist [4]) on sampling bandlimited signals. Discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Therefore, with the rapid development of digital applications, the theory of sampling has gained importance.
Traditionally, sampling theories addressed the problem of perfectly reconstructing a given class of signals from their samples.
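For bandlimited signals this is exactly Shannon's interpolation formula, x(t) = Σ_n x(nT) sinc((t − nT)/T), sketched below with NumPy; truncating to finitely many samples introduces a small error, and the test signal and sampling rate are made-up values.

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon recovery of a bandlimited signal from uniform samples x[n] = x(nT):
    x(t) = sum_n x[n] * sinc((t - nT)/T). Note np.sinc(u) = sin(pi*u)/(pi*u)."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

# Example: a tone below the Nyquist rate is recovered (up to truncation error).
T = 0.1                                      # sampling interval -> 10 Hz rate
n = np.arange(100)
x_samples = np.cos(2 * np.pi * 2.0 * n * T)  # 2 Hz tone, well below 5 Hz Nyquist
t = np.array([0.123, 0.456])
print(sinc_reconstruct(x_samples, T, t), np.cos(2 * np.pi * 2.0 * t))
```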
Game theory is a field of applied mathematics that describes and analyzes scenarios with interactive decisions. In recent years, there has been a growing interest in adopting cooperative and non-cooperative game-theoretic approaches to model many communications and networking problems, such as power control and resource sharing in wireless/wired and peer-to-peer networks. In this chapter we show how many challenging unsolved resource-allocation problems in the emerging field of cognitive radio (CR) networks fit naturally in the game-theoretic paradigm. This provides us with the mathematical tools necessary to analyze the proposed equilibrium problems for CR systems (e.g., existence and uniqueness of the solution) and to devise distributed algorithms along with their convergence properties. The proposed algorithms differ in performance, level of protection of the primary users, computational effort and signaling among primary and secondary users, convergence analysis, and convergence speed, which makes them suitable for many different CR systems. We also propose a more general framework, based on variational inequalities (VIs), which constitute a very general class of problems in nonlinear analysis, suitable for investigating and solving more sophisticated equilibrium problems in CR systems when classical game theory may fail.
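As a flavor of the distributed algorithms involved, the sketch below runs Gauss–Seidel best responses for a small Gaussian interference game, where each user's best response is classical water-filling against the interference the others currently generate. All problem data are made up, and convergence of such iterations is guaranteed only under conditions on the interference levels of the kind analyzed in this chapter.

```python
import numpy as np

def waterfill(inv_gain, budget):
    """Classical water-filling: maximize sum_k log(1 + p_k / inv_gain_k)
    subject to sum_k p_k = budget, p_k >= 0, by bisection on the water level."""
    lo, hi = 0.0, budget + inv_gain.max()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_gain, 0.0)
        lo, hi = (mu, hi) if p.sum() < budget else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - inv_gain, 0.0)

# Illustrative two-user, K-channel Gaussian interference game (numbers assumed):
# each user water-fills against the interference currently produced by the other.
rng = np.random.default_rng(2)
K, users, budget, noise = 8, 2, 1.0, 0.1
H = rng.uniform(0.1, 1.0, (users, users, K))   # H[q, r, k]: gain from r to q on k
p = np.zeros((users, K))
for it in range(50):                            # Gauss-Seidel best responses
    for q in range(users):
        interf = noise + sum(H[q, r] * p[r] for r in range(users) if r != q)
        p[q] = waterfill(interf / H[q, q], budget)
print(p.sum(axis=1))                            # each user's power budget is met
```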
Introduction and motivation
In recent years, increasing demand for wireless services has made the radio spectrum a very scarce and precious resource. Moreover, most current wireless networks, characterized by fixed spectrum-assignment policies, are known to be very inefficient, considering that licensed-bandwidth demands vary widely along the time and space dimensions (according to the Federal Communications Commission [FCC], only 15% to 85% of the licensed spectrum is utilized on average [1]).
Several broadband adaptive beamforming techniques based on worst-case performance optimization are developed, with improved robustness against array manifold errors. The proposed beamformers differ from existing broadband robust techniques in that their robustness is directly matched to the amount of uncertainty in the array manifold, and the suboptimal subband decomposition step is avoided. Convex formulations of the proposed beamformer designs are derived based on second-order cone programming (SOCP) and semidefinite programming (SDP). Simulation results validate the improved robustness of the proposed robust beamformers relative to several state-of-the-art robust broadband techniques.
Introduction
Adaptive array processing has received considerable attention during the last four decades, particularly in the fields of sonar, radar, speech acquisition and, more recently, wireless communications [1,2]. The main objective of adaptive beamforming algorithms is to suppress the interference and noise while preserving the desired signal components. One of the early adaptive beamforming algorithms for broadband signals is the linearly constrained minimum variance (LCMV) algorithm developed by Frost in [3] and extensively studied in the follow-up literature [1, 4, 5]. Frost's broadband array processor includes a presteering delay front-end whose function is to steer the array towards the desired signal so that each of its frequency components appears in-phase across the array after the presteering delays. Each presteering delay is then followed by a finite impulse response (FIR) adaptive filter and the outputs of all these filters are summed together to obtain the array output.
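Underlying Frost's processor is the narrowband LCMV criterion: minimize wᴴRw subject to Cᴴw = f, whose closed-form solution is w = R⁻¹C(CᴴR⁻¹C)⁻¹f. The sketch below evaluates this closed form on made-up data; the broadband processor of [3] realizes the same criterion adaptively with presteering delays and FIR filters.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Closed-form LCMV solution: minimize w^H R w subject to C^H w = f.
    R: array covariance, C: constraint matrix, f: desired response vector."""
    RinvC = np.linalg.solve(R, C)
    return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

# Tiny illustrative example (numbers assumed): 4 sensors, one distortionless
# constraint toward broadside, where the steering vector is all ones.
n = 4
rng = np.random.default_rng(3)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T + n * np.eye(n)      # a valid (positive definite) covariance
C = np.ones((n, 1), dtype=complex)      # broadside steering vector
f = np.array([1.0 + 0j])                # unit (distortionless) response
w = lcmv_weights(R, C, f)
print(np.allclose(C.conj().T @ w, f))   # the constraint is satisfied
```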
We consider the problem of fitting a Gaussian autoregressive model to a time series, subject to conditional independence constraints. This is an extension of the classical covariance selection problem to time series. The conditional independence constraints impose a sparsity pattern on the inverse of the spectral density matrix, and result in nonconvex quadratic equality constraints in the maximum likelihood formulation of the model estimation problem. We present a semidefinite relaxation, and prove that the relaxation is exact when the sample covariance matrix is block-Toeplitz. We also give experimental results suggesting that the relaxation is often exact when the sample covariance matrix is not block-Toeplitz. In combination with model selection criteria the estimation method can be used for topology selection. Experiments with randomly generated and several real data sets are also included.
Introduction
Graphical models give a graph representation of relations between random variables. The simplest example is a Gaussian graphical model, in which an undirected graph with n nodes is used to describe conditional independence relations between the components of an n-dimensional random variable x ∼ N(0, Σ). The absence of an edge between two nodes of the graph indicates that the corresponding components of x are independent, conditional on the other components. Other common examples of graphical models include contingency tables, which describe conditional independence relations in multinomial distributions, and Bayesian networks, which use directed acyclic graphs to represent causal or temporal relations.
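The correspondence between missing edges and zeros of the inverse covariance is easy to verify numerically. The three-node chain below is a made-up example: the (1, 3) entry of the precision matrix is zero, so x1 and x3 are conditionally independent given x2, even though they are marginally dependent.

```python
import numpy as np

# In a Gaussian graphical model, x_i and x_j are conditionally independent
# given the rest iff (Sigma^{-1})_{ij} = 0. Chain graph 1 - 2 - 3:
K = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.0, -0.8],
              [ 0.0, -0.8,  2.0]])     # precision (inverse covariance), assumed
Sigma = np.linalg.inv(K)
print(Sigma[0, 2])                     # nonzero: x1 and x3 are marginally dependent
print(np.linalg.inv(Sigma)[0, 2])      # ~0: conditionally independent given x2
```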
The starting point in the formulation of any numerical problem is to take an intuitive idea about the problem in question and to translate it into precise mathematical language. This book provides step-by-step descriptions of how to formulate numerical problems and develops techniques for solving them. A number of engineering case studies motivate the development of efficient algorithms that involve, in some cases, transformation of the problem from its initial formulation into a more tractable form. Five general problem classes are considered: linear systems of equations, non-linear systems of equations, unconstrained optimization, equality-constrained optimization and inequality-constrained optimization. The book contains many worked examples and homework exercises and is suitable for students of engineering or operations research taking courses in optimization. Supplementary material including solutions, lecture slides and appendices are available online at www.cambridge.org/9780521855648.
This chapter concerns the use of convex optimization in real-time embedded systems, in areas such as signal processing, automatic control, real-time estimation, real-time resource allocation and decision making, and fast automated trading. By “embedded” we mean that the optimization algorithm is part of a larger, fully automated system that executes automatically with newly arriving data or changing conditions, without any human intervention or action. By “real-time” we mean that the optimization algorithm executes much faster than a typical or generic method with a human in the loop, in times measured in milliseconds or microseconds for small and medium-sized problems, and (a few) seconds for larger problems. In real-time embedded convex optimization the same optimization problem is solved many times, with different data, often with a hard real-time deadline. In this chapter we propose an automatic code generation system for real-time embedded convex optimization. Such a system scans a description of the problem family and performs much of the analysis and optimization of the algorithm, such as choosing variable orderings used with sparse factorizations and determining storage structures, at code generation time. Compiling the generated source code yields an extremely efficient custom solver for the problem family. We describe a preliminary implementation, built on the Python-based modeling framework CVXMOD, and give some timing results for several examples.
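The "problem family" idea can be mimicked with a parameterized model: the structure is fixed once, and only the data changes between solves. The sketch below uses CVXPY's Parameter purely as an illustrative stand-in; the chapter's system, CVXMOD, instead generates custom solver source code and is not used here.

```python
import cvxpy as cp
import numpy as np

# Fixed problem structure, varying data: a least-squares problem with an
# l1-norm constraint, solved repeatedly as new measurements b arrive.
n = 10
A = np.random.default_rng(4).standard_normal((20, n))
b = cp.Parameter(20)                   # the data that changes at each solve
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [cp.norm(x, 1) <= 1])

for _ in range(3):                     # repeated solves, new data each time
    b.value = np.random.standard_normal(20)
    prob.solve(warm_start=True)        # reuse the previous solution as a start
    print(prob.value)
```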
Introduction
Advisory optimization
Mathematical optimization is traditionally thought of as an aid to human decision making.
Non-cooperative game theory is a branch of game theory for the resolution of conflicts among players (or economic agents), each behaving selfishly to optimize its own well-being subject to resource limitations and other constraints that may depend on the rivals' actions. While many telecommunication problems have traditionally been approached by using optimization, game models are being used increasingly; they seem to provide meaningful models for many applications where the interaction among several agents is by no means negligible, for example, the choice of power allocations, routing strategies, and prices. Furthermore, the deregulation of telecommunication markets and the explosive growth of the Internet pose many new problems that can be effectively tackled with game-theoretic tools. In this chapter, we present a comprehensive treatment of Nash equilibria based on the variational inequality and complementarity approach, covering the topics of existence of equilibria using degree theory, global uniqueness of an equilibrium using the P-property, local sensitivity analysis using degree theory, iterative algorithms using fixed-point iterations, and a descent approach for computing variational equilibria based on the regularized Nikaido–Isoda function. We illustrate the existence theory using a communication game with QoS constraints. The results can be used for the further study of conflict resolution among selfish agents in telecommunications.
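As a taste of the fixed-point machinery, the sketch below runs the basic projection iteration x ← Π_K(x − τF(x)) for an affine VI over a box; for a strongly monotone, Lipschitz F and a small enough stepsize τ, this map is a contraction. All data are made up for illustration.

```python
import numpy as np

# Projection iteration for the variational inequality VI(K, F):
# find x* in K with (y - x*)^T F(x*) >= 0 for all y in K.
M = np.array([[3.0, 1.0], [-1.0, 2.0]])    # M + M^T positive definite => strongly monotone
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q
proj_box = lambda x: np.clip(x, 0.0, 1.0)  # K = [0, 1]^2

x, tau = np.zeros(2), 0.2
for _ in range(200):
    x = proj_box(x - tau * F(x))           # contraction for small enough tau
print(x, F(x))                             # the solution x* and F(x*)
```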
Introduction
The literature on non-cooperative games is vast. Rather than reviewing this extensive literature, we refer the readers to the recent survey [20], which we will use as the starting point of this chapter.
This chapter presents, in a self-contained manner, recent advances in the design and analysis of gradient-based schemes for specially structured, smooth and nonsmooth minimization problems. We focus on the mathematical elements and ideas for building fast gradient-based methods and derive their complexity bounds. Throughout the chapter, the resulting schemes and results are illustrated and applied on a variety of problems arising in several specific key applications such as sparse approximation of signals, total variation-based image-processing problems, and sensor-location problems.
Introduction
The gradient method is probably one of the oldest optimization algorithms, going back as early as 1847 with the initial work of Cauchy. Nowadays, gradient-based methods have attracted a revived and intensive interest among researchers, both in theoretical optimization and in scientific applications. Indeed, the very large-scale nature of problems arising in many scientific applications, combined with an increase in the power of computer technology, has motivated a “return” to the “old and simple” methods that can overcome the curse of dimensionality, a task usually out of reach for the more sophisticated algorithms of today.
One of the main drawbacks of gradient-based methods is their speed of convergence, which is known to be slow. However, with proper modeling of the problem at hand, combined with some key ideas, it turns out that it is possible to build fast gradient schemes for various classes of problems arising in applications and, in particular, signal-recovery problems.
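The flavor of such fast schemes can be seen in a short sketch of an accelerated proximal-gradient (FISTA-type) method applied to the l1-regularized least-squares problem typical of sparse signal recovery; the problem data below are synthetic and illustrative.

```python
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=300):
    """Accelerated proximal-gradient scheme for min f(x) + g(x): gradient step
    on smooth f with stepsize 1/L, prox step on g, plus Nesterov momentum."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x

# Synthetic sparse-recovery instance: min 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 0.1
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

x_hat = fista(grad_f, prox_g, L, np.zeros(100))
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # support of the recovered vector
```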