Convexity is one of the most fruitful concepts in classical optimization. Geodesic convexity generalizes that concept to optimization on Riemannian manifolds. There are several ways to carry out such a generalization: this chapter favors permissive definitions which suffice to retain the most important properties for optimization purposes (e.g., local optima are global optima). Alternative definitions are discussed, highlighting the fact that all of them coincide in the special case of Hadamard manifolds (essentially, negatively curved Riemannian manifolds). The chapter continues with a discussion of the special properties of differentiable geodesically (strictly, strongly) convex functions, and builds on them to show global linear convergence of Riemannian gradient descent, assuming strong geodesic convexity and Lipschitz continuous gradients (via the Polyak–Łojasiewicz inequality). The chapter closes with two examples of manifolds where geodesic convexity has proved useful, namely, the positive orthant with a log-barrier metric (recovering geometric programming), and the cone of positive definite matrices with the log-Euclidean and affine-invariant Riemannian metrics.
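To make the convergence claim concrete, here is the standard shape of the Polyak–Łojasiewicz argument under the assumptions named above (strong geodesic convexity with modulus μ, L-Lipschitz continuous gradient); the step size 1/L and the constants are the usual textbook choices, offered as a sketch rather than as this chapter's exact statements.

```latex
% Riemannian gradient descent: x_{k+1} = R_{x_k}\!\bigl(-\tfrac{1}{L}\,\mathrm{grad} f(x_k)\bigr).
% Strong geodesic convexity (modulus \mu) implies the Polyak--Lojasiewicz inequality
\[
  f(x) - f^\star \;\le\; \frac{1}{2\mu}\,\bigl\|\mathrm{grad} f(x)\bigr\|_x^2 ,
\]
% while the Lipschitz gradient condition yields sufficient decrease at each step:
\[
  f(x_{k+1}) \;\le\; f(x_k) - \frac{1}{2L}\,\bigl\|\mathrm{grad} f(x_k)\bigr\|_{x_k}^2 .
\]
% Combining the two gives global linear convergence:
\[
  f(x_{k+1}) - f^\star \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(f(x_k) - f^\star\bigr).
\]
```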
In this chapter we review the basic concepts of digital modulation and demodulation in the absence of noise. Digital modulation is the process of mapping information bits into transmitted waveforms; demodulation is the process of mapping received signals back into bits. We also review the concepts from signals and systems needed to understand modulation and demodulation.
Because sinusoidal signals are fundamental in communication systems, we first review representations of sinusoidal signals. Next, we show how vectors can be mapped into waveforms, as described in Chapter 1. For binary modulation, 1 bit of information (a 0 or a 1) is mapped into one of two vectors. These vectors, along with a set of orthonormal waveforms (described below), determine two signals. So 1 bit of information is mapped into one of two signals to be transmitted.
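As a concrete illustration of the binary case, the sketch below maps each bit to one of two antipodal signal vectors and then to a waveform on a single unit-energy rectangular pulse; the pulse shape, symbol energy, and sample rate are illustrative assumptions, not the chapter's specific signal set.

```python
import numpy as np

# One orthonormal basis waveform: a unit-energy rectangular pulse of duration T.
T, fs = 1.0, 100                      # symbol duration (s) and sample rate (Hz); illustrative
t = np.arange(0, T, 1 / fs)
phi = np.ones_like(t) / np.sqrt(T)    # integral of phi^2 over [0, T] equals 1

# Binary (antipodal) mapping: bit 0 -> -sqrt(E), bit 1 -> +sqrt(E).
E = 1.0                               # symbol energy (assumed)
bits = np.array([1, 0, 1, 1, 0])
amplitudes = np.sqrt(E) * (2 * bits - 1)

# Each bit becomes one of two signals: s_i(t) = a_i * phi(t).
signals = amplitudes[:, None] * phi[None, :]

# Demodulation (no noise): project each received signal back onto phi.
projections = signals @ phi / fs      # inner product approximated by a Riemann sum
recovered = (projections > 0).astype(int)
assert np.array_equal(recovered, bits)
```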
As an entry point to differential geometry, this chapter defines embedded submanifolds as subsets of linear spaces which can be locally defined by equations satisfying certain regularity conditions. Such sets can be linearized, yielding the notion of tangent space. The chapter further defines what it means for a map to and from a submanifold to be smooth, and how to differentiate such maps. The (disjoint) union of all tangent spaces forms the tangent bundle, which is itself a manifold. That makes it possible to define vector fields (maps which select a tangent vector at each point) and retractions (smooth maps which generate curves passing through any point with any given velocity). The chapter then proceeds to endow each tangent space with an inner product (turning each one into a Euclidean space). Under some regularity conditions, this extra structure turns the manifold into a Riemannian manifold, which makes it possible to define the Riemannian gradient of a real function. Taken together, these concepts are sufficient to build simple algorithms in the next chapter. An optional closing section defines local frames: they are useful for proofs but can be skipped for practical matters.
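To make these notions concrete, here is a minimal numpy sketch for the unit sphere embedded in R^3 (a particular example chosen here for illustration; the chapter treats general embedded submanifolds): the tangent-space projection, a metric-projection retraction, and the Riemannian gradient of a smooth cost obtained by projecting its Euclidean gradient.

```python
import numpy as np

# The unit sphere S^2 = {x in R^3 : ||x|| = 1}, locally defined by h(x) = x'x - 1 = 0.

def proj_tangent(x, v):
    """Orthogonal projection of v onto the tangent space T_x S^2 = {v : x'v = 0}."""
    return v - (x @ v) * x

def retract(x, v):
    """Metric-projection retraction: step in the embedding space, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

# Example cost f(x) = x' A x; its Euclidean gradient is 2 A x, and the
# Riemannian gradient (for the induced metric) is the tangent projection of it.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2

def riemannian_grad(x):
    return proj_tangent(x, 2 * A @ x)

# A few steps of Riemannian gradient descent (fixed step size, illustrative only).
x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = retract(x, -0.1 * riemannian_grad(x))
# x approaches a minimizer of the Rayleigh quotient: an eigenvector of A
# associated with its smallest eigenvalue.
print(x, x @ A @ x)
```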
The main purpose of this chapter is to motivate and analyze the Riemannian trust-region method (RTR). This optimization algorithm shines brightest when it uses both the Riemannian gradient and the Riemannian Hessian. It applies to optimization on manifolds in general, hence to embedded submanifolds of linear spaces in particular; for that setting, the previous chapters introduce the necessary geometric tools. Toward RTR, the chapter first introduces a Riemannian version of Newton's method, motivated by first developing second-order optimality conditions. Each iteration of Newton's method requires solving a linear system of equations in a tangent space. To this end, the classical conjugate gradients method (CG) is reviewed. Then, RTR is presented with a worst-case convergence analysis guaranteeing that, under some assumptions, it finds points which approximately satisfy first- and second-order necessary optimality conditions. Subproblems can be solved with a variant of CG called truncated CG (tCG). The chapter closes with three optional sections: one about local convergence, one providing simpler conditions to ensure convergence, and one about checking Hessians numerically.
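The sketch below shows a minimal Steihaug–Toint-style truncated-CG routine for the trust-region subproblem on a single tangent space, written here with flat numpy vectors and the Euclidean inner product as a stand-in for a general tangent space; the stopping tolerances and iteration cap are illustrative assumptions, not the chapter's exact algorithm.

```python
import numpy as np

def to_boundary(s, p, Delta):
    """Positive tau with ||s + tau * p|| = Delta (root of a quadratic in tau)."""
    a, b, c = p @ p, 2 * (s @ p), s @ s - Delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def truncated_cg(H, g, Delta, maxiter=50, tol=1e-8):
    """Approximately minimize m(s) = <g, s> + 0.5 <s, H s> subject to ||s|| <= Delta.
    H is a callable returning Hessian-vector products. Truncation rules:
    stop at the boundary on negative curvature or when a step exits the region."""
    s = np.zeros_like(g)
    r, p = g.copy(), -g.copy()            # residual = gradient of the model at s
    for _ in range(maxiter):
        Hp = H(p)
        pHp = p @ Hp
        if pHp <= 0:                      # negative curvature: follow p to the boundary
            return s + to_boundary(s, p, Delta) * p
        alpha = (r @ r) / pHp
        if np.linalg.norm(s + alpha * p) >= Delta:   # step would leave the region
            return s + to_boundary(s, p, Delta) * p
        s = s + alpha * p
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol:  # interior convergence
            return s
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return s
```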
This chapter is concerned with error control coding, sometimes called forward error correction (FEC), using block codes. Shannon’s result in Chapter 1 and the cutoff rate theorem in Chapter 6 indicate that the performance of communication systems generally improves as the dimensionality of the signal set grows. Error control codes are used to define large-dimensional signal sets.
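As a small concrete instance of a block code (chosen here for illustration; the chapter covers block codes in general), the systematic (7,4) Hamming code maps 4 information bits to 7 coded bits and corrects any single bit error via syndrome decoding:

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I | P], H = [P' | I], all arithmetic mod 2.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # 4x7 generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])    # 3x7 parity-check matrix

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2                        # encode 4 bits into 7

received = codeword.copy()
received[2] ^= 1                              # inject a single bit error

# A nonzero syndrome equals the column of H at the error position; flip that bit.
syndrome = H @ received % 2
for j in range(7):
    if np.array_equal(H[:, j], syndrome):
        received[j] ^= 1
        break
assert np.array_equal(received, codeword)
assert np.array_equal(received[:4], msg)      # systematic: first 4 bits are the message
```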
Optimization on Riemannian manifolds, the result of smooth geometry and optimization merging into one elegant modern framework, spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This final volume, Learning, builds on the foundational topics established in volume I to provide a thorough introduction to learning methods, addressing techniques such as least-squares methods, regularization, online learning, kernel methods, feedforward and recurrent neural networks, meta-learning, and adversarial attacks. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including complete solutions for instructors), 280 figures, 100 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Inference, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, data and inference.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.
With the rapid development of unmanned aerial vehicles (UAVs), extensive attention has been paid to UAV-aided data collection in wireless sensor networks. However, it is very challenging to maintain the information freshness of the sensor nodes (SNs) subject to the UAV’s limited energy capacity and/or the large network scale. This chapter introduces two modes of UAV-aided data collection: single and continuous. In the former, the UAVs are dispatched to gather sensing data from each SN just once, according to a preplanned data collection strategy. To keep information fresh, a multistage approach is proposed to find a set of data collection points at which the UAVs hover to collect data, together with the age-optimal flight trajectory of each UAV. In the latter, the UAVs perform data collection continuously, making real-time decisions on the uploading SN and flight direction at each step. A deep reinforcement learning (DRL) framework incorporating the deep Q-network (DQN) algorithm is proposed to find the age-optimal data collection solution subject to the maximum flight velocity and energy capacity of each UAV. Numerical results are presented to show the effectiveness of the proposed methods in different scenarios.
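A minimal sketch of the per-step age dynamics that such a DRL formulation typically builds on follows; the unit time step, the reset rule, the placeholder policy, and the reward are assumptions made for illustration, and the chapter's exact state, action, and reward definitions may differ.

```python
import numpy as np

# Continuous data collection, discretized into steps: the UAV picks one SN to
# upload per step; that SN's age resets while every other SN's age grows.
n_sensors, n_steps = 5, 100
ages = np.ones(n_sensors)          # age of information of each SN, in steps

total_reward = 0.0
for _ in range(n_steps):
    # Placeholder policy: upload from the currently stalest SN. A DQN would
    # instead pick the action (uploading SN and flight direction) maximizing
    # the learned Q-value of the state (ages, UAV position, residual energy).
    chosen = int(np.argmax(ages))
    ages += 1                      # one step elapses for every SN
    ages[chosen] = 1               # fresh sample collected from the chosen SN
    total_reward += -ages.mean()   # reward: negative average AoI across SNs

print(total_reward / n_steps)
```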
The Age of Information metric, which is a measure of the freshness of a continually updating piece of information as observed at a remote monitor, has been studied for a variety of update monitoring systems. In this chapter, we introduce three network control mechanisms for controlling the age, namely, buffer size, packet deadlines, and packet management. In the case of packet deadlines, we analyze the update monitoring system for a fixed deadline and for a random exponential deadline, and derive closed-form expressions for the average age. We also derive a closed-form expression for the optimal average deadline in the random exponential case.
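For reference, the quantity being controlled is the standard age process; the definitions below are the usual ones from the AoI literature, stated here for orientation rather than as this chapter's deadline-specific expressions.

```latex
% Instantaneous age at the monitor at time t, where u(t) is the generation
% time of the freshest update received by time t:
\[
  \Delta(t) \;=\; t - u(t).
\]
% The average age is the long-run time average of the instantaneous age:
\[
  \bar{\Delta} \;=\; \lim_{T \to \infty} \frac{1}{T} \int_0^T \Delta(t)\,\mathrm{d}t .
\]
```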
In this chapter, we study the economic issues of fresh data trading markets, where data freshness is captured by the Age of Information (AoI). In our model, a destination user requests, and pays for, fresh data updates from a source provider. The destination incurs an age-related cost, modeled as a general increasing function of the AoI. To understand the economic viability and profitability of fresh data markets, we consider a pricing mechanism to maximize the source’s profit, while the destination chooses a data update schedule to trade off its payments to the source against its age-related cost. The problem is exacerbated when the source has incomplete information regarding the destination’s age-related cost, which requires one to exploit (economic) mechanism design to elicit truthful information. This chapter attempts to build such a fresh data trading framework centered around two key questions: (a) How should a source choose its pricing scheme to maximize profit in a fresh data market under complete market information? (b) Under incomplete information, how should a source design an optimal mechanism to maximize its profit while ensuring the destination’s truthful report of its age-related cost information?
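One way to write the destination's trade-off described above, as a sketch with assumed notation (the payments $p_k$, horizon $T$, and schedule variable are introduced here for illustration; the chapter's precise formulation may differ): with $g$ the general increasing function of the AoI named in the text,

```latex
% Destination's problem: choose update times (equivalently, the age process
% \Delta(t) they induce) to balance payments against the age-related cost.
\[
  \min_{\text{schedule}} \;\; \sum_{k} p_k \;+\; \int_0^T g\bigl(\Delta(t)\bigr)\,\mathrm{d}t ,
\]
% where p_k is the payment for the k-th update; the source, in turn,
% designs the pricing \{p_k\} to maximize its own profit.
```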
Optimization of information freshness in wireless networks has usually been performed based on queueing analysis, which captures only the temporal traffic dynamics associated with the transmitters and receivers. However, the effect of interference, which is dominated mainly by the interferers’ geographic locations, is not well understood. This chapter presents a theoretical framework for the analysis of the Age of Information (AoI) from a joint queueing-geometry perspective. We also provide the design of a decentralized scheduling policy that exploits local observations to make transmission decisions that minimize the AoI. To quantify the performance, we derive analytical expressions for the average AoI. Numerical results validate the accuracy of the analyses as well as the efficacy of the proposed scheme in reducing the AoI.