These modulation techniques are widely used in practice. We quantify the trade-off between data rate and energy for these techniques and compare their performance with the capacity limits discussed in Chapter 1. We begin by discussing MPSK, where the information determines the phase of a sinusoidal signal. Second, we discuss PAM and QAM, in which the amplitude is varied in one and two dimensions, respectively, depending on the data. Third, we discuss orthogonal modulation, whose bandwidth efficiency is very low but whose required energy is also very low.
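As a concrete illustration of the MPSK mapping described above, here is a minimal sketch that maps groups of bits to phases on the unit circle. It assumes natural (not Gray) bit labeling, and the function name and parameters are illustrative only:

```python
import numpy as np

def mpsk_symbols(bits, M):
    """Map a bit sequence to M-PSK constellation points.

    Each group of log2(M) bits selects a phase 2*pi*m/M on the unit
    circle. (Illustrative only; practical systems typically use Gray
    labeling so that neighboring phases differ in a single bit.)
    """
    k = int(np.log2(M))
    assert len(bits) % k == 0, "bit count must be a multiple of log2(M)"
    groups = np.reshape(bits, (-1, k))
    # Interpret each group as the binary expansion of an integer m.
    m = groups.dot(2 ** np.arange(k - 1, -1, -1))
    return np.exp(1j * 2 * np.pi * m / M)

# Example: 8-PSK carries 3 bits per symbol, so 6 bits give 2 symbols.
print(mpsk_symbols([0, 0, 1, 1, 1, 0], M=8))
```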
In this chapter we first discuss the relationship between transmitted signals and received signals. The propagation medium has three effects on the transmitted signals: path loss, shadowing, and multipath fading. Path loss refers to the relation between the average received power and the transmitted power as a function of distance. Shadowing refers to situations in which buildings or other objects block the line of sight between the transmitter and receiver.
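To make the path loss concept concrete, the sketch below evaluates a common log-distance model. The constants (reference-distance gain, path loss exponent) are assumed values for illustration, not ones taken from this chapter:

```python
import numpy as np

def received_power_dbm(pt_dbm, d, d0=1.0, k_db=-30.0, gamma=3.0):
    """Log-distance path loss model (a common textbook simplification).

    pt_dbm : transmit power in dBm
    d      : transmitter-receiver distance in meters (d >= d0)
    d0     : reference distance in meters
    k_db   : path gain at the reference distance, in dB (assumed value)
    gamma  : path loss exponent (2 for free space; roughly 3-5 in urban settings)
    """
    return pt_dbm + k_db - 10.0 * gamma * np.log10(d / d0)

# Example: average received power 100 m from a 30 dBm transmitter.
print(received_power_dbm(30.0, 100.0))  # -60 dBm with these assumed constants
```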
The optimization algorithms from Chapters 4 and 6 require only rather simple tools from Riemannian geometry, all covered in Chapters 3 and 5 for embedded submanifolds and then generalized in Chapter 8. This chapter provides additional geometric tools that yield deeper insight and help develop more sophisticated algorithms. It opens with the Riemannian distance, then discusses exponential maps as retractions which generate geodesics, paired with a careful discussion of what it means to invert the exponential map. The chapter then defines parallel transport to compare tangent vectors in different tangent spaces. Next, we take a deep dive into the notion of Lipschitz continuity for gradients and Hessians on Riemannian manifolds, aiming to connect these concepts with the Lipschitz-type regularity assumptions required to analyze gradient descent and trust regions. The chapter then defines transporters, which can be seen as a relaxed kind of parallel transport. It closes with a discussion of how to approximate Riemannian Hessians with finite differences of gradients via transporters, and with an introduction to the differentiation of tensor fields of all orders.
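To illustrate the closing idea, here is a minimal numerical sketch of approximating a Riemannian Hessian-vector product by finite differences of gradients, worked on the unit sphere with a projection-based transporter. The specific retraction, transporter, and cost function are assumptions made for the example:

```python
import numpy as np

def proj(x, u):
    """Orthogonal projection of u onto the tangent space of the unit sphere at x."""
    return u - np.dot(x, u) * x

def rgrad(egrad, x):
    """Riemannian gradient on the sphere: project the Euclidean gradient."""
    return proj(x, egrad(x))

def retract(x, v):
    """Metric retraction on the sphere: normalize x + v."""
    y = x + v
    return y / np.linalg.norm(y)

def hess_fd(egrad, x, v, t=1e-6):
    """Finite-difference approximation of the Riemannian Hessian applied to v.

    Step along v with a retraction, take the gradient there, bring it back
    with a projection-based transporter, and difference with the gradient at x.
    """
    y = retract(x, t * v)
    g_back = proj(x, rgrad(egrad, y))   # transporter: project onto T_x
    return (g_back - rgrad(egrad, x)) / t

# Example: f(x) = x' A x on the sphere (a Rayleigh quotient).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
x = rng.standard_normal(5); x /= np.linalg.norm(x)
v = proj(x, rng.standard_normal(5))
print(hess_fd(lambda z: 2 * A @ z, x, v))
```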
Convexity is one of the most fruitful concepts in classical optimization. Geodesic convexity generalizes that concept to optimization on Riemannian manifolds. There are several ways to carry out such a generalization: This chapter favors permissive definitions which are sufficient to retain the most important properties for optimization purposes (e.g., local optima are global optima). Alternative definitions are discussed, highlighting the fact that all coincide for the special case of Hadamard manifolds (essentially, negatively curved Riemannian manifolds). The chapter continues with a discussion of the special properties of differentiable geodesically (strictly, strongly) convex functions, and builds on them to show global linear convergence of Riemannian gradient descent, assuming strong geodesic convexity and Lipschitz continuous gradients (via the Polyak–Łojasiewicz inequality). The chapter closes with two examples of manifolds where geodesic convexity has proved useful, namely, the positive orthant with a log-barrier metric (recovering geometric programming), and the cone of positive definite matrices with the log-Euclidean and the affine invariant Riemannian metrics.
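For reference, here is a standard statement of the Polyak–Łojasiewicz route to linear convergence, written in Euclidean notation for concreteness; on a Riemannian manifold the gradient and norm are the ones induced by the metric:

```latex
% Polyak-Lojasiewicz inequality for a mu-strongly (geodesically) convex f
% with global minimizer x*: for all x,
\[
  \tfrac{1}{2}\,\|\operatorname{grad} f(x)\|^2 \;\geq\; \mu\,\bigl(f(x) - f(x^\ast)\bigr).
\]
% Combined with an L-Lipschitz gradient and step size 1/L, gradient descent
% satisfies the linear (geometric) decrease
\[
  f(x_{k+1}) - f(x^\ast) \;\leq\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)\bigl(f(x_k) - f(x^\ast)\bigr).
\]
```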
In this chapter we review the basic concepts of digital modulation and demodulation in the absence of noise. Digital modulation is the process of mapping information bits into transmitted waveforms; demodulation is the process of mapping received signals back into bits. We also review the concepts from signals and systems needed to understand modulation and demodulation.
Because sinusoidal signals are fundamental in communication systems, we first review representations of sinusoidal signals. Next, we show how vectors can be mapped into waveforms, as described in Chapter 1. For binary modulation, 1 bit of information (a 0 or a 1) is mapped into one of two vectors. These vectors, along with a set of orthonormal waveforms (described below), determine two signals, so 1 bit of information is mapped into one of two signals, one of which is transmitted.
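A minimal sketch of that binary mapping, using a single unit-energy waveform and antipodal amplitudes (a BPSK-style special case chosen for illustration; the discrete-time waveform and sampling details are assumptions of the example):

```python
import numpy as np

def binary_modulate(bits, E=1.0, samples_per_symbol=8):
    """Map each bit to one of two antipodal signals built from one
    orthonormal waveform. The general framework allows any pair of
    vectors; this antipodal choice is just a simple instance."""
    # Unit-energy rectangular waveform phi[n] (discrete-time stand-in).
    phi = np.ones(samples_per_symbol) / np.sqrt(samples_per_symbol)
    # Bit 0 -> +sqrt(E) * phi, bit 1 -> -sqrt(E) * phi.
    amplitudes = np.where(np.asarray(bits) == 0, +np.sqrt(E), -np.sqrt(E))
    return np.concatenate([a * phi for a in amplitudes])

print(binary_modulate([0, 1, 1]))
```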
As an entry point to differential geometry, this chapter defines embedded submanifolds as subsets of linear spaces which can be locally defined by equations satisfying certain regularity conditions. Such sets can be linearized, yielding the notion of tangent space. The chapter further defines what it means for a map to and from a submanifold to be smooth, and how to differentiate such maps. The (disjoint) union of all tangent spaces forms the tangent bundle which is also a manifold. That makes it possible to define vector fields (maps which select a tangent vector at each point) and retractions (smooth maps which generate curves passing through any point with any given velocity). The chapter then proceeds to endow each tangent space with an inner product (turning each one into a Euclidean space). Under some regularity conditions, this extra structure turns the manifold into a Riemannian manifold. This makes it possible to define the Riemannian gradient of a real function. Taken together, these concepts are sufficient to build simple algorithms in the next chapter. An optional closing section defines local frames: They are useful for proofs but can be skipped for practical matters.
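As a small worked instance of these definitions, the sketch below uses the Stiefel manifold of orthonormal matrices embedded in a matrix space: tangent-space projection, a QR-based retraction, and the Riemannian gradient obtained by projecting the Euclidean gradient. The cost function is illustrative:

```python
import numpy as np

def proj_stiefel(X, U):
    """Project U onto the tangent space of the Stiefel manifold at X
    (orthonormal n-by-p matrices, with the embedded Euclidean metric)."""
    sym = (X.T @ U + U.T @ X) / 2.0
    return U - X @ sym

def retract_qr(X, V):
    """QR-based retraction: the Q factor of X + V stays on the manifold."""
    Q, R = np.linalg.qr(X + V)
    # Fix signs so the retraction varies continuously (positive diagonal of R).
    return Q * np.sign(np.diag(R))

def rgrad(egrad, X):
    """Riemannian gradient: projection of the Euclidean gradient."""
    return proj_stiefel(X, egrad(X))

# Example: f(X) = trace(X' A X) on the Stiefel manifold St(5, 2).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); A = A + A.T
X = np.linalg.qr(rng.standard_normal((5, 2)))[0]
G = rgrad(lambda Z: 2 * A @ Z, X)
print(np.linalg.norm(X.T @ G + G.T @ X))  # ~0: G is tangent at X
```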
The main purpose of this chapter is to motivate and analyze the Riemannian trust-region method (RTR). This optimization algorithm shines brightest when it uses both the Riemannian gradient and the Riemannian Hessian. It applies to optimization on manifolds in general, and thus to embedded submanifolds of linear spaces in particular. For that setting, the previous chapters introduce the necessary geometric tools. Toward RTR, the chapter first introduces a Riemannian version of Newton's method, motivated by first developing second-order optimality conditions. Each iteration of Newton's method requires solving a linear system of equations in a tangent space. To this end, the classical conjugate gradients method (CG) is reviewed. Then, RTR is presented with a worst-case convergence analysis guaranteeing that, under some assumptions, it finds points which approximately satisfy first- and second-order necessary optimality conditions. Subproblems can be solved with a variant of CG called truncated-CG (tCG). The chapter closes with three optional sections: one about local convergence, one providing simpler conditions to ensure convergence, and one about checking Hessians numerically.
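Since each Newton step solves a linear system in a tangent space, a minimal sketch of the classical CG iteration may help. This is plain CG on a symmetric positive definite system, without the trust-region boundary and negative-curvature stopping rules that distinguish truncated-CG:

```python
import numpy as np

def conjugate_gradients(apply_H, b, tol=1e-10, max_iter=100):
    """Classical CG for H v = b with H symmetric positive definite.

    In the Riemannian Newton method, apply_H would be the Hessian as an
    operator on the tangent space and b = -grad f(x)."""
    v = np.zeros_like(b)
    r = b.copy()          # residual b - H v
    p = r.copy()          # search direction
    rr = r @ r
    for _ in range(max_iter):
        Hp = apply_H(p)
        alpha = rr / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return v

# Example: solve a small SPD system.
H = np.array([[4., 1, 0, 0], [1, 3, 1, 0], [0, 1, 2, 1], [0, 0, 1, 2]])
b = np.ones(4)
v = conjugate_gradients(lambda u: H @ u, b)
print(np.linalg.norm(H @ v - b))  # ~0
```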
This chapter is concerned with error control coding, sometimes called forward error correction (FEC), using block codes. Shannon’s result in Chapter 1 and the cutoff rate theorem in Chapter 6 indicate that the performance of communication systems is generally better when the dimensionality of the signal set is large. Error control codes are used to define large-dimensional signal sets.
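As a small concrete instance of a block code, a (7,4) Hamming code maps each 4-bit message to a 7-bit codeword, placing 16 codewords in a 7-dimensional binary space with minimum Hamming distance 3 (so single errors are correctable). The generator matrix below is one standard choice:

```python
import numpy as np

# Generator matrix G = [I | P] for a (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg):
    """Encode a 4-bit message with the generator matrix, arithmetic mod 2."""
    return np.asarray(msg) @ G % 2

print(encode([1, 0, 1, 1]))  # one of the 16 codewords
```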
Optimization on Riemannian manifolds, the result of smooth geometry and optimization merging into one elegant modern framework, spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing. This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade for conducting research and for numerical implementations sprinkled throughout the book.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This final volume, Learning, builds on the foundational topics established in volume I to provide a thorough introduction to learning methods, addressing techniques such as least-squares methods, regularization, online learning, kernel methods, feedforward and recurrent neural networks, meta-learning, and adversarial attacks. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 350 end-of-chapter problems (including complete solutions for instructors), 280 figures, 100 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Foundations and Inference, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, data and inference.
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets and downloadable Matlab code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.
With the rapid development of unmanned aerial vehicles (UAVs), extensive attention has been paid to UAV-aided data collection in wireless sensor networks. However, it is very challenging to maintain the information freshness of the sensor nodes (SNs) subject to the UAV’s limited energy capacity and/or the large network scale. This chapter introduces two modes of UAV-aided data collection: single and continuous. In the former, the UAVs are dispatched to gather sensing data from each SN exactly once according to a preplanned data collection strategy. To keep information fresh, a multistage approach is proposed to find a set of data collection points at which the UAVs hover to collect data, together with the age-optimal flight trajectory of each UAV. In the latter, the UAVs perform data collection continuously, making real-time decisions at each step about which SN uploads and in which direction to fly. A deep reinforcement learning (DRL) framework incorporating the deep Q-network (DQN) algorithm is proposed to find the age-optimal data collection solution subject to the maximum flight velocity and energy capacity of each UAV. Numerical results are presented to show the effectiveness of the proposed methods in different scenarios.
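To make "information freshness" concrete, here is a minimal age-of-information (AoI) bookkeeping sketch: each node's age grows by one per step and resets when the UAV collects from it. The greedy visit-the-oldest policy is only an illustrative baseline, not the chapter's DRL solution, and flight time and energy constraints are ignored:

```python
import numpy as np

num_nodes, horizon = 5, 20
age = np.zeros(num_nodes, dtype=int)
total_age = 0

for t in range(horizon):
    target = int(np.argmax(age))   # baseline policy: serve the oldest node
    age += 1                       # every node's information gets staler
    age[target] = 0                # collection resets that node's age
    total_age += age.sum()

print("average AoI per node per step:", total_age / (horizon * num_nodes))
```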