In this chapter we consider a class of codes known as trellis codes. Unlike block codes, trellis codes can encode an arbitrary-length sequence of information symbols to produce a sequence of coded symbols, so the notion of block length is not directly applicable. A subclass known as convolutional codes, which are linear trellis codes, has found widespread application in many communication systems.
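To make the contrast with block codes concrete, the following is a minimal sketch (not taken from the chapter) of a rate-1/2 convolutional encoder with the common generator pair $1 + D + D^2$ and $1 + D^2$; it accepts an input bit sequence of any length and produces an output twice as long, with no notion of block length involved.

```python
# Minimal sketch: rate-1/2 convolutional encoder with generators (7, 5) in octal,
# i.e., g1 = 1 + D + D^2 and g2 = 1 + D^2 (a common textbook choice, assumed here).
# The input sequence may have any length; the output has twice as many bits.

def conv_encode(bits):
    s1 = s2 = 0                     # two-bit shift register (constraint length 3)
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)     # first generator: 1 + D + D^2
        out.append(b ^ s2)          # second generator: 1 + D^2
        s1, s2 = b, s1              # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))    # -> [1, 1, 1, 0, 0, 0, 0, 1]
```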
Optimization on Riemannian manifolds, the result of smooth geometry and optimization merging into one elegant modern framework, spans many areas of science and engineering, including machine learning, computer vision, signal processing, dynamical systems and scientific computing.
This text introduces the differential geometry and Riemannian geometry concepts that will help students and researchers in applied mathematics, computer science and engineering gain a firm mathematical grounding to use these tools confidently in their research. Its charts-last approach will prove more intuitive from an optimizer's viewpoint, and all definitions and theorems are motivated to build time-tested optimization algorithms. Starting from first principles, the text goes on to cover current research on topics including worst-case complexity and geodesic convexity. Readers will appreciate the tricks of the trade sprinkled throughout the book for conducting research in this area and for writing effective numerical implementations.
Before we define any technical terms, this chapter describes simple optimization problems as they arise in data science, imaging and robotics, with a focus on the natural domain of definition of the variables (the unknowns). In so doing, we proceed through a sequence of problems whose search spaces are a Euclidean space or a linear subspace thereof (which still falls within the realm of classical unconstrained optimization), then a sphere and a product of spheres. We further encounter the set of matrices with orthonormal columns (Stiefel manifold) and a quotient thereof which retains only the subspace spanned by those orthonormal columns (Grassmann manifold). We then discuss optimization problems where the unknowns are a collection of rotations (orthogonal matrices), a matrix of fixed size and rank, and a positive definite matrix. In closing, we discuss how a classical change of variables in semidefinite programming known as the Burer–Monteiro factorization can sometimes also lead to optimization on a smooth manifold, exhibiting a benign non-convexity phenomenon.
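As a hedged illustration of the first curved search space mentioned above, the following sketch (an assumed example, not necessarily the one used in the chapter) formulates an optimization problem whose natural domain is the unit sphere: minimizing the quadratic form $x^\top A x$ over unit-norm vectors, whose solution is an eigenvector associated with the smallest eigenvalue of $A$.

```python
import numpy as np

# Assumed example: an optimization problem whose natural search space is the
# unit sphere S^{n-1}:   min  x^T A x   subject to  ||x|| = 1,   A symmetric.
# A minimizer is a unit-norm eigenvector for the smallest eigenvalue of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2

w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
x_star = V[:, 0]                    # unit-norm eigenvector, smallest eigenvalue
print(x_star @ A @ x_star, w[0])    # the cost at x_star equals the smallest eigenvalue
```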
In this chapter we consider a communication system that transmits a single bit of information using one of two signals. The receiver filters the received signal, samples the filter output, and then makes a decision about which of the two signals was transmitted. We first consider an example in which the two signals are just rectangular pulses with opposite sign. For those signals in additive white Gaussian noise (AWGN) we analyze the probability of error for a receiver that uses a filter matched to the transmitted signal. Second, we consider optimizing the system over all possible filters, signals, and decision rules. The optimal filter and signals are derived for binary modulation in which one of two signals is transmitted. Finally, the effect of imperfect receivers is considered. Approaches to analyzing a system with intersymbol interference (ISI) are discussed.
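As a rough numerical companion to the first example described above (antipodal rectangular pulses in AWGN with a matched-filter receiver), the sketch below simulates the bit error rate and compares it with the standard expression $Q(\sqrt{2E_b/N_0})$; all parameter values are assumptions chosen for illustration.

```python
import numpy as np
from math import erfc, sqrt

# Illustrative sketch: antipodal rectangular pulses in AWGN with a matched-filter
# receiver.  For this scheme the error probability is Q(sqrt(2*Eb/N0)), which the
# simulation below should approximate.  Parameter values are arbitrary.
rng = np.random.default_rng(1)
ns = 8                        # samples per rectangular pulse
Eb_N0_dB = 6.0
Eb = ns                       # unit-amplitude pulse over ns samples
N0 = Eb / 10**(Eb_N0_dB / 10)

bits = rng.integers(0, 2, 100_000)
symbols = 2 * bits - 1                                 # map bits to +/-1
tx = np.repeat(symbols, ns)                            # rectangular pulses
rx = tx + rng.normal(0, sqrt(N0 / 2), tx.size)         # add white Gaussian noise

# Matched filter = correlate with the rectangular pulse, then sample once per bit.
stats = rx.reshape(-1, ns).sum(axis=1)
decisions = (stats > 0).astype(int)

ber_sim = np.mean(decisions != bits)
ber_theory = 0.5 * erfc(sqrt(Eb / N0))                 # Q(sqrt(2*Eb/N0))
print(ber_sim, ber_theory)                             # both ~ 2.4e-3 at 6 dB
```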
Digital communication systems are ubiquitous. Examples include cell phones, Bluetooth, WiFi, and cable modems. This book explores in depth how these systems work and the fundamental limits on their performance. We begin in this chapter with a high-level description of digital communication systems so as to understand the trade-offs in designing such a system. We also explore the fundamental limits on the data rate achievable for a given bandwidth and on the energy needed for a given level of noise.
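The fundamental limit alluded to here is the Shannon capacity of the band-limited AWGN channel, $C = W \log_2(1 + P/(N_0 W))$; the numbers in the following sketch are arbitrary and serve only to illustrate the trade-off between bandwidth, power and noise.

```python
from math import log2

# Illustrative numbers only: capacity of a band-limited AWGN channel,
# C = W * log2(1 + P / (N0 * W)) bits per second.
W = 1e6          # bandwidth in Hz
P = 1e-3         # received signal power in W
N0 = 1e-10       # one-sided noise power spectral density in W/Hz

C = W * log2(1 + P / (N0 * W))
print(f"Capacity: {C / 1e6:.2f} Mbit/s")   # ~ 3.46 Mbit/s for these numbers
```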
Noise is an important factor limiting the performance of communication systems. As such, it is important to understand the statistical properties of noise. Noise at the input of a receiver affects the performance of a communication system: the received signal consists of the desired signal plus noise. Because receivers filter the received signal, it is important to be able to characterize the noise at the output of a linear system (i.e., a filter).
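As a small illustration of the last point, the following sketch (with assumed values) checks numerically that white Gaussian noise with per-sample variance $N_0/2$ passed through an FIR filter $h$ has output power $(N_0/2)\sum_n h[n]^2$.

```python
import numpy as np

# Sketch with assumed values: white Gaussian noise with per-sample variance N0/2
# passed through an FIR filter h has output power (N0/2) * sum(h^2).
rng = np.random.default_rng(2)
N0 = 2.0
h = np.array([0.5, 1.0, 0.5])                 # arbitrary FIR filter

noise = rng.normal(0, np.sqrt(N0 / 2), 1_000_000)
out = np.convolve(noise, h, mode="valid")

print(out.var())                              # empirical output power
print((N0 / 2) * np.sum(h**2))                # theoretical value: 1.5
```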
This chapter details how to work on several manifolds of practical interest, focusing on embedded submanifolds of linear spaces. It provides two tables which point to Manopt implementations of those manifolds and to the various places in the book that explain how to work with products of manifolds. The manifolds detailed in this chapter include Euclidean spaces, unit spheres, the Stiefel manifold (orthonormal matrices), the orthogonal group and the associated group of rotations, the manifold of matrices of a given size and rank, and hyperbolic space in the hyperboloid model. It further discusses geometric tools for optimization on a manifold defined by (regular) constraints $h(x) = 0$ in general. That last section notably makes it possible to connect concepts from Riemannian optimization with classical concepts from constrained optimization in linear spaces, namely, Lagrange multipliers and KKT conditions under the linear independence constraint qualification (LICQ).
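To illustrate the constraint-defined viewpoint on one of the manifolds listed above, the following sketch (plain NumPy, not the book's Manopt code) treats the Stiefel manifold as the zero set of $h(X) = X^\top X - I$ and implements the usual tangent-space projection and a QR-based retraction; the formulas assume the embedding (Frobenius) metric.

```python
import numpy as np

# Hedged sketch: the Stiefel manifold St(n, p) = {X : X^T X = I_p}, i.e., the
# zero set of h(X) = X^T X - I.  Tangent space at X: {V : X^T V + V^T X = 0}.

def proj_tangent(X, V):
    """Orthogonal projection of V onto the tangent space of St(n, p) at X."""
    XtV = X.T @ V
    return V - X @ (XtV + XtV.T) / 2

def retract_qr(X, V):
    """Retraction: Q factor of a thin QR decomposition of X + V."""
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))             # fix column signs for a smooth map

rng = np.random.default_rng(4)
X, _ = np.linalg.qr(rng.standard_normal((6, 3)))     # a point on St(6, 3)
V = proj_tangent(X, rng.standard_normal((6, 3)))
Y = retract_qr(X, V)

print(np.linalg.norm(X.T @ V + V.T @ X))       # ~ 0: V is tangent at X
print(np.linalg.norm(Y.T @ Y - np.eye(3)))     # ~ 0: Y is on the manifold
```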
The main purpose of this chapter is to define and analyze Riemannian gradient descent methods. This family of algorithms aims to minimize real-valued functions (called cost functions) on manifolds. They apply to general manifolds, hence in particular to embedded submanifolds of linear spaces; the previous chapter provides all necessary geometric tools for that setting. The initial technical steps involve constructing first-order Taylor expansions of the cost function along smooth curves and identifying necessary optimality conditions (at a solution, the Riemannian gradient must vanish). The chapter then presents the algorithm and proposes a worst-case iteration complexity analysis. The main conclusion is that, under a Lipschitz-type assumption on the gradient of the cost function composed with the retraction, the algorithm finds a point with gradient norm smaller than $\varepsilon$ in at most a multiple of $1/\varepsilon^2$ iterations. The chapter ends with three optional sections, which discuss local convergence rates, detail how to compute gradients in practice and describe how to check that a gradient is correctly implemented.
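A minimal sketch of the algorithm's overall structure follows, under assumptions made purely for illustration: the unit sphere as manifold, a quadratic cost, projection onto the tangent space to obtain the Riemannian gradient, renormalization as retraction, and a fixed step size in place of the line-search analyzed in the chapter.

```python
import numpy as np

# Minimal sketch: Riemannian gradient descent on the unit sphere for the assumed
# cost f(x) = x^T A x.  Riemannian gradient: project the Euclidean gradient 2Ax
# onto the tangent space at x; retraction: renormalize; fixed step size.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2

def rgrad(x):
    g = 2 * A @ x
    return g - (x @ g) * x          # remove the component along x

x = rng.standard_normal(6)
x /= np.linalg.norm(x)
for k in range(1000):
    g = rgrad(x)
    if np.linalg.norm(g) < 1e-8:    # stop once the Riemannian gradient is small
        break
    x = x - 0.1 * g
    x /= np.linalg.norm(x)          # retract back onto the sphere

print(k, x @ A @ x, np.min(np.linalg.eigvalsh(A)))  # cost should be close to the minimum
```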