Signal-processing computations may arise naturally in a finite field, so it is appropriate to construct fast algorithms in the finite field GF(q). Computations in a finite field might also arise as a surrogate for a computation that originally arises in the real field or the complex field. In this situation, a computational task in one field is embedded into a different field, where that computational task is executed and the answer is passed back to the original field. There are several reasons why one might do this. It may be that the computation is easier to perform in the new field, so one saves work or can use a simpler implementation. It may be that devices that do arithmetic in one field are readily available and can be used to do computations for a second field, if those computations are suitably reformulated. In other situations, one may want to devise a standard computational module that performs bulk computations and to use that module for a diversity of signal-processing tasks. In seeking standardization, one may want to fit one kind of computational task into a different kind of structure.
Another reason for using a surrogate field is to improve computational precision. Computations in a finite field are exact; there is no roundoff error. If a problem involving real or complex numbers can be embedded into a finite field to perform a calculation, it may be possible to reduce the computational noise in the answer.
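As a concrete sketch of this idea (the sequences and the modulus 101 below are illustrative assumptions, not values taken from the text), an integer convolution can be carried out entirely in the surrogate field GF(101). Because finite-field arithmetic is exact, the residues equal the true coefficients whenever those coefficients are known to be smaller than the modulus:

```python
def convolve_mod(a, b, p):
    # Linear convolution with every operation reduced modulo the
    # prime p, i.e., carried out in the surrogate field GF(p).
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

# If the true output coefficients are known to lie below p, the
# residues are the exact answer; no roundoff error is possible.
print(convolve_mod([1, 2, 3], [4, 5], 101))  # → [4, 13, 22, 15]
```

Here the largest true coefficient is 22 < 101, so the computation in GF(101) returns the exact integer convolution.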
A standard method of solving a system of n linear equations in n unknowns is to write the system of equations as a matrix equation A f = g, and to solve it either by computing the matrix inverse and writing f = A⁻¹g or, alternatively, by using the method known as gaussian elimination. The standard methods of computing a matrix inverse have complexity proportional to n³. Sometimes, the matrix has a special structure that can be exploited to obtain a faster algorithm.
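The general method can be sketched as follows; this is a minimal illustration of elimination with partial pivoting, and the small 2 × 2 system used to exercise it is an assumption of ours, not an example from the text:

```python
def gaussian_solve(A, g):
    # Solve A f = g by gaussian elimination with partial pivoting;
    # the work grows as n^3 for an n x n system.
    n = len(A)
    M = [row[:] + [gi] for row, gi in zip(A, g)]  # augmented matrix
    for c in range(n):
        # Partial pivoting: bring the largest entry in column c to row c.
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= factor * M[c][k]
    # Back substitution on the resulting upper-triangular system.
    f = [0.0] * n
    for r in range(n - 1, -1, -1):
        f[r] = (M[r][n] - sum(M[r][k] * f[k] for k in range(r + 1, n))) / M[r][r]
    return f
```

Partial pivoting is not needed for correctness in exact arithmetic, but it controls roundoff error when the computation is done in the real field.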
A Toeplitz system of equations is a system of linear equations described by a Toeplitz matrix A. The problem of solving a Toeplitz system of equations arises in a great many applications, including spectral estimation, linear prediction, autoregressive filter design, and error-control codes. Because the Toeplitz system is highly structured, methods of solution are available that are far superior to the general methods of solving systems of linear equations. These methods are the subject of this chapter, and are valid in any algebraic field.
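For the symmetric Toeplitz systems that arise in linear prediction, the Levinson–Durbin recursion solves the Yule–Walker normal equations with work proportional to n² rather than n³. The following is a minimal sketch; the autocorrelation values in the test are illustrative assumptions:

```python
def levinson_durbin(r):
    # Solve the Yule-Walker equations for the prediction coefficients a,
    # where r[0], ..., r[p] are autocorrelation values and the system
    # matrix is symmetric Toeplitz.  Returns (a, e) with a[0] = 1 and
    # e the final prediction-error energy.
    p = len(r) - 1
    a = [1.0] + [0.0] * p
    e = r[0]
    for m in range(1, p + 1):
        # Reflection coefficient for order m.
        k = -sum(a[i] * r[m - i] for i in range(m)) / e
        # Order update: combine the old solution with its reversal.
        a = [a[i] + k * a[m - i] for i in range(m + 1)] + a[m + 1:]
        e *= 1.0 - k * k
    return a, e
```

Each order update uses the Toeplitz structure directly, which is why only O(n) work is needed per order, for O(n²) in total.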
The algorithms of this chapter are somewhat distant from those we have studied for convolution and for Fourier transforms. Convolutions and transforms are essentially problems of matrix multiplication, whereas this chapter deals with the solution of a system of linear equations. The solution of a system of linear equations is closer to the task of matrix inversion. It should be no surprise that we do not build on earlier algorithms directly, though techniques such as doubling may prove useful.
Number theory has already appeared in earlier chapters of this book, where it was used in the design of fast Fourier transform algorithms. There we made use of some ideas that will only now be proved. This chapter, which is a mathematical interlude, develops the basic facts of number theory – some that were used earlier in the book and some that we may need later.
We also return to the study of fields to develop the topic of an extension field more fully. The structure of algebraic fields will be important to the construction of number theory transforms in Chapter 10 and also to the construction of some multidimensional convolution algorithms in Chapter 11 and for some multidimensional Fourier transform algorithms in Chapter 12.
Elementary number theory
Within the integer quotient ring Zq, some of the elements may be coprime to q, and, unless q is a prime, others will divide q. It is important to us to know how many elements there are of each type.
Definition 9.1.1 (Euler) The totient function, denoted ϕ(q), where q is an integer larger than one, is the number of nonzero elements in Zq that are coprime to q. For q equal to one, ϕ(q) = 1.
When q is a prime p, all the nonzero elements of Zp are coprime to p, and so ϕ(p) = p − 1.
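The definition can be checked directly by a brute-force count over Zq; this is a sketch of the definition, not an efficient way to compute ϕ:

```python
from math import gcd

def totient(q):
    # phi(q): the number of integers in 1..q-1 coprime to q, with
    # phi(1) = 1 by convention, exactly as in the definition above.
    if q == 1:
        return 1
    return sum(1 for k in range(1, q) if gcd(k, q) == 1)
```

For a prime p every one of the p − 1 nonzero residues is counted, so totient(7) returns 6, in agreement with ϕ(p) = p − 1.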
One of our major goals is the development of a collection of techniques for computing the discrete Fourier transform. We shall find many such techniques, each with different advantages and each best used in different circumstances. There are two basic strategies. One strategy is to change a one-dimensional Fourier transform into a two-dimensional Fourier transform of a form that is easier to compute. The second strategy is to change a one-dimensional Fourier transform into a small convolution, which is then computed by using the techniques described in Chapter 5. Good algorithms for computing the discrete Fourier transform will use either or both of these strategies to minimize the computational load. In Chapter 6, we shall describe how the fast Fourier transform algorithms are used to perform, in conjunction with the convolution theorem, the cyclic convolutions that are used to compute the long linear convolutions forming the output of a digital filter.
Throughout the chapter, we shall usually regard the complex field as the field of the computation, or perhaps the real field. However, most of the algorithms we study do not depend on the particular field over which the Fourier transform is defined. In such cases, the algorithms are valid in an arbitrary field. In some cases, the general idea behind an algorithm does not depend on the field over which the Fourier transform is defined, but some small detail of the algorithm may depend on the field.
In earlier chapters, we saw ways in which the Fourier transform can be broken into pieces and ways in which a convolution algorithm can be turned into an algorithm for the Fourier transform. It also is possible to break a multidimensional Fourier transform into pieces and to turn an algorithm for the multidimensional convolution into an algorithm for the multidimensional Fourier transform. The possibilities now are even richer than they were in the one-dimensional case. We shall discuss a variety of methods.
The algorithms for multidimensional Fourier transforms are most easily studied by treating the two-dimensional Fourier transform as representative of the more general case. The discussion and the formulas for the two-dimensional Fourier transforms extend directly to the multidimensional Fourier transforms.
Multidimensional Fourier transforms arise naturally from problems that are intrinsically multidimensional. They also arise artificially as a way of computing a one-dimensional Fourier transform. This chapter includes such methods for computing a one-dimensional Fourier transform, and so we will give practical methods for building large one-dimensional Fourier transform algorithms from the small one-dimensional Fourier transform algorithms that were studied in Chapter 3.
Small-radix Cooley–Tukey algorithms
A two-dimensional discrete Fourier transform can be computed by applying the Cooley–Tukey FFT first to each row and then to each column.
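The row–column decomposition can be sketched as follows. For brevity, the naive one-dimensional DFT below stands in for a true Cooley–Tukey FFT; in practice each call would be replaced by a fast transform of the appropriate length:

```python
import cmath

def dft(x):
    # Naive one-dimensional DFT (a stand-in for a Cooley-Tukey FFT).
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def dft2(a):
    # Two-dimensional DFT by the row-column method:
    # transform every row, then transform every column of the result.
    rows = [dft(row) for row in a]
    cols = [dft(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]  # transpose back
```

A constant 2 × 2 array transforms to a single nonzero coefficient at the origin, which is a quick sanity check on the row–column separation.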
Just as one can define a one-dimensional convolution, so one can define a multidimensional convolution. Multidimensional linear convolutions and cyclic convolutions can be defined in any field of interest on multidimensional arrays of data and are useful in many ways. We have seen in earlier chapters that multidimensional arrays and multidimensional convolutions can be created artificially as part of an algorithm for processing a one-dimensional data vector. Multidimensional arrays also arise naturally in many signal-processing problems, especially in the processing of image data such as satellite reconnaissance photographs, medical imagery including X-ray images, seismic records, and electron micrographs.
This chapter will begin the study of fast algorithms for multidimensional convolutions by nesting fast algorithms for one-dimensional convolutions. Then we shall study ways to construct a fast algorithm for a one-dimensional cyclic convolution by temporarily mapping it into a multidimensional convolution, a procedure that is known as the Agarwal–Cooley convolution algorithm. The Agarwal–Cooley algorithm for one-dimensional cyclic convolution is a powerful adjunct to the convolution methods studied in Chapter 6, which become unwieldy for large blocklength. It gives a way to build algorithms for large one-dimensional cyclic convolutions by combining the small convolution algorithms. Then, in the latter half of the chapter, we shall study methods that are derived specifically to compute two-dimensional convolutions.
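The heart of the Agarwal–Cooley construction is the Chinese-remainder index map, which is one-to-one whenever the two factors of the blocklength are coprime. The following sketch demonstrates the mapping on a length-6 convolution; the example data and the helper names are our own illustrative assumptions, and a production version would replace the brute-force convolutions with fast small algorithms:

```python
def cyclic_conv_1d(a, b):
    # Brute-force length-n cyclic convolution, used here as a reference.
    n = len(a)
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]

def cyclic_conv_2d(A, B):
    # Brute-force n1 x n2 two-dimensional cyclic convolution.
    n1, n2 = len(A), len(A[0])
    return [[sum(A[k][l] * B[(i - k) % n1][(j - l) % n2]
                 for k in range(n1) for l in range(n2))
             for j in range(n2)] for i in range(n1)]

def crt_map(x, n1, n2):
    # Chinese-remainder index map x[i] -> X[i mod n1][i mod n2];
    # one-to-one because gcd(n1, n2) = 1.
    X = [[0] * n2 for _ in range(n1)]
    for i, v in enumerate(x):
        X[i % n1][i % n2] = v
    return X

def crt_unmap(X, n1, n2):
    # Inverse of crt_map.
    return [X[i % n1][i % n2] for i in range(n1 * n2)]

def agarwal_cooley(a, b, n1, n2):
    # Length n1*n2 cyclic convolution computed as an n1 x n2
    # two-dimensional cyclic convolution under the CRT index map.
    C = cyclic_conv_2d(crt_map(a, n1, n2), crt_map(b, n1, n2))
    return crt_unmap(C, n1, n2)
```

Because index addition modulo n₁n₂ corresponds, under the map, to componentwise index addition modulo n₁ and n₂, the unmapped two-dimensional result agrees exactly with the one-dimensional cyclic convolution.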
Nested convolution algorithms
A two-dimensional convolution is an operation on a pair of two-dimensional arrays.
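Written directly from the definition, a two-dimensional linear convolution of an m₁ × m₂ array with an n₁ × n₂ array produces an (m₁ + n₁ − 1) × (m₂ + n₂ − 1) array. The following brute-force sketch (index names are ours) makes the operation concrete:

```python
def convolve_2d(g, h):
    # Two-dimensional linear convolution, directly from the definition:
    # s[i+k][j+l] accumulates g[i][j] * h[k][l].
    m1, m2 = len(g), len(g[0])
    n1, n2 = len(h), len(h[0])
    s = [[0] * (m2 + n2 - 1) for _ in range(m1 + n1 - 1)]
    for i in range(m1):
        for j in range(m2):
            for k in range(n1):
                for l in range(n2):
                    s[i + k][j + l] += g[i][j] * h[k][l]
    return s
```

This direct form requires m₁m₂n₁n₂ multiplications, which is the cost that the fast algorithms of this chapter set out to reduce.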
Algorithms for computation are found everywhere, and efficient versions of these algorithms are highly valued by those who use them. We are mainly concerned with certain types of computation, primarily those related to signal processing, including the computations found in digital filters, discrete Fourier transforms, correlations, and spectral analysis. Our purpose is to present the advanced techniques for fast digital implementation of these computations. We are not concerned with the function of a digital filter or with how it should be designed to perform a certain task; our concern is only with the computational organization of its implementation. Nor are we concerned with why one should want to compute, for example, a discrete Fourier transform; our concern is only with how it can be computed efficiently. Surprisingly, there is an extensive body of theory dealing with this specialized topic – the topic of fast algorithms.
Introduction to fast algorithms
An algorithm, like most other engineering devices, can be described either by an input/output relationship or by a detailed explanation of its internal construction. When one applies the techniques of signal processing to a new problem one is concerned only with the input/output aspects of the algorithm. Given a signal, or a data record of some kind, one is concerned with what should be done to this data, that is, with what the output of the algorithm should be when such and such a data record is the input.
Analysing and designing reliable and fast wireless networks requires an understanding of the theory underpinning these systems and the engineering complexities of their implementation. This text describes the underlying principles and major applications of high-speed wireless technologies, with emphasis on ultra-wideband (UWB) wireless systems, 3G long-term evolution, and 4G mobile networks. Key topics such as cross-layer optimization are discussed in detail and various forms of UWB, including multi-band OFDM UWB, are covered. Recent research developments are described before identifying the scope and direction for future research. The overlay problem (interference problem) in UWB is discussed, and the author aims to illustrate that OFDM is not the best wireless access technique for high-speed transmission. Covering the latest technologies in the area, this book will be a valuable resource for graduate students of electrical and computer engineering as well as practitioners in the wireless communications industry.
Network operators are faced with the challenge of maximizing the quality of voice transmissions in wireless communications without impairing speech or data transmission. This book provides a comprehensive survey of voice quality algorithms, features, interactions and trade-offs at the device and system levels. The book elaborates on the root cause of impairments and ways for resolving them, as well as methodologies for measuring and quantifying voice quality before and after applying the remedies. A 'troubleshooting and case studies' chapter provides a useful approach to identifying and solving network impairments. Avoiding complex mathematics, the approach is based on real and sizable field experience supported by scientific and laboratory analysis. This title is suitable for practitioners in the wireless communications industry and graduate students in electrical engineering. Further resources, including a range of audio examples, are available online at www.cambridge.org/9781107407183.
Optical Switching Networks describes all the major switching paradigms developed for modern optical networks, discussing their operation, advantages, disadvantages and implementation. Following a review of the evolution of optical WDM networks, an overview of future trends is set out. The latest developments in optical access, local, metropolitan, and wide area networks are covered, including detailed technical descriptions of generalized multiprotocol label switching, waveband switching, photonic slot routing, optical flow, burst and packet switching. The convergence of optical and wireless access networks is also discussed, as are the IEEE 802.17 Resilient Packet Ring and IEEE 802.3ah Ethernet passive optical network standards and their WDM upgraded derivatives. The feasibility, challenges and potential of next-generation optical networks are described in a survey of state-of-the-art optical networking testbeds. Animations showing how the key optical switching techniques work are available via the web, as are lecture slides (www.cambridge.org/9780521868006).
Qubits are not the only information carriers that can be used for quantum information processing. In this chapter, we will focus on quantum communication with ‘continuous quantum variables’, or continuous variables for short. In the context of quantum information processing we will also call continuous variables ‘qunats’. We have seen in Chapter 2 that a natural representation of a continuous variable is given by the position of a particle. The conjugate continuous variable is then the momentum of the particle. Unfortunately, the eigenstates of the position and momentum operators are not physical, and we have to construct suitable approximations to these states that can be created in the laboratory. Any practical information processing device must then take into account the deviation of the actual states from the ideal position and momentum eigenstates. Rather than the position and momentum of a particle, we will consider here the two position and momentum quadratures of an electromagnetic field mode. These operators obey the same commutation relations as the canonical position and momentum operators, but they are not the physical position and momentum of field excitations. Approximate eigenstates of the quadrature operators can be constructed in the form of squeezed coherent states. We define a quantum mechanical phase space for quadrature operators, similar to a classical phase space for position and momentum. Probability distributions in the classical phase space then become quasi-probability distributions over the quadrature phase space. We will develop one of these distributions, namely the Wigner function, and identify certain phase-space transformations of the Wigner function with linear-optical and squeezing operations.
The field of quantum information processing has reached a level of maturity, and spans such a wide variety of topics, that it merits further specialization. In this book, we consider quantum information processing with optical systems, including quantum communication, quantum computation, and quantum metrology. Optical systems are the obvious choice for quantum communication, since photons are excellent carriers of quantum information due to their relatively slow decoherence. Indeed, many aspects of quantum communication have been demonstrated to the extent that commercial products are now available. The importance of optical systems for quantum communication leads us to ask whether we can construct integrated systems for communication and computation in which all processing takes place in optical systems. Recent developments indicate that while full-scale quantum computing is still extremely challenging, optical systems are one of the most promising approaches to a fully functional quantum computer.
This book is aimed at beginning graduate students who are starting their research career in optical quantum information processing, and it can be used as a textbook for an advanced master's course. The reader is assumed to have a background knowledge in classical electrodynamics and quantum mechanics at the level of an undergraduate physics course. The nature of the topic requires familiarity with quantized fields, and since this is not always a core topic in undergraduate physics, we derive the quantum mechanical formulation of the free electromagnetic field from first principles. Similarly, we aim to present the topics in quantum information theory in a self-contained manner.