This chapter introduces the continuous-time Fourier transform (CTFT) and its properties. Many examples are presented to illustrate the properties. The inverse CTFT is derived. As one example of its application, the impulse response of the ideal lowpass filter is obtained. The derivative properties of the CTFT are used to derive many Fourier transform pairs. One result is that the normalized Gaussian signal is its own Fourier transform, and constitutes an eigenfunction of the Fourier transform operator. Many such eigenfunctions are presented. The relation between the smoothness of a signal in the time domain and its decay rate in the frequency domain is studied. Smooth signals have rapidly decaying Fourier transforms. Spline signals are introduced, which have provable smoothness properties in the time domain. For causal signals it is proved that the real and imaginary parts of the CTFT are related to each other. This is called the Hilbert transform, Poisson’s transform, or the Kramers–Kronig transform. It is also shown that Mother Nature “computes” a Fourier transform when a plane wave is propagating across an aperture and impinging on a distant screen – a well-known result in optics, crystallography, and quantum physics.
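To make the eigenfunction remark concrete, here is the standard Gaussian transform pair (written, as an assumption about notation, with the frequency variable f rather than ω):
\[
x(t) = e^{-\pi t^2} \;\longleftrightarrow\; X(f) = \int_{-\infty}^{\infty} e^{-\pi t^2}\, e^{-j 2\pi f t}\, dt = e^{-\pi f^2},
\]
so this Gaussian reproduces itself under the transform, i.e., it is an eigenfunction of the Fourier transform operator with eigenvalue 1.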
This chapter presents the Laplace transform, which is as fundamental to continuous-time systems as the z-transform is to discrete-time systems. Several properties and examples are presented. Similar to the z-transform, the Laplace transform can be regarded as a generalization of the appropriate Fourier transform. In continuous time, the Laplace transform is very useful in the study of systems represented by linear constant-coefficient differential equations (i.e., rational LTI systems). Frequency responses, resonances, and oscillations in electric circuits (and in mechanical systems) can be studied using the Laplace transform. The application in electrical circuit analysis is demonstrated with the help of an LCR circuit. The inverse Laplace transformation is also discussed, and it is shown that the inverse is unique only when the region of convergence (ROC) of the Laplace transform is specified. Depending on the ROC, the inverse of a given Laplace transform expression may be causal, noncausal, two-sided, bounded, or unbounded. This is very similar to the theory of inverse z-transformation. Because of these similarities, the discussion of the Laplace transform in this chapter is brief.
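As a standard illustration of the role of the ROC (an example of our own choosing, not necessarily the one used in the chapter):
\[
e^{-at}u(t) \;\longleftrightarrow\; \frac{1}{s+a}, \quad \mathrm{Re}\{s\} > -a,
\qquad\text{whereas}\qquad
-e^{-at}u(-t) \;\longleftrightarrow\; \frac{1}{s+a}, \quad \mathrm{Re}\{s\} < -a.
\]
The algebraic expression alone is therefore ambiguous; only the expression together with its ROC determines whether the inverse is the causal signal or the anticausal one.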
A number of properties relating to the inverse z-transform are discussed. The partial fraction expansion (PFE) of a rational z-transform plays a role in finding the inverse transform. It is shown that the inverse z-transform solution is not unique and depends on the region of convergence (ROC). Depending on the ROC, the solution may be causal, anticausal, two-sided, stable, or unstable. The condition for existence of a stable inverse transform is also developed. The interplay between causality, stability, and the ROC is established and illustrated with examples. The case of multiple poles is also considered. The theory and implementation of IIR linear-phase filters are discussed in detail. The connection between z-transform theory and analytic functions in complex variable theory is brought out. Based on this connection, many intriguing examples of z-transform pairs are pointed out. In particular, closed-form expressions for radii of convergence of the z-transform can be obtained from complex variable theory. The case of unrealizable digital filters and their connection to complex variable theory is also discussed.
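The analogous z-domain example (again a standard one, chosen here for illustration) is
\[
X(z) = \frac{1}{1 - a z^{-1}},
\]
which has the causal inverse $a^n u[n]$ when the ROC is $|z| > |a|$ and the anticausal inverse $-a^n u[-n-1]$ when the ROC is $|z| < |a|$; the causal choice is stable precisely when the pole satisfies $|a| < 1$.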
This chapter gives a brief overview of sampling based on sparsity. The idea is that a signal which is not bandlimited can sometimes be reconstructed from a sampled version if we have a priori knowledge that the signal is sparse in a certain basis. These results are very different from the results of Shannon and Nyquist, and are sometimes referred to as sub-Nyquist sampling theories. They can be regarded as generalizations of traditional sampling theory, which was based on the bandlimited property. Examples include sampling of finite-duration signals whose DFTs are sparse. Sparse reconstruction methods are closely related to the theory of compressive sensing, which is also briefly introduced. These are major topics that have emerged in the last two decades, so the chapter provides important references for further reading.
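The following is a minimal sketch in the spirit described above (our own illustration, not code from the chapter): a length-128 signal whose DFT has only four nonzero coefficients is reconstructed from 32 time-domain samples using orthogonal matching pursuit. All sizes, seeds, and names are assumptions made for the example; with so few nonzero coefficients, this greedy recovery typically succeeds.

```python
# Minimal sketch (not the chapter's code): recover a DFT-sparse signal from
# fewer samples than its length using orthogonal matching pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 128, 4, 32                      # length, sparsity, number of samples

# Build a signal that is K-sparse in the DFT basis.
X = np.zeros(N, dtype=complex)
support = rng.choice(N, K, replace=False)
X[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = np.fft.ifft(X)                        # time-domain signal (not bandlimited)

# Observe M randomly chosen time samples: y = A @ X, A = rows of the IDFT matrix.
rows = np.sort(rng.choice(N, M, replace=False))
F = np.fft.ifft(np.eye(N), axis=0)        # inverse-DFT matrix, so x = F @ X
A = F[rows, :]
y = x[rows]

# OMP: greedily pick DFT bins, then refit the chosen coefficients by least squares.
residual, chosen = y.copy(), []
for _ in range(K):
    chosen.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
    residual = y - A[:, chosen] @ coeffs

X_hat = np.zeros(N, dtype=complex)
X_hat[chosen] = coeffs
print("true support     :", sorted(support.tolist()))
print("recovered support:", sorted(chosen))
print("max reconstruction error:", np.max(np.abs(np.fft.ifft(X_hat) - x)))
```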
This chapter presents mathematical details relating to the Fourier transform (FT), Fourier series, and their inverses. These details were omitted in the preceding chapters in order to enable the reader to focus on the engineering side. The material reviewed in this chapter is fundamental and of lasting value, even though from the engineer’s viewpoint the importance may not manifest in day-to-day applications of Fourier representations. First the chapter discusses the discrete-time case, wherein two types of Fourier transform are distinguished, namely, l1-FT and l2-FT. A similar distinction between L1-FT and L2-FT for the continuous-time case is made next. When such FTs do not exist, it is still possible for a Fourier transform (or inverse) to exist in the sense of the so-called Cauchy principal value or improper Riemann integral, as explained. A detailed discussion on the pointwise convergence of the Fourier series representation is then given, wherein a number of sufficient conditions for such convergence are presented. This involves concepts such as bounded variation, one-sided derivatives, and so on. Detailed discussions of these concepts, along with several illuminating examples, are presented. The discussion is also extended to the case of the Fourier integral.
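As one concrete instance of the L1/L2 distinction and the principal-value sense mentioned above (a standard example, phrased in our own notation): the signal $x(t) = \sin(t)/(\pi t)$ belongs to $L^2$ but not to $L^1$, yet its Fourier transform exists as the limit
\[
X(j\omega) = \lim_{T \to \infty} \int_{-T}^{T} x(t)\, e^{-j\omega t}\, dt,
\]
which equals 1 for $|\omega| < 1$ and 0 for $|\omega| > 1$, that is, the ideal lowpass response.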
This chapter introduces recursive difference equations. These equations represent discrete-time LTI systems when the so-called initial conditions are zero. The transfer functions of such LTI systems have a rational form (ratios of polynomials in z). Recursive difference equations offer a computationally efficient way to implement systems whose outputs may depend on an infinite number of past inputs. The recursive property allows the infinite past to be remembered by remembering only a finite number of past outputs. Poles and zeros of rational transfer functions are introduced, and conditions for stability are expressed in terms of pole locations. Computational graphs for digital filters, such as the direct-form structure, cascade-form structure, and parallel-form structure, are introduced. The partial fraction expansion (PFE) method for analysis of rational transfer functions is introduced. It is also shown how the coefficients of a rational transfer function can be identified by measuring a finite number of samples of the impulse response. The chapter also shows how the operation of polynomial division can be efficiently implemented in the form of a recursive difference equation.
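A minimal sketch of the recursion idea (our own illustration; the first-order example and variable names are assumptions, not the chapter's code):

```python
# First-order recursive difference equation y[n] = a*y[n-1] + x[n],
# i.e., transfer function H(z) = 1/(1 - a z^{-1}) with a single pole at z = a.
import numpy as np

def recursive_filter(x, a):
    """Run the recursion sample by sample with zero initial condition."""
    y = np.zeros(len(x))
    prev = 0.0                       # the only state that must be remembered
    for n, xn in enumerate(x):
        prev = a * prev + xn
        y[n] = prev
    return y

x = np.zeros(20); x[0] = 1.0         # unit impulse input
a = 0.5                              # pole inside the unit circle -> stable
print(recursive_filter(x, a))        # impulse response a**n, decaying geometrically
```

Although the output at time n depends on the entire past input, the implementation stores only a single previous output sample.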
This chapter gives a brief overview of the book. Notations for signal representation in continuous time and discrete time are introduced. Both one-dimensional and two-dimensional signals are introduced, and simple examples of images are presented. Examples of noise removal and image smoothing (filtering) are demonstrated. The concept of frequency is introduced, and its importance and role in signal representation are explained, using musical notes as examples. The history of signal processing, the role of theory, and the connections to real-life applications are mentioned in an introductory way. The chapter also draws attention to the impact of signal processing in digital communications (e.g., cell-phone communications), gravitational-wave detection, deep space communications, and so on.
This chapter introduces state-space descriptions for computational graphs (structures) representing discrete-time LTI systems. They are not only useful in theoretical analysis, but can also be used to derive alternative structures for a transfer function starting from a known structure. The chapter considers systems with possibly multiple inputs and outputs (MIMO systems); systems with a single input and a single output (SISO systems) are special cases. General expressions for the transfer matrix and impulse response matrix are derived in terms of state-space descriptions. The concept of structure minimality is discussed, and related to properties called reachability and observability. It is seen that state-space descriptions give a different perspective on system poles, in terms of the eigenvalues of the state transition matrix. The chapter also revisits IIR digital allpass filters and derives several equivalent structures for them using so-called similarity transformations on state-space descriptions. Specifically, a number of lattice structures are presented for allpass filters. As a practical example of impact, if such a structure is used to implement the second-order allpass filter in a notch filter, then the notch frequency and notch quality can be independently controlled by two separate multipliers.
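The sketch below illustrates the standard expressions behind the statements above, namely h[0] = D and h[n] = C A^(n-1) B for n ≥ 1 in the SISO case, with the poles appearing as eigenvalues of the state transition matrix A; the particular matrices are arbitrary values chosen for this example, not taken from the chapter.

```python
# Impulse response of a SISO state-space description (A, B, C, D).
# Matrices are illustrative values only.
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.25, 1.0]])          # state transition matrix (poles = its eigenvalues)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])

def impulse_response(A, B, C, D, num_samples):
    h = [D.item()]                     # h[0] = D
    An = np.eye(A.shape[0])            # holds A^(n-1)
    for _ in range(1, num_samples):
        h.append((C @ An @ B).item())  # h[n] = C A^(n-1) B
        An = A @ An
    return np.array(h)

print("poles  :", np.linalg.eigvals(A))
print("h[0..7]:", impulse_response(A, B, C, D, 8))
```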
This is a detailed chapter on digital filter design. Specific digital filters such as notch and antinotch filters, and sharp-cutoff lowpass filters such as Butterworth filters are discussed in detail. Also discussed are allpass filters and some of their applications, including the implementation of notch and antinotch filters. Computational graphs (structures) for allpass filters are presented. It is explained how continuous-time filters can be transformed into discrete time by using the bilinear transformation. A simple method for the design of linear-phase FIR filters, called the window-based method, is also presented. Examples include the Kaiser window and the Hamming window. A comparative discussion of FIR and IIR filters is given. It is demonstrated how nonlinear-phase filters can create visible phase distortion in images. Towards the end, a detailed discussion of steady-state and transient components of filter outputs is given. The dependence of transient duration on pole position is explained. The chapter concludes with a discussion of spectral factorization.
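As a small sketch of the two design routes mentioned above (our own example using SciPy; the sampling rate, cutoff frequency, filter orders, and window choice are assumptions, not values from the chapter):

```python
# Sketch: an IIR Butterworth lowpass obtained via the bilinear transformation
# (SciPy's digital design path), versus a linear-phase FIR lowpass from the
# window method with a Hamming window.
import numpy as np
from scipy import signal

fs = 8000.0                 # assumed sampling rate (Hz)
fc = 1000.0                 # assumed cutoff frequency (Hz)

b_iir, a_iir = signal.butter(4, fc, fs=fs)              # 4th-order digital Butterworth
h_fir = signal.firwin(41, fc, fs=fs, window="hamming")  # 41-tap linear-phase FIR

# Compare magnitude responses near the cutoff.
w, H_iir = signal.freqz(b_iir, a_iir, worN=512, fs=fs)
_, H_fir = signal.freqz(h_fir, worN=512, fs=fs)
k = np.argmin(np.abs(w - fc))
print("IIR |H| at fc:", abs(H_iir[k]))
print("FIR |H| at fc:", abs(H_fir[k]))
```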
This chapter provides an overview of matrices. Basic matrix operations are introduced first, such as addition, multiplication, transposition, and so on. Determinants and matrix inverses are then defined. The rank and Kruskal rank of matrices are defined and explained. The connection between rank, determinant, and invertibility is elaborated. Eigenvalues and eigenvectors are then reviewed. Many equivalent meanings of singularity (non-invertibility) of matrices are summarized. Unitary matrices are reviewed. Finally, linear equations are discussed. The conditions under which a solution exists, and under which it is unique, are explained and demonstrated with examples.
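A tiny illustration of the rank/determinant/invertibility connection and of solving a linear system (the matrices are arbitrary values chosen for this sketch):

```python
# Rank, determinant, invertibility, and a unique solution of A x = b.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

print("rank(A):", np.linalg.matrix_rank(A))   # 2 -> full rank
print("det(A) :", np.linalg.det(A))           # nonzero -> invertible
print("x      :", np.linalg.solve(A, b))      # unique solution of A x = b

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])                    # rank 1: singular
print("rank(B):", np.linalg.matrix_rank(B), " det(B):", np.linalg.det(B))
```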
This chapter discusses the Fourier series representation for continuous-time signals. This is applicable to signals which are either periodic or have a finite duration. The connections between the continuous-time Fourier transform (CTFT), the discrete-time Fourier transform (DTFT), and Fourier series are also explained. Properties of Fourier series are discussed and many examples presented. For real-valued signals it is shown that the Fourier series can be written as a sum of a cosine series and a sine series; examples include rectified cosines, which have applications in electric power supplies. It is shown that the basis functions used in the Fourier series representation satisfy an orthogonality property. This makes the truncated version of the Fourier representation optimal in a certain sense. The so-called principal component approximation derived from the Fourier series is also discussed. A detailed discussion of the properties of musical signals in the light of Fourier series theory is presented, and leads to a discussion of musical scales, consonance, and dissonance. Also explained is the connection between Fourier series and the function-approximation property of multilayer neural networks, used widely in machine learning. An overview of wavelet representations and the contrast with Fourier series representations is also given.
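For a real periodic signal with period T, the cosine/sine form mentioned above reads (stated here as the standard identity, assuming the usual conventions)
\[
x(t) = a_0 + \sum_{k=1}^{\infty}\Bigl(a_k \cos\tfrac{2\pi k t}{T} + b_k \sin\tfrac{2\pi k t}{T}\Bigr),
\qquad
a_k = \frac{2}{T}\int_0^T x(t)\cos\tfrac{2\pi k t}{T}\,dt,\quad
b_k = \frac{2}{T}\int_0^T x(t)\sin\tfrac{2\pi k t}{T}\,dt,
\]
with $a_0 = \frac{1}{T}\int_0^T x(t)\,dt$; the orthogonality of the cosines and sines over one period is what makes these coefficient formulas work.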
This chapter examines discrete-time LTI systems in detail. It shows that the input–output behavior of an LTI system is characterized by the so-called impulse response. The output is shown to be the convolution of the input with the impulse response. It is then shown that exponentials are eigenfunctions of LTI systems. This property leads to the ideas of transfer functions and frequency responses for LTI systems. It is argued that the frequency response gives a systematic meaning to the term “filtering.” Image filtering is demonstrated with examples. The discrete-time Fourier transform (DTFT) is introduced to describe the frequency-domain behavior of LTI systems, and allows one to represent a signal as a superposition of single-frequency signals (the Fourier representation). The DTFT is discussed in detail, with many examples. The z-transform, which is of great importance in the study of LTI systems, is also introduced and its connection to the Fourier transform explained. Attention is also given to real signals and real filters, because of their additional properties in the frequency domain. Homogeneous time-invariant (HTI) systems are also introduced. Continuous-time counterparts of these topics are explained. B-splines, which arise as examples in continuous-time convolution, are presented.
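A minimal sketch of the two central facts above, convolution and the eigenfunction property (our own illustration; the 4-point moving average is an assumed example, not necessarily one from the chapter):

```python
# Output of a discrete-time LTI system = convolution of input with impulse response.
import numpy as np

h = np.ones(4) / 4.0                      # impulse response of a 4-point moving average
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # arbitrary input
print(np.convolve(x, h))                  # y[n] = sum_k h[k] x[n-k]

# Complex exponentials are eigenfunctions: filtering one only scales it by H(e^{jw}).
w = 0.3 * np.pi
n = np.arange(200)
e = np.exp(1j * w * n)
H_w = np.sum(h * np.exp(-1j * w * np.arange(len(h))))   # frequency response at w
y_e = np.convolve(e, h)[len(h) - 1:len(n)]              # fully overlapped portion
print(np.allclose(y_e, H_w * e[len(h) - 1:len(n)]))     # True
```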