This chapter introduces recursive difference equations where the initial conditions are nonzero. The output of such a system is studied in detail. One application, explained at length, is the design of digital waveform generators such as oscillators. The coupled-form oscillator, which simultaneously generates synchronized sine and cosine waveforms at a given frequency, is presented. The chapter also introduces another application of recursive difference equations, namely the computation of mortgages. It is shown that the monthly payment on a loan can be computed using a first-order recursive difference equation, which also allows one to calculate the interest and principal parts of the payment every month. Poles play a crucial role in the behavior of recursive difference equations with zero or nonzero initial conditions. Many manifestations of the effect of a pole are summarized, including some time-domain dynamical meanings of poles.
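The coupled-form oscillator lends itself to a very short implementation. Below is a minimal NumPy sketch, assuming a frequency of 0.1π radians per sample and the initial state (1, 0); the variable names are illustrative, not notation from the book.

```python
import numpy as np

omega = 0.1 * np.pi              # oscillation frequency in radians/sample (assumed)
N = 100                          # number of samples to generate
cw, sw = np.cos(omega), np.sin(omega)

# Nonzero initial conditions start the recursion; there is no input thereafter.
c, s = 1.0, 0.0                  # c[-1] = 1, s[-1] = 0
cos_out, sin_out = [], []
for n in range(N):
    # Coupled first-order recursions: one rotation by omega per step.
    c, s = cw * c - sw * s, sw * c + cw * s
    cos_out.append(c)
    sin_out.append(s)

# The two outputs are synchronized cosine and sine waves.
print(np.allclose(cos_out, np.cos(omega * np.arange(1, N + 1))))  # True
```

Each iteration applies a 2x2 rotation to the state, which is why the two outputs stay exactly in quadrature.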
This chapter introduces structures and structural interconnections for LTI systems and then considers several examples of digital filters, including moving average filters, difference operators, and ideal lowpass filters. It is then shown how to convert lowpass filters into other types, such as highpass and bandpass, by means of simple transformations. Phase distortion is explained, and linear-phase digital filters, which do not create phase distortion, are introduced. The use of digital filters in noise removal (denoising) is also demonstrated for 1D signals and 2D images. The filtering of an image into low- and high-frequency subbands is demonstrated, and the motivation for subband decomposition in audio and image compression is explained. Finally, it is shown that the convolution operation can be represented as a matrix–vector multiplication, where the matrix has Toeplitz structure. The matrix representation also shows how to undo a filtering operation through a process called deconvolution.
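The Toeplitz view of convolution is easy to verify numerically. Below is a small sketch using SciPy's toeplitz helper, with a made-up filter and input; the least-squares solve at the end illustrates deconvolution as undoing the matrix multiplication.

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 0.5, 0.25])        # illustrative filter impulse response
x = np.array([1.0, 2.0, 3.0, 4.0])    # illustrative input signal

# Full convolution produces len(x) + len(h) - 1 output samples.
n_out = len(x) + len(h) - 1
first_col = np.r_[h, np.zeros(n_out - len(h))]   # first column of H
first_row = np.r_[h[0], np.zeros(len(x) - 1)]    # first row of H
H = toeplitz(first_col, first_row)               # (n_out x len(x)) Toeplitz matrix

y = H @ x
print(np.allclose(y, np.convolve(h, x)))         # True: convolution = H @ x

# Deconvolution: recover x from the filtered output by solving H x = y.
x_rec, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.allclose(x_rec, x))                     # True
```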
This chapter introduces different types of signals and studies the properties of many kinds of systems encountered in signal processing. Signals discussed include the exponential signal, the unit step, single-frequency signals, rectangular pulses, Dirac delta signals, and periodic signals. Two-dimensional signals, especially 2D frequencies and sinusoids, are also demonstrated. Many types of systems are discussed, such as homogeneous systems, additive systems, linear systems, stable systems, time-invariant systems, and causal systems. Both continuous-time and discrete-time cases are covered. Examples such as music signals and ECG signals are presented throughout to demonstrate the concepts. Subtle differences between discrete-time and continuous-time signals and systems are also pointed out.
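System properties such as linearity can be probed numerically on sample inputs. Below is a small sketch, assuming a 3-point moving average as the linear example and a memoryless squarer as the nonlinear one; such a test can only falsify a property on the chosen inputs, not prove it in general.

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_average(x):            # LTI system: causal 3-point moving average
    return np.convolve(x, np.ones(3) / 3)[:len(x)]

def squarer(x):                   # memoryless nonlinear system
    return x ** 2

x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
a, b = 2.0, -0.5
for name, S in [("moving average", moving_average), ("squarer", squarer)]:
    # Superposition check: does S(a*x1 + b*x2) equal a*S(x1) + b*S(x2)?
    lin = np.allclose(S(a * x1 + b * x2), a * S(x1) + b * S(x2))
    print(f"{name}: passes linearity test -> {lin}")
```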
This chapter introduces bandlimited signals, sampling theory, and the method of reconstruction from samples. Uniform sampling with a Dirac delta train is considered, and the Fourier transform of the sampled signal is derived. The reconstruction from samples is based on the use of a linear filter called an interpolator. When the sampling rate is not sufficiently large, the sampling process leads to a phenomenon called aliasing. This is discussed in detail and several real-world manifestations of aliasing are also discussed. In practice, the sampled signal is typically processed by a digital signal processing device, before it is converted back into a continuous-time signal. The building blocks in such a digital signal processing system are discussed. Extensions of the lowpass sampling theorem to the bandpass case are also presented. Also proved is the pulse sampling theorem, where the sampling pulse is spread out over a short duration, unlike the Dirac delta train. Bandlimited channels are discussed and it is explained how the data rate that can be transmitted over a channel is limited by channel bandwidth.
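Aliasing can be demonstrated in a few lines: two sinusoids whose frequencies differ by the sampling rate produce identical samples. Below is a minimal sketch with assumed values (a 100 Hz sampling rate and a 130 Hz tone aliasing to 30 Hz).

```python
import numpy as np

fs = 100.0                            # sampling rate in Hz (assumed)
f_high = 130.0                        # tone above fs/2, so it aliases
f_alias = f_high - fs                 # predicted alias at 30 Hz
n = np.arange(64)                     # sample indices

x_high  = np.cos(2 * np.pi * f_high  * n / fs)
x_alias = np.cos(2 * np.pi * f_alias * n / fs)

# The 130 Hz tone and the 30 Hz tone are indistinguishable from their samples.
print(np.allclose(x_high, x_alias))   # True
```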
This chapter introduces the continuous-time Fourier transform (CTFT) and its properties. Many examples are presented to illustrate the properties. The inverse CTFT is derived. As one example of its application, the impulse response of the ideal lowpass filter is obtained. The derivative properties of the CTFT are used to derive many Fourier transform pairs. One result is that the normalized Gaussian signal is its own Fourier transform, and constitutes an eigenfunction of the Fourier transform operator. Many such eigenfunctions are presented. The relation between the smoothness of a signal in the time domain and its decay rate in the frequency domain is studied. Smooth signals have rapidly decaying Fourier transforms. Spline signals are introduced, which have provable smoothness properties in the time domain. For causal signals it is proved that the real and imaginary parts of the CTFT are related to each other. This is called the Hilbert transform, Poisson’s transform, or the Kramers–Kronig transform. It is also shown that Mother Nature “computes” a Fourier transform when a plane wave is propagating across an aperture and impinging on a distant screen – a well-known result in optics, crystallography, and quantum physics.
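The Gaussian self-transform property can be checked by approximating the CTFT integral with a Riemann sum. Below is a sketch assuming the convention X(f) = ∫ x(t) e^{-j2πft} dt, under which x(t) = e^{-πt²} transforms to X(f) = e^{-πf²}.

```python
import numpy as np

# Riemann-sum approximation of X(f) = integral of x(t) exp(-j 2 pi f t) dt.
dt = 0.01
t = np.arange(-10, 10, dt)
x = np.exp(-np.pi * t ** 2)           # normalized Gaussian

f = np.linspace(-3, 3, 61)
X = np.array([np.sum(x * np.exp(-2j * np.pi * fk * t)) * dt for fk in f])

# X(f) should again be exp(-pi f^2): the Gaussian is its own Fourier transform.
print(np.max(np.abs(X - np.exp(-np.pi * f ** 2))))   # tiny numerical error
```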
This chapter presents the Laplace transform, which is as fundamental to continuous-time systems as the z-transform is to discrete-time systems. Several properties and examples are presented. Similar to the z-transform, the Laplace transform can be regarded as a generalization of the appropriate Fourier transform. In continuous time, the Laplace transform is very useful in the study of systems represented by linear constant-coefficient differential equations (i.e., rational LTI systems). Frequency responses, resonances, and oscillations in electric circuits (and in mechanical systems) can be studied using the Laplace transform. The application in electrical circuit analysis is demonstrated with the help of an LCR circuit. The inverse Laplace transformation is also discussed, and it is shown that the inverse is unique only when the region of convergence (ROC) of the Laplace transform is specified. Depending on the ROC, the inverse of a given Laplace transform expression may be causal, noncausal, two-sided, bounded, or unbounded. This is very similar to the theory of inverse z-transformation. Because of these similarities, the discussion of the Laplace transform in this chapter is brief.
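The LCR example can be reproduced with SciPy's LTI tools. Below is a sketch assuming a series LCR circuit with the output taken across the capacitor, so that H(s) = 1/(LCs² + RCs + 1); the component values are illustrative.

```python
import numpy as np
from scipy import signal

R, L, C = 1.0, 1e-3, 1e-6            # ohms, henries, farads (assumed values)
num = [1.0]
den = [L * C, R * C, 1.0]            # H(s) = 1 / (LC s^2 + RC s + 1)

sys = signal.TransferFunction(num, den)
print(np.roots(den))                 # complex pole pair: an underdamped resonance

t, y = signal.step(sys)              # step response rings near 1/sqrt(LC) rad/s
print(y[-1])                         # settles toward H(0) = 1
```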
A number of properties relating to the inverse z-transform are discussed. The partial fraction expansion (PFE) of a rational z-transform plays a role in finding the inverse transform. It is shown that the inverse z-transform solution is not unique and depends on the region of convergence (ROC). Depending on the ROC, the solution may be causal, anticausal, two-sided, stable, or unstable. The condition for existence of a stable inverse transform is also developed. The interplay between causality, stability, and the ROC is established and illustrated with examples. The case of multiple poles is also considered. The theory and implementation of IIR linear-phase filters are discussed in detail. The connection between z-transform theory and analytic functions in complex variable theory is brought out. Based on this connection, many intriguing examples of z-transform pairs are pointed out. In particular, closed-form expressions for radii of convergence of the z-transform can be obtained from complex variable theory. The case of unrealizable digital filters and their connection to complex variable theory is also discussed.
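SciPy's residuez performs exactly this partial fraction expansion. Below is a sketch with an illustrative H(z) having poles at z = 0.7 and z = 0.4; choosing the ROC |z| > 0.7 yields the causal, stable inverse h[n] = Σ r_k p_kⁿ for n ≥ 0.

```python
import numpy as np
from scipy import signal

b = [1.0]
a = [1.0, -1.1, 0.28]                 # H(z) = 1 / (1 - 1.1 z^-1 + 0.28 z^-2)

r, p, k = signal.residuez(b, a)       # residues, poles, direct terms
print(p)                              # poles 0.7 and 0.4, both inside |z| = 1

# Causal inverse implied by the ROC |z| > max|p_k|:
n = np.arange(10)
h_pfe = sum(rk * pk ** n for rk, pk in zip(r, p)).real

imp = np.r_[1.0, np.zeros(9)]         # unit impulse
h_ref = signal.lfilter(b, a, imp)     # impulse response by direct recursion
print(np.allclose(h_pfe, h_ref))      # True
```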
This chapter gives a brief overview of sampling based on sparsity. The idea is that a signal which is not bandlimited can sometimes be reconstructed from a sampled version if we have a priori knowledge that the signal is sparse in a certain basis. These results are very different from the results of Shannon and Nyquist, and are sometimes referred to as sub-Nyquist sampling theories. They can be regarded as generalizations of traditional sampling theory, which was based on the bandlimited property. Examples include sampling of finite-duration signals whose DFTs are sparse. Sparse reconstruction methods are closely related to the theory of compressive sensing, which is also briefly introduced. These are major topics that have emerged in the last two decades, so the chapter provides important references for further reading.
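The flavor of these results can be conveyed with a toy experiment: a length-128 signal whose DFT has only 3 nonzero bins is recovered from 24 random time samples. Below is a minimal sketch using orthogonal matching pursuit as the sparse solver; all sizes and the choice of greedy solver are illustrative, not prescriptions from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 128, 3, 24                       # length, sparsity, number of samples

# Test signal: K-sparse in the DFT basis, hence not bandlimited in general.
support = rng.choice(N, K, replace=False)
X = np.zeros(N, dtype=complex)
X[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = np.fft.ifft(X)

# Sub-Nyquist measurement: M random time samples of x.
rows = rng.choice(N, M, replace=False)
A = np.fft.ifft(np.eye(N), axis=0)[rows]   # row-sampled inverse-DFT dictionary
y = x[rows]                                # y = A @ X

# Orthogonal matching pursuit: greedily identify the K active DFT bins.
residual, picked = y.copy(), []
for _ in range(K):
    picked.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, picked], y, rcond=None)
    residual = y - A[:, picked] @ coef

X_hat = np.zeros(N, dtype=complex)
X_hat[picked] = coef
print(sorted(picked) == sorted(support))   # support recovered (with high probability)
print(np.max(np.abs(X_hat - X)))           # reconstruction error, near zero
```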
This chapter presents mathematical details relating to the Fourier transform (FT), Fourier series, and their inverses. These details were omitted in the preceding chapters in order to enable the reader to focus on the engineering side. The material reviewed in this chapter is fundamental and of lasting value, even though from the engineer’s viewpoint the importance may not manifest in day-to-day applications of Fourier representations. First the chapter discusses the discrete-time case, wherein two types of Fourier transform are distinguished, namely, l1-FT and l2-FT. A similar distinction between L1-FT and L2-FT for the continuous-time case is made next. When such FTs do not exist, it is still possible for a Fourier transform (or inverse) to exist in the sense of the so-called Cauchy principal value or improper Riemann integral, as explained. A detailed discussion on the pointwise convergence of the Fourier series representation is then given, wherein a number of sufficient conditions for such convergence are presented. This involves concepts such as bounded variation, one-sided derivatives, and so on. Detailed discussions of these concepts, along with several illuminating examples, are presented. The discussion is also extended to the case of the Fourier integral.
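Pointwise convergence at a jump can be observed numerically with the square wave, whose Fourier series is (4/π) Σ_{k odd} sin(kt)/k. Below is a sketch evaluating partial sums at a continuity point and at the jump t = 0, where the sums converge to the midpoint of the jump (here 0); the Gibbs overshoot near the jump shrinks in width, not in amplitude.

```python
import numpy as np

# Partial sums of the square-wave Fourier series (4/pi) * sum_{k odd} sin(k t)/k.
def partial_sum(t, n_terms):
    k = np.arange(1, 2 * n_terms, 2)          # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(np.outer(t, k)) / k, axis=1)

t = np.array([np.pi / 2, 0.0])                # a continuity point, and the jump
for n_terms in (10, 100, 1000):
    print(n_terms, partial_sum(t, n_terms))   # approaches [1.0, 0.0]
```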
This chapter introduces recursive difference equations. These equations represent discrete-time LTI systems when the so-called initial conditions are zero. The transfer functions of such LTI systems have a rational form (ratios of polynomials in z). Recursive difference equations offer a computationally efficient way to implement systems whose outputs may depend on an infinite number of past inputs. The recursive property allows the infinite past to be remembered by remembering only a finite number of past outputs. Poles and zeros of rational transfer functions are introduced, and conditions for stability are expressed in terms of pole locations. Computational graphs for digital filters, such as the direct-form, cascade-form, and parallel-form structures, are introduced. The partial fraction expansion (PFE) method for the analysis of rational transfer functions is introduced. It is also shown how the coefficients of a rational transfer function can be identified by measuring a finite number of samples of the impulse response. The chapter also shows how the operation of polynomial division can be efficiently implemented in the form of a recursive difference equation.
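Polynomial division by recursion is what SciPy's lfilter computes when driven by an impulse: the output samples are the power-series coefficients of B(z)/A(z). Below is a sketch with made-up coefficients.

```python
import numpy as np
from scipy import signal

# Long division of B(z)/A(z) in powers of z^-1 via the recursion
# y[n] = b[n] - a1*y[n-1] - a2*y[n-2] - ...  (filtering a unit impulse).
b = [1.0, 2.0]                       # B(z) = 1 + 2 z^-1
a = [1.0, -0.5]                      # A(z) = 1 - 0.5 z^-1

imp = np.r_[1.0, np.zeros(7)]        # unit impulse
q = signal.lfilter(b, a, imp)        # first 8 coefficients of the quotient
print(q)                             # 1.0, 2.5, 1.25, 0.625, ... (halves each step)
```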
This chapter gives a brief overview of the book. Notations for signal representation in continuous time and discrete time are introduced. Both one-dimensional and two-dimensional signals are introduced, and simple examples of images are presented. Examples of noise removal and image smoothing (filtering) are demonstrated. The concept of frequency is introduced, and its importance and its role in signal representation are explained, with musical notes as examples. The history of signal processing, the role of theory, and the connections to real-life applications are mentioned in an introductory way. The chapter also draws attention to the impact of signal processing in digital communications (e.g., cell-phone communications), gravitational-wave detection, deep space communications, and so on.
This chapter introduces state-space descriptions for computational graphs (structures) representing discrete-time LTI systems. They are not only useful in theoretical analysis, but can also be used to derive alternative structures for a transfer function starting from a known structure. The chapter considers systems with possibly multiple inputs and outputs (MIMO systems); systems with a single input and a single output (SISO systems) are special cases. General expressions for the transfer matrix and impulse response matrix are derived in terms of state-space descriptions. The concept of structure minimality is discussed, and related to properties called reachability and observability. It is seen that state-space descriptions give a different perspective on system poles, in terms of the eigenvalues of the state transition matrix. The chapter also revisits IIR digital allpass filters and derives several equivalent structures for them using so-called similarity transformations on state-space descriptions. Specifically, a number of lattice structures are presented for allpass filters. As a practical example of impact, if such a structure is used to implement the second-order allpass filter in a notch filter, then the notch frequency and notch quality can be independently controlled by two separate multipliers.
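The pole/eigenvalue connection is easy to check with SciPy. Below is a sketch using tf2ss to obtain one possible (direct-form) state-space realization of an illustrative transfer function; the eigenvalues of its state transition matrix coincide with the roots of the denominator polynomial.

```python
import numpy as np
from scipy import signal

b = [1.0]                            # illustrative H(z) = 1 / (1 - 1.2 z^-1 + 0.72 z^-2)
a = [1.0, -1.2, 0.72]

A, B, C, D = signal.tf2ss(b, a)      # one realization among many equivalent ones

# Poles of H(z) = eigenvalues of the state transition matrix A.
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.roots(a)))  # same complex pair, 0.6 +/- 0.6j
```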
This is a detailed chapter on digital filter design. Specific digital filters such as notch and antinotch filters, and sharp-cutoff lowpass filters such as Butterworth filters are discussed in detail. Also discussed are allpass filters and some of their applications, including the implementation of notch and antinotch filters. Computational graphs (structures) for allpass filters are presented. It is explained how continuous-time filters can be transformed into discrete time by using the bilinear transformation. A simple method for the design of linear-phase FIR filters, called the window-based method, is also presented. Examples include the Kaiser window and the Hamming window. A comparative discussion of FIR and IIR filters is given. It is demonstrated how nonlinear-phase filters can create visible phase distortion in images. Towards the end, a detailed discussion of steady-state and transient components of filter outputs is given. The dependence of transient duration on pole position is explained. The chapter concludes with a discussion of spectral factorization.
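Both design routes mentioned here are available in SciPy. Below is a sketch comparing an IIR Butterworth lowpass (designed via the bilinear transformation) with a window-based linear-phase FIR design using a Hamming window; the order, tap count, and cutoff are arbitrary illustrative choices.

```python
import numpy as np
from scipy import signal

# IIR route: digital Butterworth lowpass (SciPy applies the bilinear transform).
b_iir, a_iir = signal.butter(5, 0.3)              # order 5, cutoff 0.3 x Nyquist

# FIR route: window-based linear-phase design with a Hamming window.
h_fir = signal.firwin(41, 0.3, window="hamming")  # 41 taps, same cutoff

# Compare magnitude responses on a common frequency grid.
w, H_iir = signal.freqz(b_iir, a_iir)
_, H_fir = signal.freqz(h_fir)
print(abs(H_iir[0]), abs(H_fir[0]))               # both ~1 at DC
```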