This chapter introduces a different kind of problem, namely direct constrained matrix approximation via interpolation, the constraint being positive definiteness. It is the problem of completing a positive definite matrix of which only a well-ordered partial set of entries is given (together with necessary and sufficient conditions for the existence of the completion) or, alternatively, the problem of parametrizing positive definite matrices. The problem can be solved elegantly when the specified entries include the main diagonal and further entries clustered along the main diagonal with a staircase boundary. It turns out to be equivalent to a constrained interpolation problem defined for a causal contractive matrix, with staircase entries again specified as before. The recursive solution calls for the development of machinery known as scattering theory, which involves the introduction of nonpositive metrics and the use of J-unitary transformations, where J is a sign matrix.
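As a small illustration of the completion problem (an example of our own, not taken from the chapter): for a 3×3 matrix whose diagonal and first off-diagonal entries are specified, the banded pattern is the simplest staircase, and the maximum-entropy (maximum-determinant) positive definite completion is the one whose inverse vanishes at the unspecified position. A sketch with numpy:

```python
import numpy as np

# Partial positive definite matrix: diagonal and first off-diagonal given,
# corner entry a13 unspecified (banded pattern, the simplest "staircase").
a12, a22, a23 = 0.5, 1.0, 0.5

# The maximum-determinant (maximum-entropy) completion fills in the
# missing entry as a13 = a12 * a23 / a22.
a13 = a12 * a23 / a22

A = np.array([[1.0, a12, a13],
              [a12, a22, a23],
              [a13, a23, 1.0]])

# The completion is positive definite ...
eigvals = np.linalg.eigvalsh(A)
# ... and its inverse vanishes at the unspecified position (0, 2),
# the hallmark of the maximum-entropy completion.
inv_02 = np.linalg.inv(A)[0, 2]
print(eigvals.min() > 0, abs(inv_02) < 1e-12)
```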
This chapter presents an alternative theory of external and coprime factorization, using polynomial denominators in the noncommutative time-variant shift Z rather than inner denominators as done in the chapter on inner–outer theory. “Polynomials in the shift Z” are equivalent to block-lower matrices with a support defined by a (block) staircase, and are essentially different from the classical matrix polynomials of module theory, although the net effect on system analysis is remarkably similar. The polynomial method differs substantially and in a complementary way from the inner method. It is computationally simpler but does not use orthogonal transformations. It offers the possibility of treating highly unstable systems using unilateral series. Also, this approach leads to famous Bezout equations that, as mentioned in the abstract of Chapter 7, can be derived without the benefit of Euclidean divisibility methods.
A general view of digital modulation schemes beyond the concept of PAM is developed. This is required because many important modulation formats (e.g., digital frequency modulation) do not fall under the umbrella of PAM. To that end, the separation between the operations of coding and modulation is unambiguously defined. The key tool for the analysis and synthesis of transmission schemes is the representation of signals in a signal space. This concept is introduced and discussed in detail. Based on this view, methods for optimum coherent and non-coherent signal reception are derived for general digital modulation schemes. The principles of maximum-likelihood detection and maximum-likelihood sequence detection are discussed.
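The signal-space idea can be sketched numerically (a small example of our own, using sampled waveforms and numpy): Gram–Schmidt orthogonalization yields an orthonormal basis for a set of waveforms, and each waveform is then represented by a short coefficient vector in that basis:

```python
import numpy as np

# Three sampled waveforms (rows); they span only a 2-dimensional subspace,
# since sin(x + pi/4) = (sin x + cos x) / sqrt(2).
t = np.linspace(0, 1, 200, endpoint=False)
signals = np.stack([np.sin(2 * np.pi * t),
                    np.cos(2 * np.pi * t),
                    np.sin(2 * np.pi * t + np.pi / 4)])

def gram_schmidt(S, tol=1e-10):
    basis = []
    for s in S:
        r = s - sum((b @ s) * b for b in basis)   # remove projections
        if np.linalg.norm(r) > tol:
            basis.append(r / np.linalg.norm(r))
    return np.stack(basis)

phi = gram_schmidt(signals)     # orthonormal basis functions
coeffs = signals @ phi.T        # signal-space representation (one row per signal)

dim = phi.shape[0]
recon_ok = np.allclose(coeffs @ phi, signals)
print(dim, recon_ok)
```

Detection and distance computations can then be carried out on the low-dimensional coefficient vectors instead of the waveforms themselves.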
This chapter is on elementary matrix operations using a state-space or, equivalently, quasi-separable representation. It is a straightforward but unavoidable chapter. It shows how the recursive structure of the state-space representations is exploited to make matrix addition, multiplication and elementary inversion numerically efficient. The notions of outer operator and inner operator are introduced as basic types of matrices playing a central role in various specific matrix decompositions and factorizations to be treated in further chapters.
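As a flavor of how such recursions act on plain numbers, here is a hedged sketch (our own construction, with a constant state dimension for simplicity): a lower-triangular matrix is assembled from a time-variant realization (A_k, B_k, C_k, D_k), and the sum of two such matrices is realized by the direct sum of their realizations:

```python
import numpy as np

def full_matrix(A, B, C, D):
    """Matrix realized by x_{k+1} = A_k x_k + B_k u_k, y_k = C_k x_k + D_k u_k.
    Entry (k, j) for k > j is C_k A_{k-1} ... A_{j+1} B_j; the diagonal is D_k."""
    N = len(D)
    T = np.zeros((N, N))
    for k in range(N):
        T[k, k] = D[k]
        for j in range(k):
            prod = C[k]
            for m in range(k - 1, j, -1):
                prod = prod @ A[m]
            T[k, j] = (prod @ B[j]).item()
    return T

def blkdiag(X, Y):
    Z = np.zeros((X.shape[0] + Y.shape[0], X.shape[1] + Y.shape[1]))
    Z[:X.shape[0], :X.shape[1]] = X
    Z[X.shape[0]:, X.shape[1]:] = Y
    return Z

rng = np.random.default_rng(0)
N, n = 5, 2   # matrix size and (constant) state dimension
sys1 = [[rng.standard_normal(s) for _ in range(N)] for s in [(n, n), (n, 1), (1, n)]]
sys2 = [[rng.standard_normal(s) for _ in range(N)] for s in [(n, n), (n, 1), (1, n)]]
D1 = rng.standard_normal(N)
D2 = rng.standard_normal(N)

T1 = full_matrix(*sys1, D1)
T2 = full_matrix(*sys2, D2)

# The sum T1 + T2 is realized by the direct sum of the two realizations:
# block-diagonal A, stacked B, concatenated C, added D.
A = [blkdiag(a1, a2) for a1, a2 in zip(sys1[0], sys2[0])]
B = [np.vstack([b1, b2]) for b1, b2 in zip(sys1[1], sys2[1])]
C = [np.hstack([c1, c2]) for c1, c2 in zip(sys1[2], sys2[2])]
D = D1 + D2

ok = np.allclose(full_matrix(A, B, C, D), T1 + T2)
print(ok)
```

The point of the recursive representation is that the state dimension, not the matrix size, governs the cost of such operations.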
This chapter considers what is likely the most important operation in system theory: inner–outer factorization and its dual, outer–inner factorization. These factorizations play a different role than the previously treated external or coprime factorizations, in that they characterize properties of the inverse or pseudo-inverse of the system under consideration, rather than the system itself. Importantly, such factorizations are computed on the state-space representation of the original system, that is, on the original data. Inner–outer factorization is nothing but recursive “QR factorization,” as was already observed in our motivational Chapter 2, and outer–inner factorization is recursive “LQ factorization,” in the somewhat unorthodox terminology used in this book for consistency reasons: QR for “orthogonal Q with a right factor R” and LQ for a “left factor L with orthogonal Q.” These types of factorizations play the central role in a variety of applications (e.g., optimal tracking, state estimation, system pseudo-inversion, and spectral factorization) to be treated in the following chapters. We conclude the chapter by showing how the time-variant, linear results generalize to the nonlinear case.
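For readers who want the plain, non-recursive numerical analogue of this terminology, the following sketch (using numpy; the recursive, state-space versions treated in the chapter are more involved) computes a QR factorization and obtains an LQ factorization from the QR factorization of the transpose:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))

# QR: orthogonal Q with an (upper-triangular) right factor R.
Q, R = np.linalg.qr(T)

# LQ: a (lower-triangular) left factor L with orthogonal Q, obtained here
# from the QR factorization of the transpose: T.T = Qt Rt, so T = Rt.T Qt.T.
Qt, Rt = np.linalg.qr(T.T)
L, QL = Rt.T, Qt.T

ok_qr = np.allclose(Q @ R, T) and np.allclose(Q.T @ Q, np.eye(4))
ok_lq = np.allclose(L @ QL, T) and np.allclose(np.triu(L, 1), 0)
print(ok_qr, ok_lq)
```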
The most basic and most widely used form of mapping binary information to a physical transmit signal and back is digital pulse-amplitude modulation (PAM). As the name suggests, the information is carried in the (complex-valued) amplitude of a basic pulse. We deal with real-valued and complex-valued amplitude coefficients in a unified manner. Thus, all kinds of baseband (amplitude-shift keying (ASK)) and carrier-modulated (quadrature-amplitude modulation (QAM) and phase-shift keying (PSK)) signal formats are included in the concept of PAM. PAM is the simplest form of digital modulation but establishes the basis for the enhanced variants discussed in subsequent chapters. In this chapter, the focus is on the modulation and demodulation operations. Since, in this first approach, no channel coding is considered, modulation reduces to a symbol-by-symbol mapping of blocks of binary source symbols to signal points, and detection at the receiver side can likewise be performed symbol by symbol. Strategies for optimum signal detection, and conditions for continuous transmission of sequences of symbols without intersymbol interference (ISI) over non-dispersive channels, are developed precisely.
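Symbol-by-symbol mapping and detection can be sketched in a few lines (the constellation and bit mapping below are our own illustrative choices, not the book's): 4-QAM transmission over a non-dispersive AWGN channel with minimum-distance detection, which coincides with ML detection for equally likely symbols:

```python
import numpy as np

# Unit-energy 4-QAM (QPSK) constellation; one possible Gray-type mapping:
# the first bit of a pair selects the real sign, the second the imaginary sign.
constellation = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)

def modulate(bits):
    pairs = bits.reshape(-1, 2)                  # two bits per symbol
    return constellation[pairs[:, 0] + 2 * pairs[:, 1]]

def detect(received):
    # ML detection = nearest signal point in Euclidean distance.
    idx = np.argmin(np.abs(received[:, None] - constellation[None, :]), axis=1)
    return np.stack([idx % 2, idx // 2], axis=1).reshape(-1)

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 1000)
symbols = modulate(bits)
noise = 0.1 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
decided = detect(symbols + noise)
print(np.array_equal(decided, bits))   # low noise: no symbol errors expected
```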
Matrix theory is the lingua franca of everyone who deals with dynamically evolving systems, and familiarity with efficient matrix computations is an essential part of the modern curriculum in dynamical systems and associated computation. This is a master's-level textbook on dynamical systems and computational matrix algebra. It is based on the remarkable identity of these two disciplines in the context of linear, time-variant, discrete-time systems and their algebraic equivalent, quasi-separable systems. The authors' approach provides a single, transparent framework that yields simple derivations of basic notions, as well as new and fundamental results such as constrained model reduction, matrix interpolation theory and scattering theory. This book outlines all the fundamental concepts that allow readers to develop the resulting recursive computational schemes needed to solve practical problems. An ideal treatment for graduate students and academics in electrical and computer engineering, computer science and applied mathematics.
This innovative introduction to the foundations of signals, systems, and transforms emphasises discrete-time concepts, smoothing the transition towards more advanced study in Digital Signal Processing (DSP). A digital-first approach, introducing discrete-time concepts from the beginning, equips students with a firm theoretical foundation in signals and systems while emphasising topics fundamental to understanding DSP. Continuous-time approaches are introduced in later chapters, providing students with a well-rounded understanding that maintains a strong digital emphasis. Real-world applications, including music signals, signal denoising systems, and digital communication systems, are introduced to encourage student motivation. The early introduction of core concepts in digital filtering, the DFT, and the FFT provides a frictionless transition to more advanced study. The book contains over 325 end-of-chapter problems and over 50 computational problems using MATLAB. Accompanied online by solutions and code for instructors, this rigorous textbook is ideal for undergraduate students in electrical engineering studying an introductory course in signals, systems, and signal processing.
This chapter introduces recursive difference equations where the initial conditions are nonzero. The output of such a system is studied in detail. One application is in the design of digital waveform generators such as oscillators, and this is explained in considerable detail. The coupled-form oscillator, which simultaneously generates synchronized sine and cosine waveforms at a given frequency, is presented. The chapter also introduces another application of recursive difference equations, namely the computation of mortgages. It is shown that the monthly payment on a loan can be computed using a first-order recursive difference equation. The equation also allows one to calculate the interest and principal parts of the payment every month, as shown. Poles play a crucial role in the behavior of recursive difference equations with zero or nonzero initial conditions. Many different manifestations of the effect of a pole are also summarized, including some time-domain dynamical meanings of poles.
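The mortgage computation can be sketched in a few lines (the loan figures below are illustrative): the outstanding balance obeys a first-order recursive difference equation, and choosing the payment so that the final balance is zero yields the standard annuity formula:

```python
# Mortgage as a first-order recursive difference equation:
#   b[n+1] = (1 + r) * b[n] - M,
# where r is the monthly interest rate and M the fixed monthly payment.
principal = 300_000.0        # loan amount (illustrative)
annual_rate = 0.06
N = 360                      # 30 years of monthly payments
r = annual_rate / 12

# Requiring b[N] = 0 gives the standard annuity formula for M.
M = principal * r * (1 + r) ** N / ((1 + r) ** N - 1)

balance = principal
for month in range(1, N + 1):
    interest = r * balance            # interest part of this payment
    principal_part = M - interest     # principal part of this payment
    balance = (1 + r) * balance - M   # the difference equation

print(round(M, 2), abs(balance) < 1e-6)
```

The same loop also delivers the month-by-month split of each payment into its interest and principal parts, as described in the chapter.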
This chapter introduces structures and structural interconnections for LTI systems and then considers several examples of digital filters. Examples include moving-average filters, difference operators, and ideal lowpass filters. It is then shown how to convert lowpass filters into other types, such as highpass, bandpass, and so on, by the use of simple transformations. Phase distortion is explained, and linear-phase digital filters, which do not create phase distortion, are introduced. The use of digital filters in noise removal (denoising) is also demonstrated for 1D signals and 2D images. The filtering of an image into low- and high-frequency subbands is demonstrated, and the motivation for subband decomposition in audio and image compression is explained. Finally, it is shown that the convolution operation can be represented as a matrix–vector multiplication, where the matrix has Toeplitz structure. The matrix representation also shows how to undo a filtering operation through a process called deconvolution.
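The Toeplitz-matrix view of convolution can be sketched as follows (a minimal numpy example of our own); deconvolution then amounts to solving the resulting tall linear system, e.g. by least squares:

```python
import numpy as np

# Convolution as multiplication by a Toeplitz matrix: for a filter h of
# length L and input x of length N, y = conv(h, x) equals H @ x, where H
# is the (N + L - 1) x N matrix whose columns are shifted copies of h.
h = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0, 7.0])
L, N = len(h), len(x)

H = np.zeros((N + L - 1, N))
for j in range(N):
    H[j:j + L, j] = h              # column j: h shifted down by j samples

y = H @ x
ok = np.allclose(y, np.convolve(h, x))

# Deconvolution: recover x from y by solving the tall system H x = y.
x_rec = np.linalg.lstsq(H, y, rcond=None)[0]
ok_deconv = np.allclose(x_rec, x)
print(ok, ok_deconv)
```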
This chapter introduces different types of signals, and studies the properties of many kinds of systems that are encountered in signal processing. Signals discussed include the exponential signal, the unit step, single-frequency signals, rectangular pulses, Dirac delta signals, and periodic signals. Two-dimensional signals, especially 2D frequencies and sinusoids, are also demonstrated. Many types of systems are discussed, such as homogeneous systems, additive systems, linear systems, stable systems, time-invariant systems, and causal systems. Both continuous and discrete-time cases are discussed. Examples are presented throughout, such as music signals, ECG signals, and so on, to demonstrate the concepts. Subtle differences between discrete-time and continuous-time signals and systems are also pointed out.
This chapter introduces bandlimited signals, sampling theory, and the method of reconstruction from samples. Uniform sampling with a Dirac delta train is considered, and the Fourier transform of the sampled signal is derived. The reconstruction from samples is based on the use of a linear filter called an interpolator. When the sampling rate is not sufficiently large, the sampling process leads to a phenomenon called aliasing. This is discussed in detail and several real-world manifestations of aliasing are also discussed. In practice, the sampled signal is typically processed by a digital signal processing device, before it is converted back into a continuous-time signal. The building blocks in such a digital signal processing system are discussed. Extensions of the lowpass sampling theorem to the bandpass case are also presented. Also proved is the pulse sampling theorem, where the sampling pulse is spread out over a short duration, unlike the Dirac delta train. Bandlimited channels are discussed and it is explained how the data rate that can be transmitted over a channel is limited by channel bandwidth.
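Aliasing can be demonstrated in a few lines (an illustrative example, not taken from the chapter): when sampling at 8 Hz, a 7 Hz cosine produces exactly the same samples as a 1 Hz cosine, since 7 = 8 − 1 folds back below the Nyquist frequency:

```python
import numpy as np

# Sampling at fs = 8 Hz cannot distinguish a 7 Hz cosine from a 1 Hz one.
fs = 8.0
n = np.arange(32)                 # sample indices
t = n / fs
x7 = np.cos(2 * np.pi * 7 * t)    # 7 Hz, undersampled (fs < 2 * 7)
x1 = np.cos(2 * np.pi * 1 * t)    # 1 Hz alias
aliased = np.allclose(x7, x1)
print(aliased)
```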