In many applications of digital signal processing, it is necessary for different sampling rates to coexist within a given system. One common example is when two subsystems working at different sampling rates have to communicate and the sampling rates must be made compatible. Another case is when a wideband digital signal is decomposed into several non-overlapping narrowband channels in order to be transmitted. In such a case, each narrowband channel may have its sampling rate decreased until its Nyquist limit is reached, thereby saving transmission bandwidth.
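To make that last example concrete, the sketch below decimates one such narrowband channel by an integer factor. It is written in Python with SciPy purely for illustration (the text itself describes MATLAB functions), and the rates and signal are made-up choices:

```python
import numpy as np
from scipy import signal

# A minimal sketch: decimating one narrowband (lowpass) channel by M = 4.
# The sampling rate, tone frequency, and factor are illustrative only.
fs = 8000                                  # original sampling rate (Hz)
t = np.arange(2 * fs) / fs
channel = np.cos(2 * np.pi * 400 * t)      # content well below fs/(2*4) = 1 kHz

M = 4
# scipy.signal.decimate applies an anti-aliasing lowpass filter before
# discarding samples, so the narrowband content survives the rate reduction.
y = signal.decimate(channel, M, ftype="fir")
print(len(channel), "->", len(y))          # 4x fewer samples to transmit
```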
Here, we describe such systems, which are generally referred to as multirate systems. Multirate systems are used in several applications, ranging from digital filter design to signal coding and compression, and are increasingly present in modern digital systems.
First, we study the basic operations of decimation and interpolation, and show how arbitrary rational sampling-rate changes can be implemented with them. The design of decimation and interpolation filters is also addressed. Then, we deal with filter design techniques which use decimation and interpolation in order to achieve a prescribed set of filter specifications. Finally, MATLAB functions which aid in the design and implementation of multirate systems are briefly described.
Basic principles
Intuitively, any sampling-rate change can be effected by recovering the band-limited analog signal xa(t) from its samples x(m), and then resampling it with a different sampling rate, thus generating a different discrete version of the signal, x′(n).
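In practice the analog detour is conceptual only: a rational rate change by L/M is carried out entirely in discrete time by interpolating by L, lowpass filtering, and decimating by M. Below is a minimal sketch assuming SciPy's polyphase resampler, with made-up rates:

```python
import numpy as np
from scipy import signal

# Rational sampling-rate change by L/M = 3/2, entirely in discrete time:
# upsample by L, lowpass filter, downsample by M (performed jointly and
# efficiently by resample_poly's polyphase implementation).
fs_in, L, M = 8000, 3, 2                   # illustrative rates
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 440 * t)            # x(m): samples at the old rate

x_new = signal.resample_poly(x, up=L, down=M)   # x'(n): rate fs_in * L / M
print(len(x), "->", len(x_new))            # 8000 -> 12000 samples
```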
In the previous chapter, we dealt with multirate systems in general, that is, systems in which two or more sampling rates coexist. The operations of decimation, interpolation, and sampling-rate change were studied, as well as some filter design techniques using multirate concepts.
In a number of applications, it is necessary to split a digital signal into several frequency bands. After such a decomposition, the signal is represented by more samples than in its original form. However, we can attempt to decimate each band, ending up with a digital signal decomposed into several frequency bands without increasing the overall number of samples. The question is whether it is possible to exactly recover the original signal from the decimated bands. Systems which achieve this are generally called filter banks.
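A minimal sketch of these ideas uses the two-channel Haar pair, the simplest filter pair achieving perfect reconstruction. Python is used for illustration; this is a standard construction, not a specific design from the text:

```python
import numpy as np

# Two-channel critically decimated filter bank with the Haar filter pair.

def analysis(x):
    """Split x into lowpass and highpass subbands, each downsampled by 2."""
    x = np.asarray(x, dtype=float)
    # Haar analysis filters: h0 = [1, 1]/sqrt(2), h1 = [1, -1]/sqrt(2)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # averages  -> lowpass band
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # differences -> highpass band
    return lo, hi

def synthesis(lo, hi):
    """Upsample and combine the subbands, recovering the original signal."""
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x

x = np.random.randn(64)                # even length for critical decimation
lo, hi = analysis(x)
assert len(lo) + len(hi) == len(x)     # same total number of samples
print(np.max(np.abs(synthesis(lo, hi) - x)))  # ~1e-16: perfect reconstruction
```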
In this chapter, we first deal with filter banks, showing several ways in which a signal can be decomposed into critically decimated frequency bands, and recovered from them with minimum error. Later, wavelet transforms are considered. They are a relatively recent development in functional analysis that has generated great interest in the signal processing community, because of their ability to represent and analyze signals with varying time and frequency resolutions. Their digital implementation can be regarded as a special case of critically decimated filter banks. Finally, we provide a brief description of functions from the MATLAB Wavelet toolbox which are useful for implementing wavelets and filter banks.
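Since, as noted above, the digital wavelet transform can be regarded as a special case of a critically decimated filter bank, a brief sketch of that connection is to iterate the analysis stage from the previous example on its lowpass output (the depth and input length are arbitrary illustrative choices):

```python
# (continues the previous sketch: reuses analysis() and numpy as np)

def dwt(x, levels):
    """Discrete wavelet transform as an iterated two-channel filter bank:
    the analysis stage is reapplied to the lowpass output at each level."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, hi = analysis(approx)   # reuse the Haar analysis stage
        details.append(hi)
    return approx, details

approx, details = dwt(np.random.randn(64), levels=3)
# 64 samples -> 8 approximation + 32 + 16 + 8 detail coefficients (still 64)
print(len(approx), [len(d) for d in details])
```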
This book originated from a training course for engineers at the research and development center of TELEBRAS, the former Brazilian telecommunications holding. That course was taught by the first author back in 1987, and its main goal was to present efficient digital filter design methods suitable for solving some of their engineering problems. Later on, this original text was used by the first author as the basic reference for the digital filters and digital signal processing courses of the Electrical Engineering Program at COPPE/Federal University of Rio de Janeiro.
For many years, former students asked why the original text was not transformed into a book, as it presented a very distinct view that they considered worth publishing. Among the numerous reasons not to attempt such a task, we could mention that there were already a good number of well-written texts on the subject; also, after many years of teaching and researching on this topic, it seemed more interesting to follow other paths than the painful one of writing a book; finally, the original text was written in Portuguese, and a mere translation of it into English would be a very tedious task.
In recent years, the second and third authors, who had attended the signal processing courses based on the original material, continually offered new ideas on how to proceed. That was when we decided to go through the task of completing and updating the original text, turning it into a modern textbook.
In practice, a digital signal processing system is implemented in software on a digital computer, using either a general-purpose digital signal processor or dedicated hardware for the given application. In either case, quantization errors are inherent due to the finite-precision arithmetic. These errors are of the following types:
Errors due to the quantization of the input signals into a set of discrete levels, such as the ones introduced by the analog-to-digital converter.
Errors in the frequency response of filters, or in transform coefficients, due to the finite-wordlength representation of multiplier constants.
Errors made when internal data, like outputs of multipliers, are quantized before or after subsequent additions.
All these error forms depend on the type of arithmetic used in the implementation. If a digital signal processing routine is implemented on a general-purpose computer, where floating-point arithmetic is generally available, that arithmetic becomes the most natural choice. On the other hand, if the building block is implemented on special-purpose hardware or a fixed-point digital signal processor, fixed-point arithmetic may be the best choice, because it is less costly in terms of hardware and simpler to design. A fixed-point implementation usually yields significant savings in chip area as well.
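As an illustration of the second error type above, the sketch below rounds the coefficients of a lowpass FIR filter to an 8-bit fixed-point grid and measures the resulting frequency-response perturbation. The example filter and wordlength are arbitrary choices, not taken from the text:

```python
import numpy as np
from scipy import signal

def quantize(c, bits):
    """Round each coefficient to 'bits' fractional bits, as a fixed-point
    implementation with that wordlength would."""
    step = 2.0 ** (-bits)
    return np.round(np.asarray(c) / step) * step

b = signal.firwin(31, 0.3)          # double-precision prototype lowpass
bq = quantize(b, bits=8)            # 8-bit fixed-point coefficients

w, H = signal.freqz(b)
_, Hq = signal.freqz(bq)
# Coefficient rounding perturbs the frequency response; in practice the
# stopband attenuation typically degrades first.
print("max |H - Hq| =", np.max(np.abs(H - Hq)))
```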
For a given application, the quantization effects are key factors to be considered when assessing the performance of a digital signal processing algorithm.
In late 1982, Ted Hannan discussed with me a question he had been asked by some astronomers – how could you estimate the frequencies of two sinusoids when the frequencies were so close together that you could not tell, by looking at the periodogram, that there were two frequencies? He asked me if I would like to work with him on the problem and gave me a reprint of his paper (Hannan 1973) on the estimation of frequency. Together we wrote a paper (Hannan and Quinn 1989) which derived the regression sum of squares estimators of the frequencies, and showed that the estimators were strongly consistent and satisfied a central limit theorem. It was clear that there were no problems asymptotically if the two frequencies were fixed, so Ted's idea was to fix one frequency, and let the other converge to it at a certain rate, in much the same way as the alternative hypothesis is constructed to calculate the asymptotic power of a test. Since then, I have devoted much of my research to sinusoidal models. In particular, I have spent a lot of time constructing algorithms for the estimation of parameters in these models, implementing the algorithms in practice and, for me perhaps the most challenging, establishing the asymptotic (large sample) properties of the estimators.
We encounter periodic phenomena every day of our lives. Those of us who still use analogue clocks are acutely aware of the 60-second, 60-minute and 12-hour periods associated with the sweeps of the second, minute and hour hands. We are conscious of the fact that the Earth rotates on its axis roughly every 24 hours and that it completes a revolution of the Sun roughly every 365 days. These periodicities are reasonably accurate. The quantities we are interested in measuring, however, are not precisely periodic, and there will also be error associated with their measurement. Indeed, some phenomena only seem periodic. For example, some biological population sizes appear to fluctuate regularly over a long period of time, but it is hard to justify, on common-sense grounds, any periodicity other than that associated with the annual cycle. It has been argued in the past that some cycles occur because of predator-prey interaction, while in other cases there is no obvious reason. On the other hand, the sound associated with musical instruments can reasonably be thought of as periodic, locally in time, since musical notes are produced by regular vibration and propagated through the air via the regular compression and expansion of the air. The ‘signal’ will not be exactly periodic, since there are errors associated with the production of the sound, with its transmission through the air (since the air is not a uniform medium) and because the ear is not a perfect receiver.
We introduce in this chapter those statistical and probability techniques that underlie what is presented later. Few proofs will be given, because a complete treatment of even a small part of what is dealt with here would require a book in itself. We do not intend to bother the reader with too formal a presentation. We shall be concerned with a sample space, Ω, which can be thought of as the set of all conceivable realisations of the random processes with which we are concerned. If A is a subset of Ω, then P(A) is the probability that the realisation is in A. Because we deal with discrete time series almost exclusively, questions of ‘measurability’, i.e. to which sets A the function P(·) can be applied, do not arise and will never be mentioned. We say this once and for all so that the text will not be filled with requirements that this or that set be measurable or that this or that function be a measurable function. Of course, we shall see only (part of) one realisation, {x(t), t = 0, ±1, ±2,…}, and are calling into being in our mind's eye, so to say, a whole family of such realisations. Thus we might write x(t; ω), where ω ∈ Ω is the point corresponding to a particular realisation and, as ω varies for given t, we get a random variable, i.e. a function defined on the sample space Ω.
There are several types of frequency estimation techniques which we have not yet discussed. In particular, we have not paid any attention to those based on autocovariances, such as Pisarenko's technique (Pisarenko 1973), or those based on phase differences, for complex time series, such as two techniques due to Kay (1989). We have not spent much effort on these precisely because we have been concerned with asymptotic theory and asymptotic optimality. That is, for fixed system parameters, we have been interested in the behaviour of frequency estimators as the sample size T increases, in the hope that the sample size we have is large enough for the asymptotic theory to hold well enough. Moreover, we have not wished to impose conditions such as Gaussianity or whiteness on the noise process, as the latter in particular is rarely met in practice. Engineers, however, are often interested in the behaviour of estimators for fixed values of T and decreasing SNR. The usual measure of this behaviour is mean square error, which may be estimated via simulations. Such properties, however, may rarely be justified theoretically, as there is no statistical limit theory which allows the mean square errors of nonlinear estimators to be calculated from what are essentially limiting distribution results. Although the methods mentioned above are computationally simple and efficient, we shall see that they cannot be statistically asymptotically efficient, and may even be inconsistent, i.e. actually converge to the wrong value.
In this chapter, we apply the theory of Chapter 2 to sinusoidal models with fixed frequencies. In Section 3.2, the likelihood function under Gaussian noise assumptions is derived, for both the white and coloured noise cases, and the relationships between the resulting maximum likelihood estimators and local maximisers of the periodogram are explored. The problem of estimating the fundamental frequency of a periodic signal in additive noise is also discussed. The asymptotic properties of these estimators are derived in Section 3.3. The results of a number of simulations are then used to judge the accuracy of the asymptotic theory in ‘small samples’.
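As a concrete illustration of the periodogram maximiser discussed above, here is a minimal sketch in Python (the book's computations are not in Python; the signal parameters, seed, and grid density are illustrative choices only):

```python
import numpy as np

# Frequency estimation by maximising the periodogram, which under white
# Gaussian noise is closely related to maximum likelihood estimation.
rng = np.random.default_rng(0)
T = 512
omega = 0.7                                   # true frequency (rad/sample)
t = np.arange(T)
x = 2.0 * np.cos(omega * t + 0.3) + rng.standard_normal(T)

# Coarse search: evaluate the periodogram on a fine frequency grid
# (zero-padded FFT) and take the largest ordinate.
nfft = 16 * T
X = np.fft.rfft(x, nfft)
I = np.abs(X) ** 2 / T                        # periodogram ordinates
freqs = 2 * np.pi * np.arange(len(X)) / nfft
omega_hat = freqs[np.argmax(I[1:]) + 1]       # skip the DC ordinate
print(omega_hat)                              # close to 0.7 at this SNR
```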
The exact CRB for the single sinusoid case is computed in Section 3.4, and this is used in Section 3.5 to obtain accurate asymptotic theory for two special cases. In the first case, we assume that there are two sinusoids, with frequencies very close together. In fact, we assume that they are so close together that we expect sidelobe interference, and that the periodogram will not resolve the frequencies accurately. Although the difference between the frequencies is taken to shrink with the sample size T, at a rate comparable with the periodogram's resolution limit, we show that the maximum likelihood estimators of the two frequencies still have the usual orders of accuracy.
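For orientation, the classical asymptotic form of the single-sinusoid bound, quoted here from the general literature rather than from the text, is the following: for x(t) = A cos(ωt + φ) + ε(t), with ε(t) white noise of variance σ², observed for t = 0, 1, …, T − 1,

```latex
% Asymptotic Cramer--Rao bound for the frequency of a single real sinusoid
% in white noise of variance \sigma^2 (classical result, for reference):
\operatorname{var}(\hat\omega) \;\geq\; \frac{24\,\sigma^{2}}{A^{2}\,T^{3}}\,\bigl(1 + o(1)\bigr)
```

The T³ rate is what makes frequency estimation so much more accurate, for large samples, than the estimation of amplitude or phase, whose bounds decay only like 1/T.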