At present, most electricity networks simply supply whatever energy the home or office demands. A meter, often at the periphery of the building, monitors consumption, and the building owner is charged accordingly.
While simple, this approach has a number of disadvantages.
It requires significant additional generation capacity to be available in order to supply peaks in demand, which is both costly and can have an environmental impact.
It provides little information to the home owner as to their instantaneous usage, making it hard to understand how energy consumption can be reduced.
It does not readily allow electricity generated locally, for example via solar panels on the building, to be supplied back to the grid.
Reading the meter can require the visit of an employee to the home.
With increasing environmental concerns and the possibility of a substantial increase in demand for electricity if battery-powered cars are charged at home, there are strong drivers to enhance the electricity supply in order to overcome these disadvantages. Such an approach is often termed the ‘smart grid’ – ‘smart’ because it would have some intelligence in terms of the way in which electricity is consumed. There are many differing views as to what the smart grid might look like and how it might be provided, which are explored in this section.
We started by examining all of the known new technologies currently ‘on the wireless horizon’ or, in some cases, much closer to implementation.
We looked at fourth-generation cellular systems and noted that they might bring some advantages in terms of both higher data rates and more efficient use of spectrum. With new spectrum becoming available to cellular operators in bands such as UHF (between 500 and 800 MHz), 2.6 GHz and 3.4 GHz, there is an inclination to use this for a new generation of technology rather than deploying more 3G, and this additional spectrum alone will provide much additional capacity. However, with cellular capacity rapidly being consumed by data, end users might not notice a substantial difference on going from 3G to 4G – instead the technology may be more about reducing the operator's cost base. A major question mark over 4G is the extent to which MIMO can bring benefits in real deployments; if it does not, then many of the promised gains of 4G will not prove to be real.
Femtocells are a topic of much current interest. We are certain that there will be small cells in the home – indeed, there already are WiFi hotspots in many. What is less clear is whether femtocells will be deployed in addition to WiFi. Much of this depends on the business models of the cellular operators; already different operators are deploying different models.
Capacity on wireless channels is often scarce. As discussed in earlier chapters, the capacity of a wireless system is determined by the efficiency of the technology, the amount of spectrum and the number of cells. Adding additional capacity almost invariably comes at a cost and hence there is an incentive to reduce the capacity requirements as much as possible. Only in certain networks, such as within the home, where the wireless systems provide more than sufficient capacity, is this not true – but even in these cases the demands have a tendency to grow to take up the available capacity.
One approach to reducing capacity needs is compression. Many of the types of information to be transmitted have significant redundancy within them – for example, much of speech is silence (between words or when the other party is talking), while one video frame tends to be very similar to the preceding one. In some cases huge reductions in data rates can be achieved by compressing the data stream. For example, one of the major gains in the number of voice calls that could be handled on moving to 2G cellular was the ability to use digital encoders to digitise voice and in the process substantially reduce the data rate needed. In this chapter we cover the current capabilities and likely future progress of encoders and decoders (collectively known as ‘codecs’) and consider the implications for wireless data requirements in the future.
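The silence example above can be made concrete with a toy encoder. The scheme below is purely illustrative (it is not any real voice codec, and the threshold is invented): it run-length encodes the silent, zero-valued stretches of a quantized signal, shrinking the number of symbols to transmit without losing information.

```python
# Toy illustration of redundancy removal: run-length encode the silent
# (zero-valued) stretches of a quantized signal. Real voice codecs, such
# as those introduced with 2G, use far more sophisticated source models,
# but the principle of transmitting less for predictable content is the same.

def rle_silence(samples, threshold=0):
    """Encode runs of silence as ('S', run_length); pass other samples through."""
    encoded, run = [], 0
    for s in samples:
        if abs(s) <= threshold:
            run += 1
        else:
            if run:
                encoded.append(('S', run))
                run = 0
            encoded.append(s)
    if run:
        encoded.append(('S', run))
    return encoded

def rle_decode(encoded):
    out = []
    for item in encoded:
        if isinstance(item, tuple):
            out.extend([0] * item[1])
        else:
            out.append(item)
    return out

signal = [0] * 50 + [3, -2, 5] + [0] * 100 + [1, 4] + [0] * 45
coded = rle_silence(signal)
assert rle_decode(coded) == signal          # lossless
print(len(signal), 'samples ->', len(coded), 'symbols')
```

Here 200 samples become 8 symbols; real codecs achieve their gains with predictive models rather than exact zeros, but the saving has the same origin.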
It is a commonplace to say that humans are social animals. It is quite another thing to leverage this fact of human nature to devise new services and technologies that support it. Indeed, the past 10 years or so have made it clear that technologically enabled social communication both meets a real need and creates business opportunities that had not been expected. These have generated, and will continue to generate, traffic for fixed and wireless infrastructures.
Social connections may be distinguished from person-to-person communications by the fact that they typically entail the broadcasting of messages or, for example, the multiple viewing of single messages on the home page of an individual's social network. Websites such as YouTube, MySpace and Twitter are incredibly popular because they satisfy a basic desire to stay in touch and to know what friends, colleagues and family are up to. It is about being part of a group, not about individual relationships. But satisfying this desire has also created new distinctions in types of social connection and this in turn has cultivated new needs. It is very likely that these will continue to evolve over the next decade or so in ways that will have consequences for technology.
Several forms of digitally enabled social connection can be demarcated. One relates to the sustaining and invigorating of existing social relationships. Here websites like Facebook come to mind.
Cells in the sky can broadly be divided into high-altitude platforms (HAPs) and satellites. HAPs are based on flying platforms such as aircraft or balloons, operating at altitudes of up to about 60,000 feet, while satellites use a wide range of orbits from low-Earth-orbit (LEO) systems such as Iridium at 300–800 km above the Earth to geo-stationary (GEO) satellites such as those used for TV broadcasting at 36,000 km above the Earth's surface. Systems of each type have their own particular characteristics, which will be discussed in the following sections.
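The altitudes quoted above translate directly into very different propagation delays. As a back-of-envelope sketch (one-way, straight-line path at the speed of light; Iridium is taken at roughly 780 km, a value within the quoted LEO range), the LEO/GEO contrast is striking:

```python
# Illustrative one-way propagation delays for the orbits mentioned above.
# Real round-trip latencies are higher (slant paths, processing delays),
# but the LEO/GEO contrast is clear.
C = 299_792_458.0               # speed of light, m/s

def one_way_delay_ms(altitude_km):
    """Straight-up, one-way delay in milliseconds."""
    return altitude_km * 1000 / C * 1000

for name, alt in [('LEO (Iridium, ~780 km)', 780),
                  ('GEO (36,000 km)', 36_000)]:
    print(f'{name}: {one_way_delay_ms(alt):.1f} ms one way')
```

A GEO link thus carries roughly 120 ms of one-way delay before any processing, which is why interactive services favour lower orbits.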
A cell in the sky can provide excellent outdoor coverage. Owing to its elevated position, obstacles such as mountains or buildings tend not to get in the way, allowing line-of-sight propagation from many locations. Large cells can readily be provided, enabling coverage of both urban and rural areas. Because much of the propagation is line-of-sight, higher frequencies, such as those above 3 GHz, can be used. These are inappropriate for cellular communications because the decrease in reflection and diffraction at higher frequencies prevents good coverage, but this is generally not a problem for cells in the sky. At higher frequencies spectrum is both less expensive and more plentiful. A good example of this is TV broadcasting. Terrestrial TV broadcasting uses frequencies in the UHF range – about 500–800 MHz.
Conventional wireless networks have a central transmitter, often termed a base station, transmitter mast or node. This controls the communications with devices within its range. For example, in a cellular system base stations provide coverage across an area and control the access from mobiles in the vicinity. The central transmitter is often elevated relative to the receivers – transmitters of cellular masts are typically 10–20 m above the ground while mobiles are mostly 1–2 m above ground.
A much discussed alternative is for there not to be a central transmitter. In the most extreme case devices transmit to other devices that relay their message onwards. If, for example, all communications occurred within a shopping mall, it might be quite possible for messages to pass from transmitter to intended recipient via re-transmissions (often termed ‘hops’) from one device to another across the mesh. Alternatively, the message might pass through a mesh in order to reach a point of interconnection with the fixed network (a ‘sink node’). At this point the message would be routed through the fixed network to the recipient in a conventional manner, although the final ‘drop’ to the recipient might be via another wireless mesh network.
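The hop-by-hop relaying described above can be sketched as a shortest-path search over the radio connectivity graph. In this illustrative example (node names and topology are invented), each device knows only its direct radio neighbours, and a breadth-first search finds a minimum-hop route to the sink node, i.e. the gateway into the fixed network:

```python
# Minimum-hop routing across a wireless mesh via breadth-first search.
# The mesh topology below is a made-up example for illustration.
from collections import deque

def min_hop_path(links, source, sink):
    """Return a minimum-hop path from source to sink, or None if unreachable."""
    prev = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == sink:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbour in links.get(node, []):
            if neighbour not in prev:
                prev[neighbour] = node
                queue.append(neighbour)
    return None

mesh = {
    'phone':  ['laptop', 'kiosk'],
    'laptop': ['phone', 'kiosk', 'sink'],
    'kiosk':  ['phone', 'laptop'],
    'sink':   ['laptop'],
}
print(min_hop_path(mesh, 'phone', 'sink'))   # -> ['phone', 'laptop', 'sink']
```

Practical mesh protocols must additionally cope with nodes joining, leaving and moving, which is a large part of why mesh routing is harder than this sketch suggests.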
Mesh systems potentially bring a number of advantages.
No need for infrastructure. Without any central transmitters, mesh networks do not require any infrastructure and hence are simpler, cheaper and faster to establish than conventional networks. They can also work where it is not possible to deploy a central infrastructure, perhaps in a war zone or during a civil emergency.
Transportation is an area where wireless can provide important services. From warning of congestion through to automatically guiding vehicles there are many benefits that wireless could bring. Wireless already plays a substantial role in some sectors – for example, air travel without wireless communications and radar is hard to imagine. But in other areas, such as driving, wireless plays a more limited role of providing radio entertainment and satellite navigation.
Transport is perceived to be one of the largest contributors to greenhouse gases and there is much interest in reducing emissions. Wireless could potentially play a role by making transport systems more efficient.
This chapter looks at each of the key modes of transport and considers the role that wireless might play and the difficulties in its introduction.
Road
Road applications can be divided into route guidance, safety and vehicle telematics.
Route guidance
Many now make use of satellite navigation (satnav) systems to guide them to their destination. Satnav systems are gradually improving via the addition of information about congestion and alternative routing. Such information has been available for some time via dedicated sensors, but more recently some satnav systems have started to report on their speed of movement to a central control location. This can then make deductions about congestion (if many cars on a particular road report a slow speed, this is a strong indication of congestion), which it can then send to other satnav devices in the vicinity, allowing them to route around the congestion.
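The central deduction described above can be sketched in a few lines. In this hedged illustration (segment names, speeds and thresholds are all invented), the server aggregates (road segment, reported speed) pairs and flags a segment as congested when enough reports average well below that road's free-flow speed:

```python
# Sketch of congestion inference from satnav speed reports.
# All segment names, speeds and thresholds are illustrative.

def congested_segments(reports, free_flow, ratio=0.5, min_reports=3):
    """Flag segments whose average reported speed is below ratio * free-flow."""
    speeds = {}
    for segment, speed in reports:
        speeds.setdefault(segment, []).append(speed)
    flagged = []
    for segment, obs in speeds.items():
        if len(obs) >= min_reports:                      # enough evidence?
            if sum(obs) / len(obs) < ratio * free_flow[segment]:
                flagged.append(segment)
    return flagged

reports = [('A40', 15), ('A40', 12), ('A40', 18),
           ('M4', 65), ('M4', 70), ('M4', 68),
           ('B312', 10)]                                 # too few reports to judge
free_flow = {'A40': 60, 'M4': 70, 'B312': 30}
print(congested_segments(reports, free_flow))            # -> ['A40']
```

The `min_reports` guard matters in practice: a single slow vehicle may simply be parking, so real systems require corroborating reports before rerouting others.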
This chapter deals with the design methods in which a desired frequency response is approximated by a transfer function consisting of a ratio of polynomials. In general, this type of transfer function yields an impulse response of infinite duration. Therefore, the systems approximated in this chapter are commonly referred to as IIR filters.
In general, IIR filters are able to approximate a prescribed frequency response with fewer multiplications than FIR filters. For this reason, IIR filters can be more suitable for some practical applications, especially those involving real-time signal processing.
In Section 6.2 we study the classical methods of analog filter approximation, namely the Butterworth, Chebyshev, and elliptic approximations. These methods are the most widely used for approximations meeting prescribed magnitude specifications. They originated in the continuous-time domain and their use in the discrete-time domain requires an appropriate transformation.
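As a worked sketch of the Butterworth case, the standard magnitude approximation |H(jω)|² = 1/(1 + (ω/ωc)^{2N}) gives the minimum order in closed form from the passband edge ωp (maximum attenuation Ap dB) and stopband edge ωs (minimum attenuation As dB). The specification values below are illustrative:

```python
# Butterworth order selection from the standard closed-form expression,
# followed by a check that the resulting design meets both band edges.
import math

def butterworth_order(wp, ws, Ap, As):
    """Minimum order meeting Ap dB at wp and As dB at ws."""
    num = (10**(As / 10) - 1) / (10**(Ap / 10) - 1)
    return math.ceil(math.log10(num) / (2 * math.log10(ws / wp)))

def butterworth_atten_db(w, wc, N):
    """Attenuation of the Nth-order Butterworth magnitude at frequency w."""
    return 10 * math.log10(1 + (w / wc)**(2 * N))

wp, ws, Ap, As = 1.0, 2.0, 1.0, 40.0       # illustrative analog specs (rad/s, dB)
N = butterworth_order(wp, ws, Ap, As)      # -> 8
# Place the 3 dB frequency so the passband edge is met exactly:
wc = wp / (10**(Ap / 10) - 1)**(1 / (2 * N))
assert butterworth_atten_db(ws, wc, N) >= As     # stopband spec satisfied
print('minimum order:', N)
```

Chebyshev and elliptic designs meet the same specifications with lower orders, at the cost of ripple; the order formula above is what makes the maximally flat Butterworth case so convenient as a first estimate.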
We then address, in Section 6.3, two approaches that transform a continuous-time transfer function into a discrete-time transfer function, namely the impulse-invariance and bilinear transformation methods.
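The bilinear transformation can be illustrated on the simplest case, a first-order analog lowpass H(s) = ωc/(s + ωc), by substituting s = (2/T)(1 − z⁻¹)/(1 + z⁻¹). The cutoff and sampling period below are illustrative values:

```python
# Bilinear transformation of H(s) = wc / (s + wc) into
# H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1), derived by the substitution
# s = (2/T)(1 - z^-1)/(1 + z^-1) and normalizing the denominator.
import math

def bilinear_first_order(wc, T):
    k = 2.0 / T
    b0 = wc / (k + wc)
    b1 = b0
    a1 = (wc - k) / (k + wc)
    return b0, b1, a1

b0, b1, a1 = bilinear_first_order(wc=2 * math.pi * 100, T=1 / 8000)
# The mapping takes s = 0 to z = 1 and s = infinity to z = -1, so the
# digital filter has unit gain at DC and a true null at Nyquist:
dc_gain = (b0 + b1) / (1 + a1)
nyq_gain = (b0 - b1) / (1 - a1)
print(round(dc_gain, 6), round(nyq_gain, 6))
```

The exact null at z = −1 is a hallmark of the bilinear method; the impulse-invariance method, by contrast, preserves the impulse response shape but aliases the frequency response rather than compressing it onto the unit circle.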
Section 6.4 deals with frequency transformation methods in the discrete-time domain. These methods allow the mapping of a given filter type to another; for example, the transformation of a given lowpass filter into a desired bandpass filter.
In applications where magnitude and phase specifications are imposed, we can approximate the desired magnitude specifications by one of the classical transfer functions and design a phase equalizer to meet the phase specifications.
In many applications of digital signal processing, it is necessary for different sampling rates to coexist within a given system. One common example is when two subsystems working at different sampling rates have to communicate and the sampling rates must be made compatible. Another case is when a wideband digital signal is decomposed into several nonoverlapping narrowband channels in order to be transmitted. In such a case, each narrowband channel may have its sampling rate decreased until its Nyquist limit is reached, thereby saving transmission bandwidth.
Here, we describe such systems which are generally referred to as multirate systems. Multirate systems are used in several applications, ranging from digital filter design to signal coding and compression, and have been increasingly present in modern digital systems.
First, we study the basic operations of decimation and interpolation, and show how arbitrary rational sampling-rate changes can be implemented with them. Then, we describe properties of multirate systems, namely their valid inverse operations and the noble identities. With these properties introduced, the next step is to present the polyphase decompositions and the commutator models, which are key tools in multirate systems. The design of decimation and interpolation filters is also addressed, along with filter design techniques that use decimation and interpolation to achieve a prescribed set of filter specifications.
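The polyphase idea can be checked numerically with a small, self-contained sketch (the filter and signal values are arbitrary toy data): filtering at the high rate and then keeping every Mth sample gives exactly the same output as running M short polyphase subfilters at the low rate and summing.

```python
# Polyphase decimation versus direct "filter then discard" decimation.
# Both compute y[nM] = sum_k h[k] x[nM - k]; the polyphase form splits
# k = mM + p so that all arithmetic happens at the low rate.

def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def direct_decimate(x, h, M):
    return conv(x, h)[::M]              # filter at high rate, then discard

def polyphase_decimate(x, h, M):
    n_out = (len(x) + len(h) - 2) // M + 1
    y = [0.0] * n_out
    for p in range(M):
        hp = h[p::M]                    # p-th polyphase component of h
        xp = [x[m * M - p] if 0 <= m * M - p < len(x) else 0.0
              for m in range(n_out)]    # p-th input phase, at the low rate
        for n in range(n_out):
            for m, c in enumerate(hp):
                if n - m >= 0:
                    y[n] += c * xp[n - m]
    return y

x = [float(n % 7 - 3) for n in range(30)]
h = [0.5, 1.0, 0.25, -0.5, 0.125]
d = direct_decimate(x, h, 3)
p = polyphase_decimate(x, h, 3)
assert len(d) == len(p)
assert all(abs(a - b) < 1e-9 for a, b in zip(d, p))
```

The saving is that the direct form computes M times as many output samples only to throw most of them away, whereas the polyphase form never computes the discarded values; this is the computational content of the noble identities.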
In Chapter 8 we dealt with multirate systems in general; that is, systems in which multiple sampling rates coexist. Operations of decimation, interpolation, and sampling-rate changes were studied, as well as some filter design techniques using multirate concepts.
In a number of applications, it is necessary to split a digital signal into several frequency bands. After such decomposition, the signal is represented by more samples than in the original stage. However, we can attempt to decimate each band, ending up with a digital signal decomposed into several frequency bands without increasing the overall number of samples. The question is whether it is possible to recover the original signal exactly from the decimated bands. Systems which decompose and reassemble the signals are generally called filter banks.
In this chapter, we deal with filter banks, showing several ways in which a signal can be decomposed into critically decimated frequency bands, and recovered from them with minimum error. We start with an analysis of M-band filter banks, giving conditions for perfect reconstruction. Then we perform both frequency- and time-domain analyses of filter banks, followed by a discussion on orthogonality. We also treat two-band perfect reconstruction filter banks, and present the special designs for quadrature mirror filters (QMFs) and conjugate quadrature filters (CQFs). Finally, we return to M-band filter banks, analyzing block transforms, cosine-modulated filter banks, and lapped transforms.
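A minimal instance of a two-band, critically decimated, perfect reconstruction filter bank is the Haar block transform, the simplest of the block transforms mentioned above. The sketch below (toy input values) maps each pair of input samples to one lowpass and one highpass sample, so the total number of samples is preserved, and the synthesis stage recovers the input exactly:

```python
# Two-band critically sampled filter bank via the orthogonal Haar block
# transform: analysis splits pairs into sum/difference channels scaled
# by 1/sqrt(2); synthesis inverts the same orthogonal 2x2 transform.
import math

S = 1 / math.sqrt(2)

def analysis(x):                        # len(x) assumed even
    low = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    high = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return low, high

def synthesis(low, high):
    x = []
    for l, h in zip(low, high):
        x.extend([S * (l + h), S * (l - h)])
    return x

x = [3.0, 1.0, -2.0, 4.0, 0.0, 5.0]
low, high = analysis(x)
assert len(low) + len(high) == len(x)   # critical sampling: no sample growth
y = synthesis(low, high)
assert all(abs(a - b) < 1e-12 for a, b in zip(x, y))   # perfect reconstruction
```

Longer analysis filters (QMF, CQF, cosine-modulated banks) trade this trivial two-tap frequency selectivity for much sharper band splits while preserving the same perfect reconstruction property.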
In previous chapters we were introduced to some design techniques for FIR and IIR digital filters. Some of these techniques can also be used in other applications related to the general field of digital signal processing. In the present chapter we consider the very practical problem of estimating the power spectral density (PSD) of a given discrete-time signal y(n). This problem appears in several applications, such as radar/sonar systems, music transcription, speech modeling, and so on. In general, the problem is often solved by first estimating the autocorrelation function associated with the data at hand, followed by a Fourier transform to obtain the desired spectral description of the process, as suggested by the Wiener–Khinchin theorem to be described in this chapter.
There are several algorithms for performing spectral estimation. Each one has different characteristics with respect to computational complexity, precision, frequency resolution, or other statistical aspects. We may classify all algorithms as nonparametric or parametric methods. Nonparametric methods do not assume any particular structure behind the available data, whereas parametric schemes consider that the process follows some pattern characterized by a specific set of parameters pertaining to a given model. In general, parametric approaches tend to be simpler and more accurate, but they depend on some a priori information regarding the problem at hand.
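The nonparametric "correlogram" route described above (autocorrelation estimate followed by a Fourier transform) can be sketched end to end on a toy signal; the sinusoid, its frequency and the record length below are all illustrative:

```python
# Correlogram PSD estimate: biased autocorrelation, then its Fourier
# transform, evaluated on the DFT frequency grid. The test signal is a
# pure sinusoid placed on bin 8, so the PSD should peak there.
import math

N = 64
f0 = 8 / N
x = [math.cos(2 * math.pi * f0 * n) for n in range(N)]

# biased autocorrelation estimate r[k] = (1/N) sum_n x[n] x[n+k]
r = [sum(x[n] * x[n + k] for n in range(N - k)) / N for k in range(N)]

def psd(m):
    """Correlogram at frequency m/N, using the symmetry r[-k] = r[k]."""
    w = 2 * math.pi * m / N
    return r[0] + 2 * sum(r[k] * math.cos(w * k) for k in range(1, N))

spectrum = [psd(m) for m in range(N // 2 + 1)]
peak = max(range(len(spectrum)), key=spectrum.__getitem__)
print('peak bin:', peak)               # expect a peak at or near bin 8
```

The biased estimator is deliberately chosen here: it guarantees a nonnegative PSD estimate (it corresponds to a triangular window on the lags), which the unbiased estimator does not.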
In Chapter 9 we dealt with filter banks, which are important in several applications. In this chapter, wavelet transforms are considered. They come from the area of functional analysis and have generated great interest in the signal processing community because of their ability to represent and analyze signals with varying time and frequency resolutions. Their digital implementation can be regarded as a special case of critically decimated filter banks. Multiresolution decompositions are then presented as an application of wavelet transforms. The concepts of regularity and number of vanishing moments of a wavelet transform are then explored. Two-dimensional wavelet transforms are introduced, with emphasis on image processing. Wavelet transforms of finite-length signals are also dealt with. We wrap up the chapter with a Do-it-yourself section followed by a brief description of functions from the Matlab Wavelet Toolbox which are useful for implementing wavelets.
Wavelet transforms
Wavelet transforms are a relatively recent development in functional analysis that have attracted a great deal of attention from the signal processing community (Daubechies, 1991). The wavelet transform of a function belonging to ℒ²(ℝ), the space of square-integrable functions, is its decomposition in a basis formed by expansions, compressions, and translations of a single mother function ψ(t), called a wavelet.
The applications of wavelet transforms range from quantum physics to signal coding. It can be shown that for digital signals the wavelet transform is a special case of critically decimated filter banks (Vetterli & Herley, 1992).
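The filter-bank view can be made concrete with the Haar wavelet, the simplest case. The sketch below (toy input, length assumed a power of two) iterates a two-band split on the approximation channel, which is exactly the multiresolution structure: each level halves the time resolution of the approximation while adding a detail band, and the inverse recovers the signal exactly.

```python
# Discrete wavelet transform as an iterated two-band filter bank,
# using the Haar wavelet. coeffs holds detail bands, finest first.
import math

S = 1 / math.sqrt(2)

def haar_dwt(x, levels):
    coeffs = []
    approx = list(x)
    for _ in range(levels):
        low = [S * (approx[i] + approx[i + 1]) for i in range(0, len(approx), 2)]
        high = [S * (approx[i] - approx[i + 1]) for i in range(0, len(approx), 2)]
        coeffs.append(high)
        approx = low                    # iterate only on the lowpass branch
    return approx, coeffs

def haar_idwt(approx, coeffs):
    for high in reversed(coeffs):       # rebuild from coarsest level upward
        nxt = []
        for l, h in zip(approx, high):
            nxt.extend([S * (l + h), S * (l - h)])
        approx = nxt
    return approx

x = [5.0, 3.0, -1.0, 4.0, 2.0, 2.0, 0.0, 6.0]
approx, details = haar_dwt(x, levels=3)
y = haar_idwt(approx, details)
assert all(abs(a - b) < 1e-9 for a, b in zip(x, y))   # perfect reconstruction
```

Iterating only the lowpass branch is what distinguishes the wavelet decomposition from a uniform M-band split: low frequencies receive fine frequency resolution, high frequencies fine time resolution.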
This chapter starts by addressing some implementation methods for digital filtering algorithms and structures. The implementation of any building block of digital signal processing can be performed using a software routine on a simple personal computer. In this case, the designer's main concern becomes the description of the desired filter as an efficient algorithm that can be easily converted into a piece of software. In such cases, the hardware concerns tend to be noncritical, except for some details such as memory size, processing speed, and data input/output.
Another implementation strategy is based on specific hardware, especially suitable for the application at hand. In such cases, the system architecture must be designed within the speed constraints at a minimal cost. This form of implementation is mainly justified in applications that require high processing speed or in large-scale production. The four main forms of appropriate hardware for implementing a given system are:
The development of a specific architecture using basic commercial electronic components and integrated circuits (Jackson et al., 1968; Peled & Liu, 1974, 1985; Freeny, 1975; Rabiner & Gold, 1975; Wanhammar, 1981).
The use of programmable logic devices (PLDs), such as field-programmable gate arrays (FPGAs), which represent an intermediate integrated stage between discrete hardware and full-custom integrated circuits or digital signal processors (DSPs) (Skahill, 1996).
The design of a dedicated integrated circuit for the application at hand, using computer-automated tools for very large scale integration (VLSI) design.
The use of commercial digital signal processors (DSPs), programmable processors whose architectures are optimized for signal processing tasks.
The most widely used realizations for IIR filters are the cascade and parallel forms of second-order (and sometimes first-order) sections. The main advantages of these realizations come from their inherent modularity, which leads to efficient VLSI implementations, to simplified noise and sensitivity analyses, and to simple limit-cycle control. This chapter presents high-performance second-order structures, which are used as building blocks in high-order realizations. The concept of section ordering for the cascade form, which can reduce roundoff noise in the filter output, is introduced. Then we present a technique to reduce the output roundoff-noise effect known as error spectrum shaping. This is followed by consideration of some closed-form equations for the scaling coefficients of second-order sections for the design of parallel-form filters.
We also deal with other interesting realizations, such as the doubly complementary filters, made from allpass blocks, and IIR lattice structures, whose synthesis method is presented. A related class of realizations is the wave digital filters, which have very low sensitivity and also allow the elimination of zero-input and overflow limit cycles. The wave digital filters are derived from analog filter prototypes, employing the concepts of incident and reflected waves. The detailed design of these structures is presented in this chapter.
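The equivalence underlying the cascade form can be verified numerically. In the sketch below (section coefficients are illustrative, chosen with poles well inside the unit circle), two direct-form II second-order sections in series produce the same impulse response as the fourth-order direct form whose numerator and denominator are the products of the section polynomials:

```python
# Cascade of direct-form II second-order sections versus the equivalent
# high-order direct form obtained by multiplying section polynomials.

def biquad(x, b, a):
    """Direct-form II second-order section; b = (b0, b1, b2), a = (a1, a2)."""
    w1 = w2 = 0.0
    y = []
    for xn in x:
        w0 = xn - a[0] * w1 - a[1] * w2
        y.append(b[0] * w0 + b[1] * w1 + b[2] * w2)
        w2, w1 = w1, w0
    return y

def direct_iir(x, b, a):
    """General direct-form IIR; full coefficient lists, a[0] == 1."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

def polymul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

sec1 = ((0.5, 0.2, 0.1), (-0.3, 0.4))      # illustrative stable sections
sec2 = ((1.0, -0.4, 0.2), (0.1, -0.2))
x = [1.0] + [0.0] * 19                     # impulse input

cascade = biquad(biquad(x, *sec1), *sec2)
b = polymul(sec1[0], sec2[0])
a = polymul([1.0, *sec1[1]], [1.0, *sec2[1]])
direct = direct_iir(x, b, a)
assert all(abs(u - v) < 1e-9 for u, v in zip(cascade, direct))
```

In infinite precision the two are identical, as the assertion confirms; the chapter's point is that under finite-precision arithmetic they are not, and the cascade of low-order sections behaves far better with respect to roundoff noise and coefficient sensitivity.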
IIR parallel and cascade filters
The Nth-order direct forms seen in Chapter 4, Figures 4.11–4.13, have roundoff-noise transfer functions Gi(z) (see Figure 11.16) and scaling transfer functions Fi(z) (see Figure 11.20) whose L2 or L∞ norms can assume very high values.