A decade ago the mobile phone was still something of a novelty. Users accepted that coverage was imperfect and were mostly happy to be able to make calls in a few locations. Dropping calls when passing through areas without coverage was accepted as a fact of life when using mobile phones.
Over time people have become increasingly reliant on the mobile. Instead of a tool for adjusting plans at the last minute when necessary, it has become the means by which people's lives are organised. Tradesmen have dispensed with front-office staff because they are able to take calls while working. Businessmen arrange conference calls while travelling on the assumption that they will have mobile coverage. Parents leave children unattended, assuming that they can be contacted via a mobile phone if problems arise. A lack of mobile coverage is now a significant problem for many.
Areas where there is no coverage are often known as ‘not-spots’ and the complaints about these have increased steadily. An expectation of perfect coverage is growing – both from consumers, who want reliability from their phones, and from governments, which want to provide citizens with services that are increasingly seen as essential. This section looks at why not-spots occur, how they might be resolved and whether they will lead to a mobile service that is increasingly regulated and delivered as an essential utility.
It might be assumed that a book on wireless futures would concern itself solely with technological matters, leaving the users of that technology for a different text. At this point in the book, however, we turn to those technologies for which the role and changing behaviour of the user are central. A concern with users helps define what those technologies might be, how they will evolve, and what changes they might bring about in user behaviour that will in turn have implications for the technology. In this chapter we present an overview of the importance of user behaviour in this regard before offering some high-level prognoses concerning the future. Subsequent chapters deal with particular technologies and themes, such as location, health and transport.
As will become clear, the evolution of user behaviours cannot be disentangled from the possibilities that the wireless landscape affords – the two develop together. Nonetheless, an emphasis on the user can highlight issues that are sometimes neglected in wireless research, and this can help guide insights into the future, which is our task here. The interface between the user and their devices and services will be central to this, but so too will be the changing trajectories of action enabled by new hardware and services. The interface is merely the prism for both what users can do and what they want to do, and both broaden through time.
Unlicensed spectrum is becoming more valuable but more congested
When Ofcom last looked, in 2007, at the value that a country derives from its use of spectrum, it concluded that unlicensed use delivered only about 1% of the overall value. This was a backwards-looking survey based on evidence from previous years. To come to its conclusions, the survey assumed that the main value in unlicensed use was WiFi and that this added value by enabling home owners to avoid wiring their homes. At the time, and on the evidence available for earlier years, this may have been a reasonable characterisation. It led to the overall assessment that spectrum managers should concentrate on licensed bands, where the overwhelming value of the spectrum was to be found.
Since that time much has changed. WiFi has become more than just a wire-replacement technology. It is slowly becoming a core part of our communications network, used daily by many to improve productivity and to access network resources in a range of locations. It is increasingly used by cellular operators as a means of offloading data traffic, to the extent that in a few years' time a widespread failure of WiFi networks could result in serious congestion on cellular networks. WiFi and other unlicensed devices may also form home networks that deliver important benefits in terms of energy efficiency and assistance in the home to the elderly and infirm, and form a core part of the home-entertainment proposition.
At present, most electricity networks simply supply the home or office with whatever energy is demanded. A meter, often at the periphery of the building, monitors consumption and the building owner is charged accordingly.
While simple, this approach has a number of disadvantages.
It requires significant additional electricity generation to be available in order to supply peaks in demand. This is costly and can have an environmental impact.
It provides little information to the home owner as to their instantaneous usage, making it hard to understand how energy consumption can be reduced.
It does not readily allow electricity generated locally, for example via solar panels on the building, to be supplied back to the grid.
Reading the meter can require the visit of an employee to the home.
With increasing environmental concerns and the possibility of a substantial increase of demand for electricity if battery-powered cars are charged at home, there are strong drivers to enhance the electricity supply in order to overcome these disadvantages. Such an approach is often termed the ‘smart grid’ – ‘smart’ because it would have some intelligence in terms of the way in which electricity is consumed. There are many differing views as to what the smart grid might look like and how it might be provided, which are explored in this section.
We started by examining all of the known new technologies currently ‘on the wireless horizon’ or, in some cases, much closer to implementation.
We looked at fourth-generation cellular systems and noted that they might bring advantages in terms of both higher data rates and more efficient use of spectrum. With new spectrum becoming available to cellular operators in bands such as UHF (between 500 and 800 MHz), 2.6 GHz and 3.4 GHz, there is an inclination to use this for a new generation of technology rather than deploying more 3G, and this new spectrum alone will provide much additional capacity. However, with cellular capacity rapidly being consumed by data, end users might not notice a substantial difference on moving from 3G to 4G – instead the technology may be more about reducing the operator's cost base. A major question mark over 4G is the extent to which MIMO can bring benefits in real deployments; if it cannot, then many of the promised gains of 4G will not prove to be real.
Femtocells are a topic of much current interest. We are certain that there will be small cells in the home – indeed, there already are WiFi hotspots in many. What is less clear is whether femtocells will be deployed in addition to WiFi. Much of this depends on the business models of the cellular operators; already different operators are deploying different models.
Capacity on wireless channels is often scarce. As discussed in earlier chapters, the capacity of a wireless system is determined by the efficiency of the technology, the amount of spectrum and the number of cells. Adding additional capacity almost invariably comes at a cost and hence there is an incentive to reduce the capacity requirements as much as possible. Only in certain networks, such as within the home, where the wireless systems provide more than sufficient capacity, is this not true – but even in these cases the demands have a tendency to grow to take up the available capacity.
One approach to reducing capacity needs is compression. Many of the types of information to be transmitted have significant redundancy within them – for example, much of speech is silence (between words or when the other party is talking), while one video frame tends to be very similar to the preceding one. In some cases huge reductions in data rates can be achieved by compressing the data stream. For example, one of the major gains in the number of voice calls that could be handled on moving to 2G cellular was the ability to use digital encoders to digitise voice and in the process substantially reduce the data rate needed. In this chapter we cover the current capabilities and likely future progress of encoders and decoders (collectively known as ‘codecs’) and consider the implications for wireless data requirements in the future.
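As a toy illustration of how redundancy translates into data-rate savings, the sketch below run-length encodes the silent stretches of a sampled signal. The threshold, token format and function name are invented for this example; real speech codecs, such as those introduced with 2G, use model-based coding that is far more sophisticated than this.

```python
def compress_silence(samples, threshold=3):
    """Run-length encode sub-threshold ('silent') samples.

    Illustrative only: real voice codecs model speech production
    rather than run-length encoding, but the principle is the same -
    silence carries little information, so it need not be sent
    sample by sample.
    """
    out = []
    run = 0
    for s in samples:
        if abs(s) < threshold:
            run += 1
        else:
            if run:
                out.append(("SIL", run))  # one token replaces a silent run
                run = 0
            out.append(s)
    if run:
        out.append(("SIL", run))
    return out

# A short burst of 'speech' surrounded by silence compresses well:
frame = [0, 1, 0, 50, -40, 30, 0, 0, 0, 0, 1, 0]
encoded = compress_silence(frame)  # 12 samples become 5 tokens
```

The same idea, applied between video frames rather than within an audio stream, underlies the inter-frame prediction that makes video compression effective.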
It is a commonplace to say that humans are social animals. It is altogether another thing to leverage this fact of human nature to devise new services and technologies that support it. Indeed, the past ten years or so have revealed both an unexpected need for technologically enabled social communications and the many new business opportunities they make possible. These have generated, and will continue to generate, traffic for fixed and wireless infrastructures.
Social connections may be distinguished from person-to-person communications by the fact that they typically entail the broadcasting of messages or, for example, the multiple viewing of single messages on the home page of an individual's social network. Websites such as YouTube, MySpace and Twitter are incredibly popular because they satisfy a basic desire to stay in touch and to know what friends, colleagues and family are up to. It is about being part of a group, not about individual relationships. But satisfying this desire has also created new distinctions in types of social connection and this in turn has cultivated new needs. It is very likely that these will continue to evolve over the next decade or so in ways that will have consequences for technology.
Several forms of digitally enabled social connection can be demarcated. One relates to the sustaining and invigorating of existing social relationships. Here websites like Facebook come to mind.
Cells in the sky can broadly be divided into high-altitude platforms (HAPs) and satellites. HAPs are based on flying platforms such as aircraft or balloons, operating at altitudes of up to about 60,000 feet, while satellites use a wide range of orbits from low-Earth-orbit (LEO) systems such as Iridium at 300–800 km above the Earth to geo-stationary (GEO) satellites such as those used for TV broadcasting at 36,000 km above the Earth's surface. Systems of each type have their own particular characteristics, which will be discussed in the following sections.
A cell in the sky can provide excellent outdoor coverage. Owing to its elevated position, obstacles such as mountains or buildings tend not to get in the way, allowing line-of-sight propagation from many locations. Large cells can readily be provided, enabling coverage of both urban and rural areas. Because much of the propagation is line-of-sight, higher frequencies, such as those above 3 GHz, can be used. These are inappropriate for terrestrial cellular communications because the decrease in diffraction and reflection at higher frequencies prevents good coverage, but this is generally not a problem for cells in the sky. At higher frequencies spectrum is both less expensive and more plentiful. A good example of this is TV broadcasting. Terrestrial TV broadcasting uses frequencies in the UHF range – about 500–800 MHz.
Conventional wireless networks have a central transmitter, often termed a base station, transmitter mast or node. This controls the communications with devices within its range. For example, in a cellular system base stations provide coverage across an area and control the access from mobiles in the vicinity. The central transmitter is often elevated relative to the receivers – transmitters of cellular masts are typically 10–20 m above the ground while mobiles are mostly 1–2 m above ground.
A much-discussed alternative is for there not to be a central transmitter. In the most extreme case, devices transmit to other devices that relay their message onwards. If, for example, all communications occurred within a shopping mall, it might be quite possible for messages to pass from transmitter to intended recipient via re-transmissions (often termed ‘hops’) from one device to another across the mesh. Alternatively, the message might pass through a mesh in order to reach a point of interconnection with the fixed network (a ‘sink node’). At this point the message would be routed through the fixed network to the recipient in a conventional manner, although the final ‘drop’ to the recipient might be via another wireless mesh network.
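The multi-hop routing idea can be sketched in a few lines: given which devices are within radio range of which others, a breadth-first search finds a path with the fewest hops from a sender to the sink node. The topology and function names here are hypothetical; practical mesh protocols must additionally cope with mobility, interference and link failures.

```python
from collections import deque

def fewest_hops(links, source, sink):
    """Breadth-first search for the path with the fewest hops.

    `links` maps each device to the devices within its radio range.
    This is mesh routing at its simplest; it assumes a static,
    known topology, which real networks do not have.
    """
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # sink unreachable from this source

# Hypothetical devices A-D in a mall, with 'E' as the sink node:
mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"]}
route = fewest_hops(mesh, "A", "E")
```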
Mesh systems potentially bring a number of advantages.
No need for infrastructure. Without any central transmitters, mesh networks do not require any infrastructure and hence are simpler, cheaper and faster to establish than conventional networks. They can also work where it is not possible to deploy a central infrastructure, perhaps in a war zone or during a civil emergency.
Transportation is an area where wireless can provide important services. From warning of congestion through to automatically guiding vehicles, there are many benefits that wireless could bring. Wireless already plays a substantial role in some sectors – air travel without wireless communications and radar, for example, is hard to imagine. But in other areas, such as driving, wireless plays the more limited role of providing radio entertainment and satellite navigation.
Transport is perceived to be one of the largest contributors to greenhouse gases and there is much interest in reducing emissions. Wireless could potentially play a role by making transport systems more efficient.
This chapter looks at each of the key modes of transport and considers the role that wireless might play and the difficulties in its introduction.
Road
Road applications can be divided into route guidance, safety and vehicle telematics.
Route guidance
Many now make use of satellite navigation (satnav) systems to guide them to their destination. Satnav systems are gradually improving through the addition of information about congestion and alternative routing. Such information has been available for some time via dedicated sensors, but more recently some satnav systems have started to report their speed of movement to a central control location. The control centre can then make deductions about congestion (if many cars on a particular road report a slow speed, this is a strong indication of congestion) and send these to other satnav devices in the vicinity, allowing them to route around the congestion.
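A minimal sketch of that aggregation step might look as follows. The thresholds, segment names and decision rule (flag a segment when enough cars average well below free-flow speed) are assumptions made for illustration, not a description of any deployed system.

```python
def congested_segments(reports, free_flow_kmh, factor=0.5, min_reports=3):
    """Flag road segments where many satnavs report slow movement.

    reports: list of (segment_id, speed_kmh) tuples from individual cars.
    free_flow_kmh: dict of expected uncongested speed per segment.
    Hypothetical rule: a segment counts as congested when at least
    `min_reports` cars average below `factor` times free-flow speed.
    Requiring several reports avoids flagging one car that merely
    stopped to park.
    """
    by_segment = {}
    for segment, speed in reports:
        by_segment.setdefault(segment, []).append(speed)
    flagged = []
    for segment, speeds in by_segment.items():
        if len(speeds) >= min_reports:
            mean = sum(speeds) / len(speeds)
            if mean < factor * free_flow_kmh[segment]:
                flagged.append(segment)
    return flagged

# Three cars crawling on one (made-up) motorway segment, one car
# moving freely elsewhere:
reports = [("M4-J3", 15), ("M4-J3", 20), ("M4-J3", 10), ("A40", 55)]
slow = congested_segments(reports, {"M4-J3": 100, "A40": 60})
```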
This chapter deals with the design methods in which a desired frequency response is approximated by a transfer function consisting of a ratio of polynomials. In general, this type of transfer function yields an impulse response of infinite duration. Therefore, the systems approximated in this chapter are commonly referred to as IIR filters.
In general, IIR filters are able to approximate a prescribed frequency response with fewer multiplications than FIR filters. For this reason, IIR filters can be more suitable for some practical applications, especially those involving real-time signal processing.
In Section 6.2 we study the classical methods of analog filter approximation, namely the Butterworth, Chebyshev, and elliptic approximations. These methods are the most widely used for approximations meeting prescribed magnitude specifications. They originated in the continuous-time domain and their use in the discrete-time domain requires an appropriate transformation.
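As a small numerical illustration of the Butterworth case, the sketch below evaluates the standard magnitude response |H(jΩ)|² = 1/(1 + (Ω/Ωc)^(2N)) and the textbook formula for the minimum order meeting given passband and stopband attenuation specifications. The function names are ours.

```python
import math

def butterworth_gain_db(omega, omega_c, order):
    """Gain in dB of an Nth-order Butterworth lowpass prototype:
    |H(jw)|^2 = 1 / (1 + (w/wc)^(2N))."""
    mag_sq = 1.0 / (1.0 + (omega / omega_c) ** (2 * order))
    return 10.0 * math.log10(mag_sq)

def min_butterworth_order(omega_p, omega_s, a_p_db, a_s_db):
    """Smallest order meeting passband attenuation a_p at omega_p
    and stopband attenuation a_s at omega_s (standard formula)."""
    num = (10 ** (a_s_db / 10) - 1) / (10 ** (a_p_db / 10) - 1)
    return math.ceil(math.log10(num) / (2 * math.log10(omega_s / omega_p)))

# At the cut-off frequency the gain is -3.01 dB regardless of order:
gain_at_cutoff = butterworth_gain_db(1.0, 1.0, 4)
# Order needed for 1 dB passband ripple and 40 dB stopband at twice
# the passband edge:
order = min_butterworth_order(1.0, 2.0, 1.0, 40.0)
```

The monotonic Butterworth response trades selectivity for flatness; the Chebyshev and elliptic approximations of this section achieve the same specifications with lower orders by allowing ripple.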
We then address, in Section 6.3, two approaches that transform a continuous-time transfer function into a discrete-time transfer function, namely the impulse-invariance and bilinear transformation methods.
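The bilinear transformation s = (2/T)(1 − z⁻¹)/(1 + z⁻¹) can be worked through by hand for a first-order analog lowpass H(s) = Ωc/(s + Ωc); the sketch below (function name ours) returns the resulting digital coefficients.

```python
def bilinear_first_order(omega_c, T):
    """Map H(s) = wc / (s + wc) to H(z) via the bilinear transformation
    s = (2/T)(1 - z^-1)/(1 + z^-1).

    Substituting and clearing fractions gives
        H(z) = wc*T (1 + z^-1) / ((wc*T + 2) + (wc*T - 2) z^-1).
    Returns (b, a): numerator and denominator coefficients in z^-1,
    with a[0] normalised to 1.
    """
    k = omega_c * T
    b = [k / (k + 2.0), k / (k + 2.0)]
    a = [1.0, (k - 2.0) / (k + 2.0)]
    return b, a

# With wc*T = 2 the pole lands at z = 0 and b0 = b1 = 0.5:
b, a = bilinear_first_order(2.0, 1.0)
```

A quick sanity check: the DC gain (b0 + b1)/(1 + a1) equals 1 for any Ωc and T, matching H(0) = 1 for the analog prototype, since the bilinear transformation maps s = 0 to z = 1.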
Section 6.4 deals with frequency transformation methods in the discrete-time domain. These methods allow the mapping of a given filter type to another; for example, the transformation of a given lowpass filter into a desired bandpass filter.
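The simplest of these mappings is the substitution z⁻¹ → −z⁻¹, which mirrors a response about ω = π/2 and so turns a lowpass filter into a highpass one; for an FIR filter this amounts to negating every other coefficient. (The general lowpass-to-bandpass mappings of this section use allpass substitutions, which are more involved than this special case.)

```python
def lowpass_to_highpass(h):
    """Apply the transformation z^-1 -> -z^-1 to FIR coefficients h,
    mirroring the frequency response about w = pi/2: each delay
    z^-n picks up a factor (-1)^n."""
    return [(-1) ** n * c for n, c in enumerate(h)]

# The 2-tap moving average (lowpass) becomes a first difference
# (highpass):
highpass = lowpass_to_highpass([0.5, 0.5])  # [0.5, -0.5]
```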
In applications where magnitude and phase specifications are imposed, we can approximate the desired magnitude specifications by one of the classical transfer functions and design a phase equalizer to meet the phase specifications.
In many applications of digital signal processing, it is necessary for different sampling rates to coexist within a given system. One common example is when two subsystems working at different sampling rates have to communicate and the sampling rates must be made compatible. Another case is when a wideband digital signal is decomposed into several nonoverlapping narrowband channels in order to be transmitted. In such a case, each narrowband channel may have its sampling rate decreased until its Nyquist limit is reached, thereby saving transmission bandwidth.
Here, we describe such systems, which are generally referred to as multirate systems. They are used in several applications, ranging from digital filter design to signal coding and compression, and are increasingly present in modern digital systems.
First, we study the basic operations of decimation and interpolation, and show how arbitrary rational sampling-rate changes can be implemented with them. Then, we describe properties of multirate systems, namely their valid inverse operations and the noble identities. With these properties introduced, we present the polyphase decompositions and the commutator models, which are key tools in multirate systems. The design of decimation and interpolation filters is also addressed, followed by filter design techniques that use decimation and interpolation to achieve a prescribed set of filter specifications.
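The two basic operations can be sketched directly: an expander inserts L − 1 zeros between samples and a decimator keeps every Mth sample; chaining the two, with an intervening lowpass filter (omitted here) to suppress images and prevent aliasing, realises a rational L/M rate change. Function names are ours.

```python
def upsample(x, L):
    """Expander: insert L-1 zeros between consecutive samples,
    raising the sampling rate by a factor of L."""
    y = []
    for s in x:
        y.append(s)
        y.extend([0] * (L - 1))
    return y

def downsample(x, M):
    """Decimator: keep every Mth sample, lowering the sampling
    rate by a factor of M."""
    return x[::M]

# A rate change by 2/3 would chain upsample(x, 2), a lowpass
# filter, and downsample(., 3). Without the filter, the expander
# creates spectral images and the decimator causes aliasing.
x = [1, 2, 3, 4]
expanded = upsample(x, 2)    # [1, 0, 2, 0, 3, 0, 4, 0]
decimated = downsample(x, 2) # [1, 3]
```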
In Chapter 8 we dealt with multirate systems in general; that is, systems in which several sampling rates coexist. Operations of decimation, interpolation, and sampling-rate change were studied, as well as some filter design techniques using multirate concepts.
In a number of applications, it is necessary to split a digital signal into several frequency bands. After such decomposition, the signal is represented by more samples than in the original stage. However, we can attempt to decimate each band, ending up with a digital signal decomposed into several frequency bands without increasing the overall number of samples. The question is whether it is possible to recover the original signal exactly from the decimated bands. Systems which decompose and reassemble the signals are generally called filter banks.
In this chapter, we deal with filter banks, showing several ways in which a signal can be decomposed into critically decimated frequency bands and recovered from them with minimum error. We start with an analysis of M-band filter banks, giving conditions for perfect reconstruction. Then we perform both frequency- and time-domain analyses of filter banks, followed by a discussion of orthogonality. We also treat two-band perfect reconstruction filter banks, and present the special designs for quadrature mirror filters (QMFs) and conjugate quadrature filters (CQFs). Finally, we return to M-band filter banks, analyzing block transforms, cosine-modulated filter banks, and lapped transforms.
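The two-tap (Haar) filter pair gives the simplest concrete example of a two-band perfect-reconstruction bank: each band is critically decimated (N input samples become two bands of N/2 samples), yet the synthesis stage recovers the input exactly. The sketch below uses our own function names and one particular normalisation.

```python
def haar_analysis(x):
    """Split x (even length) into critically decimated low and high
    bands using the two-tap Haar filters: pairwise averages and
    pairwise half-differences."""
    low = [(x[2 * n] + x[2 * n + 1]) / 2.0 for n in range(len(x) // 2)]
    high = [(x[2 * n] - x[2 * n + 1]) / 2.0 for n in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Recombine the two bands; with these filters the reconstruction
    is exact: x[2n] = low[n] + high[n], x[2n+1] = low[n] - high[n]."""
    x = []
    for l, h in zip(low, high):
        x.extend([l + h, l - h])
    return x

x = [4.0, 2.0, 1.0, 5.0]
low, high = haar_analysis(x)     # low = [3.0, 3.0], high = [1.0, -2.0]
y = haar_synthesis(low, high)    # recovers [4.0, 2.0, 1.0, 5.0] exactly
```

Note that the total number of samples is unchanged: two bands of N/2 samples each, which is what "critically decimated" means.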
In previous chapters we were introduced to some design techniques for FIR and IIR digital filters. Some of these techniques can also be used in other applications related to the general field of digital signal processing. In the present chapter we consider the very practical problem of estimating the power spectral density (PSD) of a given discrete-time signal y(n). This problem appears in several applications, such as radar/sonar systems, music transcription, speech modeling, and so on. In general, the problem is often solved by first estimating the autocorrelation function associated with the data at hand, followed by a Fourier transform to obtain the desired spectral description of the process, as suggested by the Wiener–Khinchin theorem, described later in this chapter.
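That route can be checked numerically on a short record: the DFT of the biased circular autocorrelation estimate coincides with the periodogram |X(ωk)|²/N. The sketch below (function names ours) computes the estimate both ways; note that it uses the circular autocorrelation, which is what makes the identity exact for a finite record.

```python
import cmath

def periodogram(x):
    """PSD estimate as |X(w_k)|^2 / N at the N DFT frequencies,
    using a direct (O(N^2)) DFT for clarity."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [abs(Xk) ** 2 / N for Xk in X]

def autocorr_psd(x):
    """Same estimate via the Wiener-Khinchin route: DFT of the biased
    circular autocorrelation estimate
        r(l) = (1/N) sum_n x(n) x((n+l) mod N)."""
    N = len(x)
    r = [sum(x[n] * x[(n + l) % N] for n in range(N)) / N for l in range(N)]
    return [abs(sum(r[l] * cmath.exp(-2j * cmath.pi * k * l / N)
                    for l in range(N)))
            for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
p1 = periodogram(x)
p2 = autocorr_psd(x)  # agrees with p1 up to rounding
```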
There are several algorithms for performing spectral estimation. Each one has different characteristics with respect to computational complexity, precision, frequency resolution, or other statistical aspects. We may classify all algorithms as nonparametric or parametric methods. Nonparametric methods do not assume any particular structure behind the available data, whereas parametric schemes consider that the process follows some pattern characterized by a specific set of parameters pertaining to a given model. In general, parametric approaches tend to be simpler and more accurate, but they depend on some a priori information regarding the problem at hand.