Wireless security may come as something of a shock to designers used to wires, where security is generally assumed to be implicit. There, you connect a cable between two sockets, data are transferred, and the assumption is that no one can or will intercept the signal. The wireless world is very different and far more complex. Not only must you ensure that the correct devices (and only those devices) are connected to each other, but the wireless data stream is far easier to intercept, so it also needs to be secured.
Every wireless standard has started off with this knowledge and has produced specifications that attempt to provide security similar to that experienced with a wired system. Most failed to provide an adequate level of security, at least in their initial attempts. To a large degree, the blame for this can be levelled at governments, particularly the US Government. Until recently, there was a level of paranoia about exporting any encryption technology that would make it difficult for security agencies to intercept and crack messages. As a result, standards bodies, particularly those based in the USA, saw little point in writing security specifications that would have made the products implementing them illegal to export. More recently, export controls have been relaxed, allowing standards to embrace higher levels of security.
In the decade since wireless standards emerged onto the market, over three billion standards-based wireless chips have been sold and incorporated into products. Despite this huge volume, a great many of them remain unused, and where they are deployed, only a few specific applications have emerged.
Intriguingly, the position is different if we look outside the field of wireless standards. If we look instead at the market for proprietary wireless, the uses are many and varied. Proprietary wireless competes directly with standards-based wireless in many areas and dominates in others, amongst them wireless mice and keyboards, stereo headsets and remote controls. There are a number of reasons for this, and before looking in detail at the market potential for wireless standards, it is instructive to consider why they have not had the expected widespread success.
As an aside, proprietary wireless should not be dismissed as an option for wireless designs. It comes in many forms, is often optimised for a particular application and, as a result, can offer benefits in terms of power consumption and price. It achieves this because it does not come with the baggage that often encumbers a standard.
Where proprietary wireless falls down is evident from its name – it does not offer interoperability. For a product that will never talk to a product from a different manufacturer, proprietary wireless may be the best choice, but that means it is an isolated design.
Adding wireless to a product introduces a new set of implementation choices. These have consequences in terms of cost and timescale, which may surprise designers who are used to wired designs. This chapter looks at some of the choices that can be made when adding wireless connectivity to a design and the impact they are likely to have.
In most electronics design, it is natural to take the approach of designing with components that are soldered directly to one or more PCBs. Occasionally a module may be used for a specific function, but most designers prefer to design from scratch. Implementing a new wireless design brings new elements of cost and risk, and it is important to understand these before embarking on one.
11.1 Assessing the options
There is a hierarchy of fairly universal design options across the wireless standards, running from a discrete design all the way to a fully approved module. Each option has an impact on design time, the likely number of iterations to get it right, cost, approvals and production tests. Although there is a correlation between sales volume and minimising cost, other factors, such as time to market, RF expertise and access to design information, also come into play in making the choice, particularly if it is a company's first radio design.
11.2 The design architecture
Before talking about the different design options, it is worth explaining the available architectures.
802.15.4 and ZigBee have become largely synonymous in the minds of many people. That is in large part the result of an excellent marketing campaign by the ZigBee Alliance. In fact, the two are used together to form ZigBee products: IEEE 802.15.4 defines a low-power radio and medium access control (MAC) layer, and the ZigBee Alliance defines a mesh networking stack that sits on top of the 802.15.4 standard.
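To make that layering concrete, here is a minimal sketch of how a ZigBee network-layer frame rides inside an 802.15.4 MAC frame. The helper names and constant values are my own illustration and the field layouts are simplified; the authoritative formats are defined in the IEEE 802.15.4 and ZigBee PRO specifications.

```python
# Illustrative sketch only: field layouts are simplified and the constants
# are indicative, not authoritative.
import struct

def build_nwk_frame(dest, src, radius, seq, payload):
    """ZigBee network (NWK) layer: adds the mesh-routing addressing."""
    frame_control = 0x0008  # data frame, protocol version 2 (assumed value)
    return struct.pack("<HHHBB", frame_control, dest, src, radius, seq) + payload

def build_mac_frame(pan_id, dest, src, seq, nwk_frame):
    """IEEE 802.15.4 MAC layer: wraps the NWK frame for the radio."""
    frame_control = 0x8841  # data frame, 16-bit addresses, PAN ID compression
    return struct.pack("<HBHHH", frame_control, seq, pan_id, dest, src) + nwk_frame

# The stack composes the layers: application data -> ZigBee NWK -> 802.15.4 MAC.
nwk = build_nwk_frame(dest=0x0001, src=0x0002, radius=5, seq=7, payload=b"sensor reading")
mac_pdu = build_mac_frame(pan_id=0x1234, dest=0x0001, src=0x0002, seq=7, nwk_frame=nwk)
print(mac_pdu.hex())
```

The point of the sketch is simply that each layer treats the one above as opaque payload, which is what lets other stacks reuse the same 802.15.4 radio and MAC.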
Although by far the best-known higher-layer protocol stack using the 802.15.4 radio, ZigBee is by no means the only one. There are at least a dozen other standards making use of this low-power radio, of which those with the largest market usage are probably RF4CE, WirelessHART and 6LoWPAN. In this chapter, I'll concentrate on the underlying 802.15.4 standard and ZigBee – in particular the ZigBee PRO standard – but I'll also provide a brief overview of these three other upcoming specifications.
One of the reasons that the 802.15.4 radio is so well known is that there are no licence fees or restrictions around using it. That has made it a favourite for universities and companies developing a myriad of different low-power sensor networks. There is an associated risk, in that there is no guarantee that using it does not infringe patents, but its relative simplicity and the availability of chips and development kits from a number of different suppliers means that it is likely to remain a popular choice.
In this and the following chapter we look at how to get the best out of short-range wireless technology. These two chapters focus on the aspects of a specification that a designer can influence within the constraints of each standard, and on how those constraints may direct the choice of standard. In many cases, the same techniques apply across all of the standards covered in the preceding chapters.
Back in Chapter 2, I talked about the three key differences you need to understand between a cable and a wireless link:
Working out what your wireless unit is connected to,
The fact that latency becomes a major factor, as information may not arrive at the far end of the link when you expect it to, and
The fact that throughput varies in what can appear to be a random manner.
In this chapter I'll concentrate on how these three issues can be addressed and how they might affect your choice of standard, before progressing to ways of ensuring you get the best performance out of that choice.
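As a taste of what this means in practice, the sketch below shows the shape of receive-side code that allows for all three issues: it checks which device it is talking to, budgets for latency with a timeout rather than assuming wire-like delivery, and measures throughput instead of assuming a constant rate. It is illustrative Python over UDP with made-up addresses and timings; a real product would use its wireless stack's own API.

```python
# Illustrative only: addresses, port and timings are assumptions.
import socket
import time

EXPECTED_PEER_IP = "192.168.1.50"  # assumed address of the device we paired with
LATENCY_BUDGET_S = 2.0             # how late data may arrive before we stop waiting
MEASURE_WINDOW_S = 10.0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))
sock.settimeout(LATENCY_BUDGET_S)  # issue 2: budget for latency, don't assume wire-like delivery

received_bytes = 0
start = time.monotonic()
while time.monotonic() - start < MEASURE_WINDOW_S:
    try:
        data, peer = sock.recvfrom(2048)
    except socket.timeout:
        continue  # data may simply be late (e.g. MAC-level retries); try again
    if peer[0] != EXPECTED_PEER_IP:
        continue  # issue 1: only accept data from the device we believe we are connected to
    received_bytes += len(data)

elapsed = time.monotonic() - start
# issue 3: throughput varies, so measure it rather than assuming a fixed rate
print(f"effective throughput: {8 * received_bytes / elapsed:.0f} bit/s")
```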
A standard should be regarded as a well-tried basic framework that provides the advantages of interoperability, reduced cost and faster time to market. Anyone who has used short-range wireless will know that within that framework there are many different possible implementations of each standard, each of which can radically affect how it performs and how well it fits the use case.
In the previous chapter we looked at some of the parameters that affect the choice of a wireless standard. This chapter explains how to get the best performance by tailoring it for the specific implementation. It explains the trade-offs that can be made for some of the common parameters in a wireless design. As before, many of the comments and techniques are valid across the range of standards.
9.1 Range and throughput
Invariably, the first question that is asked is, ‘What is the range?’ In Chapter 2, I looked at the fundamentals of range, which are essentially: transmit power, receive sensitivity and matching. In this chapter, I'll look at how to put them into practice and discuss the other key influence – the choice of antenna.
9.1.1 Power amplifiers and low noise amplifiers
The first thought of most designers coming to wireless is how to shout louder; in other words, how they can add additional amplification to boost the transmit power. A number of points should be borne in mind when doing this:
As the radio link is symmetrical (i.e., each radio needs to receive as well as transmit), increasing the output power only gives a real benefit if it is done at both ends; otherwise the second radio will not be able to transmit at a level that allows the first unit to hear whether or not its transmissions have been received. This is the issue of asymmetric link budgets again.
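To see why boosting only one end buys little, it helps to run the numbers. The sketch below is a back-of-envelope link-budget check assuming free-space propagation (the Friis path-loss formula); all the figures in it are illustrative assumptions, not values from the text.

```python
# All numbers here are illustrative assumptions.
import math

tx_power_dbm    = 0.0    # transmitter output power
tx_ant_gain_db  = 0.0    # antenna gain at each end
rx_ant_gain_db  = 0.0
sensitivity_dbm = -90.0  # receiver sensitivity
freq_hz         = 2.45e9 # 2.4 GHz ISM band
dist_m          = 50.0

# Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c)
fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3.0e8)

rx_power_dbm = tx_power_dbm + tx_ant_gain_db + rx_ant_gain_db - fspl_db
margin_db = rx_power_dbm - sensitivity_dbm
print(f"path loss {fspl_db:.1f} dB, received {rx_power_dbm:.1f} dBm, "
      f"link margin {margin_db:.1f} dB")

# Adding a power amplifier at one end only raises tx_power_dbm in that
# direction; the return direction (including acknowledgements) keeps the
# old, smaller margin - the asymmetric link budget mentioned above.
```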
The 802.11 and Wi-Fi standards have become immensely successful in providing Internet connectivity for laptops. In recent years they have also started to appear in mobile phones and other portable devices to provide a moderate-speed connection to Internet hotspots. They are also finding new uses that take advantage of the widely deployed infrastructure, notably in the M2M space and, in some low-power incarnations, for asset tracking. The most recent release of the standard – 802.11n – is beginning to garner some degree of success for audio and video streaming applications in the home. Despite these uses, almost all current deployments are targeted solely at Internet access.
802.11 is the oldest of the wireless standards covered in this book. It grew out of a proprietary wireless LAN called WaveLAN that first appeared on the market in 1988, having been started back in 1986. In its early days it was intended not for Internet access, but as a wireless replacement for Ethernet cables, with potential markets in factory warehousing and connection to an office network. The concept was to replace the wired physical connection of 802.3 Ethernet with a wireless alternative that would slot into the same 802 protocol stack. In 1991, efforts began to evolve it into a wireless networking standard, which led to the release of the 802.11 specification in 1997.
Many designers rush into wireless without any knowledge or consideration of the practical issues they will face in manufacturing and selling a wireless product. Wireless introduces a number of requirements over and above those of normal electronics design. These need to be understood if manufacturers wish to place their products on the market and conform to legal requirements.
This chapter highlights these areas, so that a designer can assess the most practical route when embarking on a wireless design. If they are ignored (as they frequently are), the resulting cost of putting things right after the event can be greater than the cost of the rest of the design effort. In the worst case, a national regulator can stop shipment of products within its country.
10.1 Regulatory approval
To the best of my knowledge, it is legal to sell a cable anywhere in the world. Plugging in a cable doesn't generate any significant amount of electromagnetic radiation that could interfere with other products. Replace that cable with a radio transmitter and everything changes.
Although we are talking about radios that work in the unlicensed ISM bands, that does not grant designers a right of laissez-faire. Products still need to adhere to strict rules, and manufacturers must be able to prove that they meet them. These rules exist to ensure open access for anyone who wants to use that spectrum, to minimise the possibility and severity of interference, and to prevent any single product from monopolising too much of the spectrum.
Bluetooth low energy is the latest short-range wireless specification to appear on the market, having been ratified at the end of 2009. Although written by the Bluetooth Special Interest Group, it is a fundamentally different radio standard from the one covered in Chapter 5, both in terms of how it works and the applications it will enable. Hence it merits its own chapter.
By itself, Bluetooth low energy is incompatible with a standard Bluetooth chip – it is a completely new radio and protocol stack. Some of the applications it enables, such as allowing sports equipment to talk to watches, will use Bluetooth low energy chips for both ends of the link, neither of which will be able to talk to existing Bluetooth chips. In these end-to-end applications, it is not dissimilar to other low-power proprietary standards, such as ANT. However, where it differs, and what gives it its power, is that the standard allows dual-mode chips to be designed, which support multiplexed Bluetooth and Bluetooth low energy connections. These will replace the Bluetooth chips in today's mobile phones and PCs, providing an infrastructure of billions of devices that can communicate with existing Bluetooth peripherals, as well as the new generation of dedicated Bluetooth low energy products. It gives Bluetooth low energy the ‘free ride’ that will lead to economies of scale for chip vendors and a vibrant ecosystem of devices for products to connect to.
If you are working in digital signal processing, control or numerical analysis, you will find this authoritative analysis of quantization noise (roundoff error) invaluable. Do you know where the theory of quantization noise comes from, and under what circumstances it is true? Get answers to these and other important practical questions from expert authors, including the founder of the field and formulator of the theory of quantization noise, Bernard Widrow. The authors describe and analyze uniform quantization, floating-point quantization, and their applications in detail. Key features include:
Analysis of floating-point roundoff
Dither techniques and implementation issues
Heuristic explanations alongside rigorous proofs, making it easy to understand 'why' before the mathematical proof is given
Presenting the fundamentals of cooperative communications and networking, this book treats the concepts of space, time, frequency diversity and MIMO, with a holistic approach to principal topics where significant improvements can be obtained. Beginning with background and MIMO systems, Part I includes a review of basic principles of wireless communications and space-time diversity and coding. Part II then presents topics on physical layer cooperative communications such as relay channels and protocols, performance bounds, multi-node cooperation, and energy efficiency. Finally, Part III focuses on cooperative networking including cooperative and content-aware multiple access, distributed routing, source-channel coding, and cooperative OFDM. Including end-of-chapter review questions, this text will appeal to graduate students of electrical engineering and is an ideal textbook for advanced courses on wireless communications. It will also be of great interest to practitioners in the wireless communications industry. Presentation slides for each chapter and instructor-only solutions are available at www.cambridge.org/9780521895132.
The problem of detecting abrupt changes in the behavior of an observed signal or time series arises in a variety of fields, including climate modeling, finance, image analysis, and security. Quickest detection refers to real-time detection of such changes as quickly as possible after they occur. Using the framework of optimal stopping theory, this book describes the fundamentals underpinning the field, providing the background necessary to design, analyze, and understand quickest detection algorithms. For the first time the authors bring together results which were previously scattered across disparate disciplines, and provide a unified treatment of several different approaches to the quickest detection problem. This book is essential reading for anyone who wants to understand the basic statistical procedures for change detection from a fundamental viewpoint, and for those interested in theoretical questions of change detection. It is ideal for graduate students and researchers of engineering, statistics, economics, and finance.
Although widely employed in image processing, the use of fractal techniques and the fractal dimension for speech characterisation and recognition is a relatively new concept which is now receiving serious attention. This book represents the fruit of research carried out to develop novel fractal-based techniques for speech and audio signal processing. Much of this work is finding its way into practical commercial applications with Nokia Communications and other key organisations. The book starts with an introduction to speech processing and fractal geometry, setting the scene for the heart of the book where fractal techniques are described in detail with numerous applications and examples, and concluding with a chapter summing up the advantages and potential of these new techniques over conventional processing methods. A valuable reference for researchers, academics and practising engineers working in the field of audio signal processing and communications.
Now that we have a large collection of algorithms for convolutions and for the discrete Fourier transform, it is time to turn to how these algorithms are used in applications of signal processing. Our major purpose in this chapter is to discuss the role of algorithms in constructing digital filters. We shall also study other tasks such as interpolation and decimation. By using the methods of nesting and concatenation, we will build large signal-processing structures out of small pieces. The fast algorithms for short convolutions that were studied in Chapter 5 will be used to construct small filter segments.
The most important device in signal processing is the finite-impulse-response filter. An incoming stream of discrete data samples enters the filter, and a stream of discrete samples leaves. The streams of samples at the input and output are very long; in some instances millions of samples per second pass through the filter. Fast algorithms for filter sections always break the input stream into batches of perhaps a few hundred samples. One batch at a time is processed. The input samples are clocked into an input buffer, then processed one block at a time after that input block has been completed. The resulting block is placed into an output buffer, and the samples are clocked out of the output buffer at the desired rate.
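A minimal sketch of this batch processing is shown below, using the overlap-add method of sectioned convolution: each input block is convolved with the filter taps, and the tail that spills past the block boundary is summed into the next output block. Here `np.convolve` stands in for the fast short-convolution algorithms of Chapter 5; in a real implementation each section would use one of those fast algorithms.

```python
import numpy as np

def overlap_add_fir(x, h, block_len=256):
    """Filter the long sequence x with FIR taps h, one block at a time."""
    n_tail = len(h) - 1
    y = np.zeros(len(x) + n_tail)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        # Each section's convolution is len(block) + n_tail samples long;
        # the tail overlaps the start of the next section and is summed in.
        y[start:start + len(block) + n_tail] += np.convolve(block, h)
    return y

# Check the sectioned result against one direct convolution of the whole stream.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = np.ones(8) / 8  # a simple moving-average FIR filter
assert np.allclose(overlap_add_fir(x, h), np.convolve(x, h))
```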
Convolution by sections
Many algorithms for the discrete Fourier transform were studied in Chapter 3.
A quarter of a century has passed since the previous version of this book was published, and signal processing continues to be a very important part of electrical engineering. It forms an essential part of systems for telecommunications, radar and sonar, image formation systems such as medical imaging, and other large computational problems, such as in electromagnetics or fluid dynamics, geophysical exploration, and so on. Fast computational algorithms are necessary in large problems of signal processing, and the study of such algorithms is the subject of this book. Over those several decades, however, the nature of the need for fast algorithms has shifted both to much larger systems on the one hand and to embedded power-limited applications on the other.
Because many processors and many problems are much larger now than they were when the original version of this book was written, and the relative cost of addition and multiplication may now appear less dramatic, some of the topics of twenty years ago may be seen by some to be of less importance today. I take exactly the opposite point of view, for several reasons. Very large three-dimensional or four-dimensional problems now under consideration require massive amounts of computation, and in many cases this computation can be reduced by orders of magnitude by the choice of algorithm. Indeed, these very large problems can be especially suitable for the benefits of fast algorithms.