Ever since the 1920s, every wireless system has been required to hold an exclusive license from the government in order not to interfere with other users of the radio spectrum. Today, with the emergence of new technologies that enable new wireless services, virtually all usable radio frequencies are already licensed to commercial operators and government entities. According to former U.S. Federal Communications Commission (FCC) chair William Kennard, we are facing a “spectrum drought” [356]. On the other hand, not every channel in every band is in use all the time; even for premium frequencies below 3 GHz in dense, revenue-rich urban areas, most bands are quiet most of the time. The FCC in the United States and Ofcom in the United Kingdom, as well as regulatory bodies in other countries, have found that most of the precious, licensed radio-frequency spectrum is inefficiently utilized [37, 357].
In order to increase the efficiency of spectrum utilization, diverse types of technologies have been deployed. Among them, cognitive radio promises the greatest technological gain in wireless capacity. By detecting and utilizing spectra that are assigned to licensed users but stand idle at certain times, cognitive radio acts as a key enabler for spectrum sharing. Spectrum sensing, which aims to detect spectrum holes (i.e., channels not used by any primary users), is the precondition for the implementation of cognitive radio.
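The simplest spectrum-sensing approach is energy detection: a secondary user measures the received energy over a sensing window and declares the channel occupied when the energy exceeds a threshold derived from the noise power. The Python sketch below illustrates the idea; the window length, 0 dB SNR, and threshold factor are illustrative assumptions, not values from the text.

```python
import numpy as np

def energy_detect(samples, noise_var, factor=1.5):
    """Declare the channel busy if the average energy per sample
    exceeds `factor` times the noise variance."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > factor * noise_var

rng = np.random.default_rng(0)
N, noise_var = 1000, 1.0

# Channel idle: noise only (a "spectrum hole").
idle = rng.normal(0.0, np.sqrt(noise_var), N)

# Channel busy: a primary-user tone at 0 dB SNR plus noise.
t = np.arange(N)
busy = (np.sqrt(2 * noise_var) * np.cos(0.2 * np.pi * t)
        + rng.normal(0.0, np.sqrt(noise_var), N))

print(energy_detect(idle, noise_var))   # False: hole detected
print(energy_detect(busy, noise_var))   # True: primary user present
```

In practice the threshold is chosen from a target false-alarm probability rather than a fixed factor; the fixed factor here keeps the sketch minimal.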
A wireless network is a telecommunications network that interconnects nodes without the use of wires. Wireless networks have experienced unprecedented growth over the past few decades, and they are expected to continue to evolve in the future. Seamless mobility and coverage ensure that various types of wireless connections can be made anytime, anywhere. In this chapter, we introduce some basic types of wireless networks and provide the reader with some necessary background on state-of-the-art developments.
Wireless networks use electromagnetic waves, such as radio waves, for carrying the information. Therefore, their performance is greatly affected by the randomly fluctuating wireless channels. To develop an understanding of channels, in Section 2.1 we will study the radio frequency band first, then the existing wireless channel models used for different network scenarios, and finally the interference channel.
There exist many wireless standards, and we describe them in order of coverage area, starting with cellular wireless networks. In Section 2.2.1, we provide an overview of the key elements and technologies of the third-generation (3G) wireless cellular network standards. WiMAX, based on the IEEE 802.16 standard for wireless metropolitan area networks, is discussed in Section 2.2.2. In Section 2.2.3, we study the wireless local area network (WLAN or WiFi), which allows a mobile user to connect to a local area network through a wireless connection.
In this chapter, we discuss the application of CS to the task of positioning. It is easy to imagine that there are many applications of geographical positioning. For example, on a battlefield, it is very important to know the location of a soldier or a tank. In cellular networks, the location of a mobile user can be used for Emergency-911 services. For goods and item tracking, it is of key importance to track locations. The required precision also varies, ranging from subcentimeter (e.g., robotic surgery) to tens of meters (e.g., bus information systems).
Several classifications of positioning technologies are given below [338]:
Classified by signaling scheme: In positioning, the target needs to send out signals to base stations or receive signals from base stations in order to determine the target's position. Essentially, the signal needs to be wireless. Radio-frequency (RF), infrared or optical signals can be used.
Classified by RF waveforms: Various RF signals can be used for positioning, such as UWB, CDMA and OFDM.
Classified by positioning-related metrics: The metrics include time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA) and received signal strength (RSS).
Classified by positioning algorithm: When triangulation-based algorithms are used, the position is obtained from the intersections of lines, based on metrics such as AOA. In trilateration-based algorithms, the position is obtained from the intersections of circles, based on metrics such as TDOA or TOA. In fingerprinting-based (also called pattern-matching) algorithms, a training period is used to establish a mapping between the location and the received signal fingerprint (or pattern).
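As a concrete illustration of trilateration, the circle-intersection equations obtained from TOA ranges can be linearized by subtracting the first range equation from the others, leaving a small least-squares problem. The Python sketch below shows this; the anchor coordinates and target location are made-up values for illustration.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 2-D position from anchor coordinates and TOA-derived
    ranges by linearizing the circle equations against the first anchor."""
    p0, d0 = anchors[0], dists[0]
    # Subtracting the first equation cancels the quadratic terms:
    #   2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - d_i^2 + d_0^2
    A = 2.0 * (anchors[1:] - p0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         - dists[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - target, axis=1)  # noiseless TOA ranges
print(trilaterate(anchors, dists))  # → [3. 4.]
```

With noisy ranges the same least-squares formulation yields an approximate position, provided the anchors are not collinear.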
Despite the relatively short history of CS theory, pioneered by the work of Candes, Romberg, and Tao [67-69] and Donoho [70], the number of studies and publications in this area has become remarkably large. On the other hand, the applications of CS are just beginning to appear. The innate property that many signals can be represented by sparse vectors has been recognized in many application areas. Examples in wireless communication include the sparse channel impulse response in the time domain, the sparse utilization of the spectrum, and the temporal and spatial sparsity in wireless sensor networks. For each of these sparse signals, there are innovative signal-acquisition schemes that not only satisfy the requirements of CS theory, but are also easily realizable in hardware. Efficient signal-recovery algorithms for each system are also available; they guarantee stable signal recovery with high probability.
In this chapter, we provide a concise overview of CS basics and some of its extensions. In subsequent chapters, we focus on CS algorithms and specific areas of CS research in wireless communication.
This chapter begins with Section 3.1, which gives the motivation for CS, illustrates the typical steps of CS by an example, summarizes the key components of CS, and discusses how nearly sparse signals and measurement noise are treated in robust CS. Following these discussions, Section 3.2 compares CS with traditional sensing and examines their respective advantages and disadvantages.
Ultra-wideband (UWB) is one of the major breakthroughs in the area of wireless communications, and its signals are highly sparse in the time domain. Hence, it is natural to introduce CS into the design of UWB systems in order to improve the performance of UWB signal acquisition. In this chapter, we discuss both the compression and reconstruction procedures in UWB systems, which serve as a good tutorial on how to apply CS in communication systems having sparsity. Note that channel estimation is also an important topic in UWB systems; however, since CS-based channel estimation for general wireless communication systems is discussed in Chapter 6, we omit the corresponding discussion in this chapter.
A brief introduction to UWB
History and applications
Although it has been studied since the late 1960s [327], the term “ultra-wideband” was not applied until around 1989. A major breakthrough for UWB was the invention of micropower impulse radar (MIR), which, for the first time, operated UWB signals at extremely low power on inexpensive hardware. Owing to the wide range of applications of UWB, the Federal Communications Commission (FCC) in the United States authorized the unlicensed use of UWB in the spectrum band of 3.1 GHz to 10.6 GHz in a report and order issued on February 14, 2002. A more detailed history of UWB can be found in [328].
Sampling is not only a beautiful research topic with an interesting history, but also a subject with high practical impact, at the heart of signal processing and communications and their applications. Conventional approaches to sampling signals or images follow Shannon's celebrated theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate). This principle has been widely accepted and used ever since the sampling theorem was implied by the work of Harry Nyquist in 1928 (“Certain topics in telegraph transmission theory”) and proved by Claude E. Shannon in 1949 (“Communication in the presence of noise”). However, with the increasing demand for higher resolutions and an increasing number of modalities, traditional signal-processing hardware and software are facing significant challenges. This is especially true for wireless communications.
Compressive sensing (CS) theory is a new technology emerging in the interdisciplinary area of signal processing, statistics, and optimization, as well as many application areas including wireless communications. By utilizing the fact that a signal is sparse or compressible in some transform domain, CS can acquire a signal from a small set of incoherent measurements with a sampling rate much lower than the Nyquist rate. As more and more experimental evidence suggests that many kinds of signals in wireless applications are sparse, CS has become an important component in the design of next-generation wireless networks.
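To make the idea concrete, the sketch below recovers a sparse signal from far fewer random measurements than its length using orthogonal matching pursuit, one standard CS recovery algorithm. The dimensions, sparsity level, and Gaussian measurement matrix are illustrative choices, not values from the text.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k-sparse signal
y = A @ x                                 # m << n incoherent measurements
x_hat = omp(A, y, k)
print(np.allclose(x_hat, x, atol=1e-8))
```

With m well above the order of k log(n/k), recovery of the k-sparse signal succeeds with high probability, consistent with the sub-Nyquist claim above.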
In this chapter, we discuss multiple access using CS in the context of wireless communications. Essentially, multiple access means that multiple users send their data to a single receiver that needs to distinguish and reconstruct the data from these users, as illustrated in Figure 9.1. It exists in many wireless communication systems, such as
Cellular systems: In cellular systems, the mobile users within a cell are served by a base station. The users may need to transmit their data during the same time period; e.g., multiple cellular phone users make phone calls simultaneously.
Ad hoc networks: In such systems, there is no centralized base station. However, one node may need to receive data from multiple neighbors simultaneously; e.g., a relay node in a sensor network receives the reports from multiple sensors and forwards them to a data sink.
The key challenge of multiple access is how to distinguish the information from different transmitters. Hence, there are basically two types of multiple-access schemes:
• Orthogonal multiple access: This type of multiple access includes time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA) and carrier sense multiple access (CSMA). In TDMA, each transmitter is assigned a time slot and can only transmit in that slot. Hence, the signals from different transmitters are distinguished in the time domain. OFDMA follows a scheme similar to TDMA; the only difference is that OFDMA allocates different frequency channels to different transmitters and thus separates the transmitters in the frequency domain.
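The orthogonality that lets an OFDMA receiver separate users can be seen in a few lines of numpy: two users modulate disjoint subcarrier sets, their time-domain signals superpose on the air, and a single FFT at the receiver separates them exactly. All dimensions and symbol values below are illustrative.

```python
import numpy as np

N = 8                                    # total subcarriers
sub_a, sub_b = range(0, 4), range(4, 8)  # disjoint subcarrier allocations

rng = np.random.default_rng(1)
sym_a = rng.choice([-1.0, 1.0], size=N) * np.isin(np.arange(N), sub_a)
sym_b = rng.choice([-1.0, 1.0], size=N) * np.isin(np.arange(N), sub_b)

# Each user transmits the IFFT of its own subcarriers; signals superpose.
rx_time = np.fft.ifft(sym_a) + np.fft.ifft(sym_b)

# One FFT at the receiver; each user's symbols sit on its own subcarriers.
rx_freq = np.fft.fft(rx_time)
print(np.allclose(rx_freq[sub_a], sym_a[sub_a]))  # True
print(np.allclose(rx_freq[sub_b], sym_b[sub_b]))  # True
```

The separation is exact here because the subcarriers are mutually orthogonal over the FFT window; in a real system, synchronization errors and channel dispersion perturb this orthogonality.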
This chapter reviews a collection of sparse optimization models and algorithms. The review focuses on introducing these algorithms along with their motivations and basic properties.
This chapter is organized as follows. Section 4.1 gives a short overview of convex optimization, including the definitions of convex sets and functions, the concepts of local and global optimality, and optimality conditions. A list of sparse optimization models is presented in Section 4.2; these models deal with different types of signals and different kinds of noise, and can also include additional features as objectives and constraints that arise in practical problems. Section 4.3 demonstrates that convex sparse optimization problems can be transformed into equivalent cone programs and solved by off-the-shelf algorithms, yet it argues that these algorithms are usually inefficient or even infeasible for large-scale problems, which are typical in CS applications. Sections 4.4-4.13 cover a large (yet hardly complete) variety of algorithms for sparse optimization. The list is not short because sparse optimization is a common ground where many optimization techniques, old and new, are found useful in varying senses. They have different strengths and fit different applications. One common reason that makes many of these algorithms efficient is their use of shrinkage-like proximal operations, which can be computed very efficiently; so we start with shrinkage in Section 4.4. Then, Section 4.5 presents a prox-linear framework and gives several algorithms under this framework that are based on gradient descent and take advantage of shrinkage-like operations.
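The shrinkage (soft-thresholding) operation mentioned above is the proximal operator of the l1 norm, and it is the workhorse of prox-linear methods such as iterative shrinkage-thresholding (ISTA). A minimal Python sketch follows; the tiny LASSO instance in the test is a made-up toy problem.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the prox of t*||.||_1, applied componentwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, steps=500):
    """Prox-linear (ISTA) iterations for min 0.5||Ax - y||^2 + lam*||x||_1:
    a gradient step on the smooth term followed by shrinkage."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = shrink(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

print(shrink(np.array([3.0, -0.5, 1.0]), 1.0))  # components below 1 are zeroed
```

Each shrinkage costs only O(n) operations, which is the "computed very efficiently" point made above.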
Wireless channels place fundamental limitations on the performance of many wireless communication systems. A transmitted radio signal propagating from a transmitter to a receiver is generally reflected, scattered, and attenuated by obstacles in the field. At the receiver, two or more slightly delayed versions of the transmitted signal superpose. This so-called fading phenomenon in wireless communication channels is a double-edged sword for wireless communication systems [1, 270]. On the one hand, multipath propagation results in severe fluctuations of the amplitudes, phases, and delays of a radio signal over a short period of time or distance. Interpreting the resultant received signals, which vary widely in amplitude and phase, is a critical design challenge for the receiver. On the other hand, multipath enhances the time, spatial, and frequency diversity of the channel available for communication, which leads to gains in data transmission and channel reliability. To obtain the diversity gain and alleviate the side effects of channel fading, knowledge of channel state information (CSI) is necessary. Thus, channel estimation is of significant importance for transceiver design.
CS theory [69, 70, 271] is a new technology that has emerged in the areas of signal processing, statistics and wireless communication. By utilizing the fact that a signal is sparse or compressible in some transform domain, CS can acquire a signal from a small set of randomly projected measurements with a sampling rate much lower than the Nyquist rate.
An analog-to-digital converter (ADC) is a device that uses sampling to convert a continuous quantity to a discrete-time representation in digital form. The reverse operation is performed by a digital-to-analog converter (DAC). ADCs and DACs are the gateways between the analog world and the digital domain. Most signal-processing tasks are implemented in the digital domain. Therefore, ADCs and DACs are the key enablers of digital signal processing and have a significant impact on system performance.
In this chapter, we survey the literature on CS-based ADCs. We first introduce the traditional ADC and its concepts. Then, we study two major types of CS-ADC architectures, the Random Demodulator and the Modulated Wideband Converter. Next, we investigate the unified framework, Xampling, and briefly explain other types of implementation. Finally, we present a summary.
Traditional ADC basics
In this section, we study the basics of ADCs, covering the sampling theorem, quantization rules, and practical ADC implementation in the following subsections.
Sampling theorem
In digital processing, it is useful to represent a signal in terms of sample values taken at appropriately spaced intervals, as illustrated in Figure 5.1. The signal can be reconstructed from the sampled waveform by passing it through an ideal low-pass filter. In order to ensure a faithful reconstruction, the original signal must be sampled at an appropriate rate, as described in the sampling theorem.
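The reconstruction step can be illustrated numerically: sample a tone above the Nyquist rate and rebuild an intermediate value with the ideal low-pass (sinc) interpolation kernel. The sketch below uses made-up parameters (a 1 Hz tone sampled at 8 Hz); because only a finite window of samples is available, the reconstruction is approximate rather than exact.

```python
import numpy as np

f, fs, N = 1.0, 8.0, 1024          # tone frequency, sampling rate (> 2f), samples
n = np.arange(N)
samples = np.cos(2 * np.pi * f * n / fs)   # samples taken above the Nyquist rate

def sinc_reconstruct(t, samples, fs):
    """Ideal low-pass reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)."""
    return np.sum(samples * np.sinc(fs * t - np.arange(len(samples))))

t0 = 60.3                          # a point between samples, well inside the window
x_rec = sinc_reconstruct(t0, samples, fs)
print(abs(x_rec - np.cos(2 * np.pi * f * t0)))  # small truncation error
```

The error shrinks as the evaluation point moves farther from the edges of the sample window, reflecting the infinite support of the ideal sinc kernel.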
This detailed, up-to-date introduction to heterogeneous cellular networking introduces its characteristic features, the technology underpinning it and the issues surrounding its use. Comprehensive and in-depth coverage of core topics catalogues the most advanced, innovative technologies used in designing and deploying heterogeneous cellular networks, including system-level simulation and evaluation, self-organisation, range expansion, cooperative relaying, network MIMO, network coding and cognitive radio. Practical design considerations and engineering tradeoffs are also discussed in detail, including handover management, energy efficiency and interference management techniques. A range of real-world case studies, provided by industrial partners, illustrate the latest trends in heterogeneous cellular network development. Written by leading figures from industry and academia, this is an invaluable resource for all researchers and practitioners working in the field of mobile communications.
Compressive sensing is a new signal processing paradigm that aims to encode sparse signals by using far lower sampling rates than those in the traditional Nyquist approach. It helps acquire, store, fuse and process large data sets efficiently and accurately. This method, which links data acquisition, compression, dimensionality reduction and optimization, has attracted significant attention from researchers and engineers in various areas. This comprehensive reference develops a unified view on how to efficiently incorporate the idea of compressive sensing across assorted wireless network scenarios, interweaving concepts from signal processing, optimization, information theory, communications and networking to address the issues in question from an engineering perspective. It enables students, researchers and communications engineers to develop a working knowledge of compressive sensing, including background on the basics of compressive sensing theory, an understanding of its benefits and limitations, and the skills needed to take advantage of compressive sensing in wireless networks.
Digital information exchange is seen as a key enabler of modern economies, and one of its most challenging aspects is the heterogeneous nature of modern wireless networks. Over 1.4 billion user equipments (UEs) are connected to the cellular network, served by over 3 million base stations (BSs) [1]. The volume of data communicated via information communication technology (ICT) infrastructures has increased more than tenfold over the past 5 years. According to [2], global mobile data traffic is expected to reach 6.3 exabytes per month by 2015, which is more than 26 times the monthly mobile data traffic in 2010. A recognized target of the United Nations is to improve both the coverage and the capacity of cellular networks, in order to foster economic growth and reduce the wealth and knowledge gap among countries. Due to the growing concern over the environmental damage caused by carbon emissions, it is important to achieve the aforementioned level of wireless connectivity whilst consuming a low amount of energy and incurring a low cost. The greenhouse effect is mainly caused by the excessive emissions of carbon dioxide (CO2) over the last century. As reported in [3–5], human industrial activities currently emit twice as much CO2 as natural processes can absorb. Globally, ICT infrastructures consume approximately 3% of the world's total energy [5, 6]. In particular, up to 20% of the energy consumption of the ICT industry is attributed to wireless networks [7], the scale of which is still growing explosively [4]. Roughly 70% of the wireless network energy is consumed by outdoor macrocell BSs.
The main motivation behind using network coding is the broadcast nature of the wireless medium: every other node can potentially overhear the signal transmitted by one node. Conventionally, the overheard signal is treated as noise or interference, and thus completely ignored. However, as shown in [1], smartly controlled interference can be used to greatly improve the total network throughput. While interference is harmful from a conventional perspective, if a node has previously transmitted or overheard the interfering signal, its detrimental effects can be completely removed, increasing the chance of conveying more information in a single transmission.
Network coding was initially proposed in [2] to achieve the multicast capacity of a single-session multicast network by permitting intermediate nodes to encode the received data in addition to traditional routing operations. For a single-session multicast network, it was shown in [3] that linear codes are sufficient to achieve the multicast capacity. A polynomial time algorithm for network code construction was proposed in [4]. The distributed random linear code construction approach in [5] was shown to be asymptotically valid given a sufficiently large field size. For a multiple-session network, it was shown in [6, 7] that linear network coding may be insufficient to achieve the multicast capacity. Moreover, finding a network coding solution for a network with multiple sessions was shown to be an NP-hard problem [8, 9]. Although optimal network coding solutions for multiple-session networks are generally unknown, simple network coding solutions are able to offer tremendous throughput improvements for wireless cooperative networks, which was famously demonstrated by [1, 10–12].
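The throughput gain from overheard packets can be illustrated with the classic two-way relay exchange underlying XOR-style network coding as in [1]: nodes A and B each send a packet to a relay, the relay broadcasts the bitwise XOR once instead of forwarding each packet separately, and each endpoint cancels its own known packet to recover the other's. A minimal Python sketch (the packet contents are arbitrary examples):

```python
def xor_bytes(p, q):
    """Bitwise XOR of two equal-length packets."""
    return bytes(a ^ b for a, b in zip(p, q))

pkt_a = b"hello from A"   # known to node A (it transmitted this packet)
pkt_b = b"hello from B"   # known to node B

# The relay combines both packets and broadcasts one coded packet.
coded = xor_bytes(pkt_a, pkt_b)

# Each node removes its own (known) packet from the coded broadcast.
at_b = xor_bytes(coded, pkt_b)   # B recovers A's packet
at_a = xor_bytes(coded, pkt_a)   # A recovers B's packet
print(at_b == pkt_a, at_a == pkt_b)  # True True
```

The relay thus uses three transmissions instead of four, which is the single-transmission gain described above; random linear codes over larger fields generalize this XOR (GF(2)) combining to many sessions.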