Spread spectrum communication was originally used by the military for interference rejection and enciphering. In digital cellular communications, spread spectrum modulation is used as a multiple-access technique. Spectrum spreading is mainly performed by one of the following three schemes.
Direct sequence (DS): Data is spread and the carrier frequency is fixed.
Frequency hopping (FH): Data is directly modulated and the carrier frequency is spread by channel hopping.
Time hopping (TH): Signal transmission is randomized in time.
The first two schemes are known as spectral spreading, and are introduced in this chapter. Time hopping is known as temporal spreading, and will be introduced in Chapter 20. Spectrum spreading provides frequency diversity, low PSD of the transmitted signal, and reduced band-limited interference, while temporal spreading has the advantage of time diversity, low instantaneous power of the transmitted signals, and reduced impulse interference.
CDMA is a spread spectrum modulation technology in which all users occupy the same time and frequency, and they are separated by their specific codes. For DS-CDMA systems, at the BS, the baseband bitstream for each MS is first mapped onto M-ary symbols such as QPSK symbols; each of the I and Q signals is then spread by multiplication with a spreading code and then a scrambling code. The spread signals for all MSs are then amplified to their respective power levels, summed, modulated onto the specified band, and transmitted.
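To make the spreading and scrambling steps concrete, the following Python sketch spreads the QPSK symbols of a single user, assuming a hypothetical 8-chip ±1 spreading code and a random ±1 scrambling sequence rather than the codes of any particular standard; despreading at the receiver recovers the symbols exactly in this noiseless illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Map bit pairs of one user onto QPSK symbols (I and Q each carry one bit)
bits = rng.integers(0, 2, size=20)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# Hypothetical 8-chip spreading code (+/-1) and a +/-1 scrambling sequence
sf = 8                                                # spreading factor
spread_code = 1 - 2 * rng.integers(0, 2, size=sf)
scramble = 1 - 2 * rng.integers(0, 2, size=len(symbols) * sf)

# Spread: each symbol is multiplied by the chip sequence (chip rate = sf x symbol rate)
chips = np.repeat(symbols, sf) * np.tile(spread_code, len(symbols))
# Scramble: chip-by-chip multiplication with the scrambling sequence
tx = chips * scramble

# Receiver: undo scrambling, then correlate with the spreading code over each symbol
rx = tx * scramble                                    # +/-1 scrambling squares to 1
despread = rx.reshape(-1, sf) @ spread_code / sf
assert np.allclose(despread, symbols)
```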
Two channels with different frequencies, polarizations, or physical locations experience fading independently of each other. By combining two or more such channels, fading can be reduced. This is called diversity. Diversity ensures that the same information reaches the receiver from statistically independent channels. There are two types of diversity: microdiversity that mitigates the effect of multipath fading, and macrodiversity that mitigates the effect of shadowing.
For a fading channel, if we use two well-separated antennas, the probability of both the antennas being in a fading dip is low. Diversity is most efficient when multiple diversity channels carry independently fading copies of the same signal. This leads to a joint pdf being the product of the marginal pdfs for the channels. Correlation between the fading of the channels reduces the effectiveness of diversity, and correlation is characterized by the correlation coefficient, as discussed in Section 3.4.2. Note that for an AWGN channel, diversity does not improve performance.
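As a rough numerical check of this argument, the Monte Carlo sketch below estimates how often the best of several independent Rayleigh-fading branches falls below an SNR threshold; the average SNR and threshold are arbitrary illustrative values, not figures from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000
avg_snr = 10.0          # average branch SNR (linear), illustrative value
threshold = 1.0         # "fading dip" threshold (linear), illustrative value

def outage(n_branches):
    """Probability that all n independent Rayleigh branches are in a dip."""
    # Instantaneous SNR of a Rayleigh-fading branch is exponentially distributed
    snr = rng.exponential(avg_snr, size=(n_trials, n_branches))
    return np.mean(snr.max(axis=1) < threshold)

for n in (1, 2, 4):
    print(f"{n} branch(es): outage probability ~ {outage(n):.4f}")
# With independent branches the outage probability is roughly
# (1 - exp(-threshold/avg_snr)) ** n, i.e. it falls rapidly with n.
```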
Common diversity methods for dealing with small-scale fading are spatial diversity (multiple antennas with space separation), temporal diversity (time division), frequency diversity (frequency division), angular diversity (multiple antennas using different antenna patterns), and polarization diversity (multiple antennas with different polarizations). Macrodiversity is usually implemented by combining signals received by multiple BSs, repeaters, or access points, and the coordination between them is part of the networking protocols.
A digital image is a rectangular array of picture elements (pixels), arranged in m rows and n columns. The resolution of the image is m × n. Images can be categorized into bi-level, grayscale, and color images. A natural scene, such as a picture taken by a digital camera or obtained by using a scanner, is typically a continuous-tone image, where the colors vary continuously to the eye and there is a lot of noise in the picture. An artificial image, such as a graphical image, does not have the noise or blurring of a natural image. A cartoon-like image consists of uniform color in each area, but adjacent areas have different colors.
The features in each type of image can be exploited to achieve better compression. For example, in a bi-level image, each pixel is represented by one bit. A pixel has a high probability of being the same as its neighboring pixels, and thus RLE is suitable for compressing such images. The image can be scanned column by column or in zigzag order. In a grayscale image, each pixel is represented by n bits, and a pixel tends to be similar to its immediate neighbors but may not be identical; thus RLE applied directly is not suitable. By representing the image using a Gray code, which differs in only one bit between two consecutive integers, a grayscale image can be separated into n bi-level images, each of which can be compressed by using RLE.
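A minimal sketch of this bit-plane idea, assuming an 8-bit grayscale image stored as a NumPy array and a simple run-length encoder; the pixel values are first converted to the binary-reflected Gray code, and each resulting bit plane is then a bi-level image suitable for RLE.

```python
import numpy as np

def run_length_encode(bits):
    """RLE of a 1-D binary array: list of (value, run length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(bits[:-1], bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            count = 1
    runs.append((int(bits[-1]), count))
    return runs

# Hypothetical 8-bit grayscale image (values 0..255)
rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(4, 16), dtype=np.uint8)

gray = image ^ (image >> 1)          # binary-reflected Gray code of each pixel
for b in range(8):                   # split into 8 bi-level images (bit planes)
    plane = (gray >> b) & 1
    rle = run_length_encode(plane.flatten())   # simple row-by-row scan
    print(f"bit plane {b}: {len(rle)} runs")
```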
The cellular concept was a major breakthrough in mobile communications, and it initiated the era of modern wireless communications. It helped to solve the problem of spectral congestion and user capacity. For wireless communications, the antennas are typically required to be omnidirectional. The cellular structure divides the geographical area into many cells. A BS equipped with an omnidirectional antenna is installed at the center of each cell. Neighboring cells use different frequency bands to avoid co-channel interference (CCI).
The same frequency bands can be used by different cells that are sufficiently far away. This leads to frequency reuse. For a distance D between two cells that use the same frequency bands and a cell radius R, the relative distance D/R between the two cells is the reuse distance. There is no CCI within a cluster of cells, where each cell uses a different frequency spectrum. The number of cells in a cluster is called the cluster size. The cluster size determines the capacity of the cellular system: a smaller cluster size leads to a larger capacity.
The cell shape usually takes the form of a hexagon, and the overall division of space resembles a beehive pattern. This is illustrated in Fig. 4.1 for cluster size 4 and reuse distance D/R ∼ 4. The hexagonal cell shape is suitable when the antennas of the BSs are placed on top of buildings with a coverage radius of a few miles, such as in the 1G mobile systems.
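For the idealized hexagonal layout, the reuse distance and cluster size N are related by the well-known expression D/R = sqrt(3N) (not derived in this excerpt); the short calculation below simply tabulates it for a few valid cluster sizes.

```python
import math

# Standard hexagonal-geometry relation between cluster size N and reuse
# distance D/R (valid for N = i^2 + i*j + j^2 with non-negative integers i, j).
for N in (3, 4, 7, 12):
    print(f"N = {N:2d}  ->  D/R = {math.sqrt(3 * N):.2f}")
```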
A channel is an abstract model describing how the received (or retrieved) data is associated with the transmitted (or stored) data. Channel coding starts with Claude Shannon's mathematical theory of communication.
Error detection/correction coding
Channel coding can be either error detection coding or error correction coding. When only error detection coding is employed, the receiver can request a transmission repeat, and this technique is known as automatic repeat request (ARQ). This requires two-way communications. An ARQ system requires a code with good error-detecting capability so that the probability of an undetected error is very small.
Forward error correction (FEC) coding allows errors to be corrected based on the received information, and it is more important for achieving highly reliable communications at rates approaching channel capacity. For example, by turbo coding, an uncoded BER of 10⁻³ corresponds to a coded BER of 10⁻⁶ after turbo decoding. For applications that use simplex (one-way) channels, FEC coding must be supported since the receiver must detect and correct errors, and no reverse channel is available for retransmission requests.
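As an elementary illustration of the forward-correction principle (using the classical (7,4) Hamming code rather than the turbo codes mentioned above), the sketch below corrects a single injected bit error without any retransmission.

```python
import numpy as np

# Generator and parity-check matrices of the systematic (7,4) Hamming code
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return data4 @ G % 2

def decode(received7):
    syndrome = H @ received7 % 2
    if syndrome.any():                       # nonzero syndrome: locate the error
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                received7 = received7.copy()
                received7[i] ^= 1            # flip the erroneous bit
                break
    return received7[:4]                     # systematic code: data is the first 4 bits

data = np.array([1, 0, 1, 1])
codeword = encode(data)
corrupted = codeword.copy()
corrupted[5] ^= 1                            # inject a single bit error
assert np.array_equal(decode(corrupted), data)
```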
Another method using error detection coding is error concealment. This method processes data in such a way that the effect of errors is minimized. Error concealment is especially useful for applications that carry data for subjective appreciation, such as speech, music, image, and video. Loss of a part of the data is acceptable, since there is still some inherent redundancy in the data.
Source coding or data compression is used to remove redundancy in a message so that information can be stored and transmitted as efficiently as possible. In Chapter 14, we have introduced Shannon's source-coding and rate-distortion theorems. We have also introduced lossless data compression based on the source-coding theorem. In Chapters 16 and 17, we will address speech/audio and image/video coding. Lossy data compression is obtained by quantizing the analog signals, and the performance of quantization is characterized by the rate-distortion bound. Source coding is the procedure used to convert an analog or digital signal into a bitstream; both quantization and noiseless data compression may be part of source coding.
Coding for analog sources
Source coding can be either lossy or lossless. For discrete sources, a lossless coding technique such as entropy coding is used. Huffman coding is a popular entropy coding scheme. Because lossless coding preserves all of the source information, it achieves a lower compression ratio and thus uses more radio spectrum. For analog sources, lossy coding techniques are usually used.
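The sketch below builds a Huffman code with Python's heapq for an assumed toy symbol distribution; it is only meant to show how entropy coding assigns shorter codewords to more probable symbols.

```python
import heapq
from itertools import count

def huffman_code(probabilities):
    """Build a Huffman code (dict: symbol -> bit string) for the given pmf."""
    tie = count()                                    # tie-breaker for equal weights
    heap = [(p, next(tie), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, code0 = heapq.heappop(heap)           # two least probable subtrees
        p1, _, code1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in code0.items()}
        merged.update({s: "1" + c for s, c in code1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

# Hypothetical source alphabet and probabilities
pmf = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
code = huffman_code(pmf)
avg_len = sum(pmf[s] * len(code[s]) for s in pmf)
print(code, f"average length = {avg_len:.2f} bits/symbol")
```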
PCM is a digital representation of an analog signal. The signal magnitude, sampled regularly at uniform intervals, is quantized to a series of symbols in a digital, usually binary, code. This can be performed by using A/D converters. The demodulation of PCM signals can be performed by DACs. The PCM code serves as the reference representation of the original waveform for source coding. There are three approaches to source coding.
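A minimal sketch of the PCM chain described above, i.e., uniform sampling followed by uniform scalar quantization to B-bit codes and mid-rise reconstruction; the sampling rate, bit depth, and test tone are arbitrary illustrative choices.

```python
import numpy as np

fs, f0, B = 8000, 440.0, 8            # sampling rate (Hz), test tone (Hz), bits/sample
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of samples taken at t = n/fs
x = np.sin(2 * np.pi * f0 * t)        # analog signal model, amplitude in [-1, 1]

levels = 2 ** B
step = 2.0 / levels                   # uniform quantizer step over [-1, 1)
codes = np.clip(np.floor((x + 1) / step), 0, levels - 1).astype(int)  # A/D
x_hat = (codes + 0.5) * step - 1      # D/A: mid-rise reconstruction

sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - x_hat) ** 2))
print(f"{B}-bit PCM, measured SQNR ~ {sqnr:.1f} dB (rule of thumb: ~6B dB)")
```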
MIMO systems are wireless systems with multiple antenna elements at both ends of the link. MIMO systems can be used for beamforming, diversity combining, or spatial multiplexing. The first two applications are the same as for the smart antennas, while spatial multiplexing is the transmission of multiple data streams on multiple antennas in parallel, leading to a substantial increase in capacity. MIMO technology and turbo coding are the two most prominent recent breakthroughs in wireless communications. MIMO technology promises a significant increase in capacity.
MIMO systems have the ability to exploit, rather than combat, multipath propagation. The separability of the MIMO channel relies on the presence of rich multipath, which makes the channel spatially selective. Thus, MIMO effectively exploits multipath. In contrast, some smart antenna systems perform better in the LOS case, and their optimization criteria are based on the DoA/DoD. Although some smart antenna systems generate good results in the non-LOS channel, they mitigate multipath rather than exploit it.
The maximum spatial diversity obtained for a non-frequency-selective fading MIMO channel is proportional to the product of the numbers of receive and transmit antennas. In the uncorrelated Rayleigh fading channel, the MIMO channel capacity/throughput limit grows linearly with the number of transmit or receive antennas, whichever is smaller, i.e., min(Nt, Nr). According to analysis and simulation results reported in the literature, MIMO can provide a spectral efficiency as high as 20–40 bits/s/Hz.
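This linear growth can be checked numerically with the standard ergodic-capacity expression C = E[log2 det(I + (SNR/Nt) H Hᴴ)] for an i.i.d. Rayleigh channel, as in the rough Monte Carlo sketch below; the 20 dB SNR is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(3)
snr = 100.0          # 20 dB, illustrative value
n_trials = 2000

def ergodic_capacity(nt, nr):
    """Ergodic capacity (bits/s/Hz) of an i.i.d. Rayleigh-fading MIMO channel."""
    cap = 0.0
    for _ in range(n_trials):
        # Nr x Nt channel matrix with unit-variance complex Gaussian entries
        H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr / nt) * H @ H.conj().T
        cap += np.log2(np.linalg.det(M).real)
    return cap / n_trials

for n in (1, 2, 4, 8):
    print(f"{n}x{n}: about {ergodic_capacity(n, n):.1f} bits/s/Hz")
# The printed values grow roughly linearly with min(Nt, Nr).
```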
A wireless ad hoc network is an autonomous, self-organized, distributed, peer-to-peer network of fixed, nomadic, or mobile users that communicate over bandwidth-constrained wireless links. There is no preexisting infrastructure. Relay, mesh, and star networks are special cases of wireless ad hoc networks. When the nodes are connected in a mesh topology, the network is also known as a wireless mesh network. When the nodes are mobile, it is known as a mobile ad hoc network (MANET).
A wireless ad hoc network uses multihop relaying of packets, as shown in Fig. 22.1. It can be easily and rapidly deployed, and expensive infrastructure can be avoided. Without an infrastructure, the nodes handle the necessary control and networking tasks by themselves, generally by distributed control. Typical applications are wireless PANs for emergency operations and for civilian and military use. The wireless ad hoc network is playing an increasing role in wireless networks, and an ad hoc networking mode has been, or is being, standardized in most of the IEEE families of wireless networks.
MANETs and wireless sensor networks (WSNs) are the two major types of wireless ad hoc networks. They are both distributed, multihop systems. A MANET is an autonomous collection of mobile routers (and associated hosts) connected by wireless links. Each node is an information appliance, such as a personal digital assistant (PDA), equipped with a radio transceiver. The nodes are fully mobile.
OFDM, also known as simultaneous MFSK, has been widely implemented in high-speed digital communications in delay-dispersive environments. It is a multicarrier modulation (MCM) technique. OFDM was first proposed by Chang in 1966. Chang proposed the principle of transmitting messages simultaneously over multiple carriers in a linear band-limited channel without ISI and ICI. The initial version of OFDM employed a large number of oscillators and coherent demodulators. In 1971, the DFT was applied to the modulation and demodulation processes by Weinstein and Ebert. In 1980, Peled and Ruiz introduced the notion of the cyclic prefix to maintain frequency orthogonality over the dispersive channel. The first commercial OFDM-based wireless system was the ETSI DAB standard proposed in 1995.
A wide variety of wired and wireless communication standards are based on the OFDM or MCM technology. Examples are
digital broadcasting systems such as DAB, DVB-T (Terrestrial DVB), and DVB-H;
home networking such as digital subscriber line (xDSL) technologies;
wireless LAN standards such as HiperLAN/2 and IEEE 802.11a/g/n;
wireless MANs such as IEEE 802.16a/e (WiMAX), ETSI HiperACCESS, IEEE 802.20 (Mobile-Fi), WiBro, and HiperMAN;
wireless WANs such as 3GPP LTE and 3GPP2 UMB;
powerline communications such as HomePlug;
wireless PANs such as UWB radios (IEEE 802.15.3a/3c/4a).
OFDM is widely regarded as a major modulation technique for beyond-3G wireless multimedia communications.
Features of OFDM technology
In OFDM technology, the multiple carriers are called subcarriers, and the frequency band occupied by the signal carried by a subcarrier is called a sub-band.
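A minimal sketch of DFT-based OFDM with a cyclic prefix, using arbitrary illustrative parameters (64 subcarriers, a 16-sample cyclic prefix, QPSK on every subcarrier) rather than any standard's numerology; it verifies that, after the prefix is discarded, a short multipath channel reduces to one complex gain per subcarrier, i.e., orthogonality is preserved.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sc, n_cp = 64, 16                        # subcarriers and cyclic-prefix length

# QPSK symbol on each subcarrier
bits = rng.integers(0, 2, size=(n_sc, 2))
X = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])

# Transmitter: IFFT, then prepend a cyclic prefix (copy of the last n_cp samples)
x = np.fft.ifft(X)
tx = np.concatenate([x[-n_cp:], x])

# Short multipath channel (3 taps, shorter than the cyclic prefix)
h = np.array([1.0, 0.4 + 0.3j, 0.2])
rx = np.convolve(tx, h)[: len(tx)]

# Receiver: drop the prefix, FFT, then one-tap equalization per subcarrier
Y = np.fft.fft(rx[n_cp:n_cp + n_sc])
H = np.fft.fft(h, n_sc)                    # channel frequency response
X_hat = Y / H
assert np.allclose(X_hat, X)               # subcarriers remain orthogonal (no ISI/ICI)
```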
Analog input signals are converted into digital signals for digital processing and transmission. The analog-to-digital (A/D) converter (ADC) performs this function in two steps: a sample-and-hold (S/H) operation, followed by digital quantization. The ADC is primarily characterized by its sampling rate and resolution. A sampling rate of at least twice the highest signal frequency (the Nyquist rate) is a must; otherwise, aliasing occurs and the result is not usable. A higher sampling rate leads to a more accurate result, but a more complex system. The successive-approximation ADC determines the digital code bit by bit, comparing the output of an internal DAC against the input until the closest match is found. The successive-approximation ADC is the most popular type of ADC. The sigma-delta (Σ-Δ) ADC uses oversampling and noise shaping to significantly attenuate the power of the quantization noise in the band of interest.
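A small sketch of the successive-approximation search, assuming an ideal comparator and an internal DAC with a full-scale range of [0, Vref); the converter decides one bit per step, from MSB to LSB.

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Ideal successive-approximation ADC: binary search for the input voltage."""
    code = 0
    for b in reversed(range(bits)):          # decide bits from MSB to LSB
        trial = code | (1 << b)              # tentatively set this bit
        # keep the bit if the internal DAC output does not exceed the input
        if trial * vref / (1 << bits) <= vin:
            code = trial
    return code

for v in (0.1, 0.5, 0.83):
    c = sar_adc(v)
    print(f"vin = {v:.2f} V  ->  code {c:3d}  (DAC output {c / 256:.4f} V)")
```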
The digital-to-analog (D/A) converter (DAC) is used to convert the processed digital signal back to an analog signal. This chapter introduces the ADCs and DACs used in wireless communication systems.
Sampling
Ideal and natural sampling
An analog signal x(t), bandlimited to fmax, can be transformed into digital form by periodically sampling the signal at times nT, where T is the sampling period.
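A short numerical illustration of this statement: a tone sampled above twice its frequency is recovered at the correct frequency, while the same tone sampled too slowly aliases to a lower frequency; the tone frequency and sampling rates are arbitrary test values.

```python
import numpy as np

f0 = 3000.0                                    # tone frequency (Hz), i.e. fmax

def dominant_frequency(fs, duration=1.0):
    """Sample cos(2*pi*f0*t) at t = n*T and locate the strongest DFT bin."""
    n = np.arange(int(fs * duration))
    x = np.cos(2 * np.pi * f0 * n / fs)        # x(nT), with T = 1/fs
    spectrum = np.abs(np.fft.rfft(x))
    return np.argmax(spectrum) * fs / len(x)   # bin index -> frequency in Hz

print("fs = 8000 Hz:", dominant_frequency(8000), "Hz")   # > 2*f0: tone seen at 3000 Hz
print("fs = 4000 Hz:", dominant_frequency(4000), "Hz")   # < 2*f0: aliased to 1000 Hz
```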
In the last three decades, the explosive growth of mobile and wireless communications has radically changed people's lives. Wireless services have migrated from the conventional voice-centric services to data-centric services. The circuit-switched communication network is now being replaced by the all-IP packet-switched network. Mobile communications have also evolved from the first-generation (1G) analog systems to the third-generation (3G) systems now being deployed, and the fourth-generation (4G) systems are now under development and are expected to be available by 2010. The evolution of wireless networking has also taken place rapidly during this period, from low-speed wireless local-area networks (LANs) to broadband wireless LANs, wireless metropolitan-area networks (MANs), wireless wide-area networks (WANs), and wireless personal-area networks (PANs). Broadband wireless data service has also been expanded into broadcasting service, leading to satellite TV broadcasting and wireless regional-area networks (RANs) for digital TV. Data rates have also evolved from about 10 kbit/s for voice communications to approximately 1 Gbit/s in the 4G wireless network. In addition, the 4G wireless network will provide ubiquitous communications.
Scope and Purpose
A complete wireless system involves many different areas. However, most existing textbooks on wireless communications focus only on the fundamental principles of wireless communications, while many other areas associated with a whole wireless system, such as digital signal processing, antenna design, microwave and radio frequency (RF) subsystem design, speech coding, video coding, and channel coding, are left to other books.
The concept of software-defined radio (SDR) emerged in the early 1990s, and SDR has now become a core technology for future-generation wireless communications. In 1997, the U.S. DoD recommended replacing its 200 families of radio systems with a single family of SDRs in the programmable modular communications system (PMCS) guideline document. The architecture outlined in this document includes a list of radio functions, hardware and software component categories, and design rules. The ultimate objective of SDR is to configure a radio platform like a freely programmable computer so that it can adapt to any typical air interface by using an appropriate programming interface. SDR is targeted at implementing all kinds of air interfaces and signal processing functions using software in one device. It is the basis of 3G and 4G wireless communications.
The proliferation of wireless standards has created a pressing need for an MS architecture that supports multiband, multimode, and multistandard low-power radio communications and wireless networking. SDR has become the best solution. By using a unified hardware platform, the user needs only to download and run the software for a radio, and can immediately shift to a new radio standard for a different environment. The software can be downloaded over the air or via a smart card. For example, several wireless LAN standards, including IEEE 802.11, IEEE 802.15, Bluetooth, and HomeRF, use the 2.4 GHz ISM band, and they can all be implemented in one SDR system.