Many years of research have addressed predistortion techniques for memoryless PAs. Recently, several solutions have also included memory effect compensation, since these effects become a significant concern at high bandwidths with multilevel and multi-carrier modulation formats. Digital predistortion solutions, usually based on a particular PA behavioral model, have to be designed for later implementation in a digital signal processor.
An efficient way to implement the predistortion function without introducing an excessive computational cost is by using look-up tables (LUTs). An LUT is a data structure used to replace a runtime computation with a simpler array indexing operation. Computational complexity and processing time are therefore reduced, since retrieving a value from memory is in general faster than running the algorithm required to generate that value. In addition, LUT-based DPD has shown better performance than other low-order parametric models such as polynomials. An LUT-based DPD consists, among other blocks, of a memory block that contains a representation of the inverse characteristic of the amplifier and an address calculator to index the memory block. As we will discuss in the following, depending on the type of LUT architecture considered, it will also incorporate several real or complex adders and multipliers to perform the predistortion of the input complex data signal.
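As a concrete illustration of the indexing idea, the following Python sketch implements a gain-based LUT predistorter. The table size, the toy compression coefficient, and the amplitude-based indexing scheme are illustrative assumptions, not a specific architecture from this chapter.

```python
import numpy as np

def build_lut(num_entries=64, max_amp=1.0):
    """Fill the table with the inverse gain of a toy PA whose amplitude
    gain compresses as g(a) = 1 - 0.2*a**2 (illustrative model only)."""
    amps = np.linspace(0.0, max_amp, num_entries)
    return 1.0 / (1.0 - 0.2 * amps**2)  # complex-valued in general; real for this toy PA

def predistort(x, lut, max_amp=1.0):
    """One table look-up plus one complex multiply per sample:
    the instantaneous amplitude |x| addresses the LUT entry."""
    idx = np.minimum((np.abs(x) / max_amp * (len(lut) - 1)).astype(int),
                     len(lut) - 1)
    return x * lut[idx]

# Usage: a complex baseband sample near full scale
x = np.array([0.9 * np.exp(1j * 0.3)])
y = predistort(x, build_lut())
```

For samples near full scale the stored gain exceeds unity, pre-expanding the signal so that the compressive PA returns it to the intended level; the phase of the sample is untouched by this (real-gain) toy table.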
With the rapid development and worldwide deployment of broadband wireless communication and digital broadcasting infrastructures, the use of digital processing technology in the front-end and radio frequency unit is growing explosively. Digital processing technology for the front-end in transmitters and receivers of wireless communication and digital broadcasting covers a broad range of topics including digital predistortion (DPD), digital up-conversion (DUC), digital down-conversion (DDC), DC-offset calibration, peak-to-average power ratio (PAPR) or crest factor reduction (CFR), pulse-shaping, delay/gain/imbalance compensation, noise-shaping, numerically controlled oscillators (NCOs), and conversion between the analog and digital domains. These digital processing technologies offer a number of advantages in power efficiency, cost reduction, time-to-market, and flexibility for software defined radio (SDR), so as to support multiple standards and multimode applications. Unlike baseband processing, the front-end is tightly connected to the radio frequency layer and therefore faces great limitations and difficulties in digital processing speed, memory, computational capability, power, size, data interfaces, and bandwidths. Consequently, digital processing and circuit implementation of the front-end are very challenging tasks that require huge efforts from the related industry, research community, and regulatory authorities.
From an application and implementation design point of view, this book aims to be the first single volume to provide a comprehensive and highly coherent treatment of the digital front-end and its system integration for multi-standard, multi-carrier, and multimode operation in both broadband communications and digital broadcasting, covering basic principles, signal processing algorithms, silicon convergence, design trade-offs, and well-considered implementation examples.
Usually the development of a new standard for wireless communications does not render the previous ones obsolete, so different complex technologies must coexist in both the user equipment (UE) and the base station (base transceiver station, BTS). This implies a strong need for systems integration, especially in handset devices. Besides, the market tendency is to embed in the same UE a number of communications, location, and entertainment applications that some years ago were split among cell phones, PDAs, laptops, and dedicated devices, allowing access to both cellular communications networks and WLANs (i.e., different versions of the IEEE 802.11 standard, commercialized as Wi-Fi). Additionally, some WPAN (Wireless Personal Area Network) applications, such as Bluetooth or ZigBee, are also included in some UEs, as well as mobile broadcast tuners/decoders (DVB-H, DMB-T, ISDB-T, MediaFLO) and GPS receivers that allow users to employ their personal devices as navigators. GPS signals are also used in some CDMA base stations for clock synchronization and in broadcasting transmitters for synchronizing single frequency networks (SFNs).
As a consequence, the equipment has to deal with heterogeneous wireless access networks that differ in terms of coverage, access technology, bandwidth, power, data rate, and latency. Besides, developments for new base stations should support portions of evolving standards, at least anticipating the need to modify hardware components and to reconfigure (or upgrade) the software. A multimodal device is one capable of coping with a number of different standards and applications, supporting different modes that operate with different access techniques, data rates, powers, sensitivities, modulations, and codes.
The purpose of communication engineering is to transmit information from its source to a destination some distance away. A basic communication system consists of three essential components: transmitter, channel (wired or wireless), and receiver. Figure 1.1 shows a typical point-to-point one-way communication system. For a two-way system, both a receiver and a transmitter are required on each side.
The transmitter transforms the input signal into a transmission signal suited to the characteristics of the channel. Since the channel varies with time and the input signal to the system differs from case to case, the transmitter processes the input signal to produce a signal suitable for transmission; this generally includes modulation and coding. After being processed by the transmitter, the transmitted signal enters the channel. The channel can be any medium or interface suitable for transmission, and it connects the transmitter and the receiver; it may be a laser beam, a coaxial cable, or a radio wave. During transmission, various unwanted effects act on the signal. Attenuation and power loss reduce the signal strength and make detection difficult at the receiver. Besides power loss and attenuation, the channel may also introduce undesired signals. These may be random and unpredictable signals that exist in nature, such as solar radiation, or signals produced by other transmitters or machines. We call the former type of undesired signal "noise" and the latter type "interference." If the interfering signals occupy frequencies different from those of the desired signal, proper filters can remove them. In contrast, the random noise superimposed on the information-bearing signal is hard to eliminate completely by filtering, and the remaining noise inevitably corrupts the desired signal. The ratio of received signal power to noise power determines the channel capacity, one of the basic system performance parameters. After the receiver picks up the signal from the channel, it amplifies and filters it to compensate for the power loss, followed by demodulation and decoding to recover the original input signal.
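The dependence of capacity on the received signal-to-noise power ratio can be made concrete with Shannon's capacity formula, C = B log2(1 + S/N). The bandwidth and SNR values in the sketch below are purely illustrative.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative numbers: a 1 MHz channel at 20 dB SNR (linear SNR = 100)
c = shannon_capacity(1e6, 100.0)  # roughly 6.66 Mb/s
```

Doubling the bandwidth doubles capacity, whereas doubling the SNR only adds one bit per second per hertz, which is why wideband systems accept operating at moderate SNR.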
The analog-to-digital converter (ADC) is one of the key components in modern radio front-end design. Real-world ADC components always involve certain performance trade-offs which have to be taken into account. One reason for this is the rather slow development of ADCs compared to other technical achievements in radio technology [22], [33]. The fundamental trade-offs are illustrated in Figure 15.1. Power dissipation is a very important aspect, especially in mobile devices, but low-power ADCs tend to have low resolution and sampling rates. On the other hand, if high resolution is required, the sampling rate of such a high-precision ADC is usually not very high.
Very high requirements for both sampling rate and resolution are set by software defined radios, where most of the selectivity and other functionalities are implemented with digital signal processing [21], [32]. In other words, ADCs have to digitize high-bandwidth signals with a large dynamic range. Figure 15.2 illustrates why high resolution is needed to cope with high signal dynamics. Before analog-to-digital conversion the signal has to be scaled properly to avoid exceeding the voltage range of the ADC [4]. If the overall waveform to be digitized consists of several signal bands with different power levels, strong signals leave fewer quantization levels available for weak signals. The weak signals therefore suffer more from quantization noise than they would if no strong signals were present at the same time.
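A rough back-of-the-envelope sketch of this effect uses the standard ideal-quantizer rule of thumb (SNR ≈ 6.02N + 1.76 dB for a full-scale sine); the bit width and the blocker margin below are illustrative assumptions.

```python
def ideal_adc_snr_db(bits):
    """Ideal quantization SNR of an N-bit ADC for a full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def effective_bits_for_weak_signal(bits, blocker_margin_db):
    """If the desired signal sits blocker_margin_db below full scale (because a
    strong signal sets the input scaling), roughly blocker_margin_db / 6.02 bits
    of resolution are spent on headroom rather than on the weak signal."""
    return bits - blocker_margin_db / 6.02

# Illustration: a 12-bit ADC digitizing a waveform in which the desired signal
# is 30 dB below a strong blocker effectively leaves about 7 bits for it.
eff = effective_bits_for_weak_signal(12, 30.0)
```

This is why wide-dynamic-range, multi-band SDR front-ends demand far more ADC resolution than a single properly scaled signal would.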
The millimeter wave spectrum has been identified as a candidate of choice to support multi-gigabit/s data transmission. Increasing interest in recent years has pushed the regulatory agencies to provide new opportunities for unlicensed spectrum usage with fewer restrictions on radio parameters. In order to provide more flexibility in spectrum sharing, the FCC opened 7 GHz of unlicensed spectrum at millimeter wave frequencies around 60 GHz, from 57 to 64 GHz.
As is well known, for comparable bandwidths and data rates, an important advantage of using millimeter wave frequencies instead of microwave ones is the reduced ratio between the bandwidth and the center frequency, paving the way to transceiver simplicity. In addition, compared to microwave frequencies, the strong signal attenuation at 60 GHz allows efficient frequency reuse. This helps to create small indoor cells for secure hot-spot wireless communications. This spectrum is suitable for multi-gigabit/s wireless communication systems, such as home or office high-speed wireless networking and entertainment, for example extremely fast file downloads via wireless Gigabit Ethernet and wireless High Definition Multimedia Interface (HDMI).
Wireless communication systems use radio frequency (RF) signals to transmit data between base stations and mobile users. The RF power amplifier (PA) is located within the transmitter and is a key component of the down-link connecting the base station to the mobile. Power amplifiers tend to be either linear or efficient, but not both. Fortunately, an efficient power amplifier may be used within a digital transmitter if the nonlinear behavior of the PA is compensated using digital predistortion (DPD).
This chapter discusses digital predistortion techniques suitable for use in a digital transmitter. Section 6.2 reviews the nonlinear behavior of a power amplifier and its effect on the output spectrum. Section 6.3 provides an overview of digital predistortion avoiding equations for the most part. Details of the basic algorithms used appear in Section 6.4. Section 6.5 discusses some advanced topics in DPD.
Signal-to-noise ratio (SNR) is the principal figure of merit for any electronic system. The relation between the signal power and the noise power has been the main challenge for electronic engineers.
In order to study and optimize this type of system, design engineers have focused mainly on noise optimization, that is, on minimizing all sources of noise in communication systems. This noise arises mainly from thermal noise [1], nonlinear distortion noise [2], and/or quantization noise in digital systems [3].
Thus the correct identification of the noise contributions is of fundamental importance to the calculation of the noise budget. This is one of the reasons why most communication engineers start by examining the time-domain waveform characteristics of electronic systems, principally the peak of the signal relative to its average value, normally known as the peak-to-average ratio (PAR) [4] or crest factor (CF) [5].
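Both quantities have simple sample-based estimates. The following sketch computes them for a discrete signal; the sine-wave example is only a sanity check, since a sine's crest factor is known to be sqrt(2), i.e. a peak-to-average power ratio of about 3 dB.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def crest_factor(x):
    """Crest factor: peak amplitude over RMS value (the PAR is its square)."""
    return np.abs(x).max() / np.sqrt(np.mean(np.abs(x) ** 2))

# Sanity check on a sine sampled over an integer number of periods
t = np.arange(1000)
s = np.sin(2 * np.pi * t / 100.0)  # 10 full periods
```

Multi-carrier waveforms evaluated with the same two functions yield much larger values, which is exactly why the PAR enters the noise budget.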
Recent advances in wireless communications have enabled reliable long-range links, high data rate services for mobile users, and support of multimedia content. The vast majority of currently operating wireless networks are based on digital processing at both transmission ends, where incoming analog signals (e.g. voice) are converted into digital bit streams. Digital transmission offers increased reliability and flexibility compared to analog transmission, as well as compatibility among different digital systems. The boost in wireless digital communications came along with the development of small-scale integrated circuits, which made feasible cost-effective and time-efficient implementations of complex operations related to digital processing (e.g. sampling, quantization, matrix inversion, fast Fourier transform, etc.). However, the performance of a wireless network depends not only on the installed hardware at both transmission ends but also on a number of issues related to system protocols and architecture, such as transceiver signal processing techniques and resource management strategies. The main challenging task in the design of a wireless link is the provision of acceptable quality of service to fixed or mobile users regardless of the channel conditions, i.e. channel variations due either to multipath propagation or to pathloss. Moreover, in wireless networks different users share the same physical access medium; hence another challenging task is the implementation of effective physical layer protocols that allow simultaneous transmission and reception by different users as well as the maximization of per-user capacity.
In 1998, the Wideband Code Division Multiple Access (WCDMA) physical layer protocol was adopted for third-generation (3G) mobile networks, which offer higher data rates and multimedia communications compared to second-generation mobile networks. However, as the demand for ever higher data rates is constantly increasing, research on spectrum-efficient physical layer architectures continues to be important. In April 2009, the Orthogonal Frequency Division Multiple Access (OFDMA) physical layer protocol was proposed for the Long Term Evolution (LTE) of the currently operating 3G networks. Fourth-generation (4G) wireless networks will be in a position to provide, among other features, peak data rates of up to 300 Mb/s and a radio-network delay of less than 5 ms.
Traditional continuous-time receivers face issues of high area and power requirements and low selectivity, as outlined in Section 23.1, when used in integrated software defined radios. Low selectivity puts increased demands on the analog-to-digital converters used in software defined radio receivers.
Organized into six sections, this chapter is devoted to programmable, sampling-based radio receivers. Early sampling combined with discrete-time filters can reduce the power required for analog-to-digital conversion by increasing selectivity while maintaining easy programmability and reduced power consumption. The required theory to understand such sampling filters is outlined in Section 23.2. In each of Sections 23.3 and 23.4, we look at the advantages and disadvantages, associated challenges, and the state of the art of programmable zero intermediate frequency and low intermediate frequency discrete-time receivers, respectively. As an exercise, we review a case study of an integrated AM/FM super-heterodyne receiver in Section 23.5. Finally, in Section 23.6, we present a summary and conclusions.
In wireless communications, it is often desirable to transmit a signal as efficiently as possible to achieve low power dissipation. At the same time, it is necessary to keep signal distortion small. Unfortunately, these two desirable features contradict each other. For example, it is well known that the radio frequency (RF) power amplifier (PA) in wireless transmitters is inherently nonlinear, and when operated near saturation it causes inter-modulation products that interfere with adjacent channels. To reduce distortion, a common method is to "back off" the output power of the PA so that signal peaks do not exceed the saturation point of the amplifier, permitting distortion-free transmission. This back-off has not been a big concern in narrowband systems, e.g. GSM/EDGE, where the modulation exhibits a modest peak-to-average power ratio (PAPR) of a few dB and the resulting efficiency of around 30 percent is deemed acceptable. However, back-off can dramatically degrade PA efficiency in wideband systems because, for example, 3GPP LTE (Long Term Evolution) signals often exhibit PAPRs in excess of 10 dB. Backing off an amplifier to operate within its linear region with these waveforms forces highly inefficient operation, with typical efficiencies of less than 10%, which is not acceptable in practical situations.
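The efficiency penalty of back-off can be quantified for an ideal class-A stage, whose drain efficiency peaks at 50% and falls in proportion to output power. The sketch below reproduces the rough figures quoted above; the 2 dB value used for a narrowband modulation is an illustrative assumption.

```python
def class_a_efficiency(backoff_db):
    """Drain efficiency of an ideal class-A PA backed off from its peak:
    50% at full output, falling by the linear back-off factor 10**(-backoff_db/10)."""
    return 0.5 * 10.0 ** (-backoff_db / 10.0)

# A modest ~2 dB PAPR (narrowband modulation): ~32% efficiency
eta_narrow = class_a_efficiency(2.0)
# A 10 dB PAPR (LTE-like OFDM waveform): only 5% efficiency
eta_wide = class_a_efficiency(10.0)
```

Every extra decibel of required back-off thus costs roughly 21% of the remaining efficiency, which is why high-PAPR waveforms make linearization so attractive.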
Digital predistortion (DPD) uses digital signal processing techniques to compensate for the nonlinear distortion in the RF PA, thereby allowing it to be operated at higher drive levels for higher efficiency [1]. The attraction of this approach is that the nonlinear PA can be linearized by a standalone add-on digital block, freeing vendors from the burden and complexity of manufacturing complex analog/RF circuits. Digital predistortion has become one of the most popular and feasible linearization techniques in modern wireless communication systems.
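The core idea can be shown with a toy memoryless example: if the PA compresses by a cubic term, a predistorter that pre-expands by the opposite cubic term (a first-order inverse) largely cancels the distortion. The PA model and its coefficient are invented for illustration and are not a model from this chapter.

```python
import numpy as np

def pa(x):
    """Toy memoryless PA: mild third-order compression (illustrative coefficient)."""
    return x - 0.1 * x * np.abs(x) ** 2

def dpd(x):
    """First-order inverse predistorter: pre-expand by the opposite cubic term."""
    return x + 0.1 * x * np.abs(x) ** 2

x = np.linspace(0.0, 0.8, 100)          # drive levels below saturation
err_raw = np.max(np.abs(pa(x) - x))      # distortion without DPD
err_dpd = np.max(np.abs(pa(dpd(x)) - x)) # distortion with DPD
```

For this toy model the residual error with DPD is several times smaller than without it; a practical predistorter would instead estimate the inverse from measured PA data and include memory terms.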
Digital conversion is a fundamental part of many digital radio systems: it includes up-conversion of the discrete baseband signal stream into a high-resolution radio signal at the transmitter, and down-conversion of a high-resolution radio signal back into a baseband signal at the receiver. In this chapter, we cover the basics of digital conversion (analog to digital and digital to analog) and the functionality of the digital up-converter (DUC) and digital down-converter (DDC) in relation to conversion between intermediate frequency (IF) and baseband, with emphasis on the implementation of the DDC and DUC for standard wireless communication systems.
This chapter is organized into six sections. The first section introduces the digital transceiver and its related basic processing blocks. Section 12.2 presents the multi-rate, multi-stage, and filter-bank design for a digital front-end. Section 12.2.1 describes I/Q (in-phase/quadrature) modulation/demodulation and NCO (numerically controlled oscillator) design for the DDC and DUC. Section 12.2.2 presents sample rate conversion design in the DDC; multi-rate, multi-stage filtering and filter bank implementation are covered there. Section 12.2.3 discusses sample rate conversion in the DUC.
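To fix ideas before those sections, here is a minimal down-converter sketch in Python: an NCO mixes an IF signal to complex baseband, a crude moving-average low-pass suppresses out-of-band content, and the result is decimated. The boxcar filter and the tone-test parameters are deliberate simplifications; a real DDC would use proper CIC/FIR stages.

```python
import numpy as np

def ddc(x, fs, f_if, decim):
    """Minimal digital down-converter: NCO mixing, a crude boxcar
    low-pass over the decimation factor, then decimation by `decim`."""
    n = np.arange(len(x))
    nco = np.exp(-2j * np.pi * f_if * n / fs)  # numerically controlled oscillator
    bb = x * nco                               # mix IF down to complex baseband
    taps = np.ones(decim) / decim              # illustrative moving-average low-pass
    lp = np.convolve(bb, taps, mode='same')
    return lp[::decim]

# Usage: a 10 kHz tone sampled at 1 MHz, down-converted with f_if = 10 kHz,
# should land at DC (mean value ~0.5 for a unit-amplitude cosine)
fs, f_if = 1e6, 10e3
n = np.arange(4096)
x = np.cos(2 * np.pi * f_if * n / fs)
y = ddc(x, fs, f_if, decim=8)
```

The DUC performs the dual operations in reverse order: interpolation, image-rejection filtering, then NCO mixing up to the IF.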
Software means programmable. Hence software defined radio means that the radio should now be programmable. We know what computer programming means, and we agree, up to a certain level, on how it should be done. But do we know what programming a radio means? Several questions are still open: what will an SDR platform look like in ten years? Will there exist software radio code? What will be the technical challenges and commercial issues behind this code?
Programming is more precise than configuring or tuning: it implies a much greater level of freedom for the programmer. But it also means much cheaper implementations in many cases, and in particular re-use of the same hardware for different protocols (i.e. with different programs). This is, in our view, the main difficulty of software radio programming: reconfiguration, and in particular dynamic reconfiguration. Dynamic (i.e. very fast) reconfiguration is now mandatory because some protocols, 3GPP-LTE (Third Generation Partnership Project Long Term Evolution) for instance, adapt to the channel on each frame, requiring the channel estimation parameters to be set in a few milliseconds.
Nowadays, one common objective across all electrical engineering research areas is to reduce energy consumption by enhancing power efficiency. It is well known that the power amplifier (PA) is one of the most power-hungry devices in radiocommunications. Therefore, to amplify non-constant-envelope modulated signals, the use of linear class-A PAs operating at high power back-off levels to guarantee the desired linearity is no longer a desirable solution, since it results in power inefficiency. In a classical Cartesian I-Q transmitter with a static supply, the PA has to linearly amplify a carrier signal that is both phase and amplitude modulated and usually shows high peak-to-average power ratios (PAPRs), which implies that linear amplification requires extremely inefficient class-A or class-AB PAs. Power amplifier system-level linearizers, such as digital predistortion (DPD), extend the linear range of power amplifiers and, properly combined with crest factor reduction (CFR) techniques [1], enable PAs to be driven harder into compression (and thus more efficiently) while meeting linearity requirements.
Thanks to the intensive processing capabilities offered by ever-faster digital signal processors, some power supply control architectures with great potential for high-efficiency operation have been revived. The PA drain supply is modulated using techniques such as envelope elimination and restoration (EE&R) [2] and envelope tracking (ET) [3], [4] in conjunction with DPD. The use of linearizers, and more precisely DPD, therefore becomes an essential solution to mitigate the nonlinear distortion effects arising from the use of more efficient but highly nonlinear PAs (class D, E, and F switched PAs) in both Cartesian and polar transmitter architectures.
Contemporary wireless communications have expanded to provide a multitude of services ranging from mobile telephony and satellite navigation to Internet access and image or video transfer. These functions have been made possible by digital signal processing (DSP) techniques implemented in dedicated hardware. In addition to the primary functions, various digital techniques must also be employed to ensure the integrity and reliability of communications systems, thereby increasing the demands on system throughput. To enable ever higher data rates, both wider channel bandwidths and advanced DSP techniques such as Orthogonal Frequency-Division Multiplexing (OFDM) and multi-bit quadrature amplitude modulation (QAM) are used. These solutions place more demands on the signal-to-noise ratio (SNR) and peak-to-average power ratio (PAPR) of the signals and, in effect, on the dynamic range (DR) and linearity of the analog front-end of a transceiver.
The requirements on the interface between the digital and analog front-end parts are just as important. In fact, the quality of the A/D and D/A data conversion largely determines the performance of the received and transmitted signals. The demands placed on the data converters in wireless transceivers are exacerbated by multimode operation, and are pushed to an extreme in the software defined radio (SDR), where the converters should be placed close to the antenna [1].