The implementation challenges in building compact, low-cost radios for future wireless systems are continuously growing. This is partially due to the introduction of multi-antenna transmission techniques, the use of wideband communication waveforms and high-order symbol alphabets, and the increasing demand for more efficient radio spectrum utilization through, e.g., carrier aggregation and scattered spectrum use. In general, several parallel radios with wide operating bandwidth and high performance, in terms of linearity and spurious-free dynamic range, must be implemented in a single device. To keep the overall implementation cost and size feasible, simplified radio architectures and lower-cost radio electronics are typically used. This in turn implies that various nonidealities in the deployed analog radio frequency (RF) modules, stemming from the unavoidable physical limitations of the electronics used, are expected to play a critical role in future radio devices.
Good examples of the above “dirty-RF” paradigm [1], [2] are oscillator phase noise, power amplifier (PA) nonlinearities, imperfections of the sampling and analog-to-digital (A/D) interface, in-phase/quadrature (I/Q) branch amplitude and phase mismatches, and nonlinearities of receiver small-signal components such as low-noise amplifiers (LNAs) and mixers. In this chapter, we focus on the behavioral modeling and digital signal processing (DSP) based mitigation of I/Q imbalances and the resulting mirror-frequency interference in direct-conversion radio transmitters and receivers. For generality, in most of the developments the I/Q imbalances are assumed to be frequency-dependent within the processed bandwidth, and this assumption is built into both the models and the mitigation algorithms. An extensive list of state-of-the-art literature is also provided.
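As a concrete illustration of such behavioral modeling, in the frequency-independent special case the imbalanced receiver baseband signal can be written as y = K1·x + K2·conj(x), where the conjugate term creates the mirror-frequency interference. The sketch below is illustrative only; the gain and phase mismatch values are arbitrary assumptions, not figures from the chapter.

```python
import numpy as np

def iq_imbalance(x, g=1.05, phi_deg=3.0):
    """Frequency-independent receiver I/Q imbalance model:
    y = K1*x + K2*conj(x). Gain mismatch g and phase mismatch
    phi_deg are arbitrary illustrative values."""
    phi = np.deg2rad(phi_deg)
    k1 = (1 + g * np.exp(-1j * phi)) / 2  # desired-signal coefficient
    k2 = (1 - g * np.exp(1j * phi)) / 2   # mirror-image coefficient
    return k1 * x + k2 * np.conj(x), k1, k2

# A complex tone at +f0 acquires a mirror image at -f0.
t = np.arange(1024) / 1024
y, k1, k2 = iq_imbalance(np.exp(2j * np.pi * 100 * t))

# Image-rejection ratio (IRR): power ratio of desired to mirror term.
print(f"IRR: {10 * np.log10(abs(k1)**2 / abs(k2)**2):.1f} dB")
```

In the frequency-dependent case treated in this chapter, K1 and K2 become filters rather than scalars, but the mirror-interference structure is the same.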
This chapter presents some of the algorithms and technology used in the Physical (PHY) layer entity of a flexible or cognitive radio. In the context of this chapter, cognitive radio refers to a wireless air interface that implements the cognitive cycle introduced by Mitola in his Ph.D. thesis [20]. This cycle is illustrated in Figure 25.1 and is composed of four main steps: the “Sense” step acquires relevant information from the radio environment; the “Analyse” and “Decide” steps embody the intelligence of the cycle, with “Analyse” interpreting the observations of the “Sense” step and “Decide” covering learning, planning, and decision making; finally, the “Act” step reconfigures the transceiver’s communication parameters, the transceiver being designed according to software-defined radio (SDR) principles so as to be as flexible as possible.
Mitola underlines the strong need to define a novel wireless air interface (i.e., Media Access Control (MAC) and PHY layers) based on cognitive radio. This air interface should support the new functionalities described in Figure 25.1, with the MAC layer (or cognitive manager) carrying out the “Analyse” and “Decide” steps and the PHY layer carrying out the “Sense” and “Act” steps.
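Structurally, the cycle maps onto a simple control loop in which the PHY layer supplies “Sense” and “Act” and the cognitive manager supplies “Analyse” and “Decide”. The sketch below is a loose illustration of that partition; all class and method names are our own assumptions, not an API from the chapter.

```python
# Minimal sketch of the cognitive cycle as a control loop; class and
# method names are illustrative assumptions, not a standardized API.

class Phy:
    def sense(self):
        # Acquire relevant information from the radio environment,
        # e.g. an estimate of band occupancy (dummy value here).
        return {"band_occupied": False}

    def act(self, decision):
        # Reconfigure the SDR transceiver's communication parameters.
        print("PHY reconfigured:", decision)

class CognitiveManager:
    def analyse(self, observation):
        # Interpret the observation produced by the "Sense" step.
        return "busy" if observation["band_occupied"] else "idle"

    def decide(self, context):
        # Learning, planning, and decision making (trivial rule here).
        return {"transmit": context == "idle"}

phy, manager = Phy(), CognitiveManager()
for _ in range(3):  # Sense -> Analyse -> Decide -> Act
    phy.act(manager.decide(manager.analyse(phy.sense())))
```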
This chapter describes the integration of, and interface between, the digital front-end and the analog front-end, with a focus on ASICs for wireless terminals. This area has been highly dynamic over the past decade, progressing rapidly from a mostly discrete implementation platform to a single system-on-chip (SoC) ASIC that integrates the RF blocks with the modem [1]. The pressure to deliver performance under size, cost, and power constraints has fueled creativity in system and architecture design and in implementation in submicron CMOS processes. Such development efforts have revolutionized the types and means of digital-to-analog interfaces, which in turn have redefined ASIC partitions and popularized today’s mixed-signal integration on a common substrate.
Traditionally, the digital front-end resides within the modem baseband processor ASIC, with the partition placed at an analog baseband I/Q interface to the RFIC transceiver. Such a direct-conversion transceiver architecture offers simplicity in analog design and provides analog channel filtering and the full dynamic gain range in the analog domain. The digital front-end can perform calibration functions in addition to pre-decoding and post-encoding functions. This device platform continues to see wide commercial application in 3G ASIC implementations, mobile WiMAX wireless terminal ASICs, and LTE datacard ASIC designs.
The radio frequency (RF) power amplifier (PA) is the most power consuming element in a wireless transmission system, and can account for more than 50 percent of the total power consumed by the transmitter [1]. Improving the PA efficiency saves energy and drives down the overall system costs. A study described in [2, p. 13] provides a compelling reason for PA linearization: application of PA linearization technologies can yield annual savings of millions of dollars for a typical network service provider.
Efficient PAs are usually nonlinear. Nonlinearity generates both in-band distortion and out-of-band interference, which manifest as transmitter error vector magnitude (EVM) degradation and spectral regrowth, respectively. In wireless communication systems, many signal formats, such as Code Division Multiple Access (CDMA) and Orthogonal Frequency Division Multiplexing (OFDM), have been introduced to improve spectrum efficiency and data rate. However, these non-constant-envelope signals are not power efficient in the presence of nonlinear PAs, as large back-offs are needed for linear transmission. In most commercial wireless communication systems, PAs remain the dominant source of signal quality degradation, since they are usually biased in a mildly nonlinear region to achieve a reasonable amount of efficiency.
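As a minimal numerical illustration (a toy memoryless polynomial PA model with coefficients we chose arbitrarily, not a measured amplifier), the sketch below drives a multicarrier-like signal through a third-order nonlinearity and estimates the resulting EVM; the same cubic term also spreads power into adjacent bands, i.e., spectral regrowth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy OFDM-like signal: 64 random QPSK subcarriers in a 4096-bin band.
n, n_sc = 4096, 64
sym = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
spec = np.zeros(n, complex)
spec[1:n_sc + 1] = sym
x = np.fft.ifft(spec)
x /= np.sqrt(np.mean(np.abs(x)**2))      # normalize to unit average power

def pa(x, a1=1.0, a3=-0.05):
    """Memoryless third-order polynomial PA model (assumed coefficients)."""
    return a1 * x + a3 * x * np.abs(x)**2

y = pa(x)

# EVM after least-squares complex gain normalization.
g = np.vdot(x, y) / np.vdot(x, x)
evm = np.linalg.norm(y - g * x) / np.linalg.norm(g * x)
print(f"EVM: {100 * evm:.2f}%")
```

Backing off the input (scaling x down) shrinks the cubic term faster than the linear term, which is exactly the linearity-versus-efficiency trade-off described above.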
We begin with a brief overview of some of the fundamental concepts and mathematical tools of information theory. This allows us to establish notation and to set the stage for the results presented in subsequent chapters. For a comprehensive introduction to the fundamental concepts and methods of information theory, we refer the interested reader to the textbooks of Gallager [2], Cover and Thomas [3], Yeung [4], and Csiszár and Körner [5].
The rest of the chapter is organized as follows. Section 2.1 provides an overview of the basic mathematical tools and metrics that are relevant for subsequent chapters. Section 2.2 illustrates the fundamental proof techniques used in information theory by discussing the point-to-point communication problem and Shannon's coding theorems. Section 2.3 is entirely devoted to network information theory, with a special emphasis on distributed source coding and multi-user communications as they relate to information-theoretic security.
Mathematical tools of information theory
The following subsections describe a powerful set of metrics and tools that are useful for characterizing the fundamental limits of communication systems. All results are stated without proof, in a series of lemmas and theorems, and we refer the reader to standard textbooks [2, 3, 4] for details. Unless specified otherwise, all random variables and random vectors used throughout this book are real-valued.
Useful bounds
We start by recalling a few inequalities that are useful to bound the probabilities of rare events.
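Two prototypical examples of such inequalities (standard results, recalled here without proof) are Markov's inequality and Chebyshev's inequality. For a non-negative random variable $X$ and any $a > 0$, and for a random variable $Y$ with mean $\mu$ and variance $\sigma^2$,
\[
\mathbb{P}[X \geq a] \leq \frac{\mathbb{E}[X]}{a},
\qquad
\mathbb{P}\bigl[|Y - \mu| \geq a\bigr] \leq \frac{\sigma^2}{a^2}.
\]
Chebyshev's inequality follows from Markov's inequality applied to $(Y-\mu)^2$, and the same idea applied to $e^{sX}$ yields the Chernoff-type bounds that underpin many typicality arguments.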
At the time of their initial conception, most common network protocols, such as the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were not developed with security concerns in mind. When DARPA took the first steps towards the packet-switched network that gave birth to the modern Internet, engineering efforts were targeted at the challenges of guaranteeing reliable communication of information packets across multiple stations from the source to the final destination. The reasons for this are not difficult to identify: the deployed devices were under the control of a few selected institutions, networking and computing technology was not readily available to potential attackers, electronic commerce was a distant goal, and the existing trust among the few users of the primitive network was sufficient to allow all attention to be focused on getting a fully functional computer network up and running.
A few decades later, with the exponential growth in number of users, devices, and connections, issues such as network access, authentication, integrity, and confidentiality became paramount for ensuring that the Internet and, more recently, broadband wireless networks could offer services that are secure and ultimately trusted by users of all ages and professions. By then, however, the layered architecture, in which the fundamental problems of transmission, medium access, routing, reliability, and congestion control are dealt with separately at different layers, was already ingrained in the available network devices and operating systems.
This book is the result of more than five years of intensive research in collaboration with a large number of people. Since the beginning, our goal has been to understand at a deeper level how information-theoretic security ideas can help build more secure networks and communication systems. Back in 2008, the plan was to finish the manuscript within one year, which for some reason seemed a fairly reasonable proposition at the time. Needless to say, we were thoroughly mistaken. The pace at which physical-layer security topics have found their way into the main journals and conferences in communications and information theory is simply staggering. In fact, there is now a vibrant scientific community uncovering the benefits of looking at the physical layer from a security point of view and producing new results every day. Writing a book on physical-layer security thus felt like shooting at not one but multiple moving targets.
To preserve our sanity we decided to go back to basics and focus on how to bridge the gap between theory and practice. It did not take long to realize that the book would have to appeal simultaneously to information theorists, cryptographers, and network-security specialists. More precisely, the material could and should provide a common ground for fruitful interactions between those who speak the language of security and those who for a very long time focused mostly on the challenges of communicating over noisy channels.
In this chapter, we develop the notion of secrecy capacity, which plays a central role in physical-layer security. The secrecy capacity characterizes the fundamental limit of secure communications over noisy channels, and it is essentially the counterpart to the usual point-to-point channel capacity when communications are subject not only to reliability constraints but also to an information-theoretic secrecy requirement. It is inherently associated with a channel model called the wiretap channel, which is a broadcast channel in which one of the receivers is treated as an adversary. This adversarial receiver, which we call the eavesdropper to emphasize its passiveness, should remain ignorant of the messages transmitted over the channel. The mathematical tools, and especially the random-coding argument, presented in this chapter are the basis for most of the theoretical research in physical-layer security, and we use them extensively in subsequent chapters.
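For instance, for a degraded wiretap channel with input $X$, legitimate receiver observation $Y$, and eavesdropper observation $Z$, the secrecy capacity admits the single-letter expression (a classical result due to Wyner, anticipated here and established in Section 3.4):
\[
C_s = \max_{p_X} \bigl( I(X;Y) - I(X;Z) \bigr),
\]
so that, loosely speaking, the achievable secrecy rate is the portion of the mutual information on the legitimate link that exceeds what leaks to the eavesdropper.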
We start with a review of Shannon's model of secure communications (Section 3.1), and then we informally discuss the problem of secure communications over noisy channels (Section 3.2). The intuition developed from these informal arguments helps in grasping the concepts underlying the proofs of the secrecy capacity and motivates a discussion of the choice of an information-theoretic secrecy metric (Section 3.3). We then study in detail the fundamental limits of secure communication over degraded wiretap channels (Section 3.4) and broadcast channels with confidential messages (Section 3.5).
In all of the previous chapters, we discussed the possibility of secure transmissions at the physical layer for communication models involving only two legitimate parties and a single eavesdropper. These results generalize in part to situations with more complex communication schemes, additional legitimate parties, or additional eavesdroppers. Because of the increased complexity of these “multi-user” channel models, the results one can hope to obtain are, in general, not as precise as those obtained in earlier chapters. In particular, it is seldom possible to obtain a single-letter characterization of the secrecy capacity, and one must often resort to the calculation of upper and lower bounds. Nevertheless, the analysis of multi-user communication channels still provides useful insight into the design of secure communication schemes; in particular, it highlights several characteristics of secure communications, most notably the importance of cooperation, feedback, and interference. Although these aspects have been studied extensively in the context of reliable communications and are now reasonably well understood, they do not necessarily affect secure communications in the same way as they affect reliable communications. For instance, while it is well known that cooperation among transmitters is beneficial and improves reliability, the fact that interference can also be helpful for secrecy is perhaps counter-intuitive.
There are numerous variations of multi-user channel models with secrecy constraints; rather than enumerating them all, we study the problem of secure communication over a two-way Gaussian wiretap channel.
Many of the applications of classical coding techniques can be found at the physical layer of contemporary communication systems. However, coding ideas have recently found their way into networking research, most strikingly in the form of algebraic codes for networks. The existing body of work on network coding ranges from determinations of the fundamental limits of communication networks to the development of efficient, robust, and secure network-coding protocols. This chapter provides an overview of the field of network coding with particular emphasis on how the unique characteristics of network codes can be exploited to achieve high levels of security with manageable complexity. We survey network-coding vulnerabilities and attacks, and compare them with those of state-of-the-art routing algorithms. Some emphasis will be placed on active attacks, which can lead to severe degradation of network-coded information flows. Then, we show how to leverage the intrinsic properties of network coding for information security and secret-key distribution, in particular how to exploit the fact that nodes observe algebraic combinations of packets instead of the data packets themselves. Although the prevalent design methodology for network protocols views security as something of an add-on to be included after the main communication tasks have been addressed, we shall contend that the special characteristics of network coding warrant a more comprehensive approach, namely one that gives equal importance to security concerns. The commonalities with code constructions for physical-layer security will be highlighted and further investigated.
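As a toy illustration of that last fact (our own minimal example over GF(2); practical network codes typically operate over larger finite fields), an intermediate node can forward the XOR of two packets, so an observer of the coded packet alone sees only an algebraic combination rather than either payload:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two source packets; over GF(2), XOR is addition.
p1 = secrets.token_bytes(8)
p2 = secrets.token_bytes(8)

coded = xor(p1, p2)          # the bottleneck node forwards p1 + p2

# A sink that also receives p1 on a side path recovers p2:
assert xor(coded, p1) == p2
# An observer of `coded` alone sees only the combination; if p1 is
# uniform and unknown to it, `coded` is statistically independent of
# p2 (a one-time-pad argument), hinting at the security benefits.
print("decoded OK")
```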
This chapter extends the results obtained in Chapter 3 and Chapter 4 for discrete memoryless channels and sources to Gaussian channels and wireless channels, for which numerical applications provide insight beyond that of the general formula in Theorem 3.3. Gaussian channels are of particular importance, not only because the secrecy capacity admits a simple, intuitive, and easily computable expression but also because they provide a reasonable approximation of the physical layer encountered in many practical systems. The analysis of Gaussian channels also lays the foundations for the study of wireless channels.
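Concretely, for the Gaussian wiretap channel with average power constraint $P$ and noise variances $N_B$ and $N_E$ at the legitimate receiver and the eavesdropper, respectively, the secrecy capacity is given by the classical result of Leung-Yan-Cheong and Hellman (stated here without proof):
\[
C_s = \left[ \frac{1}{2}\log_2\!\left(1 + \frac{P}{N_B}\right) - \frac{1}{2}\log_2\!\left(1 + \frac{P}{N_E}\right) \right]^{+},
\]
which is positive if and only if the legitimate receiver enjoys the better channel ($N_B < N_E$).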
The application of physical-layer security paradigms to wireless channels is perhaps one of the most promising research directions in physical-layer security. While wireline systems offer some security because the transmission medium is confined, wireless systems are intrinsically susceptible to eavesdropping, since all transmissions are broadcast over the air and can be overheard by neighboring devices. Other users can be viewed as potential eavesdroppers if they are not the intended recipients of a message. However, as seen in earlier chapters, the randomness present at the physical layer can be harnessed to provide security, and randomness is a resource that abounds in the wireless medium. For instance, we show that fading can be exploited opportunistically to guarantee secrecy even if an eavesdropper obtains on average a higher signal-to-noise ratio than the legitimate receiver.
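A small Monte Carlo sketch makes that last claim concrete (the SNR values are arbitrary assumptions; averaging the positive part of the instantaneous secrecy rate corresponds to an achievable rate when the transmitter knows the instantaneous channel states). With Rayleigh fading, instantaneous SNRs are exponentially distributed, and the legitimate link occasionally beats the eavesdropper's even when it is 3 dB worse on average:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Average SNRs: the eavesdropper is 3 dB *better* on average (assumed).
avg_snr_b, avg_snr_e = 10.0, 20.0

# Rayleigh fading => exponentially distributed instantaneous SNRs.
snr_b = rng.exponential(avg_snr_b, n)
snr_e = rng.exponential(avg_snr_e, n)

# Instantaneous secrecy rate, transmitting only in favorable states.
inst = np.maximum(np.log2(1 + snr_b) - np.log2(1 + snr_e), 0.0)
print(f"Average secrecy rate: {inst.mean():.3f} bits/channel use")
```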
We start this chapter with a detailed study of Gaussian channels and sources, including multiple-input multiple-output channels (Section 5.1.2).
In this chapter, we discuss the construction of practical codes for secrecy. The design of codes for the wiretap channel turns out to be surprisingly difficult, and this area of information-theoretic security is still largely in its infancy. To some extent, the major obstacles in the road to secrecy capacity are similar to those that lay in the path to channel capacity: the random-coding arguments used to establish the secrecy capacity do not provide explicit code constructions. However, the design of wiretap codes is further impaired by the absence of a simple metric, such as a bit error rate, which could be evaluated numerically. Unlike codes designed for reliable communication, whose performance is eventually assessed by plotting a bit-error-rate curve, we cannot simulate an eavesdropper with unlimited computational power; hence, wiretap codes must possess enough structure to be provably secure. For certain channels, such as binary erasure wiretap channels, the information-theoretic secrecy constraint can be recast in terms of an algebraic property for a code-generator matrix. Most of the chapter focuses on such cases since this algebraic view of secrecy simplifies the analysis considerably.
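The standard vehicle for that algebraic view is coset coding: the secret message indexes a coset of a linear code, and the transmitted codeword is drawn uniformly at random within that coset. The toy sketch below (a length-2 code of our own choosing, not a construction from the chapter) conveys one secret bit with perfect secrecy against an eavesdropper who observes only one of the two transmitted bits:

```python
import secrets

def encode(m: int) -> tuple[int, int]:
    """Coset encoder for one secret bit. The message indexes the coset
    {x : x1 XOR x2 = m} of the repetition code {00, 11}; the codeword
    is chosen uniformly at random within the coset."""
    r = secrets.randbits(1)
    return (r, r ^ m)

def decode(x: tuple[int, int]) -> int:
    return x[0] ^ x[1]

m = 1
x = encode(m)
assert decode(x) == m
# An eavesdropper whose erasure channel reveals only ONE of the two
# bits sees a uniformly random bit, independent of m: secrecy here is
# achieved by code structure rather than by a shared key.
print("message recovered:", decode(x))
```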
As seen in Chapter 4, the design of secret-key distillation strategies is a somewhat easier problem insofar as reliability and security can be handled separately by means of information reconciliation and privacy amplification. Essentially, the construction of coding schemes for key agreement reduces to the design of Slepian–Wolf-like codes for information reconciliation, which can be done efficiently with low-density parity-check (LDPC) codes or turbo-codes.
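Privacy amplification, in turn, is commonly realized with 2-universal hash functions; a simple instance is multiplication of the reconciled bit string by a publicly chosen random binary matrix, which compresses it into a shorter key about which the eavesdropper's information is negligible (by the leftover hash lemma). The sketch below uses toy sizes of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)

n, k = 64, 16            # reconciled bits -> final key bits (toy sizes)
reconciled = rng.integers(0, 2, n)        # bits shared after reconciliation
seed_matrix = rng.integers(0, 2, (k, n))  # public random seed: a random
                                          # linear map, i.e. a 2-universal hash

key = (seed_matrix @ reconciled) % 2      # matrix-vector product over GF(2)
print("distilled key:", "".join(map(str, key)))
```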
A simple look at today's information and communication infrastructure is sufficient for one to appreciate the elegance of the layered networking architecture. As networks flourish worldwide, the fundamental problems of transmission, routing, resource allocation, end-to-end reliability, and congestion control are assigned to different layers of protocols, each with its own specific tools and network abstractions. However, the conceptual beauty of the layered protocol stack is not easily found when we turn our attention to the issue of network security. In the early days of the Internet, possibly because network access was very limited and tightly controlled, network security was not yet viewed as a primary concern for computer users and system administrators. This perception changed with the increase in network connections. Technical solutions, such as personnel access controls, password protection, and end-to-end encryption, were developed soon after. The steady growth in connectivity, fostered by the advent of electronic-commerce applications and the ubiquity of wireless communications, remains unhindered and has resulted in an unprecedented awareness of the importance of network security in all its guises.
The standard practice of adding authentication and encryption to the existing protocols at the various communication layers has led to what could be rightly classified as a patchwork of security mechanisms. Given that data security is so critically important, it is reasonable to argue that security measures should be implemented at all layers where this can be done in a cost-effective manner.