While non-cooperative game theory studies the strategic choices resulting from the interactions among competing players, cooperative game theory provides analytical tools to study the behavior of rational players when they cooperate. In this context, in a cooperative-game scenario, the players are allowed to form agreements among themselves that can impact the strategic choices of these players as well as their utilities. Cooperative games encompass two main branches: bargaining theory and coalitional games. The former describes the bargaining process between a set of players that need to agree on the terms of cooperation, while the latter describes the formation of cooperating groups of players, referred to as coalitions, that can strengthen the players' positions in a game. In this chapter, we examine the key characteristics, properties, and solution concepts of both branches of cooperative games as well as sample applications within wireless and communication networks.
Bargaining theory
Introduction
In economics, many problems involve a number of entities that are interested in reaching an agreement over a trade or the sharing of a resource but have a conflicting interest on how to reach this agreement and on the terms of the agreement. In this context, a bargaining situation is defined as a situation in which two (or more) players can mutually benefit from reaching a certain agreement but have conflicting interests on the terms of the agreement.
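As a concrete illustration of a bargaining situation, the sketch below computes the classical two-player Nash bargaining solution for a simple resource-sharing problem in which the players split a unit resource; the disagreement payoffs and the linear utility frontier are illustrative assumptions, not values taken from this chapter.

```python
import numpy as np

# Minimal sketch: two players split a unit resource, so u1 + u2 = 1.
# Disagreement point d = (d1, d2) is what each player gets if bargaining fails.
d1, d2 = 0.2, 0.1

# Nash bargaining solution: maximize the Nash product (u1 - d1) * (u2 - d2)
# over the feasible frontier, subject to u_i >= d_i.
u1 = np.linspace(d1, 1.0 - d2, 1001)
u2 = 1.0 - u1
nash_product = (u1 - d1) * (u2 - d2)

best = np.argmax(nash_product)
print(f"u1* = {u1[best]:.3f}, u2* = {u2[best]:.3f}")
# Closed form for a linear frontier: each player receives its disagreement payoff
# plus half of the surplus (1 - d1 - d2), i.e. u1* = 0.55 and u2* = 0.45 here.
```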
Recent advances in wireless communication have made possible the large-scale deployment of wireless networks consisting of small, low-cost nodes with simple processing and networking capabilities. To reach the desired destination, such as a data sink, transmissions over multiple hops are necessary. As a result, the optimization of routing is a critical problem that involves many aspects such as link quality, energy efficiency, and security. Moreover, the nodes may not be willing to fully cooperate. For example, from a node's perspective, forwarding arriving packets consumes its limited battery power, so it may not be in the node's interest to forward all arriving packets; refusing to forward, however, adversely affects network connectivity. Hence, it is crucial to design a distributed-control mechanism that encourages cooperation among participating multi-hop nodes.
This chapter studies game-theoretic approaches to routing in multi-hop networks. We first introduce important models and examples of routing games. We provide two detailed examples, a repeated-routing game and a hierarchical-routing game for enforcing cooperation. Finally, we list other approaches from the literature.
Routing-game basics
A network is given by a graph G = (V, E), with vertex set V and edge set E; the edges may be either directed or undirected. A set {(s1, d1), …, (sK, dK)} of source–destination vertex pairs is given, which we also call commodities. Each player is identified with one commodity. Different players can originate from different source vertices and pass information to different destination vertices.
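The following minimal Python sketch shows one way to represent such a routing game: the graph is stored as an adjacency list, each commodity is a (source, destination) pair assigned to one player, and a breadth-first search computes a hop-minimal path for each player as a non-strategic baseline. The topology and vertex names are purely hypothetical.

```python
from collections import deque

# Illustrative directed graph G = (V, E) as an adjacency list,
# and one commodity (source, destination) per player.
graph = {
    's1': ['a', 'b'],
    's2': ['b'],
    'a':  ['d1'],
    'b':  ['d1', 'd2'],
    'd1': [],
    'd2': [],
}
commodities = [('s1', 'd1'), ('s2', 'd2')]  # player k routes commodity k

def shortest_path(graph, src, dst):
    """Breadth-first search: returns one hop-minimal path from src to dst."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in graph[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None

for k, (s, d) in enumerate(commodities, 1):
    print(f"player {k}: {shortest_path(graph, s, d)}")
```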
IEEE 802.11 wireless local area networks (WLANs) have been widely deployed in many places for both residential and commercial use. The IEEE 802.11 standard supports two major configurations: the point coordination function (PCF) and the distributed coordination function (DCF). With PCF, transmission in the network is coordinated by a central node (i.e., an access point). Client nodes listen to the channel and wait for the signal from the access point; once permission is granted by the access point, a client node can start data transmission. With DCF, on the other hand, the nodes employ carrier-sense multiple access with collision avoidance (CSMA/CA) as the MAC protocol. Each node can transmit independently, based on the availability of the channel. In particular, with CSMA/CA, a node listens for the channel status. If the channel is busy, the node defers its transmission by waiting for a backoff period. If a node senses that the channel is idle, it waits for a certain period of time and then starts transmission. In this case, multiple nodes can start transmissions at the same time, which results in a collision. The colliding nodes wait for the backoff period and then sense the channel again. To avoid the performance degradation arising from packet collisions, the backoff period can be adjusted, according to a specific rule, on the basis of the congestion level in the network (e.g., the rate of packet collisions).
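To make the backoff mechanism concrete, the following toy Python simulation models slotted contention with binary exponential backoff: each node draws a random backoff counter, ties produce a collision, and colliding nodes double their contention windows. This is a simplified abstraction of DCF, not the full 802.11 state machine, and the window sizes and node count are illustrative.

```python
import random

CW_MIN, CW_MAX = 16, 1024   # illustrative contention-window limits

def contend(num_nodes, cw):
    """One contention round: returns (winner or None, chosen backoffs)."""
    backoffs = [random.randrange(cw[i]) for i in range(num_nodes)]
    earliest = min(backoffs)
    contenders = [i for i, b in enumerate(backoffs) if b == earliest]
    if len(contenders) == 1:
        cw[contenders[0]] = CW_MIN          # successful node resets its window
        return contenders[0], backoffs
    for i in contenders:                    # collision: colliding nodes double
        cw[i] = min(2 * cw[i], CW_MAX)      # their contention windows
    return None, backoffs

random.seed(1)
cw = [CW_MIN] * 5
for slot in range(10):
    winner, backoffs = contend(5, cw)
    outcome = f"node {winner} transmits" if winner is not None else "collision"
    print(f"round {slot}: backoffs={backoffs} -> {outcome}, cw={cw}")
```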
With the recent advances in telecommunications technologies, wireless networking has become ubiquitous because of the great demand created by pervasive mobile applications. The convergence of computing, communications, and media will allow users to communicate with each other and access any content at any time and at any place. Future wireless networks are envisioned to support various services such as high-speed access, telecommuting, interactive media, video conferencing, real-time Internet games, e-business ecosystems, smart homes, automated highways, and disaster relief. Yet many technical challenges remain to be addressed in order to make this wireless vision a reality. A critical issue is devising distributed and dynamic algorithms for ensuring a robust network operation in time-varying and heterogeneous environments. Therefore, in order to support tomorrow's wireless services, it is essential to develop efficient mechanisms that provide an optimal cost-resource-performance tradeoff and that constitute the basis for next-generation ubiquitous and autonomic wireless networks.
Game theory is a formal framework with a set of mathematical tools to study the complex interactions among interdependent rational players. For more than half a century, game theory has led to revolutionary changes in economics, and it has found a number of important applications in politics, sociology, psychology, communication, control, computing, and transportation, to list only a few. During the past decade, there has been a surge in research activities that employ game theory to model and analyze modern communication systems.
Cooperative communication has attracted significant recent attention as a transmission strategy for future wireless networks. It efficiently takes advantage of the broadcast nature of wireless networks to allow network nodes to share their messages and transmit cooperatively as a virtual antenna array, thus providing diversity that can significantly improve system performance. Cooperative communication can be applied in a variety of wireless systems and networks. In the research community, a considerable amount of work has been done in this area for networks such as cellular, WiFi, ad hoc/sensor networks, and ultra wideband (UWB). These ideas are also working their way into standards; e.g., the IEEE 802.16 (WiMAX) standards body for future broadband wireless access has established the 802.16j relay task group to incorporate cooperative relaying mechanisms into this technology. Most existing work on cooperative communication concentrates on the physical (PHY) and medium access control (MAC) layers of wireless networks, examining issues such as capacity improvement, power control, and relay selection. The impact on the higher layers, such as routing in the network layer, has not been fully investigated yet.
The merits of cooperative transmission at the physical layer have been well explored. However, the impact of cooperative transmission on the design of the higher layers is not well understood yet. Specifically, the issues for various layers are:
Physical layer. Objectives include optimizing the capacity region, minimizing the bit error rate (BER), and improving the link quality by power control.
Communication networks such as the Internet are becoming more and more dependent on the interactions of intelligent devices that are capable of operating autonomously within a highly dynamic and rapidly changing environment. The dynamism and complexity of Internet networks are a consequence of their size, heterogeneity, traffic diversity, and decentralized nature. Next-generation communication networks such as the future Internet are envisioned to be self-organizing, self-configuring, self-protecting, and self-optimizing. The applications and services that make use of these networks will also grow in complexity and impose stringent constraints and demands on network design: increased quality-of-service requirements for routing data, content distribution based on peer-to-peer networks, advanced pricing and congestion-control mechanisms, etc.
While these challenges were initially perceived with the emergence of the Internet, they are now central to the design of every current and future network. To efficiently analyze and study such Internet-like networks, there is a need for a rich analytical framework such as game theory, whose models and algorithms can capture the numerous challenges arising in current and emerging communication networks. The challenges in designing Internet networks differ from those of their wireless counterparts in several aspects. In general, one does not need to worry about the reliability of the communication channel as in the wireless case. But because Internet networks are generally composed of heterogeneous nodes having different capabilities and communicating over long paths and routes, factors such as the network size and the heterogeneity of the nodes' capabilities play a more critical role in the design of Internet networks than in the wireless case.
Auction theory is an applied branch of game theory that deals with how people act in auction markets, and it studies the game-theoretic properties of auction markets. There are many possible designs (or sets of rules) for an auction, and typical issues studied by auction theorists include the efficiency of a given auction design, optimal and equilibrium bidding strategies, and revenue comparison. Auction theory is also used as a tool to inform the design of real-world auctions, most notably auctions for the privatization of public-sector companies or the sale of licenses for use of the electromagnetic spectrum.
Mechanism design is a subfield of game theory studying solution concepts for a class of private-information games. The distinguishing features of these games are as follows. First, a game “designer” chooses the game structure rather than inheriting one; thus, mechanism design is often called “reverse game theory.” Second, the designer is interested in the game's outcome. Such a game is called a “game of mechanism design” and is usually solved by motivating players to disclose their private information. The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson “for having laid the foundations of mechanism design theory.”
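As a simple concrete example of a mechanism, the sketch below implements a sealed-bid second-price (Vickrey) auction, the textbook case in which reporting one's true valuation is a dominant strategy; the bidder names and valuations are arbitrary illustrative values.

```python
# Sealed-bid second-price (Vickrey) auction: the highest bidder wins but
# pays the second-highest bid, which makes truthful bidding a dominant strategy.

def second_price_auction(bids):
    """bids: dict bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1]          # winner pays the second-highest bid
    return winner, price

valuations = {'A': 10.0, 'B': 7.0, 'C': 4.0}   # illustrative true valuations

# Truthful bidding: A wins and pays 7, so A's utility is 10 - 7 = 3.
print(second_price_auction(valuations))

# If A shades its bid below B's (e.g., bids 6), it loses and gets utility 0,
# which illustrates why misreporting cannot help under this mechanism.
print(second_price_auction({'A': 6.0, 'B': 7.0, 'C': 4.0}))
```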
Covering everything from signal processing algorithms to integrated circuit design, this complete guide to digital front-end is invaluable for professional engineers and researchers in the fields of signal processing, wireless communication and circuit design. Showing how theory is translated into practical technology, it covers all the relevant standards and gives readers the ideal design methodology to manage a rapidly increasing range of applications. Step-by-step information for designing practical systems is provided, with a systematic presentation of theory, principles, algorithms, standards and implementation. Design trade-offs are also included, as are practical implementation examples from real-world systems. A broad range of topics is covered, including digital pre-distortion (DPD), digital up-conversion (DUC), digital down-conversion (DDC) and DC-offset calibration. Other important areas discussed are peak-to-average power ratio (PAPR) reduction, crest factor reduction (CFR), pulse-shaping, image rejection, digital mixing, delay/gain/imbalance compensation, error correction, noise-shaping, numerical controlled oscillator (NCO) and various diversity methods.
An introduction to the theory and techniques for achieving high quality network communication with the best possible bandwidth economy, this book focuses on network information flow with fidelity. Covering both lossless and lossy source reconstruction, it is illustrated throughout with real-world applications, including sensor networks and multimedia communications. Practical algorithms are presented, developing novel techniques for tackling design problems in joint network-source coding via collaborative multiple description coding, progressive coding, diversity routing and network coding. With systematic introductions to the basic theories of distributed source coding, network coding and multiple description coding, this is an ideal self-contained resource for researchers and students in information theory and network theory.
This complete guide to physical-layer security presents the theoretical foundations, practical implementation, challenges and benefits of a groundbreaking new model for secure communication. Using a bottom-up approach from the link level all the way to end-to-end architectures, it provides essential practical tools that enable graduate students, industry professionals and researchers to build more secure systems by exploiting the noise inherent to communications channels. The book begins with a self-contained explanation of the information-theoretic limits of secure communications at the physical layer. It then goes on to develop practical coding schemes, building on the theoretical insights and enabling readers to understand the challenges and opportunities related to the design of physical layer security schemes. Finally, applications to multi-user communications and network coding are also included.
Digital modulation techniques can be largely divided into two categories. One is single-carrier modulation, which utilizes a single radio frequency (RF) carrier to transmit data. The other is multi-carrier modulation, which utilizes multiple simultaneously modulated RF carriers in order to combat inter-symbol interference (ISI) while increasing communication bandwidth. This chapter focuses on a particular type of multi-carrier modulation known as Orthogonal Frequency Division Multiplexing (OFDM). The idea of OFDM [1] was proposed in the 1960s, followed a few years later by the Discrete Fourier Transform (DFT) based implementation algorithm [2]. OFDM then became practical and has been adopted in a number of applications such as asymmetric digital subscriber line (ADSL), wireless local area networks (WLAN), and digital TV broadcasting (DTV). It has also become a strong candidate for fourth-generation cellular land mobile radio systems.
It is well known that OFDM modulation and demodulation can be implemented with the IDFT and DFT. In an actual implementation, however, not only these transforms but also several error-compensation mechanisms are indispensable, because the orthogonality between the parallel transmitted subcarrier signals is easily destroyed by synchronization errors such as RF frequency error, sampling-clock rate error, and FFT window position shift. When OFDM is applied to mobile communication, Doppler-induced RF errors readily degrade reception performance. The main contents of this chapter are therefore two fundamental techniques for realizing high-performance OFDM communication systems: diversity technologies, and synchronization-error detection and compensation methods. Real hardware implementation examples and measured data are then summarized.
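A minimal sketch of the DFT-based implementation is given below, assuming an ideal channel: QPSK symbols are mapped onto the subcarriers with an IFFT, a cyclic prefix is added, and the receiver recovers the symbols with an FFT. The parameters (64 subcarriers, cyclic-prefix length 16) are illustrative, and the synchronization and compensation issues discussed above are deliberately left out.

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                     # illustrative subcarrier count and CP length

# QPSK symbols on all N subcarriers
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT + cyclic prefix
time_signal = np.fft.ifft(symbols)
tx = np.concatenate([time_signal[-CP:], time_signal])

# Ideal channel; a real receiver would also need the synchronization and
# error-compensation steps discussed in this chapter.
rx = tx

# Receiver: drop the cyclic prefix, FFT back to the subcarrier domain
recovered = np.fft.fft(rx[CP:])
print(np.allclose(recovered, symbols))   # True for an ideal channel
```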
The principle of oversampling the analog-to-digital converter (ADC) with negative feedback was invented decades ago and is still being developed further by scientists all over the world. Today’s state-of-the-art converters have come a long way from the first ADCs employing the ΣΔ principle in the 1960s. A broad selection of ΣΔ-oriented publications has appeared in the literature since the 1960s, and the early development phases have also been documented in a comprehensive manner, e.g., in [2], [4], and [52]. On the basis of [2], and following the outline of [35], this chapter aims to present the theory and technology of advanced quadrature sigma-delta modulator designs for the A/D interface. The chapter is organized into six sections. In the rest of the first section, we outline the basics of sigma-delta modulation. Section 14.2 is devoted to extending the discussion to some further modulator concepts, and selected advanced quadrature structures are presented in Section 14.3. Related implementation nonidealities are discussed in Section 14.4. Section 14.5 gives simulation examples of the advanced structures introduced in Section 14.3, taking circuit nonidealities into account as well. Section 14.6 presents the related conclusions.
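To fix ideas before the advanced structures, the following sketch simulates a basic first-order, single-bit sigma-delta modulator in discrete time and recovers the input with a crude moving-average decimation filter; the oversampling ratio and input signal are illustrative assumptions, not a design from this chapter.

```python
import numpy as np

def sigma_delta_1st_order(x):
    """First-order, single-bit sigma-delta modulator (discrete-time model):
    the input minus the fed-back 1-bit output drives an integrator,
    and a 1-bit quantizer closes the loop."""
    y = np.zeros_like(x)
    integrator = 0.0
    for n, xn in enumerate(x):
        integrator += xn - (y[n - 1] if n > 0 else 0.0)
        y[n] = 1.0 if integrator >= 0 else -1.0
    return y

osr = 64                                          # illustrative oversampling ratio
n = np.arange(osr * 256)
x = 0.5 * np.sin(2 * np.pi * n / (osr * 32))      # slow input well inside the band
bits = sigma_delta_1st_order(x)

# A crude decimation filter (moving average) recovers the input from the bitstream.
recovered = np.convolve(bits, np.ones(osr) / osr, mode='same')
print(f"in-band error RMS: {np.sqrt(np.mean((recovered - x)**2)):.3f}")
```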
The origin of modern ΣΔ modulation lies in delta modulation and differential pulse-code modulation (PCM). Delta modulation was invented in the ITT laboratories in France in 1946, as was the classical version of PCM. The differential PCM system was patented in 1950 by Bell Telephone Laboratories.
The analog front-end of a direct-conversion transmitter suffers from several performance-degrading circuit implementation impairments. The main impairments are power amplifier (PA) nonlinear distortion, in-phase/quadrature-phase (I/Q) imbalance, and local oscillator (LO) leakage. Each of these impairments has been treated separately in the literature as well as on the pages of this book. For details on state-of-the-art PA predistortion, the reader is referred to the related chapters in Part II of this book and the references therein, and for a comprehensive review on I/Q imbalance compensation, to Chapter 16 of this book and the literature cited therein. It has been demonstrated that, when treated separately, each of the impairments can be mitigated by using digital predistortion. What is often overlooked, however, is that in direct-conversion transmitters these impairments interact in a manner that may severely cripple the overall transmitted signal quality. In addition to the obvious effects of I/Q imbalance and LO leakage (mirror-frequency interference (MFI) and spurious signal energy at the LO frequency, respectively), there are several other performance-degrading phenomena arising from their interaction with the nonlinearity that need addressing. First, I/Q imbalance and LO leakage cause extra intermodulation distortion (IMD) products to appear at the PA output [6], [9]. Effectively this means that even with access to ideal PA predistorter (PD) coefficients, spectral regrowth will not be fully mitigated. Second, the extra IMD products at the PA output will interfere with the estimation of an adaptive PA PD [6], [9]. In other words, if the PA PD is trained with no regard for I/Q imbalance and LO leakage, the resulting PD will be biased, and thus the overall transmitted signal quality will be further compromised. Third, PA nonlinearity interferes with the estimation of the I/Q modulator (IQM) predistorter, yielding biased estimates. This makes it difficult to compensate for IQM impairments prior to PA PD estimation. These aspects will be discussed in more detail in Section 17.2.
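The following baseband-equivalent sketch illustrates the interaction qualitatively: a two-tone signal is passed through a simplified I/Q-imbalance and LO-leakage model and then a memoryless third-order PA model, producing mirror tones and extra intermodulation products in addition to the usual IMD3 terms. The models and all coefficient values are generic illustrative choices, not taken from this chapter or its references.

```python
import numpy as np

# Baseband-equivalent sketch: I/Q modulator model u = g1*x + g2*conj(x) + c
# (gain/phase imbalance and LO leakage) followed by a memoryless third-order
# PA model y = a1*u + a3*u*|u|^2. All values below are illustrative.
n = 4096
t = np.arange(n)
k1, k2 = 200, 280                         # tone bins (bin-aligned, no leakage)
f1, f2 = k1 / n, k2 / n
x = np.exp(2j*np.pi*f1*t) + np.exp(2j*np.pi*f2*t)   # two-tone test signal

g1, g2, c = 1.0, 0.05*np.exp(1j*0.3), 0.02          # imbalance and LO leakage
u = g1*x + g2*np.conj(x) + c                        # I/Q modulator output
a1, a3 = 1.0, -0.1
y = a1*u + a3*u*np.abs(u)**2                        # memoryless third-order PA

spectrum = np.abs(np.fft.fft(y)) / n
freqs = np.fft.fftfreq(n)

def power_db(f):
    return 20*np.log10(spectrum[np.argmin(np.abs(freqs - f))] + 1e-12)

# Besides the usual IMD3 tones at 2*f1 - f2 and 2*f2 - f1, the imbalance
# creates mirror tones (e.g., at -f1) and extra intermodulation around them.
for f in (f1, 2*f1 - f2, -f1, -(2*f1 - f2)):
    print(f"power at f = {f:+.4f}: {power_db(f):6.1f} dB")
```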
Nowadays the market continues to show real interest in the development of telecommunication networks based on radio-frequency (RF) systems. In particular, we can note a proliferation of RF standards such as WiFi, WiMAX, DVB-T, and the 3G and 4G standards, and a strong desire to merge these standards in a single terminal. Along with the existing standards, all new standards allow the operators to offer new and better services in terms of speed, quality, and availability. Consequently, in order to handle this important diversity of telecommunication techniques, there is a growing interest in developing new front-end architectures capable of processing several standards.
Of course, a first step is to enhance the digital resources of the terminal to offer Software Defined Radio (SDR) potential, so that the structure can easily be reconfigured and evolved. But along with this evolution towards high numerical capacities, terminals are also required to provide an RF part enabling multi-standard, multi-frequency, and often multi-antenna operation (referred to as multi-* terminals).
While the migration of RF circuits running at multi-gigahertz frequencies to low-cost deep-submicron CMOS processes has made single-chip wireless transceivers a reality, the proliferation of wireless standards has imposed the new challenge of integrating multiple wireless standards on the same hardware. One of the challenges in multi-standard radios is the need to support multiple frequency bands ranging from less than 1 GHz to more than 5 GHz. In this chapter, we will discuss the design of digital front-end (DFE) circuits that are applicable to at least two wireless standards: GSM/GPRS/EDGE (GGE) and WCDMA. We will also show how to extend the design to support LTE. The coverage is restricted to only three wireless standards primarily because of the amount of analysis that needs to be performed for each standard.
In the context of a receiver, we will use the term digital front-end (DFE) to describe the digital circuits that bridge the output of an A/D converter (ADC) to a digital base-band (DBB) processor. We will also restrict the discussion in this chapter to the implementation-friendly direct-conversion receiver (DCR) architecture. This assumes zero intermediate frequency (ZIF) for WCDMA and LTE signals but allows for a very low IF (VLIF) for GGE.
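As an illustration of the VLIF path, the sketch below shows a generic digital down-conversion stage: the digitized low-IF signal is mixed to baseband with an NCO, low-pass filtered with a windowed-sinc FIR, and decimated. The sample rate, IF, filter length, and decimation factor are hypothetical placeholder values, not taken from an actual GGE DFE design.

```python
import numpy as np

fs = 1.083e6            # illustrative ADC output rate
f_if = 100e3            # illustrative very-low IF
n = np.arange(8192)

# Pretend ADC output: a tone 20 kHz above the IF (i.e., 20 kHz baseband content)
adc_out = np.cos(2*np.pi*(f_if + 20e3)/fs*n)

# NCO mixing: shift the IF down to DC
mixed = adc_out * np.exp(-2j*np.pi*f_if/fs*n)

# Simple windowed-sinc low-pass (cutoff ~100 kHz) followed by decimation by 4
taps = 101
cutoff = 100e3 / fs
k = np.arange(taps) - (taps - 1) / 2
lpf = np.sinc(2*cutoff*k) * np.hamming(taps)
lpf /= lpf.sum()                              # unity DC gain
baseband = np.convolve(mixed, lpf, mode='same')[::4]

print(f"output rate: {fs/4/1e3:.1f} kSa/s, samples: {baseband.size}")
```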
The first international standard for terrestrial broadcasting of digital television was published by the Advanced Television Systems Committee (ATSC) in 1995. The standard is known as ATSC and was adopted by the Federal Communications Commission (FCC) in 1996. The main purpose of this standard was the transmission of High Definition Television (HDTV) for home consumption, that is, to deliver the experience of viewing full-motion pictures in fixed scenarios with large screens [15]. The modulation scheme selected in ATSC was 8-level Vestigial Sideband (8-VSB), which contrasts with the Coded Orthogonal Frequency Division Multiplexing (COFDM) scheme selected by the European Digital Video Broadcasting – Terrestrial (DVB-T) and Japanese Integrated Services Digital Broadcasting – Terrestrial (ISDB-T) standards. Since any terrestrial TV system has to overcome many channel impairments and interferences (ghosts, noise bursts, fading, etc.) to reach the home viewer, the selection of the RF modulation format is crucial.
The selection of 8-VSB modulation in the ATSC digital television system was motivated by several reasons. First, 8-VSB can cover larger distances with fewer repeaters than COFDM, which represents a considerable cost reduction for sparsely populated rural areas of North America. Direct Broadcast Satellite (DBS) television is not popular in North America, and therefore rural areas had to be covered by terrestrial television.