In this chapter, we shall describe coding systems, primarily for images, that use the principles and algorithms explained in previous chapters. A complete coding system uses a conjunction of compression algorithms, entropy coding methods, source transformations, statistical estimation, and ingenuity to achieve the best result for the stated objective. The obvious objective is compression efficiency, stated as the smallest rate for a given distortion in lossy coding, or the smallest rate or compressed file size in lossless coding. However, other attributes may be even more important for a particular scenario. For example, in medical diagnosis, decoding time may be the primary concern. For mobile devices, small memory and low power consumption are essential. For broadcasting over packet networks, scalability in bit rate and/or resolution may take precedence. Usually, to obtain these other attributes, some compression efficiency may need to be sacrificed. Of course, one tries to obtain as much efficiency as possible for the given set of attributes wanted for the system. Therefore, in our description of systems, we shall also explain how to achieve other attributes besides compression efficiency.
Wavelet transform coding systems
The wavelet transform consists of coefficients grouped into subbands belonging to different resolutions or scales with octave frequency separation. As such, it is a natural platform for producing streams of code bits (hereinafter called codestreams) that can be decoded at multiple resolutions.
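To make the octave subband structure concrete, the following is a minimal sketch (our own illustration, not code from the text) of a multi-level one-dimensional Haar decomposition; the function name and the averaging/differencing convention are our assumptions:

```python
def haar_decompose(signal, levels):
    """Return subbands [approx, detail_coarsest, ..., detail_finest].

    Each level halves the resolution: the low-pass (average) output feeds
    the next level, while the high-pass (difference) output is kept as the
    detail subband for that scale -- the octave separation described above.
    """
    approx = list(signal)
    details = []
    for _ in range(levels):
        lo, hi = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            lo.append((a + b) / 2)   # low-pass: next coarser resolution
            hi.append((a - b) / 2)   # high-pass: detail at this scale
        details.append(hi)
        approx = lo
    return [approx] + details[::-1]

subbands = haar_decompose([4, 6, 10, 12, 8, 8, 2, 0], levels=2)
# Decoding only the coarsest subbands reconstructs a half- or quarter-
# resolution signal, which is why such transforms naturally support
# multi-resolution codestreams.
```

A decoder that stops after the coarse subbands obtains a lower-resolution reconstruction without touching the remaining code bits, which is the property exploited by resolution-scalable codestreams.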
Source coding began with the initial development of information theory by Shannon in 1948 [1] and continues to this day to be influenced and stimulated by advances in this theory. Information theory sets the framework and the language, motivates the methods of coding, provides the means to analyze the methods, and establishes the ultimate bounds in performance for all methods. No study of image coding is complete without a basic knowledge and understanding of the underlying concepts in information theory.
In this chapter, we shall present several methods of lossless coding of data sources, beginning with the motivating principles and bounds on performance based on information theory. This chapter is not meant to be a primer on information theory, so theorems and propositions will be presented without proof. The reader is referred to one of the many excellent textbooks on information theory, such as Gallager [2] and Cover and Thomas [3], for a deeper treatment with proofs. The purpose here is to set the foundation, present lossless coding methods, and assess their performance with respect to the theoretical optimum when possible. Hopefully, the reader will derive from this chapter both a knowledge of coding methods and an appreciation and understanding of the underlying information theory.
The notation in this chapter will indicate a scalar source on a one-dimensional field, i.e., the source values are scalars and their locations are on a one-dimensional grid, such as a regular time or space sequence.
In previous chapters, we described mathematical transformations that produce nearly uncorrelated elements and pack most of the source energy into a small number of these elements. Distributing the code bits properly among these transform elements, which differ statistically, leads to coding gains. Several methods for optimal rate distribution were explained in Chapter 8. These methods relied on knowledge of the distortion versus rate characteristics of the quantizers of the transform elements. Using a common shape model for this characteristic together with the squared-error distortion criterion meant that only the variance distribution of the transform elements needed to be known. This variance distribution determines the number of bits used to represent each transform element at the encoder and, at the decoder, enables parsing of the codestream and association of decoded quantizer levels with reconstruction values. The decoder receives the variance distribution as overhead information. Many different methods have arisen to minimize this overhead and to encode the elements with their designated numbers of bits. In this chapter, we shall describe some of these methods.
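The classical variance-based allocation can be sketched as follows. Under the common-shape, squared-error assumptions just described, each element receives the average rate plus half the log-ratio of its variance to the geometric mean of all variances; this is the standard textbook rule, shown here as an illustration (negative allocations are not clipped in this sketch):

```python
import math

def allocate_bits(variances, avg_bits):
    """Log-variance bit-allocation rule: b_i = avg + 0.5*log2(var_i / geo_mean).

    Elements with larger variance receive more bits; the allocations
    always sum to len(variances) * avg_bits.
    """
    n = len(variances)
    geo_mean = math.exp(sum(math.log(v) for v in variances) / n)
    return [avg_bits + 0.5 * math.log2(v / geo_mean) for v in variances]

# Four transform elements with variances 16, 4, 1, 1 and an average
# budget of 2 bits per element:
bits = allocate_bits([16.0, 4.0, 1.0, 1.0], avg_bits=2.0)
```

With these numbers the high-variance element receives 3.25 bits and the two low-variance elements 1.25 bits each, illustrating why the decoder needs the variance distribution to parse the codestream.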
Application of source transformations
A transform coding method is characterized by a mathematical transformation or transform of the samples from the source prior to encoding. We described the most common of these transforms in Chapter 7. The stream of source samples is first divided into subblocks that are normally transformed and encoded independently.
The storage requirements of samples of data depend on their number of possible values, called alphabet size. Real-valued data theoretically require an unlimited number of bits per sample to store with perfect precision, because their alphabet size is infinite. However, there is some level of noise in every practical measurement of continuous quantities, which means that only some digits in the measured value have actual physical sense. Therefore, they are stored with imperfect, but usually adequate precision using 32 or 64 bits. Only integer-valued data samples can be stored with perfect precision when they have a finite alphabet, as is the case for image data. Therefore, we limit our considerations here to integer data.
Natural representation of integers in a dataset requires a number of bits per sample no less than the base 2 logarithm of the number of possible integer values. For example, the usual monochrome image has integer values from 0 to 255, so we use 8 bits to store every sample. Suppose, however, that we can find a group of samples whose values do not exceed 15. Then every sample in that group needs at most 4 bits to specify its value, which is a saving of at least 4 bits per sample. We of course need location information for the samples in the group. If the location information in bits is less than four times the number of such samples, then we have achieved compression.
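The break-even arithmetic in this paragraph can be written out directly. The following sketch (our own helper, with illustrative numbers) computes the net saving from coding a group of small-valued samples with fewer bits, after paying for the location information:

```python
def bits_saved(group_size, full_bits=8, reduced_bits=4, location_bits=0):
    """Net bits saved by coding a group of small-valued samples with
    reduced precision, minus the cost of identifying their locations.
    A positive result means compression; a negative one means expansion."""
    return group_size * (full_bits - reduced_bits) - location_bits

# 100 samples with values <= 15 need only 4 bits each instead of 8.
# If describing their locations costs 350 bits, the net saving is
# 100 * 4 - 350 = 50 bits, so compression is achieved.
saving = bits_saved(100, location_bits=350)
```

If the location description instead cost more than 400 bits (four times the number of samples in the group), the scheme would expand the data, matching the condition stated above.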
This unified treatment of game theory focuses on finding state-of-the-art solutions to issues surrounding the next generation of wireless and communications networks. Future networks will rely on autonomous and distributed architectures to improve the efficiency and flexibility of mobile applications, and game theory provides the ideal framework for designing efficient and robust distributed algorithms. This book enables readers to develop a solid understanding of game theory, its applications and its use as an effective tool for addressing wireless communication and networking problems. The key results and tools of game theory are covered, as are various real-world technologies including 3G networks, wireless LANs, sensor networks, dynamic spectrum access and cognitive networks. The book also covers a wide range of techniques for modeling, designing and analysing communication networks using game theory, as well as state-of-the-art distributed design techniques. This is an ideal resource for communications engineers, researchers, and graduate and undergraduate students.
The differential games framework extends static non-cooperative continuous-kernel game theory into dynamic environments by adopting the tools, methods, and models of optimal-control theory. Optimal-control theory [58] has been developed to obtain the optimal solutions to planning problems that involve dynamic systems, where the state evolves over time under the influence of a control input (which is the instrument variable that is designed). Differential games can be viewed as extensions of optimal-control problems in two directions: (i) the evolution of the state is controlled not by one input but by multiple inputs, with each under the control of a different player, and (ii) the objective function is no longer a single one, with each player now having a possibly different objective function (payoff or cost), defined over time intervals of interest and relevance to the problem. The relative position of differential games in this landscape is captured in Table 5.1 [58]. Two main approaches that yield solutions to optimal-control problems are dynamic programming (introduced by Bellman) and the maximum principle (introduced by Pontryagin) [58]. The former leads to an optimal control that is a function of the state and time (closed-loop feedback control), whereas the latter leads to one that is a function only of the time and the initial state (open-loop control). These two approaches have also been adopted in differential games, where the common solution concepts of a differential game are again the Nash equilibrium and the Stackelberg equilibrium, for non-hierarchical and hierarchical structures, respectively.
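The setup described above can be stated compactly. In standard optimal-control notation (our notation, following common conventions rather than the text's), an N-player differential game couples one state equation, driven by all players' controls, with one objective functional per player:

```latex
% State dynamics controlled jointly by all N players, from initial state x_0:
\begin{align}
  \dot{x}(t) &= f\bigl(t, x(t), u_1(t), \dots, u_N(t)\bigr), \qquad x(t_0) = x_0, \\
% Player i's cost (or payoff) over the interval of interest:
  J_i &= q_i\bigl(x(t_f)\bigr)
        + \int_{t_0}^{t_f} g_i\bigl(t, x(t), u_1(t), \dots, u_N(t)\bigr)\, dt,
  \qquad i = 1, \dots, N.
\end{align}
```

With N = 1 this reduces to an optimal-control problem. Closed-loop (feedback) strategies of the form u_i(t, x(t)) correspond to the dynamic-programming approach, while open-loop strategies u_i(t, x_0) correspond to the maximum principle, mirroring the two solution approaches listed above.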
Non-cooperative game theory is one of the most important branches of game theory, focusing on the study and analysis of competitive decision-making involving several players. It provides an analytical framework suited for characterizing the interactions and decision-making process involving several players with partially or totally conflicting interests over the outcome of a decision process which is affected by their actions. Examples of non-cooperative games are ubiquitous. In economics, firms operating in the same market compete over pricing strategies, market control, trading of goods, and the like. In wireless and communication networks, wireless nodes are involved in numerous non-cooperative scenarios such as allocation of resources, choices of frequencies or transmit power, packet forwarding, and interference management. Beyond economics and networking, non-cooperative game theory has made its impact over a broad range of disciplines such as biology, political science, sociology, and military tactics. In this chapter, we introduce non-cooperative game theory along with different types of games, while presenting underlying fundamental notions and key solution concepts.
Non-cooperative games: preliminaries
In this section, we introduce some preliminary concepts and terminology that pertain to non-cooperative game theory.
Introduction
A non-cooperative game involves a number of players having totally or partially conflicting interests in the outcome of a decision process. For example, consider a number of wireless nodes attempting to control their transmit power, given the interference generated by other nodes.
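The power-control interaction just mentioned can be sketched numerically. In the toy model below (all gains, noise, and target values are our own illustrative numbers), each node repeatedly best-responds by choosing the least power that meets a target SINR given the interference from the others, in the style of the well-known Foschini-Miljanic iteration; when the targets are jointly feasible, the iteration converges to the equilibrium power vector:

```python
def best_response_powers(gains, noise, target_sinr, iters=200):
    """Iterated best responses for a target-SINR power-control game.

    gains[i][j] is the channel gain from transmitter j to receiver i;
    each node i sets p_i = (target/g_ii) * (noise + interference)."""
    n = len(gains)
    p = [1.0] * n
    for _ in range(iters):
        p = [
            target_sinr / gains[i][i]
            * (noise + sum(gains[i][j] * p[j] for j in range(n) if j != i))
            for i in range(n)
        ]
    return p

# Two interfering links: direct gains 1.0, cross gains 0.1,
# unit noise power, and a target SINR of 2 for both nodes.
powers = best_response_powers([[1.0, 0.1], [0.1, 1.0]],
                              noise=1.0, target_sinr=2.0)
```

At the fixed point each node transmits with power 2.5 and exactly meets its SINR target; no node can reduce its power unilaterally without falling below the target, which is the equilibrium idea formalized later in the chapter.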
In the past two decades, cellular communication has witnessed a significant growth. Today, millions of mobile users utilize cellular phones worldwide. In essence, a cellular network is designed to provide a large number of users with access to wireless services over a large area. The basic architecture of a cellular network relies on dividing a large area (e.g., a city) into smaller areas, commonly referred to as cells. Each cell typically represents the coverage area of a single base station that is often located at the center of the cell. By dividing the network into cells, one can ensure a reliable coverage and access to wireless services. Despite the emergence of ad hoc networks with no infrastructure, the cellular architecture remains prevalent in the majority of existing or soon-to-be deployed wireless networks, because of its proven success. In fact, cellular communication has been the pillar architecture in key wireless systems, from traditional 2G systems such as GSM to 3G systems such as UMTS, and the emerging 4G and 5G systems. Beyond traditional macrocell-based networks (e.g., 3G and 4G networks), the use of small cells, covered by low-cost, low-power stations known as femtocell access points that can be overlaid with existing architectures, has recently become of central importance in the design of next-generation wireless networks. Thus, cellular technology is expected to remain as one of the most important paradigms in future wireless communication systems. In Chapter 2, we provided a comprehensive introduction to cellular communication, its key challenges, as well as its past and projected future evolution.
Cognitive radio [201] is a new paradigm for designing wireless communication systems which aims at enhancing the utilization of the radio-frequency spectrum. The motivation for cognitive radio arises from the scarcity of the available frequency spectrum. Emerging wireless applications will increase spectrum demand from mobile users. However, most of the available radio spectrum has been allocated to existing wireless systems, and only small portions of the radio spectrum can be licensed to new wireless applications. Nonetheless, a study in [143] by the spectrum policy task force (SPTF) of the Federal Communications Commission (FCC) shows that there are also many frequency bands which are only partly occupied or largely unoccupied. For example, spectrum bands allocated to cellular networks in the USA [325] reach their highest utilization during working hours, while they remain unoccupied from midnight until early morning.
The major factor that leads to inefficient use of radio spectrum is the spectrum licensing scheme itself. Under traditional spectrum allocation, based on a command-and-control model, radio spectrum allocated to licensed users (i.e., primary users) cannot be utilized by unlicensed users (i.e., secondary users) or applications, even when it sits idle [87]. Because of this static and inflexible allocation, legacy wireless systems operate only on a dedicated spectrum band, and cannot adapt their transmission band to the changing environment.
Game theory can be viewed as a branch of applied mathematics as well as of applied sciences. It has been used in the social sciences, most notably in economics, but has also penetrated into a variety of other disciplines such as political science, biology, computer science, philosophy, and, recently, wireless and communication networks. Even though game theory is a relatively young discipline, the ideas underlying it have appeared in various forms throughout history and in numerous sources, including the Bible, the Talmud, the works of Descartes and Sun Tzu, and the writings of Charles Darwin, and in the 1802 work Considérations sur la Théorie Mathématique du Jeu of André-Marie Ampère, who was influenced by the 1777 Essai d'Arithmétique Morale of Georges Louis Buffon. Nonetheless, the main basis of modern-day game theory can be considered an outgrowth of three seminal works:
Augustin Cournot's Mathematical Principles of the Theory of Wealth in 1838, which gives an intuitive explanation of what would, over a century later, be formalized as the celebrated Nash equilibrium solution to non-cooperative games. Furthermore, Cournot's work provides an evolutionary or dynamic notion of the idea of a “best response,” i.e., situations in which a player chooses the best action given the actions of other players, this being so for all players.
A wireless network is a telecommunications network whose interconnections between nodes are implemented without the use of wires. Wireless networks have experienced unprecedented growth over the last few decades, and are expected to continue to evolve in the future. Seamless mobility and coverage ensure that various types of wireless connections can be made anytime, anywhere. In this chapter, we introduce some basic types of wireless networks and provide the reader with the necessary background on state-of-the-art developments.
Wireless networks use electromagnetic waves, such as radio waves, for carrying information. Therefore, their performance is greatly affected by the randomly fluctuating wireless channels. To develop an understanding of channels, in Section 2.1 we will study the radio frequency band first, then the existing wireless channel models used for different network scenarios, and finally the interference channel.
There exist several wireless standards. We describe them in order of coverage area, starting with cellular wireless networks. In Section 2.2.1 we provide an overview of the key elements and technologies of the third-generation wireless cellular network standards. In particular, we discuss WCDMA, CDMA2000, TD-SCDMA, and 4G and beyond. WiMAX, based on the IEEE 802.16 standard for wireless metropolitan area networks, is discussed in Section 2.2.2. A wireless local area network (WLAN) is a network in which a mobile user can connect to a local area network through a wireless connection.
The game models discussed in the preceding chapters were all built on the governing assumption that the players all have complete information on the elements of the game, particularly on the action spaces of all players and the players' payoff (or cost) functions, and that this is all common information to all players. However, in many situations, especially in a competitive environment, the a priori information available to a player may not be publicly available to other players. In particular, a player may not have complete information on his opponents' possible actions, strategies, and payoffs. For example, in the latter case, a player may not know the resulting payoff value for another player when all players have picked specific actions (or strategies). One way of addressing situations that arise as a result of such incompleteness of information is to formulate them as Bayesian games—the topic of this chapter. We first introduce this class of games in general terms, and then discuss applications in wireless communications and networking.
Overview of Bayesian games
Simple example
Let us consider the example of a game between two car companies responding to the possibility of a Clean Air Act, such as the one in 1990. If the Act is passed, then both car companies will be faced with the task of redesigning their cars, which will be costly. In order to prevent this from happening, they decide to start a lobbying campaign against the Act.
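The Bayesian element enters when one company is uncertain about the other's characteristics. In the toy sketch below (all payoff numbers and type labels are hypothetical, chosen only to illustrate the mechanics), company 1 does not know whether lobbying is cheap or costly for company 2, so it evaluates each of its actions by averaging payoffs over its prior belief about company 2's type:

```python
def expected_payoff(action, opponent_strategy, prior, payoff):
    """Company 1's expected payoff for `action`, averaging over the
    opponent's types: payoff[(action, opp_action, opp_type)]."""
    return sum(
        prob * payoff[(action, opponent_strategy[t], t)]
        for t, prob in prior.items()
    )

# Hypothetical payoffs to company 1 (lobbying is costly, but failing to
# lobby while the opponent does is worse):
payoff = {
    ("lobby", "lobby", "low_cost"):  -2, ("lobby", "out", "low_cost"):  3,
    ("lobby", "lobby", "high_cost"): -2, ("lobby", "out", "high_cost"): 3,
    ("out",   "lobby", "low_cost"):  -4, ("out",   "out", "low_cost"):  0,
    ("out",   "lobby", "high_cost"): -4, ("out",   "out", "high_cost"): 0,
}
# Belief: company 2 is a low-cost lobbyist with probability 0.6, in which
# case it lobbies; a high-cost type stays out.
strategy = {"low_cost": "lobby", "high_cost": "out"}
prior = {"low_cost": 0.6, "high_cost": 0.4}

ep_lobby = expected_payoff("lobby", strategy, prior, payoff)
ep_out = expected_payoff("out", strategy, prior, payoff)
```

With these numbers, lobbying yields a higher expected payoff than staying out, so it is company 1's best response to its belief; this expectation-over-types reasoning is exactly what distinguishes a Bayesian game from the complete-information games of the preceding chapters.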
Evolutionary-game theory has been developed as a mathematical framework to study the interaction among rational biological agents in a population [152]. In evolutionary-game theory, the agent adapts (i.e., evolves) the chosen strategy based on its fitness (i.e., payoff). In this way, both static and dynamic behavior (e.g., equilibrium) of the game can be analyzed.
Evolutionary-game theory has the following advantages over the traditional non-cooperative game theory we have studied in the previous chapters:
As we have seen, the Nash equilibrium is the most common solution concept for non-cooperative games. An N-tuple of strategies in an N-player game is said to be in Nash equilibrium if no player can improve his payoff by moving to another strategy, given that the other players stay with their Nash-equilibrium strategies. Specifically, the strategy of a player at Nash equilibrium is the best response to the strategies of the other players, again at Nash equilibrium. However, the Nash equilibrium is not necessarily efficient, since all players could potentially benefit from behaving collectively. Also, there could be multiple Nash equilibria in a game, and if the players are restricted to pure strategies, a Nash equilibrium may not exist. In such cases, the solution of the evolutionary game (i.e., evolutionarily stable strategies (ESS), or the evolutionary equilibrium) can serve as a refinement of the Nash equilibrium, especially when multiple Nash equilibria exist.
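The refinement idea can be illustrated with the classic Hawk-Dove game (our example, not one from the text). With resource value V = 2 and fight cost C = 4, the evolutionarily stable strategy is a hawk fraction of V/C = 0.5, and replicator dynamics drive the population there from almost any starting point:

```python
# Payoff matrix for Hawk-Dove with V = 2, C = 4:
# row player's payoff against a (Hawk, Dove) opponent.
V, C = 2.0, 4.0
A = [[(V - C) / 2, V],      # Hawk vs (Hawk, Dove)
     [0.0,         V / 2]]  # Dove vs (Hawk, Dove)

def replicate(x, steps=2000, dt=0.01):
    """Evolve the hawk population share x under replicator dynamics:
    strategies with above-average fitness grow in the population."""
    for _ in range(steps):
        f_hawk = A[0][0] * x + A[0][1] * (1 - x)
        f_dove = A[1][0] * x + A[1][1] * (1 - x)
        avg = x * f_hawk + (1 - x) * f_dove
        x += dt * x * (f_hawk - avg)  # Euler step of the replicator equation
    return x

# Populations starting hawk-light or hawk-heavy both converge to the
# ESS share V/C = 0.5.
x_low = replicate(0.1)
x_high = replicate(0.9)
```

The interior rest point x = V/C is the ESS: a population playing it cannot be invaded by a small fraction of mutants playing either pure strategy, which is the stability property that refines the (here mixed) Nash equilibrium.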