This chapter provides the motivation for the book and the necessary background knowledge. First, a brief introduction to competition and cooperation in wireless and social networks is given, along with examples and a literature review. Then, the limitations of traditional game theory in this area are presented. Finally, the three branches of modern game theory – indirect reciprocity, evolutionary games, and sequential decision-making – are briefly introduced to illustrate their strengths in overcoming the highlighted limitations.
Deal selection on Groupon represents a typical social learning and decision-making process, where the quality of a deal is usually unknown to the customers. The customers must acquire this knowledge through social learning from other social media, such as reviews on Yelp. Additionally, the quality of a deal depends on both the state of the vendor and the decisions of other customers on Groupon. How social learning and network externality affect the decisions of customers in deal selection on Groupon is the main focus of this chapter. We develop a data-driven game-theoretic framework to understand rational deal selection behaviors across social media. A sufficient condition for the Nash equilibrium is identified, and a value-iteration algorithm is utilized to find the optimal deal selection strategy. We utilize the Groupon–Yelp data set to analyze the deal selection game in a realistic setting. Finally, the performance of the social learning framework is evaluated using real data. The results suggest that customers make decisions in a rational way rather than following naive strategies, and that there is still room to improve their decisions with assistance from a game-theoretic framework.
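As a rough, self-contained illustration of the value-iteration step (not the chapter's actual Groupon–Yelp model), the Python sketch below solves a small hypothetical Markov decision process whose states could represent coarse vendor/deal conditions; every transition probability and reward is a made-up placeholder.

```python
import numpy as np

# Hypothetical MDP with 3 states (e.g., coarse vendor/deal conditions) and
# 2 actions (0 = skip the deal, 1 = buy).  P[a, s, s'] and R[s, a] below are
# made-up placeholders, not parameters estimated from the Groupon-Yelp data.
P = np.array([
    [[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]],   # transitions under "skip"
    [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]],   # transitions under "buy"
])
R = np.array([[0.0, 1.0],     # in state 0, buying is attractive
              [0.0, 0.4],
              [0.0, -0.5]])   # in state 2 (crowded deal), buying hurts
gamma = 0.9                   # discount factor

V = np.zeros(3)
for _ in range(1000):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-9:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal value per state:", np.round(V, 3))
print("optimal action per state (0=skip, 1=buy):", policy)
```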
How information diffuses over social networks has attracted much attention from both industry and academia. Most of the existing works in this area are based on machine learning methods focusing on social network structure analysis and empirical data mining. However, the network users’ decisions, actions, and socioeconomic interactions are generally ignored in most existing works. In this chapter, we discuss an evolutionary game-theoretic framework to model the dynamic information diffusion process in social networks. Specifically, we derive the information diffusion dynamics in complete networks and uniform-degree and nonuniform-degree networks. We find that the dynamics of information diffusion over these three kinds of networks are scale-free and identical to one another when the network scale is sufficiently large. To verify the theoretical analysis, we perform simulations of the information diffusion over synthetic networks and real-world Facebook networks. Moreover, we conduct an experiment on the Twitter hashtag data set, which shows that the game-theoretic model well fits and predicts information diffusion over real social networks.
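For intuition about payoff-driven diffusion dynamics of this kind, here is a minimal agent-based sketch, assuming a synthetic random network, a placeholder payoff matrix for the "forward"/"not forward" strategies, and an imitation-style strategy update; it is only a toy stand-in for the dynamics derived in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random network of N users (Erdos-Renyi-style adjacency matrix).
N, p_edge = 200, 0.05
A = rng.random((N, N)) < p_edge
A = np.triu(A, 1)
A = A | A.T

# Placeholder payoff matrix over the two strategies:
# Sf = forward the information (index 0), Sn = do not forward (index 1).
payoff = np.array([[0.8, 0.3],
                   [0.5, 0.2]])      # payoff[my_strategy, neighbor_strategy]

strategy = rng.integers(0, 2, size=N)    # random initial strategies
selection = 1.0                          # selection strength of the imitation rule

for step in range(20000):
    i = rng.integers(N)
    nbrs = np.flatnonzero(A[i])
    if nbrs.size == 0:
        continue
    # Fitness of each neighbor j = average payoff of j against j's own neighbors.
    fit = np.array([payoff[strategy[j], strategy[np.flatnonzero(A[j])]].mean()
                    for j in nbrs])
    # Imitation update: node i copies a neighbor chosen with probability
    # proportional to (1 + selection * fitness).
    w = 1.0 + selection * fit
    strategy[i] = strategy[rng.choice(nbrs, p=w / w.sum())]

print("final fraction of users adopting the 'forward' strategy:",
      round(float(np.mean(strategy == 0)), 3))
```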
In this chapter, we discuss error correcting codes. We review the idea of hard versus soft decisions as types of symbol estimates that we provide to the decoder. We introduce the concepts of parity bits – which provide redundancy – and coding rate. We develop a simple, if flawed, toy systematic linear block code and use this code to demonstrate the concepts of generator matrix, Hamming distance, and parity check matrix. By using the parity check matrix, we construct the syndrome and use this vector to perform error correction. To provide a set of viable systematic linear block codes, we introduce Hamming codes. We also introduce convolutional codes and relate the mathematical and shift-register block diagram forms. To enable decoding, we discuss the trellis diagram and use this diagram to motivate Viterbi decoding.
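A compact example of the systematic linear block code machinery described above, using the standard (7,4) Hamming code rather than the chapter's toy code: the generator matrix encodes four information bits, and the parity-check matrix yields a syndrome that locates and corrects a single bit error.

```python
import numpy as np

# Systematic (7,4) Hamming code: G = [I4 | P], H = [P^T | I3], arithmetic mod 2.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])           # generator matrix, 4 x 7
H = np.hstack([P.T, np.eye(3, dtype=int)])         # parity-check matrix, 3 x 7

# Syndrome table: the syndrome of a single-bit error in position i is column i of H.
syndrome_to_pos = {tuple(H[:, i]): i for i in range(7)}

msg = np.array([1, 0, 1, 1])                       # 4 information bits
codeword = msg @ G % 2                             # 7-bit codeword (4 data + 3 parity)

received = codeword.copy()
received[5] ^= 1                                   # flip one bit (hard-decision channel error)

syndrome = tuple(H @ received % 2)
if any(syndrome):                                  # nonzero syndrome -> correct one bit
    received[syndrome_to_pos[syndrome]] ^= 1

decoded = received[:4]                             # systematic code: data bits come first
print("decoded:", decoded, "matches message:", np.array_equal(decoded, msg))
```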
Distributed adaptive filtering has been considered to be an effective approach for data processing and estimation over distributed networks. Most existing algorithms focus on designing different information diffusion rules, regardless of the evolutionary characteristics of a distributed network. In this chapter, we study the adaptive network from the game-theoretic perspective and formulate the distributed adaptive filtering problem as a graphical evolutionary game. With this formulation, the nodes in the network are regarded as players, and the local combining of estimated information from different neighbors is regarded as a form of strategy selection among diverse options. We show that this graphical evolutionary game framework is very general and can unify the existing adaptive network algorithms. Based on this framework, as examples, two error-aware adaptive filtering algorithms are discussed. Moreover, we use graphical evolutionary game theory to analyze the information diffusion process over the adaptive networks and the evolutionarily stable strategy of the system. Finally, simulation results are shown to verify the effectiveness of the method discussed in this chapter.
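As a baseline reference (not the chapter's error-aware algorithms), the sketch below runs standard adapt-then-combine diffusion LMS over a small ring network with uniform combination weights; the network size, step size, and noise level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Network of N nodes estimating a common M-dimensional parameter w_true from
# noisy local measurements d_k(i) = x_k(i)^T w_true + noise.
N, M, mu, T = 10, 4, 0.02, 3000
w_true = rng.standard_normal(M)

# Ring topology: each node combines with itself and its two ring neighbors
# using uniform weights (error-aware rules would adapt these weights instead).
A = np.eye(N)
for k in range(N):
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1.0
A /= A.sum(axis=1, keepdims=True)       # row k holds node k's combination weights

W = np.zeros((N, M))                    # current estimates, one row per node
for i in range(T):
    X = rng.standard_normal((N, M))                   # local regressors
    d = X @ w_true + 0.1 * rng.standard_normal(N)     # noisy local observations

    # Adapt: local LMS step at every node.
    err = d - np.sum(X * W, axis=1)
    Psi = W + mu * err[:, None] * X

    # Combine: every node averages the intermediate estimates of its neighbors.
    W = A @ Psi

mse = np.mean((W - w_true) ** 2)
print("network-average estimation MSE after diffusion LMS:", float(mse))
```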
The effectiveness of a decision may be uncertain due to the unknown system state. This uncertainty can be eliminated through learning from information sources, such as user-generated content or revealed actions. Nevertheless, user-generated content could be untrustworthy, since other agents may maliciously create misleading content for their selfish interests. Passively revealed actions are potentially more trustworthy and also easier to gather through simple observation. In this chapter, we introduce a game-theoretic framework – the hidden Chinese restaurant game (H-CRG) – to utilize passively revealed actions in the social learning process. We design grand information extraction, a novel Bayesian belief extraction process, to extract beliefs on hidden information directly from observed actions. The optimal policy is then analyzed under both centralized and game-theoretic approaches. We demonstrate how the H-CRG can be applied to the channel access problem in cognitive radio networks. The simulation results show that the equilibrium strategy derived in the H-CRG provides greater expected utilities for new users and maintains reasonably high social welfare.
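The following sketch illustrates the general idea of extracting a belief on a hidden state from observed actions via Bayes' rule, assuming the conditional action probabilities are known and the actions are conditionally independent given the state; it is a simplified stand-in for grand information extraction, not the exact H-CRG construction.

```python
import numpy as np

# Hidden binary system state (e.g., channel "good"/"bad"); new users only see the
# actions of previous users, not the state itself.  For illustration we assume the
# conditional probability of each action given the state is known (e.g., implied
# by the equilibrium strategy); these numbers are placeholders.
#                         action: 0 = join   1 = stay out
P_action_given_good = np.array([0.8, 0.2])
P_action_given_bad  = np.array([0.3, 0.7])

prior_good = 0.5
observed_actions = [0, 0, 1, 0, 0, 1, 0]   # hypothetical log of previous users' actions

belief_good = prior_good
for a in observed_actions:
    # Bayes update of the belief that the hidden state is "good", treating the
    # observed actions as conditionally independent given the state.
    num = belief_good * P_action_given_good[a]
    den = num + (1.0 - belief_good) * P_action_given_bad[a]
    belief_good = num / den

print("posterior belief that the hidden state is good:", round(belief_good, 3))
```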
In this chapter, we discuss the concepts of baseband and passband representations of signals and the mechanisms for moving between these two forms by using up- and down-conversion. We describe multiple up- and down-conversion approaches, such as digital-only, direct, superheterodyne, and digital intermediate frequency (IF). We discuss the components used to move between baseband and passband representation: analog-to-digital and digital-to-analog converters (ADC and DAC, respectively), and frequency synthesizers.
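A minimal numerical sketch of quadrature up- and down-conversion, assuming arbitrary sample-rate, carrier, and tone frequencies and a generic FIR low-pass filter from SciPy; it is meant only to illustrate the baseband/passband relationship, not any particular architecture from the chapter.

```python
import numpy as np
from scipy.signal import firwin, lfilter

# Illustrative digital up- and down-conversion of a complex baseband tone.
# All rates and frequencies are arbitrary choices for the example.
fs, fc, fb = 1_000_000, 200_000, 10_000       # sample rate, carrier, baseband tone (Hz)
t = np.arange(4096) / fs
baseband = np.exp(2j * np.pi * fb * t)         # complex baseband signal (I + jQ)

# Up-conversion: real passband signal I*cos(2*pi*fc*t) - Q*sin(2*pi*fc*t).
passband = np.real(baseband * np.exp(2j * np.pi * fc * t))

# Down-conversion: mix with the conjugate carrier, then low-pass filter to
# reject the image at 2*fc; the factor of 2 restores the original amplitude.
mixed = passband * np.exp(-2j * np.pi * fc * t)
taps = firwin(101, cutoff=50_000, fs=fs)       # generic FIR low-pass, 50 kHz cutoff
recovered = 2 * lfilter(taps, 1.0, mixed)

# Compare against the original baseband after compensating the FIR group delay.
delay = (len(taps) - 1) // 2
err = np.abs(recovered[500 + delay:] - baseband[500:-delay]).max()
print("max reconstruction error after filter settling:", float(err))
```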
In many social computing systems, users decide sequentially whether to participate or not and, if they participate, whether to create a piece of content directly (i.e. answering) or to rate existing content contributed by previous users (i.e. voting). We present in this chapter a game-theoretic model that formulates the sequential decision-making of strategic users under the presence of this answering–voting externality. We prove theoretically the existence and uniqueness of a pure strategy equilibrium. We show that there exist advantages for users with higher abilities and for answering earlier. As a result, the equilibrium exhibits a threshold structure, and the threshold for answering gradually increases as answers accumulate. To demonstrate the validity of the game-theoretic model, we analyze user behavior data collected from Stack Overflow, a popular question-and-answer site, and show that the main qualitative predictions of the game-theoretic model match up with observations made from the data. Finally, we formulate the system designer’s problem and abstract several design principles that could potentially guide the design of incentive mechanisms for social computing systems in practice.
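The toy simulation below illustrates the kind of threshold structure described above under strongly simplified, made-up payoffs (an answering reward diluted by the number of existing answers, a fixed voting payoff); it is not the chapter's equilibrium analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sequential answering-vs-voting model (placeholder payoffs).  When n answers
# already exist, answering yields ability / (n + 1) - cost (the externality: more
# answers dilute the reward), while voting yields a fixed payoff v.  A rational
# user therefore answers only when ability exceeds a threshold that grows with n.
cost, v = 0.05, 0.1
n_answers = 0
thresholds = []

for user in range(200):
    ability = rng.random()                      # ability drawn uniformly in [0, 1]
    threshold = (v + cost) * (n_answers + 1)    # answer iff ability exceeds this
    thresholds.append(min(threshold, 1.0))
    if ability > threshold:
        n_answers += 1                          # user contributes a new answer
    # otherwise the user votes on existing content (or abstains)

print("answers contributed:", n_answers)
print("answering threshold seen by the 1st, 50th, and 200th user:",
      round(thresholds[0], 2), round(thresholds[49], 2), round(thresholds[199], 2))
```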
The basics of game theory, which are necessary for understanding the rest of the book, are provided in this chapter. Specifically, typical game components, solution concepts, and their applications are explained.
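As a concrete illustration of one basic solution concept, the sketch below enumerates the pure-strategy Nash equilibria of a two-player prisoner's dilemma by checking best responses; the payoff numbers are the usual textbook values, not taken from this book.

```python
import itertools
import numpy as np

# Pure-strategy Nash equilibria of a 2-player bimatrix game via best-response checks.
#                      opponent: Cooperate  Defect
payoff_row = np.array([[3, 0],
                       [5, 1]])     # row player's payoffs
payoff_col = payoff_row.T           # symmetric game: column player's payoffs

def is_nash(i, j):
    # (i, j) is a pure Nash equilibrium if neither player gains by deviating alone.
    row_ok = payoff_row[i, j] >= payoff_row[:, j].max()
    col_ok = payoff_col[i, j] >= payoff_col[i, :].max()
    return row_ok and col_ok

labels = ["Cooperate", "Defect"]
equilibria = [(labels[i], labels[j])
              for i, j in itertools.product(range(2), range(2)) if is_nash(i, j)]
print("pure-strategy Nash equilibria:", equilibria)   # -> [('Defect', 'Defect')]
```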
In this chapter, we specify some of the notation that is used throughout the text. Many of these concepts will be described in greater detail in Chapter 13. To provide a common notational framework, we review basic mathematical concepts. We review tools for complex numbers, vectors, and matrices, and the relationship between exponentials and logarithms. We specify notation for integration. We discuss the relationship between signal representations in terms of amplitude versus power and in terms of linear versus decibel units.
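For example, the linear/decibel conventions can be summarized in a few lines of Python (the specific ratios below are arbitrary):

```python
import numpy as np

# Linear vs. decibel notation: power ratios use 10*log10, amplitude (voltage)
# ratios use 20*log10 because power is proportional to amplitude squared.
def power_to_db(p_ratio):
    return 10 * np.log10(p_ratio)

def amplitude_to_db(a_ratio):
    return 20 * np.log10(a_ratio)

print(power_to_db(100))       # 100x power gain     -> 20.0 dB
print(amplitude_to_db(10))    # 10x amplitude gain  -> 20.0 dB (same 100x power)
print(power_to_db(0.5))       # halving the power   -> about -3.0 dB
```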
Users may have multiple concurrent options regarding different objects/resources, and their decisions usually negatively influence each other’s utility, which makes the sequential decision-making problem more challenging. In this chapter, we introduce an Indian buffet game to study how users in a dynamic system learn about the uncertain system state and make multiple concurrent decisions by considering not only their current myopic utility but also the influence of subsequent users’ decisions. We analyze the Indian buffet game under two different scenarios: one in which customers request multiple dishes without budget constraints and the other in which they face budget constraints. In both cases, we design recursive best-response algorithms to find the subgame-perfect Nash equilibrium for customers and characterize special properties of the Nash equilibrium profile in a homogeneous setting. Moreover, we introduce a non-Bayesian social learning algorithm by which customers can learn the system state, and we theoretically prove its convergence. Finally, we conduct simulations to validate the effectiveness and efficiency of the Indian buffet game.
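As a heavily simplified illustration only, the sketch below runs a myopic version of multi-dish selection with a placeholder congestion-style utility; the chapter's recursive best-response algorithm additionally anticipates subsequent customers and learns the unknown dish qualities, which this toy does not.

```python
import numpy as np

# Myopic, simplified Indian-buffet-style selection: customers arrive in sequence
# and may request several dishes; a dish of (believed) quality q shared by n
# customers gives each of them q / n, minus a fixed request cost.
quality = np.array([1.0, 0.8, 0.5, 0.3])    # believed dish qualities (placeholders)
cost = 0.15                                 # cost of requesting one dish
counts = np.zeros(len(quality), dtype=int)  # how many customers share each dish

for customer in range(10):
    # Expected payoff of joining dish d given the customers already sharing it.
    expected = quality / (counts + 1) - cost
    chosen = np.flatnonzero(expected > 0)   # request every dish with positive payoff
    counts[chosen] += 1
    print(f"customer {customer}: requests dishes {chosen.tolist()}, "
          f"counts now {counts.tolist()}")
```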
Data sharing is a critical step in implementing data fusion, and how to encourage sensors to share their data is an important issue. In this chapter, we discuss a reputation-based incentive framework where the data-sharing stimulation problem is modeled as an indirect reciprocity game. In this game, sensors choose how to report their results to the fusion center and gain reputation, based on which they can obtain certain benefits in the future. Taking the sensing and fusion accuracy into account, reputation distribution is introduced into the game, where we theoretically derive the Nash equilibrium of the game and prove its uniqueness. Furthermore, we apply the scheme to cooperative spectrum sensing. We show that, within an appropriate cost-to-gain ratio, the optimal strategy for the secondary users is to report when the average received energy is above a given threshold and to keep silent otherwise. Such an optimal strategy is also proved to be a desirable evolutionarily stable strategy.
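The threshold-type reporting rule mentioned above can be illustrated with a small Monte Carlo sketch of an energy detector; the sample size, SNR, and threshold below are arbitrary placeholders, and the reputation dynamics of the game are not modeled here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Threshold-based reporting: a sensor reports to the fusion center only when its
# average received energy exceeds a threshold (all parameters are illustrative).
n_samples, snr_linear, threshold = 50, 1.0, 1.5

def average_energy(primary_present):
    noise = rng.standard_normal(n_samples)
    signal = np.sqrt(snr_linear) * rng.standard_normal(n_samples) if primary_present else 0.0
    return np.mean((signal + noise) ** 2)     # average received energy

trials = 2000
present = rng.random(trials) < 0.5            # whether the primary user is active
energy = np.array([average_energy(p) for p in present])
report = energy > threshold                   # report iff energy exceeds the threshold

detection_rate = np.mean(report[present])
false_alarm_rate = np.mean(report[~present])
print("detection rate:", round(float(detection_rate), 3),
      "false-alarm rate:", round(float(false_alarm_rate), 3))
```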
Learn how to analyse and manage evolutionary and sequential user behaviours in modern networks, and how to optimize network performance by using indirect reciprocity, evolutionary games, and sequential decision making. Understand the latest theory without the need to go through the details of traditional game theory. With practical management tools to regulate user behaviour, and simulations and experiments with real data sets, this is an ideal tool for graduate students and researchers working in networking, communications, and signal processing.
This chapter covers other canonical applications of network tomography that have been studied in the literature but fall outside the scope of the previous chapters. These include the inference of network routing topology (network topology tomography) and the inference of traffic demands (traffic matrix or origin-destination tomography). It also covers miscellaneous techniques used in network tomography that are not covered in the previous chapters (e.g., network coding). The chapter then concludes the book with discussions on practical issues in the deployment of tomography-based monitoring systems and future directions in addressing these issues.
Additive network tomography, which addresses the inference of link/node performance metrics (e.g., delays) that are additive from the sum metrics on measurement paths, represents the most well-studied branch of network tomography, in which a rich body of seminal work has been produced. This chapter focuses on the case in which the metrics of interest are additive and constant, which allows the network tomography problem to be cast as a linear system inversion problem. After introducing the abstract definitions of link identifiability and network identifiability using linear algebraic conditions, the chapter presents a series of graph-theoretic conditions that establish the necessary and sufficient requirements to achieve identifiability in terms of the number of monitors, the locations of monitors, the connectivity of the network topology, and the routing mechanism. It also contains extended conditions that allow the evaluation of robust link identifiability under failures and partial link identifiability when the network-wide identifiability condition is not satisfied.
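A minimal numerical illustration of the linear-system view (with a small hypothetical topology, not one from the chapter): the routing matrix maps link metrics to path measurements, link identifiability reduces to a rank condition, and full column rank allows exact recovery.

```python
import numpy as np

# Additive network tomography as linear-system inversion: y = R x, where each row
# of the routing matrix R indicates which links a measurement path traverses and
# x holds the (constant, additive) per-link delays.  Topology and values below
# form a small hypothetical example.
R = np.array([
    [1, 1, 0, 0],    # path 1 uses links 1 and 2
    [0, 1, 1, 0],    # path 2 uses links 2 and 3
    [0, 0, 1, 1],    # path 3 uses links 3 and 4
    [1, 0, 0, 1],    # path 4 uses links 1 and 4
    [1, 0, 1, 0],    # path 5 uses links 1 and 3
])
x_true = np.array([2.0, 5.0, 1.0, 3.0])     # ground-truth link delays (ms)
y = R @ x_true                               # measured end-to-end path delays

# Every link is identifiable iff rank(R) equals the number of links.
print("rank(R) =", np.linalg.matrix_rank(R), "of", R.shape[1], "links")

x_hat, *_ = np.linalg.lstsq(R, y, rcond=None)
print("recovered link delays:", np.round(x_hat, 3))
```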
This chapter completes the topic of measurement design for additive network tomography, started in Chapter 3, by discussing how to construct suitable measurement paths to identify additive link metrics using a given set of monitors. As in Chapter 3, the focus is on the design of efficient path construction algorithms that make novel use of certain graph algorithms (specifically, algorithms for constructing independent spanning trees) to find a set of paths that form a basis of the link space without enumerating all possible paths. The chapter also discusses a variation of the path construction problem when the number of measurement paths is constrained and each measurement path may fail with a certain probability.
Chapters 7 and 8 are devoted to network tomography for stochastic link metrics, a more fine-grained model than the models of deterministic additive/Boolean metrics that captures the inherent randomness in link performance at small time scales. Referred to as stochastic network tomography, these problems are typically cast as parameter estimation problems, which model each link metric as a random variable with a (partially) unknown distribution and aim at inferring the parameters of these distributions from end-to-end measurements. Chapter 7 focuses on one branch of stochastic network tomography that is based on unicast measurements. It introduces a framework based on concepts from estimation theory (e.g., maximum likelihood estimation, Fisher information matrix, Cramér–Rao bound), within which probing experiments and parameter estimators are designed to estimate link parameters from unicast measurements with minimum errors. Closed-form solutions are given for inferring parameters of packet losses (i.e., loss tomography) and packet delay variations (i.e., packet delay variation tomography).
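As a simplified numerical sketch of this estimation-theoretic viewpoint (small hypothetical topology, placeholder loss rates, and a generic numerical optimizer in place of the chapter's closed-form estimators), the code below recovers per-link delivery probabilities by maximizing a binomial likelihood of unicast probe outcomes.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Loss tomography as parameter estimation: each link i delivers a packet with
# unknown probability alpha_i, so a unicast probe on a path succeeds with the
# product of its links' alphas.  Topology and values are a hypothetical example.
R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 1, 0]])
alpha_true = np.array([0.95, 0.90, 0.98, 0.85])
probes = 2000                                        # probes sent per path

path_success_prob = np.prod(alpha_true ** R, axis=1)
successes = rng.binomial(probes, path_success_prob)  # observed per-path success counts

def neg_log_likelihood(alpha):
    p = np.prod(alpha ** R, axis=1)
    # Binomial log-likelihood of the observed successes on every path.
    return -np.sum(successes * np.log(p) + (probes - successes) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=np.full(4, 0.9),
               bounds=[(1e-3, 1 - 1e-3)] * 4)
print("true link success probabilities:", alpha_true)
print("maximum-likelihood estimates:   ", np.round(res.x, 3))
```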