Usually, humanoid walking gaits are only roughly distinguished as stable or unstable. The stability of a humanoid walking gait is difficult to quantify on a scale, and it is extremely hard to adjust a humanoid robot to a walking gait suitable for different movement objectives such as fast walking, walking on uneven floors, and so on. This paper proposes a stability margin constructed from the center of pressure (COP) to evaluate the gait stability of humanoid walking. The stability margin is modeled by the COP regions that a humanoid robot needs for stable standing. We derive the mathematical model for the COP position by dividing the walking gait into single and double support phases in order to measure the stability of the COP regions. An actual measuring system for the stable COP regions is designed and implemented. The measured COP trajectory of a walking gait is then evaluated with respect to the stable COP regions to obtain the stability margins. The evaluation focuses on weak stability areas to be improved for robust walking gaits. To demonstrate the robustness of the improved walking gait, we replicate the experiment on three different terrains. The experiments demonstrate that walking gaits developed based on the stable COP regions can be applied to different movement objectives.
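For orientation, the COP position used in such a measuring system is typically the force-weighted average of the pressure-sensor positions on the sole. The following is a minimal sketch of that computation (illustrative only; the sensor layout, units, and function name are assumptions, not the authors' implementation):

    from typing import List, Tuple

    def center_of_pressure(sensors: List[Tuple[float, float, float]]) -> Tuple[float, float]:
        """Estimate the COP as the force-weighted average of sensor positions.

        Each reading is (x, y, f): the sensor's position on the foot sole
        (meters) and the vertical force it measures (newtons).
        """
        total_force = sum(f for _, _, f in sensors)
        if total_force <= 0.0:
            raise ValueError("no ground contact: total vertical force is zero")
        cop_x = sum(x * f for x, _, f in sensors) / total_force
        cop_y = sum(y * f for _, y, f in sensors) / total_force
        return cop_x, cop_y

    # Example with four assumed corner sensors on one foot.
    readings = [(0.00, 0.00, 40.0), (0.20, 0.00, 10.0),
                (0.00, 0.10, 30.0), (0.20, 0.10, 20.0)]
    print(center_of_pressure(readings))  # COP shifts toward the heavily loaded side

A COP trajectory measured this way can then be compared against the stable COP regions to read off a stability margin at each instant of the gait.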
Computer science and economics have engaged in a lively interaction over the past 15 years, resulting in a new field called algorithmic game theory or alternatively economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to online advertising, fundamentally involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, as recent research in computer science complements the traditional economic literature in several ways. For example, computer science offers a focus on and a language to discuss computational complexity; has popularized the widespread use of approximation bounds to reason about models where exact solutions are unrealistic or unknowable; and proposes several alternatives to Bayesian or average-case analysis that encourage robust solutions to economic design problems.
This book grew out of my lecture notes for my course “Algorithmic Game Theory,” which I taught at Stanford five times between 2004 and 2013. The course aims to give students a quick and accessible introduction to many of the most important concepts in the field, with representative models and results chosen to illustrate broader themes. This book has the same goal, and I have stayed close to the structure and spirit of my classroom lectures. Brevity necessitates omitting several important topics, including Bayesian mechanism design, compact game representations, computational social choice, contest design, cooperative game theory, incentives in cryptocurrencies and networked systems, market equilibria, prediction markets, privacy, reputation systems, and social computing. Many of these areas are covered in the books by Brandt et al. (2016), Hartline (2016), Nisan et al. (2007), Parkes and Seuken (2016), Shoham and Leyton-Brown (2009), and Vojnović (2016).
Reading the first paragraph of every lecture provides a quick sense of the book's narrative, and the “top 10 list” on pages 299–300 summarizes the key results in the book. In addition, each lecture includes an “Upshot” section that highlights its main points. After the introductory lecture, the book is loosely organized into three parts.
You might think that a section with this title is silly or unnecessary since everyone knows what evidence is. However, matters are not quite that simple, since the term evidence is not so easy to define and its use often arouses controversy, especially in the field of intelligence analysis. One problem with the definition of evidence is that several other terms are often used synonymously with it, when in fact there are distinctions to be made among these terms that are not always apparent. Quite unnecessary controversy also arises because some believe that the term evidence arises and has meaning only in the field of law. We will start by discussing these matters, since evidence is the foundation of all intelligence analyses.
We are actually not assisted very much in defining evidence by consulting a dictionary. For example, look at the Oxford English Dictionary under the term evidence and you will be led in a circle; evidence is ultimately defined as being evidence.
A variety of terms are so often used as synonyms for the term evidence: data, items of information, facts, and knowledge. When examined carefully, there are some valid and important distinctions to be made among these terms. First, consider the terms data and items of information. There are untold trillions of data or items of information in existence that will almost certainly never become evidence in any intelligence analysis or in most other inferences either. Here's a datum or item of information for you: Professor Schum has a long and steep driveway in front of his house that makes shoveling snow off of it very difficult in the winter. Can you think of any intelligence analysis problem in which this datum could become evidence? About the only matter in which this datum could become interesting evidence involves the question: Why did Schum and his wife, Anne, ever purchase this house in the first place? As we will discuss, data or items of information only become evidence when their relevance is established regarding some matter to be proved or disproved.
This textbook has been written for those studying the process of drawing conclusions from masses of evidence resulting from extensive investigations in a variety of contexts, including intelligence analysis, cybersecurity, criminal investigations, and military and business inferences and decisions. Many universities now offer undergraduate and graduate courses concerning these activities. These courses are offered in order to provide introductory preparation for persons contemplating future work in these contexts. These courses have also been of interest to persons having various levels of past experience in these activities, but who are seeking additional knowledge concerning matters their current work requires.
As you see, our book's subtitle is a frequently used metaphor: Connecting the Dots. This metaphor seems appropriate in characterizing the evidential and inferential matters discussed in our book. The metaphor may have gained its current popularity following the terrorist attacks in New York City and Washington, D.C., on September 11, 2001. It was frequently said that the intelligence services did not connect the dots appropriately in order to have possibly prevented the catastrophes that occurred. Since then, we have seen and heard this metaphor applied in the news media to inferences in a very wide array of contexts in addition to the aforementioned intelligence, legal, military, and business contexts. For example, we have seen it applied to allegedly faulty medical diagnoses; to allegedly faulty conclusions in historical studies; to allegedly faulty or unpopular governmental decisions; and in discussions involving the conclusions reached by competing politicians. What is also true is that the commentators on television and radio, or the sources of written accounts of inferential failures, never tell us what they mean by the phrase “connecting the dots.” A natural explanation is that they have never even considered what this phrase means and what it might involve.
Our major objective in this book is to provide accurate, useful, and extensive information about the evidential and inferential issues encountered by persons whose tasks require them to “connect the dots” to draw conclusions from masses of different kinds of evidence that come from a variety of different sources.
BELIEVABILITY: THE FOUNDATION OF ALL ARGUMENTS FROM EVIDENCE
The second major credential of evidence involves its believability (sometimes also referred to as credibility). Suppose that an analyst is considering an item of information as possibly relevant evidence. This credential involves the question, “Can we believe what this item of evidence is telling us?” Of course, this believability question involves considering the source from which this item came. The order in which we ask the relevance question, “So what?” and the believability question, “Can we believe it?” is immaterial. Which one of the relevance and believability questions we ask first will not affect our discussion of the credentials of evidence. In our view, the evidential and inferential issues surrounding believability assessment form perhaps the most difficult and interesting questions to be asked in intelligence analysis, or in any other context for that matter. One major difficulty is that we must ask different kinds of believability questions depending on the kinds of evidence we have and on the sources from which this evidence has come. In the next section, when we discuss basic substance-blind forms of evidence, we will see how the credibility questions we must ask about tangible items of evidence, such as those provided by IMINT, COMINT, and MASINT, differ from those we must ask about human informants or assets who provide testimonial evidence in the form of HUMINT. And, in the case of HUMINT, we require additional questions regarding the competence of HUMINT informants; one basic error in intelligence analyses is that the competence of a HUMINT source is often construed as evidence about this source's credibility. As we will show, competence does not entail credibility, nor does credibility entail competence; they are entirely separate matters.
Regardless of the form and source of the evidence being considered, we must distinguish between evidence about an event and the event itself. Having evidence about an event does not entail that this event actually occurred. To believe that an event did occur just because we have evidence for it invites all sorts of inferential trouble, primarily because sources of intelligence evidence of any kind have every possible gradation of believability shy of perfection.
Lectures 2-4 focused on the design of mechanisms that maximize, exactly or approximately, the social welfare of the outcome. Revenue is generated in such mechanisms only as a side effect, a necessary evil to incentivize agents to report truthfully their private information. This lecture studies mechanisms that are designed to raise as much revenue as possible, and characterizes the expected revenue-maximizing mechanism with respect to a prior distribution over agents’ valuations.
Section 5.1 explains why reasoning about revenue maximization is harder than welfare maximization, and introduces Bayesian environments. Section 5.2 is the heart of this lecture, and it characterizes expected revenue-maximizing mechanisms as “virtual welfare maximizers.” Section 5.3 describes how this theory was used to boost sponsored search revenue at Yahoo. Section 5.4 proves a technical lemma needed for the characterization in Section 5.2.
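As a standard preview of the characterization in Section 5.2 (a textbook illustration, not a new result), the virtual valuation of a bidder whose valuation v is drawn from a distribution with cdf F and density f is

    \varphi(v) = v - \frac{1 - F(v)}{f(v)}.

For a valuation uniform on [0, 1], F(v) = v and f(v) = 1, so \varphi(v) = 2v - 1; maximizing expected virtual welfare then excludes bidders with negative virtual value, which in a single-item auction yields the familiar reserve price \varphi^{-1}(0) = 1/2.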
The Challenge of Revenue Maximization
Spoiled by Social Welfare Maximization
There are several reasons to begin the study of mechanism design with the objective of maximizing social welfare. The first reason is that this objective is relevant to many real-world scenarios. For instance, in government auctions (e.g., to sell wireless spectrum; see Lecture 8), the primary objective is welfare maximization. Revenue is also a consideration in such auctions, but it is usually not the first-order objective. Also, in competitive markets, a rule of thumb is that a seller should focus on welfare maximization, since otherwise a competitor will offer buyers a better deal (thereby stealing their customers).
The second reason to start with social welfare maximization is pedagogical: social welfare is special. In every single-parameter environment, there is a DSIC mechanism that, for every profile of private valuations, assuming truthful bids, computes the welfare-maximizing outcome (cf., Exercise 4.1). Such a mechanism optimizes the social welfare as effectively as if all of the private information was known in advance—the DSIC constraint is satisfied for free. This amazingly strong performance guarantee, called an “ex post” guarantee, cannot generally be achieved for other objective functions.
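To make this concrete, here is a minimal sketch (an illustration of the canonical single-item case, not code from the text) of a sealed-bid second-price auction, the simplest DSIC, welfare-maximizing mechanism: the highest bidder wins and pays the second-highest bid, so with truthful bids the item goes to the bidder who values it most.

    def second_price_auction(bids):
        """Single-item Vickrey auction: highest bidder wins, pays second-highest bid.

        With truthful bids the winner is the bidder with the highest valuation,
        so the allocation maximizes social welfare, and truthful bidding is a
        dominant strategy for every bidder.
        """
        if len(bids) < 2:
            raise ValueError("need at least two bids")
        order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        return order[0], bids[order[1]]

    # Example with assumed bids: bidder 2 wins and pays 7.0, the second-highest bid.
    print(second_price_auction([5.0, 7.0, 9.0]))  # -> (2, 7.0)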
This lecture is our last on mechanism design, and it covers some of the greatest hits of mechanism design without money. Kidney exchanges, the case study covered in Section 10.1, have been deeply influenced by ideas from mechanism design over the past ten years. These exchanges enable thousands of successful kidney transplants every year. Stable matching and the remarkable deferred acceptance algorithm (Section 10.2) form the basis of modern algorithms for many assignment problems, including medical residents to hospitals and students to elementary schools. This algorithm also enjoys some beautiful mathematical properties and incentive guarantees (Section 10.3).
Case Study: Kidney Exchange
Background
Many people suffer from kidney failure and need a kidney transplant. In the United States, more than 100,000 people are on the waiting list for such transplants. An old idea, used also for other organs, is deceased donors; when someone dies and is a registered organ donor, their organs can be transplanted into others. One special feature of kidneys is that a healthy person has two of them and can survive just fine with only one of them. This creates the possibility of living organ donors, such as a family member of the patient in need.
Unfortunately, having a living kidney donor is not always enough; sometimes a patient-donor pair is incompatible, meaning that the donor's kidney is unlikely to function well in the patient. Blood and tissue types are the primary culprits for incompatibilities. For example, a patient with O blood type can only receive a kidney from a donor with the same blood type, and similarly an AB donor can only donate to an AB patient.
Suppose patient P1 is incompatible with her donor D1 because they have blood types A and B, respectively. Suppose P2 and D2 are in the opposite boat, with blood types B and A, respectively (Figure 10.1). Even though (P1, D1) and (P2, D2) have probably never met, exchanging donors seems like a pretty good idea, with P1 receiving her kidney from D2 and P2 from D1. This is called a kidney exchange.
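To see the mechanics in code, the following is a minimal sketch (blood-type compatibility only; tissue-type crossmatching, longer cycles, and chains are ignored, and the pair data are invented) that checks which incompatible patient-donor pairs can help each other through a pairwise exchange like the one in Figure 10.1:

    # ABO rules: the recipient blood types each donor blood type can give to.
    CAN_DONATE_TO = {
        "O": {"O", "A", "B", "AB"},
        "A": {"A", "AB"},
        "B": {"B", "AB"},
        "AB": {"AB"},
    }

    def pairwise_exchanges(pairs):
        """Find two-way exchanges among incompatible patient-donor pairs.

        `pairs` maps a pair name to (patient_blood_type, donor_blood_type).
        A two-way exchange between pairs a and b needs donor a to be
        compatible with patient b and donor b with patient a.
        """
        names = list(pairs)
        swaps = []
        for a in range(len(names)):
            for b in range(a + 1, len(names)):
                pa, da = pairs[names[a]]
                pb, db = pairs[names[b]]
                if pb in CAN_DONATE_TO[da] and pa in CAN_DONATE_TO[db]:
                    swaps.append((names[a], names[b]))
        return swaps

    # The example from the text: (P1, D1) has blood types (A, B), (P2, D2) has (B, A).
    print(pairwise_exchanges({"pair1": ("A", "B"), "pair2": ("B", "A")}))
    # -> [('pair1', 'pair2')]: D2 donates to P1 and D1 donates to P2.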
This paper suggests that time could have a much richer mathematical structure than that of the real numbers. Clark & Read (1984) argue that a hypertask (uncountably many tasks done in a finite length of time) cannot be performed. Assuming that time takes values in the real numbers, we give a trivial proof of this. If we instead take the surreal numbers as a model of time, then not only are hypertasks possible but so is an ultratask (a sequence which includes one task done for each ordinal number—thus a proper class of them). We argue that the surreal numbers are in some respects a better model of the temporal continuum than the real numbers as defined in mainstream mathematics, and that surreal time and hypertasks are mathematically possible.
In the previous chapters, we have discussed the different types of evidence (such as testimonial or tangible) and the ingredients of their believability assessment. However, very rarely if ever does the analyst have access to the original evidence. Most often, what is being analyzed is an item of evidence that has undergone a series of transformations through a chain of custody. Here we have borrowed an important concept from the field of law, where a chain of custody refers to the persons or devices having access to the original source evidence, the time at which they had such access, and what they did to the original evidence when they had access to it. The original evidence may be altered in various ways at various links in chains of custody. The important point here is to consider the extent to which what the analyst finally receives is an authentic and complete account of what an original source provided. Uncertainties arising in chains of custody of intelligence evidence are not always taken into account. One result is that analysts are often misled about what the evidence is telling them.
Basically, establishing a chain of custody involves identifying the persons and devices involved in the acquisition, processing, examination, interpretation, and transfer of evidence between the time the evidence is acquired and the time it is provided to intelligence analysts. Many things may have been done to evidence in a chain of custody that altered the original item of evidence or provided an inaccurate or incomplete account of it. In some cases, original evidence may have been tampered with in various ways. Unless these difficulties are recognized and possibly overcome, intelligence analysts are at risk of drawing quite erroneous conclusions from the evidence they receive. They are misled not by the original sources of evidence, but by the activities of the persons or devices that do various things to incoming intelligence evidence before it reaches them.
In civilian and military courts, proponents of evidence, for either side of the matter in dispute, are required to verify the chain of custody of tangible evidence before it is admitted to trial. In many cases, evidence gathered is passed from one person to another, each of whom may examine and process the evidence in various ways.
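As a minimal illustration of what a recorded chain of custody might look like (a hypothetical structure used here for exposition, not a prescribed or standard format), each link can be captured as who or what handled the item, when, and what was done to it:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CustodyLink:
        handler: str    # person or device that had access to the item
        timestamp: str  # when the access occurred
        action: str     # what was done: acquisition, translation, editing, transfer, ...

    @dataclass
    class EvidenceItem:
        description: str
        chain: List[CustodyLink] = field(default_factory=list)

        def record(self, handler: str, timestamp: str, action: str) -> None:
            """Append one link; undocumented gaps or alterations here are where uncertainty enters."""
            self.chain.append(CustodyLink(handler, timestamp, action))

    # Hypothetical example: an intercepted message passed through several hands.
    item = EvidenceItem("intercepted message, original language")
    item.record("collection device", "2015-03-01T10:00", "acquisition")
    item.record("translator", "2015-03-02T09:00", "translation into English")
    item.record("report officer", "2015-03-03T14:00", "summarization for the analyst")

Laying out the links in this way makes explicit where an account could have been altered, and hence which believability questions about the chain remain to be asked.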
The optimal design problem for a wireless power transfer link based on a resonant inductive coupling is addressed in this paper. It is assumed that the magnetic coupling coefficient and the inductor quality factors are known. By employing the conjugate image impedances, the values of the inductances realizing the optimal design with respect to given values of the network input and load impedances are derived. It is demonstrated that there is just one optimal design maximizing both the power delivered to the load and the power transfer efficiency of the link. The four possible schemes corresponding to the use of a parallel or a series arrangement for the two coupled resonators (Parallel-Parallel, Series-Series, Parallel-Series, and Series-Parallel) are considered and discussed. Closed form analytical formulas are derived and validated by circuital simulations.
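For orientation, a widely used figure of merit for such links (a standard result for resonant inductive coupling, stated here for context rather than reproduced from this paper's closed-form expressions) is k^2 Q_1 Q_2, where k is the coupling coefficient and Q_1, Q_2 are the inductor quality factors; under optimal matching the maximum power transfer efficiency is

    \eta_{\max} = \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2},

so, for example, k = 0.1 with Q_1 = Q_2 = 100 gives k^2 Q_1 Q_2 = 100 and \eta_{\max} \approx 0.82.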
This lecture segues into the third part of the course, where we ask: Do we expect strategic agents to reach an equilibrium? If so, which learning algorithms quickly converge to an equilibrium? Reasoning about these questions requires specifying dynamics, which describe how agents act when not at equilibrium. We consider dynamics where each agent's behavior is governed by an algorithm that attempts to, in some sense, learn the best response to how the other agents are acting. Ideally, we seek results that hold for multiple simple and natural learning algorithms. Then, even though agents may not literally follow such an algorithm, we can still have some confidence that our conclusions are robust and not an artifact of the particular choice of dynamics. This lecture focuses on variations of “best-response dynamics,” while the next two lectures study dynamics based on regret-minimization.
Section 16.1 defines best-response dynamics and proves convergence in potential games. Sections 16.2 and 16.3 introduce ε-best-response dynamics and prove that several variants of it converge quickly in atomic selfish routing games where all agents have a common origin and destination. Section 16.4 proves that, in the (λ, μ)-smooth games defined in Lecture 14, several variants of best-response dynamics quickly reach outcomes with objective function value almost as good as at an equilibrium.
Best-Response Dynamics in Potential Games
Best-response dynamics is a straightforward procedure by which agents search for a pure Nash equilibrium (PNE) of a game (Definition 13.2), using successive unilateral deviations.
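As a minimal sketch of the procedure (an illustrative simulation under assumed parameters, not code from the text), consider a load-balancing game in which identical agents choose among parallel links and each agent's cost is the load on its link; this is a potential game, so best-response dynamics terminates at a PNE.

    import random

    def best_response_dynamics(num_agents=10, num_links=3, max_rounds=10_000):
        """Run best-response dynamics in a simple load-balancing (potential) game.

        Each agent picks one link and pays the number of agents on that link.
        While some agent has a strictly better link available, move one such
        agent to its best response; Rosenthal's potential function strictly
        decreases with every move, so the process stops at a pure Nash
        equilibrium.
        """
        choice = [random.randrange(num_links) for _ in range(num_agents)]
        for _ in range(max_rounds):
            load = [choice.count(link) for link in range(num_links)]
            movers = [i for i in range(num_agents) if min(load) + 1 < load[choice[i]]]
            if not movers:
                return choice  # no profitable unilateral deviation: a PNE
            i = random.choice(movers)
            choice[i] = load.index(min(load))  # switch to a best response
        raise RuntimeError("did not converge")

    print(best_response_dynamics())  # final loads differ by at most one across links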
This lecture is a case study on the practical implementation of combinatorial auctions for wireless spectrum, an important and challenging multi-parameter mechanism design problem. While our sponsored search case studies (Sections 2.6 and 5.3) involve billions of small-stakes auctions, spectrum auction design concerns a single auction with billions of dollars of potential revenue.
Section 8.1 explains the practical benefits of indirect mechanisms. Section 8.2 discusses the prospects for selling multiple items via separate single-item auctions. Section 8.3 describes simultaneous ascending auctions, the primary workhorse in wireless spectrum auctions, while Section 8.4 weighs the pros and cons of package bidding. Section 8.5 outlines the cutting edge of spectrum auction design, the 2016 FCC Incentive Auction.
Indirect Mechanisms
In a combinatorial auction (Example 7.2) there are n bidders, m items, and each bidder i's valuation specifies her value vi(S) for each bundle S of items that she might receive. In principle, the VCG mechanism provides a DSIC and welfare-maximizing combinatorial auction (Theorem 7.3). This mechanism is potentially practical if bidders’ valuations are sufficiently simple (Exercise 7.5), but not otherwise (Section 7.3). For example, the number of parameters that each bidder reports in the VCG mechanism, or any other direct-revelation mechanism, grows exponentially with the number of items m.
The utter absurdity of direct-revelation combinatorial auctions motivates indirect mechanisms, which learn information about bidders’ preferences only on a “need-to-know” basis. The canonical indirect auction is the ascending English auction; see also Exercise 2.7. This auction format is familiar from the movies: an auctioneer keeps track of the current price and tentative winner, and the auction stops when only one interested bidder remains. Each bidder has a dominant strategy, which is to stay in the auction as long as the current price is below her valuation (the bidder might win for positive utility) and to drop out once the current price reaches her valuation (after which winning can only lead to negative utility). If all bidders play these strategies, then the outcome of the English auction is the same as that of a second-price (sealed-bid) auction. The second-price auction is the result of applying the revelation principle (Theorem 4.3) to the English auction.
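The equivalence is easy to see in a simulation. The following is a minimal sketch (assumed valuations and a small fixed bid increment, purely for illustration) of an ascending English auction in which every bidder follows her dominant strategy; the item goes to the highest-valuation bidder at roughly the second-highest valuation, the second-price outcome.

    def english_auction(valuations, increment=1.0):
        """Ascending English auction with bidders playing their dominant strategy.

        The price rises by `increment`; each bidder drops out once the price
        reaches her valuation. The auction stops when at most one interested
        bidder remains, so the winner pays (up to the increment) the
        second-highest valuation.
        """
        price = 0.0
        active = set(range(len(valuations)))
        while len(active) > 1:
            price += increment
            still_in = {i for i in active if valuations[i] > price}
            if not still_in:  # the last bidders dropped out at the same price
                still_in = {max(active, key=lambda i: valuations[i])}
            active = still_in
        return active.pop(), price

    # Example with assumed valuations 5, 7, and 9: bidder 2 wins at price 7.0,
    # the same outcome as a sealed-bid second-price auction.
    print(english_auction([5.0, 7.0, 9.0]))  # -> (2, 7.0)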