The results formulated in the previous chapter (Chapter II) will be proved in Chapters V–IX; that is, we will need 5 chapters, more than 250 pages! Chapter III plays an intermediate role: it is a preparation for the main task, and it also answers some of the questions raised in Section 4. For example, in Section 15 we discuss an interesting result related to Kaplansky's n-in-a-line game.
The main goal of Chapter III is to demonstrate the amazing flexibility of the potential technique on a wide range of simple applications.
Easy building via Theorem 1.2
Some of the statements formulated in Chapter II have easy proofs. So far we have proved two potential criteria, both simple: (1) the Weak Win criterion Theorem 1.2, and (2) the Strong Draw criterion Theorem 1.4 (“Erdős–Selfridge”). In a few lucky cases a direct reference to Theorem 1.2 supplies the optimal result.
Weak Win in the Van der Waerden Game. A particularly simple example is the upper bound in Theorem 8.1 (“arithmetic progression game”). We recall the (N, n) Van der Waerden Game: the board is [N] = {1, 2, …, N} and the winning sets are the n-term A.P.s (“arithmetic progressions”) in [N].
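The hypergraph of this game is easy to enumerate explicitly; the following sketch (the function name is our choice, not from the text) lists all winning sets:

```python
# Enumerate the winning sets of the (N, n) Van der Waerden Game:
# all n-term arithmetic progressions contained in the board [N] = {1, ..., N}.

def ap_winning_sets(N, n):
    """Return every n-term A.P. {a, a+d, ..., a+(n-1)d} inside [N]."""
    sets = []
    for a in range(1, N + 1):
        d = 1
        while a + (n - 1) * d <= N:          # last term must stay on the board
            sets.append(tuple(a + i * d for i in range(n)))
            d += 1
    return sets

print(ap_winning_sets(5, 3))
# → [(1, 2, 3), (1, 3, 5), (2, 3, 4), (3, 4, 5)]
```

Feeding the size and maximum degree of this hypergraph into Theorem 1.2 is what yields the upper bound mentioned above.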
Mathematics is spectacularly successful at making generalizations: the more than 2000-year-old fields of arithmetic and geometry were developed into the monumental fields of calculus, modern algebra, topology, algebraic geometry, and so on. On the other hand, mathematics has remarkably little to say about nontraditional complex systems. A good example is the notorious “3n + 1 problem”: if n is even, take n/2; if n is odd, take (3n + 1)/2; show that, starting from an arbitrary positive integer n and applying the two rules repeatedly, we eventually end up with the periodic sequence 1, 2, 1, 2, 1, 2, …. The problem was raised in the 1930s, and after 70 years of diligent research it remains completely hopeless!
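The two rules are trivial to state in code and to test empirically (a sketch, with names of our choosing); of course, no amount of computation settles the conjecture:

```python
# The "3n + 1" rule as stated above: n -> n/2 if n is even, (3n+1)/2 if odd.

def collatz_step(n):
    return n // 2 if n % 2 == 0 else (3 * n + 1) // 2

def reaches_one(n, max_steps=10_000):
    """Iterate the rule until we hit 1 (then the play cycles 1, 2, 1, 2, ...)."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = collatz_step(n)
    return False

# Empirical check for small starting values -- not a proof!
assert all(reaches_one(n) for n in range(1, 1000))
```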
Next consider some games. Tic-Tac-Toe is an easy game, so let's switch to 3-space. The 3 × 3 × 3 Tic-Tac-Toe is a trivial first player win, the 4 × 4 × 4 Tic-Tac-Toe is a very difficult first player win (computer-assisted proof by O. Patashnik in the late 1970s), and the 5 × 5 × 5 Tic-Tac-Toe is a hopeless open problem (it is conjectured to be a draw game). Note that there is a general recipe to analyze games: perform backtracking on the game-tree (or position graph). For the 5 × 5 × 5 Tic-Tac-Toe this requires about 3^125 steps, which is totally intractable.
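The general recipe – labeling every position by backtracking – can be sketched on a toy game (the example is our choice, not from the text; the full 5 × 5 × 5 game-tree is, of course, far out of reach):

```python
# Backtracking on the game-tree, illustrated on a tiny subtraction game:
# players alternately take 1 or 2 stones; whoever takes the last stone wins.
# Each position is labeled "win" or "loss" for the player to move.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True iff the player to move wins from this position."""
    if stones == 0:
        return False  # no legal move: the previous player took the last stone
    # A position is a win iff some move leads to a losing position
    # for the opponent -- the backward-labeling rule.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

# The losing positions are exactly the multiples of 3:
assert [wins(s) for s in range(7)] == [False, True, True, False, True, True, False]
```

The same labeling rule (with a third label, “draw”) applies verbatim to Tic-Tac-Toe-like games; the only obstacle is the astronomical number of positions.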
We face the same “combinatorial chaos” with the game of Hex.
The reader is owed a few missing details such as (1) how to modify the Achievement proofs to obtain the Avoidance proofs, (2) the Chooser-Picker game, (3) the best-known Pairing Strategy Draw in the n^d hypercube Tic-Tac-Toe (part (b) in Open Problem 34.1).
Also we discuss a few new results: generalizations and extensions, such as what happens if we extend the board from the complete graph KN and the N × N lattice to a typical sub-board.
We discuss these generalizations, extensions, and missing details in the last four sections (Sections 46–49).
More exact solutions and more partial results
Extension: from the complete board to a typical sub-board. The book is basically about two results, Theorems 6.4 and 8.2, and their generalizations (discrepancy, biased, Picker-Chooser, Chooser-Picker, etc.). Here is another, perhaps the most interesting, way to generalize. In Theorem 6.4 (a) the board is KN, that is, a very special graph; what happens if we replace KN with a typical graph GN on N vertices?
Playing the usual (1:1) game on an arbitrary finite graph G, we can define the Clique Achievement (Avoidance) Number of G in the usual way, namely answering the question: “What is the largest clique Kq that Maker can build (that Forcer can force Avoider to build)?”
Part B is a practice session for the potential technique, demonstrating the enormous flexibility of this technique.
We look at about a dozen amusing “little” games (similar to the S-building game in Section 1). There is a large variety of results, starting with straightforward applications of Theorem 1.2 (“building”) and Theorem 1.4 (“blocking”), and ending with sophisticated proofs like the 6-page-long proof of Theorem 20.3 (“Hamiltonian cycle game”) and the 10-page-long proof of Theorem 15.1 (“Kaplansky's Game”).
The core idea is the mysterious connection between games and randomness. By using the terms “game-theoretic first moment” and “game-theoretic second moment,” we tried to emphasize this connection.
The point is to collect a lot of “easy” proofs. To get a “feel” for the subject the reader is advised to go through a lot of easy stuff. Reading Part B is an ideal warmup for the much harder Parts C-D.
A reader in a big rush focusing on the exact solutions may skip Part B entirely, and jump ahead to Sections 23–24 (where the “hard stuff” begins).
The objective of game-playing is winning, but very often winning is impossible for the simple reason that the game is a draw game: either player can force a draw. Blocking the opponent's winning sets is a solid way to force a draw; this is what we call a Strong Draw.
The main issue here is the Neighborhood Conjecture. The general case remains unsolved, but we can prove several useful partial results about blocking (called the Three Ugly Theorems).
Our treatment of the blocking part has a definite architecture. Metaphorically speaking, it is like a five-storied building where Theorems 34.1, 37.5, 40.1 represent the first three floors in this order, and Sections 43 and 44 represent the fourth and fifth floors; the higher floors are supported by the lower floors (there is no shortcut!).
An alternative way to look at the Neighborhood Conjecture is the Phantom Decomposition Hypothesis (see the end of Section 19), which is a kind of game-theoretic independence. In fact, there are two interpretations of game-theoretic independence: a “trivial” interpretation and a “non-trivial” one.
The “trivial” (but still very useful) interpretation is about disjoint games; Pairing Strategy is based on this simple observation. Disjointness guarantees that in each component either player can play independently from the rest of the components.
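As a minimal illustration of the Pairing Strategy idea (the toy example and names below are ours): if every winning set contains both points of some pair in a fixed pairing of the board, then answering each of the opponent's moves with the mate of that move blocks every winning set.

```python
# Pairing Strategy sketch: the blocker fixes a pairing of the board in advance
# and always replies to a move x with x's mate. If every winning set contains
# a complete pair, the opponent can never occupy a whole winning set.

def pairing_responder(pairing):
    """pairing: dict mapping each board point to its mate; returns a reply rule."""
    def reply(opponent_move):
        return pairing[opponent_move]
    return reply

# Toy board {1,...,6} paired as (1,2), (3,4), (5,6).
pairing = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}
winning_sets = [{1, 2, 3}, {3, 4, 5}, {2, 5, 6}]

# The blocking condition: every winning set contains both points of some pair.
assert all(any({x, pairing[x]} <= W for x in W) for W in winning_sets)
```

Disjointness is what makes this work: each pair is a tiny independent sub-game, and the blocker plays each sub-game separately from the rest.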
In the “non-trivial” interpretation the initial game does not fall apart into disjoint components.
Chess, Tic-Tac-Toe, and Hex are among the best-known games of complete information with no chance moves. What is common in these apparently very different games? In each game the winner is the player who achieves a “winning configuration” first. A “winning configuration” in Tic-Tac-Toe is a “3-in-a-row,” in Hex it is a “connecting chain of hexagons,” and in Chess it is a “capture of the opponent's King” (called a checkmate).
The objective of other well-known games of complete information like Checkers and Go is more complicated. In Checkers the goal is to be the first player either to capture all of the opponent's pieces (checkers) or to build a position where the opponent cannot make a move. The capture of a single piece (jumping over) is a “mini-win configuration,” and, similarly, an arrangement where the opponent cannot make a move is a “winning configuration.”
In Go the goal is to capture as many stones of the opponent as possible (“capturing” means to “surround a set of opponent's stones by a connected set”).
These games are clearly very different, but the basic question is always the same: “Which player can achieve a winning configuration first?”
The bad news is that no one knows how to achieve a winning configuration first, except by exhaustive case study. There is no general theorem whatsoever answering the question of how. The well-known strategy stealing argument gives a partial answer to when, but doesn't say a word about how.
Everything that we know about ordinary win in a positional game comes from Strategy Stealing. We owe the reader a truly precise treatment of this remarkable existence argument. Also we make the vague term “exhaustive search” precise by introducing a backtracking algorithm called “backward labeling”. We start the formal treatment with a definite terminology (which is common sense anyway).
Terminology of Positional Games. There are some fundamental notions of games which are used in a rather confusing way in everyday language. First, we must distinguish between the abstract concept of a game, and the individual plays of that game.
In everyday usage, game and play are often synonyms. Tennis is a good example of another kind of confusion. To win a game of tennis, we have to win two or three sets, and to win a set, we must win six (or seven) games; i.e., certain components of the game are again called “games.” If the score in a set is 6:6 – a “tie” – then, by a relatively new rule in tennis, the players have to play a “tie-break.” We will avoid “tie,” and use “draw” instead; “drawing strategy” sounds better than “tie, or tying, strategy.”
In our terminology a game is simply the set of the rules that describe it.
There is an old story about the inventor of Chess, which goes something like this. When the King learned the new game, he quickly fell in love with it, and invited the inventor to his palace. “I love your game,” said the King, “and to express my appreciation, I decided to grant your wish.” “Oh, thank you, Your Majesty,” began the inventor, “I am a humble man with a modest wish: just put one piece of rice on the first little square of the chess board, 2 pieces of rice on the second square, 4 pieces on the third square, 8 pieces on the fourth square, and so on; you double in each step.” “Oh, sure,” said the King, and immediately called for his servants, who started to bring in rice from the huge storage room of the palace. It didn't take too long, however, to realize that the rice in the palace was not enough; in fact, as the court mathematician pointed out, even the rice produced by the whole world in the last thousand years wouldn't be enough to fulfill the inventor's wish (2^64 − 1 pieces of rice). Then the King became so angry that he gave the order to execute the inventor. This is how the King discovered Combinatorial Chaos.
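The court mathematician's count is just the geometric sum over the 64 squares, which a one-line check confirms:

```python
# One grain on square 1, doubling on each of the 64 squares:
# 1 + 2 + 4 + ... + 2^63 = 2^64 - 1.
total = sum(2 ** i for i in range(64))
assert total == 2 ** 64 - 1
print(total)  # 18446744073709551615
```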
Of course, there is a less violent way to discover Combinatorial Chaos.
There are two natural ways to generalize the concept of Positional Game: (1) the discrepancy version, where Maker wants (say) 90% of some hyperedge instead of 100%; and (2) the biased version, like the (1 : 2) play, where underdog Maker claims 1 point per move and Breaker claims 2 points per move.
Chapter VI is devoted to the discussion of these generalizations.
Neither generalization is a perfect success, but there is a big difference. The discrepancy version generalizes rather smoothly; the biased version, on the other hand, leads to some unexpected tormenting(!) technical difficulties.
The main issue here is to formulate and prove the Biased Meta-Conjecture. The biased case is work in progress; what we currently know is a bunch of (very interesting!) sporadic results, but the general case remains wide open.
We don't see any a priori reason why the biased case should be more difficult than the fair (1:1) case. No one understands why the general biased case is still unsolved.
The Biased Meta-Conjecture is the most exciting research project that the book can offer. We challenge the reader to participate in the final solution.
The biased Maker–Breaker and Avoider–Forcer games remain mostly unsolved, but we are surprisingly successful with the biased (1:s) Chooser–Picker game where Chooser is the underdog (in each turn Picker picks (s + 1) new points, Chooser chooses one of them, and the rest goes back to Picker).
The missing Strong Draw parts of Theorems 8.2, 12.6, and 40.2 will be discussed here; we prove them in the reverse order. These are the most difficult proofs in the book. They demand a solid understanding of Chapter VIII. The main technical challenge is the lack of Almost Disjointness.
Chapters I–VI were about Building and Chapters VII–VIII were about Blocking. We separated these two tasks because undertaking them at the same time – ordinary win! – was hopelessly complicated. Now we have a fairly good understanding of Building (under the name of Weak Win), and have a fairly good understanding of Blocking (under the name of Strong Draw). We return to an old question one more time: “Even if ordinary win is hopeless, is there any other way to combine the two different techniques in a single strategy?” The answer is “yes,” and some interesting examples will be discussed in Section 45. One of them is the proof of Theorem 12.7: “second player's moral-victory.”
Winning planes: exact solution
The objective of this section is to prove the missing Strong Draw part of Theorems 12.6 and 40.2. The winning sets in these theorems are “planes”; two “planes” may be disjoint, or intersect in a point, or intersect in a “line.” The third case – “line-intersection” – is a novelty which cannot happen in Almost Disjoint hypergraphs; “line-intersection” requires extra considerations.
Games belong to the oldest experiences of mankind, well before the appearance of any kind of serious mathematics. (“Serious mathematics” is in fact very young: Euclid's Elements is less than three thousand years old.) The playing of games has long been a natural instinct of all humans, and this is why the solving of games is a natural instinct of mathematicians. Recreational mathematics is a vast collection of all kinds of clever observations (“pre-theorems”) about games and puzzles, the perfect empirical background for a mathematical theory. It is well known that games of chance played an absolutely crucial role in the early development of Probability Theory. Similarly, Graph Theory grew out of puzzles (i.e. 1-player games) such as the famous Königsberg bridge problem, solved by Euler (“Euler trail”), or Hamilton's round-trip puzzle on the graph of the dodecahedron (“Hamilton cycle problem”). Unlike these two very successful theories, we still do not have a really satisfying quantitative theory of games of pure skill with complete information, or, as they are usually called nowadays, Combinatorial Games. In technical terms, Combinatorial Games are 2-player zero-sum games, mostly finite, with complete information and no chance moves, whose payoff function takes the three values +1, −1, 0 according as the first player wins the play, loses it, or it ends in a draw.
The binary erasure channel (BEC) is perhaps the simplest non-trivial channel model imaginable. It was introduced by Elias as a toy example in 1954. The emergence of the Internet promoted the erasure channel into the class of “real-world” channels. Indeed, erasure channels can be used to model data networks, where packets either arrive correctly or are lost due to buffer overflows or excessive delays.
A priori, one might well doubt that studying the BEC will significantly advance our understanding of the general case. Quite surprisingly, however, most properties and statements that we encounter in our investigation of the BEC hold in much greater generality. Thus, the effort invested in fully understanding the BEC case will reap substantial dividends later on.
You do not need to read the whole chapter to know what iterative decoding for the BEC is about. The core of the material is contained in Sections 3.1–3.14 as well as 3.24. The remaining sections concern either more specialized or less accessible topics. They can be read in almost any order.
CHANNEL MODEL
Erasure channels model situations where information may be lost but is never corrupted. The BEC captures erasure in the simplest form: single bits are transmitted and either received correctly or known to be lost. The decoding problem is to find the values of the bits given the locations of the erasures and the non-erased part of the codeword. Figure 3.1 depicts the BEC(∊).
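A minimal simulation of the BEC(∊) makes the model concrete (a sketch with names of our choosing, not from the text):

```python
# Binary erasure channel: each transmitted bit is received intact with
# probability 1 - eps, or replaced by the erasure symbol '?' with
# probability eps. Bits are never flipped -- erased, not corrupted.

import random

def bec(bits, eps, seed=0):
    """Pass a sequence of 0/1 bits through a BEC with erasure probability eps."""
    rng = random.Random(seed)
    return ['?' if rng.random() < eps else b for b in bits]

sent = [0, 1, 1, 0, 1]
received = bec(sent, eps=0.3)
# Every non-erased symbol equals the transmitted bit:
assert all(r == '?' or r == b for r, b in zip(received, sent))
```

The decoder's task, in this picture, is exactly as stated above: recover the `'?'` positions from the code constraints and the surviving bits.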