Given the field F, the vector space F^n exists for every positive integer n, and a linear code of blocklength n is defined as any vector subspace of F^n. Subspaces of dimension k exist in F^n for every integer k ≤ n. In fact, very many subspaces of dimension k exist. Each subspace has a minimum Hamming weight, defined as the smallest Hamming weight of any nonzero vector in that subspace. We are interested in those subspaces of dimension k over GF(q) for which the minimum Hamming weight is large.
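To make the definition concrete, here is a small brute-force sketch in Python (our own illustration, not from the text; the [6, 3] generator matrix is a hypothetical example). It enumerates the codewords of a binary linear code and computes the minimum Hamming weight:

    from itertools import product

    # Hypothetical generator matrix of a [6, 3] binary linear code (k = 3, n = 6).
    G = [[1, 0, 0, 1, 1, 0],
         [0, 1, 0, 1, 0, 1],
         [0, 0, 1, 0, 1, 1]]

    def codeword(msg, G):
        """Encode a message as a GF(2) linear combination of the rows of G."""
        return tuple(sum(m * g[j] for m, g in zip(msg, G)) % 2
                     for j in range(len(G[0])))

    # The minimum Hamming weight is the smallest weight of any nonzero codeword.
    weights = [sum(codeword(m, G)) for m in product([0, 1], repeat=3) if any(m)]
    print(min(weights))  # prints 3 for this generator matrix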
In the study of F^n and its subspaces, there is no essential restriction on n. This remark is true in the finite field GF(q) just as in any other field. However, in the finite field, it is often useful to index the components of vectors in GF(q)^n by the elements of the field GF(q), when n = q, or by the nonzero elements of the field GF(q), when n = q − 1. The technique of using the elements of GF(q) to index the components of a vector over GF(q) is closely related both to the notion of a cyclic code and to polynomial evaluation. The essential idea of using nonzero field elements as indices can be extended to blocklength n = (q − 1)^2 by indexing the components of the vector υ by pairs of nonzero elements of GF(q). Then the vector υ is displayed more naturally as a two-dimensional array.
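As a minimal sketch of this indexing (our own illustration, with q = 5 assumed, so n = (q − 1)^2 = 16), the following Python fragment displays a vector indexed by pairs of nonzero elements of GF(5) as a 4 by 4 array:

    # A vector of blocklength n = (q - 1)^2 over GF(q), here with q = 5.
    q = 5
    v = list(range(16))  # placeholder component values

    # Index the components by pairs (a, b) of nonzero elements of GF(5);
    # component v[i] sits at row a, column b of the array.
    array = {(a, b): v[(a - 1) * (q - 1) + (b - 1)]
             for a in range(1, q) for b in range(1, q)}

    # The vector displayed as a two-dimensional (q - 1) by (q - 1) array:
    for a in range(1, q):
        print([array[(a, b)] for b in range(1, q)])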
Codes on curves, along with their decoding algorithms, have been developed in recent years by using rather advanced topics from algebraic geometry, a difficult and specialized branch of mathematics. The applications discussed in this book may be one of the few times that the somewhat inaccessible topics of algebraic geometry, such as the Riemann–Roch theorem, have entered the engineering literature. With the benefit of hindsight, we shall describe the codes in a more elementary way, without much algebraic geometry, emphasizing connections with bicyclic codes and the two-dimensional Fourier transform.
We shall discuss the hermitian codes as our primary example and the Klein codes as our secondary example. The class of hermitian codes, in its fullest form, is probably large enough to satisfy whatever needs may arise in communication systems of the near future. Moreover, this class of codes can be used to illustrate general methods that apply to other classes of codes. The Klein codes comprise a small class of codes over GF(8) with a rather rich and interesting structure, though probably not of practical interest.
An hermitian code is usually defined on a projective plane curve or on an affine plane curve. These choices for the definition are most analogous to the definitions of a doubly extended or singly extended Reed–Solomon code.
Error-control codes are now in widespread use in many applications such as communication systems, magnetic recording systems, and optical recording systems. The compact disk and the digital video disk are two familiar examples of such applications.
We shall discuss only block codes for error control. A block code for error control is a set of n-tuples in some finite alphabet, usually the finite field GF(q). The reason for choosing a field as the alphabet is to have a rich arithmetic structure so that practical codes can be constructed and encoders and decoders can be designed as computational algorithms. The most popular block codes are linear. This means that the componentwise sum of two codewords is a codeword, and any scalar multiple of a codeword is a codeword. So that a large number of errors can be corrected, it is desirable that codewords be very dissimilar from each other. This dissimilarity will be measured by the Hamming distance.
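Hamming distance is simple to compute; the following sketch (our own) also notes why, for a linear code, the minimum distance between codewords equals the minimum Hamming weight:

    def hamming_distance(u, v):
        """Number of coordinates in which two words differ."""
        return sum(a != b for a, b in zip(u, v))

    # For a linear code, d(u, v) = weight(u - v), and u - v is itself a
    # codeword, so minimum distance equals minimum nonzero weight.
    print(hamming_distance((1, 0, 1, 1, 0), (1, 1, 1, 0, 0)))  # prints 2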
The most important class of block codes, the Reed–Solomon codes, will be described as an exercise in the complexity of sequences and in Fourier transform theory. Another important class of block codes, the BCH codes, will be described as a class of subcodes of the Reed–Solomon codes, all of whose components lie in a subfield. The BCH codes and the Reed–Solomon codes are examples of cyclic codes, which themselves form a subclass of the class of linear block codes.
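As a hedged illustration of the evaluation (Fourier transform) view of a Reed–Solomon code – our own sketch over the prime field GF(7), with α = 3 taken as a primitive element – a codeword is the vector of evaluations of a message polynomial of degree less than k at the n = q − 1 powers of α:

    # Sketch of a (6, 3) Reed-Solomon code over GF(7); alpha = 3 has order 6.
    q, alpha, k = 7, 3, 3
    n = q - 1

    def rs_encode(msg):
        """Evaluate msg[0] + msg[1]*x + ... + msg[k-1]*x^(k-1)
        at alpha^0, ..., alpha^(n-1)."""
        return [sum(m * pow(alpha, i * j, q) for j, m in enumerate(msg)) % q
                for i in range(n)]

    print(rs_encode([2, 5, 1]))  # one codeword of the (6, 3) code

Evaluation at the powers of α is exactly a Fourier transform over GF(7), so the codeword can equally be viewed as the transform of the message extended by zeros.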
An array, υ = [υ_{i′i″}], defined as a doubly indexed set of elements from a given alphabet, was introduced in Chapter 5. There we studied the relationship between the two-dimensional array υ and its two-dimensional Fourier transform V. In this chapter, further properties of arrays will be developed by drawing material from the subject of commutative algebra, but enriching this material for our purposes and presenting some of it from an unconventional point of view.
The two-dimensional array υ can be represented by the bivariate polynomial υ(x, y), so we can study arrays by studying bivariate polynomials, which is the theme of this chapter. The polynomial notation provides us with a convenient way to describe an array. Many important computations involving arrays can be described in terms of the addition, subtraction, multiplication, and division of bivariate polynomials. Although n-dimensional arrays also can be studied as n-variate polynomials, in this book we shall treat only two-dimensional arrays and bivariate polynomials.
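Here is a short sketch (our own, using numpy) of that correspondence: multiplying two bivariate polynomials convolves their coefficient arrays in two dimensions.

    import numpy as np

    def poly_mul_2d(a, b):
        """Multiply bivariate polynomials given as coefficient arrays, where
        entry [i, j] is the coefficient of x^i y^j; the product is the
        two-dimensional convolution of the two arrays."""
        out = np.zeros((a.shape[0] + b.shape[0] - 1,
                        a.shape[1] + b.shape[1] - 1), dtype=int)
        for (i, j), coeff in np.ndenumerate(a):
            out[i:i + b.shape[0], j:j + b.shape[1]] += coeff * b
        return out

    # (1 + xy)(1 + x + y) = 1 + x + y + xy + x^2 y + x y^2
    print(poly_mul_2d(np.array([[1, 0], [0, 1]]),
                      np.array([[1, 1], [1, 0]])))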
As the chapter develops, it will turn heavily toward the study of ideals, zeros of ideals, and the relationship between the number of zeros of an ideal and the degrees of the polynomials in any set of polynomials that generates the ideal. A well-known statement of this kind is Bézout's theorem, which bounds the number of zeros of an ideal generated by two polynomials.
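For a concrete instance of the bound (our own brute-force check over GF(7), with the hypothetical curves y = x^2 and y = x), two generating polynomials of degrees 2 and 1 have at most 2 · 1 = 2 common affine zeros:

    from itertools import product

    q = 7
    f = lambda x, y: (y - x * x) % q  # degree 2
    g = lambda x, y: (y - x) % q      # degree 1
    zeros = [(x, y) for x, y in product(range(q), repeat=2)
             if f(x, y) == 0 and g(x, y) == 0]
    print(zeros, len(zeros) <= 2 * 1)  # [(0, 0), (1, 1)] True

In this example the Bézout bound happens to be met with equality.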
An array υ = [υ_{i′i″}] is a doubly indexed set of elements from any given alphabet. The alphabet may be a field F, and this is the case in which we are interested. We will be particularly interested in arrays over the finite field GF(q). An array is a natural generalization of a sequence; we may refer to an array as a two-dimensional sequence or, with some risk of confusion, as a two-dimensional vector.
An array may be finite or infinite. We are interested in finite n′ by n″ arrays, and in those infinite arrays [υ_{i′i″}] that are indexed by nonnegative integer values of the indices i′ and i″. An infinite array is doubly periodic if integers n′ and n″ exist such that υ_{i′+n′,i″} = υ_{i′,i″+n″} = υ_{i′i″} for all i′ and i″. Any finite array can be made into a doubly periodic infinite array by periodically replicating it on both axes.
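A one-line numpy sketch (our own) of the periodic replication:

    import numpy as np

    v = np.array([[1, 2, 3],
                  [4, 5, 6]])
    # Three periods in each direction of the doubly periodic extension,
    # with periods n' = 2 and n'' = 3.
    print(np.tile(v, (3, 3)))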
The notion of an array leads naturally to the notion of a bivariate polynomial; the elements of the array υ are the coefficients of the bivariate polynomial υ(x, y). Accordingly, we take the opportunity in this chapter to introduce bivariate polynomials and some of their basic properties. The multiplication of bivariate polynomials is closely related to the two-dimensional convolution of arrays. Moreover, the evaluation of bivariate polynomials, especially bivariate polynomials over a finite field, is closely related to the two-dimensional Fourier transform.
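The following sketch (our own, over GF(7) with α = 3, so n′ = n″ = 6) checks that the two-dimensional Fourier transform of an array is the evaluation of its bivariate polynomial at pairs of powers of α:

    import random

    q, alpha, n = 7, 3, 6  # alpha = 3 has order 6 in GF(7)
    v = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

    def ft2(v):
        """Two-dimensional Fourier transform over GF(7)."""
        return [[sum(v[i1][i2] * pow(alpha, i1 * j1 + i2 * j2, q)
                     for i1 in range(n) for i2 in range(n)) % q
                 for j2 in range(n)] for j1 in range(n)]

    def evaluate(v, x, y):
        """Evaluate the bivariate polynomial with coefficient array v at (x, y)."""
        return sum(v[i1][i2] * pow(x, i1, q) * pow(y, i2, q)
                   for i1 in range(n) for i2 in range(n)) % q

    V = ft2(v)
    assert all(V[j1][j2] == evaluate(v, pow(alpha, j1, q), pow(alpha, j2, q))
               for j1 in range(n) for j2 in range(n))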
In contrast to the class of Reed–Solomon codes, which was introduced by engineers, the class of hermitian codes was introduced by mathematicians as an example of an important class of algebraic geometry codes. In this chapter, we shall reintroduce hermitian codes as they might have appeared had they been discovered by the engineering community. Some additional insights are revealed by this alternative formulation. In particular, we will shift our emphasis from the notion of punctured codes on curves to the notion of shortened codes on curves. We then give constructions of hermitian codes as quasi-cyclic codes and as linear combinations of Reed–Solomon codes akin to the Turyn construction. Much of the structure of hermitian codes stands out quite clearly when a code is restricted to the bicyclic plane (or torus), thereby forming an epicyclic hermitian code. If one takes the view that the cyclic form is the more fundamental form of the Reed–Solomon code, then perhaps one should take the parallel view that the epicyclic form is the more fundamental form of the hermitian code. In particular, we shall see that, for the epicyclic form of an hermitian code, there is no difference between a punctured code and a shortened code. This is important because the punctured code is compatible with encoding and the shortened code is compatible with decoding. In Section 11.2, we shall provide a method for the direct construction of shortened epicyclic hermitian codes.
Every “Theory” of Games concentrates on one aspect only, and pretty much neglects the rest. For example:
(I) Traditional Game Theory (J. von Neumann, J. Nash, etc.) focuses on the lack of complete information (for example, card games like Poker). Its main result is a minimax theorem about mixed strategies (“random choice”), and it is basically Linear Algebra. Games of complete information (like Chess, Go, Checkers, Nim, Tic-Tac-Toe) are (almost) completely ignored by the traditional theory.
(II) One successful theory for games of complete information is the “Theory of Nim-like compound games” (Bouton, Sprague, Grundy, Berlekamp, Conway, Guy, etc. – see volume one of Winning Ways). It focuses on “sum-games”, and it is basically Algebra (“addition theory”).
(III) In this book we are tackling something completely different: the focus is on “winning configurations,” in particular on “Tic-Tac-Toe-like games,” and we develop a “fake probabilistic method.” Note that “Tic-Tac-Toe-like games” are often called Positional Games.
Here in Appendix D a very brief outline of (I) and (II) is given. The subject is games, so the very first question is: “What is a game?” Well, this is a hard one; an easier question is: “How can one classify games?” One natural classification is the following:
The main objective of Chapter VIII is to develop a more sophisticated version of the BigGame–SmallGame Decomposition technique (introduced in Sections 35–36).
We prove the second Ugly Theorem, and we formulate and prove the third Ugly Theorem. Both are about Almost Disjoint hypergraphs. In Section 42 we extend the decomposition technique from Almost Disjoint to more general hypergraphs; we call it the RELARIN technique. These tools will be heavily used again in Chapter IX to complete the proof of Theorem 8.2.
Proof of the second Ugly Theorem
The Neighborhood Conjecture (Open Problem 9.1) is a central issue of the book. The first result toward Open Problem 9.1 was Theorem 34.1, or as we called it: the first Ugly Theorem (see Section 36 for the proof). The second Ugly Theorem (Theorem 37.5) is more powerful. It gives the best-known Strong Draw result for the n^d hypercube Tic-Tac-Toe (Theorem 12.5 (a)), and it is also necessary for the solution of the Lattice Games (Theorem 8.2).
Proof of Theorem 37.5. We assume that the reader is familiar with the proof of Theorem 34.1. In the proof of Theorem 34.1, Breaker used the Power-of-Two Scoring System in the Big Game to prevent the appearance of the “Forbidden Configurations” in the small game, and in this way he could ensure the “simplicity” of the small game. The small game was so simple that Breaker could block every “emergency set” by a trivial Pairing Strategy.
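As a minimal sketch of a power-of-two scoring rule (our own Erdős–Selfridge-style illustration, not Beck's actual Big Game strategy), Breaker can score each winning set he has not yet blocked by 2 raised to minus the number of points Maker still needs, and then take a free point of maximum total score:

    def breaker_move(sets, maker, breaker, free):
        """Pick the free point whose unblocked sets carry the largest total
        power-of-two score; sets are frozensets of board points."""
        def score(point):
            return sum(2.0 ** -len(A - maker)
                       for A in sets
                       if point in A and not (A & breaker))
        return max(free, key=score)

    # Tiny example: the three rows of a 3 x 3 board as the winning sets.
    sets = [frozenset((r, c) for c in range(3)) for r in range(3)]
    maker, breaker = {(0, 0), (0, 1)}, set()
    free = {(r, c) for r in range(3) for c in range(3)} - maker - breaker
    print(breaker_move(sets, maker, breaker, free))  # blocks at (0, 2)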
Here is a nutshell summary of what we did in Part A: the goal of the first chapter was to introduce the basic concepts such as Positional Game, Weak Win, Strong Draw, and to demonstrate the power of the potential technique on several amusing examples. The goal of the second chapter was to formulate the main results such as Theorem 6.4 and Theorem 8.2 (“exact solutions”), and also the Meta-Conjecture, the main issue of the book.
Part B was a practice session for the potential technique.
In the forthcoming Parts C–D, we discuss the most difficult proofs, in particular the exact solutions of our Ramseyish games with 2-dimensional goals. Part C is the building part and Part D is (mainly) the blocking part.
In Part A, we introduced two simple “linear” criteria (Theorem 1.2 and Theorem 1.4), and gave a large number of applications. Here, in Part C, we develop some more sophisticated “higher moment” criteria. The motivation for “higher moments” comes from Probability Theory, where some of the main classical results – such as the central limit theorem and the law of the iterated logarithm – are based on higher moment techniques; our “higher moment” criteria are applied in a very similar way.
Note in advance that the last part of the book (Part D) also has a strong probabilistic flavor: Part D is about how to “sequentialize” the global concept of statistical independence.
In Chapter IV, we start to explore the connection between randomness and games. A more systematic study is made of the probabilistic approach, which is referred to as a “fake probabilistic method.”
The main ingredients of the “fake probabilistic method” are:
(1) the two linear criteria (“Part A”) – for some applications see Part B;
(2) the Advanced Weak Win Criterion together with the ad hoc method of Section 23 (“Part C”);
(3) the BigGame–SmallGame Decomposition and its variants (“Part D”).
The main result in Chapter V is (2): the Advanced Weak Win Criterion, a complicated “higher moment” criterion. It is complicated in many different ways:
(i) the form of the criterion is already rather complicated;
(ii) the proof of the criterion is long and complicated;
(iii) the application to the Clique Game requires complicated calculations.
This criterion basically solves the building part of the Meta-Conjecture (see Section 9).
Motivating the probabilistic approach
Let us return to Section 6: consider the Maker-Breaker version of the (K_N, K_q) Clique Game (we don't use the notation [K_N, K_q] any more). How do we prove lower bound (6.1)? How can Maker build such a large clique?
Halving Argument. The Ramsey criterion Theorem 6.2, combined with the Erdős–Szekeres bound, gives the size q = ½ log_2 N, which is roughly ¼ of the truth.
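The arithmetic behind this figure (our reconstruction in LaTeX, assuming the standard Erdős–Szekeres bound R(q) < 4^q):

    % If N >= 4^q, every two-coloring of K_N contains a monochromatic K_q, so
    \[
      N = 4^{q} \quad\Longleftrightarrow\quad q = \tfrac{1}{2}\log_{2} N ,
    \]
    % while the achievable clique size is about 2 log_2 N; hence the
    % guaranteed q is roughly one quarter of the truth.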