There are many known examples of 2-designs and, even with the restriction that they are symmetric, we have already constructed several infinite families. But t-designs with t > 2 are much rarer. Although (as we proved in Theorem 1.5) t-structures exist for arbitrarily high values of t, there are no known non-trivial 7-designs and, in fact, the first non-trivial 6-designs were not discovered until 1982 (see [6]). In this chapter we look at some of the important t-designs with t ≥ 3 and at various methods of constructing such designs.
Any t-design must be the extension of a (t – 1)-design and in Section 4.2 we prove Cameron's Theorem, which says that there are very few possibilities for extending a symmetric design to a 3-design. This leads us to extend symmetric Hadamard 2-designs to obtain Hadamard 3-designs which, in turn, leads to the construction of the little Mathieu designs in Section 4.4.
Just as we used the 2-homogeneous groups ASL(2, q) for q ≡ 3 (mod 4) to construct the Paley designs ℒ(q), in Section 4.3 we use the 3-transitive groups PGL(2, q) to construct some 3-designs. We then, in Section 4.5, discuss some of Alltop's ideas for constructing (t + 1)-designs from t-transitive groups and indicate how to use the 3-transitive groups PGL(2, 2^n) to construct some 4-designs which, in fact, turn out to be 5-designs. Finally, in Section 4.6 we prove a generalisation of Fisher's Inequality for 2s-designs with s ≥ 1, which leads to the concept of tight designs. Both Sections 4.5 and 4.6 contain very little detail, and interested readers will have to consult the relevant references.
In this chapter we continue the study of symmetric designs but in a somewhat more specific way than in Chapter 2. Section 3.2 contains a detailed discussion of the relation between projective and affine planes and develops some of the theory of non-desarguesian planes. (This latter development is primarily concerned with translation planes, quasifields and semifields. It has a different algebraic flavour than the rest of the book and, although the results are important, the proofs may be skipped if necessary.) Affine planes lead naturally to a discussion of latin squares in Section 3.3 followed by nets which are a very important class of 1-designs; in Section 3.4 one of the applications of nets discussed is the construction of a new infinite family of symmetric designs. Section 3.5 deals with Hadamard designs and Hadamard matrices and contains a construction of the Paley designs. Section 3.6 has a fairly detailed discussion of biplanes (symmetric designs with λ = 2). In Section 3.7 we study the special class of graphs called ‘strongly regular’ and develop their elementary theory (e.g. eigenvalues and multiplicities), as well as giving a number of infinite families. Such graphs enable us to construct some new symmetric designs, and in addition they will be used again later in the book (see Chapters 7 and 8). The connections between strongly regular graphs and design theory are among the most important examples of the fruitful relationship between graphs and designs.
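Since Section 3.5 revolves around Hadamard matrices, a small computational illustration may help. The Paley construction itself needs quadratic residues; the sketch below instead uses Sylvester's simpler doubling construction (a standard method for orders 2^k, not necessarily the one used in the text) and checks the defining property H·Hᵀ = nI of a Hadamard matrix.

```python
# Sketch: Sylvester's doubling construction of Hadamard matrices
# (entries are +/-1 and H * H^T = n * I). This is one standard
# construction, for orders n = 2^k; the Paley construction discussed
# in Section 3.5 instead uses quadratic residues mod q.

def sylvester_hadamard(k):
    """Return a 2^k x 2^k Hadamard matrix as a list of rows."""
    h = [[1]]
    for _ in range(k):
        # Double the order: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def is_hadamard(h):
    """Check that distinct rows are orthogonal and each row has norm n."""
    n = len(h)
    for i in range(n):
        for j in range(n):
            dot = sum(h[i][t] * h[j][t] for t in range(n))
            if dot != (n if i == j else 0):
                return False
    return True
```

For example, `is_hadamard(sylvester_hadamard(3))` confirms the property for the order-8 matrix, which is the order relevant to the Hadamard 2-designs on 7 points.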
The subject of design theory has grown out of several branches of mathematics, and has been increasingly influenced in recent years by developments in other areas. Its statistical origins are still evident in some of its standard terminology (thus ‘v’ for the number of points in a structure comes from ‘varieties’). Today it has very fruitful connections with group theory, graph theory, coding theory and geometry; by and large, these ties have run in both directions.
We have attempted in this book to lay the groundwork for an understanding of designs, with advanced undergraduate or postgraduate students in mind. Our aim is to prepare the reader to use designs in other fields or to enter the active field of designs themselves. Finite projective and affine geometries are central to design theory, and are introduced early in the book. Since classical geometry is a very large field, the student with a background in this subject will be at an advantage, but we have tried to present a treatment sufficiently self-contained to answer the needs of a reader with a reasonable knowledge of linear algebra. The subject of symmetric designs is also introduced early, and its important aspects (the Bruck–Ryser–Chowla Theorem, Singer groups and difference sets, Hadamard 2-designs, etc.) are developed. The first four chapters, covering basic definitions, geometry and symmetric designs, are designed to be part of any course based on the book.
The other four chapters can be studied more or less independently of one another.
In general, 1-designs are less interesting than t-designs with t > 1; it is possible to construct them easily and there do not seem to be many deep theorems about them. But with certain extra properties imposed upon them, 1-designs can become complicated and important objects, in particular with crucial connections to group theory and geometry.
Among the most important 1-designs are generalised quadrangles, which are studied in Section 7.3 as special members of a class of 1-designs called Γα-geometries, introduced in Section 7.2. Using strongly regular graphs we prove some elementary results about Γα-geometries, and we give some infinite families of examples. In particular, we develop two infinite families of generalised quadrangles, one classical (in the sense that it comes from a polarity of a projective geometry), the other not. The other classical generalised quadrangles involve deeper projective geometry and algebra, and we do not include them. (The surprisingly rich and complex theory of generalised quadrangles, and their connections to group theory as well as geometry, is beyond the scope of this book; some of the flavour of the subject is all that we can impart.) There are many unsolved problems (about existence, non-existence, and structure) in the area of Γα-geometries.
In Section 7.4 we study semisymmetric designs, which are 1-designs that generalise (and include) symmetric designs. Besides a number of examples, the section contains results about upper and lower bounds on the number of points in a semisymmetric design, and touches upon the many open questions in this area.
This chapter is the introduction to structures and designs and, while it is completely elementary, it is essential to the rest of the book. Section 1.2 contains the basic definitions. In Section 1.3 we then give a number of examples. We begin by listing some small, carefully chosen ones to illustrate the meanings of the earlier definitions, but then go on to examples based on projective and affine geometry. Obviously knowledge of classical geometry will help the reader to follow and understand these examples, but we have tried to make our explanations as full as possible and to make the entire chapter self-contained. Nevertheless the importance of finite projective and affine geometry to design theory cannot be overemphasised and we include some excellent references for further reading. In Section 1.4 we return to definitions and results about arbitrary structures, in particular relating a structure to others which can be constructed from it or from which it can be constructed. Section 1.5 studies the incidence matrix of a structure, already introduced in Section 1.2, and uses it to prove a number of basic theorems, in particular Fisher's Inequality and properties of square structures and symmetric designs. Polarities are introduced and the incidence matrix is exploited to prove a number of their basic and important properties. In Section 1.6 the notion of tactical decomposition of a structure is introduced, Block's Lemma is proved, and applications to automorphism groups (in particular the Orbit Theorems) are deduced. Resolutions and parallelisms are briefly introduced as well. Section 1.7 contains a brief discussion of graph theory and some of its connections with the theory of structures and designs.
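The incidence-matrix arguments mentioned above can be made concrete on the smallest interesting example. The sketch below (an illustration, not taken from the text) builds the incidence matrix N of the Fano plane, the 2-(7, 3, 1) design, and checks the identity N·Nᵀ = (r − λ)I + λJ that underlies the proof of Fisher's Inequality; here r = 3 and λ = 1, and b = v = 7, so the design is symmetric.

```python
# Sketch: the incidence matrix of the Fano plane (the 2-(7,3,1) design)
# and the identity N * N^T = (r - lambda) I + lambda J used in the
# incidence-matrix proofs of Fisher's Inequality (here r = 3, lambda = 1).

# Points 0..6; the seven lines of the Fano plane (one standard labelling).
lines = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

# Incidence matrix: rows indexed by points, columns by blocks.
N = [[1 if p in blk else 0 for blk in lines] for p in range(7)]

# Check N * N^T = (r - lam) I + lam J entrywise.
r, lam = 3, 1
for i in range(7):
    for j in range(7):
        dot = sum(N[i][k] * N[j][k] for k in range(7))
        assert dot == (r - lam) * (1 if i == j else 0) + lam

# Fisher's Inequality b >= v holds here with equality: the design is symmetric.
assert len(lines) >= 7
```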
Both natural and programming languages can be viewed as sets of sentences—that is, finite strings of elements of some basic vocabulary. The notion of a language introduced in this section is very general. It certainly includes both natural and programming languages and also all kinds of nonsense languages one might think of. Traditionally, formal language theory is concerned with the syntactic specification of a language rather than with any semantic issues. A syntactic specification of a language with finitely many sentences can be given, at least in principle, by listing the sentences. This is not possible for languages with infinitely many sentences. The main task of formal language theory is the study of finitary specifications of infinite languages.
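A toy example may make "finitary specification of an infinite language" concrete. The grammar S → aSb | ab, with just two rules, is a finite description of the infinite language { aⁿbⁿ : n ≥ 1 }; the names below are illustrative, not from the text.

```python
# Sketch: a finitary specification of an infinite language.
# The two-rule grammar S -> aSb | ab specifies the infinite
# language { a^n b^n : n >= 1 }.

def in_language(w):
    """True iff w is in { a^n b^n : n >= 1 }."""
    n = len(w) // 2
    return len(w) == 2 * n and n >= 1 and w == "a" * n + "b" * n

def generate(max_n):
    """Enumerate the first max_n sentences derivable from the grammar."""
    return ["a" * n + "b" * n for n in range(1, max_n + 1)]

# generate(3) -> ['ab', 'aabb', 'aaabbb']
```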
The basic theory of computation, as well as of its various branches, such as cryptography, is inseparably connected with language theory. The input and output sets of a computational device can be viewed as languages, and—more profoundly—models of computation can be identified with classes of language specifications, in a sense to be made more precise. Thus, for instance, Turing machines can be identified with phrase-structure grammars and finite automata with regular grammars.
A finite automaton is a strictly finitary model of computation. Everything involved is of a fixed, finite size and cannot be extended during the course of computation. The other types of automata studied later have at least a potentially infinite memory. Differences between various types of automata are based mainly on how information can be accessed in the memory.
A finite automaton operates in discrete time, as do all essential models of computation. Thus, we may speak of the “next” time instant when specifying the functioning of a finite automaton.
The simplest case is the memoryless device, where, at each time instant, the output depends only on the current input. Such devices are models of combinational circuits.
In general, however, the output produced by a finite automaton depends on the current input as well as on earlier inputs. Thus, the automaton is capable (to a certain extent) of remembering its past inputs. More specifically, this means the following.
The automaton has a finite number of internal memory states. At each time instant i it is in one of these states, say q_i. The state q_{i+1} at the next time instant is determined by q_i and by the input a_i received at time instant i. The output at time instant i is determined by the state q_i (or by q_i and a_i together).
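The state-transition behaviour just described can be sketched in a few lines. The concrete automaton below (an illustrative choice, not from the text) remembers the parity of the 1s read so far: the next state depends on the current state and input, and the output depends on the state.

```python
# Sketch of the finite automaton described above: the next state is
# determined by the current state and input, and the output by the
# current state. Illustrative instance: an automaton remembering the
# parity of the 1s read so far.

def run_automaton(delta, output, q0, inputs):
    """Run a finite automaton; return the sequence of outputs."""
    q = q0
    outs = []
    for a in inputs:
        q = delta[(q, a)]       # q_{i+1} is determined by q_i and a_i
        outs.append(output[q])  # the output is determined by the state
    return outs

# States: 'even' and 'odd' (parity of the 1s seen so far).
delta = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}
output = {"even": 0, "odd": 1}

# run_automaton(delta, output, "even", [1, 1, 0, 1]) -> [1, 0, 0, 1]
```

Note that the automaton is strictly finitary: the tables `delta` and `output` are fixed in advance and never grow during the computation.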
As is true for all our models of computation, a Turing machine also operates in discrete time. At each moment of time it is in a specific internal (memory) state, the number of all possible states being finite. A read-write head scans letters written on a tape one at a time. A pair (q, a) determines a triple (q′, a′, m), where the q's are states, the a's are letters, and m (“move”) assumes one of the three values l (left), r (right), or 0 (no move). This means that, after scanning the letter a in the state q, the machine goes to the state q′, writes a′ in place of a (possibly a′ = a, meaning that the tape is left unaltered), and moves the read-write head according to m.
If the read-write head is about to “fall off” the tape, that is, a left (resp. right) move is instructed when the machine is scanning the leftmost (resp. rightmost) square of the tape, then a new blank square is automatically added to the tape. This capability of indefinitely extending the external memory can be viewed as a built-in hardware feature of every Turing machine. The situation is depicted in Figure 4.1.
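A single step of the machine, including the automatic extension of the tape, can be sketched as follows (the representation of tape and rules is an assumption for illustration; the sample rule is hypothetical).

```python
# Sketch of the transition rule described above: a pair (q, a)
# determines a triple (q', a', m), and a blank square is automatically
# added whenever the head would "fall off" either end of the tape.

BLANK = " "

def step(delta, q, tape, head):
    """Apply one Turing-machine step; return (q', tape', head')."""
    q2, a2, m = delta[(q, tape[head])]
    tape = tape[:head] + [a2] + tape[head + 1:]   # write a' in place of a
    if m == "l":
        head -= 1
    elif m == "r":
        head += 1
    # m == "0": no move
    if head < 0:                  # fell off the left end: extend the tape
        tape, head = [BLANK] + tape, 0
    elif head >= len(tape):       # fell off the right end: extend the tape
        tape = tape + [BLANK]
    return q2, tape, head

# Hypothetical rule: in state "s" scanning "a", write "b" and move right.
delta = {("s", "a"): ("s", "b", "r")}
# step(delta, "s", ["a"], 0) -> ("s", ["b", " "], 1)
```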
It might seem strange that a chapter on cryptography appears in a book dealing with the theory of computation, automata, and formal languages. However, in the last two chapters of this book we want to discuss some recent trends. Undoubtedly, cryptography now constitutes such a major field that it cannot be omitted, especially because its interconnections with some other areas discussed in this book are rather obvious. Basically, cryptography can be viewed as a part of formal language theory, although it must be admitted that the notions and results of traditional language theory have so far found only a few applications in cryptography. Complexity theory, on the other hand, is quite essential in cryptography. For instance, a cryptosystem can be viewed as safe if the problem of cryptanalysis—that is, the problem of “breaking the code”—is intractable. In particular, the complexity of certain number-theoretic problems has turned out to be a crucial issue in modern cryptography. And more generally, the seminal idea of modern cryptography, public key cryptosystems, would not have been possible without an understanding of the complexity of problems. On the other hand, cryptography has contributed many fruitful notions and ideas to the development of complexity theory.
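The asymmetry alluded to above can be illustrated with one of the standard number-theoretic problems. Modular exponentiation is fast, while recovering the exponent (the discrete logarithm) has no known fast general algorithm; the sketch below inverts it only by brute force. The tiny prime and base are purely illustrative, and real systems use numbers hundreds of digits long.

```python
# Sketch: the complexity asymmetry behind public key cryptography.
# Computing y = g^x mod p is fast (modular exponentiation), while
# recovering x from y (the discrete logarithm) is done here only by
# brute force. The parameters below are toy values for illustration.

p, g = 467, 2          # small illustrative prime and base

def mod_exp(x):
    """Fast direction: modular exponentiation via Python's built-in pow."""
    return pow(g, x, p)

def brute_force_log(y):
    """Slow direction: try every exponent (infeasible for real sizes)."""
    for x in range(p - 1):
        if pow(g, x, p) == y:
            return x
    return None
```

For the toy parameters, `brute_force_log(mod_exp(123))` recovers 123 after trying exponents one by one; at cryptographic sizes this search is hopeless, which is exactly the intractability a cryptosystem relies on.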