Our aim in this chapter is to gather together some basic results on strongly regular graphs and partial geometries. These topics have had a profound influence on the area of combinatorial designs since Bose's classical paper of 1963. The results of the first two chapters will provide the necessary background for later chapters. We refer to Harary for the necessary background in graph theory. Marcus and Minc will generally suffice for details of the matrix results used. For further applications of matrix tools to a variety of problems on designs, we refer to M.S. Shrikhande.
Let Γ be a finite undirected graph on n vertices. The adjacency matrix A of Γ is a square matrix of size n. The diagonal entries of A are zero and, for i ≠ j, the (i, j) entry of A is 1 or 0 according as the vertices i and j are joined by an edge or not. Other types of adjacency matrix are also in use; for example, Seidel, and Goethals and Seidel, employ a (0, ±1) adjacency matrix.
A graph Γ is called regular of valency a if A has constant row sum a. The adjacency matrix reflects many other graphical properties of Γ.
We now state two basic definitions. Firstly, a matrix A is permutationally congruent to a matrix B if there is a permutation matrix P such that A = PᵗBP.
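These definitions can be made concrete in a few lines of code. The following sketch (our own illustration, not from the text) builds the adjacency matrix of the 5-cycle C₅, a regular graph of valency 2, and verifies permutational congruence A = PᵗBP for a relabelling of its vertices; the graph and the permutation are chosen purely for illustration.

```python
n = 5
# Adjacency matrix of the 5-cycle: vertex i is joined to i-1 and i+1 (mod 5).
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]

# Diagonal entries are zero, and every row sum equals the valency 2.
assert all(A[i][i] == 0 for i in range(n))
assert all(sum(row) == 2 for row in A)

# Relabel the vertices by the permutation sigma; B is the adjacency matrix
# of the relabelled graph, and P the corresponding permutation matrix.
sigma = [2, 0, 3, 1, 4]
P = [[1 if sigma[i] == j else 0 for j in range(n)] for i in range(n)]
B = [[A[sigma[i]][sigma[j]] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pt = [[P[j][i] for j in range(n)] for i in range(n)]  # transpose of P
assert matmul(Pt, matmul(B, P)) == A                  # A = P^t B P
```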
All through the development of this monograph up to the present point (especially Chapters V through VIII), we have looked at various properties and characterizations, both parametric and geometric, of quasi-symmetric designs, essentially to the point of convincing ourselves that the problem of determination of all the quasi-symmetric designs is indeed a hard problem. With that background, this chapter will show us that the situation for quasi-symmetric 3-designs is more promising. This is to be expected since a derived design of a q.s. 3-design at any point is also quasi-symmetric and that gives us more information (Example 5.24). However, quite unlike the case of q.s. 2-designs, very few q.s. 3-designs seem to be known. In fact, up to complementation there are only three known examples of q.s. 3-designs with x ≠ 0. Application of the ‘polynomial method’ to this rather mysterious situation (see Cameron) is the theme of the present chapter.
Cameron's Theorem (Theorem 1.29) has been one of the focal points of our study of q.s. designs. This is more so for q.s. 3-designs, since the extensions of symmetric designs obtained in that theorem are in fact quasi-symmetric with x = 0 (and conversely). Despite its completeness, the classification given by Cameron's theorem is, perhaps unfortunately, only parametric, as the following discussion reveals. A Hadamard 3-design exists if and only if a Hadamard matrix of the corresponding order exists. Hadamard matrices are conjectured to exist for every admissible order, that is, for every multiple of four.
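As a small aside (our own illustration, not part of the text), the defining property of a Hadamard matrix of order n, namely HHᵗ = nI, is easy to verify computationally. The sketch below uses the classical Sylvester doubling construction, which produces Hadamard matrices of every order 2ᵐ, and checks the orthogonality of rows for n = 8.

```python
def sylvester(m):
    """Return the 2^m x 2^m Sylvester-type Hadamard matrix with entries +1/-1."""
    H = [[1]]
    for _ in range(m):
        # Doubling step: H -> [[H, H], [H, -H]].
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = sylvester(3)          # order n = 8, a multiple of four
n = len(H)
for i in range(n):
    for j in range(n):
        dot = sum(H[i][k] * H[j][k] for k in range(n))
        assert dot == (n if i == j else 0)   # H H^t = n I
```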
In this first chapter, we collect together and review some basic definitions, notation, and results from design theory. All of these are needed later on. Further details or proofs not given here may be found, for example, in Beth, Jungnickel and Lenz, Dembowski, Hall, Hughes and Piper, or Wallis. We mention also the monographs of Cameron and van Lint, Biggs and White, and the very recent one by Tonchev.
Let X = {x1, x2, …, xv} be a finite set of elements called points or treatments and β = {B1, B2, …, Bb} be a finite family of distinct k-subsets of X called blocks. Then the pair D = (X, β) is called a t-(v, k, λ) design if every t-subset of X occurs in exactly λ blocks. The integers v, k, and λ are called the parameters of the t-design D. The family consisting of all k-subsets of X forms a k-(v, k, 1) design which is called a complete design. The trivial design is the v-(v, v, 1) design. In order to exclude these degenerate cases we assume always that v > k > t ≥ 1 and λ ≥ 1. We use the term finite incidence structure to denote a pair (X, β), where X is a finite set and β is a finite family of not necessarily distinct subsets of X. In most of the situations of interest in the later chapters, however, we will have to tighten these restrictions further.
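The defining condition of a t-(v, k, λ) design is directly checkable by machine. The following sketch (ours, not the text's) verifies it for the Fano plane, the classical 2-(7, 3, 1) design, using one standard labelling of its seven lines.

```python
from itertools import combinations

points = range(7)
# The 7 lines of the Fano plane on points 0..6 (one standard labelling).
blocks = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def is_t_design(points, blocks, t, lam):
    """Check that every t-subset of points lies in exactly lam blocks."""
    return all(sum(set(T) <= B for B in blocks) == lam
               for T in combinations(points, t))

assert all(len(B) == 3 for B in blocks)      # constant block size k = 3
assert is_t_design(points, blocks, 2, 1)     # every pair in exactly one block
```

Note that the same check with t = 1 shows each point lies on exactly r = λ(v−1)/(k−1) = 3 blocks, a standard consequence of the definition.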
The adjective “nonlinear” will be used inclusively by taking “linear” to be a special case of “nonlinear.” As promised, we present in this chapter two different theories for nonlinear infinite networks. The first one is due to Dolezal and is very general in scope – except that it is restricted to 0-networks. It is an infinite-dimensional extension of the fundamental theory for scalar, finite, linear networks [67], [115], [127]. In particular, it examines nonlinear operator networks, whose voltages and currents are members of a Hilbert space ℋ; in fact, infinite networks whose parameters can be nonlinear, multivalued mappings restricted perhaps to subsets of ℋ are encompassed. As a result, virtually all the different kinds of parameters encountered in circuit theory – resistors, inductors, capacitors, gyrators, transformers, diodes, transistors, and so forth – are allowed. However, there is a price to be paid for such generality: Its existence and uniqueness theorems are more conceptual than applicable, because their hypotheses may not be verifiable for particular infinite networks. (In the absence of coupling between branches, the theory is easy enough to apply; see Corollary 4.1-7 below.) Nonetheless, with regard to the kinds of parameters encompassed, this is the most powerful theory of infinite networks presently available. Dolezal has given a thorough exposition of it in his two books [40], [41]. However, since no book on infinite electrical networks would be complete without some coverage of Dolezal's work, we shall present a simplified version of his theory.
The purposes of this initial chapter are to present some basic definitions about infinite electrical networks, to show by examples that their behaviors can be quite different from those of finite networks, and to indicate how they approximately represent various partial differential equations in infinite domains. Finally, we explain how the transient responses of linear RLC networks can be derived from the theory of purely resistive networks; this is of interest because most of the results of this book are established in the context of resistive networks.
Notations and Terminology
Let us start by reviewing some symbols and phraseology so as to dispel possible ambiguities in our subsequent discussions. We follow customary usage; hence, this section may be skipped and referred to only if the need arises. Also, an Index of Symbols is appended for the more commonly occurring notations in this book; it cites the pages on which they are defined.
Let X be a set. X is called denumerably infinite or just denumerable if its members can be placed in a one-to-one correspondence with all the natural numbers: 0, 1, 2, …. X is called countable if it is either finite or denumerable. In this book the set of branches of any network will always be countable.
The notation {x ∈ X: P(x)}, or simply {x: P(x)} if X is understood, denotes the set of all x ∈ X for which the proposition P(x) concerning x is true.
An important class of quasi-symmetric designs is the class of symmetric designs, characterized within the larger class by the property of having only one block intersection number. Though symmetric designs are themselves improper quasi-symmetric designs, as we already saw in Theorem 1.29, the extendable symmetric designs open up many possibilities for the parameter sets of proper quasi-symmetric designs. In fact, the classification theorem of Cameron (Theorem 1.29) has given rise to considerable activity in the area of quasi-symmetric 2- and 3-designs. We choose to postpone these topics to later chapters and concentrate here on the structure of those 3-designs that can be obtained as extensions of symmetric designs. In doing so, we will also consider some other quasi-symmetric designs (such as a residual design) associated with the extension process.
Recalling Cameron's Theorem, observe that it classifies extendable symmetric designs into four sets: an infinite set of symmetric designs, the first of which is a projective plane of order four; the infinite set of all Hadamard 2-designs; a projective plane of order ten; and a symmetric (495, 39, 3)-design. The existence of a Hadamard 2-design is equivalent to the existence of a Hadamard matrix of the corresponding order. Nothing is known about a (495, 39, 3)-design. In this chapter, we first consider the extension question for a projective plane of order ten.
The importance of coding theory as a valuable tool in the study of designs has been known for quite some time. We mention, for example, M. Hall, Jr., MacWilliams and Sloane, Pless, and also the monographs by Cameron and van Lint and Tonchev. Recently Tonchev, Calderbank, and Bagchi have proved some very nice results about designs using coding theory. We have referred to Bagchi's result (Theorem 7.30) in an earlier chapter.
The paper of Tonchev has shown the link between quasi-symmetric designs and self-dual codes. Calderbank has proved some elegant non-existence criteria for 2-designs in terms of their intersection numbers. The proof of one of Calderbank's results depends on some deep theorems of Gleason and Mallows, and MacWilliams-Sloane on weight enumerators of certain self-dual codes. The results of Calderbank and Tonchev, when specialized to quasi-symmetric designs, give strong results about existence, non-existence or uniqueness. For example, Tonchev shows the falsity of a part of the well-known Hamada conjecture concerning the rank of the incidence matrix of certain 2-designs. Some results of Tonchev and Calderbank seem to have been motivated by Neumaier's table of exceptional quasi-symmetric designs given in Chapter VIII.
The purpose of this chapter is to review some of the results of Tonchev and Calderbank which rely on codes as one of their principal tools.
This text is based on a course of the same title given at Cambridge for a number of years. It consists of an introduction to information theory and to coding theory at a level appropriate to mathematics undergraduates in their second or later years. Prerequisites needed are a knowledge of discrete probability theory and no more than an acquaintance with continuous probability distributions (including the normal). What is needed in finite-field theory is developed in the course of the text, but some knowledge of group theory and vector spaces is taken for granted.
The two topics treated are traditionally put into mathematical pigeon-holes remote from each other. They do, however, fit well together in a course, in addressing from different standpoints the same problem, that of communication through noisy channels. The authors hope that undergraduates who have liked algebra courses, or probability courses, will enjoy the other half of the book also, and will feel at the end that their knowledge of how it all fits together is greater than the sum of its parts.
The Cambridge course was invented by Peter Whittle and the debt that particularly the information-theoretic part of the book owes him is unrepayable. Certain features that distinguish the present approach from that found elsewhere are due to him, in particular the conceptual ‘decoupling’ of source and channel, and the definition of channel capacity as a maximized rate of reliable transmission. The usual definition of channel capacity is, from that standpoint, an evaluation, less fundamental than the definition.
In detail, the first four chapters cover the information-theory part of the course. The first, on noiseless coding, also introduces entropy, for use throughout the text. Chapter 2 deals with information sources and gives a careful treatment of the evaluation of rate of information output. Chapters 3 and 4 deal with channels and random coding. An initial approach to the evaluation of channel capacity is taken in Chapter 3 that is not quite sharp, and so yields only bounds, but which seems considerably more direct and illuminating than the usual approach through mutual information. The latter route is taken in Chapter 4, where several channel capacities are exactly calculated.
The aim in this first chapter is to represent a message in as efficient or economical a way as possible, subject to the requirements of the devices that are to deal with it. For instance, computer memory stores information in binary form, essentially as strings of 0s and 1s. Everyone knows that English text contains far fewer letters q or j than e or t. So it is common sense to represent e and t in binary by shorter strings than are used for q and j. It is that common-sense idea that we shall elaborate in this chapter.
We do not consider at this stage any devices that corrupt messages or data. There is no error creation, so no need for error detection or correction. We are thus doing noiseless coding, and decoding. In later chapters we meet ‘noisy’ channels, which introduce occasional errors into messages, and will consider how to protect our messages against them. This will not make what we do in this chapter unnecessary, for we can employ coding and decoding for error correction as well as the noiseless coding and decoding to be met with here.
The first mathematical idea we shall consider about noiseless coding — beyond just setting up notation, though that carries ideas along with it — is that codes should be decipherable. We shall, naturally, insist on that! The mathematical expression of the idea, the Kraft inequality, limits how little code you can get away with to encode your messages. Under this limitation you still have much choice of code, and need therefore a criterion of what makes a code optimal. Now the problem is not to encode a single message, but to set up the method of encoding an indefinitely long stream, stretching into the future, of messages with similar characteristics. The likely characteristics of those prospective messages have to be specified probabilistically. That is, there is a message ‘source’ whose future output, from the point of view of having to code it, is random, following a particular probability distribution or distributions which can be ascertained from the physical set-up or estimated statistically.