At the beginning of Chapter 7, we said that by restricting our attention to linear codes (rather than arbitrary, unstructured codes), we could hope to find some good codes which are reasonably easy to implement. And it is true that (via syndrome decoding, for example) a “small” linear code, say with dimension or redundancy at most 20, can be implemented in hardware without much difficulty. However, in order to obtain the performance promised by Shannon's theorems, it is necessary to use larger codes, and in general, a large code, even if it is linear, will be difficult to implement. For this reason, almost all block codes used in practice are in fact cyclic codes; cyclic codes form a very small and highly structured subset of the set of linear codes. In this chapter, we will give a general introduction to cyclic codes, discussing both the underlying mathematical theory (Section 8.1) and the basic hardware circuits used to implement cyclic codes (Section 8.2). In Section 8.3 we will show that Hamming codes can be implemented as cyclic codes, and in Sections 8.4 and 8.5 we will see how cyclic codes are used to combat burst errors. Our story will be continued in Chapter 9, where we will study the most important family of cyclic codes yet discovered: the BCH/Reed–Solomon family.
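Although the mathematical theory proper begins in Section 8.1, the defining property of a cyclic code is easy to demonstrate computationally. The sketch below is an illustration, not an example from the text: it builds the binary cyclic code of length 7 generated by g(x) = x³ + x + 1 (one standard generator polynomial for a cyclic (7, 4) Hamming code; the book's own examples may use a different one) and checks that every cyclic shift of a codeword is again a codeword.

```python
# Polynomials over GF(2) are represented as integers: bit i is the coefficient of x^i.
def gf2_mul(a, b):
    """Carry-less (XOR-based) polynomial multiplication over GF(2)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

n = 7
g = 0b1011            # g(x) = x^3 + x + 1  (assumed generator polynomial)

# Non-systematic encoding: every codeword is m(x) * g(x) with deg m(x) < 4.
codewords = {gf2_mul(m, g) for m in range(2 ** (n - 3))}

# Defining property of a cyclic code: a cyclic shift of any codeword
# (multiplication by x modulo x^7 + 1) is again a codeword.
for c in codewords:
    shifted = ((c << 1) & (2 ** n - 1)) | (c >> (n - 1))
    assert shifted in codewords

print(len(codewords))  # 16 codewords, as expected for a (7, 4) code
```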
This chapter serves the same function for Part two as Chapter 6 served for Part one, that is, it summarizes some of the most important results in coding theory which have not been treated in Chapters 7–11. In Sections 12.2, 12.3, and 12.4 we treat channel coding (block codes, convolutional codes, and a comparison of the two). Finally in Section 12.5 we discuss source coding.
Block codes
The theory of block codes is older and richer than the theory of convolutional codes, and so this section is much longer than Section 12.3. (This imbalance does not apply to practical applications, however; see Section 12.4.) In order to give this section some semblance of organization, we shall classify the results to be cited according to Berlekamp's list of the three major problems of coding theory:
• How good are the best codes?
• How can we design good codes?
• How can we decode such codes?
• How good are the best codes? One of the earliest problems which arose in coding theory was that of finding perfect codes. If we view a code of length n over the finite field Fq as a subset {x1, x2, …, xM} of the vector space Vn(Fq), the code is said to be perfect (or close packed) if for some integer e, the Hamming spheres of radius e around the M codewords completely fill Vn(Fq) without overlap.
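The sphere-packing condition can be checked mechanically. As an illustration of the definition (not an example taken from the text), the snippet below verifies that M · Σᵢ₌₀ᵉ C(n, i)(q − 1)ⁱ = qⁿ holds for two well-known perfect binary codes: the (7, 4) Hamming code with e = 1 and the (23, 12) Golay code with e = 3.

```python
from math import comb

def is_perfect(n, M, e, q=2):
    """Check the sphere-packing condition: M spheres of radius e exactly fill V_n(F_q)."""
    sphere_size = sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))
    return M * sphere_size == q ** n

print(is_perfect(7, 16, 1))        # (7, 4) Hamming code: 16 * (1 + 7) = 128 = 2^7  -> True
print(is_perfect(23, 2 ** 12, 3))  # (23, 12) Golay code: 2^12 * 2048 = 2^23        -> True
```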
In 1948, in the introduction to his classic paper, “A mathematical theory of communication,” Claude Shannon wrote:
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”
To solve that problem he created, in the pages that followed, a completely new branch of applied mathematics, which is today called information theory and/or coding theory. The object of this book is to present the main results of this theory as they stand 30 years later.
In this introductory chapter we illustrate the central ideas of information theory by means of a specific pair of mathematical models, the binary symmetric source and the binary symmetric channel.
The binary symmetric source (the source, for short) is an object which emits one of two possible symbols, which we take to be “0” and “1,” at a rate of R symbols per unit of time. We shall call these symbols bits, an abbreviation of binary digits. The bits emitted by the source are random, and a “0” is as likely to be emitted as a “1.” We imagine that the source rate R is continuously variable, that is, R can assume any nonnegative value.
The binary symmetric channel (the BSC, for short) is an object through which it is possible to transmit one bit per unit of time.
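The excerpt breaks off before the channel's error behavior is described, but both objects are easy to simulate. The sketch below is only an illustration: it draws equiprobable, independent bits from the source and passes them through a channel that flips each bit independently with a crossover probability p, where p is an assumed parameter rather than something defined in the text above.

```python
import random

def binary_symmetric_source(n):
    """Emit n independent, equiprobable bits."""
    return [random.randint(0, 1) for _ in range(n)]

def binary_symmetric_channel(bits, p):
    """Flip each transmitted bit independently with probability p (assumed parameter)."""
    return [b ^ (random.random() < p) for b in bits]

sent = binary_symmetric_source(10_000)
received = binary_symmetric_channel(sent, p=0.1)
error_fraction = sum(s != r for s, r in zip(sent, received)) / len(sent)
print(error_fraction)   # close to 0.1
```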
Transmission of information is at the heart of what we call communication. As an area of concern it is so vast as to touch upon the preoccupations of philosophers and to give rise to a thriving technology.
We owe to the genius of Claude Shannon the recognition that a large class of problems related to encoding, transmitting, and decoding information can be approached in a systematic and disciplined way: his classic paper of 1948 marks the birth of a new chapter of Mathematics.
In the past thirty years a staggering literature has grown up in this fledgling field, and some of its terminology has even become part of our daily language.
The present monograph (actually two monographs in one) is an excellent introduction to the two aspects of communication: coding and transmission.
The first (which is the subject of Part two) is an elegant illustration of the power and beauty of Algebra; the second belongs to Probability Theory which the chapter begun by Shannon enriched in novel and unexpected ways.
The main changes in this edition are in Part two. The old Chapter 8 (“BCH, Goppa, and Related Codes”) has been revised and expanded into two new chapters, numbered 8 and 9. The old Chapters 9, 10, and 11 have then been renumbered 10, 11, and 12. The new Chapter 8 (“Cyclic codes”) presents a fairly complete treatment of the mathematical theory of cyclic codes, and their implementation with shift register circuits. It culminates with a discussion of the use of cyclic codes in burst error correction. The new Chapter 9 (“BCH, Reed–Solomon, and Related Codes”) is much like the old Chapter 8, except that increased emphasis has been placed on Reed–Solomon codes, reflecting their importance in practice. Both of the new chapters feature dozens of new problems.
Consider a discrete memoryless source with source statistics p = (p0, p1, …, pr−1), that is, a sequence U1, U2, … of independent, identically distributed random variables with common distribution P{U = i} = pi, i = 0, 1, …, r − 1. According to the results of Chapter 3, it is in principle possible to represent this source faithfully using an average of H2(p0, p1, …, pr−1) bits per source symbol. In this chapter we study an attractive constructive procedure for doing this, called variable-length source coding.
To get the general idea (and the reader is warned that the following example is somewhat meretricious), consider the particular source p = (½, ¼, ⅛, ⅛), whose entropy is H2(½, ¼, ⅛, ⅛) = 1.75 bits. The source alphabet is AU = {0, 1, 2, 3}; now let us encode the source sequence U1, U2, … according to Table 11.1. For example, the source sequence 03220100 … would be encoded into 011111011001000 …. Clearly the average number of bits required per source symbol using this code is ½ · 1 + ¼ · 2 + ⅛ · 3 + ⅛ · 3 = 1.75, the source entropy. This fact in itself is not surprising, since we could, for example, get the average down to only one bit per symbol by encoding everything into 0! The remarkable property enjoyed by the code of Table 11.1 is that it is uniquely decodable, that is, the source sequence can be reconstructed perfectly from the encoded stream.
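Table 11.1 is not reproduced in this excerpt, but the worked example pins down the mapping it must contain (0 → 0, 1 → 10, 2 → 110, 3 → 111). The sketch below encodes the sample sequence and shows that a greedy left-to-right scan decodes it uniquely, since no codeword is a prefix of another.

```python
# Codeword table deduced from the worked example (Table 11.1 itself is not shown here).
code = {0: "0", 1: "10", 2: "110", 3: "111"}

def encode(symbols):
    return "".join(code[s] for s in symbols)

def decode(bits):
    # Because no codeword is a prefix of another, a greedy left-to-right scan
    # recovers the source sequence uniquely.
    inverse = {word: symbol for symbol, word in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return out

stream = encode([0, 3, 2, 2, 0, 1, 0, 0])
print(stream)           # 011111011001000  (15 bits for 8 symbols, 1.875 bits/symbol here)
print(decode(stream))   # [0, 3, 2, 2, 0, 1, 0, 0]
```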
Introduction: The generator and parity-check matrices
We have already noted that the channel coding theorem (Theorem 2.4) is unsatisfactory from a practical standpoint. This is because the codes whose existence is proved there suffer from at least three distinct defects:
(a) They are hard to find (although the proof of Theorem 2.4 suggests that a code chosen “at random” is likely to be pretty good, provided its length is large enough).
(b) They are hard to analyze. (Given a code, how are we to know how good it is? The impossibility of computing the error probability for a fixed code is what led us to the random coding artifice in the first place!)
(c) They are hard to implement. (In particular, they are hard to decode: the decoding algorithm suggested in the proof of Theorem 2.4, searching the region S(y) for codewords and so on, is hopelessly complex unless the code is trivially small.)
In fact, virtually the only coding scheme we have encountered so far which suffers from none of these defects is the (7, 4) Hamming code of the Introduction. In this chapter we show that the Hamming code is a member of a very large class of codes, the linear codes, and in Chapters 7–9 we show that there are some very good linear codes which are free from the three defects cited above.
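As a preview of the generator and parity-check matrices introduced next, here is a short sketch of the (7, 4) Hamming code in one standard systematic form, together with its one-error-correcting syndrome decoder. The particular matrices G and H below are an assumed, conventional choice; the matrices used in the book's Introduction may arrange the columns differently.

```python
import numpy as np

# One systematic choice of generator and parity-check matrices for the (7, 4)
# Hamming code (the book's own matrices may order the columns differently).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
assert not (G @ H.T % 2).any()          # every codeword satisfies H c^T = 0

def decode(received):
    """Correct at most one bit error and return the 4 message bits."""
    s = H @ received % 2
    if s.any():
        # The syndrome equals the column of H at the error position.
        pos = next(j for j in range(7) if np.array_equal(H[:, j], s))
        received = received.copy()
        received[pos] ^= 1
    return received[:4]                 # systematic G: message occupies the first 4 bits

message = np.array([1, 0, 1, 1])
codeword = message @ G % 2
noisy = codeword.copy()
noisy[5] ^= 1                           # introduce a single bit error
print(decode(noisy))                    # [1 0 1 1]
```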
We investigate fast simulation techniques for estimating the unreliability in large Markovian models of highly reliable systems for which analytical/numerical techniques are difficult to apply. We first show mathematically that for “small” time horizons, the relative simulation error, when using the importance sampling techniques of failure biasing and forcing, remains bounded as component failure rates tend to zero. This is in contrast to naive simulation where the relative error tends to infinity. For “large” time horizons where these techniques are not efficient, we use the approach of first bounding the unreliability in terms of regenerative-cycle-based measures and then estimating the regenerative-cycle-based measures using importance sampling; the latter can be done very efficiently. We first use bounds developed in the literature for the asymptotic distribution of the hitting time of a rare set in regenerative systems. However, these bounds are “close” to the unreliability only for a certain range of time horizons. We develop new bounds that make use of the special structure of the systems that we consider and are “close” to the unreliability for a much wider range of time horizons. These techniques extend to non-Markovian, highly reliable systems as long as the regenerative structure is preserved.
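The abstract above is summary only; the following is a toy illustration of the failure-biasing idea on a regenerative-cycle-based measure, not the model or estimator analyzed in the article. It uses an assumed two-component system (failure rate eps, repair rate 1) and estimates the probability that both components fail within a regenerative cycle, comparing naive simulation against a failure-biased importance-sampling estimator with a likelihood-ratio correction; the exact value eps/(eps + 1) is printed as a check.

```python
import random

# Toy model (assumed for illustration): two identical components, failure rate eps,
# repair rate 1, one repairman.  In the embedded jump chain, the rare event is
# reaching state 2 ("both failed") before returning to state 0 ("both up").
# Exact probability per cycle: eps / (eps + 1).
def cycle_estimate(eps, biased_failure_prob, n_cycles=100_000, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(n_cycles):
        state, weight = 1, 1.0           # each cycle begins with the first component failure
        while 0 < state < 2:
            p_fail = eps / (eps + 1.0)   # true probability that the next event is a failure
            q_fail = biased_failure_prob # probability used by the sampling (importance) chain
            if random.random() < q_fail:
                weight *= p_fail / q_fail
                state += 1
            else:
                weight *= (1.0 - p_fail) / (1.0 - q_fail)
                state -= 1
        if state == 2:                   # rare set hit: contribute the likelihood ratio
            total += weight
    return total / n_cycles

eps = 1e-4
print("exact         :", eps / (eps + 1.0))
print("naive         :", cycle_estimate(eps, eps / (eps + 1.0)))  # unbiased but high variance
print("failure biased:", cycle_estimate(eps, 0.5))                # biased dynamics, corrected weights
```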
In this article, we study a single-server queue with FIFO service and cyclic interarrival and service times. An efficient approximate algorithm is developed for the first two moments of the waiting time. Numerical results are included to demonstrate that the algorithm yields accurate results. For the special case of exponential interarrival times, we present a simple exact analysis.
In a system with one queue and several service stations, it is a natural principle to route a customer to the idle station with the distributionwise shortest service time. For the case with exponentially distributed service times, we use a coupling to give strong support to that principle. We also treat another topic: a modified version of our methods supports the design principle that it is better to have few but quick servers.
The purpose of this article is to study several preservation properties of stochastic comparisons based on the mean inactivity time order under the reliability operations of convolution and mixture. Characterizations and relationships with the other well-known orders are given. Some examples of interest in reliability theory are also presented. Finally, testing in the increasing mean inactivity time class is discussed.