In the layered model introduced earlier, line coding is immediately above pulse generation and below channel coding. The input from the channel coding layer is an arbitrary binary sequence plus a timing waveform, and the output to the pulse generation layer can be thought of as a (structured) sequence of impulses.
In a real system, line coding is generally very closely connected with pulse generation, and it would often not be possible to separate completely the electronic circuits performing line coding from those generating the pulses. This close connection is reflected in the presentation in this chapter, where line coded waveforms will be illustrated (assuming, generally, rectangular baseband pulses), rather than the sequences of impulses which form the conceptual output of the line code layer.
Although there is a theoretical background to the analysis of line codes (some of which is discussed here), in practice the evolution of line codes has involved a large element of pragmatism. New line codes have often been designed for use in specific systems, with features to combat particular problems in those systems.
Most line codes have three major functions. First, they allow a baseband signal to be conveyed over a channel with d.c. blocking. This is achieved by ensuring that the coded data does not build up short-term d.c. offsets. In the frequency domain it is necessary for the spectrum of the coded data to fall off to zero at d.c.: it is not sufficient to ensure that the coded data has no d.c. component.
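To make the d.c.-balance idea concrete, here is a minimal Python sketch of alternate mark inversion (AMI), a classic line code built for exactly this purpose; AMI is offered here as an illustration and is not necessarily the code discussed at this point in the text, and the function name and ±1 amplitudes are choices made for this sketch. Successive 1s are sent as pulses of alternating polarity, so the running digital sum stays bounded and the coded spectrum falls to zero at d.c.

```python
def ami_encode(bits):
    """Encode a binary sequence with alternate mark inversion (AMI).

    0 -> 0 (no pulse); 1 -> +1 or -1, with the polarity alternating
    between successive 1s.  Because every positive pulse is eventually
    cancelled by a negative one, the running digital sum stays bounded
    and no short-term d.c. offset can build up.
    """
    polarity = 1
    symbols = []
    for b in bits:
        if b == 0:
            symbols.append(0)
        else:
            symbols.append(polarity)
            polarity = -polarity
    return symbols

bits = [1, 0, 1, 1, 0, 1]
symbols = ami_encode(bits)
print(symbols)        # [1, 0, -1, 1, 0, -1]
print(sum(symbols))   # running digital sum is 0 here, and always bounded
```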
This text is based on a course of the same title given at Cambridge for a number of years. It consists of an introduction to information theory and to coding theory at a level appropriate to mathematics undergraduates in their second or later years. The prerequisites are a knowledge of discrete probability theory and no more than an acquaintance with continuous probability distributions (including the normal). What is needed from finite-field theory is developed in the course of the text, but some knowledge of group theory and vector spaces is taken for granted.
The two topics treated are traditionally put into mathematical pigeon-holes remote from each other. They do, however, fit well together in a course, addressing from different standpoints the same problem: that of communication through noisy channels. The authors hope that undergraduates who have liked algebra courses, or probability courses, will enjoy the other half of the book also, and will feel at the end that their knowledge of how it all fits together is greater than the sum of its parts.
The Cambridge course was invented by Peter Whittle, and the debt that the information-theoretic part of the book in particular owes him is unrepayable. Certain features that distinguish the present approach from that found elsewhere are due to him, in particular the conceptual ‘decoupling’ of source and channel, and the definition of channel capacity as a maximized rate of reliable transmission. The usual definition of channel capacity is, from that standpoint, an evaluation, less fundamental than the definition.
In detail, the first four chapters cover the information-theory part of the course. The first, on noiseless coding, also introduces entropy, for use throughout the text. Chapter 2 deals with information sources and gives a careful treatment of the evaluation of the rate of information output. Chapters 3 and 4 deal with channels and random coding. Chapter 3 takes an initial approach to the evaluation of channel capacity which is not quite sharp, and so yields only bounds, but which seems considerably more direct and illuminating than the usual approach through mutual information. The latter route is taken in Chapter 4, where several channel capacities are calculated exactly.
The aim in this first chapter is to represent a message in as efficient or economical a way as possible, subject to the requirements of the devices that are to deal with it. For instance, computer memory stores information in binary form, essentially as strings of 0s and 1s. Everyone knows that English text contains far fewer letters q or j than e or t. So it is common sense to represent e and t in binary by shorter strings than are used for q and j. It is that common-sense idea that we shall elaborate in this chapter.
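To make that common-sense idea concrete, here is a minimal Python sketch of the classic Huffman construction, which builds a prefix code by repeatedly merging the two least frequent symbols; the letter frequencies below are toy values chosen for illustration, not statistics from the text.

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code in which frequent symbols get short codewords.

    freqs: dict mapping symbol -> relative frequency.
    Returns a dict mapping symbol -> binary codeword (string of 0s and 1s).
    """
    # Each heap entry: (total frequency, tie-breaker, {symbol: codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees, prefixing 0/1 to their codewords.
        f0, _, c0 = heapq.heappop(heap)
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (f0 + f1, counter, merged))
        counter += 1
    return heap[0][2]

# Toy frequencies: the common letters get shorter codewords than the rare ones.
code = huffman_code({"e": 0.42, "t": 0.30, "q": 0.15, "j": 0.13})
print(code)   # e.g. {'e': '0', 't': '11', 'q': '101', 'j': '100'}
```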
We do not consider at this stage any devices that corrupt messages or data. There is no error creation, so no need for error detection or correction. We are thus doing noiseless coding and decoding. In later chapters we meet ‘noisy’ channels, which introduce occasional errors into messages, and will consider how to protect our messages against them. This will not make what we do in this chapter unnecessary, for we can employ coding and decoding for error correction as well as the noiseless coding and decoding to be met with here.
The first mathematical idea we shall consider about noiseless coding (beyond just setting up notation, though that carries ideas along with it) is that codes should be decipherable. We shall, naturally, insist on that! The mathematical expression of the idea, the Kraft inequality, limits how little code you can get away with to encode your messages. Under this limitation you still have much choice of code, and therefore need a criterion of what makes a code optimal. Now the problem is not to encode a single message, but to set up a method of encoding an indefinitely long stream of messages with similar characteristics, stretching into the future. The likely characteristics of those prospective messages have to be specified probabilistically. That is, there is a message ‘source’ whose future output, from the point of view of having to code it, is random, following a particular probability distribution or distributions which can be ascertained from the physical set-up or estimated statistically.
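As a small illustration of the Kraft inequality in the binary case, the following Python sketch checks that proposed codeword lengths l_1, ..., l_n satisfy 2^(-l_1) + ... + 2^(-l_n) <= 1 and, when they do, constructs codewords of those lengths by the standard greedy assignment; the particular lengths are chosen here for illustration, not taken from the text.

```python
def kraft_sum(lengths):
    """Kraft sum for binary codeword lengths: sum of 2**(-l)."""
    return sum(2.0 ** -l for l in lengths)

def prefix_code_from_lengths(lengths):
    """Build binary codewords of the given lengths, if Kraft's inequality holds.

    Assign codewords greedily, shortest first: each codeword is the current
    value of a running dyadic 'pointer' written out in l bits, and the
    pointer then advances past everything that codeword would prefix.
    """
    assert kraft_sum(lengths) <= 1, "no prefix code with these lengths"
    pointer = 0.0
    codewords = []
    for l in sorted(lengths):
        # pointer is an exact multiple of 2**-l, so this integer is exact.
        word = format(int(pointer * 2 ** l), "0{}b".format(l))
        codewords.append(word)
        pointer += 2.0 ** -l
    return codewords

lengths = [1, 2, 3, 3]
print(kraft_sum(lengths))                 # 1.0, so a prefix code exists
print(prefix_code_from_lengths(lengths))  # ['0', '10', '110', '111']
```

Note that the resulting code is prefix-free, hence decipherable symbol by symbol; conversely, any decipherable code must have lengths whose Kraft sum is at most 1, which is the sense in which the inequality limits how little code one can get away with.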