The reliable transmission of information-bearing signals over a noisy communication channel is at the heart of what we call communication. Information theory, founded by Claude E. Shannon in 1948 [Sha48], provides a mathematical framework for the theory of communication. It describes the fundamental limits to how efficiently one can encode information and still be able to recover it with negligible loss.
At its inception, the main role of information theory was to provide the engineering and scientific communities with a mathematical framework for the theory of communication by establishing the fundamental limits on the performance of various communication systems. Its birth was marked by the publication of the work of Claude E. Shannon, who showed that it is possible to send information-bearing signals at a fixed code rate through a noisy communication channel with an arbitrarily small error probability as long as the code rate is below a certain fixed quantity that depends on the channel characteristics [Sha48]; he “baptized” this quantity with the name of channel capacity (see the discussion in Chapter 6). He further proclaimed that random sources – such as speech, music, or image signals – possess an irreducible complexity beyond which they cannot be compressed distortion-free. He called this complexity the source entropy (see the discussion in Chapter 5). He went on to assert that if a source has an entropy that is less than the capacity of a communication channel, then asymptotically error-free transmission of the source over the channel can be achieved.
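In the notation developed later in the book, Shannon's claims can be written compactly. As a hedged preview of what Chapters 5 and 6 make precise, for a discrete source U and a channel with input X and output Y:

H(U) = -\sum_{u} P_U(u) \log_2 P_U(u) \quad \text{(source entropy, in bits)}

C = \max_{P_X} I(X;Y) \quad \text{(channel capacity)}

and asymptotically error-free transmission of the source over the channel is possible whenever H(U) < C.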
Systems dedicated to the communication or storage of information are commonplace in everyday life. Generally speaking, a communication system is a system that sends information from one place to another. Examples include telephone networks, computer networks, audio/video broadcasting, etc. Storage systems, e.g. magnetic and optical disk drives, are systems for the storage and later retrieval of information. In a sense, such systems may be regarded as communication systems that transmit information from now (the present) to then (the future). Whenever or wherever problems of information processing arise, there is a need to know how to compress the material and how to protect it against possible corruption. This book covers the fundamentals of information theory and coding theory, addresses these two main problems, and gives related examples from practice. The amount of background mathematics and electrical engineering is kept to a minimum. At most, simple results from calculus and probability theory are used here, and anything beyond that is developed as needed.
Information theory versus coding theory
Information theory is a branch of probability theory with extensive applications to communication systems. Like several other branches of mathematics, information theory has a physical origin. It was initiated by communication scientists who were studying the statistical structure of electrical communication equipment and was principally founded by Claude E. Shannon through the landmark contribution [Sha48] on the mathematical theory of communications. In this paper, Shannon developed the fundamental limits on data compression and reliable transmission over noisy channels.
The theory of error-correcting codes stems from the need to protect information from corruption during transmission or storage. Take your CD or DVD as an example. You might convert your music into MP3 files for storage. The reason for such a conversion is that MP3 files are more compact and take up less storage space, i.e. they use fewer binary digits (bits) than the original format on the CD. Certainly, the price to pay for a smaller file size is that you suffer some kind of distortion or, equivalently, a loss in audio quality or fidelity. However, such loss is in general indiscernible to human audio perception, and you can hardly notice the subtle differences between the uncompressed and compressed audio signals. The compression of digital data streams such as audio music streams is commonly referred to as source coding. We will consider it in more detail in Chapters 4 and 5.
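To get a feeling for the numbers involved (the figures below are standard values for CD audio and MP3, not taken from this book): uncompressed CD audio uses 44 100 samples per second, 16 bits per sample, and two stereo channels, for a raw rate of

44100 \times 16 \times 2 = 1\,411\,200 \ \text{bits/s} \approx 1.4\ \text{Mbit/s},

whereas a typical MP3 stream uses about 128 kbit/s – a compression ratio of roughly 11:1.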
What we are going to discuss in this chapter is the opposite of compression. After converting the music into MP3 files, you might want to store these files on a CD or a DVD for later use. While burning the digital data onto the CD, a special mechanism called error control coding operates behind the CD burning process. Why do we need it? Well, the reason is simple. Handling and storing CDs and DVDs inevitably causes small scratches on the disk surface, and these scratches corrupt some of the stored bits; the error control code adds redundancy that allows the player to recover the original data in spite of such damage.
Up to this point we have been concerned with coding theory. We have described codes and given algorithms for designing them, and we have evaluated the performance of some particular codes. Now we begin with information theory, which will enable us to learn more about the fundamental properties of general codes without actually having to design them.
Basically, information theory is a part of physics and tries to describe what information is and how we can work with it. Like all theories in physics, it is a model of the real world that is accepted as true as long as it predicts how nature behaves with sufficient accuracy.
In the following we will start by giving some suggestive examples to motivate the definitions that follow. However, note that these examples are not a justification for the definitions; they just try to shed some light on the reason why we will define these quantities in the way we do. The real justification of all definitions in information theory (or any other physical theory) is the fact that they turn out to be useful.
Motivation
We start by asking the question: what is information?
Let us consider some examples of sentences that contain some “information.”
The weather will be good tomorrow.
The weather was bad last Sunday.
The president of Taiwan will come to you tomorrow and will give you one million dollars.
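What distinguishes these sentences is how probable they are: the second describes something you may well already know (probability close to 1, hence hardly any new information), while the third is extremely improbable and, intuitively, enormously informative. Anticipating the formal definitions given later, a natural way to quantify the information of an event of probability p is

I(p) = \log_2 \frac{1}{p} \ \text{bits},

so that an event of probability 1/2 carries 1 bit, while an event of probability 1/1024 carries 10 bits.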
We end this introduction to coding and information theory by giving two examples of how coding theory relates to other, quite unexpected fields. Firstly, we give a very brief introduction to the relation between Hamming codes and projective geometry. Secondly, we show a very interesting application of coding to game theory.
Hamming code and projective geometry
Though the attribution is not entirely correct, the concept of projective geometry is commonly credited to Gerard Desargues, who developed it in the seventeenth century for art paintings and for architectural drawings. The actual development of this theory dates back to Pappus of Alexandria in the third century AD. Both were puzzled by the axioms of Euclidean geometry, stated by Euclid around 300 BC as follows.
(1) Given any two distinct points in space, there is a unique line connecting these two points.
(2) Given any two nonparallel lines in space, they intersect at a unique point.
(3) Given any two distinct parallel lines in space, they never intersect.
The confusion comes from the third statement, in particular from the concept of parallelism. How can two lines never intersect? Even at the end of the universe?
In your daily life, the two sides of a road are parallel to each other, yet you do see them intersect at a distant point. So, this is somewhat confusing and makes people very uncomfortable. Revising the above statements gives rise to the theory of projective geometry.
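To make the advertised link between Hamming codes and projective geometry concrete before moving on: the columns of the parity-check matrix of the [7,4] Hamming code are the seven nonzero binary triples, which are exactly the seven points of the Fano plane (the projective plane of order 2), and the weight-3 codewords correspond to its seven lines. The following stand-alone C sketch (an illustration of this classical fact, not code from the book) decodes a received 7-bit word by computing its syndrome:

#include <stdio.h>

/* [7,4] Hamming code with bit positions 1..7. Position i corresponds to
 * the binary expansion of i, i.e. to one of the seven points of the
 * Fano plane. A word is a codeword exactly when the XOR of the indices
 * of its 1-bits is zero; the weight-3 codewords have supports {a, b, a^b},
 * which are precisely the lines of the Fano plane. */
int main(void)
{
    int r[8] = {0, 0, 1, 1, 0, 1, 1, 0};  /* received word, r[1..7] */

    /* The syndrome is the XOR of the positions holding a 1. For a
     * codeword it is 0; for a codeword with a single flipped bit it
     * equals the position of the flipped bit. */
    int syndrome = 0;
    for (int i = 1; i <= 7; i++)
        if (r[i]) syndrome ^= i;

    if (syndrome != 0) {
        printf("single error at position %d, correcting\n", syndrome);
        r[syndrome] ^= 1;
    }
    for (int i = 1; i <= 7; i++)
        printf("%d", r[i]);
    printf("\n");  /* prints the corrected codeword 0010110 */
    return 0;
}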
Most books on coding and information theory are prepared for those who already have a good background in probability and random processes. It is therefore hard to find a ready-to-use textbook on these two subjects suitable for engineering students at the freshman level, or for students from non-engineering majors who are interested in knowing, at least conceptually, how information is encoded and decoded in practice and the theories behind it. Since communication has become a part of modern life, such knowledge is of more and more practical significance. For this reason, when our school requested us to offer a preliminary course in coding and information theory for students who do not have any engineering background, we saw this as an opportunity and initiated the plan to write a textbook.
In preparing this material, we hope that, in addition to the aforementioned purpose, the book can also serve as a beginner's guide that inspires and attracts students to enter this interesting area. The material covered in this book has been carefully selected to keep the amount of background mathematics and electrical engineering to a minimum. At most, simple calculus plus a little probability theory are used here, and anything beyond that is developed as needed. A first version of the book was used as the textbook in the 2009 summer freshman course Conversion Between Information and Codes: A Historical View at National Chiao Tung University, Taiwan. The course was attended by 47 students, including 12 from departments other than electrical engineering.
In this chapter we will consider a new type of coding. So far we have concentrated on codes that can help detect or even correct errors; we would now like to use codes to represent some information more efficiently, i.e. we try to represent the same information using fewer digits on average. Hence, instead of protecting data from errors, we try to compress it so as to use less storage space.
To achieve such a compression, we will assume that we know the probability distribution of the messages being sent. If some symbols are more probable than others, we can take advantage of this by assigning shorter codewords to the more frequent symbols and longer codewords to the rare symbols. Hence, we see that such a code has codewords that are not of fixed length.
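As a small worked example (the numbers are illustrative, not taken from the book): suppose three symbols occur with probabilities 1/2, 1/4, and 1/4 and are assigned the codewords 0, 10, and 11, respectively. The average codeword length is then

\bar{L} = \sum_i p_i \ell_i = \tfrac{1}{2} \cdot 1 + \tfrac{1}{4} \cdot 2 + \tfrac{1}{4} \cdot 2 = 1.5 \ \text{bits per symbol},

compared with 2 bits per symbol for any fixed-length binary code on three symbols.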
Unfortunately, variable-length codes bring with them a fundamental problem: at the receiving end, how do you recognize the end of one codeword and the beginning of the next? To attain a better understanding of this question and to learn more about how to design a good code with a short average codeword length, we start with a motivating example.
A motivating example
You would like to set up your own telephone system that connects you to your three best friends. The question is how to design efficient binary phone numbers. In Table 4.1 you find six different ways in which you could choose them.
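Table 4.1 itself lists the book's six candidate designs; as a hedged stand-in, the following C sketch uses one hypothetical prefix-free assignment (0, 10, 11, with made-up friend labels) and shows how the receiver can split an incoming bit stream unambiguously, because no codeword is a prefix of another:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical prefix-free phone numbers for the three friends;
     * these are illustrative choices, not the entries of Table 4.1. */
    const char *code[] = {"0", "10", "11"};
    const char *name[] = {"friend 1", "friend 2", "friend 3"};
    const char *stream = "110100";  /* received bit stream */

    size_t pos = 0, n = strlen(stream);
    while (pos < n) {
        int matched = 0;
        for (int i = 0; i < 3; i++) {
            size_t len = strlen(code[i]);
            /* Since the code is prefix-free, at most one codeword can
             * match at each position, so greedy matching never misparses. */
            if (pos + len <= n && strncmp(stream + pos, code[i], len) == 0) {
                printf("%s ", name[i]);
                pos += len;
                matched = 1;
                break;
            }
        }
        if (!matched) return 1;  /* stream ended mid-codeword */
    }
    printf("\n");  /* prints: friend 3 friend 1 friend 2 friend 1 */
    return 0;
}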
This chapter first revisits the cases described in Chapter 5, but utilizing high-level wrapper APIs instead. Then another case is demonstrated to illustrate how to create connection-oriented communications. To follow this chapter, it is assumed that the reader understands the content covered in Chapters 5 and 6.
Revisit of previous case
In this section, Case 6 in Chapter 5 is revisited but this time with wrapper APIs.
Open the “pk_switch” process model in the Process Editor and save it as “pk_switch_v2”. Now you can edit “pk_switch_v2” and replace the relevant code with wrapper APIs. In the SV block, replace the declarations of state variables, as shown in Figure 7.1.
In the TV block, replace the declarations of temporary variables, as shown in Figure 7.2.
In the HB block, include the header files “routing.h” and “geo_topo.h”, as shown in Figure 7.3. These two files contain the relevant wrapper APIs for performing routing- and topology-related operations.
In the “init” state, replace the previous code for building the graph and routing table with new code that utilizes the wrapper APIs, as shown in Figures 7.4 and 7.5.
From Figures 7.4 and 7.5 it can be seen that, after applying the wrapper APIs, the processing of vertices and edges in the routing graph is performed by dealing with W_Vertex_Info and W_Edge_Info objects. The steps for implementing the routing algorithm can be represented by the corresponding wrapper APIs in the following way:
Initialize graph – w_init_graph().
Set vertices of the graph – w_set_graph_vertices().
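The exact signatures of these wrapper APIs are defined in “routing.h” and “geo_topo.h” and shown in Figures 7.4 and 7.5; since those figures are not reproduced here, the sketch below only illustrates the calling pattern, with stub definitions and assumed structure layouts so that it compiles stand-alone. None of the signatures below should be taken as the actual wrapper API.

#include <stdio.h>

/* Assumed stand-ins for the types declared in "geo_topo.h"; the real
 * W_Vertex_Info and W_Edge_Info layouts are shown in Figures 7.4/7.5. */
typedef struct { int node_id; double x, y; } W_Vertex_Info;
typedef struct { int from, to; double cost; } W_Edge_Info;

/* Stubs standing in for the wrapper APIs from "routing.h". */
static void w_init_graph(void)         { printf("graph initialized\n"); }
static void w_set_graph_vertices(void) { printf("vertices set\n"); }

int main(void)
{
    w_init_graph();          /* step 1: initialize the routing graph  */
    w_set_graph_vertices();  /* step 2: set the vertices of the graph */
    /* Subsequent steps would register W_Edge_Info links between the
     * W_Vertex_Info objects and compute the routing table, as done in
     * the "init" state of "pk_switch_v2". */
    return 0;
}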
This chapter shows the steps for installing and configuring OPNET Modeler and its related environment variables. Having followed this chapter, one should be able to run OPNET Modeler correctly. If OPNET Modeler and the relevant software have already been installed, and the environment variables have already been configured on the target machine, this chapter can be skipped. If you have problems compiling OPNET models, especially standard OPNET models, which should compile and link without errors, please check this chapter to make sure your software is properly installed and your environment variables are correctly configured: many OPNET model compilation and linking errors come from an incorrectly configured C/C++ compiler environment.
This chapter first describes the system requirements for using OPNET Modeler, including both hardware and software requirements. It then shows the steps for installing and configuring OPNET Modeler on the Windows and Linux operating systems.
System requirements for using OPNET Modeler
This section lists the requirements for using OPNET Modeler 14.5 and later versions, and also highlights the relevant key points. For other versions of OPNET Modeler, please check the system requirements datasheet and installation manual shipped with the corresponding product, or visit the OPNET website for more information (www.opnet.com). Tables 2.1–2.3 list the system support, hardware, and software requirements for using OPNET Modeler.
Installation on Windows
On Windows, you need to install OPNET Modeler together with Microsoft Visual Studio or Microsoft Visual C++, which OPNET Modeler uses to compile C/C++-based simulation models.