The efficiency or gain of multicast in terms of network resources is compared to unicast. Specifically, we concentrate on a one-to-many communication, where a source sends the same message to m different, uniformly distributed destinations along the shortest path. In unicast, this message is sent m times from the source to each destination. Hence, unicast uses on average fN(m) = mE[HN] link-traversals or hops, where E[HN] is the average number of hops to a uniform location in the graph with N nodes. One of the main properties of multicast is that it economizes on the number of link-traversals: the message is only copied at each branch point of the multicast tree to the m destinations. Let us denote by HN(m) the number of links in the shortest path tree (SPT) to m uniformly chosen nodes. If we define the multicast gain gN(m) = E[HN(m)] as the average number of hops in the SPT rooted at a source to m randomly chosen distinct destinations, then gN(m) ≤ fN(m). The purpose here is to quantify the multicast gain gN(m). We present general results valid for all graphs and more explicit results valid for the random graph Gp(N) and for the k-ary tree. The analysis presented here may be valuable to derive a business model for multicast: “How many customers m are needed to make the use of multicast profitable for a service provider?”
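The inequality gN(m) ≤ fN(m) can be checked empirically. The following Monte Carlo sketch is not from the text: the graph model, parameters and seed are illustrative assumptions. It builds a connected-with-high-probability Erdős–Rényi graph, takes the BFS shortest-path tree from a source, and compares the number of distinct links used to reach m destinations (the multicast cost) with the sum of the m individual path lengths (the unicast cost).

```python
# Monte Carlo sketch (illustrative, not the text's analysis) comparing the
# multicast cost H_N(m) with the unicast cost f_N(m) on a random graph.
import math
import random
from collections import deque

def random_graph(N, p, rng):
    """Erdos-Renyi graph G_p(N) as adjacency lists."""
    adj = [[] for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def bfs_parents(adj, src):
    """Shortest-path tree rooted at src, as a parent map."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def costs(parent, dests):
    """Multicast cost (distinct SPT links) and unicast cost (total hops)."""
    tree_edges = set()
    unicast_hops = 0
    for d in dests:
        u = d
        while parent[u] is not None:
            tree_edges.add((min(u, parent[u]), max(u, parent[u])))
            unicast_hops += 1
            u = parent[u]
    return len(tree_edges), unicast_hops

rng = random.Random(7)
N, m = 200, 10
adj = random_graph(N, 2 * math.log(N) / N, rng)  # a.a.s. connected regime
src = 0
parent = bfs_parents(adj, src)
reachable = [v for v in parent if v != src]
dests = rng.sample(reachable, min(m, len(reachable)))
g, f = costs(parent, dests)
print(g, f, g <= f)   # multicast never uses more links than unicast
```

Since the multicast tree counts each shared link once while unicast counts it once per destination using it, g ≤ f holds on every sample, not just on average.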
Two modeling assumptions are made. First, the multicast process is assumed to deliver packets along the shortest path from a source to each of the m destinations.
Queueing theory describes basic phenomena in queueing systems, such as the waiting time, the throughput, the losses, and the number of queued items. Following Kleinrock (1975), any system in which arrivals place demands upon a finite-capacity resource can be broadly termed a queueing system.
Queueing theory is a relatively new branch of applied mathematics that is generally considered to have been initiated by A. K. Erlang in 1918 with his paper on the design of automatic telephone exchanges, in which the famous Erlang blocking probability, the Erlang B-formula (14.17), was derived (Brockmeyer et al., 1948, p. 139). It was only after the Second World War, however, that queueing theory was boosted mainly by the introduction of computers and the digitalization of the telecommunications infrastructure. For engineers, the two volumes by Kleinrock (1975, 1976) are perhaps the most well-known, while in applied mathematics, apart from the penetrating influence of Feller (1970, 1971), the Single Server Queue of Cohen (1969) is regarded as a landmark. Since Cohen's book, which incorporates most of the important work before 1969, a wealth of books and excellent papers have appeared, an evolution that is still continuing today.
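The Erlang B-formula mentioned above gives the blocking probability of an M/M/m/m loss system with m servers offered A = λ/μ Erlang of traffic. As a small numerical sketch (the parameter values are illustrative, not from the text), it can be evaluated with the standard numerically stable recursion B(0) = 1, B(k) = A·B(k−1)/(k + A·B(k−1)):

```python
# Numerically stable evaluation of the Erlang B blocking probability
# B(m, A) = (A^m / m!) / sum_{k=0}^{m} A^k / k!, via the usual recursion
# that avoids computing large factorials directly.
def erlang_b(m, A):
    B = 1.0                       # B(0, A) = 1: zero servers block everything
    for k in range(1, m + 1):
        B = A * B / (k + A * B)   # B(k, A) from B(k-1, A)
    return B

# Example: 10 servers offered 7 Erlang of traffic.
print(round(erlang_b(10, 7.0), 4))  # → 0.0787
```

The recursion follows by dividing numerator and denominator of B(k, A) by the denominator of B(k−1, A), and is the form usually used in dimensioning tools.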
A queueing system
Examples of queueing abound in daily life: queues at a ticket window in the railway station or post office, at the cash points in the supermarket, in the waiting rooms of airports, train stations or hospitals, etc. In telecommunications, the packets arriving at the input port of a router or switch are buffered in the output queue before transmission to the next hop towards the destination.
Performance analysis belongs to the domain of applied mathematics. The major domain of application in this book concerns telecommunications systems and networks. We will mainly use stochastic analysis and probability theory to address problems in the performance evaluation of telecommunications systems and networks. The first chapter will provide a motivation and a statement of several problems.
This book aims to present methods rigorously, hence mathematically, with minimal resorting to intuition. It is my belief that intuition is often gained after the result is known and rarely before the problem is solved, unless the problem is simple. Techniques and terminologies of axiomatic probability (such as definitions of probability spaces, filtration, measures, etc.) have been omitted and a more direct, less abstract approach has been adopted. In addition, most of the important formulas are interpreted in the sense of “What does this mathematical expression teach me?” This last step justifies the word “applied”, since most mathematical treatises refrain from interpretation, as interpretation carries the risk of being imprecise and incomplete.
The field of stochastic processes is much too large to be covered in a single book and only a selected number of topics has been chosen. Most of the topics are considered as classical. Perhaps the largest omission is a treatment of Brownian processes and the many related applications. A weak excuse for this omission (besides the considerable mathematical complexity) is that Brownian theory applies more to physics (analogue fields) than to system theory (discrete components).
In this chapter, the probability density function of the number of hops to the most nearby member of the anycast group consisting of m members (e.g. servers) is analyzed. The results are applied to compute a performance measure η of the efficiency of anycast over unicast and to the server placement problem. The server placement problem asks for the number of (replicated) servers m needed such that any user in the network is not more than j hops away from a server of the anycast group with a certain prescribed probability. As in Chapter 17 on multicast, two types of shortest path trees are investigated: the regular k-ary tree and the irregular uniform recursive tree treated in Chapter 16. Since these two extreme cases of trees indicate that the performance measure η ≈ 1 − a log m, where the real number a depends on the details of the tree, it is believed that for trees in real networks (such as the Internet) the same logarithmic law applies. An order calculus on exponentially growing trees further supplies evidence for the conjecture that η ≈ 1 − a log m for small m.
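The effect studied here, that the hop count to the nearest of m anycast members shrinks slowly (logarithmically) in m, can be observed in a toy simulation. The sketch below is not from the text: it assumes a complete binary tree with heap indexing (parent of node i is i // 2) and uniformly placed servers, with illustrative sizes and seed.

```python
# Illustrative simulation: average hop count from a random user to the
# nearest of m anycast members in a complete binary tree (heap indexing).
import random

def dist(u, v):
    """Hop distance in the tree: climb the deeper-indexed node until meeting."""
    d = 0
    while u != v:
        if u > v:
            u //= 2
        else:
            v //= 2
        d += 1
    return d

def mean_nearest_hops(N, m, trials, rng):
    total = 0
    for _ in range(trials):
        servers = rng.sample(range(1, N + 1), m)
        user = rng.randrange(1, N + 1)
        total += min(dist(user, s) for s in servers)
    return total / trials

rng = random.Random(1)
N = 2 ** 10 - 1   # complete binary tree with 1023 nodes
for m in (1, 2, 4, 8, 16):
    print(m, round(mean_nearest_hops(N, m, 2000, rng), 2))
```

Doubling m lowers the mean hop count by roughly a constant amount in such runs, which is the signature of a logarithmic dependence on m.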
Introduction
IPv6 possesses a new address type, anycast, that is not supported in IPv4. The anycast address is syntactically identical to a unicast address. However, when a set of interfaces is specified by the same unicast address, that unicast address is called an anycast address. The advantage of anycast is that a group of interfaces at different locations is treated as one single address. For example, the information on servers is often duplicated over several secondary servers at different locations for reasons of robustness and accessibility.
In this chapter, we consider block codes with a certain structure, which are defined over alphabets that are fields. Specifically, these codes, which we call linear codes, form linear spaces over their alphabets. We associate two objects with these codes: a generator matrix and a parity-check matrix. The first matrix is used as a compact representation of the code and also as a means for efficient encoding. The parity-check matrix will be used as a tool for analyzing the code (e.g., for computing its minimum distance) and will also be part of the general framework that we develop for the decoding of linear codes.
As examples of linear codes, we will mention the repetition code, the parity code, and the Hamming code with its extensions. Owing to their structure, linear codes are by far the predominant block codes in practical usage, and virtually all codes that will be considered in subsequent chapters are linear.
Definition
Denote by GF(q) a finite (Galois) field of size q. For example, if q is a prime, the field GF(q) coincides with the ring of integer residues modulo q, also denoted by ℤq. We will see more constructions of finite fields in Chapter 3.
An (n, M, d) code C over a field F = GF(q) is called linear if C is a linear subspace of Fn over F; namely, for every two codewords c1, c2 ∈ C and two scalars a1, a2 ∈ F we have a1c1 + a2c2 ∈ C.
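As a concrete instance of this definition over GF(2), consider the [3, 2] binary parity code mentioned above: all length-3 words with an even number of ones. Over GF(2) the only scalars are 0 and 1, so linearity reduces to closure under addition. The short check below also generates the same code from a generator matrix; the particular matrix G is an illustrative choice, not taken from the text.

```python
# The [3, 2] binary parity code as a linear code over GF(2): closure under
# addition, and generation by a 2 x 3 generator matrix.
from itertools import product

# All even-weight words of length 3.
C = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]
print(len(C))   # 4 codewords: the code is an (3, 4, 2) code

# Linearity over GF(2): the sum (mod 2) of any two codewords is a codeword.
closed = all(
    tuple((a + b) % 2 for a, b in zip(c1, c2)) in C
    for c1 in C for c2 in C
)
print(closed)  # True

# The same code generated by rows of G (an illustrative generator matrix).
G = [(1, 0, 1), (0, 1, 1)]
encoded = {
    tuple((u0 * g0 + u1 * g1) % 2 for g0, g1 in zip(*G))
    for u0, u1 in product((0, 1), repeat=2)
}
print(encoded == set(C))  # True
```

Encoding a 2-bit message u as uG is exactly the "compact representation" role of the generator matrix described above: the 4 messages map bijectively onto the 4 codewords.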
For most of this chapter, we deviate from our study of codes to become acquainted with the algebraic concept of finite fields. These objects will serve as our primary tool for constructing codes in upcoming chapters. As a motivating example, we present at the end of this chapter a construction of a double-error-correcting binary code, whose description and analysis make use of finite fields. This construction will turn out to be a special case of a more general family of codes, to be discussed in Section 5.5.
Among the properties of finite fields that we cover in this chapter, we show that the multiplicative group of a finite field is cyclic; this property, in turn, suggests a method for implementing the arithmetic operations in finite fields of moderate sizes through look-up tables, akin to logarithm tables. We also prove that the size of any finite field must be a power of a prime and that this necessary condition is also sufficient, that is, every power of a prime is a size of some finite field. The practical significance of the latter property is manifested particularly through the special case of the prime 2, since in most coding applications, the data is sub-divided into symbols—e.g., bytes—that belong to alphabets whose sizes are powers of 2.
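The look-up-table idea mentioned above can be made concrete for a small field. The sketch below uses GF(7), where 3 generates the cyclic multiplicative group (an easily verified fact); every nonzero element is a power of 3, so multiplication reduces to adding "logarithms" modulo q − 1 = 6.

```python
# Multiplication in GF(7) via log/antilog tables, exploiting the fact that
# the multiplicative group of a finite field is cyclic (here generated by 3).
q, alpha = 7, 3
antilog = [pow(alpha, i, q) for i in range(q - 1)]   # antilog[i] = alpha^i
log = {antilog[i]: i for i in range(q - 1)}          # discrete log base alpha

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % (q - 1)]      # "add the logs"

# Check the tables against direct modular multiplication.
ok = all(gf_mul(a, b) == (a * b) % q for a in range(q) for b in range(q))
print(ok)  # True
```

For fields of moderate size this replaces a field multiplication by two table reads and one modular addition, which is why the technique is common in software implementations of codes over GF(2^8), for example.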
Prime fields
For a prime p, we let GF(p) (Galois field of size p) denote the ring of integer residues modulo p (this ring is also denoted by ℤp).
In Chapter 1, we introduced the concept of a block code with a certain application in mind: the codewords in the code serve as the set of images of the channel encoder. The encoder maps a message into a codeword which, in turn, is transmitted through the channel, and the receiver then decodes that message (possibly incorrectly) from the word that is read at the output of the channel. In this model, the encoding of a message is independent of any previous or future transmissions—and so is the decoding.
In this chapter, we consider a more general coding model, where the encoding and the decoding are context-dependent. The encoder may now be in one of finitely many states, which contain information about the history of the transmission. Such a finite-state encoder still maps messages to codewords, yet the mapping depends on the state which the encoder is currently in, and that state is updated during each message transmission. Finite-state encoders will be specified through directed graphs, where the vertices stand for the states and the edges define the allowed transitions between states. The mapping from messages to codewords will be determined by the edge names and by labels that we assign to the edges.
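A minimal illustration of such a state-dependent mapping, chosen here for concreteness and not taken from the text, is NRZI precoding: a two-state encoder whose state is the last transmitted bit. The state graph has two vertices (state 0 and state 1); an input 1 follows the edge to the other state, an input 0 follows the self-loop, and each edge's output label is the state it enters.

```python
# A two-state finite-state encoder (NRZI precoding, illustrative example):
# the state is the last output bit; input 1 flips it, input 0 repeats it.
def nrzi_encode(bits, state=0):
    out = []
    for b in bits:
        state ^= b         # follow the edge labeled by input b
        out.append(state)  # emit that edge's output label
    return out

print(nrzi_encode([1, 0, 1, 1, 0]))  # [1, 1, 0, 1, 1]
```

The same input bit produces different output bits depending on the current state, which is exactly the context-dependence that distinguishes finite-state encoders from the block encoders of Chapter 1.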
The chapter is organized as follows. We first review several concepts from the theory of directed graphs. We then introduce the notion of trellis codes, which can be viewed as the state-dependent counterpart of block codes: the elements of a trellis code form the set of images of a finite-state encoder.