The study of error-correcting codes concentrates primarily on codes in the Hamming metric. Such codes are designed to correct a prescribed number of errors, where by an error we mean a change of an entry in the transmitted codeword, irrespective of the (nonzero) error value. The assignment of the same weight to each nonzero error value is reflected also in the model of the q-ary symmetric channel, where all nonzero error values occur with the same probability.
In this chapter, we consider codes in the Lee metric. This metric is defined over the ring of integer residues modulo q and it corresponds to an error model where a change of an entry in a codeword by ±1 is counted as one error. This type of error is found in noisy channels that use phase-shift keying (PSK) modulation, or in channels that are susceptible to synchronization errors.
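Concretely, the Lee weight of a residue x modulo q is min(x mod q, q − (x mod q)), and the Lee distance between two words over Z_q is the entrywise sum of the Lee weights of their difference. A minimal sketch (function names are ours, not from the text):

```python
def lee_weight(x, q):
    """Lee weight of a residue x modulo q: min(x mod q, q - (x mod q))."""
    x %= q
    return min(x, q - x)

def lee_distance(u, v, q):
    """Lee distance between two words over Z_q: entrywise sum of Lee weights."""
    return sum(lee_weight(a - b, q) for a, b in zip(u, v))
```

For q = 5, the residues 1 and 4 both have Lee weight 1, reflecting that a change by ±1 counts as a single error.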
Our focus herein will be on GRS codes and alternant codes: we first study their distance properties in the Lee metric, and then present an efficient decoding algorithm for these codes, which corrects any error pattern whose Lee weight is less than half the designed minimum Lee distance of the code.
We also describe another family of codes in the Lee metric, due to Berlekamp. For certain parameters, these codes are shown to be perfect in that metric; namely, they attain the Lee-metric analog of the sphere-packing bound.
In this chapter, we continue the discussion on concatenated codes, which was initiated in Section 5.4. The main message to be conveyed in this chapter is that by using concatenation, one can obtain codes with favorable asymptotic performance—in a sense to be quantified more precisely—while the complexity of constructing these codes and decoding them grows polynomially with the code length.
We first present a decoding algorithm for concatenated codes, due to Forney. This algorithm, referred to as a generalized minimum distance (in short, GMD) decoder, corrects any error pattern whose Hamming weight is less than half the product of the minimum distances of the inner and outer codes (we recall that this product is a lower bound on the minimum distance of the respective concatenated code). A GMD decoder consists of a nearest-codeword decoder for the inner code, and a combined error–erasure decoder for the outer code. The decoder then enumerates over a threshold value, marking the output of the inner decoder as an erasure whenever the inner decoder returns a codeword whose Hamming distance from the respective received sub-word equals or exceeds that threshold. We show that under our assumption on the overall Hamming weight of the error word, there is at least one threshold for which the outer decoder recovers the correct codeword. If the outer code is taken as a GRS code, then a GMD decoder has an implementation with time complexity that is at most quadratic in the length of the concatenated code.
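The threshold enumeration can be illustrated with a deliberately tiny toy sketch (ours, not Forney's GRS-based setting): the inner code is the binary [3, 1, 3] repetition code, the outer code is the length-5 repetition code over {0, 1}, and the outer error–erasure decoder succeeds whenever 2·(errors) + (erasures) < d_out = 5.

```python
def gmd_decode(received, d_out=5, thresholds=(1, 2)):
    """Toy GMD decoder: inner [3,1,3] repetition code, outer length-5 repetition code."""
    # Inner step: nearest-codeword (majority) decoding of each 3-bit block, keeping
    # the Hamming distance to the decoded inner codeword as a reliability measure.
    blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
    inner = []
    for b in blocks:
        sym = 1 if sum(b) >= 2 else 0
        inner.append((sym, sum(1 for bit in b if bit != sym)))
    best = None
    for t in thresholds:
        # Erase every inner decision whose distance reaches the threshold t.
        symbols = [sym if dist < t else None for sym, dist in inner]
        ones = sum(1 for s in symbols if s == 1)
        zeros = sum(1 for s in symbols if s == 0)
        erasures = symbols.count(None)
        errors = min(ones, zeros)  # for a repetition outer code, minority symbols are errors
        if 2 * errors + erasures < d_out:  # outer error-erasure decoding succeeds
            best = 1 if ones > zeros else 0
    return best
```

Seven bit errors (just under half of d_in·d_out = 15) are corrected: flipping two whole inner blocks and one extra bit of the all-ones codeword, the threshold t = 1 fails but t = 2 recovers the message, illustrating why the decoder must enumerate over thresholds.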
In Chapter 6, we introduced an efficient decoder for GRS codes, yet we assumed that the number of errors does not exceed ⌊(d−1)/2⌋, where d is the minimum distance of the code. In this chapter, we present a decoding algorithm for GRS codes, due to Guruswami and Sudan, where this upper limit is relaxed.
When a decoder attempts to correct more than ⌊(d−1)/2⌋ errors, the decoding may sometimes not be unique; therefore, we consider here a more general model of decoding, allowing the decoder to return a list of codewords, rather than just one codeword. In this more general setting, a decoding is considered successful if the computed list of codewords contains the transmitted codeword. The (maximum) number of errors that a list decoder can successfully handle is called the decoding radius of the decoder.
The approach that leads to the Guruswami–Sudan list decoder is quite different from the GRS decoder which was introduced in Chapter 6. Specifically, the first decoding step now computes from the received word a certain bivariate polynomial Q(x, z) over the ground field, F, of the code. Regarding Q(x, z) as a univariate polynomial in the indeterminate z over the ring F[x], a second decoding step computes the roots of Q(x, z) in F[x]; these roots are then mapped to codewords which, in turn, form the returned list.
In this chapter, we establish conditions on the parameters of codes. In the first part of the chapter, we present bounds that relate the length n, size M, minimum distance d, and alphabet size q of a code. Two of these bounds—the Singleton bound and the sphere-packing bound—imply necessary conditions on the values of n, M, d, and q, so that a code with the respective parameters indeed exists. We also exhibit families of codes that attain each of these bounds. The third bound which we present—the Gilbert–Varshamov bound—is an existence result: it states that there exists a linear [n, k, d] code over GF(q) whenever n, k, d, and q satisfy a certain inequality. Additional bounds are included in the problems at the end of this chapter. We end this part of the chapter by introducing another example of necessary conditions on codes—now in the form of MacWilliams' identities, which relate the distribution of the Hamming weights of the codewords in a linear code to the respective distribution in the dual code.
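These bounds are straightforward to evaluate numerically. The sketch below (function names ours) computes the Singleton bound M ≤ q^(n−d+1), the sphere-packing bound M ≤ q^n / V_q(n, ⌊(d−1)/2⌋), and the Gilbert–Varshamov sufficient condition V_q(n−1, d−2) < q^(n−k) for the existence of a linear [n, k, d] code over GF(q):

```python
from math import comb

def sphere_volume(n, r, q):
    """V_q(n, r): number of words within Hamming distance r of a fixed word of length n."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def singleton_bound(n, d, q):
    """Upper bound on the code size M implied by the Singleton bound."""
    return q ** (n - d + 1)

def sphere_packing_bound(n, d, q):
    """Upper bound on M implied by the sphere-packing (Hamming) bound."""
    return q ** n // sphere_volume(n, (d - 1) // 2, q)

def gv_linear_exists(n, k, d, q):
    """Gilbert-Varshamov: True guarantees a linear [n, k, d] code over GF(q) exists."""
    return sphere_volume(n - 1, d - 2, q) < q ** (n - k)
```

For the binary Hamming code parameters [7, 4, 3], the sphere-packing bound gives 2^7/8 = 16 = 2^4, so that code attains the bound (it is perfect), while the Gilbert–Varshamov condition 7 < 8 also holds.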
The second part of this chapter deals with asymptotic bounds, which relate the rate of a code to its relative minimum distance δ = d/n and its alphabet size, as the code length n tends to infinity.
In the third part of the chapter, we shift from the combinatorial setting of (n, M, d) codes to the probabilistic framework of the memoryless q-ary symmetric channel.
Concatenated codes are examples of compound constructions, as they are obtained by combining two codes—an inner code and an outer code—with a certain relationship between their parameters. This chapter presents another compound construction, now combining an (inner) code C over some alphabet F with an undirected graph G = (V, E). In the resulting construction, which we refer to as a graph code and denote by (G, C), the degrees of all the vertices in G need to be equal to the length of C, and the code (G, C) consists of all the words of length ∣E∣ over F in which certain sub-words, whose locations are defined by G, belong to C. The main result to be obtained in this chapter is that there exist explicit constructions of graph codes that can be decoded in linear-time complexity, such that the code rate is bounded away from zero, and so is the fraction of symbols that are allowed to be in error.
We start this chapter by reviewing several concepts from graph theory. We then focus on regular graphs, i.e., graphs in which all vertices have the same degree. We will be interested in the expansion properties of such graphs; namely, how the number of outgoing edges from a given set of vertices depends on the size of this set.
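For small graphs, the expansion just described can be computed by brute force. A minimal sketch (our own helper, feasible only for small vertex sets) takes the minimum, over all vertex subsets S with |S| ≤ |V|/2, of the ratio between the number of edges leaving S and |S|:

```python
from itertools import combinations

def edge_expansion(vertices, edges):
    """Brute-force edge expansion: min |boundary(S)| / |S| over subsets with |S| <= |V|/2."""
    best = float('inf')
    for r in range(1, len(vertices) // 2 + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            # Count edges with exactly one endpoint inside S.
            boundary = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, boundary / len(s))
    return best

# The complete graph K4 is 3-regular; its worst-expanding sets are the 2-vertex sets,
# each of which has 4 outgoing edges, giving expansion 4/2 = 2.
k4_vertices = [0, 1, 2, 3]
k4_edges = list(combinations(k4_vertices, 2))
```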
This book has evolved from lecture notes that I have been using for an introductory course on coding theory in the Computer Science Department at Technion. The course deals with the basics of the theory of error-correcting codes, and is intended for students in the graduate and upper-undergraduate levels from Computer Science, Electrical Engineering, and Mathematics. The material of this course is covered by the first eight chapters of this book, excluding Sections 4.4–4.7 and 6.7. Prior knowledge in probability, linear algebra, modern algebra, and discrete mathematics is assumed. On the other hand, all the required material on finite fields is an integral part of the course. The remaining parts of this book can form the basis of a second, advanced-level course.
There are many textbooks on the subject of error-correcting codes, some of which are listed next: Berlekamp [36], Blahut [46], Blake and Mullin [49], Lin and Costello [230], MacWilliams and Sloane [249], McEliece [259], Peterson and Weldon [278], and Pless [280]. These are excellent sources, which served as very useful references when compiling this book. The two volumes of the Handbook of Coding Theory [281] form an extensive encyclopedic collection of what is known in the area of coding theory.
One feature that probably distinguishes this book from most other classical textbooks on coding theory is that generalized Reed–Solomon (GRS) codes are treated before BCH codes—and even before cyclic codes.
Once we select the graph model of a network, various algorithms can be used to efficiently design and analyze a network architecture. Among the most fundamental of these are: finding a minimum spanning tree in a graph (where the cost of a tree is defined appropriately); visiting the nodes of a tree in a specific order; finding the connected components of a graph; finding shortest paths from a node to another node, from a node to all nodes, and from all nodes to all nodes, in either a distributed or a centralized fashion; and assigning flows to the various links for a given traffic matrix.
In the following we describe some useful graph algorithms that are important in network design. Recall that N represents the number of nodes and M represents the number of links in the graph.
Shortest-path routing
Shortest-path routing, as the name suggests, finds a path of the shortest length in the network from a source to a destination. This path may be computed statically for the given graph, regardless of the resources being used (or assuming that all resources are available to set up that path). In that case, if at a given moment all resources on that path are in use, then the request to set up a path between the given pair is blocked. Alternatively, the path may be computed on the graph of available resources: a reduced graph obtained from the original graph by removing all the links and nodes that are busy at the time of computation.
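The static computation is typically done with Dijkstra's algorithm. A minimal sketch (names ours), where the graph is an adjacency map from each node to a list of (neighbor, link-weight) pairs:

```python
import heapq

def shortest_path(adj, src, dst):
    """Dijkstra's algorithm; adj maps node -> list of (neighbor, weight) pairs."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None, float('inf')
    # Reconstruct the path by walking back through the predecessor map.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Example: triangle with link weights A-B = 1, B-C = 2, A-C = 4.
adj = {'A': [('B', 1), ('C', 4)], 'B': [('A', 1), ('C', 2)], 'C': [('A', 4), ('B', 2)]}
route, cost = shortest_path(adj, 'A', 'C')
```

Routing on the reduced graph of available resources amounts to calling the same routine after deleting the busy links and nodes from `adj`.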
Technological advances in semiconductor products have essentially been the primary driver for the growth of networking, leading to improvements and simplification in the long-distance communication infrastructure in the twentieth century. Two major networks of networks, the public switched telephone network (PSTN) and the Internet (together with Internet II), exist today. The PSTN, a low-delay, fixed-bandwidth network of networks based on the circuit-switching principle, provides a very high quality of service (QoS) for large-scale, advanced voice services. The Internet provides very flexible data services such as e-mail and access to the World Wide Web. Packet-switched Internet protocol (IP) networks are replacing the circuit-switched, connection-oriented networks of the past century. The Internet, for example, is primarily based on packet switching; it is a variable-delay, variable-bandwidth network that, in its initial phase, provided no guarantee on the quality of service. However, Internet traffic volume has grown considerably over the last decade, and data traffic now exceeds voice traffic. Various methods have evolved to provide high levels of QoS on packet networks, particularly for voice and other real-time services. Further advances in telecommunications over the last half-century have enabled communication networks to see the light: over the 1980s and 1990s, research into optical fibers and their applications in networking revolutionized the communications industry. Current telecommunication transmission lines employ light signals to carry data over guided channels, called optical fibers. The transmission of signals that travel at the speed of light is not new; it has existed in the form of radio broadcasts for several decades.
Several methods discussed in the last chapter for joint working and spare capacity planning in survivable WDM networks considered a static traffic demand and optimized the network cost under various cost models and survivability paradigms. The focus here is on network operation under dynamic traffic. A common framework that captures the various operational phases of a survivable WDM network in a single ILP optimization problem avoids service disruption to the existing connections. However, the complexity of the optimization problem makes the formulation applicable only to network provisioning and offline reconfiguration; the direct use of this method for online reconfiguration remains limited to small networks with a few tens of wavelengths.
Online algorithm
The goal here is to develop an algorithm for fast online reconfiguration using a heuristic algorithm based on an LP relaxation technique. Since the ILP variables are relaxed, a way is needed to derive a feasible solution from the solution of the relaxed problem. The algorithm consists of two steps. In the first step, the network topology is processed based on the demand set to be provisioned. This preprocessing step ensures that the LP yields a feasible solution. The preprocessing step is based on (i) the assumption that in a network, two routes between any given node-pair are generally sufficient to provide effective fault tolerance, and (ii) an observation on the working of the ILP for such networks. In the second step, using the processed topology as input, the LP is solved. It is interesting to obtain some insights into why the LP formulation may yield a feasible solution to the ILP.
The conventional lightpath is an end-to-end system that is exclusively occupied by its source and destination nodes, with no wavelength multiplexing at the intermediate nodes along the lightpath. Thus, if there are not enough IP streams to share the lightpath, the wavelength capacity is severely underutilized for low-rate IP bursts, unless the wavelength is filled up by efficiently aggregated IP traffic. The light trail is an architectural concept that has been proposed for carrying finer-granularity IP traffic. A light trail is a unidirectional optical trail between a start node and an end node. It is similar to a lightpath, with one important difference: the intermediate nodes can also access the unidirectional trail. Moreover, the light trail architecture, as detailed later on, does not involve any active switching components. Together, these differences make the light trail an ideal candidate for traffic grooming. In light trails, the wavelength is shared in time by the nodes on the light trail. Medium access is arbitrated by a control protocol among the nodes that have data ready to transmit at the same time; in a simple algorithm, upstream nodes have higher priority than downstream nodes.
Current technologies that transport IP-centric traffic in optical networks are often too expensive, owing to their reliance on costly optical and opto-electronic components. Consumers generate traffic of diverse granularities, and service providers need technologies that are affordable and seamlessly upgradable.
The restoration schemes differ in their assumptions concerning the functionality of the cross-connects, the traffic demand, the performance metric, and the network control. Survivability paradigms are classified by their rerouting methodology as path- or link-based, by their execution mechanism as centralized or distributed, by their computation timing as precomputed or real-time, and by their capacity sharing as dedicated or shared. This classification is shown in Fig. 3.1.
Pro-active vs. reactive restoration. A pro-active or reactive restoration method is either link-based or path-based. In a special case, a segment-based approach can also be used. In segment-based detouring, a backup segment is assigned to more than one link, and a link may be covered by more than one segment. The restoration path, as shown in Fig. 3.2, is computed for each path; in the case of a link failure, the corresponding backup segment is used.
Link-based restoration methods reroute disrupted traffic around the failed link, while path-based rerouting replaces the whole path between the source and the destination of a demand. Thus, a link-based method employs local detouring while the path-based method employs end-to-end detouring. The two detouring mechanisms are shown in Fig. 3.3. For a link-based method, all routes passing through a link are transferred to a local rerouting path that replaces that link.
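The two detouring mechanisms can be contrasted on a small example. The sketch below (ours, using plain breadth-first search on an unweighted four-node ring) computes a local detour around a failed link for link-based restoration, and a new end-to-end route for path-based restoration:

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest hop-count path from src to dst, avoiding the undirected links in `banned`."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None

# Ring A-B-C-D-A carrying a demand from A to C over the working path A-B-C.
adj = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['A', 'C']}
failed = frozenset(('B', 'C'))
local = bfs_path(adj, 'B', 'C', banned={failed})  # link-based: detour around link B-C
e2e = bfs_path(adj, 'A', 'C', banned={failed})    # path-based: new end-to-end route
```

The local detour splices the replacement segment B–A–D–C into the existing path at the endpoints of the failed link, whereas the end-to-end detour replaces the whole route with A–D–C.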
Data traffic in ultra-long-haul WDM networks is usually characterized by large, homogeneous data flows, whereas metropolitan-area WDM networks (MANs) have to deal with dynamic, heterogeneous service requirements. In such WANs and MANs, equipment costs increase if a separate wavelength is used for each individual service. Moreover, while each wavelength offers a transmission capacity at gigabit-per-second rates (e.g., OC-48 or OC-192, and on to OC-768 in the future), users may request connections at rates that are lower than the full wavelength capacity. In addition, for networks of practical size, the number of available wavelengths is still lower by a few orders of magnitude than the number of source-to-destination connections that may be active at any given time. Hence, to make the network viable and cost-effective, it must be able to offer subwavelength-level services and to pack these services efficiently onto the wavelengths. These subwavelength services, henceforth referred to as low-rate traffic streams, can range from, say, STS-1 (51.84 Mbit/s) capacity up to the full wavelength capacity. The act of multiplexing, demultiplexing, and switching lower-rate traffic streams onto high-capacity lightpaths is referred to as traffic grooming, and WDM networks offering such subwavelength low-rate services are referred to as WDM grooming networks. Efficient traffic grooming improves the wavelength utilization and reduces equipment costs.
In WDM grooming networks, each lightpath typically carries many multiplexed lower-rate traffic streams. Optical add–drop multiplexers (OADMs) add/drop the wavelength for which grooming is needed and electronic SONET-ADMs multiplex or demultiplex the traffic streams onto the wavelength.