Once we select the graph model of a network, various algorithms can be used to efficiently design and analyze a network architecture. Among the most fundamental of these are finding a spanning tree of minimum cost (where cost is defined appropriately), that is, a minimum spanning tree; visiting the nodes of a tree in a specific order; finding the connected components of a graph; finding shortest paths from one node to another, from one node to all nodes, and from all nodes to all nodes, in either a distributed or a centralized fashion; and assigning flows to the various links for a given traffic matrix.
In the following we describe some useful graph algorithms that are important in network design. Recall that N represents the number of nodes and M represents the number of links in the graph.
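To make the first of these concrete, the following is a minimal sketch of Kruskal's minimum spanning tree algorithm in Python; the edge list, node labels, and costs are illustrative assumptions rather than an example taken from the text.

# Minimal sketch of Kruskal's minimum spanning tree algorithm.
# The edge list and node labels are illustrative; the cost is the link weight.

def minimum_spanning_tree(nodes, edges):
    """nodes: iterable of labels; edges: list of (cost, u, v) tuples."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for cost, u, v in sorted(edges):        # consider links in order of cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding the link creates no cycle
            parent[ru] = rv
            tree.append((u, v, cost))
    return tree

if __name__ == "__main__":
    nodes = ["A", "B", "C", "D"]
    edges = [(1, "A", "B"), (4, "A", "C"), (2, "B", "C"), (3, "B", "D"), (5, "C", "D")]
    print(minimum_spanning_tree(nodes, edges))   # three links of total cost 6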
Shortest-path routing
Shortest-path routing, as the name suggests, finds a path of the shortest length in the network from a source to a destination. This path may be computed statically for the given graph, regardless of the resources currently in use (or assuming that all resources are available to set up the path). In that case, if at a given moment all resources on that path are in use, then the request to set up a path between the given pair is blocked. Alternatively, the path may be computed on the graph of available resources, a reduced graph obtained by removing from the original graph all links and nodes that are busy at the time of computation.
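The following sketch illustrates the second variant described above: a shortest-path computation (Dijkstra's algorithm) run on the reduced graph obtained by excluding busy links. The adjacency structure, link costs, and busy set are hypothetical, and the code is a sketch rather than a prescribed implementation.

import heapq

# Shortest path on the reduced graph: busy links are skipped before relaxation.
# The graph, link costs, and the busy set are hypothetical examples.

def shortest_path(adj, source, dest, busy_links=frozenset()):
    """adj: {node: {neighbor: cost}}; busy_links: set of (u, v) pairs to exclude."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue                         # stale heap entry
        for v, cost in adj[u].items():
            if (u, v) in busy_links or (v, u) in busy_links:
                continue                     # link currently in use: not available
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dest not in dist:
        return None                          # request blocked: no free path exists
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dest]

adj = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "D": 1}, "C": {"A": 1}, "D": {"B": 1}}
print(shortest_path(adj, "C", "D"))                           # static computation
print(shortest_path(adj, "C", "D", busy_links={("A", "B")}))  # reduced graph: blocked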
Technological advances in semiconductor products have been the primary driver for the growth of networking, leading to improvements in, and simplification of, the long-distance communication infrastructure in the twentieth century. Two major networks of networks exist today: the public switched telephone network (PSTN) and the Internet (together with Internet II). The PSTN, a low-delay, fixed-bandwidth network of networks based on the circuit-switching principle, provides a very high quality of service (QoS) for large-scale, advanced voice services. The Internet provides very flexible data services such as e-mail and access to the World Wide Web. Packet-switched internet protocol (IP) networks are replacing the circuit-switched, connection-oriented networks of the past century; the Internet, for example, is primarily based on packet switching. It is a variable-delay, variable-bandwidth network that, in its initial phase, provided no guarantees on quality of service. However, Internet traffic volume has grown considerably over the last decade, and data traffic now exceeds voice traffic. Various methods have evolved to provide high levels of QoS on packet networks, particularly for voice and other real-time services. Further advances in telecommunications over the last half-century have, quite literally, enabled communication networks to see the light. Over the 1980s and 1990s, research into optical fibers and their applications in networking revolutionized the communications industry. Current telecommunication transmission lines employ light signals to carry data over guided channels called optical fibers. The transmission of signals that travel at the speed of light is not new; it has existed in the form of radio broadcasts for several decades.
Several methods for joint working and spare capacity planning in survivable WDM networks, discussed in the last chapter, considered a static traffic demand and optimized the network cost under various cost models and survivability paradigms. The focus here is on network operation under dynamic traffic. A common framework that captures the various operational phases of a survivable WDM network in a single ILP optimization problem avoids service disruption to the existing connections. However, the complexity of the optimization problem makes the formulation applicable only to network provisioning and offline reconfiguration; its direct use for online reconfiguration remains limited to small networks with a few tens of wavelengths.
Online algorithm
The goal here is to develop a fast online reconfiguration method using a heuristic based on an LP relaxation technique. Since the ILP variables are relaxed, a way is needed to derive a feasible solution from the solution of the relaxed problem. The algorithm consists of two steps. In the first step, the network topology is processed based on the demand set to be provisioned; this preprocessing ensures that the LP yields a feasible solution. The preprocessing step rests on (i) the assumption that, in a network, two routes between any given node pair are generally sufficient to provide effective fault tolerance, and (ii) an observation on the working of the ILP for such networks. In the second step, the LP is solved using the processed topology as input. It is also of interest to gain some insight into why the LP formulation may yield a feasible solution to the ILP.
The conventional lightpath is an end-to-end system occupied exclusively by its source and destination nodes, with no multiplexing of traffic from the intermediate nodes along the lightpath. Thus, if there are not enough IP streams to share the lightpath, the wavelength capacity is severely underutilized for low-rate IP bursts unless the wavelength is filled by efficiently aggregated IP traffic. The light trail is an architectural concept proposed for carrying finer-granularity IP traffic. A light trail is a unidirectional optical trail between a start node and an end node. It is similar to a lightpath, with one important difference: the intermediate nodes can also access the unidirectional trail. Moreover, the light trail architecture, as detailed later on, does not involve any active switching components. Together, these differences make the light trail an ideal candidate for traffic grooming. In a light trail, the wavelength is shared in time by the nodes on the trail, and medium access is arbitrated by a control protocol among the nodes that have data ready to transmit at the same time. In a simple algorithm, upstream nodes have a higher priority than nodes downstream.
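The simple arbitration rule mentioned above can be sketched as follows; the trail ordering, node labels, and the set of nodes with data ready are hypothetical, and real light trail control protocols are considerably more elaborate.

# Minimal sketch of the simple arbitration rule described above: when several
# nodes on a light trail have data ready in the same slot, the most upstream
# node wins. The trail ordering and the ready set are hypothetical examples.

def arbitrate(trail, ready_nodes):
    """trail: nodes listed from start node to end node; returns the winner."""
    for node in trail:                 # upstream-to-downstream order
        if node in ready_nodes:
            return node                # highest-priority (most upstream) node
    return None                        # no node has data to send

trail = ["A", "B", "C", "D", "E"]      # unidirectional trail from A to E
print(arbitrate(trail, {"C", "D"}))    # -> 'C', which is upstream of D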
Current technologies that transport IP-centric traffic in optical networks are often too expensive, owing to their reliance on costly optical and opto-electronic components. Consumers generate traffic of diverse granularities, and service providers need technologies that are affordable and seamlessly upgradable.
The restoration schemes differ in their assumptions concerning the functionality of the cross-connects, the traffic demand, the performance metric, and the network control. Survivability paradigms are classified by rerouting methodology as path- or link-based, by execution mechanism as centralized or distributed, by computation timing as precomputed or real-time, and by capacity sharing as dedicated or shared. This classification is shown in Fig. 3.1.
Pro-active vs. reactive restoration. A pro-active or reactive restoration method is either link-based or path-based. As a special case, a segment-based approach can also be used. In segment-based detouring, a backup segment is assigned to more than one link, and a link may be covered by more than one segment. The restoration path, as shown in Fig. 3.2, is computed for each working path; in the case of a link failure, the backup segment covering that link is used.
Link-based restoration methods reroute disrupted traffic around the failed link, while path-based rerouting replaces the whole path between the source and the destination of a demand. Thus, a link-based method employs local detouring, while a path-based method employs end-to-end detouring. The two detouring mechanisms are shown in Fig. 3.3. In a link-based method, all routes passing through the failed link are transferred to a local rerouting path that replaces that link.
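The two detouring mechanisms can be contrasted with a small sketch: given a working path and a failed link, local detouring splices a reroute around the failed link into the existing path, while end-to-end detouring recomputes the entire source-to-destination path. The topology, the working path, and the failed link below are hypothetical examples, not taken from Fig. 3.3.

from collections import deque

# Sketch contrasting local (link-based) and end-to-end (path-based) detouring.

def bfs_path(adj, src, dst, failed):
    """Shortest-hop path from src to dst avoiding the failed (undirected) link."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if {u, v} == set(failed) or v in prev:
                continue                       # skip the failed link and visited nodes
            prev[v] = u
            queue.append(v)
    if dst not in prev:
        return None
    path = []
    while dst is not None:
        path.append(dst)
        dst = prev[dst]
    return path[::-1]

adj = {"A": ["B", "E"], "B": ["A", "C", "E"], "C": ["B", "D", "E"],
       "D": ["C"], "E": ["A", "B", "C"]}
working = ["A", "B", "C", "D"]                 # working path before the failure
failed = ("B", "C")                            # the failed link

# Link-based (local) detouring: reroute only around B-C and splice into the path.
detour = bfs_path(adj, "B", "C", failed)
link_based = working[:working.index("B")] + detour + working[working.index("C") + 1:]
print(link_based)                              # ['A', 'B', 'E', 'C', 'D']

# Path-based (end-to-end) detouring: recompute the whole source-destination path.
print(bfs_path(adj, "A", "D", failed))         # ['A', 'E', 'C', 'D']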
Data traffic in ultra-long-haul WDM networks is usually characterized by large, homogeneous data flows, whereas metropolitan area WDM networks (MANs) have to deal with dynamic, heterogeneous service requirements. In both WAN and MAN settings, equipment costs increase if a separate wavelength is used for each individual service. Moreover, while each wavelength offers a transmission capacity at gigabit-per-second rates (e.g., OC-48 or OC-192, and OC-768 in the future), users may request connections at rates lower than the full wavelength capacity. In addition, for networks of practical size, the number of available wavelengths is still a few orders of magnitude smaller than the number of source-to-destination connections that may be active at any given time. Hence, to make the network viable and cost-effective, it must be able to offer subwavelength services and to pack these services efficiently onto the wavelengths. These subwavelength services, henceforth referred to as low-rate traffic streams, can range from, say, STS-1 (51.84 Mbit/s) capacity up to the full wavelength capacity. The multiplexing, demultiplexing, and switching of such lower-rate traffic streams onto high-capacity lightpaths is referred to as traffic grooming, and WDM networks offering such subwavelength low-rate services are referred to as WDM grooming networks. Efficient traffic grooming improves wavelength utilization and reduces equipment costs.
In WDM grooming networks, each lightpath typically carries many multiplexed lower-rate traffic streams. Optical add–drop multiplexers (OADMs) add/drop the wavelength for which grooming is needed and electronic SONET-ADMs multiplex or demultiplex the traffic streams onto the wavelength.
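As a minimal illustration of grooming, the sketch below packs low-rate streams (expressed in STS-1 units) onto lightpaths of OC-48 capacity using a generic first-fit policy; the demand list is hypothetical and the policy is chosen only for illustration, not as the grooming scheme advocated in the text.

# First-fit grooming sketch: low-rate streams (in STS-1 units) are packed onto
# lightpaths of OC-48 capacity (48 STS-1 slots).

WAVELENGTH_CAPACITY = 48                 # OC-48 expressed in STS-1 units

def groom(demands):
    """demands: list of (name, sts1_units); returns lightpath index -> streams."""
    lightpaths = []                      # each entry: [remaining_capacity, [streams]]
    for name, units in demands:
        for lp in lightpaths:
            if lp[0] >= units:           # first lightpath with enough spare capacity
                lp[0] -= units
                lp[1].append(name)
                break
        else:                            # no fit: light up a new wavelength
            lightpaths.append([WAVELENGTH_CAPACITY - units, [name]])
    return {i: streams for i, (_, streams) in enumerate(lightpaths)}

demands = [("d1", 12), ("d2", 3), ("d3", 48), ("d4", 24), ("d5", 12)]
print(groom(demands))   # {0: ['d1', 'd2', 'd4'], 1: ['d3'], 2: ['d5']}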
An interconnection network connects various sources of information using a set of point-to-point links. A link may be a copper wire, an optical fiber, or a wireless connection. The nodes are autonomous data sources and can request to transfer any amount of information to any other node. Figure A2.1 shows an example network consisting of four nodes. Node A has links to nodes B and C. Node B is connected to nodes A and D. Nodes C and D are connected to nodes A and B, respectively. If node C wishes to send information to node B, it sends it to node A, which in turn routes it to node B; node A thus acts as an intermediate node. The capacity of a node is the amount of information it can transmit (its source capacity) or receive (its sink capacity). The capacity of a link is the amount of information that can be transferred over the link in one unit of time.
Network design deals with the interconnection of the various nodes and with how information is transmitted from one node to another. Network architecture and design both have multiple meanings; the most commonly used interpretation relates to the decisions one needs to make in designing a network. The four most important aspects of network architecture and design are described here.
Network topology
A topology defines how nodes are interconnected. For example, the topology of the NSF network is shown in Fig. A2.2. Most network topologies are hierarchical in nature.
Before the 1970s, networks were primarily used to carry voice or telephone calls over circuit-switched networks. Failures and service outages in such transport networks were handled mainly at the circuit layer, and often manually. Most remedial actions involved rerouting calls through manual configuration of switches by network operators. Over time, the capacity of the transport networks increased, data overlay networks were created, and a large number of end-users instituted private voice and packet networks.
With the advent of fiber optic transmission systems and eventually wavelength-division multiplexing (WDM) the bandwidth of a single fiber soared. With increasing deployment of fibers in networks, the risk of losing large volumes of traffic due to a span cut or a node failure has also increased tremendously. In the 1990s Bellcore developed the SONET (synchronous optical network) standard and standardized the concept of self-healing rings. It was soon followed by the equivalent standard named SDH (synchronous digital hierarchy) in Europe. This appeared to be the final solution. Many service providers could replace all of their cumbersome and expensive point-to-point transmission systems with a few multi-node, self-healing SONET rings. Many carriers joined the SONET ring bandwagon.
With further developments in technology, more and more mesh network topologies started emerging. Failure management remained part of the solution, and recovering from failures remained a challenging issue. This soon began to fuel a question that persists to this day: which is the better option, ring-based or mesh-based restoration? Over the years, as traffic increased, a mesh-based approach came to seem the more viable option for providing restoration compared with a traditional ring network.
A network can be designed using various topologies. Many interconnection networks have been proposed by the research community; some have been prototyped but few have progressed to become commercial products. A network may be static or dynamic. The topologies can be divided into two categories: (i) regular and (ii) irregular. The regular topologies follow a well-defined function to interconnect nodes. The regularity, symmetry, and most often the strong connectivity of the regular network topologies make them suitable for general purpose interconnection structures where the characteristics of the traffic originating from all nodes are identical and destinations are uniformly distributed over the set of nodes. Thus, the link traffic is also uniformly distributed. The irregular topologies are optimized based on the traffic demands. If there is a high traffic flow between two nodes, then they may be connected using a direct link. If a direct link is not feasible, then an alternative is to provide a short path between the two nodes. Such designs are much more involved and need special attention.
We will first discuss regular topologies and then get into the design of irregular topologies. We will also discuss some specific regular topologies, such as a binary cube and its variations, in greater detail.
Regular topologies
Several regular topologies have been proposed in the literature. The most important among these are the completely connected graph, star, tree, ring, multi-ring, mesh, and hypercube topologies. One desirable property of such a structure is the ability to accommodate, or embed, an arbitrary permutation.
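As a concrete example of a regular topology, the sketch below generates the adjacency list of a binary n-cube (hypercube), in which two nodes are adjacent exactly when their binary labels differ in one bit; the dimension used is an arbitrary choice for illustration.

# Sketch of one regular topology named above: the binary n-cube (hypercube).
# Node i and node j are adjacent when their labels differ in exactly one bit.

def hypercube(n):
    """Return the adjacency list of the binary n-cube on 2**n nodes."""
    return {i: [i ^ (1 << b) for b in range(n)] for i in range(2 ** n)}

adj = hypercube(3)                               # 8-node binary cube
print(adj[0])                                    # neighbors of node 0 -> [1, 2, 4]
print(all(len(v) == 3 for v in adj.values()))    # every node has degree n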
A routing algorithm establishes an appropriate path from any given source to a destination. The objective of network routing is to maximize network throughput with minimal cost in terms of path length. To maximize throughput, a routing algorithm has to provide as many communication paths as possible. To minimize the cost of paths, the shortest paths have to be provided. However, there is always a trade-off between these two objectives. Most routing algorithms are based on assigning a cost measure to each link in a network. The cost could be a fixed quantity related to parameters such as the link length, the bandwidth of a link, or the estimated propagation delay. Each link has a cost associated with it and in most cases it is assumed that the links have equal cost.
An interconnection network is strictly non-blocking if there exists a routing algorithm that can add a new connection without disturbing existing connections. A network is rearrangeable if its permitted states realize every permutation (or every allowable set of requests), where it is possible to rearrange existing connections if necessary. Otherwise, it is blocking.
The store-and-forward operation in packet switching incurs a time delay and causes significant performance degradation. If the algorithm is used in a packet-switching network, the total time delay of a data packet is obtained by summing the time delays at each intermediate node. Since the unavailability of any link along a route makes the entire route unavailable, the network experiences a high probability of blocking under heavy traffic; blocked requests are rejected, eventually causing data loss or delay.
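The delay accounting described above can be illustrated with a short sketch in which the end-to-end delay of a packet is the sum of the per-hop delays along its route; the route and the per-hop delay values are made-up illustrative numbers.

# Sketch of store-and-forward delay accounting: the end-to-end delay is the sum
# of the per-hop delays along the route. All figures are illustrative.

def total_delay(route, per_hop_delay):
    """route: list of nodes; per_hop_delay: {(u, v): delay in ms}."""
    return sum(per_hop_delay[(u, v)] for u, v in zip(route, route[1:]))

per_hop_delay = {("A", "B"): 2.0, ("B", "D"): 3.5}    # store-and-forward delay per hop, ms
print(total_delay(["A", "B", "D"], per_hop_delay))    # 5.5 ms end to end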
In the last chapter, the characteristics of traffic grooming WDM networks with arbitrary topologies were studied from the perspective of blocking performance. It was shown that the blocking performance is affected not only by the link traffic and the routing and wavelength assignment strategy, but also by the arrival rates of the different low-rate traffic streams, their respective holding times and, more importantly, the capacity distribution of the wavelengths on the links. In such networks, call requests arrive randomly and can request a low-rate traffic connection between a source and a destination. Under dynamic traffic conditions, call requests that ask for capacity close to that of a full wavelength experience a higher probability of blocking than those that ask for a smaller fraction. In fact, the difference in blocking performance between high- and low-capacity traffic streams becomes more significant as the traffic stream switching capability of the network increases. This difference in blocking performance for different capacities is directly affected by the routing and wavelength assignment policy used to route the call request. Hence, it is important that a call request be served in a fair manner commensurate with the capacity it requests. This capacity fairness is different from the hop-count-based fairness measure that has traditionally been addressed in the literature.
In optical networks without wavelength conversion, the wavelength continuity constraint increases the probability that a call request is blocked.
The popularity of the Internet and of internet protocol (IP) based internet services promises enormous growth in data traffic originating from hosts that are IP endpoints. This growth is being fueled by various applications, such as those driven by the World Wide Web (WWW), and by the indirect impact of increased computing power and storage capacity at the end systems. The advent of new services with increasing intelligence, and the corresponding bandwidth demands, further add to the traffic growth. New access technologies such as asymmetric digital subscriber line (ADSL), high-bit-rate digital subscriber line (HDSL), and fiber to the home (FTTH) will remove the access bottlenecks and drive even faster growth of demand on the backbone network. As noted earlier, these changing trends have led to a fundamental shift in traffic patterns, with traffic now mostly due to data communications.
In the past, the amount of data traffic on carrier networks was small compared with voice-centric traffic. Therefore, the carrier networks were designed to primarily support voice traffic, and the data traffic was transmitted using the voice channels. Now, the core networks are being designed primarily for data traffic with voice support at the edges. Voice traffic can be carried in the core networks using “voice-over-IP” or similar paradigms. To meet these growing demands, the use of WDM will continue to increase in backbone networks. Architectures will be required to satisfy the need for better quality of service (QoS), protection, and availability guarantees in IP networks.
Earlier networks used electronic technology for both transmission and processing. Hence, the transmission and processing bandwidths at the nodes were approximately of the same order. Electronic technology advanced simultaneously on the transmission and processing sides, leading to a matched growth in the evolution of the networks. With the shift to optical technology, the transmission capacity has taken a quantum leap, while processing capacity in electronics has seen only modest improvements. Optical processing is currently in its infancy, and therefore the backbone networks are likely to remain circuit-switched, with the possibility of optical switching at intermediate nodes.
The increase in transmission capacity, in terms of multiple wavelengths each operating at a few tens of gigabits per second with multiple time slots within a wavelength, requires an equivalent increase in electronic processing for efficient operation of the networks. However, it is impractical for electronics to match the power of the optical technology if the nodes were to process all the information received from the different links to which they are connected. Hence, the trend in switching is toward multiple simple processing devices that work independently on parts of the information received at a node. Such a network model is referred to as a trunk-switched network (TSN). A TSN is a two-level network model in which a link is viewed as multiple channels, and channels are combined to form groups called trunks. This conceptual architecture is capable of grooming subwavelength traffic over a link.
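A minimal sketch of this two-level link model is given below; the numbers of channels per link and channels per trunk are illustrative assumptions, not parameters taken from the text.

# Sketch of the two-level link model used in a trunk-switched network (TSN):
# a link is viewed as a set of channels, and channels are grouped into trunks.

def build_link(num_channels, channels_per_trunk):
    """Group the channels of one link into trunks of equal size."""
    channels = list(range(num_channels))
    return [channels[i:i + channels_per_trunk]
            for i in range(0, num_channels, channels_per_trunk)]

link = build_link(num_channels=16, channels_per_trunk=4)
print(len(link))      # 4 trunks on this link
print(link[0])        # channels [0, 1, 2, 3] form the first trunk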
As mentioned in earlier chapters, owing to the high bandwidths involved, any link failure in the form of a fiber cut has catastrophic consequences unless protection and restoration schemes for the interrupted services form an integral part of the network design and operation strategies. Although network survivability can be implemented in the layers above the optical network layer (e.g., self-healing in SONET rings and in the ATM virtual path layer, fast rerouting in MPLS, and changing routes using dynamic routing protocols in the IP layer), it is advantageous to use optical WDM survivability mechanisms, since they offer a common survivability platform for services in the higher layers. For example, several IP routes may eventually be routed through the same fiber, so the failure of a single fiber may affect multiple routes, possibly including the alternative paths for an IP route. Thus, protection at the IP layer requires complete knowledge of the underlying physical fiber topology.
As discussed earlier, a variety of optical path protection schemes can be designed using concepts such as disjoint dedicated backup paths, shared backup multiplexing, and joint primary/backup routing and wavelength assignment. Lightpath restoration schemes, on the other hand, do not rely on prerouted backup channels; instead, they dynamically recompute new routes to reroute the affected traffic after a link failure. Although this saves bandwidth, the timescale for restoration can be difficult to specify and can be of the order of hundreds of milliseconds. Hence, in a dynamic scenario, path protection schemes are likely to be more useful and practical than path restoration schemes.
The p-cycle (preconfigured protection cycle) is a cycle-based protection method introduced in. It can be characterized as the embedding of multiple rings that act as protection cycles in a mesh network. The p-cycles are configured from spare network capacity to provide protection to connections. The design goal of p-cycle protection is to retain the capacity efficiency of a mesh-restorable network while approaching the speed of a line-switched self-healing ring. In p-cycle protection, when a link fails, only the end nodes of the failed link need to perform real-time switching. This makes p-cycles similar to SONET/SDH line-switched rings in terms of the speed of recovery from link failures. The key difference between p-cycle and ring protection is that a p-cycle protects not only the links on the cycle, as in ring protection, but also the straddling links. A straddling link is an off-cycle link whose two end nodes are both on the cycle. This property significantly improves the capacity efficiency of p-cycles. Figure 4.1 depicts an example that illustrates p-cycle protection. In Fig. 4.1(a), A–B–C–D–E–A is a p-cycle formed using reserved capacity on the links. When the on-cycle link A–B fails, the p-cycle provides protection as shown in Fig. 4.1(b). When the straddling link B–D fails, the p-cycle protects two working paths on the link by providing two alternate paths, as shown in Figs. 4.1(c) and (d), covering the entire traffic on the link in both directions.
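The protection behaviour in Fig. 4.1 can be mimicked with a small sketch that, for a given p-cycle and failed link, returns the available protection paths: one path for an on-cycle link and two for a straddling link. The function below is an illustrative reconstruction of the example, not an implementation of a published p-cycle design algorithm.

# Sketch of the protection paths a single p-cycle offers, following the example
# of Fig. 4.1: the cycle A-B-C-D-E-A protects on-cycle link A-B with one path
# and straddling link B-D with two paths (one per arc of the cycle).

def protection_paths(cycle, failed):
    """cycle: node list without repeating the start node; failed: (u, v) link."""
    u, v = failed
    if u not in cycle or v not in cycle:
        return []                                        # both ends must be on the cycle
    i, j = sorted((cycle.index(u), cycle.index(v)))
    arc_fwd = cycle[i:j + 1]                             # one way around the cycle
    arc_bwd = cycle[i::-1] + cycle[:j:-1] + [cycle[j]]   # the other way around
    # An arc consisting only of the failed link itself cannot serve as protection.
    return [arc for arc in (arc_fwd, arc_bwd) if len(arc) > 2]

cycle = ["A", "B", "C", "D", "E"]             # the p-cycle of Fig. 4.1
print(protection_paths(cycle, ("A", "B")))    # on-cycle link: one protection path
print(protection_paths(cycle, ("B", "D")))    # straddling link: two protection paths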
The focus of this chapter is to provide an analytical framework and to obtain some insight into how traffic grooming affects performance, in terms of the call blocking probability, in different network topologies. Specifically, the performance of constrained and sparse grooming networks is compared using simulation-based studies. Constrained grooming corresponds to the case where grooming is performed only at the SONET-ADMs on an end-to-end basis. Sparse grooming corresponds to the case where, in addition to grooming at the SONET-ADMs, the cross-connects at some or all of the nodes are provided with traffic stream switching capability. The goal is to develop techniques that minimize electronic equipment costs and to provide solutions for efficient WDM network design.
It has been established that wavelength conversion, that is, the ability of a routing node to convert one wavelength to another, reduces wavelength conflicts and improves performance by reducing the blocking probability. Lower bounds on the blocking probability of an arbitrary network under any routing and wavelength assignment algorithm are known, and it has been shown that the use of wavelength converters results in a 10–40% increase in wavelength reuse. A reduced-load approximation scheme to calculate the blocking probabilities of the optical network model for two routing schemes, fixed routing and least-loaded routing, has been used in. That model does not consider the load correlation between the links. Analytical models of networks using fixed routing and random wavelength assignment, taking wavelength correlation into account, have been developed in.
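The reduced-load style analyses mentioned above build on per-link blocking computations. As a minimal illustration of that building block, the sketch below evaluates the standard Erlang B formula for a single link with a given number of wavelengths and offered load; it is only the generic ingredient, not the correlation-aware models cited in the text, and the load values are arbitrary examples.

# Erlang B blocking probability for a single link with `circuits` wavelengths
# offered a Poisson load of `erlangs`, computed with the standard recursion.

def erlang_b(erlangs, circuits):
    """Blocking probability via the recursive Erlang B formula."""
    b = 1.0
    for c in range(1, circuits + 1):
        b = erlangs * b / (c + erlangs * b)
    return b

print(round(erlang_b(erlangs=8.0, circuits=16), 4))   # lightly loaded link
print(round(erlang_b(erlangs=16.0, circuits=16), 4))  # heavily loaded link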