An interconnection network connects various sources of information using a set of point-to-point links. A link may be a copper wire, an optical fiber, or a wireless connection. The nodes are autonomous data sources and can request to transfer any amount of information to any other node. Figure A2.1 shows an example network consisting of four nodes. Node A has links to nodes B and C. Node B is connected to nodes A and D. Nodes C and D are connected to nodes A and B, respectively. If node C wishes to send information to node B, it sends it to node A, which in turn routes it to node B; node A thus acts as an intermediate node. The capacity of a node is the amount of information it can transmit (its source capacity) or receive (its sink capacity). The capacity of a link is the amount of information that can be transferred over the link in one unit of time.
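As a concrete illustration, the four-node example of Fig. A2.1 can be stored as an adjacency list, and the relaying of information from node C to node B through intermediate node A can be traced with a simple breadth-first search. The following Python sketch is illustrative only; the node names follow the figure, and everything else is an assumption.

```python
from collections import deque

# Adjacency-list view of the example network of Fig. A2.1 (illustrative).
network = {
    "A": ["B", "C"],   # node A has links to nodes B and C
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}

def route(src, dst, topology):
    """Return a hop-by-hop path from src to dst using breadth-first search."""
    queue, visited = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbour in topology[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(route("C", "B", network))   # ['C', 'A', 'B']: node A acts as the intermediate node
```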
Network design deals with the interconnection of various nodes and with how information is transmitted from one node to another. Network architecture and design both have multiple meanings; the most common interpretation relates to the decisions one needs to make when designing a network. The four most important aspects of network architecture and design are described here.
Network topology
A topology defines how nodes are interconnected. For example, the topology of the NSF network is shown in Fig. A2.2. Most network topologies are hierarchical in nature.
Before the 1970s, networks were primarily used to carry voice or telephone calls over circuit-switched networks. Failures and service outages in such transport networks were handled mainly at the circuit layer, and often manually. Most remedial actions involved rerouting calls through manual configuration of switches by network operators. Over time the capacity of the transport networks increased, data overlay networks were created, and a large number of end-users instituted private voice and packet networks.
With the advent of fiber optic transmission systems and eventually wavelength-division multiplexing (WDM) the bandwidth of a single fiber soared. With increasing deployment of fibers in networks, the risk of losing large volumes of traffic due to a span cut or a node failure has also increased tremendously. In the 1990s Bellcore developed the SONET (synchronous optical network) standard and standardized the concept of self-healing rings. It was soon followed by the equivalent standard named SDH (synchronous digital hierarchy) in Europe. This appeared to be the final solution. Many service providers could replace all of their cumbersome and expensive point-to-point transmission systems with a few multi-node, self-healing SONET rings. Many carriers joined the SONET ring bandwagon.
With further developments in technology, more and more mesh-network topologies started emerging. Failure management remained part of the solution, and recovering from failures remained a challenging issue. This fueled a question that persists to this day: which is the better option, ring-based or mesh-based restoration? Over the years, as traffic increased, a mesh-based approach came to seem more viable for providing restoration than a traditional ring network.
A network can be designed using various topologies. Many interconnection networks have been proposed by the research community; some have been prototyped but few have progressed to become commercial products. A network may be static or dynamic. The topologies can be divided into two categories: (i) regular and (ii) irregular. The regular topologies follow a well-defined function to interconnect nodes. The regularity, symmetry, and most often the strong connectivity of the regular network topologies make them suitable for general purpose interconnection structures where the characteristics of the traffic originating from all nodes are identical and destinations are uniformly distributed over the set of nodes. Thus, the link traffic is also uniformly distributed. The irregular topologies are optimized based on the traffic demands. If there is a high traffic flow between two nodes, then they may be connected using a direct link. If a direct link is not feasible, then an alternative is to provide a short path between the two nodes. Such designs are much more involved and need special attention.
We will first discuss regular topologies and then get into the design of irregular topologies. We will also discuss some specific regular topologies, such as a binary cube and its variations, in greater detail.
Regular topologies
Several regular topologies have been proposed in the literature. The most important among these are the completely connected graph, star, tree, ring, multi-ring, mesh, and hypercube topologies. One desirable property of such a structure is the ability to accommodate or embed an arbitrary permutation.
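To show how a regular topology is defined by a simple interconnection rule, the sketch below generates the edge list of a binary n-cube (hypercube), one of the topologies discussed later: two nodes are adjacent when their binary labels differ in exactly one bit. The code is illustrative and assumes the usual binary-label convention.

```python
def hypercube_edges(n):
    """Return the edge list of an n-dimensional binary cube with 2**n nodes."""
    edges = []
    for node in range(2 ** n):
        for bit in range(n):
            neighbour = node ^ (1 << bit)   # flip one bit of the node label
            if node < neighbour:            # avoid listing each edge twice
                edges.append((node, neighbour))
    return edges

print(hypercube_edges(3))   # the 12 edges of the 3-cube
```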
A routing algorithm establishes an appropriate path from any given source to a destination. The objective of network routing is to maximize network throughput with minimal cost in terms of path length. To maximize throughput, a routing algorithm has to provide as many communication paths as possible. To minimize the cost of paths, the shortest paths have to be provided. However, there is always a trade-off between these two objectives. Most routing algorithms are based on assigning a cost measure to each link in a network. The cost could be a fixed quantity related to parameters such as the link length, the bandwidth of a link, or the estimated propagation delay. Each link has a cost associated with it and in most cases it is assumed that the links have equal cost.
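As a minimal sketch of shortest-path routing over such a link-cost model, the following code applies Dijkstra's algorithm to a graph with per-link costs. The graph and costs are illustrative assumptions; with equal link costs, as assumed in most cases above, this reduces to minimum-hop routing.

```python
import heapq

def dijkstra(graph, src, dst):
    """graph: {node: {neighbour: cost}}. Returns (total_cost, path) or None."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, link_cost in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(heap, (cost + link_cost, neighbour, path + [neighbour]))
    return None

# Illustrative four-node graph with unit link costs.
graph = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "D": 1},
         "C": {"A": 1}, "D": {"B": 1}}
print(dijkstra(graph, "C", "D"))   # (3, ['C', 'A', 'B', 'D'])
```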
An interconnection network is strictly non-blocking if there exists a routing algorithm that can add a new connection without disturbing existing connections. A network is rearrangeable if every permutation or allowable set of requests can be realized, possibly by rearranging existing connections. Otherwise the network is blocking.
The store-and-forward operation in packet switching incurs a time delay and causes significant performance degradation. If the algorithm is used in a packet-switching network, the total time delay of a data packet is obtained by summing the delay at each intermediate node. Since the unavailability of any link along a route makes the entire route unavailable, the network experiences a high probability of blocking under heavy traffic; incoming requests are rejected, eventually causing data loss or delay.
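As a small worked illustration of the delay model just described, the end-to-end delay of a packet is simply the sum of the per-hop delays. The numbers below are purely illustrative assumptions.

```python
def end_to_end_delay(per_hop_delays_ms):
    """Total store-and-forward delay along a route, in milliseconds."""
    return sum(per_hop_delays_ms)

# e.g. a three-hop route with transmission plus queueing delay at each node
print(end_to_end_delay([1.2, 0.8, 2.5]))   # 4.5 ms
```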
In the last chapter, the characteristics of traffic grooming WDM networks with arbitrary topologies were studied from the perspective of blocking performance. It has been shown that the blocking performance is affected not only by the link traffic and the routing and wavelength assignment strategy, but also by the arrival rates of different low-rate traffic streams, their respective holding times and, more importantly, the capacity distribution of the wavelengths on the links. In such networks, call requests arrive randomly and can request a low-rate traffic connection to be established between the source and the destination. Under dynamic traffic conditions, call requests that ask for capacity closer to that of the full wavelength experience a higher probability of blocking than those that ask for a smaller fraction. In fact, the difference in blocking performance between the high- and low-capacity traffic streams becomes more significant as the traffic stream switching capability of the network increases. This difference in blocking performance for different capacities is directly affected by the routing and wavelength assignment policy used to route the call request. Hence, it is important that a call request is served in a fair manner commensurate with the capacity it requests. This capacity fairness is different from the fairness measure based on hop count that has traditionally been addressed in the literature.
In optical networks without wavelength conversion, the wavelength continuity constraint increases the probability that a call request is blocked.
The popularity of the Internet and of internet protocol (IP) based services promises enormous growth in data traffic originating from hosts that are IP endpoints. This growth is being fueled by various applications, such as those driven by the World Wide Web (WWW), and by the indirect impact of increased computing power and storage capacity at the end systems. The advent of new services with increasing intelligence, and the corresponding bandwidth demands, further adds to the traffic growth. New access technologies such as asymmetric digital subscriber line (ADSL), high-bit-rate digital subscriber line (HDSL), and fiber to the home (FTTH) will remove access bottlenecks and drive even faster growth of demand on the backbone network. As noted earlier, these changing trends have led to a fundamental shift in traffic patterns, and the traffic is now mostly due to data communications.
In the past, the amount of data traffic on carrier networks was small compared with voice-centric traffic. Therefore, the carrier networks were designed primarily to support voice traffic, and the data traffic was transmitted using the voice channels. Now, the core networks are being designed primarily for data traffic with voice support at the edges. Voice traffic can be carried in the core networks using “voice-over-IP” or similar paradigms. To meet these growing demands, the use of WDM will continue to increase in backbone networks. Architectures will be required to satisfy the need for better quality of service (QoS), protection, and availability guarantees in IP networks.
Earlier networks used electronics for both transmission and processing. Hence, the transmission and processing bandwidths at the nodes were of approximately the same order. Electronic technology advanced simultaneously on the transmission and processing sides, leading to matched growth in the evolution of the networks. With the shift to optical technology, the transmission capacity has taken a quantum leap while electronic processing capacity has seen only modest improvements. Optical processing is currently in its infancy, and therefore the backbone networks are likely to remain circuit-switched, with the possibility of optical switching at intermediate nodes.
The increase in transmission capacity, in terms of multiple wavelengths each operating at a few tens of gigabits per second with multiple time slots within a wavelength, requires an equivalent increase in electronic processing for efficient operation of the networks. However, it is impractical to match the power of optical technology with that of electronics if the nodes were to process all the information received from the different links to which they are connected. Hence, the switching trend is toward multiple simple processing devices that work independently on parts of the information received at a node. Such a network model is referred to as a trunk-switched network (TSN). A TSN is a two-level network model in which a link is viewed as multiple channels, and channels are combined to form groups called trunks. This conceptual architecture is capable of grooming subwavelength-level traffic over a link.
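The two-level link view of a TSN can be sketched as a simple data structure: a link carries several wavelengths, each wavelength carries several channels (time slots), and channels are grouped into trunks. The grouping below (one trunk per wavelength) is an illustrative assumption, not the only possible trunk formation.

```python
from dataclasses import dataclass, field

@dataclass
class Trunk:
    channels: list = field(default_factory=list)   # channel identifiers in this trunk

@dataclass
class Link:
    trunks: list = field(default_factory=list)

def build_link(num_wavelengths, channels_per_wavelength):
    """View a link as trunks of channels: here, one trunk per wavelength."""
    link = Link()
    for w in range(num_wavelengths):
        link.trunks.append(
            Trunk(channels=[(w, c) for c in range(channels_per_wavelength)]))
    return link

link = build_link(num_wavelengths=4, channels_per_wavelength=16)
print(len(link.trunks), len(link.trunks[0].channels))   # 4 trunks, 16 channels each
```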
As mentioned in earlier chapters, due to the high bandwidths involved, any link failure in the form of a fiber cut has catastrophic results unless protection and restoration schemes for the interrupted services form an integral part of the network design and operation strategies. Although network survivability can be implemented in the higher layers above the optical network layer (e.g., self-healing in SONET rings and the ATM virtual path layer, fast rerouting in MPLS and changing routes using dynamic routing protocols in the IP layer), it is advantageous to use optical WDM survivability mechanisms since they offer a common survivability platform for services to the higher layers. For example, it is possible that several IP routes may eventually be routed through the same fiber. Hence the failure of a single fiber may affect multiple routes, possibly alternative paths for an IP route. Thus, protection at the IP layer requires complete knowledge of the underlying physical fiber topology.
As discussed earlier, a variety of optical path protection schemes can be designed using concepts such as disjoint dedicated backup paths, shared backup multiplexing, and joint primary/backup routing and wavelength assignment. Lightpath restoration schemes, on the other hand, do not rely on prerouted backup channels but instead dynamically recompute new routes to effectively reroute the affected traffic after link failure. Although this saves bandwidth, the timescale for restoration can be difficult to specify and can be of the order of hundreds of milliseconds. Hence in a dynamic scenario, path protection schemes are likely to be more useful and practical than path restoration schemes.
The p-cycle (preconfigured protection cycle) is a cycle-based protection method. It can be characterized as the embedding of multiple rings to act as protection cycles in a mesh network. The p-cycles are configured with spare network capacity to provide protection to connections. The design goal of p-cycle protection is to retain the capacity efficiency of a mesh-restorable network while approaching the speed of a line-switched self-healing ring. In p-cycle protection, when a link fails, only the end nodes of the failed link need to perform real-time switching. This makes p-cycles similar to SONET/SDH line-switched rings in terms of the speed of recovery from link failures. The key difference between p-cycle and ring protection is that p-cycle protection protects not only the links on the cycle, as in ring protection, but also straddling links. A straddling link is an off-cycle link whose two end nodes are both on the cycle. This property effectively improves the capacity efficiency of p-cycles. Figure 4.1 depicts an example that illustrates p-cycle protection. In Fig. 4.1(a), A–B–C–D–E–A is a p-cycle formed using reserved capacity on the links for protection. When an on-cycle link A–B fails, the p-cycle can provide protection as shown in Fig. 4.1(b). When a straddling link B–D fails, the p-cycle protects two working paths on the link by providing two alternate paths, as shown in Figs. 4.1(c) and (d), for the entire traffic on the link in both directions.
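The distinction between on-cycle and straddling links can be captured in a few lines of code. The sketch below (illustrative, not from the text) classifies a failed link relative to a p-cycle given as a node sequence; the cycle A–B–C–D–E–A matches Fig. 4.1.

```python
def classify_link(p_cycle, link):
    """Classify a failed link relative to a p-cycle given as a node sequence."""
    a, b = link
    n = len(p_cycle)
    on_cycle = any({p_cycle[i], p_cycle[(i + 1) % n]} == {a, b} for i in range(n))
    if on_cycle:
        return "on-cycle: one protection path around the rest of the cycle"
    if a in p_cycle and b in p_cycle:
        return "straddling: two protection paths, one around each side of the cycle"
    return "unprotected by this p-cycle"

cycle = ["A", "B", "C", "D", "E"]
print(classify_link(cycle, ("A", "B")))   # on-cycle link, as in Fig. 4.1(b)
print(classify_link(cycle, ("B", "D")))   # straddling link, as in Figs. 4.1(c) and (d)
```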
The focus of this chapter is to provide an analytical framework and to obtain some insight into how traffic grooming affects performance in terms of the call blocking probability in different network topologies. Specifically, the performance of constrained and sparse grooming networks is compared using simulation-based studies. Constrained grooming corresponds to the case where grooming is performed only at the SONET-ADMs on an end-to-end basis. Sparse grooming corresponds to the case where, in addition to grooming at the SONET-ADMs, the cross-connects at some or all of the nodes are provided with a traffic stream switching capability. The goal is to develop techniques to minimize electronic equipment costs and to provide solutions for efficient WDM network designs.
It has been established that wavelength conversion, that is, the ability of a routing node to convert one wavelength to another, reduces wavelength conflicts and improves performance by reducing the blocking probability. Lower bounds on the blocking probability of an arbitrary network for any routing and wavelength assignment algorithm are known. It has further been shown that the use of wavelength converters results in a 10–40% increase in wavelength reuse. A reduced load approximation scheme has been used to calculate the blocking probabilities of the optical network model for two routing schemes, fixed routing and least-loaded routing; this model does not consider the load correlation between the links. Analytical models of networks using fixed routing and random wavelength assignment that take wavelength correlation into account have also been developed.
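Reduced load approximations of the kind mentioned above typically build on a per-link blocking computation such as the Erlang B formula. As a simplified, illustrative building block only (the full models, including load correlation and the wavelength continuity constraint, are considerably more involved), the standard recursion can be computed as follows.

```python
def erlang_b(load, circuits):
    """Erlang B blocking probability of a single link with `circuits` servers
    (e.g. wavelengths) offered `load` Erlangs, via the recursion
    B(0) = 1, B(c) = load*B(c-1) / (c + load*B(c-1))."""
    b = 1.0
    for c in range(1, circuits + 1):
        b = load * b / (c + load * b)
    return b

# Illustrative numbers: blocking on a link with 8 wavelengths and 5 Erlangs of load.
print(round(erlang_b(load=5.0, circuits=8), 4))
```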
Various lightpath protection schemes for a survivable WDM grooming network with dynamic traffic were investigated in Chapter 12. The nodes in the WDM grooming network are assumed to include ADM (add–drop multiplexer) constrained grooming nodes. This chapter deals with static survivable WDM grooming network design with wavelength-continuity-constrained grooming nodes. For static traffic, the problem of grooming subwavelength-level requests in mesh-restorable WDM networks, together with the corresponding path selection and wavelength assignment problems, is formulated as an ILP optimization problem.
Design problem
To address the survivable grooming network design problem, a network with W wavelengths per fiber and K disjoint alternate paths for each s-d pair can be viewed as W × K networks, with each of them representing a single wavelength network. For K = 2, the first W networks contain the first alternate path for each s-d pair on each wavelength. We number the networks from 1 to W, according to the wavelengths associated with them. The second set of W networks contains the second alternate path for each s-d pair on each wavelength. These networks are numbered from W + 1 to 2W, where the (W + i)th network represents the same wavelength as the ith network, i = 1, 2, …, W. Figure 13.1 illustrates this layered model for a six-node network with three wavelengths and two link-disjoint alternate paths. For each node-pair, it also depicts routing of two alternate paths for two connections in the network.
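The numbering just described maps a (wavelength, alternate path) pair to a single-wavelength network index. The following helper is an illustrative restatement of that rule, not part of the chapter's formulation.

```python
def layer_index(wavelength, alternate_path, W):
    """1-based index of the single-wavelength network for wavelength 1..W and
    alternate path number 1..K: index = wavelength + (alternate_path - 1) * W."""
    return wavelength + (alternate_path - 1) * W

W = 3   # three wavelengths, as in the six-node example of Fig. 13.1
print(layer_index(2, 1, W))   # 2: first alternate path on wavelength 2
print(layer_index(2, 2, W))   # 5 = W + 2: second alternate path, same wavelength
```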
The two most important objectives for network operation are:
(i) capacity minimization
(ii) revenue maximization.
For capacity minimization, there are three phases in survivable WDM network operation: (i) initial call setup, (ii) short-/medium-term reconfiguration, and (iii) long-term reconfiguration. Each of the three optimization problems may be modeled separately using an ILP formulation. This chapter presents a single ILP formulation that can incorporate all three phases of network operation. This common framework also takes service disruption into consideration. Typically, most design problems in optical networks have considered a static traffic demand and have tried to optimize the network cost assuming various cost models and survivability paradigms. Fast restoration is a key feature addressed in the designs. Once the network is provisioned, the critical issue is how to operate the network so that its performance is optimized under dynamic traffic.
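To give a flavour of the kind of ILP that arises in the initial call setup phase, the sketch below (using the open-source PuLP library) chooses one candidate path per demand so that the total reserved link capacity is minimized subject to per-link capacity limits. The demands, candidate paths, and capacities are illustrative assumptions; this is not the chapter's full formulation, which also covers the reconfiguration phases and service disruption.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Candidate paths per demand, given as lists of links (illustrative).
demands = {"d1": [["A-B"], ["A-C", "C-B"]],
           "d2": [["A-C"], ["A-B", "B-C"]]}
capacity = {"A-B": 1, "A-C": 1, "C-B": 1, "B-C": 1}   # capacity units per link

prob = LpProblem("initial_call_setup", LpMinimize)
x = {(d, k): LpVariable(f"x_{d}_{k}", cat=LpBinary)
     for d, paths in demands.items() for k in range(len(paths))}

# Objective: minimize the total number of link-units reserved.
prob += lpSum(x[d, k] * len(demands[d][k]) for d, k in x)
# Each demand is routed on exactly one of its candidate paths.
for d, paths in demands.items():
    prob += lpSum(x[d, k] for k in range(len(paths))) == 1
# Link capacity constraints.
for link, cap in capacity.items():
    prob += lpSum(x[d, k] for d, k in x if link in demands[d][k]) <= cap

prob.solve()
print({v.name: v.value() for v in prob.variables()})
```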
The framework for revenue maximization is modified to include a service differentiation model based on lightpath protection. A multi-stage solution methodology is developed to solve the individual service classes sequentially and to combine them to obtain a feasible solution. Comparisons of the increase in revenue obtained for the various service classes against the base case of accepting demands without any protection show the gains in planning and operational efficiency.
Capacity minimization
Among the three phases of capacity minimization, the initial call setup phase is a static optimization problem in which the network capacity is optimized for the given topology and the traffic matrix to be provisioned on the network.
Optical components are devices that transmit, shape, amplify, switch, transport, or detect light signals. The improvements in optical component technologies over the past few decades have been the key enabler in the evolution and commercialization of optical networks. In this appendix, the basic principles behind the functioning of the various components are briefly reviewed. In general, there are three groups of optical components.
(i) Active components: devices that are electrically powered, such as lasers, wavelength shifters, and modulators.
(ii) Passive components: devices that are not electrically powered and that do not generate light of their own, such as fibers, multiplexers, demultiplexers, couplers, isolators, attenuators, and circulators.
(iii) Optical modules: devices that are a collection of active and/or passive optical elements used to perform specific tasks. This group includes transceivers, erbium-doped amplifiers, optical switches, and optical add/drop multiplexers.
Fiber optic cables
The backbone that connects all of the nodes and systems together is the optical fiber. The fiber allows signals of enormous frequency range (25 THz) to be transmitted over long distances without significant distortion in the information content. While there are losses in the fiber due to reflection, refraction, scattering, dispersion, and absorption, the bandwidth available in this medium is orders of magnitude greater than that provided by other conventional media such as copper cables. As will be explained below, the bandwidth available in the fiber is limited only by the attenuation characteristics of the medium at low frequencies and its dispersion characteristics at high frequencies.
Optical technology involves research into components, such as couplers, amplifiers, switches, etc., that form the building blocks of the networks. Some of the main components used in optical networking are described in Appendix A1. With the help of these components, one designs a network and operates it. Issues in network design include minimizing the total network cost, the ability of the network to tolerate failures, the scalability of the network to meet future demands based on projected traffic volumes, etc. The operational part of the network involves monitoring the network for proper functionality, routing traffic, handling dynamic traffic in the network, reconfiguring the network in the case of failure, etc. In this chapter, these issues are introduced in brief, followed by a discussion of the two main issues in network operation, namely survivability and traffic grooming, which concerns the management of smaller traffic streams.
Network design
Network design involves assigning sufficient resources in the network to meet the projected traffic demand. Typically, network design problems consider a static traffic matrix and aim to design a network that is optimized based on certain performance metrics. Network design problems employing a static traffic matrix are typically formulated as optimization problems. If the traffic pattern in the network is dynamic, i.e. the specific traffic is not known a priori, the design problem involves assigning resources based on certain projected traffic distributions. In the case of dynamic traffic the network designer attempts to quantify certain network performance metrics based on the distribution of the traffic. The most commonly used metric in evaluating a network under dynamic traffic patterns is the blocking probability.
A network is represented by a graph G = (V, E), where V is a finite set of elements called nodes or vertices, and E is a set of unordered pairs of nodes called edges or arcs. This is an undirected graph. A directed graph is also defined similarly except that the arcs or edges are ordered pairs. For both directed and undirected graphs, an arc or an edge from a node i to a node j is represented using the notation (i, j). Examples of five-node directed and undirected graphs are shown in Fig. A3.1. In an undirected graph, an edge (i, j) can carry data traffic in both directions (i.e. from node i to node j and from node j to node i), whereas in a directed graph, the traffic is only carried from node i to node j.
Graph representations. A graph is stored as either an adjacency matrix or an incidence matrix, as shown in Fig. A3.2. For a graph with N nodes, the adjacency matrix is an N × N 0–1 matrix that stores the link information: element (i, j) is 1 if node i has a link to node j. The incidence matrix, on the other hand, is an N × M matrix, where M is the number of links, numbered from 0 to M − 1: element (i, j) indicates whether link j is incident on node i. Thus, the incidence matrix records exactly which links are incident on each node.
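The two representations can be built directly from an edge list. The sketch below uses an illustrative five-node ring, not the graph of Fig. A3.2.

```python
def adjacency_matrix(n, edges):
    """N x N 0-1 matrix: element (i, j) is 1 if node i has a link to node j."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = A[j][i] = 1
    return A

def incidence_matrix(n, edges):
    """N x M matrix: element (i, m) is 1 if link m is incident on node i."""
    M = len(edges)                   # links numbered 0 .. M-1
    B = [[0] * M for _ in range(n)]
    for m, (i, j) in enumerate(edges):
        B[i][m] = B[j][m] = 1
    return B

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a five-node ring (illustrative)
print(adjacency_matrix(5, edges))
print(incidence_matrix(5, edges))
```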