As networks continue to grow rapidly in size and complexity, it has become increasingly clear that their evolution is closely tied to a detailed understanding of network traffic. Large IP networks are designed with the goal of providing high availability and low delay/loss while keeping operational complexity and cost low. Meeting these goals is a highly challenging task and can only be achieved through a detailed knowledge of the network and its dynamics.
Surprising as it may seem, IP network management today is primarily reactive in nature and relies on trial and error when problems arise. Network operators have limited visibility into the traffic that flows on top of their network, the operational state of the network elements and the behavior of the protocols responsible for routing traffic and reliably transmitting packets from end to end. Furthermore, design and planning decisions only partially rely on actual usage patterns. There are a few reasons for this state of affairs.
First, the designers of IP networks have traditionally attached less importance to network monitoring and resource accounting than to issues such as distributed management, robustness to failures and support for diverse services and protocols. Thus, IP network elements (routers and end hosts) have not been designed to retain detailed information about the traffic flowing through them, and IP protocols typically do not provide detailed information about the state of the underlying network.
The traffic matrix (TM) of a telecommunications network measures the total amount of traffic entering the network from any ingress point and destined to any egress point. The knowledge captured in the TM constitutes an essential input for optimal network design, traffic engineering and capacity planning. Despite its importance, however, the TM for an IP network is a quantity that has remained elusive to capture via direct measurement. The reasons for this are multiple. First, the computation of the TM requires the collection of flow statistics across the entire edge of the network, which may not be supported by all the network elements. Second, these statistics need to be shipped to a central location for appropriate processing. The shipping costs, coupled with the frequency with which such data would be shipped, translate to communications overhead, while the processing cost at the central location translates to computational overhead. Lastly, given the granularity at which flow statistics are collected with today's technology on a router, the construction of the TM requires explicit information on the state of the routing protocols, as well as the configuration of the network elements. The storage overhead at the central location thus includes routing state and configuration information. It has been widely believed that these overheads would be so significant as to render computation of backbone TMs, through measurement alone, not viable using today's flow monitors.
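To make these steps concrete, the following sketch (in Python, with hypothetical field names; the mapping from destination prefix to egress point stands in for the routing state and configuration information mentioned above) aggregates edge flow records into a TM:

    # Minimal sketch of TM construction from edge flow records.
    # Assumptions (illustrative, not the book's exact procedure): each record
    # carries its ingress node, destination prefix and byte count; 'egress_of'
    # resolves a destination prefix to an egress node using routing state.
    from collections import defaultdict

    def build_traffic_matrix(flow_records, egress_of):
        tm = defaultdict(int)
        for ingress, dst_prefix, byte_count in flow_records:
            egress = egress_of.get(dst_prefix)
            if egress is not None:  # skip prefixes with unknown routing state
                tm[(ingress, egress)] += byte_count
        return tm

    records = [("PoP-A", "10.0.0.0/8", 1200), ("PoP-A", "192.168.0.0/16", 800),
               ("PoP-B", "10.0.0.0/8", 500)]
    routing = {"10.0.0.0/8": "PoP-C", "192.168.0.0/16": "PoP-B"}
    print(dict(build_traffic_matrix(records, routing)))
    # {('PoP-A', 'PoP-C'): 1200, ('PoP-A', 'PoP-B'): 800, ('PoP-B', 'PoP-C'): 500}

Even this toy version makes the overheads visible: the flow records must be collected at every edge element and shipped to the central point, and the prefix-to-egress mapping must be kept consistent with the live routing state.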
The convergence of traditional network services onto a common IP infrastructure has resulted in a major paradigm shift for many service providers. Service providers are looking for profitable ways to deliver value-added, bundled or personalized IP services to a greater number of broadband users. As cable operators and Digital Subscriber Line (DSL) providers capitalize on IP networks, they need to create higher-margin, higher-value premium services, such as interactive gaming, Video-on-Demand (VoD), Voice-over-IP (VoIP) and broadband TV (IPTV). The missing element of the current strategy is service differentiation, i.e. the ability to understand at a granular level how subscribers are using the network, to identify which applications or services are being consumed, and then to apply network resources intelligently to the applications and subscribers that promise the highest return on investment. Operators need to manage and control subscriber traffic. This can be accomplished by introducing more intelligence into the network infrastructure, enhancing the transport network with application and subscriber awareness. Such visibility into the types of bits carried allows the network to identify and classify services, guarantee their performance and charge for them on the basis of application- and subscriber-specific criteria. Instead of underwriting the expenses associated with random and unconstrained capacity deployment and consumption, this new wave of network intelligence allows operators to consider Quality-of-Service (QoS) constraints while enabling new possibilities for broadband service creation and new revenue-sharing opportunities. The same is true of third-party service providers, who may, in fact, be riding an operator's network undetected.
End-to-end packet delay is an important metric to measure in networks, from both the network operation and application performance points of view. An important component of this delay is the time for packets to traverse the different switching elements along the path. This is particularly important for network providers, who may have Service Level Agreements (SLAs) specifying allowable values of delay across the domains they control. A fundamental building block of the path delay experienced by packets in IP networks is the delay incurred when passing through a single IP router. In this chapter we provide a detailed description of the operations performed on an IP packet when transiting an IP router, together with measurements of their respective times to completion, as collected on an operational high-end router. Our discussion focuses on the most commonly found router architecture, which is based on a crossbar switch.
To quantify the individual components of through-router delay, we present results obtained through a unique set of measurements that captures all packets transmitted on all links of an operational access router for a duration of 13 hours. Using this data set, this chapter studies the behavior of those router links that experienced congestion and reports on the magnitude and temporal structure of the resulting packet delays. Such an analysis reveals that cases of overload in operational IP links in the core of an IP network do exist, but tend to be of small magnitude and low frequency.
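As a rough illustration of how such per-packet delays can be derived from a data set of this kind, the sketch below (our simplification, not the chapter's exact methodology) matches packets captured on an input and an output link of the router by an invariant packet hash and takes the timestamp difference, assuming synchronized capture clocks:

    # Minimal sketch of through-router delay computation from two packet traces.
    # 'packet_hash' is assumed to be computed over fields that the router does
    # not modify (e.g. IP ID, addresses, a payload digest).

    def through_router_delays(input_trace, output_trace):
        # Each trace: iterable of (packet_hash, timestamp_in_seconds).
        arrival = {h: t for h, t in input_trace}
        delays = []
        for h, t_out in output_trace:
            t_in = arrival.get(h)
            if t_in is not None and t_out >= t_in:
                delays.append(t_out - t_in)
        return delays

    delays = through_router_delays(
        [("p1", 0.000010), ("p2", 0.000015)],
        [("p1", 0.000042), ("p2", 0.000061)])
    print([f"{d * 1e6:.0f} us" for d in delays])  # ['32 us', '46 us']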
This appendix lists the payload bit-strings used by a payload classifier.
Numbers in parentheses denote the byte position in the payload at which each string begins; if there is no number, the string is found at the beginning of the payload. In the notation used below, "plen" denotes the size of the payload; \x denotes hex; && denotes AND; || denotes OR; and plen - 2 = (1) denotes that the payload length minus 2 is given by the first byte of the payload.
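As an illustration of how such rules might be evaluated (our sketch; the signatures below are well-known stand-ins, not the appendix's rules), a payload classifier can test each bit-string at its stated position and combine tests with && and ||:

    # Minimal sketch of payload signature matching under the notation above.
    # NB: Python offsets are 0-based; the appendix's parenthesized byte
    # positions appear to start at 1.

    def match_at(payload: bytes, pattern: bytes, offset: int = 0) -> bool:
        # Offset 0 corresponds to a rule with no parenthesized number.
        return payload[offset:offset + len(pattern)] == pattern

    def classify(payload: bytes) -> str:
        plen = len(payload)
        # String at the beginning of the payload (BitTorrent handshake prefix).
        if match_at(payload, b"\x13BitTorrent"):
            return "bittorrent"
        # Rule combining && and ||: "GET " at the start AND ("HTTP/1.0" OR "HTTP/1.1").
        if match_at(payload, b"GET ") and (b"HTTP/1.0" in payload or b"HTTP/1.1" in payload):
            return "http"
        # Rule of the form plen - 2 = (1): payload length minus 2 equals the
        # value of the first payload byte.
        if plen >= 1 and plen - 2 == payload[0]:
            return "length-prefixed-protocol"
        return "unknown"

    print(classify(b"GET /index.html HTTP/1.1\r\n"))  # http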
Open, any-to-any connectivity is clearly one of the fundamentally great properties of the Internet. Unfortunately, the openness of the Internet also enables an expanding and ever-evolving array of malicious activity. During the early 1990s, when malicious attacks first emerged on the Internet, only a few systems at a time were typically compromised, and those systems were rarely used to continue or broaden the attack activity. At first, the attackers were seemingly motivated simply by the sport of it all. But then, as would seem to be the natural order of things, the miscreants were seized by the profit motive. Today, network infrastructure and end systems are constantly attacked with an ever-increasing level of sophistication and virulence.
In this chapter, we discuss and confront two of the most dangerous threats known to the Internet community: Denial of Service (DoS) attacks and computer worms; in the following we refer to them simply as DoS and worms. These two families of threats differ in goals, forms and effects from most of the attacks that are launched at networks and computers. Most attackers involved in cyber-crime seek to break into a system, extract its secrets, or fool it into providing a service without the appropriate authorization. Attackers commonly try to steal credit card numbers or proprietary information, gain control of machines to install their software or save their data, deface Web pages, or alter important content on victim machines.
As already presented, the Internet routing system is partitioned into tens of thousands of independently administered Autonomous Systems (ASs). The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol that maintains and exchanges routing information between ASs. However, BGP was designed on the basis of implicit trust among all participants and does not employ any measures to authenticate the routes injected into or propagated through the system. Therefore, virtually any AS can announce any route into the routing system, and sometimes bogus routes can trigger large-scale anomalies in the Internet. A canonical example occurred on April 25, 1997, when a misconfigured router maintained by a small service provider (AS7007) in Virginia, USA, injected incorrect routing information into the global Internet, claiming optimal connectivity to all Internet destinations. As a result, most Internet traffic was routed to this small ISP. The traffic overwhelmed the misconfigured and intermediate routers, and effectively crippled the Internet for almost two hours. Since then, many such events have been reported, some due to human mistakes, others due to malicious activities that exploited vulnerabilities in BGP to cause large-scale damage. For example, it is common for spammers to announce an arbitrary prefix and then use that prefix to send spam from the hijacked address space, making traceback and identification of the spammer much more difficult.
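As a simple illustration of the kind of check that can flag such events (our sketch, not a complete defense; real systems must also handle sub-prefix hijacks, multi-origin conflicts and route aging), one can compare the origin AS of each announcement against a registry of expected origins:

    # Minimal sketch of prefix-origin checking for BGP announcements.
    # 'expected_origins' would be built from stable routing history or a
    # routing registry; the ASNs and prefix below are illustrative.

    def origin_as(as_path):
        return as_path[-1]  # the origin AS is the last AS on the path

    def check_announcement(prefix, as_path, expected_origins):
        legit = expected_origins.get(prefix)
        if legit is not None and origin_as(as_path) not in legit:
            return (f"possible hijack: {prefix} originated by AS{origin_as(as_path)}, "
                    f"expected {sorted(legit)}")
        return None  # announcement is consistent with the registry

    expected = {"192.0.2.0/24": {64500}}
    print(check_announcement("192.0.2.0/24", [64496, 64499, 64511], expected))
    # possible hijack: 192.0.2.0/24 originated by AS64511, expected [64500]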
Since the 1950s, voice and video services such as telephony and television have established themselves as an integral part of everyone's life. Traditionally, voice and video service providers built their own networks to deliver these services to customers. However, tremendous technical advancements since the 1990s have revolutionized the mode of delivery of these services. Today, these services are delivered to the users over the Internet, and we believe that there are two main reasons for this: (i) delivering services over the Internet in IP packets is much more economical for voice and video service providers and (ii) the massive penetration of broadband (i.e. higher bandwidth) Internet service has ensured that the quality of voice and video services over the Internet is good enough for everyday use. The feasibility of a more economical alternative for voice and video services attracted many ISPs including Comcast, AT&T and Verizon, among several others, to offer these services to end users at a lower cost. However, non-ISPs, such as Skype, Google, Microsoft, etc. have also started offering these services to customers at extremely competitive prices (and, on many occasions, for free).
From an ISP's perspective, traffic classification has always been a critical activity for several important management tasks, such as traffic engineering, network planning and provisioning, security, billing and Quality of Service (QoS). Given the popularity of voice and video services over the Internet, it has now become all the more important for ISPs to identify the voice and video traffic of other service providers, for three reasons.
Since the late 1990s, significant interest and attention from the research community have been devoted to understanding the key drivers of how ISP networks are designed, built and operated. While recent work by empiricists and theoreticians has emphasized certain statistical and mathematical properties of network structures and their behaviors, this part of the book presents in great detail an optimization-based perspective that focuses on the objectives, constraints and other drivers of engineering design. This perspective should help the community gain better insight into this fascinating world and enable the design of more "realistic" models.
In this chapter we introduce the area of IP network design and the factors commonly used to drive such a process. Our discussion revolves around IP-over-WDM networks, and we define the network design problem as the end-to-end process aimed at identifying the “right” IP topology, the associated routing strategy and its mapping over the physical infrastructure in order to guarantee the efficient utilization of network resources, a high degree of resilience to failures and the satisfaction of SLAs.
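As a rough illustration (our notation, not the book's formulation), once a candidate topology is fixed, the routing component of this problem can be cast as a multicommodity-flow optimization that minimizes the maximum link utilization r:

    \min\ r \quad \text{s.t.} \quad
      \sum_{e \in \delta^+(v)} x^{st}_e - \sum_{e \in \delta^-(v)} x^{st}_e =
        \begin{cases} d_{st} & v = s \\ -d_{st} & v = t \\ 0 & \text{otherwise} \end{cases}
        \quad \forall v,\ \forall (s,t),
    \qquad \sum_{(s,t)} x^{st}_e \le r\, c_e \quad \forall e,
    \qquad x^{st}_e \ge 0,

where d_{st} are the traffic-matrix demands, c_e the link capacities and x^{st}_e the amount of demand (s,t) routed over link e. Resilience and SLA requirements would enter as additional constraints, e.g. by requiring the capacity bound to hold under every single-link failure.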
We start by providing a high-level overview of the IP-over-WDM technology. We highlight the properties of the physical and IP layers (the IP layer is also known as the logical layer), we discuss their relationship, and introduce the terminology that will be extensively used in the following chapters. Then, we introduce the processes encountered in IP network design and their driving factors.
Due to the challenges of obtaining an AS topology annotated with AS relationships, it is infeasible to use the valley-free rule to identify redistribution path spoofing in this work. Instead, we apply the direction-conforming rule to the AS topology annotated with directed AS-links to carry out the detection. The following theorems show that the direction-conforming rule achieves roughly equivalent detection capability.
Theorem D.1
For an observer AS, a valley-free path in the AS topology annotated with AS relationships must be “direction-conforming” in the corresponding AS topology annotated with inferred directed AS-links.
Theorem D.2
(1) For a Tier-1 AS, the direction-conforming paths in the AS topology annotated with inferred directed AS-links must be valley-free in the real AS topology annotated with AS relationships.
(2) For a non-Tier-1 AS, the direction-conforming paths must be valley-free, except for redistribution path-spoofing paths launched by the AS's providers.
In order to prove these theorems, we first investigate the mapping between the real AS topology annotated with AS relationships and the inferred AS topology annotated with directed AS-links.
Note that, as in the analysis in the text, we assume that the inferred topology is "ideally" complete, namely that it contains all legitimate directed AS-links that the observer AS should see. In order to infer a complete AS topology comprising directed AS-links from the route announcements received by the observer AS, we assume an ideal inference scenario, in which the AS connections and relationships do not change over the inference period and every AS tries all possible valid routes.
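To make the two rules concrete, the sketch below (our illustration of the standard definitions, not the book's code) checks the valley-free property of an AS path given pairwise relationships; the direction-conforming check is analogous, requiring every link of the path to appear with matching direction in the inferred set of directed AS-links:

    # Minimal sketch of the valley-free check. 'rel(a, b)' is an assumed
    # relationship oracle returning "c2p" (a is a customer of b), "p2c"
    # (a is a provider of b) or "p2p" (a and b are peers).

    def is_valley_free(path, rel):
        # Valley-free: zero or more c2p links, at most one p2p link,
        # then zero or more p2c links.
        phase = "up"
        for a, b in zip(path, path[1:]):
            r = rel(a, b)
            if r == "c2p":
                if phase != "up":
                    return False  # climbing again after peering or descending
            elif r == "p2p":
                if phase != "up":
                    return False  # at most one peer link, only at the top
                phase = "down"
            elif r == "p2c":
                phase = "down"
            else:
                return False      # unknown relationship
        return True

    rels = {("a", "b"): "c2p", ("b", "c"): "p2p", ("c", "d"): "p2c"}
    print(is_valley_free(["a", "b", "c", "d"], lambda x, y: rels.get((x, y))))  # True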
In industrial automation the aim is to control and optimise production processes and to provide high-quality, reliable products and services while minimising material, cost and energy waste. Automation systems rely on smart sensors, actuators and other industrial equipment such as robotic and mechatronic components. Open, standardised communication networks are employed for the communication with, as well as the configuration and control of, the various automation components. The standard architecture consists of PLCs (Programmable Logic Controllers) or DCSs (Distributed Control Systems), fieldbus systems, and PCs serving as man/machine interfaces, as well as intelligent sensors and actuators (e.g. frequency converters). The fieldbus systems gather the signals from the process level, or from sensors and actuators with fieldbus interfaces, and are directly connected to distributed or centralised control devices such as PLCs.
The standard IEC 61131-3 of the International Electrotechnical Commission provides a range of programming notations suitable for implementation on PLCs. It comprises basic notations close to those used in electrical engineering, such as contact plans (ladder diagrams), instruction lists and function plans (function block diagrams), as well as graphical and textual programming notations called sequential function charts and structured text. Currently, the development of software in automation technology proceeds step by step along the life cycle, using the notations of this standard and different tools provided by different PLC vendors.
A problem is that different PLC vendors use their own variants of the standard with different syntax, semantics, and tool sets. Also, the approaches based on the standard are not well suited for the development of distributed applications and applications with hard real-time requirements.