The convergence of traditional network services onto a common IP infrastructure has resulted in a major paradigm shift for many service providers. Service providers are looking for profitable ways to deliver value-added, bundled or personalized IP services to a greater number of broadband users. As cable operators and Digital Subscriber Line (DSL) providers capitalize on IP networks, they need to create higher-margin, higher-value premium services, such as interactive gaming, Video-on-Demand (VoD), Voice-over-IP (VoIP) and broadband TV (IPTV). The missing element of the current strategy is service differentiation, i.e. the ability to understand at a granular level how subscribers are using the network, identify which applications or services are being consumed, and then intelligently apply network resources to the applications and subscribers that promise the highest return on investment. Operators need to manage and control subscriber traffic. This can be accomplished by introducing more intelligence into the network infrastructure, which enhances the transport network with application and subscriber awareness. Such unique visibility into the types of bits carried allows the network to identify, classify, guarantee performance and charge for services based on unique application and subscriber criteria. Instead of underwriting the expenses associated with random and unconstrained capacity deployment and consumption, this new wave of network intelligence allows operators to take Quality-of-Service (QoS) constraints into account while enabling new possibilities for broadband service creation and new revenue-sharing opportunities. The same is true for third-party service providers, who may, in fact, be riding an operator's network undetected.
End-to-end packet delay is an important metric to measure in networks, from both the network-operation and application-performance points of view. An important component of this delay is the time for packets to traverse the different switching elements along the path. This is particularly important for network providers, who may have SLAs specifying allowable values of delay across the domains they control. A fundamental building block of the path delay experienced by packets in IP networks is the delay incurred when passing through a single IP router. In this chapter we give a detailed description of the operations performed on an IP packet when transiting an IP router, together with measurements of their respective times to completion, as collected on an operational high-end router. Our discussion focuses on the most commonly found router architecture, which is based on a crossbar switch.
To quantify the individual components of through-router delay, we present results obtained through a unique set of measurements that captures all packets transmitted on all links of an operational access router for a duration of 13 hours. Using this data set, this chapter studies the behavior of those router links that experienced congestion and reports on the magnitude and temporal structure of the resulting packet delays. Such an analysis reveals that cases of overload in operational IP links in the core of an IP network do exist, but tend to be of small magnitude and low frequency.
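The basic computation behind such a data set is simple: match each packet captured on an ingress link with the same packet captured on an egress link, and subtract timestamps. The sketch below illustrates this matching step under assumptions of ours; the record format and the idea of keying packets by a hash of invariant header fields are illustrative, not the actual trace format used in the measurements.

```python
# Hedged sketch: computing per-packet through-router delays by matching
# packet records captured on an ingress link and an egress link.
# `key` stands in for a hash of invariant fields (e.g. IP ID, addresses,
# payload digest); the exact fields are an assumption for illustration.
from dataclasses import dataclass


@dataclass(frozen=True)
class PacketRecord:
    key: bytes        # hash of invariant packet fields (hypothetical)
    timestamp: float  # capture time, in seconds


def through_router_delays(ingress, egress):
    """Match packets by key and return the delays (egress time - ingress time)."""
    seen = {p.key: p.timestamp for p in ingress}
    delays = []
    for p in egress:
        t_in = seen.get(p.key)
        if t_in is not None and p.timestamp >= t_in:
            delays.append(p.timestamp - t_in)
    return delays
```

Unmatched egress packets (e.g. those that entered on an unmonitored link) are simply skipped, which mirrors the need for full coverage of all router links in the actual experiment.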
This appendix lists the payload bit-strings used by a payload classifier.
Numbers in parentheses denote the beginning byte in the payload where each string is found; if there is no number, the string is found at the beginning of the payload. Note that "plen" denotes the size of the payload; "\x" denotes hex; "&&" denotes AND; "∥" denotes OR; and "plen - 2 = (1)" denotes that the payload length minus 2 is given by the first byte in the payload.
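As an illustration, such signature rules translate directly into code. The sketch below is a minimal, hypothetical classifier of our own; the two signatures shown (the HTTP `GET ` method string and the 0x13 "BitTorrent" handshake prefix) are well-known examples, while the function names and rule structure are assumptions for illustration.

```python
# Minimal sketch of payload-signature matching. Offsets here are 0-based;
# the parenthesized numbers in the appendix tables may use a different
# convention, so treat the offset handling as an assumption.
def matches(payload: bytes, pattern: bytes, offset: int = 0) -> bool:
    """True if `pattern` occurs in `payload` starting at byte `offset`."""
    return payload[offset:offset + len(pattern)] == pattern


def classify(payload: bytes) -> str:
    # 0x13 followed by "BitTorrent" opens the BitTorrent handshake.
    if matches(payload, b"\x13BitTorrent"):
        return "bittorrent"
    # "GET " at the beginning of the payload indicates an HTTP request.
    if matches(payload, b"GET "):
        return "http"
    return "unknown"
```

Composite rules using "&&" and "∥" correspond to combining several `matches` calls with Python's `and`/`or`.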
Open, any-to-any connectivity is clearly one of the fundamentally great properties of the Internet. Unfortunately, the openness of the Internet also enables an expanding and ever-evolving array of malicious activity. During the early 1990s, when malicious attacks first emerged on the Internet, only a few systems at a time were typically compromised, and those systems were rarely used to continue or broaden the attack activity. At first, the attackers were seemingly motivated simply by the sport of it all. But then, as would seem to be the natural order of things, the miscreants were seized by the profit motive. Today, network infrastructure and end systems are constantly attacked with an increasing level of sophistication and virulence.
In this chapter, we discuss two of the most dangerous threats known to the Internet community: Denial of Service (DoS) attacks and computer worms. In the following we refer to them simply as DoS and worms. These two families of threats have different goals, forms and effects from most of the attacks launched at networks and computers. Most attackers involved in cyber-crime seek to break into a system, extract its secrets, or fool it into providing a service without the appropriate authorization. Attackers commonly try to steal credit card numbers or proprietary information, gain control of machines to install their software or save their data, deface Web pages, or alter important content on victim machines.
As already presented, the Internet routing system is partitioned into tens of thousands of independently administered Autonomous Systems (ASs). The Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol that maintains and exchanges routing information between ASs. However, BGP was designed on the basis of implicit trust between all participants and does not employ any measures to authenticate the routes injected into or propagated through the system. Therefore, virtually any AS can announce any route into the routing system, and sometimes bogus routes can trigger large-scale anomalies in the Internet. A canonical example occurred on April 25, 1997, when a misconfigured router maintained by a small service provider (AS7007) in Virginia, USA, injected incorrect routing information into the global Internet, claiming optimal connectivity to all Internet destinations. As a result, most Internet traffic was routed to this small ISP. The traffic overwhelmed the misconfigured and intermediate routers, and effectively crippled the Internet for almost two hours. Since then, many such events have been reported, some due to human mistakes, others due to malicious activities that exploited vulnerabilities in BGP to cause large-scale damage. For example, it is common for spammers to announce an arbitrary prefix and then use that prefix to send spam from the hijacked address space, making traceback and identification of the spammer much more difficult.
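To make the vulnerability concrete, consider a monitor that compares the origin AS of each announcement against a registry of expected origins; BGP itself performs no such check, which is precisely why bogus announcements propagate. The record format and registry below are hypothetical, a sketch rather than any deployed system.

```python
# Hypothetical sketch: flag announcements whose origin AS (the last hop
# of the AS path, by BGP convention) differs from the registered origin
# for that prefix. The registry mapping is assumed for illustration.
def origin_anomalies(announcements, registered_origin):
    """announcements: iterable of (prefix, as_path) tuples."""
    alerts = []
    for prefix, as_path in announcements:
        origin = as_path[-1]
        expected = registered_origin.get(prefix)
        if expected is not None and origin != expected:
            alerts.append((prefix, origin, expected))
    return alerts
```

A check of this flavor would have flagged the AS7007 incident, since that AS was not the registered origin of the prefixes it announced.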
Since the 1950s, voice and video services such as telephony and television have established themselves as an integral part of everyone's life. Traditionally, voice and video service providers built their own networks to deliver these services to customers. However, tremendous technical advancements since the 1990s have revolutionized the mode of delivery of these services. Today, these services are delivered to users over the Internet, and we believe that there are two main reasons for this: (i) delivering services over the Internet in IP packets is much more economical for voice and video service providers, and (ii) the massive penetration of broadband (i.e. higher-bandwidth) Internet service has ensured that the quality of voice and video services over the Internet is good enough for everyday use. The feasibility of a more economical alternative for voice and video services attracted many ISPs, including Comcast, AT&T and Verizon, to offer these services to end users at a lower cost. However, non-ISPs such as Skype, Google and Microsoft have also started offering these services to customers at extremely competitive prices (and, on many occasions, for free).
From an ISP's perspective, traffic classification has always been a critical activity for several important management tasks, such as traffic engineering, network planning and provisioning, security, billing and Quality of Service (QoS). Given the popularity of voice and video services over the Internet, it has now become all the more important for ISPs to identify voice and video traffic from other service providers for three reasons.
Since the late 1990s, the research community has devoted significant interest and attention to understanding the key drivers of how ISP networks are designed, built and operated. While recent work by empiricists and theoreticians has emphasized certain statistical and mathematical properties of network structures and their behaviors, this part of the book presents in great detail an optimization-based perspective that focuses on the objectives, constraints and other drivers of engineering design. We believe this perspective will help the community gain better insight into this fascinating world and enable the design of more "realistic" models.
In this chapter we introduce the area of IP network design and the factors commonly used to drive such a process. Our discussion revolves around IP-over-WDM networks, and we define the network design problem as the end-to-end process aimed at identifying the “right” IP topology, the associated routing strategy and its mapping over the physical infrastructure in order to guarantee the efficient utilization of network resources, a high degree of resilience to failures and the satisfaction of SLAs.
We start by providing a high-level overview of the IP-over-WDM technology. We highlight the properties of the physical and IP layers (the IP layer is also known as the logical layer), we discuss their relationship, and introduce the terminology that will be extensively used in the following chapters. Then, we introduce the processes encountered in IP network design and their driving factors.
Due to the challenges of obtaining an AS topology annotated with AS relationships, it is infeasible in this work to use the valley-free rule to identify redistribution path spoofing. Instead, we apply the direction-conforming rule to the AS topology annotated with directed AS-links to carry out the detection. The following theorems show that the direction-conforming rule is roughly equivalent in effectiveness.
Theorem D.1
For an observer AS, a valley-free path in the AS topology annotated with AS relationships must be “direction-conforming” in the corresponding AS topology annotated with inferred directed AS-links.
Theorem D.2
(1) For a Tier-1 AS, the direction-conforming paths in the AS topology annotated with inferred directed AS-links must be valley-free in the real AS topology annotated with AS relationships.
(2) For a non-Tier-1 AS, the direction-conforming paths must be valley-free, except for redistribution path-spoofing paths launched by the AS's providers.
In order to prove these theorems, we first investigate the mapping between the real AS topology annotated with AS relationships and the inferred AS topology annotated with directed AS-links.
Note that, similar to the analysis in the text, we assume that the inferred topology is "ideally" complete, namely that it contains all legitimate directed AS-links that the observer AS should see. In order to infer a complete AS topology comprising directed AS-links based on the route announcements seen by the observer AS, we assume an ideal inference scenario, in which the AS connections and relationships do not change over the inference period and every AS tries all possible valid routes.
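To make the valley-free rule concrete, the sketch below checks whether an AS path is valley-free given per-link relationships: zero or more customer-to-provider (uphill) links, then at most one peer link, then zero or more provider-to-customer (downhill) links. The encoding of relationships as "c2p"/"p2p"/"p2c" strings keyed by directed link is an assumption of ours for illustration.

```python
# Minimal sketch of the valley-free check on an AS path annotated with
# relationships. `rel[(a, b)]` gives the relationship of the directed
# link a -> b: "c2p" (customer-to-provider), "p2p" (peer), or "p2c".
def is_valley_free(path, rel):
    """Valley-free: (c2p)* then at most one p2p then (p2c)*."""
    PHASE_UP, PHASE_PEERED, PHASE_DOWN = 0, 1, 2
    phase = PHASE_UP
    for a, b in zip(path, path[1:]):
        r = rel[(a, b)]
        if r == "c2p":
            if phase != PHASE_UP:       # no climbing after a peer or downhill link
                return False
        elif r == "p2p":
            if phase != PHASE_UP:       # at most one peer link, before any descent
                return False
            phase = PHASE_PEERED
        elif r == "p2c":
            phase = PHASE_DOWN          # once downhill, must stay downhill
        else:
            return False
    return True
```

The direction-conforming rule used in this appendix plays the analogous role when only directed AS-links, rather than full relationships, are available.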
Change detection is a fundamental problem arising in many fields of engineering, in finance, in the natural and social sciences, and even in the humanities. This book is concerned with the problem of change detection within a specific context. In particular, the framework considered here is one in which changes are manifested in the statistical behavior of quantitative observations, so that the problem treated is that of statistical change detection. Moreover, we are interested in the on-line problem of quickest detection, in which the objective is to detect changes in real time as quickly as possible after they occur. And, finally, our focus is on formulating such problems in such a way that optimal procedures can be sought and found using the tools of stochastic analysis.
Thus, the purpose of this book is to provide an exposition of the extant theory underlying the problem of quickest detection, with an emphasis on providing the reader with the background necessary to begin new research in the field. It is intended both for those familiar with basic statistical procedures for change detection who are interested in understanding these methods from a fundamental viewpoint (and possibly extending them to new applications), and for those who are interested in theoretical questions of change detection themselves.
The approach taken in this book is to cast the problem of quickest detection in the framework of optimal stopping theory.
In Chapter 5, we considered the quickest detection problem within the framework proposed by Kolmogorov and Shiryaev, in which the unknown change point is assumed to be a random variable with a given, geometric, prior distribution. This formulation led to a very natural detection procedure; namely, announce a change at the first upcrossing of a suitable threshold by the posterior probability of a change. Although the assumption of a prior on the change point is rather natural in applications such as condition monitoring, there are other applications in which this assumption is unrealistic. For example, in surveillance or inspection systems, there is often no pre-existing statistical model for the occurrence of intruders or flaws.
In such situations, an alternative to the formulations of Chapter 5 must be found, since the absence of a prior precludes the specification of expected delays and similar quantities that involve averaging over the change-point distribution. There are several very useful such formulations, and these will be discussed in this chapter.
We will primarily consider a notable formulation due to Lorden, in which the average delay is replaced with a worst-case value of delay. However, other formulations will be considered as well.
As in the Bayesian formulation of this problem, optimal stopping theory plays a major role in specifying the optimal procedure, although (as we shall see) more work is required here to place the problems of interest within the standard optimal stopping formulation of Chapter 3.
In Chapter 4, we considered the problem of optimally deciding, with a cost on sampling, between two statistical models for a set of sequentially observed data. Within each of these two models the data are homogeneous; that is, the data obey only one of the two alternative statistical models during the entire period of observation. In most of the remainder of this book, we turn to a generalization of this problem in which it is possible for the statistical behavior of observed data to change from one model to another at some unknown time during the period of observation. The objective of the observer is to detect such a change, if one occurs, as quickly as possible. This objective must be balanced with a desire to minimize false alarms. Such problems are known as quickest detection problems. In this and subsequent chapters, we analyze several useful formulations of this type of problem. Again, our focus is on the development of optimal procedures, although the issue of performance analysis will also be considered to a degree.
A useful framework for quickest detection problems is to consider a sequence Z1, Z2,… of random observations, and to suppose that there is a change point t ≥ 1 (possibly t = ∞) such that, given t, Z1, Z2, …, Zt−1 are drawn from one distribution and Zt, Zt+1, …, are drawn from another distribution.
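A classical procedure for this setting is Page's CUSUM test, which accumulates log-likelihood ratios of the observations and announces a change when the running statistic crosses a threshold. Below is a minimal sketch; the unit-variance Gaussian mean shift from μ0 = 0 to μ1 = 1, the threshold, and the function names are our illustrative assumptions, with both pre- and post-change densities taken as known.

```python
# Minimal sketch of Page's CUSUM test for detecting a change in
# distribution. Illustrative assumption: unit-variance Gaussians with
# pre-change mean mu0 and post-change mean mu1, so the log-likelihood
# ratio log f1(z)/f0(z) reduces to a linear function of z.
def loglik_ratio(z, mu0=0.0, mu1=1.0):
    return (mu1 - mu0) * (z - (mu0 + mu1) / 2.0)


def cusum_alarm(observations, threshold=8.0):
    """Return the first index k (1-based) at which the CUSUM statistic
    crosses `threshold`, or None if no alarm is raised."""
    s = 0.0
    for k, z in enumerate(observations, start=1):
        # Reflect the cumulative sum at zero: negative evidence is discarded.
        s = max(0.0, s + loglik_ratio(z))
        if s >= threshold:
            return k
    return None
```

Each post-change observation equal to 1.0 contributes 0.5 to the statistic here, so the detection delay trades off directly against the threshold, and hence against the false-alarm rate.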