The Host Identity Protocol (HIP) and architecture is a new piece of technology that may have a profound impact on how the Internet will evolve over the coming years. The original ideas were formed through discussions at a number of Internet Engineering Task Force (IETF) meetings during 1998 and 1999. Since then, HIP has been developed by a group of people from Ericsson, Boeing, HIIT, and other companies and academic institutions, first as an informal activity close to the IETF and later within the IETF HIP working group (WG) and the HIP research group (RG) of the Internet Research Task Force (IRTF), the research arm of the IETF.
From a functional point of view, HIP integrates IP-layer mobility, multihoming and multi-access, security, NAT traversal, and IPv4/v6 interoperability in a novel way. The result is architecturally cleaner than trying to implement these functions separately, using technologies such as Mobile IP, IPsec, ICE, and Teredo. In a way, HIP can be seen as restoring the now-lost end-to-end connectivity across various IP links and technologies, this time in a way that is secure and supports mobility and multihoming. As an additional bonus, HIP provides new tools and functions for future network needs, including the ability to securely identify previously unknown hosts and the ability to securely delegate signaling rights between hosts and from hosts to other nodes.
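To make the identity concept concrete, the sketch below shows the core idea in simplified form: a host is named by a public key, and packets carry a fixed-length hash of that key, the Host Identity Tag (HIT), which upper layers can use like a stable address while the underlying IP addresses change. This is only an illustration; it deliberately does not follow the exact ORCHID encoding defined in the HIP specifications, and the key generation is a placeholder.

# Illustrative sketch only: the *idea* behind HIP host identities, where a host
# is named by a public key and identified by a fixed-length hash of that key
# (the Host Identity Tag, HIT). This is a simplification, not the exact ORCHID
# encoding of the HIP RFCs.
import hashlib
import os

def make_host_identity() -> bytes:
    """Stand-in for generating a public key (real HIP uses RSA/ECDSA keys)."""
    return os.urandom(64)          # placeholder "public key" bytes

def host_identity_tag(public_key: bytes) -> str:
    """Derive a 128-bit, IPv6-sized identifier from the host's public key."""
    digest = hashlib.sha256(public_key).digest()[:16]   # truncate to 128 bits
    groups = [digest[i:i + 2].hex() for i in range(0, 16, 2)]
    return ":".join(groups)        # format like an IPv6 address

if __name__ == "__main__":
    key = make_host_identity()
    print("HIT:", host_identity_tag(key))
    # Upper layers can bind to this stable identifier while the underlying
    # IP addresses change (mobility) or multiply (multihoming).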
By John Musacchio, University of California, Santa Cruz, USA; Galina Schwartz, University of California, Berkeley, USA; and Jean Walrand, University of California, Berkeley, USA
In 2007 Comcast, a cable TV and Internet service provider in the United States, began to selectively rate limit or “shape” the traffic from users of the peer-to-peer application BitTorrent. The access technology that Comcast uses is asymmetric in its capacity – the “uplink” from users is much slower than the “downlink.” For client-server applications like the Web, this asymmetry is fine, but for peer-to-peer, where home users are serving up huge files for others to download, the uplink quickly becomes congested. Comcast felt that it had to protect the rest of its users from a relatively small number of heavy peer-to-peer users who were using a disproportionate fraction of the system's capacity. In other words, peer-to-peer users were imposing a negative externality by creating congestion that harmed other users.
This negative externality reduces the welfare of the system because users act selfishly. The peer-to-peer user is going to continue to exchange movies even though this action is disrupting his neighbor's critical, work-related video conference. Comcast thought that by singling out users of peer-to-peer applications, it could limit the ill effects of this externality and keep the rest of its users (who mostly don't use peer-to-peer) happy. Instead, Comcast's decision placed the company at the center of the ongoing network neutrality debate. Supporters of the network neutrality concept feel that an Internet access provider ought not to be allowed to “discriminate” between traffic of different users or different applications.
The Internet's simple best-effort packet-switched architecture lies at the core of its tremendous success and impact. Today, the Internet is firmly a commercial medium involving numerous competitive service providers and content providers. However, the current Internet architecture allows neither (i) users to indicate their value choices at sufficient granularity, nor (ii) providers to manage the risks involved in investing in new, innovative quality-of-service (QoS) technologies and in business relationships with other providers as well as users. Currently, users can indicate their value choices only at the access/link bandwidth level, not at the routing level. End-to-end (e2e) QoS contracts are possible today via virtual private networks, but only as static, long-term contracts. Further, an enterprise that needs e2e capacity contracts between two arbitrary points on the Internet for a short period of time has no way of expressing its needs.
We propose an Internet architecture that allows flexible, finer-grained, dynamic contracting over multiple providers. With such capabilities, the Internet itself will be viewed as a “contract-switched” network beyond its current status as a “packet-switched” network. A contract-switched architecture will enable flexible and economically efficient management of risks and value flows in an Internet characterized by many tussle points. We view “contract-switching” as a generalization of the packet-switching paradigm of the current Internet architecture. For example, the size of a packet can be considered a special case of a contract's capacity, with the contract expiring over a very short term, e.g., the transmission time of the packet.
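As a rough illustration of this generalization, the sketch below models a contract as an (endpoints, capacity, duration) tuple and treats a packet as a degenerate contract whose capacity is the link rate and whose duration is the packet's transmission time. The class and field names are hypothetical and are not taken from the authors' design.

# Hypothetical sketch of the "contract as a generalization of a packet" idea.
# Names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Contract:
    src: str             # ingress point
    dst: str             # egress point
    capacity_bps: float  # contracted rate
    duration_s: float    # lifetime of the contract

    def volume_bits(self) -> float:
        """Total traffic volume the contract covers."""
        return self.capacity_bps * self.duration_s

def packet_as_contract(src: str, dst: str, size_bits: int, link_rate_bps: float) -> Contract:
    """A packet viewed as a degenerate contract: its 'capacity' is the link
    rate and its 'duration' is just the packet's transmission time."""
    return Contract(src, dst, link_rate_bps, size_bits / link_rate_bps)

if __name__ == "__main__":
    e2e = Contract("AS1", "AS7", 100e6, 3600)             # 100 Mb/s for one hour
    pkt = packet_as_contract("AS1", "AS7", 12000, 1e9)    # 1500-byte packet on a 1 Gb/s link
    print(e2e.volume_bits(), pkt.volume_bits())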
Starting in August 2006, our collaborative team of researchers from North Carolina State University and the Renaissance Computing Institute, UNC-CH, has been working on a Future InterNet Design (NSF FIND) project to envision and describe an architecture that we call Services Integration, controL, and Optimization (SILO). In this chapter, we describe the output of that project. We start by listing some insights about architectural research, some that we started with and some that we gained along the way, and we also state the goals we formulated for our architecture. We then describe the architecture itself, connecting it with relevant prior and current research work. We validate the promise of enabling change by presenting our recent work on supporting virtualization as well as cross-layer research in optics using SILO. We end with an early case study on the usefulness of SILO in lowering the barrier to contribution and innovation in network protocols.
Toward a new Internet architecture
Back in 1972, Robert Metcalfe famously captured the essence of networking with the phrase “Networking is inter-process communication.” However, describing the architecture that enables this communication to take place is by no means easy. The architecture of something as complex as the modern Internet encompasses a large number of principles, concepts, and assumptions, which necessarily bear periodic revisiting and reevaluation in order to assess how well they have withstood the test of time.
A key element of past, current, and future telecommunication infrastructures is the switching node. In recent years, packet switching has taken a dominant role over circuit switching, so that current switching nodes are often packet switches and routers. While a deeper penetration of optical technologies in the switching realm will most likely reintroduce forms of circuit switching, which are more suited to realizations in the optical domain, and optical cross-connects [1, Section 7.4] may end up playing an important role in networking in the long term, we focus in this chapter on high-performance packet switches.
Despite several ups and downs in the telecom market, the amount of information to be transported by networks has been constantly increasing with time. The success of new applications and of the peer-to-peer paradigm, together with the availability of large access bandwidths (a few Mb/s on xDSL and broadband wireless links, but often up to tens or hundreds of Mb/s per residential connection, as currently offered in Passive Optical Networks – PONs), is causing a constant increase in the traffic offered to the Internet and to networking infrastructures in general. The traffic increase rate is fast, and several studies show that it is even faster than the growth rate of electronic technologies (typically embodied by Moore's law, which predicts a two-fold performance and capacity increase every 18 months).
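A quick back-of-the-envelope calculation makes the gap concrete. The 18-month doubling period for electronic technology is the Moore's-law figure cited above; the 12-month doubling period for traffic is purely an illustrative assumption, not a figure from this chapter.

# Back-of-the-envelope comparison of the two growth rates discussed above.
# The 18-month doubling is the Moore's-law figure from the text; the 12-month
# traffic doubling is an illustrative assumption only.
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

for years in (3, 6, 9):
    tech = growth_factor(years, 1.5)     # electronic capacity: doubles every 18 months
    traffic = growth_factor(years, 1.0)  # offered traffic: assumed to double every 12 months
    print(f"after {years} years: technology x{tech:.1f}, traffic x{traffic:.1f}, "
          f"gap x{traffic / tech:.2f}")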
Network coding has been shown to help achieve optimal throughput in directed networks with known link capacities. However, as real-world networks in the Internet are bi-directed in nature, it is important to investigate theoretical and practical advantages of network coding in more realistic bi-directed and peer-to-peer (P2P) network settings. In this chapter, we begin with a discussion of the fundamental limitations of network coding in improving routing throughput and cost in the classic undirected network model. A finite bound of 2 on this improvement is proved for a single communication session. We then extend the discussion to bi-directed Internet-like networks and to the case of multiple communication sessions. Finally, we investigate advantages of network coding in a practical peer-to-peer network setting, and present both theoretical and experimental results on the use of network coding in P2P content distribution and media streaming.
Network coding background
Network coding is a fairly recent paradigm of research in information theory and data networking. It allows essentially every node in a network to perform information coding, in addition to normal forwarding and replication operations. Information flows can therefore be “mixed” during the course of routing. In contrast to source coding, the encoding and decoding operations are not restricted to the terminal nodes (sources and destinations) only, and may happen at all nodes across the network. In contrast to channel coding, network coding works beyond a single communication channel: it comprises an integrated coding scheme that dictates the transmission at every link towards a common network-wide goal. The power of network coding can be appreciated with two classic examples in the literature, one for the wireline setting and one for the wireless setting, as shown in Figure 17.1.
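For the wireless case, the intuition can be reproduced in a few lines of code. In the classic two-way relay example, nodes A and B each want the other's packet and can reach each other only through a relay R; instead of forwarding the two packets separately, the relay broadcasts their XOR once, and each endpoint decodes using the packet it already holds. The sketch below is a minimal illustration of that idea, not an implementation from the chapter.

# Minimal sketch of the classic wireless two-way relay example of network
# coding: A and B exchange packets through relay R. Without coding, R must
# forward a and b separately (two broadcasts); with coding, R broadcasts a
# single packet a XOR b and each endpoint recovers the other's data using
# its own packet as the key.
def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

a = b"packet from A..."
b = b"packet from B..."

coded = xor_bytes(a, b)                # the single packet the relay broadcasts

recovered_at_B = xor_bytes(coded, b)   # B already knows b, so it recovers a
recovered_at_A = xor_bytes(coded, a)   # A already knows a, so it recovers b

assert recovered_at_B == a and recovered_at_A == b
print("one coded broadcast replaced two plain forwards")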
Many routing protocols are available today for mobile ad-hoc networks. They mainly use instantaneous parameters rather than predicted parameters to perform routing functions, and they are unaware of parameter history. For example, AODV, DSDV, and DSR use hop count as the metric for constructing the network topology. The hop count is measured by route control packets, and the current physical topology is used to construct the network topology. If the future physical topology could be predicted, a better network topology might be constructed by avoiding potential link failures or by finding a data path with a higher transmission data rate.
Most traditional routing protocols do not consider channel conditions and link load; in effect, they assume that all links have the same channel conditions and the same load levels. Unlike in wired networks, the channel conditions and link load in a wireless network tend to vary significantly because of node mobility or environment changes. Therefore, the nodes in a wireless network should be able to differentiate links with different channel conditions or load levels in order to have a general view of the network. In this way, routing functions can be performed better, and network performance might be improved.
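To illustrate the kind of differentiation discussed above, the sketch below selects routes by a predicted per-link cost that combines predicted failure probability and load, instead of counting hops. The cost formula and the example topology are assumptions chosen purely for illustration; this is not AODV, DSDV, or DSR.

# Illustrative sketch only: route selection that weights each link by a
# predicted cost (predicted failure probability plus load) instead of hop
# count. The weighting formula is an assumption chosen for illustration.
import heapq

def link_cost(p_fail: float, load: float) -> float:
    """Higher predicted failure probability or load -> more expensive link."""
    return 1.0 / (1.0 - p_fail) + load

def shortest_path(graph, src, dst):
    """Plain Dijkstra over the predicted link costs."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + list(reversed(path))

# Two healthy hops A-B-C versus a direct A-C link predicted to be failure-prone.
graph = {
    "A": {"B": link_cost(0.05, 0.2), "C": link_cost(0.80, 0.1)},
    "B": {"C": link_cost(0.05, 0.2)},
}
print(shortest_path(graph, "A", "C"))   # prefers A -> B -> C despite more hops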
In recent years, cognitive techniques have become increasingly common in wireless networks. Most research focuses on solutions that modify the PHY and MAC layers.
The field of computer networking has evolved significantly over the past four decades since the development of ARPANET, the first large-scale computer network. The Internet has become a part and parcel of everyday life virtually worldwide, and its influence on various fields is well recognized. The TCP/IP protocol suite and packet switching constitute the core dominating Internet technologies today. However, this paradigm is facing challenges as we move to next-generation networking applications including multimedia transmissions (IPTV systems), social networking, peer-to-peer networking and so on. The serious limitations of the current Internet include its inability to provide Quality of Service, reliable communication over periodically disconnected networks, and high bandwidth for high-speed mobile devices.
Hence, there is an urgent question as to whether the Internet's entire architecture should be redesigned, from the bottom up, based on what we have learned about computer networking in the past four decades. This is often referred to as the “clean slate” approach to Internet design. In 2005, the US National Science Foundation (www.nsf.gov) started a research program called Future Internet Network Design (FIND) to focus the research community's attention on such activities. Similar funding activities are taking place in Europe (FIRE: Future Internet Research and Experimentation), Asia, and other regions across the globe. This book is an attempt to capture some of the pioneering efforts in designing the next-generation Internet.
Abstract: Internet users and their emerging applications require high-data-rate access networks. Today's broadband access technologies – particularly in the US – are Digital Subscriber Line (DSL) and Cable Modem (CM). But their limited capacity is insufficient for some emerging services such as IPTV. This is creating the demand for Fiber-to-the-X (FTTX) networks – typically employing a Passive Optical Network (PON) – to bring the high capacity of fiber closer to the user. Long-Reach PON can reduce the cost of FTTX by extending the PON coverage using Optical Amplifier and Wavelength-Division Multiplexing (WDM) technologies. Since Internet users want to be untethered (and also mobile) whenever possible, wireless access technologies also need to be considered. Thus, to exploit the reliability, robustness, and high capacity of optical networks and the flexibility, mobility, and cost savings of wireless networks, the Wireless-Optical Broadband Access Network (WOBAN) is proposed. These topics are reviewed in this chapter.
Introduction
An access network connects its end-users to their immediate service providers and the core network. The growing customer demands for bandwidth-intensive services are accelerating the need to design an efficient “last mile” access network in a cost-effective manner. Traditional “quad-play” applications, which include a bundle of services with voice, video, Internet, and wireless, need to be delivered over the access network to the end-users in a satisfactory and economical way. High-data-rate Internet access, known as broadband access, is therefore essential to support today's and emerging application demands.
One of the key characteristics of the next-generation Internet architecture is its ability to adapt to novel protocols and communication paradigms. This adaptability can be achieved through custom processing functionality inside the network. In this chapter, we discuss the design of a network service architecture that can provide custom in-network processing.
Background
Support for innovation is an essential aspect of the next-generation Internet architecture. With the growing diversity of systems connected to the Internet (e.g., cell phones, sensors, etc.) and the adoption of new communication paradigms (e.g., content distribution, peer-to-peer, etc.), it is essential that not only existing data communication protocols are supported but that emerging protocols can be deployed, too.
Internet architecture
The existing Internet architecture is based on the layered protocol stack, where application- and transport-layer protocol processing occurs on end-systems and physical-, link-, and network-layer processing occurs inside the network. This design has been very successful in limiting the complexity of operations that need to be performed by network routers. In turn, modern routers can support link speeds of tens of gigabits per second and aggregate bandwidths of terabits per second.
However, the existing Internet architecture also poses limitations on deploying functionality that does not adhere to the layered protocol stack model. In particular, functionality that crosses protocol layers cannot be accommodated without violating the principles of the Internet architecture. But in practice, many such extensions to existing protocols are necessary.
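The toy sketch below illustrates the strict layering just described: each layer adds its own header and treats everything above it as an opaque payload, so a router operating at the network layer has no principled view of transport- or application-layer state. The header names and string encoding are purely illustrative.

# Toy illustration of strict layering: each layer adds its own header and
# treats everything above it as an opaque payload. A router that processes
# only the network layer cannot act on transport- or application-layer state
# without violating the layering model. Names here are illustrative.
def encapsulate(app_data: bytes) -> bytes:
    transport = b"TCP|" + app_data        # end-system: transport header
    network = b"IP|" + transport          # end-system: network header
    link = b"ETH|" + network              # link header, added per hop
    return link

def router_view(frame: bytes) -> bytes:
    """A router strips the link header and looks only at the network header;
    the rest of the packet is an opaque payload to it."""
    network = frame.split(b"|", 1)[1]
    header, opaque_payload = network.split(b"|", 1)
    return header                          # forwarding decisions use this alone

frame = encapsulate(b"GET /index.html")
print(router_view(frame))                  # b'IP'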
An effective optical control plane is crucial in the design and deployment of a transport network as it provides the means for intelligently provisioning, restoring, and managing network resources, leading in turn to their more efficient use. This chapter provides an overview of current protocols utilized for the control plane in optical networks and then delves into a new unified control plane architecture for IP-over-WDM networks that manages both routers and optical switches. Provisioning, routing, and signaling protocols for this control model are also presented, together with its benefits, including the support of interdomain routing/signaling and the support of restoration at any granularity.
Introduction
In the last two decades, optical communications have evolved from merely providing transmission capacity to higher transport levels, such as inter-router connectivity in an IP-centric infrastructure, to also providing the intelligence required for efficient point-and-click provisioning services, as well as resilience against potential fiber or node failures. This is possible due to the emergence of optical network elements that carry the intelligence required to efficiently manage such networks. Current deployments of wavelength-division multiplexed (WDM)-based optical transport networks have met the challenge of accommodating the phenomenal growth of IP data traffic while providing novel services such as rapid provisioning and restoration of very-high-bandwidth circuits, and bandwidth on demand.
In this chapter we argue that future high-speed switches should have buffers that are much smaller than those used today. We present recent work in queueing theory that will be needed for the design of such switches.
There are two main benefits of small buffers. First, small buffers mean very little queueing delay or jitter, which means better quality of service for interactive traffic. Second, small buffers make it possible to design new and faster types of switches. One example is a switch-on-a-chip, in which a single piece of silicon handles both switching and buffering, such as that proposed in [7]; this alleviates the communication bottleneck between the two functions. Another example is an all-optical packet switch, in which optical delay lines are used to emulate a buffer. Neither of these examples is practicable with large buffers.
Buffers cannot be made arbitrarily small. The reason we have buffers in the first place is to be able to absorb fluctuations in traffic without dropping packets. There are two types of fluctuations to consider: fluctuations due to end-to-end congestion control mechanisms, most notably TCP; and fluctuations due to the inherent randomness of chance alignments of packets.
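The second kind of fluctuation can be illustrated with a toy simulation: many independent flows occasionally send packets in the same time slot, and the smaller the buffer, the more of these chance alignments overflow it. All parameters below (flow count, load, buffer sizes, slot count) are illustrative assumptions, not figures from the chapter.

# Toy simulation of chance alignments of packets from many independent flows
# arriving at one output queue that serves one packet per slot. Parameters
# are illustrative assumptions only.
import random

def drop_fraction(buffer_pkts: int, n_flows: int = 100, load: float = 0.9,
                  slots: int = 50_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    p = load / n_flows              # per-flow, per-slot arrival probability
    queue, dropped, arrived = 0, 0, 0
    for _ in range(slots):
        arrivals = sum(rng.random() < p for _ in range(n_flows))
        arrived += arrivals
        for _ in range(arrivals):
            if queue < buffer_pkts:
                queue += 1
            else:
                dropped += 1        # buffer overflow: a chance alignment lost
        queue = max(queue - 1, 0)   # one packet served per slot
    return dropped / max(arrived, 1)

for b in (5, 20, 80):
    print(f"buffer {b:3d} pkts -> drop fraction {drop_fraction(b):.4f}")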
In Section 15.2 we describe queueing theory that takes account of the interaction between a queue and TCP's end-to-end congestion control. The Transmission Control Protocol tries to take up all available capacity on a path, and in particular it tries to fill the bottleneck buffer.