The number of endpoints connected wirelessly to the Internet has long overtaken the number of wired endpoints, and the gap between the two is widening. Wireless mesh networks, sensor networks, and vehicular networks represent some of the new growth segments in wireless networking, in addition to mobile data networks, currently the fastest-growing segment in the wireless industry. Wireless networks with time-varying bandwidth, error rate, and connectivity call for opportunistic transport: data should be transferred when the link bandwidth is high, the error rate is low, and the endpoint is connected to the network, and deferred when bandwidth is low, the error rate is high, or the endpoint is disconnected. In TCP/IP, “connected” is a binary attribute, meaning one is either part of the Internet and can talk to everything or is isolated. In addition, connecting requires a globally unique IP address that is topologically stable on routing timescales (minutes to hours). This makes it difficult and inefficient to handle mobility and opportunistic transport in the Internet. Clearly we need a new networking paradigm that avoids heavyweight operations such as end-to-end connections and enables opportunistic transport. In addition to these scenarios, given that the predominant use of the Internet today is content distribution and retrieval, there is a need to handle the dissemination of content in an efficient manner. This chapter describes a network architecture that addresses these unique requirements.
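As a rough illustration of opportunistic transport (a sketch of the general idea, not of any particular protocol), the following Python fragment queues application data and transmits only when current link conditions are favorable; the `link_state()` probe and the thresholds are hypothetical.

```python
from collections import deque

# Hypothetical link probe: returns (connected, bandwidth_bps, error_rate).
def link_state():
    return True, 5_000_000, 0.001

class OpportunisticSender:
    """Queue data and send only when the link is 'good enough'."""

    def __init__(self, min_bandwidth=1_000_000, max_error_rate=0.01):
        self.queue = deque()
        self.min_bandwidth = min_bandwidth
        self.max_error_rate = max_error_rate

    def submit(self, payload: bytes):
        self.queue.append(payload)   # store-and-forward: never block the application

    def try_send(self, send_fn):
        connected, bw, err = link_state()
        if not connected or bw < self.min_bandwidth or err > self.max_error_rate:
            return 0                 # hold the data until conditions improve
        sent = 0
        while self.queue:
            send_fn(self.queue.popleft())
            sent += 1
        return sent
```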
The current Internet is an outgrowth of the ARPANET (Advanced Research Projects Agency Network) that was initiated four decades ago. The TCP/IP (Transmission Control Protocol/Internet Protocol) designed by Vinton Cerf and Robert Kahn in 1973 did not anticipate, quite understandably, such extensive use of wireless channels and mobile terminals as we are witnessing today. The packet-switching technology for the ARPANET was not intended to support real-time applications that are sensitive to delay jitter. Furthermore, the TCP/IP designers assumed that its end users – researchers at national laboratories and universities in the United States, who would exchange their programs, data, and email – would be trustworthy; thus, security was not their concern, although reliability was one of the key considerations in the design and operation of the network.
It is amazing, therefore, that given the age of TCP/IP, the Internet has successfully continued to grow, supporting ever increasing numbers of end users and new applications through a series of ad hoc modifications and extensions made to the original protocols. In recent years, however, many in the Internet research community have begun to wonder how long they can continue to do “patch work” to accommodate new applications and their requirements. New research initiatives have been launched within the past several years, aimed at a grand design of “a future Internet.” Such efforts include the NSF's FIND (Future Internet Design) and GENI (Global Environment for Network Innovations), the European Community's FP7 (the Seventh Framework Programme), Germany's G-Lab, and Japan's NWGN (New Generation Network).
Due to the low cost and ease of deployment associated with wireless devices, wireless networks will continue to be the dominant choice for connecting to the future Internet. Beyond serving as an edge-connecting medium, the rapid improvement in communication rates for emerging wireless technologies suggests that wireless networks will also play an increasingly important role in building the backbone of the future Internet. As wireless components become integrated into the design of future network architectures, one significant concern is whether their pervasiveness, affordability, and ease of programmability might also be exploited to undermine the very benefits they bring to the future Internet.
Just as the future Internet initiative has brought new perspectives on how protocols should be designed to take advantage of improvements in technology, it also allows us to reexamine how we approach securing our network infrastructures. Traditional approaches to building and securing networks are tied tightly to the concept of protocol layer separation. For network protocol design, routing functions are typically considered separately from link layer functions, which in turn are considered independently of transport layer phenomena or even of the applications that use them. Similarly, in the security arena, MAC-layer security solutions (e.g., WPA2 for 802.11 devices) are typically treated as point solutions to threats facing the link layer, while routing and transport layer security issues are dealt with in distinct, nonintegrated protocols such as IPsec and TLS, or in the abundance of recent secure routing protocols.
Over the next ten-to-fifteen years, it is anticipated that significant qualitative changes to the Internet will be driven by the rapid proliferation of mobile and wireless computing devices. Wireless devices on the Internet will include laptop computers, personal digital assistants, cell phones (more than 3.5 billion in use as of 2009 and growing!), portable media players, and so on, along with embedded sensors used to sense and control real-world objects and events (see Figure 1.1). As mobile computing devices and wireless sensors are deployed in large numbers, the Internet will increasingly serve as the interface between people moving around and the physical world that surrounds them. Emerging capabilities for opportunistic collaboration with other people nearby or for interacting with physical-world objects and machines via the Internet will result in new applications that will influence the way people live and work. The potential impact of the future wireless Internet is very significant because the network combines the power of cloud computation, search engines, and databases in the background with the immediacy of information from mobile users and sensors in the foreground. The data flows and interactions between mobile users, sensors, and their computing support infrastructure are clearly very different from that of today's popular applications such as email, instant messaging, or the World Wide Web.
As a result, one of the broad architectural challenges facing the network research community is that of evolving or redesigning the Internet architecture to incorporate emerging wireless technologies – efficiently, and at scale.
Standards provide the foundation for developing innovative technologies and enabling them to be widely adopted in the market. Several major international standards bodies are developing next-generation wireless standards, including the Institute of Electrical and Electronics Engineers (IEEE), the Internet Engineering Task Force (IETF), the International Telecommunication Union Radiocommunication Sector (ITU-R), the European Telecommunications Standards Institute (ETSI), and the Third Generation Partnership Project (3GPP). The standardization activities of the IEEE 802 committee mainly focus on the physical (PHY) and media access control (MAC) layers, that is, layers 1 and 2 of the network protocol stack, including WLAN, WMAN, and WPAN network interfaces. IETF standards deal with layer 3 and above, in particular with standards of the TCP/IP and Internet protocol suite, including mobile IP and mobile ad hoc network (MANET) related protocols. ITU-R is one of the three sectors of the ITU and is responsible for radio communications. It plays a vital role in the global management of the radio-frequency spectrum and satellite orbits and in developing standards for radio communication systems to assure the necessary performance and quality and the effective use of the spectrum. ETSI is a European standards organization that produces globally applicable standards for information and communications technologies (ICT), including fixed, mobile, broadcast, and Internet technologies. ETSI inspired the creation of, and is a partner in, 3GPP, a collaboration among groups of telecommunications associations worldwide.
With the evolution of wireless technologies that continue to offer higher data rates using both licensed and unlicensed spectrum, the number of portable, handheld computing devices using wireless connectivity to the Internet has increased dramatically. Another major category for growth in wireless devices is that of embedded wireless devices or sensors that help monitor and control objects and events in the physical world via the Internet. Vehicular networking is an emerging application for wireless networking with a focus on increased road safety.
The broad architectural challenge facing the wireless and network research communities is that of evolving the Internet architecture to efficiently incorporate emerging wired and wireless network elements such as mobile terminals, ad hoc routers, and embedded sensors and to provide end-to-end service abstractions that facilitate application development. A top-down approach to the problem starts by identifying canonical wireless scenarios that cover a broad range of environments such as cellular data services, WiFi hot spots, mobile peer-to-peer (P2P), ad hoc mesh networks for broadband access, vehicular networks, sensor networks, and pervasive systems. These wireless application scenarios lead to a rich diversity of networking requirements for the future Internet that need to be analyzed and validated experimentally. One of the key challenges faced in characterization and evaluation of these complex wireless scenarios is the lack of generally available tools for modeling, emulation, or rapid prototyping of a complete wireless network.
The rapid explosion of mobile phones over the last decade has enabled a new sensing paradigm, participatory sensing, in which individuals act as sensors by using their mobile phones for data collection. Participatory sensing relies on the sensing capabilities of mobile phones, many of which can determine location and capture images and audio; on the networking support provided by cellular and WiFi infrastructure; and on the spatial and temporal coverage, along with the interpretive abilities, provided by the individuals who carry and operate the phones. If successfully coordinated, participants collecting data with their mobile phones can open up new possibilities uniquely relevant to the interests of individuals, groups, and communities as they seek to understand the social and physical processes of the world around them. Responsibly realizing a vision of sensing that is widespread and participatory poses critical technology challenges. To support mobile participatory sensing applications, the future Internet architecture must provide network services that enable applications to select, task, and coordinate mobile users based on measures of coverage, capabilities, and participation and performance patterns; attestation mechanisms that enable sensor data consumers to assess the trustworthiness of the data they access; and privacy and auditing mechanisms that enable sensor sources to control the sharing and disclosure of their data.
Figure: the mobile participatory sensing vision, in which individuals carrying mobile phones act as sensors.
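To make the tasking idea above concrete, here is a minimal Python sketch (not part of any proposed architecture) of how a coordination service might rank candidate participants by coverage, capability, and past participation; the field names, the trust score, and the 0.6/0.4 weighting are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    user_id: str
    region: str                 # coarse location the phone currently reports
    has_camera: bool
    participation_rate: float   # fraction of past tasks completed (0..1)
    trust_score: float          # output of some attestation mechanism (0..1)

def select_participants(candidates, region, need_camera, k=3):
    """Pick up to k participants for a sensing task in a given region."""
    eligible = [p for p in candidates
                if p.region == region and (p.has_camera or not need_camera)]
    # Weight reliability against trust; the 0.6/0.4 split is an arbitrary assumption.
    eligible.sort(key=lambda p: 0.6 * p.participation_rate + 0.4 * p.trust_score,
                  reverse=True)
    return eligible[:k]
```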
Embedded wireless sensing provides scientists and engineers unique insights into the physical and biological processes of the natural and “built” environments.
Wireless sensor networks (WSNs) are an important emerging class of embedded distributed systems that consist of low-power devices integrating computation, sensing, and wireless communications. WSNs have been deployed for a wide range of applications, including monitoring microclimates in redwood forests (Tolle et al. 2005), collecting seismic signals from active volcanoes (Werner-Allen et al. 2006), sniper detection in urban settings (Simon et al. 2004), and tracking wildlife (Zhang et al. 2004).
One of the most popular WSN node platforms is the Telos node platform (Polastre et al. 2005a), shown in Figure 5.1. The Telos incorporates a low-power microcontroller (TI MSP430) with 10 KB of SRAM and 48 KB of program ROM; a low-power radio (Chipcon CC2420) that supports the IEEE 802.15.4 standard; and 1 MB of on-board flash memory. Various sensors can be attached to the board; a standard set includes light, temperature, and humidity sensors. An external connector provides digital and analog I/O ports that can be used to mate the node to a wide range of sensors and other devices. The USB connector is used to program the node when plugged into a host, as well as to provide a serial interface. This allows the node to act as a USB wireless transceiver when attached to a base station that collects data from and controls the network.
WSN platforms are designed from the ground up for low-power operation. The Telos consumes approximately 41 mW when the CPU and radio are active, but can drop down to a low-power idle state consuming less than 6 µW.
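A back-of-envelope calculation shows why these numbers matter. Using the figures quoted above (41 mW active, 6 µW idle) together with an assumed 1% duty cycle and two AA batteries (roughly 2700 mAh at 3 V), the average power and expected lifetime work out as follows; the duty cycle and battery capacity are assumptions, not measurements from the platform.

```python
# Back-of-envelope lifetime estimate for a duty-cycled Telos-class node.
P_ACTIVE_W = 41e-3      # 41 mW with CPU and radio on (from the text)
P_IDLE_W   = 6e-6       # 6 microwatts in low-power idle (from the text)
DUTY_CYCLE = 0.01       # assumption: node is active 1% of the time

avg_power_w = DUTY_CYCLE * P_ACTIVE_W + (1 - DUTY_CYCLE) * P_IDLE_W

# Assumption: two AA cells, ~2700 mAh at ~3 V, i.e. about 8.1 Wh of usable energy.
battery_wh = 2700e-3 * 3.0
lifetime_h = battery_wh / avg_power_w

print(f"average power ~ {avg_power_w * 1e3:.3f} mW")            # ~0.416 mW
print(f"estimated lifetime ~ {lifetime_h / 24 / 365:.1f} years") # ~2.2 years
```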
Do you need to get up to speed quickly on LTE? Understand the new technologies of the LTE standard and how they contribute to improvements in system performance with this practical and valuable guide, written by an expert on LTE who was intimately involved in drafting the standard. In addition to a strong grounding in the technical details, you'll also get fascinating insights into why particular technologies were chosen in the development process. Core topics covered include low-PAPR orthogonal uplink multiple access based on SC-FDMA, MIMO multi-antenna technologies, and inter-cell interference mitigation techniques. Low-latency channel structure and single-frequency network (SFN) broadcast are also discussed in detail. With extensive references, a useful discussion of technologies that were not included in the standard, and end-of-chapter summaries that emphasize all the key points, this book is an essential resource for practitioners in the mobile cellular communications industry and for graduate students studying advanced wireless communications.
The Host Identity Protocol (HIP) and its associated architecture are a new piece of technology that may have a profound impact on how the Internet will evolve over the coming years. The original ideas were formed through discussions at a number of Internet Engineering Task Force (IETF) meetings during 1998 and 1999. Since then, HIP has been developed by a group of people from Ericsson, Boeing, HIIT, and other companies and academic institutions, first as an informal activity close to the IETF and later within the IETF HIP working group (WG) and the HIP research group (RG) of the Internet Research Task Force (IRTF), the research arm of the IETF.
From a functional point of view, HIP integrates IP-layer mobility, multihoming and multi-access, security, NAT traversal, and IPv4/v6 interoperability in a novel way. The result is architecturally cleaner than trying to implement these functions separately, using technologies such as Mobile IP, IPsec, ICE, and Teredo. In a way, HIP can be seen as restoring the now-lost end-to-end connectivity across various IP links and technologies, this time in a way that is secure and supports mobility and multihoming. As an additional bonus, HIP provides new tools and functions for future network needs, including the ability to securely identify previously unknown hosts and the ability to securely delegate signaling rights between hosts and from hosts to other nodes.
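The following toy Python sketch illustrates the core idea only, not the actual HIP wire protocol: upper layers bind to a stable Host Identity Tag (HIT) derived from a public key, while the IP locators underneath can change as the host moves or multihomes. The truncated hash and the mapping API are simplifications for illustration.

```python
import hashlib

class HostIdentityMap:
    """Toy model of the HIP idea: peers are named by a stable Host Identity
    Tag (HIT) while the IP locators underneath may change over time."""

    def __init__(self):
        self.locators = {}   # HIT -> list of current IP addresses

    @staticmethod
    def hit_from_public_key(public_key: bytes) -> str:
        # Real HIP derives a 128-bit HIT from the host's public key;
        # a truncated SHA-256 stands in for that here.
        return hashlib.sha256(public_key).hexdigest()[:32]

    def update_locators(self, hit: str, addresses: list[str]):
        # Called when a host moves or gains/loses an interface (multihoming).
        self.locators[hit] = addresses

    def resolve(self, hit: str) -> list[str]:
        return self.locators.get(hit, [])

# Usage: a transport session stays bound to the HIT even if the peer's
# addresses change, e.g., when it moves from Wi-Fi to cellular.
hip = HostIdentityMap()
hit = HostIdentityMap.hit_from_public_key(b"example public key bytes")
hip.update_locators(hit, ["192.0.2.10"])
hip.update_locators(hit, ["198.51.100.7", "2001:db8::7"])   # after moving
print(hip.resolve(hit))
```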
By John Musacchio (University of California, Santa Cruz, USA), Galina Schwartz (University of California, Berkeley, USA), and Jean Walrand (University of California, Berkeley, USA)
In 2007 Comcast, a cable TV and Internet service provider in the United States, began to selectively rate limit or “shape” the traffic from users of the peer-to-peer application BitTorrent. The access technology that Comcast uses is asymmetric in its capacity: the “uplink” from users is much slower than the “downlink.” For client-server applications like the Web, this asymmetry is fine, but for peer-to-peer, where home users serve up huge files for others to download, the uplink quickly becomes congested. Comcast felt that it had to protect the rest of its users from a relatively small number of heavy peer-to-peer users who were using a disproportionate fraction of the system's capacity. In other words, peer-to-peer users were imposing a negative externality by creating congestion that harmed other users.
This negative externality reduces the welfare of the system because users act selfishly. The peer-to-peer user will continue to exchange movies even though this action disrupts his neighbor's critical, work-related video conference. Comcast thought that by singling out users of peer-to-peer applications, it could limit the ill effects of this externality and keep the rest of its users (who mostly don't use peer-to-peer) happy. Instead, Comcast's decision placed the company at the center of the ongoing network neutrality debate. Supporters of the network neutrality concept feel that an Internet access provider ought not to be allowed to “discriminate” between traffic of different users or different applications.
The Internet's simple best-effort packet-switched architecture lies at the core of its tremendous success and impact. Today, the Internet is firmly a commercial medium involving numerous competitive service providers and content providers. However, the current Internet architecture allows neither (i) users to indicate their value choices at sufficient granularity, nor (ii) providers to manage the risks involved in investing in new, innovative quality-of-service (QoS) technologies and in business relationships with other providers as well as users. Currently, users can indicate their value choices only at the access/link bandwidth level, not at the routing level. End-to-end (e2e) QoS contracts are possible today via virtual private networks, but only as static, long-term contracts. Further, an enterprise that needs e2e capacity contracts between two arbitrary points on the Internet for a short period of time has no way of expressing its needs.
We propose an Internet architecture that allows flexible, finer-grained, dynamic contracting over multiple providers. With such capabilities, the Internet itself can be viewed as a “contract-switched” network, going beyond its current status as a “packet-switched” network. A contract-switched architecture will enable flexible and economically efficient management of risks and value flows in an Internet characterized by many tussle points. We view contract-switching as a generalization of the packet-switching paradigm of the current Internet architecture. For example, the size of a packet can be considered a special case of contract capacity, with the contract expiring over a very short term, namely the transmission time of the packet.
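As a minimal sketch of the contract-switching view (the fields and values below are illustrative assumptions, not a proposed interface), a per-contract record might carry the endpoints, capacity, term, and price, with a single packet of the current architecture appearing as the degenerate short-term case mentioned above.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    src: str             # ingress point (e.g., a provider edge)
    dst: str             # egress point
    capacity_bps: float  # contracted rate
    duration_s: float    # contract lifetime
    price: float         # agreed price for the term

    def bytes_covered(self) -> float:
        return self.capacity_bps * self.duration_s / 8

# A short-term, fine-grained contract between two providers (values are made up).
c = Contract(src="ISP-A.nyc", dst="ISP-B.sfo",
             capacity_bps=100e6, duration_s=3600, price=12.0)

# Degenerate case from the text: a single 1500-byte packet sent at 1 Gb/s is a
# "contract" whose term is just the packet's transmission time (12 microseconds).
packet = Contract(src="ISP-A.nyc", dst="ISP-B.sfo",
                  capacity_bps=1e9, duration_s=1500 * 8 / 1e9, price=0.0)
print(c.bytes_covered(), packet.bytes_covered())   # 4.5e10 and 1500.0
```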
Starting in August 2006, our collaborative team of researchers from North Carolina State University and the Renaissance Computing Institute, UNC-CH, has been working on a Future InterNet Design (NSF FIND) project to envision and describe an architecture that we call Services Integration, controL, and Optimization (SILO). In this chapter, we describe the output of that project. We start by listing some insights about architectural research, some that we started with and some that we gained along the way, and we also state the goals we formulated for our architecture. We then describe the actual architecture itself, connecting it with relevant prior and current research work. We validate the promise of enabling change by presenting our recent work on supporting virtualization as well as cross-layer research in optics using SILO. We end with an early case study on the usefulness of SILO in lowering the barrier to contribution and innovation in network protocols.
Toward a new Internet architecture
Back in 1972, Robert Metcalfe famously captured the essence of networking with the phrase “Networking is inter-process communication.” However, describing the architecture that enables this communication to take place is by no means easy. The architecture of something as complex as the modern Internet encompasses a large number of principles, concepts, and assumptions, which necessarily bear periodic revisiting and reevaluation in order to assess how well they have withstood the test of time.
A key element of past, current, and future telecommunication infrastructures is the switching node. In recent years, packet switching has taken a dominant role over circuit switching, so that current switching nodes are often packet switches and routers. While a deeper penetration of optical technologies in the switching realm will most likely reintroduce forms of circuit switching, which are more suited to realizations in the optical domain, and optical cross-connects [1, Section 7.4] may end up playing an important role in networking in the long term, we focus in this chapter on high-performance packet switches.
Despite several ups and downs in the telecom market, the amount of information to be transported by networks has been constantly increasing with time. Both the success of new applications and of the peer-to-peer paradigm and the availability of large access bandwidths (a few Mb/s on xDSL and broadband wireless, but often up to tens or hundreds of Mb/s per residential connection, as currently offered by Passive Optical Networks, PONs) are causing a constant increase in the traffic offered to the Internet and to networking infrastructures in general. Traffic is growing quickly, and several studies show that it is growing even faster than electronic technologies (typically embodied by Moore's law, which predicts a twofold increase in performance and capacity every 18 months).
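To make the comparison concrete, the sketch below compares capacity that doubles every 18 months with an assumed traffic growth rate; the 75%-per-year traffic figure is purely an assumption for illustration, not a measurement cited in this chapter.

```python
# Compare assumed traffic growth with capacity that doubles every 18 months.
YEARS = 10
capacity_growth_per_year = 2 ** (12 / 18)   # ~1.59x per year (Moore-style, from the text)
traffic_growth_per_year = 1.75              # assumption: traffic grows 75% per year

gap = (traffic_growth_per_year / capacity_growth_per_year) ** YEARS
print(f"after {YEARS} years, traffic has outgrown capacity by ~{gap:.1f}x")
```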
Network coding has been shown to help achieve optimal throughput in directed networks with known link capacities. However, as real-world networks in the Internet are bi-directed in nature, it is important to investigate the theoretical and practical advantages of network coding in more realistic bi-directed and peer-to-peer (P2P) network settings. In this chapter, we begin with a discussion of the fundamental limitations of network coding in improving routing throughput and cost in the classic undirected network model; a finite bound of 2 on this advantage is proved for a single communication session. We then extend the discussion to bi-directed Internet-like networks and to the case of multiple communication sessions. Finally, we investigate the advantages of network coding in a practical peer-to-peer network setting, and present both theoretical and experimental results on the use of network coding in P2P content distribution and media streaming.
Network coding background
Network coding is a fairly recent paradigm of research in information theory and data networking. It allows essentially every node in a network to perform information coding, in addition to normal forwarding and replication operations. Information flows can therefore be “mixed” during the course of routing. In contrast to source coding, the encoding and decoding operations are not restricted to the terminal nodes (sources and destinations) and may happen at all nodes across the network. In contrast to channel coding, network coding operates beyond a single communication channel: it constitutes an integrated coding scheme that dictates the transmission on every link toward a common network-wide goal. The power of network coding can be appreciated with two classic examples from the literature, one for the wireline setting and one for the wireless setting, as shown in Figure 17.1.
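The wireless example can be sketched in a few lines of Python: two endpoints exchange packets through a relay, and instead of forwarding each packet separately the relay broadcasts the XOR of the two, so each endpoint recovers the other's packet from its own copy. The packet contents below are placeholders; the point is the saving of one transmission (three instead of four).

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Classic wireless example: A and B exchange packets through relay R.
pkt_a = b"hello from A...."   # same length as pkt_b for simplicity
pkt_b = b"greetings from B"

# Without coding, R forwards each packet separately: 4 transmissions total
# (A->R, B->R, R->B, R->A).  With coding, R broadcasts one coded packet:
coded = xor_bytes(pkt_a, pkt_b)            # 3 transmissions total

# Each endpoint decodes by XORing the broadcast with its own packet.
recovered_at_b = xor_bytes(coded, pkt_b)   # equals pkt_a
recovered_at_a = xor_bytes(coded, pkt_a)   # equals pkt_b
assert recovered_at_b == pkt_a and recovered_at_a == pkt_b
```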