In this chapter we argue that future high-speed switches should have buffers that are much smaller than those used today. We present recent work in queueing theory that will be needed for the design of such switches.
There are two main benefits of small buffers. First, small buffers mean very little queueing delay or jitter, and hence better quality of service for interactive traffic. Second, small buffers make it possible to design new and faster types of switches. One example is a switch-on-a-chip, in which a single piece of silicon handles both switching and buffering, such as that proposed in [7]; this alleviates the communication bottleneck between the two functions. Another example is an all-optical packet switch, in which optical delay lines are used to emulate a buffer. Neither of these examples is practicable with large buffers.
Buffers cannot be made arbitrarily small. The reason we have buffers in the first place is to be able to absorb fluctuations in traffic without dropping packets. There are two types of fluctuations to consider: fluctuations due to end-to-end congestion control mechanisms, most notably TCP; and fluctuations due to the inherent randomness of chance alignments of packets.
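The second kind of fluctuation is captured by classical loss models. As a rough illustration (ours, not the chapter's), the sketch below evaluates the drop probability of an M/M/1/K queue and shows that, at offered load ρ < 1, loss decays geometrically in the buffer size K, so a modest buffer already absorbs most chance alignments of packets:

```python
# Drop probability of an M/M/1/K queue as a function of the system size K.
# A minimal sketch, not taken from the chapter: with offered load rho < 1,
# the loss probability decays geometrically in K.

def mm1k_loss(rho: float, K: int) -> float:
    """Probability that an arriving packet finds the system (K slots) full."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1))

for K in (5, 10, 20, 40):
    print(f"K={K:2d}  loss at rho=0.8: {mm1k_loss(0.8, K):.2e}")
```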
In Section 15.2 we describe queueing theory that takes account of the interaction between a queue and TCP's end-to-end congestion control. The Transmission Control Protocol (TCP) tries to take up all available capacity on a path, and in particular it tries to fill the bottleneck buffer.
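A widely cited rule of thumb from the buffer-sizing literature (e.g., Appenzeller et al.) is that a link carrying n desynchronized long-lived TCP flows needs a buffer of only about B = RTT × C/√n, rather than the classical bandwidth-delay product B = RTT × C. The sketch below compares the two rules; the RTT and link rate are illustrative assumptions, not values from the chapter:

```python
# Rough buffer sizes under the classical rule of thumb (B = RTT * C) and the
# small-buffer rule (B = RTT * C / sqrt(n)) for n long-lived TCP flows.
# The RTT and link rate below are illustrative assumptions.
from math import sqrt

rtt = 0.25          # seconds (assumed round-trip time)
capacity = 10e9     # bits per second (assumed 10 Gb/s link)

bdp_bits = rtt * capacity           # classical bandwidth-delay product
for n in (1, 100, 10_000):
    small = bdp_bits / sqrt(n)
    print(f"n={n:6d}  classical={bdp_bits/8e6:8.1f} MB   "
          f"small-buffer={small/8e6:8.1f} MB")
```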
This chapter describes an architecture for slicing, virtualizing, and federating wireless sensor network (WSN) resources. The architecture, which we call KanseiGenie, allows users – be they sensing/networking researchers or application developers – to specify and acquire node and network resources as well as sensor data resources within one or more facilities for launching their programs. It also includes server-side measurement and management support for user programs, as well as client-side support for experiment composition and control. We illustrate KanseiGenie architectural concepts in terms of a current realization of KanseiGenie that serves WSN testbeds and application-centric fabrics at The Ohio State University and at Wayne State University.
Introduction
Deployed wireless sensor networks (WSNs) have typically been both small-scale and focused on a particular application, such as environmental monitoring or intrusion detection. However, recent advances in platform and protocol design now permit city-scale WSNs that can be deployed in such a way that new, unanticipated sensing applications can be accommodated by the network. This lets developers focus more on leveraging existing network resources and less on individual nodes.
Network abstractions for WSN development include APIs for scheduling tasks and monitoring system health as well as for in-the-field programming of applications, network components, and sensing components. As a result, WSN deployments have in several cases morphed from application-specific custom solutions to “WSN fabrics” that may be customized and reused in the field.
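The chapter describes these abstractions architecturally rather than as code, but a request to slice such a fabric might look like the following hypothetical sketch; every name in it (SliceRequest, the field names, the facility label) is invented for illustration and does not reflect the actual KanseiGenie interfaces:

```python
# Hypothetical sketch of a slice/resource request to a KanseiGenie-like portal.
# All identifiers here are invented for illustration; the real KanseiGenie
# interfaces may differ substantially.
from dataclasses import dataclass, field

@dataclass
class SliceRequest:
    facility: str                                 # e.g. a testbed at OSU
    num_nodes: int                                # node resources to acquire
    sensors: list = field(default_factory=list)   # sensor data resources
    duration_s: int = 3600                        # lease length
    program_image: str = ""                       # user program to deploy

req = SliceRequest(facility="kansei-osu", num_nodes=50,
                   sensors=["acoustic", "pir"],
                   program_image="intrusion.elf")
print(req)
```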
Research in Grid Computing has become popular with the growth in network technologies and high-performance computing. Grid Computing demands the transfer of large amounts of data in a timely manner.
In this chapter, we discuss Grid Computing and networking. We begin with an introduction to Grid Computing and discuss its architecture. We provide some information on Grid networks and continue with various current applications of Grid networking. The remainder of the chapter is devoted to research in Grid networks. We discuss the techniques developed by various researchers with respect to resource scheduling in Grid networks.
Introduction
Today, the demand for computational, storage, and network resources continues to grow. At the same time, a vast amount of these resources remains underused. To increase the utilization of these resources, tasks can be executed using shared computational and storage resources while communicating over a network. Imagine a team of researchers performing a job that contains a number of tasks, each demanding different computational, storage, and network resources. Distributing the tasks across a network according to resource availability is called distributed computing. Grid Computing is a recent phenomenon in distributed computing. The term “The Grid” was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering.
Grid Computing enables efficient utilization of geographically distributed and heterogeneous computational resources to execute large-scale scientific computing applications.
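As a toy illustration of the resource-scheduling problem discussed later in the chapter, the sketch below greedily assigns each task to the machine with the most remaining capacity. This is our simplification for illustration, not a technique from the chapter; real Grid schedulers also weigh storage, network paths, and deadlines:

```python
# Toy greedy scheduler: assign each task to the resource with the most free
# CPU capacity. A deliberately simplified sketch of task distribution.
import heapq

def greedy_schedule(task_demands, machine_capacities):
    """Return {task_index: machine_index}; raises if a task cannot fit."""
    # Max-heap of (negative free capacity, machine index).
    heap = [(-cap, m) for m, cap in enumerate(machine_capacities)]
    heapq.heapify(heap)
    assignment = {}
    for t, demand in enumerate(task_demands):
        free, m = heapq.heappop(heap)
        free = -free
        if demand > free:
            raise RuntimeError(f"task {t} (demand {demand}) does not fit")
        assignment[t] = m
        heapq.heappush(heap, (-(free - demand), m))
    return assignment

print(greedy_schedule([4, 2, 3, 1], [8, 5]))   # -> {0: 0, 1: 1, 2: 0, 3: 1}
```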
Although there are many reasons for adopting a multi-path routing paradigm in the Internet, the required multi-path support is today far from universal. It is mostly limited to some domains that rely on IGP features to improve load distribution in their internal infrastructure, or to some multi-homed parties that base their load balancing on traffic engineering. This chapter explains the motivations for a multi-path routing scheme for the Internet, reviews the existing alternatives, and details two new proposals. Part of this work has been done within the framework of the Trilogy research and development project, whose main objectives are also discussed in this chapter.
Introduction
Multi-path routing techniques enable routers to be aware of the different possible paths towards a particular destination so that they can make use of them subject to certain restrictions. Since several next hops for the same destination prefix are installed in the forwarding table, all of them can be used at the same time. Although multi-path routing has many interesting properties, reviewed in Section 12.3, it is important to remark that in the current Internet the required multi-path routing support is far from universal. It is mostly limited to some domains that deploy multi-path capabilities relying on Interior Gateway Protocol (IGP) features to improve the load distribution in their internal infrastructure, normally allowing the use of multiple paths only if they all have the same cost.
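With several next hops installed for one prefix, routers commonly pick one per flow by hashing header fields, so that packets of a flow stay on a single path while different flows spread across the equal-cost paths. A minimal sketch of this selection (our illustration, not a particular vendor's implementation):

```python
# Minimal sketch of ECMP next-hop selection: hash the flow 5-tuple and use the
# result to index the list of equal-cost next hops, so all packets of a flow
# follow the same path while different flows spread across the paths.
import hashlib

def pick_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]
print(pick_next_hop("192.0.2.7", "198.51.100.9", 49152, 443, "tcp", hops))
```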
Despite the world-changing success of the Internet, shortcomings in its routing and forwarding system (i.e., the network layer) have become increasingly apparent. One symptom is an escalating “arms race” between users and providers: providers understandably want to control use of their infrastructure; users understandably want to maximize the utility of the best-effort connectivity that providers offer. The result is a growing accretion of hacks, layering violations and redundant overlay infrastructures, each intended to help one side or the other achieve its policies and service goals.
Consider the growing number of overlay networks being deployed by users. Many of these overlays are designed specifically to support network layer services that cannot be supported (well) by the current network layer. Examples include resilient overlays that route packets over multiple paths to withstand link failures, distributed hash table overlays that route packets to locations represented by the hash of some value, multicast and content distribution overlays that give users greater control of group membership and distribution trees, and other overlay services. In many of these examples, there is a “tussle” between users and providers over how packets will be routed and processed. By creating an overlay network, users are able to, in a sense, impose their own routing policies – possibly violating those of the provider – by implementing a “stealth” relay service.
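To make the hash-based routing of such overlays concrete, the sketch below (our illustration, stripped of finger tables and failure handling) maps keys and nodes onto an identifier ring and delivers each key to its successor node, the core idea behind Chord-like distributed hash tables:

```python
# Minimal sketch of DHT-style routing: hash a key onto an identifier ring and
# deliver it to the first node whose identifier is >= the key (its successor).
import bisect
import hashlib

def ring_id(name: str) -> int:
    """Map a name to a 32-bit position on the identifier ring."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")

node_ids = sorted(ring_id(f"node-{i}") for i in range(8))

def successor(key: str) -> int:
    k = ring_id(key)
    i = bisect.bisect_left(node_ids, k)
    return node_ids[i % len(node_ids)]   # wrap around the ring

print(hex(successor("some-content-key")))
```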
The lack of support for flexible business relationships and policies is another problem area for the current network layer.
The last decade has seen some dramatic changes in the demands placed on core networks. Data has permanently replaced voice as the dominant traffic unit. The growth of applications like file sharing and storage area networking took many by surprise. Video distribution, a relatively old application, is now being delivered via packet technology, changing traffic profiles even for traditional services.
The shift in dominance from voice to data traffic has many consequences. In the data world, applications, hardware, and software change rapidly. We are seeing unprecedented unpredictability and variability in traffic patterns. This means network operators must maintain an infrastructure that adapts quickly to changing subscriber demands, while containing infrastructure costs by applying network resources efficiently to meet those demands.
Current core network transport equipment supports high-capacity, global-scale core networks by relying on higher-speed interfaces such as 40 and 100 Gb/s. This is necessary but not in itself sufficient. Today, it takes considerable time and human involvement to provision a core network to accommodate new service demands or exploit new resources. Agile, autonomous resource management is imperative for the next-generation network.
Today's core network architectures are based on static point-to-point transport infrastructure. Higher-layer services are isolated within their place in the traditional Open Systems Interconnection (OSI) network stack. While the stack has clear benefits in collecting conceptually similar functions into layers and invoking a service model between them, stovepiped management has resulted in multiple parallel networks within a single network operator's infrastructure.
In the design of large-scale communication networks, a major practical concern is the extent to which control can be decentralized. A decentralized approach to flow control has been very successful as the Internet has evolved from a small-scale research network to today's interconnection of hundreds of millions of hosts; but it is beginning to show signs of strain. In developing new end-to-end protocols, the challenge is to understand just which aspects of decentralized flow control are important. One may start by asking: how should capacity be shared among users? How should flows through a network be organized, so that the network responds sensibly to failures and overloads? And how can routing, flow control, and connection acceptance algorithms be designed to work well in uncertain and random environments?
One of the more fruitful theoretical approaches has been based on a framework that allows a congestion control algorithm to be interpreted as a distributed mechanism solving a global optimization problem; for some overviews see [1, 2, 3]. Primal algorithms, such as the Transmission Control Protocol (TCP), broadly correspond with congestion control mechanisms where noisy feedback from the network is averaged at endpoints, using increase and decrease rules of the form first developed by Jacobson. Dual algorithms broadly correspond with more explicit congestion control protocols where averaging at resources precedes the feedback of relatively precise information on congestion to endpoints.
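The flavor of a primal algorithm can be conveyed in a few lines. The sketch below (our illustration; the gain κ, the cubic penalty, and the weights w are arbitrary choices) iterates the rate update x_r ← x_r + κ(w_r − x_r p(y)) for two sources sharing one link, where p is a smooth congestion penalty on the total rate y:

```python
# Minimal simulation of a primal congestion-control algorithm in the Kelly
# framework: each source r adjusts its rate by kappa * (w_r - x_r * price),
# where the price is a smooth penalty on total link utilisation.
# Parameter values (kappa, the cubic penalty, w) are illustrative assumptions.

C = 10.0                    # link capacity
w = [1.0, 2.0]              # willingness-to-pay of two sources
x = [0.1, 0.1]              # initial rates
kappa = 0.05                # gain of the increase/decrease rule

def price(total_rate: float) -> float:
    return (total_rate / C) ** 3    # smooth congestion penalty

for step in range(5000):
    p = price(sum(x))
    x = [xi + kappa * (wi - xi * p) for xi, wi in zip(x, w)]

print([round(xi, 3) for xi in x], "total:", round(sum(x), 3))
# The rates converge so that x_r is proportional to w_r under this penalty,
# illustrating noisy feedback averaged at the endpoints.
```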
Text-to-Speech Synthesis provides a complete, end-to-end account of the process of generating speech by computer. Giving an in-depth explanation of all aspects of current speech synthesis technology, it assumes no specialised prior knowledge. Introductory chapters on linguistics, phonetics, signal processing and speech signals lay the foundation, with subsequent material explaining how this knowledge is put to use in building practical systems that generate speech. Including coverage of the very latest techniques such as unit selection, hidden Markov model synthesis, and statistical text analysis, explanations of the more traditional techniques such as formant synthesis and synthesis by rule are also provided. Weaving together the various strands of this multidisciplinary field, the book is designed for graduate students in electrical engineering, computer science, and linguistics. It is also an ideal reference for practitioners in the fields of human communication interaction and telephony.
Complex-valued random signals are embedded in the very fabric of science and engineering, yet the usual assumptions made about their statistical behavior are often a poor representation of the underlying physics. This book deals with improper and noncircular complex signals, which do not conform to classical assumptions, and it demonstrates how correct treatment of these signals can have significant payoffs. The book begins with detailed coverage of the fundamental theory and presents a variety of tools and algorithms for dealing with improper and noncircular signals. It provides a comprehensive account of the main applications, covering detection, estimation, and signal analysis of stationary, nonstationary, and cyclostationary processes. Providing a systematic development from the origin of complex signals to their probabilistic description makes the theory accessible to newcomers. This book is ideal for graduate students and researchers working with complex data in a range of research areas from communications to oceanography.
With the rapid growth of new wireless devices and applications over the past decade, the demand for wireless radio spectrum is increasing relentlessly. The development of cognitive radio networking provides a framework for making the best possible use of limited spectrum resources, and it is revolutionising the telecommunications industry. This book presents the fundamentals of designing, implementing, and deploying cognitive radio communication and networking systems. Uniquely, it focuses on game theory and its applications to various aspects of cognitive networking. It covers in detail the core aspects of cognitive radio, including cooperation, situational awareness, learning, and security mechanisms and strategies. In addition, it provides novel, state-of-the-art concepts and recent results. This is an ideal reference for researchers, students and professionals in industry who need to learn the applications of game theory to cognitive networking.
Covering attack detection, malware response, algorithm and mechanism design, privacy, and risk management, this comprehensive work applies unique quantitative models derived from decision, control, and game theories to understanding diverse network security problems. It provides the reader with a system-level theoretical understanding of network security, and is essential reading for researchers interested in a quantitative approach to key incentive and resource allocation issues in the field. It also provides practitioners with an analytical foundation that is useful for formalising decision-making processes in network security.
The characteristic polynomial of a real matrix possesses real coefficients. This chapter aims to summarize general results on the location and determination of the zeros of polynomials with mainly real coefficients. The operations here are assumed to be performed over the set ℂ of complex numbers. Restricting operations to other subfields of ℂ, such as the set ℤ of integers or finite fields, is omitted because, in that case, we would enter an entirely different and more complex area, which requires Galois theory, advanced group theory and number theory. A general outline of the latter is found in Gowers et al. (2008), and a nice introduction to Galois theory is given by Stewart (2004).
The study of polynomials is among the oldest research areas in mathematics. The insolubility of the quintic, famously proved by Abel and extended by Galois (see art. 196 and Gowers et al. (2008) for more details and for the historical context), shifted the root-finding problem for polynomials from pure analysis to numerical analysis. Numerical methods, as well as matrix methods based on the companion matrix (art. 143), are extensively treated by McNamee (2007), but omitted here. A complex function theoretic approach, covering more recent results such as self-inversive polynomials and extensions of Grace's Theorem (art. 227), is presented by Sheil-Small (2002) and by Milovanović et al. (1994), who also list many polynomial inequalities.
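As a concrete instance of the companion-matrix method mentioned above, the sketch below computes the zeros of a cubic as the eigenvalues of its companion matrix; numpy.roots uses the same idea internally:

```python
# Zeros of p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) computed as the
# eigenvalues of the companion matrix of p.
import numpy as np

coeffs = [1.0, -6.0, 11.0, -6.0]     # monic: x^3 - 6x^2 + 11x - 6
n = len(coeffs) - 1

companion = np.zeros((n, n))
companion[1:, :-1] = np.eye(n - 1)            # subdiagonal of ones
companion[:, -1] = -np.array(coeffs[:0:-1])   # last column: -a0, -a1, -a2

print(np.sort(np.linalg.eigvals(companion)))  # -> [1. 2. 3.]
print(np.sort(np.roots(coeffs)))              # numpy takes the same approach
```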
This chapter presents some examples of the spectra of complex networks, which we have tried to interpret or to understand using the theory of previous chapters. In contrast to the mathematical rigor of the other chapters, this chapter is more intuitively oriented and touches on topics that are not yet understood or that lack maturity. Nevertheless, the examples may give a flavor of how real-world complex networks are analyzed as a sequence of small and partial steps towards (hopefully) complete understanding.
Simple observations
When we visualize the density function fλ(x) of the eigenvalues of the adjacency matrix of a graph, defined in art. 121, peaks at x = 0, x = −1 and x = −2 are often observed. The occurrence of adjacency eigenvalues at these integer values has a physical explanation.
A graph with eigenvalue λ(A) = 0
A matrix has a zero eigenvalue if its determinant is zero (art. 138). A determinant is zero if two rows are identical or, more generally, if some of the rows are linearly dependent. For example, two rows are identical, resulting in λ(A) = 0, if two nodes that are not mutually interconnected are connected to the same set of nodes. Since the elements aij of an adjacency matrix A are only 0 or 1, linear dependence of rows occurs every time the sum of a set of rows equals another row of the adjacency matrix.
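This mechanism is easy to verify numerically. In the sketch below (our example), nodes 0 and 1 are not adjacent to each other but share the neighbours {2, 3}, so rows 0 and 1 of A coincide, A is singular, and λ(A) = 0 appears in the spectrum:

```python
# Two non-adjacent nodes (0 and 1) attached to the same neighbours {2, 3}:
# rows 0 and 1 of the adjacency matrix are identical, so det(A) = 0 and
# lambda(A) = 0 appears in the spectrum.
import numpy as np

A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])

eigenvalues = np.linalg.eigvalsh(A)   # A is symmetric
print(np.round(eigenvalues, 6))       # one eigenvalue is (numerically) 0
```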
Despite the fact that complex networks are the driving force behind the investigation of the spectra of graphs, it is not the purpose of this book to dwell on complex networks. A generally accepted, all-encompassing definition of a complex network does not seem to be available. Instead, complex networks are understood by instantiation: the Internet, transportation (car, train, airplane) and infrastructural (electricity, gas, water, sewer) networks, biological molecules, the human brain network, social networks, and software dependency networks are examples of complex networks. By now, there is such a large literature about complex networks, predominantly in the physics community, that providing a detailed survey is a daunting task. We content ourselves here with referring to some review articles by Strogatz (2001); Newman et al. (2001); Albert and Barabasi (2002); Newman (2003b), and to books in the field by Watts (1999); Barabasi (2002); Dorogovtsev and Mendes (2003); Barrat et al. (2008); Dehmer and Emmert-Streib (2009); Newman (2010), and to references in these works. Applications of spectral graph theory to chemistry and physics are found in Cvetković et al. (1995, Chapter 8).
Complex networks can be represented by a graph, denoted by G, consisting of a set 𝒩 of N nodes connected by a set ℒ of L links. Sometimes, nodes and links are called vertices and edges, respectively, and are correspondingly denoted by the sets V and E.