Research in Grid Computing has become popular with the growth in network technologies and high-performance computing. Grid Computing demands the transfer of large amounts of data in a timely manner.
In this chapter, we discuss Grid Computing and networking. We begin with an introduction to Grid Computing and discuss its architecture. We provide some information on Grid networks and continue with various current applications of Grid networking. The remainder of the chapter is devoted to research in Grid networks. We discuss the techniques developed by various researchers with respect to resource scheduling in Grid networks.
Introduction
Today, the demand for computational, storage, and network resources continues to grow. At the same time, a vast amount of these resources remains underused. Utilization can be increased by executing tasks on shared computational and storage resources that communicate over a network. Imagine a team of researchers performing a job that consists of a number of tasks, each demanding different computational, storage, and network resources. Distributing the tasks across a network according to resource availability is called distributed computing. Grid Computing is a recent development in distributed computing. The term “The Grid” was coined in the mid-1990s to denote a proposed distributed computing infrastructure for advanced science and engineering.
Grid Computing enables efficient utilization of geographically distributed and heterogeneous computational resources to execute large-scale scientific computing applications.
Although there are many reasons to adopt a multi-path routing paradigm in the Internet, the required multi-path support is currently far from universal. It is mostly limited to domains that rely on IGP features to improve load distribution in their internal infrastructure, or to multi-homed parties that base their load balancing on traffic engineering. This chapter explains the motivations for a multi-path routing scheme for the Internet, comments on existing alternatives, and details two new proposals. Part of this work was done within the framework of the Trilogy research and development project, whose main objectives are also described in the chapter.
Introduction
Multi-path routing techniques make routers aware of the different possible paths towards a particular destination so that they can use them, subject to certain restrictions. Since several next hops for the same destination prefix will be installed in the forwarding table, all of them can be used at the same time. Although multi-path routing has many interesting properties, reviewed in Section 12.3, it is important to note that in the current Internet the required multi-path routing support is far from universal. It is mostly limited to domains that deploy multi-path capabilities relying on Interior Gateway Protocol (IGP) features to improve the load distribution in their internal infrastructure, normally allowing the use of multiple paths only if they all have the same cost.
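To make the equal-cost case concrete, the sketch below shows one common way a router can spread traffic over several next hops installed for the same prefix: hash each flow's identifying fields and use the result to pick a path, so that packets of one flow stay on one path. This is only an illustrative Python sketch of hash-based splitting; the addresses and field names are invented and it is not a mechanism taken from this chapter.

    import hashlib

    def select_next_hop(next_hops, src_ip, dst_ip, src_port, dst_port, proto):
        """Pick one of several equal-cost next hops by hashing the flow 5-tuple.

        Hashing per flow (rather than per packet) keeps packets of the same
        flow on the same path, which avoids reordering.
        """
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        digest = hashlib.sha256(key).digest()
        index = int.from_bytes(digest[:4], "big") % len(next_hops)
        return next_hops[index]

    # Example: three equal-cost next hops installed for the same prefix.
    paths = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]
    print(select_next_hop(paths, "192.0.2.10", "198.51.100.7", 49152, 443, "tcp"))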
In this paper we study planar first-passage percolation (FPP) models on random Delaunay triangulations. In [14], Vahidi-Asl and Wierman showed, using sub-additivity theory, that the rescaled first-passage time converges to a finite and non-negative constant μ. We give a sufficient condition ensuring that μ > 0 and derive upper bounds for the fluctuations. Our proofs are based on percolation ideas and on the method of martingales with bounded increments.
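Schematically, and with notation assumed here rather than taken from the paper, the sub-additivity result referred to above takes the form

    \[
      \lim_{n \to \infty} \frac{T(0, n x)}{n} \;=\; \mu
      \qquad \text{a.s. and in } L^{1},
    \]

where T(0, nx) denotes the first-passage time between 0 and nx on the random Delaunay triangulation; the question addressed here is when this limit is strictly positive.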
Despite the world-changing success of the Internet, shortcomings in its routing and forwarding system (i.e., the network layer) have become increasingly apparent. One symptom is an escalating “arms race” between users and providers: providers understandably want to control use of their infrastructure; users understandably want to maximize the utility of the best-effort connectivity that providers offer. The result is a growing accretion of hacks, layering violations and redundant overlay infrastructures, each intended to help one side or the other achieve its policies and service goals.
Consider the growing number of overlay networks being deployed by users. Many of these overlays are designed specifically to support network layer services that cannot be supported (well) by the current network layer. Examples include resilient overlays that route packets over multiple paths to withstand link failures, distributed hash table overlays that route packets to locations represented by the hash of some value, multicast and content distribution overlays that give users greater control of group membership and distribution trees, and other overlay services. In many of these examples, there is a “tussle” between users and providers over how packets will be routed and processed. By creating an overlay network, users are able to, in a sense, impose their own routing policies – possibly violating those of the provider – by implementing a “stealth” relay service.
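As one concrete illustration of the overlay idea, the toy sketch below routes a key to the overlay node responsible for it via consistent hashing, the mechanism underlying distributed hash table overlays in general. The node names and hash choices are invented and do not describe any specific system discussed here.

    import bisect
    import hashlib

    def h(value: str) -> int:
        """Map a string onto a point on the hash ring."""
        return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

    class HashRing:
        """Toy consistent-hash ring: each key is routed to the first node
        clockwise from the key's position on the ring."""

        def __init__(self, nodes):
            self.ring = sorted((h(n), n) for n in nodes)

        def lookup(self, key: str) -> str:
            points = [p for p, _ in self.ring]
            i = bisect.bisect_right(points, h(key)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.lookup("some-content-id"))  # overlay node responsible for this key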
The lack of support for flexible business relationships and policies is another problem area for the current network layer.
The last decade has seen some dramatic changes in the demands placed on core networks. Data has permanently replaced voice as the dominant traffic unit. The growth of applications like file sharing and storage area networking took many by surprise. Video distribution, a relatively old application, is now being delivered via packet technology, changing traffic profiles even for traditional services.
The shift in dominance from voice to data traffic has many consequences. In the data world, applications, hardware, and software change rapidly. We are seeing an unprecedented unpredictability and variability in traffic patterns. This means network operators must maintain an infrastructure that quickly adapts to changing subscriber demands, and contain infrastructure costs by efficiently applying network resources to meet those demands.
Current core network transport equipment supports high-capacity, global-scale core networks by relying on higher-speed interfaces such as 40 and 100 Gb/s. This is necessary but not, in itself, sufficient. Today, it takes considerable time and human involvement to provision a core network to accommodate new service demands or exploit new resources. Agile, autonomous resource management is imperative for the next-generation network.
Today's core network architectures are based on static point-to-point transport infrastructure. Higher-layer services are isolated within their place in the traditional Open Systems Interconnection (OSI) network stack. While the stack has clear benefits in collecting conceptually similar functions into layers and invoking a service model between them, stovepiped management has resulted in multiple parallel networks within a single network operator's infrastructure.
In the design of large-scale communication networks, a major practical concern is the extent to which control can be decentralized. A decentralized approach to flow control has been very successful as the Internet has evolved from a small-scale research network to today's interconnection of hundreds of millions of hosts; but it is beginning to show signs of strain. In developing new end-to-end protocols, the challenge is to understand just which aspects of decentralized flow control are important. One may start by asking: how should capacity be shared among users? How should flows through a network be organized so that the network responds sensibly to failures and overloads? And how can routing, flow control, and connection acceptance algorithms be designed to work well in uncertain and random environments?
One of the more fruitful theoretical approaches has been based on a framework that allows a congestion control algorithm to be interpreted as a distributed mechanism solving a global optimization problem; for some overviews see [1, 2, 3]. Primal algorithms, such as the Transmission Control Protocol (TCP), broadly correspond with congestion control mechanisms where noisy feedback from the network is averaged at endpoints, using increase and decrease rules of the form first developed by Jacobson. Dual algorithms broadly correspond with more explicit congestion control protocols where averaging at resources precedes the feedback of relatively precise information on congestion to endpoints.
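For orientation, the underlying optimization problem can be written down explicitly. The formulation below is the standard network utility maximization of this framework, with symbols (route rates x_r, utilities U_r, resource capacities C_j) chosen here rather than taken from the chapter:

    \[
      \max_{x \ge 0} \; \sum_{r} U_r(x_r)
      \qquad \text{subject to} \qquad
      \sum_{r \,:\, j \in r} x_r \;\le\; C_j \quad \text{for every resource } j .
    \]

In this language, a primal algorithm adjusts each rate x_r from aggregated congestion feedback along its route, for example \( \dot{x}_r = \kappa_r \bigl( w_r - x_r \sum_{j \in r} p_j\bigl(\sum_{s : j \in s} x_s\bigr) \bigr) \), while a dual algorithm updates explicit prices at the resources and lets endpoints react to them.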
While different user simulations are built to assist dialog system development, there is an increasing need to assess the quality of these simulations quickly and reliably. Previous studies have proposed several automatic evaluation measures for this purpose. However, the validity of these evaluation measures has not been fully established. We present an assessment study in which human judgments of user simulation quality are collected as the gold standard for validating automatic evaluation measures. We show that a ranking model can be built using the automatic measures to predict rankings of the simulations in the same order as the human judgments. We further show that the ranking model can be improved by adding a simple feature based on time-series analysis.
We introduce a ‘limiting Frobenius structure’ attached to any degeneration of projective varieties over a finite field of characteristic p which satisfies a p-adic lifting assumption. Our limiting Frobenius structure is shown to be effectively computable in an appropriate sense for a degeneration of projective hypersurfaces. We conjecture that the limiting Frobenius structure relates to the rigid cohomology of a semistable limit of the degeneration through an analogue of the Clemens–Schmidt exact sequence. Our construction is illustrated, and conjecture supported, by a selection of explicit examples.
The computation of growth series for the higher Baumslag–Solitar groups is an open problem first posed by de la Harpe and Grigorchuk. We study the growth of the horocyclic subgroup as the key to the overall growth of these Baumslag–Solitar groups BS(p,q), where 1<p<q. In fact, the overall growth series can be represented as a modified convolution product with one of the factors being based on the series for the horocyclic subgroup. We exhibit two distinct algorithms that compute the growth of the horocyclic subgroup and discuss the time and space complexity of these algorithms. We show that when p divides q, the horocyclic subgroup has a geodesic combing whose words form a context-free (in fact, one-counter) language. A theorem of Chomsky–Schützenberger allows us to compute the growth series for this subgroup, which is rational. When p does not divide q, we show that no geodesic combing for the horocyclic subgroup forms a context-free language, although there is a context-sensitive geodesic combing. We exhibit a specific linearly bounded Turing machine that accepts this language (with quadratic time complexity) in the case of BS(2,3) and outline the Turing machine construction in the general case.
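For reference, the groups in question admit the standard one-relator presentation below; the identification of the horocyclic subgroup with the subgroup generated by a is the usual convention and is stated here only for orientation.

    \[
      BS(p,q) \;=\; \langle\, a, t \;\mid\; t\,a^{p}\,t^{-1} = a^{q} \,\rangle ,
      \qquad 1 < p < q ,
    \]

with the horocyclic subgroup usually taken to be the subgroup generated by a.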
Bipartivity is an important network concept that can be applied to nodes, edges and communities. Here we focus on directed networks and look for subnetworks made up of two distinct groups of nodes, connected by ‘one-way’ links. We show that a spectral approach can be used to find hidden substructures of this form. Theoretical support is given for the idealized case where there is limited overlap between subnetworks. Numerical experiments show that the approach is robust to spurious and missing edges. A key application of this work is in the analysis of high-throughput gene expression data, and we give an example where a biologically meaningful directed bipartite subnetwork is found from a cancer microarray dataset.
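The spectral idea can be illustrated on a toy directed network: if one group of nodes sends one-way links to another, the adjacency matrix contains an off-diagonal block, and the leading singular vectors of the matrix tend to localize on the two groups. The Python/NumPy sketch below plants such a block in a small random network and recovers candidate source and target nodes from the leading singular vectors; it illustrates the general SVD-based approach only, not the specific algorithm of this paper, and all sizes and thresholds are invented.

    import numpy as np

    # Toy directed network: nodes 0-3 send one-way links to nodes 4-7,
    # plus a little background noise elsewhere.
    rng = np.random.default_rng(0)
    n = 12
    A = (rng.random((n, n)) < 0.05).astype(float)   # sparse background edges
    np.fill_diagonal(A, 0)
    A[np.ix_(range(0, 4), range(4, 8))] = 1.0       # planted one-way bipartite block

    # Singular value decomposition of the (non-symmetric) adjacency matrix.
    # For a planted block of one-way links, the leading left singular vector
    # tends to be large on the "source" group and the leading right singular
    # vector large on the "target" group.
    U, s, Vt = np.linalg.svd(A)
    source_scores = np.abs(U[:, 0])
    target_scores = np.abs(Vt[0, :])

    k = 4  # assumed group size for this toy example
    print("candidate source nodes:", np.argsort(-source_scores)[:k])
    print("candidate target nodes:", np.argsort(-target_scores)[:k])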
In recent years game theory has had a substantial impact on computer science, especially on Internet- and e-commerce-related issues. Algorithmic Game Theory, first published in 2007, develops the central ideas and results of this exciting area in a clear and succinct manner. More than 40 of the top researchers in this field have written chapters that go from the foundations to the state of the art. Basic chapters on algorithmic methods for equilibria, mechanism design and combinatorial auctions are followed by chapters on important game theory applications such as incentives and pricing, cost sharing, information markets and cryptography and security. This definitive work will set the tone of research for the next few years and beyond. Students, researchers, and practitioners alike need to learn more about these fascinating theoretical developments and their widespread practical application.
We establish Choquet–Kendall–Matheron theorems on non-Hausdorff topological spaces. This typical result of random set theory is profitably recast in purely topological terms using intuitions and tools from domain theory. We obtain three variants of the theorem, each one characterising distributions, in the form of continuous valuations, over relevant powerdomains of demonic, angelic and erratic non-determinism, respectively.
When generating gaits for soft robots (those with no explicit joints), it is not evident that undulating control schemes are the most efficient. In considering alternative control schemes, however, the computational costs of evaluating continuum mechanic models of soft robots represent a significant bottleneck. We consider the use of lumped dynamic models for soft robotic systems. Such models have not been employed previously to design gaits for soft robotic systems, though they are widely used to simulate robots with compliant joints. A major question is whether these methods are accurate enough representations of soft robots to enable gait design and optimization. This paper addresses the potential “reality gap” between simulation and experiment for the particular case of a soft caterpillar-like robot. Experiments with a prototype soft crawler demonstrate that the lumped dynamic model can capture essential soft-robot mechanics well enough to enable gait optimization. Significantly, experiments verified that a prototype robot achieved high performance for control patterns optimized in simulation and dramatically reduced performance for gait parameters perturbed from their optimized values.
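To indicate what a lumped dynamic model of a soft body looks like in practice, the Python sketch below integrates a small chain of point masses joined by spring-damper elements whose rest lengths follow a travelling-wave actuation pattern. It is a generic illustration only: the structure, parameters, and gait pattern are invented here and do not describe the crawler or the model used in the paper.

    import numpy as np

    # Toy lumped model of a soft body: a chain of point masses connected by
    # spring-damper elements whose rest lengths are modulated over time to
    # mimic an actuation pattern. Parameters are illustrative only.
    n = 5                      # number of lumped masses
    m, k, c = 0.02, 50.0, 0.5  # mass [kg], stiffness [N/m], damping [N s/m]
    L0, dt = 0.03, 1e-4        # rest length [m], time step [s]

    x = np.arange(n) * L0      # positions along one axis
    v = np.zeros(n)

    def rest_length(i, t, amp=0.01, freq=1.5, phase_lag=0.6):
        """Travelling-wave actuation: each segment's rest length oscillates
        with a phase offset, a simple stand-in for a crawling gait."""
        return L0 + amp * np.sin(2 * np.pi * freq * t - phase_lag * i)

    t = 0.0
    for _ in range(20000):
        f = np.zeros(n)
        for i in range(n - 1):
            stretch = (x[i + 1] - x[i]) - rest_length(i, t)
            rel_vel = v[i + 1] - v[i]
            fs = k * stretch + c * rel_vel   # spring-damper force on segment i
            f[i] += fs
            f[i + 1] -= fs
        v += (f / m) * dt                    # explicit Euler integration
        x += v * dt
        t += dt

    print("final segment positions [m]:", np.round(x, 4))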
Let P be a set of n points in ℝ^3, and let k ≤ n be an integer. A sphere σ is k-rich with respect to P if |σ ∩ P| ≥ k, and is η-non-degenerate, for a fixed fraction 0 < η < 1, if no circle γ ⊂ σ contains more than η|σ ∩ P| points of P.
We improve the previous bound given in [1] on the number of k-rich η-non-degenerate spheres in 3-space with respect to any set of n points in ℝ^3, from O(n^4/k^5 + n^3/k^3), which holds for all 0 < η < 1/2, to O*(n^4/k^{11/2} + n^2/k^2), which holds for all 0 < η < 1 (in both bounds, the constants of proportionality depend on η). The new bound implies the improved upper bound O*(n^{58/27}) ≈ O(n^{2.1482}) on the number of mutually similar triangles spanned by n points in ℝ^3; the previous bound was O(n^{13/6}) ≈ O(n^{2.1667}) [1].