Schlipf (1995) proved that Stable Logic Programming (SLP) solves all $\mathit{NP}$ decision problems. We extend Schlipf's result to prove that SLP solves all search problems in the class $\mathit{NP}$. Moreover, we do this in a uniform way as defined in Marek and Truszczyński (1991). Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program $P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$ with non-negative integer coefficients and any input $\sigma$ of size $n$ over a fixed alphabet $\Sigma$, there is an extensional database $\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ and the accepting computations of the machine $M$ that reach the final state in at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in polynomial time from $p$, $\sigma$ and the description of $M$, and the decoding of such accepting computations from the corresponding stable model of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear time. A similar statement holds for Default Logic with respect to $\Sigma_2^\mathrm{P}$-search problems. The proof of this result involves additional technical complications and will be the subject of another publication.
We provide a semantic framework for preference handling in answer set programming. To this end, we introduce preference preserving consequence operators. The resulting fixpoint characterizations provide us with a uniform semantic framework for characterizing preference handling in existing approaches. Although our approach is extensible to other semantics by means of an alternating fixpoint theory, we focus here on the elaboration of preferences under answer set semantics. Alternatively, we show how these approaches can be characterized by the concept of order preservation. These uniform semantic characterizations provide us with new insights about inter-relationships and, moreover, about ways of implementation.
Connections between the sequentiality/concurrency distinction and the semantics of proofs are investigated, with particular reference to games and Linear Logic.
A multi-agent system architecture for coordination of just-in-time production and distribution is presented. The problem to solve is twofold: first, the right amount of resources should be produced at the right time; then these resources should be distributed to the right consumers. To solve the first problem, which is hard when the production and/or distribution time is relatively long, each consumer is equipped with an agent that predicts future needs and sends these predictions to a production agent. The second part of the problem is approached by forming clusters of consumers within which resources can be redistributed quickly and at low cost in order to cope with discrepancies between predicted and actual consumption. Redistribution agents are introduced (one for each cluster) to manage the redistribution of resources. The suggested architecture is evaluated in a case study concerning the management of district heating systems. Results from a simulation study show that the suggested approach makes it possible to control the trade-off between quality of service and degree of surplus production. We also compare the suggested approach to a reference control scheme (approximately corresponding to the current approach to district heating management), and conclude that it is possible to reduce the amount of resources produced while maintaining the quality of service. Finally, we describe a simulation experiment in which the relation between the size of the clusters and the quality of service was studied.
Designing realistic multi-agent systems is a complex process, which involves specifying not only the functionality of individual agents, but also the authority relationships and lines of communication existing among them. In other words, designing a multi-agent system refers to designing an agent organisation. Existing methodologies follow a wide variety of approaches to designing agent organisations, but they do not provide adequate support for the decisions involved in moving from analysis to design. Instead, they require designers to make ad hoc design decisions while working at a low level of abstraction.
We have developed RAMASD (Role Algebraic Multi-Agent System Design), a method for semi-automatic design of agent organisations based on the concept of role models as first-class design constructs. Role models represent agent behaviour, and the design of the agent system is done by systematically allocating roles to agents. The core of this method is a formal model of basic relations between roles, which we call role algebra. The semantics of this role-relationships model are formally defined using a two-sorted algebra.
In this paper, we review existing agent system design methodologies to highlight areas where further work is required, describe how our method can address some of the outstanding issues and demonstrate its application to a case study involving telephone repair service teams.
Event-based systems are developed and used to integrate components in loosely coupled systems. Research and product development have focused so far on efficiency issues but neglected methodological support to build such systems. In this article, the modular design and implementation of an event system is presented which supports scopes and event mappings, two new and powerful structuring methods that facilitate engineering and coordination of components in event-based systems. We give a formal specification of scopes and event mappings within a trace-based formalism adapted from temporal logic. This is complemented by a comprehensive introduction to the event-based style, its benefits and requirements.
This paper focuses on coordination middleware for distributed applications based on active documents and XML technologies. First, the paper introduces the main concepts underlying active documents and XML, and identifies the strict relations between active documents and mobile agents (“document agents”). Then, the paper goes into details about the problem of defining a suitable middleware architecture to support coordination activities in applications including active documents and mobile agents, by specifically focusing on the role played by XML technologies in that context. A simple taxonomy is introduced to characterise coordination middleware architectures depending on the way they exploit XML documents in supporting coordination. The characteristics of several middleware infrastructures are then surveyed and evaluated, also with the help of a simple example scenario in the area of distributed workflow management. This analysis enables us to identify the advantages and the shortcomings of the different approaches, and the basic requirements of a middleware for XML-centric applications.
Parallel and distributed languages specify computations on multiple processors and have a computation language to describe the algorithm, i.e. what to compute, and a coordination language to describe how to organise the computations across the processors. Haskell has been used as the computation language for a wide variety of parallel and distributed languages, and this paper is a comprehensive survey of implemented languages. We outline parallel and distributed language concepts and classify Haskell extensions using them. Similar example programs are used to illustrate and contrast the coordination languages, and the comparison is facilitated by the common computation language. A lazy language is not an obvious choice for parallel or distributed computation, and we address the question of why Haskell is a common functional computation language.
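The split between a computation language and a coordination language can be illustrated with GpH's `par` and `pseq` combinators from the `parallel` package (a minimal sketch, not an example taken from the survey; it assumes the `parallel` package is available):

```haskell
import Control.Parallel (par, pseq)

-- Sum the two halves of a list in parallel: `par` sparks the left
-- half for possible evaluation on another processor, while `pseq`
-- forces the right half in the current thread first. The annotations
-- coordinate the work; the computation itself is plain Haskell.
parSum :: [Int] -> Int
parSum xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left  = sum as
    right = sum bs

main :: IO ()
main = print (parSum [1 .. 100])  -- prints 5050
```

Note that `par` and `pseq` only affect evaluation order, not the result, so the same program denotes the same value whether or not it runs in parallel.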
We define a family of embedded domain specific languages for generating HTML and XML documents. Each language is implemented as a combinator library in Haskell. The generated HTML/XML documents are guaranteed to be well-formed. In addition, each library can guarantee that the generated documents are valid XML documents to a certain extent (for HTML only a weaker guarantee is possible). On top of the libraries, Haskell serves as a meta language to define parameterized documents, to map structured documents to HTML/XML, to define conditional content, or to define entire web sites. The combinator libraries support element-transforming style, a programming style that allows programs to have a visual appearance similar to HTML/XML documents, without modifying the syntax of Haskell.
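The idea of guaranteeing well-formedness by construction can be sketched with a toy combinator library (the names `Html`, `tag` and `text` are illustrative inventions, not the paper's actual API):

```haskell
-- A toy HTML combinator library in the spirit described above.
-- Documents are assembled only from complete elements, so the
-- rendered output can never contain mismatched tags.
newtype Html = Html { render :: String }

text :: String -> Html
text = Html  -- (a real library would also escape special characters)

tag :: String -> [Html] -> Html
tag name children =
  Html ("<" ++ name ++ ">" ++ concatMap render children ++ "</" ++ name ++ ">")

page :: Html
page = tag "html" [tag "body" [tag "h1" [text "Hello"]]]

main :: IO ()
main = putStrLn (render page)
-- prints <html><body><h1>Hello</h1></body></html>
```

Because `tag` emits the opening and closing tag together, the nesting of combinator calls mirrors the nesting of the document, which is the visual similarity the element-transforming style exploits.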
Dependent types reflect the fact that validity of data is often a relative notion by allowing prior data to affect the types of subsequent data. Not only does this make for a precise type system, but also a highly generic one: both the type and the program for each instance of a family of operations can be computed from the data which codes for that instance. Recent experimental extensions to the Haskell type class mechanism give us strong tools to relativize types to other types. We may simulate some aspects of dependent typing by making counterfeit type-level copies of data, with type constructors simulating data constructors and type classes simulating datatypes. This paper gives examples of the technique and discusses its potential.
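The technique of making "counterfeit type-level copies of data" can be sketched with type-level naturals (a standard illustration of the approach, assuming GHC extensions; not code from the paper):

```haskell
{-# LANGUAGE EmptyDataDecls, ScopedTypeVariables, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances, UndecidableInstances #-}

-- Counterfeit type-level copies of the naturals: the type constructors
-- Zero and Succ simulate data constructors, and the class Add simulates
-- a function defined by pattern matching over that "datatype".
data Zero
data Succ n

class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- Reflect a type-level number back to an ordinary Int.
class Nat n where nat :: n -> Int
instance Nat Zero where nat _ = 0
instance Nat n => Nat (Succ n) where nat _ = 1 + nat (undefined :: n)

-- The result type c is computed by instance search, i.e. at compile time.
add :: Add a b c => a -> b -> c
add _ _ = undefined

main :: IO ()
main = print (nat (add (undefined :: Succ Zero)
                       (undefined :: Succ (Succ Zero))))
-- prints 3: 1 + 2 computed at the type level
```

The functional dependency `a b -> c` is what lets instance resolution behave like evaluation of the simulated function.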
Since its inception in 1987, Haskell has provided a focal point for research in lazy functional programming. During this time the language has continually evolved, as a result of both theoretical advances and practical experience. Haskell has proved to be a powerful tool for many kinds of programming tasks, and an excellent vehicle for many aspects of computing pedagogy and research. The recent definition of Haskell 98 provides a long-awaited stable version of the language, but there are many exciting possibilities for future versions of Haskell.
This paper gives a static semantics for Haskell 98, a non-strict purely functional programming language. The semantics formally specifies nearly all the details of the Haskell 98 type system, including the resolution of overloading, kind inference (including defaulting) and polymorphic recursion, the only major omission being a proper treatment of ambiguous overloading and its resolution. Overloading is translated into explicit dictionary passing, as in all current implementations of Haskell. The target language of this translation is a variant of the Girard–Reynolds polymorphic lambda calculus featuring higher order polymorphism and explicit type abstraction and application in the term language. Translated programs can thus still be type checked, although the implicit version of this system is impredicative. A surprising result of this formalization effort is that the monomorphism restriction, when rendered in a system of inference rules, compromises the principal type property.
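The dictionary-passing translation can be illustrated by hand: a class becomes a record of methods, and an overloaded function takes that record as an ordinary argument (a schematic sketch of the idea, not the paper's formal translation):

```haskell
-- The class `Eq a` translates into a dictionary type whose fields
-- are the class methods.
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- An overloaded function such as `elem` becomes an ordinary function
-- that receives the dictionary explicitly.
elemD :: EqDict a -> a -> [a] -> Bool
elemD d x = any (eq d x)

-- The instance `Eq Int` translates into a concrete dictionary value.
eqInt :: EqDict Int
eqInt = EqDict (==)

main :: IO ()
main = print (elemD eqInt 3 [1, 2, 3])  -- prints True
```

Since dictionaries are ordinary values, the translated program lives in an explicitly typed lambda calculus and can be type checked again after translation, as the abstract notes.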
Higher-order languages such as Haskell encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program. In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale “production” inliner, the one used in the Glasgow Haskell compiler. We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner.
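The basic transformation can be shown on a toy example (illustrative code, not taken from the paper): inlining replaces a call with an instance of the function's body, after which further simplification becomes possible.

```haskell
-- Before inlining: the pipeline is built by composing small functions.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)

inc, double :: Int -> Int
inc    = (+ 1)
double = (* 2)

pipeline :: Int -> Int
pipeline = compose double inc

-- After inlining `compose`, `inc` and `double`, the compiler can
-- rewrite the pipeline to the equivalent direct definition, with no
-- intermediate closures or calls:
pipelineInlined :: Int -> Int
pipelineInlined x = (x + 1) * 2

main :: IO ()
main = print (pipeline 3 == pipelineInlined 3)  -- prints True
```

The "black art" the abstract refers to lies in deciding when such a rewrite pays off, since inlining every call site duplicates code.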
Server applications, and in particular network-based server applications, place a unique combination of demands on a programming language: lightweight concurrency, high I/O throughput, and fault tolerance are all important. This paper describes a prototype web server written in Concurrent Haskell (with extensions), and presents two useful results: firstly, a conforming server could be written with minimal effort, leading to an implementation in less than 1500 lines of code, and secondly the naive implementation produced reasonable performance. Furthermore, making minor modifications to a few time-critical components improved performance to a level acceptable for anything but the most heavily loaded web servers.
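The lightweight concurrency such a server relies on can be sketched with `forkIO` from `Control.Concurrent` (a minimal thread-per-request skeleton with the network code elided; `runRequests` is an illustrative name, not from the paper):

```haskell
import Control.Concurrent
  (forkIO, newMVar, newEmptyMVar, modifyMVar_, putMVar, takeMVar, readMVar)
import Control.Monad (forM_, replicateM_)

-- Handle n "requests", each in its own lightweight thread, mirroring
-- the one-thread-per-connection structure of the server. An MVar
-- serializes access to shared state; a second MVar signals completion.
runRequests :: Int -> IO Int
runRequests n = do
  counter  <- newMVar 0
  finished <- newEmptyMVar
  forM_ [1 .. n] $ \_ -> forkIO $ do
    modifyMVar_ counter (return . (+ 1))  -- the "request handler"
    putMVar finished ()
  replicateM_ n (takeMVar finished)       -- wait for all handlers
  readMVar counter

main :: IO ()
main = runRequests 10 >>= print  -- prints 10
```

GHC threads cost on the order of a few hundred bytes each, which is what makes the thread-per-connection design viable even under heavy load.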
This paper is concerned with path techniques for quantitative analysis of the logarithmic Sobolev constant on a countable set. We present new upper bounds on the logarithmic Sobolev constant, which generalize those given by Sinclair [20], in the case of the spectral gap constant involving path combinatorics. Some examples of applications are given. Then, we compare our bounds to the Hardy constant in the particular case of birth and death processes. Finally, following the approach of Rosenthal in [18], we generalize our bounds to continuous sets.
In this article we introduce combinatorial multicolour discrepancies and generalize several classical results from $2$-colour discrepancy theory to $c$ colours ($c \geq 2$). We give a recursive method that constructs $c$-colourings from approximations of $2$-colour discrepancies. This method works for a large class of theorems, such as the ‘six standard deviations’ theorem of Spencer (1985), the Beck–Fiala (1981) theorem, the results of Matoušek, Wernisch and Welzl (1994) and Matoušek (1995) for bounded VC-dimension, and Matoušek and Spencer's (1996) upper bound for arithmetic progressions. In particular, the $c$-colour discrepancy of an arbitrary hypergraph ($n$ vertices, $m$ hyperedges) is \[ O\Bigl(\sqrt{\tfrac{n}{c}\log m}\Bigr). \] If $m = O(n)$, then this bound improves to \[ O\Bigl(\sqrt{\tfrac{n}{c}\log c}\Bigr). \]
On the other hand, there are examples showing that discrepancy in $c$ colours cannot be bounded in terms of $2$-colour discrepancies in general, even if $c$ is a power of $2$. For the linear discrepancy version of the Beck–Fiala theorem, the recursive approach also fails.
Here we extend the method of floating colours via tensor products of matrices to multicolourings, and prove multicolour versions of the Beck–Fiala theorem and the Bárány–Grinberg theorem. Using properties of the tensor product, we derive a lower bound for the $c$-colour discrepancy of general hypergraphs. For the hypergraph of arithmetic progressions in $\{1, \ldots, n\}$ this yields a lower bound of $\frac{1}{25 \sqrt c} \sqrt[4]{n}$ for the discrepancy in $c$ colours. The recursive method gives an upper bound of $O(c^{-0.16} \sqrt[4]{n})$.
A set $S\subset \mathbb{R}^d$ is $C$-Lipschitz in the $x_i$-coordinate, where $C>0$ is a real number, if, for every two points $a,b\in S$, we have $|a_i-b_i|\leq C \max\{|a_j-b_j| : j=1,2,\ldots,d,\ j\neq i\}$. Motivated by a problem of Laczkovich, the author asked whether every $n$-point set in $\mathbb{R}^d$ contains a subset of size at least $cn^{1-1/d}$ that is $C$-Lipschitz in one of the coordinates, for suitable constants $C$ and $c>0$ (depending on $d$). This was answered negatively by Alberti, Csörnyei and Preiss. Here it is observed that a combinatorial result of Ruzsa and Szemerédi implies the existence of a $2$-Lipschitz subset of size $n^{1/2}\varphi(n)$ in every $n$-point set in $\mathbb{R}^3$, where $\varphi(n)\to\infty$ as $n\to\infty$.
In this paper we consider a bipartite version of Schütte's well-known tournament problem. A bipartite tournament $T=(A,B,E)$ with teams $A$ and $B$, and set of arcs $E$, has the property $S_{k,l}$ if for any subsets $K\subseteq A$ and $L\subseteq B$, with $|K| =k$ and $| L | =l$, there exist conquerors of $K$ and $L$ in opposite teams. The task is to estimate, for fixed $k$ and $l$, the minimum number $f(k,l)=| A | + | B | $ of players in a tournament satisfying property $S_{k,l}$. We achieve this goal by reformulating the problem in terms of intersecting set families and applying probabilistic as well as constructive methods. Intriguing connections with some famous problems of this area have emerged in this way, leading to new open questions.
Let $G$ be a cyclic group of order $n$ and let $\mu = \{x_1,x_2, \dots, x_m\}$ be a sequence of elements of $G$. Let $k$ be the number of distinct values taken by the sequence $\mu$. Let $n\wedge \mu$ denote the set of sums of $n$-element subsequences of $\mu$.
We show that one of the following conditions holds:
(i) $\mu$ has a value repeated $n-k+3$ times;
(ii) $n\wedge \mu$ contains a non-null subgroup;
(iii) $|n\wedge \mu|\geq m-n+k-2$.
We conjecture that the last condition could be improved to $|n\wedge \mu|\geq m-n+k-1$. This conjecture generalizes several known results. We also obtain a generalization of a recent result due to Bollobás and Leader.