The Live Room was a temporary site-specific installation presented in building N51, room 117 on the MIT campus from 7 May to 10 June 1998. Using small acoustic-intensifying equipment mounted directly to the structure of the foundation at the site, the system created an enhanced scale of ‘tectonic charging’ through vibration. The system engaged the architecture by running impulsive energy throughout, creating sound and vibration in direct relation to the building and the dimensions of the space. The project describes an intensified site where machines fuse into architecture and combine active forces with the building forms. The action was an attempt towards the liberation of tectonics from the limitations of the static, creating a place where resonant structures vibrate in sympathy with induced frequencies. By using various transducing devices and signal-generating equipment, the project effectively ‘tuned in’ the location by delivering resonant frequencies. The installation engaged directly with a unique floor system which was already present at the location. Mechanical oscillators were mounted into this floor system so that frequencies were imparted into the building, the floor and the persons who were situated in the room. With this work, I was interested in TRANSDUCING ARCHITECTURE, driving the space with external influences of a vibro-kinetic nature.
Most recently, Answer Set Programming (ASP) has been attracting interest as a new paradigm for problem solving. An important aspect, for which several approaches have been presented, is the handling of preferences between rules. In this paper, we consider the problem of implementing preference handling approaches by means of meta-interpreters in Answer Set Programming. In particular, we consider the preferred answer set approaches by Brewka and Eiter, by Delgrande, Schaub and Tompits, and by Wang, Zhou and Lin. We present suitable meta-interpreters for these semantics using DLV, which is an efficient engine for ASP. Moreover, we also present a meta-interpreter for the weakly preferred answer set approach by Brewka and Eiter, which uses the weak constraint feature of DLV as a tool for expressing and solving an underlying optimization problem. We also consider advanced meta-interpreters, which make use of graph-based characterizations and often allow for more efficient computations. Our approach shows the suitability of ASP in general and of DLV in particular for fast prototyping. This can be fruitfully exploited for experimenting with new languages and knowledge-representation formalisms.
In this paper, we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
A bag (literally and mathematically!) of marbles is deemed to be mingled if all the colours of the marbles are different. Given a positive integer $k$ and a collection of $m$ marbles, our objective is to extract as many mingled bags as possible from the collection, where each bag contains exactly $k$ marbles.
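The abstract states the problem but not a solution method. As a hedged sketch (names and representation are illustrative, not from the paper), one standard approach represents the collection as per-colour counts and binary-searches the largest feasible number of bags, using the counting argument that $t$ mingled bags of size $k$ are formable exactly when each colour contributes at most $\min(\mathrm{count}, t)$ marbles (at most one per bag) and the contributions cover all $kt$ slots:

```python
def max_mingled_bags(counts, k):
    """Maximum number of bags of exactly k marbles, all colours distinct
    within each bag, given per-colour marble counts.

    Feasibility test: t bags are possible iff the colours can jointly
    fill k*t slots, where each colour fills at most min(count, t) slots.
    """
    lo, hi = 0, sum(counts) // k
    while lo < hi:
        t = (lo + hi + 1) // 2          # candidate number of bags
        if sum(min(c, t) for c in counts) >= k * t:
            lo = t                       # t bags feasible, try more
        else:
            hi = t - 1                   # infeasible, shrink
    return lo
```

For example, three colours with three marbles each admit four bags of size two (e.g. AB, AB, AC, BC), which the feasibility test confirms.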
Logic programs $P$ and $Q$ are strongly equivalent if, given any program $R$, programs $P \cup R$ and $Q \cup R$ are equivalent (that is, have the same answer sets). Strong equivalence is convenient for the study of equivalent transformations of logic programs: one can prove that a local change is correct without considering the whole program. Lifschitz, Pearce and Valverde showed that Heyting's logic of here-and-there can be used to characterize strong equivalence for logic programs with nested expressions (which subsume the better-known extended disjunctive programs). This note considers a simpler, more direct characterization of strong equivalence for such programs, and shows that it can also be applied without modification to the weight constraint programs of Niemelä and Simons. Thus, this characterization of strong equivalence is convenient for the study of equivalent transformations of logic programs written in the input languages of answer set programming systems dlv and SMODELS. The note concludes with a brief discussion of results that can be used to automate reasoning about strong equivalence, including a novel encoding that reduces the problem of deciding the strong equivalence of a pair of weight constraint programs to that of deciding the inconsistency of a weight constraint program.
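On small ground programs, the here-and-there characterization can be checked by brute force: two programs are strongly equivalent exactly when their sets of HT-models coincide. The following sketch (hypothetical helper names; ground normal rules only, written as (head, positive body, negative body) triples) enumerates HT-interpretations $(H, T)$ with $H \subseteq T$:

```python
from itertools import combinations

def ht_models(program, atoms):
    """All HT-interpretations (H, T), H subset of T, satisfying a ground
    normal program. A rule holds at (H, T) if it holds at both worlds,
    with 'not a' read as 'a not in T' in each world."""
    def holds(world, T):
        return all(h in world
                   for h, pos, neg in program
                   if pos <= world and not (neg & T))
    subsets = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(sorted(atoms), r)]
    return {(frozenset(H), frozenset(T))
            for T in subsets for H in subsets
            if H <= T and holds(H, T) and holds(T, T)}
```

For instance, the programs {a :- b. b.} and {a. b.} have the same HT-models (hence are strongly equivalent), whereas {p.} and {p :- not q.} have the same answer sets but different HT-models, so adding q. to each separates them.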
In this paper, bounded model checking of asynchronous concurrent systems is introduced as a promising application area for answer set programming. As the model of asynchronous systems, a generalisation of communicating automata, 1-safe Petri nets, is used. It is shown how a 1-safe Petri net and a requirement on the behaviour of the net can be translated into a logic program such that the bounded model checking problem for the net can be solved by computing stable models of the corresponding program. The use of the stable model semantics leads to compact encodings of bounded reachability and deadlock detection tasks as well as the more general problem of bounded model checking of linear temporal logic. Correctness proofs of the devised translations are given, and some experimental results using the translation and the Smodels system are presented.
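The paper's contribution is the translation into logic programs; purely as an illustration of the underlying task, here is a brute-force bounded reachability check for a 1-safe net (the encoding and names are assumptions for this sketch, not the paper's translation):

```python
def bounded_reachable(initial, transitions, goal, bound):
    """Check whether a marking containing all places in `goal` is
    reachable from `initial` within `bound` interleaved steps of a
    1-safe Petri net. `transitions` maps names to (preset, postset)
    pairs of place sets; markings are sets of marked places."""
    frontier = {frozenset(initial)}
    seen = set(frontier)
    for _ in range(bound + 1):
        if any(goal <= m for m in frontier):
            return True
        nxt = set()
        for m in frontier:
            for pre, post in transitions.values():
                if pre <= m:                       # transition enabled
                    m2 = frozenset((m - pre) | post)
                    if m2 not in seen:
                        seen.add(m2)
                        nxt.add(m2)
        frontier = nxt
    return False
```

The logic-program encoding described in the abstract replaces this explicit state exploration with a program whose stable models correspond to executions of length at most the bound.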
The search for an appropriate characterization of negation as failure in logic programs in the mid 1980s led to several proposals. Amongst them, the stable model semantics – later referred to as answer set semantics – and the well-founded semantics are the most popular and widely referred ones. According to the latest (September 2002) list of most cited source documents in the CiteSeer database (http://citeseer.nj.nec.com), the original stable model semantics paper (Gelfond and Lifschitz, 1988) is ranked 10th with 649 citations and the well-founded semantics paper (Van Gelder et al., 1991) is ranked 70th with 306 citations. Since 1988 – when stable model semantics was proposed – there has been a large body of work centered around logic programs with answer set semantics, covering topics such as: systematic program development, systematic program analysis, knowledge representation, declarative problem solving, answer set computing algorithms, complexity and expressiveness, answer set computing systems, relation with other non-monotonic and knowledge representation formalisms, and applications to various tasks.
In this paper we show how the expressive power of a language for the description of timed processes strongly affects the discriminating power of urgent and patient actions. In a sense, it studies the interplay between syntax and semantics of time-critical systems.
Computer Science has witnessed the emergence of a plethora of different logics, models and paradigms for the description of computation. Yet, the classic Church–Turing thesis may be seen as indicating that all general models of computation are equivalent. Alan Perlis referred to this as the ‘Turing tarpit’, and argued that some of the most crucial distinctions in computing methodology, such as sequential versus parallel, deterministic versus non-deterministic, and local versus distributed, disappear if all one sees in computation is pure symbol pushing. How can we formally express the difference between these models of computation?
We review the conceptual development of (true) concurrency and branching time starting from Petri nets and proceeding via Mazurkiewicz traces, pomsets, bisimulation, and event structures up to higher dimensional automata (HDAs), whose acyclic case may be identified with triadic event structures and triadic Chu spaces. Acyclic HDAs may be understood as extending the two truth values of Boolean logic with a third value $\T$ expressing transition. We prove the necessity of such a third value under mild assumptions about the nature of observable events, and show that the expansion of any complete Boolean basis $L$ to $L_{\T}$ with a third literal $\that a$ expressing $a=\T$ forms an expressively complete basis for the representation of acyclic HDAs. The main contribution is a new value $\cancel$ of cancellation, which is a sibling of $\T$, serving to distinguish $a(b+c)$ from $ab+ac$ while simplifying the extensional definitions of termination $\tick\A$ and sequence $\A\B$. We show that every HDAX (acyclic HDA with $\cancel$) is representable in the expansion of $L_{\T}$ to $L_{\T\cancel}$ with a fourth literal $\can a$ expressing $a=\cancel$.
This note is about the relationship between two theories of negation as failure – one based on program completion, the other based on stable models, or answer sets. François Fages showed that if a logic program satisfies a certain syntactic condition, which is now called ‘tightness,’ then its stable models can be characterized as the models of its completion. We extend the definition of tightness and Fages' theorem to programs with nested expressions in the bodies of rules, and study tight logic programs containing the definition of the transitive closure of a predicate.
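To see the gap that tightness closes, consider the non-tight program consisting of the single rule p :- p: its completion is p ↔ p, which has models ∅ and {p}, yet only ∅ is stable. A naive stable-model test for ground normal programs (an illustrative sketch, not from the paper; rules are (head, positive body, negative body) triples) makes this concrete:

```python
def is_stable(program, candidate):
    """Naive stable-model test for a ground normal logic program."""
    # Gelfond-Lifschitz reduct with respect to the candidate set:
    # drop rules blocked by 'not', strip remaining negative bodies.
    reduct = [(h, pos) for h, pos, neg in program
              if not (neg & candidate)]
    # Least model of the (definite) reduct by fixpoint iteration.
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in reduct:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model == candidate
```

For P = [("p", {"p"}, set())], the empty set is stable while {p} is not, even though {p} satisfies the completion; the positive loop through p is exactly what violates tightness.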
A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database. Since consistent information persists from the original database to each of its minimal repairs, the approach is based on a specification of database repairs using disjunctive logic programs with exceptions, whose answer set semantics can be represented and computed by systems that implement stable model semantics. These programs allow us to declare persistence by default of data from the original instance to the repairs, and changes to restore consistency by exceptions. We concentrate mainly on logic programs for binary integrity constraints, among which are most of the integrity constraints found in practice.
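The paper's machinery is disjunctive logic programs with exceptions; purely as an illustrative alternative on toy instances (all names below are hypothetical), minimal repairs under deletion-based denial constraints are simply the maximal consistent subsets of the database, and the consistent answers are the tuples surviving in every repair:

```python
from itertools import combinations

def key_ok(rel):
    """Example integrity constraint: the first attribute is a key."""
    keys = [t[0] for t in rel]
    return len(keys) == len(set(keys))

def minimal_repairs(db, consistent):
    """Maximal consistent subsets of `db`, enumerated brute force,
    largest subsets first so smaller consistent subsets of an
    already-kept repair are skipped."""
    repairs = []
    for size in range(len(db), -1, -1):
        for subset in combinations(sorted(db), size):
            s = set(subset)
            if consistent(s) and not any(s < r for r in repairs):
                repairs.append(s)
    return repairs

def consistent_answers(db, consistent):
    """Tuples that persist into every minimal repair."""
    reps = minimal_repairs(db, consistent)
    return set.intersection(*reps) if reps else set()
```

For the key-violating instance {(1, 'a'), (1, 'b'), (2, 'c')} there are two minimal repairs, each dropping one of the conflicting tuples, and only (2, 'c') is a consistent answer.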
It is an open problem whether weak bisimilarity is decidable for Basic Process Algebra (BPA) and Basic Parallel Processes (BPP). A PSPACE lower bound for BPA and an NP lower bound for BPP were demonstrated by Stribrna. Mayr recently showed that weak bisimilarity for BPP is $\Pi_2^P$-hard. We improve this lower bound to PSPACE and, moreover, prove this result for the restricted class of normed BPP. It is also not known whether weak regularity (finiteness) of BPA and BPP is decidable. In the case of BPP there is a $\Pi_2^P$-hardness result by Mayr, which we improve to PSPACE. No lower bound has previously been established for BPA. We demonstrate DP-hardness, which, in particular, implies both NP-hardness and co-NP-hardness. In each of the bisimulation/regularity problems we also consider the classes of normed processes. Finally, we show how the technique for proving the co-NP lower bound for weak bisimilarity of BPA can be applied to strong bisimilarity of BPP.
In ACP-style process algebra the interpretation of a constant atomic action combines action execution with termination. In a setting with timing, different forms of termination can be distinguished: some-time termination, termination before the next clock tick, urgent termination, having terminated. In a setting with the silent action $\tau$, we also have silent termination. This leads to problems with the interpretation of atomic actions in timed theories that involve some form of the empty process or some form of the silent action. Reflection on these problems led to a redesign of basic process algebra, where action execution and termination are separated. Instead of actions as constants, we have action prefix operators. Sequential composition remains a basic operator, and thus we have two basic constants for termination: $\delta$ for unsuccessful termination (deadlock) and $\epsilon$ for successful termination (skip). We can recover standard ACP-style process algebras as subtheories of the new theory. The new approach has definite advantages over the standard approach. The paper contributes to ongoing work on relationships between algebras with different timing features.
Schlipf (1995) proved that Stable Logic Programming (SLP) solves all $\mathit{NP}$ decision problems. We extend Schlipf's result to prove that SLP solves all search problems in the class $\mathit{NP}$. Moreover, we do this in a uniform way as defined in Marek and Truszczyński (1991). Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program $P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$ with non-negative integer coefficients and any input $\sigma$ of size $n$ over a fixed alphabet $\Sigma$, there is an extensional database $\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ and the accepting computations of the machine $M$ that reach the final state in at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in polynomial time from $p$, $\sigma$ and the description of $M$, and the decoding of such accepting computations from the corresponding stable model of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear time. A similar statement holds for Default Logic with respect to $\Sigma_2^\mathrm{P}$-search problems. The proof of this result involves additional technical complications and will be the subject of another publication.
We provide a semantic framework for preference handling in answer set programming. To this end, we introduce preference preserving consequence operators. The resulting fixpoint characterizations provide us with a uniform semantic framework for characterizing preference handling in existing approaches. Although our approach is extensible to other semantics by means of an alternating fixpoint theory, we focus here on the elaboration of preferences under answer set semantics. Alternatively, we show how these approaches can be characterized by the concept of order preservation. These uniform semantic characterizations provide us with new insights about inter-relationships and moreover about ways of implementation.
Connections between the sequentiality/concurrency distinction and the semantics of proofs are investigated, with particular reference to games and Linear Logic.
A multi-agent system architecture for coordination of just-in-time production and distribution is presented. The problem to solve is twofold: first, the right amount of resources should be produced at the right time; then, these resources should be distributed to the right consumers. In order to solve the first problem, which is hard when the production and/or distribution time is relatively long, each consumer is equipped with an agent that makes predictions of future needs that it sends to a production agent. The second part of the problem is approached by forming clusters of consumers within which it is possible to redistribute resources fast and at a low cost in order to cope with discrepancies between predicted and actual consumption. Redistribution agents are introduced (one for each cluster) to manage the redistribution of resources. The suggested architecture is evaluated in a case study concerning management of district heating systems. Results from a simulation study show that the suggested approach makes it possible to control the trade-off between quality of service and degree of surplus production. We also compare the suggested approach to a reference control scheme (approximately corresponding to the current approach to district heating management), and conclude that it is possible to reduce the amount of resources produced while maintaining the quality of service. Finally, we describe a simulation experiment where the relation between the size of the clusters and the quality of service was studied.
Designing realistic multi-agent systems is a complex process, which involves specifying not only the functionality of individual agents, but also the authority relationships and lines of communication existing among them. In other words, designing a multi-agent system refers to designing an agent organisation. Existing methodologies follow a wide variety of approaches to designing agent organisations, but they do not provide adequate support for the decisions involved in moving from analysis to design. Instead, they require designers to make ad hoc design decisions while working at a low level of abstraction.
We have developed RAMASD (Role Algebraic Multi-Agent System Design), a method for semi-automatic design of agent organisations based on the concept of role models as first-class design constructs. Role models represent agent behaviour, and the design of the agent system is done by systematically allocating roles to agents. The core of this method is a formal model of basic relations between roles, which we call role algebra. The semantics of this role-relationships model are formally defined using a two-sorted algebra.
In this paper, we review existing agent system design methodologies to highlight areas where further work is required, describe how our method can address some of the outstanding issues and demonstrate its application to a case study involving telephone repair service teams.
Event-based systems are developed and used to integrate components in loosely coupled systems. Research and product development have focused so far on efficiency issues but neglected methodological support to build such systems. In this article, the modular design and implementation of an event system is presented which supports scopes and event mappings, two new and powerful structuring methods that facilitate engineering and coordination of components in event-based systems. We give a formal specification of scopes and event mappings within a trace-based formalism adapted from temporal logic. This is complemented by a comprehensive introduction to the event-based style, its benefits and requirements.