In this paper a number of alternative strategies for distributed/parallel association rule mining are investigated. The methods examined make use of the T-tree, a data structure introduced previously by the authors for organizing the sets of attributes whose support is being counted. We consider six approaches, representing different ways of parallelizing the basic Apriori-T algorithm that we use. The methods focus on different mechanisms for partitioning the data between processes and for reducing the message-passing overhead. Both ‘horizontal’ (data distribution) and ‘vertical’ (candidate distribution) partitioning strategies are considered, including a vertical partitioning algorithm (DATA-VP) which we have developed to exploit the structure of the T-tree. We present experimental results examining the performance of the methods in implementations using JavaSpaces. We conclude that in a JavaSpaces environment, candidate distribution strategies offer better performance than those that distribute the original dataset, because of their lower messaging overhead; the DATA-VP algorithm in particular produced especially encouraging results.
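For readers unfamiliar with candidate distribution, the idea can be sketched in a few lines of Python (an illustrative sketch only; the paper's implementation is built on the T-tree and JavaSpaces, and the function names below are invented):

    from itertools import combinations

    def apriori_candidates(frequent_prev, k):
        # Standard Apriori step: a k-itemset is a candidate only if all of
        # its (k-1)-subsets were found frequent in the previous pass.
        items = sorted({i for s in frequent_prev for i in s})
        return [frozenset(c) for c in combinations(items, k)
                if all(frozenset(sub) in frequent_prev
                       for sub in combinations(c, k - 1))]

    def partition_candidates(candidates, n_workers):
        # 'Vertical' (candidate) distribution: each worker receives a slice
        # of the candidate set, so only candidates and counts cross process
        # boundaries, not the dataset itself.
        return [candidates[i::n_workers] for i in range(n_workers)]

    def count_support(candidates, transactions):
        # Each worker scans its local copy of the data for its own slice.
        return {c: sum(1 for t in transactions if c <= t) for c in candidates}

Horizontal (data) distribution is the mirror image: the transactions are partitioned instead, and the workers exchange local counts.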
This paper addresses the path planning problem for manipulators. The path planning problem in robotics can be defined as follows: find a collision-free trajectory from an initial configuration to a goal configuration. In this paper a collision-free path planner for manipulators, based on a local constraints method, is proposed. In this approach the task is described as a minimization problem under geometric constraints. The anti-collision constraints are mapped to linear constraints in the configuration space and are not included in the function to be minimized. The task to achieve is defined as a combination of two displacements: the first brings the robot towards the goal configuration, while the second allows the robot to avoid local minima. This formulation solves many of the classical problems found in local methods. However, when the robot acts in heavily cluttered environments, a zigzagging phenomenon can appear. To resolve this situation, a graph based on the local environment of the robot is constructed, and an A* search is performed on it to find a deadlock-free position that can be used as a sub-goal in the optimization process. This path planner has been implemented within SMAR, a CAD-robotics system developed at our laboratory. Tests in heavily cluttered environments were performed successfully.
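The deadlock-escape step amounts to a standard A* search over the locally constructed graph. A generic Python sketch follows; the neighbours and heuristic interfaces are assumptions made here for illustration, whereas the paper builds its graph from the robot's local environment:

    import heapq, itertools

    def a_star(start, goal, neighbours, h):
        # neighbours(node) -> iterable of (next_node, edge_cost);
        # h(node) is an admissible heuristic estimate of the cost to goal.
        counter = itertools.count()   # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(counter), 0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            _, _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt, cost in neighbours(node):
                g2 = g + cost
                if g2 < best.get(nxt, float("inf")):
                    best[nxt] = g2
                    heapq.heappush(frontier,
                                   (g2 + h(nxt), next(counter), g2, nxt, path + [nxt]))
        return None   # no deadlock-free position reachable

In the paper's setting, the position reached would then serve as the sub-goal handed back to the constrained minimization.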
In contrast to standard approaches based on agent communication languages (ACLs), environment-based coordination is emerging as an interesting alternative for structuring interactions in multiagent systems (MASs). In particular, the notion of coordination artifacts has been proposed as an engineering methodology for building runtime abstractions that provide collaborating agents with specifically designed coordination tasks.
In this paper, we study a semantics for the interaction of agents with coordination artifacts that plays the same role as ACL semantics, that is, supporting semantic interoperability between agents developed by different parties through the connection between rationality and interaction. Our approach is rooted in the notion of the operating instructions of a coordination artifact, which, like the manual a human consults to exploit a device, describe the interaction protocols the agent can follow as well as the mentalistic semantics of each single interaction. By tackling some of the most relevant issues raised in the context of ACL semantics, our framework allows intelligent, BDI-like agents to carry out complex interactions through coordination artifacts in a rational way.
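To make the notion concrete, operating instructions can be thought of as a protocol automaton that tells the agent which operations on the artifact are admissible next. A toy Python sketch, with all names invented for illustration (the paper additionally attaches a mentalistic semantics to each interaction):

    class OperatingInstructions:
        # A protocol graph: in each state, only some operations on the
        # artifact are admissible; invoking one advances the protocol.
        def __init__(self, transitions, start):
            self.transitions = transitions   # {state: {operation: next_state}}
            self.state = start

        def admissible(self):
            return list(self.transitions.get(self.state, {}))

        def invoke(self, operation):
            if operation not in self.admissible():
                raise ValueError(f"{operation!r} violates the operating instructions")
            self.state = self.transitions[self.state][operation]

    # e.g. an 'out then rd' protocol on a tuple-space-like artifact
    oi = OperatingInstructions({"idle": {"out": "posted"}, "posted": {"rd": "idle"}}, "idle")
    oi.invoke("out")
    print(oi.admissible())   # ['rd']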
Light affine logic is a variant of linear logic with a polynomial cut-elimination procedure. We study the extensional expressive power of light affine logic with respect to a general notion of encoding of functions in the setting of the Curry–Howard correspondence. We consider light affine logic with both fixpoints of formulae and second-order quantifiers, and analyse the properties of polytime soundness and polytime completeness for various fragments of this system. In particular, we show that the implicative propositional fragment is not polytime complete if we place some reasonable conditions on the encodings. Following previous work, we show that second-order quantification leads to polytime unsoundness. We then introduce simple constraints on second-order quantification and fixpoints, and prove that the fragments obtained are polytime sound and complete.
Knowledge Discovery and Data Mining is a very dynamic research and development area that is reaching maturity. As such, it requires stable and well-defined foundations that are well understood and popularized throughout the community. This survey presents a historical overview, a description, and future directions concerning a standard for a Knowledge Discovery and Data Mining process model. It motivates the use of such process models, gives a comprehensive comparison of several leading ones, and discusses their applications to both academic and industrial problems. The main goal of this review is the consolidation of the research in this area. The survey also proposes enhancing existing models by embedding other current standards, so as to enable automation and interoperability of the entire process.
We address the problem of improving the efficiency of natural language text input under degraded conditions (for instance, on mobile computing devices or by disabled users) by taking advantage of the informational redundancy in natural language. Previous approaches to this problem have been based on the idea of prediction of the text, but these require the user to take overt action to verify or select the system's predictions. We propose taking advantage of the duality between prediction and compression. We allow the user to enter text in compressed form, in particular, using a simple stipulated abbreviation method that reduces the number of characters by 26.4%, yet is simple enough that it can be learned easily and generated relatively fluently. We decode the abbreviated text using a statistical generative model of abbreviation, with a residual word error rate of 3.3%. The chief component of this model is an n-gram language model. Because the system's operation is completely independent of the user's, the overhead of cognitive task switching and of attending to the system's actions online is eliminated, opening up the possibility that the compression-based method can achieve text input efficiency improvements where the prediction-based methods have not. We report the results of a user study evaluating this method.
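The decoding step is essentially noisy-channel decoding driven by an n-gram language model. A minimal Viterbi-style sketch in Python, assuming hypothetical interfaces expansions(a) (candidate words for an abbreviated token) and bigram_logp(prev, word) (not from the paper):

    import math

    def decode(abbrev_tokens, expansions, bigram_logp):
        beams = {"<s>": 0.0}          # best log-probability ending in each word
        back = []                     # per-position backpointers
        for a in abbrev_tokens:
            nxt, ptr = {}, {}
            for w in expansions(a):
                for prev, lp in beams.items():
                    score = lp + bigram_logp(prev, w)
                    if score > nxt.get(w, -math.inf):
                        nxt[w], ptr[w] = score, prev
            beams = nxt
            back.append(ptr)
        # trace the best path backwards through the pointers
        w = max(beams, key=beams.get)
        path = [w]
        for ptr in reversed(back[1:]):
            w = ptr[w]
            path.append(w)
        return path[::-1]

A real system would use a higher-order n-gram and beam pruning; this sketch keeps every hypothesis alive.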
A predictive display method and augmented-reality-based interaction between the operator and a virtual robot are applied to control a telerobot. We first discuss the development of the augmented reality environment. Then, we present the advantages of predictive display. Simulating the virtual robot's tasks in the augmented environment improves the safety of the telerobot when it executes the planned tasks. In addition, the immediate feedback from the virtual robot avoids the degradation of maneuverability caused by time delay. To make the operation process more natural, we apply multiple methods of interaction between the operator and the virtual robot. Finally, a pick-and-place experiment is conducted to validate the system.
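The essence of predictive display is that operator commands drive a local virtual model immediately, while the real robot's responses arrive only after the communication delay. A toy Python sketch (local_model and its methods are assumed interfaces, not taken from the paper):

    import collections

    def predictive_loop(commands, local_model, delay):
        # Commands update the virtual robot at once; the remote robot echoes
        # them `delay` steps later. The operator watches the virtual state.
        pipeline = collections.deque([None] * delay)
        virtual = remote = local_model.initial()
        for cmd in commands:
            virtual = local_model.step(virtual, cmd)       # immediate preview
            pipeline.append(cmd)
            echoed = pipeline.popleft()
            if echoed is not None:
                remote = local_model.step(remote, echoed)  # delayed execution
            yield virtual, remote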
We address the problem of extracting bilingual chunk pairs from parallel text to create training sets for statistical machine translation. We formulate the problem in terms of a stochastic generative process over text translation pairs, and derive two different alignment procedures based on the underlying alignment model. The first procedure is a now-standard dynamic programming alignment model which we use to generate an initial coarse alignment of the parallel text. The second procedure is a divisive clustering parallel text alignment procedure which we use to refine the first-pass alignments. This latter procedure is novel in that it permits the segmentation of the parallel text into sub-sentence units which are allowed to be reordered to improve the chunk alignment. The quality of the chunk pairs is measured by the performance of machine translation systems trained on them. We show the practical benefits of divisive clustering, as well as how system performance can be improved by exploiting portions of the parallel text that otherwise would have to be discarded. We also show that chunk alignment as a first step in word alignment can significantly reduce the word alignment error rate.
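The coarse first pass is a monotone dynamic-programming alignment of the kind familiar from sentence alignment. A generic Python sketch, where cost(s, t) is an assumed scoring interface (the paper derives its scores from the underlying stochastic alignment model):

    def align_cost(src, tgt, cost):
        # D[i][j] = cheapest alignment of src[:i] with tgt[:j]; a gap pairs
        # a segment with None. A traceback over D recovers the aligned pairs.
        n, m = len(src), len(tgt)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = D[i - 1][0] + cost(src[i - 1], None)
        for j in range(1, m + 1):
            D[0][j] = D[0][j - 1] + cost(None, tgt[j - 1])
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i][j] = min(D[i - 1][j - 1] + cost(src[i - 1], tgt[j - 1]),
                              D[i - 1][j] + cost(src[i - 1], None),
                              D[i][j - 1] + cost(None, tgt[j - 1]))
        return D[n][m]

The divisive-clustering refinement goes beyond this monotone scheme precisely because it permits sub-sentence units to be reordered.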
We show that the model of quantum computation based on density matrices and superoperators can be decomposed into a pure classical (functional) part and an effectful part modelling probabilities and measurement. The effectful part can be modelled using a generalisation of monads called arrows. We express the resulting executable model of quantum computing in the Haskell programming language using its special syntax for arrow computations. However, the embedding in Haskell is not perfect: a faithful model of quantum computing requires type capabilities that are not directly expressible in Haskell.
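The semantic objects involved are easy to exhibit concretely. A small numpy sketch of density matrices and Kraus-style superoperators (this illustrates the mathematical model only, not the paper's Haskell arrow encoding):

    import numpy as np

    def superop(kraus, rho):
        # Apply a superoperator given by Kraus operators:
        #   rho  ->  sum_i K_i rho K_i^dagger
        return sum(K @ rho @ K.conj().T for K in kraus)

    H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
    P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # projector onto |1>

    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # the pure state |0><0|
    rho = superop([H], rho)          # a unitary step is a one-Kraus superoperator
    rho = superop([P0, P1], rho)     # measurement erases the off-diagonal terms
    print(np.round(rho.real, 3))     # diag(0.5, 0.5): an even classical mixture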
Full formal descriptions of algorithms making use of quantum principles must take into account both quantum and classical computing components, as well as communications between these components. Moreover, to model concurrent and distributed quantum computations and quantum communication protocols, communications over quantum channels that move qubits physically from one place to another must also be taken into account.
Inspired by classical process algebras, which provide a framework for modelling cooperating computations, a process algebraic notation is defined. This notation provides a homogeneous style for formal descriptions of concurrent and distributed computations comprising both quantum and classical parts. Based upon an operational semantics that ensures that quantum objects, operations and communications operate according to the postulates of quantum mechanics, an equivalence is defined that identifies process states having the same behaviour. This equivalence is a probabilistic branching bisimulation. From this relation, an equivalence on processes is defined. However, it is not a congruence, because it is not preserved by parallel composition.
In this paper we develop a functional programming language for quantum computers by extending the simply-typed lambda calculus with quantum types and operations. The design of this language adheres to the ‘quantum data, classical control’ paradigm, following the first author's work on quantum flow-charts. We define a call-by-value operational semantics, and give a type system using affine intuitionistic linear logic. The main results of this paper are the safety properties of the language and the development of a type inference algorithm.
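A one-line illustration of why affine typing is the right discipline here (a standard observation, not quoted from the paper): affine systems lack the contraction rule, so a variable of quantum type may be used at most once, and a qubit-duplicating term is rejected by the type system, in accordance with the no-cloning theorem:

    \lambda x^{\,\mathrm{qbit}}.\ \langle x, x \rangle
    \quad\text{is untypable, since deriving it would require contraction:}\quad
    \frac{\Gamma,\ x{:}A,\ x{:}A \ \vdash\ M : B}{\Gamma,\ x{:}A \ \vdash\ M : B}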
This special issue of Mathematical Structures in Computer Science grew out of the 2nd International Workshop on Quantum Programming Languages (QPL 2004), which was held July 12–13, 2004 in Turku, Finland. The purpose of the workshop was to bring together researchers working on mathematical formalisms and programming languages for quantum computing. It was the second in a series of workshops aimed at addressing a growing interest in logical tools, languages, and semantical methods for analysing quantum computation.
We develop a notion of predicate transformer and, in particular, the weakest precondition, appropriate for quantum computation. We show that there is a Stone-type duality between the usual state-transformer semantics and the weakest precondition semantics. Rather than trying to reduce quantum computation to probabilistic programming, we develop a notion that is directly taken from concepts used in quantum computation. The proof that weakest preconditions exist for completely positive maps follows immediately from the Kraus representation theorem. As an example, we give the semantics of Selinger's language in terms of our weakest preconditions. We also cover some specific situations and exhibit an interesting link with stabilisers.
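For concreteness, the content of the duality can be stated in one line (standard notation in this setting, not quoted from the paper): for a completely positive map E with Kraus operators K_i, the weakest precondition of a predicate M is obtained by conjugating in the reverse order, and the two semantics agree on every pairing of a predicate with a state:

    E(\rho) \;=\; \sum_i K_i\,\rho\,K_i^\dagger,
    \qquad
    wp(E)(M) \;=\; \sum_i K_i^\dagger\,M\,K_i,
    \qquad
    \mathrm{tr}\!\left(M\,E(\rho)\right) \;=\; \mathrm{tr}\!\left(wp(E)(M)\,\rho\right).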
We define a language CQP (Communicating Quantum Processes) for modelling systems that combine quantum and classical communication and computation. CQP combines the communication primitives of the pi-calculus with primitives for measurement and transformation of the quantum state; in particular, quantum bits (qubits) can be transmitted from process to process along communication channels. CQP has a static type system, which classifies channels, distinguishes between quantum and classical data, and controls the use of quantum states. We formally define the syntax, operational semantics and type system of CQP, prove that the semantics preserves typing, and prove that typing guarantees that each qubit is owned by a unique process within a system. We also define a typechecking algorithm and prove that it is sound and complete with respect to the type system. We illustrate CQP by defining models of several quantum communication systems, and outline our plans for using CQP as the foundation for formal analysis and verification of combined quantum and classical systems.
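As a rough illustration of the flavour of such a language (the notation here is improvised from the description above, not quoted from the paper), a typed process that receives a qubit on channel c, applies a Hadamard transformation, and sends the classical measurement outcome on channel d might be written

    Recv(c, d) = c?[x : Qbit] . {x *= H} . d![measure x] . 0

Typing would then guarantee that the received qubit is owned by this process alone until it is measured.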