We present a 6-state, non-minimal-time solution which is intrinsically Minsky-like and solves the following three problems: the unrestricted version on a line, the version with one initiator at each end of a line, and the problem on a ring. We also give a complete proof of correctness of our solution, something that was never published for Minsky's solutions.
We describe a method that permits the user of a mechanized mathematical logic to write elegant logical definitions while allowing sound and efficient execution. In particular, the features supporting this method allow the user to install, in a logically sound way, alternative executable counterparts for logically defined functions. These alternatives are often much more efficient than the logically equivalent terms they replace. These features have been implemented in the ACL2 theorem prover, and we discuss several applications of the features in ACL2.
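The discipline the paper describes, a clear logical definition paired with a provably equivalent but faster executable counterpart, can be illustrated outside ACL2. The Python sketch below is only an analogy under invented names (fib_logic, fib_exec, install_counterpart): where ACL2 requires a proof that the two definitions agree, this sketch merely spot-checks them.

    def fib_logic(n):
        """Clear but exponential-time 'logical' definition."""
        return n if n < 2 else fib_logic(n - 1) + fib_logic(n - 2)

    def fib_exec(n):
        """More efficient executable counterpart intended to compute the same function."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def install_counterpart(logic_fn, exec_fn, checks=range(20)):
        """Stand-in for the logically sound installation step: ACL2 proves the
        equivalence; here we only test agreement on a few inputs."""
        assert all(logic_fn(i) == exec_fn(i) for i in checks)
        return exec_fn

    fib = install_counterpart(fib_logic, fib_exec)
    print(fib(60))   # fast; the naive logical definition would be far too slow here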
In this article, we introduce Applicative functors – an abstract characterisation of an applicative style of effectful programming, weaker than Monads and hence more widespread. Indeed, it is the ubiquity of this programming pattern that drew us to the abstraction. We retrace our steps in this article, introducing the applicative pattern by diverse examples, then abstracting it to define the Applicative type class and introducing a bracket notation that interprets the normal application syntax in the idiom of an Applicative functor. Furthermore, we develop the properties of applicative functors and the generic operations they support. We close by identifying the categorical structure of applicative functors and examining their relationship both with Monads and with Arrows.
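As an informal, language-shifted illustration (Python rather than Haskell, and every name here is invented), the applicative pattern for a Maybe-like functor amounts to an idiom-bracket-style helper: apply an ordinary function to wrapped arguments, failing if any argument is missing. The sketch collapses pure and the application operator into a single function, so it conveys the idea rather than the type class itself.

    class _Nothing:
        def __repr__(self):
            return "Nothing"

    NOTHING = _Nothing()   # plays the role of Haskell's Nothing

    def idiom(f, *args):
        """Bracket-style application, in the spirit of [| f a1 ... an |] for Maybe:
        the whole application fails if any argument failed."""
        if any(a is NOTHING for a in args):
            return NOTHING
        return f(*args)

    def safe_head(xs):
        return xs[0] if xs else NOTHING

    print(idiom(lambda x, y: x + y, safe_head([1, 2]), safe_head([10])))   # 11
    print(idiom(lambda x, y: x + y, safe_head([]), safe_head([10])))       # Nothing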
Given m positive integers R = (r_i), n positive integers C = (c_j) such that Σ r_i = Σ c_j = N, and mn non-negative weights W = (w_ij), we consider the total weight T = T(R, C; W) of non-negative integer matrices D = (d_ij) with row sums r_i, column sums c_j, and the weight of D equal to ∏_ij w_ij^d_ij. For different choices of R, C, and W, the quantity T(R, C; W) specializes to the permanent of a matrix, the number of contingency tables with prescribed margins, and the number of integer feasible flows in a network. We present a randomized algorithm whose complexity is polynomial in N and which computes a number T′ = T′(R, C; W) such that T′ ≤ T ≤ α(R, C)T′, where the factor α(R, C) depends only on the margins R and C. In many cases, ln T′ provides an asymptotically accurate estimate of ln T. The idea of the algorithm is to express T as the expectation of the permanent of an N × N random matrix with exponentially distributed entries and to approximate that expectation by the integral T′ of an efficiently computable log-concave function on ℝ^mn.
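The first specialisation mentioned above is easy to verify directly on tiny instances: when every row and column sum equals 1, the matrices D are exactly the permutation matrices and T(R, C; W) is the permanent of W. The Python sketch below checks this by brute force; it only illustrates the quantity T, not the paper's randomized algorithm.

    from itertools import permutations
    from math import prod

    def compositions(total, bounds):
        """All ways to split `total` into len(bounds) non-negative parts,
        part j not exceeding bounds[j] (the remaining column sums)."""
        if not bounds:
            if total == 0:
                yield ()
            return
        for first in range(min(total, bounds[0]) + 1):
            for rest in compositions(total - first, bounds[1:]):
                yield (first,) + rest

    def total_weight(R, C, W):
        """T(R, C; W): sum over matrices D with margins (R, C) of prod_ij w_ij^d_ij."""
        def go(i, cols_remaining):
            if i == len(R):
                return 1 if all(c == 0 for c in cols_remaining) else 0
            total = 0
            for row in compositions(R[i], cols_remaining):
                w = prod(W[i][j] ** d for j, d in enumerate(row))
                rest = tuple(c - d for c, d in zip(cols_remaining, row))
                total += w * go(i + 1, rest)
            return total
        return go(0, tuple(C))

    def permanent(W):
        n = len(W)
        return sum(prod(W[i][s[i]] for i in range(n)) for s in permutations(range(n)))

    W = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    assert total_weight([1, 1, 1], [1, 1, 1], W) == permanent(W)
    print(total_weight([2, 1], [1, 2], [[1, 2], [3, 4]]))   # a small weighted contingency-table count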
Let Γ = (V, E) be a point-symmetric reflexive relation and let υ ∈ V be such that |Γ(υ)| is finite (and hence |Γ(x)| is finite for all x, by the transitive action of the group of automorphisms). Let j ∈ ℕ be an integer such that Γ^j(υ) ∩ Γ^−(υ) = {υ}. Our main result states that |Γ^j(υ)| ≥ |Γ^{j−1}(υ)| + |Γ(υ)| − 1.
As an application we have |Γ^j(υ)| ≥ 1 + (|Γ(υ)| − 1)j. The last result confirms a recent conjecture of Seymour in the case of vertex-symmetric graphs. It also gives a short proof of the validity of the Caccetta–Häggkvist conjecture for vertex-symmetric graphs and generalizes an additive result of Shepherdson.
The refinement calculus for logic programs is a framework for deriving logic programs from specifications. It is based on a wide-spectrum language that can express both specifications and code, and a refinement relation that models the notion of correct implementation. In this paper we extend and generalise earlier work on contextual refinement. Contextual refinement simplifies the refinement process by abstractly capturing the context of a subcomponent of a program, which typically includes information about the values of the free variables. This paper also extends and generalises module refinement. A module is a collection of procedures that operate on a common data type; module refinement between a specification module A and an implementation module C allows calls to the procedures of A to be systematically replaced with calls to the corresponding procedures of C. Based on the conditions for module refinement, we present a method for calculating an implementation module from a specification module. Both contextual and module refinement within the refinement calculus have been generalised from earlier work and the results are presented in a unified framework.
Attempting to automatically learn to identify verb complements from natural language corpora without the help of sophisticated linguistic resources like grammars, parsers or treebanks leads to a significant amount of noise in the data. In machine learning terms, where learning from examples is performed using class-labelled feature-value vectors, noise leads to an imbalanced set of vectors: assuming that the class label takes two values (in this work complement/non-complement), one class (complements) is heavily underrepresented in the data in comparison to the other. To overcome the drop in accuracy when predicting instances of the rare class due to this disproportion, we balance the learning data by applying one-sided sampling to the training corpus, thus reducing the number of non-complement instances. This approach has been used in the past in several domains (image processing, medicine, etc.) but not in natural language processing. For identifying the examples that are safe to remove, we use the value difference metric, which proves to be more suitable for nominal attributes like the ones this work deals with than the Euclidean distance, which has traditionally been used in one-sided sampling. We experiment with different learning algorithms which have been widely used and whose performance is well known to the machine learning community: Bayesian learners, instance-based learners and decision trees. Additionally we present and test a variation of Bayesian belief networks, the COr-BBN (Class-oriented Bayesian belief network). Performance improves by up to 22% after balancing the dataset, reaching 73.7% f-measure for the complement class, using only a phrase chunker and basic morphological information for preprocessing.
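A sketch of the distance computation used when selecting removable instances may help. The value difference metric, in its simplest symmetric form, compares two values of a nominal attribute by how differently they distribute over the classes; the Python below is a generic toy implementation, not the paper's exact configuration or feature set.

    from collections import Counter, defaultdict

    def vdm(examples, attr, v1, v2, q=1):
        """Value Difference Metric between two values of one nominal attribute.
        examples: list of (feature_dict, class_label) pairs."""
        by_value = defaultdict(Counter)          # attribute value -> class counts
        for features, label in examples:
            by_value[features[attr]][label] += 1
        classes = {label for _, label in examples}
        n1, n2 = sum(by_value[v1].values()), sum(by_value[v2].values())
        dist = 0.0
        for c in classes:
            p1 = by_value[v1][c] / n1 if n1 else 0.0
            p2 = by_value[v2][c] / n2 if n2 else 0.0
            dist += abs(p1 - p2) ** q
        return dist

    data = [({'pos': 'NN'}, 'complement'), ({'pos': 'NN'}, 'non-complement'),
            ({'pos': 'IN'}, 'non-complement'), ({'pos': 'IN'}, 'non-complement')]
    print(vdm(data, 'pos', 'NN', 'IN'))   # 1.0: NN is a complement half the time, IN never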
We define the class of discrete classical categorial grammars, similar in spirit to the notion of reversible class of languages introduced by Angluin and Sakakibara. We show that the class of discrete classical categorial grammars is identifiable from positive structured examples. For this, we provide an original algorithm, which runs in quadratic time in the size of the examples. This work extends the previous results of Kanazawa. Indeed, in our work, several types can be associated with a word and the class is still identifiable in polynomial time. We illustrate the relevance of the class of discrete classical categorial grammars with linguistic examples.
A good practice for ensuring high positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC to the positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation framework in MATLAB to find optimal hexapod configurations with JEMMC. Based on a real hexapod model, simulation results of the proposed approach are presented. Optimal hexapod configurations were found using the local minimum of the infinity norm of the hexapod Jacobian inverse. Using JEMMC in hexapod robots can improve end-effector positioning accuracy by a factor of two or more.
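To make the selection criterion concrete, here is a small numpy sketch that scans candidate configurations and keeps the one with the smallest infinity norm of the Jacobian inverse. The Jacobian function is a stand-in placeholder, not a real hexapod kinematic model, and the random scan stands in for the paper's optimisation over admissible configurations.

    import numpy as np

    def jacobian(q):
        """Placeholder 6x6 Jacobian for joint configuration q (toy model only)."""
        return np.diag(1.0 + 0.5 * np.sin(q)) + 0.05 * np.outer(np.cos(q), np.sin(q))

    def inv_jacobian_inf_norm(q):
        """Infinity norm of the Jacobian inverse, the quantity minimised here."""
        return np.linalg.norm(np.linalg.inv(jacobian(q)), ord=np.inf)

    rng = np.random.default_rng(0)
    candidates = rng.uniform(-np.pi, np.pi, size=(1000, 6))   # crude random scan
    best = min(candidates, key=inv_jacobian_inf_norm)
    print(best, inv_jacobian_inf_norm(best))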
Zeilberger's enumeration schemes can be used to completely automate the enumeration of many permutation classes. We extend his enumeration schemes so that they apply to many more permutation classes and describe the Maple package WilfPlus, which implements this process. We also compare enumeration schemes to three other systematic enumeration techniques: generating trees, substitution decompositions, and the insertion encoding.
This is the first publication presenting the mini-humanoid robot THBIP-2, the second-generation biped of Tsinghua University. It is 70 cm in height and 18 kg in weight, with 24 degrees of freedom. This paper mainly addresses its mechatronic system realization, including the conceptual design, actuation system, sensing system, and control system. In addition, a walking stability controller based on the zero moment point criterion and the walking simulation are presented. Finally, experiments validate the efficiency of the design.
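The zero moment point criterion behind such stability controllers can be summarised, under the usual linear inverted pendulum assumptions (constant COM height, flat ground), as p = x − (z_c / g)·ẍ, with the walk regarded as stable while p stays inside the support polygon. The snippet below is a generic illustration of that check with made-up numbers, not the controller from the paper.

    G = 9.81      # gravity, m/s^2
    Z_C = 0.35    # assumed constant COM height of the biped, m

    def zmp(x_com, x_com_acc):
        """Sagittal ZMP under the linear inverted pendulum model."""
        return x_com - (Z_C / G) * x_com_acc

    def stable(x_com, x_com_acc, support=(-0.05, 0.12)):
        """ZMP criterion: the ZMP must lie inside the support polygon
        (here simplified to a 1-D foot interval in metres)."""
        lo, hi = support
        return lo <= zmp(x_com, x_com_acc) <= hi

    print(stable(0.02, 0.8))   # True for these illustrative values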
This paper describes CLIME, a web-based legal advisory system with a multilingual natural language interface. CLIME is a ‘proof-of-concept’ system which answers queries relating to ship-building and ship-operating regulations. Its core knowledge source is a set of such regulations encoded as a conceptual domain model and a set of formalised legal inference rules. The system supports retrieval of regulations via the conceptual model, and assessment of the legality of a situation or activity on a ship according to the legal inference rules. The focus of this paper is on the natural language aspects of the system, which help the user to construct semantically complex queries using WYSIWYM technology, allow the system to produce extended and cohesive responses and explanations, and support the whole interaction through a hybrid synchronous/asynchronous dialogue structure. Multilinguality (English and French) is viewed simply as interface localisation: the core representations are language-neutral, and the system can present extended or local interactions in either language at any time. The development of CLIME featured a high degree of client involvement, and the specification, implementation and evaluation of natural language components in this context are also discussed.
A genetic algorithm is used to determine the optimal capture points for the multiple agents required to grasp a moving generic prismatic object by arresting it in form closure. Thereafter, the agents approach their respective moving goals using a decentralized projective path planning algorithm. Post arrest, the object is guided along a desired linear path to a desired goal point. Form closure of the object is obtained using the concept of accessibility angle. A convex envelope is formed around the object, and the goal points on the object boundary are mapped onto the envelope. The robots approach the mapped goal points first and then converge on the actual object. This ensures that the agents reach the actual goal points almost simultaneously and do not undergo looping at a local concave region. The object is assumed to be alive while being captured but compromised thereafter. Post arrest, the robots alter their positions optimally around the object to transport it along a desired direction. Frictionless point contact between the object and a robot is assumed. The shape of the mobile robot is considered cylindrical such that it can only apply force along the outward radial direction. Simulation results are presented that illustrate the effectiveness of the proposed method.
This paper presents an operational semantics for the core of Scheme. Our specification improves over the denotational semantics from the Revised⁵ Report on Scheme in four ways. First, it covers a larger part of the language, specifically eval, quote, dynamic-wind, and the top level. Second, it models multiple values in a way that does not require changes to unrelated parts of the language. Third, it provides a faithful model of Scheme's undefined order of evaluation. Finally, we have implemented our specification in PLT Redex, a domain-specific language for writing operational semantics. The implementation allows others to experiment with our specification and allows us to build a specification test suite, which improves our confidence that our system is a faithful model of Scheme. In addition to a specification of Scheme, this paper contributes three novel modeling techniques for Felleisen–Hieb-style rewriting semantics. All three techniques are applicable to a wider range of problems than modeling Scheme, and they combine seamlessly in our model, suggesting that they would scale to complete models of other languages.
We consider logics on $\mathbb{Z}$ and $\mathbb{N}$ which are weaker than Presburger arithmetic, and we settle the following decision problem: given a k-ary relation on $\mathbb{Z}$ or $\mathbb{N}$ that is first-order definable in Presburger arithmetic, is it definable in these weaker logics? These logics, intuitively, are obtained by considering modulo and threshold counting predicates for differences of two variables.
In this paper we introduce a class of constraint logic programs whose termination can be proved using affine level mappings. We show that membership in this class is decidable in polynomial time.
This paper presents a Prolog interface to the MiniSat satisfiability solver. Logic programming with satisfiability combines the strengths of the two paradigms: logic programming for encoding search problems into satisfiability on the one hand and efficient SAT solving on the other. This synergy exposes a programming paradigm that we propose here as a logic programming pearl. To illustrate logic programming with SAT solving, we give an example Prolog program that solves instances of Partial MAXSAT.
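For readers unfamiliar with the example problem, Partial MAXSAT asks for an assignment that satisfies all hard clauses while satisfying as many soft clauses as possible. The Python below is only a brute-force statement of that problem for tiny instances, to clarify what the Prolog-with-SAT pearl computes; it says nothing about the MiniSat interface itself.

    from itertools import product

    def satisfied(clause, assignment):
        """A clause is a list of non-zero ints: v means variable v true, -v means false."""
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    def partial_maxsat(n_vars, hard, soft):
        """Best (score, assignment) satisfying all hard clauses; brute force over 2^n assignments."""
        best = None
        for bits in product([False, True], repeat=n_vars):
            assignment = dict(enumerate(bits, start=1))
            if all(satisfied(c, assignment) for c in hard):
                score = sum(satisfied(c, assignment) for c in soft)
                if best is None or score > best[0]:
                    best = (score, assignment)
        return best

    hard = [[1, 2], [-1, 3]]        # must hold
    soft = [[-2], [-3], [2, 3]]     # satisfy as many as possible
    print(partial_maxsat(3, hard, soft))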
We consider random graphs with a fixed degree sequence. Molloy and Reed [11, 12] studied how the size of the giant component changes according to degree conditions. They showed that there is a phase transition and investigated the order of components before and after the critical phase. In this paper we study more closely the order of components at the critical phase, using singularity analysis of a generating function for a branching process which models the random graph with a given degree sequence.
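A quick way to experiment with the regime the paper studies is to sample a multigraph with a prescribed degree sequence via the configuration model (random pairing of half-edges) and record the largest component size. The sketch below uses only the standard library and ignores self-loops and multi-edges, so it is an empirical toy rather than the paper's analytic tool.

    import random

    def configuration_model(degrees):
        """Randomly pair half-edges; returns an edge list (may contain loops/multi-edges)."""
        stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
        random.shuffle(stubs)
        return list(zip(stubs[::2], stubs[1::2]))

    def largest_component(n, edges):
        """Largest connected component size via union-find with path halving."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            parent[find(u)] = find(v)
        sizes = {}
        for v in range(n):
            r = find(v)
            sizes[r] = sizes.get(r, 0) + 1
        return max(sizes.values())

    degrees = [2] * 500 + [1] * 500   # degree sum must be even
    print(largest_component(len(degrees), configuration_model(degrees)))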
Recently there has been growing interest in tabling in the logic programming community because of its usefulness in a variety of application domains, including program analysis, parsing, deductive databases, theorem proving, model checking, and logic-based probabilistic learning. The main idea of tabling is to memorize the answers to some subgoals and use those answers to resolve subsequent variant subgoals. Early resolution mechanisms proposed for tabling, such as OLDT and SLG, rely on suspension and resumption of subgoals to compute fixpoints. Recently, the iterative approach named linear tabling has received considerable attention because of its simplicity, ease of implementation, and good space efficiency. Linear tabling is a framework from which different methods can be derived on the basis of the strategies used in handling looping subgoals. One decision concerns when answers are consumed and returned. This article describes two strategies, namely the lazy and eager strategies, and compares them both qualitatively and quantitatively. The results indicate that, while the lazy strategy has good locality and is well suited for finding all solutions, the eager strategy is comparable in speed with the lazy strategy and is well suited for programs with cuts. Linear tabling relies on depth-first iterative deepening rather than suspension to compute fixpoints. Each cluster of interdependent subgoals, as represented by a topmost looping subgoal, is iteratively evaluated until no subgoal in it can produce any new answers. Naive re-evaluation of all looping subgoals, albeit simple, may be computationally unacceptable. In this article, we also introduce semi-naive optimization, an effective technique employed in bottom-up evaluation of logic programs to avoid redundant joins of answers, into linear tabling. We give the conditions for the technique to be safe (i.e., sound and complete) and propose an optimization technique called early answer promotion to enhance its effectiveness. Benchmarking in B-Prolog demonstrates that with this optimization linear tabling compares favorably in speed with the state-of-the-art implementation of SLG.
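The semi-naive idea imported into linear tabling is easiest to see in its original bottom-up setting: at each iteration only joins involving answers derived in the previous iteration are performed, so no join of two already-known answers is repeated. Here is a generic Python sketch for transitive closure (plain bottom-up evaluation, not the B-Prolog tabling machinery).

    def transitive_closure(edges):
        """Semi-naive bottom-up evaluation of path(X,Z) :- edge(X,Y), path(Y,Z)."""
        path = set(edges)       # all answers found so far
        delta = set(edges)      # answers new in the last iteration
        while delta:
            # only join edge facts with the *new* answers, never old with old
            new = {(x, z) for (x, y) in edges for (y2, z) in delta if y == y2}
            delta = new - path  # keep only genuinely new answers
            path |= delta
        return path

    print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))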
Consider the set of unrooted labelled trees of size n, and let ℳ be a particular (finite, unlabelled) tree. Assuming that every such tree is equally likely, it is shown that the limiting distribution, as n goes to infinity, of the number of occurrences of ℳ is asymptotically normal with mean value and variance asymptotically equivalent to μn and σ²n, respectively, where the constants μ > 0 and σ ≥ 0 are computable.
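For an empirical feel of the result, one can sample labelled trees uniformly via random Prüfer sequences and count occurrences of a very small pattern ℳ; taking ℳ to be the path on three vertices, the count is Σ_v C(deg(v), 2), and a vertex's degree is one more than its multiplicity in the Prüfer sequence. The Python below is a simulation sketch under those illustrative choices, not the paper's analytic argument.

    import random
    from math import comb
    from collections import Counter
    from statistics import mean, stdev

    def path2_count_of_random_tree(n):
        """Occurrences of the 2-edge path in a uniform labelled tree on n vertices,
        computed directly from a random Prüfer sequence (degree = multiplicity + 1)."""
        seq = [random.randrange(n) for _ in range(n - 2)]
        occ = Counter(seq)
        return sum(comb(occ[v] + 1, 2) for v in range(n))

    samples = [path2_count_of_random_tree(2000) for _ in range(500)]
    # the sample mean grows like mu*n and the sample variance like sigma^2*n
    print(mean(samples), stdev(samples))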