We study here schedulers for a class of rules that naturally arise in the context of rule-based constraint programming. We systematically derive a scheduler for them from a generic iteration algorithm of Apt (2000). We apply this study to so-called membership rules of Apt and Monfroy (2001). This leads to an implementation that yields a considerably better performance for these rules than their execution as standard CHR rules. Finally, we show how redundant rules can be identified and how appropriately reduced sets of rules can be computed.
In this paper, we present a framework for automatic generation of CHR solvers given the logical specification of the constraints. This approach takes advantage of the power of tabled resolution for constraint logic programming, in order to check the validity of the rules. Compared to previous work (Apt and Monfroy 1999; Ringeissen and Monfroy 2000; Abdennadher and Rigotti 2000; Abdennadher and Rigotti 2001a), where different methods for automatic generation of constraint solvers have been proposed, our approach enables the generation of more expressive rules (even recursive and splitting rules) that can be used directly as CHR solvers.
We introduce the notion of nested distance desert automata as a joint generalization of distance automata and desert automata. We show that limitedness of nested distance desert automata is PSPACE-complete. As an application, we show that it is decidable in $2^{2^{O(n)}}$ space whether the language accepted by an n-state non-deterministic automaton is of star height less than a given integer h (concerning rational expressions with union, concatenation, and iteration), which is the first-ever complexity bound for the star height problem.
In classical CLP(FD) systems, the domains of variables are completely known at the beginning of the constraint propagation process. However, in systems interacting with an external environment, acquiring the whole domains of variables before constraint propagation begins may waste computation time, or the acquired data may even be obsolete by the time it is used. For such cases, the Interactive Constraint Satisfaction Problem (ICSP) model has been proposed (Cucchiara et al. 1999a) as an extension of the CSP model: it makes it possible to start constraint propagation even when domains are not fully known, acquiring domain elements only when necessary and without restarting propagation after every acquisition. In this paper, we show how a solver for the two-sorted CLP language defined in previous work (Gavanelli et al. 2005) to express ICSPs has been implemented in the Constraint Handling Rules (CHR) language, a declarative language particularly suitable for the high-level implementation of constraint solvers.
As explained in Section 1.1.4, rippling is a heuristic that reflects a common pattern of reasoning found in theorem-proving: one wants to prove a goal using a given, and rewriting is used to transform the goal to the point where the given can be used. As noted in Chapter 2, there are various complications, such as multiple goals and givens and universally quantified givens with corresponding sinks. However, the general pattern is the same and can be codified by methods used in proof-planning.
To mechanize this common pattern of reasoning, rippling generalizes rewriting, so that semantic information is used to guide proof construction. The user has expectations (encoded by the proof-plan methods) about how the proof should proceed, namely that differences between the goal and the givens should be minimized. Annotations provide a kind of abstraction that is used to minimize these differences. Under this abstraction, the identity of the different symbols is ignored and one just differentiates whether they belong to the skeleton or not. Rippling constitutes an extension of rewriting that uses these annotations to drive proofs forward in a goal-directed way. Differences are monotonically decreased and rippling terminates with success or failure depending on whether the givens can be used or not.
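To make this abstraction concrete, the following sketch works through the step case of the associativity of addition, one of the standard examples in the rippling literature; the plain-text rendering of the annotations and the orientation of the wave rule are only illustrative and do not reproduce the book's box-and-hole notation.

% Given (induction hypothesis), whose structure forms the skeleton:
%   (x + y) + z = x + (y + z)
% Goal (induction conclusion); the s(.) wrapped around x on each side is a wave-front:
%   (s(x) + y) + z = s(x) + (y + z)
% Wave rule, oriented from the recursive definition of +:
%   s(U) + V ==> s(U + V)
\begin{align*}
  (s(x) + y) + z &= s(x) + (y + z) && \text{initial annotated goal}\\
  s(x + y) + z   &= s(x + (y + z)) && \text{ripple both wave-fronts outwards}\\
  s((x + y) + z) &= s(x + (y + z)) && \text{ripple again on the left}
\end{align*}
% The skeleton (x + y) + z = x + (y + z) is now exposed beneath the outermost
% wave-fronts, so the given can be applied (fertilization) to close the step case.

At every step the skeleton is preserved and the wave-fronts move strictly outwards, which is what guarantees the monotonic decrease of differences and the termination behaviour described above.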
In this chapter we consider how the concepts described above can be formalized. There is no one best formalization, so we keep our discussion abstract, when possible, and first discuss what properties are desired.
We have come a long way in our investigation of rippling: from the observation of a common pattern in structural induction proofs, to a new paradigm in proof search. Firstly, we noted that this common pattern could be enforced, rather than merely observed, by inserting meta-level annotations into object-level formulas. These annotations – wave-holes and wave-fronts – marked those parts of formulas that were to be preserved and moved, respectively. Ensuring that rewriting respected these annotations enforced additional constraints during proof search: restricting that search to those parts of the search space that made progress towards using the induction hypothesis to prove the induction conclusion.
Secondly, experimental exploration with these annotations suggested a wealth of ways to extend and generalize the original idea beyond simple structural inductions to more complex forms of induction and to many other kinds of proof. Indeed, whenever proving a goal using one or more structurally similar “givens”, rippling could help guide the proof through a potential combinatorial explosion towards a successful conclusion with little or no search.
Thirdly, since rippling imposes such strong expectations on the structure of a proof, any failure of rippling can be analyzed to suggest how to patch an initially failed proof attempt. This productive use of failure often suggests proof patches that had previously been thought beyond the ability of automated reasoners: so-called "eureka" steps. These may include, for instance, the suggestion of a novel induction rule, a new lemma, a generalization of the original conjecture, or a case split.
Automated theorem proving has been an active research area since the 1950s when researchers began to tackle the problem of automating human-like reasoning. Different techniques were developed early on to automate the use of deduction to show that a goal follows from givens. Deduction could be used to solve problems, play games, or to construct formal, mathematical proofs. In the 1960s and 1970s, interest in automated theorem proving grew, driven by theoretical advances like the development of resolution as well as the growing interest in program verification.
Verification, and more generally, the practical use of formal methods, has raised a number of challenges for the theorem-proving community. One of the major challenges is induction. Induction is required to reason about repetition. In programs, this arises when reasoning about loops and recursion. In hardware, this arises when reasoning about parameterized circuits built from subcomponents in a uniform way, or alternatively when reasoning about the time-dependent behavior of sequential systems.
Carrying out proofs by induction is difficult. Unlike standard proofs in first-order theories, inductive proofs often require the speculation of auxiliary lemmas. This includes both generalizing the conjecture to be proven and speculating and proving additional lemmas about the recursively defined functions used in the proof. When the induction is not structural induction over data types, proof search is further complicated by the need to provide a well-founded order over which the induction is performed. As a consequence of these complications, inductive proofs are often carried out interactively rather than fully automatically.
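As a hedged illustration of why such lemma speculation is needed, consider the familiar list-reversal example; the function names rev and app and the particular lemma shown are illustrative choices rather than material taken from this text.

% Conjecture, proved by structural induction on l:   rev(rev(l)) = l
\begin{align*}
  &\text{Step-case conclusion, after unfolding } rev\text{:} &
    rev(app(rev(t), [h])) &= h :: t\\
  &\text{Induction hypothesis:} &
    rev(rev(t)) &= t\\
  &\text{Speculated auxiliary lemma:} &
    rev(app(x, [h])) &= h :: rev(x)
\end{align*}
% Rewriting with the lemma turns the conclusion into h :: rev(rev(t)) = h :: t,
% at which point the induction hypothesis applies directly; the lemma itself
% then requires its own inductive proof.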
This book has described rippling: a technique for guiding proof search so that a given may be used to prove a goal. We will investigate in this chapter other areas of automated reasoning involving heuristic restrictions on proof search. We will use problems from these areas to illustrate how the ideas behind rippling can be generalized and used systematically to understand and implement many different kinds of deductive reasoning.
In many proof calculi, the application of rules in certain situations is known to be unnecessary and can be pruned without sacrificing completeness. For example, in basic ordered paramodulation and basic superposition (Bachmair et al., 1992; Nieuwenhuis & Rubio, 1992), paramodulation into terms introduced by applying substitutions in previous inference steps is forbidden.
In tactic-based theorem-proving, it is sometimes useful to track parts of the conjecture and use this to restrict proof search. Focus mechanisms (e.g. Robinson & Staples, 1993; Staples, 1995) for this purpose have been developed and hardwired into several calculi.
In analogical reasoning, a previous proof (the source proof) is abstracted to serve as a proof template for subsequent conjectures (the target conjectures). Additional information about the source proof (beyond the proof tree) is typically required to compute an abstract proof sketch (Kolbe & Walther, 1994; 1998) for a related target conjecture.
In each of the above techniques, there is a need to encode and maintain information about individual terms and symbols and their inter-relationships.
In this appendix, we formalize a specific annotation calculus that is able to deal with all the examples that we have presented in Chapter 6. This calculus is based on a first-order proof calculus and a corresponding unification procedure.
We wish to emphasize that the approach presented is an example of a more general technique to combine annotations and logic calculi. Another example is Hutter & Kohlhase (2000), which describes how to incorporate annotations into a calculus based on higher-order logic. Both approaches share the same principal mechanisms to incorporate annotations into calculi.
An annotation calculus
The integration of annotations into a calculus is determined by the definition of annotated substitution, which we have only sketched in Section 6.1. As seen in Section 6.2.3, annotated substitutions instantiate both meta-variables and annotation variables. Note that, in contrast to the formalization in Chapter 4, instantiations of meta-variables are independent of instantiations of annotations, since we have separated signatures and variables for annotation terms and object terms.
To cope with these different kinds of variables, an annotated substitution consists of a substitution for meta-variables as well as a family of substitutions for annotation variables. The definition of annotated substitutions determines the possible ways to inherit information during an inference step. To guarantee that an annotated inference step corresponds to a sound inference step in the non-annotated calculus, an annotated substitution has to denote a “standard” substitution if we erase all annotations.
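As a hedged sketch of this last requirement, one way to state it is as an erasure condition; the operator erase and the symbol \mu below are illustrative notation, not the appendix's own.

% Let erase(.) strip all annotations from an annotated term or substitution,
% and let mu be an annotated substitution, i.e. a meta-variable substitution
% together with a family of annotation-variable substitutions. Soundness with
% respect to the non-annotated calculus then asks that erasure commute with
% instantiation:
\[
  \mathit{erase}(\mu(t)) = \mathit{erase}(\mu)\bigl(\mathit{erase}(t)\bigr)
  \qquad \text{for every annotated term } t,
\]
% where erase(mu) is the ordinary substitution obtained by erasing the
% annotations in the terms that mu assigns to meta-variables.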
This book describes rippling, a new technique for automating mathematical reasoning. Rippling captures a common pattern of reasoning in mathematics: the manipulation of one formula to make it resemble another. Rippling was originally developed for proofs by mathematical induction; it was used to make the induction conclusion more closely resemble the induction hypotheses. It was later found to have wider applicability, for instance to problems in summing series and proving equations.
The problem of automating reasoning
The automation of mathematical reasoning has been a long-standing dream of many logicians, including Leibniz, Hilbert, and Turing. The advent of electronic computers provided the tools to make this dream a reality, and it was one of the first tasks to be tackled. For instance, the Logic Theory Machine and the Geometry Theorem-Proving Machine were both built in the 1950s and reported in Computers and Thought (Feigenbaum & Feldman, 1963), the earliest textbook on artificial intelligence. Newell, Shaw and Simon's Logic Theory Machine (Newell et al., 1957) proved theorems in propositional logic, and Gelernter's Geometry Theorem-Proving Machine (Gelernter, 1963) proved theorems in Euclidean geometry.
This early work on automating mathematical reasoning showed how the rules of a mathematical theory could be encoded within a computer and how a computer program could apply them to construct proofs. But these programs also revealed a major problem: combinatorial explosion. Rules could be applied in too many ways.
In the Note published last year [1], bounds and monotonicity of shot-noise and max-shot-noise processes driven by spatial stationary Cox point processes are discussed in terms of some stochastic order. Although all the statements concerning the shot-noise processes remain valid, those concerning the max-shot-noise processes have to be corrected.