The widespread availability of large data-sets poses both an opportunity and a challenge to logic programming. A first approach is to couple a relational database with logic programming, say, a Prolog system with MySQL. While this approach does pay off when the data cannot reside in main memory, it is known to introduce substantial overheads. Ideally, we would like the Prolog system itself to handle large data-sets efficiently, both in terms of memory and of processing time. Just-In-Time Indexing (JITI) was mainly motivated by this challenge, and works quite well in many applications.
Exo-compilation, designed to deal with large tables, is a next step that achieves very promising results, reducing the memory footprint by over two thirds. We show that combining exo-compilation with Just-In-Time Indexing can yield significant advantages both in memory usage and in execution time.
An alternative path, relevant for many applications, is User-Defined Indexing (UDI). It allows specialized indexing for specific applications, say, the spatial indexing crucial to any system handling spatial data. UDI treats indexes as pluggable modules and can naturally be combined with exo-compilation. We do so by using UDI with exo-data, and by incorporating ideas from UDI into high-performance indexers for specific tasks.
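As a rough illustration of why indexing on demand matters (a hypothetical sketch of ours, not code from the paper), consider a large Prolog fact table queried with different instantiation patterns:

    % A large table of facts, e.g. loaded from an external data-set
    % (predicate and data are made up for illustration):
    %   visit(PatientId, Date, Diagnosis).
    visit(p001, 20120301, flu).
    visit(p002, 20120301, cold).
    % ... millions of similar facts ...

    % First-argument indexing serves ?- visit(p001, D, X) directly,
    % but ?- visit(P, D, flu) would scan the whole table unless an
    % index on the third argument exists. JITI builds that index
    % lazily, the first time a call with this pattern occurs.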
We consider disjunctive logic programs without function symbols but with existential quantification in rule heads, under the semantics of general stable models. These programs are interesting for at least two reasons: first, a program can be made more succinct by using existential variables; second, they have potential for representing defeasible ontological knowledge. This paper studies some of the properties of these programs. First, we give a simple yet intuitive definition of stable models for these programs that does not resort to second-order logic. Second, we show that the stable models of these programs can be characterized by an extension of progression for disjunctive programs, which provides a native characterization of justification for stable models. We then study the decidability issue. While the stable model existence problem for safe disjunctive programs is decidable, the problem becomes undecidable once existential quantification is allowed in rule heads. We identify an interesting decidable fragment by exploring a new notion of stratification over existential quantification.
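To illustrate the succinctness point with an example of ours (not one from the paper), a rule with an existential head such as

$$\exists y\,\mathit{fatherOf}(y,x) \leftarrow \mathit{person}(x)$$

states that every person has some father without naming one; a function-free program without existential heads would have to enumerate candidate fathers explicitly.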
Fuzzy answer set programming (FASP) is a recent formalism for knowledge representation that enriches the declarativity of answer set programming by allowing propositions to be graded. To date, no implementations of FASP solvers are available, and all current proposals are based on compiling logic programs into different paradigms, such as mixed integer programs or bilevel programs. These approaches introduce many auxiliary variables, which might affect solver performance negatively. To limit this downside, operators for approximating fuzzy answer sets can be introduced: given a FASP program, these operators compute lower and upper bounds for all atoms in the program such that all answer sets lie between these bounds. This paper analyzes several operators of this kind, based on linear programming, fuzzy unfounded sets and source pointers. Furthermore, the paper reports on a prototypical implementation, also describing strategies for avoiding computations of these operators when they are guaranteed not to improve the current bounds. The operators and their implementation can be used to obtain more constrained mixed integer or bilevel programs, or even as a basis for implementing a native FASP solver. Interestingly, the semantics of relevant classes of programs with unique answer sets, such as positive programs and programs with stratified negation, can already be computed by the prototype without the need for an external tool.
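As a tiny worked example of such bounds (ours, assuming Łukasiewicz semantics): for the FASP program consisting of the single rule $a \leftarrow \mathit{not}\ a$, with negation interpreted as $1 - x$, the unique answer set satisfies $a = 1 - a$, i.e., $a = 0.5$. Starting from the trivial bounds $[0,1]$ for $a$, operators of the kind analyzed here may narrow the interval towards $[0.5, 0.5]$, at which point the answer set is determined without search.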
We study the problem of finding optimal plans for multiple teams of robots through a mediator, where each team is given a task to complete in its workspace on its own, and where teams are allowed to transfer robots between each other, subject to the following constraints: 1) teams (and the mediator) do not know about each other's workspaces or tasks (e.g., for privacy purposes); 2) every team can lend or borrow robots, but not both (e.g., transportation/calibration of robots between/for different workspaces is usually costly). We present a mathematical definition of this problem and analyze its computational complexity. We introduce a novel, logic-based method to solve this problem, utilizing action languages and answer set programming for representation and state-of-the-art ASP solvers for reasoning. We show the applicability and usefulness of our approach through experiments on various scenarios of responsive and energy-efficient cognitive factories.
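For instance, the lend-xor-borrow restriction has a direct ASP encoding; the fragment below is a schematic sketch of ours (the predicate transfer/3 is hypothetical), not the paper's actual encoding:

    % transfer(From, To, Robot): Robot moves from team From to team To.
    lends(T)   :- team(T), transfer(T, _, _).
    borrows(T) :- team(T), transfer(_, T, _).
    % Every team may lend or borrow robots, but never both:
    :- team(T), lends(T), borrows(T).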
This paper presents a detailed analysis of the scalability and parallelization of local search algorithms for the Satisfiability problem. We propose a framework to estimate the parallel performance of a given algorithm by analyzing the runtime behavior of its sequential version. Indeed, by approximating the runtime distribution of the sequential process with statistical methods, the runtime behavior of the parallel process can be predicted by a model based on order statistics. We apply this approach to study the parallel performance of two SAT local search solvers, namely Sparrow and CCASAT, and compare the predicted performance to the results of actual experiments on parallel hardware with up to 384 cores. We show that the model is accurate and predicts performance close to the empirical data. Moreover, by studying different types of instances (random and crafted), we observe that the local search solvers exhibit different behaviors and that their runtime distributions can be approximated by two types of distributions: exponential (shifted and non-shifted) and lognormal.
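The core of such a model, stated here in outline, is the distribution of a minimum: if the sequential runtime has cumulative distribution $F(t)$ and $n$ independent copies of the solver race in parallel, the parallel runtime is the first order statistic, with distribution

$$F_n(t) = 1 - \bigl(1 - F(t)\bigr)^n.$$

In particular, if $F$ is exponential, the minimum is again exponential with $n$ times the rate, predicting near-linear speedup; a shifted exponential instead bounds the achievable speedup, since no run can finish before the shift.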
Recent work in Answer Set Programming has integrated ideas from Constraint Programming. This has led to a new field called ASP Modulo CSP (CASP), in which the ASP language is enriched with constraint atoms representing constraint satisfaction problems. These constraints have a more compact grounding and are handled by a new generation of search algorithms. However, the burden is on the modeler to exploit these new constructs in their declarative problem specifications. Here, we explore how to remove this burden by automatically generating constraint atoms. We do so in the context of FO(·)IDP, a knowledge representation language that extends first-order logic with, among others, inductive definitions, arithmetic and aggregates. We uncover functional dependencies in declarative problem specifications with a theorem prover and exploit them through a transformation that introduces functions. Experimental evaluation shows that we obtain more compact groundings and better search performance.
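A schematic example (ours) of the kind of dependency exploited: if a specification entails that every task is assigned exactly one machine, the binary predicate can be replaced by a function,

$$\mathit{assign}(t,m) \;\leadsto\; \mathit{machine}(t) = m,$$

so that the grounder represents one function value per task instead of $|T| \times |M|$ Boolean atoms, and constraints over $\mathit{assign}$ become constraint atoms over $\mathit{machine}$.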
We investigate the relationship between the generalization of program completion defined in 1984 by Lloyd and Topor and the generalization of the stable model semantics introduced recently by Ferraris et al. The main theorem can be used to characterize, in some cases, the general stable models of a logic program by a first-order formula. The proof uses Truszczynski's stable model semantics of infinitary propositional formulas.
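A standard example of the correspondence (ours, not taken from the paper): for the program consisting of the single rule $p \leftarrow \mathit{not}\ q$, the completion is $(p \leftrightarrow \neg q) \wedge (q \leftrightarrow \bot)$, whose unique model $\{p\}$ is also the unique stable model; the value of the main theorem lies in identifying conditions under which this agreement extends to programs with variables and more complex rules.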
In this paper, we present SeaLion, an integrated development environment (IDE) for answer-set programming (ASP). SeaLion provides source-code editors for the languages of Gringo and DLV and offers popular amenities like syntax highlighting, syntax checking, code completion, visual program outline, and refactoring functionality. The tool has been realised in the context of a research project whose goal is the development of techniques to support the practical coding process of answer-set programs. In this respect, SeaLion is the first IDE for ASP that provides debugging features that work for real-world answer-set programs and supports the rich languages of modern answer-set solvers. Indeed, SeaLion implements a stepping-based debugging approach that allows the developer to quickly track down programming errors by simply following his or her intuitions on the intended semantics. Besides that, SeaLion supports ASP development using model-driven engineering techniques including domain modelling with extended UML class diagrams and visualisation of answer sets in corresponding instance diagrams. Moreover, customised visualisation as well as visual editing of answer sets is realised by the Kara plugin of SeaLion. Further implemented features are a documentation generator based on the Lana annotation language, support for external solvers, and interoperability with external tools. SeaLion comes as a plugin of the popular Eclipse platform and provides interfaces for future extensions of the IDE.
A large body of work has been dedicated to termination analysis of logic programs but relatively little has been done to analyze non-termination. In our opinion, explaining non-termination is a much more important task because it can dramatically improve a user's ability to effectively debug large, complex logic programs without having to abide by punishing syntactic restrictions. Non-termination analysis examines program execution history when the program is suspected to not terminate and informs the programmer about the exact reasons for this behavior. In Liang and Kifer (2013), we studied the problem of non-termination in tabled logic engines with subgoal abstraction, such as XSB, and proposed a suite of algorithms for non-termination analysis, called Terminyzer. These algorithms analyze forest logging traces and output sequences of tabled subgoal calls that are the likely causes of non-terminating cycles. However, this feedback was hard to use in practice: the same subgoal could occur in multiple rule heads, and in even more places in rule bodies, so Terminyzer left a tedious and sometimes combinatorially large amount of work for the user to do manually.
Here we propose a new suite of algorithms, Terminyzer+, which closes this usability gap. Terminyzer+ can detect not only the sequences of subgoals that cause non-termination but, importantly, the exact rules in which they occur and the rule sequences that fire in a cyclic manner, thus causing non-termination. This makes Terminyzer+ suitable as a back-end for user-friendly graphical interfaces, which can greatly simplify the debugging process. Terminyzer+ back-ends exist for the SILK system as well as for the open-source Flora-2 system. A graphical interface has been developed for SILK and is currently underway for Flora-2. We also report experimental studies, which confirm the effectiveness of Terminyzer+ on a host of large real-world knowledge bases. All tests used in this paper are available online.
In addition, we take a step towards automatic remediation of non-terminating programs by proposing an algorithm that heuristically fixes some causes of misbehavior. Furthermore, unlike Terminyzer, Terminyzer+ does not require the underlying logic engine to support subgoal abstraction, although it can take advantage of it when available.
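A minimal illustration of the kind of behavior being diagnosed (our example, not one from the paper):

    % Each recursive call produces a brand-new subgoal
    %   p(a), p(f(a)), p(f(f(a))), ...
    % so tabling alone never converges.
    :- table p/1.
    p(X) :- p(f(X)).

    % ?- p(a).   % loops forever without subgoal abstraction

An analysis in the style of Terminyzer+ would report the recurring call pattern for p/1, together with the single rule above, as the cycle that fires repeatedly.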
FO(·)IDP3 extends first-order logic with inductive definitions, partial functions, types and aggregates. Its model generator IDP3 first grounds the theory and then uses search to find the models. The grounder uses Lifted Unit Propagation (LUP) to reduce the size of the groundings of problem specifications in IDP3. LUP is in general very effective, but performs poorly on definitions of predicates whose two-valued interpretation can be computed from data in the input structure. To solve this problem, a preprocessing step is introduced that converts such definitions to Prolog code and uses XSB Prolog to compute their interpretation. The interpretation of these predicates is then added to the input structure, their definitions are removed from the theory and further processing is done by the standard IDP3 system. Experimental results show the effectiveness of our method.
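A typical definition of this kind (our illustration) is reachability over an input edge relation: its interpretation is two-valued once the edges are given, and it is exactly the sort of definition that tabled Prolog evaluates efficiently:

    % Computable bottom-up from the input structure; XSB's tabling
    % guarantees termination and avoids recomputation.
    :- table reach/2.
    reach(X, Y) :- edge(X, Y).
    reach(X, Y) :- reach(X, Z), edge(Z, Y).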
FS-rules provide a powerful monotonic extension of Horn clauses that supports monotonic aggregates in recursion by reasoning on the multiplicity of occurrences satisfying existential goals. The least fixpoint semantics, and the equivalent least model semantics, hold for logic programs with FS-rules; moreover, generalized notions of stratification and stable models are easily derived when negated goals are allowed. Finally, the generalization of techniques such as the seminaive fixpoint and magic sets makes possible the efficient implementation of DatalogFS, i.e., Datalog with Frequency Support rules (FS-rules) and stratified negation. A large number of applications that could not be supported efficiently, or could not be expressed at all, in stratified Datalog can now be easily expressed and efficiently supported in DatalogFS, and a powerful DatalogFS system is now being developed at UCLA.
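The key observation behind FS-rules, paraphrased here, is that a lower-bound frequency test is monotonic: a condition such as

$$\#\{\,Y \mid \mathit{edge}(X,Y)\,\} \geq K$$

can only switch from false to true as facts are added, never back, so it may appear in recursive rules without breaking the least fixpoint semantics, unlike non-monotonic aggregates such as exact counts or averages.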
The paper provides a framework for the verification of business processes, based on an extension of answer set programming (ASP) with temporal logic and constraints. The framework captures expressive fluent annotations as well as data awareness in a uniform way. It allows both for a declarative specification of a business process and for encoding processes specified in conventional workflow languages. Verification of temporal properties of a business process, including verification of compliance with business rules, is performed by bounded model checking techniques in ASP, extended with constraint solving for dealing with conditions on numeric data.
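Encodings of this kind typically unfold the process over a bounded number of time steps; the fragment below is a schematic sketch of ours (predicate names are made up), not the paper's actual language:

    #const k = 10.             % horizon for bounded model checking
    time(0..k).
    % Inertia: fluents persist unless explicitly terminated.
    holds(F, T+1) :- holds(F, T), fluent(F), time(T), T < k,
                     not terminated(F, T).
    % Compliance: a business-rule violation at any step is forbidden.
    :- violated(R, T), rule(R), time(T).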
Gödel argued that Cantor’s notion of cardinal number was uniquely correct. More recent work has defended alternative “Euclidean” theories of set size, in which Cantor’s Principle (two sets have the same size if and only if there is a one-to-one correspondence between them) is abandoned in favor of the Part–Whole Principle (if A is a proper subset of B, then A is smaller than B). Here we see from simple examples, not that Euclidean theories of set size are wrong, nor merely that they are counterintuitive, but that they must be either very weak or in large part arbitrary and misleading. This limits their epistemic usefulness.
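One such simple example (our rendering): the Part–Whole Principle fixes that size(E) < size(ℕ) for the evens E = {0, 2, 4, …}, since E is a proper subset of ℕ, but it says nothing about how E compares with the odds O = {1, 3, 5, …}, since neither set contains the other; any verdict there must come from further stipulations, and this is where arbitrariness enters.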
Let $\{X_1, \ldots, X_n\}$ be a collection of binary-valued random variables and let $f : \{0,1\}^n \to \mathbb{R}$ be a Lipschitz function. Under a negative dependence hypothesis known as the strong Rayleigh condition, we show that $f - \mathbb{E}f$ satisfies a concentration inequality. The class of strong Rayleigh measures includes determinantal measures, weighted uniform matroids and exclusion measures; some familiar examples from these classes are generalized negative binomials and spanning tree measures. For instance, any Lipschitz-1 function of the edges of a uniform spanning tree on vertex set $V$ (e.g., the number of leaves) satisfies the Gaussian concentration inequality
$$\mathbb{P}(f - \mathbb{E}f \geq a) \leq \exp\biggl(-\frac{a^2}{8\,|V|}\biggr).$$
We also prove a continuous version for concentration of Lipschitz functionals of a determinantal point process.
The observation of two independent, orthogonally polarized shear waves, one traveling faster than the other, is arguably the most unambiguous indicator of wave propagation through an anisotropic medium. The splitting can be quantified by the time delay (δt) between the two shear waves and the orientation (φ) of the fast shear wave (Fig. 11.1).
In this chapter, we review briefly the theory behind shear wave splitting, with particular focus on the popular method of Silver and Chan (1991), for which we provide source code and documentation.
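In outline (our summary of the standard procedure): the method performs a grid search over candidate pairs (φ, δt); for each pair, the horizontal components are rotated into the trial fast/slow frame and the slow component is advanced by δt, and the pair that minimizes the second eigenvalue λ2 of the covariance matrix of the corrected particle motion, i.e., that best restores linear particle motion, is taken as the splitting estimate.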
SHEAR WAVE SPLITTING ANALYSIS
Seismic anisotropy can be studied via shear wave splitting throughout the Earth, ranging in depth from hydrocarbon reservoirs in the shallow crust (e.g., Verdon et al., 2009) to the core and deep mantle (e.g., Wookey and Helffrich, 2008). By selecting earthquakes at angular distances ≥88° from a recording site, phases such as SKS, PKS and SKKS can be readily isolated for analysis of upper mantle anisotropy (for a review, see Savage, 1999). We will focus on analysis of these core phases here.
Patterns of seismic anisotropy can develop due to the preferential alignment of minerals in the crust and/or mantle, the preferential alignment of fluid or melt, layering of isotropic materials, or some combination thereof (e.g., Blackman and Kendall, 1997).