For any finite word w on a finite alphabet, we consider the basic parameters R_w and K_w of w, defined as follows: R_w is the minimal natural number for which w has no right special factor of length R_w, and K_w is the minimal natural number for which w has no repeated suffix of length K_w. In this paper we study the distributions of these parameters, here called characteristic parameters, among the words of each length on a fixed alphabet.
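Both parameters are directly computable by brute force. As a concrete illustration (a naive quadratic sketch of our own, not an algorithm from the paper):

```python
def factors(w, n):
    """All distinct factors (substrings) of w of length n."""
    return {w[i:i + n] for i in range(len(w) - n + 1)}

def R(w):
    """Minimal n such that w has no right special factor of length n.
    A factor u is right special if ua and ub are factors of w for
    two distinct letters a and b."""
    for n in range(len(w) + 1):
        exts = factors(w, n + 1)
        if all(sum(f[:-1] == u for f in exts) < 2 for u in factors(w, n)):
            return n

def K(w):
    """Minimal n such that the length-n suffix of w occurs only once
    in w, i.e. w has no repeated suffix of length n (occurrences may
    overlap, so we count positions rather than use str.count)."""
    for n in range(len(w) + 1):
        s = w[len(w) - n:]
        if sum(w[i:i + n] == s for i in range(len(w) - n + 1)) == 1:
            return n

print(R("aabba"), K("aabba"))   # 2 2
```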
Many solutions to AI problems require the task to be represented in one of a multitude of rigorous mathematical formalisms. Constructing such mathematical models is a difficult problem that is often left to the user of the problem-solver. This void between problem-solvers and their problems is studied by the eclectic field of automated modelling. Within this field, compositional modelling, a knowledge-based methodology for system modelling, has established itself as a leading approach. In general, a compositional modeller organises knowledge in a structure of composable fragments that relate to particular system components or processes. Its embedded inference mechanism chooses the fragments appropriate to a given problem, then instantiates and assembles them into a consistent system model. Many different types of compositional modeller exist, however, with significant differences in their knowledge representation and approach to inference. This paper examines compositional modelling: it presents a general framework for building and analysing compositional modellers, and, based on this framework, examines and compares a number of influential compositional modellers. The paper also identifies the strengths and weaknesses of compositional modelling and discusses some typical applications.
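The fragment-selection-and-assembly loop can be pictured with a deliberately tiny Python sketch (all names, fragment contents and the matching rule are hypothetical; real compositional modellers use far richer representations and inference):

```python
from dataclasses import dataclass

@dataclass
class ModelFragment:
    """A composable piece of modelling knowledge (contents hypothetical)."""
    name: str
    requires: set       # component/process types the fragment applies to
    assumptions: set    # modelling assumptions it commits to
    equations: list     # model content contributed when instantiated

def assemble(components, assumptions, library):
    """Naive inference: instantiate every fragment whose required
    components occur in the scenario and whose assumptions were chosen."""
    model = []
    for frag in library:
        if frag.requires <= components and frag.assumptions <= assumptions:
            model.extend(frag.equations)
    return model

library = [
    ModelFragment("tank-mass-balance", {"tank"}, {"lumped"},
                  ["dV/dt = q_in - q_out"]),
    ModelFragment("valve-linear", {"valve"}, {"linear-flow"},
                  ["q_out = k * V"]),
]
print(assemble({"tank", "valve"}, {"lumped", "linear-flow"}, library))
# ['dV/dt = q_in - q_out', 'q_out = k * V']
```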
This article gives a comprehensive overview of techniques for personalised hypermedia presentation. It describes the data about the computer user, the computer usage and the physical environment that can be taken into account when adapting hypermedia pages to the needs of the current user. Methods for acquiring these data, representing them as models in formal systems, and making generalisations and predictions about the user based thereon are discussed. Different types of hypermedia adaptation to the individual user's needs are distinguished, and recommendations for further research and applications are given. While the focus of the article is on hypermedia adaptation for improving customer relationship management using the World Wide Web, many of the techniques and distinctions also apply to other types of personalised hypermedia applications within and outside the World Wide Web, such as adaptive educational systems.
The first International Workshop on Chance Discovery (CD) was held in the hot-springs resort of Matsue, Shimane Prefecture, Japan, on 22 May 2001, as part of the Fifteenth Annual Conference of the Japanese Society for Artificial Intelligence (JSAI-2001). Thirteen presentations were made at the workshop (Ohsawa, 2001), with 25 people attending. The majority of presenters and attendees were from Japan. An edited selection of the papers presented will be included in a volume of papers from the various International Workshops of JSAI-2001, to be published in the Springer series, Advanced Information Processing. A forthcoming special issue of the journal New Generation Computing will also address this topic.
In this paper, we focus on the problem of the existence and computation of small and large stable models. We show that for every fixed integer k, there is a linear-time algorithm to decide the problem LSM (large stable models problem): does a logic program P have a stable model of size at least |P| − k? In contrast, we show that the problem SSM (small stable models problem), deciding whether a logic program P has a stable model of size at most k, is much harder. We present two algorithms for this problem, but their running time is given by polynomials of order depending on k. We show that the problem SSM is fixed-parameter intractable by demonstrating that it is W[2]-hard. This result implies that it is unlikely that an algorithm exists to compute stable models of size at most k that runs in time O(m^c), where m is the size of the program and c is a constant independent of k. We also provide an upper bound on the fixed-parameter complexity of the problem SSM by showing that it belongs to the class W[3].
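To make the objects concrete, here is a minimal Python sketch of the standard Gelfond–Lifschitz stability test together with the naive search for a stable model of size at most k, whose exponent grows with k, which is exactly the behaviour the W[2]-hardness result suggests cannot be avoided (the sketch is illustrative and is not one of the paper's two algorithms):

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body); atoms are strings.

def least_model(definite_rules):
    """Least model of a definite program: iterate the immediate-
    consequence operator T_P to its fixpoint."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, M):
    """Gelfond-Lifschitz test: M equals the least model of the reduct
    P^M (drop rules blocked by M, strip negative literals)."""
    reduct = [(h, set(pos)) for h, pos, neg in program
              if not (set(neg) & M)]
    return least_model(reduct) == M

def small_stable_model(program, k):
    """Brute force over candidate sets of size <= k; a stable model can
    only contain head atoms.  The exponent depends on k, as in the
    'polynomials of order depending on k' behaviour above."""
    atoms = sorted({h for h, _, _ in program})
    for size in range(k + 1):
        for cand in combinations(atoms, size):
            if is_stable(program, set(cand)):
                return set(cand)
    return None

# p :- not q.     q :- not p.
P = [("p", [], ["q"]), ("q", [], ["p"])]
print(small_stable_model(P, 1))   # {'p'} (or {'q'})
```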
In this paper we investigate the theoretical foundation of a new bottom-up semantics for linear logic programs, more precisely for the fragment of LinLog (Andreoli, 1992) that consists of the language LO (Andreoli & Pareschi, 1991) enriched with the constant 1. We use constraints to symbolically and finitely represent possibly infinite collections of provable goals. We define a fixpoint semantics based on a new operator, in the style of T_P, working over constraints. An application of the fixpoint operator can be computed algorithmically. As a sufficient condition for termination, we show that the fixpoint computation is guaranteed to converge for propositional LO. To our knowledge, this is the first attempt to define an effective fixpoint semantics for linear logic programs. As an application of our framework, we also present a formal investigation of the relations between LO and Disjunctive Logic Programming (Minker et al., 1991). Using an approach based on abstract interpretation, we show that DLP fixpoint semantics can be viewed as an abstraction of our semantics for LO. We prove that the resulting abstraction is correct and complete (Cousot & Cousot, 1977; Giacobazzi & Ranzato, 1997) for an interesting class of LO programs encoding Petri nets.
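The paper's constraint-based operator is too involved for a short sketch, but its connection to Petri nets points to a familiar instance of the same idea: backward coverability, in which a possibly infinite upward-closed set of markings is represented finitely by its minimal elements and saturated to a fixpoint. A toy Python version (our illustration under that analogy, not the paper's semantics for LO):

```python
def leq(a, b):
    """Multiset inclusion: a <= b pointwise."""
    return all(b.get(k, 0) >= v for k, v in a.items())

def minimize(ms):
    """Keep minimal elements only: a finite basis denoting the
    (possibly infinite) upward-closed set of everything above it."""
    out = []
    for m in ms:
        if any(leq(o, m) for o in out):
            continue
        out = [o for o in out if not leq(m, o)]
        out.append(m)
    return out

def pre(b, transitions):
    """Minimal markings from which one transition step covers b."""
    res = []
    for take, put in transitions:
        keys = set(take) | set(put) | set(b)
        m = {k: take.get(k, 0) + max(b.get(k, 0) - put.get(k, 0), 0)
             for k in keys}
        res.append({k: v for k, v in m.items() if v})
    return res

def coverable(init, target, transitions):
    """Symbolic fixpoint: saturate the basis of markings that can
    cover `target`; Dickson's lemma guarantees stabilisation."""
    basis = minimize([target])
    while True:
        step = [m for b in basis for m in pre(b, transitions)]
        new = minimize(basis + step)
        if ({frozenset(m.items()) for m in new}
                == {frozenset(m.items()) for m in basis}):
            return any(leq(b, init) for b in basis)
        basis = new

# One transition: consume one a, produce two b.
t = [({"a": 1}, {"b": 2})]
print(coverable({"a": 2}, {"b": 3}, t))   # True: two firings give four b
```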
Abstract interpretation is a general methodology for the systematic development of program analyses. An abstract interpretation framework is centred around a parametrised non-standard semantics that can be instantiated by various domains to approximate different program properties. Many abstract interpretation frameworks and analyses for Prolog have been proposed, which seek to extract information useful for program optimization. Although motivated by practical considerations, notably making Prolog competitive with imperative languages, such frameworks fail to capture some of the control structures of existing implementations of the language. In this paper, we propose a novel framework for the abstract interpretation of Prolog which handles the depth-first search rule and the cut operator. It relies on the notion of substitution sequence to model the result of the execution of a goal. The framework consists of (i) a denotational concrete semantics, (ii) a safe abstraction of the concrete semantics defined in terms of a class of post-fixpoints, and (iii) a generic abstract interpretation algorithm. We show that traditional abstract domains of substitutions may easily be adapted to the new framework, and provide experimental evidence of the effectiveness of our approach. We also show that previous work on determinacy analysis, which was not expressible in existing abstract interpretation frameworks, can be seen as an instance of our framework. The ideas developed in this paper can be applied to other logic languages, notably constraint logic languages, and the theoretical approach should be of general interest for the analysis of many non-deterministic programming languages.
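As a reminder of the general recipe (abstract values, sound abstract operators, and an evaluator parametrised by the domain), here is a toy sign analysis in Python; it is far simpler than the substitution-sequence domains of the paper and is only meant to illustrate the methodology:

```python
# Abstract domain of signs: each abstract value stands for a set of ints.
NEG, ZERO, POS, TOP = "-", "0", "+", "?"

def alpha(n):
    """Abstraction of a concrete integer."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b != TOP else TOP   # e.g. POS + NEG is unknown

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO                      # 0 * anything = 0, even unknown
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_eval(expr, env):
    """expr is an int, a variable name, or ('+'|'*', e1, e2)."""
    if isinstance(expr, int):
        return alpha(expr)
    if isinstance(expr, str):
        return env[expr]
    op, e1, e2 = expr
    f = abs_add if op == "+" else abs_mul
    return f(abs_eval(e1, env), abs_eval(e2, env))

# With x and y known only to be positive, x*y + 3 is provably positive.
print(abs_eval(("+", ("*", "x", "y"), 3), {"x": POS, "y": POS}))  # '+'
```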
Given an r-graph F, an r-graph G is called weakly F-saturated if the edges missing from G can be added, one at a time, in some order, each extra edge creating a new copy of F. Let w-sat(n, F) be the minimal size of a weakly F-saturated graph of order n. We compute the w-sat function for a wide class of r-graphs called pyramids: these include many examples for which the w-sat function was known, as well as many new examples, such as the computation of w-sat(n, K_s + K_t), and enable us to prove a conjecture of Tuza.
Our main technique, developed from ideas of Kalai, is based on matroids derived from exterior algebra. We prove that if it succeeds for some graphs then the same is true for the ‘cones’ and ‘joins’ of such graphs, so that the w-sat function can be computed for many graphs that are built up from certain elementary graphs by these operations.
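For ordinary graphs (r = 2) the defining property is easy to test mechanically: adding an edge never destroys a copy of F through another edge, so missing edges can be added greedily in any order. A small Python check for the special case F = K_r (our illustration; the pyramid and exterior-algebra techniques of the paper are not reproduced here):

```python
from itertools import combinations

def creates_copy(edges, e, r):
    """Does adding edge e complete a copy of K_r through e?"""
    u, v = e
    others = {w for a, b in edges for w in (a, b)} - {u, v}
    for extra in combinations(sorted(others), r - 2):
        verts = {u, v, *extra}
        if all(frozenset(p) in edges or set(p) == {u, v}
               for p in combinations(verts, 2)):
            return True
    return False

def weakly_saturated(n, edges, r):
    """Greedy check: since edge additions only create more copies,
    the order in which addable edges are inserted does not matter."""
    edges = {frozenset(e) for e in edges}
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    changed = True
    while changed:
        changed = False
        for e in all_edges - edges:
            if creates_copy(edges, tuple(e), r):
                edges.add(e)
                changed = True
    return edges == all_edges

# A star on 4 vertices is weakly K_3-saturated: w-sat(n, K_3) = n - 1.
print(weakly_saturated(4, [(0, 1), (0, 2), (0, 3)], 3))   # True
```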
Let G be a graph on vertex set [n], and for X ⊆ [n] let N(X) be the union of X and its neighbourhood in G. A family of sets ℱ ⊆ 2^[n] is G-intersecting if N(X) ∩ Y ≠ ∅ for all X, Y ∈ ℱ. In this paper we study the cardinality and structure of the largest k-uniform G-intersecting families on a fixed graph G.
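For very small ground sets the extremal question can be explored by exhaustive search. A brute-force Python sketch (ours, exponential, and only feasible for tiny n):

```python
from itertools import combinations

def closed_nbhd(X, adj):
    """N(X): X together with all neighbours of X in G."""
    return set(X) | {v for u in X for v in adj[u]}

def g_intersecting(family, adj):
    return all(closed_nbhd(X, adj) & set(Y)
               for X in family for Y in family)

def largest_family(n, k, adj):
    """Try families of k-subsets of [n] from largest to smallest."""
    ksets = list(combinations(range(n), k))
    for r in range(len(ksets), 0, -1):
        for fam in combinations(ksets, r):
            if g_intersecting(fam, adj):
                return list(fam)
    return []

# G = path 0 - 1 - 2 - 3, k = 2.
path_adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
fam = largest_family(4, 2, path_adj)
print(len(fam))   # 6: every pair of 2-subsets is G-intersecting here
```

On this path, the sketch returns all six 2-subsets: G-intersecting is strictly weaker than intersecting, since the disjoint sets {0, 1} and {2, 3} are allowed once edges of G bridge them.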
In this paper we prove the following almost optimal theorem. For any δ > 0, there exist constants c and n_0 such that, if n ≥ n_0, T is a tree of order n and maximum degree at most cn/log n, and G is a graph of order n and minimum degree at least (1/2 + δ)n, then T is a subgraph of G.
We generalize a minimal 3-connectivity result of Halin from graphs to binary matroids. As applications of this theorem to minimally 3-connected matroids, we obtain new results and short inductive proofs of results of Oxley and Wu. We also give new short inductive proofs of results of Dirac and Halin on minimally k-connected graphs for k ∈ {2,3}.
We introduce a notion of the derivative with respect to a function, not necessarily related to a probability distribution, which generalizes the concept of derivative proposed by Lebesgue [14]. The differential calculus required to solve linear differential equations using this notion of the derivative is included in the paper. The definition given here may also be considered as the inverse operator of a modified notion of the Riemann–Stieltjes integral. Both this unified approach and the results of differential calculus allow us to characterize distributions in terms of three different types of conditional expectations. As an application of these results, a goodness-of-fit test is also indicated. Finally, two characterizations of a general Poisson process are included. Specifically, a useful result for the homogeneous Poisson process is generalized.
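One standard way to make such a derivative precise is as a limit of difference quotients taken against the function itself; this is offered only as a hedged illustration, since the paper's exact definition may differ in detail:

```latex
% For a suitable monotone function g, set
\[
  \frac{dF}{dg}(x) \;=\; \lim_{y \to x} \frac{F(y) - F(x)}{g(y) - g(x)} .
\]
% The inverse-operator property with respect to the (modified)
% Riemann--Stieltjes integral then reads, under regularity assumptions,
\[
  F(b) - F(a) \;=\; \int_a^b \frac{dF}{dg}(x) \, dg(x) .
\]
```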
A random interval graph of order n is generated by picking 2n numbers X_1,…,X_{2n} independently from the uniform distribution on [0,1] and considering the collection of n intervals with endpoints X_{2i−1} and X_{2i} for i ∈ {1,…,n}. The graph vertices correspond to intervals. Two vertices are connected if the corresponding intervals intersect. This paper characterizes the fluctuations of the independence number in random interval graphs. This characterization is obtained through the analysis of the greedy algorithm. We actually prove limit theorems (a central limit theorem and a large deviation principle) on the number of phases of this greedy algorithm. The proof relies on the analysis of first-passage times through a random level.
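The greedy algorithm can be taken to be the classical sweep that is exact for interval graphs: sort intervals by right endpoint and accept every interval that starts after the last accepted one ends (our reading; the paper's phase analysis is not reproduced). A short simulation sketch in Python:

```python
import random

def independence_number(intervals):
    """Greedy sweep, exact on interval graphs: sort by right endpoint,
    accept an interval iff it starts after the last accepted one ends.
    (Ties have probability zero for continuous endpoints.)"""
    count, last_end = 0, float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:
            count += 1
            last_end = right
    return count

def random_interval_graph(n):
    xs = [random.random() for _ in range(2 * n)]
    return [tuple(sorted((xs[2 * i], xs[2 * i + 1]))) for i in range(n)]

n = 10_000
samples = [independence_number(random_interval_graph(n)) for _ in range(20)]
print(sum(samples) / len(samples))  # CLT-scale fluctuations around the mean
```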
Let ℳ be the class of simple matroids which do not contain the 5-point line U_{2,5}, the Fano plane F_7, the non-Fano plane F_7^−, or the matroid P_7 as minors. Let h(n) be the maximum number of points in a rank-n matroid in ℳ. We show that h(2) = 4, h(3) = 7, and h(n) = \binom{n+1}{2} for n ≥ 4, and we also find all the maximum-sized matroids for each rank.
Algorithmic aspects of a chip-firing game on a graph introduced by Biggs are studied. This variant of the chip-firing game, called the dollar game, has the property that every starting configuration leads to a so-called critical configuration. The set of critical configurations has many interesting properties. In this paper it is proved that the number of steps needed to reach a critical configuration is polynomial in the number of edges of the graph and the number of chips in the starting configuration, but not necessarily in the size of the input. An alternative algorithm is also described and analysed.
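A direct simulation makes the dynamics concrete. The Python sketch below assumes the standard formulation of the dollar game (a distinguished "bank" vertex fires only when no other vertex can) and detects a critical configuration as the first stable configuration that repeats, since configurations on the cycle of this deterministic process are recurrent, hence critical; the code is ours and is neither of the paper's algorithms:

```python
def stabilize(chips, adj, bank):
    """Fire non-bank vertices with at least deg(v) chips until stable;
    return the stable configuration and the number of firings."""
    chips, steps = dict(chips), 0
    active = [v for v in adj if v != bank and chips[v] >= len(adj[v])]
    while active:
        v = active.pop()
        if v == bank or chips[v] < len(adj[v]):
            continue                     # stale entry
        chips[v] -= len(adj[v])
        steps += 1
        for u in adj[v]:
            chips[u] += 1
            if u != bank and chips[u] >= len(adj[u]):
                active.append(u)
        if chips[v] >= len(adj[v]):
            active.append(v)
    return chips, steps

def dollar_game(chips, adj, bank):
    """Alternate stabilization with firing the bank until the stable
    configuration repeats; the repeating configuration is critical."""
    seen, total = set(), 0
    while True:
        chips, steps = stabilize(chips, adj, bank)
        total += steps
        key = tuple(sorted((v, c) for v, c in chips.items() if v != bank))
        if key in seen:
            return chips, total
        seen.add(key)
        chips[bank] -= len(adj[bank])    # bank fires: one chip per edge
        for u in adj[bank]:
            chips[u] += 1

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}        # triangle, bank = 0
print(dollar_game({0: 0, 1: 0, 2: 0}, adj, 0))  # critical config (1, 1)
```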
We consider an extension of the Monotone Subsequence lemma of Erdős and Szekeres in higher dimensions. Let v_1,…,v_n ∈ ℝ^d be a sequence of real vectors. For a subset I ⊆ [n] and a vector c ∈ {0,1}^d, we say that I is c-free if there are no i < j in I such that, for every k = 1,…,d, v_{ik} < v_{jk} if and only if c_k = 0. We construct sequences of vectors with the property that the largest c-free subset is small for every choice of c. In particular, for d = 2 the largest c-free subset is O(n^{5/8}) for all four possible choices of c. The smallest possible value remains far from being determined.
We also consider and resolve a simpler variant of the problem.
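The definition is easy to animate for tiny inputs. Since a c-free set is just an independent set in the graph of pattern-matching pairs, a brute-force search suffices for illustration (our sketch; for d = 1 and c = (0) it reduces to the longest non-increasing subsequence):

```python
from itertools import combinations

def forbidden(vi, vj, c):
    """Pair (i < j) matches the pattern c: in every coordinate k,
    v_i[k] < v_j[k] exactly when c[k] == 0."""
    return all((vi[k] < vj[k]) == (c[k] == 0) for k in range(len(c)))

def largest_c_free(vectors, c):
    """Exponential brute force over subsets, largest first; fine for
    tiny inputs only."""
    n = len(vectors)
    for size in range(n, 0, -1):
        for idx in combinations(range(n), size):
            if not any(forbidden(vectors[i], vectors[j], c)
                       for a, i in enumerate(idx) for j in idx[a + 1:]):
                return list(idx)
    return []

# d = 1, c = (0,): forbidden pairs are increasing pairs, so the answer
# is the index set of a longest non-increasing subsequence.
vecs = [(3,), (1,), (4,), (1,), (5,)]
print(largest_c_free(vecs, (0,)))   # [0, 1, 3] -> values 3, 1, 1
```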