This chapter describes a model of autonomous belief revision (ABR) which discriminates between possible alternative belief sets in the context of change. The model determines preferred revisions on the basis of the relative persistence of competing cognitive states. It has been implemented as ICM (increased coherence model), a belief revision mechanism encompassing a three-tiered ordering structure which represents a blend of coherence and foundational theories of belief revision.
The motivation for developing the model of ABR is as a component of a model of communication between agents. The concern is choice about changing belief. In communication, agents should be designed to choose whether as well as how to revise their beliefs. This is an important aspect of design for multi-agent contexts such as open environments (Hewitt, 1986), in which no one element can be in possession of complete information about all parts of the system at all times. Communicated information cannot therefore be assumed to be reliable and fully informed. The model of ABR and the system ICM represent the first phase in the development of a computational model of cooperative, yet autonomously determined communication. The theory of ABR and communication is explicated in section 2.
Section 3 follows with an outline of the problem of multiple alternative revisions, and a discussion of preference and strength of belief issues from an AI perspective. This section includes the relevant comparative and theoretical background for understanding the model of ABR described in section 4.
Consider a knowledge base represented by a theory ψ of some logic, say propositional logic. We want to incorporate into ψ a new fact, represented by a sentence μ of the same language. What should the resulting theory be? A growing body of work (Dalal 1988, Katsuno and Mendelzon 1989, Nebel 1989, Rao and Foo 1989) takes as a departure point the rationality postulates proposed by Alchourrón, Gärdenfors and Makinson (1985). These are rules that every adequate revision operator should be expected to satisfy. For example: the new fact μ must be a consequence of the revised knowledge base.
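In the finite propositional setting used here, the Katsuno–Mendelzon rephrasing of the first few AGM postulates, with $\circ$ the revision operator, reads roughly as follows (the first is the "success" condition just mentioned):

```latex
\begin{array}{ll}
(\mathrm{R1}) & \psi \circ \mu \models \mu \\
(\mathrm{R2}) & \text{if } \psi \wedge \mu \text{ is satisfiable, then } \psi \circ \mu \equiv \psi \wedge \mu \\
(\mathrm{R3}) & \text{if } \mu \text{ is satisfiable, then } \psi \circ \mu \text{ is satisfiable} \\
(\mathrm{R4}) & \text{if } \psi_1 \equiv \psi_2 \text{ and } \mu_1 \equiv \mu_2 \text{, then } \psi_1 \circ \mu_1 \equiv \psi_2 \circ \mu_2
\end{array}
```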
In this paper, we argue that no such set of postulates will be adequate for every application. In particular, we make a fundamental distinction between two kinds of modifications to a knowledge base. The first one, update, consists of bringing the knowledge base up to date when the world described by it changes. For example, most database updates are of this variety, e.g. “increase Joe's salary by 5%”. Another example is the incorporation into the knowledge base of changes caused in the world by the actions of a robot (Ginsberg and Smith 1987, Winslett 1988, Winslett 1990). We show that the AGM postulates must be drastically modified to describe update.
The second type of modification, revision, is used when we are obtaining new information about a static world.
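The distinction can be illustrated with a small model-based sketch (a simplification, not the formalism of any one cited paper): under revision, we keep the models of the new sentence μ that are closest to the old theory ψ as a whole, while under update each model of ψ is moved individually to its nearest μ-models. The book/magazine scenario below is hypothetical.

```python
from itertools import product

def models(formula, atoms):
    """All truth assignments (dicts) satisfying `formula`."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def dist(w, v):
    """Hamming distance between two assignments."""
    return sum(w[a] != v[a] for a in w)

def revise(psi_models, mu_models):
    """Dalal-style revision: mu-models globally closest to psi."""
    d = min(dist(w, v) for v in mu_models for w in psi_models)
    return [v for v in mu_models
            if min(dist(w, v) for w in psi_models) == d]

def update(psi_models, mu_models):
    """Winslett-style (pointwise) update: nearest mu-models per psi-model."""
    result = []
    for w in psi_models:
        d = min(dist(w, v) for v in mu_models)
        result += [v for v in mu_models
                   if dist(w, v) == d and v not in result]
    return result

atoms = ["b", "m"]                                 # book / magazine on the table
psi = models(lambda w: w["b"] != w["m"], atoms)    # exactly one is on it
mu  = models(lambda w: w["b"], atoms)              # new info: the book is on it

print(revise(psi, mu))   # [{'b': True, 'm': False}]
print(update(psi, mu))   # two models: the magazine may now be on it too
```

Revision concludes the magazine is off the table (minimal change to a static world); update, treating μ as the effect of an action, also allows the world where both ended up on the table.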
There are many ways to change a theory. The tasks of adding a sentence to a theory and of retracting a sentence from a theory are non-trivial because they are usually constrained by at least three requirements: the result of a revision or contraction of a theory should again be a theory, i.e., closed under logical consequence; it should be consistent whenever possible; and it should not change the original theory beyond necessity. In the course of the Alchourrón–Gärdenfors–Makinson research programme, at least three different methods for constructing contractions of theories have been proposed. Among these, the “safe contraction functions” of Alchourrón and Makinson (1985, 1986) have played, as it were, the role of an outsider. Gärdenfors and Makinson (1988, p. 88), for instance, state that ‘another, quite different, way of doing this [contracting and revising theories] was described by Alchourrón and Makinson (1985).’ (Italics mine.) The aim of the present paper is to show that this is a miscasting.
In any case, it seems that the intuitions behind safe contractions are fundamentally different from those behind its rivals, the partial meet contractions of Alchourrón, Gärdenfors and Makinson (1985) and the epistemic entrenchment contractions of Gärdenfors and Makinson (1988). Whereas the latter notions are tailored especially to handling theories (as opposed to sets of sentences which are not closed under a given consequence operation), safe contraction by its very idea focusses on minimal sets of premises sufficient to derive a certain sentence.
Belief revision is the process of incorporating new information into a knowledge base while preserving consistency. Recently, belief revision has received a lot of attention in AI, which led to a number of different proposals for different applications (Ginsberg 1986; Ginsberg, Smith 1987; Dalal 1988; Gärdenfors, Makinson 1988; Winslett 1988; Myers, Smith 1988; Rao, Foo 1989; Nebel 1989; Winslett 1989; Katsuno, Mendelzon 1989; Katsuno, Mendelzon 1991; Doyle 1990). Most of this research has been considerably influenced by approaches in philosophical logic, in particular by Gärdenfors and his colleagues (Alchourrón, Gärdenfors, Makinson 1985; Gärdenfors 1988), who developed the logic of theory change, also called theory of epistemic change. This theory formalizes epistemic states as deductively closed theories and defines different change operations on such epistemic states.
Syntax-based approaches to belief revision, to be introduced in Section 3, have been very popular because of their conceptual simplicity. However, there has also been criticism, since the outcome of a revision operation relies on arbitrary syntactic distinctions (see, e.g., (Dalal 1988; Winslett 1988; Katsuno, Mendelzon 1989))—and for this reason such operations cannot be analyzed on the knowledge level. In (Nebel 1989) we showed that syntax-based approaches can be interpreted as assigning higher relevance to explicitly represented sentences. Based on that view, one particular kind of syntax-based revision, called base revision, was shown to fit into the theory of epistemic change. In Section 4 we generalize this result to prioritized bases.
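As a rough illustration of the syntax-based idea (a greedy simplification for exposition, not the paper's own construction): treat the base as a priority-ordered list of sentences and retain each one that remains jointly consistent with the new information and the sentences already retained.

```python
from itertools import product

def satisfiable(sentences, atoms):
    """Brute-force propositional satisfiability over the given atoms."""
    return any(all(s(dict(zip(atoms, vals))) for s in sentences)
               for vals in product([True, False], repeat=len(atoms)))

def base_revise(base, mu, atoms):
    """Greedy prioritized base revision (illustrative): keep mu, then
    admit base sentences in priority order whenever consistency survives."""
    kept = [mu]
    for s in base:                       # ordered, most trusted first
        if satisfiable(kept + [s], atoms):
            kept.append(s)
    return kept

atoms = ["p", "q"]
base = [lambda w: w["p"] or w["q"],      # highest priority
        lambda w: w["p"],                # will conflict with the new fact
        lambda w: w["q"]]
mu = lambda w: not w["p"]                # new information: not p

revised = base_revise(base, mu, atoms)
print(len(revised))                      # mu plus the two surviving sentences
```

The sentence `p` is the only casualty: the syntactic form of the base, not just its logical content, determines what survives, which is exactly the knowledge-level objection discussed above.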
Recent years have seen considerable work on two approaches to belief revision: the so-called foundations and coherence approaches. The foundations approach supposes that a rational agent derives its beliefs from justifications or reasons for these beliefs: in particular, that the agent holds some belief if and only if it possesses a satisfactory reason for that belief. According to the foundations approach, beliefs change as the agent adopts or abandons reasons. The coherence approach, in contrast, maintains that pedigrees do not matter for rational beliefs, but that the agent instead holds some belief just as long as it logically coheres with the agent's other beliefs. More specifically, the coherence approach supposes that revisions conform to minimal change principles and conserve as many beliefs as possible as specific beliefs are added or removed. The artificial intelligence notion of reason maintenance system (Doyle, 1979) (also called “truth maintenance system”) has been viewed as exemplifying the foundations approach, as it explicitly computes sets of beliefs from sets of recorded reasons. The so-called AGM theory of Alchourrón, Gärdenfors and Makinson (1985; 1988) exemplifies the coherence approach with its formal postulates characterizing conservative belief revision.
Although philosophical work on the coherence approach influenced at least some of the work on the foundations approach (e.g., (Doyle, 1979) draws inspiration from (Quine, 1953; Quine and Ullian, 1978)), Harman (1986) and Gärdenfors (1990) view the two approaches as antithetical. Gärdenfors has presented perhaps the most direct argument for preferring the coherence approach to the foundations approach.
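A minimal sketch of the foundations picture (a monotone simplification of Doyle-style reason maintenance, with hypothetical propositions; real RMSs also support nonmonotonic justifications): a belief is held iff it is a premise or some recorded reason for it has all its antecedents held.

```python
def believed(reasons, premises):
    """Least fixpoint of a set of monotone reasons: each reason is a pair
    (antecedents, consequent); the consequent is believed once all of its
    antecedents are believed."""
    beliefs = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in reasons:
            if consequent not in beliefs and set(antecedents) <= beliefs:
                beliefs.add(consequent)
                changed = True
    return beliefs

reasons = [(["bird"], "can_fly"),
           (["can_fly", "has_nest"], "lays_eggs_high")]
print(believed(reasons, {"bird"}))               # adds can_fly only
print(believed(reasons, {"bird", "has_nest"}))   # adds both conclusions
```

Retracting a premise automatically retracts everything that depended on it, which is the sense in which beliefs change "as the agent adopts or abandons reasons."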
Since the beginning of artificial intelligence research on action, researchers have been concerned with reasoning about actions with preconditions and postconditions. Through the work of Moore (1980), Pratt's (1980) dynamic semantics soon established itself in artificial intelligence as the appropriate semantics for action. Mysteriously, however, actions with preconditions and postconditions were not given a proper treatment within the modal framework of dynamic logic. This paper offers such an analysis. Things are complicated by the need to deal at the same time with the notion of competence, or an actor's ability. Below, a logic of actions with preconditions and postconditions is given a sound and complete syntactic characterization, in a logical formalism in which it is possible to express actor competence, and the utility of this formalism is demonstrated in the generation and evaluation of plans.
The notion of actions with pre- and postconditions arose in artificial intelligence in the field of planning. In formulating a plan to reach some particular goal, there are a number of things which a planning agent must take into account. First, he will have to decide which actions can and may be undertaken in order to reach the goal. The physical, legal, financial and other constraints under which an actor must act will be lumped together below, since we will be interested in what is common to them all, namely that they restrict available options.
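The precondition/postcondition reading of actions can be sketched STRIPS-style (an illustrative simplification with hypothetical names, not the dynamic-logic formalization developed in the paper): an actor is able to perform an action in a state exactly when its preconditions hold there, and performing it updates the state by the postconditions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """An action with preconditions and postconditions."""
    name: str
    pre: frozenset       # facts required to hold
    add: frozenset       # facts made true
    delete: frozenset    # facts made false

def applicable(action, state):
    """The actor is 'able' to act iff every precondition holds."""
    return action.pre <= state

def apply_action(action, state):
    if not applicable(action, state):
        raise ValueError(f"{action.name}: precondition failed")
    return (state - action.delete) | action.add

unlock = Action("unlock",
                frozenset({"has_key", "door_locked"}),
                frozenset({"door_unlocked"}),
                frozenset({"door_locked"}))
state = frozenset({"has_key", "door_locked"})
print(applicable(unlock, state))           # True
print(sorted(apply_action(unlock, state))) # door now unlocked
```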
The preceding chapter presented the basic difficulties associated with producing semantic representations of sentences in context. This chapter surveys several well-known natural language processors, concentrating on their efforts at overcoming these particular difficulties. The processors use different styles of semantic representation as well as different methods for producing the chosen semantic representation from the syntactic parse. Ideally, clearly defined methods of producing semantic representations should be based on a linguistic theory of semantic analysis: a theory about the relationships between the given syntactic and semantic representations, and not just on the particular style of semantic representation. Computational linguistics has a unique contribution to make to the study of linguistics, in that it offers the opportunity of realizing the processes that must underlie the theories. Unfortunately, it seems that those systems that adhere most closely to a particular linguistic theory have the least clearly defined processing methods, and vice versa.
Another important aspect to examine is whether or not any of the methods make significant use of procedural representations. An important contribution hoped for from computational linguistics is an understanding of procedural semantics as “a paradigm or a framework for developing and expressing theories of meaning” [Woods, 1981, p. 302]. It is argued that adding procedures to a framework should greatly enrich its expressive power [Wilks, 1982]. In spite of the intuitive appeal of this argument, much work remains to be done before the benefits can be convincingly demonstrated.
A primary problem in the area of natural language processing is the problem of semantic analysis. This involves both formalizing the general and domain-dependent semantic information relevant to the task involved, and developing a uniform method for access to that information. Natural language interfaces are generally also required to have access to the syntactic analysis of a sentence as well as knowledge of the prior discourse to produce a detailed semantic representation adequate for the task.
Previous approaches to semantic analysis, specifically those which can be described as using templates, use several levels of representation to go from the syntactic parse level to the desired semantic representation. The different levels are largely motivated by the need to preserve context-sensitive constraints on the mappings of syntactic constituents to verb arguments. An alternative to the template approach, inference-driven mapping, is presented here, which goes directly from the syntactic parse to a detailed semantic representation without requiring the same intermediate levels of representation. This is accomplished by defining a grammar for the set of mappings represented by the templates. The grammar rules can be applied to generate, for a given syntactic parse, just that set of mappings that corresponds to the template for the parse. This avoids having to represent all possible templates explicitly. The context-sensitive constraints on mappings to verb arguments that templates preserved are now preserved by filters on the application of the grammar rules.
This chapter presents the semantic processor that performs the semantic role assignments at the same time as it is decomposing the verb representation. Chapter 3 has described how semantic roles are defined as arguments to the semantic predicates that appear in the lexical entries. These arguments are instantiated as the lexical entries are interpreted. A possible instantiation of a predicate-argument is the referent of a syntactic constituent of the appropriate syntactic and semantic type. The syntactic constituent instantiations correspond to the desired mappings of syntactic constituents onto semantic roles. Other instantiations can be made using pragmatic information to deduce appropriate fillers from previous knowledge about other syntactic constituents or from general world knowledge.
These tasks are performed by interpreting the lexical entries procedurally, in much the same way that Prolog interprets Horn clauses procedurally [Kowalski, 1979]. The lexical entries are in fact Horn clauses, and the predicate-arguments that correspond to the semantic roles are terms that consist of function symbols with one argument. The procedural interpretation drives the application of the lexical entries, and allows the function symbols to be “evaluated” as a means of instantiating the arguments. The predicate environments associated with the mapping constraints correspond to states that may or may not occur during the procedural interpretation of the entries. Thus the same argument can be constrained differently depending on the state the verb interpretation is in. The state can vary according to instantiations of arguments or by the predicates included in the predicate decomposition.
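The procedural reading of Horn clauses can be sketched with a tiny propositional backward chainer (the clause names are hypothetical; full Prolog additionally performs unification over terms with variables, which this sketch omits):

```python
def prove(goal, clauses, facts):
    """Backward-chaining procedural interpretation of propositional Horn
    clauses: a goal succeeds if it is a known fact, or if some clause
    (body, head) with head == goal has an entirely provable body."""
    if goal in facts:
        return True
    return any(head == goal and all(prove(b, clauses, facts) for b in body)
               for body, head in clauses)

# Hypothetical 'lexical entries' rendered as propositional Horn clauses
clauses = [(["contact", "above"], "support"),
           (["support"], "rest_on")]
facts = {"contact", "above"}
print(prove("rest_on", clauses, facts))   # True
```

Proving a goal "calls" the clause whose head matches it, just as interpreting a lexical entry drives the instantiation of its predicate-arguments.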
1. Two pulleys of weights 12 lb and 8 lb are connected by a fine string hanging over a smooth fixed pulley. Over the former is hung a fine string with weights 3 lb and 6 lb at its ends, and over the latter a fine string with weights 4 lb and x lb. Find x so that the string over the fixed pulley remains stationary, and find the tension in it.
2. (Part of Humphrey, p. 75, no. 566)
A mass of 9 lb resting on a smooth horizontal table is connected by a light string, passing over a smooth pulley at the edge of the table to a mass of 7 lb hanging freely. Find the common acceleration, the tension in the string and the pressure on the pulley.
3. Two particles of masses B and C are connected by a light string passing over a smooth pulley. Find their common acceleration.
4. Particles of mass 3 and 6 lb are connected by a light string passing over a smooth weightless pulley; this pulley is suspended from a smooth weightless pulley and counterbalanced by a particle of mass 8 lb. Find the acceleration of each particle.
5. A man of 12 stone and a weight of 10 stone are connected by a light rope passing over a pulley. Find the acceleration of the man. If the man pulls himself up the rope so that his acceleration is one half its former value, what is the upward acceleration of the weight?
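As a worked check on exercise 2, with $m_1 = 9$ lb on the table, $m_2 = 7$ lb hanging, and $g$ the gravitational acceleration, Newton's second law for each mass gives

```latex
\begin{aligned}
m_2 g - T &= m_2 a, \qquad T = m_1 a \\
\Longrightarrow\quad
a &= \frac{m_2}{m_1 + m_2}\,g = \frac{7}{16}\,g,
\qquad
T = \frac{m_1 m_2}{m_1 + m_2}\,g = \frac{63}{16}\ \text{lb wt} \approx 3.9\ \text{lb wt}.
\end{aligned}
```

Since the string turns through a right angle at the pulley, the pressure on the pulley is the resultant of two perpendicular tensions, $T\sqrt{2} = \tfrac{63\sqrt{2}}{16} \approx 5.6$ lb wt.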
This chapter presents the formalization of the pulley domain. In this domain, the entities involved tend to be simple solid entities like particles and strings, while the relationships between them include notions of support, contact, or motion of some form. Section 3.2 describes the formalization of the pulley world in terms of the types of entities and their properties. The relationships are used for the decompositions of the verbs which are described in Section 3.3 where the lexical entries of the verbs are listed. Each verb is subcategorized in terms of the primary relationship involved in the decomposition. The semantic roles are arguments of these relationships. The lexical entries include the decompositions of these primary relationships. Section 3.5 introduces the mapping constraints for assigning syntactic constituents to semantic roles. Examples demonstrate how the syntactic cues can be used with predicate environments to preserve the same semantic role interdependencies that are preserved by templates. The last section describes the semantic constraints used in conjunction with the mapping constraints to test that the referent of a syntactic constituent is of the correct semantic type. The last category of constraints described, the pragmatic constraints, is used by inference-driven mapping to fill semantic roles that do not have mappings to syntactic constituents. Chapter 4 describes how inference-driven mapping interprets the lexical entries procedurally to drive the semantic analysis of paragraphs of text.