The equations for mix assume that it is operating on a two-argument function whose first argument is static and whose second is dynamic. This is the canonical case. In practice we cannot hope that all functions will turn out this way. For example, a function may have many arguments, the first and third being static, say. Alternatively, a single argument may have both static and dynamic parts. We need a framework for reducing the general case to the canonical case.
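To make the canonical case concrete, here is a minimal sketch using the standard power example (an illustration, not an example drawn from this chapter): a two-argument function whose first argument is static and whose second is dynamic, together with a hand-written residual function of the kind a partial evaluator could produce when the exponent is fixed at the static value 3.

power :: Int -> Int -> Int       -- first argument static, second dynamic
power 0 _ = 1
power n x = x * power (n - 1) x

-- A hand-written residual function: the result of specialising power
-- to the static value 3 and unfolding the recursion.
power3 :: Int -> Int
power3 x = x * (x * (x * 1))

main :: IO ()
main = print (power 3 5, power3 5)   -- (125,125)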
We can simplify the general case by requiring that all functions have exactly one argument. In first-order languages this is no real restriction. Functions must always be applied to all their arguments, so we can just express them as a single tuple. The next stage is to factorise this single (composite) argument into two parts, the static and the dynamic. We use the results of binding-time analysis to control the factorisation.
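For concreteness, the following sketch (with an illustrative function and binding times, not ones taken from the text) shows the two stages of the reduction: a three-argument function whose first and third arguments are static is first rewritten to take a single tuple, and that composite argument is then factorised into its static and dynamic parts.

f :: Int -> Int -> Int -> Int            -- x and z static, y dynamic
f x y z = x * y + z

-- Stage 1: exactly one (composite) argument.
fTupled :: (Int, Int, Int) -> Int
fTupled (x, y, z) = x * y + z

-- Stage 2: factorise the composite argument into its static part (x, z)
-- and its dynamic part y, as dictated by binding-time analysis.
fFactored :: (Int, Int) -> Int -> Int
fFactored (x, z) y = x * y + z

main :: IO ()
main = print (f 2 10 3, fTupled (2, 10, 3), fFactored (2, 3) 10)   -- all 23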
Note that, even though functions will only have one argument, we will still loosely describe them as having many. For example, we will talk of a function f (x, y) = … as having two arguments when this is appropriate.
Motivation
For the present we will focus our attention on the static part of the argument. To select the static part, we use a function from the argument domain to some domain of static values. If we make the static domain a sub-domain of the original, we can simply “blank out” the dynamic part of the argument and leave the static part unchanged.
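The following is a minimal sketch of this idea, assuming the composite argument is a pair whose first component is static and whose second is dynamic; the domain-theoretic blanking out (sending the dynamic part to the bottom element) is modelled by an explicit constructor so that the result can be displayed.

-- The dynamic component lives in a lifted type so that "blanked out"
-- can be represented explicitly rather than by an undefined value.
data Blanked a = Known a | Blanked
  deriving Show

-- Keep the static first component unchanged and blank out the dynamic
-- second component; the results form a sub-domain of the argument domain.
staticPart :: (s, Blanked d) -> (s, Blanked d)
staticPart (s, _) = (s, Blanked)

main :: IO ()
main = print (staticPart ("static part", Known (42 :: Int)))
-- prints ("static part",Blanked)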
This thesis is submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy at Glasgow University. It comprises a study of partial evaluation, with the thesis that domain projections provide an important theoretical and practical tool for its development.
Our aim, therefore, is not so much to describe a stronger or more robust partial evaluator than has been achieved hitherto, but to improve our understanding of the partial evaluation process. Because of this, much of the thesis is theoretical. However, to demonstrate that the ideas are also practical, they have been implemented. As a result, the chapters tend to alternate between theory and practice.
In Chapter 1 we explore the principles of partial evaluation and in Chapter 2 we study the algorithms and techniques used. In Chapters 3 and 4 we address the issue of binding-time analysis: Chapter 3 contains the theory, including the relationship between congruence in binding-time analysis and safety in strictness analysis, and Chapter 4 the practice, that is, the equations used in an implementation and a proof of their correctness. In Chapter 5 we discuss the nature of residual functions and their run-time arguments, and develop a theoretical framework based on dependent sums of domains. The practical implications of this are seen in Chapter 6, where we bring the material from the previous chapters together in a working projection-based partial evaluator. In Chapter 7 we turn our attention to polymorphism to address some of the issues it raises, and Chapter 8 concludes the thesis. The appendices which follow contain annotated listings of the programs used to construct the final polymorphic partial evaluator.
The preceding chapter presented the basic difficulties associated with producing semantic representations of sentences in context. This chapter surveys several well-known natural language processors, concentrating on their efforts at overcoming these particular difficulties. The processors use different styles of semantic representation as well as different methods for producing the chosen semantic representation from the syntactic parse. Ideally, clearly defined methods of producing semantic representations should be based on a linguistic theory of semantic analysis, that is, a theory about the relationships between the given syntactic and semantic representations, and not just on the particular style of semantic representation. Computational linguistics has a unique contribution to make to the study of linguistics, in that it offers the opportunity of realizing the processes that must underlie the theories. Unfortunately, those systems that adhere most closely to a particular linguistic theory seem to have the least clearly defined processing methods, and vice versa.
Another important aspect to examine is whether any of the methods makes significant use of procedural representations. One hoped-for contribution of computational linguistics is an understanding of procedural semantics as “a paradigm or a framework for developing and expressing theories of meaning” [Woods, 1981, p. 302]. It is argued that adding procedures to a framework should greatly enrich its expressive power [Wilks, 1982]. In spite of the intuitive appeal of this argument, much work remains to be done before the benefits can be convincingly demonstrated.
A primary problem in the area of natural language processing is the problem of semantic analysis. This involves both formalizing the general and domain-dependent semantic information relevant to the task at hand, and developing a uniform method of access to that information. Natural language interfaces generally also require access to the syntactic analysis of a sentence, as well as knowledge of the prior discourse, to produce a detailed semantic representation adequate for the task.
Previous approaches to semantic analysis, specifically those which can be described as using templates, use several levels of representation to go from the syntactic parse level to the desired semantic representation. The different levels are largely motivated by the need to preserve context-sensitive constraints on the mappings of syntactic constituents to verb arguments. An alternative to the template approach, inference-driven mapping, is presented here; it goes directly from the syntactic parse to a detailed semantic representation without requiring the same intermediate levels of representation. This is accomplished by defining a grammar for the set of mappings represented by the templates. The grammar rules can be applied to generate, for a given syntactic parse, just that set of mappings that corresponds to the template for the parse. This avoids having to represent all possible templates explicitly. The context-sensitive constraints on mappings to verb arguments that templates preserved are now preserved by filters on the application of the grammar rules.
This chapter presents the semantic processor that performs the semantic role assignments at the same time as it decomposes the verb representation. Chapter 3 has described how semantic roles are defined as arguments to the semantic predicates that appear in the lexical entries. These arguments are instantiated as the lexical entries are interpreted. A possible instantiation of a predicate-argument is the referent of a syntactic constituent of the appropriate syntactic and semantic type. The syntactic constituent instantiations correspond to the desired mappings of syntactic constituents onto semantic roles. Other instantiations can be made using pragmatic information to deduce appropriate fillers from previous knowledge about other syntactic constituents or from general world knowledge.
These tasks are performed by interpreting the lexical entries procedurally, in much the same way that Prolog interprets Horn clauses procedurally [Kowalski, 1979]. The lexical entries are in fact Horn clauses, and the predicate-arguments that correspond to the semantic roles are terms consisting of function symbols with one argument. The procedural interpretation drives the application of the lexical entries, and allows the function symbols to be “evaluated” as a means of instantiating the arguments. The predicate environments associated with the mapping constraints correspond to states that may or may not occur during the procedural interpretation of the entries. Thus the same argument can be constrained differently depending on the state the verb interpretation is in. The state can vary according to the instantiations of arguments or the predicates included in the predicate decomposition.
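To convey the flavour of this mechanism, here is a minimal sketch written in Haskell rather than in the Prolog-style Horn clauses the processor actually uses; the types, state names and semantic types are illustrative assumptions, not taken from the lexical entries. It shows a mapping constraint whose effect depends on the current state of the interpretation: the same role is filled or left uninstantiated according to the state and the semantic type of the candidate constituent.

-- A syntactic constituent together with the semantic type of its referent.
data Constituent = Constituent { phrase :: String, semType :: String }
  deriving Show

-- A semantic role (a predicate-argument) is either still uninstantiated
-- or bound to the referent of a syntactic constituent.
data Role = Unbound | Bound Constituent
  deriving Show

-- A mapping constraint: the role may be filled by a constituent only if
-- the constituent has the required semantic type and the interpretation
-- is currently in the required state (the predicate environment).
fillRole :: String -> String -> String -> Constituent -> Role
fillRole wantedType wantedState currentState c
  | currentState == wantedState && semType c == wantedType = Bound c
  | otherwise                                              = Unbound

main :: IO ()
main = do
  let subj = Constituent "the string" "flexible-object"
  print (fillRole "flexible-object" "supporting" "supporting" subj)  -- Bound ...
  print (fillRole "particle"        "supporting" "supporting" subj)  -- Unbound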
1. Two pulleys of weights 12 lb and 8 lb are connected by a fine string hanging over a smooth fixed pulley. Over the former is hung a fine string with weights 3 lb and 6 lb at its ends, and over the latter a fine string with weights 4 lb and x lb. Find x so that the string over the fixed pulley remains stationary, and find the tension in it.
2. (Part of Humphrey, p. 75, no. 566)
A mass of 9 lb resting on a smooth horizontal table is connected by a light string, passing over a smooth pulley at the edge of the table to a mass of 7 lb hanging freely. Find the common acceleration, the tension in the string and the pressure on the pulley.
3. Two particles of masses B and C are connected by a light string passing over a smooth pulley. Find their common acceleration. (A worked outline of this exercise follows the list.)
4. Particles of masses 3 lb and 6 lb are connected by a light string passing over a smooth weightless pulley; this pulley is suspended from a smooth weightless pulley and offset by a particle of mass 8 lb. Find the acceleration of each particle.
5. A man of 12 stone and a weight of 10 stone are connected by a light rope passing over a pulley. Find the acceleration of the man. If the man pulls himself up the rope so that his acceleration is one half its former value, what is the upward acceleration of the weight?
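As a worked outline of exercise 3 (a sketch assuming B > C, with the string light and inextensible and the pulley smooth), Newton's second law for each particle gives

\begin{align*}
  Bg - T &= Ba \qquad \text{(descending particle of mass } B\text{)},\\
  T - Cg &= Ca \qquad \text{(ascending particle of mass } C\text{)},
\end{align*}

and adding and solving these equations yields

\[
  a = \frac{(B - C)\,g}{B + C}, \qquad T = \frac{2BC\,g}{B + C}.
\]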
This chapter presents the formalization of the pulley domain. In this domain, the entities involved tend to be simple solid entities like particles and strings, while the relationships between them include notions of support, contact, or motion of some form. Section 3.2 describes the formalization of the pulley world in terms of the types of entities and their properties. The relationships are used for the decompositions of the verbs, which are described in section 3.3, where the lexical entries of the verbs are listed. Each verb is subcategorized in terms of the primary relationship involved in the decomposition. The semantic roles are arguments of these relationships, and the lexical entries include the decompositions of these primary relationships. Section 3.5 introduces the mapping constraints for assigning syntactic constituents to semantic roles. Examples demonstrate how the syntactic cues can be used with predicate environments to preserve the same semantic role interdependences that are preserved by templates. The last section describes the semantic constraints used in conjunction with the mapping constraints to test that the referent of a syntactic constituent is of the correct semantic type. The final category of constraints described, the pragmatic constraints, is used by inference-driven mapping to fill semantic roles that do not have mappings to syntactic constituents. Chapter 4 describes how inference-driven mapping interprets the lexical entries procedurally to drive the semantic analysis of paragraphs of text.
This chapter summarizes the results that have been presented in the preceding chapters, in particular the process by which inference-driven mapping goes directly from the syntactic parse of a sentence to a “deep” semantic representation that corresponds to a traditional linguistic decomposition. The summary illustrates two of the most important benefits offered by inference-driven mapping over the template approach, namely, (1) the clear distinction between the verb definition and the final semantic representation achieved, and (2) an integrated approach to semantic analysis. The first benefit is of special relevance to linguistic theories about semantic representations, in that it provides a testing ground for such theories. The second benefit is of more relevance to computational models of natural language processors, in terms of interfacing semantic processing with syntactic parsing. The last section suggests directions for future research in pursuit of these objectives.
Integrated semantic analysis
As discussed in chapter 2, traditional approaches to semantic processing need several levels of description to produce a “deep” semantic representation from a syntactic parse. The most popular of these approaches, termed the template approach, can be seen as using at least two intermediate levels of description: (1) the template level, which is used for assigning mappings from syntactic constituents to semantic roles, and (2) the canonical level, where the semantic roles are grouped together to simplify derivation of a “deep” semantic representation. These separate levels of description impose several stages of processing on the implementations, since only certain pieces of information are available at any one stage.