The following is a very provisional report on work in progress. The author is fully aware that there are many uncertainties, unclarities, and errors in the text as presented here. Yet the observations made seem to be of a certain interest, and the ideas put forward deserve some consideration in so far as they indicate a general direction in which an explanatory solution to the problem under consideration might be found.
It is a very widespread phenomenon, in many if not all languages, that verbs are semantically decomposable into a fairly general transitive verb and a generic object. An example from English is brew, which is made up of make and beer, or advise, which is ‘give advice’. Assuming this observation to be correct, we must posit a rule in the grammatical semantics of English, which incorporates the object into a lexical verb. Let us speak of Object-Incorporation. Sometimes, as in the case of brew, the incorporation is optional, since we also have brew beer. Further details could be worked out, such as the fact that an adjective qualifying the object ends up as an adverb: advise someone well, but these are not our concern here.
What does concern us here is the fact that the object which is the object of Object-Incorporation is always a generic NP (or a sortal NP, if you like), but never a referring expression.
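The decomposition can be sketched in a simple lambda notation (my own illustration, not the author's formalism; BEER and ADVICE stand for the generic, sortal objects, not referring terms):

```latex
% Hypothetical decompositions for 'brew' and 'advise':
\mathrm{brew} \;=\; \lambda x.\,\mathrm{make}(x,\,\mathrm{BEER})
\qquad
\mathrm{advise} \;=\; \lambda y\,\lambda x.\,\mathrm{give}(x,\,\mathrm{ADVICE},\,y)
```

On this sketch, Object-Incorporation replaces the general verb plus its generic object by the single lexical verb; the constraint just noted is that the incorporated argument can only be such a sortal, never a referring expression.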
My goal in this paper is a very modest one. I cannot even begin to discuss the numerous empirical aspects of the phenomena listed in the title, nor can I give a survey of the literature on these subjects in a short paper of this sort. All I can reasonably attempt here is to suggest an explication of the three concepts topic, sentence accentuation, and ellipsis, which may at best be the concepts associated with these expressions by several linguists, and at worst, are only my own. I will do this by making a proposal for the representation of the semantics of topicality, of sentence accentuation, and of ellipsis – as I understand these expressions – in natural generative (NG) grammar. This theory of grammar is characterized in Vennemann (1971 and 1973a) and in Bartsch and Vennemann (1972). My remarks here are an extension of ideas and notation in Bartsch (1972, especially chapter 4) and are drawn to a large extent from the third section of Vennemann (1973b).
In formulating my remarks, I shall try to abide by a methodological principle which was used as a procedural guideline in Bartsch and Vennemann (1972), e.g. chapter 6.1, where we give different semantic representations for constructions with negative adverbs than we do for sentences with negation verbs, and which I propose explicitly in Vennemann (1973b). This principle says that any two discourses that have different surface syntactic representations must have different semantic representations (where I use ‘semantic’ in a broad sense, where some perhaps would use another expression, e.g. ‘pragmatic’ or ‘pragmato-semantic’).
This paper falls into two parts, the first by Altham, the second by Tennant. The second part contains the formal work; it sets forth a syntactic system for representing sortally quantified sentences of English, and provides semantics for the sentences so represented. Illustrations of the usefulness of the ideas presented are drawn from among the trickier kinds of sentences that have appeared in the literature. The first part gives some indication of the background to some of the ideas in the second part. It also shows the range of expressions that have some claim to treatment by the methods that follow.
Part I
The standard logical quantifiers of classical predicate logic, (∃x), and (∀x) – the existential and universal quantifiers respectively – are familiar, as is also the idea that their workings in the system which is their home may serve as a source of syntactical and semantic insights into a class of English sentences containing such words as some, none, every, and any. The best known example of the use of this idea is perhaps the use of the logician's notion of the scope of a quantifier to explain why it is that in some contexts any may be replaced by every without change of truth value, whereas in other contexts the substitution must be made with some.
In recent years, the ‘point of reference’ has received considerable attention as a way of representing context in formal treatments of natural language. It encodes such information as who is speaking to whom, the time, what entities exist and what relations hold among them. It is an attempt to mirror the way in which the context forms a background against which a sentence can be true or false.
However, utterances do more than merely display themselves before a context and then vanish. They alter the context and become part of it. For instance, if I say
(1) I am a foreigner
I have not only said something with a truth value depending on the situation in which I speak, I have also created a new situation in which
(2) a. I just told you that I was a foreigner
will be true if I address it to the same audience, and
(2) b. He just told you that he was a foreigner
will be true if used by a third party. The usual sort of analysis in terms of reference points sets out truth conditions for (2a) and (2b) that make them true if appropriate entities stand in a ‘telling’ relation. But there is nothing in the analysis of (1) which expresses its power to put the entities into the relation. What I would like to propose is that we invest (1) with such power.
The topic of this paper is the role of adnominal and adverbial modifiers in sentence semantics. Before going into details I will sketch the framework in which this special problem will be treated. The framework is that of Natural Generative (NG) Grammar (Bartsch and Vennemann (1972)). A natural generative grammar comprises the following rules:
From the point of view of linguistic production
(a) Formation rules of a properly extended predicate logic (PEPL), i.e. a predicate logic extended by predicates over sentence intensions and by several sorts of individual variables and constants (multi-sortal logic).
(b) Formation rules of a categorial grammar of a natural language.
(c) Constituent building rules. They map forms built according to PEPL (a), the semantic representations, onto forms built in accordance with categorial syntax (b). If the formation rules of PEPL are understood as the generative component of the grammar, then the constituent building rules are conversion rules with semantic representations as input and categorial forms as output. The rules of categorial syntax then have to be understood as restrictions on the forms of the output of constituent building rules and thus as restrictions on the constituent building rules themselves.
(d) Lexicalization rules.
(e) Morphological rules.
(f) Serialization rules.
(g) Intonation rules.
The semantic representations as well as the forms of categorial syntax are not linearly ordered but only hierarchically ordered. Serialization rules apply to the forms of categorial syntax after lexicalization rules and morphological rules have applied.
I will argue here that the task of formally defining logical structures (LS) for natural languages (NL) has a linguistic interest beyond the immediate one inherent in representing logical notions like entailment, presupposition, true answer to, etc. The reason is that LS can be used as a basis for describing, and in some cases explaining, certain kinds of syntactic variation across NL. Below we consider three types of comparison between LS and the surface structures (SS) which can be used to express them in different NL.
In the first comparison we will demonstrate that NL differ significantly in their capacity to form restrictive relative clauses. We explain this variation in terms of the Principle of Conservation of Logical Structure (Keenan (1972b)).
In the second comparison we show that NL differ in their capacity to stipulate the co-reference of NP positions, but we provide no explanation for this variation.
And in the third comparison we show that the expression of indirect questions (indQ) varies in restricted ways across NL and propose a Principle of Logical Variants which explains this on the basis of the LS we propose for indQ.
Type 1 comparison
Here we compare LS with the SS which can be used to express them in various NL. Obviously if a LS can be naturally expressed in some NL but not others then the former are logically more expressive in that respect than the latter.
In this paper I shall be concerned with what Quine (1960: 108) describes as the first two phases in the ontogenesis of reference. Like Quine, I shall venture no psychological details as to the actual order in which ‘the child of our culture’ masters the ‘provincial apparatus of articles, copulas, and plurals’ as he ‘scrambles up an intellectual chimney, supporting himself against each side by pressure against the others’ (1960:102, 80, 93). What I have to say about the child's acquisition of the grammar of referring expressions is not incompatible, as far as I am aware, with any of the data that has been collected and discussed in the psycholinguistic literature: but I am not claiming that all children ‘of our culture’, and still less children of all cultures, must go through the same stages in the acquisition of their native language and that these stages must succeed one another in a fixed order. My purpose, rather, is to show how the grammatical structure and interpretation of referring expressions (other than proper names) can be accounted for in principle on the basis of a prior understanding of the deictic function of demonstrative pronouns and adverbs in what might be loosely described as concrete or practical situations. I will argue that the definite article and the personal pronouns, in English and other languages, are (in a sense of ‘weak’ to be explained below) weak demonstratives (see Sommerstein (1972); Thorne (1973)), and that their anaphoric use is derived from deixis.
When the Messenger in Through the Looking Glass told the King that nobody walked faster than he did, Alice was puzzled that the King accused the Messenger of being a liar: on the grounds that if nobody walked faster than he did, then nobody would have got there first. Clearly NPs like nobody (if indeed they are NPs) do not refer in any straightforward way. I shall, to give a label, call such surface structure NPs as nobody, all men, every stupid academic ‘quantified NPs’: they correspond to what Geach (following W. E. Johnson) calls ‘applicatival phrases’ (being phrases formed from a general term plus an applicative). But if quantified NPs do not refer in any straightforward way, do they refer at all? Moreover, in general what requirements does the presence of such NPs impose on the semantics of host sentences?
Clearly anything like an adequate answer to such questions would take us over much ground. I shall here confine myself to examining a small subset of the possible concatenations involving quantified NPs. In so doing I hope to shed some light on the behaviour of definite descriptions in certain roles.
One possibility for concatenation with quantified NPs has been much discussed in the literature of linguistics: that involving the co-habitation within a sentence of two quantified NPs having superficially the same structure, and being composed of the same morphological units.
In this paper I will consider a topic previously treated by T. R. Hofmann (1969). Hofmann proposes a solution in his article to the problem posed by the ambiguity of the English surface verbal form ‘have + past participle’ (i.e. ‘have – V – en’) in those dependent clauses whose verbs cannot exhibit either a present or simple past tense morpheme. Such clauses are the so-called ‘non-finite’ clauses: participles, gerunds, and infinitives of various types. I will examine and expand on Hofmann's solution, try to show its inadequacies, and then propose what I hope is a more promising alternative. In my proposal, the semantic tense values of clauses are assigned according to the distribution of tense markers in trees, subsequent to the application of certain syntactic transformations.
I have been greatly aided by the discussion of tense in chapters XXIII and XXIV of O. Jespersen (1964), to which I will be referring throughout the paper.
At the outset I want to emphasize that in my analysis, as in Hofmann's, syntactic tense markers occur not only in surface structure but also, with different and more regular distribution, in syntactic deep structures. That is, the deep structure existence of these syntactic tense markers is justified by the fact that we can more easily characterize their distribution in deep structure; yet, I will claim that these markers contribute to meaning only by virtue of their position in trees subsequent to the application of many transformational rules.
One of the main objectives of traditional grammarians was to relate form and meaning. This programme ran into many difficulties and was abandoned by structural linguists, who found it much more fruitful to concentrate on the deliberately limited study of the combinatorial properties of words.
Transformational linguists also exclude meaning from the grammar rules they build. However, the definition of a transformational rule (unlike the definition of a distributional rule) explicitly involves meaning, since transformationally related sentences must have identical meanings.
There are important differences in the ways the term ‘meaning’ has just been used. Traditional grammars classify forms into families, and attribute to these families absolute categories of meanings. For example, the notion of phrase is a notion of form, and so is the notion of when-phrase (i.e. an adverbial phrase whose left-most word is when). Often, the semantic notion /time/ is associated with these forms (i.e. adverbs of time).
The modern formalized version of this activity is usually stated in the general framework of formal logic. On the one hand, the syntactic rules of some formal system define a set of well-formed formulae (here sentence forms), on the other hand, a semantic model provides interpretation for each formula. As in mathematical logic, the question of setting up a dividing line between the syntactic theory and its model constantly arises. In both the traditional and the formal approach, absolute notions of meaning are needed to interpret the sentences.
The adverbs I wish to consider fall into six groups of near-synonyms, as follows.
Always, invariably, universally, without exception
Sometimes, occasionally, [once]
Never
Usually, mostly, generally, almost always, with few exceptions, [ordinarily], [normally]
Often, frequently, commonly
Seldom, infrequently, rarely, almost never
Bracketed items differ semantically from their list-mates in ways I shall not consider here; omit them if you prefer.
First guess: quantifiers over times?
It may seem plausible, especially if we stop with the first word on each list, that these adverbs function as quantifiers over times. That is to say that always, for instance, is a modifier that combines with a sentence Φ to make a sentence Always Φ that is true iff the modified sentence Φ is true at all times. Likewise, we might guess that Sometimes Φ, Never Φ, Usually Φ, Often Φ, and Seldom Φ are true, respectively, iff Φ is true at some times, none, most, many, or few. But it is easy to find various reasons why this first guess is too simple.
First, we may note that the times quantified over need not be moments of time. They can be suitable stretches of time instead. For instance,
(7) The fog usually lifts before noon here
means that the sentence modified by usually is true on most days, not at most moments. Indeed, what is it for that sentence to be true at a moment?
My aim in this paper is to describe the ways in which model theoretic semantics is used in the study of formal languages, and thence to evaluate it as a tool for the study of natural language. I shall argue that model theoretic semantics is primarily valuable as a theory of consequence; and that, whilst as a tool for the study of natural language it enjoys certain philosophical advantages over other theories of consequence, the fashionable claim that it can provide a theory of truth and meaning for natural language is misguided.
From its conception in the 1930s model theory has led a double life. It has been at once an uncontroversial tool of formal logic and a controversial tool of philosophical and linguistic analysis (recently it has acquired a third and perhaps less transient life as an object with certain algebraic and topological properties of interest to pure mathematicians, see, e.g., Rasiowa and Sikorski (1963)).
To prepare the ground we categorize roughly the uses of model theory:
In the study of formal languages
A theory of truth value assignment and consequence;
A construction for proving metalogical theorems about independently defined formal systems;
A heuristic for the development and teaching of formal systems.
In the study of natural language
A theory of consequence;
An explication of certain connections between extension, truth value assignment and consequence;
A framework for the discussion of questions of ontology;
The papers in this volume are those given at the Cambridge Colloquium on Formal Semantics of Natural Language, April 1973. The purpose of that colloquium was twofold: to stimulate work in natural language semantics and to bring together linguists, philosophers, and logicians working in different countries and, often, from different points of view. Both purposes were, it seems to us, achieved, though of course it was not feasible to represent all countries and all points of view at a single conference.
The questions treated in the colloquium papers represent the following current areas of interest: problems of quantification and reference in natural language, the application of formal logic to natural language semantics, the formal semantics of non-declarative sentences, the relation between natural language semantics and that of programming languages, formal pragmatics and the relation between sentences and their contexts of use, discourse meaning, and the relation between surface syntax and logical meaning.
The papers have been loosely grouped under the six rubrics given in the table of contents. These rubrics were not given to the authors in advance and are intended only as a rough guide to the reader.