The purpose of this chapter is twofold. On the one hand, it introduces basic notions from universal algebra (such as terms, substitutions, and identities) on a syntactic level that does not require (or give) much mathematical background. On the other hand, it presents the semantic counterparts of these syntactic notions (such as algebras, homomorphisms, and equational classes), and proves some elementary results on their connections. Most of the definitions and results presented in subsequent chapters can be understood knowing only the syntactic level introduced in Section 3.1. However, to obtain a deeper understanding of the meaning of these results, and of the context in which they are of interest, a study of the other sections of this chapter is recommended. For more information on universal algebra see, for example, [100, 55, 173].
Terms, substitutions, and identities
Terms will be built from function symbols and variables in the usual way. For example, if f is a binary function symbol, and x, y are variables, then f(x,y) is a term. To make clear which function symbols are available in a certain context, and which arity they have, one introduces signatures.
Definition 3.1.1 A signature Σ is a set of function symbols, where each f ∈ Σ is associated with a non-negative integer n, the arity of f. For n ≥ 0, we denote the set of all n-ary elements of Σ by Σ(n). The elements of Σ(0) are also called constant symbols.
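As an illustration (ours, not from the text), a term over a signature is either a variable or a function symbol applied to exactly as many subterms as its arity requires. The following Python sketch checks well-formedness against a hypothetical signature with a binary symbol f and a constant a:

```python
# Hypothetical signature: the binary function symbol f and the constant a.
SIGNATURE = {"f": 2, "a": 0}

def is_term(t, signature=SIGNATURE):
    """Check that t is a well-formed term over the signature.
    Variables are plain strings; applications are (symbol, [subterms])
    pairs. Illustrative sketch only."""
    if isinstance(t, str):              # a variable such as "x"
        return True
    symbol, args = t
    return (symbol in signature
            and signature[symbol] == len(args)   # arity must match
            and all(is_term(a, signature) for a in args))

# f(x, y) is a term; f applied to a single argument is not.
print(is_term(("f", ["x", "y"])))   # True
print(is_term(("f", ["x"])))        # False
```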
Given a class of combinatorial structures 𝒞, we consider the quantity N(n, m), the number of multiset constructions 𝒫 (of 𝒞) of size n having exactly m 𝒞-components. Under general analytic conditions on the generating function of 𝒞, we derive precise asymptotic estimates for N(n, m), as n → ∞ and m varies through all possible values (in general 1 ≤ m ≤ n). In particular, we show that the number of 𝒞-components in a random (assuming a uniform probability measure) 𝒫-structure of size n obeys asymptotically a convolution law of the Poisson and the geometric distributions. Applications of the results include random mapping patterns, polynomials in finite fields, parameters in additive arithmetical semigroups, etc. This work develops the ‘additive’ counterpart of our previous work on the distribution of the number of prime factors of an integer [20].
In this paper, we present the results of a project that investigated the application of lexicon-based text retrieval techniques to Alternative and Augmentative Communication (AAC). As a practical outcome of this research, a communication aid based on message retrieval by keywords was designed, implemented and evaluated. The message retrieval module in the system uses a large semantic lexicon, derived from the WordNet database, for query expansion. Trials have been carried out with the device to evaluate whether the approach is suitable for AAC, and to determine the semantic relations that lead to efficient message retrieval. The first part of this paper describes the background of the project and highlights the retrieval requirements for a communication aid, which differ considerably from the requirements in standard text retrieval. We then present the overall design of the WordKeys communication aid and describe the tasks of its sub-modules. We summarise trials that have been carried out to determine the effect of semantic query expansion on the success of message retrieval. Evaluation results show that information about word frequency can solve problems that occurred in the semantic query expansion because of taxonomies that have too many intermediate steps between closely related words. Finally, a user evaluation with the improved system showed that full text retrieval is an effective approach to message access in a communication aid.
We apply an idea of Székely to prove a general upper bound on the number of incidences between a set of m points and a set of n ‘well-behaved’ curves in the plane.
We define a weak λ-calculus, λσw, as a subsystem of the full λ-calculus with explicit substitutions, λσ⇑. We claim that λσw could be the archetypal output language of functional compilers, just as the λ-calculus is their universal input language. Furthermore, λσ⇑ could be the adequate theory to establish the correctness of functional compilers. Here we illustrate these claims by proving the correctness of four simplified compilers and runtime systems modelled as abstract machines. The four machines we prove correct are the Krivine machine, the SECD, the FAM and the CAM. Thus, we give the first formal proofs of Cardelli's FAM and of its compiler.
Augmentative and Alternative Communication (AAC) is the field of study concerned with providing devices and techniques to augment the communicative ability of a person whose disability makes it difficult to speak or otherwise communicate in an understandable fashion. For several years, we have been applying natural language processing techniques to the field of AAC to develop intelligent communication aids that attempt to provide linguistically correct output while increasing communication rate. Previous effort has resulted in a research prototype called Compansion that expands telegraphic input. In this paper we describe that research prototype and introduce the Intelligent Parser Generator (IPG). IPG is intended to be a practical embodiment of the research prototype aimed at a group of users who have cognitive impairments that affect their linguistic ability. We describe both the theoretical underpinnings of Compansion and the practical considerations in developing a usable system for this population of users.
A collection H of integers is called an affine d-cube if there exist d+1 positive integers x0,x1,…, xd so that
H = { x0 + ∑i∈I xi : I ⊆ {1, 2, …, d} }.
We address both density and Ramsey-type questions for affine d-cubes. Regarding density results, upper bounds are found for the size of the largest subset of {1,2,…,n} not containing an affine d-cube. In 1892 Hilbert published the first Ramsey-type result for affine d-cubes by showing that, for any positive integers r and d, there exists a least number n=h(d,r) so that, for any r-colouring of {1,2,…,n}, there is a monochromatic affine d-cube. Improvements for upper and lower bounds on h(d,r) are given for d>2.
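To make the definition concrete, the following small Python helper (our own illustration, with invented names, not from the paper) enumerates an affine d-cube as the set of all subset sums x0 + ∑i∈I xi:

```python
from itertools import combinations

def affine_cube(x0, *xs):
    """Enumerate { x0 + sum of a subset of xs } for generators
    x0, x1, ..., xd. Illustrative helper for the definition of an
    affine d-cube; not taken from the paper."""
    cube = set()
    for r in range(len(xs) + 1):
        for subset in combinations(xs, r):
            cube.add(x0 + sum(subset))
    return sorted(cube)

# With d = 2 and generators (1, 2, 5), the subsets of {2, 5}
# yield the sums 1, 1+2, 1+5 and 1+2+5.
print(affine_cube(1, 2, 5))   # [1, 3, 6, 8]
```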
Often when analysing randomized algorithms, especially parallel or distributed algorithms, one is called upon to show that some function of many independent choices is tightly concentrated about its expected value. For example, the algorithm might colour the vertices of a given graph with two colours and one would wish to show that, with high probability, very nearly half of all edges are monochromatic.
The classic result of Chernoff [3] gives such a large deviation result when the function is a sum of independent indicator random variables. The results of Hoeffding [5] and Azuma [2] give similar results for functions which can be expressed as martingales with a bounded difference property. Roughly speaking, this means that each individual choice has a bounded effect on the value of the function. McDiarmid [9] nicely summarized these results and gave a host of applications. Expressed a little differently, his main result is as follows.
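For reference, the bounded differences inequality commonly credited to McDiarmid can be stated in its standard textbook form (stated here for context, not quoted from the text): if X1, …, Xn are independent and changing the i-th coordinate changes f by at most c_i, then for every t > 0,

```latex
% McDiarmid's bounded differences inequality (standard form):
% |f(x) - f(x')| <= c_i whenever x and x' differ only in coordinate i.
\Pr\bigl[\,\lvert f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n)\rvert \ge t\,\bigr]
  \;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^{2}}\right).
```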
Consider first-passage percolation on the square lattice. Welsh, who together with Hammersley introduced the subject in 1963, has formulated a problem about mean first-passage times, which, although seemingly simple, has not been proved in any non-trivial case. In this paper we give a general proof of Welsh's problem.
A higher-order function is a function that takes another function as an argument or returns another function as a result. More specifically, a first-order function takes and returns base types, such as integers or lists. A kth-order function takes or returns a function of order k−1. Currying often artificially inflates the order of a function, so we will ignore all inessential currying. (Whether a particular instance of currying is essential or inessential is open to debate, but we expect that our choices will be uncontroversial.) In addition, when calculating the order of a polymorphic function, we instantiate all type variables with base types. Under these assumptions, most common higher-order functions, such as map and foldr, are second-order, so beginning functional programmers often wonder: What good are functions of order three or above? We illustrate functions of up to sixth-order with examples taken from a combinator parsing library.
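The notion of order can be illustrated with a short Python sketch (our own example, with invented names): `inc` is first-order, `twice` is second-order because it takes a first-order function, and `on_inc` is third-order because it takes the second-order `twice` as its argument:

```python
def inc(x):
    """First-order: maps a base type to a base type."""
    return x + 1

def twice(f):
    """Second-order: takes a first-order function, returns one."""
    return lambda x: f(f(x))

def on_inc(g):
    """Third-order: takes a second-order function such as `twice`
    and applies it to the first-order `inc`."""
    return g(inc)

print(on_inc(twice)(3))   # inc(inc(3)) = 5
```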
Combinator parsing is a classic application of functional programming, dating back to at least Burge (1975). Most combinator parsers are based on Wadler's list-of-successes technique (Wadler, 1985). Hutton (1992) popularized the idea in his excellent tutorial Higher-Order Functions for Parsing. In spite of the title, however, he considered only functions of up to order three.
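A minimal sketch of the list-of-successes technique (illustrative Python with invented names, not code from any of the cited works): a parser maps an input string to a list of (result, remaining-input) pairs, and failure is represented by the empty list:

```python
def symbol(c):
    """Parser accepting the single character c."""
    return lambda s: [(c, s[1:])] if s[:1] == c else []

def seq(p, q):
    """Sequence: run p, then q on each remainder, pairing results."""
    return lambda s: [((a, b), rest2)
                      for a, rest1 in p(s)
                      for b, rest2 in q(rest1)]

def alt(p, q):
    """Alternation: concatenate the successes of both branches."""
    return lambda s: p(s) + q(s)

ab = seq(symbol("a"), symbol("b"))
print(ab("abc"))   # [(('a', 'b'), 'c')]
print(ab("xyz"))   # []
```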
Interest in the nature, development and use of ontologies is becoming increasingly widespread. Since the early nineties, numerous workshops have been held. Representatives from historically separate disciplines concerned with philosophical issues, knowledge acquisition and representation, planning, process management, database schema integration, natural language processing and enterprise modelling, came together to identify a common core of issues of interest. There was highly varied and inconsistent usage of a wide variety of terms, most notably, “ontology”, rendering cross-discipline communication difficult. However, progress was made toward understanding the commonality among the disciplines. Subsequent workshops addressed various aspects of the field, including theoretical issues, methodologies for building ontologies, as well as specific applications in government and industry.
It is known that any k-uniform family with covering number t has at most k^t t-covers. In this paper, we deal with intersecting families and give better upper bounds for the number of t-covers. Let p_t(k) be the maximum number of t-covers in any k-uniform intersecting family with covering number t. We prove that, for a fixed t,
formula here
In the cases of t = 4 and 5, we also prove that the coefficient of k^(t−1) in p_t(k) is exactly (t choose 2).
Let T be a semicomplete digraph on n vertices. Let a_k(T) denote the minimum number of arcs whose addition to T results in a k-connected semicomplete digraph and let r_k(T) denote the minimum number of arcs whose reversal in T results in a k-connected semicomplete digraph. We prove that if n ≥ 3k−1, then a_k(T) = r_k(T). We also show that this bound on n is best possible.
We address the problem of highly varied and inconsistent usage of terms by the knowledge technology community in the area of knowledge-level modelling. It is arguably difficult or impossible for any standard set of terms and definitions to be agreed on. However, de facto standard usage is already emerging within and across certain segments of the community. This is very difficult to see, however, especially for newcomers to the field. It is the goal of this paper to identify and reflect the most common usage of terms as currently found in the literature. To this end, we introduce and define the concept of a knowledge level model, comparing how the term is used today with Newell's original usage. We distinguish two major types of knowledge level model: ontologies and problem solving models. We describe what an ontology is, what they may be used for and how they are represented. We distinguish various kinds of ontologies and define a number of additional related concepts. We describe what is meant by a problem solving model, what they are used for, and attempt to clarify some terminological confusion that exists in the literature. We define what is meant by the term ‘problem’, and some common notions used to characterise and represent problems. We introduce and describe the ideas of tasks, problem solving methods and a variety of other important related concepts.
Moggi's computational lambda calculus is a metalanguage for denotational semantics which arose from the observation that many different notions of computation have the categorical structure of a strong monad on a cartesian closed category. In this paper we show that the computational lambda calculus also arises naturally as the term calculus corresponding (by the Curry–Howard correspondence) to a novel intuitionistic modal propositional logic. We give natural deduction, sequent calculus and Hilbert-style presentations of this logic and prove strong normalisation and confluence results.
In this article we explain two different operational interpretations of functional programs by two different logics. The programs are simply typed λ-terms with pairs, projections, if-then-else and least fixed point recursion. A logic for call-by-value evaluation and a logic for call-by-name evaluation are obtained as extensions of a system which we call the basic logic of partial terms (BPT). This logic is suitable for proving properties of programs that are valid under both strict and non-strict evaluation. We use methods from denotational semantics to show that the two extensions of BPT are adequate for call-by-value and call-by-name evaluation. Neither the programs nor the logics contain the constant ‘undefined’.
This article focuses on the need for technological aids for agrammatics, and presents a system designed to meet this need. The field of Augmentative and Alternative Communication (AAC) explores ways to allow people with speech or language disabilities to communicate. The use of computers and natural language processing techniques offers a range of new possibilities in this direction. Yet AAC mainly addresses speech deficits, not linguistic disabilities. A model of aided AAC interfaces with a place for natural language processing is presented. The PVI system, described in this contribution, makes use of such advanced techniques. It has been developed at Thomson-CSF for the use of children with cerebral palsy. It presents a customizable interface that helps the user compose sequences of icons displayed on a computer screen. A semantic parser, using lexical semantics information, determines the best case assignments for predicative icons in the sequence by maximizing a global value, the ‘semantic harmony’ of the sequence. The resulting conceptual graph is fed to a natural language generation module which uses Tree Adjoining Grammars (TAG) to generate French sentences. Evaluation by users demonstrates the system's strengths and limitations, and points to directions for future development.
We study the number of comparisons in Hoare's Find algorithm. Using trivariate generating functions, we get an explicit expression for the variance of the number of comparisons, if we search for the jth element in a random permutation of n elements. The variance is also asymptotically evaluated under the assumption that j is proportional to n. Similar results for the number of passes (recursive calls) are given, too.
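Hoare's Find (quickselect) can be sketched in Python as follows. This is an illustrative instrumentation that counts one comparison per element inspected against a pivot; it is not the paper's exact cost model, and the names are ours:

```python
import random

def find(a, j):
    """Return the j-th smallest element of a (1-indexed), together with
    the number of comparisons counted (one per element inspected
    against a pivot). Illustrative sketch of Hoare's Find."""
    a = list(a)
    comparisons = 0
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        less, equal, more = [], [], []
        for x in a[lo:hi + 1]:
            comparisons += 1                   # one comparison per element
            if x < pivot:
                less.append(x)
            elif x > pivot:
                more.append(x)
            else:
                equal.append(x)
        a[lo:hi + 1] = less + equal + more
        if j <= lo + len(less):
            hi = lo + len(less) - 1            # rank j lies among `less`
        elif j <= lo + len(less) + len(equal):
            return pivot, comparisons          # the pivot has rank j
        else:
            lo = lo + len(less) + len(equal)   # recurse into `more`
    return a[lo], comparisons

value, cost = find([3, 1, 4, 1, 5, 9, 2, 6], 4)
print(value)   # 3 (the 4th smallest of the sorted list 1,1,2,3,4,5,6,9)
```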
Alternative and Augmentative Communication (AAC) for people with speech and language disorders is an interesting and challenging application field for research in Natural Language Processing. Further advances in the development of AAC systems require robust language processing techniques and versatile linguistic knowledge bases. NLP research can also benefit from studying the techniques used in this field and from the user-centred methodologies used to develop and evaluate AAC systems. Until recently, however, apart from some exceptions, there was little scientific exchange between the two research areas. This paper aims to contribute to closing this gap. We will argue that the current interest in language use, evident in the large amount of research on comprehensive dictionaries and on corpus processing, makes the results of NLP research more relevant to AAC. We will also show that the increasing interest of AAC researchers in NLP is having positive results. To situate research on communication aids, the first half of this paper gives an overview of the AAC research field. The second half is dedicated to an overview of research prototype systems and commercially available communication aids that specifically involve more advanced language processing techniques.
Non-speaking people often rely on AAC (Augmentative and Alternative Communication) devices to assist them in communicating. These devices are slow to operate, however, and as a result conversations can be very difficult and frequently break down. This is especially the case when the conversation partner is unfamiliar with this method of communication, and it is a major obstacle for many people wishing to conduct simple everyday transactions. A way of improving the performance of AAC devices by using scripts is discussed. A prototype system to test this idea was constructed, and a preliminary experiment was performed with promising results. A practical AAC device incorporating scripts was then developed, and is described.