A method is proposed to search for an identifier in a functional program library by using its Hindley–Milner type as a key. This can be seen as an approximation of using the specification as a key.
Functions that only differ in their argument order or currying are essentially the same, which is expressed by a congruence relation on types. During a library search, congruent types are identified. If a programmer is not satisfied with the type of a found value, he can use a conversion function (like curry), which must exist between congruent types, to convert the value into the type of his choice.
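For instance, conversion functions witnessing several of the congruences are ordinary combinators. A hedged Haskell sketch (with (,) for cartesian product, () for the unit type, and names of our choosing, not the paper's):

```haskell
-- Witnesses for some congruence axioms; each has an inverse, so a found
-- value can be converted into the type of the programmer's choice.
swapIso :: (a, b) -> (b, a)                   -- A x B  ~  B x A
swapIso (x, y) = (y, x)

curryIso :: ((a, b) -> c) -> (a -> b -> c)    -- (A x B) -> C  ~  A -> (B -> C)
curryIso = curry                              -- inverse: uncurry

pairIso :: (a -> (b, c)) -> (a -> b, a -> c)  -- A -> (B x C)  ~  (A -> B) x (A -> C)
pairIso f = (fst . f, snd . f)

unitIso :: (a, ()) -> a                       -- A x 1  ~  A
unitIso (x, _) = x                            -- inverse: \x -> (x, ())
```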
Types are congruent if they are isomorphic in all cartesian closed categories. To put it more simply, types are congruent if they are equal under an arithmetical interpretation, with cartesian product as multiplication and function space as exponentiation. This congruence relation is characterized by seven equational axioms. A simple term-rewriting algorithm decides congruence; using it, a search system with good performance has been implemented for the functional language Lazy ML.
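One way to decide the congruence, sketched below in Haskell in the spirit of (but not reproducing) the paper's algorithm: orient the axioms as rewrite rules into a normal form, then compare product components as multisets to account for associativity and commutativity of ×.

```haskell
import Data.List (sort)

data Ty = TVar String | Ty :*: Ty | Ty :->: Ty | Unit
  deriving (Eq, Ord, Show)

-- Rewrite with the axioms oriented left to right: remove units,
-- distribute -> over x, and uncurry, until a normal form is reached.
norm :: Ty -> Ty
norm (a :*: b) = case (norm a, norm b) of
  (Unit, b') -> b'                                          -- 1 x B = B
  (a', Unit) -> a'                                          -- A x 1 = A
  (a', b')   -> a' :*: b'
norm (a :->: b) = case (norm a, norm b) of
  (_, Unit)        -> Unit                                  -- A -> 1 = 1
  (Unit, b')       -> b'                                    -- 1 -> A = A
  (a', b1 :*: b2)  -> norm ((a' :->: b1) :*: (a' :->: b2))  -- distribute
  (a', b1 :->: b2) -> norm ((a' :*: b1) :->: b2)            -- uncurry
  (a', b')         -> a' :->: b'
norm t = t

-- Canonicalise modulo associativity/commutativity of x by flattening
-- every product into a sorted list of factors.
acSort :: Ty -> Ty
acSort t@(_ :*: _) = foldr1 (:*:) (sort (map acSort (flatten t)))
  where flatten (x :*: y) = flatten x ++ flatten y
        flatten u         = [u]
acSort (a :->: b) = acSort a :->: acSort b
acSort t = t

congruent :: Ty -> Ty -> Bool
congruent s t = acSort (norm s) == acSort (norm t)

-- ghci> let a = TVar "a"; b = TVar "b"; c = TVar "c"
-- ghci> congruent (a :->: (b :->: c)) ((b :*: a) :->: c)
-- True   -- currying and argument order are identified, as described
```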
The congruence relation can also be used as a basis for other search strategies, e.g. searching for identifiers of a more general type, modulo congruence or allowing free type variables in queries.
In the last two decades, category theory has become one of the main tools for the denotational investigation of programming languages. Taking advantage of the algebraic nature of the categorical semantics, and of the rewriting systems it suggests, it is possible to use these denotational descriptions as a basis for research into more operational aspects of programming languages.
This approach proves to be particularly interesting in the study and the definition of environment machines for functional languages. The reason is that category theory offers a simple and uniform language for handling terms and environments (substitutions), and for studying their interaction (through application).
Several examples of known machines are discussed, among them the Categorical Abstract Machine of Cousineau et al. (1987) and Krivine's machine. Moreover, as an example of the power and fruitfulness of this approach, we define two original categorical machines. The first one is a variant of the CAM implementing a λ-calculus with both call-by-value and call-by-name as parameter-passing modes. The second one is a variant of Krivine's machine performing complete reduction of λ-terms.
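To make the flavour of such machines concrete, here is a minimal Krivine machine for call-by-name weak head reduction, written in Haskell with de Bruijn indices (an illustrative sketch of the standard machine, not the categorical formulation the paper develops):

```haskell
data Term = Var Int | Lam Term | App Term Term deriving Show

data Closure = Closure Term Env deriving Show
type Env   = [Closure]    -- environment: one closure per enclosing binder
type Stack = [Closure]    -- pending argument closures

-- The machine state is (term, environment, stack).
krivine :: Term -> Env -> Stack -> Maybe (Term, Env)
krivine (Var 0) (Closure t e : _) s = krivine t e s          -- look up, enter
krivine (Var n) (_ : env)         s = krivine (Var (n - 1)) env s
krivine (App t u) env             s = krivine t env (Closure u env : s)
krivine (Lam t) env (c : s)         = krivine t (c : env) s  -- bind argument
krivine (Lam t) env []              = Just (Lam t, env)      -- WHNF reached
krivine _ _ _                       = Nothing                -- open/stuck term

-- ghci> krivine (App (Lam (Var 0)) (Lam (Var 0))) [] []
-- Just (Lam (Var 0),[])
```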
We study the operational semantics of an extension of Girard's System Fω with two control operators: an abort operation that abandons the current control context, and a callcc operation that captures the current control context. Two classes of operational semantics are considered, each with a call-by-value and a call-by-name variant, differing in their treatment of polymorphic abstraction and instantiation. Under the standard semantics, polymorphic abstractions are values and polymorphic instantiation is a significant computation step; under the ML-like semantics evaluation proceeds beneath polymorphic abstractions and polymorphic instantiation is computationally insignificant. Compositional, type-preserving continuation-passing style (cps) transformation algorithms are given for the standard semantics, resulting in terms on which all four evaluation strategies coincide. This has as a corollary the soundness and termination of well-typed programs under the standard evaluation strategies. In contrast, such results are obtained for the call-by-value ML-like strategy only for a restricted sub-language in which constructor abstractions are limited to values. The ML-like call-by-name semantics is indistinguishable from the standard call-by-name semantics when attention is limited to complete programs.
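For concreteness, here is a sketch of the standard Plotkin-style call-by-value cps transformation with the usual clauses for callcc and abort. This is an untyped Haskell illustration, not the paper's typed, Fω-level algorithms; the freshness scheme is ours and assumes source variables never look like k0, f1, v2, and so on.

```haskell
data Term = Var String | Lam String Term | App Term Term
          | Callcc Term | Abort Term
  deriving Show

cpsProgram :: Term -> Term
cpsProgram t = Lam "k0" (go 1 t (Var "k0"))

-- go n t k: convert t so that its value is passed to the continuation k.
go :: Int -> Term -> Term -> Term
go _ (Var x)   k = App k (Var x)
go n (Lam x m) k = App k (Lam x (Lam kn (go (n + 1) m (Var kn))))
  where kn = 'k' : show n
go n (App m p) k =
    go (n + 1) m (Lam f (go (n + 2) p
      (Lam v (App (App (Var f) (Var v)) k))))
  where f = 'f' : show n
        v = 'v' : show n
go n (Callcc m) k =
    go (n + 1) m (Lam f (App (App (Var f) reified) k))
  where f  = 'f' : show n
        v  = 'v' : show n
        q  = 'q' : show n
        -- the reified continuation ignores its own continuation q
        reified = Lam v (Lam q (App k (Var v)))
go n (Abort m) _ =
    go (n + 1) m (Lam v (Var v))  -- run m with the empty (identity) context
  where v = 'v' : show n
```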
The MetaLanguage of the Edinburgh LCF theorem proving system has become a programming language in its own right, popular among a reasonably wide segment of the research community. ML has also become a lingua franca among applied type theorists, as they investigate type systems for the 90's as extensions of the remarkably influential Hindley-Milner type system.
Communication lifting is a program transformation that can be applied to a synchronous process network to restructure the network. This restructuring in theory improves sequential and parallel performance. The transformation has been formally specified and proved correct and it has been implemented as an automatic program transformation tool. This tool has been applied to a small set of programs consisting of synchronous process networks. For these networks communication lifting generates parallel programs that do not require locking. Measurements indicate performance gains in practice both with sequential and parallel evaluation. Communication lifting is a worthwhile optimization to be included in a compiler for a lazy functional language.
Traditionally the view has been that direct expression of control and store mechanisms and clear mathematical semantics are incompatible requirements. This paper shows that adding objects with memory to the call-by-value lambda calculus results in a language with a rich equational theory, satisfying many of the usual laws. Combined with other recent work, this provides evidence that expressive, mathematically clean programming languages are indeed possible.
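A small Haskell illustration of the tension involved (ours, not the paper's calculus or laws): once memory is added, unrestricted β-substitution is unsound because it duplicates effects, while substituting values remains safe; laws must therefore be restricted to value forms.

```haskell
import Data.IORef

-- Duplicating an effectful expression changes behaviour; duplicating
-- the value it produced does not.
example :: IO (Int, Int)
example = do
  r <- newIORef 0
  let bump = do modifyIORef r (+ 1); readIORef r
  -- substituting the expression bump for x in (x, x) would run it
  -- twice and yield (1, 2); substituting its value yields (1, 1):
  a <- bump
  return (a, a)
```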
In this paper we show that it is possible to implement a symmetric set of finite-list operations efficiently; the set is symmetric in the sense that lists can be manipulated at either end. We derive the definitions of these operations from their specifications by calculation. The operations have O(1) time complexity, provided that we content ourselves with so-called amortized efficiency instead of worst-case efficiency.
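One well-known way to obtain such a symmetric, amortized O(1) interface is the classic two-list ("batched") deque, sketched below in Haskell under single-threaded use; this illustrates the amortization idea only and does not reproduce the operations the paper derives by calculation.

```haskell
data Deque a = Deque [a] [a]   -- front list, and rear list reversed
  deriving Show

empty :: Deque a
empty = Deque [] []

-- Invariant repair: if one list is empty, split the other in half.
-- The occasional O(n) split is paid for by the preceding cheap operations.
check :: Deque a -> Deque a
check (Deque [] r) = let (keepR, toF) = splitAt (length r `div` 2) r
                     in Deque (reverse toF) keepR
check (Deque f []) = let (keepF, toR) = splitAt (length f `div` 2) f
                     in Deque keepF (reverse toR)
check d = d

pushFront, pushRear :: a -> Deque a -> Deque a
pushFront x (Deque f r) = Deque (x : f) r
pushRear  x (Deque f r) = Deque f (x : r)

popFront :: Deque a -> Maybe (a, Deque a)
popFront (Deque [] [])   = Nothing
popFront (Deque [] [x])  = Just (x, empty)
popFront (Deque (x:f) r) = Just (x, check (Deque f r))
popFront d               = popFront (check d)

popRear :: Deque a -> Maybe (a, Deque a)
popRear (Deque [] [])    = Nothing
popRear (Deque [x] [])   = Just (x, empty)
popRear (Deque f (x:r))  = Just (x, check (Deque f r))
popRear d                = popRear (check d)
```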
The integration of functional and logic programming languages has been a topic of great interest in the last decade. Many proposals have been made, yet none is completely satisfactory, especially in the context of higher-order functions and lazy evaluation. This paper addresses these shortcomings via a new approach: domain theory as a common basis for functional and logic programming. Our integrated language remains essentially within the functional paradigm. The logic programming capability is provided by set abstraction (via Zermelo–Fraenkel set notation), using the Herbrand universe as a set abstraction generator, but for efficiency reasons our proposed evaluation procedure treats this generator's enumeration parameter as a logical variable. The language is defined in terms of (computable) domain-theoretic constructions and primitives, using the lower (or angelic) powerdomain to model the set abstraction facility. The result is a simple, elegant and purely declarative language that successfully combines the most important features of both pure functional programming and pure Horn logic programming. Referential transparency with respect to the underlying mathematical model is maintained throughout. An operational semantics, correct by construction, is obtained by direct execution of the denotational semantic definition, modified suitably to permit logical variables whenever the Herbrand universe is being generated within a set abstraction. Completeness of the operational semantics requires a form of parallel evaluation, rather than the more familiar leftmost rule.
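As a rough illustration of set abstraction over a generated universe, here is a naive Haskell sketch using lazy lists as a crude stand-in for the lower powerdomain. Unlike the paper's procedure, it does not treat the enumeration parameter as a logical variable; it merely enumerates and tests, and it can diverge exactly where the paper requires parallel evaluation.

```haskell
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- A tiny, unary-numeral Herbrand-style universe, enumerated lazily.
universe :: [Nat]
universe = iterate Succ Zero

-- A ZF-style set abstraction { x | x <- universe, p x }:
solutions :: (Nat -> Bool) -> [Nat]
solutions p = [ x | x <- universe, p x ]

plus :: Nat -> Nat -> Nat
plus Zero     n = n
plus (Succ m) n = Succ (plus m n)

-- "the X such that X + 2 = 4", found by enumeration:
answer :: Nat
answer = head (solutions (\x -> plus x two == four))
  where two  = Succ (Succ Zero)
        four = Succ (Succ two)
```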
The Flagship Project was a research collaboration between the University of Manchester, Imperial College London and International Computers Ltd. The project was unusual in that it aimed to produce a complete computing system based on a declarative programming style. Three areas of a declarative system were addressed: (1) programming languages and programming environments; (2) the machine architecture and computational models; and (3) the software environment. This overview paper discusses each of these areas, the intention being to present the project as a coherent whole.
The resource constrained shortest path problem is an NP-hard problem for which many ingenious algorithms have been developed. These algorithms are usually implemented in Fortran or another imperative programming language. We have implemented some of the simpler algorithms in a lazy functional language. Benefits accrue in the software engineering of the implementations. Our implementations have been applied to a standard benchmark of data files, which is available from the Operational Research Library of Imperial College, London. The performance of the lazy functional implementations, even with the comparatively simple algorithms that we have used, is competitive with a reference Fortran implementation.
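For concreteness, here is a small labelling-style sketch of the problem in Haskell (our own illustration, not one of the paper's implementations). It assumes a single resource and non-negative edge costs and resource uses, so some optimal feasible path is simple.

```haskell
import qualified Data.Map.Strict as Map
import Data.List (nub)

type Node  = Int
type Label = (Int, Int)              -- (cost so far, resource used so far)
type Edge  = (Node, Node, Int, Int)  -- (from, to, cost, resource use)

-- Keep only Pareto-optimal labels at a node.
prune :: [Label] -> [Label]
prune ls0 = [ l | l <- ls, not (any (`dominates` l) ls) ]
  where
    ls = nub ls0
    dominates (c1, r1) (c2, r2) =
      c1 <= c2 && r1 <= r2 && (c1, r1) /= (c2, r2)

-- Bellman-Ford-style label correcting: one relaxation round per node,
-- which suffices because optimal feasible paths are simple here.
rcsp :: [Node] -> [Edge] -> Int -> Node -> Node -> Maybe Int
rcsp nodes edges limit src dst =
    case Map.findWithDefault [] dst final of
      [] -> Nothing
      ls -> Just (minimum (map fst ls))
  where
    start = Map.singleton src [(0, 0)]
    final = iterate step start !! length nodes
    step labels = Map.map prune (foldr extend labels edges)
      where
        extend (u, v, c, r) acc =
          let new = [ (cu + c, ru + r)
                    | (cu, ru) <- Map.findWithDefault [] u labels
                    , ru + r <= limit ]
          in Map.insertWith (++) v new acc
```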
We present here a generalization of A-translation to a class of pure type systems. We apply this translation to give a direct proof of the existence of a looping combinator in a large class of inconsistent type systems, a class which includes type systems with a type of all types. This is the first non-automated solution to this problem.
A substantial amount of work has been devoted to the proof of correctness of various program analyses but much less attention has been paid to the correctness of compiler optimisations based on these analyses. In this paper we tackle the problem in the context of strictness analysis for lazy functional languages. We show that compiler optimisations based on strictness analysis can be expressed formally in the functional framework using continuations. This formal presentation has two benefits: it allows us to give a rigorous correctness proof of the optimised compiler; and it exposes the various optimisations made possible by a strictness analysis.
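A small example of the kind of optimisation such an analysis licenses (illustrative Haskell; the paper expresses the optimisation formally with continuations inside a compiler):

```haskell
-- If strictness analysis shows f is strict (f undefined = undefined),
-- the compiler may evaluate f's argument eagerly, avoiding a thunk,
-- without changing the meaning of the program.
f :: Int -> Int
f x = x + 1                       -- strict in x

callLazy :: Int -> Int
callLazy y = f (y * 2)            -- lazily: builds a thunk for y * 2

callStrict :: Int -> Int
callStrict y = let a = y * 2      -- the strictness-enabled version:
               in a `seq` f a     -- evaluate the argument before the call
```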
We analyse the computational complexity of type inference for untyped λ-terms in the second-order polymorphic typed λ-calculus (F2) invented by Girard and Reynolds, as well as higher-order extensions F3, F4, …, Fω proposed by Girard. We prove that recognising the F2-typable terms requires exponential time, and for Fω the problem is non-elementary. We show as well a sequence of lower bounds on recognising the Fk-typable terms, where the bound for Fk+1 is exponentially larger than that for Fk.
The lower bounds are based on generic simulation of Turing Machines, where computation is simulated at the expression and type level simultaneously. Non-accepting computations are mapped to non-normalising reduction sequences, and hence non-typable terms. The accepting computations are mapped to typable terms, where higher-order types encode reduction sequences, and first-order types encode the entire computation as a circuit, based on a unification simulation of Boolean logic. A primary technical tool in this reduction is the composition of polymorphic functions having different domains and ranges.
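For intuition about how composing polymorphic functions makes types grow, here is the classic warm-up from ML-style inference; this is a far weaker phenomenon than the Fk lower bounds above, and not the paper's construction.

```haskell
-- Each application of 'pair' doubles the size of the inferred type,
-- so the principal type of 'blowup' contains 2^4 = 16 occurrences of a.
pair :: a -> (a, a)
pair x = (x, x)

blowup :: a -> ((((a,a),(a,a)),((a,a),(a,a))),(((a,a),(a,a)),((a,a),(a,a))))
blowup x = pair (pair (pair (pair x)))
```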
These results are the first nontrivial lower bounds on type inference for the Girard/Reynolds system as well as its higher-order extensions. We hope that the analysis provides important combinatorial insights which will prove useful in the ultimate resolution of the complexity of the type inference problem.
In 1979, Klop (1980), answering a question raised by Mann in 1972, showed that the extension of λ-calculus with surjective pairing is not confluent. We refer to Klop (1980) and Barendregt (1981, revised 1984) for a perspective. The term presented by Klop to provide a counterexample is fairly simple, but the proof of non-confluence, although intuitively quite simple, involves some technical properties. Among others, a suitable standardization result on derivations in the extended system is needed in the proof. Klop's proof was revisited by Bunder (1985), who seemingly used less technical apparatus than Klop, starting with the same term as Klop. Although Bunder's proof does not explicitly use a standardization result, his proof proceeds internally with some rearrangements of derivations, so that it is fair to say that some standardization technique is present in Bunder (1985).
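For orientation: surjective pairing extends β-reduction with a pairing constructor and projections obeying π1⟨M, N⟩ → M and π2⟨M, N⟩ → N, together with the surjectivity rule ⟨π1 M, π2 M⟩ → M; it is this last rule that destroys confluence.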