Traditional functional languages do not have an explicit distinction between binding times. It arises implicitly, however, as one typically instantiates a higher-order function with the arguments that are known, whereas the unknown arguments remain to be taken as parameters. The distinction between ‘known’ and ‘unknown’ is closely related to the distinction between binding times like ‘compile-time’ and ‘run-time’. This leads to the use of a combination of polymorphic type inference and binding time analysis for obtaining the required information about which arguments are known.
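As a minimal sketch of this known/unknown split, here is the textbook power-function example in Haskell (illustrative only, not drawn from the text above): applying the function to its known argument leaves a residual function in which only the run-time argument remains.

    -- Illustrative only: the exponent is 'known' (compile-time),
    -- the base is 'unknown' (run-time).
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Instantiating the known argument yields a specialised function
    -- over the run-time parameter alone.
    cube :: Int -> Int
    cube = power 3            -- behaves like \x -> x * x * x

    main :: IO ()
    main = print (cube 4)     -- prints 64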
Following the current trend in the implementation of functional languages we then transform the run-time level of the program (not the compile-time level) into categorical combinators. At this stage we have a natural distinction between two kinds of program transformations: partial evaluation, which involves the compile-time level of our notation, and algebraic transformations (i.e., the application of algebraic laws), which involves the run-time level of our notation.
By reinterpreting the combinators in suitable ways we obtain specifications of abstract interpretations (or data flow analyses). In particular, the use of combinators makes it possible to use a general framework for specifying both forward and backward analyses. The results of these analyses will be used to enable program transformations that are not applicable under all circumstances.
Finally, the combinators can also be reinterpreted to specify code generation for various (abstract) machines. To improve the efficiency of the code generated, one may apply abstract interpretations to collect extra information for guiding the code generation. Peephole optimizations may be used to improve the code at the lowest level.
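To suggest concretely what reinterpreting combinators can amount to, here is a toy sketch in Haskell; the combinator language and the crude 'analysis' below are invented for illustration and are not those of the work described above.

    -- A toy combinator language over integers (illustrative only).
    data Comb = Ident          -- identity
              | Comp Comb Comb -- composition
              | Succ           -- successor
              | ConstZero      -- constant zero

    -- One interpretation: ordinary evaluation.
    eval :: Comb -> Int -> Int
    eval Ident      n = n
    eval (Comp f g) n = eval f (eval g n)
    eval Succ       n = n + 1
    eval ConstZero  _ = 0

    -- A reinterpretation of the same terms as a (very crude) analysis:
    -- can the input influence the result at all?
    depends :: Comb -> Bool
    depends Ident      = True
    depends (Comp f g) = depends f && depends g
    depends Succ       = True
    depends ConstZero  = False

A further reinterpretation could map the same terms to instructions for an abstract machine, which is the code-generation reading mentioned above.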
This paper presents a simple framework for performing calculations with the elements of (finite) lattices. A particular feature of this work is the use of type classes to enable the use of overloaded function symbols within a strongly typed language. Previous applications of type classes have been in areas that are of most interest to language implementors. This paper suggests that type classes might also be useful as a general tool in the development of clear and modular programs.
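A minimal sketch of the kind of class involved, with assumed names rather than the paper's: a Lattice class whose overloaded join and meet can be used at many types within a strongly typed program.

    class Lattice a where
      bottom, top :: a
      join, meet  :: a -> a -> a

    -- The two-point lattice of Booleans, ordered False <= True.
    instance Lattice Bool where
      bottom = False
      top    = True
      join   = (||)
      meet   = (&&)

    -- Lattices are closed under products, ordered componentwise.
    instance (Lattice a, Lattice b) => Lattice (a, b) where
      bottom             = (bottom, bottom)
      top                = (top, top)
      join (x, y) (u, v) = (x `join` u, y `join` v)
      meet (x, y) (u, v) = (x `meet` u, y `meet` v)

The overloaded symbols then work uniformly at every instance; for example, join (True, False) (False, False) is (True, False).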
There are situations in programming where some dynamic typing is needed, even in the presence of advanced static type systems. We investigate the interplay of dynamic types with other advanced type constructions, discussing their integration into languages with explicit polymorphism (in the style of system F), implicit polymorphism (in the style of ML), abstract data types, and subtyping.
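As a rough modern analogue of the facility in question (Haskell's Data.Dynamic library, not the constructs studied in the paper): a value can be injected into a universal type together with its run-time type, and recovered later by an explicit type test that may fail.

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- A heterogeneous list: each element carries its type at run time.
    values :: [Dynamic]
    values = [toDyn (1 :: Int), toDyn "hello", toDyn True]

    -- Recover only the Ints; fromDynamic acts as the type test,
    -- returning Nothing when the stored type does not match.
    ints :: [Int]
    ints = [ n | Just n <- map fromDynamic values ]

    main :: IO ()
    main = print ints   -- prints [1]

The type test is explicit and may fail, which is exactly the interplay between dynamic and static typing that the paper examines in a more general setting.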
We argue that the novel combination of type classes and existential types in a single language yields significant expressive power. We explore this combination in the context of higher-order functional languages with static typing, parametric polymorphism, algebraic data types and Hindley–Milner type inference. Adding existential types to an existing functional language that already features type classes requires only a minor syntactic extension. We first demonstrate how to provide existential quantification over type classes by extending the syntax of algebraic data type definitions, and give examples of possible uses. We then develop a type system and a type inference algorithm for the resulting language. Finally, we present a formal semantics by translation to an implicitly-typed second-order λ-calculus and show that the type system is semantically sound. Our extension has been implemented in the Chalmers Haskell B. system, and all examples from this paper have been developed using this system.
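A small example of the extension described, written in today's GHC syntax rather than that of the Haskell B. implementation used in the paper: the data type declaration existentially quantifies over a type constrained by a class context.

    {-# LANGUAGE ExistentialQuantification #-}

    -- The constructor packages a value with the Show dictionary for
    -- its (otherwise hidden) type.
    data Showable = forall a. Show a => MkShowable a

    instance Show Showable where
      show (MkShowable x) = show x

    -- Values of different underlying types live in one list.
    items :: [Showable]
    items = [MkShowable (42 :: Int), MkShowable "text", MkShowable [True, False]]

    main :: IO ()
    main = mapM_ print items

Because each constructor application captures the dictionary for its class context, the list is homogeneous at type Showable even though its components have different underlying types.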
At the recent TC2 working conference on constructing programs from specifications (Moeller, 1991), I presented the derivation of a functional program for solving a problem posed by Knuth (1990). Slightly simplified, the problem was to construct a shortest decimal fraction representing a given integer multiple of 1/2¹⁶. Later in the conference – and in a different context – Robert Dewar described a second problem that he had recently set as an examination question. In brief, the problem was to replace sequences of blanks in a file by tab characters wherever possible. Although Knuth's and Dewar's problems appear to have little in common, I suspected that both had the same ‘deep structure’ and were instances of a single general result about greedy algorithms. The purpose of this note is to bring the general result to light and to unify the treatment of the two problems. We begin by describing the problems more precisely.
We describe the design, implementation and use of a new kind of profiling tool that yields valuable information about the memory use of lazy functional programs. The tool has two parts: a modified functional language implementation which generates profiling information during the execution of programs, and a separate program which converts this information to graphical form. With the aid of profile graphs, one can make alterations to a functional program which dramatically reduce its space consumption. We demonstrate this in the case of a genuine example – the first to which the tool was applied – for which the results are strikingly successful.
Quest is a programming language based on impredicative type quantifiers and subtyping within a three-level structure of kinds, types and type operators, and values.
The semantics of Quest is rather challenging. In particular, difficulties arise when we try to model simultaneously features such as contravariant function spaces, record types, subtyping, recursive types and fixpoints.
In this paper we describe in detail the type inference rules for Quest, and give them meaning using a partial equivalence relation model of types. Subtyping is interpreted as in previous work by Bruce and Longo (1989), but the interpretation of some aspects – namely subsumption, power kinds, and record subtyping – is novel. The latter is based on a new encoding of record types.
We concentrate on modelling quantifiers and subtyping; recursion is the subject of current work.
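For orientation, independently of the particular model constructed in the paper: a partial equivalence relation (PER) on an untyped universe D is a relation R ⊆ D × D that is symmetric and transitive but not necessarily reflexive; a type is interpreted as a PER, its elements being those d with d R d, considered up to R-equivalence. One routine reading of a subtyping judgement A <: B in such a model is containment of relations, ⟦A⟧ ⊆ ⟦B⟧; as noted above, the interpretation of subsumption, power kinds and record subtyping given in the paper is novel rather than routine.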
We introduce a λ-calculus notation which enables us to detect more β-redexes in a term than the usual notation does. On this basis, we define an extended β-reduction which is nevertheless a subrelation of conversion. The Church–Rosser property holds for this extended reduction. Moreover, we show that we can transform generalised redexes into usual ones by a process called ‘term reshuffling’.
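To make the idea concrete, an illustration in ordinary notation (the paper's own notation differs): in the term ((λx.λy.M) A) B the usual notation exhibits only the redex (λx.λy.M) A, and the argument B meets λy only after that redex has been contracted:

    ((λx.λy.M) A) B   →β   (λy.M[x:=A]) B   →β   M[x:=A][y:=B]

A generalised redex pairs λy with B already in the original term, and term reshuffling rearranges the term so that such a pair becomes an ordinary redex.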
Compiler generation based on Mosses' action semantics has been studied by Brown, Moura, and Watt, and also by the second author. The core of each of their systems is a handwritten action compiler, producing either C or machine code. We have obtained an action compiler in a much simpler way: by partial evaluation of an action interpreter. Even though our compiler produces Scheme code, the code runs as fast as that produced by the previous action compilers.
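For background, the classical Futamura projections describe what partial evaluation of an interpreter yields; writing mix for a partial evaluator (a conventional name, not one taken from the paper):

    target   = mix(interpreter, source)       (first projection: compilation by specialisation)
    compiler = mix(mix, interpreter)          (second projection: a compiler from an interpreter)

This is the general recipe behind the result above: the action compiler is obtained by specialisation of an action interpreter rather than being handwritten.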
The projection-based strictness analysis of Wadler and Hughes is elegant and theoretically satisfying except in one respect: the need for lifting. The domains and functions over which the analysis is performed need to be transformed, leading to a less direct correspondence between analysis and program than might be hoped for. In this paper we shall see that the projection analysis may be reformulated in terms of partial projections, so removing this infelicity. There are additional benefits of the formulation: the two forms of information captured by the projection are distinguished, and the operational significance of the range of the projection fits exactly with the theory of unboxed types.
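For orientation, the standard definitions behind this style of analysis, stated in outline rather than as the paper's precise formulation: a projection γ on a domain D is a continuous function satisfying

    γ ⊑ id        and        γ ∘ γ = γ

and a typical safety statement for a function f has the shape γ ∘ f = γ ∘ f ∘ δ, read as: if only the part of the result selected by γ is demanded, then it suffices to evaluate the argument to the degree described by δ. Roughly speaking, the lifting referred to above is what allows ‘absence of demand’ to be expressed as a projection in the original formulation.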
We prove a strong normalisation result for the linear term calculus of Benton, Bierman, Hyland and de Paiva. Rather than prove the result from first principles, we give a translation of linear terms into terms in the second-order polymorphic lambda calculus (λ2) which allows the result to be proved by appealing to the well-known strong normalisation property of λ2. An interesting feature of the translation is that it makes use of the λ2 coding of a coinductive datatype as the translation of the !-types (exponentials) of the linear calculus.
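For background, a standard System F encoding of coinductive types (the paper's actual translation of the exponentials may differ in its details): the greatest fixed point νF of a covariant operator F can be coded in λ2 as

    νF  ≅  ∃X. (X → F X) × X        where        ∃X. T  =  ∀Y. (∀X. (T → Y)) → Y

so an element of νF consists of a hidden state type X, a step function unfolding one layer of structure, and a current state; pairs are likewise definable within λ2. The translation of the !-types mentioned above relies on a coinductive coding of this general kind.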