In this paper we describe an implementation of an interactive version of the purely functional programming language Lazy ML (LML). The most remarkable fact about the interactive system is that it is written in a pure functional style using LML, yet its efficiency still compares favourably with that of conventional interpretative systems.
We describe how the system is designed, and also the exception mechanism that was added to facilitate the handling of errors in the system.
This paper describes a flexible type system that combines overloading and higher-order polymorphism in an implicitly typed language using a system of constructor classes—a natural generalization of type classes in Haskell. We present a range of examples to demonstrate the usefulness of such a system. In particular, we show how constructor classes can be used to support the use of monads in a functional language. The underlying type system permits higher-order polymorphism but retains many of the attractive features that have made Hindley/Milner type systems so popular. In particular, there is an effective algorithm that can be used to calculate principal types without the need for explicit type or kind annotations. A prototype implementation has been developed providing, amongst other things, the first concrete implementation of monad comprehensions known to us at the time of writing.
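To make the idea of a constructor class concrete, here is a minimal sketch in present-day Haskell (the class and method names are ours, chosen to avoid clashing with the Prelude; the paper's own notation differs):

-- A constructor class: the class variable f ranges over type constructors
-- (kind * -> *), not over plain types.
class MyFunctor f where
  fmap' :: (a -> b) -> f a -> f b

-- Monads as a constructor class, the basis for monad comprehensions.
class MyFunctor m => MyMonad m where
  unit :: a -> m a
  bind :: m a -> (a -> m b) -> m b

-- One instance among many: the Maybe type constructor.
instance MyFunctor Maybe where
  fmap' _ Nothing  = Nothing
  fmap' f (Just x) = Just (f x)

instance MyMonad Maybe where
  unit = Just
  bind Nothing  _ = Nothing
  bind (Just x) k = k x

The class variables f and m range over type constructors rather than plain types, which is exactly the higher-order polymorphism that the paper's type and kind inference supports without explicit annotations.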
Abstract interpretation is the collective name for a family of semantics-based techniques for compile-time analysis of programs. One of the most costly operations in automating such analyses is the computation of fixed points. The frontiers algorithm is an elegant method, invented by Chris Clack and Simon Peyton Jones, which addresses this issue.
In this article we present a new approach to the frontiers algorithm based on the insight that frontiers represent upper and lower subsets of a function's argument domain. This insight leads to a new formulation of the frontiers algorithm for higher-order functions, which is considerably more concise than previous versions.
We go on to argue that for many functions, especially in the higher-order case, finding fixed points is an intractable problem unless the sizes of the abstract domains are reduced. We show how the semantic machinery of abstract interpretation allows us to place upper and lower bounds on the values of fixed points in large lattices by working within smaller ones.
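For orientation, the fixed points in question are least fixed points of monotone functions over finite abstract lattices. The naive Kleene iteration below (a generic sketch, not the frontiers algorithm itself) shows why the cost is governed by the size of the abstract domain:

-- Naive ascending (Kleene) iteration: start from the bottom element and
-- apply f until the value stops changing. Over a finite lattice and a
-- monotone f this terminates with the least fixed point.
lfp :: Eq a => (a -> a) -> a -> a
lfp f bottom = go bottom
  where
    go x = let x' = f x in if x' == x then x else go x'

-- Example on the chain 0 <= 1 <= ... <= 5: iteration climbs the chain
-- step by step, illustrating why large abstract domains are expensive.
example :: Int
example = lfp (\x -> min 5 (x + 1)) 0   -- evaluates to 5

Both the frontiers algorithm and the domain-reduction technique of the paper aim to avoid the work that this naive scheme performs on large lattices.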
We present a new static system which reconstructs the types, regions and effects of expressions in an implicitly typed functional language that supports imperative operations on reference values. Just as types structurally abstract collections of concrete values, regions represent sets of possibly aliased reference values and effects represent approximations of the imperative behaviour on regions.
We introduce a static semantics for inferring types, regions and effects, and prove that it is consistent with respect to the dynamic semantics of the language. We present a reconstruction algorithm that computes the types and effects of expressions, and assigns regions to reference values. We prove the correctness of the reconstruction algorithm with respect to the static semantics. Finally, we discuss potential applications of our system to automatic stack allocation and parallel code generation.
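As a rough rendering of the static objects involved (the Haskell constructor names below are ours, intended only to illustrate the shape of the information the reconstruction algorithm computes):

-- Hypothetical abstract syntax for types, regions and effects: types are
-- annotated with regions, and effects record which regions a computation
-- may initialise, read or write.
type Region = String                 -- region variables

data Type
  = TInt
  | TRef Region Type                 -- a reference value living in a region
  | TFun Type Effect Type            -- a function with a latent effect

data Effect
  = ENone                            -- no observable effect
  | EInit Region Type                -- allocating a reference in a region
  | ERead Region                     -- dereferencing in a region
  | EWrite Region                    -- assigning in a region
  | EUnion Effect Effect             -- combined effects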
This paper compares three different sparse matrix representations in Miranda for solving linear systems of equations: quadtrees, binary trees and run-length encoding. It compares the three data structures in each of two common linear system solvers, Conjugate Gradient and SOR. The test problems used in the paper arise from a simple reservoir model.
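To indicate the first of these representations, here is a quadtree sketch in Haskell rather than Miranda (the constructor names and the addition operation are illustrative, not taken from the paper):

-- Quadtree representation of a square matrix (dimension a power of two):
-- an all-zero block is a single Zero leaf, so sparse regions cost O(1).
data QuadTree a
  = Zero                                      -- an all-zero block
  | Leaf a                                    -- a 1x1 block
  | Quad (QuadTree a) (QuadTree a)            -- north-west, north-east,
         (QuadTree a) (QuadTree a)            -- south-west, south-east
  deriving Show

-- Matrix addition: zero blocks are absorbed without being traversed.
add :: Num a => QuadTree a -> QuadTree a -> QuadTree a
add Zero m = m
add m Zero = m
add (Leaf x) (Leaf y) = Leaf (x + y)
add (Quad a b c d) (Quad e f g h) =
  Quad (add a e) (add b f) (add c g) (add d h)
add _ _ = error "add: mismatched matrix dimensions"

Collapsing all-zero quadrants into a single leaf is what makes the representation attractive for sparse systems: operations such as addition skip zero blocks without traversing them.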
The λσ-calculus is a refinement of the λ-calculus where substitutions are manipulated explicitly. The λσ-calculus provides a setting for studying the theory of substitutions, with pleasant mathematical properties. It is also a useful bridge between the classical λ-calculus and concrete implementations.
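A Haskell rendering of the syntax of the λσ-calculus gives a feel for what "substitutions manipulated explicitly" means. (In the pure calculus the only variable is the index 1, with other indices written 1[↑ⁿ]; the datatype below allows arbitrary de Bruijn indices for readability.)

-- Terms of the λσ-calculus, in de Bruijn form.
data Term
  = Var Int            -- de Bruijn index
  | App Term Term
  | Lam Term
  | Clo Term Subst     -- t[s] : a term under an explicit substitution

-- Explicit substitutions, manipulated as first-class objects.
data Subst
  = Id                 -- identity substitution
  | Shift              -- ↑ : shift all indices up by one
  | Cons Term Subst    -- t · s : t for index 1, s for the rest
  | Comp Subst Subst   -- s ∘ s' : composition of substitutions

-- The Beta rule of the calculus delays the substitution instead of
-- performing it at once:  App (Lam t) u  -->  Clo t (Cons u Id)
beta :: Term -> Maybe Term
beta (App (Lam t) u) = Just (Clo t (Cons u Id))
beta _               = Nothing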
Most programming environments for functional languages offer a single tool used to evaluate programs – either a batch compiler or an interpreter with a read-eval-print loop. This paper presents a programming environment that supports not only evaluation, but also a range of other programming activities including transformation. The environment is designed to encourage working in an incremental and exploratory style, avoiding constraints on the order in which things must be done yet guaranteeing security. What has already been done towards the development of a program automatically persists, as does information about what has yet to be done. For instance, new laws can be introduced as conjectures and used in program transformation, but full details of proof obligations and dependencies are maintained.
The paper outlines the functional language supported by the environment, and uses an extended example to illustrate program construction, execution, tracing, modification and transformation.
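As a generic illustration of a law used in transformation (our example, not one from the paper), map fusion can be introduced as a conjecture and applied to rewrite a program, with the environment recording the outstanding proof obligation:

-- A law that might be introduced as a conjecture and then used to
-- transform programs:   map f (map g xs) = map (f . g) xs

-- Before transformation: two traversals of the list.
before :: [Int] -> [Int]
before xs = map (* 2) (map (+ 1) xs)

-- After applying the conjectured law: a single traversal.
after :: [Int] -> [Int]
after xs = map ((* 2) . (+ 1)) xs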
We have built a portable, instrumentation-based, replay debugger for the Standard ML of New Jersey compiler. Traditional ‘source-level’ debuggers for compiled languages actually operate at machine level, which makes them complex, difficult to port, and intolerant of compiler optimization. For secure languages like ML, however, debugging support can be provided without reference to the underlying machine, by adding instrumentation to program source code before compilation. Because instrumented code is (almost) ordinary source, it can be processed by the ordinary compiler. Our debugger is thus independent of the underlying hardware and runtime system, and of the optimization strategies used by the compiler. The debugger also provides reverse execution, both as a user feature and an internal mechanism. Reverse execution is implemented using a checkpoint and replay system; checkpoints are represented primarily by first-class continuations.
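The debugger itself is written in Standard ML; purely to illustrate the idea of representing a checkpoint by a first-class continuation, the following Haskell sketch (using the ContT transformer, which only approximates SML/NJ's native callcc) captures the rest of the computation and re-enters it once:

import Control.Monad (unless)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Cont (ContT, runContT, callCC)
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  replayedRef <- newIORef False            -- has the checkpoint been replayed?
  flip runContT return $ do
    -- Capture the rest of the computation as a re-enterable checkpoint
    -- (the standard label/goto encoding of callCC).
    checkpoint <- callCC $ \k -> let go = k go in return go
    liftIO $ putStrLn "executing from checkpoint"
    replayed <- liftIO (readIORef replayedRef)
    unless replayed $ do
      liftIO $ writeIORef replayedRef True
      checkpoint                           -- jump back: replay once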
A binding-time analysis is correct if it always produces consistent binding-time information. Consistency prevents partial evaluators from ‘going wrong’. A sufficient and decidable condition for consistency, called well-annotatedness, was first presented by Gomard and Jones. In this paper we prove that a weaker condition implies consistency. Our condition is decidable, subsumes that of Gomard and Jones, and was first studied by Schwartzbach and the present author. Our result implies the correctness of the binding-time analysis of Mogensen, and it indicates the correctness of the core of the binding-time analyses of Bondorf and Consel. We also prove that all partial evaluators will on termination have eliminated all ‘eliminable’-marked parts of an input which satisfies our condition. This generalizes a result of Gomard. Our development is for the pure λ-calculus with explicit binding-time annotations.
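The object language here is the λ-calculus with explicit binding-time annotations, i.e. a two-level λ-calculus in the style of Gomard and Jones. A Haskell sketch of such a syntax (the constructor names are ours) looks roughly as follows:

-- Two-level λ-terms: each construct is marked either static
-- ('eliminable', to be reduced by the partial evaluator) or dynamic
-- ('residual', to appear in the output program).
data Term
  = Var String
  | Lam String Term          -- static (eliminable) abstraction
  | App Term Term            -- static (eliminable) application
  | DLam String Term         -- dynamic (residual) abstraction
  | DApp Term Term           -- dynamic (residual) application
  | Lift Term                -- coerce a static value into residual code

Consistency then says, roughly, that the partial evaluator can reduce every construct marked eliminable without ever encountering a dynamic value in a static position.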
This paper describes a data-intensive application written in a lazy functional language: a server for textual information retrieval. The design illustrates the importance of interoperability, the capability of interacting with code written in other programming languages. Lazy functional programming is shown to be a powerful and elegant means of accomplishing several desirable concrete goals: delivering initial results promptly, using space economically, and avoiding unnecessary I/O. Performance results, however, are mixed.
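A toy example (ours, not code from the server) of how laziness serves those goals: with a lazily read file, only as much input is read and searched as is needed to produce the first few matching lines.

import Data.List (isInfixOf)

-- Return at most the first n lines of the document containing the query.
-- Because readFile reads lazily and 'take' stops the search early, the
-- first results appear promptly and input beyond them need not be read.
firstMatches :: Int -> String -> FilePath -> IO [String]
firstMatches n query path = do
  contents <- readFile path
  return (take n (filter (query `isInfixOf`) (lines contents)))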