It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are discussed. Performance and use of the resulting application are compared with related work.
A number of difficulties in the formalism of Pure Type Systems (PTS) are discussed and an alternative classification system for typed calculi is proposed. The main novelty of the new approach is that one first explicitly specifies the dependencies that may occur. This is especially useful for describing constants, but it also facilitates the description of other type-theoretic features such as dependent sums.
Many of the details that a programmer must manage when programming in a procedural language are handled by the implementation in a functional language. In a parallel functional language, we would expect the assignment of processes to processors and the scheduling of processes to be handled by the implementation. The partitioning of a program into processes may be handled by the implementation as well.
We will assume that a parallel functional program may create processes dynamically, and that processes may synchronize in an arbitrary manner. It does not matter whether processes are defined in the source program or generated by the implementation.
On parallel systems where each processor has some local memory, it is common practice not to move processes once they have started to execute. We will show that if each process must be executed on a single processor, then no fully automatic scheduling strategy can ensure good performance.
We also will show that if all processes are sequential processes (i.e. do not contain internal parallelism), and if a process may be moved whenever it resumes execution following synchronization with another process, then good performance can be assured, at least in theory.
List terminators are discussed as a new form of qualifier in list comprehensions, allowing early termination of a list. The semantics of list terminators is expressed in terms of the optimal translation of list comprehensions (cf. Wadler, 1987), chosen because it makes direct use of a continuation list.
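As an illustration (ours, not drawn from the paper; the qualifier syntax is hypothetical), a comprehension with an early-termination qualifier such as `[ x*x | x <- xs, while (x < 10) ]` can be modelled by a Wadler-style translation that threads the rest of the result list as an explicit continuation `c`; when the terminator's condition fails, the translation returns `c` directly, cutting off all remaining iterations:

```haskell
-- Sketch of a compiled comprehension with a terminator qualifier.
-- The continuation `c` is the rest of the result list; returning it
-- early is what terminates the comprehension.
squaresWhile :: (Int -> Bool) -> [Int] -> [Int]
squaresWhile p xs = go xs []
  where
    go []       c = c
    go (x:rest) c
      | p x       = x * x : go rest c
      | otherwise = c          -- terminator fires: drop the rest
```

Note that, unlike an ordinary guard qualifier, the terminator discards the whole remainder of the generator rather than skipping a single element.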
We extend the definition of natural semantics to include simply typed λ-terms, instead of first-order terms, for representing programs, and to include inference rules for the introduction and discharge of hypotheses and eigenvariables. This extension, which we call extended natural semantics, affords a higher-level notion of abstract syntax for representing programs and suitable mechanisms for manipulating this syntax. We present several examples of semantic specifications for a simple functional programming language and demonstrate how we achieve simple and elegant manipulations of bound variables in functional programs. All the examples have been implemented and tested in λProlog, a higher-order logic programming language that supports all of the features of extended natural semantics.
An extension of ML with continuation primitives similar to those found in Scheme is considered. A number of alternative type systems are discussed, and several programming examples are given. A continuation-based operational semantics is defined for a small, purely functional language, and the soundness of the Damas–Milner polymorphic type assignment system with respect to this semantics is proved. The full Damas–Milner type system is shown to be unsound in the presence of first-class continuations. Restrictions on polymorphism similar to those introduced in connection with reference types are shown to suffice for soundness.
The Third Homomorphism Theorem is a folk theorem of the constructive algorithmics community. It states that a function on lists that can be computed both from left to right and from right to left is necessarily a list homomorphism – it can be computed according to any parenthesization of the list. We formalize and prove the theorem, and use it to improve an O(n²) sorting algorithm to O(n log n).
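As a small illustration of the theorem's use (our sketch, under the standard definitions): sorting can be written both as a leftwards fold and as a rightwards fold (both are insertion sorts), so by the theorem it is a list homomorphism, i.e. the list may be split anywhere, the halves sorted independently, and the results combined – here with `merge`, giving the O(n log n) divide-and-conquer algorithm:

```haskell
import Data.List (insert)

-- Sorting as a leftwards fold and as a rightwards fold.
leftwards, rightwards :: Ord a => [a] -> [a]
leftwards  = foldl (flip insert) []
rightwards = foldr insert []

-- The combining operator of the homomorphism.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)
  | otherwise = y : merge (x:xs) ys

-- The homomorphism view: split anywhere, sort halves, merge.
homSort :: Ord a => [a] -> [a]
homSort []  = []
homSort [x] = [x]
homSort xs  = merge (homSort l) (homSort r)
  where (l, r) = splitAt (length xs `div` 2) xs
```

All three functions compute the same result; only `homSort` exploits the freedom of parenthesization to divide the work.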
Standard ML is an excellent language for many kinds of programming. It is safe, efficient, suitably abstract, and concise. There are many aspects of the language that work well.
However, nothing is perfect: Standard ML has a few shortcomings. In some cases there are obvious solutions, and in other cases further research is required.
We have developed a simple algebraic approach to music description and composition called Haskore. In this framework, musical objects consist of primitive notions such as notes and rests, operations to transform musical objects such as transpose and tempo-scaling, and operations to combine musical objects to form more complex ones, such as concurrent and sequential composition. When these simple notions are embedded into a functional language such as Haskell, rather complex musical relationships can be expressed clearly and succinctly. Exploiting the algebraic properties of Haskore, we have further defined a notion of literal performance (devoid of articulation) through which observationally equivalent musical objects can be determined. With this basis many useful properties can be proved, such as commutative, associative, and distributive properties of various operators. An algebra of music thus surfaces.
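A minimal sketch in the spirit of the framework (constructor names are illustrative and simplified, not Haskore's actual API): musical objects are built from primitive notes and rests, combined by sequential and concurrent composition, and transformed by operations such as transposition; functions over the resulting algebraic datatype, such as a duration measure, follow its structure directly:

```haskell
-- A toy Haskore-like algebra of music (our simplification).
data Music
  = Note Int Rational        -- pitch (semitones from a base), duration
  | Rest Rational            -- silence of a given duration
  | Music :+: Music          -- sequential composition
  | Music :=: Music          -- concurrent (parallel) composition
  | Transpose Int Music      -- shift every pitch

-- Duration follows the algebraic structure: sequential composition
-- adds durations, concurrent composition takes the maximum.
dur :: Music -> Rational
dur (Note _ d)      = d
dur (Rest d)        = d
dur (m1 :+: m2)     = dur m1 + dur m2
dur (m1 :=: m2)     = max (dur m1) (dur m2)
dur (Transpose _ m) = dur m
```

Algebraic laws such as the associativity of `:+:` and `:=:` can then be stated, and proved with respect to a notion of performance, directly on this datatype.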
Based on Henglein's efficient binding-time analysis for the lambda calculus (with constants and ‘fix’) (Henglein, 1991), we develop three efficient analyses for use in the preprocessing phase of Similix, a self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this paper are almost-linear in the size of the analysed program. (1) A flow analysis determines possible value flow between lambda-abstractions and function applications and between constructor applications and selector/predicate applications. The flow analysis is not particularly biased towards partial evaluation; the analysis corresponds to the closure analysis of Bondorf (1991b). (2) A (monovariant) binding-time analysis distinguishes static from dynamic values; the analysis treats both higher-order functions and partially static data structures. (3) A new is-used analysis, not present in Bondorf (1991b), finds a non-minimal binding-time annotation which is ‘safe’ in a certain way: a first-order value may only become static if its result is ‘needed’ during specialization; this ‘poor man's generalization’ (Holst, 1988) improves the termination of specialization. The three analyses are performed sequentially in the order given above, since each depends on results from the previous analyses. The input to all three analyses is a set of constraints generated from the program being analysed. The constraints are solved efficiently by a normalizing union/find-based algorithm in almost-linear time. Whenever possible, the constraint sets are partitioned into subsets which are solved in a specific order; this simplifies constraint normalization. The framework elegantly allows expressing both forwards and backwards components of analyses; in particular, the new is-used analysis is of a backwards nature. The three constraint normalization algorithms are proved correct (soundness, completeness, termination, existence of a best solution).
The analyses have been implemented and integrated in the Similix system. The new analyses are indeed much more efficient than those of Bondorf (1991b); the almost-linear complexity of the new analyses is confirmed by the implementation.
This paper discusses the transformation power of Burstall and Darlington's folding/unfolding system, i.e. what kind of programs can be derived from a given one. A necessary condition of derivability is proved. The notion of the inherent complexity of recursive functions is introduced. A bound on the efficiency gain achievable by folding/unfolding transformations is obtained for all reasonable computation models. The well-known partial correctness and incompleteness of the system are corollaries of this result. Examples of underivability are given: for example, binary search cannot be derived from linear search, and merge sort cannot be derived from insertion sort.
In this paper we present functional Id and Haskell versions of a large Monte Carlo radiation transport code, and compare the two languages with respect to their expressiveness. Monte Carlo transport simulation exercises such abilities as parsing, input/output, recursive data structures and traditional number crunching, which makes it a good test problem for languages and compilers. Using some code examples, we compare the programming styles encouraged by the two languages. In particular, we discuss the effect of laziness on programming style. We point out that resource management problems currently prevent running realistically large problem sizes in the functional versions of the code.
A common technique for improving the speed of program specialization is to specialize the program specializer itself on the program to be specialized, creating a custom program generator. Much research has been devoted to the problem of generating efficient program generators, which do not perform reductions at program generation time that could instead have been performed when the program generator was constructed. The conventional wisdom holds that only program specializers using binding-time approximations can be specialized into such efficient program generators. This paper argues that this is not the case, and demonstrates that the specialization of a nontrivial online program specializer similar to the original ‘naive MIX’ can indeed yield an efficient program generator. The key to our argument is that, while the use of binding-time information at program generator generation time is necessary for the construction of an efficient custom specializer, the use of explicit binding-time approximation techniques is not. This allows us to distinguish the problem at hand (i.e. the use of binding-time information during program generator generation) from particular solutions to that problem (i.e. offline specialization). We show that, given a careful choice of specializer data structures, and sufficiently powerful specialization techniques, binding-time information can be inferred and utilized without explicit approximations. This allows the construction of efficient, optimizing program generators from online program specializers.
A combinator-based parser is a parser constructed directly from a BNF grammar, using higher-order functions (combinators) to model the alternative and sequencing operations of BNF. This paper describes a method for constructing parser combinators that can be used to build efficient predictive parsers which accurately report the cause of parsing errors. The method uses parsers that return values (parse trees or error indications) decorated with one of four tags.
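To give the flavour of the idea (a sketch under our own tag names, which are not necessarily the paper's): each parse result records both success/failure and whether input was consumed, so that a predictive choice combinator can fall through to an alternative only on an uncommitted failure, while a failure after consuming input is reported with its cause rather than being backtracked past:

```haskell
-- Four-way tagged parse results (illustrative names).
data Result a
  = Ok   a String    -- success after consuming input; rest follows
  | Epsn a String    -- success consuming no input
  | Fail String      -- failed without consuming: alternatives may apply
  | Err  String      -- failed after consuming: report, do not backtrack
  deriving (Eq, Show)

type Parser a = String -> Result a

-- Recognize a single character.
sym :: Char -> Parser Char
sym c (x:xs) | x == c = Ok c xs
sym c _               = Fail ("expected " ++ [c])

-- Predictive choice: only an uncommitted Fail tries the alternative.
alt :: Parser a -> Parser a -> Parser a
alt p q inp = case p inp of
  Fail _ -> q inp
  r      -> r
```

Distinguishing `Fail` from `Err` is what lets the parser commit after the first token of an alternative and pinpoint the actual cause of a parsing error.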