Having seen some of the principles of partial evaluation we now consider practicalities. In this chapter we will study the standard algorithm used in partial evaluation and introduce an extended example which we develop throughout the thesis. The material of this chapter draws very heavily on the experience of the DIKU group and much of the material presented here may be found in [JSS85], [Ses86] and [JSS89].
Partial evaluation has been attempted in a number of different programming paradigms. The earliest work used LISP-like languages because programs in such languages can easily be treated as data. In particular, the first self-applicable partial evaluator was written in a purely functional subset of first-order, statically scoped LISP. Since then work has been done to incorporate other language features of LISP-like languages including, for example, global variables [BD89]. A self-applicable partial evaluator for a term rewriting language has been achieved [Bon89], and more recently a higher-order λ-calculus version has been developed [Gom89].
Because of these successes, partial evaluation is sometimes linked with functional languages. Indeed the word “evaluation” itself is expression orientated. However, partial evaluation has also become popular in logic languages, and in Prolog in particular. Kursawe, investigating “pure partial evaluation”, shows that the principles are the same in both the logic and functional paradigms [Kur88]. Using the referentially opaque clause primitive, very compact interpreters (and hence partial evaluators) can be written. However, it is not clear how the clause predicate itself should be handled by a partial evaluator and, hence, whether this approach can ever lead to self-application. Other “features” of Prolog that can cause problems for partial evaluation are the cut and negation-by-failure.
We have studied some of the theoretical aspects of using projections in binding-time analysis and how, again in theory, the dependent sum construction can be used to define the run-time arguments. In this chapter we will draw these threads together in the implementation of a projection-based partial evaluator. The current version is written in LML [Aug84] and not in PEL itself, so it is not yet self-applicable. Indeed there are still some problems about self-application of LML-like languages, which we discuss in the concluding chapter.
One slightly surprising feature is that the moderately complicated dependent sum construction turns out to be almost trivial to implement. In contrast, however, the binding-time analysis is fairly intricate because of the complexity involved in representing projections. Of necessity, parts of the following will interest only those intending to produce an implementation themselves. Anyone uninterested in the gory details should skim much of this chapter and turn to the final section where we develop the extended example.
General
A PEL program, as defined in Chapter 4, consists of type definitions followed by a series of function definitions. At the end of these is an expression to be evaluated. The value of this expression gives the value of the whole program. When we intend to partially evaluate a program we present it in exactly the same form except that the final expression is permitted to have free variables. These free variables indicate non-static data.
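The shape of such a program can be illustrated with a Haskell analogue (PEL's concrete syntax differs, and the names here are purely illustrative): type definitions first, then function definitions, then a final expression. For partial evaluation the final expression is left open in its non-static data, which we simulate below by abstracting over the free variable.

```haskell
-- Type definitions come first (illustrative; unused below).
data Nat = Zero | Succ Nat

-- Function definitions follow.
power :: Int -> Int -> Int
power _ 0 = 1
power x n = x * power x (n - 1)

-- The final expression of the program.  For ordinary evaluation it
-- would be closed; for partial evaluation the variable `x` is free,
-- marking it as non-static, while the exponent 3 is static data.
finalExpr :: Int -> Int
finalExpr x = power x 3
```

Given this program, a partial evaluator could specialise `power` to the static exponent, producing a residual expression along the lines of `x * (x * (x * 1))`.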
There are two almost separate issues to be addressed when we consider polymorphic languages: How to perform polymorphic binding-time analysis, and how to specialise polymorphic functions. We address both here.
Strachey identified two flavours of polymorphism [Str67] which he styled parametric and ad hoc. We will only consider parametric polymorphism, as arises in the widely used Hindley-Milner type system, for example. As ad hoc polymorphism may be reduced to parametric polymorphism by introducing higher-order functions [WB89], this decision is consistent with the thrust of the thesis where we have been considering a first-order language only.
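The reduction of ad hoc to parametric polymorphism in [WB89] can be sketched in Haskell (the names and types here are illustrative, not drawn from the thesis): the type-dependent behaviour of an overloaded function is bundled into a "dictionary" that is passed as an ordinary higher-order argument, leaving a parametrically polymorphic function.

```haskell
-- Ad hoc polymorphism: `describe` behaves differently at each type.
class Describe a where
  describe :: a -> String

instance Describe Int where
  describe = show

-- The same idea reduced to parametric polymorphism: the per-type
-- behaviour becomes an explicit dictionary argument.
newtype DescribeDict a = DescribeDict { describeD :: a -> String }

-- Parametrically polymorphic in `a`; all type-specific behaviour
-- comes in through the dictionary.
describeTwice :: DescribeDict a -> a -> String
describeTwice d x = describeD d x ++ " and " ++ describeD d x

intDict :: DescribeDict Int
intDict = DescribeDict show
```

For example, `describeTwice intDict (3 :: Int)` yields `"3 and 3"`. Since the dictionary is a first-class value, this translation is what requires higher-order functions, as noted above.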
A polymorphic function is a collection of monomorphic instances which, in some sense, behave the same way. Ideally, we would like to take advantage of this uniformity to analyse (and perhaps even specialise) a polymorphic function once, and then to use the result in each instance. Up to now the only work in polymorphic partial evaluation has been by Mogensen [Mog89]. However, with his polymorphic instance analysis each instance of a polymorphic function is analysed independently of the other instances and, as a result, a single function may be analysed many times.
To capture the notion of uniformity across instances Abramsky defined the notion of polymorphic invariance [Abr86]. A property is polymorphically invariant if, when it holds in one instance, it holds in all. Abramsky showed, for example, that a particular strictness analysis was polymorphically invariant. Unfortunately this does not go far enough. Polymorphic invariance guarantees that the result of the analysis of any monomorphic instance of a polymorphic function can be used in all instances, but not that the abstraction of the function can. An example of this distinction appears in [Hug89a].
There seems to be a fundamental dichotomy in computing between clarity and efficiency. From the programmer's point of view it is desirable to break a problem into sub-problems and to tackle each of the sub-problems independently. Once these have been solved, the solutions are combined to provide a solution to the original problem. If the decomposition has been well chosen, the final solution will be a clear implementation of the algorithm, but because of intermediate values passing between the various modules, whether they are functions and procedures or separate processes connected by pipes, the solution is unlikely to be as efficient as possible. Conversely, if efficiency is considered paramount, many logically separate computations may need to be performed together. As a consequence, the algorithm will be reflected less directly in the program, and correctness may be hard to ascertain. Thus, in most programs we find a tradeoff between these conflicting requirements of clarity and efficiency.
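The tradeoff can be seen even in a tiny Haskell example (of my own construction, not from the thesis): the clear version composes two independently written stages and pays for an intermediate list; the efficient version fuses them into a single loop at the cost of obscuring the decomposition.

```haskell
-- Clear, modular version: each sub-problem is solved separately,
-- with an intermediate list passed between the two stages.
sumOfSquaresClear :: [Int] -> Int
sumOfSquaresClear xs = sum (map (\x -> x * x) xs)

-- Efficient version: the two computations are performed together in
-- one accumulating loop, avoiding the intermediate list but
-- reflecting the algorithm less directly.
sumOfSquaresFused :: [Int] -> Int
sumOfSquaresFused = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x * x) xs
```

Both compute the same function; automatic transformations such as partial evaluation aim to let the programmer write the first and obtain the second.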
An extreme form of modularisation is to write programs in an interpretive style, where flow of control is determined by stored data. Programs in this style are comparatively easy to prove correct and to modify when requirements change, but are well known to have extremely poor run-time behaviour, often an order of magnitude slower than their non-interpretive counterparts. Because of this, the interpretive style tends to be used infrequently and in non-time-critical contexts. Instead, flow of control is determined deep within the program, where a reasonable level of efficiency may be obtained.
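A minimal illustration of the interpretive style, again sketched in Haskell with invented names: the computation is held as stored data, and a generic evaluator dispatches on that data, whereas the non-interpretive counterpart fixes the control flow directly in the code.

```haskell
-- Interpretive style: the program is a data structure, and flow of
-- control in `eval` is determined entirely by that stored data.
data Expr = Lit Int
          | Add Expr Expr
          | Mul Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- The non-interpretive counterpart of one particular stored program:
-- control flow is fixed in the code itself, with no dispatch.
direct :: Int
direct = (1 + 2) * 3
```

Here `eval (Mul (Add (Lit 1) (Lit 2)) (Lit 3))` and `direct` agree, but the interpretive version pays for constructing and repeatedly inspecting the `Expr` value. Partially evaluating `eval` with respect to a known `Expr` is exactly the kind of transformation that recovers the direct version.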