The goal of the chip design problem is the specification in absolute units of the precise geometric content of each mask in a particular fabrication process, such that the integrated circuits created by the application of the masks exhibit the intended function. This specification forms the interface between the design and fabrication domains. The intersection of shapes in the set of masks characterises the circuit by determining the nature, location and size of devices and wiring. At the final stage the specification will be a stream of data suitable for either driving mask making equipment or controlling a direct write device.
As the scale of integration increases, so does the complexity of the design problem. The huge volumes of data involved stretch to the limit the human capability to specify a functionally correct layout in reasonable time. An individual's ability to conceive a complex design and successfully translate it into the very detailed information required for mask making is doubtful. The introduction of abstraction is an essential element in chip design. Abstraction is the process of symbolising concepts by extracting common qualities from individual cases. The power of computer automation can then be applied to manipulate the symbols of abstractions.
Axioms
Properties of materials and devices for a particular process can be derived empirically from observations and measurements made in the fabrication plant. These properties are taken to be axiomatic to the chip design problem. There are two sets of such axioms. The first set describes the properties and behaviour of semiconductor and wiring materials in particular configurations.
The purpose of this chapter is to present a detailed description of the design and implementation of an automated layout system based on the principle of three-dimensional cell tessellation proposed in the previous chapter. The description begins with a statement of the experimental aims and required features of the layout system and continues with details of circuit representation. This is followed by details of the design and implementation of cell and layout representation, and of the abutment algorithms. A number of important properties which abutting layouts must possess are highlighted in the details of the algorithms. Finally, the control of the abutment process is discussed through the construction of merit functions and mechanisms for combining elements of the algorithms.
Experimental requirements
A number of requirements of the layout system are stated at the outset. These are:
Ability to specify a range of cell shapes,
Ability to specify the connection interface,
Flexible merit function and layout control,
Full layout automation requiring no manual editing,
Ability to handle circuits of non-trivial size,
Extraction of layout data for comparison,
A high degree of automated layout validation.
Although the cell model for abutment experiments is primarily intended to be the cube, the system is nevertheless required to be sufficiently general to be able to perform layout using one of a number of cell models. Specification of the cell model includes both the shape of the cell and the number of connections per face. An important aspect of the experiment is the determination of suitable merit functions and layout control.
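The chapter leaves the precise form of the merit function open, to be determined experimentally. As a purely hypothetical illustration (none of the term names or weights below come from the thesis), one common shape for such a function is a weighted sum of layout costs, with candidate placements compared by score:

```python
# Hypothetical sketch only: a merit function for ranking candidate cell
# placements as a weighted sum of cost terms, lower scores preferred.
# The cost terms ("wire_length", "area") and weights are invented for
# illustration and are not taken from the layout system described here.

def merit(costs, weights):
    # Weighted sum over whichever cost terms the weights mention.
    return sum(weights[term] * costs[term] for term in weights)

weights = {"wire_length": 1.0, "area": 0.5}
candidates = [
    {"wire_length": 4, "area": 6},
    {"wire_length": 7, "area": 2},
]

# Choose the candidate placement with the lowest merit score.
best = min(candidates, key=lambda c: merit(c, weights))
```

Making the weights explicit parameters is one way to satisfy the "flexible merit function and layout control" requirement: the control strategy can be varied between experiments without changing the abutment algorithms themselves.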
Having seen some of the principles of partial evaluation we now consider practicalities. In this chapter we will study the standard algorithm used in partial evaluation and introduce an extended example which we develop throughout the thesis. The material of this chapter draws very heavily on the experience of the DIKU group and much of the material presented here may be found in [JSS85], [Ses86] and [JSS89].
Partial evaluation has been attempted in a number of different programming paradigms. The earliest work used LISP-like languages because programs in such languages can easily be treated as data. In particular, the first self-applicable partial evaluator was written in a purely functional subset of first-order, statically scoped LISP. Since then work has been done to incorporate other language features of LISP-like languages including, for example, global variables [BD89]. A self-applicable partial evaluator for a term rewriting language has been achieved [Bon89], and more recently a higher-order λ-calculus version has been developed [Gom89].
Because of these successes, partial evaluation is sometimes linked with functional languages. Indeed the word “evaluation” itself is expression orientated. However, partial evaluation has also become popular in logic languages, and in Prolog in particular. Kursawe, investigating “pure partial evaluation”, shows that the principles are the same in both the logic and functional paradigms [Kur88]. Using the referentially opaque clause primitive, very compact interpreters (and hence partial evaluators) can be written. However, it is not clear how the clause predicate itself should be handled by a partial evaluator and, hence, whether this approach can ever lead to self-application. Other “features” of Prolog that can cause problems for partial evaluation are the cut and negation-by-failure.
We have studied some of the theoretical aspects of using projections in binding-time analysis and how, again in theory, the dependent sum construction can be used to define the run-time arguments. In this chapter we will draw these threads together in the implementation of a projection-based partial evaluator. The current version is written in LML [Aug84] and not in PEL itself, so it is not yet self-applicable. Indeed there are still some problems about self-application of LML-like languages, which we discuss in the concluding chapter.
One slightly surprising feature is that the moderately complicated dependent sum construction turns out to be almost trivial to implement. In contrast, however, the binding-time analysis is fairly intricate because of the complexity involved in representing projections. Of necessity, parts of the following will interest only those intending to produce an implementation themselves. Anyone uninterested in the gory details should skim much of this chapter and turn to the final section where we develop the extended example.
General
A PEL program, as defined in Chapter 4, consists of type definitions followed by a series of function definitions. At the end of these is an expression to be evaluated. The value of this expression gives the value of the whole program. When we intend to partially evaluate a program we present it in exactly the same form except that the final expression is permitted to have free variables. These free variables indicate non-static data.
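The effect of presenting a program with free variables in its final expression can be illustrated with the classic power example. The sketch below is a Python analogue rather than PEL (the `power` and `specialise_power` names are ours): the exponent plays the role of static data, the base is the free, non-static variable, and specialisation unfolds the recursion that depends only on the static part.

```python
# Python analogue (not PEL) of partial evaluation on the power function.
# 'n' is static data; 'x' corresponds to a free variable in the final
# expression, i.e. data not available until run-time.

def power(x, n):
    # The general program: both arguments supplied at run-time.
    return 1 if n == 0 else x * power(x, n - 1)

def specialise_power(n):
    # A 'partial evaluator' for this one program: since n is known now,
    # the recursion on n can be unfolded, leaving a residual function
    # that performs only the multiplications involving x.
    def residual(x):
        result = 1
        for _ in range(n):
            result *= x
        return result
    return residual

cube = specialise_power(3)  # residual function for static n = 3
```

Running `cube` on run-time data then agrees with running the general program on the full argument, which is exactly the correctness condition a partial evaluator must preserve.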
There are two almost separate issues to be addressed when we consider polymorphic languages: How to perform polymorphic binding-time analysis, and how to specialise polymorphic functions. We address both here.
Strachey identified two flavours of polymorphism [Str67] which he styled parametric and ad hoc. We will only consider parametric polymorphism, as arises in the widely used Hindley-Milner type system, for example. As ad hoc polymorphism may be reduced to parametric polymorphism by introducing higher-order functions [WB89], this decision is consistent with the thrust of the thesis where we have been considering a first-order language only.
A polymorphic function is a collection of monomorphic instances which, in some sense, behave the same way. Ideally, we would like to take advantage of this uniformity to analyse (and perhaps even specialise) a polymorphic function once, and then to use the result in each instance. Up to now the only work in polymorphic partial evaluation has been by Mogensen [Mog89]. However, with his polymorphic instance analysis each instance of a polymorphic function is analysed independently of the other instances and, as a result, a single function may be analysed many times.
To capture the notion of uniformity across instances Abramsky defined the notion of polymorphic invariance [Abr86]. A property is polymorphically invariant if, when it holds in one instance, it holds in all. Abramsky showed, for example, that a particular strictness analysis was polymorphically invariant. Unfortunately this does not go far enough. Polymorphic invariance guarantees that the result of the analysis of any monomorphic instance of a polymorphic function can be used in all instances, but not that the abstraction of the function can. An example of this distinction appears in [Hug89a].
There seems to be a fundamental dichotomy in computing between clarity and efficiency. From the programmer's point of view it is desirable to break a problem into subproblems and to tackle each of the subproblems independently. Once these have been solved, the solutions are combined to provide a solution to the original problem. If the decomposition has been well chosen, the final solution will be a clear implementation of the algorithm, but because of intermediate values passing between the various modules, whether they are functions and procedures or separate processes connected by pipes, the solution is unlikely to be as efficient as possible. Conversely, if efficiency is considered paramount, many logically separate computations may need to be performed together. As a consequence, the algorithm will be reflected less directly in the program, and correctness may be hard to ascertain. Thus, in most programs we find a tradeoff between these conflicting requirements of clarity and efficiency.
An extreme form of modularisation is to write programs in an interpretive style, where flow of control is determined by stored data. Programs in this style are comparatively easy to prove correct and to modify when requirements change, but are well known to have extremely poor run-time behaviour, often an order of magnitude slower than their non-interpretive counterparts. Because of this, the interpretive style tends to be used infrequently and only in non-time-critical contexts. Instead, flow of control is determined deep within the program, where a reasonable level of efficiency may be obtained.
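The contrast can be made concrete with a small Python sketch (the function names are ours, for illustration): the interpretive version looks up every instruction in stored data and pays a dispatch cost at each step, while the direct version fixes the same control flow in the program text.

```python
# Interpretive style versus direct style for the same computation.

def run(program, x):
    # Tiny interpreter: control flow is determined by the stored
    # 'program' data, with a dictionary lookup and an indirect call
    # at every step -- flexible, but slower.
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    for op, k in program:
        x = ops[op](x, k)
    return x

def direct(x):
    # The same computation with control flow fixed in the program text:
    # no dispatch overhead remains.
    return (x + 1) * 3

program = [("add", 1), ("mul", 3)]
```

Partial evaluation aims to reconcile the two: specialising `run` with respect to a known `program` should, in effect, recover the direct version automatically.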
The static projection tells us which part of a function's argument will be present during partial evaluation. In any particular call of the function, this part of the argument is used in the production of a residual function. However, this still leaves the question: which part of the argument should the residual function be given at run-time? Obviously we could pass the whole argument if we wanted to, but we can do a lot better. After all, we assume that the partial evaluator will have taken the static part of the argument into account in producing the residual function. It ought to be unnecessary to supply the residual function with the same information all over again.
We need a way to select the run-time information. The original argument to a function ƒ must be factorised, or decomposed, into static and dynamic factors, and this factorisation should be as complete as possible. That is, the amount of static information which is also regarded as dynamic should be minimised. Then, when we pass the dynamic argument to the residual function, we will be passing as little information at run-time as possible. There are, of course, many possible factorisation methods. Some produce an exact decomposition, while others do not, in that their factors carry extra junk.
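As a concrete (and deliberately simplified) Python analogue, suppose the argument is a pair whose first component is static and whose second is dynamic. The functions below are invented for illustration: the static projection blanks out the dynamic field, its complement selects exactly the information the residual function still needs, and together they factorise the argument.

```python
# Illustrative factorisation of an argument into static and dynamic
# factors. The argument is a pair (op, operand): the operation name is
# static, the operand dynamic. All names here are hypothetical.

def static_proj(arg):
    # Static projection: keep the static field, discard the dynamic one.
    op, _ = arg
    return (op, None)

def dynamic_factor(arg):
    # Complementary factor: exactly the information not already used
    # during specialisation, to be supplied at run-time.
    _, operand = arg
    return operand

def interpret(arg):
    # The general function, consuming the whole argument.
    op, operand = arg
    return operand + 1 if op == "inc" else operand * 2

def specialise(static_part):
    # Given only the static factor, produce a residual function that
    # expects only the dynamic factor.
    op, _ = static_part
    if op == "inc":
        return lambda operand: operand + 1
    return lambda operand: operand * 2

arg = ("inc", 41)
residual = specialise(static_proj(arg))
```

Here the factorisation is exact: the static and dynamic factors share nothing, so the residual function is never handed information the partial evaluator has already consumed. The two methods discussed next generalise this idea to arguments whose static and dynamic parts are interleaved throughout a data structure rather than cleanly paired.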
We will look at two methods in this chapter. While the first does not produce an exact factorisation of the original argument, it is based on very familiar constructions and is interesting in its own right. The second method, which is exact, arises as a generalisation of the first, and provides a practical application of some fairly deep mathematics.