This book is about the design of asynchronous VLSI circuits based on a programming and compilation approach. It introduces handshake circuits as an intermediate architecture between the algorithmic programming language Tangram and VLSI circuits.
The work presented in this book grew out of the project “VLSI programming and compilation into asynchronous circuits”, conducted at Philips Research Laboratories Eindhoven since 1986. Our original motivation was to increase the productivity of VLSI design by treating circuit design as a programming activity. We chose asynchronous circuits as the target for automatic silicon compilation, because asynchronous circuits simplified the translation process and made it easier to take advantage of the abundantly available parallelism in VLSI. Later we discovered that the potential for low power consumption inherent in asynchronous circuits may turn out to be highly relevant to battery-powered products.
The core of this book is about handshake circuits. A handshake circuit is a network of handshake components connected by handshake channels, along which components interact exclusively by means of handshake signaling. The core presents a theoretical model of handshake circuits, a compilation method, and a number of VLSI-implementation issues; it is sandwiched between an informal introduction to VLSI programming and handshake circuits on the one side and a discussion of practical experiences, including tooling and chip evaluations, on the other.
The most interesting operation on handshake processes is parallel composition. Parallel composition is defined only for connectable processes. Connectability of handshake processes captures the idea that ports form the unit of connection (as opposed to individual port symbols), and that a passive port can only be connected to a single active port and vice versa. A precise definition will be given later.
The communication between connectable handshake processes is asynchronous: the sending of a signal by one process and the reception of that signal by another process are two distinct events. Asynchronous communication is more complicated than synchronized communication, because of the possible occurrence of interference. The concept of interference with respect to voltage transitions has been mentioned in Section 0.1. Interference with respect to symbols occurs when one process sends a symbol and the other process is not ready to receive it. The receptiveness of handshake processes and the imposed handshake protocol exclude the possibility of interference. We are therefore allowed to apply the simpler synchronized communication in the definition of parallel composition of handshake processes.
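The two ideas above — ports as the unit of connection, and passive-to-active pairing — can be made concrete in a small sketch. This is our own illustrative model, not the book's formal definition: a port is just a name with a polarity, and two processes are connectable when every shared port name pairs one passive port with one active port.

```python
from dataclasses import dataclass

# Hypothetical sketch: Port, connectable, and the example processes
# below are our own names, not the book's notation.

@dataclass(frozen=True)
class Port:
    name: str
    active: bool  # True = active port, False = passive port

def connectable(ports_p, ports_q):
    """Ports form the unit of connection: for each port name shared by
    the two processes, exactly one side must be active and the other
    passive."""
    p = {pt.name: pt for pt in ports_p}
    q = {pt.name: pt for pt in ports_q}
    shared = p.keys() & q.keys()
    return all(p[n].active != q[n].active for n in shared)

# Process P offers passive port 'a'; process Q activates 'a'.
P_ports = [Port("a", active=False), Port("b", active=True)]
Q_ports = [Port("a", active=True)]
print(connectable(P_ports, Q_ports))   # True: polarities match on 'a'
print(connectable(P_ports, P_ports))   # False: 'a' passive on both sides
```

In a fuller model, connectability would also require that the shared ports carry the same symbol sets; the polarity check alone already captures why a passive port pairs with exactly one active port.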
Another complication is, however, the possibility of divergence: an unbounded amount of internal communication, which cannot be distinguished externally from deadlock. From an implementation viewpoint divergence is undesirable: it forms a drain on the power source, without being productive.
The external behavior of the parallel composition of connectable handshake processes P and Q will be denoted by P ∥ Q, which is again a handshake process.
Tangram is a VLSI-programming language based on CSP, and has much in common with the programming language OCCAM [INM89] (see Section 2.7 for some of the differences). The main construct of Tangram is the command. Commands are either primitive commands, such as a?x and x := x + 1, or composite commands, such as R; S and R ∥ S, where R and S are commands themselves.
Execution of a command may result in a number of communications with the environment through external ports. Another form of interaction with the environment is the reading from and writing into external variables. A Tangram program is a command without external variables, prefixed by an explicit definition of its external ports.
Not all compositions of commands are valid in Tangram. For instance, in a sequential composition the two constituent commands must agree on the input/output direction of their common ports. Also, two commands composed in parallel may not write concurrently into a common variable. Similarly, concurrent reading from and writing into a common variable is not allowed. Section 6.1 defines the syntax of Tangram, including these composition rules. The meaning of each command is described informally.
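The two composition rules just mentioned can be checked mechanically once each command is summarized by its port directions and its read/write variable sets. The following sketch uses made-up summaries of this kind (not Tangram's actual syntax or Section 6.1's definitions):

```python
# Hedged sketch of the composition rules described above, operating on
# invented read/write summaries of commands rather than real Tangram.

def seq_ok(dirs_r, dirs_s):
    """Sequential composition R; S: the two commands must agree on the
    input/output direction of their common ports.
    dirs_*: dict mapping port name -> '?' (input) or '!' (output)."""
    return all(dirs_r[p] == dirs_s[p] for p in dirs_r.keys() & dirs_s.keys())

def par_ok(reads_r, writes_r, reads_s, writes_s):
    """Parallel composition R || S: no concurrent writes to a common
    variable, and no concurrent reading from and writing into a common
    variable."""
    return (not (writes_r & writes_s)
            and not (writes_r & reads_s)
            and not (reads_r & writes_s))

# a?x ; a?y : both commands use port 'a' for input -> allowed.
print(seq_ok({"a": "?"}, {"a": "?"}))       # True
# (x := 0) || (x := 1) : concurrent writes to x -> rejected.
print(par_ok(set(), {"x"}, set(), {"x"}))   # False
# (y := x) || (x := 1) : concurrent read and write of x -> rejected.
print(par_ok({"x"}, {"y"}, set(), {"x"}))   # False
```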
For a subset of the Tangram commands the handshake-process denotations are given in Section 6.3. This subset is referred to as Core Tangram.
Tangram
The main syntactic constructs of Tangram are program, command, guarded-command set, and expression. With each construct we associate a so-called alphabet structure: a set of typed ports and variables.
Handshake circuits are proposed as an intermediary between communicating processes (Tangram programs) and VLSI circuits. Chapter 7 describes the translation of Tangram programs into handshake circuits. This chapter is concerned with the realization of handshake circuits as efficient and testable VLSI circuits. First we observe that the fine-grained parallelism available in VLSI circuits matches the fine-grained concurrency in handshake circuits nicely. The mapping of handshake circuits to VLSI circuits can therefore be relatively direct.
A rather naive mapping is suggested by the following correspondence:
a channel corresponds to a set of wires, one per symbol;
an event with name a corresponds to a voltage transition along wire a;
each handshake component corresponds to a VLSI circuit that satisfies the specification at the transition level.
There is no doubt that the above mapping can result in functional circuits. In general, however, the resulting circuits will be prohibitive in size, poor in performance, probably hard to initialize, and impractical to test for fabrication faults. Concerns for circuit size, performance, initialization and testability are therefore recurring themes in this chapter.
A full treatment of all relevant VLSI-realization issues is beyond the scope of this monograph. Issues that directly relate to (properties of) handshake circuits have been selected for a relatively precise treatment; other topics are sketched more briefly. This chapter discusses:
peephole optimization: the replacement of subcircuits by cheaper ones;
relaxation of the receptiveness requirement of handshake processes;
So far the quiescent trace set of a handshake process was specified in one of the following forms: by enumeration, by a predicate, by a state graph, or by parallel composition of other handshake processes.
For many handshake processes none of the above forms may be convenient. An example of such a process is the process that first behaves like P and then, “after successful termination of P”, behaves like Q. Of course, such sequential composition of the handshake processes P and Q requires a notion of successful termination of a process. A sequential handshake process is a handshake process in which that notion is incorporated.
The aim of this chapter is to develop a model for sequential handshake processes and a calculus for these processes. An important application of this calculus is the description of the handshake components required for the compilation of Tangram. Another application is the semantics of Tangram itself.
Sequential handshake processes
A sequential handshake process is a handshake process, some of whose traces are designated as terminal traces, i.e. traces that lead to successful termination. In a sequential composition these terminal traces can be prefixed to traces of the subsequent sequential handshake process.
Let T denote the set of quiescent traces and let U denote the set of terminal traces of sequential handshake process P. Sets T and U must satisfy a number of conditions, which are introduced informally.
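Ignoring those closure conditions, the basic mechanism of terminal-trace prefixing can be sketched directly on trace sets. In this simplification (our own, not the book's model), a sequential process is just a pair (T, U) of quiescent and terminal traces, each trace a tuple of symbols:

```python
# Simplified sketch: a sequential handshake process is modeled only by
# its quiescent traces T and terminal traces U; the conditions on T and
# U from the text are not enforced here.

def seq_compose(Tp, Up, Tq, Uq):
    """Sequential composition P; Q: terminal traces of P are prefixed
    to the traces of Q; quiescent traces of P remain quiescent, since
    P may deadlock before terminating."""
    T = set(Tp) | {u + t for u in Up for t in Tq}
    U = {u + v for u in Up for v in Uq}
    return T, U

# P performs one handshake on 'a' and terminates; Q does the same on 'b'.
Tp, Up = {()}, {("a0", "a1")}
Tq, Uq = {()}, {("b0", "b1")}
T, U = seq_compose(Tp, Up, Tq, Uq)
print(("a0", "a1", "b0", "b1") in U)   # True: P's handshake precedes Q's
```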
The topic of this chapter is the translation of Tangram programs into handshake circuits. Let T be a Tangram program. In Chapter 6 we have defined the meaning of T as the handshake process H.T. The translation to handshake circuits is presented as a mathematical function C, from the set of Tangram programs to the set of handshake circuits. Thus, C.T is a handshake circuit, and handshake process ∥.(C.T) is the behavior of that circuit. Function C is designed such that

∥.(C.T) = H.T

where H.T was defined in Definition 6.7. That is, the translation preserves all the nondeterminism of the program. From a practical viewpoint it is sufficient to realize

∥.(C.T) ⊑ H.T

in which the behavior of the handshake circuit is a refinement of the handshake behavior of the Tangram program. It may be expected that this relaxed form results in cheaper handshake circuits. The advantage of defining the most nondeterministic handshake circuit of T is that alternative translation functions that synthesize more deterministic circuits can readily be derived from it. Some of these alternatives will be indicated.
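If a handshake process is summarized only by its quiescent-trace set, one crude way to picture this relaxed requirement is trace-set containment. This is our simplification for illustration; the book's refinement order is defined on full handshake processes, not bare trace sets:

```python
# Crude illustration (our simplification): refinement pictured as
# containment of quiescent-trace sets.

def refines(impl_traces, spec_traces):
    """Every quiescent trace of the implementation is also a quiescent
    trace of the specification: the implementation may resolve
    nondeterminism, but may not exhibit new behavior."""
    return set(impl_traces) <= set(spec_traces)

# A more deterministic circuit drops some alternatives of the spec.
spec = {(), ("a0", "a1"), ("b0", "b1")}   # spec may handshake on 'a' or 'b'
impl = {(), ("a0", "a1")}                 # impl always chooses 'a'
print(refines(impl, spec))   # True
print(refines(spec, impl))   # False
```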
Also, we have chosen to translate a Tangram program into the handshake circuit with the most internal parallelism. In particular, all guards of a guarded command are evaluated in parallel, as are the two subexpressions of a binary operation. In general this leads to the fastest implementation, but not necessarily the most area-efficient one. If the VLSI programmer wishes a more sequential handshake circuit, he can specify this at the Tangram level.
A handshake is a means of synchronization among communicating mechanisms. In its simplest form it involves two mechanisms connected by a pair of so-called links, one for sending signals and one for receiving signals. The sending of a signal and the reception of a signal are atomic actions, and constitute the possible events by which a mechanism can interact with its environment.
A signal sent by one mechanism is bound to arrive at the other mechanism, after a finite, non-zero amount of time. Hence, this form of communication is asynchronous; the sending and the arrival of a signal correspond to two distinct events. It is assumed that a link allows at most one signal to be on its way. Consequently, a signal sent must arrive at the other end of the link before the next one can be sent. When the traveling time of a signal along the link is unknown, the only way to know that a signal has arrived at the other side is to be so informed by the other mechanism. The other link in the pair performs the acknowledgement.
Such a causally ordered sequence of events is called a handshake. The two mechanisms involved play different (dual) roles in a handshake. One mechanism has the active role: it starts with the sending of a request and then waits for an acknowledgement. The other mechanism has the passive role: it waits for a request to arrive and responds by acknowledging.
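The request/acknowledge sequence can be simulated with two threads, one per mechanism. In this sketch (an assumed setup, not the book's circuit model) each link is a one-place queue, which captures the constraint that a link allows at most one signal to be on its way:

```python
import threading
import queue

# Toy simulation of one handshake between an active and a passive
# mechanism; 'req' and 'ack' model the two links of the pair.
req = queue.Queue(maxsize=1)   # active -> passive
ack = queue.Queue(maxsize=1)   # passive -> active
log = []

def active():
    log.append("send req")
    req.put("req")             # start the handshake with a request
    ack.get()                  # then wait for the acknowledgement
    log.append("got ack")

def passive():
    req.get()                  # wait for a request to arrive
    log.append("send ack")
    ack.put("ack")             # respond by acknowledging

t = threading.Thread(target=passive)
t.start()
active()                       # run the active role in the main thread
t.join()
print(log)   # ['send req', 'send ack', 'got ack']
```

The log order is forced by the blocking queue operations, mirroring the causal order of a handshake: request before acknowledgement, acknowledgement before the active side proceeds.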
Too often there is thought to be a dichotomy between science and engineering: science as a quest for knowledge and understanding, and engineering as the art of constructing useful objects. This book, based on the author's experience in leading a silicon compilation project at Philips Research, is exceptional in that it very convincingly demonstrates the effectiveness of combining the scientific method with sound engineering practices.
Aimed at bridging the gap between program construction and VLSI design, the research reported in this book extends over an unusually wide spectrum of disciplines, ranging from computer science and electrical engineering to logic and mathematics. In this exciting arena we encounter such topics as the power dissipation of an assignment statement, the mathematical theory of handshake circuits, the correctness proof of a compiler, and the problem of circuit initialization without reset wires, to mention just a few.
Such a multi-faceted study can be successful only if it is able to demonstrate a clear separation of concerns. In this respect, Kees van Berkel does an admirable job: his concept of handshake circuits provides an extremely elegant interface between algorithm design on the one hand and circuit implementations on the other. This separation between ‘what’ and ‘how’, which many researchers and practitioners find difficult to apply, turns out to be amazingly fruitful, as the readers of this book are encouraged to discover for themselves. In my opinion we are, with the publication of this book, witnessing a major step forward in the development of the discipline of VLSI programming.
This book is about the design of digital VLSI circuits. Whereas LSI circuits perform basic functions such as multiplication, control, storage, and digital-to-analog conversion, VLSI circuits contain complex compositions of these basic functions. In many cases all data and signal processing in a professional or consumer system can be integrated on a few square centimeters of silicon. Examples of such “systems on silicon” can be found in:
Compact Disc (CD) players,
Compact Disc Interactive (CDI) players,
Digital Compact Cassette (DCC) players,
Digital Audio Broadcast (DAB) receivers,
radios and mobile telephones,
High-Definition TeleVision (HDTV) sets,
video recorders,
processors,
car-navigation systems,
processors, and
test and measurement systems.
These systems generally process analog as well as digital signals, but the digital circuits dominate the surface of an IC. The memory needed for storing intermediate results often covers a significant fraction of the silicon area.
Systems on silicon tend to become more complex and to increase in number. The increase in complexity follows from advances in VLSI technology and the rapid growth of the number of transistors integrated on a single IC. The constant reduction of the costs of integration makes integration economically attractive for an increasing number of systems. Also, the rapid succession of generations of a single product increases the pressure on design time. The ability to integrate systems on silicon effectively, efficiently, and quickly has thus become a key factor in the global competition in both consumer and professional electronic products.
This book pursues a programming approach to the design of digital VLSI circuits. In such an approach the VLSI-system designer constructs a program in a suitable high-level programming language. When he is satisfied with his program, the designer invokes a so-called silicon compiler which translates this program into a VLSI-circuit layout.
The choice of the programming language is a crucial one, for it largely determines the application area, the convenience of design, and the efficiency of the compiled circuits. A good VLSI-programming language:
0. is general purpose in that it allows the description of all digital functions;
1. encourages the systematic and efficient design of programs by abstracting from circuit, geometry and technology details;
2. is suitable for automatic translation into efficient VLSI circuits and test patterns.
Below follows a motivation for these requirements.
0. A wide range of applications is required to justify the investment in tools and training.
1. A major gain in design productivity can be expected from designing in a powerful high-level language. Furthermore, system designers do not need to resort to VLSI specialists. Systematic design methods, supported by mathematical reasoning, are required to deal with the overwhelming complexity involved in the design of VLSI systems.
2. Automatic translation to VLSI circuits avoids the introduction of errors at the lower abstraction levels. It also becomes attractive to design alternative programs and compare the translated circuits in costs (circuit area) and performance (speed and power).
The virtues of viewing the lexicon as an inheritance network are its succinctness and its tendency to highlight significant clusters of linguistic properties. From its succinctness follow two practical advantages, namely its ease of maintenance and modification. In this chapter we present a feature-based foundation for lexical inheritance. We shall argue that the feature-based foundation is both more economical and expressively more powerful than non-feature-based systems. It is more economical because it employs only mechanisms already assumed to be present elsewhere in the grammar (viz., in the feature system), and it is more expressive because feature systems are more expressive than other mechanisms used in expressing lexical inheritance (cf. DATR). The lexicon furthermore allows the use of default inheritance, based on the ideas of default unification, defined by Bouma (1990a).
These claims are buttressed in sections sketching the opportunities for lexical description in feature-based lexicons in two central lexical topics: inflection and derivation. Briefly, we argue that the central notion of paradigm may be defined directly in feature structures, and that it may be more satisfactorily (in fact, immediately) linked to the syntactic information in this fashion. Our discussion of derivation is more programmatic; but here, too, we argue that feature structures of a suitably rich sort provide a foundation for the definition of lexical rules.
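The idea of default inheritance mentioned above — word-specific information overriding class-level defaults — can be sketched in miniature. This is only a flat approximation: real feature structures are typed and recursive, and Bouma's definition of default unification is considerably more subtle.

```python
# Minimal sketch of default unification over flat attribute-value
# dicts (our simplification, not Bouma's full definition).

def default_unify(strict, default):
    """Strict (word-specific) information wins; default information
    fills in only the attributes the strict structure leaves open."""
    result = dict(default)
    result.update(strict)
    return result

# A verb class supplies defaults; the entry overrides 'subcat'.
verb_class = {"cat": "verb", "aux": False, "subcat": ("subj",)}
entry = {"orth": "arbeiten", "subcat": ("subj", "obj")}
merged = default_unify(entry, verb_class)
print(merged["subcat"])   # ('subj', 'obj'): the entry's value survives
print(merged["cat"])      # 'verb': inherited from the class
```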
We illustrate theoretical claims in application to German lexical structure.
Natural language lexicons form an obvious application for techniques involving default inheritance developed for knowledge representation in artificial intelligence (AI). Many of the schemes that have been proposed are highly complex – simple tree-form taxonomies are thought to be inadequate, and a variety of additional mechanisms are employed. As Touretzky et al. (1987) show, the intuitions underlying the behaviour of such systems may be unstable, and in the general case they are intractable (Selman and Levesque, 1989).
It is an open question whether the lexicon requires this level of sophistication – by sacrificing some of the power of a general inheritance system one may arrive at a simpler, more restricted, version, which is nevertheless sufficiently expressive for the domain. The particular context within which the lexicon described here has been devised seems to permit further reductions in complexity. It has been implemented as part of the ELU unification grammar development environment for research in machine translation, comprising parser, generator, lexicon, and transfer mechanism.
Overview of Formalism
An ELU lexicon consists of a number of ‘classes’, each of which is a structured collection of constraint equations and/or macro calls encoding information common to a set of words, together with links to other more general ‘superclasses’. Lexical entries are themselves classes, and any information they contain is standardly specific to an individual word; lexical and non-lexical classes differ in that analysis and generation take only the former as entry points to the lexicon.
Introduction

In setting up a lexical component for natural language processing systems, one finds that a considerable amount of information is often repeated across sets of word entries. To make the task of grammar writing more efficient, shared information can be expressed in the form of partially specified templates and distributed to relevant entries by inheritance. Shared information across sets of partially specified templates can be factored out and conveyed using the same technique. This makes it possible to avoid redefining the same information structures, thus reducing a great deal of redundancy in the specification of word forms. For example, general properties of intransitive verbs concerning subcategorization and argument structure can be simply stated once, and then inherited by lexical entries which provide word specific information, e.g. orthography, predicate sense, aktionsart, selectional restrictions. Likewise, properties which are common to all verbs (e.g. part of speech, presence of a subject) or subsets of the verb class (presence of a direct object for transitive and ditransitive verbs) can be defined as templates which subsume all members of the verb class or some subset of it. This approach to word specification provides a highly structured organization of the lexicon according to which the properties of related word types as well as the relation between word types and specific word forms are expressed in terms of structure sharing and inheritance (Flickinger, Pollard and Wasow, 1985; Flickinger, 1987; Pollard and Sag, 1987, pp. 191–209).
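The template hierarchy just described can be sketched as follows. The template names, their properties, and the merging strategy are our own invented illustration of the general mechanism, not any particular system's formalism:

```python
# Hypothetical sketch of lexical template inheritance: each template is
# a list of more general superclasses plus local properties; an entry's
# full description is assembled by walking up the hierarchy.

templates = {
    "top":          ([], {}),
    "verb":         (["top"], {"pos": "V", "subj": True}),
    "intransitive": (["verb"], {"subcat": ("subj",)}),
    "transitive":   (["verb"], {"subcat": ("subj", "obj")}),
}

def expand(name, entry_info=None):
    """Collect inherited properties; local and word-specific properties
    override those inherited from more general templates."""
    supers, props = templates[name]
    merged = {}
    for s in supers:
        for k, v in expand(s).items():
            merged.setdefault(k, v)   # keep the first (most specific) value
    merged.update(props)
    if entry_info:
        merged.update(entry_info)
    return merged

# The entry for 'sleep' supplies only word-specific information.
print(expand("intransitive", {"orth": "sleep", "sense": "sleep_1"}))
```

General verb properties (part of speech, presence of a subject) are stated once on the `verb` template and reach every entry by inheritance, exactly the redundancy reduction the text describes.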
This chapter and those following describe the LKB, a lexical knowledge base system which has been designed as part of the ACQUILEX project to allow the representation of syntactic and semantic information semi-automatically extracted from machine readable dictionaries (MRDs) on a large scale. An overview of the ACQUILEX project is given by Briscoe (1991).
Although there has been previous work on building lexicons for Natural Language Processing (NLP) systems from MRDs (e.g. Carroll and Grover, 1989), most attempts at extracting semantic information have not made use of a formally defined representation language; typically a semantic network or a frame representation has been suggested, but the interpretation and functionality of the links has been left vague. Several networks based on taxonomies extracted from MRDs have been built (following Amsler, 1980) and these are useful for tasks such as sense-disambiguation, but are not directly utilisable as NLP lexicons. For a lexicon to be genuinely (re)usable, a declarative, formally specified, representation language is essential. A large lexicon has to be highly structured; it is necessary to be able to group lexical entries and to represent relationships between them, both in order to capture linguistic generalisations and to achieve consistency and conciseness. But, unless these notions of structure are properly specified, a lexicon based on them is in danger of being incomprehensible except (perhaps) to its creators.
In this chapter we discuss how the typed feature structure formalism described in the previous chapters is augmented with a default inheritance system. We first introduce our use of defaults informally and illustrate the sort of taxonomic data that motivated the design of our system. We then discuss some of the formal issues involved in introducing defaults into the representation language.
Taxonomies, Lexical Semantics and Default Inheritance
Our approach to default inheritance in the LKB has been largely motivated by consideration of the taxonomies which may be extracted automatically from MRDs, although the default inheritance mechanism can be used for other purposes, as discussed by Sanfilippo (this volume). In this section we introduce this concept of taxonomy, which is discussed in more detail by Vossen and Copestake (this volume). The notion of taxonomy that has been used in work on MRDs such as that by Amsler (1981), Chodorow et al. (1985) and Guthrie et al. (1990) is essentially an informal and intuitive one: a taxonomy is the network which results from connecting headwords with the genus terms in their definitions. The concept of genus term is not formally defined; however, for noun definitions, which are all we will consider here, it is in general taken to be the syntactic head of the defining noun phrase (exceptions to this are discussed by Vossen and Copestake).
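The headword-to-genus linking can be illustrated on an invented mini-dictionary. The genus-finding heuristic below is deliberately naive (a stand-in for real syntactic head extraction), and the entries are fabricated for illustration:

```python
# Illustrative sketch: build a taxonomy by linking each headword to the
# genus term of its definition, crudely approximated as the word just
# before the first postmodifier of the defining noun phrase.

definitions = {
    "spaniel": "a breed of dog with long ears",
    "dog": "a domesticated carnivorous animal",
    "terrier": "a small lively dog",
}

def genus(definition):
    """Very naive genus extraction: return the word just before the
    first postmodifier ('of', 'with', ...), or the last word if the
    defining noun phrase has no postmodifier."""
    words = definition.split()
    for i, w in enumerate(words):
        if w in ("of", "with", "that", "which") and i > 0:
            return words[i - 1]
    return words[-1]

taxonomy = {head: genus(d) for head, d in definitions.items()}
print(taxonomy)   # {'spaniel': 'breed', 'dog': 'animal', 'terrier': 'dog'}
```

Even this toy network shows why the informal notion needs refinement: 'spaniel' links to the empty-ish head 'breed' rather than to 'dog', one of the kinds of exception a formal treatment must address.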