This chapter explains the shortcomings of arrays having pre-determined length and the consequent need for dynamic storage. The use of library functions for allocating and freeing memory is then explained.
The concept of a linked list is introduced with particular reference to stacks. The concept is then extended to cover rings and binary trees.
Examples in this chapter include a program for demonstrating ring structures and a program for finding the shortest route through a road network. Finally there is a sorting program based on the monkey puzzle technique.
MEMORY ALLOCATION
The trouble with arrays is that the processor has to be told what space to allocate to each of them before execution starts. There are many applications for which it is impossible to know the detailed memory requirements in advance. What we need is a scheme in which the processor allocates memory during execution and on demand. A program written to use such a scheme fails only if demands on total memory exceed supply, not if one of many individual arrays exceeds its bounds.
The scheme is called ‘dynamic storage.’ For practical purposes it is based on structures. When the program needs a new structure it creates one from a ‘heap’ of unstructured memory. When a program has finished with a structure it frees the memory from which that structure was built, tossing it back on the heap.
This is probably the most important chapter in the book; the art of C is handling pointers. Pointers are closely associated with arrays, and arrays with strings.
The chapter begins by explaining the concept of a pointer and defines two operators, * and &, with which to declare and manipulate pointers.
Because C works on the principle of ‘call by value’ you cannot return values from functions by altering the values stored in their parameters. But you can use pointers as parameters and make functions alter the contents of the objects they point to. This concept may appear tricky at first, but glorious when you can handle it confidently. The chapter spends some time on this concept.
When you add 2 to a pointer into an array, the pointer then points to the element two further along, regardless of the size of each element. This is a property of pointer arithmetic, the subject next described in this chapter.
Most pointers point to objects, but you can make them point to functions as well. The chapter shows the correspondence between pointers to arrays and pointers to functions. You may care to skip this topic on first reading; likewise the next which analyses the structure of complex declarations. Complex declarations are easy to understand once you have felt the need to set up a data structure in which pointers point to other pointers.
This chapter defines most of the basic components of C. Their syntax is defined using a pictorial notation. Characters, names and constants (the simple building blocks) are defined first. Important principles of the language are explained next; these include the concept of scalar ‘types’, the precedence and associativity of ‘operators’, and the concepts of ‘coercion’ and ‘promotion’ in expressions of mixed type.
The operators are summarized on a single page for ease of reference.
The syntax of expressions and statements is defined in this chapter. Declarations are discussed, but their syntax is not defined because it involves the concept of pointers and dynamic storage. These topics are left to later chapters.
NOTATION
For a precise definition of the syntax of ANSI C, see the definitions in ANSI X3.159. These are expressed in BNF (Backus-Naur Form).
To appreciate the syntactical form of an entity, the practical programmer needs something different; BNF is not a self-evident notation. Some books employ railway-track diagrams, potentially easier to comprehend than BNF, but the tracks grow too complicated for defining the structures of C. So I have devised a pictorial notation from which a programmer should be able to appreciate syntactical forms at a glance. The notation is fairly rigorous but needs a little help from notes here and there.
The original C programming language was devised by Dennis Ritchie. The first book on C, by Kernighan and Ritchie, came out in 1978 and remained the most authoritative and best book on the subject until their second edition, describing ANSI standard C, appeared in 1988. In all that time, and since, the availability and use of C has increased exponentially. It is now one of the most widely used programming languages, not only for writing computer systems but also for developing applications.
There are many books on C but not so many on ANSI standard C which is the version described here.
This book attempts three things:
to serve as a text book for introductory courses on C aimed both at those who already know a computer language and at those entirely new to computing
to summarize and present the syntax and grammar of C by diagrams and tables, making this a useful reference book on C
to illustrate a few essential programming techniques such as symbol state tables, linked lists, binary trees, doubly linked rings, manipulation of strings, parsing of algebraic expressions.
For a formal appreciation of C – its power, its advantages and disadvantages – see the references given in the Bibliography. As an informal appreciation: all those I know who program in C find the language likeable and enjoy its power. Programming C is like driving a fast and powerful car. Having learned to handle the car safely you would not willingly return to the family saloon.
In [Udd84, Udd86, Ebe89, UV88] the notions delay insensitivity, independent alphabet and absence of computation interference have been defined for directed processes. In this section we investigate to what extent these notions apply to handshake processes.
Definition A.0 (directed process)
A directed process T is a triple (iT, oT, tT), in which iT and oT are disjoint sets of symbols and tT is a non-empty, prefix-closed subset of (iT ∪ oT)*.
A handshake process is not a directed process: the alphabet of a handshake process has more structure and the trace set is not prefix closed. However, to every handshake process P there corresponds a directed process, viz. (iP, oP, tP≤), where tP≤ denotes the prefix closure of the trace set tP.
Throughout this appendix, port structures are assumed to have no internal ports.
Composability
Composability of traces captures the notion that symbols communicated between processes arrive no earlier than they were sent. Consider directed processes P and Q such that iP = oQ and oP = iQ. Let s ∈ tP and t ∈ tQ. Composability restricts the way the pair (s,t) may evolve from (ε,ε). Let a ∈ iQ (and therefore a ∈ oP). Then ε is composable to a, but the converse is not true, because a must be sent by P before it can be received by Q. Similarly, for b ∈ oQ, we have b composable to ε. Also, trace s is composable to ta if s is composable to t and a is on its way, i.e. len.(t⌈a) < len.(s⌈a).
Handshake circuits and the associated compilation method from CSP-based languages were conceived during 1986 at Philips Research Laboratories. A first IC (7000 transistors) was designed using experimental tools to manipulate and analyze handshake circuits (then called “abstract circuits”) and to translate them into standard-cell netlists. The IC realized a subfunction of a graphics processor [SvB88] and proved “first-time-right” (September 1987). Extensive measurements hinted at interesting testability and robustness properties of this type of asynchronous circuit [vBS88].
Encouraged by these early results the emphasis of the research shifted from the design of the graphics processor to VLSI programming, compilation methods, and tool design. Generalization and systematization of the translation method resulted in an experimental silicon compiler during spring 1990 [vBKR+91]. Section 9.0 describes these compilation tools and their application to a simple Compact Disc error decoder.
A second test chip was designed and verified during the autumn of 1991 [vBBK+93, RS93]. In addition to some test structures, the IC contains a simple processor, including a four-place buffer, a 100-counter, an incrementer, an adder, a comparator, and a multiplier in the Galois field GF(2^8). The Tangram program was fully automatically compiled into a circuit consisting of over 14 thousand transistors. Section 9.1 discusses this chip and its performance in detail. This chapter concludes with an appraisal of asynchronous circuits in Section 9.2.
VLSI programming and compilation
Experiences with VLSI programming and compilation will be presented from a programmer's viewpoint.
In Chapter 6 we have developed a handshake semantics for Tangram. An alternative semantics for Tangram can be based on failure processes [BHR84]. Failure processes form the underlying model of CSP [Hoa85], and are the basis for a well-established theory for CSP, including a powerful calculus [RH88].
The availability of two distinct semantics for the same program notation suggests several questions, including:
0. Is the handshake-process semantics consistent with the failure semantics? If so, in what sense?
1. Can VLSI programmers use calculi that are based on failure semantics?
The last question is of obvious practical significance.
This appendix starts with a description of failure processes. By means of a simple example it is shown that an embedding of failure processes into all-active handshake processes does not exist. By choosing a more subtle link between handshake semantics and failure semantics, we arrive at positive answers to the above questions.
Failure processes
This subsection describes a process model based on failures. The description below is rather concise; for a more extensive treatment the reader is referred to [BHR84], [BR85] and [Hoa85].
An alphabet structure defines an alphabet as a set of communications.
Definition B.0 (alphabet of an alphabet structure)
Let A be an alphabet structure.
A communication of A is a pair a: v, such that a ∈ cA and v ∈ TA.a.
The alphabet of A is the set of all communications of A and is denoted by aA.
This chapter introduces further Tangram constructs out of which more interesting programs can be described. These constructs include expressions, guarded selection, guarded repetition, and choice. The choice construct supports mixed input and output guards. Guarded selection and choice also introduce nondeterminism.
These constructs are introduced by means of concise and telling examples, namely a simple FIR filter, a median filter, a block sorter, a greatest common divisor, modulo-N counters, various stacks (including a specialization as a priority queue), and a nacking arbiter. In many cases handshake circuits are presented and explained with reference to the Tangram program text. Where relevant, circuit size, speed, and power consumption are analyzed.
FIR filter
A Finite Impulse Response (FIR) filter is a component with a single input port and a single output port. Input and output communications strictly alternate, starting with an input. For a FIR filter of order N the output values are specified as follows. The value of the ith output, i ≥ N, is a weighted sum of the N + 1 most recent input values. The N + 1 weight factors are generally referred to as the filter coefficients. The first N output values are left unspecified.
A very simple FIR filter is used to introduce Tangram expressions and their translation into handshake circuits.
This book is about the design of asynchronous VLSI circuits based on a programming and compilation approach. It introduces handshake circuits as an intermediate architecture between the algorithmic programming language Tangram and VLSI circuits.
The work presented in this book grew out of the project “VLSI programming and compilation into asynchronous circuits”, conducted at Philips Research Laboratories Eindhoven since 1986. Our original motivation was to increase the productivity of VLSI design by treating circuit design as a programming activity. We chose asynchronous circuits as the target for automatic silicon compilation, because asynchronous circuits simplified the translation process and made it easier to take advantage of the abundantly available parallelism in VLSI. Later we discovered that the potential for low power consumption inherent in asynchronous circuits may turn out to be highly relevant to battery-powered products.
The core of this book is about handshake circuits. A handshake circuit is a network of handshake components connected by handshake channels, along which components interact exclusively by means of handshake signaling. It presents a theoretical model of handshake circuits, a compilation method, and a number of VLSI-implementation issues. This core is sandwiched between an informal introduction to VLSI programming and handshake circuits on the one side and a discussion on practical experiences including tooling and chip evaluations on the other side.
The most interesting operation on handshake processes is parallel composition. Parallel composition is defined only for connectable processes. Connectability of handshake processes captures the idea that ports form the unit of connection (as opposed to individual port symbols), and that a passive port can only be connected to a single active port and vice versa. A precise definition will be given later.
The communication between connectable handshake processes is asynchronous: the sending of a signal by one process and the reception of that signal by another process are two distinct events. Asynchronous communication is more complicated than synchronized communication, because of the possible occurrence of interference. The concept of interference with respect to voltage transitions has been mentioned in Section 0.1. Interference with respect to symbols occurs when one process sends a symbol and the other process is not ready to receive it. The receptiveness of handshake processes and the imposed handshake protocol exclude the possibility of interference. We are therefore allowed to apply the simpler synchronized communication in the definition of parallel composition of handshake processes.
Another complication is, however, the possibility of divergence: an unbounded amount of internal communication, which cannot be distinguished externally from deadlock. From an implementation viewpoint divergence is undesirable: it forms a drain on the power source, without being productive.
The external behavior of the parallel composition of connectable P and Q will be denoted by P ∥ Q, which is again a handshake process.
Tangram is a VLSI-programming language based on CSP, and has much in common with the programming language OCCAM [INM89] (see Section 2.7 for some of the differences). The main construct of Tangram is the command. Commands are either primitive commands, such as a?x and x := x + 1, or composite commands, such as R; S and R ∥ S, where R and S are commands themselves.
Execution of a command may result in a number of communications with the environment through external ports. Another form of interaction with the environment is the reading from and writing into external variables. A Tangram program is a command without external variables, prefixed by an explicit definition of its external ports.
Not all compositions of commands are valid in Tangram. For instance, in a sequential composition the two constituent commands must agree on the input/output direction of their common ports. Also, two commands composed in parallel may not write concurrently into a common variable. Similarly, concurrent reading from and writing into a common variable is not allowed. Section 6.1 defines the syntax of Tangram, including these composition rules. The meaning of each command is described informally.
For a subset of the Tangram commands the handshake-process denotations are given in Section 6.3. This subset is referred to as Core Tangram.
Tangram
The main syntactic constructs of Tangram are program, command, guarded-command set, and expression. With each construct we associate a so-called alphabet structure: a set of typed ports and variables.