In this part the design process itself is examined from three perspectives.
In Chapter 5 design is modelled as the transformation of formal draft system designs, and the user specification process is examined in detail.
In Chapter 6 circuits are relations on signals, and design is achieved through the application of combining forms satisfying certain mathematical laws.
Chapter 7 treats the problem of the automatic synthesis of VLSI chips for signal processing, and the practical issues involved are discussed in greater depth.
The development of VLSI fabrication technology has resulted in a wide range of new ideas for application-specific hardware and computer architectures, and in an extensive set of significant new theoretical problems for the design of hardware. The design of hardware is a process of creating a device that realises an algorithm, and many of the problems are concerned with the nature of the algorithms that may be realised. Thus fundamental research on the design of algorithms, programming and programming languages is directly relevant to research on the design of hardware. Conversely, research on hardware raises many new questions for research on software. These points are discussed at some length in the introductory chapter.
The papers that make up this volume are concerned with the theoretical foundations of the design of hardware, as viewed from computer science. The topics addressed are the complexity of computation; the methodology of design; and the specification, derivation and verification of designs. Most of the papers are based on lectures delivered at our workshop on Theoretical aspects of VLSI design held at the Centre for Theoretical Computer Science, University of Leeds in September 1986. We wish to express our thanks to the contributors and referees for their cooperation in producing this work.
One of the natural ways to model circuit behaviour is to describe a circuit as a function from signals to signals. A signal is a stream of data values over time, that is, a function from integers to values. One can choose to name signals and to reason about their values. We have taken an alternative approach in our work on the design language μFP (Sheeran [1984]). We reason about circuits, that is functions from signals to signals, rather than about the signals themselves. We build circuit descriptions by ‘plugging together’ smaller circuit descriptions using a carefully chosen set of combining forms. So, signals are first order functions, circuits are second order, and combining forms are third order.
Each combining form maps one or more circuits to a single circuit. The combining forms were chosen to reflect the fact that circuits are essentially two-dimensional. So, they correspond to ways of laying down and wiring together circuit blocks. Each combining form has both a behavioural and a pictorial interpretation. Because they obey useful mathematical laws, we can use program transformation in the development of circuits. An initial obviously correct circuit can be transformed into one with the same behaviour, but a more acceptable layout. It has been shown that this functional approach is particularly useful in the design of regular array architectures (Sheeran [1985, 1986], Luk & Jones [1988a]).
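As a minimal illustration of this hierarchy of orders (our own Haskell sketch, not the μFP notation itself), a signal can be modelled as a function from time to values, a circuit as a signal transformer, and combining forms as higher-order functions; the names serial, parallel and delay below are ours.

    -- A signal is a stream of values over time: a function from integers to values.
    type Signal a = Integer -> a

    -- A circuit maps signals to signals (second order).
    type Circuit a b = Signal a -> Signal b

    -- Combining forms are third order: they map circuits to circuits.
    -- Serial composition: the output of c1 feeds the input of c2.
    serial :: Circuit a b -> Circuit b c -> Circuit a c
    serial c1 c2 = c2 . c1

    -- Parallel composition: two circuits laid side by side on paired signals.
    parallel :: Circuit a b -> Circuit c d -> Circuit (a, c) (b, d)
    parallel c1 c2 s t = (c1 (fst . s) t, c2 (snd . s) t)

    -- A unit delay (register): the output at time t is the input at time t - 1.
    delay :: a -> Circuit a a
    delay v0 s t = if t <= 0 then v0 else s (t - 1)

Each of these operators has an obvious pictorial reading (chaining, side-by-side placement, and a register), which is what makes the corresponding algebraic laws useful for transforming layouts.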
However, sometimes a relational description of a circuit is more appropriate than a functional one.
Combinational networks are a widely studied model for investigating the computational complexity of Boolean functions, relevant both to sequential computation and to parallel models such as VLSI circuits. Recently a number of important results proving non-trivial lower bounds on a particular type of restricted network have appeared. After giving a general introduction to Boolean complexity theory and its history, this chapter presents a detailed technical account of the two main techniques developed for proving such bounds.
INTRODUCTION
An important aim of Complexity Theory is to develop techniques for establishing non-trivial lower bounds on the quantity of particular resources required to solve specific problems. Natural resources, or complexity measures, of interest are Time and Space, these being formally modelled by the number of moves made (resp. the number of tape cells scanned) by a Turing machine. ‘Problems’ are viewed as functions f : D → R, where D is the domain of inputs and R the range of output values. D and R are represented as words over a finite alphabet Σ, and since any such alphabet can be encoded as a set of binary strings, it is sufficiently general to take D to be the set of Boolean-valued n-tuples {0, 1}ⁿ and R to be {0, 1}. Functions of the form f : {0, 1}ⁿ → {0, 1} are called n-input, single-output Boolean functions. Bₙ denotes the set of all such functions, and Xₙ = (x₁, x₂, …, xₙ) is a variable over {0, 1}ⁿ.
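As a concrete rendering of this notation (an illustration of ours, not part of the chapter), the following Haskell sketch enumerates the points of {0, 1}ⁿ, represents a member of Bₙ extensionally, and records the count |Bₙ| = 2^(2ⁿ).

    -- An assignment to X_n = (x_1, ..., x_n) is a list of n Booleans.
    type Assignment = [Bool]

    -- All 2^n points of {0,1}^n.
    points :: Int -> [Assignment]
    points 0 = [[]]
    points n = [ b : rest | b <- [False, True], rest <- points (n - 1) ]

    -- An n-input, single-output Boolean function, viewed extensionally.
    type BoolFn = Assignment -> Bool

    -- |B_n| = 2^(2^n): the number of distinct n-input Boolean functions.
    sizeOfBn :: Integer -> Integer
    sizeOfBn n = 2 ^ (2 ^ n)

    -- Example: the 3-input majority function, a standard member of B_3.
    majority3 :: BoolFn
    majority3 [x1, x2, x3] = (x1 && x2) || (x1 && x3) || (x2 && x3)
    majority3 _            = error "majority3 expects exactly 3 inputs"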
The theme of this chapter centres on the automatic synthesis of cost effective and highly parallel digital signal processors suitable for VLSI implementation. The proposed synthesis model is studied in detail and the concepts of signal modelling and data flow analysis are discussed. This is further illustrated by the COSPRO (COnfigurable Signal PROcessor) simulator – a primitive version of the automatic synthesis concept developed at the Department of Electrical & Electronic Engineering, University of Newcastle Upon Tyne. Binary addition is chosen as a case study to demonstrate the application of the concept.
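The COSPRO model itself is not reproduced here; purely as a point of reference for the binary-addition case study, the sketch below gives the textbook full adder and a ripple-carry composition over bit vectors in Haskell (least significant bit first); the names fullAdder and rippleAdd are ours.

    import Data.List (mapAccumL)

    -- Full adder: sum bit and carry-out from two operand bits and a carry-in.
    fullAdder :: Bool -> Bool -> Bool -> (Bool, Bool)
    fullAdder a b cin = (s, cout)
      where
        xor p q = p /= q
        s       = (a `xor` b) `xor` cin
        cout    = (a && b) || (cin && (a `xor` b))

    -- Ripple-carry addition over equal-length bit vectors,
    -- least significant bit first; returns (carry-out, sum bits).
    rippleAdd :: [Bool] -> [Bool] -> (Bool, [Bool])
    rippleAdd as bs = mapAccumL step False (zip as bs)
      where
        step carry (a, b) = let (s, cout) = fullAdder a b carry in (cout, s)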
INTRODUCTION
Digital signal processing
Digital signal processing (DSP), a counterpart of analog signal processing, began to blossom in the mid 1960s, when semiconductor and computer technologies were able to offer a massive increase in flexibility and reliability. Within the short period of twenty years, this field has matured rapidly in both theory and applications, and has contributed significantly to the understanding of many diverse areas of science and technology. The range of applications has grown to include almost every part of our lives, from microprocessor-controlled domestic appliances to computerised banking systems and highly sophisticated missile guidance systems. Many other areas, such as biomedical engineering, seismic research, radar and sonar detection and countermeasures, acoustics and speech, telecommunications, image processing and understanding, thermography, office automation and computer graphics, employ DSP to a great extent, and these techniques are heavily applied in military, intelligence, industrial and commercial environments.
The VIPER microprocessor designed at the Royal Signals and Radar Establishment (RSRE) is probably the first commercially produced computer to have been developed using modern formal methods. Details of VIPER can be found in Cullyer [1985, 1986, 1987] and Pygott [1986]. The approach used by W. J. Cullyer and C. Pygott for its verification is explained in Cullyer & Pygott [1985], in which a simple counter is chosen to illustrate the verification techniques developed at RSRE. Using the same counter, we illustrate the approach to hardware verification developed at Cambridge, which formalizes Cullyer and Pygott's method. The approach is based on the HOL system, a version of LCF adapted to higher-order logic (Camilleri et al. [1987], Gordon [1983, 1985]). This research has formed the basis for the subsequent project to verify the whole of VIPER to register transfer level (Cohn [1987, 1989]).
In Cullyer and Pygott's paper, the implementation of the counter is specified at three levels of decreasing abstractness:
As a state-transition system called the host machine;
As an interconnection of functional blocks called the high level design;
As an interconnection of gates and registers called the circuit.
Ultimately, it is the circuit that will be built, and its correctness is what matters most. However, the host machine and the high level design represent successive stages in the development of the implementation, and so one would like to know whether they too are correct.
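To make the comparison of levels concrete, here is a deliberately simplified Haskell sketch of our own (a hypothetical 4-bit counter, not RSRE's design): the same behaviour described as a state-transition function and as a ripple of half adders over register bits, together with the correctness statement relating the two levels, which in the HOL approach is proved formally and is merely checked exhaustively here.

    -- Host-machine level: a state-transition function on an abstract count.
    type Count = Int
    hostStep :: Count -> Count
    hostStep n = (n + 1) `mod` 16                      -- a 4-bit counter

    -- Circuit-like level: four register bits incremented by half adders.
    type Bits = (Bool, Bool, Bool, Bool)
    circuitStep :: Bits -> Bits
    circuitStep (b0, b1, b2, b3) = (b0', b1', b2', b3')
      where
        halfAdd a c = (a /= c, a && c)                 -- (sum, carry)
        (b0', c0)   = halfAdd b0 True
        (b1', c1)   = halfAdd b1 c0
        (b2', c2)   = halfAdd b2 c1
        (b3', _ )   = halfAdd b3 c2

    -- Abstraction function: read the register bits as a number.
    value :: Bits -> Count
    value (b0, b1, b2, b3) =
      sum [ v | (b, v) <- [(b0, 1), (b1, 2), (b2, 4), (b3, 8)], b ]

    -- Correctness of the implementation with respect to the host machine.
    correct :: Bool
    correct = and [ value (circuitStep bits) == hostStep (value bits)
                  | b0 <- [False, True], b1 <- [False, True]
                  , b2 <- [False, True], b3 <- [False, True]
                  , let bits = (b0, b1, b2, b3) ]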
Since our concern was speech, and speech impelled us
To purify the dialect of the tribe
And urge the mind to aftersight and foresight
—T. S. Eliot, Little Gidding
ABSTRACT
We analyse theoretically the process of specifying the desired behaviour of a digital system and illustrate our theory with a case study of the specification of a digital correlator.
First, a general theoretical framework for specifications and their stepwise refinement is presented. A useful notion of the consistency of two general functional specifications is defined. The framework has three methodological divisions: an exploration phase, an abstraction phase, and an implementation phase.
Secondly, a mathematical theory for specifications based on abstract data types, streams, clocks and retimings, and recursive functions is developed. A specification is a function that transforms infinite streams of data. The mathematical theory supports formal methods and software tools.
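As a minimal sketch of this stream-transformer view (our own Haskell rendering, with an illustrative reading of a correlator rather than the chapter's specification), a stream is a function from clock cycles to data, a specification transforms streams, and a simple delay is given as a very special case of a retiming.

    -- A stream is a function from clock cycles to data values.
    type Cycle    = Integer
    type Stream a = Cycle -> a

    -- A specification is a function that transforms streams.
    type Spec a b = Stream a -> Stream b

    -- An illustrative correlator: against a fixed reference word, the output
    -- at each cycle is the number of positions at which the most recent
    -- input bits agree with the reference (cycles before time 0 are ignored).
    correlator :: [Bool] -> Spec Bool Int
    correlator ref input t =
      length [ () | (i, r) <- zip [0 ..] ref
                  , t - i >= 0
                  , input (t - i) == r ]

    -- A delay by d cycles: a very special case of a retiming.
    delayBy :: Integer -> Spec a b -> Spec a b
    delayBy d spec input t = spec input (t - d)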
Thirdly, a digital correlator is studied in considerable detail to demonstrate points of theoretical and practical interest.
INTRODUCTION
Overview
How can we precisely define the desired behaviour of a digital system? What role can such precise definitions have in the imprecise process of designing a digital system, and in its subsequent use?
We wish to formulate answers to these questions by theoretically analysing the first step of a design assignment, when it must be determined what is to be designed.
The specification, design, construction, evaluation and maintenance of computing systems involve significant theoretical problems that are common to hardware and software. Some of these problems are long standing, although they change in their form, difficulty and importance as technologies for the manufacture of digital systems change. For example, theoretical areas addressed in this volume about hardware include
models of computation and semantics,
computational complexity,
methodology of design,
specification methods,
design and synthesis, and
verification methods and tools;
and the material presented is intimately related to material about software. It is interesting to attempt a comparison of theoretical problems of interest in these areas in the decades 1960–69 and 1980–89. Plus ça change, plus c'est la même chose?
Of course, the latest technologies permit the manufacture of larger digital systems at smaller cost. To enlarge the scope of digital computation in the world's work it is necessary to enlarge the scope of the design process. This involves the development of the areas listed above, and the related development of tools for CAD and CIM.
Most importantly, it involves the unification of the study of hardware and software. For example, a fundamental problem in hardware design is to make hardware that is independent of specific fabricating technologies. This complements a fundamental problem in software design – to make software that is independent of specific hardware (i.e., machines and peripherals).
The chapters in this part examine the performance of VLSI systems from different viewpoints.
Chapter 8 looks at the use of discrete complexity models in VLSI design, both theoretically and practically. The results of an experiment testing a basic hypothesis are reported.
The final chapter is a technical presentation of two recent innovative results in a field of complexity theory relevant to VLSI.
ABSTRACT A model of computation for the design of synchronous and systolic algorithms is presented. The model is hierarchically structured, and so can express the development of an algorithm through many levels of abstraction. The syntax of the model is based on the directed graph, and the synchronous semantics are state-transitional. A semantic representation of ripple-carries is included. The cells available in the data structure of a computation graph are defined by a graph signature. In order to develop two-level pipelining in the model, we need to express serial functions as primitives, and so a data structure may include history-sensitive functions.
A typical step in a hierarchical design is the substitution of a single data element by a string of data elements so as to refine an algorithm to a lower level of abstraction. Such a refinement is formalised through the definition of parallel and serial homomorphisms of data structures.
Central to recent work on synchronous algorithms has been the work of H. T. Kung and others on systolic design. The Retiming Lemma of Leiserson & Saxe [1981] has become an important optimisation tool in the automation of systolic design (for example, in the elimination of ripple-carries). This lemma and the Cut Theorem (Kung & Lam [1984]) are proved in the formal model.
The use of these tools is demonstrated in a design for the matrix multiplication algorithm presented in H. T. Kung [1984].
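The formal graph model is not reproduced here; as a minimal functional sketch of our own of the register movement that the Retiming Lemma licenses, consider a combinational cell lifted onto synchronous signals: pushing a register from the cell's input to its output preserves behaviour, provided the register's initial value is carried through the cell.

    -- A synchronous signal: a value at each clock tick.
    type Tick     = Integer
    type Signal a = Tick -> a

    -- A register (unit delay) with initial value v0.
    reg :: a -> Signal a -> Signal a
    reg v0 s t = if t <= 0 then v0 else s (t - 1)

    -- A combinational cell lifted pointwise onto signals.
    comb :: (a -> b) -> Signal a -> Signal b
    comb f s = f . s

    -- Register on the cell's input versus register on its output:
    -- for every input signal s and tick t the two agree, provided the
    -- output register is initialised with f v0.
    regBefore, regAfter :: (a -> b) -> a -> Signal a -> Signal b
    regBefore f v0 = comb f . reg v0          -- input register, then cell
    regAfter  f v0 = reg (f v0) . comb f      -- cell, then output register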
When nulls occur in a relational theory T, updates to T will cause excessive growth in the size of T if many data atoms of T unify with atoms occurring in the updates. This chapter proposes a scheme of lazy evaluation for updates that strictly bounds the growth of T caused by each update, via user-specified limits on permissible size increases. Under lazy evaluation, an overly expensive update U will be stored away rather than executed, in the hope that new information on costly null values will reduce the expense of executing U before the information contained in U is needed for an incoming query. If an incoming query unavoidably depends on the results of an overly expensive portion of an update, the query must be rejected, as there is no way to reason about the information in the update other than by incorporating it directly in the relational theory. When a query is rejected, the originator of the query is notified of the exact reasons for the rejection. The query may be resubmitted once the range of possible values of the troublesome nulls has been narrowed down. The bottom line for an efficient implementation of updates, however, is that null values should not be permitted to occur as attribute values for attributes heavily used in update selection clauses, particularly those used as join attributes.
The cost of an update can be measured as a function of the increase in the size of T that would result from execution of the update, and by measures of the expected time to execute the update and to answer subsequent queries.
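A minimal sketch of such a policy follows, with names and thresholds of our own invention rather than the chapter's: an update is executed only if its estimated cost falls within the user-specified limits, and is deferred (stored away) otherwise.

    -- Estimated cost of an update against the relational theory T.
    data Cost = Cost
      { sizeIncrease :: Int     -- growth in the size of T, in stored atoms
      , updateTime   :: Double  -- expected time to execute the update
      , queryPenalty :: Double  -- expected slow-down of subsequent queries
      }

    -- User-specified limits on permissible expense.
    data Limits = Limits
      { maxSizeIncrease :: Int
      , maxUpdateTime   :: Double
      , maxQueryPenalty :: Double
      }

    -- Lazy evaluation: execute a cheap update, defer an overly expensive one.
    data Decision u = Execute u | Defer u
      deriving Show

    decide :: Limits -> (u -> Cost) -> u -> Decision u
    decide limits estimate u
      | sizeIncrease c <= maxSizeIncrease limits
          && updateTime c <= maxUpdateTime limits
          && queryPenalty c <= maxQueryPenalty limits = Execute u
      | otherwise                                     = Defer u
      where c = estimate u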
‘ … But I say to hell with common sense! By itself each segment of your experience is plausible enough, but the trajectory resulting from the aggregate of these segments borders on being a miracle.’
—Stanislaw Lem, The Chain of Chance
This chapter describes simulation experiments conducted with the Update Algorithm, and presents the results from these experiments. The goal of the simulation was to gauge the expected performance of the update and query processing algorithms in a traditional database management system application. The implementation was tailored to this environment, and for that reason the techniques used and results obtained will apply only partially, if at all, to other application environments, such as knowledge-based artificial intelligence applications. In particular, the following assumptions and restrictions were made.
Update syntax was modified and restricted, to encourage use of simple constructs.
A fixed data access mechanism (query language) was assumed.
A large, disk-resident database supplying storage for the relational theory was assumed.
Performance was equated with the number of disk accesses required to perform queries and updates, and with the storage space required, both measured after a long series of updates.
These assumptions and restrictions are all appropriate to traditional database management scenarios; they will be discussed in more detail in later sections. We begin with a brief high-level description of the implemented system, and then examine its components in more detail. The chapter concludes with a description of the experimental results.
Overview
The Update Algorithm Version II was chosen for simulation.
“ … You believers make so many and such large and such unwarrantable assumptions.”
“My dear, we must make assumptions, or how get through life at all?”
“Very true. How indeed? One must make a million unwarrantable assumptions, such as that the sun will rise tomorrow, and that the attraction of the earth for our feet will for a time persist, and that if we do certain things to our bodies they will cease to function, and that if we get into a train it will probably carry us along, and so forth. One must assume these things just enough to take action on them, or, as you say, we couldn't get through life at all. But those are hypothetical, pragmatical assumptions, for the purposes of action: there is no call actually to believe them, intellectually. And still less call to increase their number, and carry assumption into spheres where it doesn't help us to action at all. For my part, I assume practically a great deal, intellectually nothing.”
—Rose Macaulay, Told by an Idiot
Relational theories contain little knowledge, that is, data about data. The exact line between knowledge and data is hard to pinpoint; for our purposes, the distinguishing characteristic of knowledge will be our reluctance to change it in response to new information in the form of an update. Under this categorization, the integrity constraints discussed in Chapter 7 are a form of knowledge.