In this chapter, we shall give a further illustration of the theoretical usefulness of our framework by applying it to the identification of unstructuredness. Since DeMarco data flow diagrams are used mainly as communication tools with users during the analysis stage of systems development, they are problem-oriented and probably unstructured. A mechanism must therefore be available for detecting the unstructured elements, so that we can construct structured tasks and define refinement morphisms accordingly. We shall extend our concepts of tasks and show that a single criterion is both necessary and sufficient for identifying unstructuredness. Put another way, a single criterion is necessary and sufficient for proving a task to be structured.
Quite a number of papers have already addressed a similar problem of detecting unstructuredness in program schemes and transforming the schemes into structured equivalents. These papers can roughly be classified as follows:
(a) Many of the papers are based on heuristic arguments (Colter 1985, McCabe 1976, Mills 1972, Oulsnam 1982, Williams 1977, Williams and Ossher 1978, Williams and Chen 1985), each giving a new proposal supposedly better than the earlier ones (Prather and Giulieri 1981). According to the arguments in McCabe (1976), Oulsnam (1982), Williams (1977) and Williams and Ossher (1978), unstructuredness may be due to any of four elements: branching out of a loop, branching into a loop, branching out of a selection and branching into a selection. The identification of these elements, however, has remained a difficult task. Since unstructured elements cannot exist in isolation, the authors recommend that we should identify unstructured compounds, or combinations of unstructured elements. Unfortunately, the number of combinations is endless, so we can never exhaust all the possible cases (Williams 1983).
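For concreteness, the following is a minimal sketch (in Haskell, and not taken from any of the papers cited above) of how such boundary-crossing branches might be detected, assuming each loop or selection is recorded with its member nodes and a single legitimate entry and exit node; all names are hypothetical.

```haskell
import qualified Data.Set as Set

data RegionKind = Loop | Selection deriving (Eq, Show)

-- A structured region of the flow graph: a loop or a selection,
-- with one legitimate entry node and one legitimate exit node.
data Region = Region
  { kind    :: RegionKind
  , members :: Set.Set Int
  , entry   :: Int
  , exit    :: Int
  }

-- An edge (u, v) is one of the four unstructured elements if it
-- crosses the region boundary other than through entry or exit.
unstructuredEdges :: Region -> [(Int, Int)] -> [(Int, Int)]
unstructuredEdges r = filter offending
  where
    inside n = n `Set.member` members r
    offending (u, v)
      | not (inside u) && inside v = v /= entry r  -- branching into the region
      | inside u && not (inside v) = u /= exit r   -- branching out of the region
      | otherwise                  = False

-- e.g. for a loop with members {2,3,4}, entry 2 and exit 4, the edge
-- (1,3) branches into the loop and (3,5) branches out of it:
--   unstructuredEdges (Region Loop (Set.fromList [2,3,4]) 2 4)
--                     [(1,3), (3,5), (1,2), (4,5)]
--     == [(1,3), (3,5)]
```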
Structured analysis and design methodologies have been recognized as popular and powerful tools in information systems development. A complex system can be specified in a top-down and graphical fashion, enabling practitioners to visualize the target system and communicate with users much more easily than by conventional means. Structured methodologies have, in fact, been designed by a number of distinct authors, each employing several models which vary in their graphical outlook. Different models are suitable for different stages of a typical systems life cycle, so a specification must be converted from one form to another during the development process. Unfortunately, however, the models are derived only from the experience of their authors. Little attempt has been made to propose a formal framework behind them or to establish a theoretical link between one model and another.
A unifying framework is proposed in this book. We define an initial algebra of structured systems, which can be mapped by unique homomorphisms to a DeMarco algebra of data flow diagrams, a Yourdon algebra of structure charts and a Jackson algebra of structure texts. We also find that the proposed initial algebra as well as the structured models fit nicely into a functorial framework. DeMarco data flow diagrams can be mapped by a free functor to terms in the initial algebra, which can then be mapped to other notations such as Yourdon structure charts by means of forgetful functors.
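As an illustration of how such mappings behave, here is a minimal sketch in Haskell; it is not the book's formal construction, and the constructors and the Jackson-like rendering are hypothetical. The fold plays the role of the unique homomorphism out of the initial algebra: it is determined completely by choosing one target operation per constructor.

```haskell
-- Terms of a hypothetical initial algebra of structured tasks.
data Task
  = Basic String     -- an elementary task
  | Seq   Task Task  -- sequence of two subtasks
  | Sel   Task Task  -- selection between two subtasks
  | Iter  Task       -- iteration of a subtask

-- Initiality: a homomorphism out of the term algebra is fixed
-- completely by one target operation per constructor.
foldTask :: (String -> r) -> (r -> r -> r) -> (r -> r -> r) -> (r -> r)
         -> Task -> r
foldTask b s c i = go
  where
    go (Basic n) = b n
    go (Seq t u) = s (go t) (go u)
    go (Sel t u) = c (go t) (go u)
    go (Iter t)  = i (go t)

-- One such homomorphism, into Jackson-like structure text:
jacksonText :: Task -> String
jacksonText = foldTask
  id
  (\t u -> "seq " ++ t ++ " " ++ u ++ " end")
  (\t u -> "sel " ++ t ++ " alt " ++ u ++ " end")
  (\t   -> "itr " ++ t ++ " end")
```

Choosing different target operations (say, ones that build chart nodes instead of strings) yields the mapping into a Yourdon-style algebra by exactly the same mechanism.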
All this will not be finished in the first one hundred days. Nor will it be finished in the first one thousand days, … But let us begin.
—John F. Kennedy (1961)
INTRODUCTION
In this chapter, we shall give an illustration of the practical usefulness of our framework. We shall discuss a prototype system which has been developed to implement structured tasks. Implemented on a Macintosh using Turbo Pascal, it enables the user to draw a hierarchy of DeMarco-like task diagrams, storing them as structured tasks represented physically by pointers and linked lists. It helps the user to review the task diagrams at an appropriate level of detail, and to zoom in or out to lower or higher levels through refinements and abstractions as required. Given an incomplete task specification, the system prompts the user to define further details of his design considerations. For example, if the user wants to build a hierarchical structure into a flat task diagram, he will be prompted to supply the necessary details, such as the names of intermediate levels. Given a hierarchy of task diagrams, the system then transforms them automatically into a term algebra, a Yourdon structure chart and Jackson structure text. An example of an application of the system will be given in Section 7.2. An overview of the system, with examples of its algorithms, will be given in Section 7.3.
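A minimal sketch of one way such a stored hierarchy might be modelled, rendered here in Haskell rather than the prototype's Turbo Pascal: the pointer and linked-list records become a recursive type carrying a list of child diagrams. The field names and the zoomIn helper are hypothetical, not the prototype's actual code.

```haskell
-- A task diagram together with the refinements of its bubbles.
data TaskDiagram = TaskDiagram
  { diagramName :: String
  , bubbles     :: [String]       -- processes shown at this level
  , children    :: [TaskDiagram]  -- refinements, one level down
  }

-- Zooming in selects the refinement with a given name, if any.
zoomIn :: String -> TaskDiagram -> Maybe TaskDiagram
zoomIn n d = lookup n [ (diagramName c, c) | c <- children d ]
```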
In this chapter we shall compare our approach with five related systems development environments: PSL/PSA, ADS/SODA, SADT/EDDA, SAMM/SIGS and RSL/SREM. They have been chosen for comparison for the following reasons:
(a) They were pioneer systems development environments meant to cover the entire systems life cycle, and have stood the test of time.
(b) They are better known and documented, so that interested readers can obtain further information easily.
(c) They are still under active development and recent enhancements have been reported.
(d) The languages examined cover a wide spectrum of characteristics. Some were originally developed as manual tools, but others were meant for mechanical support from the very beginning. For some languages, a system can be specified in a multi-level fashion, but not for others. Some languages use graphics as the specification medium while others are purely textual. Some of the development environments generate graphical feedback to users whereas others generate textual documentation.
(e) The final reason is a pragmatic one. We have included projects which are familiar to the present author. Part of this chapter is derived from the research results of a joint project with Mr Daniel Pong (Tse and Pong 1982, Tse and Pong (to appear)).
PSL/PSA AND META/GA
PSL was the first major language for defining a system formally and analysing it automatically (Teichroew 1971, Teichroew et al. 1980, 1982, PSL/PSA Introduction 1987).
Structured systems development methodologies have been recognized as the most popular tools in information systems development. They are widely accepted by practising systems developers because of the top-down nature of the methodologies and the graphical nature of the tools. Unfortunately, however, the models are derived only from the experience of their authors. In spite of their popularity, relatively little work has been done on providing a theoretical framework for them. In this project, we have tried to solve the problem by defining a unifying theoretical framework behind the popular structured models.
We have defined an initial algebra of structured systems, which can be mapped by unique homomorphisms to a DeMarco algebra of data flow diagrams, a Yourdon algebra of structure charts and a Jackson algebra of structure texts (with equations). As a result, specifications can be transformed from one form to another. Algebraic interpreters may be adapted to validate the specifications.
We have also found that the proposed term algebra as well as the DeMarco, Yourdon and Jackson notations fit nicely into a functorial framework. The framework provides a theoretical basis for manipulating incomplete or unstructured specifications through the concepts of structured tasks and refinement morphisms. Moreover, DeMarco data flow diagrams can be mapped to term algebras through free functors. Conversely, specifications in term algebras can be mapped to other notations such as Yourdon structure charts by means of functors.
This part describes three different approaches to the use of formal methods in the verification and design of systems and circuits.
Chapter 2 describes the stages involved in the verification of a counter using a mechanized theorem prover.
The next chapter describes a mathematical model of synchronous computation within which formal transformations useful in the design process can be defined.
Chapter 4 describes verification in a different framework – that of the algebra of communicating processes.
In designing VLSI circuits it is very useful, if not necessary, to construct a specific circuit by placing simple components in regular configurations. Systolic systems are circuits built up from arrays of cells, and are therefore very suitable for formal analysis and induction methods. In the case of a palindrome recognizer, a correctness proof is given using bisimulation semantics with asynchronous cooperation. The proof is carried out in the formal setting of the Algebra of Communicating Processes (see Bergstra & Klop [1986]), which provides us with an algebraic theory and a convenient proof system. An extensive introduction to this theory is included in this paper. The palindrome recognizer has also been studied by Hennessy [1986] in a setting of failure semantics with synchronous cooperation.
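As an illustration of the behaviour being verified (a Haskell sketch, not the paper's ACP specification), a palindrome recognizer reports, after each input symbol, whether the sequence read so far is a palindrome. The systolic implementation distributes this check over the array of cells; the correctness proof relates the two descriptions.

```haskell
import Data.List (inits)

-- Whether a whole word is a palindrome.
isPalindrome :: Eq a => [a] -> Bool
isPalindrome xs = xs == reverse xs

-- After each symbol, report whether the input read so far is a
-- palindrome, e.g. recognizer "abba" == [True, False, False, True].
recognizer :: Eq a => [a] -> [Bool]
recognizer = map isPalindrome . tail . inits
```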
INTRODUCTION
In current research on (hardware) verification, one of the main goals is to find strong proof systems and tools for verifying the designs of algorithms and architectures. For instance, in the development of integrated circuits, the important stage of testing a prototype (to save the high costs of producing defective processors) can be dealt with much more efficiently when a strong verification tool is available. Developing a verification theory therefore has very high priority and is a subject of study at many universities and scientific institutions.
However, working on detailed verification theories is not the only approach to this problem. Once a basic theory is available, the development of case studies is of the utmost importance in providing us with new ideas.
In this part the design process itself is examined from three approaches.
In Chapter 5 design is modelled as transforming formal draft system designs, and the user specification process is examined in detail.
In Chapter 6 circuits are relations on signals, and design is achieved through the application of combining forms satisfying certain mathematical laws.
Chapter 7 treats the problem of the automatic synthesis of VLSI chips for signal processing, and discusses the practical issues involved in greater depth.
The development of VLSI fabrication technology has resulted in a wide range of new ideas for application specific hardware and computer architectures, and in an extensive set of significant new theoretical problems for the design of hardware. The design of hardware is a process of creating a device that realises an algorithm, and many of the problems are concerned with the nature of algorithms that may be realised. Thus fundamental research on the design of algorithms, programming and programming languages is directly relevant to research on the design of hardware. And conversely, research on hardware raises many new questions for research on software. These points are discussed at some length in the introductory chapter.
The papers that make up this volume are concerned with the theoretical foundations of the design of hardware, as viewed from computer science. The topics addressed are the complexity of computation; the methodology of design; and the specification, derivation and verification of designs. Most of the papers are based on lectures delivered at our workshop on Theoretical aspects of VLSI design held at the Centre for Theoretical Computer Science, University of Leeds in September 1986. We wish to express our thanks to the contributors and referees for their cooperation in producing this work.
One of the natural ways to model circuit behaviour is to describe a circuit as a function from signals to signals. A signal is a stream of data values over time, that is, a function from integers to values. One can choose to name signals and to reason about their values. We have taken an alternative approach in our work on the design language μFP (Sheeran [1984]). We reason about circuits, that is, functions from signals to signals, rather than about the signals themselves. We build circuit descriptions by ‘plugging together’ smaller circuit descriptions using a carefully chosen set of combining forms. So, signals are first order functions, circuits are second order, and combining forms are third order.
Each combining form maps one or more circuits to a single circuit. The combining forms were chosen to reflect the fact that circuits are essentially two-dimensional. So, they correspond to ways of laying down and wiring together circuit blocks. Each combining form has both a behavioural and a pictorial interpretation. Because they obey useful mathematical laws, we can use program transformation in the development of circuits. An initial, obviously correct circuit can be transformed into one with the same behaviour but a more acceptable layout. It has been shown that this functional approach is particularly useful in the design of regular array architectures (Sheeran [1985, 1986], Luk & Jones [1988a]).
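A minimal sketch of the underlying types and two representative combining forms, written in Haskell rather than μFP; the names serial, parallel and delay are hypothetical stand-ins for the combining forms described above, not μFP's own.

```haskell
type Signal a    = Int -> a                -- a stream of values over time
type Circuit a b = Signal a -> Signal b    -- circuits map signals to signals

-- Serial composition: wire the output of f into the input of g.
serial :: Circuit a b -> Circuit b c -> Circuit a c
serial f g = g . f

-- Parallel composition: lay two circuits side by side.
parallel :: Circuit a b -> Circuit c d -> Circuit (a, c) (b, d)
parallel f g s t = (f (fst . s) t, g (snd . s) t)

-- A unit-delay latch with an assumed initial value at time 0.
delay :: a -> Circuit a a
delay v s t = if t <= 0 then v else s (t - 1)

-- These definitions satisfy laws such as
--   serial (parallel f g) (parallel f' g')
--     == parallel (serial f f') (serial g g'),
-- the kind of identity used to transform an obviously correct
-- circuit into one with a better layout.
```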
However, sometimes a relational description of a circuit is more appropriate than a functional one.
Combinational networks are a widely studied model for investigating the computational complexity of Boolean functions, relevant both to sequential computation and to parallel models such as VLSI circuits. Recently a number of important results proving non-trivial lower bounds for a particular type of restricted network have appeared. After giving a general introduction to Boolean complexity theory and its history, this chapter presents a detailed technical account of the two main techniques developed for proving such bounds.
INTRODUCTION
An important aim of Complexity Theory is to develop techniques for establishing non-trivial lower bounds on the quantity of particular resources required to solve specific problems. Natural resources, or complexity measures, of interest are Time and Space, these being formally modelled by the number of moves made (resp. the number of tape cells scanned) by a Turing machine. ‘Problems’ are viewed as functions f : D → R, where D is the domain of inputs and R the range of output values. D and R are represented as words over a finite alphabet Σ, and since any such alphabet can be encoded as a set of binary strings, it is sufficiently general to consider D to be the set of Boolean-valued n-tuples {0,1}^n and R to be {0,1}. Functions of the form f : {0,1}^n → {0,1} are called n-input single-output Boolean functions. B_n denotes the set of all such functions, and X_n = (x_1, x_2, …, x_n) is a variable over {0,1}^n.
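As a small concretisation of these definitions (a Haskell sketch with hypothetical names, not from the chapter), Boolean functions can be represented extensionally, and the size of B_n, namely 2^(2^n), recovered by enumerating truth tables for small n.

```haskell
import Control.Monad (replicateM)

-- An n-input single-output Boolean function, f : {0,1}^n -> {0,1}.
type BoolFun = [Bool] -> Bool

-- All 2^n input tuples that X_n ranges over.
inputs :: Int -> [[Bool]]
inputs n = replicateM n [False, True]

-- The truth table of f on n inputs: one output bit per input tuple,
-- so each f in B_n corresponds to a word in {0,1}^(2^n).
truthTable :: Int -> BoolFun -> [Bool]
truthTable n f = map f (inputs n)

-- All members of B_n, given by their truth tables; e.g.
--   length (allTruthTables 2) == 16 == 2^(2^2).
allTruthTables :: Int -> [[Bool]]
allTruthTables n = replicateM (2 ^ n) [False, True]
```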