Our first view of a concurrent process is that of a machine where every detail of its behaviour is explicit. We could take as our machine model automata in the sense of classical automata theory [RS59], also known as transition systems [Kel76]. Automata are fine except that they cannot represent situations where parts of a machine work independently or concurrently. Since we are after such a representation, we use Petri nets [Pet62, Rei85] instead. This choice is motivated by the following advantages of nets:
Concepts. Petri nets are based on a simple extension of the concepts of state and transition known from automata. The extension is that in nets both states and transitions are distributed over several places. This allows an explicit distinction between concurrency and sequentiality.
Graphics. Petri nets have a graphical representation that visualises the different basic concepts about processes like sequentiality, choice, concurrency and synchronisation.
Size. Since Petri nets allow cycles, a large class of processes can be represented by finite nets. Also, as a consequence of the distributed notion of state and transition described under Concepts, parallel composition is additive in size rather than multiplicative.
An attractive alternative to Petri nets is event structures, introduced in [NPW81] and further developed by Winskel [Win80, Win87]. Event structures are more abstract than nets because they record only events, i.e. the occurrences of transitions, and not states. But to dispense with states, event structures must not contain cycles. This yields infinite event structures even in cases where finite (but cyclic) nets suffice.
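The distributed notion of state that motivates the choice of Petri nets can be made concrete in a few lines of code. The following Python sketch is our own illustration, not the book's formalism: a marking is a multiset of places, and a transition fires by consuming its input tokens and producing its output tokens. Two transitions touching disjoint places are concurrently enabled and may fire in either order.

```python
# Minimal place/transition net sketch (illustrative names throughout).
from collections import Counter

class PetriNet:
    def __init__(self, transitions):
        # transitions: dict name -> (inputs, outputs), each a list of places
        self.transitions = transitions

    def enabled(self, marking, t):
        ins, _ = self.transitions[t]
        need = Counter(ins)
        # t is enabled if the marking holds enough tokens on every input place
        return all(marking[p] >= n for p, n in need.items())

    def fire(self, marking, t):
        assert self.enabled(marking, t)
        ins, outs = self.transitions[t]
        m = Counter(marking)
        m.subtract(Counter(ins))   # consume input tokens
        m.update(outs)             # produce output tokens
        return +m                  # drop zero counts

# Two independent components: t1 and t2 touch disjoint places,
# so they are concurrently enabled and can fire in either order.
net = PetriNet({
    't1': (['p1'], ['p2']),
    't2': (['q1'], ['q2']),
})
m0 = Counter({'p1': 1, 'q1': 1})
assert net.enabled(m0, 't1') and net.enabled(m0, 't2')
m_a = net.fire(net.fire(m0, 't1'), 't2')
m_b = net.fire(net.fire(m0, 't2'), 't1')
assert m_a == m_b == Counter({'p2': 1, 'q2': 1})
```

The final assertion shows the explicit distinction from automata: the interleavings commute because the two transitions act on disjoint parts of the distributed state.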
Many computing systems consist of a possibly large number of components that not only work independently or concurrently, but also interact or communicate with each other from time to time. Examples of such systems are operating systems, distributed systems and communication protocols, as well as systolic algorithms, computer architectures and integrated circuits.
Conceptually, it is convenient to treat these systems and their components uniformly as concurrent processes. A process is here an object that is designed for a possibly continuous interaction with its user, which can be another process. An interaction can be an input or output of a value, but we just think of it abstractly as a communication. Between two successive communications the process usually engages in some internal actions. These proceed autonomously at a certain speed and are not visible to the user. However, as a result of such internal actions the process behaviour may appear nondeterministic to the user. Concurrency arises because there can be more than one user and, inside the process, more than one active subprocess. The behaviour of a process is unsatisfactory for its user(s) if it does not communicate as desired. The reason can be that the process stops too early or that it engages in an infinite loop of internal actions. The first problem causes a deadlock with the user(s); the second one is known as divergence. Thus most processes are designed to communicate arbitrarily long without any danger of deadlock or divergence.
Case studies are a crucial test for any theory of concurrent processes. They clarify the application areas where the theory is particularly helpful, but also reveal its shortcomings. Such shortcomings can be challenges for future research.
Considering all existing case studies based on Petri nets, algebraic process terms and logical formulas, it is obvious that these description methods are immensely helpful in specifying, constructing and verifying concurrent processes. We think in particular of protocol verification, e.g. [Vaa86, Bae90], the verification of VLSI algorithms, e.g. [Hen86], the design of computer architectures, e.g. [Klu87, DD89a, DD89b], and even of concurrent programming languages such as OCCAM [INM84, RH88] or POOL [Ame85, ABKR86, AR89, Vaa90]. However, these examples use one specific description method in each case.
Our overall aim is the smooth integration of description methods that cover different levels of abstraction in a top-down design of concurrent processes. This aim is similar to what Chandy and Misra have presented in their rich and beautiful book on UNITY [CM88]. However, we believe that their approach requires complementary work at the level of implementation, i.e. where UNITY programs are mapped onto architectures.
Our presentation of three different views of concurrent processes attempts to contribute to this overall aim. To obtain a coherent theory, we concentrated on a setting where simple classes of nets, terms and formulas are used. We demonstrated the applicability of this setting in a series of small but non-trivial process constructions.
The stepwise development of complex systems through various levels of abstraction is good practice in software and hardware design. However, the semantic link between these different levels is often missing. This book is intended as a detailed case study of how such links can be established. It presents a theory of concurrent processes where three different semantic description methods are brought together in one uniform framework. Nets, terms and formulas are seen as expressing complementary views of processes, each one describing processes at a different level of abstraction.
Petri nets are used to describe processes as concurrent and interacting machines which engage in internal actions and communications with their environment or user.
Process terms are used as an abstract concurrent programming language. Due to their algebraic structure, process terms emphasise compositionality, i.e. how complex terms are composed from simpler ones.
Logical formulas of a first-order predicate logic, called trace logic, are used as a specification language for processes. Logical formulas specify safety and liveness aspects of the communication behaviour of processes as required by their users.
At the heart of this theory are two sets of transformation rules for the top-down design of concurrent processes. The first set can be used to transform logical formulas stepwise into process terms, and the second set can be used to transform process terms into Petri nets. These rules are based on novel techniques for the operational and denotational semantics of concurrent processes.
We now introduce a second view of concurrent processes whereby each process is a term over a certain signature of operator symbols. By interpreting these symbols on nets, we will solve the problem of compositionality. As interpretations we take a selection of the operators suggested in Lauer's COSY, Milner's CCS and Hoare's CSP.
Lauer's COSY (Concurrent Systems) is one of the first approaches to compositionality of processes on a schematic, uninterpreted level [LTS79]. It originates from path expressions [CH74] and can thus be seen as an extension of regular expressions to include parallelism. We use here COSY's operator for parallel composition because, as we shall see later in Chapter 4, it enjoys pleasant logical properties.
A significant step beyond COSY is Milner's CCS (Calculus of Communicating Systems), with its conceptual roots in algebra and the λ-calculus [Mil80, Mil83]. From CCS we take the idea that processes are recursive terms over certain operator symbols, i.e. they form the smallest set that is generated by the operator symbols and closed under parameterless recursion. In COSY, only iteration is present, as might be clear from its background in regular expressions. By using recursion we ensure that process terms are Turing powerful even on the schematic level, without the help of interpreted values or variables. We also take CCS's choice operator, because it allows a very clear treatment on the level of nets, and its notion of action morphism, by which actions can be renamed.
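To make the algebraic view concrete, the following Python sketch represents process terms as a small abstract syntax with action prefix, choice and a COSY/CSP-style parallel composition that synchronises on shared actions. All class names, the trace enumeration, and the choice to compute synchronisation sets from the residual terms' alphabets are our own illustrative assumptions, not the book's syntax or semantics.

```python
# Process terms as an abstract syntax; steps() lists possible transitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stop:                      # the inactive process: no transitions
    def alphabet(self): return frozenset()
    def steps(self): return []

@dataclass(frozen=True)
class Prefix:                    # a.P : do action a, then behave as P
    action: str
    cont: object
    def alphabet(self): return frozenset({self.action}) | self.cont.alphabet()
    def steps(self): return [(self.action, self.cont)]

@dataclass(frozen=True)
class Choice:                    # P + Q : behave as P or as Q
    left: object
    right: object
    def alphabet(self): return self.left.alphabet() | self.right.alphabet()
    def steps(self): return self.left.steps() + self.right.steps()

@dataclass(frozen=True)
class Par:                       # P || Q : interleave, synchronise on shared actions
    left: object
    right: object
    def alphabet(self): return self.left.alphabet() | self.right.alphabet()
    def steps(self):
        shared = self.left.alphabet() & self.right.alphabet()
        out = []
        for a, p in self.left.steps():
            if a not in shared:                    # independent left step
                out.append((a, Par(p, self.right)))
            else:                                  # shared: both must agree
                for b, q in self.right.steps():
                    if b == a:
                        out.append((a, Par(p, q)))
        for b, q in self.right.steps():
            if b not in shared:                    # independent right step
                out.append((b, Par(self.left, q)))
        return out

def traces(t, depth):
    # all action sequences of length <= depth, by unfolding steps()
    if depth == 0: return {()}
    res = {()}
    for a, u in t.steps():
        res |= {(a,) + s for s in traces(u, depth - 1)}
    return res

# (a.b.STOP) || (b.c.STOP): 'b' is shared, so both sides must agree on it.
p = Par(Prefix('a', Prefix('b', Stop())), Prefix('b', Prefix('c', Stop())))
assert ('a', 'b', 'c') in traces(p, 3)
assert ('b',) not in traces(p, 1)   # right side cannot do 'b' alone
```

The example shows the operators at work: the parallel composition forces the shared action 'b' to occur jointly, while 'a' and 'c' proceed independently on their own sides.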
This chapter initiates and serves as a preamble to a systematic discussion of several design paradigms. The concept of a paradigm is central, not only to this (the second) part of the book but also to the discussion of the relationship between design and science in Part III. It is, thus, of some importance to specify as exactly as possible what I mean when I use the terms ‘paradigm’ and ‘design paradigm’ in this work. The necessity for such clarification becomes especially urgent given the fact that in certain intellectual circles the word ‘paradigm’ has assumed the unfortunate status of a cult-word.
The present chapter also serves a second, less obvious function: to demarcate the fundamentally descriptive spirit of the preceding chapters from the pronounced prescriptive character of the chapters to follow.
The discipline of design is necessarily Janus-faced. On the one hand the act of design is a cognitive and an intellectual process. Hence any theory of design must, at least in part, view design processes as natural processes that harness the capabilities inherent in, and are in turn harnessed to the limitations imposed by, human nature and intellect. A theory of design must at least in part be a cognitive theory and, therefore, descriptive in nature. Much of Part I was precisely that; in particular, the notion that design is an evolutionary phenomenon is a descriptive, empirical proposition – a consequence of the inherent bounds of our ability to make rational decisions.
‘Classical’ design automation, of course, relied on the use of algorithms. That is, the earliest applications of computing techniques to design problem solving were all based on algorithms for producing designs. We may, therefore, refer to this approach as the algorithmic design paradigm.
A necessary (though not sufficient) condition for the algorithmic paradigm to be applicable is that the design problem be well-structured. Such problems are characterized by the fact that the requirements are all empirical in nature, in the sense that one knows exactly how to determine whether or not a given design meets such requirements (see chapter 3, section 3.3). And although most interesting design problems are ill-structured, many of their subproblems or components may turn out to be well-structured – in which case, the algorithmic paradigm may apply. Strictly speaking, then, one should think of the algorithmic paradigm as, so to speak, a tool that can be invoked by other more general paradigms such as TPD or even ASE to solve well-structured components of a given design problem.
COMPILING AS AN ALGORITHMIC STYLE
In the domain of computing systems design the algorithmic paradigm is perhaps most well represented by an important family of methods which may be collectively termed compiling. This is a technique or style which is ‘classical’ in that it was invented at a relatively early stage in the history of computer science; and it has been enormously successful as an instance of the algorithmic paradigm in the automation of software, firmware and hardware design.
In chapter 3 (section 3.4) I introduced the concept of bounded rationality. As noted there, bounded rationality is a model of rationality proposed by Simon (1976, 1982) that recognizes the severe limits on the cognitive and information processing capabilities of the decision-making agent. Such limitations may arise in a variety of ways: the various factors or parameters affecting a decision may be so complex, or may interact in such complicated ways, that the agent is unable to decide on the best course of action; or the agent may simply not know all the alternative courses of action; or, even when all the factors or alternatives are fully known, the cost of computing the best possible choice may be prohibitive.
Bounded rationality explains why design problems are so very often formulated in an incomplete manner (see section 3.4, chapter 3). But it has an even more profound consequence for the design process itself, and for the nature of design solutions.
Example 5.1
To understand this, consider a computer architect who has been given a particular set of initial requirements and is required to design a micro-architecture that meets these requirements. A computer's micro-architecture, it will be recalled (see footnote 6, chapter 3), refers to the internal architecture of the computer as seen by the microprogrammer (see also Dasgupta 1989a, chapter 1).
We shall further assume that the initial requirements, though sparse, are quite precise and complete.
Let us recapitulate some of the very basic features of the design process as articulated in Part I of this book.
(a) The requirements constituting the specification of a design problem may (initially) be neither precise nor complete; hence the elaboration of requirements becomes an integral part of the design process. Requirements may, furthermore, be of an empirical or a conceptual nature.
(b) A design is an abstract description of the target artifact that serves as a blueprint for implementation, and as a medium for criticism, experimentation and analysis.
(c) The designer is constantly faced with the problem of bounded rationality – the fact that designers and their clients are limited in their capacities to make fully rational decisions, or are limited in their ability to grasp the full implications of such decisions.
(d) Design decisions are more often than not satisficing procedures.
As a consequence of all of the above, design can naturally be viewed as an evolutionary process possessing the following characteristics: at any stage of its development the design is viewed as a tentative or conjectural solution to the problem posed. The designer's task in each evolutionary cycle is to elaborate either the design or the requirements so as to establish or converge towards a fit between the two. Thus, an integral part of each evolutionary cycle is the critical testing of the ‘current’ design against the ‘current’ requirements (see chapter 5, especially section 5.3).
The natural point to begin any discussion of design is its definition – that is, to state succinctly in a single sentence what it is that one does when one designs and what the end product is. Such an enterprise has been attempted in a variety of contexts, including architecture, engineering, computer science, and the hybrid discipline of computer-aided design. As might be expected, these definitions range from the very prosaic to the very abstract, and each reflects quite clearly the specific perspective, tradition, and bias of its author. Invariably, such attempts to capture an entire human (or, more recently, human–computer) enterprise within the bounds of a single sentence are unsatisfactory at the very least and fail abysmally at worst.
One reason why definitions fail is the ubiquity of design as a human activity. As Simon (1981) has pointed out, anyone who devises courses of action to change an existing state of affairs to a preferred one is involved in the act of design. We have all been involved in devising such courses of action; hence from the very experience of living – or at least from the very experience of purposively controlling our lives – we have an intuitive idea of what design is. Thus, any single all-embracing definition leaves us dissatisfied as much by what it excludes as by what it contains. For every definition one can point to instances of what is intuitively or experientially felt to be design but which have been excluded from the definition.
Most design theorists including Simon (1981) and Jones (1980) among the more influential agree that, in an ultimate sense, the goal of design is to initiate change in some aspect of the world. We perceive an imperfection in the state of affairs in some specific domain and we conceive or design an artifact which when implemented will, we believe, correct or improve this state of affairs.
There are a number of consequences of this seemingly obvious observation.
DEMARCATING ENGINEERING FROM SCIENCE
I shall use the term engineering here (and throughout the book) as a synonym for the more cumbersome ‘sciences of the artificial’. That is, the term encompasses the activities involved in the production of all useful artifacts – both material, such as buildings, computers and machine tools, and symbolic or abstract such as organizations and computer programs. Clearly, except in the simplest of situations, engineering entails design.
The notion of design as an agent of change appears to establish a firm criterion for demarcating engineering from the natural sciences (such as physics, chemistry, biology or geology). According to conventional wisdom, the latter disciplines are concerned with understanding the natural universe; the engineering disciplines are concerned with its purposeful, artificial alteration or extension.
The tendency to distinguish between the natural sciences (or simply, ‘science’) and engineering has, over the years, assumed almost mythical proportions.