In this article we describe a translation of the Parallel Object-Oriented Language POOL into the language of ACP, the Algebra of Communicating Processes. This translation provides us with a large number of semantics for POOL. It is argued that an optimal semantics for POOL does not exist: what is optimal depends on the application domain one has in mind. We show that the select statement in POOL makes a semantic description of POOL with handshaking communication between objects incompatible with a description level where message queues are used. Attention is paid to the question of how fairness and successful termination can be included in the semantics. Finally, it is shown that integers and booleans in POOL can be implemented in various ways.
INTRODUCTION
Many programming languages now offer facilities for concurrent programming. The basic notions of some of these languages, for example CSP, occam and LOTOS, are rather close to the basic notions of ACP, and it is not very difficult to give semantics for these languages in the framework of ACP. Milner showed how a simple high-level concurrent language can be translated into CCS. However, it is not obvious at first sight how to give process algebra semantics for more complex concurrent programming languages like Ada, Pascal-Plus or POOL. This is an important problem for the simple reason that a lot of concurrent systems are specified in terms of these languages. In this article we tackle the problem and give process algebra semantics for the language POOL.
By
J. A. Bergstra, Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands, Department of Philosophy, State University of Utrecht, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands
We introduce an encapsulation operator Eφ that provides process algebra with a process creation mechanism. Several simple examples are considered. It is shown that Eφ does not extend the defining power of the system ‘ACP with guarded recursion’.
INTRODUCTION
Extension of process algebra
In this paper we extend process algebra with a new operator that will be helpful for describing process creation. From a methodological point of view, the extension of process algebra with new operators is just the right way to incorporate new features. Only in a very rich calculus with many operators may one hope to be able to perform significant algebraic calculations on systems. In many cases a new feature requires new (additional) syntax and more equations; only in very rare circumstances does the addition of equations alone suffice to obtain an appropriate model of some new system aspect. The core system ACP describes asynchronous cooperation with synchronous communication.
On top of ACP various features can be added, for instance: asynchronous communication, cooperation in the presence of shared data, broadcasting, interrupts. This note adds process creation to the features that are compatible with process algebra.
For historical remarks and relations with previous literature we refer to.
Process creation
We start on the basis of the axiom system ACP, which is supposed to be known to the reader. We assume the presence of a finite set of data D and introduce for each d∈D an action cr(d). The action cr(d) stands for: create a process on the basis of initial information d. Let cr(D) denote the set {cr(d)|d∈D}.
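The operational idea behind cr(d) can be illustrated with a toy interleaving scheduler (a sketch only; the `run` and `phi` names are ours, and this mimics the intended behaviour of process creation rather than the algebraic definition of the operator):

```python
from collections import deque

def run(initial, phi):
    """Interleave a pool of processes; a cr(d) step spawns phi(d).

    Each process is a list of actions; an action ('cr', d) creates a new
    process from initial information d, any other action is just recorded.
    """
    ready = deque([list(initial)])
    trace = []
    while ready:
        proc = ready.popleft()
        if not proc:
            continue                      # finished process
        action = proc.pop(0)
        if isinstance(action, tuple) and action[0] == 'cr':
            d = action[1]
            trace.append(f'cr({d})')
            ready.append(list(phi(d)))    # created process runs in parallel
        else:
            trace.append(action)
        ready.append(proc)                # round-robin interleaving
    return trace

# phi(d): a process that emits d and, for d > 0, creates a process for d - 1
phi = lambda d: [f'out({d})'] + ([('cr', d - 1)] if d > 0 else [])
print(run([('cr', 2)], phi))
# → ['cr(2)', 'out(2)', 'cr(1)', 'out(1)', 'cr(0)', 'out(0)']
```

The example shows the essential point of the construct: the creating process continues alongside the created one, so creation is a mechanism for growing a parallel composition dynamically.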
By
J. C. Mulder, Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands,
W. P. Weijland, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands
In this paper a concurrent sorting algorithm called ranksort is presented, able to sort an input sequence of length n in log n time, using n² processors. The algorithm is formally specified as a delay-insensitive circuit. Then a formal correctness proof is given, using bisimulation semantics in the language ACPτ. The algorithm has area·time² = O(n² log⁴ n) complexity, which is slightly suboptimal with respect to the lower bound of AT² = Ω(n² log n).
INTRODUCTION
Many authors have studied the concurrency aspects of sorting, and indeed the n-time bubblesort algorithm (using n processors) has already been analyzed rather thoroughly (e.g. see: Hennessy, Kossen and Weijland). However, bubblesort is not the most efficient sorting algorithm in sequential programming, since it is n²-time, whereas heapsort and mergesort, for instance, are n log n-time sorting algorithms. So the natural question arises whether it would be possible to design an algorithm needing even less than n time.
In this paper we discuss a concurrent algorithm capable of sorting n numbers in O(log n) time. This algorithm is based on the idea of square comparison: by putting all numbers to be sorted in a square matrix, all comparisons can be made in O(1) time, using n² processors (one for each cell of the matrix). Then the algorithm only needs to evaluate the result of this operation.
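The underlying idea can be sketched sequentially (a sketch, with our own function name and tie-breaking convention; the chapter itself specifies the algorithm as a delay-insensitive circuit, not as a program): the rank of each element is the number of elements that must precede it, computed from the full n×n comparison matrix, and each element then moves to its rank position.

```python
def ranksort(xs):
    """Sequential simulation of ranksort.

    In the concurrent version each of the n*n comparisons is done by its
    own processor in O(1) time and the ranks are summed in O(log n) time;
    here we simply enumerate the comparison matrix.
    """
    n = len(xs)
    # cell (i, j) of the matrix decides whether element j precedes element i;
    # equal elements are ordered by index to make ranks distinct
    rank = [sum(1 for j in range(n)
                if xs[j] < xs[i] or (xs[j] == xs[i] and j < i))
            for i in range(n)]
    out = [None] * n
    for i in range(n):
        out[rank[i]] = xs[i]   # each element moves to its rank position
    return out

print(ranksort([3, 1, 4, 1, 5]))  # → [1, 1, 3, 4, 5]
```

Note that the ranks are a permutation of 0..n-1 precisely because ties are broken by index, which is what lets every element claim a distinct output slot.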
The algorithm presented here, which is called ranksort, is not the only concurrent time-efficient sorting algorithm. Several sub n-time algorithms have been developed by others (see: Thompson).
By
C. P. J. Koymans, Department of Philosophy, State University of Utrecht, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands,
J. C. Mulder, Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands
A version of the Alternating Bit Protocol is verified by means of process algebra. To avoid a combinatorial explosion, a notion of ‘modules’ is introduced and the protocol is divided into two such modules. A method is developed for verifying conglomerates of modules and applied to the motivating example.
One of the basic problems in protocol verification is the following: data are to be transmitted from A to B via some unreliable medium M. A protocol has been proposed for doing so correctly and perhaps efficiently. A rigorous mathematical proof of the correctness claim is desired.
Now protocol verification aims at providing the techniques for giving such a proof. Several formalisms have been advocated, but as yet none has been widely accepted.
The framework we adhere to is process algebra. The first protocol correctness proof by means of process algebra is in Bergstra and Klop, where a simple version of the Alternating Bit Protocol is verified.
We have tried our hand at a more complicated version, called the Concurrent Alternating Bit Protocol (CABP), and found that the number of possible state transitions was prohibitively large. In this article we propose a divide-and-conquer strategy. We group processes into modules, describe and verify their behaviour, and finally combine them. For different approaches, see.
In Section 1 we deal with the Concurrent Alternating Bit Protocol (CABP). In Section 2 we present the modular approach. Modules are introduced in Section 3, whereas the verification of the CABP is given in Section 4.
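As a point of reference, the behaviour of the basic (non-concurrent) Alternating Bit Protocol can be sketched as a toy simulation (a sketch only; the names, the loss probability and the seed are ours, and this is the simple ABP rather than the CABP verified here):

```python
import random

def abp_send(data, loss=0.3, seed=1):
    """Toy simulation of the Alternating Bit Protocol.

    The sender tags each datum with an alternating bit and retransmits on
    timeout; the receiver delivers a frame only when its bit is the one it
    expects, and always re-acknowledges the bit it saw.  The channel loses
    messages with probability `loss`.
    """
    rng = random.Random(seed)
    channel = lambda msg: msg if rng.random() > loss else None  # lossy medium
    received, bit, expected = [], 0, 0
    for d in data:
        acked = False
        while not acked:                      # timeout/retransmit loop
            frame = channel((d, bit))         # data frame, may get lost
            if frame is not None:
                if frame[1] == expected:      # new frame: deliver exactly once
                    received.append(frame[0])
                    expected = 1 - expected
                ack = channel(frame[1])       # acknowledgement, may get lost
                acked = (ack == bit)
        bit = 1 - bit                         # alternate the bit for next datum
    return received

print(abp_send(['a', 'b', 'c']))  # → ['a', 'b', 'c'], despite losses
```

Even this small simulation hints at the verification problem: the interleavings of sender, receiver and two lossy channels generate the large transition space that the modular approach of this article is designed to tame.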
The Amoeba distributed operating system supports the transaction as its communication primitive. The protocol that the Amoeba system uses to carry out sequences of transactions reliably and efficiently is analyzed in terms of process algebra. The design goals are formulated as process algebra equations and it is established that one of them is not met. This can be repaired by adding an extra transition. Subsequently it is verified that the revised version meets its specifications.
It has been observed that formal verification methods for mathematical proofs, computer programs, communication protocols and the like are usually illustrated by ‘toy’ examples and that such proofs tend to be discouragingly long. In order to demonstrate that it is feasible to verify a ‘real-life’ communication protocol by means of process algebra, we picked one from the literature.
In his Ph.D. thesis, Mullender investigates issues he considered while developing the Amoeba distributed operating system. In Section 3.2.4 of the thesis a transaction protocol is described, to which we will refer as the Amoeba protocol. In the preceding sections the design goals are described that this protocol is supposed to satisfy. He does not give a formal verification that his protocol meets these criteria. In fact, it turns out that one of them is not met. Note that this only applies to the simplified version of the protocol that appears in the thesis; the actual implementation uses a much more complicated version in which this mistake is not found.
Section 1 of this article gives the minimum background information necessary for understanding the rest of the article.
In Section 2 the design goals are formulated in English and in terms of process algebra.
By
L. Kossen, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands,
W. P. Weijland, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands
In designing VLSI circuits it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Systolic systems are circuits built up from arrays of cells and are therefore very suitable for formal analysis and induction methods. In two examples, correctness proofs are given using bisimulation semantics with asynchronous cooperation. These examples have also been worked out by Hennessy in a setting of failure semantics with synchronous cooperation. Finally, the notion of process creation is introduced and used to construct machines with unbounded capacity.
INTRODUCTION
In this article we will present simple descriptions of so-called systolic systems. Such a system can be viewed as a large integration of identical cells, in such a way that the behaviour of the total system strongly resembles the behaviour of the individual cells. In fact, the total system behaves like one of its individual cells ‘on a larger scale’.
For example, one can think of a machine sorting arrays of numbers up to a certain maximum length. Suppose we need a machine handling arrays that are much longer. A typical ‘systolic approach’ to this problem would be to interconnect the smaller machines such that the total circuit sorts arrays of greater length. As a matter of fact, this specific example will be worked out in the following sections. In designing VLSI circuits (short for very large scale integrated circuits) it is very useful, if not necessary, to construct the specific circuit by placing simple components in regular configurations. Otherwise one loses all intuition about the behaviour of the circuit that is eventually constructed.
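The sorting example just described can be sketched as follows (a sketch with invented names; the chapters themselves give circuit-level descriptions, not programs): a sorter for arrays of length 2n is built from two length-n sorters whose outputs are merged.

```python
def merge(a, b):
    """Merge two sorted lists; in a systolic realization this step is
    itself a regular array of compare-exchange cells."""
    out, i, j = [], 0, 0
    while i < len(a) or j < len(b):
        if j == len(b) or (i < len(a) and a[i] <= b[j]):
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out

def double(sorter):
    """Interconnect two copies of a length-n sorter into a length-2n
    sorter: each copy sorts one half, and the results are merged."""
    return lambda xs: merge(sorter(xs[:len(xs) // 2]),
                            sorter(xs[len(xs) // 2:]))

cell = lambda xs: xs if len(xs) < 2 else [min(xs), max(xs)]  # sorts <= 2 items
sort4 = double(cell)     # handles arrays of length 4
sort8 = double(sort4)    # handles arrays of length 8
print(sort8([5, 2, 7, 1, 8, 3, 6, 4]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

The point of the construction is exactly the ‘larger scale’ behaviour described above: `sort8` has the same interface and behaviour as `cell`, only for longer inputs.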
Let x be a process which can perform an action a when it is in state s. In this article we consider the situation where x is placed in a context which blocks a whenever x is in s. The option of doing a in state s is redundant in such a context, and x can be replaced by a process x′ which is identical to x, except for the fact that x′ cannot do a when it is in s (irrespective of the context). A simple, compositional proof technique is presented, which uses information about the traces of processes to detect redundancies in a process specification. As an illustration of the technique, a modular verification of a workcell architecture is presented.
INTRODUCTION
We are interested in the verification of distributed systems by means of algebraic manipulations. In process algebra, verifications often consist of a proof that the behaviour of an implementation IMPL equals the behaviour of a specification SPEC, after abstraction from internal activity: τI(IMPL) = SPEC.
The simplest strategy for proving such a statement is first to derive the transition system (process graph) for the process IMPL with the expansion theorem, apply an abstraction operator to this transition system, and then simplify the resulting system to that of SPEC using the laws of (for instance) bisimulation semantics. This ‘global’ strategy, however, is often not very practical due to combinatorial state explosion: the number of states of IMPL can be of the same order as the product of the numbers of states of its components. Another serious problem with this strategy is that it provides almost no ‘insight’ into the structure of the system being verified.
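The product growth is easy to make concrete (a small illustration with invented names, assuming components with no synchronization, which is the worst case):

```python
from itertools import product

def global_state_count(components):
    """Count global states of the interleaved composition of independent
    components, each given as its set of local states.  Without
    synchronization the global state space is the full Cartesian product,
    which is what makes the 'global' expansion strategy blow up."""
    return len(list(product(*components)))

# ten two-state components already give 2**10 = 1024 global states
cells = [{'idle', 'busy'}] * 10
print(global_state_count(cells))  # → 1024
```

Synchronization constraints prune this product, but in practice not nearly enough, which motivates the compositional, trace-based technique of this article.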
By
J. A. Bergstra, Programming Research Group, University of Amsterdam, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands, Department of Philosophy, State University of Utrecht, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands,
J. W. Klop, Department of Software Technology, Centre for Mathematics and Computer Science, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands, Department of Mathematics and Computer Science, Free University, P.O. Box 7161, 1007 MC Amsterdam, The Netherlands
This article serves as an introduction to the basics of the theory that will be used in the rest of this book. To be more precise, we will discuss the axiomatic theory ACPτ (Algebra of Communicating Processes with abstraction), with additional features added, which is suitable for both specification and verification of communicating processes. As such, it can be used as background material for the other articles in the book, where all basic axioms are gathered. But we do not address ourselves exclusively to readers with previous exposure to algebraic approaches to concurrency (or, as we will call it, process algebra); newcomers to this type of theory should also find enough here to get started. For a more thorough treatment of the theory we refer to, which will be revised, translated and published in this CWI Monograph series. There, most proofs can also be found; we refer also to the original papers where the theory was developed. This article is an abbreviated version of reference.
Our presentation will concentrate on process algebra as it has been developed since 1982 at the Centre for Mathematics and Computer Science, Amsterdam (see), since 1985 in cooperation with the University of Amsterdam and the University of Utrecht. This means that we make no attempt to give a survey of related approaches though there will be references to some of the main ones.
This paper is not intended to give a survey of the whole area of activities in process algebra.
We acknowledge the help of Jos Baeten in the preparation of this paper.
In this book, we give applications of the theory of process algebra, known by the acronym ACP (Algebra of Communicating Processes), as it has been developed since 1982 at the Centre for Mathematics and Computer Science, Amsterdam (see), since 1985 in cooperation with the University of Amsterdam and the University of Utrecht. An important stimulus for this book was given by the ESPRIT contract no. 432, An Integrated Formal Approach to Industrial Software Development (Meteor). The theory itself is treated in, which will be revised, translated and published in this series. The theory is briefly reviewed in the first article in this book, An introduction to process algebra, by J.A. Bergstra and J.W. Klop.
This book gives applications of the theory of process algebra. By the term process algebra we mean the study of concurrent or communicating processes in an algebraic framework. We endeavour to treat communicating processes in an axiomatic way, just as, for instance, the study of mathematical objects such as groups or fields starts with an axiomatization of the intended objects. The axiomatic method which will concern us is algebraic in the sense that we consider structures which are models of some set of (mostly) equational axioms; these structures are equipped with several operators. Thus we use the term ‘algebra’ in the sense of model theory.
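As a concrete instance of such an equational axiomatization, the sequential core of ACP, known as BPA (Basic Process Algebra), has as its operators alternative composition (+) and sequential composition (·), governed by five axioms:

```latex
\begin{align*}
x + y &= y + x                             && \text{(A1)}\\
(x + y) + z &= x + (y + z)                 && \text{(A2)}\\
x + x &= x                                 && \text{(A3)}\\
(x + y)\cdot z &= x\cdot z + y\cdot z      && \text{(A4)}\\
(x\cdot y)\cdot z &= x\cdot (y\cdot z)     && \text{(A5)}
\end{align*}
```

Note that the symmetric distributive law x·(y + z) = x·y + x·z is deliberately absent: a choice made after doing x is not the same process as a choice made before, which is precisely the distinction that branching-sensitive models such as bisimulation semantics preserve.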
By
Richard A Volz, Dept. of Computer Science, Texas A&M Univ., College Station, Texas, U.S.A., 77843.,
Padmanabhan Krishnan, Dept. of Elec. Eng. & Comp. Sci., The Univ. of Michigan, Ann Arbor, Mich. U.S.A., 48109.,
Ronald Theriault, Dept. of Computer Science, Texas A&M Univ., College Station, Texas, U.S.A., 77843.
This paper describes the design and implementation of a Distributed Ada system. The language is not well defined with respect to distribution, and any implementation for distributed execution must make a number of decisions regarding the language. The objectives of the implementation described here are to remain as close to the current definition of Ada as possible and to learn through experience what changes are necessary in future versions. The approach we take translates a single Distributed Ada program into a number of Ada programs (one per site), each of which may then be compiled by an existing Ada compiler. Issues discussed include the ramifications of sharing data types, objects, subprograms, tasks and task types. The implementation techniques used in the translator are described. We also develop a model of the performance of our system and validate it through performance benchmarks.
INTRODUCTION
The importance of distributed systems cannot be over-emphasized, especially with the reduction in the cost of high speed connection between powerful processing elements. Distributed computing has made inroads into many important areas such as manufacturing, avionic systems and space systems. The cost of developing software for such systems, however, is reaching astronomical proportions [1]. A major concern is the creation of software tools to economically harness the increased computing power.
Central to distributed software development is the language used to program these distributed devices.
By
Brian Dobbing, Alsys Limited Newtown Road, Henley on Thames, Oxon RG9 1EN, England,
Ian Caldwell, Alsys Limited Newtown Road, Henley on Thames, Oxon RG9 1EN, England
This paper describes firstly a general model for implementing Ada in a distributed or parallel system using existing compilation systems and without extensions to, or restrictions on, the Ada language definition. It then describes an instance of the model, namely how to implement an application across a network of transputers in Ada using the current Alsys Ada Compilation System for the Transputer.
INTRODUCTION
Much debate has already taken place regarding the inadequacies of Ada to support a single program running on a distributed or parallel architecture. This has led to a set of twelve requirements from the Parallel/Distributed Systems Working Group for consideration by the Ada 9X Project Requirements Team [DoD89]. Whilst we await Ada 9X, it is very important to be able to demonstrate that current Ada can be used to program distributed systems efficiently, without compromising Ada's goals of security and portability. This document describes how Ada can be used in this way in the general case and also gives an example of distributed Ada on the Inmos transputer. The transputer has been chosen primarily as a precursor to a study commissioned by the European Space Agency into the problems of, and recommendations for, mapping Ada onto a multi-transputer network for on-board space applications.
The intention of the general model is to be able to demonstrate support for the needs of distributed systems such as:
program partitioning and configuration;
dynamic reconfiguration and fault tolerance;
different perception of time in different partitions;
This paper presents the current position of the York Distributed Ada Project. The project has developed an approach to the programming of loosely coupled distributed systems in Ada, this being based on a single Ada program consisting of virtual nodes which communicate by message passing. A prototype development environment has been produced to support this approach, and an example distributed embedded system has been built. Preliminary work has been undertaken on mechanisms to support the construction of fault-tolerant systems within the approach.
INTRODUCTION
The Ada programming language, together with an appropriate support environment, is well suited to the development of large software engineering applications. Many such projects involve programming embedded computer systems, which often include loosely coupled distributed processing resources: collections of processing elements which are connected by some communication system but which share no common memory. It is now a commonly accepted view (Wellings 1987) that the Ada language by itself does not provide adequate support for the programming of these systems. The York Distributed Ada (YDA) Project has addressed this problem and developed an approach which allows Ada programs to be constructed for execution on distributed systems. (The YDA project was originally funded through the UK's Alvey Software Engineering Directorate as part of the ASPECT (Hall 1985) project. It is currently funded by the Admiralty Research Establishment.) The current position of the project is presented in this paper.
By
Anders Ardö, Department of Computer Engineering University of Lund, P.O. Box 118 S-221 00 Lund, Sweden,
Lars Lundberg, Department of Computer Engineering University of Lund, P.O. Box 118 S-221 00 Lund, Sweden
Now that the internal speed of computers is close to the physical limitations of electronic devices (cf. [Wil83, Mea83]), parallelism is the main way to increase computing capacity. This conclusion has been obvious for quite some time, and multiprocessor systems have attracted much attention in recent years.
In order to achieve parallelism, new computer structures and internal organizations are needed. There are, however, still no general solutions to these problems. Finding general and efficient ways to organize for parallelism in computer systems is one main interest of computer architecture research today.
One application of multiprocessors that is likely to increase rapidly is based on the use of parallel languages. As parallel languages have not been generally available, there is still very little knowledge among software designers of how to take advantage of program parallelism. Now that parallel languages like Ada are becoming readily available, the programming community will take advantage of the new possibilities they offer.
There are strong reasons to believe that parallelism in programming will be used even apart from the possibilities of increasing speed with parallel hardware. Many computational problems have an inherent parallelism that will obviously be exploited once parallelism has become part of programmers' daily life. This will also increase program understandability [Hoa78]. Real-time programming is an obvious example for which these arguments are relevant, but as experience grows the same will prove true for a variety of application areas.
This paper will present a study of practical design decisions relevant to the retargeting of a traditional compilation system to a distributed target environment. The knowledge was gathered during the course of Honeywell's Distributed Ada project which involved the retargeting of a full commercial Ada compilation system to a distributed environment. The goal of the project was to create a compilation system which would allow a single unmodified Ada program to be fragmented and executed in a distributed environment.
The Distributed Ada Project
The trend in embedded system architectures is shifting from uniprocessor systems to networks of multiple computers. Advances in software tools and methodologies have not kept pace with advances in using distributed system architectures. In current practice, the tools designed for developing software on uniprocessor systems are used even when the target hardware is distributed. Typically, the application developer factors the hardware configuration into the software design very early in the development process and writes a separate program for each processor in the system. In this way, software design gets burdened with hardware information that is unrelated to the application functionality. The paradigm is also weak in that no compiler sees the entire application. Because of this, the semantics of remote operations are likely to differ from those of local operations, and the type checking that the compiler provides is defeated for inter-processor operations.
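The mechanics of keeping remote operations type-checked and local-looking can be sketched with a stub/proxy (a sketch only; the names are ours and a plain function call stands in for the network transport that a real distributed compilation system would generate per site):

```python
import pickle

class RemoteStub:
    """Toy proxy for a remote operation: the call is marshalled, 'sent' to
    the remote site, executed there, and the result shipped back, so the
    caller sees an ordinary call."""
    def __init__(self, transport, name):
        self.transport, self.name = transport, name
    def __call__(self, *args):
        request = pickle.dumps((self.name, args))   # marshal the call
        reply = self.transport(request)             # cross-site hop
        return pickle.loads(reply)                  # unmarshal the result

def remote_site(request):
    """The 'remote site': a dispatcher over the procedures hosted there."""
    name, args = pickle.loads(request)
    procedures = {'add': lambda a, b: a + b}
    return pickle.dumps(procedures[name](*args))

add = RemoteStub(remote_site, 'add')
print(add(2, 3))  # → 5, as if 'add' were local
```

When a single compiler sees the whole application, it can generate such stubs itself and check every cross-site call against the real signature, which is exactly what the per-processor-program paradigm forfeits.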