This paper presents a study of practical design decisions relevant to retargeting a traditional compilation system to a distributed target environment. The knowledge was gathered during Honeywell's Distributed Ada project, which involved retargeting a full commercial Ada compilation system to a distributed environment. The goal of the project was to create a compilation system that would allow a single, unmodified Ada program to be fragmented and executed in a distributed environment.
The Distributed Ada Project
The trend in embedded system architectures is shifting from uniprocessor systems to networks of multiple computers. Advances in software tools and methodologies, however, have not kept pace with this shift. In current practice, tools designed for developing software on uniprocessor systems are used even when the target hardware is distributed. Typically, the application developer factors the hardware configuration into the software design very early in the development process and writes a separate program for each processor in the system. Software design thus becomes burdened with hardware information that is unrelated to the application's functionality. The paradigm is also weak in that no compiler sees the entire application; as a result, the semantics of remote operations are likely to differ from those of local operations, and the type checking that the compiler provides is defeated for inter-processor operations.
The task of programming distributed applications in Ada may be addressed in several ways. Most of these require the application developer to factor the hardware configuration into software design very early in the development process. The resulting software is sensitive to changes in hardware, does not lend itself to design iteration, is not easily transportable across different hardware configurations, and is not stable against changes during the lifecycle of the application.
In Section 3, we describe an approach that aims at separation of concerns between program design and program partitioning for distributed execution. The entire application is written as a single Ada program using the full capabilities of the language for program structuring, separate compilation, and type checking. Then in a distinct second phase of design, the program is partitioned and prepared for distributed execution. Advantages of a two-phase design approach are discussed. Section 4 reviews related work and presents a comparative evaluation. Section 5 describes the notation used to express program partitioning. Section 6 revisits the issue of what Ada entities should be distributable.
Two implementations of this approach have been completed and tested with the Ada Compiler Validation Capability (ACVC) test suite. Implementation issues and the key features of our implementation approach are presented in an accompanying paper.
CLASSIFICATION OF APPROACHES
The Ada language does not provide explicit language support for distribution.
By D. Auty, SofTech, USA; A. Burns, University of Bradford, UK; C. W. McKay, University of Houston - Clear Lake, USA; C. Randall, GHG Corporation, USA; and P. Rogers, University of Houston - Clear Lake, USA
Perhaps the greatest challenge facing Ada is in the domain of the large distributed real-time system. Because of the long lead time associated with such complex applications, no real experience of the use of Ada in this type of domain has yet been gained. Nevertheless, there are projects of a large and complex nature that are committed to the use of Ada, even though the full potential of the language has yet to be proven in this challenging domain.
The Portable Common Execution Environment (PCEE) project is a research effort addressing the life-cycle support of large, complex, non-stop, distributed computing applications with Mission And Safety Critical (MASC) components. Such applications (for example, the International Space Station Freedom) typically have extended-lifetime requirements (e.g., 30 years). PCEE focuses on the system software, the interface to applications, and the system architecture necessary to reliably build and maintain such systems. The requirements extend from the target system environment to the integration environment, and ultimately to the host environment. The integration environment serves as the single logical point of integration, deployment, and configuration control, whereas system development occurs in the host environment. Life-cycle issues include an integrated approach to the technologies (environments, tools, and methodologies) and theoretical foundations (models, principles, and concepts) that span these three environments. The scope of the effort is necessarily broad. There are, however, substantial research foundations to support development across the breadth of the project.
By Judy M Bishop and Michael J Hasling, Department of Electronics and Computer Science, The University, Southampton, England
Although Ada is now ten years old, there are still no firm guidelines as to how the distribution of an Ada program onto multiprocessors should be organised, specified and implemented. There is considerable effort being expended on identifying and solving problems associated with distributed Ada, and the first aim of this paper is to set out where the work is being done, and how far it has progressed to date. In addition to work of a general nature, there are now nearly ten completed distributed Ada implementations, and a second aim of the paper is to compare these briefly, using a method developed as part of the Stadium project at the University of Southampton. Much of Southampton's motivation for getting involved in distributed Ada has come from its strong concurrent computing group, which has for several years taken a lead in parallel applications on transputers. The paper concludes with a classification of parallel programs and a description of how the trends in distributed Ada will affect users in the different groups.
COLLECTIVE WORK ON DISTRIBUTED ADA
The major forums where work on distributed Ada is progressing are Ada UK's International Real-Time Issues Workshop, the Ada 9X Project, SIGAda ARTEWG and AdaJUG CARTWG. Reports of these meetings appear regularly in Ada User (published quarterly by Ada UK) and Ada Letters (published bi-monthly by ACM SIGAda). The status of their activities is summarised here.
Although Ada is now reaching its adolescence, distributed Ada is still in its infancy. The extent of the problems yet to be solved and the multitude of proposed solutions present a very real dilemma for prospective implementors and users alike. How does one specify a distributed program? What parts of Ada are allowed to be distributed? Will the underlying hardware configuration matter? Can the program be made fault tolerant and reliable in the face of processor failure? How much effort will it take to move an existing Ada program onto a multiprocessor system? Will the proposed new Ada Standard (Ada 9X) address distributed issues?
These are just some of the questions that arise, and there is considerable effort being expended, world-wide, in answering them. However, much of this work is being conducted in small working groups, and the interim results are published only in condensed form, if at all. The aim of this book is to open the debate to as wide an audience as possible, heightening the level of awareness of the progress that has been made to date, and the issues that still remain open.
The symposium on which this book is based was held at the University of Southampton on 11–12 December 1989 and attended by nearly 100 people.
A recent trend in computer engineering has been the replacement of large uniprocessor-based proprietary architectures by designs based on multiple microprocessors employing various interconnection strategies. While these multiprocessor systems offer significant performance and economic advantages over uniprocessor systems, not all prospective users are able or willing to adapt their applications to execute as multiple concurrent streams.
The Ada programming language is well suited to multiprocessor systems, as it allows the programmer to direct the use of concurrency through the Ada tasking mechanism. The avoidance of automatic distribution of the program by the compiler, and the choice of the Ada task as the unit of distribution, greatly simplify the development of Ada software for multiprocessor architectures.
For performance reasons, the inter-processor communications path should offer low latency and high transfer rates. Shared memory supports these characteristics and a multiprocessor system, where all memory can be accessed by all processors, has proven to be a suitable platform for a parallel Ada implementation.
This paper discusses the implementation and architecture of a parallel Ada system that allows up to twenty processors to co-execute the same Ada program with true concurrency. Particular attention is given to the design of the Ada runtime and the interface between the runtime and the underlying operating system, as these parts of the system must be “multi-threaded” throughout in order to minimize bottlenecks. The paper concludes with the description of a 1000 MIPS Ada engine currently under development.
By Robert Dewar, Susan Flynn, Edmond Schonberg, and Norman Shulman, New York University Ada/Ed Research Group
The Ada multi-tasking model is one in which tasks can run on separate processors and memory is either non-shared (local to one task), or shared (referenced by more than one task). It would therefore seem that mapping Ada onto a multi-processor architecture with both local and shared memory should be straightforward. This paper examines the difficulties in mapping Ada onto the IBM RP3 which is an example of such an architecture. In practice there are a number of difficult problems, the most significant of which is the inability to determine at compile time which variables are shared. The RP3 has a flexible shared memory architecture, and an important purpose of the Ada/RP3 project is to investigate possible models for implementation of Ada, with a view to determining whether modifications or enhancements of Ada are desirable to ensure optimal use of such architectures.
INTRODUCTION
The NYU Ada/Ed system consists of a front end and interpreter written entirely in C. This system is a direct descendant of the original SETL interpreter, and has been ported to a wide variety of machines [KS84].
Our current research involves porting Ada/Ed to the IBM RP3, an experimental multi-processor with shared memory [P87]. The front end is essentially unchanged, except for the addition of a set of pragmas described later, but the back end is being modified to interface with proprietary IBM code-generation technology, and the runtime library is being rewritten to take advantage of the multi-processor architecture.
By A.B. Gargaro, Computer Sciences Corporation, Moorestown, New Jersey, USA; S.J. Goldsack, Department of Computing, Imperial College London, UK; R.A. Volz, Department of Computer Science, Texas A&M University, USA; and A.J. Wellings, Department of Computer Science, University of York, UK
The Ada programming language was designed to provide support for a wide range of safety-critical applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals. A major impediment has been the lack of language support for distributed fault-tolerant program execution.
In this paper we propose language changes to Ada which will facilitate the programming of fault-tolerant distributed real-time applications. These changes support partitioning and configuration/reconfiguration. Paradigms are given to illustrate how dynamic reconfiguration of the software can be programmed following notification of processor and network failure, mode changes, software failure, and deadline failure.
INTRODUCTION
There is increasing use of computers that are embedded in some wider engineering application. These systems all have several common characteristics: they must respond to externally generated input stimuli within a finite and specified period; they must be extremely reliable and/or safe; they are often geographically distributed over both a local and a wide area; they may contain a very large and complex software component; they may contain processing elements which are subject to cost/size/weight constraints.
Developing software to control safety-critical applications requires programming abstractions that are unavailable in many of today's programming languages. The Ada programming language was designed to provide support for such applications within a unified language framework, but it is now commonly accepted that the language has failed to achieve all its stated design goals.
By Colin Atkinson, Imperial College, Dept. of Computing, 180 Queen's Gate, London SW7 2BZ, U.K.; and Andrea Di Maio, TXT S.p.A., Via Socrate, 41, 20128 Milano, Italy
Although the introduction of Ada represented a significant step forward for the developers and users of embedded systems, experience in the use of the language has demonstrated that it has several shortcomings, particularly in the realm of distributed systems. Some of the difficulties with Ada in this respect are caused by relatively minor semantic details chosen without due regard for the properties of distributed systems, such as the semantics of timed and conditional entry calls, and should be easily rectified. Others, however, are of a much more fundamental nature, and are likely to require more significant modifications to the language to overcome them.
One of the main problems of the existing version of Ada is its execution model, based on the notion of a single main program. This model does not carry over well to distributed environments, and tends to reduce the prospects for supporting dynamic configuration and flexible responses to hardware failure.
The purpose of this paper is to outline the difficulties caused by the current execution model of Ada, and to describe the different solutions devised by the European projects DIADEM and DRAGON. The first of these was a small project partially funded under the Multi-Annual Programme of the Commission of the European Communities, and was completed early in 1987. The second project is partially supported under the Esprit programme of the Commission, and is due for completion in the middle of 1990.
A large proof should be organized as a collection of theories. An LCF theory has a signature: its type constructors, constants, infixes, predicates. It may have parent theories, inheriting all their symbols and axioms. This rich environment may be extended by new axioms. Theorems may be proved and recorded in the theory.
Existing theories may become the parents of a new theory if their signatures are disjoint. Names of types, constants, infixes, and predicates cannot be hidden or renamed to avoid clashes. Each theory has separate name spaces for its axioms and theorems. An axiom is designated by the pair (theory name, axiom name), a theorem by (theory name, theorem name).
Theories do not have a tree structure: sharing always occurs. In Figure 6.1, the theory T has parents T1 and T2. They both have T4 as a parent. Both the theories T3 and T4 have T5 as a parent; both T4 and T5 have PPλ as a parent. Symbols declared in a common ancestor are shared. If the symbol + is declared in T4 then it is visible in both T1 and T2 and does not cause a clash between the two theories. But if it were declared in both T2 and T3 then T1 and T2 would clash during the construction of T.
Every theory is ultimately descended from PPλ. A theory is the child of its parents.
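As an illustrative sketch, a theory might be built interactively through ML commands along the following lines. The command names follow the general style of the LCF theory commands, but the exact names, argument order, and quotation syntax vary between versions of the system, so this should be read as indicative rather than definitive.

    new_theory "T4";                       (* begin a new theory *)
    new_type 0 "nat";                      (* hypothetical: a type constructor of arity 0 *)
    new_constant ("zero", ":nat");         (* declare constants over the new type *)
    new_constant ("plus", ":nat -> nat -> nat");
    new_axiom ("plus_zero", "!n. plus zero n == n");  (* extend with an axiom *)
    close_theory ();

    (* later, the axiom is designated by the pair (theory name, axiom name) *)
    val plus_zero = axiom "T4" "plus_zero";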
When representing a logic on a computer, how should we treat inference rules? A rule, given appropriate premises, delivers a conclusion, so LCF represents an inference rule by a function from theorems to theorems. Theorems of PPλ are represented by the ML data type thm. Axioms and rules of PPλ are predefined identifiers. Each axiom is a theorem; applying rules to axioms computes new theorems.
This chapter lists the axioms, inference rules, and predefined theorems of PPλ. The examples include formal proofs from previous chapters, performed in LCF.
The representation of inference rules
A rule may hold only if its premises have a certain form; otherwise the corresponding ML function fails, raising exception rule of type string × thm list. The exception includes an error message and the offending premises.
In certain inference rules, the premises do not contain enough information to completely determine the conclusion. The corresponding function takes additional arguments, giving parts of the conclusion or even a formula stating the conclusion itself.
To implement quantifier rules, the abstract syntax functions of Chapter 5 perform substitution and enforce the provisos on eigenvariables. If a proviso is violated, the rule fails.
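These points can be made concrete in a toy ML rendering, with formulas and theorems as ordinary datatypes rather than LCF's abstract types; the rule names and the exception below are illustrative only, not LCF's own.

    datatype form = Atom of string
                  | Conj of form * form
                  | Disj of form * form

    (* a theorem pairs its assumptions with its conclusion *)
    datatype thm = Thm of form list * form

    exception Rule of string * thm list

    (* Conjunction elimination applies only when the premise has the right
       form; otherwise it fails with a message and the offending premise. *)
    fun conj_left (Thm (hyps, Conj (a, _))) = Thm (hyps, a)
      | conj_left th = raise Rule ("conj_left: premise is not a conjunction", [th])

    (* Disjunction introduction: the premise |- A does not determine B,
       so the rule takes the missing disjunct as an additional argument. *)
    fun disj_intro1 (Thm (hyps, a)) b = Thm (hyps, Disj (a, b))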
In the sequent Γ ├ A, the assumptions Γ are usually regarded as a set. ML represents a set of PPλ formulae as a list without repetitions. The LCF user need not be concerned with the order of the assumptions in the assumption list.
In conclusion, we see that proving whether particular exponential Diophantine equations have finitely or infinitely many solutions is absolutely intractable (Theorem D). Such questions escape the power of mathematical reasoning. This is a region in which mathematical truth has no discernible structure or pattern and appears to be completely random. These questions are completely beyond the power of human reasoning. Mathematics cannot deal with them.
Nonlinear dynamics [FORD (1983) and JENSEN (1987)] and quantum mechanics have shown that there is randomness in nature. I believe that we have demonstrated in this book that randomness is already present in pure mathematics, in fact, even in rather elementary branches of number theory. This doesn't mean that the universe and mathematics are lawless; it means that sometimes laws of a different kind apply: statistical laws.
More generally, this tends to support what TYMOCZKO (1986) has called a “quasi-empirical” approach to the foundations of mathematics. To quote from CHAITIN (1982b), where I have argued this case at length, “Perhaps number theory should be pursued more openly in the spirit of experimental science!” To prove more, one must sometimes assume more.
I would like to end with a few speculations on the deep problem of the origin of biological complexity: the question of why living organisms are so complicated, and in what sense we can understand them. That is, how do biological “laws” compare with the laws of physics?
We have seen that Ω is about as random, patternless, unpredictable and incomprehensible as possible; the pattern of its bit sequence defies understanding.
Tactics accomplish backwards proof. You begin with a goal — a statement of the desired theorem — and reduce it to simpler subgoals, forming a proof tree. A tactic reduces a goal to a list of subgoals such that if every subgoal holds then the goal holds also. If the list of subgoals is empty then the tactic has proved the goal. The standard tactics allow reasoning about logical connectives, substitution, induction, and rewriting. A tactic is a higher order function. It not only reduces a goal to subgoals, but returns a proof function to justify its action.
It is worthwhile to develop tactics specifically for the theory at hand. Operations called tacticals, which treat tactics as data, construct sophisticated tactics from simple ones. It is seldom necessary to code a tactic in low-level ML: the standard tactics and tacticals form a language of proofs. Constructing a tactic from simpler ones promotes readability and avoids many kinds of error. The tactic steps of a proof can be composed to form a single tactic, giving a surprisingly readable synopsis of the proof. Tacticals are a prime example of higher order functions in LCF.
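The essence of this scheme fits in a few lines of ML. In the sketch below the goal and theorem types are mere placeholders so that the fragment is self-contained; the tactical defined is the familiar THEN, which applies one tactic and then a second tactic to every resulting subgoal.

    (* placeholders standing in for LCF's real goal and thm types *)
    type goal = string
    type thm  = string

    (* a proof function turns theorems for the subgoals into a theorem
       for the original goal *)
    type proof  = thm list -> thm
    type tactic = goal -> goal list * proof

    infix THEN
    fun (tac1 THEN tac2) g =
      let
        val (gs, p) : goal list * proof = tac1 g
        val results : (goal list * proof) list = map tac2 gs
        val counts = map (length o #1) results
        (* split the incoming theorems into chunks, one per subgoal list;
           the subgoal package guarantees the lengths agree *)
        fun chop ([], _) = []
          | chop (n :: ns, thms) =
              List.take (thms, n) :: chop (ns, List.drop (thms, n))
        (* justify by running each sub-proof on its chunk, then the first proof *)
        fun justify thms =
              p (ListPair.map (fn ((_, p2), chunk) => p2 chunk)
                              (results, chop (counts, thms)))
      in
        (List.concat (map #1 results), justify)
      end

In Cambridge LCF this composition is the standard tactical THEN; ORELSE and REPEAT are built in the same style.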
The subgoal package provides commands for backwards proof. It manages the subgoal tree, presenting the user with unsolved goals. When the last subgoal has been solved, it produces the final theorem.
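A session with the subgoal package might look roughly like this. The commands set_goal, expand, and top_thm follow common LCF usage, but the goal and the tactic names here are invented for illustration.

    set_goal ([], "!l. rev (rev l) == l");  (* state the desired theorem *)
    expand induction_tac;                   (* hypothetical: reduce the goal to subgoals *)
    expand (rewrite_tac rev_clauses);       (* hypothetical: solve the remaining subgoals *)
    val rev_rev = top_thm ();               (* retrieve the final theorem *)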
Growing numbers of computer scientists recognise the importance of formal methods of system design [48]. LCF and ML are often mentioned, seldom understood. There is a wide literature, as the bibliography attests. Gordon, Milner, and Wadsworth [41] describe Edinburgh LCF, with a few remarkable examples. The present book aims to make Cambridge LCF accessible to a wide audience.
Serious students of LCF are of several kinds. Some would like an introduction to formal reasoning about computable functions. Others are interested in the principles of the machine implementation. And a few want to perform large proofs in Cambridge LCF, and require a comprehensive description.
Cambridge LCF is not the answer to all problems in formal methods. Like Edinburgh LCF, its concern is denotational semantics, domain theory, and functional programming. Related systems support other formal methods, but none supports Z, VDM, CSP, CCS, or Hoare-style verification. Some ‘formal’ methods lack a precise meaning, making machine implementation impossible. However, the goal of formal methods — introducing rigor into computer system design — can be achieved in LCF.
How to read this book
People find LCF difficult to approach because it requires familiarity with a mass of background material. The book consists of two parts. Part I outlines the mathematical preliminaries: elementary logic and domain theory. Many advocates of formal methods have a lamentably weak grasp of this fundamental material. Part II describes Cambridge LCF in enough detail to serve as a reference manual, though operating instructions should be supplied with the software.
More than half a century has passed since the famous papers GÖDEL (1931) and TURING (1937) that shed so much light on the foundations of mathematics, and that simultaneously promulgated mathematical formalisms for specifying algorithms, in one case via primitive recursive function definitions, and in the other case via Turing machines. The development of computer hardware and software technology during this period has been phenomenal, and as a result we now know much better how to do the high-level functional programming of Gödel, and how to do the low-level machine language programming found in Turing's paper. And we can actually run our programs on machines and debug them, which Gödel and Turing could not do.
I believe that the best way to actually program a universal Turing machine is John McCarthy's universal function EVAL. In 1960 McCarthy proposed LISP as a new mathematical foundation for the theory of computation [McCARTHY (1960)]. But by a quirk of fate LISP has largely been ignored by theoreticians and has instead become the standard programming language for work on artificial intelligence. I believe that pure LISP is in precisely the same role in computational mathematics that set theory is in theoretical mathematics, in that it provides a beautifully elegant and extremely powerful formalism which enables concepts such as numbers and functions to be defined from a handful of more primitive notions.
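To give the flavour of this, the sketch below is a toy McCarthy-style evaluator written in ML for concreteness. It is a deliberately minimal illustration, not pure LISP itself, handling only the classic primitives (quote, atom, eq, car, cdr, cons, cond, and lambda application).

    (* S-expressions: symbols and lists; the empty list doubles as "false",
       anything else as "true" *)
    datatype sexp = Sym of string | Lst of sexp list

    val T   = Sym "t"
    val NIL = Lst []

    (* the environment is an association list from names to values *)
    fun lookup x ((y, v) :: rest) = if x = y then v else lookup x rest
      | lookup x [] = raise Fail ("unbound symbol: " ^ x)

    fun bind (Sym x :: fs, a :: args) = (x, a) :: bind (fs, args)
      | bind ([], []) = []
      | bind _ = raise Fail "bad formal parameter list"

    fun eval (Sym x) env = lookup x env
      | eval (Lst [Sym "quote", e]) _ = e
      | eval (Lst [Sym "atom", e]) env =
          (case eval e env of Lst (_ :: _) => NIL | _ => T)
      | eval (Lst [Sym "eq", a, b]) env =
          if eval a env = eval b env then T else NIL
      | eval (Lst [Sym "car", e]) env =
          (case eval e env of Lst (h :: _) => h | _ => raise Fail "car")
      | eval (Lst [Sym "cdr", e]) env =
          (case eval e env of Lst (_ :: t) => Lst t | _ => raise Fail "cdr")
      | eval (Lst [Sym "cons", a, b]) env =
          (case eval b env of Lst t => Lst (eval a env :: t)
                            | _ => raise Fail "cons")
      | eval (Lst (Sym "cond" :: clauses)) env = evcond clauses env
      | eval (e as Lst (Sym "lambda" :: _)) _ = e   (* lambdas are values *)
      | eval (Lst (f :: args)) env =
          apply (eval f env) (map (fn a => eval a env) args) env
      | eval e _ = e                                (* NIL evaluates to itself *)

    and evcond (Lst [test, e] :: rest) env =
          if eval test env = NIL then evcond rest env else eval e env
      | evcond _ _ = raise Fail "cond: no clause applies"

    and apply (Lst [Sym "lambda", Lst formals, body]) args env =
          eval body (bind (formals, args) @ env)
      | apply _ _ _ = raise Fail "not a function"

    (* ((lambda (x) (cons x (quote (b)))) (quote a))  evaluates to (a b) *)
    val demo = eval (Lst [Lst [Sym "lambda", Lst [Sym "x"],
                               Lst [Sym "cons", Sym "x",
                                    Lst [Sym "quote", Lst [Sym "b"]]]],
                          Lst [Sym "quote", Sym "a"]]) []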
Simultaneously there have been profound theoretical advances. Gödel and Turing's fundamental undecidable proposition, the question of whether an algorithm ever halts, is equivalent to the question of whether it ever produces any output.
Cambridge LCF is an interactive theorem prover for reasoning about computable functions. The terms of its logic, PPλ, constitute a tiny functional programming language. Cambridge LCF can be used for experimenting with first order proof; studying abstract constructions in domain theory; comparing the denotational semantics of programming languages; and verifying functional programs. It can reason about both strict and lazy evaluation.
There are many theorem provers in the LCF family. Each is descended from Edinburgh LCF and retains its fundamental ideas:
The user interacts with the prover through a programmable meta language, ML.
Logical formulae, theorems, rules, and proof strategies are ML data.
The prover guarantees soundness: it checks each inference and records each assumption.
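The soundness guarantee rests on ML's type discipline: thm is an abstract type whose only constructors are the inference rules, so every theorem in existence was produced by legal inferences. The following miniature, with an invented formula type rather than LCF's PPλ, sketches the idea.

    datatype form = Atom of string | Imp of form * form

    (* Only the functions declared inside the abstype can build a thm,
       so possession of a thm is itself a certificate of proof. *)
    abstype thm = Thm of form list * form
    with
      (* assume: A |- A *)
      fun assume a = Thm ([a], a)

      (* discharge: from Gamma |- B obtain Gamma - {A} |- A ==> B *)
      fun disch a (Thm (hyps, b)) =
            Thm (List.filter (fn h => h <> a) hyps, Imp (a, b))

      (* modus ponens: from Gamma1 |- A ==> B and Gamma2 |- A obtain
         the union of the assumptions |- B *)
      fun mp (Thm (hs1, Imp (a, b))) (Thm (hs2, a')) =
            if a = a'
            then Thm (hs1 @ List.filter
                             (fn h => not (List.exists (fn k => k = h) hs1))
                             hs2,
                      b)
            else raise Fail "mp: antecedent does not match"
        | mp _ _ = raise Fail "mp: first premise is not an implication"

      (* theorems can be inspected but never forged *)
      fun dest_thm (Thm (hs, c)) = (hs, c)
    end

For example, disch a (assume a) computes the theorem |- A ==> A; no value of type thm can be obtained except through the rules.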
Edinburgh LCF was developed in order to experiment with Scott's Logic of Computable Functions [41]. It performed many proofs involving denotational semantics and functional programming.
Cambridge LCF extended the logic of Edinburgh LCF with ∨, ∃, ⇔, and predicates, improved the efficiency, and added several inference mechanisms. It has been used in proofs about functional programming, and several other theorem provers have been built from it.
LCF_LSM was developed for reasoning about digital circuits in a formalism related to CCS [32]. It verified some realistic devices [39], but is now obsolete.
HOL supports Church's Higher Order Logic [37], a general mathematical formalism. It is mainly used to prove digital circuits correct.
Another system supports a higher order Calculus of Constructions that can represent many other calculi.
Reasoning about functional programs requires the ability to define interesting data structures in PPλ. In Scott's original calculus the only type constructor is function space: if σ and τ are types then so is the function type σ → τ. A fundamental data structure is the Cartesian product, the type σ × τ. Other data type constructors are the strict product, various sums, and lifting. Starting from atomic types, many data structures can be expressed. Each type constructor has constructor functions for creating elements of the type. For taking elements apart it may have destructor functions or else an eliminator functional.
Furthermore recursive type constructors are definable. Domain theory allows the solution of recursive domain equations involving sum, product, function space, and several other type constructors. Cambridge LCF provides a mechanism for defining PPλ types in a style similar to Standard ML datatype definitions: LCF can generate axioms to define a PPλ type recursively as the sum of products of types, with one constructor function for each summand of the type.
The constructor functions determine whether a type is strict or lazy. If a constructor function is not strict in all arguments then it can construct partial objects: objects that have undefined parts. The resulting type, if recursive, also contains infinite objects. An example is the type of lazy lists, which contains partially defined and infinite lists. If every constructor is strict in every argument, then the resulting strict type contains only finite objects.
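The contrast can be imitated in Standard ML, whose constructors always evaluate their arguments: suspending the tail inside a function turns a strict list type into a lazy one that admits infinite objects. (ML values have no undefined parts, so this sketch mirrors only the finite-versus-infinite half of the PPλ picture.)

    (* strict: both arguments of Cons are evaluated, so only finite lists exist *)
    datatype 'a strictlist = Nil | Cons of 'a * 'a strictlist

    (* lazy: the tail is a suspension, so infinite lists become expressible *)
    datatype 'a lazylist = LNil | LCons of 'a * (unit -> 'a lazylist)

    (* the infinite list n, n+1, n+2, ... *)
    fun from n = LCons (n, fn () => from (n + 1))

    (* force the first k elements: take (3, from 0) = [0, 1, 2] *)
    fun take (0, _) = []
      | take (_, LNil) = []
      | take (k, LCons (x, tl)) = x :: take (k - 1, tl ())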