This chapter introduces a number of language features that are new in Ada 2005. The main focus is on execution time and how tasks can monitor and control the amount of processor time they are using. For real-time systems this is of crucial importance, as the processor is usually the resource in shortest supply. It needs to be used in a manner that is sympathetic to the scheduling policy of the system, must not be over-used by failing components (tasks), and should be reallocated fairly and dynamically if spare capacity becomes available.
However, before considering these topics, a more general view of Ada's model of event handling is warranted. The facilities for execution-time control all make use of events and event handling, and hence this chapter starts by examining events and a particular kind of event termed a timing event.
Events and event handling
It is useful in concurrent systems to distinguish between two forms of computation that occur at run-time: tasks and events. A task (or process or thread) is a long-lived entity with state and periods of activity and inactivity. While active, it competes with other tasks for the available resources – the rules of this competition are captured in a scheduling or dispatching policy (for example, fixed priority or EDF). By comparison, an event is a short-lived, stateless, one-shot computation. Its handler's execution is, at least conceptually, immediate; once it has completed, it has no lasting direct effect other than the changes it has made to the permanent state of the system. When an event occurs we say it is triggered; other terms used are fired, invoked and delivered.
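A concrete sketch of this event model is provided by Ada 2005's timing events, where a handler (a protected procedure) is executed, conceptually immediately, when a chosen time is reached. The package and handler names below are illustrative:

```ada
with Ada.Real_Time;               use Ada.Real_Time;
with Ada.Real_Time.Timing_Events; use Ada.Real_Time.Timing_Events;

package Watchdog is
   Timeout : Timing_Event;
   protected Responder is
      procedure Fire (Event : in out Timing_Event);
   end Responder;
end Watchdog;

package body Watchdog is
   protected body Responder is
      --  The handler is short-lived in the sense of the text: it runs
      --  once, conceptually immediately, and leaves behind only the
      --  changes it makes to the program's state.
      procedure Fire (Event : in out Timing_Event) is
      begin
         null;  --  e.g. flag a missed deadline
      end Fire;
   end Responder;
begin
   --  Trigger the event 500 ms from now.
   Set_Handler (Timeout, Clock + Milliseconds (500), Responder.Fire'Access);
end Watchdog;
```

Note that the handler must be a protected procedure matching the Timing_Event_Handler profile; it is executed by the run-time, not by any application task.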
Computer languages, like natural languages, evolve. For regulated languages, changes are consolidated into distinct versions at well-defined points in time. For Ada, the main versions have been Ada 83, Ada 95 and now Ada 2005. Whereas Ada 95 was a significantly different language from its ancestor, Ada 2005 is more an upgrade. It brings Ada up to date with respect to current practice in other languages, operating systems and theory – especially in the real-time domain.
Although Ada is a general-purpose programming language, much of its development has been driven by the requirements of particular application areas: specifically, the needs of high-integrity and safety-critical systems, real-time systems, embedded systems, and large, complex, long-lived systems. To support this wide range of applications, Ada has a large number of language features and primitives, which can be grouped as follows:
strong typing with safe pointer operations,
object-oriented programming support via tagged types and interfaces,
hierarchical libraries and separate compilation,
exception handling,
annexes to give support to particular application domains,
low-level programming features that enable device drivers and interrupt handlers to be written,
an expressive concurrency model and
an extensive collection of entities that support real-time systems programming.
This book has concentrated on the last three items in this list to provide a comprehensive description of real-time and concurrent programming. These are two of the unquestionable strengths of the Ada language.
The models of synchronisation discussed in the previous four chapters share the common feature that they are based on avoidance synchronisation. Guards or barriers are used to prevent rendezvous and task–protected-object interactions when the conditions are not appropriate for the communication to start. Indeed, one of the key features of the tasking model is the consistent use of avoidance to control synchronisation. Guards and barriers represent a high-level, abstract means of expressing and enforcing the necessary synchronisations; as such, they compare favourably with low-level primitives such as semaphores or monitor signals (see Chapter 3). This chapter starts by giving a more systematic assessment of avoidance synchronisation in order to motivate the requirement for ‘requeue’. It then describes the syntax and semantics of the requeue statement and gives examples of its use.
The need for requeue
Different language features are often compared in terms of their expressive power and ease of use (usability). Expressive power is the more objective criterion, and is concerned with the ability of language features to allow application requirements to be programmed directly. Ease of use is more subjective, and includes the ease with which the features under investigation interact with each other and with other language primitives.
In her evaluation of synchronisation primitives, Bloom (1979) used the following criteria to compare the expressive power and usability of different language models.
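To illustrate the kind of requirement that a single guarded entry cannot express, and that motivates requeue, consider a sketch of a ‘broadcast’ event: Signal must release every task currently queued on Wait and then reset the flag, which requires a second, private entry reached by requeue (the names used are illustrative):

```ada
protected Event is
   entry Wait;        --  block until the event is signalled
   entry Signal;      --  release all currently waiting tasks
private
   entry Reset;       --  private entry, reachable only via requeue
   Occurred : Boolean := False;
end Event;

protected body Event is
   entry Wait when Occurred is
   begin
      null;                      --  barrier open: fall straight through
   end Wait;

   entry Signal when True is     --  barrier always open
   begin
      if Wait'Count > 0 then
         Occurred := True;       --  open Wait's barrier ...
         requeue Reset;          --  ... then wait until all have left
      end if;
   end Signal;

   entry Reset when Wait'Count = 0 is
   begin
      Occurred := False;         --  last waiter has gone: close the event
   end Reset;
end Event;
```

Without requeue, the signalling task would have to return from Signal before the waiters had been released, and could not atomically reset the flag afterwards.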
The last few chapters have demonstrated the extensive set of facilities that Ada 2005 provides for the support of real-time programming. The expressive power of the full language is clearly more comprehensive than that of any other mainstream engineering language. There are, however, situations in which a restricted set of features is desirable. This chapter looks at ways in which certain restrictions can be identified in an Ada program. It then describes in detail the Ravenscar profile, a collection of restrictions aimed at applications that require very efficient implementations, have high-integrity requirements, or both. The chapter also includes a discussion of the metrics identified in the Real-Time Systems Annex.
Restricted tasking and other language features
Where it is necessary to produce very efficient programs, it is useful to have run-time systems (kernels) that are tailored to the particular needs of the program actually executing. As this is impossible to do in general, the Ada language defines a set of restrictions that a run-time system should recognise and ‘reward’ with more effective support. The following restrictions are identified by the pragma Restrictions, and are checked and enforced before run time. Note, however, that there is no requirement on the run-time system to tailor itself to the restrictions specified.
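As a configuration sketch, restrictions are stated as configuration pragmas at the head of a compilation; the particular identifiers chosen below are illustrative, not a complete profile:

```ada
--  Illustrative restriction identifiers from the Real-Time Systems Annex.
pragma Restrictions (No_Task_Hierarchy,          --  all tasks at library level
                     No_Abort_Statements,        --  no asynchronous abort
                     No_Relative_Delay,          --  only "delay until"
                     Max_Entry_Queue_Length => 1);

--  Alternatively, the Ravenscar profile bundles a fixed set of such
--  restrictions together with the required scheduling policies:
pragma Profile (Ravenscar);
```

In practice a program would use either an explicit set of restrictions or a named profile such as Ravenscar, not both.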
The major difficulties associated with concurrent programming arise from process interaction. Rarely are processes as independent of one another as they were in the simple example of the previous chapter. One of the main objectives of embedded systems design is to specify those activities that should be represented as processes (that is, active entities and servers), and those that are more accurately represented as protected entities (that is, resources). It is also critically important to indicate the nature of the interfaces between these concurrent objects. This chapter reviews several historically significant inter-process communication primitives: shared variables, semaphores, monitors and message passing. Before considering language primitives, however, it is necessary to discuss the inherent properties of inter-process communication. This discussion will be structured using the following headings:
Data communication;
Synchronisation;
Deadlocks and indefinite postponements;
System performance, correctness and reliability.
These are the themes that have influenced the design of the Ada tasking model.
As this model directly represents active and protected entities, there are two main forms of communication between active tasks:
direct – task-to-task communication;
indirect – communication via a protected resource.
Both these models are appropriate in Ada programs. In the following sections, however, we start by considering the problems of communicating indirectly by the use of only passive entities.
Data communication
The partitioning of a system into tasks invariably leads to the requirement that these tasks exchange data in order for the system to function correctly. For example, a device driver (a process with sole control over an external device) needs to receive requests from other processes and return data if appropriate.
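As a sketch of indirect data communication through a passive entity, the following protected object implements a bounded buffer: barriers block producers when the buffer is full and consumers when it is empty. The type and component names are illustrative:

```ada
type Item_Array is array (Positive range <>) of Integer;

protected type Bounded_Buffer (Size : Positive) is
   entry Put (Item : in  Integer);
   entry Get (Item : out Integer);
private
   Buf             : Item_Array (1 .. Size);
   Count           : Natural  := 0;
   In_Ptr, Out_Ptr : Positive := 1;
end Bounded_Buffer;

protected body Bounded_Buffer is
   entry Put (Item : in Integer) when Count < Size is
   begin
      Buf (In_Ptr) := Item;
      In_Ptr := In_Ptr mod Size + 1;   --  circular increment
      Count  := Count + 1;
   end Put;

   entry Get (Item : out Integer) when Count > 0 is
   begin
      Item := Buf (Out_Ptr);
      Out_Ptr := Out_Ptr mod Size + 1;
      Count   := Count - 1;
   end Get;
end Bounded_Buffer;
```

Callers simply invoke Put and Get; mutual exclusion and condition synchronisation are enforced entirely by the protected object, not by the communicating tasks.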
It has been mentioned several times already in this book that real-time programming represents a major application area for Ada, and particularly for Ada tasking. The Real-Time Systems Annex specifies additional characteristics of the language that facilitate the programming of embedded and real-time systems. If an implementation supports the Real-Time Systems Annex then it must also support the Systems Programming Annex (see previous chapter). All issues discussed in the Real-Time Systems Annex affect the tasking facilities of the language. They can be grouped together into the following topics.
Time and clocks – introduced in Chapter 1.
Scheduling – how to allocate system resources, in particular the processor.
Resource control – how to monitor and manage the use of the processor by individual tasks or groups of tasks.
Optimisations and restrictions – specifically the Ravenscar profile.
All of these topics are discussed in this and the next two chapters; starting with the important issue of scheduling.
Scheduling
The functional correctness of a concurrent program should not depend on the exact order in which its tasks are executed. It may be necessary to prove that the non-determinism of such programs cannot lead to deadlock or livelock (that is, that progress is always being made), but it should not be necessary to program explicitly the order in which all actions must occur.
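Instead, the order in which ready tasks gain the processor is delegated to a dispatching policy. A minimal sketch, with invented task names and priority values:

```ada
pragma Task_Dispatching_Policy (FIFO_Within_Priorities);

with System;

package Sensors is
   task Sampler is
      pragma Priority (System.Default_Priority + 2);  --  urgent, periodic
   end Sampler;

   task Logger is
      pragma Priority (System.Default_Priority - 2);  --  background work
   end Logger;
end Sensors;

package body Sensors is
   task body Sampler is
   begin
      loop
         --  read the sensor at a fixed rate
         delay 0.01;
      end loop;
   end Sampler;

   task body Logger is
   begin
      loop
         --  flush buffered readings
         delay 1.0;
      end loop;
   end Logger;
end Sensors;
```

Whichever priorities are chosen, the program's functional behaviour should remain the same; only its timing behaviour changes.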
Designing, implementing and maintaining software for large systems is a non-trivial exercise and one which is fraught with difficulties. These difficulties relate to the management of the software production process itself, as well as to the size and complexity of the software components. Ada is a mature general-purpose programming language that has been designed to address the needs of large-scale system development, especially in the embedded systems domain. A major aspect of the language, and the one that is described comprehensively in this book, is its support for concurrent and real-time programming.
Ada has evolved over the last thirty years from an object-based concurrent programming language into a flexible concurrent and distributed object-oriented language that is well suited to high-reliability, long-lived applications. It has been particularly successful in high-integrity areas such as air traffic control, space systems, railway signalling, and both civil and military avionics. Ada's success is due to a number of factors, including the following.
Hierarchical libraries and other facilities that support large-scale software development.
Strong compile-time type checking.
Safe object-oriented programming facilities.
Language-level support for concurrent programming.
A coherent approach to real-time systems development.
High-performance implementations.
Well-defined subsetting mechanisms, and in particular the SPARK subset for formal verification.
The development and standardisation of Ada have progressed through a number of definitions, the main ones being Ada 83 and Ada 95. Ada 2005 now builds on this success and introduces a relatively small number of language changes to provide:
Better support for multiple inheritance through the addition of Java-like interfaces.
Better support for an OO style of programming through the Object.Operation (prefixed) notation.
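As a brief illustration of the prefixed notation (the type and operation names here are invented for the example):

```ada
package Shapes is
   type Circle is tagged record
      Radius : Float;
   end record;
   function Area (C : Circle) return Float;
end Shapes;

package body Shapes is
   function Area (C : Circle) return Float is
   begin
      return 3.14159 * C.Radius ** 2;
   end Area;
end Shapes;

with Shapes;
procedure Demo is
   C : Shapes.Circle := (Radius => 1.0);
   A : Float;
begin
   A := Shapes.Area (C);  --  traditional Ada 95 positional call
   A := C.Area;           --  Ada 2005 prefixed Object.Operation form
end Demo;
```

The two calls are equivalent; the prefixed form applies to primitive operations of tagged types and removes the need to name the defining package at each call site.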
In this chapter we look at type systems for the language aPi. The traditional view of types is that they provide an aid to programmers to avoid runtime errors during the execution of programs. We start with this view, explaining what runtime errors might occur during the execution of aPi processes, and design a type system that eliminates the possibility of such errors.
We then modify the set of types so that they implement a more subtle notion of resource access control. Here the resources are simply channels, and the more refined type system enables access to these resources to be managed; that is, read and write accesses can be controlled. The view taken is that one no longer programs directly with resource names. Instead, programming is in terms of capabilities on these names: the ability to write to or read from a particular resource. These capabilities are managed by the owners of resources and may be selectively distributed to other processes.
Runtime errors
The traditional use of types and type checking is to eliminate runtime errors in high-level programs. Specifically, types are annotations, inserted into the program text by the program designer or inferred automatically by a type inference system, which indicate the intended use of various resources. Then, prior to execution, the annotated program is type-checked, that is, syntactically analysed, to ensure that the behaviour of the program will indeed respect the intended use of these resources.
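As a sketch of the kind of runtime error at issue, written in a standard pi-calculus style (the channel names are invented for the example), consider

$$
c!\langle 5 \rangle \;\mid\; c?(x)\; x!\langle\rangle \;\longrightarrow\; 5!\langle\rangle
$$

After the communication on $c$, the residual process attempts to use the integer $5$ as an output channel. A type system that records $c$ as a channel carrying channels, rather than integers, would reject the left-hand process and so rule out this error before execution.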
From ATMs dispensing cash from our bank accounts to online shopping websites, interactive systems permeate our everyday lives. The underlying technology that supports these systems, both hardware and software, is well advanced. However, design principles and techniques for assuring their correct behaviour are at a much more primitive stage.
The provision of solid foundations for such activities – mathematical models of system behaviour and associated reasoning tools – has been a central theme of theoretical computer science over the last two decades. One approach has been the design of formal calculi in which the fundamental concepts underlying interactive systems can be described and studied. The most obvious analogy is the use of the λ-calculus as a simple model for the study of sequential computation, or indeed of sequential programming languages. CCS (the Calculus of Communicating Systems) [28] was perhaps the first calculus proposed for the study of interactive systems, and it was followed by numerous variations. The calculus consists of:
A simple formal language for describing systems in terms of their structure; how they are constructed from individual, but interconnected, components.
A semantic theory that seeks to understand the behaviour of systems described in the language, in terms of their ability to interact with users.
Here a system consists of a finite number of independent processes that intercommunicate using a fixed set of named communication channels. This set of channels constitutes a connection topology through which all communication takes place; it includes both communication between system components, and between the system and its users.