Building High Integrity Applications with SPARK
- John W. McCormick, Peter C. Chapin
-
- Published online:
- 05 October 2015
- Print publication:
- 31 August 2015
-
Software is pervasive in our lives. We are accustomed to dealing with the failures of much of that software - restarting an application is a very familiar solution. Such solutions are unacceptable when the software controls our cars, airplanes and medical devices or manages our private information. These applications must run without error. SPARK provides a means, based on mathematical proof, to guarantee that a program has no errors. SPARK is a formally defined programming language and a set of verification tools specifically designed to support the development of software used in high integrity applications. Using SPARK, developers can formally verify properties of their code such as information flow, freedom from runtime errors, functional correctness, security properties and safety properties. Written by two SPARK experts, this is the first introduction to the just-released 2014 version. It will help students and developers alike master the basic concepts for building systems with SPARK.
Index
- pp 363-367
9 - Advanced Techniques
- pp 326-354
-
Summary
In this chapter we examine some advanced techniques for proving properties of Spark programs. Although the approaches we describe here will not be needed for the development of many programs, you may find them useful or even necessary for handling larger, realistic applications.
Ghost Entities
Ghost entities make it easier to express assertions about a program. The essential property of ghost entities is that they have no effect on the execution behavior of a valid program. Thus, a valid program that includes ghost entities will execute the same with or without them.
9.1.1 Ghost Functions
In applications where you are trying to prove strong statements about the correctness of your programs, the expressions you need to write in assertions can become very complex. To help manage that complexity, it is desirable to factor certain subexpressions into separate functions, both to document them and to facilitate their reuse in multiple assertions.
Functions that you create for verification purposes only are called ghost functions. The essential property of ghost functions is that they do not normally play any role in the execution of your program. Ghost functions may only be called from assertions such as preconditions, postconditions, and loop invariants. They may not be called from the ordinary, non-assertive portions of your program.
As an example consider the specification of a package Sorted_Arrays that contains subprograms for creating and processing sorted arrays of integers:
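The listing itself is not reproduced in this excerpt. A sketch consistent with the description might look like the following, where the array bounds, names, and the return type of Binary_Search are illustrative rather than the book's exact code:

```ada
package Sorted_Arrays is
   type Index_Type is range 1 .. 100;
   type Array_Type is array (Index_Type) of Integer;

   procedure Sort (Data : in out Array_Type)
     with Post => (for all J in Data'First .. Data'Last - 1 =>
                     Data (J) <= Data (J + 1));

   function Binary_Search (Data : in Array_Type;
                           Item : in Integer) return Index_Type
     with Pre => (for all J in Data'First .. Data'Last - 1 =>
                    Data (J) <= Data (J + 1));
end Sorted_Arrays;
```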
Notice that the postcondition of Sort and the precondition of Binary_Search both use the same quantified expression to assert that the array being processed is sorted. Although the expression is not exceptionally unwieldy in this case, it is still somewhat obscure and hard to read. Having it duplicated on two subprograms also hurts the package's maintainability.
Our second version of this specification introduces a ghost function to abstract and simplify the pre- and postconditions on the other subprograms. We use the Boolean aspect Ghost to indicate that the function Is_Sorted is included only for verification purposes.
The postcondition on Sort and the precondition on Binary_Search use the Is_Sorted function rather than the lengthier quantified predicates. They are now clearer and easier to maintain.
The function Is_Sorted is decorated with a postcondition that explains its effect.
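The second version of the specification, again sketched with illustrative names and bounds rather than the book's exact listing, might look like this:

```ada
package Sorted_Arrays is
   type Index_Type is range 1 .. 100;
   type Array_Type is array (Index_Type) of Integer;

   --  Verification-only function: the Ghost aspect makes it illegal
   --  to call Is_Sorted from ordinary, non-assertive code.
   function Is_Sorted (Data : in Array_Type) return Boolean
     with Ghost,
          Post => Is_Sorted'Result =
                    (for all J in Data'First .. Data'Last - 1 =>
                       Data (J) <= Data (J + 1));

   procedure Sort (Data : in out Array_Type)
     with Post => Is_Sorted (Data);

   function Binary_Search (Data : in Array_Type;
                           Item : in Integer) return Index_Type
     with Pre => Is_Sorted (Data);
end Sorted_Arrays;
```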
8 - Software Engineering with Spark
- pp 286-325
-
Summary
In the preceding chapters we have concentrated on the details of the Spark language. In this chapter, we look at a broader picture of how Spark might be used in the context of a software engineering process. The Spark 2014 Toolset User's Guide (Spark Team, 2014b) lists three common usage scenarios:
Conversion of existing software developed in Spark 2005 to Spark 2014
Analysis and/or conversion of legacy Ada software
Development of new Spark 2014 code from scratch
We start by examining each of these scenarios in more detail, discussing the interplay between proof and testing, and then presenting a case study to illustrate some issues arising when developing new Spark 2014 code from scratch.
Conversion of Spark 2005
Converting a working Spark 2005 program to Spark 2014 makes sense when that program is still undergoing active maintenance for enhanced functionality. The larger language and the enhanced set of analysis tools provided by Spark 2014 offer a potential savings in development time when adding functionality to an existing Spark 2005 program.
As Spark 2014 is a superset of Spark 2005, the conversion is straightforward. Section 7.2 of the Spark 2014 Toolset User's Guide (Spark Team, 2014b) provides a short introduction to this conversion. Appendix A of the Spark 2014 Reference Manual (Spark Team, 2014a) has information and a wealth of examples for converting Spark 2005 constructs to Spark 2014. Explanations and examples are provided for converting subprograms, type (ADT) packages, variable (ASM) packages, external subsystems, proofs, and more. Should you need to constrain your code to the Spark 2005 constructs but wish to use the cleaner syntax of Spark 2014, you may use pragma Restrictions (Spark 05) to have the analysis tools flag Spark 2014 constructs that are not available in Spark 2005.
Dross et al. (2014) discuss their experiences with converting Spark 2005 to Spark 2014 in three different domains. AdaCore has a Spark 2005 to Spark 2014 translator to assist with the translation process. At the time of this writing, this tool is available only to those using the pro versions of their GNAT and Spark products.
We illustrate a simple example of converting a Spark 2005 package to Spark 2014. The package encapsulates a circular buffer holding temperature data, for example, from an analog to digital converter.
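The full listing is not reproduced in this excerpt, but the flavor of the conversion can be sketched as follows (package and state names are illustrative). Spark 2005 contracts are written as special annotation comments; in Spark 2014 the same contracts become Ada 2012 aspects:

```ada
--  Spark 2005: contracts appear as --# annotation comments
package Buffer
--# own State;                        -- abstract state of the buffer
is
   procedure Put (Temperature : in Integer);
   --# global in out State;
   --# derives State from State, Temperature;
end Buffer;

--  Spark 2014: the same contracts written as aspects
package Buffer
  with Abstract_State => State
is
   procedure Put (Temperature : in Integer)
     with Global  => (In_Out => State),
          Depends => (State  => (State, Temperature));
end Buffer;
```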
4 - Dependency Contracts
- pp 99-134
-
Summary
In this chapter we describe Spark's features for describing data dependencies and information flow dependencies in our programs. This analysis offers two major services. First, it verifies that no uninitialized data is ever used. Second, it verifies that all results computed by the program participate in some way in the program's eventual output – that is, all computations are effective.
The value of the first service is fairly obvious. Uninitialized data has an indeterminate value. If it is used, the effect will likely be a runtime exception or, worse, the program may simply compute the wrong output. The value of the second service is less clear. A program that produces results that are not used is at best needlessly inefficient. However, ineffective computations may also be a symptom of a larger problem. Perhaps the programmer forgot to implement or incompletely implemented some necessary logic. The flow analysis done by the Spark tools helps prevent the programmer from shipping a program that is in reality only partially complete.
It is important to realize, however, that flow analysis by itself will not show your programs to be free from the possibility of runtime errors. Flow analysis is only the first step toward building robust software. It can reveal a significant number of faults, but to create highly robust systems, it is necessary to use proof techniques as described in Chapter 6.
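As a small sketch (with hypothetical names) of the kind of contract flow analysis checks, the Depends aspect states which outputs are derived from which inputs, and the tools verify the body against it:

```ada
procedure Update_Average
  (Reading : in     Integer;
   Count   : in out Natural;
   Average : in out Integer)
  with Global  => null,                 -- no global data is touched
       Depends => (Count   => Count,    -- new Count comes only from old Count
                   Average => (Average, Count, Reading));
```

A body that read an uninitialized variable, or that computed a value never reflected in any output listed in Depends, would be flagged by the flow analysis.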
As described in Chapter 1, there are three layers of analysis to consider in increasing order of rigor:
Show that the program is legal Ada that abides by the restrictions of Spark where appropriate. The most straightforward way to verify this is by compiling the code with a Spark-enabled compiler such as GNAT.
Show that the program has no data dependency or flow dependency errors. Verify this by running the Spark tools to “examine” each source file.
Show that the program is free from runtime errors and that it honors all its contracts, invariants, and other assertions. Verify this by running the Spark tools to “prove” each source file.
We recommend making these three steps explicit in your work. Move on to the next step only when all errors from the previous step have been remedied. This chapter discusses the second step.
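With the GNAT-based toolset, the three steps correspond roughly to the following commands; the project file name is hypothetical, and the exact options may vary with the tool version:

```shell
# Step 1: show the program is legal Ada/Spark by compiling it
gprbuild -P sensors.gpr

# Step 2: "examine" - flow analysis of data and information flow dependencies
gnatprove -P sensors.gpr --mode=flow

# Step 3: "prove" - absence of runtime errors plus all contracts and assertions
gnatprove -P sensors.gpr --mode=all
```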
3 - Programming in the Large
- pp 68-98
-
Summary
DeRemer and Kron (1975) distinguished the activities of writing large programs from that of writing small programs. They considered large programs to be systems built from many small programs (modules), usually written by different people. It is common today to separate the features of a programming language along the same lines. In Chapter 2, we presented the aspects of Ada required to write the most basic programs. In this chapter, we discuss some of Ada's features that support the development of large programs.
To facilitate the construction of large programs, Ada makes use of programming units. An Ada program consists of a main subprogram that uses services provided by library units. A library unit is a unit of Ada code that we may compile separately. Library units are often called compilation units. We have already made use of many predefined library units in our examples. The with clause provides access to a library unit. The use clause provides direct visibility to the public declarations within a library unit so we do not have to prefix them with the name of the library unit.
A library unit is a subprogram (a procedure or function), package, or generic unit. The main subprogram is itself a library unit. Subprograms, packages, and generic units that are nested within another programming unit are not library units; they must be compiled with the programming unit in which they are nested. Generally, we use a compiler and linker to create an executable from a collection of library units. Library units also play a role in mixing Spark and non-Spark code in a single program – a topic we discuss in Chapter 7. In the following sections, we will introduce you to the package and to generic units.
Encapsulation and information hiding are the cornerstones of programming in the large. Both concepts deal with handling complexity. There are two aspects of encapsulation: the combining of related resources and the separation of specification from implementation. In object-oriented design and programming, we use encapsulation to combine data and methods into a single entity called a class. Encapsulation also allows us to specify what methods a class supplies for manipulating its data without revealing how those methods are implemented.
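A small package sketch (hypothetical names) shows both aspects of encapsulation: related resources combined in one unit, with the specification visible to clients and the representation hidden:

```ada
package Counters is
   type Counter is private;             -- clients see what, not how

   procedure Increment (C : in out Counter);
   function  Value     (C : in Counter) return Natural;
private
   type Counter is record
      Count : Natural := 0;             -- hidden representation
   end record;
end Counters;
```

The package body, compiled separately, supplies the implementations of Increment and Value; clients depend only on the specification above.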
1 - Introduction and Overview
- pp 1-17
-
Summary
Software is critical to many aspects of our lives. It comes in many forms. The applications we install and run on our computers and smart phones are easily recognized as software. Other software, such as that controlling the amount of fuel injected into a car's engine, is not so obvious to its users. Much of the software we use lacks adequate quality. A report by the National Institute of Standards and Technology (NIST, 2002) indicated that poor quality software costs the United States economy more than $60 billion per year. There is no evidence to support any improvement in software quality in the decade since that report was written.
Most of us expect our software to fail. We are never surprised and rarely complain when our e-mail program locks up or the font changes we made to our word processing document are lost. The typical “solution” to a software problem of turning the device off and then on again is so encultured that it is often applied to problems outside of the realm of computers and software. Even our humor reflects this view of quality. A classic joke is the software executive's statement to the auto industry, “If GM had kept up with the computing industry we would all be driving $25 cars that got 1,000 miles per gallon,” followed by the car maker's list of additional features that would come with such a vehicle:
For no apparent reason, your car would crash twice a day.
Occasionally, your engine would quit on the highway. You would have to coast over to the side of the road, close all of the windows, turn off the ignition, restart the car, and then reopen the windows before you could continue.
Occasionally, executing a maneuver, such as slowing down after completion of a right turn of exactly 97 degrees, would cause your engine to shut down and refuse to restart, in which case you would have to reinstall the engine.
Occasionally, your car would lock you out and refuse to let you in until you simultaneously lift the door handle, turn the key, and kick the door (an operation requiring the use of three of your four limbs).
Why do we not care about quality? The simple answer is that defective software works “well enough.”
Contents
- pp v-viii
7 - Interfacing with Spark
- pp 247-285
-
Summary
It is often infeasible or even undesirable to write an entire program in Spark. Some portions of the program may need to be in full Ada to take advantage of Ada features that are not available in Spark such as access types and exceptions. It may be necessary for Spark programs to call third-party libraries written in Ada or some other programming language such as C. Of course Spark's assurances of correctness cannot be formally guaranteed when the execution of a program flows into the non-Spark components. However, mixing Spark and non-Spark code is of great practical importance. In this chapter we explore the issues around building programs that are only partially Spark. In Chapter 8 we look at how combining proof with testing can verify applications that are not all Spark.
Spark and Ada
In this section we discuss mixing Spark with full Ada. Calling Spark from Ada is trivial because Spark is a subset of Ada and thus appears entirely ordinary from the point of view of the full Ada compiler. Calling full Ada from Spark, however, presents more issues because the limitations of Spark require special handling at the interface between the two languages.
7.1.1 SparkMode
Conceptually each part or construct of your program is either “in Spark” or “not in Spark.” If a construct is in Spark, then it conforms to the restrictions of Spark, whereas if a construct is not in Spark, it can make use of all the features of full Ada as appropriate for the construct. It is not permitted for Spark constructs to directly reference non-Spark constructs. For example, a subprogram body that is in Spark cannot call a subprogram with a non-Spark declaration. However, as declarations and bodies are separate constructs, it is permitted for a Spark subprogram body to call a subprogram with a Spark declaration even if the body of the called subprogram is not in Spark.
It is up to you to mark the Spark constructs of your program as such by specifying their Spark mode. This is done using the SPARK_Mode pragma or SPARK_Mode aspect as appropriate. The Spark mode can be explicitly set to either On or Off.
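For example, a sketch (hypothetical names) of a package whose declaration is in Spark but whose body is not, so that Spark code may call Smooth while the body uses full Ada:

```ada
package Sensor_Filter
  with SPARK_Mode => On           -- declaration is in Spark
is
   function Smooth (Raw : Integer) return Integer;
end Sensor_Filter;

package body Sensor_Filter
  with SPARK_Mode => Off          -- body may use full Ada
is
   function Smooth (Raw : Integer) return Integer is
      --  Here the body may use access types, exceptions, and other
      --  full-Ada features; Spark callers see only the declaration.
   begin
      return Raw;
   end Smooth;
end Sensor_Filter;
```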
5 - Mathematical Background
- pp 135-154
-
Summary
In this chapter we present some background in mathematical logic in the context of software analysis. This material may be review for some readers, but we encourage all to at least skim this chapter to gain an understanding of our notation and terminology and their use in writing Spark programs. You may wish to consult a discrete mathematics textbook such as those by Epp (2010), Gersting (2014), or Rosen (2011) for a complete treatment of these topics.
Propositional Logic
A proposition is a meaningful declarative sentence that is either true or false. Propositions are also called logical statements or just statements. A statement cannot be true at one point in time and false at another time. Here, for example, are two simple propositions, one true and one false:
Sodium Azide is a poison.
New York City is the capital of New York state.
Not all statements that can be uttered in a natural language are unambiguously true or false. When a person makes a statement such as, “I like Italian food,” there are usually many subtle qualifications to the meaning at play. The speaker might really mean, “I usually like Italian food,” or, “I've had Italian food that I liked.” The true meaning is either evident from the context of the conversation or can be explored in greater depth by asking clarifying questions. In any case, the speaker almost certainly does not mean he or she definitely likes all Italian food in the world. The original statement is neither completely true nor completely false.
Even mathematical expressions may be ambiguous. We cannot tell whether the expression x ≥ 17 is true or false as we do not know the value of x. We can turn this expression into a proposition by giving x a value. In Section 5.4, we will show how to use quantifiers to give values to such variables.
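In Ada/Spark terms, the same idea appears as quantified expressions, which bind the variable and so yield a definite Boolean value. A sketch with illustrative ranges:

```ada
procedure Demo is
   --  x >= 17 alone is neither true nor false: x is unbound.
   --  Quantifiers bind the variable, yielding a proposition:
   All_Big  : constant Boolean := (for all  X in 1 .. 10  => X >= 17);  -- False
   Some_Big : constant Boolean := (for some X in 1 .. 100 => X >= 17);  -- True
begin
   null;
end Demo;
```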
Whereas literature, poetry, and humor depend on the emotional impact of ambiguous statements rife with subtle meanings, high-integrity systems must be constructed in more absolute terms. The pilot of an aircraft wants to know that if a certain control is activated, the landing gear will definitely respond. Thus, we are interested in statements with clear truth values.
6 - Proof
- pp 155-246
-
Summary
In this chapter we describe how you can use Spark to prove certain correctness properties of your programs. When you ask to “prove” your code, the Spark tools will by default endeavor to prove that it will never raise any of the predefined language exceptions that we describe in Section 6.1. If you additionally include pre- and postconditions, loop invariants, or other kinds of assertions, the tools will also attempt to prove that those assertions will never fail.
It is important to understand that proofs created by the Spark tools are entirely static. This means if they succeed, the thing being proved will be true for every possible execution of your program regardless of the inputs provided. This is the critical property of proof that sets it apart from testing.
However, Ada assertions are also executable under the control of the assertion policy in force at the time a unit is compiled. Assertions for which proofs could not be completed can be checked when the program is run, for example during testing, to help provide a certain level of confidence about the unproved assertions. Testing can thus be used to complement the proof techniques described here to obtain greater overall reliability. We discuss this further in Section 8.4.
Runtime Errors
A logical error is an error in the logic of the program itself that may cause the program to fail as it is executing. It is an error that, in principle, arises entirely because of programmer oversight. In contrast, an external error is an error caused by a problem in the execution environment of the program, such as being unable to open a file because it does not exist. If a program is correct, it should not contain any logical errors. However, external errors are outside of a program's control and may occur regardless of how well constructed the program might be. A properly designed program should be able to cope with any external errors that might arise. However, the handling of external errors is outside the scope of this book and is a matter for software analysis, design, and testing (see Black, 2007).
We distinguish a runtime error as a special kind of logical error that is detected by Ada-mandated checks during program execution. Examples of runtime errors include the attempt to access an array with an out of bounds index, arithmetic overflow, or division by zero.
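For instance, each of the following operations carries an Ada-mandated check that the Spark tools attempt to prove can never fail (a sketch with illustrative names, assumed to appear inside an enclosing package):

```ada
type Int_Array is array (1 .. 10) of Integer;

procedure Demo_Checks (A : in out Int_Array; I, X, Y : in Integer) is
begin
   A (I) := 0;      -- index check: I must lie in 1 .. 10
   A (1) := X + Y;  -- overflow check: X + Y must fit in Integer
   A (2) := X / Y;  -- division check: Y must not be zero
end Demo_Checks;
```

As written, the tools would report these checks as unproved; adding a precondition such as `Pre => I in A'Range and Y /= 0` (and suitable bounds on X and Y) would allow the proofs to succeed.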
Notes
- pp 355-358
Preface
-
- By John W. McCormick, University of Northern Iowa, and Peter C. Chapin, Vermont Technical College
- pp ix-xiv
-
Summary
Spark is a formally defined programming language and a set of verification tools specifically designed to support the development of high integrity software. Using Spark, developers can formally verify properties of their code such as
• information flow,
• freedom from runtime errors,
• functional correctness,
• security policies, and
• safety policies.
Spark meets the requirements of all high integrity software safety standards, including DO-178B/C (and the Formal Methods supplement DO-333), CENELEC 50128, IEC 61508, and DEFSTAN 00-56. Spark can be used to support software assurance at the highest levels specified in the Common Criteria Information Technology Security Evaluation standard.
It has been twenty years since the first proof of a nontrivial system was written in Spark (Chapman and Schanda, 2014). The 27,000 lines of Spark code for SHOLIS, a system that assists with the safe operation of helicopters at sea, generated nearly 9,000 verification conditions (VCs). Of these VCs, 75.5% were proven automatically by the Spark tools. The remaining VCs were proven by hand using an interactive proof assistance tool. Fast-forward to 2011 when the NATS iFACTS enroute air traffic control system went online in the United Kingdom. The 529,000 lines of Spark code were proven to be “crash proof.” The Spark tools had improved to the point where 98.76% of the 152,927 VCs were proven automatically. Most of the remaining proofs were accomplished by the addition of user-defined rules, leaving only 200 proofs to be done “by review.”
Although Spark and other proof tools have significant successes, their use is still limited. Many software engineers presume that the intellectual challenges of proof are too high to consider using these technologies on their projects. Therefore, an important goal in the design of the latest version of Spark, called Spark 2014, was to provide a less demanding approach for working with proof tools. The first step toward this goal was the arrival of Ada 2012 with its new syntax for contracts. We no longer need to write Spark assertions as special comments in the Ada code. The subset of Ada that is legal Spark has also grown substantially, giving developers a much richer set of constructs from which to develop their code.
References
- pp 359-362
Frontmatter
- pp i-iv
2 - The Basic Spark Language
- pp 18-67
-
Summary
Spark is a programming language based on Ada. The syntax and semantics of the Ada language are defined in the Ada Reference Manual (ARM, 2012). The Spark Reference Manual (Spark Team, 2014a) contains the specification of the subset of Ada used in Spark and the aspects that are Spark specific. As stated in Chapter 1, a major goal of Spark 2014 was to embody the largest possible subset of Ada 2012 amenable to formal analysis. The following Ada 2012 features are not currently supported by Spark:
• Aliasing of names; no object may be referenced by multiple names
• Pointers (access types) and dynamic memory allocation
• Goto statements
• Expressions or functions with side effects
• Exception handlers
• Controlled types; types that provide fine control of object creation, assignment, and destruction
• Tasking/multithreading (will be included in future releases)
This chapter and Chapter 3 cover many, but not all, of the features of Ada 2012 available in Spark. We discuss those features that are most relevant to Spark and the examples used in this book. We assume that the reader has little, if any, knowledge of Ada. Barnes (2014) presents a comprehensive description of the Ada programming language. Ben-Ari (2009) does an excellent job describing the aspects of Ada relevant to software engineering. Dale, Weems, and McCormick (2000) provide an introduction to Ada for novice programmers. Ada implementations of the common data structures can be found in Dale and McCormick (2007). There are also many Ada language resources available online that you may find useful while reading this chapter, including material by English (2001), Riehle (2003), and Wikibooks (2014).
Let us start with a simple example that illustrates the basic structure of an Ada program. The following program prompts the user to enter two integers and displays their average.
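The listing itself is not reproduced in this excerpt; a program consistent with the description (the procedure name and prompt wording are illustrative) might look like this:

```ada
with Ada.Text_IO;
with Ada.Integer_Text_IO;
with Ada.Float_Text_IO;
procedure Average is
   A, B : Integer;
begin
   Ada.Text_IO.Put ("Enter two integers: ");
   Ada.Integer_Text_IO.Get (A);
   Ada.Integer_Text_IO.Get (B);
   Ada.Text_IO.Put ("The average is ");
   Ada.Float_Text_IO.Put (Float (A + B) / 2.0,
                          Fore => 1, Aft => 2, Exp => 0);
   Ada.Text_IO.New_Line;
end Average;
```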
The first three lines of the program are context items. Together, these three context items make up the context clause of the program. The three with clauses specify the library units our program requires. In this example, we use input and output operations from three different library units: one for the input and output of strings and characters (Ada.Text_IO), one for the input and output of integers (Ada.Integer_Text_IO), and one for the input and output of floating point real numbers (Ada.Float_Text_IO).
References
- John W. McCormick, University of Northern Iowa, Frank Singhoff, Université de Bretagne Occidentale, Jérôme Hugues
-
- Book:
- Building Parallel, Embedded, and Real-Time Applications with Ada
- Published online:
- 01 June 2011
- Print publication:
- 07 April 2011, pp 359-364
-
- Chapter
7 - Real-time systems and scheduling concepts
- pp 251-293
-
Summary
Real-time systems are defined as those “systems in which the correctness of the system depends not only on the logical result of computation, but also on the time at which the results are produced” (Stankovic, 1988). When we design a real-time system, we must ensure that it meets three properties:
Correctness of functionality. We expect that our system will produce the correct output for every set of input data. Meeting this property is an expectation of all types of software including information technology and web applications. Traditional verification techniques such as testing and formal proof may be used to demonstrate functional correctness.
Correctness of timing behavior. As we stated in Chapter 1, the requirements of a real-time system include timing properties that must be met by the implementation. Deadlines may be assigned to particular system functions, and then to the tasks that implement these functions. Correct timing behavior is verified by checking that task execution times never exceed the required deadlines. This analysis of the timing behavior is called “schedulability analysis.”
Reliability. Software reliability is the probability of failure-free software operation for a specified period of time in a specified environment (Lyu, 1995). Real-time systems are often safety or mission critical — a failure may result in loss of life or property. Therefore, reliability is usually an important property of a real-time system.
5 - Communication and synchronization based on direct interaction
- pp 166-194
-
Summary
In Chapter 4 we showed how protected objects can provide an indirect means for tasks to communicate and synchronize. Protected functions and procedures provide safe access to shared data while protected entries provide the means for tasks to synchronize their activities. In this chapter we look at direct communication between tasks through Ada's rendezvous.
The rendezvous
Rendezvous is a sixteenth century French word which literally translates to “present yourself.” To rendezvous is to come together at a prearranged meeting place. In Ada, the rendezvous is a mechanism for controlled direct interaction between two tasks. The rendezvous provides a second way to synchronize tasks and transfer data between them.
Ada's rendezvous is based on a client-server model. One task, called the server, declares one or more services that it can offer to other tasks, the clients. These services are defined as entries in the server task's specification. Task entries are similar to protected entries. In fact, we may requeue a protected entry call onto a task entry and vice versa. A client task requests a rendezvous with a server by making entry calls just as if the server was a protected object. Server tasks indicate a willingness to accept a rendezvous on an entry by executing an accept statement. The accept statement may include code to be executed during the rendezvous.
For the rendezvous to take place, both the client and the server must have issued their requests.
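A server task of the kind described above might be sketched as follows (names are illustrative), with an entry in its specification and an accept statement in its body:

```ada
task Temperature_Server is
   entry Report (Reading : in Integer);   -- service offered to clients
end Temperature_Server;

task body Temperature_Server is
   Latest : Integer := 0;
begin
   loop
      accept Report (Reading : in Integer) do
         --  This code executes during the rendezvous,
         --  while the calling client task is suspended.
         Latest := Reading;
      end Report;
      --  After the rendezvous, client and server proceed concurrently.
   end loop;
end Temperature_Server;

--  A client requests a rendezvous with an ordinary entry call:
--     Temperature_Server.Report (Sensor_Value);
```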
Preface
- pp xiii-xviii
-
Summary
The arrival and popularity of multi-core processors have sparked a renewed interest in the development of parallel programs. Similarly, the availability of low cost microprocessors and sensors has generated a great interest in embedded real-time programs. This book gives students and programmers with traditional backgrounds in sequential programming the opportunity to expand their capabilities into these important emerging paradigms. It also addresses the theoretical foundations of real-time scheduling analysis, focusing on theory that is useful for real applications.
Two excellent books by Burns and Wellings (2007; 2009) provide a complete, in depth presentation of Ada's concurrent and real-time features. They make use of Ada's powerful object-oriented programming features to create high-level concurrent patterns. These books are “required reading” for software engineers working with Ada on real-time projects. However, we found that their coverage of all of Ada's concurrent and real-time features and the additional level of abstraction provided by their clever use of the object-oriented paradigm made it difficult for our undergraduate students to grasp the fundamental concepts of parallel, embedded, and real-time programming. We believe that the subset of Ada presented in this book provides the simplest model for understanding the fundamental concepts. With this basic knowledge, our readers can more easily learn the more detailed aspects of the Ada language and the more widely applicable patterns presented by Burns and Wellings.