The founding paper [Pratt 1976] on dynamic logic begins as follows:
“This paper deals with logics of programs. The objective is to formalize a notion of program description and to give both plausible (semantic) and effective (syntactic) criteria for the notion of truth of a description. A novel feature of this treatment is the development of the mathematics underlying Floyd-Hoare axiom systems independently of such systems.”
This book continues the study of such mathematics, with particular emphasis on semantic frameworks. We intend for these frameworks to be flexible, relying on no particular concept of state. Ultimately, extensions of the theory are to address at least program semantics, operating systems, concurrent processes and distributed networks; but the accomplishments of the foundational core herein are modest.
We shall be concerned with a category-theoretic foundation. One possible paradigm is that a morphism is the behaviour of a program. Composition of morphisms models program-chaining. An implementation of a programming language must provide a definite category in which to assign morphisms to programs. We shall also require that high-level specifications about programs map to true-false assertions about the corresponding interpreted programs.
Our semantic frameworks are categories satisfying certain axioms; that is, they are models of the first-order theory of categories. Composition is the only primitive operation. Such models are strongly typed in that two morphisms cannot be composed unless the target of the first coincides exactly with the source of the second.
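As a small illustration of this typing discipline, consider the following Haskell sketch (the datatype and names are ours, not part of the theory): a morphism carries its source and target as types, and composition is defined only when the target of the first coincides with the source of the second, a constraint the type checker enforces.

    -- A toy 'category of programs': a morphism from a to b is a
    -- program behaviour taking states of type a to states of type b.
    -- (Our own illustrative sketch, not the book's formal framework.)
    newtype Mor a b = Mor (a -> b)

    -- Program chaining: composition is defined only when the target
    -- of the first morphism is the source of the second; the types
    -- enforce this constraint.
    chain :: Mor a b -> Mor b c -> Mor a c
    chain (Mor f) (Mor g) = Mor (g . f)

    -- Identity morphism on any object.
    identity :: Mor a a
    identity = Mor id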
The original motivation for the development of action semantics was dissatisfaction with pragmatic aspects of denotational semantics.
Early work on abstract semantic algebras focused on the use of algebraic axioms to specify the intended interpretation of action notation.
Although the concrete form of action notation has varied greatly, the underlying primitives and combinators have remained rather stable.
The adoption of a meta-notation based on unified algebras simplified the algebraic specification of generic abstract data types, and allowed the use of operations on sorts in actions.
The provision of a structural operational semantics for action notation emphasized the operational essence of action notation, and allowed the verification of algebraic laws.
Recent enhancements of action semantics concern the grammars for specifying abstract syntax, action notation for communication and indirect bindings, and the notation for sorts of actions.
Current and future projects involve: the action semantic description of various programming languages; the implementation of systems supporting the creation, editing, checking, and interpretation of descriptions; action semantics directed compiler generation; and the further investigation of the theory of action notation.
The author welcomes comments on action semantics, and maintains a mailing list.
This concluding chapter explains the original motivation for the development of action semantics. It then gives what amounts to an annotated bibliography for action semantics and for its precursor, a framework called abstract semantic algebras. Finally, it describes current work, and invites you to participate in the future development of action semantics.
Part IV concludes by briefly relating action semantics to other frameworks, and by sketching its development. It cites the main sources for action semantics and the related frameworks, and mentions some current projects.
Action notation includes a communicative action notation for specifying information processing by distributed systems of agents.
Communicative actions are concerned with permanent information.
Chapter 17 illustrates the use of communicative action notation in the semantic description of tasks and entry calls.
So far, we have dealt with sequential performance of actions by a single agent in isolation. Let us now consider concurrent performance by a distributed system of agents, where each agent can communicate with the other agents, sending and receiving messages and offering contracts. Even when only one agent is active, this generalizes action performance sufficiently to allow the representation of interactive input-output behaviour, where nonterminating information processing is especially significant.
An agent represents the identity of a single process, embedded in a universal communication medium, or ‘ether’. Implementations of processes may run them on physically separate processors, linked by buses or networks, or on a single processor using time-sharing. Of course, each processor may itself consist of a number of connected parts, such as a CPU, memory modules and video cards. Agents may correspond to the behaviours of such special-purpose subprocessors, as well as to processes specified directly in high-level programming languages.
Communication between agents is asynchronous: to send a message, an agent emits the message into the medium, with another agent specified as the receiver, and then carries on performing, not waiting until the message has been delivered to the other agent.
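To make the asynchronous style concrete, here is a minimal Haskell sketch (our own illustration, not part of action notation): the sender deposits a message bound for the receiver's mailbox and carries on at once, without waiting for the message to be read.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (Chan, newChan, writeChan, readChan)
    import Control.Monad (forever)

    -- Each agent is reachable through a mailbox in the shared medium.
    type Message = String
    type Mailbox = Chan Message

    -- Asynchronous send: emit the message towards the receiver's
    -- mailbox and continue performing, without waiting for delivery
    -- to be observed by the other agent.
    send :: Mailbox -> Message -> IO ()
    send receiver msg = writeChan receiver msg

    -- A receiving agent repeatedly takes delivered messages.
    agent :: String -> Mailbox -> IO ()
    agent name mbox = forever $ do
      msg <- readChan mbox
      putStrLn (name ++ " received: " ++ msg)

    main :: IO ()
    main = do
      mbox <- newChan
      _ <- forkIO (agent "agent2" mbox)
      send mbox "hello"    -- returns immediately; delivery is asynchronous
      send mbox "world"
      threadDelay 100000   -- give the receiving agent time to run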
Part I introduces the concepts and formalism used in action semantics. First it motivates formal descriptions of programming languages, and discusses their main features. It then explains the particular kinds of formal specification used in action semantic descriptions of programming languages, giving a simple illustrative example. Finally it presents an unorthodox framework for algebraic specifications, and sketches the algebraic foundations of action semantics.
Navigation
Have you read the Preface? If not, please do so now—it explains how this book is organized.
The rationale behind the development of parameterized semantics in Chapter 5 is that it facilitates a multitude of interpretations of the mixed λ-calculus and combinatory logic. We saw examples of ‘standard semantics’ in Chapter 5 and a code generation example in Chapter 6; in this chapter we shall give examples of static program analyses. We shall follow the approach of abstract interpretation, but will only cover a rather small part of the concepts, techniques and tools that have been developed. The Bibliographical Notes will contain pointers to some of those that are not covered, in particular the notions of liveness (as opposed to safety), inducing (a best analysis) and expected forms (for certain operators).
We cover a basic strictness analysis in Section 7.1. It builds on Wadler's four-point domain for lists of base types, but generalizes the formulation to lists of arbitrary types. In Section 7.2 we then illustrate the precision obtained by Wadler's notion of case analysis. We then review the tensor product, which has been put forward as a way of modelling so-called relational program analyses (as opposed to independent attribute program analyses). Finally, we show that there is a rather intimate connection between these two ideas.
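For orientation, the following Haskell fragment sketches that four-point domain (the constructor names are ours, and the reading of each point is only approximate; the analysis developed in Section 7.1 is considerably richer).

    -- A sketch of Wadler's four-point domain for lists over a base type.
    -- The four abstract values form the chain  Bot <= Inf <= BotIn <= TopIn,
    -- read roughly as: the undefined list; infinite and partial lists;
    -- finite lists that may contain an undefined element; and finite
    -- lists of fully defined elements.
    data FourPoint = Bot | Inf | BotIn | TopIn
      deriving (Eq, Ord, Show, Enum, Bounded)

    -- Least upper bound; the ordering happens to be total in this domain.
    lub :: FourPoint -> FourPoint -> FourPoint
    lub = max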
Strictness Analysis
Strictness of a function means that ⊥ is mapped to ⊥. In practical terms, this means that if a function is strict, it is safe to evaluate the argument to the function before beginning to evaluate the body of the function.
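As a concrete illustration (the functions below are our own examples, not from the text): a strict function may safely have its argument evaluated before its body, whereas forcing the argument of a non-strict function can introduce a failure or non-termination that the lazy program would have avoided.

    -- A strict function: it always inspects its argument, so
    -- strictFn undefined = undefined.  Evaluating the argument first
    -- (for example with ($!)) is therefore safe.
    strictFn :: Int -> Int
    strictFn x = x + 1

    -- A non-strict function: it never inspects its argument, so
    -- constFn undefined = 0.  Forcing the argument first would fail
    -- on an undefined or non-terminating argument.
    constFn :: Int -> Int
    constFn _ = 0

    main :: IO ()
    main = do
      print (constFn undefined)        -- fine under lazy evaluation: prints 0
      print (strictFn $! (2 + 3))      -- safe to force: strictFn is strict
      -- print (constFn $! undefined)  -- would crash: argument forced needlessly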
The previous chapter developed the notion of parameterized semantics for the mixed λ-calculus and combinatory logic. This was applied to showing that the run-time part of the language could be equipped with various mixtures of lazy and eager features. In this chapter we shall stick to one of these: the lazy semantics S. The power of parameterized semantics will then be used to specify code that describes how to compute the results specified by the lazy semantics.
The abstract machine and the code generation are both developed in Section 6.1, as it is hard to understand the details of the instructions in the abstract machine without some knowledge of how they are used for code generation, and vice versa. The abstract machine is a variant of the categorical abstract machine, and its semantics is formulated as a transition system on configurations consisting of a code component and a stack of values. The code generation is specified as an interpretation K in the sense of Chapter 5.
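To fix intuitions about this style of semantics, here is a minimal Haskell sketch of such a transition system (the instruction set is our own toy example, not the machine of Section 6.1): a configuration pairs the code that remains with a stack of values, and each transition consumes the first instruction.

    -- A toy abstract machine: configurations consist of a code
    -- component and a stack of values.
    data Instr = Push Int | Add | Swap
      deriving Show

    type Code   = [Instr]
    type Stack  = [Int]
    type Config = (Code, Stack)

    -- One transition of the machine.
    step :: Config -> Maybe Config
    step (Push n : c, s)         = Just (c, n : s)
    step (Add    : c, x : y : s) = Just (c, x + y : s)
    step (Swap   : c, x : y : s) = Just (c, y : x : s)
    step _                       = Nothing      -- stuck or finished

    -- Run to a terminal configuration.
    run :: Config -> Config
    run cfg = maybe cfg run (step cfg)

    -- Example: run ([Push 2, Push 3, Add], [])  ==>  ([], [5])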
The remainder of the chapter is devoted to demonstrating the correctness of the code generation, K, with respect to the lazy semantics, S. To cut down on the overall length of the proof we shall exclude lists from our consideration. Section 6.2 then begins by showing that the code generation function behaves in a way that admits substitution. Next, Section 6.3 shows that the code generated is ‘well-behaved’ in that it operates in a stack-like manner.
In the previous chapters we have focused on the theoretical development of the language of the mixed λ-calculus and combinatory logic (Chapters 2, 3 and 4) and on the different standard and non-standard semantics of the language (Chapters 5, 6 and 7). There are two immediate application areas for this work: one is the efficient implementation of functional languages and the other is denotational semantics.
Optimized Code Generation
Much work in the functional languages community has been devoted to the development of efficient implementations. This is well documented in [86], which contains a number of techniques that may be used to improve the overall performance of a ‘naive’ implementation. However, the theoretical soundness of these techniques has not been established in full (although [52] goes part of the way). We believe that the main reason for this is that it is not well understood how to structure correctness proofs even for naive code generation schemes. So although we have a firm handle on how to prove the safety of large classes of program analyses, it is less clear how to prove formally the correctness of exploiting the analyses to generate ‘optimized’ code.
Before addressing the question of how to improve the code generation of Chapter 6, let us briefly review the techniques we have used. The code generation is specified as an interpretation K (in the sense of Chapter 5) and its correctness is expressed by means of Kripke-logical relations.