Up to now, we have presented session type systems for the pi calculus. If session types are to be used for practical software development, they need to be applied to standard programming language paradigms rather than a foundational calculus. This chapter takes a step in that direction by defining a session type system for a concurrent lambda calculus, which can be regarded as a core concurrent programming language. After presenting some examples, we introduce a syntax for functional programs with concurrency and channel-based communication. We include infinite types from the outset (based on Chapter 3), as well as sharing (Chapter 4) and subtyping (Chapter 5). We then introduce a type system for the functional language, develop its operational semantics and prove type safety.
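The flavour of such a functional program can be sketched in ordinary Python, modelling channels as queues (one per direction) and thread spawning as forking a process. Everything here is a hypothetical illustration, not the chapter's calculus: the `adder` endpoint is written to follow a session type of the shape ?Int.?Int.!Int.end, receiving two integers and sending back their sum.

```python
import queue
import threading

# Hypothetical sketch: channels modelled as FIFO queues, one per direction.
# The adder endpoint follows the session type ?Int.?Int.!Int.end:
# receive two integers, send back their sum, then stop.
def adder(inp, out):
    x = inp.get()          # ?Int
    y = inp.get()          # ?Int
    out.put(x + y)         # !Int

to_srv, from_srv = queue.Queue(), queue.Queue()
worker = threading.Thread(target=adder, args=(to_srv, from_srv))
worker.start()

to_srv.put(2)              # client side: !Int
to_srv.put(3)              # !Int
result = from_srv.get()    # ?Int -- blocks until the sum arrives
worker.join()
print(result)  # 5
```

A static session type system makes the communication discipline sketched in the comments into a compile-time guarantee, rather than a convention the programmer must maintain by hand.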
This chapter presents the basic concepts of session types, using the pi calculus as a core concurrent programming language for which a type system is defined. It assumes some familiarity with the pi calculus and the concepts of operational semantics and type systems. References to background reading are included at the end of the chapter.
Many type systems include the concept of subtyping, allowing a value of one type (the subtype) to be used as if it were a value of another type (the supertype). The aim is to allow greater flexibility in programming, while maintaining safety. In this chapter, we see how subtyping can be included in our system of session types. We build on the language in Chapter 4, using replicated input, rather than recursive process definitions, to express repetitive behaviour.
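To give a feel for session subtyping, the following is a hedged sketch of a checker for finite session types, assuming a single base-type fact int <: float for payloads. The encoding of types as tuples and the names `BASE` and `subtype` are our own illustrative choices; the variance discipline (output contravariant in the payload, input covariant) is the classic one for channel types.

```python
# Base-type subtyping facts, T <: U, assuming int <: float (illustrative).
BASE = {('int', 'int'), ('float', 'float'), ('int', 'float')}

def subtype(s, t):
    """S <: T for finite session types encoded as tuples:
    ('end',), ('send', payload, cont), ('recv', payload, cont)."""
    if s == ('end',) and t == ('end',):
        return True
    if s[0] == t[0] == 'send':
        # Output is contravariant in the payload: !T.S <: !U.S' needs U <: T.
        return (t[1], s[1]) in BASE and subtype(s[2], t[2])
    if s[0] == t[0] == 'recv':
        # Input is covariant in the payload: ?T.S <: ?U.S' needs T <: U.
        return (s[1], t[1]) in BASE and subtype(s[2], t[2])
    return False

# A channel able to send any float may be used where an int is sent:
ok = subtype(('send', 'float', ('end',)), ('send', 'int', ('end',)))
print(ok)  # True
```

The contravariant case is the one that tends to surprise: a channel prepared to send the larger type is the more general, hence the smaller, session type.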
This chapter describes the evolution of computing systems, from data processing to an emphasis on communication, and motivates a corresponding evolution of the concept of typing. Data types codify the structure of data, and go back to the early days of programming languages. This book is about session types, which codify the structure of communication – they are type-theoretic specifications of communication protocols. The chapter summarises the assumptions about communication that are necessary for the theory of session types, and describes the behavioural safety properties that are guaranteed by checking session types.
The earlier chapters present session type systems declaratively, focusing on how typing judgements describe the way in which processes use channels. In order to apply session types to programming languages, it is essential to be able to implement an efficient typechecking algorithm which answers the question: given a candidate typing judgement, is it derivable? The declarative typing rules, however, are not immediately suitable for implementation. In this chapter we explain the problem and how to overcome it.
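The core difficulty is that declarative rules split a linear context between subterms nondeterministically, which an algorithm cannot guess. A standard remedy, sketched below under our own hypothetical term encoding, is to thread the whole context through the checker and return the unused part ("input/output contexts"), so the split is computed rather than guessed.

```python
# Hypothetical sketch of algorithmic linear typechecking with
# input/output contexts. Terms: ('var', x) or ('pair', e1, e2);
# ctx maps linear variables to their types.
def check(term, ctx):
    """Return (type, leftover context)."""
    if term[0] == 'var':
        ty = ctx[term[1]]      # the variable is consumed here
        return ty, {k: v for k, v in ctx.items() if k != term[1]}
    if term[0] == 'pair':
        ty1, ctx1 = check(term[1], ctx)    # e1 takes what it needs
        ty2, ctx2 = check(term[2], ctx1)   # e2 is checked in the leftovers
        return ('pair', ty1, ty2), ctx2
    raise ValueError('unknown term')

ty, leftover = check(('pair', ('var', 'x'), ('var', 'y')),
                     {'x': 'int', 'y': 'bool'})
print(ty, leftover)  # ('pair', 'int', 'bool') {}
```

An empty leftover context at the top level witnesses that every linear resource was used exactly once, recovering the declarative judgement without ever enumerating context splits.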
This chapter develops a theory of infinite session types in order to describe communication protocols that allow unbounded behaviour. The theory is based on the technical machinery of recursive types, coalgebras and coinduction, which the chapter introduces at an elementary level. Recursive process definitions are introduced so that unbounded behaviour can be implemented. The type safety results of Chapter 2 are extended to the new setting.
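The coinductive style of reasoning can be illustrated with a small, self-contained sketch: deciding equality of recursive (regular) session types by unfolding mu-binders and assuming that any pair of types already under comparison is equal. The tuple encoding and function names below are our own illustrative assumptions.

```python
# Hypothetical encoding of recursive session types as tuples:
# ('end',), ('send', base, cont), ('recv', base, cont),
# ('mu', x, body), ('var', x). Regular (finitely representable) types only.
def subst(t, x, r):
    if t[0] == 'var':
        return r if t[1] == x else t
    if t[0] == 'mu':
        return t if t[1] == x else ('mu', t[1], subst(t[2], x, r))
    if t[0] in ('send', 'recv'):
        return (t[0], t[1], subst(t[2], x, r))
    return t  # end

def unfold(t):
    while t[0] == 'mu':            # mu X. T  unfolds to  T[mu X. T / X]
        t = subst(t[2], t[1], t)
    return t

def equal(s, t, seen=frozenset()):
    s, t = unfold(s), unfold(t)
    if (s, t) in seen:
        return True                # coinductive hypothesis: assume equal
    seen = seen | {(s, t)}
    if s[0] != t[0]:
        return False
    if s[0] == 'end':
        return True
    return s[1] == t[1] and equal(s[2], t[2], seen)

S = ('mu', 'X', ('send', 'int', ('var', 'X')))                    # !int forever
T = ('mu', 'Y', ('send', 'int', ('send', 'int', ('var', 'Y'))))   # same protocol
print(equal(S, T))  # True
```

Termination relies on regularity: repeated unfolding of a regular type only ever produces finitely many distinct pairs, so the `seen` set eventually closes the loop that an inductive definition could never finish.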
Communication in the languages presented so far is synchronous: a sending action blocks the sender until it can interact with a compatible receiving action at the intended receiver. In this chapter, we consider an alternative semantics for interactions: asynchronous communication. Asynchronous communication allows a sending action to execute without waiting for the receiver to be ready: the sent message is stored in a message queue that the intended receiver can later read.
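The operational difference can be seen in a few lines, using an unbounded FIFO queue as an illustrative stand-in for the message queue: sends complete immediately with no receiver waiting, and messages are read later in the order they were sent.

```python
import queue

# Asynchronous channel sketch: an unbounded FIFO message queue.
# A send enqueues the message and returns at once; the receiver
# reads the queued messages later.
chan = queue.Queue()
chan.put('hello')   # completes even though no receiver is waiting
chan.put('world')

first, second = chan.get(), chan.get()
print(first, second)  # hello world -- the order of sends is preserved
```

Under the synchronous semantics of the earlier chapters, the first `put` would instead block until a matching receive was ready to interact with it.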
We introduce our first choreography language, Simple Choreographies, which allows for writing sequences of interactions between processes. The key aspect of the language is that interactions are syntactically manifest in choreographies. A semantics of choreographies is obtained in terms of a labelled transition system.
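As a rough illustration of such a transition system, the sketch below encodes a choreography as a sequence of (sender, receiver) interactions and enumerates its labelled transitions; the second rule, allowing a later interaction to fire first when it shares no process with any earlier one, is a simplified rendering of out-of-order execution, and the encoding is entirely our own.

```python
# Sketch of an LTS for a choreography given as a list of
# interactions (sender, receiver).
def transitions(c):
    if c:
        yield c[0], c[1:]          # execute the head interaction
    # Out-of-order execution (sketch): a later interaction may fire
    # first if it shares no process with any interaction before it.
    for i in range(1, len(c)):
        if all(set(c[i]).isdisjoint(c[j]) for j in range(i)):
            yield c[i], c[:i] + c[i + 1:]

choreo = [('alice', 'bob'), ('carol', 'dave')]
labels = [label for label, _ in transitions(choreo)]
print(labels)  # both interactions can fire first: they are independent
```

Because the two interactions involve disjoint sets of processes, the transition system offers both as initial steps, which is exactly the parallelism that interleaved process implementations must be able to realise.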
Before venturing into the study of choreographies, we introduce the formalism of inference systems. Inference systems are widely used in the fields of formal logic and programming languages, and they have since been applied to the theory of choreographies as well.
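A standard toy example conveys the idea: the judgement "n is even" defined by one axiom and one rule, with the derivable judgements forming the least set closed under both. The checker below is a hypothetical sketch that searches for a derivation by reading the rule backwards from conclusion to premise.

```python
# Inference system sketch for the judgement "n is even":
#   axiom:  0 even
#   rule:   from "n even" infer "n + 2 even"
def derivable(n):
    if n == 0:
        return True              # the axiom closes the derivation
    if n >= 2:
        return derivable(n - 2)  # reduce the goal to the rule's premise
    return False

print(derivable(10), derivable(7))  # True False
```

Each successful call corresponds to a finite derivation tree built from instances of the axiom and the rule, which is the template the choreography chapters reuse for typing and semantic judgements.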
To model system implementations, we define the language of Simple Processes. In this language, systems are defined in the classical style of giving a separate program for each process. Process programs use send and receive actions that need to match during execution in order to achieve a communication. We discuss how implementations of choreographies from the previous chapter can be written in terms of this language. We also formulate in our setting the key properties of parallelism (the capability of executing independent communications in any order), communication safety (processes never attempt to interact by performing incompatible actions), and starvation-freedom (every running process eventually gets to act).
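The per-process style, and the requirement that sends and receives match, can be sketched as follows; the action encoding and the scheduler are illustrative assumptions, not the chapter's formal semantics. Each process owns a program of actions, and a step fires only when one process's send meets the partner's compatible receive.

```python
# Each process has its own program: ('send', to, msg) or ('recv', frm).
procs = {
    'alice': [('send', 'bob', 'hi'), ('recv', 'bob')],
    'bob':   [('recv', 'alice'), ('send', 'alice', 'hello')],
}

def step(procs):
    """Fire one matching send/receive pair, if any (illustrative)."""
    for p, acts in procs.items():
        if acts and acts[0][0] == 'send':
            _, q, msg = acts[0]
            if procs[q] and procs[q][0] == ('recv', p):
                procs[p], procs[q] = acts[1:], procs[q][1:]
                return (p, q, msg)
    return None  # no match: termination, deadlock, or a safety violation

trace = []
while (t := step(procs)):
    trace.append(t)
print(trace)  # [('alice', 'bob', 'hi'), ('bob', 'alice', 'hello')]
```

When `step` returns `None` with actions still pending, the system is stuck on incompatible actions, which is precisely the situation that communication safety rules out.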
We explore conservative extensions to Recursive Choreographies, which aim at making choreographies easier to read or to write. These extensions are given as syntactic sugar and include constructs for request-reply interactions, message destructuring, and distributed conditions.
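The idea of syntactic sugar can be illustrated with a toy desugaring of one such construct. Everything below is a hypothetical encoding: a request-reply interaction between `p` and `q` is expanded into two plain interactions of the core language, so the extension adds no expressive power.

```python
# Hypothetical desugaring: a ('reqrep', p, q) construct expands into
# two ordinary interactions of the core language.
def desugar(choreo):
    out = []
    for instr in choreo:
        if instr[0] == 'reqrep':
            _, p, q = instr
            out += [('com', p, q), ('com', q, p)]  # request, then reply
        else:
            out.append(instr)
    return out

core = desugar([('reqrep', 'client', 'server')])
print(core)  # [('com', 'client', 'server'), ('com', 'server', 'client')]
```

Because the expansion targets the existing core language, all metatheory proved for the core carries over to programs written with the sugar, which is what makes the extensions conservative.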