The theoretical foundation of functional programming is the Curry-Howard correspondence, also known as the propositions-as-types paradigm. Types in the simply typed lambda calculus correspond to propositions in intuitionistic logic: function types correspond to logical implications, and product types correspond to logical conjunctions. Moreover, programs correspond to proofs, and computation corresponds to cut elimination, or proof normalisation, in which proofs are progressively simplified. The Curry-Howard view has proved to be robust and general, and has been extended to a variety of richer type systems and logics. In one of these extensions the language is a form of pi calculus and the logic is linear logic, with its propositions interpreted as session types. In this chapter we present this system and its key results.
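To make the correspondence concrete, here is a small illustration in Haskell (a sketch of ours, not code from the chapter): a function type a -> b reads as the implication A implies B, and the pair type (a, b) as the conjunction A and B, so any total term inhabiting a type is a proof of the corresponding proposition.

    -- A proof of (A and B) implies A: the first projection.
    proj1 :: (a, b) -> a
    proj1 (x, _) = x

    -- A proof of ((A implies B) and (B implies C)) implies (A implies C):
    -- composing the two functions witnesses the transitivity of implication.
    trans :: (a -> b, b -> c) -> (a -> c)
    trans (f, g) = g . f

    main :: IO ()
    main = print (trans (proj1, even) ((3 :: Int), True))

Evaluating main prints False, since the composed "proof" first projects 3 out of the pair and then tests it for evenness; the point is that the type checker has verified the corresponding propositional tautologies.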
This chapter introduces a variant of the pi calculus defined in Chapter 2. We drop choice and channel closing, retaining only message passing. To compensate, messages may now carry values other than channel endpoints: we introduce record and variant values, both of which are standard in functional languages. We further introduce new processes to eliminate the new values. We play the same game at the type level: from input/output, external/internal choice and the two end types, we retain only input/output types. In return we incorporate record and variant types, again standard from functional languages. Unlike the input and output types in all preceding chapters, those in this chapter have no continuation. These changes lead to a linear pi calculus with record and variant types. The interesting characteristic of this calculus is that it allows us to faithfully encode the pi calculus with session types, even though it has no specific support for session types. We present an encoding based on work by Dardha, Giachino and Sangiorgi.
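As a taste of the encoding, here are its clauses for output and input types, in the style of Dardha, Giachino and Sangiorgi (our paraphrase; the chapter's notation may differ). A session type is translated to a linear channel type whose payload includes a channel for the continuation, where the overbar denotes duality:

\[
\llbracket \mathord{!}T.S \rrbracket \;=\; \ell_{o}[\,\llbracket T \rrbracket,\ \llbracket \overline{S} \rrbracket\,]
\qquad\qquad
\llbracket \mathord{?}T.S \rrbracket \;=\; \ell_{i}[\,\llbracket T \rrbracket,\ \llbracket S \rrbracket\,]
\]

The process encoding follows the same continuation-passing idea: to perform an output, a process creates a fresh channel, sends it along with the payload, and continues on it; the receiver likewise continues on the channel it has just received.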
In this chapter, we aim to contribute to the ongoing discussions involving legal entities, big tech, and governments by introducing several key topics and questions related to data privacy, decision-making, and regulation. We explore the balance between mathematical logic and social justice, the challenge of eliminating persistent biases through programming, and the extent of control and accountability humans should maintain over generative systems. We also consider whether machines should be held to the same ethical standards as humans and contemplate the role of the free market in shaping societal outcomes.
The chapter concludes with an examination of how data is monetized through ad markets, its role in perpetuating bias, and the need to define personal data. Through these discussions, we hope to provide a foundation for deeper exploration and understanding of the complex issues surrounding data privacy, decision-making, and regulation.
This chapter delves into the complexities and challenges of data science, emphasizing the potential pitfalls and ethical considerations inherent in decision-making based on data. It explores the intricate nature of data, which can be multifaceted, noisy, temporally and spatially disjointed, and often a result of the interplay among numerous interconnected components. This complexity poses significant difficulties in drawing causal inferences and making informed decisions.
A central theme of the chapter is the compromise of privacy that individuals may face in the quest for data-driven insights, which raises ethical concerns regarding the use of personal data. The discussion extends to the concept of algorithmic fairness, particularly in the context of racial bias, shedding light on the need for mitigating biases in data-driven decision-making processes.
Through a series of examples, the chapter illustrates the challenges and potential pitfalls associated with data science, underscoring the importance of robust methodologies and ethical considerations. It concludes with a thought-provoking examination of income inequality as a controversial example of data science in practice. The example highlights the nuanced interplay between data, decisions, and societal impacts.
The previous chapters present a linear world in which each channel endpoint is possessed by exactly one thread. Linearity allows precise control of interference, which can be useful: if two processes engage in a series of consecutive message exchanges, we can be sure that messages are not sent to or received from foreign processes. Although linearity is often convenient, it can also be limiting in concurrent systems. Purely linear systems cannot describe e-commerce interactions in which a server (or a pool of servers) is expected to serve a much larger number of clients. Linear systems also fail to describe an auction, where, at the end, only one bidder gets the item for sale. This chapter introduces a type discipline that provides for channels that can be shared, and consequently for processes that compete for resources, thus creating race conditions and nondeterminism. We do this in such a way that process formation and process reduction need not be revisited, and process typing requires only mild changes. We define the language and type system as extensions of those in Chapter 3.
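The flavour of shared channels can be sketched in Haskell (our illustration, not the chapter's formal system): a Chan is unrestricted, so any number of clients may write to the same request channel and race for the server's attention.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forever)

    main :: IO ()
    main = do
      requests <- newChan :: IO (Chan (Int, Chan Int))
      -- A replicated server: forever receive a request paired with a
      -- private reply channel, and answer on that channel.
      _ <- forkIO $ forever $ do
             (n, reply) <- readChan requests
             writeChan reply (n * 2)
      -- Two clients share, and race for, the same request channel.
      done <- newChan :: IO (Chan Int)
      let client n = forkIO $ do
            reply <- newChan
            writeChan requests (n, reply)
            writeChan done =<< readChan reply
      _ <- client 1
      _ <- client 2
      r1 <- readChan done
      r2 <- readChan done
      print (r1, r2)  -- the order of results is nondeterministic

Each request still carries a private reply channel, used linearly by one client and the server, so the linear and shared disciplines coexist, just as the chapter's type system arranges.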
This chapter asserts that the evolution of AI over the past seven decades has been closely intertwined with advancements in computational power. It identifies four key computing developments – mainframes, personal computers, wireless communication and the internet, and embedded systems – that have significantly influenced the field of data science and AI.
Starting from the early concepts of Turing machines, the chapter traces the parallel evolution of AI through milestones such as the invention of the perceptron, the development of machine learning techniques, and the current state of AI systems. It highlights key moments in AI history, from the first computer to play checkers to the algorithmic triumph of Deep Blue over a chess champion, as well as the recent achievements of AlphaGo.
By placing these advances in the context of broader computing history, the chapter argues that contemporary AI capabilities are the culmination of deliberate and iterative technological progress. It concludes by examining the profound impact of computing and AI on political institutions, citing examples such as the Arab Spring and the Cambridge Analytica scandal.
This chapter introduces Data, Systems, and Society (DSS), a new transdiscipline bridging statistics, information and decision systems, and social and institutional behavior. It emphasizes the value of transdisciplinarity over multidisciplinarity and interdisciplinarity and advocates for integrating DSS across domains where data and systems are pivotal (e.g., engineering, sciences, social sciences, and management). The chapter concludes by illustrating how DSS training has been instrumental in tackling different facets of the COVID-19 pandemic, including testing, vaccination strategies, and evaluating regional policies.
Machine-learning (ML) methods have shown great potential for weather downscaling. These data-driven approaches provide a more efficient alternative to physics-based numerical simulations for producing high-resolution weather datasets and forecasts. Neural operators, which learn solution operators for families of partial differential equations, have shown great success in scientific ML applications involving physics-driven datasets. Neural operators are grid-resolution-invariant and are often evaluated on higher grid resolutions than they are trained on, i.e., zero-shot super-resolution. Given their promising zero-shot super-resolution performance on dynamical-systems emulation, we present a critical investigation of their zero-shot weather downscaling capabilities, i.e., their ability to produce high-resolution outputs at higher upsampling factors than those seen during training. To this end, we create two realistic downscaling experiments with challenging upsampling factors (e.g., 8x and 15x) across data from different simulations: the European Centre for Medium-Range Weather Forecasts Reanalysis version 5 (ERA5) and the Wind Integration National Dataset Toolkit. While neural-operator-based downscaling models perform better than interpolation and a simple convolutional baseline, we show the surprising strength, at zero-shot weather downscaling, of an approach that combines a powerful transformer-based model with parameter-free interpolation. We find that this Swin-Transformer-based approach mostly outperforms models with neural operator layers on average error metrics, whereas an approach based on the Enhanced Super-Resolution Generative Adversarial Network is better than most models at capturing the physics of the ground-truth data. We suggest both approaches as strong baselines for future work.
We present a family of minimal modal logics (namely, modal logics based on minimal propositional logic), each corresponding to a different classical modal logic. The minimal modal logics are defined from their classical counterparts in two distinct ways: (1) via embedding into fusions of classical modal logics, through a natural extension of the Gödel–Johansson translation of minimal logic into modal logic S4; (2) via extension to modal logics of the multi- vs. single-succedent correspondence between sequent calculi for classical and minimal logic. We show that, despite being mutually independent, the two methods turn out to be equivalent for a wide class of modal systems. Moreover, we compare the resulting minimal version of K with the constructive modal logic CK studied in the literature, displaying tight relations between the two systems. Based on these relations, we also define a constructive counterpart for each minimal system, thus obtaining a family of constructive modal logics which includes CK as well as other constructive modal logics studied in the literature.
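For orientation, one standard formulation of the Gödel translation that the paper's embedding extends maps propositional formulas into S4 as follows (our summary of the classical clauses, not the paper's extension itself; in the Johansson variant for minimal logic, falsum is treated as an ordinary propositional atom):

\begin{align*}
p^{\square} &= \square p\\
(A \wedge B)^{\square} &= A^{\square} \wedge B^{\square}\\
(A \vee B)^{\square} &= A^{\square} \vee B^{\square}\\
(A \rightarrow B)^{\square} &= \square(A^{\square} \rightarrow B^{\square})
\end{align*}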
This chapter explores how students and faculty have applied their training in the DSS transdiscipline to address complex societal problems. Through examples from fields as diverse as climate change, social media, genomics, and anesthesia, it demonstrates the breadth of the transdiscipline’s power. Those examples illustrate how the DSS framework enables researchers to navigate and effectively address complex societal questions.
Up to now, we have presented session type systems for the pi calculus. If session types are to be used for practical software development, they need to be applied to standard programming language paradigms rather than a foundational calculus. This chapter takes a step in that direction by defining a session type system for a concurrent lambda calculus, which can be regarded as a core concurrent programming language. After presenting some examples, we introduce a syntax for functional programs with concurrency and channel-based communication. We include infinite types from the beginning (based on Chapter 3), as well as sharing (Chapter 4) and subtyping (Chapter 5). We then introduce a type system for the functional language, develop its operational semantics, and prove type safety.
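To fix intuitions, here is what such a functional program might look like in Haskell, using MVar as a rough stand-in for a typed channel endpoint (our sketch under that assumption; the chapter's language and type system are its own):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    -- A parent thread forks a child, sends it a question over one
    -- channel, and receives the answer over another.
    main :: IO ()
    main = do
      question <- newEmptyMVar
      answer   <- newEmptyMVar
      _ <- forkIO $ do
             n <- takeMVar question    -- child receives
             putMVar answer (n + 1)    -- child replies
      putMVar question (41 :: Int)     -- parent sends
      print =<< takeMVar answer        -- parent receives 42

A session type system for such a language tracks, in the type of each endpoint, the sequence of sends and receives a well-typed program may perform on it.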
This chapter presents the basic concepts of session types, using the pi calculus as a core concurrent programming language for which a type system is defined. It assumes some familiarity with the pi calculus and the concepts of operational semantics and type systems. References to background reading are included at the end of the chapter.
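A standard first example (ours, not necessarily the chapter's): the session type of a server endpoint that receives an integer, sends back a boolean, and stops, together with its dual, the client's view of the same channel:

\[
S \;=\; \mathord{?}\mathsf{Int}.\mathord{!}\mathsf{Bool}.\mathsf{end}
\qquad\qquad
\overline{S} \;=\; \mathord{!}\mathsf{Int}.\mathord{?}\mathsf{Bool}.\mathsf{end}
\]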
Many type systems include the concept of subtyping, allowing a value of one type (the subtype) to be used as if it were a value of another type (the supertype). The aim is to allow greater flexibility in programming, while maintaining safety. In this chapter, we see how subtyping can be included in our system of session types. We build on the language in Chapter 4, using replicated input, rather than recursive process definitions, to express repetitive behaviour.
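In one common channel-oriented convention for session subtyping, in the style of Gay and Hole (stated here as an assumption, since conventions in the literature differ in orientation), input types are covariant and output types contravariant in the message type, and both are covariant in the continuation:

\[
\frac{T \leq T' \qquad S \leq S'}{\mathord{?}T.S \;\leq\; \mathord{?}T'.S'}
\qquad\qquad
\frac{T' \leq T \qquad S \leq S'}{\mathord{!}T.S \;\leq\; \mathord{!}T'.S'}
\]

Intuitively, a channel may safely be assumed to receive more (or send less) precise values than it actually does.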