Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
This collection explores the relevance of global trade law for data, big data and cross-border data flows. Contributing authors from different disciplines, including law, economics and political science, analyze developments at the World Trade Organization and in preferential trade venues by asking what future-oriented models for data governance are available and viable in the area of trade law and policy. The collection paints a broad picture of the interaction between digital technologies and trade regulation, and provides in-depth analyses of issues critical to the data-driven economy, such as privacy and AI, as well as different countries' perspectives. This title is also available as Open Access on Cambridge Core.
Common Law, Civil Law, and Colonial Law builds upon the legal historian F.W. Maitland's famous observation that history involves comparison, and that those who ignore every system but their own 'hardly came in sight of the idea of legal history'. The extensive introduction addresses the intellectual challenges posed by comparative approaches to legal history. This is followed by twelve essays derived from papers delivered at the 24th British Legal History Conference. These essays explore patterns in legal norms, processes, and practice across an exceptionally broad chronological and geographical range. Carefully selected to provide a network of interconnections, they contribute to a better understanding of legal history by combining depth of analysis with historical contextualization. This title is also available as Open Access on Cambridge Core.
Offering an innovative perspective on early modern debates concerning embodiment, Alanna Skuse examines diverse kinds of surgical alteration, from mastectomy to castration, and amputation to facial reconstruction. Body-altering surgeries had profound socio-economic and philosophical consequences. They reached beyond the physical self, and prompted early modern authors to develop searching questions about the nature of body integrity and its relationship to the soul: was the body a part of one's identity, or a mere 'prison' for the mind? How was the body connected to personal morality? What happened to the altered body after death? Drawing on a wide variety of texts including medical treatises, plays, poems, newspaper reports and travel writings, this volume argues that the answers to these questions were flexible, divergent and often surprising, and helped to shape early modern thought on philosophy, literature, and the natural sciences. This title is also available as Open Access on Cambridge Core.
Knowing Women is a study of same-sex desire in West Africa, which explores the lives and friendships of working-class women in southern Ghana who are intimately involved with each other. Based on in-depth research of the life histories of women in the region, Serena O. Dankwa highlights the vibrancy of everyday same-sex intimacies that have not been captured in a globally pervasive language of sexual identity. Paying close attention to the women's practices of self-reference, Dankwa refers to them as 'knowing women' in a way that both distinguishes them from, and relates them to, categories such as lesbian or supi, a Ghanaian term for female friend. In doing so, this study is not only a significant contribution to the field of global queer studies, in which both women and Africa have been underrepresented, but also a starting point from which to further theorize the relation between gender, kinship, and sexuality that is key to queer, feminist, and postcolonial theories. This title is also available as Open Access on Cambridge Core.
Institutions are failing in many areas of contemporary politics, not least of which concerns climate change. However, remedying such problems is not straightforward. Pursuing institutional improvement is an intensely political process, playing out over extended timeframes, and intricately tied to existing setups. Such activities are open-ended, and outcomes are often provisional and indeterminate. The question of institutional improvement, therefore, centers on understanding how institutions are (re)made within complex settings. This Element develops an original analytical foundation for studying institutional remaking and its political dynamics. It explains how institutional remaking can be observed and provides a typology comprising five areas of institutional production involved in institutional remaking (Novelty, Uptake, Dismantling, Stability, Interplay). This opens up a new research agenda on the politics of responding to institutional breakdown, and brings sustainability scholarship into closer dialogue with scholarship on processes of institutional change and development. Also available as Open Access on Cambridge Core.
Universities and public research institutes play a key role in enabling the application of scientific breakthroughs and innovations in the marketplace. Many countries – developed and developing alike – have implemented national strategies to support the application or commercialization of knowledge produced by public research organizations. Universities and public research institutes have introduced practices to support these activities, for instance by including knowledge transfer to promote innovation as a core part of their mission. As a result, a vital question for policymakers is how to improve the efficiency of these knowledge transfer practices to help maximize innovation-driven growth and/or to seek practical solutions to critical societal challenges. This book aims to develop a conceptual framework for evaluating knowledge transfer practices and outcomes; to improve knowledge transfer metrics, surveys and evaluation frameworks; and to generate findings on what works and what does not, proposing related policy lessons. This book is also available as Open Access.
We live in a networked world. Online social networking platforms and the World Wide Web have changed how society thinks about connectivity. Because of the technological nature of such networks, their study has predominantly taken place within the domains of computer science and related scientific fields. But arts and humanities scholars are increasingly using the same kinds of visual and quantitative analysis to shed light on aspects of culture and society hitherto concealed. This Element contends that networks are a category of study that cuts across traditional academic barriers, uniting diverse disciplines through a shared understanding of complexity in our world. Moreover, we are at a moment in time when it is crucial that arts and humanities scholars join the critique of how large-scale network data and advanced network analysis are being harnessed for the purposes of power, surveillance, and commercial gain. This title is also available as Open Access on Cambridge Core.
This chapter is a tutorial on some of the key issues in the semantics of the first-order aspects of probabilistic programming languages for statistical modelling – languages such as Church, Anglican, Venture and WebPPL. We argue that s-finite measures and s-finite kernels provide a good semantic basis.
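For orientation, the key definitions are brief (stated here in a standard form from the measure-theoretic literature, as a reference point rather than the chapter's full development): a measure is s-finite when it is a countable sum of finite measures, and a kernel is s-finite when it is a countable sum of uniformly bounded kernels.

```latex
\mu = \sum_{n \in \mathbb{N}} \mu_n \ \text{ with each } \mu_n(X) < \infty,
\qquad
k(x, A) = \sum_{n \in \mathbb{N}} k_n(x, A) \ \text{ with } \sup_{x \in X} k_n(x, Y) < \infty .
```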
Reasoning about probabilistic programs is hard because it compounds the difficulty of classic program analysis with sometimes subtle questions of probability theory. Having precise mathematical models, or semantics, describing their behaviour is therefore particularly important. In this chapter, we review two probabilistic semantics: first, an operational semantics, which models the local, step-by-step behaviour of programs; then a denotational semantics, which describes global behaviour as an operator transforming probability distributions over memory states.
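As a toy illustration of the denotational view (a sketch only; the single-variable state and weighted-list representation are simplifying assumptions, not the chapter's formalism), a program can be read as a function from distributions over states to distributions over states:

```haskell
import qualified Data.Map.Strict as M

-- A finite distribution over states: a list of (state, probability) pairs.
type Dist s = [(s, Rational)]

-- A toy memory state: a single integer variable x.
type State = Integer

-- Denotation of the probabilistic choice  x := x + 1  [1/2]  x := 0 :
-- each input state sends half of its probability mass to each outcome.
step :: Dist State -> Dist State
step d = normalise (concat [ [(s + 1, w / 2), (0, w / 2)] | (s, w) <- d ])

-- Merge duplicate states so the result is again a well-formed distribution.
normalise :: Dist State -> Dist State
normalise = M.toList . M.fromListWith (+)

main :: IO ()
main = print (step [(7, 1)])   -- [(0,1 % 2),(8,1 % 2)]
```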
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and masking of soft errors is challenging, expensive and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning and big data analytics) can often naturally tolerate soft errors. In this chapter, we demonstrate how a programming language, Rely, enables developers to reason about and verify the quantitative reliability of an application – namely, the probability that it produces the correct result when executed on unreliable hardware. Rely leverages a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering.
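The arithmetic behind such reasoning can be conveyed by a simplified composition rule (a sketch of the intuition only, not Rely's actual analysis): along sequential execution, reliabilities compose multiplicatively,

```latex
R(s_1;\, s_2) \;\ge\; R(s_1) \cdot R(s_2) ,
```

so a straight-line computation of 1000 operations, each succeeding with probability at least 0.99999, produces the correct result with probability at least \(0.99999^{1000} \approx 0.99\).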
Church's λ-calculus has become a universally accepted model of pure functional programming, and its properties have been thoroughly scrutinised by the research community in the last 90 years. Many variations of it have been introduced for the sake of capturing programming with various forms of effects, thus going beyond pure functional programming. This chapter is meant to be a gentle introduction to a family of such calculi, namely probabilistic λ-calculi, in their two main variations: randomised λ-calculi and Bayesian λ-calculi. We focus our attention on the operational semantics, expressive power and termination properties of randomised λ-calculi, only giving some hints and references about denotational models and Bayesian λ-calculi.
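To convey the flavour of randomised λ-calculi, the usual starting point is to extend the λ-calculus with a fair binary choice operator ⊕, whose reduction is probabilistic (a schematic presentation; notations differ across the literature):

```latex
M \oplus N \;\xrightarrow{1/2}\; M
\qquad\qquad
M \oplus N \;\xrightarrow{1/2}\; N
```

Evaluation then yields a subdistribution over values rather than a single value: for instance, \(\Omega \oplus (\lambda x.\, x)\), where \(\Omega\) is the usual diverging term, terminates with probability exactly 1/2, which is why termination itself becomes a quantitative property in this setting.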
The quantitative analysis of probabilistic programs answers queries about the expected values of program variables and of expressions over them, as well as bounds on the probabilities of assertions. In this chapter, we will present the use of concentration of measure inequalities to reason about such bounds. First, we will briefly present and motivate standard concentration of measure inequalities. Next, we survey approaches to reasoning about quantitative properties using concentration of measure inequalities, illustrating these on numerous motivating examples. Finally, we discuss currently open challenges in this area for future work.
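Two textbook instances of the inequalities in question are Markov's inequality, for a non-negative random variable \(X\) and \(a > 0\), and Hoeffding's inequality, for independent \(X_i \in [a_i, b_i]\) with \(S = \sum_i X_i\):

```latex
\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a} ,
\qquad
\Pr\bigl[S - \mathbb{E}[S] \ge t\bigr] \le \exp\!\left( \frac{-2 t^2}{\sum_i (b_i - a_i)^2} \right) .
```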
Property-based random testing à la QuickCheck requires building efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. This chapter introduces a probabilistic domain-specific language in which generators are conveniently expressed by decorating predicates with lightweight annotations to control both the distribution of generated values and the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read and maintain. We give Luck a probabilistic formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies, showing that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size compared to handwritten generators.
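The tension Luck resolves can be seen in plain QuickCheck (a sketch in ordinary QuickCheck, not Luck syntax; the names `naiveSorted` and `builtSorted` are illustrative): filtering arbitrary data through a predicate wastes most generation effort, while a purpose-built generator duplicates the predicate's logic by hand.

```haskell
import Test.QuickCheck
import Data.List (sort)

-- The declarative predicate we want test data to satisfy.
isSorted :: [Int] -> Bool
isSorted xs = xs == sort xs

-- Generate-and-filter: correct but wasteful; the discard rate grows
-- quickly with list length, so long sorted lists are almost never tried.
naiveSorted :: Gen [Int]
naiveSorted = arbitrary `suchThat` isSorted

-- Purpose-built generator: every output is sorted by construction,
-- but the sortedness logic is now re-encoded by hand.
builtSorted :: Gen [Int]
builtSorted = fmap (scanl1 (+)) (listOf (choose (0, 10)))

main :: IO ()
main = quickCheck (forAll builtSorted isSorted)
```

Luck's annotations aim to keep the single declarative predicate while recovering the efficiency of the handwritten generator.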
In this chapter, we explore how (Type-2) computable distributions can be used to give both (algorithmic) sampling and distributional semantics to probabilistic programs with continuous distributions. To this end, we sketch an encoding of computable distributions in a fragment of Haskell and show how topological domains can be used to model the resulting PCF-like language. We also examine the implications that a (Type-2) computable semantics has for implementing conditioning. We hope to draw out the connection between an approach based on (Type-2) computability and ordinary programming throughout the chapter as well as highlight the relation with constructive mathematics (via realizability).
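One common sampling-style representation, close in spirit to such encodings (the details here are illustrative assumptions, not the chapter's exact fragment), treats a distribution as a function consuming a stream of fair random bits:

```haskell
-- A sampler consumes a stream of fair random bits and returns a value
-- together with the unconsumed remainder of the stream.
newtype Samp a = Samp { runSamp :: [Bool] -> (a, [Bool]) }

instance Functor Samp where
  fmap f (Samp g) = Samp (\bs -> let (x, bs') = g bs in (f x, bs'))

instance Applicative Samp where
  pure x = Samp (\bs -> (x, bs))
  Samp f <*> Samp g = Samp (\bs ->
    let (h, bs')  = f bs
        (x, bs'') = g bs'
    in  (h x, bs''))

instance Monad Samp where
  Samp g >>= k = Samp (\bs -> let (x, bs') = g bs in runSamp (k x) bs')

-- One fair coin flip (assumes the bit stream is infinite).
coin :: Samp Bool
coin = Samp (\(b : bs) -> (b, bs))

-- A dyadic approximation of Uniform(0,1), reading n bits after the
-- binary point; finer approximations consume more of the stream.
uniformApprox :: Int -> Samp Double
uniformApprox n = do
  bits <- sequence (replicate n coin)
  pure (sum [ 0.5 ^ i | (i, True) <- zip [(1 :: Int) ..] bits ])
```

Conditioning is exactly where this picture becomes delicate: sampling and integration admit computable treatments, but exact conditioning on measure-zero events is known to be non-computable in general, which motivates the chapter's discussion.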
This chapter is concerned with analysing the expected runtime of probabilistic programs by exploiting program verification techniques. We introduce a weakest pre-conditioning framework à la Dijkstra that makes it possible to determine the expected runtime in a compositional manner. Like weakest pre-conditions, it is a reasoning framework at the syntax level of programs. Applications of the weakest pre-conditioning framework include determining the expected runtime of randomised algorithms, as well as determining whether a program is positively almost-surely terminating, i.e., whether the expected number of computation steps until termination is finite for every possible input. For Bayesian networks, a restricted class of probabilistic programs, we show that the expected runtime analysis can be fully automated. In this way, the simulation time under rejection sampling can be determined. This is particularly useful for ill-conditioned inference queries.
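A few representative transformer rules, in the style of expected-runtime calculi (a sketch; the exact rules and cost model vary across presentations), where \(t\) maps each state to the expected remaining runtime:

```latex
\begin{aligned}
\mathrm{ert}(\texttt{skip},\, t) &= 1 + t \\
\mathrm{ert}(x := E,\, t) &= 1 + t[x/E] \\
\mathrm{ert}(C_1;\, C_2,\, t) &= \mathrm{ert}(C_1,\, \mathrm{ert}(C_2,\, t)) \\
\mathrm{ert}(\{C_1\}\ [p]\ \{C_2\},\, t) &= 1 + p \cdot \mathrm{ert}(C_1,\, t) + (1 - p) \cdot \mathrm{ert}(C_2,\, t)
\end{aligned}
```

Under this reading, positive almost-sure termination amounts to \(\mathrm{ert}(C, 0)\) being finite in every initial state.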
Monads are a popular feature of the programming language Haskell because they can model many different notions of computation in a uniform and purely functional way. Our particular interest here is the probability monad, which can be -- and has been -- used to synthesise models for probabilistic programming. Quantitative Information Flow, or QIF, arises when security is combined with probability, and concerns the measurement of the amount of information that 'leaks' from a probabilistic program's state to a (usually) hostile observer: that is, not 'whether' leaks occur but rather 'how much?' Recently it has been shown that QIF can be seen monadically, a 'lifting' of the probability monad so that programs become functions from distributions to distributions of distributions: the codomain is 'hyper distributions'. Haskell's support for monads therefore suggests a synthesis of an executable model for QIF. Here, we provide the first systematic and thorough account of doing that: using distributions of distributions to synthesise a model for Quantitative Information Flow in terms of monads in Haskell.
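A minimal executable sketch of these ingredients (using a weighted-list representation; the name `observe` is ours for illustration, not the chapter's API):

```haskell
-- A finite discrete distribution, as a weighted list of outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- A hyper-distribution: a distribution over (posterior) distributions.
type Hyper a = Dist (Dist a)

-- Observing a Boolean property of the secret: the prior splits into at
-- most two posteriors, each weighted by the probability of seeing that
-- observation.
observe :: (a -> Bool) -> Dist a -> Hyper a
observe f (Dist xs) =
  Dist [ (Dist [ (x, p / w) | (x, p) <- grp ], w)
       | grp <- [ filter (f . fst) xs, filter (not . f . fst) xs ]
       , let w = sum (map snd grp)
       , w > 0 ]
```

The outer distribution in `Hyper` records how likely each observation is; the inner ones are the adversary's posteriors, from which leakage measures can then be computed.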
Tabular is a domain-specific language for expressing probabilistic models of relational data. Tabular has several features that set it apart from other probabilistic programming languages, including: (1) programs and data are stored as spreadsheet tables; (2) programs consist of probabilistic annotations on the relational schema of the data; and (3) inference returns estimates of missing values and latent columns, as well as parameters. Our primary implementation is for Microsoft Excel and relies on Infer.NET for inference. Still, the language can be called independently of Excel and can target alternative inference engines.