This chapter introduces higher-order quantification and its role in logic programming. It presents the syntax and proof theory of higher-order quantifiers and explores the concept of near-focused proofs in the construction of proof systems for higher-order quantification. The chapter builds on the proof-theoretic foundations established in earlier chapters to extend the logic programming paradigms to the higher-order setting.
This chapter considers methods for both regression and classification based on the Gaussian process, a stochastic process with a Gaussian distribution whose mean vector and covariance matrix can be obtained from the labeled samples in the training set. The resulting Gaussian process serves as a nonlinear regression function that fits the given dataset. This function can be treated as the probability of a data sample's class identity and used for classification, as shown before. The Gaussian process approach also has two advantages: first, the certainty (or confidence) of the regression or classification result can be quantitatively measured; second, a proper tradeoff between overfitting and underfitting can be made by adjusting a parameter of the covariance of the Gaussian process model.
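The two advantages mentioned above can be seen in a minimal NumPy sketch of Gaussian process regression; the RBF kernel choice, the `length_scale` parameter, and all function names are illustrative, not the chapter's own code:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance; length_scale is the covariance
    parameter that controls the overfitting/underfitting tradeoff."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, length_scale=1.0, noise=1e-2):
    """Posterior mean and per-point variance of a GP regressor at x_test."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    alpha = np.linalg.solve(K, y_train)            # K^-1 y
    mean = K_s @ alpha                             # predictive mean
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)                      # variance quantifies confidence

x_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = np.sin(x_train)
mean, var = gp_predict(x_train, y_train, np.array([0.5]))
```

The returned variance is the quantitative confidence measure: it is small near training points and grows away from them, while increasing `length_scale` smooths the fit (toward underfitting) and decreasing it tightens it (toward overfitting).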
This chapter is solely dedicated to reinforcement learning (RL), one of the three main learning paradigms covered in the book (together with regression and classification). The goal of RL is for an agent to learn from and respond to its environment, modeled as a Markov decision process (MDP), by following a policy that takes the best action at each state of the MDP so as to receive the maximum total accumulated reward. The ultimate goal is to find the optimal policy, in terms of the best action to take at each state. Unlike the optimization problems previously considered for maximizing (or minimizing) certain objective functions, RL achieves its goal by the general method of dynamic programming (while linear and quadratic programming are for constrained optimization), which solves a complex problem by breaking it up into a set of subproblems solved recursively. Specifically, the main method for RL is the Q-learning algorithm, which finds the optimal policy by selecting the best action based on the expected values of the total reward at all states and all actions at each state. Toward the end of the chapter, various more advanced versions of RL are briefly discussed based on previously learned methods such as neural networks and deep learning.
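The Q-learning idea sketched above can be illustrated in a few lines; the three-state chain MDP, the hyperparameter values, and all names below are hypothetical, chosen only to make the sketch self-contained:

```python
import numpy as np

def q_learning(step, n_states, n_actions, episodes=2000,
               alpha=0.1, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning: nudge Q(s, a) toward the Bellman target
    r + gamma * max_a' Q(s', a'); the learned policy is argmax_a Q(s, a)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore with probability eps, else act greedily
            if rng.random() < eps:
                a = int(rng.integers(n_actions))
            else:
                a = int(Q[s].argmax())
            s2, r, done = step(s, a)
            target = r if done else r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

# Hypothetical 3-state chain MDP: action 1 moves right and earns reward 1
# on reaching state 2; action 0 gives up (episode ends with reward 0).
def step(s, a):
    if a == 1:
        return (s + 1, 1.0, True) if s + 1 == 2 else (s + 1, 0.0, False)
    return s, 0.0, True

Q = q_learning(step, n_states=3, n_actions=2)
policy = Q.argmax(axis=1)  # best action at each non-terminal state
```

The exploration rate `eps` is set high here so the toy chain is explored quickly; the update shows the dynamic-programming character of RL, with each state's value built recursively from its successor's.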
The goal of this chapter is to prepare for the later discussion of various artificial neural network (ANN) learning algorithms by introducing some basic concepts in neural networks, together with two biologically inspired examples, the Hebbian and Hopfield networks, to illustrate how an ANN based on a simple learning rule can achieve meaningful results, although these networks are not widely used in machine learning practice. Specifically, the Hebbian learning network mimics the associative nature of the brain as a simple model of associative memory, and the Hopfield network further shows how a pattern can be stored and then recalled from a noisy and incomplete copy of itself, a function commonly demonstrated by the brain.
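The store-and-recall behavior described above can be sketched in a few lines of NumPy; the six-unit patterns and the function names are illustrative, not taken from the chapter:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: w_ij grows when units i and j are active together.
    patterns: shape (n_patterns, n_units), entries +1/-1."""
    p = np.asarray(patterns, dtype=float)
    W = p.T @ p / p.shape[1]
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    """Iterate the threshold update until the state settles on a stored pattern."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1, 1, 1, -1, -1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last bit flipped
recovered = recall(W, noisy)             # settles back on stored[0]
```

Each stored pattern is a fixed point of the update, and a probe corrupted by a small amount of noise is pulled back to the nearest stored pattern, which is exactly the associative-memory function the chapter describes.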
Applications of cryptography are plenty in everyday life. This guidebook is about the security analysis or 'cryptanalysis' of the basic building blocks on which these applications rely. Rather than covering a variety of techniques at an introductory level, this book provides a comprehensive and in-depth treatment of linear cryptanalysis. The subject is introduced from a mathematical point of view, providing an overview of the most influential papers on linear cryptanalysis and placing them in a consistent framework based on linear algebra. A large number of examples and exercises are included, drawing upon practice as well as theory. The book is accessible to students with no prior knowledge of cryptography. It covers linear cryptanalysis starting from the basics, including linear approximations and trails, correlation matrices, automatic search, key-recovery techniques, up to advanced topics, such as multiple and multidimensional linear cryptanalysis, zero-correlation approximations, and the geometric approach.
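To give a flavor of two of the basics listed above, linear approximations and correlation matrices, the following sketch computes the correlation matrix of a toy 3-bit S-box; the S-box and all names are hypothetical, and real ciphers use larger S-boxes, but the definition of correlation is the same:

```python
# Hypothetical 3-bit S-box, used only for illustration.
SBOX = [0, 3, 5, 6, 7, 4, 2, 1]

def parity(x):
    """Parity of the bits of x, i.e. the GF(2) inner product after masking."""
    return bin(x).count("1") & 1

def correlation(u, v, sbox):
    """Correlation of the linear approximation u.x = v.S(x):
    c = 2 * Pr[u.x == v.S(x)] - 1, one entry of the correlation matrix."""
    n = len(sbox)
    agree = sum(parity(u & x) == parity(v & sbox[x]) for x in range(n))
    return 2 * agree / n - 1

# Full 8x8 correlation matrix: rows indexed by output mask v, columns by input mask u.
C = [[correlation(u, v, SBOX) for u in range(8)] for v in range(8)]
```

The trivial approximation (u = v = 0) has correlation 1, and by Parseval's relation the squared correlations in each row sum to 1, a basic fact exploited when searching for strong linear trails.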
We introduce Displayed Type Theory (dTT), a multi-modal homotopy type theory with discrete and simplicial modes. In the intended semantics, the discrete mode is interpreted by a model for an arbitrary $\infty$-topos, while the simplicial mode is interpreted by Reedy fibrant augmented semi-simplicial diagrams in that model. This simplicial structure is represented inside the theory by a primitive notion of display or dependency, guarded by modalities, yielding a partially-internal form of unary parametricity. Using the display primitive, we then give a coinductive definition, at the simplicial mode, of a type of semi-simplicial types. Roughly speaking, a semi-simplicial type consists of a type together with, for each element of that type, a displayed semi-simplicial type over that element. This mimics how simplices can be generated geometrically through repeated cones, and is made possible by the display primitive at the simplicial mode. The discrete part of the type of semi-simplicial types then yields the usual infinite indexed definition of semi-simplicial types, both semantically and syntactically. Thus, dTT enables working with semi-simplicial types in full semantic generality.
We present a critical survey of the consistency of uncertainty quantification used in deep learning, highlighting partial uncertainty coverage and many inconsistencies. We then provide a comprehensive and statistically consistent framework for uncertainty quantification in deep learning, targeting regression problems, that accounts for all major sources of uncertainty: input data, training and testing data, neural network weights, and machine-learning model imperfections. We systematically quantify each source by applying Bayes’ theorem and conditional probability densities and introduce a fast, practical implementation method. We demonstrate its effectiveness on a simple regression problem and a real-world application: predicting cloud autoconversion rates using a neural network trained on aircraft measurements from the Azores and guided by a two-moment bin model of the stochastic collection equation. In this application, uncertainty from the training and testing data dominates, followed by input data, neural network model, and weight variability. Finally, we highlight the practical advantages of this methodology, showing that explicitly modeling training data uncertainty improves robustness to new inputs that fall outside the training data and enhances model reliability in real-world scenarios.
In the 1980s, Erdős and Sós initiated the study of Turán problems with a uniformity condition on the distribution of edges: the uniform Turán density of a hypergraph $H$ is the infimum over all $d$ for which any sufficiently large hypergraph with the property that all its linear-size subhypergraphs have density at least $d$ contains $H$. In particular, they asked to determine the uniform Turán densities of $K_4^{(3)-}$ and $K_4^{(3)}$. After more than 30 years, the former was solved in [Israel J. Math. 211 (2016), 349–366] and [J. Eur. Math. Soc. 20 (2018), 1139–1159], while the latter still remains open. To date, constructions of $3$-uniform hypergraphs are known only for uniform Turán densities equal to $0$, $1/27$, $4/27$, and $1/4$. We extend this list by a fifth value: we prove an easy-to-verify sufficient condition for the uniform Turán density to be equal to $8/27$ and identify hypergraphs satisfying this condition.
Blockchain technology has attracted attention from public sector agencies, mainly for its perceived potential to improve transparency, data integrity, and administrative processes. However, its concrete value and applicability within government settings remain contested, and real-world adoption has been limited and uneven. This raises questions regarding the conditions that promote or impede adoption at the institutional level. Fuzzy-set qualitative comparative analysis is employed in this research to explore how the combined effects of national-level regulatory clarity, financial provision, digital readiness, and ecosystem engagement shape patterns of blockchain adoption in the European public sector. Rather than identifying any single factor as decisive, our findings reveal a plurality of institutional paths leading to high adoption intensity, with regulatory certainty and European Union funding appearing most frequently on high-consistency paths. In contrast, digital readiness indicators and national research and development budgets are substitutable, challenging resource-based perceptions of technology adoption and supporting a configurational understanding that accounts for institutional interdependence and contextuality. We argue that policy strategies should not aim for overall readiness but should instead leverage key institutional strengths relative to local conditions and public value objectives.
Although design research is a relatively recent academic field, it has developed several influential typologies over the past decades. This study conducts a systematic review to evaluate how design research approaches relate to the design process, with a specific focus on two overlooked dimensions: the point of research integration in design and the research attitude guiding the inquiry. Drawing on foundational models by Frayling, Cross and Buchanan, the paper proposes a conceptual framework that cross-analyzes research typologies with these two dimensions. Seventy peer-reviewed studies in architecture and related disciplines were identified and analyzed following the PRISMA guidelines and the Critical Appraisal Skills Programme (CASP) checklist. The findings reveal four distinct clusters: (1) research about design – basic research – design epistemology, (2) research through design – applied research – design praxeology, (3) research for design – clinical research – design phenomenology and (4) a fourth category, research through design (II) – applied research – design epistemology. Moreover, five research attitudes were identified across the studies: practitioner, practitioner with user, practitioner with AI, researcher and user. These findings provide a more nuanced understanding of how design knowledge is produced in architectural research.
The core topics at the intersection of human-computer interaction (HCI) and US law -- privacy, accessibility, telecommunications, intellectual property, artificial intelligence (AI), dark patterns, human subjects research, and voting -- can be hard to understand without a deep foundation in both law and computing. Every member of the author team of this unique book brings expertise in both law and HCI to provide an in-depth yet understandable treatment of each topic area for professionals, researchers, and graduate students in computing and/or law. Two introductory chapters explaining the core concepts of HCI (for readers with a legal background) and US law (for readers with an HCI background) are followed by in-depth discussions of each topic.
In recent years, the manufacturing sector has seen an influx of artificial intelligence applications, seeking to harness their capabilities to improve productivity. However, manufacturing organizations have a limited understanding of the risks posed by the use of artificial intelligence, especially those related to trust, responsibility, and ethics. While significant effort has been put into developing various general frameworks and definitions to capture these risks, manufacturing and supply chain practitioners face difficulties in implementing them and understanding their impact. These issues can have a significant effect on manufacturing companies, not only at an organizational level but also on their employees, clients, and suppliers. This paper aims to increase understanding of trustworthy, responsible, and ethical artificial intelligence challenges as they apply to manufacturing and supply chains. We first conduct a systematic mapping study on concepts relevant to trust, responsibility and ethics and their interrelationships. We then use a broadened view of a machine learning lifecycle as a basis to understand how risks and challenges related to these concepts emanate from each phase in the lifecycle. We follow a case-study-driven approach, providing several illustrative examples that focus on how these challenges manifest themselves in actual manufacturing practice. Finally, we propose a series of research questions as a roadmap for future research in trustworthy, responsible and ethical artificial intelligence applications in manufacturing, to ensure that the envisioned economic and societal benefits are delivered safely and responsibly.
In many contexts, an individual’s beliefs and behavior are affected by the choices of their social or geographic neighbors. This influence results in local correlation in people’s actions, which in turn affects how information and behaviors spread. Previously developed frameworks capture local social influence using network games, but discard local correlation in players’ strategies. This paper develops a network games framework that allows for local correlation in players’ strategies by incorporating a richer partial information structure than previous models. Using this framework we also examine the dependence of equilibrium outcomes on network clustering—the probability that two individuals with a mutual neighbor are connected to each other. We find that clustering reduces the number of players needed to provide a public good and allows for market sharing in technology standards competitions.