Chapter 11 delves into the manipulation of quantum resources, a core aspect of quantum resource theories that concerns how quantum states are transformed and converted within a given resource-theoretic framework. The chapter introduces the generalized asymptotic equipartition property and the generalized quantum Stein’s lemma, both foundational to understanding the asymptotic behavior of quantum resources. These concepts pave the way for discussing the uniqueness of the Umegaki relative entropy in quantifying the efficiency of resource-conversion processes. Furthermore, the text explores asymptotic interconversions, detailing the conditions and limits for converting one resource into another when many copies of a quantum state are available. This analysis is pivotal for establishing reversible exchange rates between different resources in the asymptotic limit. By providing a comprehensive overview of resource-manipulation strategies, the chapter equips readers with the theoretical tools needed for advanced study and research in quantum resource theories, emphasizing both the single-shot and asymptotic domains.
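For orientation, the Umegaki relative entropy singled out here is the standard quantum relative entropy,
$$D(\rho\|\sigma) = \mathrm{Tr}\left[\rho\left(\log\rho - \log\sigma\right)\right],$$
defined whenever the support of $\rho$ lies within that of $\sigma$. In the reversible asymptotic setting the chapter describes, the optimal exchange rate for converting $\rho$ into $\sigma$ is then governed by the ratio of the regularized relative-entropy resource measures of the two states (a standard formulation; the chapter’s precise statement may differ).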
This chapter describes interviews the authors conducted with federal agency officials about their use of automated legal guidance. It offers insights gained from these interviews, including the different models that agencies use to develop such guidance, officials’ views on its usability, the ways that agencies evaluate it, and agencies’ views on the successes and challenges that such guidance faces.
A proof of the Topological Representation Theorem, including an introduction to shelling, a topological interpretation of oriented-matroid concepts, and an application to counting topes, is provided in this chapter.
Chapter 10 delves into the quantification of quantum resources, an essential aspect of quantum resource theories that determines the value of quantum states for specific applications. It begins by defining resource measures and investigating their fundamental properties such as monotonicity under free operations and convexity. The chapter discusses distance-based resource measures, which quantify how far a given quantum state is from the set of free states. Such measures often utilize divergences and metrics explored in earlier chapters. Techniques to compute the relative entropy of a resource are also covered.
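As a concrete instance of a distance-based measure, the relative entropy of a resource mentioned above is standardly defined as the divergence from the nearest free state,
$$E_R(\rho) = \min_{\sigma\in\mathcal{F}} D(\rho\|\sigma),$$
where $\mathcal{F}$ is the set of free states and $D$ is the Umegaki relative entropy.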
To refine resource measures, the chapter introduces the concept of smoothing, which considers small deviations from the ideal state to make the measures more robust against perturbations. This approach is crucial in single-shot scenarios where finite resources are available. Furthermore, the chapter examines resource monotones and support functions, offering a comprehensive framework for the theoretical and practical assessment of quantum resources.
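Schematically, smoothing a measure $E$ replaces its value at $\rho$ by an optimization over a small ball of nearby states, for instance
$$E^{\varepsilon}(\rho) = \min_{\tilde\rho:\,\frac{1}{2}\|\tilde\rho-\rho\|_1\le\varepsilon} E(\tilde\rho),$$
where whether one minimizes or maximizes over the ball, and which metric defines it, depends on the measure at hand (this is a generic template, not the chapter’s specific definition).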
Chapter 7 discusses quantum conditional entropy, extending the concept of conditional majorization and introducing the notion of negative quantum conditional entropy. The chapter starts with the basic definition of conditional entropy, exploring its key properties like monotonicity and additivity. It further delves into the concepts of conditional min- and max-entropies, emphasizing their roles in quantifying uncertainty in quantum states and their operational significance in quantum information theory.
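Concretely, for a bipartite state $\rho_{AB}$ the quantum conditional entropy in question is
$$H(A|B)_\rho = H(AB)_\rho - H(B)_\rho,$$
with $H(\rho) = -\mathrm{Tr}[\rho\log\rho]$ the von Neumann entropy; the conditional min- and max-entropies are one-shot counterparts of this quantity.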
The text presents conditional entropy as a measure sensitive to the effects of entanglement, showing that negative conditional entropy is a distinctive feature of quantum systems, contrasting with the classical domain where entropy values are nonnegative. This negativity is particularly pronounced in the context of maximally entangled states and is connected to the fundamental differences between classical and quantum information processing. Moreover, the chapter includes theorems and exercises to solidify understanding, like the invariance of conditional entropy under local isometric channels and its reduction to entropy for product states. It concludes by underscoring the inevitability of negative conditional entropy in quantum systems, a topic of both theoretical and practical importance in the quantum domain.
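The standard example of this negativity: for the maximally entangled state $|\Phi\rangle_{AB} = \frac{1}{\sqrt{d}}\sum_{i=1}^{d}|ii\rangle$, the joint state is pure, so $H(AB)=0$ while $H(B)=\log d$, giving
$$H(A|B)_\Phi = -\log d < 0,$$
a value with no classical counterpart.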
Chapter 8 explores the asymptotic regime of quantum information processing, beginning with quantum typicality, which illustrates the convergence of quantum states toward a typical form as the number of copies increases. This leads to the asymptotic equipartition property (AEP), which states that, for a large number of copies, the probability mass concentrates on a typical set whose sequences are nearly equiprobable. The method of types is introduced next, a tool from classical information theory that classifies sequences according to their statistical properties. This is crucial for understanding the behavior of large quantum systems and has implications for quantum data compression. Advancing to quantum hypothesis testing, the chapter outlines efficient strategies for distinguishing between two quantum states through repeated measurements. Central to this is the quantum Stein’s lemma, which asserts that the error probability of hypothesis testing declines exponentially as the number of quantum systems sampled increases. The chapter highlights the deep interplay between typicality, statistical methods, and hypothesis testing, laying the groundwork for the asymptotic interconversion of quantum resources.
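In the usual formulation, the quantum Stein’s lemma states that the optimal type-II error $\beta_n$ for discriminating $\rho^{\otimes n}$ from $\sigma^{\otimes n}$, with the type-I error held below a constant $\varepsilon\in(0,1)$, satisfies
$$\lim_{n\to\infty} -\frac{1}{n}\log\beta_n(\varepsilon) = D(\rho\|\sigma),$$
with $D$ the Umegaki relative entropy.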
Chapter 16, centered on the resource theory of nonuniformity, serves as an essential precursor to discussions on thermodynamics as a resource theory. It presents nonuniformity as a fundamental quantum resource, using it as a toy model to prepare for more complex thermodynamic concepts. In this model, free states are considered to be maximally mixed states, analogous to Gibbs states with a trivial Hamiltonian, providing a simplified context for exploring quantum thermodynamics. The chapter carefully outlines how nonuniformity is quantified, offering closed formulas for the conversion distance, nonuniformity cost, and distillable nonuniformity. These measures are explored both in the single-shot and the asymptotic domains. The availability of closed formulas makes this model particularly insightful, demonstrating clear, quantifiable relationships between various measures of nonuniformity.
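One such closed formula is immediate: with the maximally mixed state $u_d = I/d$ as the sole free state, the relative entropy of nonuniformity evaluates to
$$D(\rho\|u_d) = \log d - H(\rho),$$
where $H$ is the von Neumann entropy, illustrating the kind of explicit expression the chapter derives.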
This paper is an extended version of Bílková et al. (2023b, Logic, Language, Information, and Computation. WoLLIC 2023, Lecture Notes in Computer Science, vol. 13923, Cham, Springer Nature Switzerland, 101–117). We discuss two-layered logics formalising reasoning with probabilities and belief functions that combine the Łukasiewicz $[0,1]$-valued logic with the Baaz $\triangle$ operator and the Belnap–Dunn logic. We consider two probabilistic logics – $\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}^2}_\triangle$ (introduced by Bílková et al. 2023d, Annals of Pure and Applied Logic, 103338) and $\mathbf {4}\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}_\triangle }$ (from Bílková et al. 2023b) – that present two perspectives on probabilities in the Belnap–Dunn logic. In $\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}^2}_\triangle$, every event $\phi$ has independent positive and negative measures that denote the likelihoods of $\phi$ and $\neg \phi$, respectively. In $\mathbf {4}\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}_\triangle }$, the measures of events are treated as partitions of the sample space into four exhaustive and mutually exclusive parts corresponding to the pure belief, pure disbelief, conflict, and uncertainty of an agent in $\phi$. In addition, we discuss two logics for paraconsistent reasoning with belief and plausibility functions from Bílková et al. (2023d) – $\mathsf {Bel}^{{\mathsf {\unicode {x0141}}}^2}_\triangle$ and $\mathsf {Bel}^{\mathsf {N}{\mathsf {\unicode {x0141}}}}$. Both logics equip events with two measures (positive and negative), their main difference being that in $\mathsf {Bel}^{{\mathsf {\unicode {x0141}}}^2}_\triangle$, the negative measure of $\phi$ is defined as the belief in $\neg \phi$, while in $\mathsf {Bel}^{\mathsf {N}{\mathsf {\unicode {x0141}}}}$, it is treated independently as the plausibility of $\neg \phi$. We provide a sound and complete Hilbert-style axiomatisation of $\mathbf {4}\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}_\triangle }$ and establish faithful translations between it and $\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}^2}_\triangle$. We also show that the validity problem in all these logics is $\mathsf {coNP}$-complete.
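Since the four parts are exhaustive and mutually exclusive, the measures assigned to an event $\phi$ in $\mathbf {4}\mathsf {Pr}^{{\mathsf {\unicode {x0141}}}_\triangle }$ can be read as a normalized four-tuple,
$$b_\phi + d_\phi + c_\phi + u_\phi = 1,$$
with $b_\phi$, $d_\phi$, $c_\phi$, $u_\phi$ standing for pure belief, pure disbelief, conflict, and uncertainty (our notation for illustration, not necessarily the paper’s).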
The newly introduced discipline of Population-Based Structural Health Monitoring (PBSHM) has been developed to circumvent the issue of data scarcity in “classical” SHM. PBSHM does this by using data from across an entire population in order to improve diagnostics for a single data-poor structure. The improvement of inferences across populations uses the machine-learning technology of transfer learning. In order that transfer makes matters better, rather than worse, PBSHM assesses the similarity of structures and only transfers if a threshold of similarity is reached. The similarity measures are implemented by embedding structures as models, Irreducible-Element (IE) models, in a graph space. The problem with this approach is that the construction of IE models is subjective and can suffer from author bias, which may induce dissimilarity where there is none. This paper proposes that IE models be transformed to a canonical form through reduction rules, in which possible sources of ambiguity have been removed. Furthermore, so that other variations, outside the control of the modeller, are correctly dealt with, the paper introduces the idea of a reality model, which encodes details of the environment and operation of the structure. Finally, the effects of the canonical form on similarity assessments are investigated via a numerical population study. A final novelty of the paper is the implementation of a neural-network-based similarity measure, which learns reduction rules from data; the results with the new graph-matching network (GMN) are compared with a previous approach based on the Jaccard index, from pure graph theory.
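For context, the Jaccard index used as the baseline is the standard set-overlap measure; applied to two IE models represented by element sets $S_1$ and $S_2$ (which sets to compare is a modelling choice), it reads
$$J(S_1, S_2) = \frac{|S_1\cap S_2|}{|S_1\cup S_2|} \in [0,1].$$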
We show that the principal types of the closed terms of the affine fragment of λ-calculus, with respect to a simple type discipline, are structurally isomorphic to their interpretations, as partial involutions, in a natural Geometry of Interaction model à la Abramsky. This makes it possible to explain, in elementary terms, the somewhat awkward notion of linear application arising in Geometry of Interaction simply as resolution between principal types using an alternate unification algorithm. As a consequence, we provide an answer, for the purely affine fragment, to the open problem raised by Abramsky of characterizing those partial involutions which are denotations of combinatory terms.
We present an axiom system for what we call Prior’s Ideal Language and prove its completeness and pure completeness with respect to general models. With this done, we explain, with examples, why this system provides a useful setting for exploring Arthur Prior’s work.
We revisit the communication primitive in ambient calculi. Previously, such communication was confined to a first-order (FO) mode (e.g., only names or capabilities of ambients can be sent), a local mode (e.g., communication occurring only inside an ambient), or particular cross-hierarchy modes (e.g., parent–child communication). In this work, we further explore higher-order (HO) communication in ambient calculi. Specifically, this communication mechanism allows a whole piece of a program to be sent across the borders of ambients and is the only form of communication that can happen directly between ambients. Since ambients are basically of an HO nature (i.e., what is moved may itself be an ambient), it appears in a sense more natural to have HO communication than FO communication. We stipulate that communication occurs only between equally positioned ambients, in a peer-to-peer fashion (e.g., between sibling ambients). Following this line, we drop local and other forms of communication that violate this criterion. As our workbench, we use a variant of Fair Ambients extended with HO communication, FAHO. This variant also strengthens the original version in that entirely real-identity interaction is guaranteed. We study the semantics, bisimulation, and expressiveness of FAHO. In particular, we provide an operational semantics using a labeled transition system. Over this semantics, we define bisimulation in line with the standard notion of bisimulation for ambients and prove that the bisimulation equivalence (i.e., bisimilarity) is a congruence. In addition, we demonstrate that bisimilarity coincides with observational congruence (i.e., barbed congruence). Moreover, we show that FAHO can encode a minimal Turing-complete HO calculus and is thus computationally complete.
Data-based methods have gained increasing importance in engineering. Success stories are prevalent in areas such as data-driven modeling, control, and automation, as well as surrogate modeling for accelerated simulation. Beyond engineering, generative and large-language models are increasingly helping with tasks that were previously associated solely with creative human processes. It therefore seems timely to seek artificial-intelligence support for engineering design tasks, to automate, assist with, or accelerate purpose-built designs of engineering systems, for instance in mechanics and dynamics, where design still requires a great deal of specialized knowledge. Compared with established, predominantly first-principles-based methods, the datasets used for training, validation, and testing become an almost inherent part of the overall methodology. Data publishing thus becomes just as important in (data-driven) engineering science as appropriate descriptions of conventional methodology have been in past publications. However, in mechanics and dynamics, traditional publishing practices are still widespread and largely do not yet take the rising role of data into account to the extent already common in purely data-scientific research. This article analyzes the value and challenges of data publishing in mechanics and dynamics, particularly regarding engineering design tasks, showing that the latter also raise challenges and considerations not typical of the fields in which data-driven methods originally boomed. Researchers currently find barely any guidance on overcoming these challenges. Ways to deal with them are therefore discussed, and a set of examples from across different design problems shows how data publishing can be put into practice.
We provide a fine classification of bisimilarities between states of possibly different labelled Markov processes (LMP). We show that a bisimilarity relation proposed by Panangaden that uses direct sums coincides with “event bisimilarity” from his joint work with Danos, Desharnais, and Laviolette. We also extend Giorgio Bacci’s notions of bisimilarity between two different processes to the case of nondeterministic LMP and generalize the game characterization of state bisimilarity by Clerc et al. for the latter.
Online customer feedback management (CFM) is becoming increasingly important for businesses. Providing timely and effective responses to guest reviews can be challenging, especially as the volume of reviews grows. This paper explores the response process and the potential for artificial intelligence (AI) augmentation in response formulation. We propose an orchestration concept for human–AI collaboration in co-writing within the hospitality industry, supported by a novel NLP-based solution that combines the strengths of both human and AI. Although complete automation of the response process remains out of reach, our findings offer practical implications for improving response speed and quality through human–AI collaboration. Additionally, we formulate policy recommendations for businesses and regulators in CFM. Our study provides transferable design knowledge for developing future CFM products.
In the wake of the recent resurgence of the Datalog language of databases, together with its extensions for ontological reasoning settings, this work aims to bridge the gap between the theoretical studies of DatalogMTL (Datalog extended with metric temporal logic) and the development of production-ready reasoning systems. In particular, we lay out the functional and architectural desiderata of a modern reasoner and propose our system, Temporal Vadalog. Leveraging the vast amount of experience from the database community, we go beyond the typical chase-based implementations of reasoners, and propose a set of novel techniques and a system that adopts a modern data pipeline architecture. We discuss crucial architectural choices, such as how to guarantee termination when infinitely many time intervals are possibly generated, how to merge intervals, and how to sustain a limited memory footprint. We discuss advanced features of the system, such as the support for time series, and present an extensive experimental evaluation. This paper is a substantially extended version of “The Temporal Vadalog System” as presented at RuleML+RR ’22.
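To fix ideas, a DatalogMTL rule combines Datalog syntax with metric temporal operators; an illustrative rule (our example, not drawn from the paper) is
$$\mathit{Overheating}(x) \leftarrow \boxminus_{[0,5]}\,\mathit{HighTemp}(x),$$
which derives $\mathit{Overheating}(x)$ at time $t$ whenever $\mathit{HighTemp}(x)$ held throughout the interval $[t-5,\,t]$; recursion through such operators is what can generate infinitely many time intervals.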
The formal theory of monads shows that much of the theory of monads can be developed in the abstract at the level of 2-categories. This means that results about monads can be established once and for all and simply instantiated in settings such as enriched category theory.
Unfortunately, these results can be hard to reason about as they involve more abstract machinery. In this paper, we present the formal theory of monads in terms of string diagrams — a graphical language for 2-categorical calculations. Using this perspective, we show that many aspects of the theory of monads, such as the Eilenberg–Moore and Kleisli resolutions of monads, liftings, and distributive laws, can be understood in terms of systematic graphical calculational reasoning.
This paper will serve as an introduction both to the formal theory of monads and to the use of string diagrams, in particular, their application to calculations in monad theory.
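As a reminder of the equations at stake, a monad $(T,\eta,\mu)$ satisfies the unit and associativity laws
$$\mu\circ T\eta = \mu\circ \eta T = \mathrm{id}_T, \qquad \mu\circ T\mu = \mu\circ \mu T,$$
and it is precisely such equations that the string-diagrammatic calculus turns into graphical moves.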
This study introduces an innovative deep-learning method for intelligent healthcare emotion analysis, specifically targeting the recognition of pain from facial expressions. The suggested approach employs cloud-based mobile applications, utilising separate front-end and back-end elements to optimise data processing. The main contributions consist of a Smart Automated System (SASys) that integrates statistical and deep-learning methods for feature extraction, guaranteeing both resilience and efficiency. Image preprocessing encompasses face detection and normalisation, which is crucial for extracting features with high accuracy. A comparison of statistical feature representations, using the Histogram of Oriented Gradients and Local Binary Patterns with machine-learning classifiers, against an enhanced deep-learning approach with an integrated multi-tasking feature, a multi-task convolutional neural network, yields encouraging outcomes that support the superiority of the convolutional neural network architecture. Combining the statistical and deep-learning classification scores greatly enhances the system’s overall performance. The experimental results show that the method is effective, outperforming traditional classifiers and achieving accuracy comparable to state-of-the-art healthcare SASys.
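The score combination credited with the performance gain can, in its simplest form, be written as a convex fusion of the two classifiers’ outputs (the paper’s exact fusion rule is not specified here):
$$s_{\mathrm{fused}} = \alpha\, s_{\mathrm{stat}} + (1-\alpha)\, s_{\mathrm{cnn}}, \qquad \alpha\in[0,1].$$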