This paper offers a substantial improvement in the revision-theoretic approach to conditionals in theories of transparent truth. The main modifications are (i) a new limit rule; (ii) a modification of the extension to the continuum-valued case; and (iii) the suggestion of a variation on how universal quantification is handled, leading to more satisfactory laws of restricted quantification.
Two salient notions of sameness of theories are synonymy, aka definitional equivalence, and bi-interpretability. Of these two, definitional equivalence is the stricter notion. In which cases can we infer synonymy from bi-interpretability? We study this question for the case of sequential theories. Our result is as follows. Suppose that two sequential theories are bi-interpretable and that the interpretations involved in the bi-interpretation are one-dimensional and identity preserving. Then, the theories are synonymous.
The crucial ingredient of our proof is a version of the Schröder–Bernstein theorem under very weak conditions. We think this last result has some independent interest.
We provide an example to show that this result is optimal. There are two finitely axiomatized sequential theories that are bi-interpretable but not synonymous, where precisely one of the interpretations involved in the bi-interpretation is not identity preserving.
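For orientation, the contrast between the two notions can be sketched schematically (a hedged paraphrase of the standard definitions, not the paper's official formulation): given interpretations $K : U \to V$ and $M : V \to U$,

$$U \vdash (M \circ K) \simeq \mathrm{id}_U \ \text{ and } \ V \vdash (K \circ M) \simeq \mathrm{id}_V \quad \text{(bi-interpretability, via definable isomorphisms)},$$
$$U \vdash (M \circ K) = \mathrm{id}_U \ \text{ and } \ V \vdash (K \circ M) = \mathrm{id}_V \quad \text{(synonymy: the composites are literally the identity)}.$$

The theorem above says that, for sequential theories, the weaker isomorphism-to-identity condition already yields the stronger identity condition, provided the interpretations are one-dimensional and identity preserving.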
Explanations, and in particular explanations which provide the reasons why their conclusion is true, are a central object in a range of fields. On the one hand, there is a long and illustrious philosophical tradition, starting with Aristotle and passing through scholars such as Leibniz, Bolzano and Frege, that gives pride of place to this type of explanation and is rich with brilliant and profound intuitions. Recently, Poggiolesi [25] has formalized ideas coming from this tradition using logical tools of proof theory. On the other hand, recent work has focused on Boolean circuits that compile some common machine learning classifiers and have the same input-output behavior. In this framework, Darwiche and Hirth [7] have proposed a theory for unveiling the reasons behind the decisions made by Boolean classifiers, and they have studied their theoretical implications. In this paper, we uncover the deep links between these two trends, demonstrating that the proof-theoretic tools introduced by Poggiolesi provide reasons for decisions, in the sense of Darwiche and Hirth [7]. We discuss the conceptual as well as the technical significance of this result.
This paper makes a twofold contribution to the study of expressivity. First, we introduce and study the novel concept of conditional expressivity. Taking a universal logic perspective, we characterize conditional expressivity both syntactically and semantically. We show that our concept of conditional expressivity is related to, but different from, the concept of explicit definability in Beth’s definability theorem. Second, we use the concept to explore inferential relations between collective deontic admissibility statements for different groups. Negative results on conditional expressivity are stronger than standard (unconditional) inexpressivity results: we show that the well-known inexpressivity results from epistemic logic on distributed knowledge and on common knowledge only concern unconditional expressivity. By contrast, we prove negative results on conditional expressivity in the deontic logic of collective agency. In particular, we consider the full formal language of the deontic logic of collective agency, define a natural class of sublanguages of the full language, and prove that a collective deontic admissibility statement about a particular group is conditionally expressible in a sublanguage from the class if and only if that sublanguage includes a collective deontic admissibility statement about a supergroup of that group. Our negative results on conditional expressivity may serve as a proof of concept for future studies.
Can we quantify over absolutely every set? Absolutists typically affirm, while relativists typically deny, the possibility of unrestricted quantification (in set theory). In the first part of this article, I develop a novel and intermediate philosophical position in the absolutism versus relativism debate in set theory. In a nutshell, the idea is that problematic sentences related to paradoxes cannot be interpreted with unrestricted quantifier domains, while prima facie absolutist sentences (e.g., “no set is contained in the empty set”) are unproblematic in this respect and can be interpreted over a domain containing all sets. In the second part of the paper, I develop a semantic theory that can implement the intermediate position. The resulting framework allows us to distinguish between inherently absolutist and inherently relativist sentences of the language of set theory.
We investigate a system of modal semantics in which $\Box \phi $ is true if and only if $\phi $ is entailed, in a designated logic, by a designated set of formulas. We prove some strong completeness results as well as a natural connection to normal modal logics via an application of some lattice-theoretic fixpoint theorems. We raise a difficult problem that arises naturally in this setting about logics which are identical with their own ‘meta-logic’, and draw a surprising connection to recent work by Andrew Bacon and Kit Fine on McKinsey’s substitutional modal semantics.
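As a minimal illustration of the truth clause just described (assuming it exactly as stated, with a designated set of formulas $\Gamma$ and a designated logic $L$):

$$\vDash \Box \phi \quad \Longleftrightarrow \quad \Gamma \vdash_{L} \phi .$$

For instance, with $\Gamma = \{p\}$ and $L$ classical propositional logic, $\Box (p \vee q)$ comes out true while $\Box q$ does not.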
The Fregean ontology can be naturally interpreted within set theory with urelements, where objects correspond to sets and urelements, and concepts to classes. Consequently, Fregean abstraction principles can be formulated as set-theoretic principles. We investigate how the size of reality—i.e., the number of urelements—interacts with these principles. We show that Basic Law V implies that for some well-ordered cardinal $\kappa $, there is no set of urelements of size $\kappa $. Building on recent work by Hamkins [10], we show that, under certain additional axioms, Basic Law V holds if and only if the urelements form a set. We construct models of urelement set theory in which the Reflection Principle holds while Hume’s Principle fails for sets. Additionally, assuming the consistency of an inaccessible cardinal, we produce a model of Kelley–Morse class theory with urelements that has a global well-ordering but lacks a definable map satisfying Hume’s Principle for classes.
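For reference, the two abstraction principles at issue are standardly stated as follows (in second-order notation; the paper recasts them set-theoretically, with concepts playing the role of classes):

$$\textbf{(BLV)}\quad \varepsilon F = \varepsilon G \;\leftrightarrow\; \forall x\, (Fx \leftrightarrow Gx), \qquad \textbf{(HP)}\quad \#F = \#G \;\leftrightarrow\; F \approx G,$$

where $\varepsilon F$ is the extension of $F$, $\#F$ is the number of $F$s, and $\approx$ denotes equinumerosity.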
An original family of labelled sequent calculi $\mathsf {G3IL}^{\star }$ for classical interpretability logics is presented, modularly designed on the basis of Verbrugge semantics (a.k.a. generalised Veltman semantics) for those logics. We prove that each of our calculi enjoys excellent structural properties, namely admissibility of weakening, contraction and, more relevantly, cut. A complexity measure for the cut rule is defined by extending the notion of range previously introduced by Negri for a labelled sequent calculus for Gödel–Löb provability logic, and a cut-elimination algorithm is discussed in detail. To our knowledge, this is the most extensive and structurally well-behaved class of analytic proof systems for modal logics of interpretability currently available in the literature.
This paper shows how to set up Fine’s “theory-application” type semantics so as to model the use-unrestricted “Official” consequence relation for a range of relevant logics. The frame condition matching the axiom $(((A \to A) \land (B \to B)) \to C) \to C$, the characteristic axiom of the very first axiomatization of the relevant logic E, is exhibited. It is also shown how to model propositional constants within the semantic framework. Whereas the related Routley–Meyer type frame semantics fails to be strongly complete with regard to certain contractionless logics such as B, the current paper shows that Fine’s weak soundness and completeness result can be extended to a strong one also for logics like B.
The family of relevant logics can be faceted by a hierarchy of increasingly fine-grained variable sharing properties, each requiring that in valid entailments $A\to B$ some atom must appear in both A and B, subject to some additional condition (e.g., with the same sign or nested within the same number of conditionals). In this paper, we consider an exceptionally strong variable sharing property, lericone relevance, that takes into account the path of negations and conditionals in which an atom appears in the parse trees of the antecedent and consequent. We show that this property of lericone relevance holds of the relevant logic $\mathbf {BM}$ (and that a related property of faithful lericone relevance holds of $\mathbf {B}$) and characterize the largest fragments of classical logic with these properties. Along the way, we consider the consequences of lericone relevance for the theory of subject-matter, for Logan’s notion of hyperformalism, and for the very definition of a relevant logic itself.
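As a hedged illustration of the weakest rung of this hierarchy, plain variable sharing requires that if $A \to B$ is a theorem then $A$ and $B$ share at least one atom. Thus the classical tautology

$$q \to (p \to p)$$

fails the condition, since its antecedent and consequent share no atom, and so is not a theorem of any logic with the variable sharing property; lericone relevance, as described above, further constrains the paths of negations and conditionals under which a shared atom may occur.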
In previous publications, it was shown that finite non-deterministic matrices are quite powerful in providing semantics for a large class of normal and non-normal modal logics. However, some modal logics, such as those whose axiom systems contain the Löb axiom or the McKinsey formula, have not been analyzed via non-deterministic semantics. Furthermore, modal rules other than the rule of necessitation have not yet been characterized in this framework.
In this paper, we overcome this shortcoming and present a novel approach for constructing semantics for normal and non-normal modal logics based on restricted non-deterministic matrices. This approach not only offers a uniform semantic framework for modal logics, keeping the interpretation of the modal operators the same and thus making different systems of modal logic comparable, but it might also lead to a new understanding of the concept of modality.
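For readers new to the framework, the underlying notion (in the standard Avron–Lev formulation, before the restriction introduced here) is that a non-deterministic matrix assigns to each $n$-ary connective $\diamond$ a multi-valued table $\tilde{\diamond} : \mathcal{V}^{\,n} \to \mathcal{P}(\mathcal{V}) \setminus \{\emptyset\}$, and a legal valuation $v$ need only satisfy

$$v(\diamond(\psi_1, \dots, \psi_n)) \in \tilde{\diamond}\big(v(\psi_1), \dots, v(\psi_n)\big),$$

so the value of a compound formula is chosen from, rather than determined by, the values of its components.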
Least-squares problems are a cornerstone of computational science and engineering. Over the years, the size of the problems that researchers and practitioners face has constantly increased, making it essential that sparsity is exploited in the solution process. The goal of this article is to present a broad review of key algorithms for solving large-scale linear least-squares problems. This includes sparse direct methods and algebraic preconditioners that are used in combination with iterative solvers. Where software is available, this is highlighted.
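As a small, hedged illustration of the iterative-solver side of this landscape (not code from the article; names and parameters are illustrative), here is SciPy's sparse LSQR solver applied to a random overdetermined system:

```python
# Minimal sketch: solve min_x ||A x - b||_2 for a sparse, overdetermined A
# with the LSQR iterative method (Paige & Saunders), as provided by SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 10_000, 500
A = sp.random(m, n, density=1e-3, format="csr", random_state=0)  # sparse design matrix
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)                   # noisy right-hand side

x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(f"stop flag {istop}, {itn} iterations, residual norm {r1norm:.2e}")
```

In practice, and as the article discusses, such Krylov-type solvers are typically combined with an algebraic preconditioner to keep iteration counts manageable.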
Time parallelization, also known as PinT (parallel-in-time), is a new research direction for the development of algorithms used for solving very large-scale evolution problems on highly parallel computing architectures. Despite the fact that interesting theoretical work on PinT appeared as early as 1964, it was not until 2004, when processor clock speeds reached their physical limit, that research in PinT took off. A distinctive characteristic of parallelization in time is that information flow only goes forward in time, meaning that time evolution processes seem necessarily to be sequential. Nevertheless, many algorithms have been developed for PinT computations over the past two decades, and they are often grouped into four basic classes according to how the techniques work and are used: shooting-type methods; waveform relaxation methods based on domain decomposition; multigrid methods in space–time; and direct time parallel methods. However, over the past few years, it has been recognized that highly successful PinT algorithms for parabolic problems struggle when applied to hyperbolic problems. We will therefore focus on this important aspect, first by providing a summary of the fundamental differences between parabolic and hyperbolic problems for time parallelization. We then group PinT algorithms into two basic groups. The first group contains four effective PinT techniques for hyperbolic problems: Schwarz waveform relaxation (SWR) with its relation to tent pitching; parallel integral deferred correction; ParaExp; and ParaDiag. While the methods in the first group also work well for parabolic problems, we then present PinT methods specifically designed for parabolic problems in the second group: Parareal; the parallel full approximation scheme in space–time (PFASST); multigrid reduction in time (MGRiT); and space–time multigrid (STMG). We complement our analysis with numerical illustrations using four time-dependent PDEs: the heat equation; the advection–diffusion equation; Burgers’ equation; and the second-order wave equation.
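To make one of the reviewed methods concrete, here is a hedged, minimal Parareal sketch on the scalar Dahlquist test problem $u' = \lambda u$ (a toy stand-in; in a genuine PinT run the fine solves over the subintervals are the part executed in parallel):

```python
# Minimal Parareal sketch for u' = lam * u on [0, T], u(0) = 1.
# G: one backward-Euler step per subinterval (cheap, sequential coarse propagator).
# F: many backward-Euler substeps per subinterval (expensive; parallel across subintervals).
import numpy as np

lam, T, N, K, substeps = -1.0, 2.0, 10, 5, 100
dT = T / N

def G(u, dt=dT):                      # coarse propagator: single implicit Euler step
    return u / (1.0 - lam * dt)

def F(u, dt=dT, s=substeps):          # fine propagator: s implicit Euler substeps
    h = dt / s
    for _ in range(s):
        u = u / (1.0 - lam * h)
    return u

# Initial guess from a sequential coarse sweep.
U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = G(U[n])

# Parareal correction: U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n).
for k in range(K):
    F_old = np.array([F(U[n]) for n in range(N)])   # embarrassingly parallel in practice
    G_old = np.array([G(U[n]) for n in range(N)])
    U_new = np.empty_like(U)
    U_new[0] = U[0]
    for n in range(N):                              # cheap sequential coarse sweep
        U_new[n + 1] = G(U_new[n]) + F_old[n] - G_old[n]
    U = U_new

print("Parareal end value:", U[-1], " exact:", np.exp(lam * T))
```

After $k$ iterations the approximation is exact on the first $k$ subintervals, so speed-up relies on far fewer iterations than subintervals being needed.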
This paper reviews current theoretical and numerical approaches to optimization problems governed by partial differential equations (PDEs) that depend on random variables or random fields. Such problems arise in many engineering, science, economics and societal decision-making tasks. This paper focuses on problems in which the governing PDEs are parametrized by the random variables/fields, and the decisions are made at the beginning and are not revised once uncertainty is revealed. Examples of such problems are presented to motivate the topic of this paper, and to illustrate the impact of different ways to model uncertainty in the formulations of the optimization problem and their impact on the solution. A linear–quadratic elliptic optimal control problem is used to provide a detailed discussion of the set-up for the risk-neutral optimization problem formulation, study the existence and characterization of its solution, and survey numerical methods for computing it. Different ways to model uncertainty in the PDE-constrained optimization problem are surveyed in an abstract setting, including risk measures, distributionally robust optimization formulations, probabilistic functions and chance constraints, and stochastic orders. Furthermore, approximation-based optimization approaches and stochastic methods for the solution of the large-scale PDE-constrained optimization problems under uncertainty are described. Some possible future research directions are outlined.
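For concreteness, one common instance of the risk-neutral, linear–quadratic setting discussed here can be sketched (notation illustrative: random diffusion coefficient $\kappa(\cdot,\xi)$, target state $u_d$, regularization weight $\alpha$) as

$$\min_{z}\; \tfrac12\, \mathbb{E}_\xi\!\left[\, \| u(\xi;z) - u_d \|_{L^2(D)}^2 \right] + \tfrac{\alpha}{2}\, \| z \|_{L^2(D)}^2 \quad \text{s.t.} \quad -\nabla\cdot\big(\kappa(\cdot,\xi)\,\nabla u(\xi;z)\big) = f + z \ \text{in } D, \quad u(\xi;z) = 0 \ \text{on } \partial D,$$

where the control $z$ is deterministic, reflecting that decisions are made before the uncertainty is revealed, and only the expected tracking error is penalized.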
Cut finite element methods (CutFEM) extend the standard finite element method to unfitted meshes, enabling the accurate resolution of domain boundaries and interfaces without requiring the mesh to conform to them. This approach preserves the key properties and accuracy of the standard method while addressing challenges posed by complex geometries and moving interfaces.
In recent years, CutFEM has gained significant attention for its ability to discretize partial differential equations in domains with intricate geometries. This paper provides a comprehensive review of the core concepts and key developments in CutFEM, beginning with its formulation for common model problems and the presentation of fundamental analytical results, including error estimates and condition number estimates for the resulting algebraic systems. Stabilization techniques for cut elements, which ensure numerical robustness, are also explored. Finally, extensions to methods involving Lagrange multipliers and applications to time-dependent problems are discussed.
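A hedged sketch of the prototypical ingredients for the Poisson problem with a lowest-order unfitted discretization (notation illustrative): Nitsche terms impose the boundary condition on the physical boundary $\Gamma$ cutting through the mesh, and a ghost-penalty term acts on the set $\mathcal{F}_G$ of faces of cut elements,

$$a_h(u,v) = (\nabla u, \nabla v)_{\Omega} - \langle \partial_n u, v \rangle_{\Gamma} - \langle u, \partial_n v \rangle_{\Gamma} + \frac{\gamma}{h} \langle u, v \rangle_{\Gamma}, \qquad j_h(u,v) = \sum_{F \in \mathcal{F}_G} \gamma_g\, h\, \langle [\![ \partial_n u ]\!], [\![ \partial_n v ]\!] \rangle_{F},$$

with discrete problem $a_h(u_h, v_h) + j_h(u_h, v_h) = (f, v_h)_{\Omega}$; the stabilization $j_h$ is what keeps coercivity and the condition number of the algebraic system under control regardless of how the boundary cuts the elements.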
Ensemble Kalman methods, introduced in 1994 in the context of ocean state estimation, are now widely used for state estimation and parameter estimation (inverse problems) in many arenas. Their success stems from the fact that they treat an underlying computational model as a black box, providing a systematic, derivative-free methodology for incorporating observations; furthermore, the ensemble approach allows sensitivities and uncertainties to be calculated. Analysis of the accuracy of ensemble Kalman methods, especially in terms of uncertainty quantification, is lagging behind empirical success; this paper provides a unifying mean-field-based framework for their analysis. Both state estimation and parameter estimation problems are considered, and formulations in both discrete and continuous time are employed. For state estimation problems, both the control and filtering approaches are considered; analogously, for parameter estimation problems, the optimization and Bayesian perspectives are both studied. As well as providing an elegant framework, the mean-field perspective also allows for the derivation of a variety of methods used in practice. In addition, it unifies a wide-ranging literature in the field and suggests open problems.
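A hedged NumPy sketch of the basic perturbed-observation ensemble Kalman analysis step, for orientation only (the mean-field formulations in the paper generalize well beyond this):

```python
# One perturbed-observation EnKF analysis step.
# X: (d, J) state ensemble, H: (p, d) linear observation operator,
# y: (p,) observation, R: (p, p) observation noise covariance.
import numpy as np

def enkf_analysis(X, H, y, R, rng):
    J = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    C = A @ A.T / (J - 1)                            # sample covariance (formed explicitly only in this toy)
    S = H @ C @ H.T + R
    K = C @ H.T @ np.linalg.solve(S, np.eye(len(y))) # Kalman gain K = C H^T S^{-1}
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=J).T  # perturbed observations
    return X + K @ (Y - H @ X)                       # analysis ensemble

# Toy usage: three-dimensional state, observe the first component only.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 50))
H = np.array([[1.0, 0.0, 0.0]])
Xa = enkf_analysis(X, H, y=np.array([0.5]), R=np.array([[0.01]]), rng=rng)
print(Xa.mean(axis=1))
```

The derivative-free character mentioned above is visible here: the update uses only ensemble evaluations of the observation map, never its Jacobian.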
The discontinuous Petrov–Galerkin (DPG) method is a Petrov–Galerkin finite element method with test functions designed to obtain stability. These test functions are computable locally, element by element, and are motivated by optimal test functions which attain the supremum in an inf-sup condition. A profound consequence of the use of nearly optimal test functions is that the DPG method can inherit the stability of the (undiscretized) variational formulation, be it coercive or not. This paper combines a presentation of the fundamentals of the DPG ideas with a review of the ongoing research on theory and applications of the DPG methodology. The scope of the presented theory is restricted to linear problems on Hilbert spaces, but pointers to extensions are provided. Multiple viewpoints on the basic theory are provided. They show that the DPG method is equivalent to a method which minimizes a residual in a dual norm, as well as to a mixed method where one solution component is an approximate error representation function. Being a residual minimization method, the DPG method yields Hermitian positive definite stiffness matrix systems even for non-self-adjoint boundary value problems. Having a built-in error representation, the method has the out-of-the-box feature that it can immediately be used in automatic adaptive algorithms. Unlike standard Galerkin methods, which are uninformed about test and trial norms, the DPG method must be equipped with a concrete test norm which enters the computations. Of particular interest are variational formulations in which one can tailor the norm to obtain robust stability. Key techniques to rigorously prove convergence of DPG schemes, including the construction of Fortin operators, which in the DPG case can be done element by element, are discussed in detail. Pointers to open frontiers are presented.
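In symbols, and as a hedged summary of the equivalences described above: with trial space $U_h$, (broken) test space $V$, bilinear form $b$, load $\ell$ and operator $B$ defined by $\langle Bw, v \rangle = b(w,v)$, the ideal DPG solution can be characterized as

$$u_h = \arg\min_{w_h \in U_h} \| \ell - B w_h \|_{V'},$$

or, via the trial-to-test operator $T$ defined by $(Tw, v)_V = b(w, v)$ for all $v \in V$, as the Petrov–Galerkin scheme that tests each trial function with its optimal test function $T w_h$; the locality of $T$ on broken test spaces is what allows the element-by-element computation mentioned above.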
Distributionally robust optimization (DRO) studies decision problems under uncertainty where the probability distribution governing the uncertain problem parameters is itself uncertain. A key component of any DRO model is its ambiguity set, that is, a family of probability distributions consistent with any available structural or statistical information. DRO seeks decisions that perform best under the worst distribution in the ambiguity set. This worst-case criterion is supported by findings in psychology and neuroscience, which indicate that many decision-makers have a low tolerance for distributional ambiguity. DRO is rooted in statistics, operations research and control theory, and recent research has uncovered its deep connections to regularization techniques and adversarial training in machine learning. This survey presents the key findings of the field in a unified and self-contained manner.
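Schematically, and with illustrative notation, a DRO model takes the form

$$\min_{x \in \mathcal{X}} \; \sup_{\mathbb{Q} \in \mathcal{A}} \; \mathbb{E}_{\mathbb{Q}}\big[ \ell(x, \xi) \big],$$

where the ambiguity set $\mathcal{A}$ might, for example, contain all distributions within a prescribed Wasserstein distance $\varepsilon$ of an empirical distribution $\hat{\mathbb{P}}_N$, i.e., $\mathcal{A} = \{ \mathbb{Q} : W(\mathbb{Q}, \hat{\mathbb{P}}_N) \le \varepsilon \}$. The connection to regularization mentioned above arises because, for many loss functions, this worst-case expectation can be reformulated as an empirical loss plus a norm penalty on the decision.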