According to conciliatory views on the significance of disagreement, it’s rational for you to become less confident in your take on an issue if your epistemic peer’s take on it differs. These views are intuitively appealing, but they also face a powerful objection: in scenarios that involve disagreements over their own correctness, conciliatory views appear to self-defeat and thereby issue inconsistent recommendations. This paper provides a response to this objection. Drawing on work from the defeasible logics paradigm and abstract argumentation, it develops a formal model of conciliatory reasoning and explores its behavior in the troubling scenarios. The model suggests that the recommendations conciliatory views issue in such scenarios are perfectly reasonable, even if outwardly they may look odd.
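The abstract does not reproduce the paper’s model, but the Dung-style abstract argumentation it draws on is easy to illustrate. Below is a minimal Python sketch (ours, with hypothetical argument labels, not the paper’s actual construction) computing a grounded extension as the least fixed point of the characteristic function:

```python
# Minimal sketch of Dung-style abstract argumentation (ours, not the paper's
# model): the grounded extension is the least fixed point of the characteristic
# function F(S) = {a : every attacker of a is attacked by some member of S}.

def grounded_extension(arguments, attacks):
    """arguments: iterable of labels; attacks: set of (attacker, target) pairs."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers[a])
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Toy scenario with hypothetical labels: argument c (a conciliatory verdict) is
# attacked by p (peer disagreement), which e (an undercutting argument) attacks.
print(grounded_extension({"c", "p", "e"}, {("p", "c"), ("e", "p")}))
# -> {'e', 'c'}: c is reinstated once its attacker is itself defeated.
```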
In the literature, predicativism is connected not only with the Vicious Circle Principle but also with the idea that certain totalities are inherently potential. To explain the connection between these two aspects of predicativism, we explore some approaches to predicativity within the modal framework for potentiality developed in Linnebo (2013) and Linnebo and Shapiro (2019). This puts predicativism into a more general framework and helps to sharpen some of its key theses.
The purpose of this paper is to compare the notion of a Grzegorczyk point introduced in [19] (and thoroughly investigated in [3, 14, 16, 18]) to the standard notions of a filter in Boolean algebras and a round filter in Boolean contact algebras. In particular, we compare Grzegorczyk points to filters and ultrafilters of atomic and atomless algebras. We also show how a certain extra axiom influences the topological spaces associated with Grzegorczyk contact algebras. Last but not least, we offer a philosophical interpretation of the paper’s results.
Standard Type Theory, ${\textrm {STT}}$, tells us that $b^n(a^m)$ is well-formed iff $n=m+1$. However, Linnebo and Rayo [23] have advocated the use of Cumulative Type Theory, $\textrm {CTT}$, which has more relaxed type-restrictions: according to $\textrm {CTT}$, $b^\beta (a^\alpha )$ is well-formed iff $\beta>\alpha $. In this paper, we set ourselves against $\textrm {CTT}$. We begin our case by arguing against Linnebo and Rayo’s claim that $\textrm {CTT}$ sheds new philosophical light on set theory. We then argue that, while $\textrm {CTT}$’s type-restrictions are unjustifiable, the type-restrictions imposed by ${\textrm {STT}}$ are justified by a Fregean semantics. What is more, this Fregean semantics provides us with a principled way to resist Linnebo and Rayo’s Semantic Argument for $\textrm {CTT}$. We end by examining an alternative approach to cumulative types due to Florio and Jones [10]; we argue that their theory is best seen as a misleadingly formulated version of ${\textrm {STT}}$.
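The contrast between the two type disciplines is mechanical enough to state as code. A minimal sketch (the Python encoding is ours; the conditions are those quoted in the abstract):

```python
# The two well-formedness conditions from the abstract (encoding ours):
# b^beta(a^alpha) is well-formed under STT iff beta = alpha + 1,
# and under CTT iff beta > alpha.

def well_formed_stt(beta: int, alpha: int) -> bool:
    return beta == alpha + 1

def well_formed_ctt(beta: int, alpha: int) -> bool:
    return beta > alpha

# Every STT-well-formed predication is CTT-well-formed, but not conversely:
assert well_formed_stt(2, 1) and well_formed_ctt(2, 1)
assert not well_formed_stt(3, 1) and well_formed_ctt(3, 1)  # CTT is more relaxed
```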
In this paper we examine various requirements on the formalisation choices under which self-reference can be adequately formalised in arithmetic. In particular, we study self-referential numberings, which immediately provide a strong notion of self-reference even for expressively weak languages. The results of this paper suggest that the question whether truly self-referential reasoning can be formalised in arithmetic is more sensitive to the underlying coding apparatus than usually believed. As a case study, we show how this sensitivity affects the formal study of certain principles of self-referential truth.
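For orientation, the classical route to self-reference in arithmetic, a background fact rather than this paper’s construction, is the diagonal lemma: for every formula $\varphi (x)$ there is a sentence $\psi $ with $\mathsf {PA}\vdash \psi \leftrightarrow \varphi (\ulcorner \psi \urcorner )$, where $\ulcorner \psi \urcorner $ is the numeral of the code of $\psi $ under the chosen coding. The abstract’s point is that whether such a $\psi $ deserves to be called truly self-referential depends on precisely that coding apparatus.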
We present a natural standard translation of inquisitive modal logic $\mathrm{InqML}$ into first-order logic over the natural two-sorted relational representations of the intended models, which captures the built-in higher-order features of $\mathrm{InqML}$. This translation is based on a graded notion of flatness that ties the inherent second-order, team-semantic features of $\mathrm{InqML}$ over information states to subsets or tuples of bounded size. A natural notion of pseudo-models, which relaxes the non-elementary constraints on the intended models, gives rise to an elementary, purely model-theoretic proof of the compactness property for $\mathrm{InqML}$. Moreover, we prove a Hennessy–Milner theorem for $\mathrm{InqML}$, which crucially uses $\omega $-saturated pseudo-models and the new standard translation. As corollaries we also obtain van Benthem-style characterisation theorems.
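For comparison (classical background, not the paper’s two-sorted translation itself): the familiar standard translation for basic modal logic maps $\Box \varphi $ at world variable $x$ to $\forall y\,(Rxy \rightarrow \mathrm{ST}_y(\varphi ))$; the $\mathrm{InqML}$ translation described above generalises this pattern over two-sorted models of worlds and information states, with graded flatness bounding the team-semantic part.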
Can conjunctive propositions be identical without their conjuncts being identical? Can universally quantified propositions be identical without their instances being identical? On a common conception of propositions, on which they inherit the logical structure of the sentences which express them, the answer is negative both times. Here, it will be shown that such a negative answer to both questions is inconsistent, assuming a standard type-theoretic formalization of theorizing about propositions. The result is not specific to conjunction and universal quantification, but applies to any binary operator and propositional quantifier. It is also shown that the result essentially arises out of giving a negative answer to both questions, as each negative answer is consistent by itself.
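A minimal formalization of the two negative answers (our rendering of the claims in the abstract) is the pair of injectivity principles $(p \wedge q) = (r \wedge s) \rightarrow (p = r) \wedge (q = s)$ and $(\forall x\, Fx) = (\forall x\, Gx) \rightarrow F = G$: on the structured conception, identical conjunctions have identical conjuncts and identical universal quantifications have identical instances. The result reported above is that, in the standard type-theoretic setting, these principles are jointly inconsistent although each is consistent on its own.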
We address Steel’s Programme to identify a ‘preferred’ universe of set theory and the best axioms extending $\mathsf {ZFC}$ by using his multiverse axioms $\mathsf {MV}$ and the ‘core hypothesis’. In the first part, we examine the evidential framework for $\mathsf {MV}$, in particular the use of large cardinals and of ‘worlds’ obtained through forcing to ‘represent’ alternative extensions of $\mathsf {ZFC}$. In the second part, we address the existence and the possible features of the core of $\mathsf {MV}_T$ (where T is $\mathsf {ZFC}$+Large Cardinals). In the last part, we discuss the hypothesis that the core is Ultimate-L, and examine whether and how, based on this fact, the Core Universist can justify V=Ultimate-L as the best (and ultimate) extension of $\mathsf {ZFC}$. To this end, we take into account several strategies, and assess their prospects in the light of $\mathsf {MV}$’s evidential framework.
This paper extends and adapts to modal logic the fractional semantics approach to classical logic. This is a multi-valued semantics governed by pure proof-theoretic considerations, whose truth-values are the rational numbers in the closed interval $[0,1]$. Focusing on the modal logic K, the proposed methodology relies on three key components: bilateral sequent calculus, invertibility of the logical rules, and stability (proof-invariance). We show that our semantic analysis of K affords an informational refinement with respect to the standard Kripkean semantics (a new proof of Dugundji’s theorem is a case in point) and raises the prospect of a proof-theoretic semantics for modal logic.
The notion of a tensor captures three great ideas: equivariance, multilinearity, separability. But trying to be three things at once makes the notion difficult to understand. We will explain tensors in an accessible and elementary way through the lens of linear algebra and numerical linear algebra, elucidated with examples from computational and applied mathematics.
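Of the three ideas, multilinearity is the easiest to exhibit concretely. A minimal NumPy sketch (our example, not drawn from the article): the bilinear form defined by a 2-way tensor, i.e. a matrix, is linear in each argument separately.

```python
# Multilinearity in the simplest case (example ours): the bilinear form
# (x, y) -> x^T A y given by a 2-way tensor (matrix) A is linear in each slot.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x, x2, y = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

def bilinear(u, v):
    return u @ A @ v

# Linearity in the first argument, the second held fixed:
assert np.isclose(bilinear(c * x + x2, y), c * bilinear(x, y) + bilinear(x2, y))
# A 3-way tensor gives a trilinear map in the same way: one linear slot per mode.
```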
Liquid crystals are a type of soft matter intermediate between crystalline solids and isotropic fluids. The study of liquid crystals has made tremendous progress over the past four decades; it is of great importance for fundamental scientific research and has widespread applications in industry. In this paper we review the mathematical models of liquid crystals and the connections between them, and survey the development of numerical methods for finding the rich configurations of liquid crystals.
In the past decade the mathematical theory of machine learning has lagged far behind the triumphs of deep neural networks on practical challenges. However, the gap between theory and practice is gradually starting to close. In this paper I will attempt to assemble some pieces of the remarkable and still incomplete mathematical mosaic emerging from the efforts to understand the foundations of deep learning. The two key themes will be interpolation and its sibling over-parametrization. Interpolation corresponds to fitting data, even noisy data, exactly. Over-parametrization enables interpolation and provides flexibility to select a suitable interpolating model.
As we will see, just as a physical prism separates colours mixed within a ray of light, the figurative prism of interpolation helps to disentangle generalization and optimization properties within the complex picture of modern machine learning. This article is written in the belief and hope that clearer understanding of these issues will bring us a step closer towards a general theory of deep learning and machine learning.
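A minimal numerical illustration of the two themes (construction ours, not the author’s): an over-parametrized polynomial model, with more coefficients than data points, fits noisy data exactly, and the least-squares solver selects the minimum-norm interpolant among the many candidates.

```python
# Interpolation via over-parametrization (example ours): a 20-coefficient
# polynomial fit to 10 noisy points. With more parameters than data, lstsq
# returns the minimum-norm coefficient vector, which interpolates exactly.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 10)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(10)   # noisy labels

V = np.vander(x, N=20)                              # 10 x 20: under-determined
coef, *_ = np.linalg.lstsq(V, y, rcond=None)        # minimum-norm solution

assert np.allclose(V @ coef, y)                     # fits the noisy data exactly
```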
We present an overview of the basic theory, modern optimal transportation extensions and recent algorithmic advances. Selected modelling and numerical applications illustrate the impact of optimal transportation in numerical analysis.
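As a minimal computational sketch (ours, not taken from the survey), the discrete Kantorovich problem between two finite distributions is a linear program and can be solved directly:

```python
# Discrete optimal transport as a linear program (example ours):
# minimise <C, P> subject to P 1 = mu, P^T 1 = nu, P >= 0.
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0])             # source support
y = np.array([0.5, 1.5])                  # target support
mu = np.array([0.3, 0.4, 0.3])            # source weights
nu = np.array([0.5, 0.5])                 # target weights

C = (x[:, None] - y[None, :]) ** 2        # squared-distance cost, 3 x 2
m, n = C.shape

# Marginal constraints on the flattened plan P (row-major, P[i, j] at i*n + j).
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0      # row sums equal mu
for j in range(n):
    A_eq[m + j, j::n] = 1.0               # column sums equal nu
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
P = res.x.reshape(m, n)                   # optimal transport plan
print("optimal transport cost:", res.fun)
```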
This article addresses the inference of physics models from data, from the perspectives of inverse problems and model reduction. These fields develop formulations that integrate data into physics-based models while exploiting the fact that many mathematical models of natural and engineered systems exhibit an intrinsically low-dimensional solution manifold. In inverse problems, we seek to infer uncertain components of the inputs from observations of the outputs, while in model reduction we seek low-dimensional models that explicitly capture the salient features of the input–output map through approximation in a low-dimensional subspace. In both cases, the result is a predictive model that reflects data-driven learning yet deeply embeds the underlying physics, and thus can be used for design, control and decision-making, often with quantified uncertainties. We highlight recent developments in scalable and efficient algorithms for inverse problems and model reduction governed by large-scale models in the form of partial differential equations. Several illustrative applications to large-scale complex problems across different domains of science and engineering are provided.
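As one concrete instance of ‘approximation in a low-dimensional subspace’ (sketch ours, not the article’s method), proper orthogonal decomposition extracts a reduced basis from solution snapshots via the singular value decomposition:

```python
# Proper orthogonal decomposition (POD) sketch (example ours): build a reduced
# basis from snapshots and check how well it captures them.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic snapshots: 200-dimensional states lying near a 5-dimensional
# subspace plus small noise (a stand-in for PDE solution snapshots).
U_true = np.linalg.qr(rng.standard_normal((200, 5)))[0]
snapshots = U_true @ rng.standard_normal((5, 50)) \
    + 1e-3 * rng.standard_normal((200, 50))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :5]                          # reduced basis, 200 x 5

# Relative error of projecting the snapshots onto the reduced subspace.
proj = basis @ (basis.T @ snapshots)
print("relative projection error:",
      np.linalg.norm(snapshots - proj) / np.linalg.norm(snapshots))
```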