Fourier developed his groundbreaking expansion methods initially as a tool for analyzing heat flow. From this application (and originally with little mathematical rigor), these expansions evolved rapidly to become one of the foremost tools in present-day applied mathematics. We describe in more detail the three main versions – the Fourier Transform (FT), the Fourier Series (FS), and the Discrete Fourier Transform (DFT) – and how these are related to each other. Each case amounts to a transform pair – allowing one to move either way between physical and transform variables. The typical purpose of applying transforms is that certain operations are simpler in one of the spaces than in the other. This overview is followed by a discussion of the Fast Fourier Transform (FFT) algorithm, a computationally rapid way to carry out the DFT. This algorithm (published by Cooley and Tukey in 1965) brought about one of the greatest computational advances of all time. The applications of this algorithm are far-reaching.
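To make the DFT/FFT relation concrete, here is a minimal sketch of the radix-2 Cooley–Tukey recursion in Python (illustrative only: it assumes the input length is a power of two, and the function name is our own; production code would call an optimized library routine).

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT of a sequence whose length is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of the even-indexed samples
    odd = fft(x[1::2])    # DFT of the odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out
```

The repeated halving is what reduces the cost of the naive $O(n^2)$ DFT summation to $O(n \log n)$.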
In a recent proof mining application, the proof-theoretic analysis of Dykstra’s cyclic projections algorithm resulted in quantitative information expressed via primitive recursive functionals in the sense of Gödel. This was surprising, as the proof relies on several compactness principles, and its quantitative analysis would require the functional interpretation of arithmetical comprehension; a priori, one would therefore expect the need for Spector’s bar-recursive functionals. In this paper, we explain how the use of bounded collection principles allows for a modified intermediate proof justifying the finitary results obtained, and we discuss the approach in the context of previous eliminations of weak compactness arguments in proof mining.
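For orientation, here is a minimal numerical sketch of Dykstra's cyclic projections for two convex sets in the plane; the example sets and all helper names are our own illustration, not taken from the paper.

```python
import numpy as np

def project_disc(z):
    """Euclidean projection onto the closed unit disc."""
    r = np.linalg.norm(z)
    return z if r <= 1.0 else z / r

def project_halfplane(z):
    """Euclidean projection onto the halfplane {(x, y) : x >= 0.3}."""
    return np.array([max(z[0], 0.3), z[1]])

def dykstra(x0, proj_a, proj_b, iters=500):
    """Dykstra's cyclic projections with correction terms p and q."""
    x = np.asarray(x0, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        y = proj_a(x + p)
        p = x + p - y        # correction for the first set
        x = proj_b(y + q)
        q = y + q - x        # correction for the second set
    return x  # approximates the projection of x0 onto the intersection

# e.g. dykstra([2.0, 2.0], project_disc, project_halfplane)
```

Unlike plain alternating projections, the correction terms make the iterates converge to the nearest point of the intersection, which is the behavior whose quantitative analysis the paper discusses.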
Scientific computing plays a critically important role in almost all areas of engineering, modeling, and forecasting. The method of finite differences (FD) is a classical tool that is still rapidly evolving, with several key developments barely yet in the literature. Other key aspects of the method, in particular those to do with computations that require high accuracy, often ‘fall through the cracks’ in many treatises. Bengt Fornberg addresses that failing in this book, which adopts a practical perspective right across the field and is aimed at graduate students, scientists, and educators seeking a follow-up to more typical curriculum-oriented textbooks. The coverage extends from generating FD formulas and applying them to solving ordinary and partial differential equations, to numerical integration, evaluation of infinite sums, approximation of fractional derivatives, and computations in the complex plane.
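As a taste of the first topic, generating FD formulas: the weights for approximating a derivative at a point can be found by requiring exactness on monomials, which yields a linear (Vandermonde-type) system. The Python sketch below is a naive illustration under that assumption; the function name is ours, and the book itself develops more general and better-conditioned algorithms.

```python
import numpy as np
from math import factorial

def fd_weights(nodes, m):
    """Weights w such that sum_i w[i] * f(nodes[i]) approximates f^(m)(0),
    exact for all polynomials of degree < len(nodes)."""
    n = len(nodes)
    A = np.vander(nodes, n, increasing=True).T   # A[k, i] = nodes[i] ** k
    b = np.zeros(n)
    b[m] = factorial(m)   # m-th derivative of x**k, evaluated at x = 0
    return np.linalg.solve(A, b)

# e.g. fd_weights([-1.0, 0.0, 1.0], 2) -> array([ 1., -2.,  1.])
```

For the nodes $-1, 0, 1$ and $m = 2$ this reproduces the classical second-derivative stencil $[1, -2, 1]$ (to be divided by $h^2$ for grid spacing $h$).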
Smooth Infinitesimal Analysis (SIA) is a remarkable late twentieth-century theory of analysis. It is based on nilsquare infinitesimals and does not rely on limits. SIA poses a challenge of motivating its use of intuitionistic logic beyond merely avoiding inconsistency. The classical-modal account(s) provided here attempt to do just that. The key is to treat the identity of an arbitrary nilsquare, e, in relation to 0 or any other nilsquare, as objectually vague or indeterminate—pace a famous argument of Evans [10]. Thus, we interpret the necessity operator of classical modal logic as “determinateness” in truth-value, naturally understood to satisfy the modal system S4 (the accessibility relation on worlds being reflexive and transitive). Then, appealing to the translation due to Gödel et al., and its proof-theoretic faithfulness (the “mirroring theorem”), we obtain a core classical-modal (CM) interpretation of SIA. Next, we observe a close connection with Kripke semantics for intuitionistic logic. However, to avoid contradicting SIA’s non-classical treatment of identity relating nilsquares, we translate “=” with a non-logical surrogate, ‘E,’ with the requisite properties. We then take up the interesting challenge of adding new axioms to the core CM interpretation. Two mutually incompatible ones are considered: one being the positive stability of identity and the other being a kind of necessity of indeterminate identity (among nilsquares). Consistency of the former is immediate, but the proof of consistency of the latter is a new result. Finally, we consider moving from CM to a three-valued, semi-classical framework, SCM, based on the strong Kleene axioms. This provides a way of expressing “indeterminacy” in the semantics of the logic, arguably improving on our CM interpretation. SCM is also proof-theoretically faithful, and the extensions by either of the new axioms are consistent.
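To illustrate how nilsquares replace limits (a standard feature of SIA, recalled here for orientation): the Kock–Lawvere axiom guarantees, for each function $f$ and point $x$, a unique number $f'(x)$ such that

$$f(x + e) = f(x) + e \cdot f'(x) \quad \text{for every nilsquare } e \ (\text{i.e., } e^2 = 0),$$

so differentiation proceeds purely algebraically.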
We present a family of minimal modal logics (namely, modal logics based on minimal propositional logic), each corresponding to a different classical modal logic. The minimal modal logics are defined on the basis of their classical counterparts in two distinct ways: (1) via embedding into fusions of classical modal logics through a natural extension of the Gödel–Johansson translation of minimal logic into modal logic S4; (2) via extension to modal logics of the multi- vs. single-succedent correspondence between sequent calculi for classical and minimal logic. We show that, despite being mutually independent, the two methods turn out to be equivalent for a wide class of modal systems. Moreover, we compare the resulting minimal version of K with the constructive modal logic CK studied in the literature, displaying tight relations between the two systems. Based on these relations, we also define a constructive correspondent for each minimal system, thus obtaining a family of constructive modal logics which includes CK as well as other constructive modal logics studied in the literature.
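For orientation, recall the clauses of the standard Gödel translation of intuitionistic logic into S4, which the Gödel–Johansson translation adapts to minimal logic (we display the textbook version, not the paper's extension):

$$p^{\ast } = \Box p, \quad (A \wedge B)^{\ast } = A^{\ast } \wedge B^{\ast }, \quad (A \vee B)^{\ast } = A^{\ast } \vee B^{\ast }, \quad (A \to B)^{\ast } = \Box (A^{\ast } \to B^{\ast }).$$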
In this paper we study logical bilateralism understood as a theory of two primitive derivability relations, namely provability and refutability, in a language devoid of a primitive strong negation and without a falsum constant, $\bot $, and a verum constant, $\top $. There is thus no negation that toggles between provability and refutability, and there are no primitive constants that are used to define an “implies falsity” negation and a “co-implies truth” co-negation. This reduction of expressive power notwithstanding, there remains some interaction between provability and refutability due to the presence of (i) a conditional and the refutability condition of conditionals and (ii) a co-implication and the provability condition of co-implications. Moreover, assuming a hyperconnexive understanding of refuting conditionals and a dual understanding of proving co-implications, neither non-trivial negation inconsistency nor hyperconnexivity is lost for unary negation connectives definable by means of certain surrogates of falsum and verum. Whilst a critical attitude towards $\bot $ and $\top $ can be justified by problematic aspects of the Brouwer–Heyting–Kolmogorov interpretation of the logical operations for these constants, the aim to reduce the availability of a toggling negation and observations on undefinability may also give further reasons to abandon $\bot $ and $\top $.
The notion of global supervenience captures the idea that the overall distribution of certain properties in the world is fixed by the overall distribution of certain other properties. A formal implementation of this idea in constant-domain Kripke models is as follows: predicates $Q_1,\dots ,Q_m$ globally supervene on predicates $P_1,\dots ,P_n$ in world w if two successors of w cannot differ with respect to the extensions of the $Q_i$ without also differing with respect to the extensions of the $P_i$. Equivalently: relative to the successors of w, the extensions of the $Q_i$ are functionally determined by the extensions of the $P_i$. In this paper, we study this notion of global supervenience, achieving three things. First, we prove that claims of global supervenience cannot be expressed in standard modal predicate logic. Second, we prove that they can be expressed naturally in an inquisitive extension of modal predicate logic, where they are captured as strict conditionals involving questions; as we show, this also sheds light on the logical features of global supervenience, which are tightly related to the logical properties of strict conditionals and questions. Third, by making crucial use of the notion of coherence, we prove that the relevant system of inquisitive modal logic is compact and has a recursively enumerable set of validities; these properties are non-trivial, since in this logic a strict conditional expresses a second-order quantification over sets of successors.
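In symbols, the definition just stated (our paraphrase, writing $P^{v}$ for the extension of predicate $P$ at world $v$) reads:

$$\text{for all successors } v, v' \text{ of } w: \quad \big (P_1^{v} = P_1^{v'} \wedge \cdots \wedge P_n^{v} = P_n^{v'}\big ) \ \Rightarrow \ \big (Q_1^{v} = Q_1^{v'} \wedge \cdots \wedge Q_m^{v} = Q_m^{v'}\big ).$$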
This paper presents a reverse mathematical analysis of several forms of the sorites paradox. We first illustrate how traditional discrete formulations are reliant on Hölder’s representation theorem for ordered Archimedean groups. While this is provable in $\mathsf {RCA}_0$, we also consider two forms of the sorites which rest on non-constructive principles: the continuous sorites of Weber & Colyvan [35] and a variant we refer to as the covering sorites. We show in the setting of second-order arithmetic that the former depends on the existence of suprema and thus on arithmetical comprehension ($\mathsf {ACA}_0$) while the latter depends on the Heine–Borel theorem and thus on Weak König’s lemma ($\mathsf {WKL}_0$). We finally illustrate how recursive counterexamples to these principles provide resolutions to the corresponding paradoxes which can be contrasted with supervaluationist, epistemicist, and constructivist approaches.
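For reference, the traditional discrete sorites turns on premises of the following jointly inconsistent form (a standard presentation, not the paper's exact formalization), where $F$ is a vague predicate along a sequence $a_0, \ldots , a_N$:

$$F(a_0), \qquad \forall i < N\, \big (F(a_i) \to F(a_{i+1})\big ), \qquad \neg F(a_N).$$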
Some informal arguments are valid, others are invalid. A core application of logic is to tell us which is which by capturing these validity facts. Philosophers and logicians have explored how well a host of logics carry out this role, familiar examples being propositional, first-order and second-order logic. Since natural language and standard logics are countable, a natural question arises: is there a countable logic guaranteed to capture the validity patterns of any language fragment? That is, is there a countable $\omega $-universal logic? Our article philosophically motivates this question, makes it precise, and then answers it. It is a self-contained, concise sequel to ‘Capturing Consequence’ by A.C. Paseau (RSL vol. 12, 2019).
In the topic-sensitive theory of the logic of imagination due to Berto [3], the topic of the imaginative output must be contained within the imaginative input. That is, imaginative episodes can never expand what they are about. We argue, with Badura [2], that this constraint is implausible from a psychological point of view, and that it wrongly predicts the falsehood of true reports of imagination. Thus the constraint should be relaxed; but how? A number of direct approaches to relaxing the controversial content-inclusion constraint are explored in this paper. The core idea is to consider adding an expansion operator to the mereology of topics. The logic that results depends on the formal constraints placed on topic expansion, the choice of which is subject to philosophical dispute. The first semantics we explore is a topological approach using a closure operator, and we show that the resulting logic is the same as Berto’s own system. The second approach uses an inclusive and monotone increasing operator, and we give a sound and complete axiomatization for its logic. The third approach uses an inclusive and additive operator, and we show that the associated logic is strictly weaker than the previous two systems, and that additivity is not definable in the language. The latter result suggests that more involved techniques or a more expressive language are required for a complete axiomatization of the system, which is left as an open question. All three systems are simple tweaks on Berto’s system in that the language remains propositional, and the underlying theory of topics is unchanged.
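In standard notation (our paraphrase, with $\sqsubseteq $ for topic parthood and $\oplus $ for topic fusion), the properties of an expansion operator $e$ considered above can be rendered as:

$$\text{inclusive: } t \sqsubseteq e(t); \qquad \text{monotone: } t \sqsubseteq s \Rightarrow e(t) \sqsubseteq e(s); \qquad \text{additive: } e(t \oplus s) = e(t) \oplus e(s);$$

a closure operator is, in addition, idempotent: $e(e(t)) = e(t)$.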
We say that a Kripke model is a GL-model (Gödel and Löb model) if the accessibility relation $\prec $ is transitive and converse well-founded. We say that a Kripke model is a D-model if it is obtained by attaching infinitely many worlds $t_1, t_2, \ldots $, and $t_\omega $ to a world $t_0$ of a GL-model so that $t_0 \succ t_1 \succ t_2 \succ \cdots \succ t_\omega $. A non-normal modal logic $\mathbf {D}$, which was studied by Beklemishev [3], is characterized as follows: a formula $\varphi $ is a theorem of $\mathbf {D}$ if and only if $\varphi $ is true at $t_\omega $ in any D-model. $\mathbf {D}$ is an intermediate logic between the provability logics $\mathbf {GL}$ and $\mathbf {S}$. A Hilbert-style proof system for $\mathbf {D}$ is known, but no sequent calculus has been available. In this paper, we establish two sequent calculi for $\mathbf {D}$ and prove the cut-elimination theorem. We also introduce new Hilbert-style systems for $\mathbf {D}$ by interpreting the sequent calculi. Moreover, we show that D-models can be defined using an arbitrary limit ordinal as well as $\omega $. Finally, we prove the following general result: let X and $X^+$ be arbitrary modal logics; if the relationship between the semantics of X and that of $X^+$ is the same as that between $\mathbf {GL}$ and $\mathbf {D}$, then $X^+$ can be axiomatized based on X in the same way as the new axiomatization of $\mathbf {D}$ based on $\mathbf {GL}$.
This paper provides a consistent first-order theory solving the knower paradoxes of Kaplan and Montague, with the following main features: 1. It solves the knower paradoxes by providing a faithful formalization of the principle of veracity (that knowledge requires truth), using both a knowledge and a truth predicate. 2. It is genuinely untyped, i.e., it is untyped not only in the sense that it uses a single knowledge predicate applying to all sentences in the language (including sentences in which this predicate occurs), but also in the sense that its axioms quantify over all sentences in the language, thus supporting comprehensive reasoning with untyped knowledge ascriptions. 3. Common knowledge predicates can be defined in the system using self-reference. These facts, together with a technique based on Löb’s theorem, enable it to support comprehensive reasoning with untyped common knowledge ascriptions (without having any axiom directly addressing common knowledge).
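For orientation, the paradox in outline (the standard Montague-style derivation, not the paper's formalism): diagonalization yields a sentence $\lambda $ with

$$\lambda \leftrightarrow \neg K(\ulcorner \lambda \urcorner ),$$

and the veracity instance $K(\ulcorner \lambda \urcorner ) \to \lambda $ then gives $\neg K(\ulcorner \lambda \urcorner )$, hence $\lambda $; if the theory can also recognize its own theorems as known, $K(\ulcorner \lambda \urcorner )$ follows, a contradiction.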
We explore general notions of consistency. These notions are sentences $\mathcal {C}_{\alpha }$ (they depend on numerations $\alpha $ of a certain theory) that generalize the usual features of consistency statements. The following forms of consistency fit the definition of general notions of consistency (${\texttt {Pr}}_{\alpha }$ denotes the provability predicate for the numeration $\alpha $): $\neg {\texttt {Pr}}_{\alpha }(\ulcorner \perp \urcorner )$, $\omega \text {-}{\texttt {Con}}_{\alpha }$ (the formalized $\omega $-consistency), $\neg {\texttt {Pr}}_{\alpha }(\ulcorner {\texttt {Pr}}_{\alpha }(\ulcorner \cdots {\texttt {Pr}}_{\alpha }(\ulcorner \perp \urcorner )\cdots \urcorner )\urcorner )$, and $n\text {-}{\texttt {Con}}_{\alpha }$ (the formalized n-consistency of Kreisel).
We generalize the former notions of consistency while maintaining two important features, to wit: Gödel’s Second Incompleteness Theorem, i.e., $T \nvdash \mathcal {C}_{\xi }$ (with $\xi $ some standard $\Delta _0(T)$-numeration of the axioms of T), and a result by Feferman that guarantees the existence of a numeration $\tau $ such that $T\vdash \mathcal {C}_\tau $.
We encompass slow consistency into our framework. To show how transversal and natural our approach is, we create a notion of provability from a given $\mathcal {C}_{\alpha }$, which we call $\mathcal {P}_{\mathcal {C}_{\alpha }}$, and we present sufficient conditions on $\mathcal {C}_{\alpha }$ for the notion $\mathcal {P}_{\mathcal {C}_{\alpha }}$ to satisfy the standard derivability conditions. Moreover, we also develop a notion of interpretability from a given $\mathcal {C}_{\alpha }$, which we call $\rhd _{\mathcal {C}_{\alpha }}$, and we study some of its properties. All these new notions—of provability and interpretability—serve primarily to emphasize the naturalness of our notions, not necessarily to give insights on these topics.
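For reference, the standard derivability conditions that a provability notion $\mathcal {P}$ is expected to satisfy are (in the usual Hilbert–Bernays–Löb form):

$$\text{(D1)}\ T \vdash \varphi \ \Longrightarrow \ T \vdash \mathcal {P}(\ulcorner \varphi \urcorner ); \quad \text{(D2)}\ T \vdash \mathcal {P}(\ulcorner \varphi \to \psi \urcorner ) \to (\mathcal {P}(\ulcorner \varphi \urcorner ) \to \mathcal {P}(\ulcorner \psi \urcorner )); \quad \text{(D3)}\ T \vdash \mathcal {P}(\ulcorner \varphi \urcorner ) \to \mathcal {P}(\ulcorner \mathcal {P}(\ulcorner \varphi \urcorner ) \urcorner ).$$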
The aim of this paper is to give a full exposition of Leibniz’s mereological system. My starting point will be his papers on Real Addition and the distinction between the containment and the part-whole relations. In the first part (§2), I expound the Real Addition calculus; in the second part (§3), I introduce the mereological calculus by restricting the containment relation via the notion of homogeneity, which results in the parthood relation (this corresponds to an extension of the Real Addition calculus via what I call the Homogeneity axiom). I analyze this notion in detail, and argue that it implies a gunk conception of (proper) parthood. Finally, in the third part (§4), I scrutinize some of the applications of the containment-parthood distinction, showing that a number of famous Leibnizian doctrines depend on it.
Glivenko’s theorem says that classical provability of a propositional formula entails intuitionistic provability of the double negation of that formula. This result stood right at the beginning of the success story of negative translations, which were indeed mainly designed for converting classically derivable formulae into intuitionistically derivable ones. We now generalise this approach: simultaneously from double negation to an arbitrary nucleus; from provability in a calculus to an inductively generated abstract consequence relation; and from propositional logic to any set of objects whatsoever. In particular, we give sharp criteria for the generalisation of classical logic to be a conservative extension of the corresponding generalisation of intuitionistic logic with double negation.
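In symbols, Glivenko's theorem for propositional $\varphi $ (with CL and IL for classical and intuitionistic propositional logic) reads

$$\vdash _{\mathrm {CL}} \varphi \ \iff \ \vdash _{\mathrm {IL}} \neg \neg \varphi ,$$

and the generalisation described above replaces the double-negation nucleus $\neg \neg $ by an arbitrary nucleus $j$ (an inflationary, idempotent, meet-preserving operator).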