We ask: when is a property of a model a logical property? According to the so-called Tarski–Sher criterion, this is the case when the property is preserved by isomorphisms. We relate this to model-theoretic characteristics of abstract logics in which the model class is definable. This results in a graded concept of logicality in the terminology of Sagi [46]. We investigate which characteristics of logics, such as variants of the Löwenheim–Skolem theorem, the completeness theorem, and absoluteness, are relevant from the logicality point of view, continuing earlier work by Bonnay, Feferman, and Sagi. We suggest that a logic is the more logical the closer it is to first-order logic. We also offer a refinement of the result of McGee that logical properties of models can be expressed in $L_{\infty \infty }$ if the expression is allowed to depend on the cardinality of the model, based on replacing $L_{\infty \infty }$ by a “tamer” logic.
The symmetries between points and lines in planar projective geometry and between points and planes in solid projective geometry are striking features of these geometries that were extensively discussed during the nineteenth century under the labels “duality” or “reciprocity.” The aims of this article are, first, to provide a systematic analysis of duality from a modern point of view, and, second, based on this, to give a historical overview of how discussions about duality evolved during the nineteenth century. Specifically, we want to see in which ways geometers’ preoccupation with duality was shaped by developments that led to modern logic towards the end of the nineteenth century, and how these developments in turn might have been influenced by reflections on duality.
We consider the complexity of special $\alpha $-limit sets, a kind of backward limit set for non-invertible dynamical systems. We show that these sets are always analytic, but not necessarily Borel, even in the case of a surjective map on the unit square. This answers a question posed by Kolyada, Misiurewicz, and Snoha.
We consider G, a linear algebraic group defined over $\Bbbk $, an algebraically closed field (ACF). By considering $\Bbbk $ as an embedded residue field of an algebraically closed valued field K, we can associate to it a compact G-space $S^\mu _G(\Bbbk )$ consisting of $\mu $-types on G. We show that for each $p_\mu \in S^\mu _G(\Bbbk )$, $\mathrm {Stab}^\mu (p)=\mathrm {Stab}\left (p_\mu \right )$ is a solvable infinite algebraic group when $p_\mu $ is centered at infinity and residually algebraic. Moreover, we give a description of the dimension of $\mathrm {Stab}\left (p_\mu \right )$ in terms of the dimension of p.
We prove completeness of preferential conditional logic with respect to convexity over finite sets of points in the Euclidean plane. A conditional is defined to be true in a finite set of points if all extreme points of the set interpreting the antecedent satisfy the consequent. Equivalently, a conditional is true if the antecedent is contained in the convex hull of the points that satisfy both the antecedent and consequent. Our result is then that every consistent formula without nested conditionals is satisfiable in a model based on a finite set of points in the plane. The proof relies on a result by Richter and Rogers showing that every finite abstract convex geometry can be represented by convex polygons in the plane.
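The truth condition described above is easy to make concrete. The following is a minimal illustrative sketch, not taken from the paper: the point encoding, the function names `convex_hull` and `conditional_holds`, and the predicate representation are all assumptions made for illustration. It evaluates a non-nested conditional A > B over a finite set of points in the plane by checking that every extreme point of the set satisfying A also satisfies B.

```python
# Illustrative sketch (not the paper's construction): evaluating a
# non-nested conditional A > B over a finite set of points in the plane.
# A > B holds iff every extreme point of [A] (the points satisfying A)
# satisfies B -- equivalently, [A] lies in the convex hull of [A and B].

def convex_hull(points):
    """Extreme points of a finite planar point set (Andrew's monotone chain).

    Collinear non-vertex points are excluded, so the result is exactly
    the set of extreme points.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared; drop duplicates

def conditional_holds(points, antecedent, consequent):
    """A > B is true in `points` iff all extreme points of [A] satisfy B."""
    a_points = [p for p in points if antecedent(p)]
    if not a_points:
        return True  # vacuous antecedent
    return all(consequent(p) for p in convex_hull(a_points))

# The centre (1, 1) of the square is not an extreme point, so the
# conditional "anywhere > not-centre" holds even though (1, 1) itself
# fails the consequent.
square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(conditional_holds(square, lambda p: True, lambda p: p != (1, 1)))
```

Note that, matching the completeness result, this sketch handles only formulas without nested conditionals; a nested conditional would require evaluating conditionals at the antecedent stage as well.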
This paper introduces three model-theoretic constructions for generalized Epstein semantics: reducts, ultramodels and $\textsf {S}$-sets. We apply these notions to obtain metatheoretical results. We prove connective inexpressibility by means of a reduct, compactness by means of an ultramodel, and a definability theorem which states that a set of generalized Epstein models is definable iff it is closed under ultramodels and $\textsf {S}$-sets. Furthermore, a corollary concerning definability of a set of models by a single formula is given on the basis of the main theorem and the compactness theorem. We also provide an example of a natural set of generalized Epstein models which is undefinable. Its undefinability is proven by means of an $\textsf {S}$-set.
There has been a recent interest in hierarchical generalizations of classic incompleteness results. This paper provides evidence that such generalizations are readily obtainable from suitably formulated hierarchical versions of the principles used in the original proofs. By collecting such principles, we prove hierarchical versions of Mostowski’s theorem on independent formulae, Kripke’s theorem on flexible formulae, Woodin’s theorem on the universal algorithm, and a few related results. As a corollary, we obtain the expected result that the formula expressing “$\mathrm {T}$ is $\Sigma _n$-ill” is a canonical example of a $\Sigma _{n+1}$ formula that is $\Pi _{n+1}$-conservative over $\mathrm {T}$.
A set of reals is universally Baire if all of its continuous preimages in topological spaces have the Baire property. ${\sf Sealing}$ is a type of generic absoluteness condition introduced by Woodin that asserts in strong terms that the theory of the universally Baire sets cannot be changed by set forcings. The ${\sf Largest\ Suslin\ Axiom}$ (${\sf LSA}$) is a determinacy axiom isolated by Woodin. It asserts that the largest Suslin cardinal is inaccessible for ordinal definable surjections. Let ${\sf LSA}$-${\sf over}$-${\sf uB}$ be the statement that in all (set) generic extensions there is a model of $\sf {LSA}$ whose Suslin, co-Suslin sets are the universally Baire sets. We outline the proof that over some mild large cardinal theory, $\sf {Sealing}$ is equiconsistent with $\sf {LSA}$-$\sf {over}$-$\sf {uB}$. In fact, we isolate an exact theory (in the hierarchy of strategy mice) that is equiconsistent with both (see Definition 3.1). As a consequence, we obtain that $\sf {Sealing}$ is weaker than the theory “$\sf {ZFC}$ + there is a Woodin cardinal which is a limit of Woodin cardinals.” This significantly improves upon the earlier consistency proof of $\sf {Sealing}$ by Woodin. A variation of $\sf {Sealing}$, called $\sf {Tower \ Sealing}$, is also shown to be equiconsistent with $\sf {Sealing}$ over the same large cardinal theory. We also outline the proof that if V has a proper class of Woodin cardinals, a strong cardinal, and a generically universally Baire iteration strategy, then $\sf {Sealing}$ holds after collapsing the successor of the least strong cardinal to be countable. This result is complementary to the aforementioned equiconsistency result, where it is shown that $\sf {Sealing}$ holds in a generic extension of a certain minimal universe. This theorem is more general in that no minimality assumption is needed. A corollary of this is that $\sf {LSA}$-$\sf {over}$-$\sf {uB}$ is not equivalent to $\sf {Sealing}$.
In Baghdad in the mid-twelfth century, Abū al-Barakāt proposes a radical new procedure for finding the conclusions of premise-pairs in syllogistic logic, and for identifying those premise-pairs that have no conclusions. The procedure makes no use of features of the standard Aristotelian apparatus, such as conversions or syllogistic figures. In place of these, al-Barakāt writes out pages of diagrams consisting of labelled horizontal lines. He gives no instructions and no proof that the procedure will yield correct results, so the reader has to work out what his procedure is and whether it is correct. The procedure turns out to be insightful and entirely correct, and this paper may be the first study to give a full description of the procedure and a rigorous proof of its correctness.
Strong negation is a well-known alternative to the standard negation in intuitionistic logic. It is defined, in effect, by giving falsity conditions for each of the connectives. Among these, the falsity condition for implication appears to deviate unnecessarily from the standard negation. In this paper, we introduce a slight modification to strong negation and observe its comparative advantages over the original notion. In addition, we consider the paraconsistent variants of our modification, and study their relationship with non-constructive principles and connexivity.
We present a new manifestation of Gödel’s second incompleteness theorem and discuss its foundational significance, in particular with respect to Hilbert’s program. Specifically, we consider a proper extension of Peano arithmetic ($\mathbf {PA}$) by a mathematically meaningful axiom scheme that consists of $\Sigma ^0_2$-sentences. These sentences assert that each computably enumerable ($\Sigma ^0_1$-definable without parameters) property of finite binary trees has a finite basis. Since this fact entails the existence of polynomial time algorithms, it is relevant for computer science. On a technical level, our axiom scheme is a variant of an independence result due to Harvey Friedman. At the same time, the meta-mathematical properties of our axiom scheme distinguish it from most known independence results: Due to its logical complexity, our axiom scheme does not add computational strength. The only known method to establish its independence relies on Gödel’s second incompleteness theorem. In contrast, Gödel’s theorem is not needed for typical examples of $\Pi ^0_2$-independence (such as the Paris–Harrington principle), since computational strength provides an extensional invariant on the level of $\Pi ^0_2$-sentences.
The paper provides a proof-theoretic characterization of the Russellian theory of definite descriptions (RDD) as characterized by Kalish, Montague and Mar (KMM). To this effect three sequent calculi are introduced: LKID0, LKID1 and LKID2. LKID0 is an auxiliary system which is easily shown to be equivalent to KMM. The main research is devoted to LKID1 and LKID2. The former is simpler in the sense of having a smaller number of rules and, after a small change, satisfies cut elimination, but fails to satisfy the subformula property. In LKID2 an additional analysis of different kinds of identities leads to a proliferation of rules but yields the subformula property. This refined proof-theoretic analysis, leading to a fully analytic calculus with a constructive proof of cut elimination, is the main contribution of the paper.
Inferentialism is a theory in the philosophy of language which claims that the meanings of expressions are constituted by inferential roles or relations. Instead of a traditional model-theoretic semantics, it naturally lends itself to a proof-theoretic semantics, where meaning is understood in terms of the inference rules of a proof system. Most work in proof-theoretic semantics has focused on logical constants, with comparatively little work on the semantics of non-logical vocabulary. Drawing on Robert Brandom’s notion of material inference and Greg Restall’s bilateralist interpretation of the multiple conclusion sequent calculus, I present a proof-theoretic semantics for atomic sentences and their constituent names and predicates. The resulting system has several interesting features: (1) the rules are harmonious and stable; (2) the rules create a structure analogous to familiar model-theoretic semantics; and (3) the semantics is compositional, in that the rules for atomic sentences are determined by those for their constituent names and predicates.
Jean Nicod (1893–1924) was a French philosopher and logician who worked with Russell during the First World War. His PhD thesis, with a preface by Russell, was published under the title La géométrie dans le monde sensible in 1924, the year of his untimely death. The book did not have the impact it deserved. In this paper, I discuss the methodological aspect of Nicod’s approach. My aim is twofold. I would first like to show that Nicod’s definition of various notions of equivalence between theories anticipates, in many respects, the (syntactic and semantic) model-theoretic notion of interpretation of a theory into another. I would secondly like to present the philosophical agenda that led Nicod to elaborate his logical framework: the defense of rationalism against Bergson’s attacks.
It is customary to expect of a logical system that it be algebraizable, in the sense that an algebraic companion of the deductive machinery can always be found. Since the inception of da Costa’s paraconsistent calculi, algebraic equivalents for such systems have been sought. It is known, however, that these systems are not self-extensional (i.e., they do not satisfy the replacement property). More than this, they are not algebraizable in the sense of Blok–Pigozzi. The same negative results hold for several systems of the hierarchy of paraconsistent logics known as Logics of Formal Inconsistency (LFIs). Because of this, several systems belonging to this class of logics are only characterizable by semantics of a non-deterministic nature. This paper offers a solution for two open problems in the domain of paraconsistency, in particular connected to the algebraization of LFIs, by extending several LFIs weaker than $C_1$ with rules, thus obtaining the replacement property (that is, such LFIs turn out to be self-extensional). Moreover, these logics become algebraizable in the standard Lindenbaum–Tarski sense by a suitable variety of Boolean algebras extended with additional operations. The weakest LFI satisfying replacement presented here is called RmbC, which is obtained from the basic LFI called mbC. Some axiomatic extensions of RmbC are also studied. In addition, a neighborhood semantics is defined for such systems. It is shown that RmbC can be defined within the minimal bimodal non-normal logic $\mathbf {E} {\oplus } \mathbf {E}$ defined by the fusion of the non-normal modal logic E with itself. Finally, the framework is extended to first-order languages. RQmbC, the quantified extension of RmbC, is shown to be sound and complete w.r.t. the proposed algebraic semantics.
A subset of the Cantor cube is null-additive if its algebraic sum with any null set is null. We construct a set of cardinality continuum such that: all continuous images of the set into the Cantor cube are null-additive, it contains a homeomorphic copy of a set that is not null-additive, and it has the property $\unicode{x3b3} $, a strong combinatorial covering property. We also construct a nontrivial subset of the Cantor cube with the property $\unicode{x3b3} $ that is not null-additive. The set-theoretic assumptions used in our constructions are far milder than those used earlier by Galvin–Miller and Bartoszyński–Recław to obtain sets with analogous properties. We also consider products of Sierpiński sets in the context of combinatorial covering properties.
In 1988, Sibe Mardešić and Andrei Prasolov isolated an inverse system $\textbf {A}$ with the property that the additivity of strong homology on any class of spaces which includes the closed subsets of Euclidean space would entail that $\lim ^n\textbf {A}$ (the nth derived limit of $\textbf {A}$) vanishes for every $n>0$. Since that time, the question of whether it is consistent with the $\mathsf {ZFC}$ axioms that $\lim ^n \textbf {A}=0$ for every $n>0$ has remained open. It remains possible as well that this condition in fact implies that strong homology is additive on the category of metric spaces.
We show that assuming the existence of a weakly compact cardinal, it is indeed consistent with the $\mathsf {ZFC}$ axioms that $\lim ^n \textbf {A}=0$ for all $n>0$. We show this via a finite-support iteration of Hechler forcings which is of weakly compact length. More precisely, we show that in any forcing extension by this iteration, a condition equivalent to $\lim ^n\textbf {A}=0$ will hold for each $n>0$. This condition is of interest in its own right; namely, it is the triviality of every coherent n-dimensional family of certain specified sorts of partial functions $\mathbb {N}^2\to \mathbb {Z}$ which are indexed in turn by n-tuples of functions $f:\mathbb {N}\to \mathbb {N}$. The triviality and coherence in question here generalise the classical and well-studied case of $n=1$.
We develop an untyped framework for the multiverse of set theory. $\mathsf {ZF}$ is extended with semantically motivated axioms utilizing the new symbols $\mathsf {Uni}(\mathcal {U})$ and $\mathsf {Mod}(\mathcal {U, \sigma })$, expressing that $\mathcal {U}$ is a universe and that $\sigma $ is true in the universe $\mathcal {U}$, respectively. Here $\sigma $ ranges over the augmented language, leading to liar-style phenomena that are analyzed. The framework is both compatible with a broad range of multiverse conceptions and suggests its own philosophically and semantically motivated multiverse principles. In particular, the framework is closely linked with a deductive rule of Necessitation expressing that the multiverse theory can only prove statements that it also proves to hold in all universes. We argue that this may be philosophically thought of as a Copernican principle that the background theory does not hold a privileged position over the theories of its internal universes. Our main mathematical result is a lemma encapsulating a technique for locally interpreting a wide variety of extensions of our basic framework in more familiar theories. We apply this to show, for a range of such semantically motivated extensions, that their consistency strength is at most slightly above that of the base theory $\mathsf {ZF}$, and thus not seriously limiting to the diversity of the set-theoretic multiverse. We end with case studies applying the framework to two multiverse conceptions of set theory: arithmetic absoluteness and Joel D. Hamkins’ multiverse theory.
We study the notion of weak canonical bases in an NSOP$_{1}$ theory T with existence. Given $p(x)=\operatorname {tp}(c/B)$ where $B=\operatorname {acl}(B)$ in ${\mathcal M}^{\operatorname {eq}}\models T^{\operatorname {eq}}$, the weak canonical base of p is the smallest algebraically closed subset of B over which p does not Kim-fork. With this aim we first show that the transitive closure $\approx $ of collinearity of an indiscernible sequence is type-definable. Second, we prove that given a total $\mathop {\smile \hskip -0.9em ^| \ }^K$-Morley sequence I in p, the weak canonical base of $\operatorname {tp}(I/B)$ is $\operatorname {acl}(e)$ if the hyperimaginary $I/\approx $ is eliminable to e, a sequence of imaginaries. We also supply a couple of criteria for when the weak canonical base of p exists. In particular, the weak canonical base of p is (if it exists) the intersection of the weak canonical bases of all total $\mathop {\smile \hskip -0.9em ^| \ }^K$-Morley sequences in p over B. However, in investigating some examples, we point out that two weak canonical bases of total $\mathop {\smile \hskip -0.9em ^| \ }^K$-Morley sequences in p need not be interalgebraic, contrary to the case of simple theories. Lastly, we suggest an independence relation relying on weak canonical bases, when T has them. This relation, satisfying transitivity and base monotonicity, might be useful in further studies on NSOP$_1$ theories.