The seventeenth century can be viewed as an era of (closely related) innovation in the formal and natural sciences and of paradigmatic diversity in philosophy, owing to the coexistence of at least the humanist, the late scholastic, and the early modern traditions. Within this environment, the present study focuses on scholastic logic and, in particular, on syllogistic. In seventeenth-century scholastic logic two different approaches can be identified: one represented by the Dominicans Báñez, Poinsot, and Comas del Brugar, the other represented by the Jesuits Hurtado, Arriaga, Oviedo, and Compton. These two groups of authors can be contrasted with respect to three prominent features. First, the role of the theory of validity, which is either a common basis for all particular theories (in this case, sentential logic and syllogistic) or a set of observations regarding a particular theory (in this case, syllogistic). Second, the view of syllogistic, which is either an implication of a general theory of validity together with a semantics of terms, or an algebra of structured objects. Third, the role of the scholastic analysis of language in terms of suppositio, which is either retained as a semantic underpinning of syllogistic or replaced by a semantics of propositions.

We will present a three-valued consequence relation for metainferences, called CM, defined through ST and TS, two well-known substructural consequence relations for inferences. While ST recovers every classically valid inference, it invalidates some classically valid metainferences. While CM works like ST at the inferential level, it also recovers every classically valid metainference. Moreover, CM can be safely expanded with a transparent truth predicate. Nevertheless, CM cannot recapture every classically valid meta-metainference. We will afterwards develop a hierarchy of consequence relations CMn for metainferences of level n (for 1 ≤ n < ω). Each CMn recovers every metainference of level n or less, and can be nontrivially expanded with a transparent truth predicate, but cannot recapture every classically valid metainference of higher level. Finally, we will present a logic CMω, based on the hierarchy of logics CMn, that is fully classical, in the sense that every classically valid metainference of any level is valid in it. Moreover, CMω can be nontrivially expanded with a transparent truth predicate.
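As standard background (not stated in the abstract itself): ST is usually defined over Strong Kleene valuations, with an inference holding at a valuation when strictly true premises (value 1) yield a tolerantly true conclusion (value ≥ 1/2). Under that standard definition, the best-known classically valid metainference that fails for ST is Cut; a minimal sketch finds the unique local counterexample by brute force:

```python
from itertools import product

# Strong Kleene values: 0 (false), 0.5 (neither), 1 (true).
# ST satisfaction at a valuation: if every premise has value 1,
# the conclusion must have value >= 0.5 (single-conclusion version).
def st_holds(prem_vals, concl_val):
    return (not all(x == 1 for x in prem_vals)) or concl_val >= 0.5

# Local counterexample to Cut: values for A, B, C such that
# A |- B and B |- C hold at the valuation but A |- C fails.
found = [
    (a, b, c)
    for a, b, c in product([0, 0.5, 1], repeat=3)
    if st_holds([a], b) and st_holds([b], c) and not st_holds([a], c)
]
print(found)  # [(1, 0.5, 0)]: A true, B 'neither', C false
```

With A true and B 'neither', A ⊢ B holds tolerantly; with B 'neither', B ⊢ C holds vacuously; yet A ⊢ C fails, so Cut is not locally valid for ST.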

In this article we study proofs of some general forms of the Second Incompleteness Theorem. These forms conform to the Feferman format, where the proof predicate is fixed and the representation of the set of axioms varies. We extend the Feferman framework in one important respect: we allow the interpretation of number theory to vary.

The large-structure tools of cohomology, including toposes and derived categories, stay close to arithmetic in practice, yet published foundations for them go beyond ZFC in logical strength. We reduce the gap by founding all the theorems of Grothendieck’s SGA, plus derived categories, at the level of Finite-Order Arithmetic, far below ZFC. This is the weakest possible foundation for the large-structure tools because one elementary topos of sets with infinity is already this strong.

In the article [2], a hierarchy of modal logics was defined to capture the logical features of Bayesian belief revision. Elements of that hierarchy were distinguished by the cardinality of the set of elementary propositions. By linking the modal logics in the hierarchy to the modal logics of Medvedev frames, it was shown that the modal logic of Bayesian belief revision determined by probabilities on a finite set of elementary propositions is not finitely axiomatizable. However, the infinite case remained open. In this article we prove that the modal logic of Bayesian belief revision determined by standard Borel spaces (these cover the probability spaces that occur in most applications) is also not finitely axiomatizable.

The variety DMM of De Morgan monoids has just four minimal subvarieties. The join-irreducible covers of these atoms in the subvariety lattice of DMM are investigated. One of the two atoms consisting of idempotent algebras has no such cover; the other has just one. The remaining two atoms lack nontrivial idempotent members. They are generated, respectively, by 4-element De Morgan monoids C4 and D4, where C4 is the only nontrivial 0-generated algebra onto which finitely subdirectly irreducible De Morgan monoids may be mapped by noninjective homomorphisms. The homomorphic preimages of C4 within DMM (together with the trivial De Morgan monoids) constitute a proper quasivariety, which is shown to have a largest subvariety U. The covers of the variety (C4) within U are revealed here. There are just ten of them (all finitely generated). In exactly six of these ten varieties, all nontrivial members have C4 as a retract. In the varietal join of those six classes, every subquasivariety is a variety—in fact, every finite subdirectly irreducible algebra is projective. Beyond U, all covers of (C4) [or of (D4)] within DMM are discriminator varieties. Of these, we identify infinitely many that are finitely generated, and some that are not. We also prove that there are just 68 minimal quasivarieties of De Morgan monoids.

We argue against Foreman’s proposal to settle the continuum hypothesis and other classical independent questions via the adoption of generic large cardinal axioms.

In a recent article, Barrett & Halvorson (2016) define a notion of equivalence for first-order theories, which they call “Morita equivalence.” To argue that Morita equivalence is a reasonable measure of “theoretical equivalence,” they make use of the claim that Morita extensions “say no more” than the theories they are extending. The goal of this article is to challenge this central claim by raising objections to their argument for it and by showing why there is good reason to think that the claim itself is false. In light of these criticisms, this article develops a natural way for the advocate of Morita equivalence to respond. I prove that this response makes her criterion a special case of bi-interpretability, an already well-established barometer of theoretical equivalence. I conclude by providing reasons why the advocate of Morita equivalence should opt for a notion of theoretical equivalence that is defined in terms of interpretability rather than Morita extensions.

We shall be concerned with the modal logic BK—which is based on the Belnap–Dunn four-valued matrix, and can be viewed as being obtained from the least normal modal logic K by adding ‘strong negation’. Though all four values ‘truth’, ‘falsity’, ‘neither’ and ‘both’ are employed in its Kripke semantics, only the first two are expressible as terms. We show that expanding the original language of BK to include constants for ‘neither’ and/or ‘both’ leads to quite unexpected results. To be more precise, adding one of these constants has the effect of eliminating the respective value at the level of BK-extensions. In particular, if one adds both of these, then the corresponding lattice of extensions turns out to be isomorphic to that of ordinary normal modal logics.
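For readers unfamiliar with the Belnap–Dunn matrix (standard background, not a claim of the abstract): the four values can be coded as pairs recording support for truth and support for falsity, which makes it immediate why strong negation fixes the values ‘neither’ and ‘both’. A minimal sketch, with my own encoding:

```python
# Belnap–Dunn values as (supports-truth, supports-falsity) pairs:
T, F, B, N = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(a):
    # Strong negation swaps truth- and falsity-support,
    # so it fixes B ('both') and N ('neither').
    return (a[1], a[0])

def conj(a, b):
    # True iff both conjuncts are true; false iff some conjunct is false.
    return (a[0] and b[0], a[1] or b[1])

print(neg(B) == B, neg(N) == N)  # True True
print(conj(T, N) == N)           # True
```

Since no term built from T and F alone can denote B or N, constants for these values genuinely extend the expressive power of the language, which is the point of departure for the abstract's results.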

Although much technical and philosophical attention has been given to relevance logics, the notion of relevance itself is generally left at an intuitive level. It is difficult to find in the literature an explicit account of relevance in formal reasoning. In this article I offer a formal explication of the notion of relevance in deductive logic and argue that this notion has an interesting place in the study of classical logic. The main idea is that a premise is relevant to an argument when it contributes to the validity of that argument. I then argue that the sequents which best embody this ideal of relevance are the so-called perfect sequents—that is, sequents which are valid but have no proper subsequents that are valid. Church’s theorem entails that there is no recursively axiomatizable proof-system that proves all and only the perfect sequents, so the project that emerges from studying perfection in classical logic is not one of finding a perfect subsystem of classical logic, but is rather a comparative study of classifying subsystems of classical logic according to how well they approximate the ideal of perfection.
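The definition can be made concrete with a brute-force check under standard classical semantics (a sketch; the encoding and function names are my own, not the article's): modus ponens turns out to be perfect, whereas explosion (p, ¬p ⊢ q) is valid but imperfect, since dropping q already leaves a valid subsequent.

```python
from itertools import product, combinations

# Formulas as nested tuples: ('var', 'p'), ('not', f), ('imp', f, g).
def ev(f, v):
    if f[0] == 'var': return v[f[1]]
    if f[0] == 'not': return not ev(f[1], v)
    if f[0] == 'imp': return (not ev(f[1], v)) or ev(f[2], v)

def atoms(f):
    if f[0] == 'var': return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def valid(premises, conclusions):
    # Classically valid: every valuation satisfying all premises
    # satisfies some conclusion.
    vs = set().union(set(), *(atoms(f) for f in premises + conclusions))
    for bits in product([False, True], repeat=len(vs)):
        v = dict(zip(sorted(vs), bits))
        if all(ev(p, v) for p in premises) and not any(ev(c, v) for c in conclusions):
            return False
    return True

def perfect(premises, conclusions):
    # Perfect: valid, and no proper subsequent is valid.
    if not valid(premises, conclusions):
        return False
    for i in range(len(premises) + 1):
        for j in range(len(conclusions) + 1):
            for ps in combinations(premises, i):
                for cs in combinations(conclusions, j):
                    if (i, j) == (len(premises), len(conclusions)):
                        continue  # skip the full sequent itself
                    if valid(list(ps), list(cs)):
                        return False
    return True

p, q = ('var', 'p'), ('var', 'q')
imp, notp = ('imp', p, q), ('not', p)

print(perfect([p, imp], [q]))   # True: modus ponens is perfect
print(valid([p, notp], [q]))    # True: explosion is valid...
print(perfect([p, notp], [q]))  # False: ...but not perfect
```

Church's theorem, as the abstract notes, rules out a recursive axiomatization of exactly the perfect sequents; but for any fixed finite sequent, perfection is decidable by exhaustive search like this.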
