Alongside the analogy between maximal ideals and complete theories, the Jacobson radical carries over from ideals of commutative rings to theories of propositional calculi. This prompts a variant of Lindenbaum’s Lemma that relates classical validity and intuitionistic provability, and whose syntactical counterpart is Glivenko’s Theorem. The Jacobson radical in fact turns out to coincide with the classical deductive closure. As a by-product we obtain a possible interpretation in logic of the axioms-as-rules conservation criterion for a multi-conclusion Scott-style entailment relation over a single-conclusion one.
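For orientation, the propositional Glivenko theorem referred to here can be stated as follows (standard textbook formulation, not the abstract's notation):

```latex
% Glivenko (1929): a propositional formula is classically provable
% from a theory T exactly when its double negation is provable
% intuitionistically from T.
T \vdash_{\mathrm{CPC}} \varphi
  \quad\Longleftrightarrow\quad
T \vdash_{\mathrm{IPC}} \neg\neg\varphi
```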

The complete characterisation of order types of non-standard models of Peano arithmetic and its extensions is a famous open problem. In this paper, we consider subtheories of Peano arithmetic (both with and without induction), in particular, theories formulated in proper fragments of the full language of arithmetic. We study the order types of their non-standard models and separate all considered theories via their possible order types. We compare the theories with and without induction and observe that the theories without induction tend to have an algebraic character that allows model constructions by closing a model under the relevant algebraic operations.

We show that, assuming the Axiom of Determinacy, every non-selfdual Wadge class can be constructed by starting with those of level (that is, the ones that are closed under Borel preimages) and iteratively applying the operations of expansion and separated differences. The proof is essentially due to Louveau, and it yields at the same time a new proof of a theorem of Van Wesep (namely, that every non-selfdual Wadge class can be expressed as the result of a Hausdorff operation applied to the open sets). The exposition is self-contained, except for facts from classical descriptive set theory.

When is an ideal of a ring radical or prime? By examining its generators, one may in many cases definably and uniformly test the ideal’s properties. We seek to establish such definable formulas in rings of p-adic power series, such as , , and related rings of power series over more general valuation rings and their fraction fields. We obtain a definable, uniform test for radicality, and, in the one-dimensional case, for primality. This builds upon the techniques stemming from the proof of the quantifier elimination results for the analytic theory of the p-adic integers by Denef and van den Dries, and the linear algebra methods of Hermann and Seidenberg.

Abstract prepared by Madeline G. Barnicle.

E-mail: barnicle@math.ucla.edu

The main topics of this thesis are cardinal invariants, P-points and

In the preliminaries we recall the principal properties of filters, ultrafilters, ideals,

The second chapter is dedicated to a principle of Sierpiński. The principle of Sierpiński is the following statement: There is a family of functions such that for every there is for which This principle was recently studied by Arnie Miller. He showed that this principle is equivalent to the following statement: There is a set such that for every there is such that if then is infinite (sets with that property are referred to as -Luzin sets). Miller showed that the principle of Sierpiński implies that

The third chapter is dedicated to a conjecture of Hrušák. Michael Hrušák conjectured the following: Every Borel cardinal invariant is either at most

In the fourth chapter we present a survey on destructibility of ideals and

The fifth chapter is one of the most important chapters in the thesis. A

In the fourth and fifth chapters, we introduce several notions of

In the seventh chapter we build models without P-points. We show that there are no P-points after adding Silver reals, either iteratively or by the side-by-side product. These results have some important consequences: The first is that it is possible to get rid of P-points using only definable forcings. This answers a question of Michael Hrušák. We can also use our results to build models with no P-points and with arbitrarily large continuum, which was also an open question. These results were obtained with David Chodounský.

Abstract prepared by Osvaldo Guzmán González.

E-mail: oguzman@matmor.unam.mx

From the interaction among areas such as Computer Science, Formal Logic, and Automated Deduction arises an important subject called Logic Programming, which has been used continuously both in theoretical studies and in practical applications across various fields of Artificial Intelligence. After the emergence of a wide variety of non-classical logics and the recognition of the limitations of first-order classical logic, it became necessary to consider logic programming based on types of reasoning other than classical reasoning. One type of reasoning that has been well studied is paraconsistent reasoning, that is, reasoning that tolerates contradictions. However, although there are many paraconsistent logics with different types of semantics, their application to logic programming is more delicate than it first appears, requiring an in-depth study of what can or cannot be transferred directly from classical first-order logic to other logics.

Based on the studies of Tarcisio Rodrigues on the foundations of Paraconsistent Logic Programming (2010) for some Logics of Formal Inconsistency (LFIs), this thesis resumes the research of Rodrigues and places it in the specific context of LFIs with three- and four-valued semantics. Logics of this kind are interesting from the computational point of view, as shown by Luiz Silvestrini in his Ph.D. thesis entitled “A new approach to the concept of quasi-truth” (2011), and by Marcelo Coniglio and Martín Figallo in the article “Hilbert-style presentations of two logics associated to tetravalent modal algebras” [Studia Logica (2012)]. Based on original techniques, this study aims to define well-founded systems of paraconsistent logic programming based on well-known logics, in contrast to the ad hoc approaches to this question found in the literature.

Abstract prepared by Kleidson Êglicio Carvalho da Silva Oliveira.

E-mail: kecso10@yahoo.com.br

URL: http://repositorio.unicamp.br/jspui/handle/REPOSIP/322632

We call a multioperation any operation that returns, for each argument, a set of values instead of a single value. An algebraic structure equipped with at least one multioperation is called a multialgebra. The study of multialgebras began in 1934 with the publication of a paper by Marty. In the realm of Logic, multialgebras were considered by Avron and his collaborators under the name of non-deterministic matrices (or Nmatrices) and used as a semantic tool for characterizing some logics which cannot be characterized by a single finite matrix. Carnielli and Coniglio introduced the semantics of swap structures for LFIs (Logics of Formal Inconsistency), which are Nmatrices defined over triples in a Boolean algebra, generalizing Avron’s semantics. In this thesis, we introduce a new method of algebraization of logics based on multialgebras and swap structures that is similar to the classical Lindenbaum–Tarski algebraization method, but broader, because it can be applied to systems in which some operators are non-congruential. In particular, this method is applied to a family of non-normal modal logics and to some LFIs that are not algebraizable by the very general techniques introduced by Blok and Pigozzi. We also obtain representation theorems for some LFIs and prove that, within our approach, the classes of swap structures for some axiomatic extensions of mbC are a subclass of the class of swap structures for the logic mbC.
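To make the idea of a non-deterministic matrix concrete, here is a toy two-valued Nmatrix in Python (the connective tables and all names are illustrative, not taken from the thesis): the negation of a false value may non-deterministically take either truth value, so a formula can have several possible values under a single assignment to its atoms.

```python
# Toy two-valued Nmatrix: conjunction is classical, but negation is
# non-deterministic -- the negation of a false value may be 0 or 1.
NEG = {1: {0}, 0: {0, 1}}
AND = {(a, b): {min(a, b)} for a in (0, 1) for b in (0, 1)}

def values(formula, v):
    """All values a formula can take under the atom assignment v,
    ranging over the legal valuations of this Nmatrix.  Formulas are
    tuples: ('atom', name), ('neg', f) or ('and', f, g)."""
    op = formula[0]
    if op == 'atom':
        return {v[formula[1]]}
    if op == 'neg':
        return {y for x in values(formula[1], v) for y in NEG[x]}
    return {z for x in values(formula[1], v)
              for y in values(formula[2], v) for z in AND[(x, y)]}

p = ('atom', 'p')
# Double negation does not collapse: with v(p) = 1, the formula
# "neg neg p" may come out either true or false.
assert values(('neg', ('neg', p)), {'p': 1}) == {0, 1}
```

This non-functional behaviour is exactly what a single finite (deterministic) matrix cannot exhibit, which is why Nmatrices can characterize logics that no single finite matrix captures.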

Abstract prepared by Ana Claudia de Jesus Golzio.

E-mail: anaclaudiagolzio@yahoo.com.br

URL: http://repositorio.unicamp.br/jspui/handle/REPOSIP/322436

Gazzari provides a mathematical theory of occurrences and of substitutions, which are a generalisation of occurrences constituting substitution functions. The dissertation focusses on term occurrences in terms of a first-order language, but the methods and results obtained there can easily be carried over to arbitrary kinds of occurrences in arbitrary kinds of languages.

The aim of the dissertation is twofold: first, Gazzari intends to provide an adequate formal representation of philosophically relevant concepts (not only of occurrences and substitutions, but also of substitution functions, of calculations, as well as of intuitively given properties of the discussed entities) and thereby to improve our understanding of these concepts; second, he intends to provide a formal exploration of the introduced concepts, including the detailed development of the methods needed for their adequate treatment.

The dissertation serves as a methodological fundament for consecutive research on topics demanding a precise treatment of occurrences and as a foundation for all scientific work dealing with occurrences only informally; the formal investigations are complemented by a brief survey of the development of the notion of occurrences in mathematics, philosophy and computer science.

The notion of occurrences. Occurrences are determined by three aspects: an occurrence is always an occurrence of a syntactic entity (its shape) in a syntactic entity (its context) at a specific position. Context and shape can be any meaningful combination of well-known syntactic entities such as, in logic, terms, formulae or formula trees. Gazzari’s crucial idea is to represent the position of occurrences by nominal forms, essentially as introduced by Schütte [2]. The nominal forms are a generalisation of standard syntactic entities in which so-called nominal symbols may occur. The position of an occurrence is obtained by eliminating the intended shape in the context, that is, by replacing the intended shape by suitable nominal symbols.

Standard occurrences. The central tool of the theory of nominal terms (nominal forms generalising standard terms) is the general substitution function, which maps a nominal term and a sequence of nominal terms to the result of simultaneously replacing the nominal symbols in the first argument by the respective entries of the second argument.

A triple is a standard occurrence if an application of the general substitution function to the position and the shape s results in the context t of that occurrence. As can occur more than once in , arbitrarily many single occurrences in the context t of the common shape s can be subsumed in . Gazzari illustrates the appropriateness of his approach by solving typical problems (counting formally the number of specific occurrences, deciding whether an occurrence lies within another) which are not solvable without a good theory of occurrences.
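The position-as-nominal-form idea can be sketched over strings in Python (the encoding of nominal symbols as `*0`, `*1`, … and all function names are my own illustration, not the dissertation's notation):

```python
import re

def general_subst(nominal, terms):
    """General substitution: replace every nominal symbol *i in
    `nominal` simultaneously by the i-th entry of `terms`."""
    return re.sub(r'\*(\d+)', lambda m: terms[int(m.group(1))], nominal)

def is_occurrence(position, shape, context):
    """(position, shape, context) is a standard occurrence when
    filling the position with the shape reproduces the context."""
    return general_subst(position, [shape]) == context

context = "g(f(x),f(x))"
# One position may subsume both single occurrences of the shape f(x)...
assert is_occurrence("g(*0,*0)", "f(x)", context)
# ...or single out only the first of them.
assert is_occurrence("g(*0,f(x))", "f(x)", context)
```

Eliminating the shape in the context yields the position, and the general substitution function inverts that elimination, which is what the occurrence check above exploits.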

Multi-shape occurrences. The multi-shape occurrences are the generalisation of standard occurrences, where the shape is a sequence of standard terms. Such occurrences subsume arbitrary non-overlapping single occurrences in the context t.

Gazzari addresses the non-trivial identity of such occurrences and their independence. The latter represents formally the idea of non-overlapping occurrences and is a far-reaching generalisation of disjointness as discussed by Huet with respect to single occurrences. Independent occurrences can be merged into one occurrence; an occurrence can be split up into independent occurrences.

Substitutions. A substitution satisfies that both and are occurrences such that the shapes have the same length. Such a substitution represents the replacement of in t at by resulting in . This means that a substitution is understood as a process and not as a (specific type of a) function.
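Read as a process, such a substitution can again be sketched over strings (a toy illustration with made-up names, assuming positions are written with nominal symbols `*0`, `*1`, …):

```python
import re

def general_subst(nominal, terms):
    """Replace every nominal symbol *i in `nominal` by terms[i]."""
    return re.sub(r'\*(\d+)', lambda m: terms[int(m.group(1))], nominal)

def apply_substitution(position, new_shape):
    """The result of the substitution process: the old shape is
    removed at `position` and `new_shape` is inserted there."""
    return general_subst(position, [new_shape])

# Replacing the shape f(x) at both of its occurrences in g(f(x),f(x)):
assert apply_substitution("g(*0,*0)", "h(y)") == "g(h(y),h(y))"
```

The same position determines both the occurrence that is removed and the place where the new shape is inserted, which is what makes the process reading (rather than a function reading) natural.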

Identity and independence are addressed again, using and extending the methods developed for occurrences; as before, independent substitutions can be merged, and substitutions can be split up into sequences of independent substitutions. Substitutions are used to formally represent calculations (as found in everyday mathematics) and to investigate them.

Sets of substitutions turn out to be set-theoretic functions mapping the affected occurrences and the inserted shapes to the result of a substitution . Such sets are called explicit substitution functions. In order to qualify functions which are usually understood as substitution functions (and which are not formulated in a theory of occurrences) as substitution functions, Gazzari develops the concept of an explication method transforming such functions into explicit substitution functions. The appropriateness and the (philosophical) limitations of this concept are illustrated with example functions.

Conclusion. Gazzari’s theory of occurrences is strong (not restricted to single occurrences), canonical (nominal forms are a canonical generalisation of the underlying syntactic entities) and general (presupposing the grammar for the underlying syntactic entities, suitable nominal forms are easily defined and the theory of occurrences is immediately carried over). Another advantage is a kind of methodological pureness: positions are generalised syntactic entities (and not extraneous, as sequences of natural numbers) and can be treated, in particular, with the well-known methods developed for the underlying syntactic entities.

Abstract prepared by René Gazzari.

E-mail: rene.gazzari@uni-tuebingen.de

This thesis divides naturally into two parts, each concerned with the extent to which the theory of can be changed by forcing.

The first part focuses primarily on applying generic-absoluteness principles to show that definable sets of reals enjoy regularity properties. The work in Part I is joint with Itay Neeman and is adapted from our paper Happy and mad families in , JSL, 2018. The project was motivated by questions about mad families, maximal families of infinite subsets of of which any two have only finitely many members in common. We begin, in the spirit of Mathias, by establishing (Theorem 2.8) a strong Ramsey property for sets of reals in the Solovay model, giving a new proof of Törnquist’s theorem that there are no infinite mad families in the Solovay model.

In Chapter 3 we stray from the main line of inquiry to briefly study a game-theoretic characterization of filters with the Baire Property.

Neeman and Zapletal showed, assuming roughly the existence of a proper class of Woodin cardinals, that the boldface theory of cannot be changed by proper forcing. They call their result the Embedding Theorem, because they conclude that in fact there is an elementary embedding from the of the ground model to that of the proper forcing extension. With a view toward analyzing mad families under and in under large-cardinal hypotheses, in Chapter 4 we establish triangular versions of the Embedding Theorem. These are enough for us to use Mathias’s methods to show (Theorem 4.5) that there are no infinite mad families in under large cardinals and (Theorem 4.9) that implies that there are no infinite mad families. These are again corollaries of theorems about strong Ramsey properties under large-cardinal assumptions and , respectively. Our first theorem improves the large-cardinal assumption under which Todorcevic established the nonexistence of infinite mad families in . Part I concludes with Chapter 5, a short list of open questions.

In the second part of the thesis, we undertake a finer analysis of the Embedding Theorem and its consistency strength. Schindler found that the Embedding Theorem is consistent relative to much weaker assumptions than the existence of Woodin cardinals. He defined remarkable cardinals, which can exist even in L, and showed that the Embedding Theorem is equiconsistent with the existence of a remarkable cardinal. His theorem resembles a theorem of Harrington–Shelah and Kunen from the 1980s: the absoluteness of the theory of to ccc forcing extensions is equiconsistent with a weakly compact cardinal. Joint with Itay Neeman, we improve Schindler’s theorem by showing that absoluteness for -closed ccc posets—instead of the larger class of proper posets—implies the remarkability of in L. This requires a fundamental change in the proof, since Schindler’s lower-bound argument uses Jensen’s reshaping forcing, which, though proper, need not be -closed ccc in that context. Our proof bears more resemblance to that of Harrington–Shelah than to Schindler’s.

The proof of Theorem 6.2 splits naturally into two arguments. In Chapter 7 we extend the Harrington–Shelah method of coding reals into a specializing function to allow for trees with uncountable levels that may not belong to L. This culminates in Theorem 7.4, which asserts that if there are and a tree of height such that X is codable along T (see Definition 7.3), then -absoluteness for ccc posets must fail.

We complete the argument in Chapter 8, where we show that if in any -closed extension of V there is no codable along a tree T, then must be remarkable in L.

In Chapter 9 we review Schindler’s proof of generic absoluteness from a remarkable cardinal to show that the argument gives a level-by-level upper bound: a strongly -remarkable cardinal is enough to get -absoluteness for -linked proper posets.

Chapter 10 is devoted to partially reversing the level-by-level upper bound of Chapter 9. Adapting the methods of Neeman, Hierarchies of forcing axioms II, we are able to show that -absoluteness for -linked posets implies that the interval is -remarkable in L.

Abstract prepared by Zach Norwood.

E-mail: zachnorwood@gmail.com

In the context of propositional logics, we apply semantics modulo satisfiability—a restricted semantics which comprehends only valuations that satisfy some specific set of formulas—with the aim of efficiently solving certain computational tasks. Three such applications are developed.

We begin by studying the possibility of implicitly representing rational McNaughton functions in Łukasiewicz Infinitely-valued Logic through semantics modulo satisfiability. We theoretically investigate some approaches to this representation concept, called representation modulo satisfiability, and describe a polynomial algorithm that builds representations in the newly introduced system. An implementation of the algorithm, test results, and ways to randomly generate rational McNaughton functions for testing are presented. Moreover, we propose an application of such representations to the formal verification of properties of neural networks by means of the reasoning framework of Łukasiewicz Infinitely-valued Logic.
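For background, the representation results above concern the standard Łukasiewicz truth functions over the interval [0, 1]; the sketch below (generic textbook definitions, not the thesis's algorithm) shows how a formula computes a piecewise-linear function:

```python
from fractions import Fraction as F

# Standard Łukasiewicz truth functions on [0, 1].
def lneg(x):      return 1 - x              # negation
def loplus(x, y): return min(1, x + y)      # strong disjunction
def limp(x, y):   return min(1, 1 - x + y)  # implication

# McNaughton's viewpoint: formulas compute continuous piecewise-linear
# functions.  E.g. the formula (not x) oplus y computes min(1, 1-x+y),
# which is exactly the Łukasiewicz implication.
assert loplus(lneg(F(7, 10)), F(1, 5)) == limp(F(7, 10), F(1, 5)) == F(1, 2)
```

Exact rationals are used here so that the identity of the two piecewise-linear functions can be checked without floating-point noise.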

Then, we move to the investigation of the satisfiability of joint probabilistic assignments to formulas of Łukasiewicz Infinitely-valued Logic, which is known to be an NP-complete problem. We provide an exact decision algorithm derived from the combination of linear-algebraic methods with semantics modulo satisfiability. We also provide an implementation of this algorithm, for which the phenomenon of phase transition is empirically detected.

Lastly, we study the game-theoretic situation of observable games, which are games that are known to reach a Nash equilibrium even though an external observer does not know the exact profile of actions occurring in a specific instance; such an observer therefore assigns subjective probabilities to the players’ actions. We study the decision problem of determining whether a set of these probabilistic constraints is coherent by reducing it to the problem of satisfiability of probabilistic assignments to logical formulas, in Classical Propositional Logic or in Łukasiewicz Infinitely-valued Logic depending on whether only pure equilibria or also mixed equilibria are allowed. These reductions rely upon the properties of semantics modulo satisfiability. We provide complexity and algorithmic discussion for the coherence problem and also for the problem of computing maximal and minimal probabilistic constraints on actions that preserve coherence.

Abstract prepared by Sandro Márcio da Silva Preto.

E-mail: spreto@ime.usp.br

We study various properties of formalised relativised interpretability. In the central part of this thesis we study for different interpretability logics the following aspects: completeness for modal semantics, decidability and algorithmic complexity.

In particular, we study two basic types of relational semantics for interpretability logics. One is the Veltman semantics, which we refer to as the regular or ordinary semantics; the other is called generalised Veltman semantics. In recent years, and especially during the writing of this thesis, generalised Veltman semantics was shown to be particularly well suited as a relational semantics for interpretability logics. In particular, modal completeness results are easier to obtain in some cases, and decidability can be proven via filtration in all known cases. We prove various new completeness results, and reprove some old ones, with respect to the generalised semantics. We use the method of filtration to obtain the finite model property for various logics.

Apart from results concerning semantics in its own right, we also apply methods from semantics to determine decidability (implied by the finite model property) and complexity of provability (and consistency) problems for certain interpretability logics.

From the arithmetical standpoint, we explore three different series of interpretability principles. For two of them, for which arithmetical and modal soundness was already known, we give a new proof of arithmetical soundness. The third series results from our modal considerations. We prove it arithmetically sound and also characterise frame conditions w.r.t. ordinary Veltman semantics. We also prove results concerning the new series and generalised Veltman semantics.

Abstract prepared by Luka Mikec.

E-mail: luka.mikec1@gmail.com

In this doctoral thesis, we show how the bounded functional interpretation of F. Ferreira and P. Oliva can be used in, and contribute to, the Proof Mining program, a program which aims to extract computational information from mathematical theorems using proof-theoretic techniques. We present a method for the elimination of sequential weak compactness arguments from the quantitative analysis of certain mathematical results. This method works as a “macro” and allowed us to obtain quantitative versions of important results of F. E. Browder, R. Wittmann, and H. H. Bauschke in fixed point theory in Hilbert spaces. Although the theorems of Browder and Wittmann were previously analyzed by U. Kohlenbach using the monotone functional interpretation, it was not clear why such analyses did not require the use of functionals defined by bar recursion. This phenomenon is now fully understood by means of a theoretical justification for the elimination of sequential weak compactness in the context of the bounded functional interpretation. Bauschke’s theorem is an important generalization of Wittmann’s theorem, and its original proof is also analyzed here. The analyses of these results also require a quantitative version of a projection argument, which turned out to be simpler when guided by the bounded functional interpretation than when using the monotone functional interpretation. In the context of the theory of monotone operators, results due to Boikanyo/Moroşanu and Xu for the strong convergence of variants of the proximal point algorithm are analyzed, and bounds on the metastability property of these iterations are obtained. These results are the first applications of the bounded functional interpretation to the proof mining of concrete mathematical results.

Abstract prepared by Pedro Pinto.

E-mail: pinto@mathematik.tu-darmstadt.de

This thesis is devoted to the exploration of the complexity of some mathematical problems using the framework of computable analysis and (effective) descriptive set theory. We will especially focus on Weihrauch reducibility as a means to compare the uniform computational strength of problems. After a short introduction of the relevant background notions, we investigate the uniform computational content of problems arising from theorems that lie at the higher levels of the reverse mathematics hierarchy.

We first analyze the strength of the open and clopen Ramsey theorems. Since there is not a canonical way to phrase these theorems as multi-valued functions, we identify eight different multi-valued functions (five corresponding to the open Ramsey theorem and three corresponding to the clopen Ramsey theorem) and study their degree from the point of view of Weihrauch, strong Weihrauch, and arithmetic Weihrauch reducibility.

We then discuss some new operators on multi-valued functions and study their algebraic properties and the relations with other previously studied operators on problems. In particular, we study the first-order part and the deterministic part of a problem f, capturing the Weihrauch degree of the strongest multi-valued problem that is reducible to f and that, respectively, has codomain or is single-valued.

These notions proved to be extremely useful when exploring the Weihrauch degree of the problem of computing descending sequences in ill-founded linear orders. They allow us to show that , and the Weihrauch-equivalent problem of finding bad sequences through non-well quasi-orders, while being very “hard” to solve, are rather weak in terms of uniform computational strength. We then generalize and by considering -presented orders, where is a Borel pointclass or , , . We study the obtained -hierarchy and -hierarchy of problems in comparison with the (effective) Baire hierarchy and show that they do not collapse at any finite level.

Finally, we work in the context of geometric measure theory and we focus on the characterization, from the point of view of descriptive set theory, of some conditions involving the notions of Hausdorff/Fourier dimension and Salem sets. We first work in the hyperspace of compact subsets of and show that the closed Salem sets form a -complete family. This is done by characterizing the complexity of the family of sets having sufficiently large Hausdorff or Fourier dimension. We also show that the complexity does not change if we increase the dimension of the ambient space and work in . We also generalize the results by relaxing the compactness of the ambient space and show that the closed Salem sets are still -complete when we endow with the Fell topology. A similar result holds also for the Vietoris topology.

We conclude by analyzing the same notions from the point of view of effective descriptive set theory and Type-2 Theory of Effectivity, and show that the complexities remain the same also in the lightface case. In particular, we show that the family of all the closed Salem sets is -complete. We furthermore characterize the Weihrauch degree of the functions computing Hausdorff and Fourier dimension of closed sets.

Abstract prepared by Manlio Valenti.

E-mail: manliovalenti@gmail.com
