This paper puts forward a new account of rigorous mathematical proof and its epistemology. One novel feature is a focus on how the skill of reading and writing valid proofs is learnt, as a way of understanding what validity itself amounts to. The account is used to address two current questions in the literature: that of how mathematicians are so good at resolving disputes about validity, and that of whether rigorous proofs are necessarily formalizable.
Several authors have investigated the question of whether canonical logic-based accounts of belief revision, and especially the theory of AGM revision operators, are compatible with the dynamics of Bayesian conditioning. Here we show that Leitgeb’s stability rule for acceptance, which has been offered as a possible solution to the Lottery paradox, makes it possible to bridge AGM revision and Bayesian update: using the stability rule, we prove that AGM revision operators emerge from Bayesian conditioning by an application of the principle of maximum entropy. In situations of information loss, or whenever the agent relies on a qualitative description of her information state—such as a plausibility ranking over hypotheses, or a belief set—the dynamics of AGM belief revision are compatible with Bayesian conditioning; indeed, through the maximum entropy principle, conditioning naturally generates AGM revision operators. This mitigates an impossibility theorem of Lin and Kelly for tracking Bayesian conditioning with AGM revision, and suggests an approach to the compatibility problem that highlights the information loss incurred by acceptance rules in passing from probabilistic to qualitative representations of belief.
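For orientation, a common formulation of the stability rule (a sketch of the standard notion; the paper's exact formulation may differ in details) accepts a hypothesis H relative to a probability measure P exactly when H is P-stable, that is, $P(H \mid E) > \tfrac{1}{2}$ for every E with $E \cap H \neq \emptyset$ and $P(E) > 0$. Accepting exactly the propositions entailed by a P-stable hypothesis yields a consistent, deductively closed belief set, which is what makes a comparison between the qualitative AGM dynamics and Bayesian conditioning possible.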
Henle, Mathias, and Woodin proved in [21] that if ${\omega }{\rightarrow }({\omega })^{{\omega }}$ holds in a model M of ZF, then forcing with $([{\omega }]^{{\omega }},{\subseteq }^*)$ over M adds no new sets of ordinals, thus earning the resulting extension the name of a “barren” extension. Moreover, under an additional assumption, they proved that this generic extension preserves all strong partition cardinals. This forcing thus produces a model $M[\mathcal {U}]$, where $\mathcal {U}$ is a Ramsey ultrafilter, with many properties of the original model M. This raised the question of how important the Ramseyness of $\mathcal {U}$ is for these results. In this paper, we show that several classes of $\sigma $-closed forcings which generate non-Ramsey ultrafilters have the same properties. Such ultrafilters include Milliken–Taylor ultrafilters, a class of rapid p-points of Laflamme, k-arrow p-points of Baumgartner and Taylor, and extensions to a class of ultrafilters constructed by Dobrinen, Mijares, and Trujillo. Furthermore, the Boolean algebras $\mathcal {P}({\omega }^{{\alpha }})/{\mathrm {Fin}}^{\otimes {\alpha }}$, $2\le {\alpha }<{\omega }_1$, which force non-p-points, also produce barren extensions.
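As a reminder of the notation (standard definitions, not specific to this paper): $A \subseteq^* B$ means that $A \setminus B$ is finite, so $([\omega]^\omega, \subseteq^*)$ is the collection of infinite subsets of $\omega$ ordered by almost-inclusion, and ${\omega}\rightarrow({\omega})^{\omega}$ asserts that for every coloring $c : [\omega]^\omega \to 2$ there is an infinite $H \subseteq \omega$ such that c is constant on $[H]^\omega$.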
Recent work in computability theory has focused on various notions of asymptotic computability, which capture the idea of a set being “almost computable.” One potentially upsetting result is that all four notions of asymptotic computability admit “almost computable” sets in every Turing degree via coding tricks, contradicting the notion that “almost computable” sets should be computationally close to the computable sets. In response, Astor introduced the notion of intrinsic density: a set has defined intrinsic density if its image under any computable permutation has the same asymptotic density. Furthermore, Astor introduced various notions of intrinsic computation in which the standard coding tricks cannot be used to embed intrinsically computable sets in every Turing degree. Our goal is to study the sets which are intrinsically small, i.e. those that have intrinsic density zero. We begin by studying which computable functions preserve intrinsic smallness. We also show that intrinsic smallness and hyperimmunity are computationally independent notions of smallness, i.e. any hyperimmune degree contains a Turing-equivalent hyperimmune set which is “as large as possible” and therefore not intrinsically small. Our discussion concludes by relativizing the notion of intrinsic smallness and discussing intrinsic computability as it relates to our study of intrinsic smallness.
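For concreteness (standard definitions, not specific to this paper), the density in question is the asymptotic density $\rho(A) = \lim_{n\to\infty} |A \cap \{0,1,\dots,n-1\}|/n$, so a set A is intrinsically small exactly when $\pi(A)$ has density zero for every computable permutation $\pi$ of $\omega$. For example, the set of factorials has density zero, but a computable permutation can map it onto the even numbers, so it is not intrinsically small.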
We prove first-order definability of the prime subring inside polynomial rings whose coefficient rings are (commutative unital) reduced and indecomposable. This is achieved by means of a uniform formula in the language of rings with signature $(0,1,+,\cdot )$. In the characteristic zero case, the claim implies that the full theory of rings of this type is undecidable. This extends a series of results by Raphael Robinson, holding for certain polynomial integral domains, to a more general class.
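To unpack the hypotheses (standard terminology): a commutative unital ring is reduced when it has no nonzero nilpotent elements ($x^n = 0$ implies $x = 0$), and indecomposable when its only idempotents are $0$ and $1$, i.e., it does not split as a nontrivial direct product; the prime subring is the subring generated by $1$.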
We show that if M is a countable transitive model of $\text {ZF}$ and if $a,b$ are reals not in M, then there is a G generic over M such that $b \in L[a,G]$. We then present several applications such as the following: if J is any countable transitive model of $\text {ZFC}$ and $M \not \subseteq J$ is another countable transitive model of $\text {ZFC}$ of the same ordinal height $\alpha $, then there is a forcing extension N of J such that $M \cup N$ is not included in any transitive model of $\text {ZFC}$ of height $\alpha $. Also, assuming $0^{\#}$ exists, letting S be the set of reals generic over L, although S is disjoint from the Turing cone above $0^{\#}$, we have that for any non-constructible real a, $\{ a \oplus s : s \in S \}$ is cofinal in the Turing degrees.
We set up a general context in which one can prove Sauer–Shelah type lemmas. We apply our general results to answer a question of Bhaskar [1] and give a slight improvement to a result of Malliaris and Terry [7]. We also prove a new Sauer–Shelah type lemma in the context of $ \operatorname {\textrm{op}}$-rank, a notion of Guingona and Hill [4].
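For context, the classical Sauer–Shelah lemma that such results refine states (standard formulation) that if a family $\mathcal{F} \subseteq 2^{X}$ over a finite set X with $|X| = n$ has VC dimension at most d, then $|\mathcal{F}| \le \sum_{i=0}^{d} \binom{n}{i}$.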
We consider the structures $(\mathbb {Z}; \mathrm {SF}^{\mathbb {Z}})$, $(\mathbb {Z}; <, \mathrm {SF}^{\mathbb {Z}})$, $(\mathbb {Q}; \mathrm {SF}^{\mathbb {Q}})$, and $(\mathbb {Q}; <, \mathrm {SF}^{\mathbb {Q}})$ where $\mathbb {Z}$ is the additive group of integers, $\mathrm {SF}^{\mathbb {Z}}$ is the set of $a \in \mathbb {Z}$ such that $v_{p}(a) < 2$ for every prime p and corresponding p-adic valuation $v_{p}$, $\mathbb {Q}$ and $\mathrm {SF}^{\mathbb {Q}}$ are defined likewise for rational numbers, and $<$ denotes the natural ordering on each of these domains. We prove that the second structure is model-theoretically wild while the other three structures are model-theoretically tame. Moreover, all these results can be seen as examples where number-theoretic randomness yields model-theoretic consequences.
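In other words, $\mathrm{SF}^{\mathbb{Z}}$ is the set of squarefree integers: requiring $v_{p}(a) < 2$ for every prime p says that no square $p^2$ divides a. For example, $10 = 2 \cdot 5$ is squarefree, while $12 = 2^2 \cdot 3$ is not, since $v_{2}(12) = 2$.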
Drawing on the analogy between any unary first-order quantifier and a “face operator,” this paper establishes several connections between model theory and homotopy theory. The concept of simplicial set is brought into play to describe the formulae of any first-order language L, the definable subsets of any L-structure, as well as the type spaces of any theory expressed in L. An adjunction result is then proved between the category of o-minimal structures and a subcategory of the category of linearly ordered simplicial sets with distinguished vertices.
Rybakov (1984a) proved that the admissible rules of $\mathsf {IPC}$ are decidable. We give a proof of the same theorem, using the same core idea, but couched in the many notions that have been developed in the meantime. In particular, we illustrate how the argument can be interpreted as using refinements of the notions of exactness and extendibility.
The Wadge hierarchy was originally defined and studied only in the Baire space (and some other zero-dimensional spaces). Here we extend the Wadge hierarchy of Borel sets to arbitrary topological spaces by providing a set-theoretic definition of all its levels. We show that our extension behaves well in second countable spaces and especially in quasi-Polish spaces. In particular, all levels are preserved by continuous open surjections between second countable spaces, which implies, e.g., several Hausdorff–Kuratowski (HK)-type theorems in quasi-Polish spaces. In fact, many results hold not only for the Wadge hierarchy of sets but also for its extension to Borel functions from a space to a countable better quasiorder Q.
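As a reminder (standard definition on the Baire space), the classical hierarchy is induced by Wadge reducibility: $A \le_{\mathrm{W}} B$ if and only if $A = f^{-1}(B)$ for some continuous map f from the space to itself; the levels of the hierarchy are the resulting degrees of subsets, which refine the Borel hierarchy.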
A problem is a multivalued function from a set of instances to a set of solutions. We consider only instances and solutions coded by sets of integers. A problem admits preservation of some computability-theoretic weakness property if every computable instance of the problem admits a solution relative to which the property holds. For example, cone avoidance is the ability, given a noncomputable set A and a computable instance of a problem ${\mathsf {P}}$, to find a solution relative to which A is still noncomputable.
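Spelled out symbolically (a direct rephrasing of the definition just given), ${\mathsf {P}}$ admits cone avoidance when for every noncomputable set A and every computable ${\mathsf {P}}$-instance X there is a solution Y to X with $A \not\le_{T} Y$.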
In this article, we compare relativized versions of computability-theoretic notions of preservation which have been studied in reverse mathematics, and prove that the ones which were not already separated by natural statements in the literature actually coincide. In particular, we prove that it is equivalent to admit avoidance of one cone, of $\omega $ cones, of one hyperimmunity or of one non-$\Sigma ^{0}_1$ definition. We also prove that the hierarchies of preservation of hyperimmunity and non-$\Sigma ^{0}_1$ definitions coincide. On the other hand, none of these notions coincide in a nonrelativized setting.
Let $\mathcal {N}(b)$ be the set of real numbers that are normal to base b. A well-known result of Ki and Linton [19] is that $\mathcal {N}(b)$ is $\boldsymbol {\Pi }^0_3$-complete. We show that the set ${\mathcal {N}}^\perp (b)$ of reals which preserve $\mathcal {N}(b)$ under addition is also $\boldsymbol {\Pi }^0_3$-complete. We use the characterization of ${\mathcal {N}}^\perp (b)$, given by Rauzy, in terms of an entropy-like quantity called the noise. It follows from our results that no further characterization theorems could result in a still better bound on the complexity of ${\mathcal {N}}^\perp (b)$. We compute the exact descriptive complexity of other naturally occurring sets associated with noise. One of these is complete at the $\boldsymbol {\Pi }^0_4$ level. Finally, we get upper and lower bounds on the Hausdorff dimension of the level sets associated with the noise.
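For orientation (standard definitions, paraphrased): a real is normal to base b when every block of k digits occurs in its base-b expansion with limiting frequency $b^{-k}$, and the set studied here is ${\mathcal{N}}^{\perp}(b) = \{\, y \in \mathbb{R} : x + y \in \mathcal{N}(b) \text{ for every } x \in \mathcal{N}(b) \,\}$.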
In the past four decades, the notion of quantum polynomial-time computability has been mathematically modeled by quantum Turing machines as well as quantum circuits. This paper seeks a third model, which is a quantum analogue of the schematic (inductive or constructive) definition of (primitive) recursive functions. For quantum functions mapping finite-dimensional Hilbert spaces to themselves, we present such a schematic definition, composed of a small set of initial quantum functions and a few construction rules that dictate how to build a new quantum function from the existing ones. We prove that our schematic definition precisely characterizes all functions that can be computed with high success probability in polynomial time on well-formed quantum Turing machines, or equivalently by uniform families of polynomial-size quantum circuits. Our new, schematic definition is quite simple and intuitive and, more importantly, it avoids the cumbersome introduction of the well-formedness condition imposed on a quantum Turing machine model as well as of the uniformity condition necessary for a quantum circuit model. Our new approach can further open a door to the descriptional complexity of quantum functions, to the theory of higher-type quantum functionals, to the development of new first-order theories for quantum computing, and to the design of programming languages for real-life quantum computers.
Consider a definably complete uniformly locally o-minimal expansion of the second kind of a densely linearly ordered abelian group. Let $f:X \rightarrow R^n$ be a definable map, where X is a definable set and R is the universe of the structure. We demonstrate the inequality $\dim (f(X)) \leq \dim (X)$ in this paper. As a corollary, we get that the set of the points at which f is discontinuous is of dimension smaller than $\dim (X)$. We also show that the structure is definably Baire in the course of the proof of the inequality.
We study the structure of families of theories in the language of arithmetic extended to allow these families to refer to one another and to themselves. If a theory contains schemata expressing its own truth and expressing a specific Turing index for itself, and contains some other mild axioms, then that theory is untrue. We exhibit some families of true self-referential theories that barely avoid this forbidden pattern.
If ${\mathfrak {F}}$ is a type-definable family of commensurable subsets, subgroups or subvector spaces in a metric structure, then there is an invariant subset, subgroup or subvector space commensurable with ${\mathfrak {F}}$. This in particular applies to type-definable or hyper-definable objects in a classical first-order structure.
Answering a question of Cifú Lopes, we give a syntactic characterization of those continuous sentences that are preserved under reduced products of metric structures. In fact, we settle this question in the wider context of general structures as introduced by the second author.
We give a short proof of the fundamental theorem of central element theory (see Sanchez Terraf and Vaggione, Varieties with definable factor congruences, T.A.M.S. 361). The original proof is constructive and quite involved, and relies heavily on the assumption that the class in question is a variety. Here we give a more direct nonconstructive proof which applies to the more general case of a first-order class closed under the formation of both direct products and direct factors.
A wide Aronszajn tree is a tree of size and height $\omega _{1}$ with no uncountable branches. We prove that under $MA(\omega _{1}\!)$ there is no wide Aronszajn tree which is universal under weak embeddings. This solves an open question of Mekler and Väänänen from 1994.
We also prove that under $MA(\omega _{1}\!)$, every wide Aronszajn tree weakly embeds in an Aronszajn tree, which, combined with a result of Todorčević from 2007, gives that under $MA(\omega _{1}\!)$ every wide Aronszajn tree embeds into a Lipschitz tree or a coherent tree. We also prove that under $MA(\omega _{1}\!)$ there is no wide Aronszajn tree which weakly embeds all Aronszajn trees, improving the result in the previous paragraph as well as a 2007 result of Todorčević, who proved that under $MA(\omega _{1}\!)$ there are no universal Aronszajn trees.
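For orientation, a weak embedding between trees is here commonly taken to be just a strictly increasing map, i.e., $f : T \to T'$ with $s <_{T} t$ implying $f(s) <_{T'} f(t)$, with no requirement that incomparable nodes be sent to incomparable nodes; universality under weak embeddings means that every wide Aronszajn tree admits such a map into the purported universal tree.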