Wenmackers and Romeijn [38] formalize ideas going back to Shimony [33] and Putnam [28] into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
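For orientation, the guarantee at issue can be glossed (in our words, not the paper’s) as follows: whenever the true hypothesis $H^*$ is assigned positive prior probability and the evidence suffices to discriminate among the hypotheses under consideration, the posterior merges with the truth, $$P\bigl(H^* \mid E_1, \dots, E_n\bigr) \longrightarrow 1 \quad \text{as } n \to \infty.$$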
This paper collects and presents unpublished notes of Kurt Gödel concerning the field of many-valued logic. In order to get a picture as complete as possible, both formal and philosophical notes, transcribed from the Gabelsberger shorthand system, are included.
I provide an analysis of sentences of the form ‘To be F is to be G’ in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F also makes it G. This approach is hyperintensional and possesses desirable logical and modal features. In particular, these sentences are reflexive, transitive, and symmetric, and if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I motivate my account over Correia and Skiles’ [11] prominent alternative and close by defining an irreflexive and asymmetric notion of analysis in terms of the symmetric and reflexive notion.
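Schematically, and in notation that is illustrative rather than drawn from the paper, the truth condition can be rendered as $$\text{`To be } F \text{ is to be } G\text{' is true} \quad\text{iff}\quad \forall x\, \forall s\, \bigl(s \Vdash F(x) \leftrightarrow s \Vdash G(x)\bigr),$$ where $s \Vdash \varphi$ means that state $s$ exactly verifies $\varphi$; reflexivity and symmetry are immediate from the biconditional form of the clause.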
We explore the problems that confront any attempt to explain or explicate exactly what a primitive logical rule of inference is, or consists in. We arrive at a proposed solution that places a surprisingly heavy load on the prospect of being able to understand and deal with specifications of rules that are essentially self-referring. That is, any rule $\rho $ is to be understood via a specification that involves, embedded within it, reference to rule $\rho $ itself. Just how we arrive at this position is explained by reference to familiar rules as well as less familiar ones with unusual features. An inquiry of this kind is surprisingly absent from the foundations of inferentialism—the view that meanings of expressions (especially logical ones) are to be characterized by the rules of inference that govern them.
I show that the logic $\textsf {TJK}^{d+}$, one of the strongest logics currently known to support the naive theory of truth, is obtained from the Kripke semantics for constant domain intuitionistic logic by (i) dropping the requirement that the accessibility relation is reflexive and (ii) only allowing reflexive worlds to serve as counterexamples to logical consequence. In addition, I provide a simplified natural deduction system for $\textsf {TJK}^{d+}$, in which a restricted form of conditional proof is used to establish conditionals.
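To give a schematic rendering of these two moves (our notation, not necessarily the paper’s): the conditional is evaluated by the usual Kripke clause, $w \Vdash A \to B$ iff $v \Vdash B$ whenever $wRv$ and $v \Vdash A$, but with $R$ not required to be reflexive, while consequence quantifies only over reflexive points: $$\Gamma \vDash \varphi \quad\text{iff}\quad \text{for every model and every world } w \text{ with } wRw\text{: if } w \Vdash \gamma \text{ for all } \gamma \in \Gamma, \text{ then } w \Vdash \varphi.$$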
This paper critically examines two arguments against the generic multiverse, both of which are due to W. Hugh Woodin. Versions of the first argument have appeared a number of times in print, while the second argument is relatively novel. We shall investigate these arguments through the lens of two different attitudes one may take toward the methodology and metaphysics of set theory; and we shall observe that the impact of these arguments depends significantly on which of these attitudes is upheld. Our examination of the second argument involves the development of a new (inner) model for Steel’s multiverse theory, which is delivered in the Appendix.
Partial differential equations (PDEs) are used with huge success to model phenomena across all scientific and engineering disciplines. However, across an equally wide swath, there exist situations in which PDEs fail to adequately model observed phenomena, or are not the best available model for that purpose. On the other hand, in many situations, nonlocal models that account for interaction occurring at a distance have been shown to more faithfully and effectively model observed phenomena that involve possible singularities and other anomalies. In this article we consider a generic nonlocal model, beginning with a short review of its definition, the properties of its solutions, its mathematical analysis, and specific concrete examples. We then provide extensive discussions about numerical methods, including finite element, finite difference and spectral methods, for determining approximate solutions of the nonlocal models considered. In that discussion, we pay particular attention to a special class of nonlocal models that are the most widely studied in the literature, namely those involving fractional derivatives. The article ends with brief considerations of several modelling and algorithmic extensions, which serve to show the wide applicability of nonlocal modelling.
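For concreteness (our choice of illustration; the article’s generic model may be stated more abstractly), a prototypical nonlocal operator is $$\mathcal{L}u(x) = \int_{\Omega} \bigl(u(y) - u(x)\bigr)\,\gamma(x, y)\,dy,$$ where the kernel $\gamma$ encodes interaction at a distance; the fractional Laplacian $$(-\Delta)^s u(x) = C_{d,s}\ \mathrm{p.v.}\!\int_{\mathbb{R}^d} \frac{u(x) - u(y)}{|x - y|^{d + 2s}}\,dy, \qquad 0 < s < 1,$$ is the special case most widely studied in the literature.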
Phase retrieval, i.e. the problem of recovering a function from the squared magnitude of its Fourier transform, arises in many applications, such as X-ray crystallography, diffraction imaging, optics, quantum mechanics and astronomy. This problem has confounded engineers, physicists, and mathematicians for many decades. Recently, phase retrieval has seen a resurgence in research activity, ignited by new imaging modalities and novel mathematical concepts. As our scientific experiments produce larger and larger datasets and we aim for faster and faster throughput, it is becoming increasingly important to study the involved numerical algorithms in a systematic and principled manner. Indeed, the past decade has witnessed a surge in the systematic study of computational algorithms for phase retrieval. In this paper we will review these recent advances from a numerical viewpoint.
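To make the problem concrete, here is a minimal sketch of the classical error-reduction iteration (a Gerchberg–Saxton-type scheme, among the oldest algorithms in this literature and not necessarily one of those surveyed); function and parameter names are illustrative.

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Classical error-reduction phase retrieval: alternate between the
    measured Fourier magnitudes and object-domain constraints."""
    rng = np.random.default_rng(seed)
    # Initialize with the measured magnitudes and random phases.
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    x = np.real(np.fft.ifft2(magnitude * phase))
    for _ in range(n_iter):
        # Fourier-domain step: keep the current phase, impose the magnitude.
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
        # Object-domain step: enforce the support and nonnegativity.
        x = np.where(support & (x > 0), x, 0.0)
    return x
```

Such alternating-projection schemes are simple to implement but can stagnate, which is one motivation for the newer algorithmic frameworks reviewed here.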
We review recent advances in algorithms for quadrature, transforms, differential equations and singular integral equations using orthogonal polynomials. Quadrature based on asymptotics has facilitated optimal complexity quadrature rules, allowing for efficient computation of quadrature rules with millions of nodes. Transforms based on rank structures in change-of-basis operators allow for quasi-optimal complexity, including in multivariate settings such as on triangles and for spherical harmonics. Ordinary and partial differential equations can be solved via sparse linear algebra when set up using orthogonal polynomials as a basis, provided that care is taken with the weights of orthogonality. A similar idea, together with low-rank approximation, gives an efficient method for solving singular integral equations. These techniques can be combined to produce high-performance codes for a wide range of problems that appear in applications.
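As a baseline illustration of the quadrature theme (using NumPy’s classical construction, not the asymptotics-based rules of the survey, which reach optimal complexity and scale to millions of nodes):

```python
import numpy as np

# Nodes and weights of the n-point Gauss-Legendre rule on [-1, 1].
n = 10_000
nodes, weights = np.polynomial.legendre.leggauss(n)

# Approximate the integral of exp(-x^2) over [-1, 1].
integral = weights @ np.exp(-nodes**2)
```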
This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues.
Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
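As one self-contained illustration of the randomized paradigm (a minimal Halko–Martinsson–Tropp-style sketch; names and defaults are our own, not from the survey):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Low-rank SVD via a Gaussian random embedding and a range finder."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sketch the range of A with a Gaussian test matrix.
    G = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ G)  # orthonormal basis for the sampled range
    # Solve the small projected problem, then lift back.
    B = Q.T @ A
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
```

The pattern of random embedding, orthogonalization, and a small dense solve recurs across many of the topics listed above.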
Essentially non-oscillatory (ENO) and weighted ENO (WENO) schemes were designed for solving hyperbolic and convection–diffusion equations with possibly discontinuous solutions or solutions with sharp gradient regions. The main idea of ENO and WENO schemes is actually an approximation procedure, aimed at achieving arbitrarily high-order accuracy in smooth regions and resolving shocks or other discontinuities sharply and in an essentially non-oscillatory fashion. Both finite volume and finite difference schemes have been designed using the ENO or WENO procedure, and these schemes are very popular in applications, most notably in computational fluid dynamics but also in other areas of computational physics and engineering. Since the main idea of the ENO and WENO schemes is an approximation procedure not directly related to partial differential equations (PDEs), ENO and WENO schemes also have non-PDE applications. In this paper we will survey the basic ideas behind ENO and WENO schemes, discuss their properties, and present examples of their applications to different types of PDEs as well as to non-PDE problems.
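The core reconstruction step is compact enough to sketch (a fifth-order WENO-JS reconstruction with the standard Jiang–Shu smoothness indicators; this is a generic textbook form, not code from the paper):

```python
def weno5_reconstruct(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    """Fifth-order WENO-JS value at the interface x_{i+1/2} from the
    five cell values f_{i-2}, ..., f_{i+2} (left-biased stencil)."""
    # Third-order candidate reconstructions on the three substencils.
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6
    p2 = (2*f0 + 5*fp1 - fp2) / 6
    # Jiang-Shu smoothness indicators: large where the substencil is rough.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    # Nonlinear weights: close to the optimal linear weights (1/10, 6/10,
    # 3/10) in smooth regions; substencils crossing a shock are suppressed.
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```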
The semiclassically scaled time-dependent multi-particle Schrödinger equation describes, inter alia, quantum dynamics of nuclei in a molecule. It poses the combined computational challenges of high oscillations and high dimensions. This paper reviews and studies numerical approaches that are robust to the small semiclassical parameter. We present and analyse variationally evolving Gaussian wave packets, Hagedorn’s semiclassical wave packets, continuous superpositions of both thawed and frozen Gaussians, and Wigner function approaches to the direct computation of expectation values of observables. Making good use of classical mechanics is essential for all these approaches. The arising aspects of time integration and high-dimensional quadrature are also discussed.
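In a standard scaling (given here for orientation), the equation reads $$i\varepsilon\,\partial_t \psi = -\frac{\varepsilon^2}{2}\,\Delta \psi + V(x)\,\psi, \qquad 0 < \varepsilon \ll 1,$$ where the semiclassical parameter $\varepsilon$ is small because nuclei are far heavier than electrons; solutions oscillate on scales of order $\varepsilon$, which is the source of the numerical difficulty.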
This paper explores the analysis of ability, where ability is to be understood in the epistemic sense—in contrast to what might be called a causal sense. There are plenty of cases where an agent is able to perform an action that guarantees a given result even though she does not know which of her actions guarantees that result. Such an agent possesses the causal ability but lacks the epistemic ability. The standard analysis of such epistemic abilities relies on the notion of action types—as opposed to action tokens—and then posits that an agent has the epistemic ability to do something if and only if there is an action type available to her that she knows guarantees it. We show that these action types are not needed: we present a formalism without action types that can simulate analyses of epistemic ability that rely on action types. Our formalism is a standard epistemic extension of the theory of “seeing to it that”, which arose from a modal tradition in the logic of action.
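Schematically, and in notation of our own choosing rather than the paper’s, the standard analysis can be written as $$\mathsf{EA}_i\,\varphi \quad\text{iff}\quad \exists \tau\,\bigl(\mathsf{Avail}_i(\tau) \wedge K_i\,[\tau]\,\varphi\bigr),$$ where $\tau$ ranges over action types, $\mathsf{Avail}_i(\tau)$ says that $\tau$ is available to agent $i$, and $[\tau]\varphi$ says that performing $\tau$ guarantees $\varphi$; the paper’s formalism recovers the effect of this analysis without the quantification over types.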
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be nontrivially Bayes-compatible. We show by contrast that geometric pooling can be nontrivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
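For reference, the two pooling rules at issue are standardly defined, for probability functions $P_1, \dots, P_n$ over a finite space of worlds and nonnegative weights $w_1, \dots, w_n$ summing to one, by $$\mathrm{lin}(P_1, \dots, P_n)(\omega) = \sum_{i} w_i\,P_i(\omega), \qquad \mathrm{geo}(P_1, \dots, P_n)(\omega) = \frac{\prod_i P_i(\omega)^{w_i}}{\sum_{\omega'} \prod_i P_i(\omega')^{w_i}},$$ with geometric pooling applied pointwise and then renormalized.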
This paper explores relational syllogistic logics, a family of logical systems for reasoning about relations in extensions of the classical syllogistic. These are all decidable logical systems. We prove completeness theorems and complexity results for a natural subfamily of relational syllogistic logics, parametrized by constructors for terms and for sentences.
In this article, I provide Urquhart-style semilattice semantics for three connexive logics in an implication-negation language (I call these “pure theories of connexive implication”). The systems semantically characterized include the implication-negation fragment of a connexive logic of Wansing, a relevant connexive logic recently developed proof-theoretically by Francez, and an intermediate system that is novel to this article. Simple proofs of soundness and completeness are given and the semantics is used to establish various facts about the systems (e.g., that two of the systems have the variable sharing property). I emphasize the intuitive content of the semantics and discuss how natural informational considerations underlie each of the examined systems.