Herbrand’s theorem is one of the most fundamental insights in logic. From the syntactic point of view, it suggests a compact representation of proofs in classical first- and higher-order logic that records which instances have been chosen for which quantifiers. This compact representation is known in the literature as Miller’s expansion tree proof. It is inherently analytic and hence corresponds to a cut-free sequent calculus proof. Recently, several extensions of such proof representations to proofs with cuts have been proposed. These extensions are based on graphical formalisms similar to proof nets and are limited to prenex formulas.
In this paper, we present a new syntactic approach that directly extends Miller’s expansion trees by cuts and also covers non-prenex formulas. We describe a cut-elimination procedure for our expansion trees with cut that is based on the natural reduction steps, and we show that it is weakly normalizing.
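To make the idea of recording quantifier instances concrete, here is a minimal illustrative sketch (my own naming and simplification, not the paper's formalism) of an expansion-tree-like datatype with quantifier nodes that record their instance terms, extended with a hypothetical cut node:

```python
# Illustrative sketch: a tiny expansion-tree datatype. Quantifier nodes
# record the instance terms chosen for them; a Cut node pairs a tree for
# a formula with one for its dual, mirroring the extension by cuts.
from dataclasses import dataclass, field

@dataclass
class Atom:
    name: str

@dataclass
class And:
    left: object
    right: object

@dataclass
class Exists:
    var: str
    # each expansion pairs a witness term with the subtree for that instance
    expansions: list = field(default_factory=list)  # [(term, subtree), ...]

@dataclass
class Cut:
    positive: object  # expansion tree for the cut formula
    negative: object  # expansion tree for its negation

def instances(tree):
    """Collect all quantifier instances recorded in a tree."""
    if isinstance(tree, Exists):
        for term, sub in tree.expansions:
            yield (tree.var, term)
            yield from instances(sub)
    elif isinstance(tree, And):
        yield from instances(tree.left)
        yield from instances(tree.right)
    elif isinstance(tree, Cut):
        yield from instances(tree.positive)
        yield from instances(tree.negative)

# Example: two instances recorded for an existential quantifier.
t = Exists("x", [("a", Atom("P(a)")), ("f(a)", Atom("P(f(a))"))])
print(list(instances(t)))  # [('x', 'a'), ('x', 'f(a)')]
```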
The famous Erdős–Gallai theorem on the Turán number of paths states that every graph with $n$ vertices and $m$ edges contains a path with at least $2m/n$ edges. In this note, we first establish a simple but novel extension of the Erdős–Gallai theorem by proving that every graph $G$ contains a path with at least $\frac{(s+1)N_{s+1}(G)}{N_s(G)}+s-1$ edges for each $1\le s\le \omega(G)-1$, where $N_j(G)$ denotes the number of $j$-cliques in $G$ for $1\le j\le \omega(G)$ and $\omega(G)$ is the clique number of $G$. We also construct a family of graphs which shows that our extension improves the estimate given by the Erdős–Gallai theorem. Among applications, we show, for example, that the main results of [20], which concern the maximum possible number of $s$-cliques in an $n$-vertex graph without a path with $\ell$ vertices (and without cycles of length at least $c$), can be easily deduced from this extension. Indeed, to prove these results, Luo [20] generalized a classical theorem of Kopylov and established a tight upper bound on the number of $s$-cliques in an $n$-vertex 2-connected graph with circumference less than $c$. We prove a similar result for an $n$-vertex 2-connected graph with circumference less than $c$ and large minimum degree. We conclude this paper with an application of our results to a problem from spectral extremal graph theory on consecutive lengths of cycles in graphs.
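As a consistency check, the case $s=1$ of the bound above recovers the classical Erdős–Gallai estimate, since $N_1(G)=n$ (vertices) and $N_2(G)=m$ (edges):

```latex
\[
  \left.\frac{(s+1)\,N_{s+1}(G)}{N_s(G)} + s - 1\right|_{s=1}
  \;=\; \frac{2\,N_2(G)}{N_1(G)} \;=\; \frac{2m}{n}.
\]
```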
Given a graph $G$ and a bijection $f : E(G) \to \{1, 2,\ldots,e(G)\}$, we say that a trail/path in $G$ is $f$-increasing if the labels of consecutive edges of this trail/path form an increasing sequence. More than 40 years ago, Chvátal and Komlós raised the question of providing worst-case estimates of the length of the longest increasing trail/path over all edge orderings of $K_n$. The case of a trail was resolved by Graham and Kleitman, who proved that the answer is $n-1$, and the case of a path is still wide open. Recently, Lavrov and Loh proposed studying the average-case version of this problem, in which the edge ordering is chosen uniformly at random. They conjectured (and Martinsson later proved) that such an ordering with high probability (w.h.p.) contains an increasing Hamilton path.
In this paper we consider the random graph $G = G_{n,p}$ with an edge ordering chosen uniformly at random. In this setting we determine w.h.p. the asymptotics of the number of edges in the longest increasing trail. In particular, we prove an average-case version of the result of Graham and Kleitman, showing that a random edge ordering of $K_n$ w.h.p. contains an increasing trail of length $(1-o(1))en$, and that this is tight. We also obtain an asymptotically tight result for the length of the longest increasing path in Erdős–Rényi random graphs with $p = o(1)$.
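The abstract does not spell out how the longest increasing trail is computed for a fixed ordering; the following is a minimal sketch (function name and interface are mine) of the standard dynamic-programming scan over edges in label order:

```python
# Sketch: longest increasing trail for a given edge ordering. Scan edges
# in increasing label order; f[v] = length of the longest increasing trail
# ending at v using the edges seen so far. Each new edge (with the largest
# label so far) can extend the best trail ending at either endpoint.
def longest_increasing_trail(n, ordered_edges):
    """ordered_edges: list of (u, v) pairs sorted by increasing label."""
    f = [0] * n
    for u, v in ordered_edges:
        fu, fv = f[u] + 1, f[v] + 1  # extensions via this edge
        f[v] = max(f[v], fu)
        f[u] = max(f[u], fv)
    return max(f, default=0)

# Example: a triangle labelled along the cycle has an increasing trail
# using all three edges.
print(longest_increasing_trail(3, [(0, 1), (1, 2), (2, 0)]))  # -> 3
```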
The genesis of this special issue was in a meeting that took place at Université Paris Diderot on December 15 and 16, 2016. Dale Miller, Professor at École polytechnique, had turned 60 a few days earlier. In a career spanning over three decades, and in work conducted in collaboration with several students and colleagues, Dale had had a significant influence on an area that can be described as structural proof theory and its application to computation and reasoning. In recognition of this fact, several of his collaborators thought it appropriate to celebrate the occasion by organizing a symposium on topics broadly connected to his areas of interest and achievements. The meeting was a success in several senses: it was attended by over 35 people, there were 15 technical presentations describing new results, and, quite gratifyingly, we managed to spring the event on Dale as a complete surprise.
This paper presents a non-model-based collision detection algorithm for robots without external sensors and with a closed control architecture. A reference signal of repetitive motion is recorded from the robot operation. To detect collisions, the reference is compared with measurements from the robot. One of the key contributions is a novel approach to the optimal matching of the compared signals, ensured by the newly developed modified Dynamic Time Warping (mDTW) method presented in this paper. One of the main improvements of mDTW is that it enables comparing a signal with the most similar section of the other signal. Partial matching also enables online application of time-warping principles and reduces the time and computational resources needed to perform matching. In addition to mDTW, two complementary decision rules are developed to identify collisions. The first rule, based on the absolute difference between matched samples, uses statistically determined thresholds to perform rapid detection of unambiguous collisions. The second rule is based on the eigenvalues of the covariance matrix of matched samples and exploits their higher sensitivity to detect collisions of lower intensity. Results from experimental validation of the proposed collision detection algorithm on two industrial robots are presented.
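The abstract does not give the details of mDTW; the following is a minimal sketch of subsequence-style dynamic time warping, a standard technique for the kind of partial matching described above, in which a query may align with any contiguous section of a longer reference (names and simplifications are mine, not the paper's method):

```python
import numpy as np

# Sketch of subsequence-style DTW: unlike full DTW, the query is allowed
# to start and end anywhere in the reference, which is what permits
# comparing a signal with the most similar section of the other signal.
def subsequence_dtw_cost(query, reference):
    q, r = len(query), len(reference)
    D = np.full((q + 1, r + 1), np.inf)
    D[0, :] = 0.0  # the match may start anywhere in the reference
    for i in range(1, q + 1):
        for j in range(1, r + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # the match may also end anywhere in the reference
    return D[q, 1:].min()

# Example: a short motion segment found inside a longer recorded reference.
ref = [0.0, 0.1, 1.0, 2.0, 1.0, 0.1, 0.0]
print(subsequence_dtw_cost([1.0, 2.0, 1.0], ref))  # ~0: section matched
```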
We compare the expressive power of three programming abstractions for user-defined computational effects: Plotkin and Pretnar’s effect handlers, Filinski’s monadic reflection, and delimited control. This comparison allows a precise discussion about the relative expressiveness of each programming abstraction. It also demonstrates the sensitivity of the relative expressiveness of user-defined effects to seemingly orthogonal language features. We present three calculi, one per abstraction, extending Levy’s call-by-push-value. For each calculus, we present syntax, operational semantics, a natural type-and-effect system, and, for effect handlers and monadic reflection, a set-theoretic denotational semantics. We establish their basic metatheoretic properties: safety, termination, and, where applicable, soundness and adequacy. Using Felleisen’s notion of a macro translation, we show that these abstractions can macro-express each other, and we show which translations preserve typeability. We use the adequate finitary set-theoretic denotational semantics for the monadic calculus to show that effect handlers cannot be macro-expressed while preserving typeability, either by monadic reflection or by delimited control. Our argument breaks down under simple changes to the type system, such as polymorphism and inductive types. We supplement our development with a mechanised Abella formalisation.
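As a toy illustration of user-defined effects and handlers (a Python-generator emulation under my own naming, not one of the paper's calculi), a computation can yield its effect operations to a handler that interprets them, playing the role of the handler's operation clauses:

```python
# Toy emulation of a user-defined State effect: the computation yields
# ("get", _) and ("put", value) operations; the handler interprets them
# against an explicit state, resuming the computation with the result.
def counter():
    n = yield ("get", None)      # perform the get operation
    yield ("put", n + 1)         # perform the put operation
    n = yield ("get", None)
    return f"counter is now {n}"

def run_state(comp, state):
    """Handle get/put operations of a generator-based computation."""
    gen = comp()
    try:
        op, arg = gen.send(None)  # start the computation
        while True:
            if op == "get":
                op, arg = gen.send(state)   # resume with current state
            elif op == "put":
                state = arg                 # update the handled state
                op, arg = gen.send(None)
    except StopIteration as stop:
        return stop.value, state            # computation's return value

print(run_state(counter, 41))  # ('counter is now 42', 42)
```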
In the literature, there have been several methods and definitions for working out whether two theories are “equivalent” (essentially the same) or not. In this article, we do something subtler. We provide a means to measure distances (and explore connections) between formal theories. We introduce two natural notions for such distances. The first one is that of axiomatic distance, but we argue that it might be of limited interest. The more interesting and widely applicable notion is that of conceptual distance which measures the minimum number of concepts that distinguish two theories. For instance, we use conceptual distance to show that relativistic and classical kinematics are distinguished by one concept only.
Value models are increasingly discussed today as a means to front-load conceptual design activities in engineering design, with the final goal of reducing the cost and rework associated with sub-optimal decisions made from a system perspective. However, there is no shared agreement in the research community about what exactly a value model is, how many types of value models there are, what their input–output relationships are, and how they are used along the engineering design process timeline. Drawing on five case studies conducted in the aerospace and construction equipment industries, this paper describes how to tailor the development of value models in the engineering design process. The initial descriptive study findings are summarized in the form of seven lessons learned that should be taken into account when designing value models for design decision support. From these lessons, the paper proposes a six-step framework that accounts for the need to update the nature and definition of value models as new information becomes available, moving from initial estimations based on expert judgment to detailed quantitative analysis.
We analyze the precise modal commitments of several natural varieties of set-theoretic potentialism, using tools we develop for a general model-theoretic account of potentialism, building on those of Hamkins, Leibman and Löwe [14], including the use of buttons, switches, dials and ratchets. Among the potentialist conceptions we consider are: rank potentialism (true in all larger $V_\beta $), Grothendieck–Zermelo potentialism (true in all larger $V_\kappa $ for inaccessible cardinals $\kappa $), transitive-set potentialism (true in all larger transitive sets), forcing potentialism (true in all forcing extensions), countable-transitive-model potentialism (true in all larger countable transitive models of ZFC), countable-model potentialism (true in all larger countable models of ZFC), and others. In each case, we identify lower bounds for the modal validities, which are generally either S4.2 or S4.3, and an upper bound of S5, proving in each case that these bounds are optimal. The validity of S5 in a world is a potentialist maximality principle, an interesting set-theoretic principle of its own. The results can be viewed as providing an analysis of the modal commitments of the various set-theoretic multiverse conceptions corresponding to each potentialist account.
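For orientation, the modal logics named above have standard axiomatizations over S4 (i.e., K together with $\Box p \to p$ and $\Box p \to \Box\Box p$):

```latex
\begin{align*}
  \mathrm{S4.2} &= \mathrm{S4} + \bigl(\Diamond\Box p \to \Box\Diamond p\bigr),\\
  \mathrm{S4.3} &= \mathrm{S4} + \bigl(\Box(\Box p \to q) \lor \Box(\Box q \to p)\bigr),\\
  \mathrm{S5}   &= \mathrm{S4} + \bigl(\Diamond p \to \Box\Diamond p\bigr).
\end{align*}
```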
This article builds on a recent paper coauthored by the present author, H. Hosni and F. Montagna. It is meant to contribute to the logical foundations of probability theory on many-valued events and, specifically, to a deeper understanding of the notion of strict coherence. In particular, we will make use of geometrical, measure-theoretical and logical methods to provide three characterizations of strict coherence on formulas of infinite-valued Łukasiewicz logic.
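For orientation, infinite-valued Łukasiewicz logic evaluates formulas over the real unit interval $[0,1]$ with the standard truth functions:

```latex
\[
  v(\neg \varphi) = 1 - v(\varphi), \qquad
  v(\varphi \oplus \psi) = \min\{1,\; v(\varphi) + v(\psi)\}, \qquad
  v(\varphi \to \psi) = \min\{1,\; 1 - v(\varphi) + v(\psi)\}.
\]
```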
How can state-of-the-art computational linguistic technology reduce the workload and increase the efficiency of language teachers? To address this question, we combine insights from research in second language acquisition and computational linguistics to automatically generate text-based questions about a given text. The questions are designed to draw the learner’s attention to target linguistic forms – phrasal verbs, in this particular case – by requiring learners to use the forms or their paraphrases in the answer. Such questions help learners create form-meaning connections and are well suited for both practice and testing. We discuss the generation of a novel type of question combining a wh- question with a gapped sentence, and report the results of two crowdsourcing evaluation studies investigating how well automatically generated questions compare to those written by a language teacher. The first study compares our system output to gold-standard human-written questions via crowdsourced rating. An equivalence test shows that automatically generated questions are comparable to human-written ones. The second crowdsourcing study investigates two types of questions (wh- questions with and without a gapped sentence), their perceived quality, and the responses they elicit. Finally, we discuss the challenges and limitations of creating and evaluating question-generation systems for language learners.
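As a hypothetical illustration of this question type (my own toy templates, not the paper's system), a wh- question can be paired with a gapped sentence whose answer must reuse the target phrasal verb:

```python
# Toy template sketch: pair a wh- question with a gapped sentence so that
# answering requires producing the target phrasal verb (or a paraphrase).
def make_question(subject, phrasal_verb, obj):
    wh = f"What did {subject} do with {obj}?"
    gap = f"{subject.capitalize()} ____ {obj}."
    answer = f"{subject.capitalize()} {phrasal_verb} {obj}."
    return wh, gap, answer

wh, gap, answer = make_question("the committee", "turned down", "the proposal")
print(wh)      # What did the committee do with the proposal?
print(gap)     # The committee ____ the proposal.
print(answer)  # The committee turned down the proposal.
```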
Modular design makes it possible to reduce costs through scaling effects. However, due to the strong interactions between the resulting modules and products, methods and tools are required that enable engineers to use specific views in which the respective information can be linked and retrieved according to the situation. Within the scope of this paper, the model-based systems engineering (MBSE) approach is used to model the complex real-world problem of vehicle modular kits. The aim is to investigate the potential of this approach: how modular kits and products can be efficiently modeled and, ultimately, how MBSE can support modular design. To investigate this in detail, two extensive studies were carried out in a company over a period of three years. The studies show that modular kits lead to an increased complexity of development. Across industries and companies, there is a demand for reference product models, which facilitate the unification of inhomogeneous partial models and serve as a knowledge repository for the development of future product generations. On this basis, a framework is derived which enables the reuse of large proportions of the product models of previous product generations. This framework is evaluated on the basis of five case studies.
Mathematical proof is the primary form of justification for mathematical knowledge, but in order to count as a proper justification for a piece of mathematical knowledge, a mathematical proof must be rigorous. What does it mean, then, for a mathematical proof to be rigorous? According to what I shall call the standard view, a mathematical proof is rigorous if and only if it can be routinely translated into a formal proof. The standard view is almost an orthodoxy among contemporary mathematicians, and is endorsed by many logicians and philosophers, but it has also been heavily criticized in the philosophy of mathematics literature. Progress in the debate between the proponents and opponents of the standard view is, however, currently blocked by a major obstacle, namely the absence of a precise formulation of the view. To remedy this deficiency, I undertake in this paper to provide a precise formulation and a thorough evaluation of the standard view of mathematical rigor. The upshot of this study is that the standard view is more robust to criticism than the various arguments advanced against it suggest, but also that it requires a certain conception of how mathematical proofs are judged to be rigorous in mathematical practice, a conception that can be challenged on empirical grounds by exhibiting rigor judgments of mathematical proofs in mathematical practice that conflict with it.