The subject of this book is the approximation of curves in two dimensions and surfaces in three dimensions from a set of sample points. This problem, called reconstruction, appears in various engineering applications and scientific studies. What is special about the problem is that it offers an application where mathematical disciplines such as differential geometry and topology interact with computational disciplines such as discrete and computational geometry. One of my goals in writing this book has been to collect and disseminate the results obtained by this confluence. The research on the geometry and topology of shapes in the discrete setting has gained momentum through the study of the reconstruction problem. This book, I hope, will serve as a prelude to this exciting new line of research.
To maintain focus and brevity, I chose a few algorithms that have provable guarantees. It so happens, quite naturally, that they all use the well-known data structures of the Voronoi diagram and the Delaunay triangulation. Indeed, these discrete geometric data structures offer discrete counterparts to many of the geometric and topological properties of shapes. The Voronoi and Delaunay diagrams are therefore a common thread running through the material of the book.
This book originated from the class notes of a seminar course “Sample-Based Geometric Modeling” that I taught for four years at the graduate level in the computer science department of The Ohio State University.
Defeasible reasoning is a simple but efficient approach to nonmonotonic reasoning that has recently attracted considerable interest and has found various applications. Defeasible logic and its variants are an important family of defeasible reasoning methods. So far no relationship has been established between defeasible logic and mainstream nonmonotonic reasoning approaches. In this paper we establish close links to known semantics of logic programs. In particular, we give a translation of a defeasible theory $D$ into a meta-program $P(D)$. We show that under a condition of decisiveness, the defeasible consequences of $D$ correspond exactly to the sceptical conclusions of $P(D)$ under the stable model semantics. Without decisiveness, the result holds only in one direction (all defeasible consequences of $D$ are included in all stable models of $P(D)$). If we wish to obtain a complete embedding for the general case, we need to use the Kunen semantics of $P(D)$ instead.
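For orientation, the flavour of a defeasible theory can be conveyed by the usual textbook penguin example (our illustration, not an example drawn from this paper); the rule names $r_1$, $r_2$ are ours:
\[
r_1:\ \mathit{bird}(X) \Rightarrow \mathit{flies}(X), \qquad
r_2:\ \mathit{penguin}(X) \Rightarrow \neg\mathit{flies}(X), \qquad
r_2 > r_1.
\]
Given the facts $\mathit{bird}(\mathit{tweety})$ and $\mathit{penguin}(\mathit{tweety})$, both rules are applicable, but the superiority relation $r_2 > r_1$ resolves the conflict, so the theory defeasibly concludes $\neg\mathit{flies}(\mathit{tweety})$.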
The surface reconstruction algorithm in the previous chapter assumes that the sample is sufficiently dense, that is, that ε is sufficiently small. However, cases of undersampling, where this density condition is not met, are prevalent in practice. The input data may be dense only in parts of the sampled surface. Regions with small features such as high curvature are often not well sampled. When sampled with scanners, occluded regions are not sampled at all. Nonsmooth surfaces such as the ones considered in CAD are bound to have undersampling, since no finite point set can sample nonsmooth regions so as to satisfy the ε-sampling condition for a strictly positive ε. Even a surface with boundary can be viewed as a case of undersampling. If Σ is a surface without boundary and Σ′ ⊂ Σ is a surface with boundary, a sample of Σ′ is also a sample of Σ. This sample may be dense for Σ′ but not for Σ.
In this chapter we describe an algorithm that detects the regions of undersampling. This detection helps in reconstructing surfaces with boundaries. Later, we will see that this detection also helps in repairing the unwanted holes created in the reconstructed surface due to undersampling.
Samples and Boundaries
Let P be an input point set that samples a surface Σ, where Σ has no boundary.
Taylor introduced a variable binding scheme for logic variables in his PARMA system, which uses cycles of bindings rather than the linear chains of bindings used in the standard WAM representation. Both the HAL and dProlog languages make use of the PARMA representation in their Herbrand constraint solvers. Unfortunately, PARMA's trailing scheme is considerably more expensive in both time and space. The aim of this paper is to present several techniques that lower this cost. First, we introduce a trailing analysis for HAL, using the classic PARMA trailing scheme, that detects and eliminates unnecessary trailings. The analysis, whose accuracy comes from HAL's determinism and mode declarations, has been integrated into the HAL compiler and is shown to produce space improvements as well as speed improvements. Second, we explain how to modify the classic PARMA trailing scheme to halve its trailing cost. This technique is illustrated and evaluated in the context of both dProlog and HAL. Finally, we explain the modifications needed for the trailing analysis to be combined with our modified PARMA trailing scheme. Empirical evidence shows that the combination is more effective than either technique used in isolation.
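To make the representational difference concrete, here is a minimal Python sketch of the general idea (a simplification for illustration, not code from PARMA, HAL, or dProlog; the names Cell, unify_free, and bind are ours, and a real implementation packs value and link into a single cell word). Free variables in the PARMA scheme sit on a cycle of cells: unifying two free variables is a constant-time cycle splice, while binding must visit, and trail, every cell on the cycle.

    # Sketch of PARMA-style cycle bindings (illustrative only).
    class Cell:
        """A variable cell; `value` is None while the variable is free."""
        def __init__(self):
            self.value = None
            self.next = self  # every free variable starts as a 1-cycle

    def unify_free(a, b):
        # Splicing the two cycles merges the equivalence classes in O(1);
        # unlike WAM chains, no dereferencing is needed afterwards.
        a.next, b.next = b.next, a.next

    def bind(a, value, trail):
        # Binding assigns the value to every cell on the cycle, and each
        # cell must be trailed before it is overwritten; this per-cell
        # trailing is the cost that the paper's techniques reduce.
        cell = a
        while True:
            trail.append(cell)  # record for restoration on backtracking
            cell.value = value
            cell = cell.next
            if cell is a:
                break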
The unification problem in algebras capable of describing sets has been tackled, directly or indirectly, by many researchers, and it finds important applications in various research areas, e.g. deductive databases, theorem proving, static analysis, and rapid software prototyping. The various solutions proposed are spread across a large literature. In this paper we provide a uniform presentation of unification of sets, formalizing it at the level of set theory. We address the problem of deciding the existence of solutions at an abstract level. This also provides the ability to classify different types of set unification problems. Unification algorithms are uniformly proposed to solve the unification problem in each of these classes. The algorithms presented are partly drawn from the literature, suitably revisited and analyzed, and partly novel proposals. In particular, we present a new goal-driven algorithm for general $ACI1$ unification and a new simpler algorithm for general $(Ab)(C\ell)$ unification.
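For orientation, $ACI1$ unification is unification modulo the standard equational axioms for a union-like symbol, written here as $\cup$ with unit element $\emptyset$ (a reminder of the usual presentation, not notation taken from this paper):
\[
(X \cup Y) \cup Z = X \cup (Y \cup Z) \;\; (A), \qquad
X \cup Y = Y \cup X \;\; (C), \qquad
X \cup X = X \;\; (I), \qquad
X \cup \emptyset = X \;\; (1).
\]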
This paper examines logic programming with three kinds of negation: default, weak, and strict negation. A 3-valued logic model theory is discussed for logic programs with these three kinds of negation, and a proof procedure is constructed for the negations so that its soundness is guaranteed in terms of the 3-valued model theory.
The simplest class of manifolds posing nontrivial reconstruction problems is that of curves in the plane. In this chapter we describe two algorithms for curve reconstruction, Crust and NN-Crust. First, we develop some general results that are then applied to prove the correctness of both algorithms.
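To give a feel for how simple these algorithms are, here is a minimal Python sketch of the NN-Crust idea (an illustration under the assumption of a dense sample from a smooth closed curve, not this chapter's pseudocode; the function name nn_crust is ours). Each sample is connected to its nearest neighbor, and then to its shortest "half neighbor": the closest sample making an angle greater than π/2 with the nearest-neighbor edge.

    # Sketch of NN-Crust (illustrative; O(n^2) second phase for brevity).
    import numpy as np
    from scipy.spatial import cKDTree

    def nn_crust(points):
        """points: (n, 2) array sampled densely from a smooth closed curve."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=2)  # idx[:, 0] is the point itself
        nn = idx[:, 1]                    # nearest neighbor of each sample
        edges = {(min(p, q), max(p, q)) for p, q in enumerate(nn)}
        for p in range(len(points)):
            u = points[nn[p]] - points[p]
            best, best_d = None, np.inf
            for s in range(len(points)):
                v = points[s] - points[p]
                # Half neighbor: angle with the nearest-neighbor edge
                # exceeds pi/2, i.e. the dot product is negative.
                if s != p and np.dot(u, v) < 0 and np.dot(v, v) < best_d:
                    best, best_d = s, np.dot(v, v)
            if best is not None:
                edges.add((min(p, best), max(p, best)))
        return edges

On a sufficiently dense sample, the returned edge set forms the polygonal reconstruction of the curve; the correctness conditions are developed in the rest of the chapter.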
A single curve in the plane is defined by a map ξ: [0, 1] → ℝ² where [0, 1] is the closed interval between 0 and 1 on the real line. The function ξ is one-to-one everywhere except at the endpoints, where ξ(0) = ξ(1). The curve is C¹-smooth if ξ has a continuous nonzero first derivative in the interior of [0, 1] and the right derivative at 0 is the same as the left derivative at 1, both being nonzero. If, in addition, ξ has a continuous ith derivative at each point, i ≥ 1, the curve is called Cⁱ-smooth. When we refer to a curve Σ in the plane, we actually mean the image of one or more such maps. By definition Σ does not self-intersect, though it can have multiple components, each of which is a closed curve, that is, a curve without any endpoint.
For a finite sample to be an ε-sample for some ε > 0, it is essential that the local feature size f is strictly positive everywhere.
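As a reminder of the definitions used throughout the book: the local feature size f(x) of a point x ∈ Σ is its distance to the medial axis of Σ, and a sample P ⊆ Σ is an ε-sample if every point of Σ has a sample point within distance ε f(x), that is,
\[
\forall x \in \Sigma \;\; \exists p \in P : \;\; \|x - p\| \leq \varepsilon\, f(x).
\]
If f vanished somewhere, the condition would demand a sample point at distance 0, which no finite sample can provide.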
The WiRo-6.3 is a six-degree-of-freedom (six-DOF) parallel robotic structure actuated by nine wires, whose characteristics have been thoroughly analyzed in previous papers. It is intended as a master device for teleoperation; thus, it is moved by an operator through a handle and can convey force reflection to the operator's hand. A completely new method for studying the workspace of this device, and of virtually any nine-wire parallel structure, is presented and discussed, and its results are given in graphical form.
We show that the Borel hierarchy of the class of context free $\omega$-languages, or even of the class of $\omega$-languages accepted by Büchi 1-counter automata, is the same as the Borel hierarchy of the class of $\omega$-languages accepted by Turing machines with a Büchi acceptance condition. In particular, for each recursive non-null ordinal $\alpha$, there exist some ${\bf \Sigma}^0_\alpha$-complete and some ${\bf \Pi}^0_\alpha$-complete $\omega$-languages accepted by Büchi 1-counter automata. Moreover, the supremum of the set of Borel ranks of context free $\omega$-languages is an ordinal $\gamma_2^1$ that is strictly greater than the first non-recursive ordinal $\omega_1^{\mathrm{CK}}$. We then extend this result, proving that the Wadge hierarchy of context free $\omega$-languages, or even of $\omega$-languages accepted by Büchi 1-counter automata, is the same as the Wadge hierarchy of $\omega$-languages accepted by Turing machines with a Büchi or a Muller acceptance condition.
We investigate natural systems of fundamental sequences for ordinals below the Howard–Bachmann ordinal and study growth rates of the resulting slow growing hierarchies. We consider a specific assignment of fundamental sequences that depends on a non-negative real number $\varepsilon$. We show that the resulting slow growing hierarchy is eventually dominated by a fixed elementary recursive function if $\varepsilon$ is equal to zero. We show further that the resulting slow growing hierarchy exhausts the provably recursive functions of $\sfb{ID}_1$ if $\varepsilon$ is strictly greater than zero. Finally, we show that the resulting fast growing hierarchies exhaust the provably recursive functions of $\sfb{ID}_1$ for all non-negative values of $\varepsilon$. Our result is somewhat surprising since usually the slow growing hierarchy along the Howard–Bachmann ordinal exhausts precisely the provably recursive functions of $\sfb{PA}$. Note that the elementary functions are a very small subclass of the provably recursive functions of $\sfb{PA}$, and the provably recursive functions of $\sfb{PA}$ are a very small subclass of the provably recursive functions of $\sfb{ID}_1$. Thus the jump from $\varepsilon$ equal to zero to $\varepsilon$ greater than zero is one of the biggest jumps in growth rates for subrecursive hierarchies one might think of.
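For orientation, relative to a fixed assignment of fundamental sequences $\lambda \mapsto (\lambda[x])_{x \in \mathbb{N}}$ for limit ordinals, the slow and fast growing hierarchies are standardly defined by (the usual definitions, not the paper's $\varepsilon$-dependent assignment itself):
\[
G_0(x) = 0, \quad G_{\alpha+1}(x) = G_\alpha(x) + 1, \quad G_\lambda(x) = G_{\lambda[x]}(x),
\]
\[
F_0(x) = x + 1, \quad F_{\alpha+1}(x) = F_\alpha^{x+1}(x), \quad F_\lambda(x) = F_{\lambda[x]}(x).
\]
The growth rates of both hierarchies depend delicately on the chosen fundamental sequences, which is exactly the sensitivity the paper quantifies.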
Following Lutz's approach to effective (constructive) dimension, we define a notion of dimension for individual sequences based on Schnorr's concept(s) of randomness. In contrast to computable randomness and Schnorr randomness, the dimension concepts defined via computable martingales and Schnorr tests coincide, that is, the Schnorr Hausdorff dimension of a sequence always equals its computable Hausdorff dimension. Furthermore, we give a machine characterisation of Schnorr dimension, based on prefix-free machines whose domain has computable measure. Finally, we show that there exist computably enumerable sets that are Schnorr (computably) irregular: while every c.e. set has Schnorr Hausdorff dimension 0, there are c.e. sets of computable packing dimension 1, which, by Barzdiņš' Theorem, is an impossible property in the case of effective (constructive) dimension. In fact, we prove that every hyperimmune Turing degree contains a set of computable packing dimension 1.
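For orientation, Lutz's approach defines dimension via $s$-gales (the standard formulation; the Schnorr variants studied here then restrict the class of admissible gales): an $s$-gale is a function $d : 2^{<\omega} \to [0,\infty)$ satisfying
\[
d(w) = 2^{-s}\bigl(d(w0) + d(w1)\bigr)
\]
for all strings $w$, and the dimension of a sequence $X$ is the infimum of all $s$ for which some admissible $s$-gale $d$ succeeds on $X$, i.e. $\limsup_n d(X{\upharpoonright}n) = \infty$.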
In this paper we introduce and study uniform regular enumerations for arbitrary recursive ordinals. As an application of the technique, we obtain a uniform generalisation of a theorem of Ash and a characterisation of a class of uniform operators on transfinite sequences of sets of natural numbers.
The notion of ordinal computability is defined by generalising standard Turing computability on tapes of length $\omega$ to computations on tapes of arbitrary ordinal length. The fundamental theorem on ordinal computability states that a set $x$ of ordinals is ordinal computable from ordinal parameters if and only if $x$ is an element of the constructible universe $\mathbf{L}$. In this paper we present a new proof of this theorem that makes use of a theory SO axiomatising the class of sets of ordinals in a model of set theory. The theory SO and the standard Zermelo–Fraenkel axiom system ZFC can be canonically interpreted in each other. The proof of the fundamental theorem is based on showing that the class of sets that are ordinal computable from ordinal parameters forms a model of SO.
We study the complexity of computable and $\Sigma^0_1$ inductive definitions of sets of natural numbers. For example, we show how to assign natural indices to monotone $\Sigma^0_1$-definitions and then use these to calculate the complexity of the set of all indices of monotone $\Sigma^0_1$-definitions that are computable. We also examine the complexity of a new type of inductive definition, which we call weakly finitary monotone inductive definitions. Applications are given in proof theory and in logic programming.
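As a reminder of the underlying machinery (the standard stage construction, not notation specific to this paper), a monotone operator $\Gamma : \mathcal{P}(\mathbb{N}) \to \mathcal{P}(\mathbb{N})$, i.e. one with $X \subseteq Y \Rightarrow \Gamma(X) \subseteq \Gamma(Y)$, inductively defines a set by iterating through ordinal stages
\[
\Gamma^0 = \emptyset, \qquad \Gamma^{\alpha+1} = \Gamma(\Gamma^\alpha), \qquad \Gamma^\lambda = \bigcup_{\alpha < \lambda} \Gamma^\alpha,
\]
with the least fixed point $\Gamma^\infty = \bigcup_\alpha \Gamma^\alpha$ as the set defined by $\Gamma$.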
We show that for any 2-computably enumerable Turing degree ${\bf l}$, any computably enumerable degree ${\bf a}$ and any Turing degree ${\bf s}$, if ${\bf l'=\boldsymbol{0}'}$, ${\bf l<a}$, ${\bf s\geq \boldsymbol{0}'}$, and ${\bf s}$ is c.e. in ${\bf a}$, then there is a 2-computably enumerable degree ${\bf x}$ with the following properties:
The ten papers in this special issue arose from the conference CiE 2005: New Computational Paradigms, held at the University of Amsterdam in June 2005. CiE 2005 was the first of a new series of conferences associated with the interdisciplinary network Computability in Europe. The series focuses on computability in theoretical computer science and mathematical logic, ranging over a broad spectrum of research areas: from the application of novel approaches to computation, through computability-theoretic aspects of physical systems, to set-theoretic analyses of infinitary computing models.
Recent work in constructive mathematics shows that Hilbert's program works for a large part of abstract algebra. Using in an essential way the ideas contained in the classical arguments, we can transform most of the highly abstract proofs of ‘concrete’ statements into elementary proofs. Surprisingly, the arguments we produce are not only elementary but also mathematically clearer, and not necessarily longer. We present an example where the simplification was significant enough to suggest an improved version of a classical theorem. For this we use a general method to transform some logically complex first-order formulae into a geometrical form, which may be interesting in itself.