Although the existence of a resolution of singularities in characteristic zero was already proved by Hironaka in the 1960s, and although algorithmic proofs were given independently by the groups of Bierstone and Milman and of Encinas and Villamayor in the early 1990s, the explicit construction of a resolution of singularities of a given variety remains a very demanding computational task. In this article, we outline the algorithmic approach of Encinas and Villamayor and, at the same time, discuss the practical problems that arise when implementing the algorithm.
Introduction
The problem of the existence and construction of a resolution of singularities is one of the central tasks in algebraic geometry. In its shortest formulation it can be stated as follows: given a variety X over a field K, find a resolution of singularities of X, that is, a proper birational morphism π : Y → X such that Y is a non-singular variety.
Historically, a question of this type was first considered in the second half of the 19th century, in the context of curves over the field of complex numbers. It was already a very active area of research at that time, with a large number of contributions (of varying degrees of rigor), and eventually led to a proof of the existence of a resolution of singularities in this special situation at the end of the century.
In this survey, we review part of the theory of superisolated surface singularities (SIS) and its applications, including some new and recent developments. The class of SIS singularities is, in some sense, the simplest class of germs of normal surface singularities. Namely, their tangent cones are reduced curves, and the geometry and topology of the SIS singularities can be deduced from them. Thus this class contains, in a canonical way, all of complex projective plane curve theory, which gives a series of nice examples and counterexamples. They were introduced by I. Luengo to show the non-smoothness of the μ-constant stratum and have been used to answer negatively some other interesting open questions. We review them and the new results on normal surface singularities whose links are rational homology spheres. We also discuss some positive results which have been proved for SIS singularities.
Introduction
A superisolated surface singularity (SIS singularity, for short) (V, 0) ⊂ (ℂ³, 0) is a generic perturbation of the cone over a (singular) reduced projective plane curve C of degree d, C = {fd(x, y, z) = 0} ⊂ ℙ², by monomials of higher degree. The geometry, resolution and topology of (V, 0) are determined by the singularities of C and the pair (ℙ², C). This provides a canonical way to embed the classical and rich theory of complex projective plane curves into the theory of normal surface singularities of (ℂ³, 0).
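In symbols, the usual definition (as we recall it; the notation for the homogeneous parts is ours) can be sketched as:

```latex
% A hypersurface germ (V,0) = \{f = 0\} \subset (\mathbb{C}^3,0), with
% f = f_d + f_{d+1} + \cdots the decomposition of f into homogeneous parts,
% is superisolated when the singular points of the tangent cone
% C = \{f_d = 0\} \subset \mathbb{P}^2 avoid the next homogeneous part:
\[
  \operatorname{Sing}(C) \cap \{\, f_{d+1} = 0 \,\} = \emptyset .
\]
```

Under this condition the singularity of V at the origin is isolated, and a generic choice of the higher-degree monomials does not change its topology.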
This article gives an overview of Singular, a computer algebra system for polynomial computations with special emphasis on the needs of commutative algebra, algebraic geometry and singularity theory, which has been developed under the guidance of G.-M. Greuel, G. Pfister and the second author [31]. We trace the arc from Singular's early years to its latest features. Moreover, we present some explicit calculations, focusing on applications in singularity theory.
Introduction
Through the development of efficient computer algebra algorithms and of powerful computers, algebraic geometry and singularity theory (like many other disciplines of pure mathematics) have become accessible to experiments. Computer algebra may help
to discover unexpected mathematical evidence, leading to new conjectures or theorems, later proven by traditional means,
to construct interesting objects and determine their structure (in particular, to find counter-examples to conjectures),
to verify negative results such as the non-existence of certain objects with prescribed invariants,
to verify theorems whose proof is reduced to straightforward but tedious calculations,
to solve enumerative problems, and
to create databases.
In fact, in the last decades, there has been a growing number of research articles in algebraic geometry and singularity theory originating from explicit computations (such as [1] and [46] in this volume).
What abilities must a computer algebra system have to be a valuable tool for algebraic geometry and, in particular, for singularity theory? First of all, the system needs an efficient representation of polynomials with exact coefficients.
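To illustrate what an exact representation means in practice, here is a minimal sketch in Python (not how Singular actually stores polynomials): a polynomial is a map from exponent vectors to rational coefficients, and arithmetic never rounds.

```python
from fractions import Fraction

# A polynomial in x, y is a dict: exponent pair (i, j) -> exact rational coefficient.
def poly_mul(f, g):
    """Multiply two polynomials with exact rational arithmetic (no rounding)."""
    h = {}
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            key = (i1 + i2, j1 + j2)
            h[key] = h.get(key, Fraction(0)) + c1 * c2
    return {k: c for k, c in h.items() if c != 0}

# (x + 1/3) * (x - 1/3) = x^2 - 1/9, with no floating-point error:
f = {(1, 0): Fraction(1), (0, 0): Fraction(1, 3)}
g = {(1, 0): Fraction(1), (0, 0): Fraction(-1, 3)}
print(poly_mul(f, g))  # {(2, 0): Fraction(1, 1), (0, 0): Fraction(-1, 9)}
```

With floating-point coefficients the cross terms would leave a residue of roughly 1e-17 instead of cancelling exactly, which is fatal for decisions such as ideal membership.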
In continuing joint work with Walter Neumann, we consider the relationship between three different points of view in describing a (germ of a) complex normal surface singularity. The explicit equations of a singularity allow one to talk about hypersurfaces, complete intersections, weighted homogeneity, Hilbert function, etc. The geometry of the singularity could involve analytic aspects of a good resolution, or existence and properties of Milnor fibres; one speaks of geometric genus, Milnor number, rational singularities, the Gorenstein and ℚ-Gorenstein properties, etc. The topology of the singularity means the description of its link, or equivalently (by a theorem of Neumann) the configuration of the exceptional curves in a resolution. We survey ongoing work ([15], [16]) with Neumann to study the possible geometry and equations when the topology of the link is particularly simple, i.e. the link has no rational homology, or equivalently the exceptional configuration in a resolution is a tree of rational curves. Given such a link, we ask whether there exist “nice” singularities with this topology. In our situation, that means asking whether the singularity is a quotient of a special kind of explicitly given complete intersection (said to be “of splice type”) by an explicitly given abelian group; on the topological level, this quotient gives the universal abelian cover of the link.
We present a model in which, due to the quantum nature of the signals controlling the implementation time of successive unitary computational steps, physical irreversibility appears in the execution of a logically reversible computation.
Increasing integer sequences include many instances of interesting sequences and combinatorial structures, ranging from tournaments to addition chains, from permutations to sequences having the Goldbach property that any integer greater than 1 can be obtained as the sum of two elements in the sequence. The paper introduces and compares several of these classes of sequences, discussing recurrence relations, enumerative problems and questions concerning shortest sequences.
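As a toy illustration of the Goldbach property mentioned above, the following Python sketch greedily builds one such increasing sequence; the greedy rule is ours and is not one of the specific classes studied in the paper.

```python
def goldbach_sequence(limit):
    """Greedily build an increasing sequence such that every integer k with
    2 <= k <= limit is the sum of two (not necessarily distinct) elements."""
    seq = [1]
    elems = {1}
    for k in range(2, limit + 1):
        if not any(k - a in elems for a in elems):
            seq.append(k - 1)   # 1 is in the sequence, so k = 1 + (k - 1)
            elems.add(k - 1)
    return seq

print(goldbach_sequence(12))  # [1, 2, 4, 6, 8, 10]
```

The greedy rule happens to produce 1 followed by the even numbers; questions about *shortest* sequences with this property are much subtler, which is part of what makes the class interesting.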
We present several solutions to the Firing Squad Synchronization Problem on grid networks of different shapes. The nodes are finite-state processors that work in unison with the other processors and in synchronized discrete steps. The networks we deal with are the line, the ring and the square. For all of these models we consider one- and two-way communication modes, and we also constrain the quantity of information that adjacent processors can exchange at each step. We first present synchronization algorithms that work in time $n^2$, $n \log n$, $n\sqrt{n}$, $2n$, where n is the total number of processors. Synchronization methods are described through so-called signals, which are then used as building blocks to compose synchronization solutions for the cases in which the synchronization times are expressed by polynomials with nonnegative coefficients.
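The signal technique can be illustrated by the classic halving step of the well-known 3n-time solution on the line: a speed-1 signal bounces off the far end and meets a speed-1/3 signal near the middle cell, splitting the line in two. The Python simulation below is a sketch of that single step only, not of any of the algorithms presented in the paper.

```python
def find_middle(n):
    """Launch a speed-1 signal and a speed-1/3 signal from cell 0 on a line
    of n cells; the fast signal reflects at cell n-1 and meets the slow one
    (approximately) at the middle cell, which it returns."""
    fast, slow = 0, 0
    direction = 1
    t = 0
    while True:
        t += 1
        fast += direction
        if fast == n - 1:
            direction = -1          # reflect at the far end of the line
        if t % 3 == 0:
            slow += 1               # the slow signal advances every third step
        if direction == -1 and fast <= slow:
            return slow             # meeting cell ~ middle of the line

print(find_middle(9))   # 4
```

Solving the meeting equation t/3 = 2(n-1) - t gives t = 3(n-1)/2 and meeting position (n-1)/2, which the discrete simulation reproduces up to rounding.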
The main goal of this paper is the investigation of a relevant property which appears in the various definitions of deterministic topological chaos for discrete-time dynamical systems: transitivity. Starting from Devaney's standard notion of topological chaos based on regularity, transitivity, and sensitivity to the initial conditions, the critique formulated by Knudsen is taken into account in order to exclude periodic chaos from this definition. Transitivity (or some stronger version of it) turns out to be the relevant condition for chaos, and its role is discussed in a survey of some important results about it, together with the presentation of some new results. In particular, we study topological mixing, strong transitivity, and full transitivity. Their applications to symbolic dynamics are investigated with respect to the relationships with the associated languages.
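In the symbolic-dynamics setting, transitivity often reduces to a purely combinatorial check: a vertex subshift of finite type is topologically transitive exactly when its defining graph is strongly connected. A minimal Python sketch (the graph encoding and function names are ours, not from the paper):

```python
def reachable(adj, start):
    """Set of vertices reachable from `start` in the directed graph `adj`."""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def is_transitive_sft(adj):
    """A vertex subshift of finite type is topologically transitive iff its
    defining graph is strongly connected (every vertex reaches every other)."""
    return all(reachable(adj, v) == set(adj) for v in adj)

# Golden-mean shift (forbidden word "11"): graph on symbols {0, 1}.
golden = {0: [0, 1], 1: [0]}
print(is_transitive_sft(golden))  # True
```

Stronger notions such as topological mixing correspond to stronger graph conditions (e.g. primitivity of the adjacency matrix), which is why symbolic dynamics is a natural testing ground for the hierarchy of transitivity notions.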
We present a novel eye localization method which can be used in face recognition applications. It is based on two SVM classifiers which localize the eyes at different resolution levels, exploiting the Haar wavelet representation of the images. We present an extensive analysis of its performance on images from very different public databases, showing very good results.
PageRank is a ranking method that assigns scores to web pages using the limit distribution of a random walk on the web graph. A fibration of graphs is a morphism that is a local isomorphism of in-neighbourhoods, much in the same way a covering projection is a local isomorphism of neighbourhoods. We show that a deep connection relates fibrations and Markov chains with restart, a particular kind of Markov chain that includes the PageRank one as a special case. This fact provides constraints on the values that PageRank can assume. Using our results, we show that a recently defined class of graphs that admit a polynomial-time isomorphism algorithm based on the computation of PageRank is really a subclass of fibration-prime graphs, which possess simple, entirely discrete polynomial-time isomorphism algorithms based on classical techniques for graph isomorphism. We discuss efficiency issues in the implementation of such algorithms for the particular case of web graphs, in which O(n) space occupancy (where n is the number of nodes) may be acceptable, but O(m) is not (where m is the number of arcs).
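The limit distribution in question can be computed by power iteration. The following Python sketch implements the standard random-surfer formulation (a Markov chain with restart) for a toy graph with no dangling nodes; it illustrates the model only and is not tied to the specific constructions of the paper.

```python
def pagerank(out_links, alpha=0.85, iters=100):
    """Power iteration for PageRank: with probability alpha the walk follows
    a uniformly random out-link, and with probability 1 - alpha it restarts
    at a uniformly random node. Assumes every node has an out-link."""
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            share = alpha * rank[v] / len(out_links[v])
            for w in out_links[v]:
                new[w] += share
        rank = new
    return rank

# A 3-node example; by symmetry, nodes 'b' and 'c' must get equal scores:
r = pagerank({'a': ['b', 'c'], 'b': ['a'], 'c': ['a']})
```

The equality of the scores of 'b' and 'c' is exactly the kind of constraint the fibration viewpoint explains: nodes with isomorphic in-neighbourhood structure are forced to the same PageRank value.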
We consider the family UREC of unambiguous recognizable two-dimensional languages. We prove that there are recognizable languages that are inherently ambiguous; that is, the family UREC is a proper subclass of the family REC. The result is obtained by exhibiting a necessary condition for unambiguous recognizable languages. Furthermore, the family UREC coincides with the class of picture languages defined by unambiguous 2OTA, and it strictly contains its deterministic counterpart. Some closure and non-closure properties of UREC are presented. Finally, we show that it is undecidable whether a given tiling system is unambiguous.
Bertoni et al. introduced in Lect. Notes Comput. Sci. 2710 (2003) 1–20 a new model of one-way quantum finite automaton (1qfa) called 1qfa with control language (1qfc). This model, whose recognizing power is exactly the class of regular languages, generalizes the main models of 1qfa's proposed in the literature. Here, we investigate some properties of 1qfc's. In particular, we provide algorithms for constructing 1qfc's accepting the inverse homomorphic images and quotients of languages accepted by 1qfc's. Moreover, we give instances of binary regular languages on which 1qfc's are proved to be more succinct (i.e., to have fewer states) than the corresponding classical (deterministic) automata.
We compare various computational complexity classes defined within the framework of membrane systems, a distributed parallel computing device inspired by the functioning of the cell, with the usual computational complexity classes for Turing machines. In particular, we focus our attention on the comparison among complexity classes for membrane systems with active membranes (where new membranes can be created by division of existing membranes) and the classes PSPACE, EXP, and EXPSPACE.
In this paper we analyze some intrusion detection strategies proposed in the literature and we show that they represent the various facets of a well-known formal language problem: computing the distance between a string x and a language L. In particular, the main differences among the various approaches adopted for building intrusion detection systems can be reduced to the characteristics of the language L and to the notion of distance adopted. As a further contribution, we also show that from the computational point of view all these strategies are equivalent and amenable to efficient parallelization.
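As a concrete instance of the underlying problem, the distance from a string x to a finite language L can be taken as the minimum edit distance between x and any word of L. The Python sketch below uses plain Levenshtein distance and brute-force enumeration; practical systems would instead work against an automaton for L, and the choice of distance is exactly one of the design dimensions discussed above.

```python
def edit_distance(x, y):
    """Standard Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(y) + 1))
    for i, cx in enumerate(x, 1):
        curr = [i]
        for j, cy in enumerate(y, 1):
            curr.append(min(prev[j] + 1,                # delete cx
                            curr[j - 1] + 1,            # insert cy
                            prev[j - 1] + (cx != cy)))  # substitute
        prev = curr
    return prev[-1]

def distance_to_language(x, language):
    """Distance from a string x to a finite language L: the minimum edit
    distance between x and any word of L."""
    return min(edit_distance(x, w) for w in language)

# An observed trace is "normal" when it is close to the model language:
L = {"open read close", "open write close"}
print(distance_to_language("open read close", L))  # 0
```

A trace at distance 0 matches the model exactly; a large distance flags a potential intrusion, with the threshold left to the detection policy.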
A number of methodological papers published during the last years testify that a need for a thorough revision of the research methodology is felt by the operations research community; see, for example, [Barr et al., J. Heuristics 1 (1995) 9–32; Eiben and Jelasity, Proceedings of the 2002 Congress on Evolutionary Computation (CEC'2002) 582–587; Hooker, J. Heuristics 1 (1995) 33–42; Rardin and Uzsoy, J. Heuristics 7 (2001) 261–304]. In particular, the performance evaluation of nondeterministic methods, including widely studied metaheuristics such as evolutionary computation and ant colony optimization, requires the definition of new experimental protocols. A careful and thorough analysis of the problem of evaluating metaheuristics reveals strong similarities between this problem and the problem of evaluating learning methods in the machine learning field. In this paper, we show that several conceptual tools commonly used in machine learning, such as the probabilistic notion of a class of instances and the separation between the training and the testing datasets, fit naturally in the context of metaheuristics evaluation. Accordingly, we propose and discuss some principles inspired by the experimental practice in machine learning for guiding the performance evaluation of optimization algorithms. Among these principles, a clear separation between the instances that are used for tuning algorithms and those that are used in the actual evaluation is particularly important for a proper assessment.
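The tuning/evaluation separation advocated above can be sketched in a few lines of Python; all names and the toy tuning rule here are illustrative, not taken from the paper.

```python
import random

def assess(algorithm, instances, tuning_fraction=0.5, seed=0):
    """Tune on one subset of instances and evaluate on a disjoint subset,
    mirroring the training/testing split of machine learning. `algorithm`
    is assumed to be a function (instance, parameter) -> solution cost."""
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * tuning_fraction)
    tuning, testing = shuffled[:cut], shuffled[cut:]

    # Tune: pick the parameter value with the best average cost on the tuning set.
    candidates = [0.1, 0.5, 0.9]
    best = min(candidates,
               key=lambda p: sum(algorithm(i, p) for i in tuning) / len(tuning))

    # Evaluate: report the average cost of the tuned algorithm on unseen instances.
    return best, sum(algorithm(i, best) for i in testing) / len(testing)
```

Reporting the score on the testing instances only, never on the instances used for tuning, is what prevents the over-optimistic results that motivate the revised protocols.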
In this work we study some probabilistic models for the random generation of words over a given alphabet that have been used in the literature in connection with pattern statistics. Our goal is to compare models based on Markovian processes (where the occurrence of a symbol in a given position only depends on a finite number of previous occurrences) with the stochastic models that can generate a word of given length from a regular language under uniform distribution. We present some results that show the differences between these two stochastic models and their relationship with the rational probabilistic measures.