Suppose that $q$ is a prime power exceeding five. For every integer $N$ there exists a 3-connected GF($q$)-representable matroid that has at least $N$ inequivalent GF($q$)-representations. In contrast to this, Geelen, Oxley, Vertigan and Whittle have conjectured that, for any integer $r > 2$, there exists an integer $n(q,\, r)$ such that if $M$ is a 3-connected GF($q$)-representable matroid and $M$ has no rank-$r$ free-swirl or rank-$r$ free-spike minor, then $M$ has at most $n(q,\, r)$ inequivalent GF($q$)-representations. The main result of this paper is a proof of this conjecture for Zaslavsky's class of bias matroids.
We give new formulas for the asymptotics of the number of spanning trees of a large graph. A special case answers a question of McKay [Europ. J. Combin. 4 149–160] for regular graphs. The general answer involves a quantity for infinite graphs that we call ‘tree entropy’, which we show is the logarithm of a normalized determinant of the graph Laplacian for infinite graphs. Tree entropy is also expressed using random walks. We relate tree entropy to the metric entropy of the uniform spanning forest process on quasi-transitive amenable graphs, extending a result of Burton and Pemantle [Ann. Probab. 21 1329–1371].
We study the Lovász number $\vartheta$ along with two related SDP relaxations $\vartheta_{1/2}$, $\vartheta_2$ of the independence number and the corresponding relaxations $\bar\vartheta$, $\bar\vartheta_{1/2}$, $\bar\vartheta_2$ of the chromatic number on random graphs $G_{n,p}$. We prove that $\vartheta,\vartheta_{1/2},\vartheta_2(G_{n,p})$ are concentrated about their means, and that $\bar\vartheta,\bar\vartheta_{1/2},\bar\vartheta_2(G_{n,p})$ in the case $p<n^{-1/2-\varepsilon}$ are concentrated in intervals of constant length. Moreover, extending a result of Juhász [28], we estimate the probable value of $\vartheta,\vartheta_{1/2},\vartheta_2(G_{n,p})$ for edge probabilities $c_0/n\leq p\leq 1-c_0/n$, where $c_0>0$ is a constant. As an application, we give improved algorithms for approximating the independence number of $G_{n,p}$ and for deciding $k$-colourability in polynomial expected time.
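For orientation, the Lovász number itself can be computed by a small semidefinite program; the Python sketch below is not part of the paper and assumes the cvxpy modelling library, using one standard formulation, $\vartheta(G)=\max\{\sum_{i,j}X_{ij} : X\succeq 0,\ \operatorname{tr}X=1,\ X_{ij}=0 \text{ for } ij\in E\}$.

```python
# Minimal sketch (assumes cvxpy is installed): the Lovász number via the SDP
#   theta(G) = max { sum_ij X_ij : X PSD, trace(X) = 1, X_ij = 0 for ij in E }.
import cvxpy as cp

def lovasz_theta(n, edges):
    """Lovász number of the graph on vertices 0..n-1 with the given edge list."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[i, j] == 0 for (i, j) in edges]
    return cp.Problem(cp.Maximize(cp.sum(X)), constraints).solve()

# Example: the 5-cycle C_5, for which theta = sqrt(5) ~ 2.236.
print(lovasz_theta(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```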
We show that symmetry, represented by a graph's automorphism group, can be used to greatly reduce the computational work for the substitution method. This allows application of the substitution method over larger regions of the problem lattices, resulting in tighter bounds on the percolation threshold $p_c$. We demonstrate the symmetry reduction technique using bond percolation on the $(3,12^2)$ lattice, where we improve the bounds on $p_c$ from (0.738598,0.744900) to (0.739399,0.741757), a reduction of more than 62% in width, from 0.006302 to 0.002358.
Let $G$ be a finite group of order $n$ and let $k$ be a natural number. Let $\{x_i : i\in I\}$ be a family of elements of $G$ such that $|I|= n+k-1$, and let $v$ be a value that occurs most often in this family. Let $\{\sigma_i : 1\leq i \leq k\}$ be a family of permutations of $G$ such that $\sigma_i(1)=1$ for all $i$. We obtain the following result.
There are pairwise distinct elements $i_1, i_2, \dots, i_k\in I$ such that \[ \prod_{1\leq j\leq k} \sigma_j\big(v^{-1}x_{i_j}\big) = 1.\]
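For instance, taking $G=\mathbb{Z}/n\mathbb{Z}$ written additively (so that the identity is $0$), $k=n$ and every $\sigma_j$ the identity, the conclusion reads \[ \sum_{1\leq j\leq n}\big(x_{i_j}-v\big)=0, \quad\text{that is,}\quad \sum_{1\leq j\leq n} x_{i_j}=nv=0 \ \text{in } \mathbb{Z}/n\mathbb{Z}, \] so among any $2n-1$ elements of $\mathbb{Z}/n\mathbb{Z}$ some $n$ of them have zero sum, which is the Erdős–Ginzburg–Ziv theorem.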
We study self-avoiding walks (SAWs) on non-Euclidean lattices that correspond to regular tilings of the hyperbolic plane (‘hyperbolic graphs’). We prove that on all but at most eight such graphs, (i) there are exponentially fewer $N$-step self-avoiding polygons than there are $N$-step SAWs, (ii) the number of $N$-step SAWs grows as $\mu_w^N$ within a constant factor, and (iii) the average end-to-end distance of an $N$-step SAW is approximately proportional to $N$. In terms of critical exponents from statistical physics, (ii) says that $\gamma=1$ and (iii) says that $\nu=1$. We also prove that $\gamma$ is finite on all hyperbolic graphs, and we prove a general identity about non-reversing walks that had previously been discovered for certain special cases.
We investigate a new denotational model of linear logic based on the purely relational model. In this semantics, webs are equipped with a notion of ‘finitary’ subsets satisfying a closure condition, and proofs are interpreted as finitary sets. In spite of a formal similarity, this model is quite different from the usual models of linear logic (coherence semantics, hypercoherence semantics, the various existing game semantics, and so on). In particular, the standard fixed-point operators used for defining the general recursive functions are not finitary, although the primitive recursion operators are. This model can be considered as a discrete analogue of the Köthe space semantics introduced in a previous paper: we show how, given a field, each finiteness space gives rise to a vector space endowed with a linear topology, a notion introduced by Lefschetz in 1942, and we study the corresponding model where morphisms are linear continuous maps (a version of Girard's quantitative semantics with coefficients in the field). In this way we obtain a new model of the recently introduced differential lambda-calculus.
Constructive type theory is an expressive programming language in which both algorithms and proofs can be represented. A limitation of constructive type theory as a programming language is that only terminating programs can be defined in it. Hence, general recursive algorithms have no direct formalisation in type theory since they contain recursive calls that satisfy no syntactic condition guaranteeing termination. In this work, we present a method to formalise general recursive algorithms in type theory. Given a general recursive algorithm, our method is to define an inductive special-purpose accessibility predicate that characterises the inputs on which the algorithm terminates. The type-theoretic version of the algorithm is then defined by structural recursion on the proof that the input values satisfy this predicate. The method separates the computational and logical parts of the definitions and thus the resulting type-theoretic algorithms are clear, compact and easy to understand. They are as simple as their equivalents in a functional programming language, where there is no restriction on recursive calls. Here, we give a formal definition of the method and discuss its power and its limitations.
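As a rough illustration of this style of definition, the following is a minimal sketch in Lean 4; it is not the paper's own formalisation, and the names DivAcc and divAcc are invented here. The running example is division by repeated subtraction, whose recursive call is not structurally decreasing.

```lean
-- div m n = if m < n then 0 else div (m - n) n + 1 is not structurally recursive.
-- First, a special-purpose accessibility predicate describing the inputs on
-- which the recursion unfolds finitely:
inductive DivAcc : Nat → Nat → Type where
  | base : {m n : Nat} → m < n → DivAcc m n
  | step : {m n : Nat} → n ≤ m → DivAcc (m - n) n → DivAcc m n

-- The algorithm is then defined by structural recursion on the proof,
-- not on the numeric arguments:
def divAcc : (m n : Nat) → DivAcc m n → Nat
  | _, _, DivAcc.base _     => 0
  | m, n, DivAcc.step _ acc => divAcc (m - n) n acc + 1

-- A call must supply a termination proof, e.g. for 5 / 2:
#eval divAcc 5 2 (DivAcc.step (by decide) (DivAcc.step (by decide) (DivAcc.base (by decide))))  -- 2
```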
We propose a new category-theoretic formulation of relational parametricity based on a logic for reasoning about parametricity given by Abadi and Plotkin. The logic can be used to reason about parametric models, allowing us to prove consequences of parametricity that, to our knowledge, have not been proved before for existing category-theoretic notions of relational parametricity. We provide examples of parametric models and describe a way of constructing parametric models from given models of the second-order lambda calculus.
The existence of semi-pullbacks for stochastic relations over analytic spaces is addressed. It is shown by means of a measure extension that the theory of measurable selectors for measurable relations may be employed for its solution. The category of stochastic relations is shown not to have (weak) pullbacks.
In this paper we introduce a new hierarchical graph model to structure large graphs into small components by distributing the nodes (and, likewise, edges) into a hierarchy of packages. In contrast to other known approaches, we do not fix the type of underlying graphs. Moreover, our model is equipped with a rule-based transformation concept such that hierarchical graphs are not restricted to being used only for the static representation of complex system states, but can also be used to describe dynamic system behaviour.
Algorithmic skeletons are abstractions from common patterns of parallel activity which offer a high degree of reusability for developers of parallel algorithms. Their close association with higher-order functions (HOFs) makes functional languages, with their strong transformational properties, excellent vehicles for skeleton-based parallel program development. However, using HOFs in this way raises substantial problems of identification of useful HOFs within a given application and of resource allocation on target architectures. We present the design and implementation of a parallelising compiler for Standard ML which exploits parallelism in the familiar $map$ and $fold$ HOFs through skeletons for processor farms and processor trees, respectively. The compiler extracts parallelism automatically and is independent of the target architecture. HOF execution within a functional language can be nested in the sense that one HOF may be passed and evaluated during the execution of another HOF. We are able to exploit this by nesting our parallel skeletons in a processor topology which matches the structure of the Standard ML source. However, where HOF arguments result from partially applied functions, free variable bindings must be identified and communicated through the corresponding skeleton hierarchy to where those arguments are actually applied. We describe the analysis leading from input Standard ML through HOF instantiation and backend compilation to an executable parallel program. We also present an overview of the runtime system and the execution model. Finally, we give parallel performance figures for several example programs, of varying computational loads, on the Linux-based Beowulf, IBM SP/2, Fujitsu AP3000 and Sun StarCat 15000 MIMD parallel machines. These demonstrate good cross-platform consistency of parallel code behaviour.
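The farm skeleton for $map$ can be pictured with a small Python sketch; this is purely illustrative (the compiler described above works on Standard ML and MIMD hardware, and the function names below are invented): each element of the mapped list is handed to one of a pool of identical workers.

```python
# Illustrative "processor farm" skeleton for map: farm list elements out to a
# pool of worker processes and gather the results in order.
from multiprocessing import Pool

def farm_map(f, xs, workers=4):
    """Map f over xs by farming the elements out to worker processes."""
    with Pool(processes=workers) as pool:
        return pool.map(f, xs)

def heavy(x):
    return sum(i * i for i in range(x))  # stand-in for a costly per-element task

if __name__ == "__main__":
    print(farm_map(heavy, [10_000, 20_000, 30_000, 40_000]))
```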
Fundamental notions of combinatorics on words underlie natural language processing. This is not surprising, since combinatorics on words can be seen as the formal study of sets of strings, and sets of strings are fundamental objects in language processing.
Indeed, language processing is obviously a matter of strings. A text or a discourse is a sequence of sentences; a sentence is a sequence of words; a word is a sequence of letters. The most universal levels are those of sentence, word, and letter (or phoneme), but intermediate levels between word and letter exist, and can be crucial in some languages: a level of morphological elements (e.g. suffixes), and the level of syllables. The discovery of this piling up of levels, and in particular of word level and phoneme level, delighted structuralist linguists in the twentieth century. They termed this inherent, universal feature of human language “double articulation”.
It is a little more intricate to see how sets of strings are involved. There are two main reasons. First, at a point in a linguistic flow of data being processed, you must be able to predict the set of possible continuations after what is already known, or at least to expect any continuation among some set of strings that depends on the language. Second, natural languages are ambiguous: that is, a written or spoken portion of text can often be understood or analysed in several ways, and the analyses are handled as a set of strings as long as they cannot be reduced to a single analysis.
The chapter presents data structures used to store the suffixes of a text, together with some of their applications. These structures are designed to give fast access to all factors of the text, which is why they have a fairly large number of applications in text processing.
Two types of objects are considered in this chapter, digital trees and automata, together with their compact versions. Trees share the common prefixes of the words in the set; automata additionally share their common suffixes. The structures are presented in order of decreasing size.
The representation of all the suffixes of a word by an ordinary digital tree, called a suffix trie (Section 2.1), has the advantage of being simple but can lead to a memory size that is quadratic in the length of the word. The compact suffix tree (Section 2.2) is guaranteed to fit in linear memory space.
Minimizing the suffix trie in the automata-theoretic sense gives the minimal automaton accepting the suffixes; it is described in Section 2.4. Compaction and minimization together yield the compact suffix automaton of Section 2.5.
Most algorithms that build the structures presented in this chapter work in time O(n × log Card A), for a text of length n, assuming that there is an ordering on the alphabet A. Their execution time is thus linear when the alphabet is finite and fixed. Locating a word of length m in the text then takes O(m × log Card A) time.
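To make the size issue concrete, the following Python sketch (illustrative only, not the book's construction) builds a naive suffix trie by inserting each suffix letter by letter, and answers the factor queries such structures are designed for. With dictionary-based branching the log Card A factor of the ordered-alphabet setting disappears, but the number of nodes can still be quadratic in the length of the text.

```python
# Naive suffix trie of a text: insert every suffix letter by letter.
# The number of nodes can be quadratic in len(text), which is why compact
# suffix trees and suffix automata are introduced next.

def suffix_trie(text):
    root = {}
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            node = node.setdefault(ch, {})
    return root

def is_factor(trie, word):
    """A word is a factor (substring) of the text iff it labels a path from the root."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = suffix_trie("ababb")
print(is_factor(trie, "bab"), is_factor(trie, "aa"))  # True False
```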
This chapter is an introduction to the book. It gives general notions, notation, and technical background. It covers, in a tutorial style, the main notions in use in algorithms on words. In this sense, it is a comprehensive exposition of basic elements concerning algorithms on words, automata and transducers, and probability on words.
The general goal of the “stringology” we pursue here is to manipulate strings of symbols: to compare them, to count them, to check some of their properties, and to perform simple transformations in an effective and efficient way.
A typical illustrative example of our approach is the action of circular permutations on words, because several of the aspects mentioned above are present in this example. First, the operation of circular shift is a transduction which can be realized by a transducer; we include in this chapter a section (Section 1.5) on transducers, which will be used again in Chapter 3. The orbits of the transformation induced by the circular permutation are the so-called conjugacy classes, a basic notion in combinatorics on words. The minimal element of a conjugacy class is a good representative of the class, and it can be computed by an efficient algorithm (in fact in linear time); this is one of the algorithms that appear in Section 1.2, and algorithms for conjugacy are considered again in Chapter 2. These minimal words give rise to Lyndon words, which have remarkable combinatorial properties already emphasized in Lothaire (1997). We describe the Lyndon factorization algorithm in Section 1.2.5.
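As a concrete companion to the algorithms mentioned here (a sketch in Python, not the book's own presentation), Duval's algorithm computes the Lyndon factorization of a word in linear time.

```python
def lyndon_factorization(word):
    """Duval's algorithm: factor word into a non-increasing
    sequence of Lyndon words, in time linear in len(word)."""
    factors = []
    i, n = 0, len(word)
    while i < n:
        j, k = i + 1, i
        while j < n and word[k] <= word[j]:
            if word[k] < word[j]:
                k = i          # still scanning a single Lyndon word
            else:
                k += 1         # inside a periodic repetition
            j += 1
        while i <= k:          # emit the factor(s) found, each of length j - k
            factors.append(word[i:i + j - k])
            i += j - k
    return factors

print(lyndon_factorization("banana"))  # ['b', 'an', 'an', 'a']
```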