A grammar is intuitively a set of rules used to construct a language contained in Σ* for some alphabet Σ. These rules allow us to replace symbols or strings of symbols with other symbols or strings of symbols until we finally have strings of symbols contained in Σ, allowing us to form an element of the language. By placing restrictions on the rules, we shall see that we can develop different types of languages. In particular, we can restrict our rules to produce desirable qualities in our language. For example, in the examples below we would not want 3 + ÷4 − ×6, nor would we want the sentence "Slowly cowboy the leaped sunset." Suppose that we begin with a word add, and that we have a rule that allows us to replace add with A + B, and that both A and B can be replaced with any nonnegative integer less than ten. Using this rule, we can replace A with 5 and B with 3 to get 5 + 3. There might also be an additional rule that allows us to replace add with a different string of symbols.
If we add further rules that A can be replaced by A + B and B can be replaced by A × B, we can start by replacing add with A + B. If we then replace A with A + B and B with A × B, we get A + B + A × B.
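The replacement process above can be sketched as a small rewriting procedure. The rule set and representation below are our own illustration of the rules described in the text (nonterminals "add", "A", "B"; terminals are the digits and the operator symbols), not code from the book.

```python
# A hypothetical sketch of the replacement rules described above.
# Nonterminals: "add", "A", "B"; terminals: digits 0-9, "+", and "x".
rules = {
    "add": [["A", "+", "B"]],
    "A":   [[str(d)] for d in range(10)] + [["A", "+", "B"]],
    "B":   [[str(d)] for d in range(10)] + [["A", "x", "B"]],
}

def derive(symbols, choose):
    """Repeatedly replace the leftmost nonterminal until only terminals remain."""
    while True:
        for i, s in enumerate(symbols):
            if s in rules:
                symbols = symbols[:i] + choose(rules[s]) + symbols[i + 1:]
                break
        else:  # no nonterminal left: the string is a word of the language
            return " ".join(symbols)

# Reproduce the derivation in the text: add => A + B => 5 + B => 5 + 3.
picks = iter([["A", "+", "B"], ["5"], ["3"]])
print(derive(["add"], lambda options: next(picks)))  # 5 + 3
```

Passing a different `choose` function (for instance one that samples from `options` at random) generates other words of the same language.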
An automaton is a device which recognizes or accepts certain elements of Σ*, where Σ is a finite alphabet. Since the elements accepted by the automaton are a subset of Σ*, they form a language. Therefore each automaton recognizes or accepts a language contained in Σ*. The subset of Σ* consisting of the words accepted by an automaton M is the language over Σ accepted by M, denoted L(M). We will be interested in the types of languages an automaton accepts.
Definition 3.1 A deterministic automaton, denoted by (Σ, Q, s0, ϒ, F), consists of a finite alphabet Σ, a finite set Q of states, an initial state s0 ∈ Q, a function ϒ : Q × Σ → Q called the transition function, and a subset F ⊆ Q of acceptance states.
The input of ϒ is a state belonging to Q and a letter of Σ. The output is a state of Q (possibly the same one). If the automaton is in state s and "reads" the letter a, then (s, a) is the input for ϒ and ϒ(s, a) is the next state. Given a string in Σ*, the automaton "reads" the string or word as follows. Beginning at the initial state s0, it reads the string one letter at a time (if the string is nonempty). If the first letter is the letter a of Σ, then it "moves" to state s = ϒ(s0, a).
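This reading procedure is easy to sketch in code. The machine below, which accepts strings over {a, b} containing an even number of a's, is a hypothetical example chosen for illustration; the transition function ϒ is represented as a dictionary keyed by (state, letter) pairs, per Definition 3.1.

```python
def accepts(delta, s0, F, word):
    """Read the word letter by letter, starting from the initial state s0;
    accept iff the final state lies in the set F of acceptance states."""
    state = s0
    for letter in word:
        state = delta[(state, letter)]  # next state = delta(state, letter)
    return state in F

# Hypothetical example: even number of 'a's over the alphabet {a, b}.
delta = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd",  "a"): "even", ("odd",  "b"): "odd",
}
print(accepts(delta, "even", {"even"}, "abab"))  # True: two 'a's
print(accepts(delta, "even", {"even"}, "ab"))    # False: one 'a'
```

Note that the empty string is accepted by this machine, since the automaton never leaves the initial state s0 = "even", which is an acceptance state.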
This book serves two purposes: the first is as a text, and the second is as a resource for someone wishing to explore topics not found in other automata theory texts. It was originally written as a textbook for anyone seeking to learn the basic theories of automata, languages, and Turing machines. In the first five chapters, the book presents the necessary basic material for the study of these theories. Examples of topics included are: regular languages and Kleene's Theorem; minimal automata and syntactic monoids; the relationship between context-free languages and pushdown automata; and Turing machines and decidability. The exposition is gentle but rigorous, with many examples and exercises (teachers using the book with their course may obtain a copy of the solution manual by sending an email to solutions@cambridge.org). It includes topics not found in other texts, such as codes, retracts, and semiretracts.
Thanks primarily to Tom Head, the book has been expanded so that it should be of interest to people in mathematics, computer science, biology, and possibly other areas. Thus, the second purpose of the book is to provide material for someone already familiar with the basic topics mentioned above, but seeking to explore topics not found in other automata theory books.
The two final chapters introduce two programs of research not previously included in beginning expositions. Chapter 6 introduces a visually inspired approach to languages allowed by the unique representation of each word as a power of a primitive word. The required elements of the theory of combinatorics on words are included in the exposition of this chapter.
The usual definition of average degree for a non-regular lattice has the disadvantage that it takes the same value for many lattices with clearly different connectivity. We introduce an alternative definition of average degree, which better separates different lattices.
These measures are compared on a class of lattices and are analysed using a Markov chain describing a random walk on the lattice. Using the new measure, we conjecture the order of both the critical probabilities for bond percolation and the connective constants for self-avoiding walks on these lattices.
Let $s$ and $t$ be integers satisfying $s \geq 2$ and $t \geq 2$. Let $S$ be a tree of size $s$, and let $P_t$ be the path of length $t$. We show in this paper that, for every edge-colouring of the complete graph on $n$ vertices, where $n=224(s-1)^2t$, there is either a monochromatic copy of $S$ or a rainbow copy of $P_t$. So, in particular, the number of vertices needed grows only linearly in $t$.
For any partition of $\{1, 2, \ldots, n\}$ we define its increments $X_i$, $1 \leq i \leq n$, by $X_i = 1$ if $i$ is the smallest element in the partition block that contains it, $X_i = 0$ otherwise. We prove that for partially exchangeable random partitions (where the probability of a partition depends only on its block sizes in order of appearance), the law of the increments uniquely determines the law of the partition. One consequence is that the Chinese Restaurant Process CRP($\theta$) (the partition with distribution given by the Ewens sampling formula with parameter $\theta$) is the only exchangeable random partition with independent increments.
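As an illustration, here is a minimal simulation of the CRP($\theta$) construction together with its increment sequence, using the standard description of the process (element $i$ opens a new block with probability $\theta/(\theta+i-1)$, and otherwise joins an existing block with probability proportional to its size). The function name and data representation are our own, not taken from the paper.

```python
import random

def crp_partition(n, theta, rng):
    """Sample a partition of {1,...,n} from CRP(theta); return the blocks
    (in order of appearance) and the increments X_1..X_n, where X_i = 1
    iff i is the smallest element of its block (i.e., i opens a new block)."""
    blocks, increments = [], []
    for i in range(1, n + 1):
        # i opens a new block with probability theta / (theta + i - 1) ...
        if rng.random() < theta / (theta + i - 1):
            blocks.append([i])
            increments.append(1)
        else:
            # ... otherwise it joins an existing block w.p. proportional to size.
            r = rng.random() * (i - 1)
            for block in blocks:
                r -= len(block)
                if r < 0:
                    block.append(i)
                    break
            increments.append(0)
    return blocks, increments

blocks, incs = crp_partition(10, theta=1.0, rng=random.Random(0))
print(blocks, incs)
```

Note that $X_1 = 1$ always, and that the number of blocks equals $\sum_i X_i$; for CRP($\theta$) the increments are independent Bernoulli variables with $\mathbb{P}(X_i = 1) = \theta/(\theta + i - 1)$, which is the independence property referred to above.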
We consider the problem of reorienting an oriented matroid so that all its cocircuits are ‘as balanced as possible in ratio’. It is well known that any oriented matroid having no coloops has a totally cyclic reorientation, a reorientation in which every signed cocircuit $B = \{B^+, B^-\}$ satisfies $B^+, B^- \neq \emptyset$. We show that, for some reorientation, every signed cocircuit satisfies $1/f(r) \leq |B^+|/|B^-| \leq f(r)$, where $f(r) \leq 14\,r^2\ln(r)$, and $r$ is the rank of the oriented matroid.
In geometry, this problem corresponds to bounding the discrepancies (in ratio) that occur among the Radon partitions of a dependent set of vectors. For graphs, this result corresponds to bounding the chromatic number of a connected graph by a function of its Betti number (corank) $|E|-|V|+1$.
A dominating set $\cal D$ of a graph $G$ is a subset of $V(G)$ such that, for every vertex $v\in V(G)$, either $v\in {\cal D}$ or there exists a vertex $u \in {\cal D}$ that is adjacent to $v$. We are interested in finding dominating sets of small cardinality. A dominating set $\cal I$ of a graph $G$ is said to be independent if no two vertices of ${\cal I}$ are connected by an edge of $G$. The size of a smallest independent dominating set of a graph $G$ is the independent domination number of $G$. In this paper we present upper bounds on the independent domination number of random regular graphs. This is achieved by analysing the performance of a randomized greedy algorithm on random regular graphs using differential equations.
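A minimal sketch of one randomized greedy strategy of this kind: repeatedly pick a random vertex that is not yet dominated, add it to $\cal I$, and mark it and its neighbours as dominated. The specific rule analysed in the paper may differ; this version only illustrates why such a greedy set is automatically both independent and dominating.

```python
import random

def greedy_independent_dominating_set(adj, rng):
    """Randomized greedy sketch.  The result is independent (a neighbour of a
    chosen vertex is immediately dominated, so it is never chosen later) and
    dominating (vertices leave `undominated` only once they are dominated)."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        v = rng.choice(sorted(undominated))   # pick a random undominated vertex
        chosen.add(v)
        undominated.discard(v)
        undominated -= set(adj[v])            # its neighbours are now dominated
    return chosen

# Example: the 5-cycle, as an adjacency-list dictionary.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(greedy_independent_dominating_set(cycle5, random.Random(0)))
```

On the 5-cycle any run returns a set of size 2 or 3; the independent domination number of $C_5$ is 2.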
In this paper, we study percolation on finite Cayley graphs. A conjecture of Benjamini says that the critical percolation probability $p_c$ of any vertex-transitive graph satisfying a certain diameter condition can be bounded away from one. We prove Benjamini's conjecture for some special classes of Cayley graphs. We also establish a reduction theorem, which allows us to build Cayley graphs for large groups without increasing $p_c$.
Let $c$ be a constant and $(e_1,f_1), (e_2,f_2), \dots, (e_{cn},f_{cn})$ be a sequence of ordered pairs of edges on vertex set $[n]$ chosen uniformly and independently at random. Let $A$ be an algorithm for the on-line choice of one edge from each presented pair, and for $i= 1,\ldots,cn$ let $G_A(i)$ be the graph on vertex set $[n]$ consisting of the first $i$ edges chosen by $A$. We prove that all algorithms in a certain class have a critical value $c_A$ for the emergence of a giant component in $G_A(cn)$ (i.e., if $c < c_A$, then with high probability the largest component in $G_A(cn)$ has $o(n)$ vertices, and if $c > c_A$ then with high probability there is a component of size $\Omega(n)$ in $G_A(cn)$). We show that a particular algorithm in this class with high probability produces a giant component before $0.385n$ steps in the process (i.e., we exhibit an algorithm that creates a giant component relatively quickly). The fact that another specific algorithm that is in this class has a critical value resolves a conjecture of Spencer.
In addition, we establish a lower bound on the time of emergence of a giant component in any process produced by an on-line algorithm and show that there is a phase transition for the off-line version of the problem of creating a giant component.
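For illustration only, here is a sketch of one well-known on-line rule in this spirit, the 'product rule', which from each presented pair keeps the edge minimizing the product of its endpoints' component sizes (a rule that tries to *delay* the giant). This is not necessarily one of the specific algorithms analysed here; component sizes are tracked with a standard union–find structure.

```python
import random

class DSU:
    """Union-find with component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def largest_component_after(n, c, rng):
    """Present c*n uniformly random edge pairs; keep the edge whose endpoint
    components have the smaller size product, then report the largest component."""
    dsu = DSU(n)
    for _ in range(int(c * n)):
        pair = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        u, v = min(pair, key=lambda e: dsu.size[dsu.find(e[0])] * dsu.size[dsu.find(e[1])])
        dsu.union(u, v)
    return max(dsu.size[dsu.find(v)] for v in range(n))

print(largest_component_after(1000, 0.5, random.Random(0)))
```

Plotting the largest component against $c$ for a rule like this (and comparing with choosing an edge uniformly from each pair) exhibits the phase transition at the rule's critical value $c_A$.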
A 3-connected graph $G$ is weakly 3-connected if, for every edge $e$ of $G$, at most one of $G\backslash e$ and $G/e$ is 3-connected. The main result of this paper is that any weakly 3-connected graph can be reduced to $K_4$ by a sequence of simple operations. This extends a result of Dawes [5] on minimally 3-connected graphs.
The notion of conductance introduced by Jerrum and Sinclair [8] has been widely used to prove rapid mixing of Markov chains. Here we introduce a bound that extends this in two directions. First, instead of measuring the conductance of the worst subset of states, we bound the mixing time by a formula that can be thought of as a weighted average of the Jerrum–Sinclair bound (where the average is taken over subsets of states with different sizes). Furthermore, instead of just the conductance, which in graph theory terms measures edge expansion, we also take into account node expansion. Our bound is related to the logarithmic Sobolev inequalities, but it appears to be more flexible and easier to compute.
In the case of random walks in convex bodies, we show that this new bound is better than the known bounds for the worst case. This saves a factor of $O(n)$ in the mixing time bound, which is incurred in all proofs as a ‘penalty’ for a ‘bad start’. We show that in a convex body in $\mathbb{R}^n$, with diameter $D$, random walk with steps in a ball with radius $\delta$ mixes in $O^*(nD^2/\delta^2)$ time (if idle steps at the boundary are not counted). This gives an $O^*(n^3)$ sampling algorithm after appropriate preprocessing, improving the previous bound of $O^*(n^4)$.
The application of the general conductance bound in the geometric setting depends on an improved isoperimetric inequality for convex bodies.
By Pick's invariant form of Schwarz's lemma, an analytic function B(z) which is bounded by one in the unit disk D = {z: |z| < 1} satisfies the inequality
$$|B'(\alpha)| \leq \frac{1 - |B(\alpha)|^2}{1 - |\alpha|^2}$$
at each point α of D. Recently, several authors [2, 10, 11] have obtained more general estimates for higher order derivatives. Best possible estimates are due to Ruscheweyh [12]. Below in §2 we use a Hilbert space method to derive Ruscheweyh's results. The operator method applies equally well to operator-valued functions, and this generalization is outlined in §3.