We introduce a notion of complexity of diagrams (and, in particular, of objects and morphisms) in an arbitrary category, as well as a notion of complexity of functors between categories equipped with complexity functions. We discuss several examples of this new definition in categories of broad interest, such as finite sets, Boolean functions, topological spaces, vector spaces, semilinear and semialgebraic sets, graded algebras, affine and projective varieties and schemes, and modules over polynomial rings. We show that, on the one hand, categorical complexity recovers classical notions of nonuniform computational complexity (such as circuit complexity) in several settings, while, on the other hand, it has features that make it mathematically more natural. We also postulate that studying functor complexity is the categorical analog of classical questions in complexity theory about separating different complexity classes.
This paper starts from the observation that the standard arguments for compositionality are really arguments for the computability of semantics. Since computability does not entail compositionality, the question of what justifies compositionality recurs. The paper then elaborates on the idea of recursive semantics as corresponding to computable semantics. Using time complexity theory, with term rewriting systems as the model of semantic computation, it is then shown that syntactically unrestricted, noncompositional recursive semantics leads to computational explosion (factorial complexity). Hence, with combinatorially unrestricted syntax, semantics with tractable time complexity is compositional.
The proofs of Gödel (1931), Rosser (1936), Kleene (first proof 1936, second proof 1950), Chaitin (1970), and Boolos (1989) for the first incompleteness theorem are compared with each other, especially from the viewpoint of the second incompleteness theorem. It is shown that Gödel's first incompleteness theorem and Kleene's first theorem are equivalent to the second incompleteness theorem, that Rosser's and Kleene's second theorems deliver the second incompleteness theorem, and that Boolos' theorem is derived from the second incompleteness theorem in the standard way. It is also shown that none of Rosser's, Kleene's second, or Boolos' theorems is equivalent to the second incompleteness theorem, and that Chaitin's incompleteness theorem neither delivers nor is derived from the second incompleteness theorem. The strengths of these six proofs are thus compared with one another.
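For reference, the second incompleteness theorem that serves as the benchmark for these comparisons can be stated as follows (a standard formulation, not the paper's exact wording): for any consistent, recursively axiomatizable theory $T$ containing enough arithmetic,

\[ T \nvdash \mathrm{Con}(T), \]

where $\mathrm{Con}(T)$ is the arithmetized statement of $T$'s consistency.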
In this paper we investigate the computational complexity of deciding whether the variety generated by a given finite idempotent algebra satisfies a special type of Maltsev condition, one that can be specified using a certain kind of finite labelled path. This class of Maltsev conditions includes several well-known conditions, such as congruence permutability and having a sequence of n Jónsson terms for some given n. We show that for such “path-defined” Maltsev conditions, the decision problem is polynomial-time solvable.
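For orientation, the simplest of these conditions is congruence permutability: by Maltsev's classical theorem, a variety is congruence permutable exactly when it has a ternary term $p$ satisfying

\[ p(x,x,y) \approx y \approx p(y,x,x). \]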
We define a new class of shift spaces which contains a number of classes of interest, such as the Sturmian shifts used in discrete geometry. We show that this class is closed under two natural transformations. The first, called conjugacy, is obtained by sliding block coding. The second, called complete bifix decoding, typically includes codings by non-overlapping blocks of fixed length.
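As a concrete illustration of the first transformation, a sliding block code applies a fixed local rule to every window of a given width in a sequence. The width-2 XOR rule below is a hypothetical example block map, not one taken from the paper:

import itertools  # not needed here, kept minimal

# Minimal sketch of a sliding block code: a local rule of window
# width m is applied to every length-m window of the input word.
# The XOR rule below is a hypothetical example of a block map.

def sliding_block_code(word, block_map, m):
    """Apply a block map of window width m to a finite word."""
    return [block_map(tuple(word[i:i + m])) for i in range(len(word) - m + 1)]

# Example: width-2 XOR rule on a binary word.
xor_rule = lambda w: w[0] ^ w[1]
print(sliding_block_code([0, 1, 1, 0, 1], xor_rule, 2))  # [1, 0, 1, 1]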
We prove an essentially sharp $\tilde\Omega(n/k)$ lower bound on the $k$-round distributional complexity of the $k$-step pointer chasing problem under the uniform distribution, when Bob speaks first. This is an improvement over Nisan and Wigderson's $\tilde\Omega(n/k^{2})$ lower bound, and essentially matches the randomized lower bound proved by Klauck. The proof is information-theoretic, and a key part of it is using asymmetric triangular discrimination instead of total variation distance; this idea may be useful elsewhere.
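For context, the symmetric triangular discrimination between distributions $P$ and $Q$ on a finite set is commonly defined by the expression below; the asymmetric variant used in the proof is a modification of this quantity whose exact form is given in the paper:

\[ \Delta(P,Q) = \sum_{x} \frac{(P(x)-Q(x))^{2}}{P(x)+Q(x)}. \]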
There has been substantial interest in estimating the value of a graph parameter, i.e. of a real-valued function defined on the set of finite graphs, by querying a randomly sampled substructure whose size is independent of the size of the input. Graph parameters that may be successfully estimated in this way are said to be testable or estimable, and the sample complexity $q_z = q_z(\varepsilon)$ of an estimable parameter $z$ is the size of a random sample of a graph $G$ required to ensure that the value of $z(G)$ may be estimated within an error of $\varepsilon$ with probability at least 2/3. In this paper, for any fixed monotone graph property $\mathcal{P}=\text{Forb}(\mathcal{F})$, we study the sample complexity of estimating a bounded graph parameter $z$ that, for an input graph $G$, counts the number of spanning subgraphs of $G$ that satisfy $\mathcal{P}$. To improve upon previous upper bounds on the sample complexity, we show that the vertex set of any graph that satisfies a monotone property $\mathcal{P}$ may be partitioned equitably into a constant number of classes in such a way that the cluster graph induced by the partition is not far from satisfying a natural weighted graph generalization of $\mathcal{P}$. Properties for which this holds are said to be recoverable, and the study of recoverable properties may be of independent interest.
Random constraint satisfaction problems play an important role in computer science and combinatorics. For example, they provide challenging benchmark examples for algorithms, and they have been harnessed in probabilistic constructions of combinatorial structures with peculiar features. In an important contribution (Krzakala et al. 2007, Proc. Nat. Acad. Sci.), physicists made several predictions on the precise location and nature of phase transitions in random constraint satisfaction problems. Specifically, they predicted that their satisfiability thresholds are quite generally preceded by several other thresholds that have a substantial impact both combinatorially and computationally. These include the condensation phase transition, where long-range correlations between variables emerge, and the reconstruction threshold. In this paper we prove these physics predictions for a broad class of random constraint satisfaction problems. Additionally, we obtain contiguity results that have implications for Bayesian inference tasks, a subject that has received a great deal of interest recently (e.g. Banks et al. 2016, Proc. 29th COLT).
Numerous learning tasks can be described as the process of extrapolating patterns from observed data. One of the driving intuitions behind the theory of algorithmic randomness is that randomness amounts to the absence of any effectively detectable patterns: it is thus natural to regard randomness as antithetical to inductive learning. Osherson and Weinstein [11] draw upon the identification of randomness with unlearnability to introduce a learning-theoretic framework (in the spirit of formal learning theory) for modelling algorithmic randomness. They define two success criteria—specifying under what conditions a pattern may be said to have been detected by a computable learning function—and prove that the collections of data sequences on which these criteria cannot be satisfied correspond to the set of weak 1-randoms and the set of weak 2-randoms, respectively. This learning-theoretic approach affords an intuitive perspective on algorithmic randomness, and it invites the question of whether restricting attention to learning-theoretic success criteria comes at an expressivity cost. In other words, is the framework expressive enough to capture most core algorithmic randomness notions and, in particular, Martin-Löf randomness—arguably, the most prominent algorithmic randomness notion in the literature? In this article, we answer the latter question in the affirmative by providing a learning-theoretic characterisation of Martin-Löf randomness. We then show that Schnorr randomness, another central algorithmic randomness notion, also admits a learning-theoretic characterisation in this setting.
For a rumour spreading protocol, the spread time is defined as the first time everyone learns the rumour. We compare the synchronous push&pull rumour spreading protocol with its asynchronous variant, and show that for any n-vertex graph and any starting vertex, the ratio between their expected spread times is bounded by $O(n^{1/3}\log^{2/3} n)$. This improves the $O(\sqrt{n})$ upper bound of Giakkoupis, Nazari and Woelfel (2016). Our bound is tight up to a factor of $O(\log n)$, as illustrated by the string of diamonds graph. We also show that if, for a pair $\alpha,\beta$ of real numbers, there exist infinitely many graphs for which the two spread times are $n^{\alpha}$ and $n^{\beta}$ in expectation, then $0 \le \alpha \le 1$ and $\alpha \le \beta \le \frac{1}{3} + \frac{2}{3}\alpha$; and we show each such pair $\alpha,\beta$ is achievable.
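For readers unfamiliar with the protocol, one synchronous round of push&pull can be sketched as follows; this is a generic illustration of the protocol, not code from the paper:

import random

# One synchronous round of push&pull: every informed vertex pushes the
# rumour to a uniformly random neighbour, and every uninformed vertex
# pulls from a uniformly random neighbour (learning the rumour if that
# neighbour is informed). adj maps each vertex to its neighbour list.

def push_pull_round(adj, informed):
    newly_informed = set()
    for v in adj:
        u = random.choice(adj[v])
        if v in informed:          # push: v tells a random neighbour
            newly_informed.add(u)
        elif u in informed:        # pull: v asks a random neighbour
            newly_informed.add(v)
    return informed | newly_informed

# Spread time: number of rounds until everyone is informed.
def spread_time(adj, start):
    informed, rounds = {start}, 0
    while len(informed) < len(adj):
        informed = push_pull_round(adj, informed)
        rounds += 1
    return rounds

# Example: a path on 5 vertices, rumour starting at one end.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(spread_time(path, 0))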
This paper investigates the computational complexity of deciding if a given finite idempotent algebra has a ternary term operation $m$ that satisfies the minority equations $m(y,x,x)\approx m(x,y,x)\approx m(x,x,y)\approx y$. We show that a common polynomial-time approach to testing for this type of condition will not work in this case and that this decision problem lies in the class NP.
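A standard concrete example of such a term: over $\mathbb{Z}_2$, the operation $m(x,y,z) = x + y + z \pmod 2$ is a minority term, since, for instance,

\[ m(y,x,x) = y + x + x \equiv y \pmod 2, \]

and the other two minority equations hold by the same calculation.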
Let $r \geq 2$ be a fixed constant and let $\mathcal{H}$ be an $r$-uniform, $D$-regular hypergraph on $N$ vertices. Assume further that $D \to \infty$ as $N \to \infty$ and that the degrees of pairs of vertices in $\mathcal{H}$ are at most $L$, where $L = D/(\log N)^{\omega(1)}$. We consider the random greedy algorithm for forming a matching in $\mathcal{H}$: we choose a matching at random by iteratively choosing edges uniformly at random to be in the matching and deleting all edges that share at least one vertex with a chosen edge before moving on to the next choice. This process terminates when no edges remain in the hypergraph. We show that with high probability the proportion of vertices of $\mathcal{H}$ that are not saturated by the final matching is at most $(L/D)^{1/(2(r-1))+o(1)}$. This point is a natural barrier in the analysis of the random greedy hypergraph matching process.
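The random greedy matching process itself is simple to state in code. The sketch below treats a hypergraph as a list of edges (vertex sets) and is only an illustration of the process analysed in the paper:

import random

# Random greedy hypergraph matching: repeatedly pick a uniformly random
# remaining edge, add it to the matching, and delete every edge that
# shares at least one vertex with the chosen edge.

def random_greedy_matching(edges):
    edges = [frozenset(e) for e in edges]
    matching = []
    while edges:
        e = random.choice(edges)
        matching.append(e)
        edges = [f for f in edges if not (f & e)]
    return matching

# Vertices left unsaturated by the final matching.
def unsaturated(vertices, matching):
    covered = set().union(*matching) if matching else set()
    return set(vertices) - covered

# Example: a 3-uniform hypergraph on 6 vertices.
m = random_greedy_matching([{0, 1, 2}, {2, 3, 4}, {3, 4, 5}])
print(m, unsaturated(range(6), m))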
Given complex numbers $w_1,\ldots,w_n$, we define the weight $w(X)$ of a set $X$ of 0–1 vectors as the sum of $w_1^{x_1} \cdots w_n^{x_n}$ over all vectors $(x_1,\ldots,x_n)$ in $X$. We present an algorithm which, for a set $X$ defined by a system of homogeneous linear equations with at most $r$ variables per equation and at most $c$ equations per variable, computes $w(X)$ within relative error $\epsilon > 0$ in $(rc)^{O(\ln n - \ln\epsilon)}$ time provided $|w_j| \leq \beta (r\sqrt{c})^{-1}$ for an absolute constant $\beta > 0$ and all $j = 1,\ldots,n$. A similar algorithm is constructed for computing the weight of a linear code over $\mathbb{F}_p$. Applications include counting weighted perfect matchings in hypergraphs, counting weighted graph homomorphisms, computing weight enumerators of linear codes with sparse code generating matrices, and computing the partition functions of the ferromagnetic Potts model at low temperatures and of the hard-core model at high fugacity on biregular bipartite graphs.
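As a naive reference point for small instances, $w(X)$ can be computed by direct enumeration of all $2^n$ vectors; the polynomial-time algorithm of the paper avoids exactly this blow-up. The list-of-rows encoding of the equations below is a hypothetical choice for illustration:

import itertools

# Brute-force computation of w(X) = sum over x in X of
# w_1^{x_1} * ... * w_n^{x_n}, where X is the set of 0-1 vectors
# satisfying a system of homogeneous linear equations A x = 0.
# Exponential in n; only for tiny sanity checks.

def weight(w, equations):
    """w: list of (complex) weights; equations: rows of A for A x = 0."""
    n = len(w)
    total = 0
    for x in itertools.product((0, 1), repeat=n):
        if all(sum(a * xi for a, xi in zip(eq, x)) == 0 for eq in equations):
            term = 1
            for wj, xj in zip(w, x):
                term *= wj ** xj
            total += term
    return total

# Example: the equation x_1 - x_2 = 0 forces x_1 = x_2, so
# X = {(0,0), (1,1)} and w(X) = 1 + w_1 w_2.
print(weight([2, 3], [[1, -1]]))  # 1 + 2*3 = 7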
An abelian processor is an automaton whose output is independent of the order of its inputs. Bond and Levine have proved that a network of abelian processors performs the same computation regardless of processing order (subject only to a halting condition). We prove that any finite abelian processor can be emulated by a network of certain very simple abelian processors, which we call gates. The most fundamental gate is a toppler, which absorbs input particles until their number exceeds some given threshold, at which point it topples, emitting one particle and returning to its initial state. With the exception of an adder gate, which simply combines two streams of particles, each of our gates has only one input wire, which sends letters (‘particles’) from a unary alphabet. Our results can be reformulated in terms of the functions computed by processors, and one consequence is that any increasing function from $\mathbb{N}^{k}$ to $\mathbb{N}^{\ell}$ that is the sum of a linear function and a periodic function can be expressed in terms of (possibly nested) sums of floors of quotients by integers.
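The toppler gate is easy to render in code. This is a minimal sketch of the behaviour as described in the abstract; the interface (a receive method returning the number of emitted particles) is our own choice for illustration:

# A toppler with threshold t absorbs input particles one at a time;
# once their number exceeds t it "topples": it emits one particle
# and returns to its initial (empty) state.

class Toppler:
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def receive(self):
        """Absorb one particle; return the number emitted (0 or 1)."""
        self.count += 1
        if self.count > self.threshold:
            self.count = 0   # topple: reset to the initial state
            return 1
        return 0

# A threshold-2 toppler emits one particle for every third input:
t = Toppler(2)
print([t.receive() for _ in range(7)])  # [0, 0, 1, 0, 0, 1, 0]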
This paper provides short proofs of two fundamental theorems of finite semigroup theory whose previous proofs were significantly longer, namely the two-sided Krohn-Rhodes decomposition theorem and Henckell’s aperiodic pointlike theorem. We use a new algebraic technique that we call the merge decomposition. A prototypical application of this technique decomposes a semigroup $T$ into a two-sided semidirect product whose components are built from two subsemigroups $T_{1}$, $T_{2}$, which together generate $T$, and the subsemigroup generated by their setwise product $T_{1}T_{2}$. In this sense we decompose $T$ by merging the subsemigroups $T_{1}$ and $T_{2}$. More generally, our technique merges semigroup homomorphisms from free semigroups.
We improve some previously known deterministic algorithms for finding integer solutions $x,y$ to exponential equations of the form $af^{x}+bg^{y}=c$ over finite fields.
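For comparison, the naive deterministic baseline over a prime field $\mathbb{F}_p$ simply enumerates the possible values of each exponent (nonzero elements have order dividing $p-1$); the algorithms of the paper improve on this. The small instance below is a hypothetical example:

# Naive search for solutions x, y of a*f^x + b*g^y = c over F_p.
# Exponents range over 0..p-2 since orders of nonzero elements
# divide p - 1. Quadratic in p; for illustration only.

def solve_exponential(a, f, b, g, c, p):
    solutions = []
    for x in range(p - 1):
        for y in range(p - 1):
            if (a * pow(f, x, p) + b * pow(g, y, p)) % p == c % p:
                solutions.append((x, y))
    return solutions

# Hypothetical example over F_7: 3^x + 5^y = 6 (mod 7).
print(solve_exponential(1, 3, 1, 5, 6, 7))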
We give the first polynomial upper bound on the mixing time of the edge-flip Markov chain for unbiased dyadic tilings, resolving an open problem originally posed by Janson, Randall and Spencer in 2002 [14]. A dyadic tiling of size $n$ is a tiling of the unit square by $n$ non-overlapping dyadic rectangles, each of area $1/n$, where a dyadic rectangle is any rectangle that can be written in the form $[a2^{-s},(a+1)2^{-s}] \times [b2^{-t},(b+1)2^{-t}]$ for $a,b,s,t \in \mathbb{Z}_{\geq 0}$. The edge-flip Markov chain selects a random edge of the tiling and replaces it with its perpendicular bisector if doing so yields a valid dyadic tiling. Specifically, we show that the relaxation time of the edge-flip Markov chain for dyadic tilings is at most $O(n^{4.09})$, which implies that the mixing time is at most $O(n^{5.09})$. We complement this by showing that the relaxation time is at least $\Omega(n^{1.38})$, improving upon the previously best lower bound of $\Omega(n\log n)$ coming from the diameter of the chain.
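A sketch of the standard route from relaxation time to mixing time for reversible chains, assuming (as is known for dyadic tilings) that the number of tilings of size $n$ grows exponentially in $n$, so that the minimum stationary probability $\pi_{\min}$ under the uniform distribution satisfies $\log(1/\pi_{\min}) = O(n)$:

\[ t_{\mathrm{mix}} \;\le\; t_{\mathrm{rel}} \cdot \log\frac{1}{\pi_{\min}} \;=\; O(n^{4.09}) \cdot O(n) \;=\; O(n^{5.09}). \]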
We present an average-case analysis of a variant of dual-pivot quicksort. We show that the partitioning strategy used by the algorithm is optimal, that is, it minimizes the expected number of key comparisons. For the analysis, we calculate the expected number of comparisons exactly as well as asymptotically; in particular, we provide exact expressions for the linear, logarithmic and constant terms.
An essential step is the analysis of zeros of lattice paths in a certain probability model. Along the way a combinatorial identity is proved.
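To fix ideas, a dual-pivot partitioning step splits the input around two pivots $p \le q$ into the elements $< p$, those in $[p,q]$, and those $> q$. The sketch below is a generic, list-based textbook version (real implementations partition in place), not necessarily the exact strategy analysed in the paper:

# Generic dual-pivot partitioning: split items around pivots p <= q
# into the three groups  < p,  [p, q],  > q.

def dual_pivot_partition(items, p, q):
    assert p <= q
    small = [x for x in items if x < p]
    middle = [x for x in items if p <= x <= q]
    large = [x for x in items if x > q]
    return small, middle, large

def dual_pivot_quicksort(items):
    if len(items) < 2:
        return items
    p, q = sorted((items[0], items[-1]))   # pivot choice: first and last
    small, middle, large = dual_pivot_partition(items[1:-1], p, q)
    return (dual_pivot_quicksort(small) + [p]
            + dual_pivot_quicksort(middle) + [q]
            + dual_pivot_quicksort(large))

print(dual_pivot_quicksort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]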
The paper introduces a graph-theoretic variant of the general position problem: given a graph $G$, determine a largest set $S$ of vertices of $G$ such that no three vertices of $S$ lie on a common geodesic. Such a set is a max-gp-set of $G$ and its size is the gp-number $\text{gp}(G)$ of $G$. Upper bounds on $\text{gp}(G)$ in terms of different isometric covers are given and used to determine the gp-number of several classes of graphs. Connections between general position sets and packings are investigated and used to give lower bounds on the gp-number. It is also proved that the general position problem is NP-complete.
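For small graphs, the definition can be checked directly: three vertices $u,v,w$ lie on a common geodesic precisely when one of them lies between the other two, i.e. $d(u,v)+d(v,w)=d(u,w)$ for some ordering. The brute-force sketch below computes $\text{gp}(G)$ from the all-pairs distance matrix and is exponential in the number of vertices; it is an illustration of the definition, not the paper's method:

import itertools

# d is the all-pairs shortest-path distance matrix of G.
# u, v, w lie on a common geodesic iff one of them is "between"
# the other two.

def on_common_geodesic(d, u, v, w):
    return any(d[a][b] + d[b][c] == d[a][c]
               for a, b, c in itertools.permutations((u, v, w)))

def gp_number(d):
    vertices = range(len(d))
    for k in range(len(d), 0, -1):           # try sizes from largest down
        for s in itertools.combinations(vertices, k):
            if not any(on_common_geodesic(d, u, v, w)
                       for u, v, w in itertools.combinations(s, 3)):
                return k
    return 0

# Example: the path 0-1-2-3; every triple is collinear, so gp = 2.
d_path = [[abs(i - j) for j in range(4)] for i in range(4)]
print(gp_number(d_path))  # 2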