We study a natural model of a random $2$-dimensional cubical complex which is a subcomplex of an $n$-dimensional cube, and where every possible square $2$-face is included independently with probability $p$. Our main result exhibits a sharp threshold $p=1/2$ for homology vanishing as $n \to \infty $. This is a $2$-dimensional analogue of the Burtin and Erdős–Spencer theorems characterising the connectivity threshold for random graphs on the $1$-skeleton of the $n$-dimensional cube.
Our main result can also be seen as a cubical counterpart to the Linial–Meshulam theorem for random $2$-dimensional simplicial complexes. However, the models exhibit strikingly different behaviours. We show that if $p> 1 - \sqrt {1/2} \approx 0.2929$, then with high probability the fundamental group is a free group with one generator for every maximal $1$-dimensional face. As a corollary, homology vanishing and simple connectivity have the same threshold, even in the strong ‘hitting time’ sense. This is in contrast with the simplicial case, where the thresholds are far apart. The proof depends on an iterative algorithm for contracting cycles – we show that with high probability, the algorithm rapidly and dramatically simplifies the fundamental group, converging after only a few steps.
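To make the random model concrete, here is a minimal Python sketch (ours, not the paper's) that samples such a random subcomplex: the full $1$-skeleton of the $n$-cube is kept implicitly, and each square $2$-face is included independently with probability $p$; all names are illustrative.

```python
import itertools
import random

def random_cubical_2_faces(n, p, rng=None):
    """Sample the square 2-faces of a random subcomplex of the n-cube.

    A 2-face of the n-cube is determined by a pair of free coordinates
    i < j and a 0/1 assignment to the remaining n - 2 coordinates; each
    such face is retained independently with probability p.
    """
    rng = rng or random.Random(0)
    faces = []
    for i, j in itertools.combinations(range(n), 2):
        fixed = [k for k in range(n) if k not in (i, j)]
        for bits in itertools.product((0, 1), repeat=n - 2):
            if rng.random() < p:
                faces.append(((i, j), dict(zip(fixed, bits))))
    return faces

# The n-cube has C(n,2) * 2^(n-2) square faces; for n = 4 that is 24.
print(len(random_cubical_2_faces(4, 0.5)), "of 24 faces retained")
```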
In [20] Krajíček and Pudlák discovered connections between problems in computational complexity and the lengths of first-order proofs of finite consistency statements. Later, Pudlák [25] studied further statements connecting provability with computational complexity and conjectured that they are true. All these conjectures are at least as strong as $\mathsf {P}\neq \mathsf {NP}$ [23–25]. One of the problems concerning these conjectures is to find out how tightly they are connected with statements about computational complexity classes. Results of this kind were proved in [20, 22]. In this paper, we generalize and strengthen these results. Another question that we address concerns the dependence between these conjectures. We construct two oracles that enable us to answer questions about relativized separations asked in [19, 25] (i.e., for the pairs of conjectures mentioned in the questions, we construct oracles such that one conjecture from the pair is true in the relativized world and the other is false, and vice versa). We also show several new connections between the studied conjectures. In particular, we show that the relation between the finite reflection principle and proof systems for existentially quantified Boolean formulas is similar to the one between finite consistency statements and proof systems for non-quantified propositional tautologies.
In this survey we discuss work of Levin and V’yugin on collections of sequences that are non-negligible in the sense that they can be computed by a probabilistic algorithm with positive probability. More precisely, Levin and V’yugin introduced an ordering on collections of sequences that are closed under Turing equivalence. Roughly speaking, given two such collections $\mathcal {A}$ and $\mathcal {B}$, $\mathcal {A}$ is below $\mathcal {B}$ in this ordering if $\mathcal {A}\setminus \mathcal {B}$ is negligible. The degree structure associated with this ordering, the Levin–V’yugin degrees (or $\mathrm {LV}$-degrees), can be shown to be a Boolean algebra, and in fact a measure algebra. We demonstrate the interactions of this work with recent results in computability theory and algorithmic randomness: First, we recall the definition of the Levin–V’yugin algebra and identify connections between its properties and classical properties from computability theory. In particular, we apply results on the interactions between notions of randomness and Turing reducibility to establish new facts about specific LV-degrees, such as the LV-degree of the collection of 1-generic sequences, that of the collection of sequences of hyperimmune degree, and those collections corresponding to various notions of effective randomness. Next, we provide a detailed explanation of a complex technique developed by V’yugin that allows the construction of semi-measures into which computability-theoretic properties can be encoded. We provide two examples of the use of this technique by explicating a result of V’yugin’s about the LV-degree of the collection of Martin-Löf random sequences and extending the result to the LV-degree of the collection of sequences of DNC degree.
We study from the proof complexity perspective the (informal) proof search problem (cf. [17, Sections 1.5 and 21.5]):
• Is there an optimal way to search for propositional proofs?
We note that, as a consequence of Levin’s universal search, for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists without restricting proof systems iff a p-optimal proof system exists.
To characterize precisely the time proof search algorithms need for individual formulas, we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system P we attach an information-efficiency function $i_P(\tau )$ assigning a natural number to each tautology $\tau $, and we show that:
• $i_P(\tau )$ characterizes the time any P-proof search algorithm has to use on $\tau $,
• for a fixed P there is such an information-optimal algorithm (informally: it finds proofs of minimal information content),
• a proof system is information-efficiency optimal (its information-efficiency function is minimal up to a multiplicative constant) iff it is p-optimal,
• for non-automatizable systems P there are formulas $\tau $ with short proofs but having large information measure $i_P(\tau )$.
We isolate and motivate the problem of establishing unconditional super-logarithmic lower bounds for $i_P(\tau )$ in cases where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search.
Shallit and Wang showed that the automatic complexity $A(x)$ satisfies $A(x)\ge n/13$ for almost all $x\in {\{\mathtt {0},\mathtt {1}\}}^n$. They also stated that Holger Petersen had informed them that the constant $13$ can be reduced to $7$. Here we show that it can be reduced to $2+\epsilon $ for any $\epsilon>0$. The result also applies to nondeterministic automatic complexity $A_N(x)$. In that setting the result is tight inasmuch as $A_N(x)\le n/2+1$ for all $x$.
In this paper we analyse the limiting conditional distribution (Yaglom limit) for stochastic fluid models (SFMs), a key class of models in the theory of matrix-analytic methods. So far, only transient and stationary analyses of SFMs have been considered in the literature. The limiting conditional distribution gives useful insights into what happens when the process has been evolving for a long time, given that its busy period has not ended yet. We derive expressions for the Yaglom limit in terms of the singularity $s^*$ such that the key matrix of the SFM, ${\boldsymbol{\Psi}}(s)$, is finite (exists) for all $s\geq s^*$ and infinite for $s<s^*$. We show the uniqueness of the Yaglom limit and illustrate the application of the theory with simple examples.
We obtain a polynomial upper bound on the mixing time $T_{CHR}(\epsilon)$ of the coordinate Hit-and-Run (CHR) random walk on an $n$-dimensional convex body, where $T_{CHR}(\epsilon)$ is the number of steps needed to reach within $\epsilon$ of the uniform distribution with respect to the total variation distance, starting from a warm start (i.e., a distribution which has a density with respect to the uniform distribution on the convex body that is bounded above by a constant). Our upper bound is polynomial in $n$, $R$ and $\frac{1}{\epsilon}$, where we assume that the convex body contains the $\Vert\cdot\Vert_\infty$-unit ball $B_\infty$ and is contained in its $R$-dilation $R\cdot B_\infty$. Whether CHR has a polynomial mixing time has been an open question.
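For readers unfamiliar with the walk, the following is a minimal sketch of one CHR step, assuming only a membership oracle for the body and a known coordinate-wise bound (here $R$); the bisection tolerance and all names are our illustrative choices, not the paper's.

```python
import random

def chr_step(x, member, R, rng, tol=1e-9):
    """One coordinate Hit-and-Run step: pick a coordinate uniformly at random
    and resample that coordinate uniformly on the chord of the body through x.

    x      : current point (list of floats), assumed to lie in the body
    member : membership oracle, member(y) is True iff y lies in the body
    R      : the body is assumed to be contained in [-R, R]^n
    """
    n = len(x)
    i = rng.randrange(n)

    def reach(direction):
        # Bisection for the distance to the boundary along +/- e_i.
        lo, hi = 0.0, 2.0 * R
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            y = list(x)
            y[i] += direction * mid
            lo, hi = (mid, hi) if member(y) else (lo, mid)
        return lo

    y = list(x)
    y[i] += rng.uniform(-reach(-1.0), reach(+1.0))
    return y

# Example: the cube [-1, 1]^3, i.e., R = 1 in the abstract's normalisation.
rng = random.Random(1)
inside = lambda y: all(abs(t) <= 1.0 for t in y)
x = [0.0, 0.0, 0.0]
for _ in range(1000):
    x = chr_step(x, inside, R=1.0, rng=rng)
```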
Fix an abelian group $\Gamma $ and an injective endomorphism $F\colon \Gamma \to \Gamma $. Improving on the results of [2], new characterizations are here obtained for the existence of spanning sets, F-automaticity, and F-sparsity. The model theoretic status of these sets is also investigated, culminating with a combinatorial description of the F-sparse sets that are stable in $(\Gamma ,+)$, and a proof that the expansion of $(\Gamma ,+)$ by any F-sparse set is NIP. These methods are also used to show for prime $p\ge 7$ that the expansion of $(\mathbb {F}_p[t],+)$ by multiplication restricted to $t^{\mathbb {N}}$ is NIP.
We apply the power-of-two-choices paradigm to a random walk on a graph: rather than moving to a uniform random neighbour at each step, a controller is allowed to choose from two independent uniform random neighbours. We prove that this allows the controller to significantly accelerate the hitting and cover times in several natural graph classes. In particular, we show that the cover time becomes linear in the number $n$ of vertices on discrete tori and bounded degree trees, of order $\mathcal{O}(n\log \log n)$ on bounded degree expanders, and of order $\mathcal{O}(n(\log \log n)^2)$ on the Erdős–Rényi random graph in a certain sparsely connected regime. We also consider the algorithmic question of computing an optimal strategy and prove a dichotomy in efficiency between computing strategies for hitting and cover times.
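As an illustration of the paradigm (not of the paper's optimal strategies), the sketch below simulates a two-choice walk on a cycle whose controller greedily takes whichever of the two offered neighbours is closer to a target vertex; the greedy rule and all names are ours.

```python
import random

def two_choice_hitting_time(adj, start, target, dist, rng):
    """Steps until a power-of-two-choices walk first hits `target`.

    At every step two independent uniform random neighbours are offered and
    the controller keeps the one with smaller distance to the target.
    """
    x, steps = start, 0
    while x != target:
        u, v = rng.choice(adj[x]), rng.choice(adj[x])
        x = u if dist[u] <= dist[v] else v
        steps += 1
    return steps

# Example: a cycle (1-dimensional discrete torus) on n vertices, target 0.
n = 200
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
dist = {i: min(i, n - i) for i in range(n)}
rng = random.Random(0)
print(two_choice_hitting_time(adj, n // 2, 0, dist, rng))
```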
We prove that most permutations of degree $n$ have some power which is a cycle of prime length approximately $\log n$. Explicitly, we show that for $n$ sufficiently large, the proportion of such elements is at least $1-5/\log \log n$ with the prime between $\log n$ and $(\log n)^{\log \log n}$. The proportion of even permutations with this property is at least $1-7/\log \log n$.
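The statement can be explored empirically via an elementary observation (ours, not quoted from the paper): a permutation has a power that is a single cycle of prime length $p$ precisely when it contains exactly one cycle of length $p$ and no other cycle of length divisible by $p$; raising it to the least common multiple of the remaining cycle lengths then yields the $p$-cycle. The Python sketch below applies this criterion to random permutations; all names are illustrative.

```python
import math
import random
from collections import Counter

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a list mapping i to perm[i]."""
    seen, lengths = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, perm[j], length + 1
            lengths.append(length)
    return lengths

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def has_prime_cycle_power(perm, lo, hi):
    """True iff some power of perm is a single cycle of prime length in [lo, hi]."""
    counts = Counter(cycle_lengths(perm))
    return any(
        lo <= p <= hi and is_prime(p) and counts[p] == 1
        and all(l % p for l in counts if l != p)
        for p in counts
    )

n, trials, rng = 10_000, 100, random.Random(0)
lo, hi = math.log(n), math.log(n) ** math.log(math.log(n))
hits = 0
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)
    hits += has_prime_cycle_power(perm, lo, hi)
print(f"{hits}/{trials} random permutations of degree {n} have such a power")
```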
We extend work of Berdinsky and Khoussainov [‘Cayley automatic representations of wreath products’, International Journal of Foundations of Computer Science 27(2) (2016), 147–159] to show that being Cayley automatic is closed under taking the restricted wreath product with a virtually infinite cyclic group. This adds to the list of known examples of Cayley automatic groups.
We present a polynomial-time Markov chain Monte Carlo algorithm for estimating the partition function of the antiferromagnetic Ising model on any line graph. The analysis of the algorithm exploits the ‘winding’ technology devised by McQuillan [CoRR abs/1301.2880 (2013)] and developed by Huang, Lu and Zhang [Proc. 27th Symp. on Disc. Algorithms (SODA16), 514–527]. We show that exact computation of the partition function is #P-hard, even for line graphs, indicating that an approximation algorithm is the best that can be expected. We also show that Glauber dynamics for the Ising model is rapidly mixing on line graphs, an example being the kagome lattice.
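For context, the Glauber (heat-bath) dynamics mentioned in the abstract resamples one spin at a time from its conditional Gibbs distribution; the sketch below is a generic illustration with our own names and parameters (with $\beta < 0$ playing the antiferromagnetic role), not the specific chain or line graph analysed in the paper.

```python
import math
import random

def glauber_step(spins, adj, beta, rng):
    """One heat-bath update for the Ising model with edge weight exp(beta * s_u * s_v).

    spins : dict vertex -> +1 or -1
    adj   : dict vertex -> list of neighbours
    beta  : inverse temperature; beta < 0 is the antiferromagnetic regime
    """
    v = rng.choice(list(spins))
    field = sum(spins[u] for u in adj[v])                 # local field at v
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))  # P(spin of v = +1 | rest)
    spins[v] = 1 if rng.random() < p_plus else -1

# Example: a triangle, which is itself a line graph (it is L(K_3)).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
spins = {v: 1 for v in adj}
rng = random.Random(0)
for _ in range(10_000):
    glauber_step(spins, adj, beta=-0.5, rng=rng)
print(spins)
```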
As a new type of epistemic logic, the logics of knowing how capture the high-level epistemic reasoning about the knowledge of various plans to achieve certain goals. Existing work on these logics focuses on axiomatizations; this paper makes the first study of their model-theoretic properties. It does so by introducing suitable notions of bisimulation for a family of five knowing how logics based on different notions of plans. As an application, we study and compare the expressive power of these logics.
A theorem of Brudno says that the Kolmogorov–Sinai entropy of an ergodic subshift over $\mathbb {N}$ equals the asymptotic Kolmogorov complexity of almost every word in the subshift. The purpose of this paper is to extend this result to subshifts over computable groups that admit computable regular symmetric Følner monotilings, which we introduce in this work. For every $d \in \mathbb {N}$, the groups $\mathbb {Z}^d$ and $\mathsf{UT}_{d+1}(\mathbb {Z})$ admit computable regular symmetric Følner monotilings for which the required computing algorithms are provided.
Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing, etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target state, for example the set of optimal solutions, without making additional statements on the distribution of this time. We address this lack by providing a general drift theorem that includes bounds on the upper and lower tail of the hitting time distribution. The new tail bounds are applied to prove very precise sharp-concentration results on the running time of a simple EA on standard benchmark problems, including the class of general linear functions. On all these problems, the probability of deviating by an r-factor in lower-order terms of the expected time decreases exponentially with r. The usefulness of the theorem outside the theory of RSHs is demonstrated by deriving tail bounds on the number of cycles in random permutations. All these results handle a position-dependent (variable) drift that was not covered by previous drift theorems with tail bounds. Finally, user-friendly specializations of the general drift theorem are given.
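As a concrete instance of the kind of randomised search heuristic covered by such drift theorems, the sketch below runs the standard (1+1) EA on the linear benchmark OneMax and reports the spread of its hitting times over a few runs; the parameters and names are our illustrative choices rather than the paper's exact setting.

```python
import random

def one_plus_one_ea(n, fitness, rng):
    """(1+1) EA with standard bit mutation (rate 1/n); returns the hitting time
    of the all-ones optimum of a monotone linear pseudo-Boolean function."""
    x = [rng.randrange(2) for _ in range(n)]
    optimum = fitness([1] * n)
    steps = 0
    while fitness(x) < optimum:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # flip each bit w.p. 1/n
        if fitness(y) >= fitness(x):
            x = y
        steps += 1
    return steps

onemax = sum                                   # OneMax counts the one-bits
rng = random.Random(0)
times = sorted(one_plus_one_ea(100, onemax, rng) for _ in range(20))
# The expected time is (1 + o(1)) * e * n * ln n; the runs concentrate around it.
print(times[0], times[len(times) // 2], times[-1])
```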
The aim of this paper is to shed light on our understanding of large scale properties of infinite strings. We say that one string $\alpha $ has weaker large scale geometry than that of $\beta $ if there is a color-preserving bi-Lipschitz map from $\alpha $ into $\beta $ with small distortion. This definition allows us to define a partially ordered set of large scale geometries on the classes of all infinite strings. This partial order compares large scale geometries of infinite strings. As such, it presents an algebraic tool for the classification of global patterns. We study properties of this partial order. We prove, for instance, that this partial order has a greatest element and also possesses infinite chains and antichains. We also investigate the sets of large scale geometries of strings accepted by finite state machines such as Büchi automata. We provide an algorithm that describes large scale geometries of strings accepted by Büchi automata. This connects the work with complexity theory. We also prove that the quasi-isometry problem is a $\Sigma _2^0$-complete set, thus providing a bridge with computability theory. Finally, we build algebraic structures that are invariants of large scale geometries. We invoke asymptotic cones, a key concept in geometric group theory, defined via the model-theoretic notion of ultraproduct. In part, we study asymptotic cones of algorithmically random strings, thus connecting the topic with algorithmic randomness.
We set up a general context in which one can prove Sauer–Shelah type lemmas. We apply our general results to answer a question of Bhaskar [1] and give a slight improvement to a result of Malliaris and Terry [7]. We also prove a new Sauer–Shelah type lemma in the context of $ \operatorname {\textrm{op}}$-rank, a notion of Guingona and Hill [4].
A directed space is a topological space $X$ together with a subspace $\vec {P}(X)\subset X^I$ of directed paths on $X$. A symmetry of a directed space should therefore respect both the topology of the underlying space and the topology of the associated spaces $\vec {P}(X)_-^+$ of directed paths between a source ($-$) and a target ($+$)—up to homotopy. If it is, moreover, homotopic to the identity map—in a directed sense—such a symmetry will be called an inessential d-map, and the paper explores the algebra and topology of inessential d-maps. Comparing two d-spaces $X$ and $Y$ ‘up to symmetry’ yields the notion of a directed homotopy equivalence between them. Under appropriate conditions, all directed homotopy equivalences are shown to satisfy a 2-out-of-3 property. Our notion of directed homotopy equivalence does not agree completely with the one defined in Goubault (2017, arXiv:1709.05702v2) and Goubault, Farber and Sagnier (2020, J. Appl. Comput. Topol. 4, 11–27); the deviation is motivated by examples. Nevertheless, directed topological complexity, introduced in Goubault, Farber and Sagnier (2020) is shown to be invariant under our notion of directed homotopy equivalence. Finally, we show that directed homotopy equivalences result in isomorphisms on the pair component categories of directed spaces introduced in Goubault, Farber and Sagnier (2020).
In the past four decades, the notion of quantum polynomial-time computability has been mathematically modeled by quantum Turing machines as well as quantum circuits. This paper seeks a third model, which is a quantum analogue of the schematic (inductive or constructive) definition of (primitive) recursive functions. For quantum functions mapping finite-dimensional Hilbert spaces to themselves, we present such a schematic definition, composed of a small set of initial quantum functions and a few construction rules that dictate how to build a new quantum function from the existing ones. We prove that our schematic definition precisely characterizes all functions that can be computed with high success probabilities on well-formed quantum Turing machines in polynomial time or, equivalently, by uniform families of polynomial-size quantum circuits. Our new, schematic definition is quite simple and intuitive and, more importantly, it avoids the cumbersome introduction of the well-formedness condition imposed on a quantum Turing machine model as well as of the uniformity condition necessary for a quantum circuit model. Our new approach can further open a door to the descriptional complexity of quantum functions, to the theory of higher-type quantum functionals, to the development of new first-order theories for quantum computing, and to the design of programming languages for real-life quantum computers.
State spaces are, in the most general sense, sets of entities that contain information. Examples include states of dynamical systems, processes of observations, or possible worlds. We use domain theory to describe the structure of positive and negative information in state spaces. We present examples ranging from the space of trajectories of a dynamical system, over Dunn’s aboutness interpretation of fde, to the space of open sets of a spectral space. We show that these information structures induce so-called hype models which were recently developed by Leitgeb (2019). Conversely, we prove a representation theorem: roughly, hype models can be represented as induced by an information structure. Thus, the well-behaved logic hype is a sound and complete logic for reasoning about information in state spaces.
As an application of this framework, we investigate information fusion. We motivate two kinds of fusion. We define a groundedness and a separation property that allow a hype model to be closed under the two kinds of fusion. This involves a Dedekind–MacNeille completion and a fiber-space-like construction. The proof techniques come from pointless topology and universal algebra.