In this paper we study a variation of the random $k$-SAT problem, called polarised random $k$-SAT, which contains both the classical random $k$-SAT model and the random version of monotone $k$-SAT, another well-known NP-complete version of SAT. In this model there is a polarisation parameter $p$: in half of the clauses each variable occurs negated with probability $p$ and pure otherwise, while in the other half the probabilities are interchanged. For $p=1/2$ we get the classical random $k$-SAT model, and at the other extreme we have the fully polarised model where $p=0$ or $p=1$. Here there are only two types of clauses: clauses where all $k$ variables occur pure, and clauses where all $k$ variables occur negated. That is, for $p=0$ or $p=1$, we get an instance of random monotone $k$-SAT.
We show that the threshold of satisfiability does not decrease as $p$ moves away from $\frac{1}{2}$ and thus that the satisfiability threshold for polarised random $k$-SAT with $p\neq \frac{1}{2}$ is an upper bound on the threshold for random $k$-SAT. Hence the satisfiability threshold for random monotone $k$-SAT is at least as large as for random $k$-SAT, and we conjecture that asymptotically, for a fixed $k$, the two thresholds coincide.
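The clause distribution described above is straightforward to sample. The following is a minimal sketch of the model (our own illustration, not code from the paper); the function name and parameters are ours:

```python
import random

def polarised_ksat(n, m, k, p, seed=0):
    """Sample a polarised random k-SAT instance (a sketch of the model).

    Variables are 1..n; a literal is +v (pure) or -v (negated).
    In the first m//2 clauses each variable is negated with probability p,
    in the remaining clauses with probability 1 - p (the probabilities
    interchanged), as in the polarised model.
    """
    rng = random.Random(seed)
    clauses = []
    for i in range(m):
        q = p if i < m // 2 else 1 - p
        chosen = rng.sample(range(1, n + 1), k)   # k distinct variables
        clauses.append([-v if rng.random() < q else v for v in chosen])
    return clauses

# p = 1/2 recovers classical random k-SAT; p = 0 (or p = 1) is fully
# polarised: every clause is all-pure or all-negated (monotone k-SAT).
inst = polarised_ksat(n=20, m=10, k=3, p=0.0)
```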
Let $M_{\langle \mathbf {u},\mathbf {v},\mathbf {w}\rangle }\in \mathbb C^{\mathbf {u}\mathbf {v}}{\mathord { \otimes } } \mathbb C^{\mathbf {v}\mathbf {w}}{\mathord { \otimes } } \mathbb C^{\mathbf {w}\mathbf {u}}$ denote the matrix multiplication tensor (and write $M_{\langle \mathbf {n} \rangle }=M_{\langle \mathbf {n},\mathbf {n},\mathbf {n}\rangle }$), and let $\operatorname {det}_3\in (\mathbb C^9)^{{\mathord { \otimes } } 3}$ denote the determinant polynomial considered as a tensor. For a tensor T, let $\underline {\mathbf {R}}(T)$ denote its border rank. We (i) give the first hand-checkable algebraic proof that $\underline {\mathbf {R}}(M_{\langle 2\rangle })=7$; (ii) prove $\underline {\mathbf {R}}(M_{\langle 223\rangle })=10$ and $\underline {\mathbf {R}}(M_{\langle 233\rangle })=14$, where previously $M_{\langle 2\rangle }$ was the only nontrivial matrix multiplication tensor whose border rank had been determined; (iii) prove $\underline {\mathbf {R}}( M_{\langle 3\rangle })\geq 17$; (iv) prove $\underline {\mathbf {R}}(\operatorname {det}_3)=17$, improving the previous lower bound of $12$; (v) prove $\underline {\mathbf {R}}(M_{\langle 2\mathbf {n}\mathbf {n}\rangle })\geq \mathbf {n}^2+1.32\mathbf {n}$ for all $\mathbf {n}\geq 25$, where previously only $\underline {\mathbf {R}}(M_{\langle 2\mathbf {n}\mathbf {n}\rangle })\geq \mathbf {n}^2+1$ was known, as well as lower bounds for $4\leq \mathbf {n}\leq 25$; and (vi) prove $\underline {\mathbf {R}}(M_{\langle 3\mathbf {n}\mathbf {n}\rangle })\geq \mathbf {n}^2+1.6\mathbf {n}$ for all $\mathbf {n} \ge 18$, where previously only $\underline {\mathbf {R}}(M_{\langle 3\mathbf {n}\mathbf {n}\rangle })\geq \mathbf {n}^2+2$ was known. The last two results are significant for two reasons: they are essentially the first nontrivial lower bounds for tensors in an “unbalanced” ambient space, and they demonstrate that the methods we use (border apolarity) may be applied to sequences of tensors.
The methods used to obtain the results are new and “nonnatural” in the sense of Razborov and Rudich, in that the results are obtained via an algorithm that cannot be effectively applied to generic tensors. We utilize a new technique called border apolarity, developed by Buczyńska and Buczyński in the general context of toric varieties. We apply this technique to develop an algorithm that, given a tensor T and an integer r, in a finite number of steps, either outputs that there is no border rank r decomposition for T or produces a list of all normalized ideals which could potentially result from a border rank decomposition. The algorithm is effectively implementable when T has a large symmetry group, in which case it outputs potential decompositions in a natural normal form. The algorithm is based on algebraic geometry and representation theory.
We show that the first-order logical theory of the binary overlap-free words (and, more generally, the $\alpha $-free words for rational $\alpha $, $2 < \alpha \leq 7/3$), is decidable. As a consequence, many results previously obtained about this class through tedious case-based proofs can now be proved “automatically,” using a decision procedure, and new claims can be proved or disproved simply by restating them as logical formulas.
For linear nonuniform cellular automata (NUCA) which are local perturbations of linear CA over a group universe G and a finite-dimensional vector space alphabet V over an arbitrary field k, we investigate their Dedekind finiteness property, also known as the direct finiteness property, i.e., left or right invertibility implies invertibility. We say that the group G is $L^1$-surjunctive, resp. finitely $L^1$-surjunctive, if all such linear NUCA are automatically surjective whenever they are stably injective, resp. when in addition k is finite. In parallel, we introduce the ring $D^1(k[G])$ which is the Cartesian product $k[G] \times (k[G])[G]$ as an additive group but the multiplication is twisted in the second component. The ring $D^1(k[G])$ contains naturally the group ring $k[G]$ and we obtain a dynamical characterization of its stable finiteness for every field k in terms of the finite $L^1$-surjunctivity of the group G, which holds, for example, when G is residually finite or initially subamenable. Our results extend known results in the case of CA.
Strong Turing Determinacy, or ${\mathrm {sTD}}$, is the statement that for every set A of reals, if $\forall x\exists y\geq _T x (y\in A)$, then there is a pointed set $P\subseteq A$. We prove the following consequences of Turing Determinacy (${\mathrm {TD}}$) and ${\mathrm {sTD}}$ over ${\mathrm {ZF}}$—the Zermelo–Fraenkel axiomatic set theory without the Axiom of Choice:
(1) ${\mathrm {ZF}}+{\mathrm {TD}}$ implies $\mathrm {wDC}_{\mathbb {R}}$, a weaker version of $\mathrm {DC}_{\mathbb {R}}$.
(2) ${\mathrm {ZF}}+{\mathrm {sTD}}$ implies that every set of reals is measurable and has the Baire property.
(3) ${\mathrm {ZF}}+{\mathrm {sTD}}$ implies that every uncountable set of reals has a perfect subset.
(4) ${\mathrm {ZF}}+{\mathrm {sTD}}$ implies that for every set of reals A and every $\epsilon>0$:
(a) there is a closed set $F\subseteq A$ such that $\mathrm {Dim_H}(F)\geq \mathrm {Dim_H}(A)-\epsilon $, where $\mathrm {Dim_H}$ is the Hausdorff dimension;
(b) there is a closed set $F\subseteq A$ such that $\mathrm {Dim_P}(F)\geq \mathrm {Dim_P}(A)-\epsilon $, where $\mathrm {Dim_P}$ is the packing dimension.
This work studies the average complexity of solving structured polynomial systems that are characterised by a low evaluation cost, as opposed to the dense random model previously used. Firstly, we design a continuation algorithm that computes, with high probability, an approximate zero of a polynomial system given only as a black-box evaluation program. Secondly, we introduce a universal model of random polynomial systems with prescribed evaluation complexity L. Combining both, we show that we can compute an approximate zero of a random structured polynomial system with n equations of degree at most ${D}$ in n variables with only $\operatorname {poly}(n, {D}) L$ operations with high probability. This exceeds the expectations implicit in Smale’s 17th problem.
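In this line of work, an "approximate zero" is a point from which Newton's method converges quadratically to a true zero; continuation algorithms produce such points by repeatedly applying the Newton operator. As a toy illustration of that operator only (not the paper's black-box continuation algorithm; the example system is ours), here is plain Newton iteration on a system of two quadratics:

```python
def newton(f, jac, x, steps=50):
    """Plain Newton iteration for a 2-equation system,
    solving the 2x2 Jacobian linear system by hand."""
    for _ in range(steps):
        (a, b), (c, d) = jac(x)
        f1, f2 = f(x)
        det = a * d - b * c                 # assumes the Jacobian is invertible
        x = (x[0] - (d * f1 - b * f2) / det,
             x[1] - (-c * f1 + a * f2) / det)
    return x

# Example system (ours): x^2 + y^2 - 1 = 0 and x - y = 0,
# whose positive zero is (1/sqrt(2), 1/sqrt(2)).
f = lambda v: (v[0] ** 2 + v[1] ** 2 - 1, v[0] - v[1])
jac = lambda v: ((2 * v[0], 2 * v[1]), (1, -1))
root = newton(f, jac, (1.0, 0.5))
```

Starting points close enough to a zero, like the one above, exhibit the quadratic convergence that the approximate-zero definition demands.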
We study (asymmetric) $U$-statistics based on a stationary sequence of $m$-dependent variables; moreover, we consider constrained $U$-statistics, where the defining multiple sum only includes terms satisfying some restrictions on the gaps between indices. Results include a law of large numbers and a central limit theorem, together with results on rate of convergence, moment convergence, functional convergence, and a renewal theory version.
Special attention is paid to degenerate cases where, after the standard normalization, the asymptotic variance vanishes; in these cases non-normal limits occur after a different normalization.
The results are motivated by applications to pattern matching in random strings and permutations. We obtain both new results and new proofs of old results.
This article studies the properties of word-hyperbolic semigroups and monoids, that is, those having context-free multiplication tables with respect to a regular combing, as defined by Duncan and Gilman [‘Word hyperbolic semigroups’, Math. Proc. Cambridge Philos. Soc.136(3) (2004), 513–524]. In particular, the preservation of word-hyperbolicity under taking free products is considered. Under mild conditions on the semigroups involved, satisfied, for example, by monoids or regular semigroups, we prove that the semigroup free product of two word-hyperbolic semigroups is again word-hyperbolic. Analogously, with a mild condition on the uniqueness of representation for the identity element, satisfied, for example, by groups, we prove that the monoid free product of two word-hyperbolic monoids is word-hyperbolic. The methods are language-theoretically general, and apply equally well to semigroups, monoids or groups with a $\mathbf {C}$-multiplication table, where $\mathbf {C}$ is any reversal-closed super-$\operatorname {\mathrm {AFL}}$. In particular, we deduce that the free product of two groups with $\mathbf {ET0L}$ with respect to indexed multiplication tables again has an $\mathbf {ET0L}$ with respect to an indexed multiplication table.
We present an efficient algorithm to generate a discrete uniform distribution on a set of p elements, for p prime, using a biased random source. The algorithm generalizes Von Neumann’s method and improves the computational efficiency of Dijkstra’s method. In addition, the algorithm is extended to generate a discrete uniform distribution on any finite set based on the prime factorization of integers. The average running time of the proposed algorithm is overall sublinear: $\operatorname{O}\!(n/\log n)$.
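Von Neumann's method, which the abstract's algorithm generalizes, already shows the flavour for a two-element target: biased bits are drawn in pairs, a pair 01 yields 0, a pair 10 yields 1, and equal pairs are discarded, so the kept bits are exactly uniform regardless of the bias. A minimal sketch (our own illustration, not the paper's algorithm):

```python
import random

def von_neumann_fair_bits(biased, n):
    """Extract n unbiased bits from a biased 0/1 source.

    Pairs (a, b) with a != b are kept (output a); since P(01) = P(10),
    the output bit is 0 or 1 with probability 1/2 each, whatever the bias.
    """
    out = []
    while len(out) < n:
        a, b = biased(), biased()
        if a != b:
            out.append(a)
    return out

rng = random.Random(1)
biased = lambda: 1 if rng.random() < 0.8 else 0   # a source with bias 0.8
bits = von_neumann_fair_bits(biased, 1000)
```

The cost of the discards is what more refined methods, such as Dijkstra's and the algorithm above, improve on.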
There exist two notions of equivalence of behavior between states of a Labelled Markov Process (LMP): state bisimilarity and event bisimilarity. The first one can be considered as an appropriate generalization to continuous spaces of Larsen and Skou’s probabilistic bisimilarity, whereas the second one is characterized by a natural logic. C. Zhou expressed state bisimilarity as the greatest fixed point of an operator $\mathcal {O}$, and thus introduced an ordinal measure of the discrepancy between it and event bisimilarity. We call this ordinal the Zhou ordinal of $\mathbb {S}$, $\mathfrak {Z}(\mathbb {S})$. When $\mathfrak {Z}(\mathbb {S})=0$, $\mathbb {S}$ satisfies the Hennessy–Milner property. The second author proved the existence of an LMP $\mathbb {S}$ with $\mathfrak {Z}(\mathbb {S}) \geq 1$ and Zhou showed that there are LMPs having an infinite Zhou ordinal. In this paper we show that there are LMPs $\mathbb {S}$ over separable metrizable spaces having arbitrarily large countable $\mathfrak {Z}(\mathbb {S})$ and that it is consistent with the axioms of $\mathit {ZFC}$ that there is such a process with an uncountable Zhou ordinal.
In numerical linear algebra, a well-established practice is to choose a norm that exploits the structure of the problem at hand to optimise accuracy or computational complexity. In numerical polynomial algebra, a single norm (attributed to Weyl) dominates the literature. This article initiates the use of $L_p$ norms for numerical algebraic geometry, with an emphasis on $L_{\infty }$. This classical idea yields strong improvements in the analysis of the number of steps performed by numerous iterative algorithms. In particular, we exhibit three algorithms where, despite the complexity of computing the $L_{\infty }$-norm, the use of $L_p$-norms substantially reduces computational complexity: a subdivision-based algorithm in real algebraic geometry for computing the homology of semialgebraic sets, a well-known meshing algorithm in computational geometry, and the computation of zeros of systems of complex quadratic polynomials (a particular case of Smale’s 17th problem).
We show that the image of a subshift X under various injective morphisms of symbolic algebraic varieties over monoid universes with algebraic variety alphabets is a subshift of finite type, respectively a sofic subshift, if and only if so is X. Similarly, let G be a countable monoid and let A, B be Artinian modules over a ring. We prove that for every closed subshift submodule $\Sigma \subset A^G$ and every injective G-equivariant uniformly continuous module homomorphism $\tau \colon \! \Sigma \to B^G$, a subshift $\Delta \subset \Sigma $ is of finite type, respectively sofic, if and only if so is the image $\tau (\Delta )$. Generalizations for admissible group cellular automata over admissible Artinian group structure alphabets are also obtained.
Many tasks in statistical and causal inference can be construed as problems of entailment in a suitable formal language. We ask whether those problems are more difficult, from a computational perspective, for causal probabilistic languages than for pure probabilistic (or “associational”) languages. Despite several senses in which causal reasoning is indeed more complex—both expressively and inferentially—we show that causal entailment (or satisfiability) problems can be systematically and robustly reduced to purely probabilistic problems. Thus there is no jump in computational complexity. Along the way we answer several open problems concerning the complexity of well-known probability logics, in particular demonstrating the ${\exists \mathbb {R}}$-completeness of a polynomial probability calculus, as well as a seemingly much simpler system, the logic of comparative conditional probability.
The ring $\mathbb Z_{d}$ of d-adic integers has a natural interpretation as the boundary of a rooted d-ary tree $T_{d}$. Endomorphisms of this tree (that is, solenoidal maps) are in one-to-one correspondence with 1-Lipschitz mappings from $\mathbb Z_{d}$ to itself. In the case when $d=p$ is prime, Anashin [‘Automata finiteness criterion in terms of van der Put series of automata functions’, p-Adic Numbers Ultrametric Anal. Appl. 4(2) (2012), 151–160] showed that $f\in \mathrm {Lip}^{1}(\mathbb Z_{p})$ is defined by a finite Mealy automaton if and only if the reduced coefficients of its van der Put series constitute a p-automatic sequence over a finite subset of $\mathbb Z_{p}\cap \mathbb Q$. We generalize this result to arbitrary integers $d\geq 2$ and describe the explicit connection between the Moore automaton producing such a sequence and the Mealy automaton inducing the corresponding endomorphism of a rooted tree. We also produce two algorithms converting one automaton to the other and vice versa. As a demonstration, we apply our algorithms to the Thue–Morse sequence and to one of the generators of the lamplighter group acting on the binary rooted tree.
The Thue–Morse sequence is a prototypical automatic sequence found in diverse areas of mathematics and in computer science. We study occurrences of factors w within this sequence, or more precisely, the sequence of gaps between consecutive occurrences. This gap sequence is morphic; we prove that it is not automatic as soon as the length of w is at least $2$, thereby answering a question by J. Shallit in the affirmative. We give an explicit method to compute the discrepancy of the number of occurrences of the block $\mathtt {01}$ in the Thue–Morse sequence. We prove that the sequence of discrepancies is the sequence of output sums of a certain base-$2$ transducer.
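The objects involved are easy to compute for small prefixes. A short sketch (ours, purely illustrative) that builds a Thue–Morse prefix and the gap sequence for a given factor w:

```python
def thue_morse(n):
    """First n terms of the Thue-Morse sequence: t(i) is the
    parity of the number of 1s in the binary expansion of i."""
    return [bin(i).count("1") % 2 for i in range(n)]

def gap_sequence(seq, w):
    """Gaps between consecutive occurrences of the factor w
    (given as a string of digits) in the 0/1 sequence seq."""
    s = "".join(map(str, seq))
    pos = [i for i in range(len(s) - len(w) + 1) if s.startswith(w, i)]
    return [b - a for a, b in zip(pos, pos[1:])]

# Gap sequence for the block 01 in a prefix of Thue-Morse:
gaps = gap_sequence(thue_morse(64), "01")
```

The abstract's result says that, although such gap sequences are morphic, they are never automatic once the factor has length at least 2.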
An oracle A is low-for-speed if it is unable to speed up the computation of a set which is already computable: if a decidable language can be decided in time $t(n)$ using A as an oracle, then it can be decided without an oracle in time $p(t(n))$ for some polynomial p. The existence of a set which is low-for-speed was first shown by Bayer and Slaman, who constructed a non-computable computably enumerable set which is low-for-speed. In this paper we answer a question previously raised by Bienvenu and Downey, who asked whether there is a minimal degree which is low-for-speed. The standard method of constructing a set of minimal degree via forcing is incompatible with making the set low-for-speed; but we are able to use an interesting new combination of forcing and full approximation to construct a set which is both of minimal degree and low-for-speed.
Combinatorial samplers are algorithmic schemes devised for the approximate- and exact-size generation of large random combinatorial structures, such as context-free words, various tree-like data structures, maps, tilings, and RNA molecules. They can be adapted to combinatorial specifications with additional parameters, allowing for a more flexible control over the output profile of parametrised combinatorial patterns. One can control, for instance, the number of leaves, the profile of node degrees in trees, or the number of certain sub-patterns in generated strings. However, such a flexible control requires an additional and nontrivial tuning procedure. Using techniques of convex optimisation, we present an efficient tuning algorithm for multi-parametric combinatorial specifications. Our algorithm works in polynomial time in the system description length, the number of tuning parameters, the number of combinatorial classes in the specification, and the logarithm of the total target size. We demonstrate the effectiveness of our method on a series of practical examples, including rational, algebraic, and so-called Pólya specifications. We show how our method can be adapted to a broad range of less typical combinatorial constructions, including symmetric polynomials, labelled sets and cycles with cardinality lower bounds, simple increasing trees or substitutions. Finally, we discuss some practical aspects of our prototype tuner implementation and provide its benchmark results.
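The simplest instance of the tuning problem is a Boltzmann-style sampler whose single branching probability governs the expected output size; "tuning" then amounts to choosing that probability and rejecting outliers. The following toy sketch for plane binary trees is our own illustration of this idea, not the paper's convex-optimisation tuner (which handles full multi-parametric specifications):

```python
import random

def sample_binary_tree(p, rng, cap=10_000):
    """Boltzmann-style sampler for plane binary trees: each node is a leaf
    with probability p, else internal with two subtrees. Returns the tree
    size (number of nodes), or None if the cap is exceeded (rejected).
    Uses a pending-subtree counter instead of recursion."""
    size, pending = 0, 1
    while pending:
        size += 1
        if size > cap:
            return None
        # a leaf resolves one pending subtree; an internal node
        # consumes one and creates two (net +1)
        pending += -1 if rng.random() < p else 1
    return size

def tuned_sample(target, tol, p, seed=2):
    """Rejection loop: resample until the size lands within tol of target."""
    rng = random.Random(seed)
    while True:
        s = sample_binary_tree(p, rng)
        if s is not None and abs(s - target) <= tol:
            return s

# p > 1/2 keeps trees finite almost surely; sizes of plane binary
# trees are always odd (leaves = internal nodes + 1).
s = tuned_sample(target=21, tol=10, p=0.6)
```

Replacing this single probability with a vector of weights, one per combinatorial class and per tracked parameter, is precisely where the convex-optimisation machinery of the paper comes in.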
We study a natural model of a random $2$-dimensional cubical complex which is a subcomplex of an n-dimensional cube, and where every possible square $2$-face is included independently with probability p. Our main result exhibits a sharp threshold $p=1/2$ for homology vanishing as $n \to \infty $. This is a $2$-dimensional analogue of the Burtin and Erdős–Spencer theorems characterising the connectivity threshold for random graphs on the $1$-skeleton of the n-dimensional cube.
Our main result can also be seen as a cubical counterpart to the Linial–Meshulam theorem for random $2$-dimensional simplicial complexes. However, the models exhibit strikingly different behaviours. We show that if $p> 1 - \sqrt {1/2} \approx 0.2929$, then with high probability the fundamental group is a free group with one generator for every maximal $1$-dimensional face. As a corollary, homology vanishing and simple connectivity have the same threshold, even in the strong ‘hitting time’ sense. This is in contrast with the simplicial case, where the thresholds are far apart. The proof depends on an iterative algorithm for contracting cycles – we show that with high probability, the algorithm rapidly and dramatically simplifies the fundamental group, converging after only a few steps.
In [20] Krajíček and Pudlák discovered connections between problems in computational complexity and the lengths of first-order proofs of finite consistency statements. Later Pudlák [25] studied more statements that connect provability with computational complexity and conjectured that they are true. All these conjectures are at least as strong as $\mathsf {P}\neq \mathsf {NP}$ [23–25]. One of the problems concerning these conjectures is to find out how tightly they are connected with statements about computational complexity classes. Results of this kind had been proved in [20, 22]. In this paper, we generalize and strengthen these results. Another question that we address concerns the dependence between these conjectures. We construct two oracles that enable us to answer questions about relativized separations asked in [19, 25] (i.e., for the pairs of conjectures mentioned in the questions, we construct oracles such that one conjecture from the pair is true in the relativized world and the other is false and vice versa). We also show several new connections between the studied conjectures. In particular, we show that the relation between the finite reflection principle and proof systems for existentially quantified Boolean formulas is similar to the one for finite consistency statements and proof systems for non-quantified propositional tautologies.
In this survey we discuss work of Levin and V’yugin on collections of sequences that are non-negligible in the sense that they can be computed by a probabilistic algorithm with positive probability. More precisely, Levin and V’yugin introduced an ordering on collections of sequences that are closed under Turing equivalence. Roughly speaking, given two such collections $\mathcal {A}$ and $\mathcal {B}$, $\mathcal {A}$ is below $\mathcal {B}$ in this ordering if $\mathcal {A}\setminus \mathcal {B}$ is negligible. The degree structure associated with this ordering, the Levin–V’yugin degrees (or $\mathrm {LV}$-degrees), can be shown to be a Boolean algebra, and in fact a measure algebra. We demonstrate the interactions of this work with recent results in computability theory and algorithmic randomness: First, we recall the definition of the Levin–V’yugin algebra and identify connections between its properties and classical properties from computability theory. In particular, we apply results on the interactions between notions of randomness and Turing reducibility to establish new facts about specific LV-degrees, such as the LV-degree of the collection of 1-generic sequences, that of the collection of sequences of hyperimmune degree, and those collections corresponding to various notions of effective randomness. Next, we provide a detailed explanation of a complex technique developed by V’yugin that allows the construction of semi-measures into which computability-theoretic properties can be encoded. We provide two examples of the use of this technique by explicating a result of V’yugin’s about the LV-degree of the collection of Martin-Löf random sequences and extending the result to the LV-degree of the collection of sequences of DNC degree.