Finding a hidden partition in a random environment is a general and important problem, which contains as subproblems many well-studied questions, such as finding a hidden clique, finding a hidden colouring and finding a hidden bipartition.
In this paper we provide a simple SVD algorithm for this purpose, addressing a question of McSherry. This algorithm is easy to implement and works for sparse graphs under optimal density assumptions. We also consider an approximation algorithm which, on the one hand, works under very mild assumptions, but on the other hand can sometimes be upgraded to give the exact solution.
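As a concrete illustration of the spectral idea, the following is a generic planted-bipartition sketch in Python (an illustration with hypothetical densities and sizes, not the paper's algorithm or its guarantees): the hidden parts can often be read off the signs of the second singular vector of the adjacency matrix.

    # Generic spectral-partitioning sketch (hypothetical parameters; not
    # the paper's algorithm): plant a bipartition with intra-part density
    # 0.7 and inter-part density 0.3, then recover it from the signs of
    # the second singular vector of the adjacency matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                    # vertices per part (assumption)
    labels = np.array([0] * n + [1] * n)
    p_in, p_out = 0.7, 0.3                     # made-up densities
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = np.triu(rng.random((2 * n, 2 * n)) < probs, 1).astype(float)
    A = A + A.T                                # symmetric 0/1 adjacency matrix

    U, S, Vt = np.linalg.svd(A)
    guess = (U[:, 1] > 0).astype(int)          # split by sign of 2nd singular vector
    accuracy = max((guess == labels).mean(), ((1 - guess) == labels).mean())
    print(f"recovered {accuracy:.1%} of vertices correctly")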
We provide a quantum algorithm for simulating the dynamics of sparse Hamiltonians with complexity sublogarithmic in the inverse error, an exponential improvement over previous methods. Specifically, we show that a $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ with precision $\epsilon$ using $O(\tau (\log (\tau /\epsilon )/\log \log (\tau /\epsilon )))$ queries and $O(\tau (\log ^{2}(\tau /\epsilon )/\log \log (\tau /\epsilon ))n)$ additional 2-qubit gates, where $\tau =d^{2}\Vert H\Vert _{\max }t$. Unlike previous approaches based on product formulas, the query complexity is independent of the number of qubits acted on, and for time-varying Hamiltonians, the gate complexity is logarithmic in the norm of the derivative of the Hamiltonian. Our algorithm is based on a significantly improved simulation of the continuous- and fractional-query models using discrete quantum queries, showing that the former models are not much more powerful than the discrete model even for very small error. We also simplify the analysis of this conversion, avoiding the need for a complex fault-correction procedure. Our simplification relies on a new form of ‘oblivious amplitude amplification’ that can be applied even though the reflection about the input state is unavailable. Finally, we prove new lower bounds showing that our algorithms are optimal as a function of the error.
We call ‘bits’ a sequence of devices indexed by positive integers, where every device can be in two states: 0 (idle) and 1 (active). Start from the ‘ground state’ of the system, in which all bits are idle. In our first binary flipping (BF) model the system evolves as follows: at each time step, choose one bit from a given distribution P on the positive integers, independently of everything else, and flip the state of this bit. In our second damaged bits (DB) model a ‘damaged’ state is added: each selected idle bit becomes active, but selecting an active bit changes its state to damaged, in which it then stays forever. In both models we analyse the recurrence of the system to the ground state in which no bits are active. We present sufficient conditions for both the BF and DB models to show recurrent or transient behaviour, depending on the properties of the distribution P. We provide a bound for fractional moments of the return time to the ground state for the BF model, and prove a central limit theorem for the number of active bits for both models.
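The BF model is easy to simulate directly. A minimal sketch (with P taken, as an assumption, to be a geometric distribution on the positive integers) records the return times to the ground state:

    # Minimal simulation of the binary flipping (BF) model; the choice
    # of P as a geometric distribution is an assumption for illustration.
    import random

    def simulate_bf(steps, p_geom=0.5, seed=0):
        """At each step pick bit i ~ P and flip it; return the times at
        which the system returns to the ground state (all bits idle)."""
        rng = random.Random(seed)
        active = set()                      # indices of bits in state 1
        returns = []
        for t in range(1, steps + 1):
            i = 1                           # sample i from Geometric(p_geom)
            while rng.random() >= p_geom:
                i += 1
            active.symmetric_difference_update({i})   # flip bit i
            if not active:
                returns.append(t)
        return returns

    print(simulate_bf(10_000)[:5])          # first few return times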
NTRU is a public-key cryptosystem introduced at ANTS-III. The two most widely used techniques for attacking the NTRU private key are meet-in-the-middle attacks and lattice-basis reduction attacks. Howgrave-Graham combined both techniques in 2007 and pointed out that the largest obstacle to attacks is the memory capacity required for the meet-in-the-middle phase. In the present paper an algorithm is presented that applies low-memory techniques to find ‘golden’ collisions to Odlyzko’s meet-in-the-middle attack against the NTRU private key. Several aspects of NTRU secret keys and of the algorithm are analysed. The running time of the algorithm with a maximum storage capacity of $w$ is estimated and experimentally verified. Experiments indicate that decreasing the storage capacity $w$ by a factor $1<c<\sqrt{w}$ increases the running time by a factor $\sqrt{c}$.
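The low-memory technique in question is generic collision search with distinguished points, in the style of van Oorschot and Wiener. A toy sketch of that generic technique (with a made-up domain and parameters, not the paper's NTRU-specific algorithm) looks as follows: only trail endpoints whose low bits are zero are stored, so memory stays far below the number of points visited.

    # Generic distinguished-point collision search (toy parameters; not
    # the paper's algorithm): walk trails of a random-looking function f
    # and store only 'distinguished' endpoints.
    import hashlib

    N = 1 << 20                  # toy search-space size (assumption)
    DIST = (1 << 8) - 1          # x is distinguished if its low 8 bits are zero

    def f(x):
        h = hashlib.sha256(x.to_bytes(8, 'big')).digest()
        return int.from_bytes(h[:8], 'big') % N

    def find_collision(max_trails=5000):
        table = {}               # distinguished endpoint -> (trail start, length)
        for start in range(max_trails):
            x, steps = start, 0
            while (x & DIST) != 0 and steps < 10_000:
                x, steps = f(x), steps + 1
            if (x & DIST) != 0:
                continue         # trail trapped in a cycle: abandon it
            if x in table:
                # two trails merged; re-walking both from their stored
                # starts locates the colliding pair of inputs
                return table[x], (start, steps)
            table[x] = (start, steps)
        return None

    print(find_collision())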
Let $\mathbf{f}$ and $\mathbf{g}$ be polynomials of bounded Euclidean norm in the ring $\mathbb{Z}[X]/\langle X^{n}+1\rangle$. Given the polynomial $[\mathbf{f}/\mathbf{g}]_{q}\in \mathbb{Z}_{q}[X]/\langle X^{n}+1\rangle$, the NTRU problem is to find $\mathbf{a},\mathbf{b}\in \mathbb{Z}[X]/\langle X^{n}+1\rangle$ with a small Euclidean norm such that $[\mathbf{a}/\mathbf{b}]_{q}=[\mathbf{f}/\mathbf{g}]_{q}$. We propose an algorithm to solve the NTRU problem, which runs in $2^{O(\log ^{2}\lambda )}$ time, where $\lambda$ is the security parameter, when $\Vert \mathbf{g}\Vert ,\Vert \mathbf{f}\Vert$, and $\Vert \mathbf{g}^{-1}\Vert$ are within some range. The main technique of our algorithm is the reduction of a problem on a field to one on a subfield. The GGH scheme, the first candidate for an (approximate) multilinear map, was recently shown to be insecure by the Hu–Jia attack using low-level encodings of zero, but no polynomial-time attack was known without them. In the GGH scheme without low-level encodings of zero, our algorithm can be directly applied to attack this scheme if we have some top-level encodings of zero and a known pair of plaintext and ciphertext. Using our algorithm, we can construct a level-$0$ encoding of zero and use it to attack the security of this scheme in time quasi-polynomial in its security parameter, using the parameters suggested by Garg, Gentry and Halevi [‘Candidate multilinear maps from ideal lattices’, Advances in cryptology — EUROCRYPT 2013 (Springer, 2013) 1–17].
We consider sets $\Gamma (n,s,k)$ of narrow clauses expressing that no definition of a size $s$ circuit with $n$ inputs is refutable in resolution R in $k$ steps. We show that every CNF with a short refutation in extended R, ER, can be easily reduced to an instance of $\Gamma (0,s,k)$ (with $s,k$ depending on the size of the ER-refutation) and, in particular, that $\Gamma (0,s,k)$ when interpreted as a relativized NP search problem is complete among all such problems provably total in bounded arithmetic theory $V_{1}^{1}$. We use the ideas of implicit proofs from Krajíček [J. Symbolic Logic, 69 (2) (2004), 387–397; J. Symbolic Logic, 70 (2) (2005), 619–630] to define from $\Gamma (0,s,k)$ a nonrelativized NP search problem $i\Gamma$ and we show that it is complete among all such problems provably total in bounded arithmetic theory $V_{2}^{1}$. The reductions are definable in theory $S_{2}^{1}$. We indicate how similar results can be proved for some other propositional proof systems and bounded arithmetic theories and how the construction can be used to define specific random unsatisfiable formulas, and we formulate two open problems about them.
We show that every finitely generated algebra that is a finitely generated module over a finitely generated commutative subalgebra is an automaton algebra in the sense of Ufnarovskii.
Recent inapproximability results of Sly (2010), together with an approximation algorithm presented by Weitz (2006), establish a beautiful picture of the computational complexity of approximating the partition function of the hard-core model. Let $\lambda_{c}(\mathbb{T}_{\Delta})$ denote the critical activity for the hard-core model on the infinite Δ-regular tree. Weitz presented an FPTAS for the partition function when $\lambda <\lambda_{c}(\mathbb{T}_{\Delta})$ for graphs with constant maximum degree Δ. In contrast, Sly showed that for all Δ ⩾ 3, there exists $\varepsilon_{\Delta}>0$ such that (unless RP = NP) there is no FPRAS for approximating the partition function on graphs of maximum degree Δ for activities λ satisfying $\lambda_{c}(\mathbb{T}_{\Delta})<\lambda <\lambda_{c}(\mathbb{T}_{\Delta})+\varepsilon_{\Delta}$.
We prove that a similar phenomenon holds for the antiferromagnetic Ising model. Sinclair, Srivastava and Thurley (2014) extended Weitz's approach to the antiferromagnetic Ising model, yielding an FPTAS for the partition function for all graphs of constant maximum degree Δ when the parameters of the model lie in the uniqueness region of the infinite Δ-regular tree. We prove the complementary result for the antiferromagnetic Ising model without external field, namely, that unless RP = NP, for all Δ ⩾ 3, there is no FPRAS for approximating the partition function on graphs of maximum degree Δ when the inverse temperature lies in the non-uniqueness region of the infinite tree $\mathbb{T}_{\Delta}$. Our proof works by relating certain second moment calculations for random Δ-regular bipartite graphs to the tree recursions used to establish the critical points on the infinite tree.
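For intuition, the tree recursion behind these critical points is easy to explore numerically. The known critical activity on the Δ-regular tree is $\lambda_{c}(\mathbb{T}_{\Delta})=(\Delta -1)^{\Delta -1}/(\Delta -2)^{\Delta}$; below it the occupation-ratio recursion converges to a unique fixed point, above it the iterates oscillate between two values. A small illustrative sketch (not part of either paper):

    # Hard-core tree recursion R -> lambda / (1 + R)^(Delta - 1) and the
    # known critical activity; illustration only.
    def lambda_c(Delta):
        return (Delta - 1) ** (Delta - 1) / (Delta - 2) ** Delta

    def iterate_recursion(lam, Delta, steps):
        R = 1.0
        for _ in range(steps):
            R = lam / (1 + R) ** (Delta - 1)
        return R

    Delta = 3
    lc = lambda_c(Delta)                       # = 4.0 for Delta = 3
    for lam in (0.5 * lc, 1.5 * lc):
        # below lambda_c consecutive iterates agree (uniqueness); above
        # lambda_c they oscillate (non-uniqueness)
        r1 = iterate_recursion(lam, Delta, 200)
        r2 = iterate_recursion(lam, Delta, 201)
        print(f"lambda = {lam:.2f}: consecutive iterates {r1:.4f}, {r2:.4f}")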
Suppose that $X_1, X_2,\ldots$ are independent identically distributed Bernoulli random variables with mean $p$. A Bernoulli factory for a function $f$ takes as input $X_1, X_2,\ldots$ and outputs a random variable that is Bernoulli with mean $f(p)$. A fast algorithm is a function that only depends on the values of $X_1,\ldots ,X_T$, where $T$ is a stopping time with small mean. When $f(p)$ is a real analytic function the problem reduces to being able to draw from linear functions $Cp$ for a constant $C>1$. Also it is necessary that $Cp\leqslant 1-\varepsilon$ for known $\varepsilon >0$. Previous methods for this problem required extensive modification of the algorithm for every value of $C$ and $\varepsilon$. These methods did not have explicit bounds on $\mathbb{E}[T]$ as a function of $C$ and $\varepsilon$. This paper presents the first Bernoulli factory for $f(p)=Cp$ with bounds on $\mathbb{E}[T]$ as a function of the input parameters. In fact, $\sup_{p\in [0,(1-\varepsilon )/C]}\mathbb{E}[T]\leqslant 9.5\varepsilon ^{-1}C$. In addition, this method is very simple to implement. Furthermore, a lower bound on the average running time of any $Cp$ Bernoulli factory is shown. For $\varepsilon \leqslant 1/2$, $\sup_{p\in [0,(1-\varepsilon )/C]}\mathbb{E}[T]\geqslant 0.004C\varepsilon ^{-1}$, so the new method is optimal up to a constant in the running time.
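To illustrate the notion of a Bernoulli factory, here is the textbook example for $f(p)=p^{2}$ (a trivial case with $T=2$, shown only to fix ideas; it is not the paper's algorithm for $Cp$):

    # Toy Bernoulli factory for f(p) = p^2: multiply two input flips.
    import random

    def coin(p, rng):
        return 1 if rng.random() < p else 0

    def factory_p_squared(flip):
        """Consume exactly two input flips (T = 2) and return a
        Bernoulli(p^2) sample, without knowing p."""
        return flip() & flip()

    rng = random.Random(42)
    p = 0.3                                  # unknown to the factory
    samples = [factory_p_squared(lambda: coin(p, rng)) for _ in range(100_000)]
    print(sum(samples) / len(samples))       # close to p^2 = 0.09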
The aim of the discrete logarithm problem with auxiliary inputs (DLPwAI) is to solve for $\alpha$, given the elements $g,g^{\alpha},\ldots ,g^{\alpha^{d}}$ of a cyclic group $G=\langle g\rangle$ of prime order $p$. The best known algorithm, proposed by Cheon in 2006, solves for $\alpha$ in the case where $d\mid (p\pm 1)$, with a running time of $O(\sqrt{p/d}+d^{i})$ group exponentiations ($i=1$ or $1/2$ depending on the sign). There have been several attempts to generalize this algorithm to the case of $\Phi_{k}(p)$ where $k\geqslant 3$. However, it has been shown by Kim, Cheon and Lee that a better complexity cannot be achieved than that of the usual square root algorithms.
We propose a new algorithm for solving the DLPwAI. We show that this algorithm has a running time of $\widetilde{O}(\sqrt{p/\tau_{f}}+d)$ group exponentiations, where $\tau_{f}$ is the number of absolutely irreducible factors of $f(x)-f(y)$. We note that this number is always smaller than $\widetilde{O}(p^{1/2})$.
In addition, we present an analysis of a non-uniform birthday problem.
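For context, the ‘usual square root algorithms’ referred to above include baby-step giant-step. A minimal sketch in a toy multiplicative group (illustration only; not the algorithm of the paper, and the toy group has order $p-1$ rather than prime order):

    # Baby-step giant-step in a toy group Z_p^*: O(sqrt(p)) time and space.
    from math import isqrt

    def bsgs(g, h, p):
        """Solve g^x = h (mod p) for the smallest such x >= 0."""
        m = isqrt(p) + 1
        baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
        g_inv_m = pow(g, -m, p)                      # g^(-m) mod p
        gamma = h
        for i in range(m):                           # giant steps h * g^(-im)
            if gamma in baby:
                return i * m + baby[gamma]
            gamma = gamma * g_inv_m % p
        return None

    p, g, x = 101, 2, 57                             # 2 generates Z_101^*
    assert bsgs(g, pow(g, x, p), p) == x
    print("recovered exponent:", x)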
A k-uniform hypergraph H = (V, E) is called ℓ-orientable if there is an assignment of each edge e ∈ E to one of its vertices v ∈ e such that no vertex is assigned more than ℓ edges. Let $H_{n,m,k}$ be a hypergraph, drawn uniformly at random from the set of all k-uniform hypergraphs with n vertices and m edges. In this paper we establish the threshold for the ℓ-orientability of $H_{n,m,k}$ for all k ⩾ 3 and ℓ ⩾ 2, that is, we determine a critical quantity $c_{k,\ell}^{*}$ such that with probability 1 − o(1) the hypergraph $H_{n,cn,k}$ has an ℓ-orientation if $c<c_{k,\ell}^{*}$, but fails to do so if $c>c_{k,\ell}^{*}$.
Our result has various applications, including sharp load thresholds for cuckoo hashing, load balancing with guaranteed maximum load, and massively parallel access to hard disk arrays.
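The cuckoo hashing connection is direct: an item that may live in one of k random cells is a hyperedge that must be oriented to one of its vertices. A minimal sketch with hypothetical parameters (the classic k = 2, ℓ = 1 variant, shown for illustration only; the paper's thresholds concern k ⩾ 3, ℓ ⩾ 2):

    # Two-table cuckoo hashing: insertion kicks occupants between their
    # two candidate cells, like re-assigning a hyperedge to another of
    # its vertices. Table size and kick limit are made-up parameters.
    class CuckooHash:
        def __init__(self, size, max_kicks=500):
            self.size, self.max_kicks = size, max_kicks
            self.tables = [[None] * size, [None] * size]

        def _slot(self, key, i):
            return hash((i, key)) % self.size        # one hash per table

        def insert(self, key):
            i = 0
            for _ in range(self.max_kicks):
                s = self._slot(key, i)
                if self.tables[i][s] is None:
                    self.tables[i][s] = key
                    return True
                self.tables[i][s], key = key, self.tables[i][s]  # kick occupant
                i ^= 1                   # re-insert the victim in the other table
            return False                 # too many kicks: a rehash would be needed

    h = CuckooHash(size=1024)
    print(all(h.insert(k) for k in range(900)))   # load 900/2048 < threshold 1/2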
A prime sieve is an algorithm that finds the primes up to a bound $n$. We say that a prime sieve is incremental if it can quickly determine whether $n+1$ is prime after having found all primes up to $n$. We say a sieve is compact if it uses roughly $\sqrt{n}$ space or less. In this paper, we present two new results.
– We describe the rolling sieve, a practical, incremental prime sieve that takes $O(n\log \log n)$ time and $O(\sqrt{n}\log n)$ bits of space.
– We also show how to modify the sieve of Atkin and Bernstein from 2004 to obtain a sieve that is simultaneously sublinear, compact, and incremental.
The second result solves an open problem given by Paul Pritchard in 1994.
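For flavour, the following is the classic dictionary-based incremental sieve of Eratosthenes. It illustrates the ‘incremental’ notion (each call to next yields one more prime), but unlike the rolling sieve it is not compact, since it stores one entry per prime found:

    # Classic incremental sieve: a dictionary maps each upcoming
    # composite to a prime witness that divides it.
    def primes():
        """Yield 2, 3, 5, ... one at a time, extending the sieve on demand."""
        composites = {}              # composite -> a prime dividing it
        n = 2
        while True:
            p = composites.pop(n, None)
            if p is None:
                yield n                      # no recorded witness: n is prime
                composites[n * n] = n        # first relevant multiple is n^2
            else:
                m = n + p                    # slide the witness p forward
                while m in composites:
                    m += p
                composites[m] = p
            n += 1

    gen = primes()
    print([next(gen) for _ in range(10)])    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]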
We consider ‘unconstrained’ random k-XORSAT, which is a uniformly random system of m linear non-homogeneous equations over $\mathbb{F}_{2}$ in n variables, each equation containing k ⩾ 3 variables, and also consider a ‘constrained’ model where every variable appears in at least two equations. Dubois and Mandler proved that m/n = 1 is a sharp threshold for satisfiability of constrained 3-XORSAT, and analysed the 2-core of a random 3-uniform hypergraph to extend this result to find the threshold for unconstrained 3-XORSAT.
We show that m/n = 1 remains a sharp threshold for satisfiability of constrained k-XORSAT for every k ⩾ 3, and we use standard results on the 2-core of a random k-uniform hypergraph to extend this result to find the threshold for unconstrained k-XORSAT. For constrained k-XORSAT we narrow the phase transition window, showing that m − n → −∞ implies almost-sure satisfiability, while m − n → +∞ implies almost-sure unsatisfiability.
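Satisfiability of any given XORSAT system is decidable in polynomial time by Gaussian elimination over $\mathbb{F}_{2}$; the thresholds above concern when random systems are satisfiable. A small sketch with arbitrary toy parameters (illustration only, not part of the paper's analysis), packing each equation into an integer bitmask:

    # Random k-XORSAT satisfiability via Gaussian elimination over F_2.
    # Each row is an int: bits 0..n-1 are coefficients, bit n is the RHS.
    import random

    def xorsat_satisfiable(n, rows):
        mask = (1 << n) - 1
        pivots = {}                          # pivot column -> pivot row
        for row in rows:
            while row & mask:
                col = (row & mask).bit_length() - 1
                if col not in pivots:
                    pivots[col] = row        # row becomes a new pivot
                    break
                row ^= pivots[col]           # cancel the leading coefficient
            else:
                if row >> n & 1:
                    return False             # row reduced to 0 = 1: inconsistent
        return True

    def random_instance(n, m, k, rng):
        rows = []
        for _ in range(m):
            cols = rng.sample(range(n), k)
            rows.append(sum(1 << c for c in cols) | rng.getrandbits(1) << n)
        return rows

    rng = random.Random(1)
    n = 500
    for ratio in (0.8, 1.2):                 # below vs above the threshold
        m = int(ratio * n)
        print(ratio, xorsat_satisfiable(n, random_instance(n, m, 3, rng)))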
We develop a theory of complexity for numerical computations that takes into account the condition of the input data and allows for roundoff in the computations. We follow the lines of the theory developed by Blum, Shub and Smale for computations over $\mathbb{R}$ (which in turn followed those of the classical, discrete, complexity theory as laid down by Cook, Karp, and Levin, among others). In particular, we focus on complexity classes of decision problems and, paramount among them, on appropriate versions of the classes $\mathsf{P}$, $\mathsf{NP}$, and $\mathsf{EXP}$ of polynomial, nondeterministic polynomial, and exponential time, respectively. We prove some basic relationships between these complexity classes, and provide natural NP-complete problems.
This paper revisits the solution of the word problem for $\omega$-terms interpreted over finite aperiodic semigroups, obtained by J. McCammond. The original proof of correctness of McCammond’s algorithm, based on normal forms for such terms, uses McCammond’s solution of the word problem for certain Burnside semigroups. In this paper, we establish a new, simpler, correctness proof of McCammond’s algorithm, based on properties of certain regular languages associated with the normal forms. This method leads to new applications.
We compute coherent presentations of Artin monoids, that is, presentations by generators, relations, and relations between the relations. For that, we use methods of higher-dimensional rewriting that extend Squier’s and Knuth–Bendix’s completions into a homotopical completion–reduction, applied to Artin’s and Garside’s presentations. The main result of the paper states that the so-called Tits–Zamolodchikov 3-cells extend Artin’s presentation into a coherent presentation. As a byproduct, we give a new constructive proof of a theorem of Deligne on the actions of an Artin monoid on a category.
We answer the following question posed by Lechuga: given a simply connected space X with both $H^{*}(X;\mathbb{Q})$ and $\pi_{*}(X)\otimes \mathbb{Q}$ finite dimensional, what is the computational complexity of an algorithm computing the cup length and the rational Lusternik–Schnirelmann category of X?
By a reduction from the decision problem of whether a given graph is k-colourable for k ≥ 3, we show that even stricter versions of these problems are NP-hard.
In this work we consider the mean-field traveling salesman problem, where the intercity distances are taken to be independent and identically distributed with some distribution F. We consider the simplest approximation algorithm, namely, the nearest-neighbor algorithm, where the rule is to move to the nearest nonvisited city. We show that the limiting behavior of the total length of the nearest-neighbor tour depends on the scaling properties of the density of F at 0 and derive the limits for all possible cases of general F.
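The nearest-neighbour rule is straightforward to simulate. A minimal sketch with $F$ taken, as an assumption, to be uniform on $[0,1]$ (illustration only):

    # Mean-field TSP with the nearest-neighbour rule: i.i.d. Uniform(0,1)
    # intercity costs (an assumed choice of F), greedy tour from city 0.
    import random

    def nn_tour_length(n, rng):
        cost = [[0.0] * n for _ in range(n)]         # symmetric i.i.d. costs
        for i in range(n):
            for j in range(i + 1, n):
                cost[i][j] = cost[j][i] = rng.random()
        unvisited, city, total = set(range(1, n)), 0, 0.0
        while unvisited:
            nxt = min(unvisited, key=lambda j: cost[city][j])
            total += cost[city][nxt]
            unvisited.remove(nxt)
            city = nxt
        return total + cost[city][0]                 # close the tour

    rng = random.Random(0)
    print(nn_tour_length(500, rng))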
A probabilistic cellular automaton (PCA) can be viewed as a Markov chain. The cells are updated synchronously and independently, according to a distribution depending on a finite neighborhood. We investigate the ergodicity of this Markov chain. A classical cellular automaton is a particular case of PCA. For a one-dimensional cellular automaton, we prove that ergodicity is equivalent to nilpotency, and is therefore undecidable. We then propose an efficient perfect sampling algorithm for the invariant measure of an ergodic PCA. Our algorithm does not assume any monotonicity property of the local rule. It is based on a bounding process which is shown to also be a PCA. Lastly, we focus on the majority PCA, whose asymptotic behavior is unknown, and perform numerical experiments using the perfect sampling procedure.
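A minimal simulation sketch of one-dimensional PCA dynamics follows, with a hypothetical noisy-majority local rule and plain forward simulation (not the perfect sampling procedure of the paper): each cell updates, synchronously and independently, to the majority of its three-cell neighbourhood with probability 1 − eps and to the minority otherwise.

    # Forward simulation of a 1-D noisy-majority PCA on a ring
    # (made-up rule and parameters; illustration only).
    import random

    def pca_step(state, eps, rng):
        n = len(state)
        new = []
        for i in range(n):
            maj = 1 if state[i - 1] + state[i] + state[(i + 1) % n] >= 2 else 0
            new.append(maj if rng.random() >= eps else 1 - maj)
        return new

    rng = random.Random(0)
    state = [rng.randrange(2) for _ in range(200)]
    for _ in range(100):
        state = pca_step(state, eps=0.1, rng=rng)
    print(sum(state) / len(state))     # fraction of active cells after 100 steps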
From power series expansions of functions on curves over finite fields, one can obtain sequences with perfect or almost perfect linear complexity profile. It has been suggested by various authors to use such sequences as key streams for stream ciphers. In this work, we show how long parts of such sequences can be computed efficiently from short ones. Such sequences should therefore be considered to be cryptographically weak. Our attack leads in a natural way to a new measure of the complexity of sequences which we call expansion complexity.
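The linear complexity profile itself is computed by the classical Berlekamp–Massey algorithm. A minimal GF(2) sketch (the standard algorithm, not the paper's attack or its expansion complexity measure):

    # Berlekamp-Massey over GF(2): the linear complexity L_n of each
    # prefix of a bit sequence; an almost perfect profile stays near n/2.
    def linear_complexity_profile(s):
        C, B = [1], [1]            # current / previous connection polynomials
        L, m = 0, -1               # current complexity, index of last jump
        profile = []
        for i in range(len(s)):
            # discrepancy d = s_i + c_1 s_{i-1} + ... + c_L s_{i-L} (mod 2)
            d = s[i]
            for j in range(1, L + 1):
                if j < len(C):
                    d ^= C[j] & s[i - j]
            if d:
                T = C[:]
                shift = i - m                      # C(x) += x^shift * B(x)
                C += [0] * max(0, len(B) + shift - len(C))
                for j in range(len(B)):
                    C[j + shift] ^= B[j]
                if 2 * L <= i:
                    L, B, m = i + 1 - L, T, i
            profile.append(L)
        return profile

    print(linear_complexity_profile([1, 0, 1, 1, 0, 1, 0, 0, 1, 1]))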