We introduce PyCFTBoot, a wrapper designed to reduce the barrier to entry in conformal bootstrap calculations that require semidefinite programming. Symengine and SDPB are used for the most intensive symbolic and numerical steps respectively. After reviewing the built-in algorithms for conformal blocks, we explain how to use the code through a number of examples that verify past results. As an application, we show that the multi-correlator bootstrap still appears to single out the Wilson-Fisher fixed points as special theories in dimensions between 3 and 4 despite the recent proof that they violate unitarity.
A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm retains the efficiency of Delaunay graph mapping mesh deformation while offering better control of the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains, and on each sub-domain the inverse distance weighting interpolation is applied, yielding an efficiency similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method.
A class of games for finding a leader among a group of candidates is studied in detail. This class covers games based on coin tossing and rock-paper-scissors as special cases, and its complexity exhibits similar stochastic behaviours: either a logarithmic mean and bounded variance, or an exponential mean and exponential variance. Many applications are also discussed.
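As a concrete instance of such a game, the classical coin-tossing leader election can be sketched as follows (an illustrative sketch; the function name and parameters are ours, not from the paper):

```python
import random

def elect_leader(candidates, rng=None):
    """Fair leader election by coin tossing: each round, every remaining
    candidate flips a fair coin; those who flip heads survive, unless
    nobody flips heads, in which case the round is repeated with the
    same group.  The game ends when a single candidate remains.
    Returns (leader, number_of_rounds)."""
    rng = rng or random.Random()
    remaining = list(candidates)
    rounds = 0
    while len(remaining) > 1:
        rounds += 1
        heads = [c for c in remaining if rng.random() < 0.5]
        if heads:                 # at least one head: the rest drop out
            remaining = heads
    return remaining[0], rounds

leader, rounds = elect_leader(range(1024), random.Random(1))
```

For n candidates the number of rounds concentrates around log2 n, illustrating the logarithmic-mean, bounded-variance regime mentioned above.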
We propose and investigate a novel solution strategy to efficiently and accurately compute approximate solutions to semilinear optimal control problems, focusing on the optimal control of phase field formulations of geometric evolution laws. The optimal control of geometric evolution laws arises in a number of applications in fields including material science, image processing, tumour growth and cell motility. Despite this, many open problems remain in the analysis and approximation of such problems. In the current work we focus on a phase field formulation of the optimal control problem, hence exploiting the well-developed mathematical theory for the optimal control of semilinear parabolic partial differential equations. Approximation of the resulting optimal control problem is computationally challenging, requiring massive amounts of computational time and memory storage. The main focus of this work is to propose, derive, implement and test an efficient solution method for such problems. The solver for the discretised partial differential equations is based upon a geometric multigrid method incorporating advanced techniques to deal with the nonlinearities in the problem and utilising adaptive mesh refinement. An in-house two-grid solution strategy for the forward and adjoint problems, which significantly reduces memory requirements and CPU time, is proposed and investigated computationally. Furthermore, parallelisation as well as an adaptive-step gradient update for the control are employed to further improve efficiency. Along with a detailed description of our proposed solution method and its implementation, we present a number of computational results that demonstrate and evaluate our algorithms with respect to accuracy and efficiency.
A highlight of the present work is simulation results on the optimal control of phase field formulations of geometric evolution laws in 3-D which would be computationally infeasible without the solution strategies proposed in the present work.
In this paper, a combination of the variable separation approach and the extended homoclinic test approach is proposed to seek non-travelling wave solutions of the Calogero equation. The equation is reduced to some (1+1)-dimensional nonlinear equations by applying the variable separation approach, and the reduced equations are then solved with the extended homoclinic test technique. Based on this idea, and with the aid of symbolic computation, some new explicit solutions are obtained.
Let $\mathbf{f}$ and $\mathbf{g}$ be polynomials of a bounded Euclidean norm in the ring $\mathbb{Z}[X]/\langle X^{n}+1\rangle$. Given the polynomial $[\mathbf{f}/\mathbf{g}]_{q}\in \mathbb{Z}_{q}[X]/\langle X^{n}+1\rangle$, the NTRU problem is to find $\mathbf{a},\mathbf{b}\in \mathbb{Z}[X]/\langle X^{n}+1\rangle$ with a small Euclidean norm such that $[\mathbf{a}/\mathbf{b}]_{q}=[\mathbf{f}/\mathbf{g}]_{q}$. We propose an algorithm to solve the NTRU problem, which runs in $2^{O(\log^{2}\lambda)}$ time when $\Vert \mathbf{g}\Vert ,\Vert \mathbf{f}\Vert$, and $\Vert \mathbf{g}^{-1}\Vert$ are within some range. The main technique of our algorithm is the reduction of a problem on a field to one on a subfield. The GGH scheme, the first candidate for an (approximate) multilinear map, was recently found to be insecure by the Hu–Jia attack using low-level encodings of zero, but no polynomial-time attack was known without them. In the GGH scheme without low-level encodings of zero, our algorithm can be directly applied to attack this scheme if we have some top-level encodings of zero and a known pair of plaintext and ciphertext. Using our algorithm, we can construct a level-$0$ encoding of zero and use it to attack the security of this scheme in quasi-polynomial time in its security parameter, using the parameters suggested by Garg, Gentry and Halevi [‘Candidate multilinear maps from ideal lattices’, Advances in cryptology — EUROCRYPT 2013 (Springer, 2013) 1–17].
A gravitational search algorithm (GSA) is a meta-heuristic modelled on the Newtonian law of gravity and mass interaction. Here we propose a new hybrid algorithm called the Direct Gravitational Search Algorithm (DGSA), which combines a GSA, capable of wide exploration and deep exploitation, with the Nelder-Mead method, a promising direct method capable of an intensification search. The main drawback of a meta-heuristic algorithm is slow convergence; in our DGSA the standard GSA is run for a number of iterations before the best solution obtained is passed to the Nelder-Mead method, which refines it and avoids running iterations that provide negligible further improvement. We test the DGSA on 7 benchmark integer functions and 10 benchmark minimax functions, comparing its performance against 9 other algorithms; the numerical results show that the optimal or near-optimal solution is obtained faster.
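The two-phase idea (global exploration followed by a direct local refinement) can be sketched in a few lines. This is a toy illustration only, not the authors' DGSA: plain random sampling stands in for the GSA phase, and a simple shrinking pattern search stands in for Nelder-Mead; all names and parameters are ours.

```python
import random

def two_phase_minimize(f, dim, bounds, n_explore=200, rng=None):
    """Toy two-phase minimization in the spirit of DGSA: a global
    exploratory phase (random sampling, standing in for the GSA)
    followed by local intensification (a shrinking coordinate pattern
    search, standing in for the Nelder-Mead method)."""
    rng = rng or random.Random()
    lo, hi = bounds
    # Phase 1: exploration -- keep the best of n_explore random points.
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    for _ in range(n_explore):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        if f(x) < f(best):
            best = x
    # Phase 2: intensification -- refine the best point found so far.
    step = (hi - lo) / 4.0
    while step > 1e-6:
        improved = False
        for i in range(dim):
            for d in (+step, -step):
                cand = best[:]
                cand[i] += d
                if f(cand) < f(best):
                    best, improved = cand, True
        if not improved:
            step /= 2.0
    return best

sphere = lambda x: sum(v * v for v in x)
x = two_phase_minimize(sphere, 3, (-5.0, 5.0), rng=random.Random(0))
```

The design point is the hand-off: the exploratory phase only needs to land in the right basin, after which the cheap local search does the fine convergence, mirroring the paper's motivation of avoiding meta-heuristic iterations that yield negligible further improvement.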
Neighbour search (NS) is at the core of any implementation of smoothed particle hydrodynamics (SPH). In this paper, we present an efficient neighbour search method based on the plane sweep (PW) algorithm, with N being the number of SPH particles. The resulting method, dubbed the PWNS method, is totally independent of grids (i.e., purely meshfree) and capable of treating variable smoothing lengths, arbitrary particle distributions and heterogeneous kernels. Several state-of-the-art data structures and algorithms, e.g., the segment tree and the Morton code, are optimized and implemented. By simply allowing multiple lines to sweep the SPH particles simultaneously from different initial positions, a parallelization of the PWNS method with satisfactory speedup and load balancing can easily be achieved. The PWNS SPH solver thus has great potential for large-scale fluid dynamics simulations.
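The sweep idea itself can be illustrated in a few lines. The following is a simplified 2-D sketch of a sweep-based neighbour search, not the paper's full PWNS method: it omits the segment tree, Morton codes, and variable smoothing lengths, and assumes a single fixed smoothing length h with support radius 2h.

```python
import bisect
import math

def sweep_neighbours(points, h):
    """Sort particles by x, then sweep a vertical line across them so
    that only already-swept particles whose x-distance is within the
    support radius 2h are tested with a full distance check.
    Returns a dict mapping each particle index to its neighbour list."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    xs = [points[i][0] for i in order]
    nbrs = {i: [] for i in range(len(points))}
    for k, i in enumerate(order):
        xi, yi = points[i]
        # candidates: swept particles with x in [xi - 2h, xi]
        start = bisect.bisect_left(xs, xi - 2 * h, 0, k)
        for j in (order[m] for m in range(start, k)):
            xj, yj = points[j]
            if math.hypot(xi - xj, yi - yj) <= 2 * h:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

pts = [(0.0, 0.0), (0.5, 0.0), (3.0, 0.0), (0.0, 0.4)]
n = sweep_neighbours(pts, 0.5)
```

Each pair is examined once, when the later of the two particles is swept, which is also what makes the method easy to parallelize by launching several sweep lines from different starting positions.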
A k-uniform hypergraph H = (V, E) is called ℓ-orientable if there is an assignment of each edge e ∈ E to one of its vertices v ∈ e such that no vertex is assigned more than ℓ edges. Let Hn,m,k be a hypergraph drawn uniformly at random from the set of all k-uniform hypergraphs with n vertices and m edges. In this paper we establish the threshold for the ℓ-orientability of Hn,m,k for all k ⩾ 3 and ℓ ⩾ 2, that is, we determine a critical quantity c*k,ℓ such that with probability 1 − o(1) the hypergraph Hn,cn,k has an ℓ-orientation if c < c*k,ℓ, but fails to do so if c > c*k,ℓ.
Our result has various applications, including sharp load thresholds for cuckoo hashing, load balancing with guaranteed maximum load, and massive parallel access to hard disk arrays.
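The connection to cuckoo hashing can be made concrete: with two hash functions per key and one slot per bucket, placing every key at one of its hash locations is exactly orienting the corresponding hypergraph of key choices. A minimal sketch of the classic insertion-by-eviction scheme follows (the hash functions and table size are illustrative, chosen deterministic for reproducibility):

```python
def cuckoo_insert(table, keys, h1, h2, max_kicks=500):
    """Minimal cuckoo hashing sketch (2 hash functions, one slot per
    bucket): every key must end up at h1(key) or h2(key).  An occupied
    slot evicts its current key, which is then reinserted at its other
    location, and so on, for at most max_kicks evictions per key."""
    for key in keys:
        cur, pos = key, h1(key)
        for _ in range(max_kicks):
            if table[pos] is None:
                table[pos] = cur
                break
            table[pos], cur = cur, table[pos]                # evict occupant
            pos = h2(cur) if pos == h1(cur) else h1(cur)     # its other slot
        else:
            raise RuntimeError("insertion failed; table needs a rehash")
    return table

m = 16                                  # illustrative table size
h1 = lambda k: (7 * k + 3) % m
h2 = lambda k: (5 * k + 11) % m
t = cuckoo_insert([None] * m, range(6), h1, h2)
```

The load threshold question for this scheme, i.e. how many keys per slot can be placed with high probability, is precisely the orientability threshold studied above.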
Consider the problem of drawing random variates (X1, …, Xn) from a distribution where the marginal of each Xi is specified, as well as the correlation between every pair Xi and Xj. For given marginals, the Fréchet-Hoeffding bounds put a lower and upper bound on the correlation between Xi and Xj. Any achievable correlation between Xi and Xj is a convex combination of these bounds. We call the value λ(Xi, Xj) ∈ [0, 1] of this convex combination the convexity parameter of (Xi, Xj), with λ(Xi, Xj) = 1 corresponding to the upper bound and maximal correlation. For given marginal distribution functions F1, …, Fn of (X1, …, Xn), we show that λ(Xi, Xj) = λij if and only if there exist symmetric Bernoulli random variables (B1, …, Bn) (that is, {0, 1} random variables with mean ½) such that λ(Bi, Bj) = λij. In addition, we completely characterize the set of convexity parameters for symmetric Bernoulli marginals in two, three, and four dimensions.
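Once the two correlation bounds are known, the convexity parameter is just the weight of the achieved correlation between them. A minimal sketch for symmetric Bernoulli marginals (the joint probability value is illustrative; the function name is ours):

```python
def convexity_parameter(rho, rho_min, rho_max):
    """Convexity parameter lambda of a pair: the weight placed on the
    Frechet-Hoeffding upper correlation bound, so that
    rho = lambda * rho_max + (1 - lambda) * rho_min."""
    return (rho - rho_min) / (rho_max - rho_min)

# For symmetric Bernoulli marginals (mean 1/2, variance 1/4) the bounds
# are rho_min = -1 (countermonotone) and rho_max = +1 (comonotone), and
# corr(B_i, B_j) = 4 * P(B_i = 1, B_j = 1) - 1.
p11 = 0.375                                  # illustrative joint probability
rho = 4 * p11 - 1
lam = convexity_parameter(rho, -1.0, 1.0)
```

Here lam = 0.75: the pair sits three quarters of the way from the countermonotone bound to the comonotone bound.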
This paper studies a special type of binomial splitting process. Such a process can be used to model a high-dimensional corner parking problem as well as to determine the depth of random PATRICIA (practical algorithm to retrieve information coded in alphanumeric) tries, which are a special class of digital tree data structures. The latter also has natural interpretations in terms of distinct values in independent and identically distributed geometric random variables and the occupancy problem in urn models. The corresponding distribution has a logarithmic mean and a bounded variance, which oscillates if the binomial parameter p is not equal to ½ and is asymptotic to one in the unbiased case. Moreover, owing to the periodic fluctuations, the limiting distribution does not exist.
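The distinct-values interpretation mentioned above is easy to explore empirically. The following is an illustrative simulation sketch only (function and parameter names are ours), counting the number of distinct values among n iid geometric draws:

```python
import random

def distinct_geometric_values(n, p, rng):
    """Number of distinct values among n iid Geometric(p) draws on
    {1, 2, ...} -- one of the equivalent descriptions of the binomial
    splitting quantity discussed above.  Simulation sketch only."""
    seen = set()
    for _ in range(n):
        g = 1
        while rng.random() >= p:      # count trials until first success
            g += 1
        seen.add(g)
    return len(seen)

d = distinct_geometric_values(10000, 0.5, random.Random(7))
```

For p = ½ the count grows like log2 n, matching the logarithmic-mean regime described above.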
We show the existence of a large family of representations supported by the orbit closure of the determinant. However, the validity of our result is based on the validity of the celebrated ‘Latin square conjecture’ due to Alon and Tarsi or, more precisely, on the validity of an equivalent ‘column Latin square conjecture’ due to Huang and Rota.
We present a higher-dimensional generalization of the Gama–Nguyen algorithm (STOC ’08) for approximating the shortest vector problem in a lattice. This generalization approximates the densest sublattice by using a subroutine solving the exact problem in low dimension, such as the Dadush–Micciancio algorithm (SODA ’13). Our approximation factor corresponds to a natural inequality on Rankin’s constant derived from Rankin’s inequality.
We study the radius of absolute monotonicity $R$ of rational functions with numerator and denominator of degree $s$ that approximate the exponential function to order $p$. Such functions arise in the application of implicit $s$-stage, order $p$ Runge–Kutta methods for initial value problems, and the radius of absolute monotonicity governs the numerical preservation of properties like positivity and maximum-norm contractivity. We construct a function with $p=2$ and $R>2s$, disproving a conjecture of van de Griend and Kraaijevanger. We determine the maximum attainable radius for functions in several one-parameter families of rational functions. Moreover, we prove earlier conjectured optimal radii in some families with two or three parameters via uniqueness arguments for systems of polynomial inequalities. Our results also prove the optimality of some strong stability preserving implicit and singly diagonally implicit Runge–Kutta methods. Whereas previous results in this area were primarily numerical, we give all constants as exact algebraic numbers.
We consider the classical problem of finding the best uniform approximation by polynomials of $1/(x-a)^2,$ where $a>1$ is given, on the interval $[-1,1]$. First, using symbolic computation tools we derive the explicit expressions of the polynomials of best approximation of low degrees, and then give a parametric solution of the problem in terms of elliptic functions. Symbolic computation is then invoked once more to derive a recurrence relation for the coefficients of the polynomials of best uniform approximation, based on a Pell-type equation satisfied by the solutions.
The problem of finding a nontrivial factor of a polynomial $f(x)$ over a finite field ${\mathbb{F}}_q$ has many known efficient, but randomized, algorithms. The deterministic complexity of this problem is a famous open question even assuming the generalized Riemann hypothesis (GRH). In this work we improve the state of the art by focusing on prime degree polynomials; let $n$ be the degree. If $(n-1)$ has a ‘large’ $r$-smooth divisor $s$, then we find a nontrivial factor of $f(x)$ in deterministic $\mbox{poly}(n^r,\log q)$ time, assuming GRH and that $s=\Omega (\sqrt{n/2^r})$. Thus, for $r=O(1)$ our algorithm is polynomial time. Further, for $r=\Omega (\log \log n)$ there are infinitely many prime degrees $n$ for which our algorithm is applicable and better than the best known, assuming GRH. Our methods build on the algebraic-combinatorial framework of $m$-schemes initiated by Ivanyos, Karpinski and Saxena (ISSAC 2009). We show that the $m$-scheme on $n$ points, implicitly appearing in our factoring algorithm, has an exceptional structure, leading us to the improved time complexity. Our structure theorem proves the existence of small intersection numbers in any association scheme that has many relations, and roughly equal valencies and indistinguishing numbers.
We consider a broad class of fair leader election algorithms, and study the duration of contestants (the number of rounds a randomly selected contestant stays in the competition) and the overall cost of the algorithm. We give sufficient conditions for the duration to have a geometric limit distribution (a perpetuity built from Bernoulli random variables), and for the limiting distribution of the total cost (after suitable normalization) to be a perpetuity. For the duration, the proof is established via convergence (to 0) of the first-order Wasserstein distance from the geometric limit. For the normalized overall cost, the method of proof is also convergence of the first-order Wasserstein distance, augmented with an argument based on a contraction mapping in the first-order Wasserstein metric space to show that the limit approaches a unique fixed-point solution of a perpetuity distributional equation. The use of these two steps is commonly referred to as the contraction method.
In this work we consider the mean-field traveling salesman problem, where the intercity distances are taken to be independent and identically distributed with some distribution F. We consider the simplest approximation algorithm, namely, the nearest-neighbor algorithm, where the rule is to move to the nearest nonvisited city. We show that the limiting behavior of the total length of the nearest-neighbor tour depends on the scaling properties of the density of F at 0 and derive the limits for all possible cases of general F.
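The nearest-neighbour rule described above is easily made concrete. A self-contained sketch follows (the distance matrix is a small illustrative instance, not a mean-field one):

```python
def nearest_neighbour_tour(dist, start=0):
    """Nearest-neighbour heuristic for the TSP: from the current city,
    always move to the closest city not yet visited.  `dist` is a full
    symmetric distance matrix; returns the visiting order and the total
    tour length, including the closing edge back to the start."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(dist[tour[k]][tour[k + 1]] for k in range(n - 1))
    length += dist[tour[-1]][tour[0]]
    return tour, length

d = [[0, 1, 4, 3],
     [1, 0, 2, 5],
     [4, 2, 0, 1],
     [3, 5, 1, 0]]
tour, length = nearest_neighbour_tour(d)
```

In the mean-field setting the entries of `dist` would be iid draws from F, and the analysis above concerns the limit of `length` as the number of cities grows.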
When the search algorithm QuickSelect compares keys during its execution in order to find a key of target rank, it must operate on the keys' representations or internal structures, which were ignored by the previous studies that quantified the execution cost for the algorithm in terms of the number of required key comparisons. In this paper we analyze running costs for the algorithm that take into account not only the number of key comparisons, but also the cost of each key comparison. We suppose that keys are represented as sequences of symbols generated by various probabilistic sources and that QuickSelect operates on individual symbols in order to find the target key. We identify limiting distributions for the costs, and derive integral and series expressions for the expectations of the limiting distributions. These expressions are used to recapture previously obtained results on the number of key comparisons required by the algorithm.
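The comparison-counting cost model can be sketched with a standard QuickSelect (this counts one unit per key comparison, not the symbol-level cost analysed in the paper; distinct keys are assumed, and the names are ours):

```python
import random

def quickselect(keys, rank, rng=None):
    """QuickSelect with a key-comparison counter: pick a random pivot,
    compare it against every other key, and recurse into the side
    containing the target rank (0-indexed).  Assumes distinct keys.
    Returns (key_of_given_rank, number_of_key_comparisons)."""
    rng = rng or random.Random()
    keys = list(keys)
    comparisons = 0
    while True:
        pivot = keys.pop(rng.randrange(len(keys)))
        comparisons += len(keys)          # pivot vs. every remaining key
        less = [k for k in keys if k < pivot]
        if rank < len(less):
            keys = less
        elif rank == len(less):
            return pivot, comparisons
        else:
            keys = [k for k in keys if k > pivot]
            rank -= len(less) + 1

val, comps = quickselect([5, 1, 9, 3, 7], 2, random.Random(3))
```

The symbol-level model studied above replaces the unit cost per comparison with the number of symbols two keys share before they first differ.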
Let $S$ be a polynomial ring over a field $K$ and let $I$ be a monomial ideal of $S$. We say that $I$ is MHC (that is, $I$ satisfies the maximal height condition for the associated primes of $I$) if there exists a prime ideal $\mathfrak{p}\in \mathrm{Ass}_{S}\, S/I$ for which $\mathrm{ht}(\mathfrak{p})$ equals the number of indeterminates that appear in the minimal set of monomials generating $I$. Let $I=\bigcap_{i=1}^{k} Q_{i}$ be the irreducible decomposition of $I$ and let $m(I)=\max \{\vert Q_{i}\vert -\mathrm{ht}(Q_{i}) : 1\leq i\leq k\}$, where $\vert Q_{i}\vert$ denotes the total degree of $Q_{i}$. Then it can be seen that when $I$ is primary, $\mathrm{reg}(S/I)=m(I)$. In this paper we improve this result and show that whenever $I$ is MHC, then $\mathrm{reg}(S/I)=m(I)$ provided $\vert \mathrm{Ass}_{S}\, S/I\vert \leq 2$. We also prove that $m(I^{n})\leq n\max \{\vert Q_{i}\vert : 1\leq i\leq k\}-\mathrm{ht}(I)$ for all $n\geq 1$. In addition we show that if $I$ is MHC and $w$ is an indeterminate which is not in the monomials generating $I$, then $\mathrm{reg}(S/(I+w^{d}S)^{n})\leq \mathrm{reg}(S/I)+nd-1$ for all $n\geq 1$ and $d$ large enough. Finally, we implement an algorithm for the computation of $m(I)$.