In this paper, we present a heuristic algorithm for solving exact, as well as approximate, shortest vector and closest vector problems on lattices. The algorithm can be seen as a modified sieving algorithm for which the vectors of the intermediate sets lie in overlattices or translated cosets of overlattices. The key idea is hence no longer to work with a single lattice but to move the problems around in a tower of related lattices. We initiate the algorithm by sampling very short vectors in an overlattice of the original lattice that admits a quasi-orthonormal basis and hence an efficient enumeration of vectors of bounded norm. Taking sums of vectors in the sample, we construct short vectors in the next lattice. Finally, we obtain solution vector(s) in the initial lattice as a sum of vectors of an overlattice. The complexity analysis relies on the Gaussian heuristic. This heuristic is backed by experiments in low and high dimensions that closely reflect these estimates when solving hard lattice problems in the average case.
This new approach allows us to solve not only shortest vector problems, but also closest vector problems, in lattices of dimension $n$ in time $2^{0.3774\, n}$ using memory $2^{0.2925\, n}$. Moreover, the algorithm is straightforward to parallelize on most computer architectures.
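For context, the Gaussian heuristic invoked above is the standard one (the statement below is the textbook form, not a quotation from the paper): a region of volume $V$ is expected to contain about $V/\operatorname{vol}(L)$ points of a lattice $L$, which for an $n$-dimensional lattice yields the prediction
$$\lambda_1(L)\;\approx\;\sqrt{\frac{n}{2\pi e}}\;(\operatorname{vol} L)^{1/n}$$
for the length of a shortest nonzero vector. The time and memory exponents quoted above are derived under this assumption.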
In this paper we give a new formula for adding $2$-coverings and $3$-coverings of elliptic curves that avoids the need for any field extensions. We show that the $6$-coverings obtained can be represented by pairs of cubic forms. We then prove a theorem on the existence of such models with integer coefficients and the same discriminant as a minimal model for the Jacobian elliptic curve. This work has applications to finding rational points of large height on elliptic curves.
We present an efficient algorithm to compute the Hasse–Witt matrix of a hyperelliptic curve $C/\mathbb{Q}$ modulo all primes of good reduction up to a given bound $N$, based on the average polynomial-time algorithm recently proposed by the first author. An implementation for hyperelliptic curves of genus 2 and 3 is more than an order of magnitude faster than alternative methods for $N = 2^{26}$.
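To make the object being computed concrete, here is a naive per-prime sketch in Python (it is not the average polynomial-time algorithm of the paper; the function name and the choice of convention are ours). For an odd prime $p$ of good reduction, the Hasse–Witt (Cartier–Manin) matrix of $y^2 = f(x)$ can be read off from the coefficients of $f(x)^{(p-1)/2}$ modulo $p$; the point of the paper's approach is to obtain these matrices for all $p \le N$ far faster than repeating such a computation prime by prime.

# Naive sketch: the g x g Cartier--Manin / Hasse--Witt matrix of y^2 = f(x)
# over F_p as the matrix of coefficients c_{ip-j} of f(x)^((p-1)/2) mod p,
# following one common convention (conventions differ by a transpose).
def hasse_witt_matrix(f_coeffs, p, g):
    """f_coeffs: integer coefficients of f(x), lowest degree first;
    p: odd prime of good reduction; g: genus (deg f = 2g+1 or 2g+2)."""
    def polmul(a, b):                      # polynomial product modulo p
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    c[i + j] = (c[i + j] + ai * bj) % p
        return c
    h, base, e = [1], [c % p for c in f_coeffs], (p - 1) // 2
    while e:                               # square-and-multiply: h = f^((p-1)/2) mod p
        if e & 1:
            h = polmul(h, base)
        base = polmul(base, base)
        e >>= 1
    coeff = lambda k: h[k] if 0 <= k < len(h) else 0
    return [[coeff(i * p - j) for j in range(1, g + 1)] for i in range(1, g + 1)]

# example: the genus-2 curve y^2 = x^5 + x + 1 at p = 11 (good reduction)
print(hasse_witt_matrix([1, 1, 0, 0, 0, 1], 11, 2))   # prints [[10, 5], [5, 5]]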
In this paper we consider ordinary elliptic curves over global function fields of characteristic $2$. We present a method for performing a descent by using powers of the Frobenius and the Verschiebung. An examination of the local images of the descent maps together with a duality theorem yields information about the global Selmer groups. Explicit models for the homogeneous spaces representing the elements of the Selmer groups are given and used to construct independent points on the elliptic curve. As an application we use descent maps to prove an upper bound for the naive height of an $S$-integral point on $A$. To illustrate our methods, a detailed example is presented.
Let $f\in S_2(\Gamma _0(N))$ be a normalized newform such that the abelian variety $A_f$ attached by Shimura to $f$ is the Jacobian of a genus-two curve. We give an efficient algorithm for computing Galois representations associated to such newforms.
This paper introduces ‘hyper-and-elliptic-curve cryptography’, in which a single high-security group supports fast genus-2-hyperelliptic-curve formulas for variable-base-point single-scalar multiplication (for example, Diffie–Hellman shared-secret computation) and at the same time supports fast elliptic-curve formulas for fixed-base-point scalar multiplication (for example, key generation) and multi-scalar multiplication (for example, signature verification).
We describe how to compute the ideal class group and the unit group of an order in a number field in subexponential time. Our method relies on the generalized Riemann hypothesis and other usual heuristics concerning the smoothness of ideals. It applies to arbitrary classes of number fields, including those for which the degree goes to infinity.
We describe algorithms for computing central values of twists of $L$-functions associated to Hilbert modular forms, carry out such computations for a number of examples, and compare the results of these computations to some heuristics and predictions from random matrix theory.
Let $\mathcal{O}$ be a maximal order in a definite quaternion algebra over $\mathbb{Q}$ of prime discriminant $p$, and $\ell $ a small prime. We describe a probabilistic algorithm which, for a given left $\mathcal{O}$-ideal, computes a representative in its left ideal class of $\ell $-power norm. In practice the algorithm is efficient and, subject to heuristics on expected distributions of primes, runs in expected polynomial time. This solves the underlying problem for a quaternion analog of the Charles–Goren–Lauter hash function, and has security implications for the original CGL construction in terms of supersingular elliptic curves.
We construct explicit $K3$ surfaces over $\mathbb{Q}$ having real multiplication. Our examples are of geometric Picard rank 16. The standard method for the computation of the Picard rank provably fails for the surfaces constructed.
We study new families of curves that are suitable for efficiently parametrizing their moduli spaces. We explicitly construct such families for smooth plane quartics in order to determine unique representatives for the isomorphism classes of smooth plane quartics over finite fields. In this way, we can visualize the distributions of their traces of Frobenius. This leads to new observations on fluctuations with respect to the limiting symmetry imposed by the theory of Katz and Sarnak.
Let $\mathfrak{R}$ be a complete discrete valuation ring, $S=\mathfrak{R}[[u]]$ and $d$ a positive integer. The aim of this paper is to explain how to efficiently compute usual operations such as sum and intersection of sub-$S$-modules of $S^d$. As $S$ is not principal, it is not possible to have a uniform bound on the number of generators of the modules resulting from these operations. We explain how to mitigate this problem, following an idea of Iwasawa, by computing an approximation of the result of these operations up to a quasi-isomorphism. In the course of the analysis of the $p$-adic and $u$-adic precisions of the computations, we have to introduce more general coefficient rings that may be interesting in their own right. Being able to perform linear algebra operations modulo quasi-isomorphism with $S$-modules has applications in Iwasawa theory and $p$-adic Hodge theory. It is used in particular in Caruso and Lubicz (Preprint, 2013, arXiv:1309.4194) to compute the semi-simplified reduction modulo $p$ of a semi-stable representation.
We develop algorithms to turn quotients of rings of integers into effective Euclidean rings by giving polynomial algorithms for all fundamental ring operations. In addition, we study normal forms for modules over such rings and their behavior under certain quotients. We illustrate the power of our ideas in a new modular normal form algorithm for modules over rings of integers, vastly outperforming classical algorithms.
In this paper we study the discrete logarithm problem in medium- and high-characteristic finite fields. We propose a variant of the number field sieve (NFS) based on numerous number fields. Our improved algorithm computes discrete logarithms in $\mathbb{F}_{p^n}$ for the whole range of applicability of the NFS and lowers the asymptotic complexity from $L_{p^n}({1/3},({128/9})^{1/3})$ to $L_{p^n}({1/3},(2^{13}/3^6)^{1/3})$ in the medium-characteristic case, and from $L_{p^n}({1/3},({64/9})^{1/3})$ to $L_{p^n}({1/3},((92 + 26 \sqrt{13})/27)^{1/3})$ in the high-characteristic case.
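For readers less familiar with the notation, recall the standard definition (not specific to this paper)
$$L_{p^n}(\alpha, c) \;=\; \exp\bigl((c + o(1))\,(\log p^n)^{\alpha}\,(\log\log p^n)^{1-\alpha}\bigr),$$
so the improvements above keep $\alpha = 1/3$ fixed and reduce only the constant $c$.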
In order to use Python we need to install at least two distinct types of software. The first type is relatively straightforward, consisting of core Python itself, and associated packages, and is discussed in Section A.1 below. The second type addresses a much more complex issue, the interaction of human and machine, i.e., how to instruct the machine to do what we want it to do. An optimal resolution depends in part on what other computer software you wish to use. Some pointers are offered in Section A.2.
Most users of Matlab or Mathematica never see these issues. Invoking either application sets up an editor window or notebook, and an integrated editor accepts instructions from the keyboard. A key press then invokes the interpreter and delivers the desired output, with an automatic return to the editor window. These applications obviously get top marks for immediate convenience and simplicity, but in the long term they may not win out on efficiency and versatility.
Installing Python packages
As has been outlined in Section 1.2, we shall need not only core Python, but also the add-on packages IPython (see Chapter 2), numpy, scipy (packages discussed in Chapter 4), matplotlib (see Chapter 5) and potentially mayavi (discussed in Chapter 6). Although data analysis is merely mentioned in Section 4.5, that section recommends the pandas package, and for many this will be a must-have.
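Once everything is installed, a quick sanity check is to verify that each package can be imported. The following snippet is a minimal sketch (it assumes the standard import names shown; it is not part of any particular distribution):

import importlib

# Report which of the recommended packages are importable and
# which version, if any, each one declares.
for name in ("IPython", "numpy", "scipy", "matplotlib", "pandas", "mayavi"):
    try:
        module = importlib.import_module(name)
        print(name, getattr(module, "__version__", "(version unknown)"))
    except ImportError:
        print(name, "is NOT installed")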
The title of this book is “Python for Scientists”, but what does that mean? The dictionary defines “Python” as either (a) a non-venomous snake from Asia or Saharan Africa or (b) a computer scripting language, and it is the second option which is intended here. (What exactly this second definition means will be explained later.) By “scientist”, I mean anyone who uses quantitative models either to obtain conclusions by processing pre-collected experimental data or to model potentially observable results from a more abstract theory, and who asks “what if?”. What if I analyze the data in a different way? What if I change the model? Thus the term also includes economists, engineers and mathematicians, among others, as well as the usual concept of scientists. Given the volume of potential data or the complexity (non-linearity) of many theoretical models, the use of computers to answer these questions is fast becoming mandatory.
Advances in computer hardware mean that immense amounts of data or ever more complex models can be processed at increasingly rapid speeds. These advances also mean reduced costs, so that today virtually every scientist has access to a “personal computer”, either a desktop workstation or a laptop, and the distinction between these two is narrowing quickly. It might seem to be a given that suitable software will also be available so that the “what if” questions can be answered readily. However, this turns out not always to be the case.
In this final chapter, we present an extended example or “case study” of a topic which is relevant to almost all of the theoretical sciences, called multigrid. For many, multigrid is a closed and forbidding book, and so we first look at the type of problems it can be used to solve, then outline how it works, and finally describe broadly how it can be implemented very easily in Python. The rest of the chapter fleshes out the details.
In very many problems, we associate data with points on a spatial grid. For simplicity, we assume that the grid is uniform. In a realistic case, we might want a resolution of say 100 points per dimension, and for a three-dimensional grid we would have $10^6$ grid points. Even if we store only one piece of data per grid point, this is a lot of data which we can pack into a vector (one-dimensional array) u of dimension $N = O(10^6)$. These data are not free but will be restricted either by algebraic or differential equations. Using finite difference (or finite element) approximations, we can ensure that we are dealing with algebraic equations. Even if the underlying equations are non-linear, we have to linearize them (using, e.g., a Newton–Raphson procedure, see Section 9.3), for there is no hope of solving such a large set of non-linear equations.
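As a toy illustration of this setup (a minimal sketch only, not the multigrid code developed later in this chapter), the one-dimensional analogue below discretizes $-u'' = f$ on a uniform grid with zero boundary values and applies damped Jacobi sweeps, the kind of simple relaxation step on which multigrid methods are built:

import numpy as np

# Minimal sketch: finite differences for -u'' = f on (0, 1) with
# u(0) = u(1) = 0, smoothed by damped (weighted) Jacobi sweeps.
n = 128                                    # number of interior grid points
h = 1.0 / (n + 1)                          # uniform grid spacing
x = np.linspace(h, 1.0 - h, n)             # interior nodes
f = np.sin(np.pi * x)                      # a sample right-hand side
u = np.zeros(n)                            # initial guess

omega = 2.0 / 3.0                          # standard damping factor in 1D
for sweep in range(50):
    u_new = np.empty_like(u)
    # Jacobi update: u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    u_new[0] = 0.5 * (u[1] + h * h * f[0])     # neighbours outside the grid are 0
    u_new[-1] = 0.5 * (u[-2] + h * h * f[-1])
    u = (1.0 - omega) * u + omega * u_new

# the exact solution of -u'' = sin(pi x) with these boundary values is
# sin(pi x) / pi^2, so we can monitor the remaining error
print("max error after 50 sweeps:", np.max(np.abs(u - f / np.pi**2)))

Plain relaxation of this kind stalls on the smooth components of the error, which is precisely the problem that the coarse grids of a multigrid cycle are designed to address.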
This sounds like software produced by Apple, but it is in fact a Python interpreter on steroids. It has been designed and written by scientists with the aim of offering very fast exploration and construction of code with minimal typing effort, and offering appropriate, even maximal, on-screen help when required. Documentation and much more is available on the website. This chapter is a brief introduction to the essentials of using IPython. A more extended discursive treatment can be found in, e.g., Rossant (2013).
IPython comes with three different user interfaces: terminal, qtconsole and notebook. If you have installed the recommended EPD distribution, then all three should be available to you. For the last two, additional software might be needed if you have used a different distribution. You can check which are available in your installation by issuing, at the command line, first ipython followed by the “return” key (RET). You can escape from IPython by typing exit followed by RET in the interpreter. Next try the command ipython qtconsole in the same way. Finally, try the command ipython notebook, which should open in a new browser window. To escape from this third interface, you need CTL-C at the command line, plus closing the browser window. What is the difference between them?
Built into IPython is the GNU readline utility. This means that on the interpreter's current line, the left- and right-arrow keys move the cursor appropriately, and deletion and insertion are straightforward.