This chapter is about algorithms to solve the discrete logarithm problem (DLP) and some variants of it. We focus mainly on deterministic methods that work in any group; later chapters will present the Pollard rho and kangaroo methods, and index calculus algorithms. In this chapter, we also present the concept of generic algorithms and prove lower bounds on the running time of a generic algorithm for the DLP. The starting point is the following definition (already given as Definition 2.1.1).
Definition 13.0.1 Let G be a group written in multiplicative notation. The discrete logarithm problem (DLP) is: given g, h ∈ G find a, if it exists, such that h = gᵃ. We sometimes denote a by log_g(h).
As discussed after Definition 2.1.1, we intentionally do not specify a distribution on g, h or a above, although it is common to assume that g is sampled uniformly at random in G and that a is sampled uniformly from {1, …, #G}.
Typically, G will be an algebraic group over a finite field 𝔽_q and the order of g will be known. If one is considering cryptography in an algebraic group quotient then we assume that the DLP has been lifted to the covering group G. A solution to the DLP exists if and only if h ∈ 〈g〉 (i.e., h lies in the subgroup generated by g). We have discussed methods to test this in Section 11.6.
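For concreteness, the following is a minimal sketch of the baby-step giant-step idea, a deterministic method of the kind studied in this chapter. The group (ℤ/pℤ)* and all parameter values below are illustrative stand-ins for a generic group of known order n, since the method uses nothing beyond group operations and equality testing.

# Baby-step giant-step: solve h = g^a in a group of known order n.
# Sketch only; (Z/pZ)* stands in for a generic group G.
from math import isqrt

def bsgs(g, h, n, p):
    """Return a with pow(g, a, p) == h, or None if h is not in <g>."""
    m = isqrt(n) + 1
    # Baby steps: tabulate g^j for j = 0, ..., m-1.
    table = {}
    gj = 1
    for j in range(m):
        table.setdefault(gj, j)
        gj = (gj * g) % p
    # Giant steps: search for h * (g^-m)^i in the table; then a = i*m + j.
    gm_inv = pow(g, -m, p)              # modular inverse power (Python 3.8+)
    y = h
    for i in range(m):
        if y in table:
            return i * m + table[y]
        y = (y * gm_inv) % p
    return None

# Example: g = 2 has order n = 100 modulo p = 101; recover a = 57.
p, g, n = 101, 2, 100
assert bsgs(g, pow(g, 57, p), n, p) == 57

This stores roughly √n group elements and performs roughly 2√n group operations, compared with up to n operations for exhaustive search.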
The goal of lattice basis reduction is to transform a given lattice basis into a “nice” lattice basis consisting of vectors that are short and close to orthogonal. To achieve this, one needs both a suitable mathematical definition of “nice basis” and an efficient algorithm to compute a basis satisfying this definition.
Reduction of lattice bases of rank 2 in ℝ² was given by Lagrange and Gauss. The algorithm is closely related to Euclid's algorithm and we briefly present it in Section 17.1. The main goal of this section is to present the lattice basis reduction algorithm of Lenstra, Lenstra and Lovász, known as the LLL or L³ algorithm. This is a very important algorithm for practical applications. Some basic references for the LLL algorithm are Section 14.3 of Smart [513], Section 2.6 of Cohen [127] and Chapter 17 of Trappe and Washington [547]. More detailed treatments are given in von zur Gathen and Gerhard [220], Grötschel, Lovász and Schrijver [245], Section 1.2 of Lovász [356], and Nguyen and Vallée [416]. I also highly recommend the original paper [335].
The LLL algorithm generalises the Lagrange–Gauss algorithm and exploits the Gram–Schmidt orthogonalisation. Note that the Gram–Schmidt process is not useful, in general, for lattices since the coefficients μ_{i,j} do not usually lie in ℤ and so the resulting vectors are not usually elements of the lattice. The LLL algorithm uses the Gram–Schmidt vectors to determine the quality of the lattice basis, but ensures that the linear combinations used to update the lattice vectors are all over ℤ.
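As an illustration of the rank-2 case, here is a minimal sketch of the Lagrange–Gauss reduction mentioned above, assuming an integer basis of a lattice in ℝ²; the function and variable names are illustrative.

# Lagrange-Gauss reduction of a rank-2 integer lattice basis.
# Like Euclid's algorithm, it repeatedly subtracts the best integer
# multiple of the shorter vector from the longer one.

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_gauss(b1, b2):
    """Return a reduced basis with |b1| <= |b2| and b1 a shortest vector."""
    if dot(b1, b1) > dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        # Round the Gram-Schmidt coefficient <b1,b2>/<b1,b1> to the nearest
        # integer, so the update stays inside the lattice (float division is
        # fine for small examples like this one).
        m = round(dot(b1, b2) / dot(b1, b1))
        if m == 0:
            return b1, b2
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])
        if dot(b2, b2) >= dot(b1, b1):
            return b1, b2
        b1, b2 = b2, b1

# Example: a skewed basis of Z^2 reduces to (-1, 0), (0, 1).
assert lagrange_gauss((1, 1), (3, 4)) == ((-1, 0), (0, 1))

The rounding step is exactly the point made above: the Gram–Schmidt coefficient is rounded to ℤ so that the new vector remains a lattice element.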
The aim of this chapter is to briefly present some cryptosystems whose security is based on computational assumptions related to the integer factorisation problem. In particular, we study the RSA and Rabin cryptosystems. We also present some security arguments and techniques for efficient implementation.
Throughout the chapter we take 3072 bits as the benchmark length for an RSA modulus. We make the assumption that the cost of factoring a 3072-bit RSA modulus is 2¹²⁸ bit operations. These figures should be used as a very rough guideline only.
The textbook RSA cryptosystem
Box 24.1 recalls the “textbook” RSA cryptosystem, which was already presented in Section 1.2. We remind the reader that the main application of RSA encryption is to transport symmetric keys, rather than to encrypt actual documents. For digital signatures we always sign a hash of the message, and it is necessary that the hash function used in signatures is collision resistant.
In Section 1.3 we noted that the security parameter κ is not necessarily the same as the bit-length of the RSA modulus. In this chapter it will be convenient to ignore this, and use the symbol κ to denote the bit-length of an RSA modulus N. We always assume that κ is even.
As we have seen in Section 1.2, certain security properties can only be satisfied if the encryption process is randomised.
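To fix notation, here is a minimal sketch of the textbook scheme with toy parameters; it also makes the determinism just mentioned visible. The values are purely illustrative and far below the 3072-bit benchmark.

# Textbook RSA with toy parameters -- insecure, for illustration only.
# Deployed systems use a ~3072-bit modulus and randomised padding.

p, q = 61, 53                 # toy primes; never remotely this small
N = p * q                     # public modulus, N = 3233
phi = (p - 1) * (q - 1)       # phi(N) = 3120
e = 17                        # public exponent with gcd(e, phi) = 1
d = pow(e, -1, phi)           # private exponent, e*d = 1 mod phi(N)

m = 65                        # "message", an integer with 0 <= m < N
c = pow(m, e, N)              # encrypt: c = m^e mod N
assert pow(c, d, N) == m      # decrypt: m = c^d mod N

# Textbook encryption is deterministic: the same m always gives the
# same c, so the randomised security notions above cannot be met.
assert pow(m, e, N) == c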
Historically, encryption has been considered the most important part of cryptography. So it is not surprising that there is a vast literature about public key encryption. It is important to note that, in practice, public key encryption is not usually used to encrypt documents. Instead, one uses public key encryption to securely send keys, and the data is encrypted using symmetric encryption.
It is beyond the scope of this book to discuss all known results on public key encryption, or even to sketch all known approaches to designing public key encryption schemes. The goal of this chapter is very modest. We simply aim to give some definitions and to provide two efficient encryption schemes (one secure in the random oracle model and one secure in the standard model). The encryption schemes in this chapter are all based on Elgamal encryption, the “textbook” version of which has already been discussed in Sections 20.3 and 20.4.
Finally, we emphasise that this chapter only discusses confidentiality and not simultaneous confidentiality and authentication. The reader is warned that naively combining signatures and encryption does not necessarily provide the expected security (see, for example, the discussion in Section 1.2.3 of Joux [283]).
CCA secure Elgamal encryption
Recall that security notions for public key encryption were given in Section 1.3.1. As we have seen, the textbook Elgamal encryption scheme does not have OWE-CCA security, since one can easily construct a related ciphertext whose decryption yields the original message.
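The following sketch makes that related-ciphertext construction concrete for the textbook scheme: scaling the second ciphertext component by r scales the decrypted message by r, so one query to a decryption oracle on the modified ciphertext reveals the original message. The toy group and all names are illustrative.

# Textbook Elgamal over a toy group, and the related-ciphertext trick.
# Illustrative parameters only; "decrypt" stands in for a CCA oracle.
import secrets

p = 2579                            # toy prime
g = 2                               # generator of (Z/pZ)*
x = secrets.randbelow(p - 2) + 1    # private key
h = pow(g, x, p)                    # public key h = g^x

def encrypt(m):
    k = secrets.randbelow(p - 2) + 1
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):                # the decryption oracle
    return (c2 * pow(c1, -x, p)) % p

m = 1234
c1, c2 = encrypt(m)
r = 2
# (c1, r*c2) is a valid ciphertext, different from (c1, c2), that
# decrypts to r*m; dividing by r recovers the original message.
m_related = decrypt(c1, (r * c2) % p)
assert (m_related * pow(r, -1, p)) % p == m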
In Section 4.1 a number of basic computational tasks for an algebraic group G were listed. Some of these topics have been discussed already, especially providing efficient group operations and compact representations for group elements. But some other topics (such as efficient exponentiation, generating random elements in G and hashing from or into G) require further attention. The goal of this chapter is to briefly give some details about these tasks for the algebraic groups of most interest in the book.
The main goal of the chapter is to discuss exponentiation and multi-exponentiation. These operations are crucial for efficient discrete logarithm cryptography and there are a number of techniques available for specific groups that give performance improvements.
It is beyond the scope of this book to present a recipe for the best possible exponentiation algorithm in a specific application. Instead, our focus is on explaining the mathematical ideas that are used. For an “implementors guide” in the case of elliptic curves we refer to Bernstein and Lange [51].
Let G be a group (written in multiplicative notation). Given g ∈ G and a ∈ ℕ we wish to compute ga. We assume in this chapter that a is a randomly chosen integer of size approximately the same as the order of g, and so a varies between executions of the exponentiation algorithm. If g does not change between executions of the algorithm then we call it a fixed base and otherwise it is a variable base.
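As a baseline for the techniques in this chapter, here is a minimal sketch of left-to-right binary exponentiation ("square-and-multiply"); the modular group is an illustrative stand-in, since the method uses only group multiplications.

# Left-to-right binary exponentiation: the baseline that windowing and
# multi-exponentiation methods improve upon. Any group works the same;
# integers mod p are used here purely for illustration.

def square_and_multiply(g, a, p):
    """Compute g^a mod p by scanning the bits of a from the top down."""
    result = 1
    for bit in bin(a)[2:]:              # most significant bit first
        result = (result * result) % p  # square for every bit
        if bit == '1':
            result = (result * g) % p   # multiply only when the bit is 1
    return result

assert square_and_multiply(3, 218, 1000) == pow(3, 218, 1000)

For an ℓ-bit exponent this uses ℓ squarings and, on average, about ℓ/2 multiplications; the techniques discussed in this chapter aim to reduce the multiplication count.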
Isogenies are a fundamental object of study in the theory of elliptic curves. The definition and basic properties were given in Sections 9.6 and 9.7. In particular, they are group homomorphisms.
Isogenies are used in algorithms for point counting on elliptic curves and for computing class polynomials for the complex multiplication (CM) method. They have applications to cryptanalysis of elliptic curve cryptosystems. They also have constructive applications: prevention of certain side-channel attacks; computing distortion maps for pairing-based cryptography; designing cryptographic hash functions; relating the discrete logarithm problem on elliptic curves with the same number of points. We do not have space to discuss all these applications.
The purpose of this chapter is to present algorithms to compute isogenies from an elliptic curve. The most important result is Vélu's formulae, which compute an isogeny given an elliptic curve E and a kernel subgroup G. We also sketch the various ways to find an isogeny given an elliptic curve E and the j-invariant of an elliptic curve ℓ-isogenous to E. Once these algorithms are in place we briefly sketch Kohel's results, the isogeny graph and some applications of isogenies. Due to lack of space we are unable to give proofs of most results.
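For a taste of what Vélu's formulae look like, the following sketch handles only their simplest case: a 2-isogeny from a short Weierstrass curve y² = x³ + ax + b over a prime field, with kernel generated by a 2-torsion point (x0, 0). For larger kernels the formulae sum analogous terms over all the kernel points; everything here, including the small worked example, is illustrative.

# Velu's formulae in their simplest case: a 2-isogeny from
# E: y^2 = x^3 + a*x + b over F_p, kernel generated by (x0, 0).
# Sketch only; the general formulae sum similar terms over the kernel.

def velu_2_isogeny(a, b, x0, p):
    """Return codomain coefficients (a2, b2) and the map phi on points."""
    assert (x0**3 + a * x0 + b) % p == 0      # (x0, 0) lies on E
    t = (3 * x0 * x0 + a) % p
    w = (x0 * t) % p
    a2 = (a - 5 * t) % p                      # codomain: y^2 = x^3 + a2*x + b2
    b2 = (b - 7 * w) % p

    def phi(x, y):                            # undefined on the kernel
        inv = pow(x - x0, -1, p)
        return ((x + t * inv) % p,
                (y * (1 - t * inv * inv)) % p)

    return a2, b2, phi

# Example over F_11: E: y^2 = x^3 + 4x has the 2-torsion point (0, 0).
p, a, b, x0 = 11, 4, 0, 0
a2, b2, phi = velu_2_isogeny(a, b, x0, p)
X, Y = phi(1, 4)                              # (1, 4) is on E: 16 = 5 mod 11
assert (Y * Y - (X**3 + a2 * X + b2)) % p == 0   # image lies on the codomain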
Algorithms for computing isogenies on Jacobians of curves of genus 2 or more are much more complicated than in the elliptic case. Hence, we do not discuss them in this book.
This paper proposes a novel application of topic models to entity relation detection (ERD). In order to make use of the latent semantics of text, we formulate the task of relation detection as a topic modeling problem. The motivation is to find underlying topics that are indicative of relations between named entities (NEs). Our approach considers pairs of NEs and the features associated with them as mini-documents, and aims to use the underlying topic distributions as indicators of the types of relations that may exist between the NE pair. Our system, ERD-MedLDA, adapts Maximum Entropy Discriminant Latent Dirichlet Allocation (MedLDA) with mixed membership for relation detection. By using supervision, ERD-MedLDA is able to learn topic distributions indicative of relation types. Further, ERD-MedLDA is a topic model that combines the benefits of both maximum likelihood estimation (MLE) and maximum margin estimation (MME), and its mixed-membership formulation enables the system to incorporate heterogeneous features. We incorporate different features into the system and perform experiments on the ACE 2005 corpus. Our approach achieves better overall precision, recall, and F-measure than baseline SVM-based and LDA-based models, and it shows better and more consistent improvements than the baselines as complex, informative features are added.
Determining whether two terms have an ancestor relation (e.g. Toyota Camry and car) or a sibling relation (e.g. Toyota and Honda) is an essential component of textual inference in Natural Language Processing applications such as Question Answering, Summarization, and Textual Entailment. Significant work has been done on developing knowledge sources that could support these tasks, but such resources usually suffer from low coverage and noise, and they are inflexible when dealing with ambiguous and general terms that may not appear in any stationary resource, making their use as general-purpose background knowledge difficult. In this paper, rather than building a hierarchical structure of concepts and relations, we describe an algorithmic approach that, given two terms, determines the taxonomic relation between them using a machine-learning classifier that makes use of existing resources. Moreover, we develop a global constraint-based inference process that leverages an existing knowledge base to enforce relational constraints among terms, thereby improving the classifier's predictions. Our experimental evaluation shows that our approach significantly outperforms other systems built upon existing well-known knowledge sources.