We report on some recent achievements in parts of cryptology: Wiener's attack on short secret RSA exponents, McCurley's composite Diffie-Hellman key distribution scheme, and Niederreiter's continued fraction tests for pseudorandom sequences. The topics are selected because of their emphasis on mathematical concepts such as continued fractions and discrete logarithms.
Introduction
Cryptology is a very active and expanding field that brings forward new developments in cryptography and cryptanalysis every year. A comprehensive survey and up-to-date report (as of 1988) on the advances in designing ciphers and cryptographic schemes and on recent successes in breaking cryptosystems has been published in [18] as a collection of important papers on cryptology. This special section of the Proceedings of the IEEE, from May 1988, may serve the interested reader as a first introduction to a variety of topics in contemporary cryptology, ranging from conventional cryptosystems and the Data Encryption Standard, through public-key cryptography and recent advances in cryptanalysis, to a survey of information authentication. Brassard [1] gives a brief survey of cryptography, which is kept at a nontechnical level. Mathematical concepts in cryptography are emphasised in the books by van Tilborg [19], Patterson [13] and Lidl and Niederreiter [6]. For those interested in details of the design and analysis of ciphers, we refer to the proceedings volumes of the annual meetings EUROCRYPT and CRYPTO; see for example [3] and [14].
In this paper, we survey the complexity status of some fundamental algorithmic problems concerning finite fields. In particular, we consider the following two questions: given a prime number p and a positive integer n, construct explicitly a field that is of degree n over the prime field of p elements; and given two such fields, construct an explicit field isomorphism between them. For both problems there exist good probabilistic algorithms. The situation is more complicated if deterministic algorithms are required.
Introduction
Every finite field has cardinality p^n for some prime number p and some positive integer n. Conversely, if p is a prime number and n a positive integer, then there exists a field of cardinality p^n, and any two fields of cardinality p^n are isomorphic. These results are due to E. H. Moore (1893) [15]. In this paper, we discuss the complexity aspects of two algorithmic problems that are suggested by this theorem, and of two related problems.
Constructing finite fields. We say that a finite field is explicitly given if, for some basis of the field over its prime field, we know the product of any two basis elements, expressed in the same basis. More precisely, let p be a prime number and n a positive integer.
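As a concrete illustration of an "explicitly given" field in this sense (not part of the paper; the irreducible polynomial is an assumed example), the following sketch represents the field of 2^3 elements as F_2[x]/(x^3 + x + 1) and tabulates the products of the basis elements 1, x, x^2:

```python
# Illustrative sketch: the field of 2^3 elements, explicitly given as
# F_2[x]/(f) with the (assumed) irreducible polynomial f = x^3 + x + 1.
P = 2               # characteristic
F = [1, 1, 0, 1]    # coefficients of f, constant term first (monic)
n = len(F) - 1      # degree of the extension

def poly_mul_mod(a, b):
    """Product of two field elements given as coefficient lists, constant term first."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    # Reduce modulo f: since f is monic, x^n = -(F[0] + F[1]*x + ... + F[n-1]*x^(n-1)).
    for k in range(len(prod) - 1, n - 1, -1):
        c = prod[k]
        if c:
            prod[k] = 0
            for t in range(n):
                prod[k - n + t] = (prod[k - n + t] - c * F[t]) % P
    return (prod + [0] * n)[:n]

# The multiplication table of the basis 1, x, x^2 is exactly the data
# that makes the field 'explicitly given'.
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
table = [[poly_mul_mod(u, v) for v in basis] for u in basis]
```

For instance, the table records x^2 · x^2 = x^4 = x(x + 1) = x + x^2, i.e. the coefficient vector [0, 1, 1].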
O'Connor proposed an approximation to the one-time pad which has similarities to the knapsack cryptosystems, but which he hoped would circumvent the low density attacks on these systems. In this report it will be shown that the proposed method is insecure, being susceptible to attacks based on the short lattice vector algorithm.
Introduction
Public key cryptosystems based on the knapsack trapdoor have been considered insecure since a polynomial time algorithm was found for breaking instances of the code. In particular, the Lenstra, Lenstra and Lovász algorithm for producing short basis vectors of a lattice has proved useful in attacking the Diophantine approximation problems which arise in attempts to break the codes. O'Connor [3] has proposed an interesting variant of the knapsack cryptosystem. Unfortunately, as will be shown, the Diophantine equations associated with the system fall to the short basis vector attack. All instances of the proposed system that have been generated to test it have been broken. It seems that there are many sets of parameters that will generate any given instance of the code, and it is too easy to find such a set.
The purpose of this paper is to give an overview of the main recent advances concerning Gauss's class number one problem for real quadratic fields, to describe the connections with prime-producing polynomials, continued fraction theory and the theory of reduced ideals, and to make the comparison with the development of the solution of Gauss's class number one problem for complex quadratic fields. This includes a description of the search for a real quadratic field analogue of the well-known Rabinowitsch result for complex quadratic fields.
Furthermore, we describe a criterion for class number 2 (in terms of continued fractions and reduced ideals) for general real quadratic fields. We also provide (for a specific class of real quadratic fields called Richaud-Degert types) class number 2 criteria in terms of prime-producing quadratic polynomials. This is the real quadratic field analogue of Hendy's result [9] for complex quadratic fields. Other related results including a solution of a problem of L. Bernstein [2], [3] are delineated as well.
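To make the continued-fraction language concrete, the following sketch (illustrative only, not taken from the paper) computes one full period of the continued fraction expansion of √d for a non-square d, using the standard integer recurrence; quantities of this kind, such as the period and its partial quotients, are what the criteria above are phrased in terms of.

```python
import math

def sqrt_cf(d):
    """One full period [a0; a1, ..., ar] of the continued fraction of sqrt(d),
    for non-square d, via the standard recurrence
    m' = den*a - m,  den' = (d - m'^2)/den,  a' = (a0 + m')//den'.
    The period ends exactly when a partial quotient equals 2*a0."""
    a0 = math.isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, den, a = 0, 1, a0
    partials = [a0]
    while a != 2 * a0:
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        partials.append(a)
    return partials
```

For example, sqrt_cf(14) returns [3, 1, 2, 1, 6], i.e. √14 = [3; 1, 2, 1, 6] with period length 4.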
Reed-Solomon error correction codes are powerful codes which allow efficient correction of multiple burst errors. The (n, k) Reed-Solomon code encodes blocks of n symbols, each block containing k information symbols and n − k check symbols, and can correct any occurrence of at most ½(n − k) incorrectly received symbols ([4]). The Reed-Solomon code is usually implemented via computation in a finite field of characteristic 2. However, this requires special circuits for encoding and decoding which have only recently become economical.
There is an alternative ‘spectral’ interpretation which uses ordinary complex arithmetic. This has the advantage that conventional floating point devices can be used for encoding and decoding. Its compensating disadvantage is that the arithmetic is no longer exact. For the purposes of this example, let us assume that we would like to code k = ½n message symbols and that we wish to correct up to ¼n incorrectly received symbols. First, pad the ½n length message sequence with a ‘prefix’ of ½n zero symbols. We view the resulting sequence as a sequence of spectral magnitudes. Thus it has a characteristic ‘high-pass’ spectrum, that is, all low frequency spectral magnitudes are zero. Next, take the inverse Fourier transform of the high-pass spectrum and transmit the resulting time domain sequence. The characteristic high-pass spectrum, that is, zero symbols in the first ½n spectral bands, is spread over the entire n time domain components.
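The encoding step just described can be sketched in a few lines (illustrative only: a naive O(n²) transform with assumed toy values, and an error-free channel, since correcting up to ¼n bad symbols requires the full Reed-Solomon decoding machinery):

```python
import cmath

def idft(spec):
    """Inverse discrete Fourier transform (naive O(n^2) form)."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def dft(sig):
    """Forward discrete Fourier transform (naive O(n^2) form)."""
    n = len(sig)
    return [sum(sig[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Encode: pad the n/2 message symbols with an n/2 zero 'prefix', view the
# result as a high-pass spectrum, and transmit its time-domain transform.
n = 8
message = [3, 1, 4, 1]                 # k = n/2 message symbols (arbitrary)
spectrum = [0] * (n // 2) + message    # low-frequency bands forced to zero
codeword = idft(spectrum)              # time-domain sequence to transmit

# Decode over an error-free channel: the forward transform recovers the
# spectrum, whose upper half is the message (up to floating point error).
recovered = [round(x.real) for x in dft(codeword)[n // 2:]]
```

Here the floating point round trip recovers the message exactly after rounding, which illustrates the stated disadvantage: unlike finite field arithmetic, the complex arithmetic is only approximate.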
Stream ciphers use the output of a Pseudo-Random (PR) generator to mask the information stream. The security of these cipher systems ultimately depends on the structure of the PR generator. There are some minimum necessary criteria such as long period, flat statistical distribution and high linear complexity that the PR generator of a stream cipher system should satisfy to resist the basic cryptanalytic attacks on such systems. We propose a class of PR generators using the coset elements of a Reed-Muller code. The linear complexity of these generators is analysed and conditions that assure the highest possible linear complexity for them are specified. It is shown that the above mentioned criteria do not guarantee the security of a stream cipher system and the proposed PR generator, although it satisfies all of them, is not secure.
Introduction
Stream ciphers imitate the one-time pad, the only provably perfectly secure system. However, with the replacement of the truly random generator by a pseudo-random (PR) one, the perfect security of the system vanishes. It is easy to see that the assessment of the security of these systems is directly related to the properties of the PR generator. There are some necessary criteria which must be satisfied by the PR generator of a secure stream cipher. It is recognised that these generators should satisfy Golomb's criteria and have high linear complexity [1], [3].
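As a small illustration of the linear complexity criterion (not from the paper; the tap and state values are assumed examples), the following sketch generates a keystream with a four-stage linear feedback shift register and then recovers its linear complexity with the Berlekamp-Massey algorithm. Despite its long period, such a keystream has linear complexity equal to the register length, so a short prefix of it determines the whole sequence:

```python
def lfsr(taps, state, nbits):
    """Binary LFSR keystream: output the last stage, shift in the XOR of the
    tapped stages.  taps and state are 0/1 lists of equal (assumed) length."""
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t, s in zip(taps, state):
            fb ^= t & s
        state = [fb] + state[:-1]
    return out

def berlekamp_massey(bits):
    """Linear complexity of a binary sequence (Berlekamp-Massey over GF(2))."""
    c = [1] + [0] * len(bits)   # current connection polynomial
    b = [1] + [0] * len(bits)   # connection polynomial before the last change
    L, m = 0, -1
    for n in range(len(bits)):
        d = bits[n]                      # next discrepancy
        for i in range(1, L + 1):
            d ^= c[i] & bits[n - i]
        if d:
            t = list(c)
            shift = n - m
            for i in range(len(b) - shift):
                c[i + shift] ^= b[i]     # c(x) += b(x) * x^(n-m)
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L

keystream = lfsr([1, 0, 0, 1], [1, 0, 0, 0], 30)   # period 15, complexity 4
```

The XOR of such a keystream with the plaintext is therefore breakable from a few known plaintext bits, which is the sense in which a bare LFSR, although it can have a long period and flat statistics, fails as a PR generator on its own.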
The properties of ANNs described in the preceding chapters should make them interesting candidates both as models of some brain functions and for technical applications in certain areas of computer development or artificial intelligence. In either case, one of the first questions that comes to mind is the storage capacity of such systems, namely the quantity of information that can be stored in and effectively retrieved from the network. It is of primary interest to know, for example, how the number of possible memories, in single patterns or in sequences, varies with the number of elements, neurons and synapses, of the network.
The storage capacity of a network can be quantified in a number of ways, but it must be expressed per unit network element. Here we mention a few possible measures:
The number of stored bits per neuron.
The number of stored bits per synapse.
The number of stored patterns per neuron.
The number of stored patterns per synapse.
The number of stored bits per coded synaptic bit.
Any one of the items in this list must be supplemented by further informational qualifications. It should be realized that the usefulness of any of these quantifications is strongly dependent on the level of correlation between the stored bits.
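As a toy illustration of such measures (not from the text; the standard Hopfield model with Hebbian synapses is assumed, and the parameter values are arbitrary, with the loading P/N kept well below the known capacity limit), one can store a few random patterns and check that each is a fixed point of the retrieval dynamics:

```python
import random

# Toy Hopfield network with Hebbian synapses (assumed model and parameters).
random.seed(1)
N, P = 80, 3                     # neurons and stored patterns; P/N = 0.0375
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]

# Hebbian synaptic matrix J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling.
J = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / N
      for j in range(N)] for i in range(N)]

def recall(state, sweeps=5):
    """Deterministic asynchronous dynamics: s_i <- sign(sum_j J_ij s_j)."""
    state = list(state)
    for _ in range(sweeps):
        for i in range(N):
            h = sum(J[i][j] * state[j] for j in range(N))
            state[i] = 1 if h >= 0 else -1
    return state

# Each stored pattern should be a fixed point.  In the language of the list
# above: P/N 'patterns per neuron' are stored, and the N*P stored bits are
# spread over the N*(N-1) synapses ('bits per synapse').
stable = sum(recall(p) == p for p in patterns)
```

Raising P while holding N fixed eventually destroys the fixed points, which is the capacity question the measures above are designed to make precise.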
It would have been wonderful if some distinguished thinker had written: ‘Thinking about learning is a real headache’ and we could have used this phrase as a quote. A glimpse at the type of difficulties that present themselves when a new experience should lead to learning is expressed by Minsky [1] as follows:
Which agents could be wise enough to guess what changes should then be made? The high level agents can't know such things; they scarcely know which lower-level processes exist. Nor can lower-level agents know which of their actions helped us to reach our high-level goals; they scarcely know that higher-level goals exist. The agencies that move our legs aren't concerned with whether we are walking toward home or toward work – nor do the agents involved with such destinations know anything of controlling individual muscle units…
[p.75]
Hebb [2] adds that ‘the more we learn about the nature of learning, the farther we seem to get from being able to give firm answers.’
The scope is truly awesome, and very soon it overlaps with the deep controversy between innateness and behaviorism. There are several courses of action which avoid, at this stage, the full dimension of the issue:
One is to consider artificial devices which can serve as associative memories with increasingly complex performances, which are decreed by the designer, and to restrict the question of learning to that of training such machines to their final, known performing organization.
Analog neurons, spike rates, two-state neural models
The two-state representation of neural output, which enjoys such a wide popularity among modelers of neural networks, is often considered oversimplified both by biologists and device designers. Biologists prefer to describe relevant neural activity by firing rates. These are continuous variables describing mean spike activity of neurons, rather than the discrete variables which describe the presence or the absence of an individual spike. Device designers sometimes prefer to think in terms of operational amplifiers, currents, capacitances, resistors, and continuous-time equations. It turns out that over a wide range of parameters the performance of a network as an ANN is largely independent of the representation [1]. As we shall see in the following chapters, e.g., Chapter 5, the discrete representation provides a more transparent framework for structured manipulations of attractors.
Eventually the gap between the different descriptions closes, because in our formulation of the output mechanism, Section 1.4.4, a significant event is said to occur when a time average over spike activity is found to be high, which is nothing but a measure of mean firing rate. On the other hand, in the analog description in terms of electronic components, which is deterministic in structure, spikes are eventually generated stochastically, at a mean instantaneous rate proportional to the continuous variable at hand, e.g. the potential excess over the threshold.