This paper introduces ‘hyper-and-elliptic-curve cryptography’, in which a single high-security group supports fast genus-2-hyperelliptic-curve formulas for variable-base-point single-scalar multiplication (for example, Diffie–Hellman shared-secret computation) and at the same time supports fast elliptic-curve formulas for fixed-base-point scalar multiplication (for example, key generation) and multi-scalar multiplication (for example, signature verification).
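As a rough illustration of the operations named above (not the genus-2 or elliptic-curve formulas of the paper), the following sketch shows generic left-to-right double-and-add single-scalar multiplication for any group supplied abstractly; the `add`/`double`/`identity` interface is a placeholder assumption.

```python
# Minimal sketch: generic double-and-add scalar multiplication.
# The group is supplied abstractly; `add`, `double` and `identity`
# are placeholder assumptions, not the formulas of the paper.
def scalar_mult(k, P, add, double, identity):
    """Return k*P for a nonnegative integer scalar k and group element P."""
    R = identity
    for bit in bin(k)[2:]:          # most significant bit first
        R = double(R)               # always double
        if bit == '1':
            R = add(R, P)           # conditionally add the base point
    return R

# Toy example with the additive group of integers mod 101:
print(scalar_mult(23, 7,
                  add=lambda a, b: (a + b) % 101,
                  double=lambda a: (2 * a) % 101,
                  identity=0))      # 23*7 mod 101 = 60
```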
The cubic version of the Lucas cryptosystem was set up by Said and Loxton [‘A cubic analogue of the RSA cryptosystem’, Bull. Aust. Math. Soc. 68 (2003), 21–38], based on the cubic recurrence relation of the Lucas function. To implement this type of cryptosystem in a resource-limited environment, it is necessary to accelerate the encryption and decryption procedures. This paper therefore concentrates on improving the computation time of encryption and decryption in cubic Lucas cryptosystems. The new algorithm is designed using new properties of the cubic Lucas function together with other mathematical techniques. To illustrate the efficiency of our algorithm, an analysis was carried out for parameters of different sizes, and the performance of the proposed and previously existing algorithms was evaluated with experimental data and mathematical analysis.
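For orientation only: the cubic Lucas function $V_n=\alpha^n+\beta^n+\gamma^n$ built from the roots of $x^3-Px^2+Qx-1$ satisfies the linear recurrence $V_{n+3}=PV_{n+2}-QV_{n+1}+V_n$ with $V_0=3$, $V_1=P$, $V_2=P^2-2Q$. The sketch below computes $V_n \bmod N$ by powering the companion matrix, a standard $O(\log n)$ technique and not the accelerated algorithm proposed in the paper.

```python
# Sketch: cubic Lucas value V_n mod N via companion-matrix powering, assuming the
# recurrence V_{n+3} = P*V_{n+2} - Q*V_{n+1} + V_n with V_0=3, V_1=P, V_2=P^2-2Q.
def mat_mul(A, B, N):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % N for j in range(3)]
            for i in range(3)]

def mat_pow(M, e, N):
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]        # identity matrix
    while e:
        if e & 1:
            R = mat_mul(R, M, N)
        M = mat_mul(M, M, N)
        e >>= 1
    return R

def cubic_lucas(n, P, Q, N):
    M = [[P, -Q, 1], [1, 0, 0], [0, 1, 0]]       # companion matrix of the recurrence
    Mn = mat_pow(M, n, N)
    V2, V1, V0 = (P * P - 2 * Q) % N, P % N, 3 % N
    # (V_{n+2}, V_{n+1}, V_n)^T = M^n (V_2, V_1, V_0)^T; take the last row
    return (Mn[2][0] * V2 + Mn[2][1] * V1 + Mn[2][2] * V0) % N
```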
We show that if a Barker sequence of length $n>13$ exists, then either $n = 3\,979\,201\,339\,721\,749\,133\,016\,171\,583\,224\,100$, or $n > 4\cdot 10^{33}$. This improves the lower bound on the length of a long Barker sequence by a factor of nearly $2000$. We also obtain eighteen additional integers $n<10^{50}$ that cannot be ruled out as the length of a Barker sequence, and find more than $237\,000$ additional candidates $n<10^{100}$. These results are obtained by completing extensive searches for Wieferich prime pairs and using them, together with a number of arithmetic restrictions on $n$, to construct qualifying integers below a given bound. We also report on some updated computations regarding open cases of the circulant Hadamard matrix problem.
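For context, one common convention (stated here as an assumption) calls a pair of primes $(q,p)$ a Wieferich prime pair when $q^{p-1}\equiv 1 \pmod{p^2}$. The one-line check below illustrates that congruence; it is not the extensive search reported in the paper.

```python
# Sketch: test the Wieferich-pair congruence q^(p-1) ≡ 1 (mod p^2) for primes q, p.
# The exact convention (direction, whether both directions are required) is an assumption.
def is_wieferich_pair(q, p):
    return pow(q, p - 1, p * p) == 1

# The classical Wieferich primes 1093 and 3511 satisfy 2^(p-1) ≡ 1 (mod p^2):
print(is_wieferich_pair(2, 1093), is_wieferich_pair(2, 3511))  # True True
```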
In order to estimate the specific intrinsic volumes of a planar Boolean model from a binary image, we consider local digital algorithms based on weighted sums of 2×2 configuration counts. For Boolean models with balls as grains, explicit formulas for the bias of such algorithms are derived, resulting in a set of linear equations that the weights must satisfy in order to minimize the bias in high resolution. These results generalize to larger classes of random sets, as well as to the design-based situation, where a fixed set is observed on a stationary isotropic lattice. Finally, the formulas for the bias obtained for Boolean models are applied to existing algorithms in order to compare their accuracy.
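To fix ideas, a local digital algorithm of this kind tallies, for each of the 16 possible 2×2 pixel configurations, how often it occurs in the binary image and returns a weighted sum of these counts; the weights are what the paper's bias analysis constrains. The NumPy sketch below (with an arbitrary placeholder weight vector) shows the counting step only.

```python
import numpy as np

# Sketch: weighted sum of 2x2 configuration counts in a binary image.
# The weight vector is an arbitrary placeholder, not the optimized weights of the paper.
def configuration_estimator(image, weights):
    """image: 2D 0/1 array; weights: length-16 array indexed by the 2x2 pattern."""
    img = np.asarray(image, dtype=int)
    # Encode each 2x2 window as a 4-bit index: top-left, top-right, bottom-left, bottom-right.
    idx = (img[:-1, :-1] * 8 + img[:-1, 1:] * 4 +
           img[1:, :-1] * 2 + img[1:, 1:])
    counts = np.bincount(idx.ravel(), minlength=16)
    return counts, float(np.dot(weights, counts))

rng = np.random.default_rng(0)
image = (rng.random((64, 64)) < 0.3).astype(int)     # toy binary image
counts, estimate = configuration_estimator(image, np.ones(16) / 16)
print(counts.sum())                                  # 63*63 = 3969 windows in total
```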
From power series expansions of functions on curves over finite fields, one can obtain sequences with perfect or almost perfect linear complexity profile. It has been suggested by various authors to use such sequences as key streams for stream ciphers. In this work, we show how long parts of such sequences can be computed efficiently from short ones. Such sequences should therefore be considered to be cryptographically weak. Our attack leads in a natural way to a new measure of the complexity of sequences which we call expansion complexity.
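As a reminder of the quantity under discussion (not of the attack or of the new expansion complexity measure), the linear complexity profile of a binary sequence records, for each prefix length, the length of the shortest LFSR generating that prefix; it can be computed with the standard Berlekamp–Massey algorithm, as in this sketch.

```python
# Sketch: linear complexity profile of a binary sequence via Berlekamp-Massey over GF(2).
def linear_complexity_profile(bits):
    n = len(bits)
    if n == 0:
        return []
    c, b = [0] * n, [0] * n          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    profile = []
    for i in range(n):
        d = bits[i]                  # discrepancy of the current prefix
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
        profile.append(L)
    return profile

print(linear_complexity_profile([1, 0, 1, 0, 1, 1, 0, 0]))
```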
Brizolis asked for which primes p greater than 3 there exists a pair (g,h) such that h is a fixed point of the discrete exponential map with base g, or equivalently h is a fixed point of the discrete logarithm with base g. Various authors have contributed to the understanding of this problem. In this paper, we use p-adic methods, primarily Hensel’s lemma and p-adic interpolation, to count fixed points, two-cycles, collisions, and solutions to related equations modulo powers of a prime p.
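A fixed point in this sense is a pair (g, h) with 1 ≤ g, h ≤ p − 1 and g^h ≡ h (mod p). For small primes the count can be checked by brute force, as in this sketch; the paper's p-adic methods treat powers of p, which this naive loop does not.

```python
# Sketch: brute-force count of fixed points of the discrete exponential map mod p,
# i.e. pairs (g, h) with 1 <= g, h <= p-1 and g^h ≡ h (mod p).
def count_fixed_points(p):
    return sum(1 for g in range(1, p) for h in range(1, p)
               if pow(g, h, p) == h)

for p in [5, 7, 11, 13]:
    print(p, count_fixed_points(p))
```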
Some bounds in terms of Gâteaux lateral derivatives for the weighted f-Gini mean difference generated by convex and symmetric functions in linear spaces are established. Applications for norms and semi-inner products are also provided.
In this paper, we discuss the $H^1_L$-boundedness of commutators of Riesz transforms associated with the Schrödinger operator $L=-\Delta+V$, where $H^1_L(\mathbb{R}^n)$ is the Hardy space associated with $L$. We assume that $V(x)$ is a nonzero, nonnegative potential which belongs to $B_q$ for some $q>n/2$. Let $T_1=V(x)(-\Delta+V)^{-1}$, $T_2=V^{1/2}(-\Delta+V)^{-1/2}$ and $T_3=\nabla(-\Delta+V)^{-1/2}$. We prove that, for $b\in\mathrm{BMO}(\mathbb{R}^n)$, the commutator $[b,T_3]$ is not bounded from $H^1_L(\mathbb{R}^n)$ to $L^1(\mathbb{R}^n)$, unlike $T_3$ itself. As an alternative, we obtain that $[b,T_i]$, $i=1,2,3$, are bounded from $H^1_L(\mathbb{R}^n)$ to $L^1_{\mathrm{weak}}(\mathbb{R}^n)$.
We establish new sampling representations for linear integral transforms associated with arbitrary general Birkhoff regular boundary value problems. The new approach is developed in connection with the analytical properties of Green’s function, and does not require the root functions to be a basis or complete. Unlike most of the known sampling expansions associated with eigenvalue problems, the results obtained are, generally speaking, of Hermite interpolation type.
Here we derive a recursive formula for even-power moments of Kloosterman sums or equivalently for power moments of two-dimensional Kloosterman sums. This is done by using the Pless power-moment identity and an explicit expression of the Gauss sum for Sp(4,q).
We study sample covariance matrices of the form $W = (1/n)CC^T$, where $C$ is a $k \times n$ matrix with independent and identically distributed (i.i.d.) mean-zero entries. This is a generalization of the so-called Wishart matrices, where the entries of $C$ are i.i.d. standard normal random variables. Such matrices arise in statistics as sample covariance matrices, and the high-dimensional case, when $k$ is large, arises in the analysis of DNA experiments. We investigate the large deviation properties of the largest and smallest eigenvalues of $W$ when either $k$ is fixed and $n \to \infty$ or $k_n \to \infty$ with $k_n = o(n/\log\log n)$, in the case where the squares of the i.i.d. entries have finite exponential moments. Previous results, proving almost sure limits of the eigenvalues, require only finite fourth moments. Our most explicit results for large $k$ are for the case where the entries of $C$ are $\pm 1$ with equal probability. We relate the large deviation rate functions of the smallest and largest eigenvalues to the rate functions for i.i.d. standard normal entries of $C$. This case is of particular interest since it is related to the problem of decoding a signal in a code-division multiple-access (CDMA) system arising in mobile communication systems. In this example, $k$ is the number of users in the system and $n$ is the length of the coding sequence of each of the users. Each user transmits at the same time and uses the same frequency; the codes are used to distinguish the signals of the separate users. The results imply large deviation bounds for the probability of a bit error due to the interference of the various users.
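As a quick numerical illustration of the setting (not of the large deviation results themselves), the NumPy sketch below draws $C$ with i.i.d. $\pm 1$ entries, forms $W=(1/n)CC^T$, and reports its extreme eigenvalues; the dimensions are arbitrary placeholders.

```python
import numpy as np

# Sketch: extreme eigenvalues of W = (1/n) C C^T with i.i.d. +/-1 entries of C.
# k and n are arbitrary placeholders, not values from the paper.
rng = np.random.default_rng(1)
k, n = 20, 2000
C = rng.choice([-1.0, 1.0], size=(k, n))
W = C @ C.T / n
eigs = np.linalg.eigvalsh(W)                 # W is symmetric, so eigvalsh applies
print(f"lambda_min ≈ {eigs[0]:.3f}, lambda_max ≈ {eigs[-1]:.3f}")
# For fixed k and large n, W ≈ identity, so both extremes concentrate near 1.
```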
Define the non-overlapping return time of a block of a random process to be the number of blocks that pass by before the block in question reappears. We prove a central limit theorem based on these return times. This result has applications to entropy estimation, and to the problem of determining if digits have come from an independent, equidistributed sequence. In the case of an equidistributed sequence, we use an argument based on negative association to prove convergence under conditions weaker than those required in the general case.
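Under one natural reading of the definition (the exact indexing convention is an assumption here), the sketch below computes the non-overlapping return time of the first block of a sequence: the sequence is cut into consecutive blocks of a fixed length, and we count how many blocks pass before the initial block reappears.

```python
# Sketch: non-overlapping return time of the first length-ell block of a sequence.
# Counting the blocks strictly between the two occurrences is an assumption
# made for illustration.
def return_time(seq, ell):
    first = tuple(seq[:ell])
    k = 1
    while (k + 1) * ell <= len(seq):
        if tuple(seq[k * ell:(k + 1) * ell]) == first:
            return k - 1        # blocks passed before the block reappears
        k += 1
    return None                 # block did not reappear in the observed sequence

print(return_time([0, 1, 0, 0, 1, 1, 0, 1], 2))  # blocks: 01,00,11,01 -> returns 2
```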
In this article, we review known results and present new ones concerning the power spectra of large classes of signals and random fields driven by an underlying point process, such as spatial shot noises (with random impulse response and arbitrary basic stationary point processes described by their Bartlett spectra) and signals or fields sampled at random times or points (where the sampling point process is again quite general). We also obtain the Bartlett spectrum for the general linear Hawkes spatial branching point process (with random fertility rate and general immigrant process described by its Bartlett spectrum). We then obtain the Bochner spectra of general spatial linear birth and death processes. Finally, we address the issues of random sampling and linear reconstruction of a signal from its random samples, reviewing and extending former results.
In this paper, we introduce the minimum dynamic discrimination information (MDDI) approach to probability modeling. The MDDI model relative to a given distribution G is that which has least Kullback-Leibler information discrepancy relative to G, among all distributions satisfying some information constraints given in terms of residual moment inequalities, residual moment growth inequalities, or hazard rate growth inequalities. Our results lead to MDDI characterizations of many well-known lifetime models and to the development of some new models. Dynamic information constraints that characterize these models are tabulated. A result for characterizing distributions based on dynamic Rényi information divergence is also given.
The third-generation (3G) mobile communication system uses a technique called code division multiple access (CDMA), in which multiple users use the same frequency and time domain. The data signals of the users are distinguished using codes. When there are many users, interference deteriorates the quality of the system. For more efficient use of resources, we wish to allow more users to transmit simultaneously, by using algorithms that utilize the structure of the CDMA system more effectively than the simple matched filter (MF) system used in the proposed 3G systems. In this paper, we investigate an advanced algorithm called hard-decision parallel interference cancellation (HD-PIC), in which estimates of the interfering signals are used to improve the quality of the signal of the desired user. We compare HD-PIC with MF in a simple case, where the only two parameters are the number of users and the length of the coding sequences. We focus on the exponential rate for the probability of a bit-error, explain the relevance of this parameter, and investigate how it scales when the number of users grows large. We also review extensions of our results, proved elsewhere, showing that in HD-PIC, more users can transmit without errors than in the MF system.
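The sketch below simulates this comparison in a toy noiseless setting (an assumption made for illustration): k users spread their ±1 bits with random ±1 codes of length n, the matched filter decides each bit from the correlation with the user's own code, and one HD-PIC stage re-decides each bit after subtracting the other users' matched-filter estimates. The parameter values are placeholders.

```python
import numpy as np

# Sketch: matched filter (MF) versus one stage of hard-decision parallel
# interference cancellation (HD-PIC) for a toy noiseless CDMA system.
# k (users) and n (code length) are placeholder values.
rng = np.random.default_rng(2)
k, n = 15, 32
S = rng.choice([-1.0, 1.0], size=(n, k))       # spreading codes, one column per user
b = rng.choice([-1.0, 1.0], size=k)            # transmitted bits
r = S @ b                                      # received superposition (no noise)

def hard(x):
    return np.where(x >= 0, 1.0, -1.0)         # hard decision; ties broken to +1

b_mf = hard(S.T @ r)                           # matched-filter decisions

# One HD-PIC stage: for each user, subtract the estimated interference of the
# others (reconstructed from the MF decisions) and decide again.
b_pic = np.empty(k)
for i in range(k):
    interference = S @ b_mf - S[:, i] * b_mf[i]
    b_pic[i] = hard(S[:, i] @ (r - interference))

print("MF bit errors:    ", int(np.sum(b_mf != b)))
print("HD-PIC bit errors:", int(np.sum(b_pic != b)))
```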
A formal approach to produce a model for the data-generating distribution based on partial knowledge is the well-known maximum entropy method. In this approach, partial knowledge about the data-generating distribution is formulated in terms of some information constraints and the model is obtained by maximizing the Shannon entropy under these constraints. Frequently, in reliability analysis the problem of interest is the lifetime beyond an age t. In such cases, the distribution of interest for computing uncertainty and information is the residual distribution. The information functions involving a residual life distribution depend on t, and hence are dynamic. The maximum dynamic entropy (MDE) model is the distribution with the density that maximizes the dynamic entropy for all t. We provide a result that relates the orderings of dynamic entropy and the hazard function for distributions with monotone densities. Applications include dynamic entropy ordering within some parametric families of distributions, orderings of distributions of lifetimes of systems and their components connected in series and parallel, record values, and formulation of constraints for the MDE model in terms of the evolution paths of the hazard function and mean residual lifetime function. In particular, we identify classes of distributions in which some well-known distributions, including the mixture of two exponential distributions and the mixture of two Pareto distributions, are the MDE models.
We consider minimum relative entropy calibration of a given prior distribution to a finite set of moment constraints. We show that the calibration algorithm is stable (in the Prokhorov metric) under a perturbation of the prior and the calibrated distributions converge in variation to the measure from which the moments have been taken as more constraints are added. These facts are used to explain the limiting properties of the minimum relative entropy Monte Carlo calibration algorithm.
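To make the object concrete: for a discrete prior $p$ and moment constraints $E_Q[f_j(X)]=c_j$, the minimum relative entropy calibration is the exponential tilt $q_i \propto p_i \exp(\lambda\cdot f(x_i))$, with $\lambda$ minimizing the convex dual $\lambda \mapsto \log\sum_i p_i e^{\lambda\cdot(f(x_i)-c)}$. The SciPy sketch below solves a one-constraint toy example; it is not the Monte Carlo calibration algorithm analysed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: minimum relative entropy calibration of a discrete prior to one
# moment constraint E_Q[f(X)] = c, via the convex dual of the KL problem.
# The prior, f and c below are toy placeholders.
x = np.arange(1, 7)                      # support: faces of a die
p = np.full(6, 1 / 6)                    # uniform prior
f, c = x.astype(float), 4.5              # constraint: calibrated mean equals 4.5

def dual(lam):
    # log-partition of the exponentially tilted prior, shifted by the target c
    return np.log(np.sum(p * np.exp(lam[0] * (f - c))))

lam = minimize(dual, x0=[0.0]).x[0]
q = p * np.exp(lam * f)
q /= q.sum()                             # calibrated distribution (exponential tilt)

print(np.round(q, 4), float(q @ f))      # calibrated mean ≈ 4.5
```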
We address the problem of tracking the time-varying linear subspaces (of a larger system) under a Bayesian framework. Variations in subspaces are treated as a piecewise-geodesic process on a complex Grassmann manifold and a Markov prior is imposed on it. This prior model, together with an observation model, gives rise to a hidden Markov model on a Grassmann manifold, and admits Bayesian inferences. A sequential Monte Carlo method is used for sampling from the time-varying posterior and the samples are used to estimate the underlying process. Simulation results are presented for principal subspace tracking in array signal processing.
We propose a new convergence criterion for the stochastic algorithm for the optimization of probabilities (SAOP) described in an earlier paper. The criterion is based on the dissection principle for irreducible finite Markov chains.