In this paper, we introduce two notions, a relative operator (α, β)-entropy and a Tsallis relative operator (α, β)-entropy, as two-parameter extensions of the relative operator entropy and the Tsallis relative operator entropy. We apply a perspective approach to prove the joint convexity or concavity of these new notions under certain conditions on α and β. That is, we give parametric extensions constructed so that they remain jointly convex or jointly concave.
Significance Statement. What is novel here is that we demonstrate how our techniques can be used to give simple proofs of old and new theorems for functions that are relevant to quantum statistics. Our proof strategy shows that the joint convexity of the perspective of certain functions plays a crucial role in giving simple proofs of the joint convexity (resp. concavity) of some relative operator entropies.
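For orientation, the one-parameter objects being extended here can be expressed through the non-commutative perspective; the following identities are standard in the literature (not specific to this paper) for invertible positive operators $A$ and $B$:
$$P_f(A,B)=A^{1/2}\,f\!\left(A^{-1/2}BA^{-1/2}\right)A^{1/2},\qquad S(A|B)=P_{\log}(A,B),\qquad T_\lambda(A|B)=P_{f_\lambda}(A,B),\quad f_\lambda(x)=\frac{x^\lambda-1}{\lambda},$$
so operator convexity or concavity of the scalar function $f$ transfers, via the joint convexity or concavity of its perspective $P_f$, to the corresponding relative operator entropy.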
We describe how to approximate fractal transformations generated by a one-parameter family of dynamical systems $W:[0,1]\rightarrow [0,1]$ constructed from a pair of monotone increasing diffeomorphisms $W_{i}$ such that $W_{i}^{-1}:[0,1]\rightarrow [0,1]$ for $i=0,1$. An algorithm is provided for determining the unique parameter value such that the closure of the symbolic attractor $\overline{\Omega}$ is symmetrical. Several examples are given, one in which the $W_{i}$ are affine and two in which the $W_{i}$ are nonlinear. Applications to digital imaging are also discussed.
In modelling joint probability distributions it is often desirable to incorporate standard marginal distributions and to match a set of key observed mixed moments, while avoiding additional unwarranted assumptions. The problem is then to find the least ordered distribution that respects the prescribed constraints. In this paper we construct a suitable joint probability distribution by finding the checkerboard copula of maximum entropy that incorporates the appropriate marginal distributions and matches the nominated set of observed moments.
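As a concrete illustration of the optimisation being described (a minimal sketch, not the authors' construction: the grid size, the target moment and the off-the-shelf SLSQP solver are all assumptions), one can fit a checkerboard copula of maximum entropy on an n × n grid subject to uniform marginals and one prescribed mixed moment E[UV]:

```python
# A minimal sketch (not the paper's algorithm) of fitting a checkerboard
# copula of maximum entropy on an n x n grid, subject to uniform marginals
# and one prescribed mixed moment E[UV].  The grid size n, the target
# moment 0.30 and the SLSQP solver are illustrative choices.
import numpy as np
from scipy.optimize import minimize

n = 4                                            # checkerboard resolution (assumed)
mid = (2 * np.arange(1, n + 1) - 1) / (2 * n)    # cell midpoints on (0,1)
target = 0.30                                    # prescribed E[UV]; 0.25 = independence

def neg_entropy(h):
    h = h.reshape(n, n)
    return np.sum(h * np.log(np.maximum(h, 1e-12)))  # -(copula entropy) + const

constraints = (
    # uniform marginals: every row and column of cell masses sums to 1/n
    [{'type': 'eq', 'fun': lambda h, i=i: h.reshape(n, n)[i].sum() - 1 / n}
     for i in range(n)] +
    [{'type': 'eq', 'fun': lambda h, j=j: h.reshape(n, n)[:, j].sum() - 1 / n}
     for j in range(n)] +
    # matched mixed moment: E[UV] = sum_ij h_ij * mid_i * mid_j
    [{'type': 'eq', 'fun': lambda h: mid @ h.reshape(n, n) @ mid - target}]
)

h0 = np.full(n * n, 1 / n**2)                    # start from the independence copula
res = minimize(neg_entropy, h0, method='SLSQP',
               bounds=[(0, 1)] * n * n, constraints=constraints)
print(res.x.reshape(n, n).round(4))              # cell masses of the fitted copula
```

At independence E[UV] = 0.25, so the constraint E[UV] = 0.30 forces positive dependence while the entropy objective keeps the cell masses as uniform as the constraints allow.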
The Shannon entropy based on the probability density function is a key information measure with applications in many areas. Several alternative information measures have been proposed in the literature; two relevant ones are the cumulative residual entropy (based on the survival function) and the cumulative past entropy (based on the distribution function). Recently, some extensions of these measures have been proposed. Here, we obtain some properties of the generalized cumulative past entropy. In particular, we prove that it determines the underlying distribution. We also study this measure in coherent systems, together with a closely related generalized cumulative past Kerridge inaccuracy measure.
In this paper, we perform a detailed spectral study of the liberation process associated with two symmetries of arbitrary ranks: $(R,S)\mapsto (R,U_{t}SU_{t}^{\ast })_{t\geqslant 0}$, where $(U_{t})_{t\geqslant 0}$ is a free unitary Brownian motion freely independent of $\{R,S\}$. Our main tool is free stochastic calculus, which allows us to derive a partial differential equation (PDE) for the Herglotz transform of the unitary process defined by $Y_{t}:=RU_{t}SU_{t}^{\ast }$. It turns out that this is exactly the PDE governing the flow of an analytic function transform of the spectral measure of the operator $X_{t}:=PU_{t}QU_{t}^{\ast }P$, where $P,Q$ are the orthogonal projections associated with $R,S$. Next, we relate the two spectral measures of $RU_{t}SU_{t}^{\ast }$ and of $PU_{t}QU_{t}^{\ast }P$ via their moment sequences and use this relationship to develop a theory of subordination for the boundary values of the Herglotz transform. In particular, we explicitly compute the subordinate function and extend its inverse continuously to the unit circle. As an application, we prove the identity $i^{\ast }(\mathbb{C}P+\mathbb{C}(I-P);\mathbb{C}Q+\mathbb{C}(I-Q))=-\chi_{\text{orb}}(P,Q)$.
The proportional hazards (PH) model and its associated distributions provide suitable media for exploring connections between the Gini coefficient, Fisher information, and Shannon entropy. The connecting threads are Bayes risks of the mean excess of a random variable with the PH distribution and Bayes risks of the Fisher information of the equilibrium distribution of the PH model. Under various priors, these Bayes risks are generalized entropy functionals of the survival functions of the baseline and PH models and the expected asymptotic age of the renewal process with the PH renewal time distribution. Bounds for a Bayes risk of the mean excess and for the Gini coefficient are given. The Shannon entropy integral of the equilibrium distribution of the PH model is represented in derivative forms. Several examples illustrate implementation of the results and provide insights for potential applications.
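To make the survival-function view concrete (a minimal numerical sketch, not taken from the paper: the Lomax baseline and the values of θ are illustrative assumptions), recall the standard identity Gini = 1 − (∫S² dx)/(∫S dx) for a nonnegative lifetime with survival function S, applied to the PH model S(x) = S₀(x)^θ:

```python
# A minimal numerical sketch (not from the paper) of the Gini coefficient of
# a proportional hazards (PH) distribution, computed from its survival
# function via the standard identity
#   Gini = 1 - (∫ S(x)^2 dx) / (∫ S(x) dx)
# for nonnegative lifetimes.  The Lomax baseline and θ values are assumptions.
import numpy as np
from scipy.integrate import quad

def gini_ph(baseline_sf, theta):
    """Gini coefficient of the PH model S(x) = S0(x)**theta."""
    sf = lambda x: baseline_sf(x) ** theta
    num, _ = quad(lambda x: sf(x) ** 2, 0, np.inf)
    den, _ = quad(sf, 0, np.inf)
    return 1 - num / den

s0 = lambda x: (1 + x) ** -2.0        # Lomax(2) baseline survival function
for theta in (1.0, 1.5, 2.0):
    # closed form for this baseline: Gini = a/(2a-1) with a = 2*theta
    print(theta, gini_ph(s0, theta), 2 * theta / (4 * theta - 1))
```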
Rao et al. (2004) introduced an alternative measure of uncertainty known as the cumulative residual entropy (CRE). It is based on the survival (reliability) function $\bar{F}$ instead of the probability density function $f$ used in classical Shannon entropy. In reliability-based system design, the performance characteristics of coherent systems are of great importance. Accordingly, in this paper we study the CRE of coherent and mixed systems when the component lifetimes are identically distributed. Bounds for the CRE of the system lifetime are obtained. We use these results to propose a measure of how close a system is to the series and parallel systems of the same size. Our results suggest that the CRE can be viewed as an alternative entropy (dispersion) measure to classical Shannon entropy.
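Concretely, the CRE of a nonnegative lifetime $X$ with survival function $\bar{F}$ is $\text{CRE}(X)=-\int_0^\infty \bar{F}(x)\log \bar{F}(x)\,dx$. Below is a minimal sketch (an illustration of the definition, not of the paper's system-level results; the sample size and rate λ are arbitrary) comparing an empirical estimate with the closed form CRE = 1/λ for an exponential(λ) lifetime:

```python
# A minimal sketch of the cumulative residual entropy (CRE),
#   CRE(X) = -∫ F̄(x) log F̄(x) dx,
# estimated from a sample versus the closed form CRE = 1/λ for an
# exponential(λ) lifetime.  Sample size and λ are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = np.sort(rng.exponential(1 / lam, size=100_000))

# empirical survival function just to the right of each order statistic
n = len(x)
sf = 1 - np.arange(1, n + 1) / n
mask = sf > 0                          # drop the last point where log F̄ = -inf
integrand = -sf[mask] * np.log(sf[mask])

# integrate exactly: the empirical F̄ is piecewise constant between
# consecutive order statistics, so the integral is a weighted sum
cre_hat = np.sum(integrand[:-1] * np.diff(x[mask]))
print(cre_hat, "vs exact", 1 / lam)    # ≈ 0.5
```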
Image inpainting methods recover true images from partial noisy observations. Natural images usually consist of two layers: cartoons and textures. Simultaneous cartoon and texture inpainting methods are popular in the literature; they combine two tight frames, one (often built from wavelets, curvelets or shearlets) providing sparse representations for cartoons and the other (often built from discrete cosine transforms) offering sparse approximations for textures. Inspired by the recent development of directional tensor product complex tight framelets ($\text{TP-}\mathbb{C}\text{TF}$s) and their impressive performance on the image denoising problem, we propose an iterative thresholding algorithm using tight frames derived from $\text{TP-}\mathbb{C}\text{TF}$s for the image inpainting problem. The tight frame $\text{TP-}\mathbb{C}\text{TF}_{6}$ contains two classes of framelets, one suited to cartoons and the other to textures, so it can handle both well. For the image inpainting problem with additive zero-mean independent and identically distributed Gaussian noise, our proposed algorithm does not require manual parameter tuning for reasonably good performance. Experimental results show that our algorithm compares favourably with several well-known frame systems for the image inpainting problem.
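The core loop of iterative frame thresholding for inpainting can be sketched as follows (a generic illustration, not the paper's algorithm: ordinary orthonormal wavelets from PyWavelets stand in for the TP-ℂTF frames, and the mask, threshold schedule and toy image are arbitrary choices):

```python
# A generic iterative thresholding inpainting sketch (assumption: ordinary
# orthonormal wavelets stand in here for the paper's TP-CTF tight frames;
# the mask, threshold schedule and iteration count are illustrative).
#   x_{k+1} = observed on known pixels, wavelet-threshold(x_k) elsewhere.
import numpy as np
import pywt

def wavelet_threshold(img, wavelet, thresh):
    coeffs = pywt.wavedec2(img, wavelet, level=3)
    # soft-threshold only the detail subbands, keep the approximation
    new = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode='soft') for d in level)
        for level in coeffs[1:]
    ]
    return pywt.waverec2(new, wavelet)

rng = np.random.default_rng(1)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))   # toy 64x64 "cartoon"
mask = rng.random(clean.shape) > 0.5                   # True = observed pixel
observed = clean * mask

x = observed.copy()
for thresh in np.linspace(0.5, 0.01, 30):   # decreasing threshold schedule
    x = wavelet_threshold(x, 'db4', thresh)
    x[mask] = observed[mask]                # enforce the known pixels

print("RMSE on missing pixels:", np.sqrt(np.mean((x - clean)[~mask] ** 2)))
```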
This paper presents a framework for compressed sensing that bridges a gap between existing theory and the current use of compressed sensing in many real-world applications. In doing so, it also introduces a new sampling method that yields substantially improved recovery over existing techniques. In many applications of compressed sensing, including medical imaging, the standard principles of incoherence and sparsity are lacking. Whilst compressed sensing is often used successfully in such applications, it is done largely without mathematical explanation. The framework introduced in this paper provides such a justification. It does so by replacing these standard principles with three more general concepts: asymptotic sparsity, asymptotic incoherence and multilevel random subsampling. Moreover, not only does this work provide such a theoretical justification, it explains several key phenomena witnessed in practice. In particular, and unlike the standard theory, this work demonstrates the dependence of optimal sampling strategies on both the incoherence structure of the sampling operator and on the structure of the signal to be recovered. Another key consequence of this framework is the introduction of a new structured sampling method that exploits these phenomena to achieve significant improvements over current state-of-the-art techniques.
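To illustrate multilevel random subsampling (a toy sketch under assumed parameters; the dyadic level boundaries and per-level fractions below are not from the paper), one can sample one-dimensional Fourier indices densely at low frequencies and increasingly sparsely at high frequencies:

```python
# A toy sketch of multilevel random subsampling of 1-D Fourier coefficients:
# frequencies are split into levels and each level is sampled uniformly at
# random with its own fraction, denser at low frequencies.  The level
# boundaries and sampling fractions below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
N = 1024                                  # total number of Fourier samples
edges = [0, 16, 64, 256, N]               # level boundaries (assumed)
fractions = [1.0, 0.5, 0.2, 0.05]         # per-level sampling fractions

mask = np.zeros(N, dtype=bool)
for (lo, hi), frac in zip(zip(edges, edges[1:]), fractions):
    size = hi - lo
    chosen = rng.choice(size, size=int(frac * size), replace=False)
    mask[lo + chosen] = True

print("kept", mask.sum(), "of", N, "samples")
```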
Many variational models involving high-order derivatives have been widely used in image processing, because they can reduce staircase effects during noise elimination. However, it is very challenging to construct efficient algorithms that obtain the minimizers of such high-order functionals. In this paper, we propose a new linearized augmented Lagrangian method for Euler's elastica image denoising model. We detail the procedure for finding the saddle points of the augmented Lagrangian functional. Instead of solving the associated linear systems by FFT or linear iterative methods (e.g., the Gauss–Seidel method), we adopt a linearized strategy to obtain an iteration sequence and so reduce the computational cost. In addition, we give a simple complexity analysis for the proposed method. Experimental results with comparisons to the previous method are supplied to demonstrate the efficiency of the proposed method, and they indicate that such a linearized augmented Lagrangian method is more suitable for large images.
We propose a new two-phase method for the reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise; in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves computational efficiency (in the denoising case) and has the advantage of being free of regularization parameters. Our numerical results suggest that the method is competitive, in terms of its restoration capabilities, with other two-phase methods.
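The two-phase structure can be sketched in a few lines (a crude stand-in, not the paper's method: a median-based detector replaces the adaptive detector, a local-median fill replaces the equality constrained TV reconstruction, and all thresholds are arbitrary):

```python
# A minimal two-phase sketch (assumptions: a median-based detector and a
# local-median fill stand in for the paper's detector and its equality
# constrained TV reconstruction; thresholds and the toy image are arbitrary).
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))   # toy 64x64 image in [0,1)

# corrupt 20% of the pixels with salt-and-pepper impulse noise
noisy = clean.copy()
corrupt = rng.random(clean.shape) < 0.2
noisy[corrupt] = rng.choice([0.0, 1.0], size=corrupt.sum())

# Phase 1: flag pixels that take an extreme value AND deviate strongly
# from their local median -- a crude impulse detector
med = median_filter(noisy, size=3)
flagged = ((noisy <= 0.0) | (noisy >= 1.0)) & (np.abs(noisy - med) > 0.2)

# Phase 2: restore only the flagged pixels, keeping noise-free pixels exact
restored = noisy.copy()
restored[flagged] = med[flagged]

print("flagged:", flagged.sum(), "actually corrupted:", corrupt.sum())
print("RMSE after restoration:", np.sqrt(np.mean((restored - clean) ** 2)))
```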
Consider a family of Boolean models, indexed by integers $n\geq 1$. The $n$th model features a Poisson point process in $\mathbb{R}^n$ of intensity $e^{n\rho_n}$ and balls of independent and identically distributed radii distributed like $\bar{X}_n\sqrt{n}$. Assume that $\rho_n\to\rho$ as $n\to\infty$, and that $\bar{X}_n$ satisfies a large deviations principle. We show that there then exist three deterministic thresholds, the degree threshold $\tau_d$, the percolation probability threshold $\tau_p$, and the volume fraction threshold $\tau_v$, such that, asymptotically as $n$ tends to $\infty$, we have the following features. (i) For $\rho<\tau_d$, almost every point is isolated, namely its ball intersects no other ball; (ii) for $\tau_d<\rho<\tau_p$, the mean number of balls intersected by a typical ball converges to $\infty$ and nevertheless there is no percolation; (iii) for $\tau_p<\rho<\tau_v$, the volume fraction is 0 and nevertheless percolation occurs; (iv) for $\tau_d<\rho<\tau_v$, the mean number of balls intersected by a typical ball converges to $\infty$ and nevertheless the volume fraction is 0; (v) for $\rho>\tau_v$, the whole space is covered. The analysis of this asymptotic regime is motivated by problems in information theory, but it could be of independent interest in stochastic geometry. The relations between these three thresholds and the Shannon–Poltyrev threshold are discussed.
NTRU is a public-key cryptosystem introduced at ANTS-III. The two most widely used techniques for attacking the NTRU private key are meet-in-the-middle attacks and lattice-basis reduction attacks. Howgrave-Graham combined both techniques in 2007 and pointed out that the largest obstacle to attacks is the memory capacity required for the meet-in-the-middle phase. In the present paper an algorithm is presented that applies low-memory techniques to find ‘golden’ collisions for Odlyzko’s meet-in-the-middle attack against the NTRU private key. Several aspects of NTRU secret keys and of the algorithm are analysed. The running time of the algorithm with a maximum storage capacity of $w$ is estimated and experimentally verified. Experiments indicate that decreasing the storage capacity $w$ by a factor $1<c<\sqrt{w}$ increases the running time by a factor $\sqrt{c}$.
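Low-memory golden collision search is typically organised around distinguished points, in the style of van Oorschot and Wiener. The following generic sketch (an illustration of that technique, not the paper's NTRU-specific algorithm: the toy iteration function and all parameters are assumptions) detects two trails merging at a common distinguished point while storing only one endpoint per trail:

```python
# A generic low-memory collision search with distinguished points (van
# Oorschot–Wiener style).  The toy SHA-256-based iteration function f and
# the parameters are illustrative; a real attack would iterate over NTRU
# key guesses instead.  Only one endpoint per trail is stored.
import hashlib

DIST_BITS = 8                       # point is distinguished if low 8 bits are 0
M = 1 << 20                         # size of the toy search domain

def f(x: int) -> int:
    """Toy random-looking iteration function on [0, M)."""
    h = hashlib.sha256(x.to_bytes(4, 'big')).digest()
    return int.from_bytes(h[:4], 'big') % M

def trail(start: int, max_steps: int = 20 * (1 << DIST_BITS)):
    """Walk x -> f(x) until a distinguished point; return (endpoint, length)."""
    x, steps = start, 0
    while x & ((1 << DIST_BITS) - 1) != 0:
        if steps >= max_steps:
            return None, steps      # abandon the rare trail stuck in a cycle
        x, steps = f(x), steps + 1
    return x, steps

seen = {}                           # distinguished endpoint -> trail start
start = 0
while True:
    end, length = trail(start)
    if end is not None:
        if end in seen and seen[end] != start:
            # two different trails merge: re-walking both from their starts
            # would locate the actual colliding pair of inputs
            print(f"trails from {seen[end]} and {start} merge at {end}")
            break
        seen[end] = start
    start += 1
```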
The security of several homomorphic encryption schemes depends on the hardness of variants of the approximate common divisor (ACD) problem. We survey and compare a number of lattice-based algorithms for the ACD problem, with particular attention to some very recently proposed variants. One of our main goals is to compare the multivariate polynomial approach with other methods. We find that the multivariate polynomial approach performs no better than the orthogonal lattice algorithm for practical cryptanalysis.
We also briefly discuss a sample-amplification technique for ACD samples and a pre-processing algorithm similar to the Blum–Kalai–Wasserman algorithm for learning parity with noise. The details of this work are given in the full version of the paper.
In this paper we consider an anisotropic convection-diffusion (ACD) filter for simultaneous image denoising and compression. The ACD filter is discretized by a tailored finite point method (TFPM), which can adapt to particular properties of the image on an irregular grid structure. A quadtree structure provides multilevel storage for the compression. We compare the performance of the proposed scheme with several well-known filters. The numerical results show that the proposed method is effective for removing a mixture of white Gaussian and salt-and-pepper noise.
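For intuition, the smoothing behaviour of such filters can be illustrated with a classical Perona–Malik-style anisotropic diffusion step (a generic sketch, not the authors' TFPM discretization: the conductance function, kappa, time step and toy image are all assumptions):

```python
# A generic Perona–Malik-style anisotropic diffusion sketch (assumption:
# this classical scheme stands in for the paper's convection-diffusion
# filter and its tailored finite point discretization; kappa, dt and the
# number of steps are arbitrary).
import numpy as np

def diffuse(img, steps=20, dt=0.15, kappa=0.1):
    """Explicit Perona-Malik diffusion with periodic boundaries (np.roll)."""
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
    u = img.astype(float).copy()
    for _ in range(steps):
        dn = np.roll(u, 1, axis=0) - u        # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(4)
clean = np.kron(rng.random((8, 8)), np.ones((8, 8)))   # toy piecewise-flat image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after :", np.sqrt(np.mean((diffuse(noisy) - clean) ** 2)))
```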
Let $G_1\times G_2$ denote the strong product of graphs $G_1$ and $G_2$, that is, the graph on $V(G_1)\times V(G_2)$ in which $(u_1,u_2)$ and $(v_1,v_2)$ are adjacent if for each $i=1,2$ we have $u_i=v_i$ or $u_iv_i\in E(G_i)$. The Shannon capacity of $G$ is $c(G)=\lim_{n\to\infty}\alpha(G^n)^{1/n}$, where $G^n$ denotes the $n$-fold strong power of $G$ and $\alpha(H)$ denotes the independence number of a graph $H$. The normalized Shannon capacity of $G$ is
$$C(G) = \frac{\log c(G)}{\log |V(G)|}.$$
Alon [1] asked whether for every $\varepsilon>0$ there are graphs $G$ and $G'$ satisfying $C(G),C(G')<\varepsilon$ but with $C(G+G')>1-\varepsilon$. We show that the answer is no.
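For small graphs the finite powers give computable lower bounds $\alpha(G^n)^{1/n}\le c(G)$; here is a minimal sketch (illustrative only: brute-force independence numbers limit this to tiny graphs such as $C_5$ with $n\le 2$):

```python
# A small sketch computing lower bounds alpha(G^n)^(1/n) on the Shannon
# capacity, using networkx strong products and brute-force independence
# numbers (only feasible for tiny graphs; C5 and n <= 2 are illustrative).
import networkx as nx

def independence_number(G):
    # alpha(G) equals the clique number of the complement graph
    return max(len(c) for c in nx.find_cliques(nx.complement(G)))

G = nx.cycle_graph(5)                    # C5, the classic example
power = G
for n in (1, 2):
    if n > 1:
        power = nx.strong_product(power, G)
    alpha = independence_number(power)
    print(f"alpha(C5^{n}) = {alpha},  bound c(C5) >= {alpha ** (1/n):.4f}")
# Lovász (1979): c(C5) = sqrt(5) ≈ 2.2361, so the n = 2 bound is tight.
```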
We consider dynamic versions of the mutual information of lifetime distributions, with a focus on past lifetimes, residual lifetimes, and mixed lifetimes evaluated at different instants. This allows us to study multicomponent systems, by measuring the dependence in conditional lifetimes of two components having possibly different ages. We provide some bounds, and investigate the mutual information of residual lifetimes within the time-transformed exponential model (under both the assumptions of unbounded and truncated lifetimes). Moreover, with reference to the order statistics of a random sample, we evaluate explicitly the mutual information between the minimum and the maximum, conditional on inspection at different times, and show that it is distribution-free in a special case. Finally, we develop a copula-based approach aiming to express the dynamic mutual information for past and residual bivariate lifetimes in an alternative way.
Given two absolutely continuous nonnegative independent random variables, we define the reversed relevation transform as the dual of the relevation transform. We first apply these transforms to the lifetimes of the components of parallel and series systems under suitable proportionality assumptions on the hazard rates. Furthermore, we prove that the (reversed) relevation transform is commutative if and only if the proportional (reversed) hazard rate model holds. By repeated application of the reversed relevation transform we construct a decreasing sequence of random variables which leads to new weighted probability densities. We obtain various relations involving ageing notions and stochastic orders. We also exploit the connection of this sequence to the cumulative entropy and to an operator that is dual to the Dickson–Hipp operator. Iterative formulae for computing the mean and the cumulative entropy of the random variables of the sequence are finally investigated.
In this paper, we consider signal recovery via $l_{1}$-analysis optimisation. The signals we consider are not sparse in an orthonormal basis or incoherent dictionary, but sparse or nearly sparse in terms of some tight frame $D$. The analysis in this paper is based on the restricted isometry property adapted to a tight frame $D$ (abbreviated as $D$-RIP), which is a natural extension of the standard restricted isometry property. Assuming that the measurement matrix $A\in \mathbb{R}^{m\times n}$ satisfies $D$-RIP with constant $\delta_{tk}$ for integer $k$ and $t>1$, we show that the condition $\delta_{tk}<\sqrt{(t-1)/t}$ guarantees stable recovery of signals through $l_{1}$-analysis. This condition is sharp in the sense explained in the paper. The results improve those of Li and Lin [‘Compressed sensing with coherent tight frames via $l_{q}$-minimization for $0<q\leq 1$’, Preprint, 2011, arXiv:1105.3299] and Baker [‘A note on sparsification by frames’, Preprint, 2013, arXiv:1308.5249].
Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in $\mathbb{R}^n$ and a translation invariant tessellation of $\mathbb{R}^n$. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independently from point to point. The aim is to find a point process and a tessellation that minimize the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞ while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations; if it is negative, then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which quantifies how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit; these bounds hold for all stationary and ergodic noises with the above properties and match the best known bounds in the white Gaussian noise case.