
Unifying computational entropies via Kullback-Leibler divergence

 Added by Thibaut Horel
Publication date: 2019
Language: English


We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent duality between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
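
For reference (not part of the paper's abstract), the relative entropy, or Kullback-Leibler divergence, on which the new hardness notion is built is the standard quantity

$$\mathrm{KL}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},$$

which is nonnegative and equals zero exactly when $P = Q$.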

Related research

Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of $\sigma$-algebras, and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.
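
For concreteness, a standard formulation (not quoted from the paper): the Rényi divergence of order $\alpha \in (0,1) \cup (1,\infty)$ between discrete distributions $P$ and $Q$ is

$$D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha},$$

and letting $\alpha \to 1$ recovers the Kullback-Leibler divergence $\sum_x P(x) \log \frac{P(x)}{Q(x)}$, which is the order-1 case mentioned above.
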
We propose a method to fuse posterior distributions learned from heterogeneous datasets. Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors and proceeds using a simple assign-and-average approach. The components of the dataset posteriors are assigned to the proposed global model components by solving a regularized variant of the assignment problem. The global components are then updated based on these assignments by their mean under a KL divergence. For exponential family variational distributions, our formulation leads to an efficient non-parametric algorithm for computing the fused model. Our algorithm is easy to describe and implement, efficient, and competitive with state-of-the-art on motion capture analysis, topic modeling, and federated learning of Bayesian neural networks.
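
As a rough illustration of the assign-and-average idea, the sketch below matches local Gaussian mean-field components to global components with the Hungarian algorithm and then averages the matched means. It is a deliberately simplified, hypothetical rendering (squared-distance assignment cost, isotropic components with shared variance, no regularization), not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_gaussian_means(local_means, global_means):
    """Toy assign-and-average step: match local mean-field components to
    global ones, then average the matched means.  For isotropic Gaussians
    with a shared variance, the KL barycenter of two components reduces to
    the average of their means (a simplifying assumption of this sketch)."""
    # Assignment cost: squared Euclidean distance between component means.
    cost = np.linalg.norm(local_means[:, None, :] - global_means[None, :, :],
                          axis=-1) ** 2
    rows, cols = linear_sum_assignment(cost)   # Hungarian matching
    fused = global_means.copy()
    fused[cols] = 0.5 * (global_means[cols] + local_means[rows])
    return fused

# Toy usage: fuse two local components into a three-component global model.
local = np.array([[0.0, 0.0], [5.0, 5.0]])
global_init = np.array([[0.2, -0.1], [4.8, 5.3], [10.0, 10.0]])
print(fuse_gaussian_means(local, global_init))
```
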
Bayesian nonparametric statistics is an area of considerable research interest. While substantial effort has recently been devoted to developing Bayesian nonparametric procedures for model checking, the use of the Dirichlet process, in its simplest form, together with the Kullback-Leibler divergence remains an open problem. This is mainly attributed to the discreteness of the Dirichlet process and the fact that the Kullback-Leibler divergence between any discrete distribution and any continuous distribution is infinite. The approach proposed in this paper, which combines the Dirichlet process, the Kullback-Leibler divergence and the relative belief ratio, is considered the first concrete solution to this issue. Applying the approach is simple and does not require obtaining a closed form of the relative belief ratio. A Monte Carlo study and real data examples show that the developed approach exhibits excellent performance.
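
The discreteness mentioned above is visible directly in the stick-breaking construction of the Dirichlet process: a draw from $DP(\alpha, H)$ is a countable mixture of point masses. Below is a minimal truncated stick-breaking sampler (a standard construction, shown only to illustrate that discreteness; the paper's model-checking procedure with the relative belief ratio is not reproduced here).

```python
import numpy as np

def stick_breaking_dp(alpha, base_sampler, n_atoms, rng):
    """Truncated stick-breaking draw from DP(alpha, H): weights come from
    Beta(1, alpha) stick breaks, atoms are i.i.d. draws from the base
    measure H, so the realization is a discrete distribution."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    weights = betas * leftover        # w_k = beta_k * prod_{j<k}(1 - beta_j)
    atoms = base_sampler(n_atoms)     # atom locations drawn from H
    return atoms, weights

rng = np.random.default_rng(0)
atoms, weights = stick_breaking_dp(1.0, lambda n: rng.normal(size=n), 200, rng)
print(weights.sum())  # close to 1 for a generous truncation level
```
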
We study the problem of spectrum estimation from transmission data of a known phantom. The goal is to reconstruct an x-ray spectrum that accurately models the x-ray transmission curves and reflects the realistic shape of the typical energy spectra of the CT system. To this end, spectrum estimation is posed as an optimization problem with the x-ray spectrum as the unknown variable, and a Kullback-Leibler (KL) divergence constraint is employed to incorporate prior knowledge of the spectrum and enhance the numerical stability of the estimation process. The formulated constrained optimization problem is convex and can be solved efficiently by the exponentiated-gradient (EG) algorithm. We demonstrate the effectiveness of the proposed approach on simulated and experimental data, and compare it with the expectation-maximization (EM) method. In simulations, the proposed algorithm yields x-ray spectra that closely match the ground truth and represent the attenuation process of x-ray photons in materials, both included and not included in the estimation process. In experiments, the calculated transmission curve is in good agreement with the measured transmission curve, and the estimated spectra exhibit physically realistic shapes. The results further show comparable performance between the proposed optimization-based approach and EM. In conclusion, our constrained-optimization formulation provides an interpretable and flexible framework for spectrum estimation. Moreover, a KL-divergence constraint can incorporate a prior spectrum and appears to capture important features of the x-ray spectrum, allowing accurate and robust estimation of the x-ray spectrum in CT imaging.
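
To illustrate the multiplicative update at the heart of the exponentiated-gradient algorithm, here is a generic EG iteration for a least-squares fit over the probability simplex. The objective, step size, and forward model are placeholders; the paper's actual formulation (transmission model plus the KL-divergence constraint on the spectrum) is more involved.

```python
import numpy as np

def exponentiated_gradient(A, t, steps=500, eta=0.1):
    """Generic EG iteration for min ||A s - t||^2 with s on the simplex:
    multiply by exp(-eta * gradient), then renormalize."""
    n = A.shape[1]
    s = np.full(n, 1.0 / n)               # start from a uniform spectrum
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ s - t)    # gradient of the quadratic misfit
        s = s * np.exp(-eta * grad)       # multiplicative (exponentiated) step
        s /= s.sum()                      # renormalize back onto the simplex
    return s

# Toy usage with a random forward model and a known ground-truth spectrum.
rng = np.random.default_rng(1)
A = rng.random((20, 5))
s_true = np.array([0.10, 0.30, 0.40, 0.15, 0.05])
print(exponentiated_gradient(A, A @ s_true))  # approximately recovers s_true
```
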
Kullback-Leibler (KL) divergence is one of the most important divergence measures between probability distributions. In this paper, we investigate the properties of KL divergence between Gaussians. Firstly, for any two $n$-dimensional Gaussians $\mathcal{N}_1$ and $\mathcal{N}_2$, we find the supremum of $KL(\mathcal{N}_1\|\mathcal{N}_2)$ when $KL(\mathcal{N}_2\|\mathcal{N}_1) \leq \epsilon$ for $\epsilon > 0$. This reveals the approximate symmetry of small KL divergence between Gaussians. We also find the infimum of $KL(\mathcal{N}_1\|\mathcal{N}_2)$ when $KL(\mathcal{N}_2\|\mathcal{N}_1) \geq M$ for $M > 0$. Secondly, for any three $n$-dimensional Gaussians $\mathcal{N}_1$, $\mathcal{N}_2$ and $\mathcal{N}_3$, we find a bound on $KL(\mathcal{N}_1\|\mathcal{N}_3)$ if $KL(\mathcal{N}_1\|\mathcal{N}_2)$ and $KL(\mathcal{N}_2\|\mathcal{N}_3)$ are bounded. This reveals that the KL divergence between Gaussians follows a relaxed triangle inequality. Importantly, all the bounds in the theorems presented in this paper are independent of the dimension $n$.
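
For context, the KL divergence between two $n$-dimensional Gaussians, the quantity bounded above, has a well-known closed form; the snippet below implements that standard formula (it does not reproduce the paper's bounds).

```python
import numpy as np

def kl_gaussians(mu1, cov1, mu2, cov2):
    """Standard closed form for KL(N(mu1, cov1) || N(mu2, cov2)):
    0.5 * [tr(S2^-1 S1) + (mu2-mu1)^T S2^-1 (mu2-mu1) - n + ln(det S2 / det S1)]."""
    n = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    trace_term = np.trace(cov2_inv @ cov1)
    quad_term = diff @ cov2_inv @ diff
    logdet_term = np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
    return 0.5 * (trace_term + quad_term - n + logdet_term)

# The asymmetry the paper quantifies: the two directions differ in general.
mu1, cov1 = np.zeros(2), np.eye(2)
mu2, cov2 = np.ones(2), 2.0 * np.eye(2)
print(kl_gaussians(mu1, cov1, mu2, cov2), kl_gaussians(mu2, cov2, mu1, cov1))
```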