Kullback-Leibler (KL) divergence is one of the most important divergence measures between probability distributions. In this paper, we investigate the properties of KL divergence between Gaussians. First, for any two $n$-dimensional Gaussians $\mathcal{N}_1$ and $\mathcal{N}_2$, we find the supremum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ when $KL(\mathcal{N}_2||\mathcal{N}_1)\leq \epsilon$ for $\epsilon>0$. This reveals the approximate symmetry of small KL divergence between Gaussians. We also find the infimum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ when $KL(\mathcal{N}_2||\mathcal{N}_1)\geq M$ for $M>0$. Second, for any three $n$-dimensional Gaussians $\mathcal{N}_1$, $\mathcal{N}_2$ and $\mathcal{N}_3$, we find a bound on $KL(\mathcal{N}_1||\mathcal{N}_3)$ when $KL(\mathcal{N}_1||\mathcal{N}_2)$ and $KL(\mathcal{N}_2||\mathcal{N}_3)$ are both bounded. This reveals that the KL divergence between Gaussians satisfies a relaxed triangle inequality. Importantly, all the bounds in the theorems presented in this paper are independent of the dimension $n$.
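For reference (a standard identity, not a result of the paper above), the quantity being bounded is the closed-form KL divergence between two $n$-dimensional Gaussians $\mathcal{N}_1(\mu_1,\Sigma_1)$ and $\mathcal{N}_2(\mu_2,\Sigma_2)$:
$$KL(\mathcal{N}_1||\mathcal{N}_2) = \frac{1}{2}\left[\operatorname{tr}\!\left(\Sigma_2^{-1}\Sigma_1\right) + (\mu_2-\mu_1)^{\top}\Sigma_2^{-1}(\mu_2-\mu_1) - n + \ln\frac{\det\Sigma_2}{\det\Sigma_1}\right].$$
The asymmetry of this expression in $(\mu_1,\Sigma_1)$ and $(\mu_2,\Sigma_2)$ is what makes the supremum and infimum results above non-trivial.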
Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence.
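For context (the standard definition, not quoted from the abstract above), the Rényi divergence of order $\alpha \in (0,1)\cup(1,\infty)$ between densities $p$ and $q$ is
$$D_\alpha(P||Q) = \frac{1}{\alpha-1}\ln \int p(x)^{\alpha}\, q(x)^{1-\alpha}\,dx,$$
and it recovers the Kullback-Leibler divergence in the limit $\alpha \to 1$.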
The variational framework for learning inducing variables (Titsias, 2009a) has had a large impact on the Gaussian process literature. The framework may be interpreted as minimizing a rigorously defined Kullback-Leibler divergence between the approximating and posterior processes.
We propose a method to fuse posterior distributions learned from heterogeneous datasets. Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors and proceeds using a simple assign-and-average approach.
We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy.
In this paper, we derive a useful lower bound for the Kullback-Leibler divergence (KL-divergence) based on the Hammersley-Chapman-Robbins bound (HCRB). The HCRB states that the variance of an estimator is bounded from below in terms of the Chi-square divergence.
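For reference, a standard form of the HCRB (stated here for context, not quoted from the paper above) is: for an estimator $T$ under a parametric family $\{P_\theta\}$,
$$\operatorname{Var}_\theta(T) \;\geq\; \sup_{\theta' \neq \theta} \frac{\left(\mathbb{E}_{\theta'}[T]-\mathbb{E}_{\theta}[T]\right)^2}{\chi^2(P_{\theta'}||P_{\theta})},$$
which is where the Chi-square divergence mentioned above enters.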