We derive a new variational formula for the Rényi family of divergences, $R_\alpha(Q\|P)$, between probability measures $Q$ and $P$. Our result generalizes the classical Donsker-Varadhan variational formula for the Kullback-Leibler divergence. We further show that this Rényi variational formula holds over a range of function spaces; this leads to a formula for the optimizer under very weak assumptions and is also key in our development of a consistency theory for Rényi divergence estimators. By applying this theory to neural-network estimators, we show that if a neural network family satisfies one of several strengthened versions of the universal approximation property, then the corresponding Rényi divergence estimator is consistent.
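To make the connection concrete: the classical Donsker-Varadhan formula is $KL(Q\|P) = \sup_g \{ E_Q[g] - \log E_P[e^{g}] \}$, and one Rényi-type generalization of it, valid for $\alpha > 1$ and shown here only as a sketch (the exact form and the admissible function spaces in the paper may differ), is $R_\alpha(Q\|P) = \sup_g \{ \tfrac{\alpha}{\alpha-1} \log E_Q[e^{(\alpha-1)g}] - \log E_P[e^{\alpha g}] \}$; it reduces to the Donsker-Varadhan objective as $\alpha \to 1$ and is attained at $g = \log(dQ/dP)$ up to an additive constant. Replacing the supremum over all functions with a neural-network family and the expectations with sample averages yields an estimator of the kind described above. The sketch below is not the authors' code; the architecture, optimizer settings, and Gaussian toy data are illustrative assumptions.

    # A minimal sketch (not the authors' code) of a neural-network Renyi
    # divergence estimator built from the variational objective stated above.
    # Architecture, optimizer settings, and the Gaussian toy data are
    # illustrative assumptions; for this toy problem the true R_2 is 1.0.
    import math
    import torch
    import torch.nn as nn

    def renyi_variational_objective(g, x_q, x_p, alpha):
        # (alpha/(alpha-1)) * log E_Q[exp((alpha-1)*g)] - log E_P[exp(alpha*g)],
        # with expectations replaced by sample averages (log-mean-exp for stability)
        lme_q = torch.logsumexp((alpha - 1.0) * g(x_q), dim=0) - math.log(len(x_q))
        lme_p = torch.logsumexp(alpha * g(x_p), dim=0) - math.log(len(x_p))
        return (alpha / (alpha - 1.0)) * lme_q - lme_p

    alpha = 2.0
    net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))

    def g(z):
        return net(z).squeeze(-1)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        x_q = torch.randn(512, 1) + 1.0   # samples from Q = N(1, 1)
        x_p = torch.randn(512, 1)         # samples from P = N(0, 1)
        loss = -renyi_variational_objective(g, x_q, x_p, alpha)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("estimated R_alpha:", -loss.item())  # true value: alpha * mu^2 / 2 = 1.0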
Variational representations of divergences and distances between high-dimensional probability distributions offer significant theoretical insights and practical advantages in numerous research areas. Recently, they have gained popularity in machine learning.
The estimation of an f-divergence between two probability distributions based on samples is a fundamental problem in statistics and machine learning. Most works study this problem under very weak assumptions, in which case it is provably hard. We consider the problem under stronger structural assumptions.
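For reference (a standard definition, not specific to the work summarized above), an f-divergence is defined for a convex function $f$ with $f(1) = 0$ by
\[
D_f(Q\|P) = E_P\!\left[ f\!\left( \tfrac{dQ}{dP} \right) \right] = \int f\!\left( \tfrac{q(x)}{p(x)} \right) p(x)\, dx ,
\]
and the choice $f(t) = t \log t$ recovers the Kullback-Leibler divergence $KL(Q\|P)$; the estimation problem above asks for this quantity given only samples from $Q$ and $P$.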
We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs such as powers of the Wasserstein distance.
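For one of the quantities named above, the following is a minimal sketch of the standard non-sequential, unbiased plug-in estimator of the squared kernel maximum mean discrepancy from two fixed samples; the sequential, anytime-valid machinery the abstract describes is not shown, and the Gaussian kernel, bandwidth, and toy data are illustrative assumptions.

    # A minimal sketch of the non-sequential, unbiased plug-in estimator of the
    # squared kernel maximum mean discrepancy (MMD^2) with a Gaussian kernel.
    # Kernel choice, bandwidth, and the toy data are illustrative assumptions.
    import numpy as np

    def gaussian_kernel(x, y, bandwidth=1.0):
        # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)) for all pairs
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))

    def mmd2_unbiased(x, y, bandwidth=1.0):
        # unbiased U-statistic estimate of MMD^2(P, Q) from samples x ~ P, y ~ Q
        kxx = gaussian_kernel(x, x, bandwidth)
        kyy = gaussian_kernel(y, y, bandwidth)
        kxy = gaussian_kernel(x, y, bandwidth)
        n, m = len(x), len(y)
        term_xx = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
        term_yy = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
        return term_xx + term_yy - 2.0 * kxy.mean()

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=(500, 2))
    y = rng.normal(0.5, 1.0, size=(500, 2))
    print(mmd2_unbiased(x, y))  # positive for different distributions, near 0 if equal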
Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulator-based inference algorithm for this setting.
In this paper, we develop a novel way to perturbatively calculate Rényi relative divergences $D_{\gamma}(\rho\|\sigma) = \mathrm{tr}\,\rho^{\gamma}\sigma^{1-\gamma}$ and related quantities without using the replica trick or analytic continuation.
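As a sanity check on the quantity defined above (and not the perturbative expansion the abstract develops), the trace $\mathrm{tr}\,\rho^{\gamma}\sigma^{1-\gamma}$ can be evaluated directly for small density matrices via eigendecomposition; the example states below are illustrative assumptions.

    # Brute-force numerical evaluation of tr(rho^gamma sigma^(1-gamma)) for small
    # density matrices; this is not the perturbative method the abstract describes.
    import numpy as np

    def matrix_power_psd(a, p):
        # a^p for a Hermitian positive semidefinite matrix via eigendecomposition
        w, v = np.linalg.eigh(a)
        w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
        return (v * w ** p) @ v.conj().T

    def renyi_trace(rho, sigma, gamma):
        # tr(rho^gamma sigma^(1 - gamma))
        return np.trace(matrix_power_psd(rho, gamma) @ matrix_power_psd(sigma, 1.0 - gamma)).real

    # two qubit density matrices (diagonal, so the result is easy to verify by hand)
    rho = np.diag([0.75, 0.25])
    sigma = np.diag([0.5, 0.5])
    print(renyi_trace(rho, sigma, gamma=2.0))  # 0.75^2/0.5 + 0.25^2/0.5 = 1.25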