
Bounding the error of discretized Langevin algorithms for non-strongly log-concave targets

Posted by: Arnak S. Dalalyan
Publication date: 2019
Research field: Informatics engineering
Paper language: English





In this paper, we provide non-asymptotic upper bounds on the error of sampling from a target density using three schemes of discretized Langevin diffusions. The first scheme is the Langevin Monte Carlo (LMC) algorithm, the Euler discretization of the Langevin diffusion. The second and the third schemes are, respectively, the kinetic Langevin Monte Carlo (KLMC) for differentiable potentials and the kinetic Langevin Monte Carlo for twice-differentiable potentials (KLMC2). The main focus is on the target densities that are smooth and log-concave on $\mathbb{R}^p$, but not necessarily strongly log-concave. Bounds on the computational complexity are obtained under two types of smoothness assumption: the potential has a Lipschitz-continuous gradient and the potential has a Lipschitz-continuous Hessian matrix. The error of sampling is measured by Wasserstein-$q$ distances. We advocate for the use of a new dimension-adapted scaling in the definition of the computational complexity, when Wasserstein-$q$ distances are considered. The obtained results show that the number of iterations to achieve a scaled-error smaller than a prescribed value depends only polynomially in the dimension.
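To make the discretizations concrete, below is a minimal NumPy sketch written for this summary, not taken from the paper. The function names (lmc, kinetic_langevin), the constant step size, the friction parameter gamma, and the zero initial velocity are illustrative assumptions; moreover, the kinetic sketch uses a crude Euler-Maruyama step, whereas the KLMC scheme analyzed in the paper discretizes the kinetic Langevin diffusion more carefully (its linear part can be integrated exactly).

import numpy as np

def lmc(grad_f, theta0, step, n_iter, rng=None):
    # Langevin Monte Carlo: Euler discretization of the Langevin diffusion
    # targeting a density proportional to exp(-f); illustrative sketch only.
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        xi = rng.standard_normal(theta.shape)
        # theta_{k+1} = theta_k - h * grad f(theta_k) + sqrt(2 h) * xi_k
        theta = theta - step * grad_f(theta) + np.sqrt(2.0 * step) * xi
    return theta

def kinetic_langevin(grad_f, theta0, gamma, step, n_iter, rng=None):
    # Crude Euler-Maruyama step for the kinetic (underdamped) Langevin diffusion
    #   d v_t     = -(gamma * v_t + grad f(theta_t)) dt + sqrt(2 gamma) dW_t,
    #   d theta_t = v_t dt,
    # used here only as a stand-in for the paper's KLMC scheme.
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)  # assumption: start the velocity at zero
    for _ in range(n_iter):
        xi = rng.standard_normal(theta.shape)
        v_next = v - step * (gamma * v + grad_f(theta)) + np.sqrt(2.0 * gamma * step) * xi
        theta = theta + step * v
        v = v_next
    return theta

# Example: approximate sampling from the standard Gaussian on R^p,
# whose potential f(x) = ||x||^2 / 2 has the Lipschitz gradient x.
p = 10
sample = lmc(lambda x: x, np.zeros(p), step=0.05, n_iter=2000)

Both routines return a single approximate draw; the bounds in the paper control the Wasserstein-$q$ distance between the law of such an iterate and the target.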




Read also

We present a new approach for inference about a log-concave distribution: instead of using the method of maximum likelihood, we propose to incorporate the log-concavity constraint in an appropriate nonparametric confidence set for the cdf $F$. This approach has the advantage that it automatically provides a measure of statistical uncertainty, and it thus overcomes a marked limitation of the maximum likelihood estimate. In particular, we show how to construct confidence bands for the density that have a guaranteed finite-sample confidence level. The nonparametric confidence set for $F$ which we introduce here has attractive computational and statistical properties: it allows one to bring modern tools from optimization to bear on this problem via difference-of-convex programming, and it results in optimal statistical inference. We show that the width of the resulting confidence bands converges at nearly the parametric $n^{-\frac{1}{2}}$ rate when the log density is $k$-affine.
We study nonparametric maximum likelihood estimation of a log-concave probability density and its distribution and hazard function. Some general properties of these estimators are derived from two characterizations. It is shown that the rate of convergence with respect to the supremum norm on a compact interval for the density and hazard rate estimator is at least $(\log(n)/n)^{1/3}$ and typically $(\log(n)/n)^{2/5}$, whereas the difference between the empirical and estimated distribution function vanishes with rate $o_{\mathrm{p}}(n^{-1/2})$ under certain regularity assumptions.
Tomohiro Nishiyama, 2019
Log-concave distributions include several important distributions such as the normal and exponential distributions. In this note, we show inequalities between two $L_p$-norms for log-concave distributions on the Euclidean space. These inequalities generalize the upper and lower bounds on the differential entropy and can also be interpreted as a kind of expansion of the inequality between two $L_p$-norms on a measurable set with finite measure.
Let $\{P_{\theta}:\theta \in \mathbb{R}^d\}$ be a log-concave location family with $P_{\theta}(dx)=e^{-V(x-\theta)}dx,$ where $V:\mathbb{R}^d\mapsto \mathbb{R}$ is a known convex function, and let $X_1,\dots, X_n$ be i.i.d. r.v. sampled from distribution $P_{\theta}$ with an unknown location parameter $\theta.$ The goal is to estimate the value $f(\theta)$ of a smooth functional $f:\mathbb{R}^d\mapsto \mathbb{R}$ based on observations $X_1,\dots, X_n.$ In the case when $V$ is sufficiently smooth and $f$ is a functional from a ball in a Hölder space $C^s,$ we develop estimators of $f(\theta)$ with minimax optimal error rates measured by the $L_2(\mathbb{P}_{\theta})$-distance as well as by more general Orlicz norm distances. Moreover, we show that if $d\leq n^{\alpha}$ and $s>\frac{1}{1-\alpha},$ then the resulting estimators are asymptotically efficient in the Hájek-LeCam sense with the convergence rate $\sqrt{n}.$ This generalizes earlier results on estimation of smooth functionals in Gaussian shift models. The estimators have the form $f_k(\hat\theta),$ where $\hat\theta$ is the maximum likelihood estimator and $f_k: \mathbb{R}^d\mapsto \mathbb{R}$ (with $k$ depending on $s$) are functionals defined in terms of $f$ and designed to provide a higher-order bias reduction in the functional estimation problem. The method of bias reduction is based on iterative parametric bootstrap and it has been successfully used before in the case of Gaussian models.
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, that is, a density of the form $f_0=\exp\varphi_0$ where $\varphi_0$ is a concave function on $\mathbb{R}$. The pointwise limiting distributions depend on the second and third derivatives at 0 of $H_k$, the lower invelope of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of $\varphi_0=\log f_0$ at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode $M(f_0)$ and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
