A new (unadjusted) Langevin Monte Carlo (LMC) algorithm with improved rates in total variation and in Wasserstein distance is presented. These results are obtained in the context of sampling from a target distribution $\pi$ that has a density $\hat{\pi}$ on $\mathbb{R}^d$ known up to a normalizing constant. Moreover, $-\log \hat{\pi}$ is assumed to have a locally Lipschitz gradient, and its third derivative is assumed to be locally H\"older continuous with exponent $\beta \in (0,1]$. Non-asymptotic bounds are obtained for the convergence to stationarity of the new sampling method, with convergence rate $1+\beta/2$ in Wasserstein distance, while the rate is shown to be $1$ in total variation even in the absence of convexity. Finally, in the case where $-\log \hat{\pi}$ is strongly convex and its gradient is Lipschitz continuous, explicit constants are provided.
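For orientation, the sketch below implements the classical unadjusted Langevin algorithm (ULA), $\theta_{k+1} = \theta_k - \gamma \nabla U(\theta_k) + \sqrt{2\gamma}\,\xi_{k+1}$ with $U = -\log \hat{\pi}$, which is the baseline scheme the proposed method improves upon; it is not the paper's new algorithm. The function names, the step size `gamma`, and the Gaussian example target are illustrative assumptions.

```python
import numpy as np

def ula_sample(grad_U, x0, gamma, n_steps, rng=None):
    """Minimal sketch of the classical unadjusted Langevin algorithm.

    Iterates x_{k+1} = x_k - gamma * grad_U(x_k) + sqrt(2*gamma) * xi_{k+1},
    where U = -log(pi_hat) and the xi are i.i.d. standard Gaussian vectors.
    The paper's new scheme, which achieves the improved rates via
    higher-order regularity of U, is not reproduced here.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.size)
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian, U(x) = ||x||^2 / 2, so grad_U(x) = x.
chain = ula_sample(grad_U=lambda x: x, x0=np.zeros(2), gamma=0.01, n_steps=5000)
```

For a small enough step size `gamma`, the empirical distribution of `chain` approximates the target; quantifying that bias as a function of `gamma` and the regularity of $-\log \hat{\pi}$ is precisely the subject of the non-asymptotic bounds above.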