
Unadjusted Langevin algorithm with multiplicative noise: Total variation and Wasserstein bounds

Submitted by Fabien Panloup
Published 2020
Language: English
Author: Gilles Pages





In this paper, we focus on non-asymptotic bounds related to the Euler scheme of an ergodic diffusion with a possibly multiplicative diffusion term (non-constant diffusion coefficient). More precisely, the objective of this paper is to control the distance of the standard Euler scheme with decreasing step (usually called the Unadjusted Langevin Algorithm in the Monte Carlo literature) to the invariant distribution of such an ergodic diffusion. In an appropriate Lyapunov setting and under uniform ellipticity assumptions on the diffusion coefficient, we establish (or improve) such bounds for the Total Variation and $L^1$-Wasserstein distances in both the multiplicative and additive frameworks. These bounds rely on weak error expansions using Stochastic Analysis adapted to the decreasing-step setting.
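As a rough illustration of the scheme discussed above, here is a minimal numerical sketch. The drift `b`, the diffusion coefficient `sigma`, and the step sequence $\gamma_k = \gamma_0/k^\alpha$ are illustrative choices, not those of the paper; the only features retained are the decreasing step and the non-constant (multiplicative) diffusion coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

def b(x):
    # confining drift (toy choice, not from the paper)
    return -x

def sigma(x):
    # non-constant diffusion coefficient, uniformly elliptic: sigma(x) >= 1
    return np.sqrt(1.0 + 0.5 * x**2)

def ula_decreasing_step(n_steps, gamma0=0.5, alpha=0.75, x0=0.0):
    """Euler scheme with decreasing steps gamma_k = gamma0 / k**alpha
    (the 'Unadjusted Langevin Algorithm' of the abstract, 1-D version)."""
    x = x0
    path = np.empty(n_steps)
    for k in range(1, n_steps + 1):
        gamma = gamma0 / k**alpha  # steps decrease to 0, their sum diverges
        x = x + gamma * b(x) + sigma(x) * np.sqrt(gamma) * rng.standard_normal()
        path[k - 1] = x
    return path

samples = ula_decreasing_step(200_000)
# the tail of the chain approximates the invariant distribution
print("tail mean/var:", samples[-50_000:].mean(), samples[-50_000:].var())
```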




Read also

Consider the empirical measure, $\hat{\mathbb{P}}_N$, associated to $N$ i.i.d. samples of a given probability distribution $\mathbb{P}$ on the unit interval. For fixed $\mathbb{P}$ the Wasserstein distance between $\hat{\mathbb{P}}_N$ and $\mathbb{P}$ is a random variable on the sample space $[0,1]^N$. Our main result is that its normalised quantiles are asymptotically maximised when $\mathbb{P}$ is a convex combination between the uniform distribution supported on the two points $\{0,1\}$ and the uniform distribution on the unit interval $[0,1]$. This allows us to obtain explicit asymptotic confidence regions for the underlying measure $\mathbb{P}$. We also suggest extensions to higher dimensions with numerical evidence.
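On the unit interval, the $L^1$-Wasserstein distance equals the $L^1$ norm of the CDF difference, which makes the quantity in this abstract easy to simulate. A minimal sketch, taking $\mathbb{P}$ uniform on $[0,1]$ for concreteness (sample size, grid resolution, and quantile level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def w1_empirical_vs_uniform(samples, grid_size=10_000):
    """W1 on [0,1] via the CDF representation:
    W1(P_hat_N, P) = int_0^1 |F_hat_N(x) - F(x)| dx, here with P = Unif[0,1]."""
    xs = np.linspace(0.0, 1.0, grid_size)
    F_hat = np.searchsorted(np.sort(samples), xs, side="right") / len(samples)
    return np.mean(np.abs(F_hat - xs))  # Riemann sum over [0,1]

N = 500
dists = [w1_empirical_vs_uniform(rng.uniform(size=N)) for _ in range(2_000)]
# sqrt(N)-normalised quantile, the kind of object studied in the abstract
print("normalised 95% quantile:", np.sqrt(N) * np.quantile(dists, 0.95))
```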
Xiao Fang, Yuta Koike (2020)
We prove the large-dimensional Gaussian approximation of a sum of $n$ independent random vectors in $\mathbb{R}^d$ together with fourth-moment error bounds on convex sets and Euclidean balls. We show that compared with classical third-moment bounds, our bounds have near-optimal dependence on $n$ and can achieve improved dependence on the dimension $d$. For centered balls, we obtain an additional error bound that has a sub-optimal dependence on $n$, but recovers the known result of the validity of the Gaussian approximation if and only if $d=o(n)$. We discuss an application to the bootstrap. We prove our main results using Stein's method.
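The kind of statement above can be checked empirically for a specific distribution. A toy Monte Carlo sketch, where Rademacher coordinates and a single centered Euclidean ball are arbitrary choices; this only illustrates the Gaussian approximation itself, not the stated rates:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, reps = 100, 10, 2_000
r = np.sqrt(d)  # radius of a centered Euclidean ball

# P(n^{-1/2} sum_i X_i in B(0, r)) for i.i.d. Rademacher coordinates ...
X = rng.choice([-1.0, 1.0], size=(reps, n, d))
S = X.sum(axis=1) / np.sqrt(n)  # normalised sums, covariance = I_d
p_sum = np.mean(np.linalg.norm(S, axis=1) <= r)

# ... against the same probability under the limiting N(0, I_d) law
Z = rng.standard_normal((reps, d))
p_gauss = np.mean(np.linalg.norm(Z, axis=1) <= r)
print(f"P(sum in ball) = {p_sum:.3f}  vs  Gaussian: {p_gauss:.3f}")
```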
We present an improved analysis of the Euler-Maruyama discretization of the Langevin diffusion. Our analysis does not require global contractivity, and yields polynomial dependence on the time horizon. Compared to existing approaches, we make an additional smoothness assumption, and improve the existing rate from $O(\eta)$ to $O(\eta^2)$ in terms of the KL divergence. This result matches the correct order for numerical SDEs, without suffering from exponential time dependence. When applied to algorithms for sampling and learning, this result simultaneously improves all those methods based on Dalalyan's approach.
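For the Ornstein-Uhlenbeck case the phenomenon is explicit: the Euler-Maruyama chain has a closed-form invariant law, and its KL divergence to the true invariant law is $O(\eta^2)$ even though the bias in the variance is $O(\eta)$. A small worked sketch of this computation; the OU example is ours, chosen because everything is closed-form, and is consistent with, but not taken from, the abstract:

```python
import numpy as np

def em_invariant_kl(eta):
    """For the Langevin diffusion dX = -X dt + sqrt(2) dW (invariant law
    N(0,1)), the Euler-Maruyama chain X_{k+1} = (1-eta) X_k + sqrt(2 eta) Z
    has invariant law N(0, v) with v = 1 / (1 - eta/2)."""
    v = 1.0 / (1.0 - eta / 2.0)
    return 0.5 * (v - 1.0 - np.log(v))  # KL(N(0,v) || N(0,1))

for eta in [0.1, 0.05, 0.025]:
    kl = em_invariant_kl(eta)
    # the ratio KL / eta^2 stabilises (near 1/16 here), showing the O(eta^2) rate
    print(f"eta={eta:<6} KL={kl:.2e}  KL/eta^2={kl / eta**2:.4f}")
```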
Anthony Reveillac (2008)
In this paper we give a central limit theorem for the weighted quadratic variations process of a two-parameter Brownian motion. As an application, we show that the discretized quadratic variations $\sum_{i=1}^{[ns]} \sum_{j=1}^{[nt]} |\Delta_{i,j} Y|^2$ of a two-parameter diffusion $Y=(Y_{(s,t)})_{(s,t)\in[0,1]^2}$ observed on a regular grid $G_n$ is an asymptotically normal estimator of the quadratic variation of $Y$ as $n$ goes to infinity.
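For the Brownian sheet itself, whose quadratic variation on $[0,s]\times[0,t]$ is $st$, the estimator above is easy to simulate, since the rectangular increments $\Delta_{i,j}Y$ on a regular grid are i.i.d. Gaussian. A minimal sketch; the grid size and evaluation point are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400  # regular grid G_n with mesh 1/n on [0,1]^2

# Rectangular increments of a Brownian sheet on G_n are i.i.d. N(0, 1/n^2);
# the sheet itself would be their double cumulative sum.
dY = rng.standard_normal((n, n)) / n

def discretized_qv(s, t):
    """sum_{i<=[ns]} sum_{j<=[nt]} |Delta_{i,j} Y|^2, the estimator of the
    abstract; for the Brownian sheet it converges to s*t."""
    return np.sum(dY[: int(n * s), : int(n * t)] ** 2)

print("QV estimate on [0,.5]x[0,.8]:", discretized_qv(0.5, 0.8), "(limit 0.4)")
```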
Suppose that a random variable $X$ of interest is observed perturbed by independent additive noise $Y$. This paper concerns the least favorable perturbation $\hat Y_\epsilon$, which maximizes the prediction error $E(X-E(X|X+Y))^2$ in the class of $Y$ with $\mathrm{var}(Y)\leq \epsilon$. We find a characterization of the answer to this question, and show by example that it can be surprisingly complicated. However, in the special case where $X$ is infinitely divisible, the solution is complete and simple. We also explore the conjecture that noisier $Y$ makes prediction worse.
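In the jointly Gaussian case the prediction error above is explicit, since $E(X\mid X+Y)$ is linear in $X+Y$. A Gaussian $Y$ is generally not the least favorable perturbation of the abstract, so the sketch below only makes the objective concrete; the variances are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
sx2, sy2 = 1.0, 0.25  # var(X), var(Y); var(Y) plays the role of epsilon

# Gaussian case: E(X | X+Y) = sx2/(sx2+sy2) * (X+Y), so the prediction
# error has the closed form sx2*sy2/(sx2+sy2).
X = rng.normal(0.0, np.sqrt(sx2), 1_000_000)
Y = rng.normal(0.0, np.sqrt(sy2), 1_000_000)
Xhat = (sx2 / (sx2 + sy2)) * (X + Y)  # conditional mean of X given X+Y
mc_error = np.mean((X - Xhat) ** 2)
closed_form = sx2 * sy2 / (sx2 + sy2)
print(f"Monte Carlo: {mc_error:.4f}   closed form: {closed_form:.4f}")
```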