
Kernel density estimation via diffusion

Posted by: Z. I. Botev
Publication date: 2010
Research field: Mathematical statistics
Paper language: English

We present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods. We present simulation examples in which the proposed approach outperforms existing methods in terms of accuracy and reliability.
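Since only the abstract is reproduced here, the following is a minimal Python sketch of the general idea of pilot-based adaptive kernel smoothing that the abstract alludes to (an Abramson-type variable-bandwidth estimator). It is not the diffusion-based estimator or the plug-in bandwidth selector proposed in the paper, and the rule-of-thumb pilot bandwidth is an illustrative assumption.

import numpy as np

def gaussian_kde(x_grid, data, h):
    # Fixed-bandwidth Gaussian kernel density estimate evaluated on x_grid.
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def adaptive_kde(x_grid, data, h0):
    # Pilot-based adaptive KDE: the bandwidth at each sample point scales as
    # (pilot density at that point)^(-1/2), relative to its geometric mean.
    pilot = gaussian_kde(data, data, h0)
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** (-0.5)
    h_i = h0 * lam  # per-sample bandwidths
    u = (x_grid[:, None] - data[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u**2) / h_i[None, :]).sum(axis=1) / (len(data) * np.sqrt(2 * np.pi))

# Example: bimodal data with a rule-of-thumb pilot bandwidth (an assumption,
# not the normal-reference-free selector proposed in the paper).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.5, 700)])
h0 = 1.06 * data.std() * len(data) ** (-1 / 5)
grid = np.linspace(-5, 9, 400)
density = adaptive_kde(grid, data, h0)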


Read also

In this work we study the estimation of the density of a totally positive random vector. Total positivity of the distribution of a random vector implies a strong form of positive dependence between its coordinates and, in particular, it implies positive association. Since estimating a totally positive density is a non-parametric problem, we take on a (modified) kernel density estimation approach. Our main result is that the sum of scaled standard Gaussian bumps centered at a min-max closed set provably yields a totally positive distribution. Hence, our strategy for producing a totally positive estimator is to form the min-max closure of the set of samples, and output a sum of Gaussian bumps centered at the points in this set. We can frame this sum as a convolution between the uniform distribution on a min-max closed set and a scaled standard Gaussian. We further conjecture that convolving any totally positive density with a standard Gaussian remains totally positive.
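A rough Python sketch of the strategy described above: form the min-max closure of the sample and return an equal-weight mixture of Gaussian bumps centered at the closure points. The closure routine and the bandwidth sigma are illustrative choices, not taken from the paper.

import numpy as np
from itertools import combinations

def min_max_closure(points, tol=1e-9):
    # Smallest set containing `points` that is closed under coordinatewise
    # minima and maxima; found by repeatedly adding mins/maxes of pairs.
    pts = {tuple(map(float, p)) for p in points}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(pts), 2):
            for new in (tuple(np.minimum(a, b)), tuple(np.maximum(a, b))):
                if all(np.max(np.abs(np.subtract(new, q))) > tol for q in pts):
                    pts.add(new)
                    changed = True
    return np.array(sorted(pts))

def tp_density_estimate(x, samples, sigma=0.5):
    # Equal-weight mixture of spherical Gaussian bumps centered at the
    # min-max closure of the sample.
    centers = min_max_closure(samples)
    d = samples.shape[1]
    sq_dist = ((x[None, :] - centers) ** 2).sum(axis=1)
    norm = (2 * np.pi * sigma**2) ** (d / 2)
    return np.exp(-0.5 * sq_dist / sigma**2).sum() / (len(centers) * norm)

# Example in d = 2: the closure of these three points adds (0, 0) and (1, 1).
samples = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])
print(tp_density_estimate(np.array([1.0, 1.0]), samples))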
This paper studies the estimation of the conditional density $f(x, \cdot)$ of $Y_i$ given $X_i = x$, from the observation of an i.i.d. sample $(X_i, Y_i) \in \mathbb{R}^d$, $i = 1, \ldots, n$. We assume that $f$ depends only on $r$ unknown components, with typically $r \ll d$. We provide an adaptive, fully nonparametric strategy based on kernel rules to estimate $f$. To select the bandwidth of our kernel rule, we propose a new fast iterative algorithm inspired by the Rodeo algorithm (Wasserman and Lafferty (2006)) to detect the sparsity structure of $f$. More precisely, in the minimax setting, our pointwise estimator, which is adaptive to both the regularity and the sparsity, achieves the quasi-optimal rate of convergence. Its computational complexity is only $O(dn \log n)$.
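For illustration, a minimal kernel rule for the conditional density with a fixed product Gaussian kernel is sketched below. The paper's actual contribution, the Rodeo-style iterative bandwidth selection that adapts to the sparsity structure, is not reproduced; the bandwidths h and b are simply user-supplied.

import numpy as np

def conditional_density(x, y, X, Y, h, b):
    # Kernel rule for f(y | X = x): Gaussian product kernel in the covariates
    # (bandwidth h) and a Gaussian kernel in the response (bandwidth b).
    wx = np.exp(-0.5 * (((X - x) / h) ** 2).sum(axis=1))
    ky = np.exp(-0.5 * ((Y - y) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    return (wx * ky).sum() / wx.sum()

# Example with d = 3 covariates of which only the first is relevant (r = 1).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
Y = X[:, 0] + 0.2 * rng.standard_normal(500)
print(conditional_density(np.array([0.5, 0.0, 0.0]), 0.5, X, Y, h=0.3, b=0.2))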
We aim at estimating the invariant density associated to a stochastic differential equation with jumps in low dimension, that is, for $d=1$ and $d=2$. We consider a class of jump diffusion processes whose invariant density belongs to some Hölder space. Firstly, in dimension one, we show that the kernel density estimator achieves the convergence rate $\frac{1}{T}$, which is the optimal rate in the absence of jumps. This improves the convergence rate obtained in [Amorino, Gloter (2021)], which depends on the Blumenthal-Getoor index for $d=1$ and is equal to $\frac{\log T}{T}$ for $d=2$. Secondly, we show that it is not possible to find an estimator with faster rates of estimation. Indeed, we get some lower bounds with the same rates $\{\frac{1}{T}, \frac{\log T}{T}\}$ in the one- and two-dimensional cases, respectively. Finally, we obtain the asymptotic normality of the estimator in the one-dimensional case.
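A sketch of a standard one-dimensional kernel invariant-density estimator of the form $\hat{\pi}_h(x) = \frac{1}{T}\int_0^T K_h(x - X_s)\,ds$, approximated on a discretely observed path. The simulated process below is a plain Ornstein-Uhlenbeck diffusion without jumps, used only to make the example self-contained; it is not the jump-diffusion setting of the paper.

import numpy as np

def invariant_density_estimate(x, path, dt, h):
    # Riemann-sum approximation of (1/T) * int_0^T K_h(x - X_s) ds with a
    # Gaussian kernel K_h, from a path observed on a grid of step dt.
    T = dt * len(path)
    u = (x - path) / h
    return dt * np.exp(-0.5 * u**2).sum() / (T * h * np.sqrt(2 * np.pi))

# Simulate an Ornstein-Uhlenbeck path dX_t = -X_t dt + dW_t, whose invariant
# law is N(0, 1/2), so the density at 0 is 1/sqrt(pi), roughly 0.564.
rng = np.random.default_rng(2)
dt, n = 0.01, 100_000
path = np.zeros(n)
for i in range(1, n):
    path[i] = path[i - 1] - path[i - 1] * dt + np.sqrt(dt) * rng.standard_normal()
print(invariant_density_estimate(0.0, path, dt, h=0.1))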
Salim Bouzebda (2009)
The purpose of this note is to provide an approximation for the generalized bootstrapped empirical process achieving the rate in Komlós et al. (1975). The proof is based on much the same arguments as in Horváth et al. (2000). As a consequence, we establish an approximation of the bootstrapped kernel-type density estimator.
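The object being approximated, a bootstrapped kernel-type density estimator, can be sketched in a few lines: resample the data with replacement and recompute the usual Gaussian KDE. This only illustrates the estimator itself, not the strong approximation argument of the note.

import numpy as np

def kde(x_grid, data, h):
    # Standard fixed-bandwidth Gaussian kernel density estimator.
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = rng.standard_normal(500)
h = 1.06 * data.std() * len(data) ** (-1 / 5)
grid = np.linspace(-3, 3, 200)

# Bootstrap resample (with replacement) and recompute the estimator.
boot = data[rng.integers(0, len(data), size=len(data))]
sup_dev = np.abs(kde(grid, boot, h) - kde(grid, data, h)).max()
print(sup_dev)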
Let $v$ be a vector field in a bounded open set $G \subset \mathbb{R}^d$. Suppose that $v$ is observed with a random noise at random points $X_i$, $i = 1, \ldots, n$, that are independent and uniformly distributed in $G$. The problem is to estimate the integral curve of the differential equation
\[
\frac{dx(t)}{dt} = v(x(t)), \qquad t \geq 0, \quad x(0) = x_0 \in G,
\]
starting at a given point $x(0) = x_0 \in G$ and to develop statistical tests for the hypothesis that the integral curve reaches a specified set $\Gamma \subset G$. We develop an estimation procedure based on a Nadaraya--Watson type kernel regression estimator, show the asymptotic normality of the estimated integral curve and derive differential and integral equations for the mean and covariance function of the limit Gaussian process. This provides a method of tracking not only the integral curve, but also the covariance matrix of its estimate. We also study the asymptotic distribution of the squared minimal distance from the integral curve to a smooth enough surface $\Gamma \subset G$. Building upon this, we develop testing procedures for the hypothesis that the integral curve reaches $\Gamma$. The problems of this nature are of interest in diffusion tensor imaging, a brain imaging technique based on measuring the diffusion tensor at discrete locations in the cerebral white matter, where the diffusion of water molecules is typically anisotropic. The diffusion tensor data is used to estimate the dominant orientations of the diffusion and to track white matter fibers from the initial location following these orientations. Our approach brings more rigorous statistical tools to the analysis of this problem providing, in particular, hypothesis testing procedures that might be useful in the study of axonal connectivity of the white matter.
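A compact Python sketch of the tracking idea: estimate the vector field by a Nadaraya-Watson kernel regression from the noisy observations, then integrate $\frac{dx}{dt} = \hat{v}(x)$ with Euler steps. The bandwidth, step size and the rotational test field are illustrative assumptions, not the paper's choices, and the asymptotic theory and testing procedures are not touched here.

import numpy as np

def nw_vector_field(x, X, V, h):
    # Nadaraya-Watson estimate of v(x) from noisy field observations V at points X.
    w = np.exp(-0.5 * (((X - x) / h) ** 2).sum(axis=1))
    return (w[:, None] * V).sum(axis=0) / w.sum()

def track_curve(x0, X, V, h, dt=0.02, steps=150):
    # Euler integration of dx/dt = v_hat(x(t)) starting from x0.
    curve = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        curve.append(curve[-1] + dt * nw_vector_field(curve[-1], X, V, h))
    return np.array(curve)

# Example: rotational field v(x) = (-x2, x1), observed with noise at uniform
# points in G = (-1, 1)^2; the tracked curve should stay near a circle.
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(2000, 2))
V = np.stack([-X[:, 1], X[:, 0]], axis=1) + 0.05 * rng.standard_normal((2000, 2))
curve = track_curve([0.5, 0.0], X, V, h=0.15)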