
Asymptotic normality for deconvolution kernel density estimators from random fields

Posted by: Jiexiang Li
Publication date: 2014
Research field: Mathematical statistics
Paper language: English
Author: Jiexiang Li





The paper discusses the estimation of a continuous density function of the target random field $X_{\mathbf{i}}$, $\mathbf{i}\in\mathbb{Z}^N$, which is contaminated by measurement errors. In particular, the observed random field $Y_{\mathbf{i}}$, $\mathbf{i}\in\mathbb{Z}^N$, is such that $Y_{\mathbf{i}}=X_{\mathbf{i}}+\epsilon_{\mathbf{i}}$, where the random error $\epsilon_{\mathbf{i}}$ comes from a known distribution and is independent of the target random field. Compared to existing results, the paper improves in two directions. First, random vectors rather than univariate random variables are investigated. Second, a random field with certain spatial interactions, instead of i.i.d. random variables, is studied. Asymptotic normality of the proposed estimator is established under appropriate conditions.
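For a concrete picture of the estimator being analyzed, the following is a minimal univariate sketch of a deconvolution kernel density estimator built by Fourier inversion with the known error characteristic function, assuming Gaussian measurement error and a kernel whose Fourier transform is $(1-s^2)^3$ on $[-1,1]$; the function name, bandwidth, and lattice size are illustrative choices and not the paper's multivariate construction.

```python
import numpy as np

def deconvolution_kde(x_grid, Y, h, sigma_eps, n_t=1024):
    """Deconvolution kernel density estimate of the density of X from
    contaminated observations Y = X + eps, with eps ~ N(0, sigma_eps^2) known.

    Fourier-inversion form: f_hat(x) = (1/2pi) * integral over |t| <= 1/h of
    exp(-i t x) * phi_K(h t) * ecf_Y(t) / phi_eps(t) dt, where phi_K is the
    Fourier transform of the kernel and ecf_Y the empirical characteristic
    function of the observations.
    """
    Y = np.asarray(Y, dtype=float).ravel()
    x_grid = np.asarray(x_grid, dtype=float)

    t = np.linspace(-1.0 / h, 1.0 / h, n_t)               # phi_K(h t) vanishes outside
    dt = t[1] - t[0]
    phi_K = (1.0 - (h * t) ** 2) ** 3                      # kernel Fourier transform at h*t
    phi_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)          # known Gaussian error char. function
    ecf_Y = np.mean(np.exp(1j * np.outer(t, Y)), axis=1)   # empirical char. function of Y

    integrand = phi_K * ecf_Y / phi_eps
    f_hat = np.real(np.exp(-1j * np.outer(x_grid, t)) @ integrand) * dt / (2.0 * np.pi)
    return np.clip(f_hat, 0.0, None)                       # clip small negative values

# toy usage: a latent field on a 30 x 30 lattice, observed with additive noise
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(30, 30))
    Y = X + rng.normal(0.0, 0.3, size=X.shape)
    grid = np.linspace(-4.0, 4.0, 201)
    f_hat = deconvolution_kde(grid, Y, h=0.45, sigma_eps=0.3)
```

In the paper's setting the sum over observations runs over a spatial lattice rather than an i.i.d. sample, but the estimator itself has the same plug-in Fourier form.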




Read also

A.J. van Es, H.-W. Uh (2001)
We derive asymptotic normality of kernel-type deconvolution estimators of the density, of the distribution function at a fixed point, and of the probability of an interval. We consider the so-called supersmooth case, where the characteristic function of the known distribution decreases exponentially. It turns out that the limit behavior of the pointwise estimators of the density and distribution function is relatively straightforward, while the asymptotics of the estimator of the probability of an interval depends in a complicated way on the sequence of bandwidths.
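Since this abstract distinguishes pointwise density and distribution-function estimation from interval probabilities, the small helper below is a hedged sketch of how the latter two can be read off by numerically integrating a density estimate; f_hat and grid stand for the hypothetical output and evaluation grid of the deconvolution sketch given earlier.

```python
import numpy as np

def interval_probability(f_hat, grid, a, b):
    """Estimate P(a < X <= b) by numerically integrating a deconvolution
    density estimate f_hat evaluated on an equispaced grid (for example,
    the output of the deconvolution_kde sketch above)."""
    mask = (grid > a) & (grid <= b)
    return np.trapz(f_hat[mask], grid[mask])

def distribution_function(f_hat, grid, x0):
    """Estimate the distribution function F(x0) the same way, integrating
    from the left end of the grid up to x0."""
    mask = grid <= x0
    return np.trapz(f_hat[mask], grid[mask])
```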
A.J. van Es, H.-W. Uh (2002)
We derive asymptotic normality of kernel-type deconvolution density estimators. In particular, we consider deconvolution problems where the known component of the convolution has a symmetric $\lambda$-stable distribution, $0 < \lambda \le 2$. It turns out that the limit behavior changes as the exponent parameter $\lambda$ passes the value one, the case of Cauchy deconvolution.
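The only ingredient that changes in the $\lambda$-stable setting is the error characteristic function $\phi_\epsilon(t)=\exp(-|ct|^\lambda)$; the snippet below simply illustrates that substitution, with the scale c and exponent lam left as free parameters (a sketch, not the paper's analysis).

```python
import numpy as np

def stable_cf(t, lam, c=1.0):
    """Characteristic function of a symmetric lambda-stable error,
    phi_eps(t) = exp(-|c t|**lam) for 0 < lam <= 2.
    lam = 1 gives the Cauchy case, lam = 2 the Gaussian case."""
    return np.exp(-np.abs(c * t) ** lam)

# Plugging this in for phi_eps in the deconvolution sketch above changes how
# fast the denominator decays, which is what drives the different limit
# behaviour on either side of lam = 1.
t = np.linspace(-10.0, 10.0, 5)
print(stable_cf(t, lam=1.0), stable_cf(t, lam=2.0))
```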
In the Gaussian white noise model, we study the estimation of an unknown multidimensional function $f$ in the uniform norm by using kernel methods. The performance of procedures is measured from the maxiset point of view: we determine the set of functions which are well estimated (at a prescribed rate) by each procedure. In this paper, we determine the maxisets associated with kernel estimators and with the Lepski procedure for rates of convergence of the form $(\log n/n)^{\beta/(2\beta+d)}$. We characterize the maxisets in terms of Besov and Hölder spaces of regularity $\beta$.
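A simplified, pointwise version of a Lepski-type bandwidth selection rule for kernel density estimation is sketched below; the threshold constant C and the error level $\sqrt{\log n/(nh)}$ are illustrative assumptions and not the paper's calibration in the white noise model.

```python
import numpy as np

def kde_at_point(x0, X, h):
    # Gaussian-kernel density estimate at a single point x0
    u = (x0 - np.asarray(X, dtype=float)) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2.0 * np.pi))

def lepski_bandwidth(x0, X, bandwidths, C=1.0):
    """Lepski-type rule: keep the largest bandwidth whose estimate stays within
    C * psi(h') of the estimates at every smaller bandwidth h'."""
    n = len(X)
    psi = lambda h: np.sqrt(np.log(n) / (n * h))
    bandwidths = sorted(bandwidths, reverse=True)
    est = {h: kde_at_point(x0, X, h) for h in bandwidths}
    for i, h in enumerate(bandwidths):                       # from largest to smallest
        smaller = bandwidths[i + 1:]
        if all(abs(est[h] - est[hp]) <= C * psi(hp) for hp in smaller):
            return h, est[h]                                 # last iteration always passes
```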
Jeremie Kellner (2015)
We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if the data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy), which is usually used for two-sample tests such as homogeneity or independence testing. Our method makes use of a special kind of parametric bootstrap (typical of goodness-of-fit tests) which is computationally more efficient than the standard parametric bootstrap. Moreover, an upper bound for the Type-II error highlights the dependence on influential quantities. Experiments illustrate the practical improvement allowed by our test in high-dimensional settings where common normality tests are known to fail. We also consider an application to covariance rank selection through a sequential procedure.
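The basic ingredients of such a test, a kernel MMD statistic compared against a parametric-bootstrap null distribution under a fitted Gaussian, can be sketched as follows; the RBF kernel, the bandwidth gamma, and the plain (not accelerated) bootstrap are assumptions for illustration and not the paper's computationally cheaper procedure.

```python
import numpy as np

def mmd2(X, Z, gamma=1.0):
    """V-statistic estimate of the squared MMD between samples X and Z
    for the Gaussian RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)
    return gram(X, X).mean() + gram(Z, Z).mean() - 2.0 * gram(X, Z).mean()

def gaussian_mmd_test(X, n_boot=200, gamma=1.0, seed=0):
    """Test H0: X ~ N(mu, Sigma) with mu, Sigma fitted from X (X is an (n, d)
    array, d >= 2), by comparing the observed MMD statistic to its
    parametric-bootstrap distribution under the fitted Gaussian."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    stat = mmd2(X, rng.multivariate_normal(mu, cov, size=n), gamma)
    boot = np.array([
        mmd2(rng.multivariate_normal(mu, cov, size=n),
             rng.multivariate_normal(mu, cov, size=n), gamma)
        for _ in range(n_boot)
    ])
    p_value = (1 + np.sum(boot >= stat)) / (1 + n_boot)
    return stat, p_value
```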
Jeremie Kellner (2014)
A new goodness-of-fit test for normality in high dimension (and in a Reproducing Kernel Hilbert Space) is proposed. It shares common ideas with the Maximum Mean Discrepancy (MMD), which it outperforms both in terms of computation time and applicability to a wider range of data. Theoretical results are derived for the Type-I and Type-II errors. They guarantee control of the Type-I error at a prescribed level and an exponentially fast decrease of the Type-II error. Synthetic and real data also illustrate the practical improvement allowed by our test compared with other leading approaches in high-dimensional settings.