
Nonparametric deconvolution problem for dependent sequences

Added by Rafał Kulik
Publication date: 2008
Research language: English
Authors: Rafał Kulik





We consider the nonparametric estimation of the density function of weakly and strongly dependent processes with noisy observations. We show that in the ordinary smooth case the optimal bandwidth choice can be influenced by long-range dependence, in contrast to the standard case when no noise is present. In particular, if the dependence is moderate, the bandwidth, the rates of mean-square convergence and the central limit theorem are the same as in the i.i.d. case. If the dependence is strong enough, then the bandwidth choice is influenced by the strength of dependence, which differs from the non-noisy case; the central limit theorem is also influenced by the strength of dependence. On the other hand, if the density is supersmooth, then long-range dependence has no effect at all on the optimal bandwidth choice.
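To make the setting concrete, the following is a minimal sketch of a deconvolution kernel density estimator of the kind the abstract studies. All specifics here are illustrative assumptions, not the paper's construction: the noise is taken to be Laplace (an "ordinary smooth" error, with known characteristic function), the kernel is the sinc kernel (whose Fourier transform is the indicator of [-1, 1]), and the latent sample is i.i.d. rather than long-range dependent.

```python
import numpy as np

def deconv_kde(x_grid, y, h, sigma):
    """Deconvolution kernel density estimate of the latent density at x_grid.

    Sketch under illustrative assumptions: Y_j = X_j + eps_j with
    Laplace(0, sigma) noise, whose characteristic function
    phi_eps(t) = 1 / (1 + sigma**2 * t**2) is 'ordinary smooth'.
    With the sinc kernel, phi_K is the indicator of [-1, 1], so the
    Fourier inversion integral is truncated to |t| <= 1/h.
    """
    t = np.linspace(-1.0 / h, 1.0 / h, 1001)
    dt = t[1] - t[0]
    phi_hat_y = np.mean(np.exp(1j * np.outer(t, y)), axis=1)  # empirical c.f. of Y
    phi_eps = 1.0 / (1.0 + sigma ** 2 * t ** 2)               # known noise c.f.
    integrand = phi_hat_y / phi_eps                           # Fourier division
    est = np.sum(np.exp(-1j * np.outer(x_grid, t)) * integrand, axis=1) * dt
    return np.real(est) / (2.0 * np.pi)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)            # latent i.i.d. N(0, 1) sample
y = x + rng.laplace(0.0, 0.3, size=1000)       # noisy observations
grid = np.linspace(-4.0, 4.0, 9)
f_hat = deconv_kde(grid, y, h=0.4, sigma=0.3)  # should peak near x = 0
```

The paper's point is precisely about how the bandwidth `h` should be chosen when the latent sequence is dependent; in this i.i.d. sketch `h` is fixed by hand.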



Related research

A.J. van Es, H.-W. Uh (2002)
We derive asymptotic normality of kernel-type deconvolution density estimators. In particular we consider deconvolution problems where the known component of the convolution has a symmetric $\lambda$-stable distribution, $0 < \lambda \le 2$. It turns out that the limit behavior changes as the exponent parameter $\lambda$ passes the value one, the case of Cauchy deconvolution.
Jérôme Dedecker (2016)
We give the asymptotic behavior of the Mann-Whitney U-statistic for two independent stationary sequences. The result applies to a large class of short-range dependent sequences, including many non-mixing processes in the sense of Rosenblatt. We also give some partial results in the long-range dependent case, and we investigate other related questions. Based on the theoretical results, we propose some simple corrections of the usual tests for stochastic domination; next we simulate different (non-mixing) stationary processes to see that the corrected tests perform well.
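As an illustration, the Mann-Whitney U-statistic itself is simple to compute; the sketch below evaluates the uncorrected statistic on two independent samples. The dependence corrections proposed in the paper are not implemented here.

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U-statistic: fraction of pairs with X_i < Y_j.

    Under stochastic equality of the two sequences its expectation is
    about 1/2; values near 1 indicate that Y tends to dominate X.
    The corrections discussed in the abstract adjust the statistic's
    variance for (short-range) dependence and are not implemented here.
    """
    return np.mean(x[:, None] < y[None, :])

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 400)
y = rng.normal(0.5, 1.0, 300)   # shifted: Y stochastically dominates X
u = mann_whitney_u(x, y)        # expect roughly P(X < Y) ~ 0.64
```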
For the class of Gauss-Markov processes we study the problem of asymptotic equivalence of the nonparametric regression model with errors given by the increments of the process and the continuous time model, where a whole path of a sum of a deterministic signal and the Gauss-Markov process can be observed. In particular we provide sufficient conditions such that asymptotic equivalence of the two models holds for functions from a given class, and we verify these for the special cases of Sobolev ellipsoids and Hölder classes with smoothness index $> 1/2$ under mild assumptions on the Gauss-Markov process at hand. To derive these results, we develop an explicit characterization of the reproducing kernel Hilbert space associated with the Gauss-Markov process, which hinges on a characterization of such processes by a property of the corresponding covariance kernel introduced by Doob. In order to demonstrate that the given assumptions on the Gauss-Markov process are in some sense sharp we also show that asymptotic equivalence fails to hold for the special case of the Brownian bridge. Our results demonstrate that the well-known asymptotic equivalence of the Gaussian white noise model and the nonparametric regression model with independent standard normal distributed errors can be extended to a broad class of models with dependent data.
In this paper, a novel Bayesian nonparametric test for assessing multivariate normal models is presented. While there are extensive frequentist and graphical methods for testing multivariate normality, it is challenging to find Bayesian counterparts. The proposed approach is based on the use of the Dirichlet process and the Mahalanobis distance. More precisely, the Mahalanobis distance is employed to transform the $m$-variate problem into a univariate problem. Then the Dirichlet process is used as a prior on the distribution of the Mahalanobis distance. The concentration of the distribution of the distance between the posterior process and the chi-square distribution with $m$ degrees of freedom is compared to the concentration of the distribution of the distance between the prior process and the chi-square distribution with $m$ degrees of freedom via a relative belief ratio. The distance between the Dirichlet process and the chi-square distribution is established based on the Anderson-Darling distance. Key theoretical results of the approach are derived. The procedure is illustrated through several examples, in which the proposed approach shows excellent performance.
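The univariate reduction this abstract relies on is the classical fact that, under $m$-variate normality, squared Mahalanobis distances are approximately chi-square with $m$ degrees of freedom. A minimal sketch of that transformation (the Dirichlet-process and relative-belief machinery of the paper is not implemented here):

```python
import numpy as np

def mahalanobis_sq(X):
    """Squared Mahalanobis distances of the rows of X from the sample mean.

    Under m-variate normality these are approximately chi-square with
    m degrees of freedom; this is the univariate reduction the test uses.
    The Bayesian part of the procedure (Dirichlet process prior, relative
    belief ratio) is not sketched here.
    """
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)        # sample covariance (divisor n - 1)
    S_inv = np.linalg.inv(S)
    diff = X - mu
    return np.einsum('ij,jk,ik->i', diff, S_inv, diff)

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))         # m = 3 variate normal sample
d2 = mahalanobis_sq(X)                 # mean is close to m = 3 under normality
```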
Bayesian nonparametric statistics is an area of considerable research interest. While there has recently been extensive work on developing Bayesian nonparametric procedures for model checking, the use of the Dirichlet process, in its simplest form, along with the Kullback-Leibler divergence is still an open problem. This is mainly attributed to the discreteness of the Dirichlet process and the fact that the Kullback-Leibler divergence between any discrete distribution and any continuous distribution is infinite. The approach proposed in this paper, which is based on incorporating the Dirichlet process, the Kullback-Leibler divergence and the relative belief ratio, is considered the first concrete solution to this issue. Applying the approach is simple and does not require obtaining a closed form of the relative belief ratio. A Monte Carlo study and real data examples show that the developed approach exhibits excellent performance.
