
Estimation of Conditional Mean Operator under the Bandable Covariance Structure

Posted by Kwangmin Lee
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





We consider high-dimensional multivariate linear regression models, where the joint distribution of covariates and response variables is a multivariate normal distribution with a bandable covariance matrix. The main goal of this paper is to estimate the regression coefficient matrix, which is a function of the bandable covariance matrix. Although the tapering estimator of covariance has the minimax optimal convergence rate for the class of bandable covariances, we show that it has a sub-optimal convergence rate for the regression coefficient; that is, a minimax estimator for the class of bandable covariances may not be a minimax estimator for its functionals. We propose the blockwise tapering estimator of the regression coefficient, which has the minimax optimal convergence rate for the regression coefficient under the bandable covariance assumption. We also propose a Bayesian procedure called the blockwise tapering post-processed posterior of the regression coefficient and show that the proposed Bayesian procedure has the minimax optimal convergence rate for the regression coefficient under the bandable covariance assumption. We show that the proposed methods outperform the existing methods via numerical studies.
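As a rough illustration of the plug-in idea the abstract discusses (not the authors' blockwise construction), the sketch below forms the standard Cai–Zhang–Zhou tapering estimator of the joint covariance of $(X, Y)$ and reads off the regression coefficient $B = \Sigma_{XX}^{-1}\Sigma_{XY}$; the bandwidth `k`, the variable ordering, and all function names are assumptions.

```python
# Minimal sketch (not the paper's blockwise tapering estimator):
# plug-in estimation of B = Sigma_XX^{-1} Sigma_XY from a tapered
# sample covariance of the joint vector Z = (X, Y).
import numpy as np

def tapering_weights(dim, k):
    """Cai-Zhang-Zhou taper: weight 1 within bandwidth k/2, linear decay to 0 at k."""
    idx = np.arange(dim)
    dist = np.abs(idx[:, None] - idx[None, :])
    return np.clip((k - dist) / (k / 2.0), 0.0, 1.0)

def tapered_regression_coef(Z, p, k):
    """Z: n x (p+q) samples of (X, Y); returns the p x q coefficient estimate."""
    S = np.cov(Z, rowvar=False)                    # sample covariance of (X, Y)
    S_tap = tapering_weights(Z.shape[1], k) * S    # elementwise tapering
    Sxx, Sxy = S_tap[:p, :p], S_tap[:p, p:]
    return np.linalg.solve(Sxx, Sxy)               # assumes the tapered Sxx block is invertible
```

According to the abstract, this naive route of plugging a globally tapered covariance into $\Sigma_{XX}^{-1}\Sigma_{XY}$ attains only a sub-optimal rate for the regression coefficient, which is what the blockwise tapering estimator and its post-processed-posterior Bayesian counterpart are designed to fix.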




Read also

We propose and analyze a new estimator of the covariance matrix that admits strong theoretical guarantees under weak assumptions on the underlying distribution, such as existence of moments of only low order. While estimation of covariance matrices corresponding to sub-Gaussian distributions is well understood, much less is known in the case of heavy-tailed data. As K. Balasubramanian and M. Yuan write, "data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails. In such cases, it is not clear that those covariance matrix estimators ... remain optimal and ... what are the other possible strategies to deal with heavy tailed distributions warrant further studies." We make a step towards answering this question and prove tight deviation inequalities for the proposed estimator that depend only on the parameters controlling the intrinsic dimension associated to the covariance matrix (as opposed to the dimension of the ambient space); in particular, our results are applicable in the case of high-dimensional observations.
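As a generic point of comparison only (this is not the estimator analyzed above), a simple device for heavy-tailed data is the element-wise median-of-means covariance; the block count below is an arbitrary assumption, and the sample is assumed to contain many more observations than blocks.

```python
# Illustrative sketch: element-wise median-of-means covariance,
# a generic robustification for heavy-tailed data (not the paper's estimator).
import numpy as np

def median_of_means_cov(X, n_blocks=10, seed=0):
    """X: n x d data; element-wise median of per-block sample covariances."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(X.shape[0]), n_blocks)
    covs = [np.cov(X[b], rowvar=False) for b in blocks]
    return np.median(np.stack(covs), axis=0)
```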
This paper studies the minimax rate of nonparametric conditional density estimation under a weighted absolute value loss function in a multivariate setting. We first demonstrate that conditional density estimation is impossible if one only requires that $p_{X|Z}$ is smooth in $x$ for all values of $z$. This motivates us to consider a sub-class of absolutely continuous distributions, restricting the conditional density $p_{X|Z}(x|z)$ to not only be Hölder smooth in $x$, but also be total variation smooth in $z$. We propose a corresponding kernel-based estimator and prove that it achieves the minimax rate. We give some simple examples of densities satisfying our assumptions, which imply that our results are not vacuous. Finally, we propose an estimator which achieves the minimax optimal rate adaptively, i.e., without the need to know the smoothness parameter values in advance. Crucially, both of our estimators (the adaptive and non-adaptive ones) impose no assumptions on the marginal density $p_Z$, and are not obtained as a ratio between two kernel smoothing estimators, which might seem like the go-to approach for this problem.
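For contrast with the last sentence, the ratio-form kernel estimator that the authors deliberately avoid looks roughly like the sketch below; the univariate setting, Gaussian kernels, and bandwidths are all illustrative assumptions.

```python
# Baseline sketch of the ratio-form kernel conditional density estimator
#   p_hat(x|z) = sum_i K_hx(x - X_i) K_hz(z - Z_i) / sum_i K_hz(z - Z_i),
# shown only as the construction the paper's estimators are NOT based on.
import numpy as np

def gauss_kernel(u, h):
    return np.exp(-0.5 * (u / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)

def ratio_cond_density(x, z, X, Z, hx=0.2, hz=0.2):
    """Univariate samples X, Z; returns the ratio-form estimate of p(x|z)."""
    wz = gauss_kernel(z - Z, hz)
    return np.sum(gauss_kernel(x - X, hx) * wz) / max(np.sum(wz), 1e-12)
```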
We consider the problem of estimating a low-rank covariance function $K(t,u)$ of a Gaussian process $S(t)$, $t \in [0,1]$, based on $n$ i.i.d. copies of $S$ observed in white noise. We suggest a new estimation procedure adapting simultaneously to the low-rank structure and the smoothness of the covariance function. The new procedure is based on nuclear norm penalization and exhibits superior performance compared to the sample covariance function by a polynomial factor in the sample size $n$. Other results include a minimax lower bound for estimation of low-rank covariance functions showing that our procedure is optimal, as well as a scheme to estimate the unknown noise variance of the Gaussian process.
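In the simplest finite-dimensional analogue, nuclear norm penalization of a covariance estimate reduces to soft-thresholding of eigenvalues; the sketch below shows only that proximal step, not the paper's functional-data procedure, and the penalty level `lam` is an assumption.

```python
# Finite-dimensional sketch of nuclear-norm penalization: eigenvalue
# soft-thresholding, which solves
#   argmin_K 0.5 * ||S - K||_F^2 + lam * ||K||_*   over symmetric K.
import numpy as np

def nuclear_norm_shrink(S, lam):
    """Soft-threshold the eigenvalues of a symmetric matrix S."""
    vals, vecs = np.linalg.eigh(S)
    shrunk = np.sign(vals) * np.maximum(np.abs(vals) - lam, 0.0)
    return (vecs * shrunk) @ vecs.T
```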
W. J. Hall, Jon A. Wellner (2017)
Yang (1978) considered an empirical estimate of the mean residual life function on a fixed finite interval. She proved it to be strongly uniformly consistent and (when appropriately standardized) weakly convergent to a Gaussian process. These results are extended to the whole half line, and the variance of the limiting process is studied. Also, nonparametric simultaneous confidence bands for the mean residual life function are obtained by transforming the limiting process to Brownian motion.
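The empirical mean residual life function underlying these results is simple to write down; the sketch below computes $\hat m(t) = \operatorname{mean}\{X_i - t : X_i > t\}$ and does not reproduce the standardization or the confidence bands discussed above.

```python
# Sketch of the empirical mean residual life function m_hat(t):
# the average excess over t among sample points exceeding t.
import numpy as np

def empirical_mrl(t, X):
    """Empirical mean residual life at t from a one-dimensional sample X."""
    tail = X[X > t]
    return np.nan if tail.size == 0 else float(np.mean(tail - t))
```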
Karine Bertin (2013)
In this paper we consider the problem of estimating $f$, the conditional density of $Y$ given $X$, by using an independent sample distributed as $(X,Y)$ in the multivariate setting. We consider the estimation of $f(x,.)$ where $x$ is a fixed point. We define two different estimation procedures, the first one using kernel rules, the second one inspired by projection methods. Both adaptive estimators are tuned using the Goldenshluger and Lepski methodology. After deriving lower bounds, we show that these procedures satisfy oracle inequalities and are optimal from the minimax point of view on anisotropic Hölder balls. Furthermore, our results allow us to measure precisely the influence of $f_X(x)$ on the rates of convergence, where $f_X$ is the density of $X$. Finally, some simulations illustrate the good behavior of our tuned estimators in practice.
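The Goldenshluger–Lepski methodology mentioned above balances a bias proxy against a variance-type penalty over a grid of bandwidths; the sketch below is a heavily simplified Lepski-type rule for a univariate kernel estimator at a fixed point, with an arbitrary constant `c` and bandwidth grid, and is not the paper's tuned procedure.

```python
# Heavily simplified Lepski-type bandwidth selection at a fixed point x
# (illustrative only; the Goldenshluger-Lepski rule used in the paper is
# more refined and its constants are calibrated).
import numpy as np

def kde_at(x, X, h):
    """Gaussian kernel density estimate at the point x."""
    return np.mean(np.exp(-0.5 * ((x - X) / h) ** 2)) / (np.sqrt(2.0 * np.pi) * h)

def lepski_bandwidth(x, X, grid, c=1.0):
    n = len(X)
    V = {h: c * np.sqrt(np.log(n) / (n * h)) for h in grid}   # variance-type penalty
    est = {h: kde_at(x, X, h) for h in grid}
    # Bias proxy: compare the estimate at max(h, h') with the one at h'.
    A = {h: max(max(abs(est[max(h, hp)] - est[hp]) - V[hp] for hp in grid), 0.0)
         for h in grid}
    return min(grid, key=lambda h: A[h] + 2.0 * V[h])
```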