
An RKHS formulation of the inverse regression dimension-reduction problem

Posted by Tailen Hsing
Publication date: 2009
Research field: Mathematical Statistics
Paper language: English





Suppose that $Y$ is a scalar and $X$ is a second-order stochastic process, where $Y$ and $X$ are conditionally independent given the random variables $\xi_1,\ldots,\xi_p$, which belong to the closed span $L_X^2$ of $X$. This paper investigates a unified framework for the inverse regression dimension-reduction problem. It is found that the identification of $L_X^2$ with the reproducing kernel Hilbert space of $X$ provides a platform for a seamless extension from the finite- to infinite-dimensional settings. It also facilitates convenient computational algorithms that can be applied to a variety of models.
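The abstract describes the approach only at a high level. As a concrete illustration, here is a minimal numerical sketch of slice-based inverse regression (functional SIR) for a predictor observed on a grid; the function name `functional_sir`, the ridge-regularized covariance, and the toy data are assumptions made for illustration, not the paper's RKHS algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def functional_sir(X, Y, n_slices=10, n_dirs=2, ridge=1e-3):
    """Slice-based inverse regression for a functional predictor observed on a
    grid (X is n x m) and a scalar response Y. The ridge-regularized covariance
    is a stand-in for the paper's RKHS machinery."""
    n, m = X.shape
    Xc = X - X.mean(axis=0)
    # Group observations into slices of Y and compute slice means of X.
    slices = np.array_split(np.argsort(Y), n_slices)
    means = np.vstack([Xc[idx].mean(axis=0) for idx in slices])
    weights = np.array([len(idx) / n for idx in slices])
    # Covariance of the inverse regression curve E[X | Y].
    M = (means * weights[:, None]).T @ means
    # Regularized (discretized) covariance operator of X.
    S = Xc.T @ Xc / n + ridge * np.eye(m)
    # Leading generalized eigenvectors estimate the dimension-reduction directions.
    evals, evecs = eigh(M, S)
    return evecs[:, ::-1][:, :n_dirs]

# Toy example: Y depends on X only through one linear functional xi_1 of X.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
basis = np.array([np.sin((k + 1) * np.pi * t) for k in range(10)])  # 10 x 50
X = rng.standard_normal((200, 10)) @ basis                          # 200 curves on the grid
xi1 = X @ np.sin(np.pi * t) / len(t)
Y = np.sin(xi1) + 0.1 * rng.standard_normal(200)
print(functional_sir(X, Y).shape)  # (50, 2)
```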




Read also

We consider an $\ell_1$-penalization procedure in the non-parametric Gaussian regression model. In many concrete examples, the dimension $d$ of the input variable $X$ is very large (sometimes depending on the number of observations). Estimation of a $\beta$-regular regression function $f$ cannot be faster than the slow rate $n^{-2\beta/(2\beta+d)}$. Fortunately, in some situations, $f$ depends only on a small number of the coordinates of $X$. In this paper, we construct two procedures. The first one selects, with high probability, these coordinates. Then, using this subset-selection method, we run a local polynomial estimator (on the set of selected coordinates) to estimate the regression function at the rate $n^{-2\beta/(2\beta+d^*)}$, where $d^*$, the real dimension of the problem (the exact number of variables on which $f$ depends), replaces the dimension $d$ of the design. To achieve this result, we use an $\ell_1$-penalization method in this non-parametric setup.
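The abstract outlines the two-step procedure without implementation details. The sketch below uses scikit-learn's `LassoCV` as a stand-in for the paper's $\ell_1$-penalized selection step, followed by a simple local-linear smoother on the selected coordinates; the function names and toy model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def select_coordinates(X, Y, tol=1e-8):
    """Step 1: l1-penalized selection of the coordinates Y depends on
    (LassoCV is a stand-in for the paper's penalization procedure)."""
    coef = LassoCV(cv=5).fit(X, Y).coef_
    return np.flatnonzero(np.abs(coef) > tol)

def local_linear(Xs, Y, x0, bandwidth=0.5):
    """Step 2: local-linear estimate of f(x0) using only the selected coordinates."""
    w = np.exp(-np.sum((Xs - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    Z = np.hstack([np.ones((len(Xs), 1)), Xs - x0])
    beta, *_ = np.linalg.lstsq(Z * np.sqrt(w)[:, None], Y * np.sqrt(w), rcond=None)
    return beta[0]

rng = np.random.default_rng(1)
d, n = 50, 500
X = rng.uniform(-1, 1, size=(n, d))
Y = X[:, 0] ** 3 + 2 * X[:, 1] + 0.1 * rng.standard_normal(n)   # d* = 2 active coordinates
S = select_coordinates(X, Y)
fhat = local_linear(X[:, S], Y, X[0, S])
print("selected:", S, "estimate:", round(fhat, 3), "truth:", round(X[0, 0] ** 3 + 2 * X[0, 1], 3))
```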
Yehua Li, Tailen Hsing (2010)
In this paper, we consider regression models with a Hilbert-space-valued predictor and a scalar response, where the response depends on the predictor only through a finite number of projections. The linear subspace spanned by these projections is called the effective dimension reduction (EDR) space. To determine the dimensionality of the EDR space, we focus on the leading principal component scores of the predictor, and propose two sequential $\chi^2$ testing procedures under the assumption that the predictor has an elliptically contoured distribution. We further extend these procedures and introduce a test that simultaneously takes into account a large number of principal component scores. The proposed procedures are supported by theory, validated by simulation studies, and illustrated by a real-data example. Our methods and theory are applicable to functional data and high-dimensional multivariate data.
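The testing procedures themselves are not spelled out in the abstract. As a rough illustration, the sketch below applies Li's (1991) classical sequential chi-square test for sliced inverse regression to the leading standardized principal-component scores; the function names, slice count, and toy data are assumptions, and the paper's tests for elliptically contoured predictors differ in detail.

```python
import numpy as np
from scipy.stats import chi2

def sir_eigenvalues(Z, Y, n_slices=10):
    """Eigenvalues of the SIR candidate matrix for standardized scores Z (n x p)."""
    n, p = Z.shape
    slices = np.array_split(np.argsort(Y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    return np.sort(np.linalg.eigvalsh(M))[::-1]

def sequential_dimension_test(X, Y, n_components=5, n_slices=10, alpha=0.05):
    """Sequentially test the EDR dimension using the leading standardized
    principal-component scores (Li's 1991 chi-square statistic is a stand-in
    for the paper's procedures)."""
    n = len(Y)
    Xc = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Z = np.sqrt(n) * U[:, :n_components]        # standardized PC scores
    lam = sir_eigenvalues(Z, Y, n_slices)
    p = n_components
    for k in range(min(p, n_slices - 1)):
        stat = n * lam[k:].sum()                # test H0: dimension = k
        df = (p - k) * (n_slices - 1 - k)
        if chi2.sf(stat, df) > alpha:           # cannot reject H0
            return k
    return min(p, n_slices - 1)

rng = np.random.default_rng(2)
n, p = 1000, 20
X = rng.standard_normal((n, p)) * np.sqrt(np.linspace(4.0, 0.5, p))  # decaying variances
Y = 2 * X[:, 0] + (X[:, 1] / 2) ** 3 + 0.2 * rng.standard_normal(n)
print(sequential_dimension_test(X, Y))          # typically reports 2 for this toy model
```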
Based on the theory of reproducing kernel Hilbert spaces (RKHS) and semiparametric methods, we propose a new approach to nonlinear dimension reduction. The method extends the semiparametric approach to a more general setting in which both the parameters of interest and the nuisance parameters are infinite-dimensional. By casting the nonlinear dimension-reduction problem in a generalized semiparametric framework, we calculate the orthogonal complement of the generalized nuisance tangent space to derive the estimating equation. Solving the estimating equation via RKHS theory and regularization, we obtain estimates of the dimension-reduction directions of the sufficient dimension reduction (SDR) subspace and establish the asymptotic properties of the estimator. Furthermore, the proposed method does not rely on the linearity condition or the constant variance condition. Simulation and real-data studies are conducted to demonstrate the finite-sample performance of our method in comparison with several existing methods.
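The paper's estimating-equation machinery is not reproduced here. As a loose illustration of recovering SDR directions with RKHS tools, the sketch below uses a different, well-known heuristic, the outer product of gradients of a Gaussian kernel ridge regression fit; all names, the bandwidth, and the toy model are assumptions, and this is not the authors' estimator.

```python
import numpy as np

def kernel_gradient_sdr(X, Y, n_dirs=1, bandwidth=2.0, ridge=1e-3):
    """Estimate SDR directions from the gradients of a Gaussian kernel ridge
    regression fit (outer-product-of-gradients heuristic); an illustrative
    RKHS-based stand-in, not the paper's estimating-equation approach."""
    n, d = X.shape
    # Gaussian kernel matrix and kernel ridge coefficients alpha.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    K = np.exp(-sq / (2 * bandwidth ** 2))
    alpha = np.linalg.solve(K + n * ridge * np.eye(n), Y)
    # Gradient of the fit at each sample point; accumulate outer products.
    M = np.zeros((d, d))
    for i in range(n):
        diff = (X - X[i]) / bandwidth ** 2          # (X_j - X_i)/h^2 from the kernel gradient
        grad = diff.T @ (K[i] * alpha)
        M += np.outer(grad, grad) / n
    evals, evecs = np.linalg.eigh(M)
    return evecs[:, np.argsort(evals)[::-1][:n_dirs]]

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 5))
Y = np.tanh(X @ np.array([1.0, -1.0, 0, 0, 0])) + 0.1 * rng.standard_normal(300)
print(kernel_gradient_sdr(X, Y).ravel())   # roughly proportional to (1, -1, 0, 0, 0)
```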
Lee H. Dicker (2016)
We study asymptotic minimax problems for estimating a $d$-dimensional regression parameter over spheres of growing dimension ($d \to \infty$). Assuming that the data follow a linear model with Gaussian predictors and errors, we show that ridge regression is asymptotically minimax and derive new closed-form expressions for its asymptotic risk under squared-error loss. The asymptotic risk of ridge regression is closely related to the Stieltjes transform of the Mar\v{c}enko-Pastur distribution and the spectral distribution of the predictors from the linear model. Adaptive ridge estimators are also proposed (which adapt to the unknown radius of the sphere) and connections with equivariant estimation are highlighted. Our results are mostly relevant for asymptotic settings where the number of observations, $n$, is proportional to the number of predictors, that is, $d/n \to \rho \in (0,\infty)$.
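The closed-form Marchenko-Pastur expressions are not reproduced here. The sketch below only illustrates ridge regression in the proportional regime $d/n \to \rho$, with an oracle penalty for a parameter on a sphere of known radius and a plug-in adaptive penalty that estimates that radius; the penalty formulas are standard Bayes-oracle choices for isotropic Gaussian predictors, assumed for illustration rather than taken from the paper.

```python
import numpy as np

def ridge(X, Y, lam):
    """Ridge estimator (X'X + lam I)^{-1} X'Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

rng = np.random.default_rng(3)
n, rho, sigma = 2000, 0.5, 1.0
d = int(rho * n)
beta = rng.standard_normal(d)
beta *= 2.0 / np.linalg.norm(beta)                 # beta on a sphere of radius 2
X = rng.standard_normal((n, d))
Y = X @ beta + sigma * rng.standard_normal(n)

# Oracle penalty for isotropic Gaussian predictors: lam* = d * sigma^2 / ||beta||^2.
lam_oracle = d * sigma ** 2 / np.dot(beta, beta)
# Plug-in adaptive penalty using E[Y_i^2] = ||beta||^2 + sigma^2 (assumed known sigma).
r2_hat = max(np.mean(Y ** 2) - sigma ** 2, 1e-8)
lam_adapt = d * sigma ** 2 / r2_hat

for name, lam in [("oracle", lam_oracle), ("adaptive", lam_adapt)]:
    err = np.sum((ridge(X, Y, lam) - beta) ** 2)
    print(f"{name} ridge squared-error: {err:.3f}")
```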
Huiming Zhang (2018)
This short note points out that the proof of the high-dimensional asymptotic normality of the MLE for logistic regression under the regime $p_n = o(n)$ given in the paper "Maximum likelihood estimation in logistic regression models with a diverging number of covariates," Electronic Journal of Statistics, 6, 1838-1846, is incorrect.