
Prediction error after model search

Published by: Xiaoying Tian Harris
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





Estimation of the prediction error of a linear estimation rule is difficult if the data analyst also uses the data to select a set of variables and construct the estimation rule using only the selected variables. In this work, we propose an asymptotically unbiased estimator for the prediction error after model search. Under some additional mild assumptions, we show that our estimator converges to the true prediction error in $L^2$ at the rate of $O(n^{-1/2})$, with $n$ being the number of data points. Our estimator applies to general selection procedures, not requiring analytical forms for the selection. The number of variables to select from can grow as an exponential factor of $n$, allowing applications in high-dimensional data. It also allows model misspecification, not requiring linear underlying models. One application of our method is that it provides an estimator of the degrees of freedom for many discontinuous estimation rules like best subset selection or relaxed Lasso. The connection to Stein's Unbiased Risk Estimator is discussed. We consider in-sample prediction errors in this work, with some extension to out-of-sample errors in low-dimensional linear models. Examples such as best subset selection and relaxed Lasso are considered in simulations, where our estimator outperforms both $C_p$ and cross-validation in various settings.
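The bias the abstract describes is easy to reproduce. The sketch below is our own illustration, not the paper's estimator: it selects variables using the data (top-$k$ marginal correlations, a selection rule we chose for simplicity), refits OLS on the selected set, and compares a naive $C_p$-style estimate, which charges only $k$ degrees of freedom, against the true in-sample prediction error, which is computable here because the simulation knows the truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5          # samples, candidate variables, subset size
sigma = 1.0                   # known noise level (assumption of the sketch)

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 2.0                # sparse true signal
y = X @ beta + sigma * rng.standard_normal(n)

# Data-driven selection: keep the k columns most correlated with y.
score = np.abs(X.T @ y)
S = np.argsort(score)[-k:]

# OLS refit on the selected variables only.
XS = X[:, S]
coef, *_ = np.linalg.lstsq(XS, y, rcond=None)
resid = y - XS @ coef

# Naive Cp-style estimate: charges k degrees of freedom, ignoring the
# degrees of freedom consumed by the search over all p candidates.
naive_err = resid @ resid / n + 2 * sigma**2 * k / n

# True in-sample prediction error (we know the true mean mu = X beta).
mu = X @ beta
true_err = np.mean((mu - XS @ coef) ** 2) + sigma**2

print(f"naive Cp-style estimate: {naive_err:.3f}")
print(f"true in-sample error:    {true_err:.3f}")
```

Because the same data drive both the selection and the fit, the naive estimate is typically optimistic; the paper's estimator is designed to remove exactly this search-induced bias without needing an analytical form for the selection rule.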


Read also

78 - Wenjia Wang, Rui Tuo, 2017
Kriging based on Gaussian random fields is widely used in reconstructing unknown functions. The kriging method has pointwise predictive distributions which are computationally simple. However, in many applications one would like to predict for a range of untried points simultaneously. In this work we obtain some error bounds for the (simple) kriging predictor under the uniform metric. It works for a scattered set of input points in an arbitrary dimension, and also covers the case where the covariance function of the Gaussian process is misspecified. These results lead to a better understanding of the rate of convergence of kriging under the Gaussian or the Matérn correlation functions, the relationship between space-filling designs and kriging models, and the robustness of the Matérn correlation functions.
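The simple-kriging predictor the abstract analyzes has a closed form: for a zero-mean field with correlation matrix $K$ over the design points, the prediction at $x$ is $k(x)^\top K^{-1} y$. A minimal one-dimensional sketch, with the Matérn-1/2 (exponential) correlation and every design choice (length scale, grid, test function) assumed by us:

```python
import numpy as np

def exp_corr(a, b, ell=0.3):
    # Matérn correlation with smoothness 1/2 (exponential kernel)
    # between two column vectors of 1-d input points.
    d = np.abs(a[:, None, 0] - b[None, :, 0])
    return np.exp(-d / ell)

# Design points and the function to reconstruct (our choices).
Xtr = np.linspace(0.0, 1.0, 30)[:, None]
ytr = np.sin(6 * Xtr[:, 0])

# Correlation matrix of the design, with a small jitter for stability.
K = exp_corr(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))
alpha = np.linalg.solve(K, ytr)

# Pointwise predictor f_hat(x) = k(x)^T K^{-1} y on a fine grid.
Xte = np.linspace(0.0, 1.0, 200)[:, None]
pred = exp_corr(Xte, Xtr) @ alpha

# Sup-norm (uniform) error over the grid -- the metric studied above.
err = np.max(np.abs(pred - np.sin(6 * Xte[:, 0])))
print(f"sup-norm error on [0,1]: {err:.2e}")
```

Evaluating the error in the sup norm over the whole grid, rather than pointwise, is precisely the "range of untried points simultaneously" setting the bounds address.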
132 - Martin Wahl, 2018
We analyse the prediction error of principal component regression (PCR) and prove non-asymptotic upper bounds for the corresponding squared risk. Under mild assumptions, we show that PCR performs as well as the oracle method obtained by replacing empirical principal components by their population counterparts. Our approach relies on upper bounds for the excess risk of principal component analysis.
It is known that there is a dichotomy in the performance of model selectors. Those that are consistent (having the oracle property) do not achieve the asymptotic minimax rate for prediction error. We look at this phenomenon closely, and argue that the set of parameters on which this dichotomy occurs is extreme, even pathological, and should not be considered when evaluating model selectors. We characterize this set, and show that, when such parameters are dismissed from consideration, consistency and asymptotic minimaxity can be attained simultaneously.
In this paper, we build a new nonparametric regression estimator with the local linear method by using the mean squared relative error as a loss function when the data are subject to random right censoring. We establish the uniform almost sure consistency, with rate, of the proposed estimator over a compact set. Some simulations are given to show the asymptotic behavior of the estimator in different cases.
We consider a model where the failure hazard function, conditional on a covariate $Z$, is given by $R(t,\theta^0|Z)=\eta_{\gamma^0}(t)f_{\beta^0}(Z)$, with $\theta^0=(\beta^0,\gamma^0)^\top\in\mathbb{R}^{m+p}$. The baseline hazard function $\eta_{\gamma^0}$ and the relative risk $f_{\beta^0}$ both belong to parametric families. The covariate $Z$ is measured through the error model $U=Z+\epsilon$, where $\epsilon$ is independent of $Z$, with known density $f_\epsilon$. We observe an $n$-sample $(X_i, D_i, U_i)$, $i=1,\dots,n$, where $X_i$ is the minimum of the failure time and the censoring time, and $D_i$ is the censoring indicator. We aim at estimating $\theta^0$ in the presence of the unknown density $g$. Our estimation procedure, based on a least squares criterion, provides two estimators. The first one minimizes an estimate of the least squares criterion where $g$ is estimated by density deconvolution. Its rate depends on the smoothness of $f_\epsilon$ and of $f_\beta(z)$ as a function of $z$. We derive sufficient conditions that ensure $\sqrt{n}$-consistency. The second estimator is constructed under conditions ensuring that the least squares criterion can be directly estimated at the parametric rate. These estimators, studied in depth through examples, are in particular $\sqrt{n}$-consistent and asymptotically Gaussian in the Cox model and in the excess risk model, whatever $f_\epsilon$ is.
