
Rate-optimal estimation for a general class of nonparametric regression models with unknown link functions

Posted by Enno Mammen
Publication date: 2008
Research field: Mathematical Statistics
Paper language: English





This paper discusses a nonparametric regression model that naturally generalizes neural network models. The model is based on a finite number of one-dimensional transformations and can be estimated with a one-dimensional rate of convergence. The model contains the generalized additive model with unknown link function as a special case. For this case, it is shown that the additive components and link function can be estimated with the optimal rate by a smoothing spline that is the solution of a penalized least squares criterion.
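For concreteness, the special case highlighted in the abstract can be written out as follows; the notation here is ours, not quoted from the paper. The generalized additive model with unknown link takes the form

Y = g( f_1(X^{(1)}) + \dots + f_d(X^{(d)}) ) + \varepsilon,

with g the unknown link and f_1, \dots, f_d the unknown additive components, each a function of a single coordinate of the predictor. A penalized least squares criterion of the smoothing spline type then reads, schematically,

\frac{1}{n} \sum_{i=1}^{n} \Big( Y_i - g\Big( \sum_{j=1}^{d} f_j(X_i^{(j)}) \Big) \Big)^{2} + \mu \int (g''(u))^{2}\, du + \sum_{j=1}^{d} \lambda_j \int (f_j''(x))^{2}\, dx,

and the paper's claim is that its minimizer attains the one-dimensional optimal rate despite the d-dimensional predictor; under the usual second-order smoothness this is the familiar n^{-2/5}.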




Read also

We introduce estimation and test procedures through divergence minimization for models satisfying linear constraints with an unknown parameter. Several statistical examples and motivations are given. These procedures extend the empirical likelihood (EL) method and share common features with generalized empirical likelihood (GEL). We treat the problems of existence and characterization of the divergence projections of probability measures on sets of signed finite measures. Our approach allows for a study of the estimates under misspecification. The asymptotic behavior of the proposed estimates is studied using the dual representation of the divergences and the explicit forms of the divergence projections. We discuss the choice of the divergence from several points of view, and we address efficiency and robustness properties of minimum divergence estimates. A simulation study shows that the Hellinger divergence enjoys good efficiency and robustness properties.
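To fix ideas for the abstract above, here is a minimal formalization in our own notation, not the paper's. With i.i.d. data X_1, \dots, X_n, an estimating function g, and the linear constraint E[g(X, \theta)] = 0, a minimum divergence estimator is

\hat{\theta} = \arg\min_{\theta} \inf \Big\{ D_{\varphi}(Q, P_n) : \int g(x, \theta)\, dQ(x) = 0 \Big\},

where P_n is the empirical measure, the infimum runs over signed finite measures Q, and D_{\varphi} is a \varphi-divergence. Assuming strong duality holds, the inner infimum admits the finite-dimensional dual representation

\sup_{\lambda \in \mathbb{R}^{k}} \; -\frac{1}{n} \sum_{i=1}^{n} \varphi^{*}\big( -\lambda^{\top} g(X_i, \theta) \big),

with \varphi^{*} the convex conjugate of \varphi (sign conventions vary by author); empirical likelihood is recovered for one particular choice of \varphi.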
In this paper, we build a new nonparametric regression estimator with the local linear method, using the mean squared relative error as a loss function when the data are subject to random right censoring. We establish the uniform almost sure consistency, with rate, of the proposed estimator over a compact set. Some simulations are given to show the asymptotic behavior of the estimate in different cases.
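As background for the loss used above, again a sketch in our own notation: the mean squared relative error criterion targets

m(x) = \arg\min_{\theta} E\Big[ \Big( \frac{Y - \theta}{Y} \Big)^{2} \,\Big|\, X = x \Big] = \frac{E[Y^{-1} \mid X = x]}{E[Y^{-2} \mid X = x]},

so the estimator amounts to local linear smoothing of the two inverse conditional moments. Under random right censoring one observes T = \min(Y, C) and \delta = \mathbf{1}\{Y \le C\}, and one plausible device (not necessarily the paper's exact construction) is to weight observation i by \delta_i / \bar{G}_n(T_i), where \bar{G}_n is the Kaplan-Meier estimator of the censoring survival function.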
We introduce and study a local linear nonparametric regression estimator for the censorship model. The main goal of this paper is to establish a uniform almost sure consistency result, with rate, over a compact set for the new estimator. To support our theoretical result, a simulation study has been carried out to compare it with the classical regression estimator.
In this paper, we study the properties of robust nonparametric estimation using deep neural networks for regression models with heavy-tailed error distributions. We establish non-asymptotic error bounds for a class of robust nonparametric regression estimators using deep neural networks with ReLU activation, under suitable smoothness conditions on the regression function and mild conditions on the error term. In particular, we only assume that the error distribution has a finite p-th moment with p greater than one. We also show that the deep robust regression estimators are able to circumvent the curse of dimensionality when the distribution of the predictor is supported on an approximate lower-dimensional set. An important feature of our error bound is that, for ReLU neural networks with network width and network size (number of parameters) no more than the order of the square of the dimensionality d of the predictor, our excess risk bounds depend sub-linearly on d. Our assumption relaxes the exact manifold support assumption, which could be restrictive and unrealistic in practice. We also relax several crucial assumptions on the data distribution, the target regression function and the neural networks required in the recent literature. Our simulation studies demonstrate that the robust methods can significantly outperform the least squares method when the errors have heavy-tailed distributions, and they illustrate that the choice of loss function is important in the context of deep nonparametric regression.
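The abstract above contrasts robust losses with least squares for ReLU networks under heavy-tailed noise. The sketch below is our own toy illustration, not the paper's code: it picks the Huber loss as the robust choice and a small fixed architecture (both are our assumptions, the paper's estimator class may differ), and draws Student-t noise with 1.5 degrees of freedom, which has a finite p-th moment only for p < 1.5, matching the paper's moment condition.

```python
# Toy comparison: robust (Huber) vs. least squares fitting of a ReLU network
# when the regression errors are heavy-tailed (Student-t, 1.5 df).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

n, d = 500, 2
X = torch.rand(n, d)
f = lambda x: torch.sin(2 * math.pi * x[:, 0]) + x[:, 1] ** 2  # true regression function
eps = torch.distributions.StudentT(df=1.5).sample((n,))        # heavy-tailed noise
y = f(X) + eps

def make_net():
    # A small ReLU network; width here is chosen for a stable toy fit,
    # not to match the paper's size conditions.
    return nn.Sequential(nn.Linear(d, 32), nn.ReLU(),
                         nn.Linear(32, 32), nn.ReLU(),
                         nn.Linear(32, 1))

def fit(loss_fn):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        loss = loss_fn(net(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    return net

robust_net = fit(nn.HuberLoss(delta=1.0))  # robust estimator
ls_net = fit(nn.MSELoss())                 # least squares baseline

# Evaluate against the noiseless truth on fresh inputs: with errors this
# heavy-tailed, the robust fit typically tracks f much more closely.
Xt = torch.rand(2000, d)
with torch.no_grad():
    err_robust = ((robust_net(Xt).squeeze(-1) - f(Xt)) ** 2).mean()
    err_ls = ((ls_net(Xt).squeeze(-1) - f(Xt)) ** 2).mean()
print(f"robust: {err_robust.item():.3f}  least squares: {err_ls.item():.3f}")
```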
H. Dette, G. Wieczorek (2008)
In this paper we propose a new test for the hypothesis of a constant coefficient of variation in the common nonparametric regression model. The test is based on an estimate of the $L^2$-distance between the square of the regression function and the variance function. We prove asymptotic normality of a standardized estimate of this distance under the null hypothesis and under fixed alternatives, and the finite sample properties of a corresponding bootstrap test are investigated by means of a simulation study. The results are applicable to stationary processes under common mixing conditions and are used to construct tests for ARCH assumptions in financial time series.
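To make the null hypothesis concrete, in our notation (which may differ in details from the paper's statistic): in the model Y = m(X) + \sigma(X)\varepsilon, constancy of the coefficient of variation means

H_0 : \frac{\sigma(x)}{m(x)} = c \ \text{ for all } x, \quad \text{equivalently} \quad \sigma^{2}(x) = c^{2} m^{2}(x),

so a natural $L^2$ measure of departure is

D = \min_{c} \int \big( \sigma^{2}(x) - c^{2} m^{2}(x) \big)^{2} w(x)\, dx

for some weight function w. The test rejects when a standardized estimate of D is large; asymptotic normality gives the limiting critical value, with the bootstrap used for calibration in finite samples.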