
Stratified incomplete local simplex tests for curvature of nonparametric multiple regression

Posted by: Yanglei Song
Published: 2020
Research field: Mathematical Statistics
Paper language: English





Principled nonparametric tests for regression curvature in $\mathbb{R}^{d}$ are often statistically and computationally challenging. This paper introduces the stratified incomplete local simplex (SILS) tests for joint concavity of nonparametric multiple regression. The SILS tests with suitable bootstrap calibration are shown to achieve simultaneous guarantees on dimension-free computational complexity, polynomial decay of the uniform error-in-size, and power consistency for general (global and local) alternatives. To establish these results, a general theory for incomplete $U$-processes with stratified random sparse weights is developed. Novel technical ingredients include maximal inequalities for the supremum of multiple incomplete $U$-processes.
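The key primitive behind a simplex-type concavity test is simple to sketch. If the regression function m is concave and a design point x_0 lies in the convex hull of d+1 other points with barycentric weights λ_i, then m(x_0) ≥ Σ_i λ_i m(x_i), so positive values of Σ_i λ_i Y_i − Y_0 are evidence against concavity. The hypothetical Python sketch below averages this kernel over randomly sampled (d+2)-tuples, which is the "incomplete" part; the paper's stratification, localization, and bootstrap calibration are omitted, and all names are illustrative, not the authors' code.

```python
import numpy as np

def barycentric_coords(x0, simplex):
    """Solve for weights lam with simplex.T @ lam = x0 and sum(lam) = 1."""
    A = np.vstack([simplex.T, np.ones(simplex.shape[0])])  # (d+1) x (d+1)
    b = np.append(x0, 1.0)
    return np.linalg.solve(A, b)

def incomplete_simplex_stat(X, Y, n_tuples=5000, seed=0):
    """Average sum_i lam_i * Y_i - Y_0 over random (d+2)-tuples whose
    first point lies inside the simplex spanned by the other d+1 points;
    large positive values suggest a violation of concavity."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    vals, trials = [], 0
    while len(vals) < n_tuples and trials < 50 * n_tuples:
        trials += 1
        idx = rng.choice(n, size=d + 2, replace=False)
        x0, simplex = X[idx[0]], X[idx[1:]]
        try:
            lam = barycentric_coords(x0, simplex)
        except np.linalg.LinAlgError:   # degenerate simplex, skip
            continue
        if np.all(lam >= 0):            # x0 lies inside the simplex
            vals.append(lam @ Y[idx[1:]] - Y[idx[0]])
    return np.mean(vals)
```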


Read also

In this paper, we build a new nonparametric regression estimator by the local linear method, using the mean squared relative error as a loss function when the data are subject to random right censoring. We establish the uniform almost sure consistency, with rate, of the proposed estimator over a compact set. Simulations are given to illustrate the asymptotic behavior of the estimate in different cases.
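For intuition only (this is not the authors' construction, and the censoring correction, e.g. Kaplan-Meier reweighting, is omitted): with fully observed positive responses, a local linear fit under mean squared relative error reduces to weighted least squares with weights K_h(X_i − x)/Y_i². A minimal Python sketch with a Gaussian kernel:

```python
import numpy as np

def ll_msre(x, X, Y, h):
    """Local linear fit at x minimizing the kernel-weighted relative
    squared error sum_i K_h(X_i - x) * ((Y_i - a - b*(X_i - x)) / Y_i)^2,
    i.e. weighted least squares with weights K_h / Y_i^2 (requires Y_i > 0)."""
    k = np.exp(-0.5 * ((X - x) / h) ** 2)   # Gaussian kernel weights
    w = k / Y ** 2                          # relative-error reweighting
    Z = np.column_stack([np.ones_like(X), X - x])
    WZ = Z * w[:, None]
    a, b = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)
    return a                                # intercept = m_hat(x)
```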
We introduce and study a local linear nonparametric regression estimator for the censorship model. The main goal of this paper is to establish a uniform almost sure consistency result, with rate, over a compact set for the new estimator. To support our theoretical result, a simulation study compares the new estimator with the classical regression estimator.
H. Dette, G. Wieczorek (2008)
In this paper we propose a new test for the hypothesis of a constant coefficient of variation in the common nonparametric regression model. The test is based on an estimate of the $L^2$-distance between the square of the regression function and the variance function. We prove asymptotic normality of a standardized estimate of this distance under the null hypothesis and under fixed alternatives, and we investigate the finite-sample properties of a corresponding bootstrap test by means of a simulation study. The results are applicable to stationary processes under common mixing conditions and are used to construct tests for ARCH assumptions in financial time series.
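To make the target of the test concrete: under the null hypothesis, σ(x)/m(x) ≡ c, so the distance ∫ (σ²(x) − c² m(x)²)² dx vanishes. The hypothetical sketch below estimates this distance with Nadaraya-Watson smoothers on a grid; the paper's standardization and bootstrap calibration are not reproduced.

```python
import numpy as np

def nw(grid, X, Y, h):
    """Nadaraya-Watson estimate of E[Y | X = x] on a grid (Gaussian kernel)."""
    w = np.exp(-0.5 * ((grid[:, None] - X) / h) ** 2)
    return (w @ Y) / w.sum(axis=1)

def cv_distance(X, Y, h, grid):
    """Plug-in estimate of int (sigma^2(x) - c^2 * m(x)^2)^2 dx,
    which is zero iff the coefficient of variation sigma/m is constant."""
    m = nw(grid, X, Y, h)
    s2 = nw(grid, X, Y ** 2, h) - m ** 2    # conditional variance estimate
    c2 = np.mean(s2 / m ** 2)               # pooled CV^2 under the null
    return np.sum((s2 - c2 * m ** 2) ** 2) * (grid[1] - grid[0])
```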
We discuss nonparametric tests for parametric specifications of regression quantiles. The test is based on the comparison of parametric and nonparametric fits of these quantiles, where the nonparametric fit is a Nadaraya-Watson quantile smoothing estimator. An asymptotic treatment of the test statistic requires the development of new mathematical arguments: an approach that merely plugs in a Bahadur expansion of the nonparametric estimator is not satisfactory, because it requires overly strong conditions on the dimension and the choice of the bandwidth. Our alternative approach instead requires the calculation of moments of Nadaraya-Watson quantile regression estimators, which we carry out by applying higher-order Edgeworth expansions.
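For concreteness, the Nadaraya-Watson quantile smoother minimizes a kernel-weighted check loss, which is equivalent to taking a kernel-weighted sample quantile. A minimal sketch, assuming a Gaussian kernel for illustration:

```python
import numpy as np

def nw_quantile(x, X, Y, tau, h):
    """Nadaraya-Watson quantile smoother at x: the tau-quantile of Y
    weighted by kernel weights K_h(X_i - x), i.e. the minimizer of the
    kernel-weighted check loss."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)
    w /= w.sum()
    order = np.argsort(Y)
    cw = np.cumsum(w[order])                # weighted empirical CDF
    return Y[order][np.searchsorted(cw, tau)]
```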
In this paper, we study the properties of robust nonparametric estimation using deep neural networks for regression models with heavy-tailed error distributions. We establish non-asymptotic error bounds for a class of robust nonparametric regression estimators using deep neural networks with ReLU activation, under suitable smoothness conditions on the regression function and mild conditions on the error term. In particular, we only assume that the error distribution has a finite p-th moment with p greater than one. We also show that the deep robust regression estimators are able to circumvent the curse of dimensionality when the distribution of the predictor is supported on an approximate lower-dimensional set. An important feature of our error bound is that, for ReLU neural networks with network width and network size (number of parameters) no more than the order of the square of the dimensionality d of the predictor, our excess risk bounds depend sub-linearly on d. Our assumption relaxes the exact manifold support assumption, which can be restrictive and unrealistic in practice. We also relax several crucial assumptions on the data distribution, the target regression function and the neural networks required in the recent literature. Our simulation studies demonstrate that the robust methods can significantly outperform the least squares method when the errors have heavy-tailed distributions, and illustrate that the choice of loss function matters in deep nonparametric regression.
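A minimal PyTorch sketch of the general recipe described above, not the paper's exact estimator: a ReLU network fitted with the Huber loss, a standard robust alternative to least squares under heavy-tailed errors. Width, depth, and optimizer settings are illustrative.

```python
import torch
import torch.nn as nn

def fit_robust(X, Y, width=64, depth=3, epochs=500, lr=1e-3):
    """Fit a ReLU MLP to (X, Y) with the Huber loss; X is (n, d), Y is (n,)."""
    layers, d = [], X.shape[1]
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))
    net = nn.Sequential(*layers)
    loss_fn = nn.HuberLoss(delta=1.0)       # robust counterpart of nn.MSELoss()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(X).squeeze(-1), Y)
        loss.backward()
        opt.step()
    return net
```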