
Robust Nonparametric Regression with Deep Neural Networks

 Added by Jian Huang
Publication date: 2021
Language: English





In this paper, we study the properties of robust nonparametric estimation using deep neural networks for regression models with heavy-tailed error distributions. We establish the non-asymptotic error bounds for a class of robust nonparametric regression estimators using deep neural networks with ReLU activation under suitable smoothness conditions on the regression function and mild conditions on the error term. In particular, we only assume that the error distribution has a finite p-th moment with p greater than one. We also show that the deep robust regression estimators are able to circumvent the curse of dimensionality when the distribution of the predictor is supported on an approximate lower-dimensional set. An important feature of our error bound is that, for ReLU neural networks with network width and network size (number of parameters) no more than the order of the square of the dimensionality d of the predictor, our excess risk bounds depend sub-linearly on d. Our assumption relaxes the exact manifold support assumption, which could be restrictive and unrealistic in practice. We also relax several crucial assumptions on the data distribution, the target regression function and the neural networks required in the recent literature. Our simulation studies demonstrate that the robust methods can significantly outperform the least squares method when the errors have heavy-tailed distributions and illustrate that the choice of loss function is important in the context of deep nonparametric regression.
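
To make the comparison in the simulation studies concrete, the sketch below (my own illustration, not the authors' code) fits a small ReLU network to data with Student-t errors with 2 degrees of freedom, once with the least squares loss and once with a robust Huber loss, and compares the resulting L2 error against the true regression function. The sine regression function, the network width, the Huber delta, and the optimizer settings are all illustrative assumptions rather than the paper's configuration.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)

# Simulated data: f(x) = sin(2*pi*x) with Student-t(2) errors, which have a
# finite p-th moment only for p < 2 (a heavy-tailed setting).
n, d = 500, 1
X = np.random.rand(n, d).astype(np.float32)
f_true = lambda x: np.sin(2 * np.pi * x[:, 0])
y = f_true(X) + np.random.standard_t(df=2, size=n)

X_t = torch.from_numpy(X)
y_t = torch.from_numpy(y.astype(np.float32)).unsqueeze(1)

def make_net():
    # A small ReLU network; width 64 is an arbitrary illustrative choice.
    return nn.Sequential(nn.Linear(d, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 1))

def fit(loss_fn, epochs=1000, lr=1e-2):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(X_t), y_t)
        loss.backward()
        opt.step()
    return net

net_ls = fit(nn.MSELoss())                # least squares estimator
net_rob = fit(nn.HuberLoss(delta=1.0))    # robust (Huber) estimator

# Evaluate against the noise-free regression function on a test grid.
X_test = np.linspace(0, 1, 200, dtype=np.float32).reshape(-1, 1)
f_test = f_true(X_test)
with torch.no_grad():
    pred_ls = net_ls(torch.from_numpy(X_test)).numpy().ravel()
    pred_rob = net_rob(torch.from_numpy(X_test)).numpy().ravel()
print(f"L2 error, least squares: {np.mean((pred_ls - f_test) ** 2):.3f}, "
      f"Huber: {np.mean((pred_rob - f_test) ** 2):.3f}")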



Related Research

We discuss nonparametric tests for parametric specifications of regression quantiles. The tests are based on the comparison of parametric and nonparametric fits of these quantiles, where the nonparametric fit is a Nadaraya-Watson quantile smoothing estimator. An asymptotic treatment of the test statistic requires the development of new mathematical arguments: an approach that relies only on plugging in a Bahadur expansion of the nonparametric estimator is not satisfactory, because it requires too strong conditions on the dimension and the choice of the bandwidth. Our alternative mathematical approach requires the calculation of moments of Nadaraya-Watson quantile regression estimators, which is done by applying higher order Edgeworth expansions.
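
For reference, a Nadaraya-Watson quantile smoothing estimator of the kind used as the nonparametric fit can be written as a kernel-weighted sample quantile. The sketch below is a minimal illustration under assumed choices (Gaussian kernel, hand-picked bandwidth, simulated data), not the authors' implementation.

import numpy as np

def nw_quantile(x0, X, Y, tau=0.5, h=0.1):
    # Gaussian kernel weights centered at the evaluation point x0
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    w = w / w.sum()
    # Weighted tau-quantile: smallest response whose cumulative weight reaches tau
    order = np.argsort(Y)
    cum_w = np.cumsum(w[order])
    idx = min(np.searchsorted(cum_w, tau), len(Y) - 1)
    return Y[order][idx]

# Example: kernel-smoothed conditional median of noisy data
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 300)
Y = np.sin(2 * np.pi * X) + rng.standard_t(3, size=300)
grid = np.linspace(0.05, 0.95, 10)
fit = [nw_quantile(x0, X, Y, tau=0.5, h=0.08) for x0 in grid]
print(np.round(fit, 2))
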
For the class of Gauss-Markov processes we study the problem of asymptotic equivalence of the nonparametric regression model with errors given by the increments of the process and the continuous time model, where a whole path of a sum of a deterministic signal and the Gauss-Markov process can be observed. In particular we provide sufficient conditions such that asymptotic equivalence of the two models holds for functions from a given class, and we verify these for the special cases of Sobolev ellipsoids and Hölder classes with smoothness index $> 1/2$ under mild assumptions on the Gauss-Markov process at hand. To derive these results, we develop an explicit characterization of the reproducing kernel Hilbert space associated with the Gauss-Markov process, which hinges on a characterization of such processes by a property of the corresponding covariance kernel introduced by Doob. In order to demonstrate that the given assumptions on the Gauss-Markov process are in some sense sharp, we also show that asymptotic equivalence fails to hold for the special case of the Brownian bridge. Our results demonstrate that the well-known asymptotic equivalence of the Gaussian white noise model and the nonparametric regression model with independent standard normal distributed errors can be extended to a broad class of models with dependent data.
H. Dette, G. Wieczorek (2008)
In this paper we propose a new test for the hypothesis of a constant coefficient of variation in the common nonparametric regression model. The test is based on an estimate of the $L^2$-distance between the square of the regression function and the variance function. We prove asymptotic normality of a standardized estimate of this distance under the null hypothesis and fixed alternatives, and the finite sample properties of a corresponding bootstrap test are investigated by means of a simulation study. The results are applicable to stationary processes under common mixing conditions and are used to construct tests for ARCH assumptions in financial time series.
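
As a rough illustration of the quantity being tested (not the paper's exact statistic or its bootstrap calibration), one can plug Nadaraya-Watson estimates of the regression and variance functions into a discretized $L^2$-distance between $\hat c^2 \hat m^2(x)$ and $\hat\sigma^2(x)$, with $\hat c^2$ estimated under the null of a constant coefficient of variation. The kernel, bandwidth, and simulated model below are assumptions made for the example.

import numpy as np

def nw(x0, X, Y, h):
    # Nadaraya-Watson estimate at x0 with a Gaussian kernel
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

rng = np.random.default_rng(2)
n, h = 400, 0.08
X = rng.uniform(0, 1, n)
m = 1.0 + X                                   # regression function m(x)
Y = m + 0.2 * m * rng.standard_normal(n)      # constant CV: sigma(x) = 0.2 * m(x)

grid = np.linspace(0.1, 0.9, 50)
m_hat = np.array([nw(x0, X, Y, h) for x0 in grid])
s2_hat = np.array([nw(x0, X, Y ** 2, h) for x0 in grid]) - m_hat ** 2  # variance estimate
c2_hat = np.mean(s2_hat / m_hat ** 2)                  # pooled squared CV under the null
dist = np.mean((c2_hat * m_hat ** 2 - s2_hat) ** 2)    # discretized L2-distance
print(f"estimated squared CV: {c2_hat:.3f}, L2-distance: {dist:.5f}")
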
This paper discusses a nonparametric regression model that naturally generalizes neural network models. The model is based on a finite number of one-dimensional transformations and can be estimated with a one-dimensional rate of convergence. The model contains the generalized additive model with unknown link function as a special case. For this case, it is shown that the additive components and link function can be estimated with the optimal rate by a smoothing spline that is the solution of a penalized least squares criterion.
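
The penalized least squares idea is easiest to see in the additive special case with a known (identity) link, which the sketch below illustrates by backfitting with smoothing splines; the unknown-link case treated in the paper requires an additional estimation step for the link and is not reproduced here. The simulated component functions and the smoothing parameter are my own assumptions.

import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
n = 400
X1, X2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X1) + (X2 - 0.5) ** 2 + 0.3 * rng.standard_normal(n)

alpha = Y.mean()                 # intercept; components are centered for identifiability
f1_hat, f2_hat = np.zeros(n), np.zeros(n)
for _ in range(20):              # backfitting iterations
    o1 = np.argsort(X1)
    s1 = UnivariateSpline(X1[o1], (Y - alpha - f2_hat)[o1], s=0.1 * n)  # illustrative smoothing
    f1_hat = s1(X1) - s1(X1).mean()
    o2 = np.argsort(X2)
    s2 = UnivariateSpline(X2[o2], (Y - alpha - f1_hat)[o2], s=0.1 * n)
    f2_hat = s2(X2) - s2(X2).mean()

true_f1 = np.sin(2 * np.pi * X1)
print("mean abs error for the first component:",
      round(float(np.mean(np.abs(f1_hat - (true_f1 - true_f1.mean())))), 3))
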
Principled nonparametric tests for regression curvature in $\mathbb{R}^{d}$ are often statistically and computationally challenging. This paper introduces the stratified incomplete local simplex (SILS) tests for joint concavity of nonparametric multiple regression. The SILS tests with suitable bootstrap calibration are shown to achieve simultaneous guarantees on dimension-free computational complexity, polynomial decay of the uniform error-in-size, and power consistency for general (global and local) alternatives. To establish these results, a general theory for incomplete $U$-processes with stratified random sparse weights is developed. Novel technical ingredients include maximal inequalities for the supremum of multiple incomplete $U$-processes.
