
Non-Asymptotic Bounds for the $\ell_{\infty}$ Estimator in Linear Regression with Uniform Noise

Published by: Matey Neykov
Publication date: 2021
Research field: Mathematical Statistics
Language: English





The Chebyshev or $\ell_{\infty}$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_{\infty}$ objective function
\begin{align*}
\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_{\infty}.
\end{align*}
The asymptotic distribution of the Chebyshev estimator under a fixed number of covariates was recently studied (Knight, 2020), yet finite-sample guarantees and generalizations to high-dimensional settings remain open. In this paper, we develop non-asymptotic upper bounds on the estimation error $\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*\|_2$ for a Chebyshev estimator $\hat{\boldsymbol{\beta}}$, in a regression setting with uniformly distributed noise $\varepsilon_i \sim U([-a,a])$ where $a$ is either known or unknown. Under relatively mild assumptions on the (random) design matrix $\mathbf{X}$, we can bound the error rate by $\frac{C_p}{n}$ with high probability, for some constant $C_p$ depending on the dimension $p$ and the law of the design. Furthermore, we illustrate that there exist designs for which the Chebyshev estimator is (nearly) minimax optimal. In addition, we show that Chebyshev's LASSO has advantages over the regular LASSO in high-dimensional situations, provided that the noise is uniform. Specifically, we argue that it achieves a much faster rate of estimation under certain assumptions on the growth rate of the sparsity level and the ambient dimension with respect to the sample size.
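Because the $\ell_{\infty}$ objective is piecewise linear, the Chebyshev estimator can be computed exactly as a small linear program by introducing a slack variable $t \ge \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_{\infty}$. The following is a minimal sketch (not the authors' code); the function name `chebyshev_estimator` and the simulated Gaussian design are illustrative assumptions, chosen only to match the paper's uniform-noise setting.

```python
# Minimal sketch: the Chebyshev / L_inf estimator
#   argmin_beta ||Y - X beta||_inf
# rewritten as a linear program with one slack variable t.
import numpy as np
from scipy.optimize import linprog

def chebyshev_estimator(X, Y):
    """Solve min_{beta, t} t  subject to  -t <= Y - X beta <= t (elementwise)."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]                # objective: minimize t
    ones = np.ones((n, 1))
    A_ub = np.block([[ X, -ones],              #  X beta - t <=  Y
                     [-X, -ones]])             # -X beta - t <= -Y
    b_ub = np.r_[Y, -Y]
    bounds = [(None, None)] * p + [(0, None)]  # beta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.x[p]                 # (beta_hat, fitted L_inf error)

# Illustrative example with uniform noise U([-a, a]), as in the paper's setting.
rng = np.random.default_rng(0)
n, p, a = 500, 5, 1.0
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p)
Y = X @ beta_star + rng.uniform(-a, a, size=n)
beta_hat, t_hat = chebyshev_estimator(X, Y)
print("||beta_hat - beta_star||_2 =", np.linalg.norm(beta_hat - beta_star))
```

The Chebyshev LASSO mentioned above admits a similar linear-programming formulation: add a penalty $\lambda \sum_j u_j$ to the objective together with the constraints $-u_j \le \beta_j \le u_j$.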




Read also

Recently, the well-known Liu estimator (Liu, 1993) has attracted researchers' attention for regression parameter estimation in ill-conditioned linear models. It has also been argued that imposing a sub-space hypothesis restriction on the parameters improves estimation by shrinking toward non-sample information. Chang (2015) proposed the almost unbiased Liu estimator (AULE) in binary logistic regression. In this article, some improved unbiased Liu-type estimators, namely, the restricted AULE, preliminary test AULE, Stein-type shrinkage AULE and its positive part, are proposed for estimating the regression parameters in the binary logistic regression model, building on the work of Chang (2015). The performance of the newly defined estimators is analysed through some numerical results. A real data example is also provided to support the findings.
We analyse the reconstruction error of principal component analysis (PCA) and prove non-asymptotic upper bounds for the corresponding excess risk. These bounds unify and improve existing upper bounds from the literature. In particular, they give oracle inequalities under mild eigenvalue conditions. The bounds reveal that the excess risk differs significantly from usually considered subspace distances based on canonical angles. Our approach relies on the analysis of empirical spectral projectors combined with concentration inequalities for weighted empirical covariance operators and empirical eigenvalues.
Chenlei Leng, Xingwei Tong (2013)
We propose a censored quantile regression estimator motivated by unbiased estimating equations. Under the usual conditional independence assumption of the survival time and the censoring time given the covariates, we show that the proposed estimator is consistent and asymptotically normal. We develop an efficient computational algorithm which uses existing quantile regression code. As a result, bootstrap-type inference can be efficiently implemented. We illustrate the finite-sample performance of the proposed method by simulation studies and analysis of a survival data set.
Xinyi Xu, Feng Liang (2010)
We consider the problem of estimating the predictive density of future observations from a non-parametric regression model. The density estimators are evaluated under Kullback--Leibler divergence and our focus is on establishing the exact asymptotics of minimax risk in the case of Gaussian errors. We derive the convergence rate and constant for minimax risk among Bayesian predictive densities under Gaussian priors and we show that this minimax risk is asymptotically equivalent to that among all density estimators.
For the class of Gauss-Markov processes we study the problem of asymptotic equivalence of the nonparametric regression model with errors given by the increments of the process and the continuous time model, where a whole path of a sum of a deterministic signal and the Gauss-Markov process can be observed. In particular we provide sufficient conditions such that asymptotic equivalence of the two models holds for functions from a given class, and we verify these for the special cases of Sobolev ellipsoids and Hölder classes with smoothness index $> 1/2$ under mild assumptions on the Gauss-Markov process at hand. To derive these results, we develop an explicit characterization of the reproducing kernel Hilbert space associated with the Gauss-Markov process, which hinges on a characterization of such processes by a property of the corresponding covariance kernel introduced by Doob. In order to demonstrate that the given assumptions on the Gauss-Markov process are in some sense sharp we also show that asymptotic equivalence fails to hold for the special case of the Brownian bridge. Our results demonstrate that the well-known asymptotic equivalence of the Gaussian white noise model and the nonparametric regression model with independent standard normal distributed errors can be extended to a broad class of models with dependent data.