Testing heteroscedasticity of the errors is a major challenge in high-dimensional regressions where the number of covariates is large compared to the sample size. Traditional procedures such as the White and the Breusch-Pagan tests typically suffer from poor size and power performance in this setting. This paper proposes two new test procedures based on standard OLS residuals. Using the theory of Haar-distributed random orthogonal matrices, the asymptotic normality of both test statistics is established under the null when the number of degrees of freedom tends to infinity. This encompasses both the classical low-dimensional setting, where the number of variables is fixed while the sample size tends to infinity, and the proportional high-dimensional setting, where these dimensions grow to infinity proportionally. The procedures thus cover a wide range of dimensions in applications. To the best of our knowledge, these are the first procedures in the literature for testing heteroscedasticity that are valid for medium- and high-dimensional regressions. The superiority of the proposed tests over existing methods is demonstrated by extensive simulations as well as by several real data analyses.
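As a point of reference only, here is a minimal sketch of the classical Breusch-Pagan test computed from OLS residuals, i.e. the low-dimensional baseline mentioned above; it is not the paper's proposed procedure, and the function name and simulated design are illustrative assumptions.

```python
# Classical Breusch-Pagan LM test from OLS residuals (baseline, not the
# paper's high-dimensional statistics).
import numpy as np
from scipy import stats

def breusch_pagan(y, X):
    """LM form of the Breusch-Pagan test; X includes an intercept column."""
    n, p = X.shape
    # OLS fit and residuals
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Auxiliary regression of squared residuals on the covariates
    u = resid**2
    gamma, *_ = np.linalg.lstsq(X, u, rcond=None)
    fitted = X @ gamma
    r2 = 1 - np.sum((u - fitted)**2) / np.sum((u - u.mean())**2)
    lm = n * r2                      # LM statistic = n * R^2
    df = p - 1                       # slopes in the auxiliary regression
    return lm, stats.chi2.sf(lm, df)

rng = np.random.default_rng(0)
n, p = 200, 5
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
y = X @ rng.standard_normal(p) + rng.standard_normal(n)  # homoscedastic errors
print(breusch_pagan(y, X))
```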
For a multivariate linear model, Wilks' likelihood ratio test (LRT) constitutes one of the cornerstone tools. However, the computation of its quantiles under the null or the alternative requires complex analytic approximations and, more importantly, th
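For concreteness, a minimal sketch of the Wilks Lambda statistic for a multivariate linear model, together with Bartlett's classical chi-square approximation for its null quantiles, which is the kind of analytic approximation alluded to above; the tested hypothesis (all slopes equal to zero) and the names are illustrative assumptions, not the paper's setting.

```python
# Wilks' likelihood ratio statistic Lambda = det(E) / det(E + H) for testing
# that all slope coefficients of a multivariate linear model are zero,
# with Bartlett's chi-square approximation for its null quantiles.
import numpy as np
from scipy import stats

def wilks_lambda(Y, X):
    """Y: n x q responses, X: n x p design including an intercept column."""
    n, p = X.shape
    q = Y.shape[1]
    # Residual sum-of-products matrix under the full model
    B_full, *_ = np.linalg.lstsq(X, Y, rcond=None)
    E = (Y - X @ B_full).T @ (Y - X @ B_full)
    # Residual SSP under the intercept-only (reduced) model
    Yc = Y - Y.mean(axis=0)
    E0 = Yc.T @ Yc
    H = E0 - E                       # hypothesis SSP
    lam = np.linalg.det(E) / np.linalg.det(E + H)
    # Bartlett's approximation: chi-square with q * (p - 1) degrees of freedom
    stat = -(n - 1 - (p - 1 + q + 1) / 2) * np.log(lam)
    df = q * (p - 1)
    return lam, stat, stats.chi2.sf(stat, df)
```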
For high-dimensional small sample size data, Hotelling's $T^2$ test is not applicable for testing mean vectors due to the singularity problem in the sample covariance matrix. To overcome the problem, there are three main approaches in the literature. Not
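The singularity problem can be seen directly: below is a small illustration (not one of the remedies discussed above) showing that when the dimension exceeds the sample size, the sample covariance matrix has rank at most $n-1$ and therefore cannot be inverted; sizes are illustrative assumptions.

```python
# Hotelling's T^2 works when n > p, but the sample covariance is singular
# when p > n, so the statistic is undefined in that regime.
import numpy as np

def hotelling_t2(X, mu0):
    """One-sample Hotelling's T^2 for H0: mean = mu0; requires n - 1 >= p."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)          # p x p sample covariance
    diff = xbar - mu0
    return n * diff @ np.linalg.solve(S, diff)

rng = np.random.default_rng(0)
X_low = rng.standard_normal((50, 5))     # n > p: the statistic is well defined
print(hotelling_t2(X_low, np.zeros(5)))

X_high = rng.standard_normal((20, 100))  # p > n: S has rank at most n - 1
S = np.cov(X_high, rowvar=False)
print(np.linalg.matrix_rank(S))          # far below p = 100, so S is singular
```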
By studying the family of $p$-dimensional scale mixtures, this paper shows for the first time a non-trivial example where the eigenvalue distribution of the corresponding sample covariance matrix \emph{does not converge} to the celebrated Mar\v{c}enko-P
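A quick simulation sketch of the phenomenon: the eigenvalues of a sample covariance matrix built from i.i.d. rows stay inside the Mar\v{c}enko-Pastur support, while a scale-mixture model $x_i = w_i z_i$ spreads them far beyond it. The two-point mixing law used here is an arbitrary assumption, not the family analyzed in the paper.

```python
# Compare the spectrum of an i.i.d. sample covariance matrix with that of a
# scale-mixture model in the proportional regime p/n -> c.
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 500                          # proportional regime, c = p/n = 0.5

Z = rng.standard_normal((n, p))
eig_iid = np.linalg.eigvalsh(Z.T @ Z / n)

w = rng.choice([0.5, 2.0], size=n)        # two-point mixing law (assumed)
X = Z * w[:, None]                        # scale-mixture rows x_i = w_i * z_i
eig_mix = np.linalg.eigvalsh(X.T @ X / n)

# The i.i.d. spectrum lies near the MP support [(1-sqrt(c))^2, (1+sqrt(c))^2];
# the mixture spectrum extends well beyond it.
c = p / n
print("MP support:", ((1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2))
print("iid spectrum range:", eig_iid.min(), eig_iid.max())
print("mixture spectrum range:", eig_mix.min(), eig_mix.max())
```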
To rapidly approximate maximum likelihood estimators with massive data, this paper studies the Optimal Subsampling Method under the A-optimality Criterion (OSMAC) for generalized linear models. The consistency and asymptotic normality of the estimator f
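A simplified sketch in the spirit of a two-step subsampling scheme for logistic regression: a pilot fit on a uniform subsample, informative subsampling probabilities, then a weighted MLE on the subsample. The $|y - \hat p|\,\|x\|$ probabilities below are a simplification rather than the exact A-optimality (OSMAC) weights, and all sizes and names are illustrative assumptions.

```python
# Two-step subsampling for approximating the logistic regression MLE.
import numpy as np
from scipy.optimize import minimize

def weighted_logistic_mle(X, y, w):
    """Maximize a weighted logistic log-likelihood (BFGS on the negative)."""
    def negloglik(beta):
        eta = X @ beta
        return -np.sum(w * (y * eta - np.logaddexp(0.0, eta)))
    return minimize(negloglik, np.zeros(X.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
N, d = 100_000, 5
X = rng.standard_normal((N, d))
beta_true = np.linspace(-1, 1, d)
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta_true))))

# Step 1: pilot estimate from a small uniform subsample
r0 = 1_000
idx0 = rng.choice(N, r0, replace=False)
beta_pilot = weighted_logistic_mle(X[idx0], y[idx0], np.ones(r0))

# Step 2: informative subsampling probabilities and weighted re-fit
p_hat = 1 / (1 + np.exp(-(X @ beta_pilot)))
score = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
probs = score / score.sum()
r = 2_000
idx = rng.choice(N, r, replace=True, p=probs)
beta_sub = weighted_logistic_mle(X[idx], y[idx], 1.0 / probs[idx])
print(beta_true, beta_sub, sep="\n")
```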
Regression models with crossed random effect errors can be very expensive to compute. The cost of both generalized least squares and Gibbs sampling can easily grow as $N^{3/2}$ (or worse) for $N$ observations. Papaspiliopoulos et al. (2020) present a
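To see where the cost comes from, here is a small sketch of naive GLS for a crossed random effects model $y_{ij} = \mu + a_i + b_j + e_{ij}$: the induced covariance matrix is $N \times N$, so a dense solve is $O(N^3)$, and even structure-exploiting methods typically scale like $N^{3/2}$ or worse, as noted above. The variance components and design sizes are illustrative assumptions.

```python
# Naive GLS for the grand mean in a crossed random effects model; the dense
# N x N solve is the bottleneck the abstract refers to.
import numpy as np

rng = np.random.default_rng(0)
I, J = 30, 40                              # levels of the two crossed factors
N = I * J                                  # one observation per cell
rows = np.repeat(np.arange(I), J)
cols = np.tile(np.arange(J), I)

# Indicator (design) matrices for the two crossed random factors
Za = np.eye(I)[rows]                       # N x I
Zb = np.eye(J)[cols]                       # N x J

sa2, sb2, se2 = 1.0, 0.5, 1.0              # assumed variance components
V = sa2 * Za @ Za.T + sb2 * Zb @ Zb.T + se2 * np.eye(N)   # N x N covariance

# Simulate data from the model y_ij = mu + a_i + b_j + e_ij
a = rng.normal(0, np.sqrt(sa2), I)
b = rng.normal(0, np.sqrt(sb2), J)
y = 1.0 + a[rows] + b[cols] + rng.normal(0, np.sqrt(se2), N)

# GLS estimate of mu: (1' V^{-1} 1)^{-1} 1' V^{-1} y
one = np.ones(N)
Vinv_y = np.linalg.solve(V, y)             # dense O(N^3) solve
Vinv_1 = np.linalg.solve(V, one)
mu_gls = (one @ Vinv_y) / (one @ Vinv_1)
print(N, mu_gls)
```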