This paper considers the regularized estimation of covariance matrices (CM) of high-dimensional (compound) Gaussian data for minimum variance distortionless response (MVDR) beamforming. Linear shrinkage is applied to improve the accuracy and condition number of the CM estimate in low-sample-support cases. We focus on data-driven techniques that automatically choose the linear shrinkage factors for the shrinkage sample covariance matrix ($\text{S}^2$CM) and the shrinkage Tyler's estimator (STE) by exploiting cross-validation (CV). We propose leave-one-out cross-validation (LOOCV) choices of the shrinkage factors that optimize the beamforming performance, referred to as $\text{S}^2$CM-CV and STE-CV. The (weighted) out-of-sample output power of the beamformer is chosen as a proxy for the beamformer performance, and concise expressions of the LOOCV cost function are derived to allow fast optimization. For the large-system regime, asymptotic approximations of the LOOCV cost functions are derived, yielding $\text{S}^2$CM-AE and STE-AE. In general, the proposed algorithms achieve near-oracle performance in choosing the linear shrinkage factors for MVDR beamforming. Simulation results are provided to validate the proposed methods.
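To make the selection step concrete, the following Python sketch chooses the linear shrinkage factor of a shrinkage sample covariance matrix by brute-force leave-one-out cross-validation, with the out-of-sample MVDR output power as the selection criterion. The data layout, the presumed steering vector, and the candidate grid are assumptions of this illustration; the paper's contribution is precisely the concise LOOCV expressions and large-system approximations that avoid this naive refitting.

```python
import numpy as np

def mvdr_weights(R, s):
    """MVDR weights w = R^{-1} s / (s^H R^{-1} s)."""
    Ri_s = np.linalg.solve(R, s)
    return Ri_s / (s.conj() @ Ri_s)

def loocv_shrinkage_factor(X, s, grid=np.linspace(0.01, 0.99, 50)):
    """Brute-force LOOCV choice of the linear shrinkage factor rho.

    X    : (n, p) complex snapshots (each row is one array snapshot)
    s    : (p,)  presumed steering vector of the signal of interest
    grid : candidate shrinkage factors

    The shrinkage SCM is R(rho) = (1 - rho) * SCM + rho * (tr(SCM)/p) * I.
    For each rho, each snapshot is left out in turn, the beamformer is
    built from the remaining snapshots, and the out-of-sample output
    power |w^H x_i|^2 is accumulated; the rho minimizing the average
    out-of-sample power is returned.
    """
    n, p = X.shape
    cost = np.zeros(len(grid))
    for k, rho in enumerate(grid):
        for i in range(n):
            Xi = np.delete(X, i, axis=0)
            S = Xi.T @ Xi.conj() / (n - 1)          # sample covariance matrix
            R = (1 - rho) * S + rho * (np.trace(S).real / p) * np.eye(p)
            w = mvdr_weights(R, s)
            cost[k] += np.abs(w.conj() @ X[i]) ** 2
    return grid[np.argmin(cost)]
```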
This paper investigates regularized estimation of Kronecker-structured covariance matrices (CM) for complex elliptically symmetric (CES) data. To obtain a well-conditioned estimate of the CM, we add Kullback-Leibler divergence penalty terms to the negative log-likelihood function of the associated complex angular Gaussian (CAG) distribution. This is shown to be equivalent to regularizing Tyler's fixed-point equations by shrinkage. A sufficient condition for the existence of the solution is discussed. An iterative algorithm is applied to solve the resulting fixed-point iterations, and its convergence is proved. To address the critical problem of tuning the shrinkage factors, we then introduce three methods that exploit oracle approximating shrinkage (OAS) and cross-validation (CV). When the training samples are limited, the proposed estimator, referred to as the robust shrinkage Kronecker estimator (RSKE), outperforms several existing methods. Simulations are conducted to validate the proposed estimator and demonstrate its performance.
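As a simplified illustration of the shrinkage-regularized fixed-point idea (without the Kronecker structure that defines the RSKE), a generic regularized Tyler iteration can be sketched as follows; the fixed shrinkage factor and the trace normalization are assumptions of this example, not the paper's tuned estimator.

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-8):
    """Shrinkage-regularized Tyler fixed-point iteration (unstructured sketch).

    X   : (n, p) complex samples, assumed centered
    rho : shrinkage factor in (0, 1]; larger rho pulls the estimate
          towards the identity and keeps it well conditioned

    Iterates
        Sigma <- (1 - rho) * (p/n) * sum_i x_i x_i^H / (x_i^H Sigma^{-1} x_i)
                 + rho * I,
    followed by the trace normalization tr(Sigma) = p.
    """
    n, p = X.shape
    Sigma = np.eye(p, dtype=complex)
    for _ in range(n_iter):
        Si = np.linalg.inv(Sigma)
        # quadratic forms x_i^H Sigma^{-1} x_i for all samples at once
        q = np.einsum('ij,jk,ik->i', X.conj(), Si, X).real
        weighted = (X / q[:, None]).T @ X.conj()    # sum_i x_i x_i^H / q_i
        S_new = (1 - rho) * (p / n) * weighted + rho * np.eye(p)
        S_new *= p / np.trace(S_new).real
        if np.linalg.norm(S_new - Sigma, 'fro') < tol:
            return S_new
        Sigma = S_new
    return Sigma
```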
The robust adaptive beamforming design problem based on estimation of the signal-of-interest steering vector is considered in this paper. In this case, the optimal beamformer is obtained by computing the sample matrix inverse and an optimal estimate of the signal-of-interest steering vector. The common criteria used to find the best estimate of the steering vector are the beamformer output SINR and output power, while the constraints assume as little prior (and possibly inaccurate) knowledge as possible about the signal of interest, the propagation media, and the antenna array. Herein, a new beamformer output power maximization problem is formulated and solved subject to a double-sided norm perturbation constraint, a similarity constraint, and a quadratic constraint that guarantees that the direction-of-arrival (DOA) of the signal of interest is away from the DOA region of all linear combinations of the interference steering vectors. In the new robust design, the prior information required consists of some allowable error norm bounds, approximate knowledge of the antenna array geometry, and the angular sector of the signal of interest. It turns out that the array output power maximization problem is a non-convex quadratically constrained quadratic programming (QCQP) problem with inhomogeneous constraints. However, we show that the problem is still solvable, and we develop efficient algorithms for finding the globally optimal estimate of the signal-of-interest steering vector. The results are generalized to the case where an ellipsoidal constraint is considered, and sufficient conditions for global optimality are derived. In addition, a new quadratic constraint on the actual signal steering vector is proposed in order to improve the array performance. To validate our results, simulation examples are presented; they demonstrate the improved performance of the new robust beamformers in terms of the output SINR as well as the output power.
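Stripped of the similarity and DOA constraints, the core of the output-power-maximization criterion has a simple closed form, which the sketch below illustrates: maximizing the MVDR output power $1/(\mathbf{a}^H \mathbf{R}^{-1} \mathbf{a})$ subject only to $\|\mathbf{a}\|^2 = N$ selects the principal eigenvector of the sample covariance matrix, which is why the additional constraints are needed to keep the estimate from drifting toward interference directions. This is an illustration of the unconstrained objective only, not the paper's QCQP solver.

```python
import numpy as np

def power_max_steering_estimate(R):
    """Unconstrained core of the output-power-maximization criterion.

    Maximizing 1 / (a^H R^{-1} a) subject only to ||a||^2 = N is
    equivalent to minimizing the Rayleigh quotient of R^{-1}, whose
    solution is the principal eigenvector of R scaled to norm sqrt(N).
    """
    N = R.shape[0]
    _, eigvecs = np.linalg.eigh(R)
    return np.sqrt(N) * eigvecs[:, -1]   # eigenvector of the largest eigenvalue

def mvdr_weights(R, a):
    """MVDR weights for covariance R and the estimated steering vector a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)
```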
The covariance matrix plays a fundamental role in many modern exploratory and inferential statistical procedures, including dimensionality reduction, hypothesis testing, and regression. In low-dimensional regimes, where the number of observations far exceeds the number of variables, the optimality of the sample covariance matrix as an estimator of this parameter is well-established. High-dimensional regimes do not admit such a convenience, however. As such, a variety of estimators have been derived to overcome the shortcomings of the sample covariance matrix in these settings. Yet, the question of selecting an optimal estimator from among the plethora available remains largely unaddressed. Using the framework of cross-validated loss-based estimation, we develop the theoretical underpinnings of just such an estimator selection procedure. In particular, we propose a general class of loss functions for covariance matrix estimation and establish finite-sample risk bounds and conditions for the asymptotic optimality of the cross-validated estimator selector with respect to these loss functions. We evaluate our proposed approach via a comprehensive set of simulation experiments and demonstrate its practical benefits by application in the exploratory analysis of two single-cell transcriptome sequencing datasets. A free and open-source software implementation of the proposed methodology, the cvCovEst R package, is briefly introduced.
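A minimal sketch of the selection procedure follows; the candidate estimators, the observation-level Frobenius loss, and the 5-fold split are assumptions of this example, and the cvCovEst package implements a far richer library of candidates and loss functions.

```python
import numpy as np

def sample_cov(X):
    return np.cov(X, rowvar=False, bias=True)

def linear_shrinkage(X, rho):
    S = sample_cov(X)
    p = S.shape[0]
    return (1 - rho) * S + rho * (np.trace(S) / p) * np.eye(p)

def cv_risk(estimator, X, n_folds=5, seed=0):
    """Cross-validated risk under the observation-level Frobenius loss
    L(Psi)(x) = ||x x^T - Psi||_F^2 (data assumed centered)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, n_folds)
    risk = 0.0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        Psi = estimator(X[train])
        risk += sum(np.linalg.norm(np.outer(x, x) - Psi, 'fro') ** 2
                    for x in X[fold])
    return risk / len(X)

# candidate library; the cross-validated selector picks the smallest risk
candidates = {
    'sample': sample_cov,
    'shrink_0.2': lambda X: linear_shrinkage(X, 0.2),
    'diagonal': lambda X: np.diag(np.var(X, axis=0)),
}
# best = min(candidates, key=lambda name: cv_risk(candidates[name], X))
```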
We seek to improve estimates of the power spectrum covariance matrix from a limited number of simulations by employing a novel statistical technique known as shrinkage estimation. The shrinkage technique optimally combines an empirical estimate of the covariance with a model (the target) to minimize the total mean squared error relative to the true underlying covariance. We test this technique on N-body simulations and evaluate its performance by estimating cosmological parameters. Using a simple diagonal target, we show that the shrinkage estimator significantly outperforms both the empirical covariance and the target individually when only a small number of simulations is available. We find that reducing noise in the covariance estimate is essential for properly estimating the values of cosmological parameters as well as their confidence intervals. We extend our method to the jackknife covariance estimator and again find significant improvement, though simulations give better results. Even for thousands of simulations we still find evidence that our method improves estimation of the covariance matrix. Because our method is simple, requires negligible additional numerical effort, and produces superior results, we advocate shrinkage estimation for the covariance of the power spectrum and other large-scale structure measurements whenever purely theoretical modeling of the covariance is insufficient.
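The estimator itself is simple to state: $\hat{C} = \lambda T + (1-\lambda) S$, where $S$ is the empirical covariance across simulations, $T$ is the target, and $\lambda$ is chosen from the data to minimize the expected squared error. The sketch below uses a diagonal target and a generic Ledoit-Wolf/Schäfer-Strimmer intensity estimate; the specific target and intensity formula used in the paper are assumptions here.

```python
import numpy as np

def shrinkage_covariance(X):
    """Shrinkage estimate C = lam * T + (1 - lam) * S with a diagonal target.

    X : (n_sims, n_bins) array; each row is one simulated power spectrum.

    S is the empirical covariance across simulations, T keeps only its
    diagonal, and the shrinkage intensity lam is a standard data-driven
    estimate (clipped to [0, 1]) of the value minimizing the expected
    squared error for this target.
    """
    n, p = X.shape
    D = X - X.mean(axis=0)
    W = np.einsum('ki,kj->kij', D, D)            # per-simulation outer products
    S = W.sum(axis=0) / (n - 1)                  # empirical covariance
    var_S = n / (n - 1) ** 3 * ((W - W.mean(axis=0)) ** 2).sum(axis=0)
    off = ~np.eye(p, dtype=bool)                 # off-diagonal mask
    lam = np.clip(var_S[off].sum() / (S[off] ** 2).sum(), 0.0, 1.0)
    T = np.diag(np.diag(S))
    return lam * T + (1 - lam) * S, lam
```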
In a Gaussian graphical model, the conditional independence between two variables is characterized by the corresponding zero entries in the inverse covariance matrix. Maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish the result that using the Bayesian information criterion (BIC) to select the tuning parameter in penalized likelihood estimation with both types of penalties leads to consistent graphical model selection. We compare the empirical performance of BIC with that of the cross-validation method and demonstrate, through simulation studies, the advantageous performance of the BIC criterion for tuning parameter selection.
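A sketch of the BIC tuning rule, using the scikit-learn graphical lasso ($\ell_1$ penalty) as a convenient stand-in for the SCAD and adaptive LASSO penalties studied in the article: for each candidate tuning parameter, BIC equals minus twice the Gaussian log-likelihood of the estimated precision matrix plus $\log(n)$ times the number of estimated edges.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def bic_select(X, alphas):
    """Select the graphical-lasso tuning parameter by BIC.

    X      : (n, p) data matrix
    alphas : candidate regularization parameters

    BIC(alpha) = -2 * loglik(Theta_hat) + log(n) * df(alpha), where the
    Gaussian log-likelihood (up to constants) is
        (n/2) * (log det Theta - tr(S Theta))
    and df counts the nonzero off-diagonal entries of Theta_hat in the
    upper triangle, i.e. the number of selected edges.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)
    best_alpha, best_bic = None, np.inf
    for alpha in alphas:
        Theta = GraphicalLasso(alpha=alpha).fit(X).precision_
        _, logdet = np.linalg.slogdet(Theta)
        loglik = 0.5 * n * (logdet - np.trace(S @ Theta))
        df = np.count_nonzero(np.triu(Theta, k=1))
        bic = -2.0 * loglik + np.log(n) * df
        if bic < best_bic:
            best_alpha, best_bic = alpha, bic
    return best_alpha
```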