
Covariance Matrix Estimation from Linearly-Correlated Gaussian Samples

Added by: Yulong Liu
Publication date: 2018
Language: English





Covariance matrix estimation concerns the problem of estimating the covariance matrix from a collection of samples, which is of extreme importance in many applications. Classical results have shown that $O(n)$ samples are sufficient to accurately estimate the covariance matrix from $n$-dimensional independent Gaussian samples. However, in many practical applications, the received signal samples might be correlated, which makes the classical analysis inapplicable. In this paper, we develop a non-asymptotic analysis for the covariance matrix estimation from correlated Gaussian samples. Our theoretical results show that the error bounds are determined by the signal dimension $n$, the sample size $m$, and the shape parameter of the distribution of the correlated sample covariance matrix. Particularly, when the shape parameter is a class of Toeplitz matrices (which is of great practical interest), $O(n)$ samples are also sufficient to faithfully estimate the covariance matrix from correlated samples. Simulations are provided to verify the correctness of the theoretical results.
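As a rough illustration of the setting, the sketch below estimates a covariance matrix from linearly-correlated Gaussian samples. The mixing model Y = Σ^{1/2} G A with a Toeplitz shape matrix A, and the normalization by ‖A‖_F² (chosen so the estimator is unbiased under this particular model), are my own assumptions; the paper's exact model and estimator may differ.

```python
# Minimal sketch (not the paper's exact setup): samples correlated across the
# sample index via a Toeplitz "shape" matrix A, i.e. Y = Sigma^{1/2} G A with
# G an n x m matrix of i.i.d. N(0,1) entries. The estimator is normalized by
# ||A||_F^2, which makes it unbiased under this assumed model.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, m = 50, 500                                  # signal dimension, sample size

# Ground-truth covariance (for illustration only).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Sigma = Q @ np.diag(np.linspace(1.0, 3.0, n)) @ Q.T

# Toeplitz shape matrix: exponentially decaying correlation across samples.
A = toeplitz(0.5 ** np.arange(m))

G = rng.standard_normal((n, m))                 # independent Gaussian samples
Y = np.linalg.cholesky(Sigma) @ G @ A           # linearly-correlated samples

# Covariance estimate from the correlated samples.
Sigma_hat = (Y @ Y.T) / np.linalg.norm(A, "fro") ** 2

err = np.linalg.norm(Sigma_hat - Sigma, 2) / np.linalg.norm(Sigma, 2)
print(f"relative operator-norm error: {err:.3f}")
```

Increasing m while keeping n fixed should drive the reported error down, in line with the O(n) sample-complexity behavior the abstract describes for Toeplitz shape matrices.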



Related Research

We propose a novel pilot structure for covariance matrix estimation in massive multiple-input multiple-output (MIMO) systems in which each user transmits two pilot sequences, with the second pilot sequence multiplied by a random phase-shift. The covariance matrix of a particular user is obtained by computing the sample cross-correlation of the channel estimates obtained from the two pilot sequences. This approach relaxes the requirement that all the users transmit their uplink pilots over the same set of symbols. We derive expressions for the achievable rate and the mean-squared error of the covariance matrix estimate when the proposed method is used with staggered pilots. The performance of the proposed method is compared with existing methods through simulations.
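The cross-correlation idea can be sketched with a toy simulation. The snippet below is a simplified illustration under assumptions not stated in the abstract (a single user, perfect knowledge of the random phase at the receiver, white estimation noise): because the noise in the two channel estimates is independent, the sample cross-correlation is an unbiased estimate of the channel covariance.

```python
# Hedged sketch of the cross-correlation covariance estimate (single user,
# known phase at the receiver, no pilot contamination). Names and the noise
# model are illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(1)
M, K, sigma2 = 32, 2000, 0.5         # antennas, coherence blocks, noise power

# Ground-truth channel covariance R (Hermitian PSD), for illustration.
B = (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))) / np.sqrt(2)
R = B @ B.conj().T / M

L = np.linalg.cholesky(R + 1e-9 * np.eye(M))
R_hat = np.zeros((M, M), dtype=complex)
for _ in range(K):
    h = L @ (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    theta = rng.uniform(0, 2 * np.pi)                      # random phase shift
    n1 = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    n2 = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    h1 = h + n1                                            # estimate from pilot 1
    h2 = np.exp(1j * theta) * h + n2                       # estimate from pilot 2
    # Cross-correlation: independent noise terms average out, unlike h1 h1^H.
    R_hat += np.outer(h1, (np.exp(-1j * theta) * h2).conj())
R_hat = (R_hat + R_hat.conj().T) / (2 * K)                 # Hermitian average

print("relative error:", np.linalg.norm(R_hat - R) / np.linalg.norm(R))
```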
Obtaining channel covariance knowledge is of great importance in various multiple-input multiple-output (MIMO) communication applications, including channel estimation and covariance-based user grouping. In a massive MIMO system, covariance estimation proves to be challenging due to the large number of antennas ($M \gg 1$) employed at the base station and hence a high signal dimension. In this case, the number of pilot transmissions $N$ becomes comparable to the number of antennas, and standard estimators, such as the sample covariance, yield a poor estimate of the true covariance and are undesirable. In this paper, we propose a maximum-likelihood (ML) massive MIMO covariance estimator, based on a parametric representation of the channel angular spread function (ASF). The parametric representation emerges from super-resolving discrete ASF components via the well-known MUltiple SIgnal Classification (MUSIC) method and approximating its continuous component using a suitable limited-support density function. We maximize the likelihood function using a concave-convex procedure, which is initialized via a non-negative least-squares optimization problem. Our simulation results show that the proposed method outperforms the state of the art in various estimation quality metrics and for different sample-size-to-signal-dimension ($N/M$) ratios.
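The abstract mentions initializing the likelihood maximization with a non-negative least-squares (NNLS) problem. A hedged sketch of that kind of initialization is given below, assuming a uniform linear array and a fixed angle grid (both my assumptions, not details from the paper): the covariance is approximated as a non-negative combination of steering-vector outer products.

```python
# Hedged sketch of an NNLS initialization: fit non-negative ASF weights on an
# angle grid so that sum_i w_i a_i a_i^H approximates a given covariance.
# The ULA model, grid size, and toy test are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

M, G = 16, 181                        # antennas, angle-grid points
angles = np.linspace(-np.pi / 2, np.pi / 2, G)
# ULA steering vectors with half-wavelength spacing.
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(angles)))   # (M, G)

def nnls_asf(R_sample):
    """Fit non-negative ASF weights w so that sum_i w_i a_i a_i^H ~ R_sample."""
    # Each column of D is vec(a_i a_i^H); stack real/imag parts for real NNLS.
    D = np.stack([np.outer(A[:, i], A[:, i].conj()).ravel() for i in range(G)], axis=1)
    D_real = np.vstack([D.real, D.imag])
    r_real = np.concatenate([R_sample.ravel().real, R_sample.ravel().imag])
    w, _ = nnls(D_real, r_real)
    return w

# Toy test: two point sources plus white noise.
w_true = np.zeros(G); w_true[60] = 1.0; w_true[120] = 0.5
R_true = (A * w_true) @ A.conj().T + 0.1 * np.eye(M)
w_est = nnls_asf(R_true)
print("largest recovered grid weights:", np.round(np.sort(w_est)[-3:], 3))
```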
We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the inter-element spacing among sensors within a subarray is small, the noise in the subarray is in general spatially correlated, while, due to the large distances between subarrays, the noise between them is uncorrelated. Consequently, the noise covariance matrix of such an array has a block-diagonal structure, which is indeed sparse. Moreover, in an ordinary nonsparse array, because of the small distance between adjacent sensors, there is noise coupling between neighboring sensors, whereas one can assume that nonadjacent sensors have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing some recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method which can resolve possibly correlated or even coherent sources in the aforementioned partly correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramér-Rao bound (CRB).
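As a rough illustration of the low-rank/sparse decomposition tools this abstract refers to, the snippet below splits a toy covariance into a low-rank (source) part and a block-diagonal (noise) part with a generic nuclear-norm plus entrywise-l1 convex program in cvxpy. This is not the paper's exact SOCP formulation; the block sizes, regularization weight, and real-valued toy data are my own assumptions.

```python
# Hedged sketch: generic low-rank + sparse covariance decomposition (cvxpy).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, rank = 12, 2

# Toy ground truth: rank-2 source covariance + block-diagonal noise covariance.
U = rng.standard_normal((n, rank))
L_true = U @ U.T
S_true = np.zeros((n, n))
for blk in (slice(0, 4), slice(4, 8), slice(8, 12)):        # three subarrays
    B = rng.standard_normal((4, 4))
    S_true[blk, blk] = 0.3 * (B @ B.T / 4)
R = L_true + S_true

L_var = cp.Variable((n, n), symmetric=True)
S_var = cp.Variable((n, n), symmetric=True)
lam = 0.3                                                   # illustrative weight
prob = cp.Problem(cp.Minimize(cp.normNuc(L_var) + lam * cp.sum(cp.abs(S_var))),
                  [L_var + S_var == R, L_var >> 0])
prob.solve()
print("recovered rank (eigvals > 1e-3):",
      int(np.sum(np.linalg.eigvalsh(L_var.value) > 1e-3)))
```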
This paper presents a novel power spectral density estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise-constant approximations whose resolutions (widths of the piecewise-constant segments) are controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least-squares solutions. Because of the non-negativity of power spectra, compressive estimates can be computed by seeking non-negative least-squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra because one algorithm can be used to compute both compressive and noncompressive estimates.
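The non-negative least-squares route to a piecewise-constant power spectrum can be sketched as follows. For simplicity, the snippet below works from Nyquist-rate autocorrelation estimates rather than the paper's multi-coset sub-Nyquist samples; the band grid and the test signal are my own assumptions.

```python
# Hedged sketch: piecewise-constant PSD estimate via non-negative least squares.
import numpy as np
from scipy.optimize import nnls
from scipy.signal import lfilter, firwin

rng = np.random.default_rng(3)
N, K, J = 200_000, 40, 25             # samples, autocorrelation lags, PSD bands

# WSS test signal: white noise through a band-pass FIR filter.
h = firwin(65, [0.15, 0.30], pass_zero=False)
x = lfilter(h, 1.0, rng.standard_normal(N))

# Biased autocorrelation estimates r[0..K-1].
r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(K)])

# Band edges on [0, 0.5) (real signal, even spectrum assumed); each row of C
# maps the per-band PSD levels to one autocorrelation lag.
edges = np.linspace(0.0, 0.5, J + 1)
C = np.zeros((K, J))
C[0] = 2.0 * np.diff(edges)                                 # k = 0 row
k = np.arange(1, K)[:, None]
C[1:] = (np.sin(2 * np.pi * edges[1:] * k)
         - np.sin(2 * np.pi * edges[:-1] * k)) / (np.pi * k)

p, _ = nnls(C, r)                                           # non-negative levels
print("estimated PSD levels per band:", np.round(p, 4))
```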
We consider estimating the parameters of a Gaussian mixture density with a given number of components that best represents a given set of weighted samples. We adopt a density interpretation of the samples by viewing them as a discrete Dirac mixture density over a continuous domain with weighted components. Hence, Gaussian mixture fitting is viewed as density re-approximation. In order to speed up computation, an expectation-maximization method is proposed that properly considers not only the sample locations, but also the corresponding weights. It is shown that methods from the literature do not treat the weights correctly, resulting in wrong estimates. This is demonstrated with simple counterexamples. The proposed method works in any number of dimensions with the same computational load as standard Gaussian mixture estimators for unweighted samples.
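A minimal sketch of EM for weighted samples, in the spirit of this abstract (a generic weighted EM, not necessarily the authors' exact algorithm): the sample weights enter every M-step average alongside the responsibilities.

```python
# Minimal sketch: EM for a 1-D Gaussian mixture fit to weighted samples.
import numpy as np

def weighted_gmm_em(x, w, n_comp=2, n_iter=100, seed=0):
    """Fit a 1-D Gaussian mixture to samples x with non-negative weights w."""
    rng = np.random.default_rng(seed)
    w = w / w.sum()
    mu = rng.choice(x, n_comp, replace=False).astype(float)
    var = np.full(n_comp, x.var())
    pi = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k).
        log_pdf = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
        r = pi * np.exp(log_pdf)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: every average includes the sample weights w.
        wr = w[:, None] * r
        Nk = wr.sum(axis=0)
        pi = Nk
        mu = (wr * x[:, None]).sum(axis=0) / Nk
        var = (wr * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-12
    return pi, mu, var

# Toy usage: two modes with unequal sample weights.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 300)])
w = np.concatenate([np.full(300, 0.2), np.full(300, 0.8)])
print(weighted_gmm_em(x, w))
```

Dropping the weights (setting w uniform) in the toy usage shifts the estimated mixture weights toward equal proportions, which illustrates the abstract's point that ignoring sample weights yields wrong estimates.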