We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the interelement spacing within a subarray is small, the noise in a subarray is in general spatially correlated, while, owing to the large distances between subarrays, the noise across subarrays is uncorrelated. Consequently, the noise covariance matrix of such an array has a block-diagonal structure, which is indeed sparse. Moreover, in an ordinary nonsparse array, the small distance between adjacent sensors causes noise coupling between neighboring sensors, whereas nonadjacent sensors can be assumed to have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing recently available tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method that can resolve possibly correlated or even coherent sources in the aforementioned partially correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm through numerical simulations and comparison with the Cramér-Rao bound (CRB).
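A minimal sketch of the setting described above: a block-diagonal (hence sparse) noise covariance for two widely separated subarrays, and a grid-based sparse-representation DOA estimate posed as an SOCP. The half-wavelength ULA geometry, the l1-SVD-style data reduction, the grid, and the penalty weight are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, K, T = 8, 2, 200                               # sensors, sources, snapshots
true_doas = np.deg2rad([-10.0, 20.0])

def steering(thetas, m=M):
    """Steering matrix of a half-wavelength ULA (assumed geometry)."""
    return np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(thetas)))

A = steering(true_doas)
S = (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))) / np.sqrt(2)

# Block-diagonal noise covariance: two correlated 4-sensor subarrays,
# uncorrelated with each other -> a sparse covariance matrix.
B = rng.standard_normal((4, 4))
Q = np.kron(np.eye(2), B @ B.T + 4 * np.eye(4))
Lc = np.linalg.cholesky(Q)
N = Lc @ (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
X = A @ S + N

# l1-SVD-style reduction to the K dominant left singular vectors.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Y = U[:, :K] * s[:K]

# Row-sparse spectrum on a DOA grid; the mixed-norm objective is an SOCP.
grid = np.deg2rad(np.linspace(-90, 90, 181))
P = cp.Variable((grid.size, K), complex=True)
obj = cp.Minimize(cp.norm(Y - steering(grid) @ P, 'fro')
                  + 2.0 * cp.sum(cp.norm(P, 2, axis=1)))
cp.Problem(obj).solve()
spectrum = np.linalg.norm(P.value, axis=1)         # peaks near the true DOAs
```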
In this work, we propose an alternating low-rank decomposition (ALRD) approach and novel subspace algorithms for direction-of-arrival (DOA) estimation. In the ALRD scheme, the decomposition matrix for rank reduction is composed of a set of basis vectors. A low-rank auxiliary parameter vector is then employed to compute the output power spectrum. Alternating optimization strategies based on recursive least squares (RLS), denoted ALRD-RLS and modified ALRD-RLS (MALRD-RLS), are devised to compute the basis vectors and the auxiliary parameter vector. Simulations for large sensor arrays with both uncorrelated and correlated sources are presented, showing that the proposed algorithms outperform existing techniques.
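The exact ALRD recursions for the basis matrix and the auxiliary vector are specific to the paper; the sketch below only illustrates the exponentially weighted RLS update that such alternating schemes repeat for each block of parameters, with made-up variable names.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.998):
    """One complex RLS step: update weights w and inverse correlation P
    given regressor x and desired response d (forgetting factor lam)."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.conj().T @ P @ x)   # gain vector
    e = d - w.conj().T @ x                   # a priori error
    w = w + k * np.conj(e)                   # weight update
    P = (P - k @ x.conj().T @ P) / lam       # inverse-correlation update
    return w, P
```

In an alternating scheme of this kind, one such update refines the basis vectors while the auxiliary parameter vector is held fixed, and vice versa, at each snapshot.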
The performance of existing sparse Bayesian learning (SBL) methods for off-grid DOA estimation depends on the trade-off between accuracy and computational workload. To speed up off-grid SBL while maintaining reasonable accuracy, this letter describes a computationally efficient root SBL method for off-grid DOA estimation that adopts a coarse refinable grid whose sampled locations are treated as adjustable parameters. We use an expectation-maximization (EM) algorithm to iteratively refine this coarse grid and show that each updated grid point can be obtained simply as the root of a certain polynomial. Simulation results demonstrate that the computational complexity is significantly reduced and the modeling error can be almost eliminated.
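A toy sketch of the rooting step: given the polynomial coefficients produced by an EM grid update (planted here rather than derived), the refined grid point is recovered by rooting the polynomial, keeping the root nearest the unit circle, and mapping its phase back to an angle. The half-wavelength ULA spacing is an assumption of the sketch.

```python
import numpy as np

def refine_grid_point(coeffs, d=0.5):
    """Root the update polynomial, keep the root nearest |z| = 1,
    and map its phase to a DOA in radians (ULA, spacing d wavelengths)."""
    roots = np.roots(coeffs)
    r = roots[np.argmin(np.abs(np.abs(roots) - 1.0))]
    return np.arcsin(np.angle(r) / (2 * np.pi * d))

# Toy usage: plant a unit-circle root corresponding to 10 degrees.
z0 = np.exp(1j * 2 * np.pi * 0.5 * np.sin(np.deg2rad(10.0)))
coeffs = np.poly([z0, 0.5])                      # second root is a decoy
print(np.rad2deg(refine_grid_point(coeffs)))     # ~10.0
```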
In this paper, we propose a two-dimensional (2D) joint transmit array interpolation and beamspace design for planar-array monostatic multiple-input multiple-output (MIMO) radar direction-of-arrival (DOA) estimation via tensor modeling. The underlying idea is to map the transmit array to a desired array and suppress the transmit power outside the spatial sector of interest, thereby improving the signal-to-noise ratio at the receive array. We then fold the received data along each dimension into a tensorial structure and apply tensor-based methods to obtain DOA estimates. In addition, we derive a closed-form expression for the DOA estimation bias caused by interpolation errors and argue for using a specially designed look-up table to compensate for this bias. The corresponding Cramér-Rao bound (CRB) is also derived. Simulation results illustrate the performance of the proposed method and compare it to the CRB.
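To make the folding step concrete, here is a sketch, with made-up dimensions and placeholder data, of reshaping stacked matched-filtered MIMO snapshots into a third-order tensor and computing the mode-n unfoldings whose left singular vectors give HOSVD-style factor matrices; the paper's specific interpolation, beamspace design, and estimator are not reproduced.

```python
import numpy as np

Mt, Mr, Np = 4, 6, 50                    # transmit, receive, pulse dims (assumed)
X = np.random.randn(Mt * Mr, Np)         # stacked virtual-array data (placeholder)

T = X.reshape(Mt, Mr, Np)                # fold the data along each dimension

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen axis first, remaining axes flattened."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Factor matrices of an HOSVD, from which tensor-based subspace DOA
# estimators can proceed dimension by dimension.
U_tx = np.linalg.svd(unfold(T, 0), full_matrices=False)[0]
U_rx = np.linalg.svd(unfold(T, 1), full_matrices=False)[0]
```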
We consider the problem of estimating high-dimensional covariance matrices with a particular structure: the sum of a low-rank matrix and a sparse matrix. This covariance structure has a wide range of applications, including factor analysis and random effects models. We propose a Bayesian method for estimating such covariance matrices by representing the covariance model as a factor model with an unknown number of latent factors. We introduce binary indicators for factor selection and rank estimation of the low-rank component, combined with a Bayesian lasso method for estimating the sparse component. Simulation studies show that our method can recover both the rank and the sparsity of the two components. We further extend our method to a graphical factor model in which both the graphical model of the residuals and the number of factors are of interest. We employ a hyper-inverse Wishart prior for modeling decomposable graphs of the residuals and a Bayesian graphical lasso selection method for unrestricted graphs. Simulations show that the extended models can successfully recover both the number of latent factors and the graphical model of the residuals when the sample size is sufficiently large relative to the dimension.
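A generative sketch of the covariance structure in question, Sigma = Lambda Lambda^T + S with a rank-k factor part and a sparse residual part; the dimensions, loadings, and sparsity pattern are invented, and the Bayesian machinery (indicator priors, Bayesian lasso, hyper-inverse Wishart) is deliberately not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k, n = 30, 3, 500                       # dimension, latent factors, samples
Lam = rng.standard_normal((p, k))          # factor loadings (low-rank part)
S = np.diag(rng.uniform(0.5, 1.5, p))      # sparse residual covariance
S[0, 1] = S[1, 0] = 0.3                    # a few off-diagonal nonzeros
Sigma = Lam @ Lam.T + S                    # low-rank + sparse covariance

F = rng.standard_normal((n, k))            # latent factors
E = rng.multivariate_normal(np.zeros(p), S, size=n)
Y = F @ Lam.T + E                          # data whose covariance is Sigma
print(np.linalg.matrix_rank(Lam @ Lam.T))  # low-rank component has rank k
```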
Suppose that a solution $\widetilde{\mathbf{x}}$ to an underdetermined linear system $\mathbf{b} = \mathbf{A}\mathbf{x}$ is given. $\widetilde{\mathbf{x}}$ is approximately sparse, meaning that it has a few large components compared to its other small entries. However, the total number of nonzero components of $\widetilde{\mathbf{x}}$ is large enough to violate any condition for the uniqueness of the sparsest solution; on the other hand, if only the dominant components are considered, the uniqueness conditions are satisfied. One intuitively expects that $\widetilde{\mathbf{x}}$ should not be far from the true sparse solution $\mathbf{x}_0$. We show that this intuition is correct by providing an upper bound on $\|\widetilde{\mathbf{x}} - \mathbf{x}_0\|$ that is a function of the magnitudes of the small components of $\widetilde{\mathbf{x}}$ but independent of $\mathbf{x}_0$. This result is extended to the case in which $\mathbf{b}$ is perturbed by noise. Additionally, we generalize the upper bounds to the low-rank matrix recovery problem.
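A numerical illustration of the setting, with invented sizes: the approximately sparse $\widetilde{\mathbf{x}}$ is built by perturbing the true sparse $\mathbf{x}_0$ inside the null space of $\mathbf{A}$, so both solve the same underdetermined system while $\|\widetilde{\mathbf{x}} - \mathbf{x}_0\|$ is governed by the small components. The bound itself is the paper's contribution and is not computed here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, s = 20, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # true sparse solution
b = A @ x0

# Perturb x0 within the null space of A: x_tilde solves b = A x exactly
# but has many small nonzero entries, i.e., it is only approximately sparse.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[m:].T                       # orthonormal basis of null(A)
x_tilde = x0 + null_basis @ (0.01 * rng.standard_normal(n - m))

print(np.allclose(A @ x_tilde, b))          # True: same linear system
print(np.linalg.norm(x_tilde - x0))         # small, set by the small components
```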