Suppose that a solution $\widetilde{\mathbf{x}}$ to an underdetermined linear system $\mathbf{b} = \mathbf{A} \mathbf{x}$ is given. $\widetilde{\mathbf{x}}$ is approximately sparse, meaning that it has a few large components while its remaining entries are small. However, the total number of nonzero components of $\widetilde{\mathbf{x}}$ is large enough to violate any condition for the uniqueness of the sparsest solution. On the other hand, if only the dominant components are considered, then the uniqueness conditions are satisfied. One intuitively expects that $\widetilde{\mathbf{x}}$ should not be far from the true sparse solution $\mathbf{x}_0$. We show that this intuition is correct by providing an upper bound on $\|\widetilde{\mathbf{x}} - \mathbf{x}_0\|$ which is a function of the magnitudes of the small components of $\widetilde{\mathbf{x}}$ but independent of $\mathbf{x}_0$. This result is extended to the case in which $\mathbf{b}$ is perturbed by noise. Additionally, we generalize the upper bounds to the low-rank matrix recovery problem.
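As a quick numerical illustration of this setting (a sketch of the scenario, not of the paper's bound; the sizes, sparsity, and perturbation level below are hypothetical), one can build a feasible, approximately sparse $\widetilde{\mathbf{x}}$ by perturbing a sparse $\mathbf{x}_0$ inside the null space of $\mathbf{A}$, so that $\mathbf{A}\widetilde{\mathbf{x}} = \mathbf{b}$ still holds, and compare $\|\widetilde{\mathbf{x}} - \mathbf{x}_0\|$ with the size of the small components:

```python
# Sketch: an approximately sparse feasible solution stays close to the
# true sparse solution (illustrative sizes; not the paper's bound).
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                        # measurements, dimension, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# Perturb x0 inside null(A) so that A @ x_tilde == b still holds exactly.
null_basis = np.linalg.svd(A)[2][m:].T      # columns span null(A)
x_tilde = x0 + 1e-3 * null_basis @ rng.standard_normal(n - m)

# Deviation from x0 versus the magnitude of the small components of x_tilde.
small = np.sort(np.abs(x_tilde))[:-k]       # all but the k dominant entries
print(np.linalg.norm(x_tilde - x0), np.linalg.norm(small))
```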
Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications, such as collaborative filtering, the famous Netflix prize problem, and seismic data interpolation, prior information about the column and row spaces of the ground-truth low-rank matrix is available. In this paper, we exploit this prior information by proposing a weighted optimization problem whose objective function promotes both low rank and consistency with the prior subspace information. Using recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than regular nuclear norm minimization.
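The cvxpy sketch below illustrates one common way to encode such subspace priors, weighting the nuclear norm as $\|\mathbf{Q}_U \mathbf{X} \mathbf{Q}_V\|_*$ with weighted projections onto the prior subspaces; this weighting form and the value of `w` are illustrative assumptions, not the paper's derived optimal weights:

```python
# Sketch: weighted nuclear-norm minimization with subspace priors (cvxpy).
# The weighting construction and w are assumptions for illustration only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r, m = 20, 2, 150

# Rank-r ground truth and Gaussian measurements b_i = <A_i, X0>.
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
As = rng.standard_normal((m, n, n))
b = np.tensordot(As, X0, axes=([1, 2], [0, 1]))

# Prior column/row subspaces (here the true ones, as a best case).
U, _, Vt = np.linalg.svd(X0)
Uh, Vh = U[:, :r], Vt[:r].T

w = 0.5                                     # hypothetical weight (< 1 trusts the prior)
QU = w * Uh @ Uh.T + (np.eye(n) - Uh @ Uh.T)
QV = w * Vh @ Vh.T + (np.eye(n) - Vh @ Vh.T)

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(As[i], X)) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.normNuc(QU @ X @ QV)), constraints).solve()
print(np.linalg.norm(X.value - X0, 'fro'))
```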
The computation of the sparse principal component of a matrix is equivalent to the identification of its principal submatrix with the largest maximum eigenvalue. Finding this optimal submatrix is what renders the problem $\mathcal{NP}$-hard. In this work, we prove that, if the matrix is positive semidefinite and its rank is constant, then its sparse principal component is polynomially computable. Our proof utilizes the auxiliary unit vector technique, which has recently been developed to identify polynomially solvable problems. Moreover, we use this technique to design an algorithm which, for any sparsity value, computes the sparse principal component with complexity $\mathcal{O}\left(N^{D+1}\right)$, where $N$ and $D$ are the matrix size and rank, respectively. Our algorithm is fully parallelizable and memory efficient.
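For small instances, the submatrix characterization can be verified directly by exhaustive enumeration, as in the Python sketch below (exponential in $N$, unlike the paper's $\mathcal{O}\left(N^{D+1}\right)$ algorithm, which is not reproduced here):

```python
# Sketch: k-sparse principal component of a PSD matrix via brute-force
# enumeration of its k-by-k principal submatrices (small N only).
import numpy as np
from itertools import combinations

def sparse_pc(M, k):
    N = M.shape[0]
    best_val, best_x = -np.inf, None
    for idx in combinations(range(N), k):
        vals, vecs = np.linalg.eigh(M[np.ix_(idx, idx)])
        if vals[-1] > best_val:              # largest maximum eigenvalue wins
            best_val = vals[-1]
            best_x = np.zeros(N)
            best_x[list(idx)] = vecs[:, -1]  # embed the leading eigenvector
    return best_val, best_x

rng = np.random.default_rng(2)
H = rng.standard_normal((8, 3))
val, x = sparse_pc(H @ H.T, k=4)             # rank-3 PSD test matrix
print(val, np.nonzero(x)[0])
```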
We consider the problem of direction-of-arrival (DOA) estimation in unknown partially correlated noise environments where the noise covariance matrix is sparse. A sparse noise covariance matrix is a common model for a sparse array of sensors consisting of several widely separated subarrays. Since the interelement spacing among sensors in a subarray is small, the noise within a subarray is in general spatially correlated, while, due to the large distances between subarrays, the noise between them is uncorrelated. Consequently, the noise covariance matrix of such an array has a block diagonal structure which is indeed sparse. Moreover, in an ordinary nonsparse array, the small distances between adjacent sensors cause noise coupling between neighboring sensors, whereas nonadjacent sensors can be assumed to have spatially uncorrelated noise, which again makes the array noise covariance matrix sparse. Utilizing recently developed tools in low-rank/sparse matrix decomposition, matrix completion, and sparse representation, we propose a novel method which can resolve possibly correlated or even coherent sources in the aforementioned partially correlated noise. In particular, when the sources are uncorrelated, our approach involves solving a second-order cone program (SOCP), and if they are correlated or coherent, one needs to solve a computationally harder convex program. We demonstrate the effectiveness of the proposed algorithm by numerical simulations and comparison to the Cramér-Rao bound (CRB).
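A minimal sketch of the low-rank/sparse covariance split underlying this approach, on a real-valued toy covariance with cvxpy (the trade-off weight, sizes, and block structure are illustrative assumptions, and the subsequent DOA estimation step is not reproduced):

```python
# Sketch: split a covariance R into a low-rank signal part L and a sparse
# (block-diagonal) noise part S by convex relaxation (illustrative only).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, k = 12, 2

G = rng.standard_normal((n, k))
L0 = G @ G.T                                 # rank-k "signal" covariance
S0 = np.zeros((n, n))
for s in range(0, n, 3):                     # 3-sensor subarrays
    B = rng.standard_normal((3, 3))
    S0[s:s+3, s:s+3] = B @ B.T               # correlated noise within a subarray
R = L0 + S0

L = cp.Variable((n, n), symmetric=True)
S = cp.Variable((n, n), symmetric=True)
lam = 0.5                                    # hypothetical trade-off weight
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
           [L + S == R, L >> 0]).solve()
print(np.linalg.norm(L.value - L0, 'fro'), np.linalg.norm(S.value - S0, 'fro'))
```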
In this paper, the problem of matrix rank minimization under affine constraints is addressed. State-of-the-art algorithms can recover matrices only when their rank is much lower than what suffices for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which in practice improves the recovery limit on the rank of the solution. This approximation leads to a nonconvex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function is optimized subject to the affine constraints. As the algorithm proceeds, finer approximations of the rank are optimized, with the solver initialized at the solution of the previous approximation, until the desired accuracy is reached. On the theoretical side, using the spherical section property, we show that the sequence of solutions of the approximating functions converges to the minimum rank solution. On the experimental side, we show that the proposed algorithm, termed SRF for Smoothed Rank Function, can recover matrices which are unique solutions of the rank minimization problem yet are not recoverable by nuclear norm minimization. Furthermore, we demonstrate that, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than that of several well-known algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
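A hedged sketch of this coarse-to-fine idea for the matrix completion special case, where projecting onto the affine constraints amounts to resetting the observed entries; the Gaussian surrogate $\sum_i \exp(-\sigma_i^2/(2\delta^2))$, step size, and $\delta$-schedule below are illustrative choices, not necessarily the paper's exact ones:

```python
# Sketch: graduated (coarse-to-fine) smoothed-rank minimization for matrix
# completion. delta controls how finely the surrogate approximates the rank.
import numpy as np

rng = np.random.default_rng(4)
n, r = 30, 2
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5              # observed-entry pattern

X = np.where(mask, X0, 0.0)                  # feasible starting point
delta = 2 * np.linalg.svd(X, compute_uv=False)[0]
for _ in range(8):                           # finer approximations of the rank
    for _ in range(50):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        g = s * np.exp(-s**2 / (2 * delta**2))
        X -= (U * g) @ Vt                    # scaled gradient step on the surrogate
        X[mask] = X0[mask]                   # project onto the constraints
    delta *= 0.5                             # previous X initializes this stage
print(np.linalg.norm(X - X0) / np.linalg.norm(X0))
```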
This work considers two popular minimization problems: (i) the minimization of a general convex function $f(\mathbf{X})$ over the positive semidefinite matrices; (ii) the minimization of a general convex function $f(\mathbf{X})$ regularized by the matrix nuclear norm $\|\mathbf{X}\|_*$ over general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have high computational complexity even when solved using tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro and factor the low-rank variable as $\mathbf{X} = \mathbf{U}\mathbf{U}^\top$ (for semidefinite matrices) or $\mathbf{X} = \mathbf{U}\mathbf{V}^\top$ (for general matrices), replacing the nuclear norm $\|\mathbf{X}\|_*$ with $(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2)/2$. Despite the nonconvexity of the resulting factored formulations, we prove that each critical point either corresponds to a global optimum of the original convex problem or is a strict saddle at which the Hessian has a strictly negative eigenvalue. This benign geometric structure allows many local search algorithms to find a global optimizer even from random initializations.
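A minimal sketch of the factored nuclear-norm formulation, with $f$ taken to be a squared loss on observed entries (matrix completion) and plain gradient descent from a small random initialization; the step size, regularization weight, and scales are hypothetical:

```python
# Sketch: Burer-Monteiro factored gradient descent for
# min_X f(X) + lam*||X||_*  with  X = U V^T  and the nuclear norm replaced
# by (||U||_F^2 + ||V||_F^2)/2 (illustrative parameters).
import numpy as np

rng = np.random.default_rng(5)
n, r, lam, mu = 30, 2, 0.1, 0.01

M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5              # observed entries of M

U = 0.1 * rng.standard_normal((n, r))        # random initialization; the
V = 0.1 * rng.standard_normal((n, r))        # strict-saddle geometry is what
for _ in range(5000):                        # lets plain descent succeed
    R = mask * (U @ V.T - M)                 # gradient of f at X = U V^T
    U, V = U - mu * (R @ V + lam * U), V - mu * (R.T @ U + lam * V)
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```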