
Recovery of Low-Rank Matrices under Affine Constraints via a Smoothed Rank Function

Publication date: 2013
Language: English





In this paper, the problem of matrix rank minimization under affine constraints is addressed. State-of-the-art algorithms can recover matrices only when their rank is much lower than what is sufficient for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which practically improves the recovery limits on the rank of the solution. This approximation leads to a non-convex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function is optimized subject to the affine constraints. As the algorithm proceeds, successively finer approximations of the rank are optimized, each initialized with the solution of the previous approximation, until the desired accuracy is reached. On the theoretical side, benefiting from the spherical section property, we show that the sequence of solutions of the approximating functions converges to the minimum-rank solution. On the experimental side, we show that the proposed algorithm, termed SRF (Smoothed Rank Function), can recover matrices that are unique solutions of the rank minimization problem and yet are not recoverable by nuclear norm minimization. Furthermore, we demonstrate that, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than that of several well-known algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
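As a concrete illustration of this coarse-to-fine scheme, the sketch below approximates the rank by min(n1, n2) minus a sum of Gaussians of the singular values, takes a gradient step on that surrogate, and projects back onto the affine constraints. The Gaussian surrogate, the step size mu, and the delta schedule are plausible placeholders, not the paper's tuned choices.

```python
import numpy as np

def srf_sketch(A, b, shape, n_deltas=8, decay=0.5, inner_iters=50, mu=1.0):
    """Minimal sketch of the smoothed-rank idea (parameters are assumptions).

    A: (m, n1*n2) matrix applying the affine measurements to vec(X).
    b: (m,) observed measurements.
    rank(X) is approximated by  min(n1,n2) - sum_i exp(-s_i^2 / (2*delta^2)),
    where s_i are the singular values of X; delta shrinks coarse-to-fine.
    """
    n1, n2 = shape
    A_pinv = np.linalg.pinv(A)
    X = (A_pinv @ b).reshape(n1, n2)        # minimum-Frobenius feasible start
    delta = 2.0 * np.linalg.svd(X, compute_uv=False)[0]  # rough initial scale
    for _ in range(n_deltas):               # graduated non-convexity loop
        for _ in range(inner_iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            # gradient step on the Gaussian surrogate (scaling absorbed in mu)
            s = s - mu * s * np.exp(-s**2 / (2.0 * delta**2))
            X = U @ np.diag(s) @ Vt
            # project back onto the affine set {X : A vec(X) = b}
            x = X.ravel()
            X = (x - A_pinv @ (A @ x - b)).reshape(n1, n2)
        delta *= decay                      # finer approximation, warm-started
    return X
```

Warm-starting each finer approximation with the previous solution is what lets the non-convex surrogate avoid poor local minima in practice.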



Related research

In this paper, we propose a new algorithm for the recovery of low-rank matrices from compressed linear measurements. The underlying idea of this algorithm is to closely approximate the rank function with a smooth function of the singular values and then minimize the resulting approximation subject to the linear constraints. The accuracy of the approximation is controlled via a scaling parameter $\delta$, where a smaller $\delta$ corresponds to a more accurate fitting. The resulting optimization problem for any finite $\delta$ is non-convex. Therefore, to decrease the risk of ending up in local minima, a series of optimizations is performed, starting with a rough approximation (a large $\delta$) and followed by successively finer approximations of the rank with smaller values of $\delta$. To solve the optimization problem for any $\delta > 0$, it is converted to a new program in which the cost is a function of two auxiliary positive semidefinite variables. The paper shows that this new program is concave and applies a majorize-minimize technique to solve it, which, in turn, leads to a few convex optimization iterations. This optimization scheme is also equivalent to a reweighted nuclear norm minimization (NNM), where the weight update depends on the approximating function used. For any $\delta > 0$, we derive necessary and sufficient conditions for exact recovery, which are weaker than those corresponding to NNM. On the numerical side, the proposed algorithm is compared to NNM and a reweighted NNM in solving affine rank minimization and matrix completion problems, showing its considerable and consistent superiority in terms of success rate, especially when the number of measurements decreases toward the lower bound for unique representation.
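The reweighted-NNM interpretation can be sketched as follows for the matrix completion special case. Since the abstract does not specify the weight update, the Mohan-Fazel-style rule W = (X X^T + gamma*I)^(-1/2) is used below as a stand-in, with cvxpy solving each convex weighted subproblem.

```python
import numpy as np
import cvxpy as cp

def reweighted_nnm_completion(M_obs, mask, n_iters=5, gamma=1e-2):
    """Hedged sketch: iteratively reweighted NNM for matrix completion.

    mask: boolean array marking the observed entries of M_obs.
    The paper derives its weights from the smooth rank approximation; the
    update below is a generic stand-in with the same qualitative effect.
    """
    n1, n2 = M_obs.shape
    W = np.eye(n1)                          # first pass reduces to plain NNM
    m = mask.astype(float)
    X_val = np.zeros((n1, n2))
    for _ in range(n_iters):
        X = cp.Variable((n1, n2))
        prob = cp.Problem(cp.Minimize(cp.norm(W @ X, "nuc")),
                          [cp.multiply(m, X) == m * M_obs])
        prob.solve()
        X_val = X.value
        # down-weight directions already explained by the current estimate
        G = X_val @ X_val.T + gamma * np.eye(n1)
        evals, evecs = np.linalg.eigh(G)
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return X_val
```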
Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications, such as collaborative filtering, the well-known Netflix Prize problem, and seismic data interpolation, there exists prior information about the column and row spaces of the ground-truth low-rank matrix. In this paper, we exploit this prior information by proposing a weighted optimization problem whose objective function promotes both low rank and the prior subspace information. Using recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than regular nuclear norm minimization.
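A common way to encode such subspace priors (assumed here, since the abstract does not give the exact program) is to shrink the nuclear-norm penalty along the prior column and row subspaces. The weights w_u, w_v are left as inputs; choosing them optimally via conic integral geometry is the paper's contribution and is not reproduced here.

```python
import numpy as np
import cvxpy as cp

def weighted_nnm_with_prior(M_obs, mask, U_pr, V_pr, w_u, w_v):
    """Hedged sketch: nuclear-norm recovery with prior subspace information.

    U_pr: (n1, r) orthonormal basis for the prior column space.
    V_pr: (n2, r) orthonormal basis for the prior row space.
    w_u, w_v in (0, 1]: weights; smaller values trust the prior more.
    """
    n1, n2 = M_obs.shape
    Pu, Pv = U_pr @ U_pr.T, V_pr @ V_pr.T   # projectors onto the priors
    Qu = w_u * Pu + (np.eye(n1) - Pu)       # shrink penalty along the prior
    Qv = w_v * Pv + (np.eye(n2) - Pv)
    m = mask.astype(float)
    X = cp.Variable((n1, n2))
    prob = cp.Problem(cp.Minimize(cp.norm(Qu @ X @ Qv, "nuc")),
                      [cp.multiply(m, X) == m * M_obs])
    prob.solve()
    return X.value
```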
The affine rank minimization (ARM) problem arises in many real-world applications. The goal is to recover a low-rank matrix from a small number of noisy affine measurements. The original problem is NP-hard, so solving it directly is computationally prohibitive. Approximate low-complexity solutions for ARM have recently attracted much research interest. In this paper, we design an iterative algorithm for ARM based on message-passing principles. The proposed algorithm is termed turbo-type ARM (TARM), inspired by the recently developed turbo compressed sensing algorithm for sparse signal recovery. We show that, when the linear operator for measurement is right-orthogonally invariant (ROIL), a scalar function called state evolution can be established to accurately predict the behaviour of the TARM algorithm. We also show that TARM converges much faster than counterpart algorithms for low-rank matrix recovery. We further extend the TARM algorithm to matrix completion, where the measurement operator corresponds to a random selection matrix. We show that, although state evolution is not accurate for matrix completion, the TARM algorithm with carefully tuned parameters still significantly outperforms its counterparts.
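A stripped-down view of the two-module turbo loop is sketched below: one module performs a linear, gradient-like update toward the measurements, the other denoises by best rank-r approximation, and an extrinsic combination decorrelates the two. TARM's actual parameters (gamma_t, alpha_t, c_t) are tuned via orthogonality conditions and state evolution; the fixed proxies here are crude assumptions.

```python
import numpy as np

def tarm_sketch(A, y, shape, rank, n_iters=30):
    """Very rough sketch of a turbo-type ARM loop (parameter tuning omitted)."""
    n1, n2 = shape
    x = np.zeros(n1 * n2)
    for _ in range(n_iters):
        r = x + A.T @ (y - A @ x)           # linear module (gamma_t = 1)
        R = r.reshape(n1, n2)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        Z = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]  # rank-r denoiser
        # extrinsic combination; alpha approximates the denoiser's divergence
        alpha = rank * (n1 + n2 - rank) / (n1 * n2)      # crude, assumed proxy
        x = (Z.ravel() - alpha * r) / (1.0 - alpha)
    return x.reshape(n1, n2)
```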
Suppose that a solution $\widetilde{\mathbf{x}}$ to an underdetermined linear system $\mathbf{b} = \mathbf{A}\mathbf{x}$ is given. $\widetilde{\mathbf{x}}$ is approximately sparse, meaning that it has a few large components compared to the other, small entries. However, the total number of nonzero components of $\widetilde{\mathbf{x}}$ is large enough to violate any condition for the uniqueness of the sparsest solution. On the other hand, if only the dominant components are considered, then it satisfies the uniqueness conditions. One intuitively expects that $\widetilde{\mathbf{x}}$ should not be far from the true sparse solution $\mathbf{x}_0$. We show that this intuition is correct by providing an upper bound on $\|\widetilde{\mathbf{x}} - \mathbf{x}_0\|$ which is a function of the magnitudes of the small components of $\widetilde{\mathbf{x}}$ but independent of $\mathbf{x}_0$. This result is extended to the case where $\mathbf{b}$ is perturbed by noise. Additionally, we generalize the upper bounds to the low-rank matrix recovery problem.
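The setting of this bound is easy to reproduce numerically. The snippet below builds a solution of $\mathbf{b} = \mathbf{A}\mathbf{x}$ that differs from the sparse $\mathbf{x}_0$ by a small null-space component and prints the two quantities the bound relates; the bound's constant, which involves the spherical section property, is not reproduced here.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# x_tilde solves b = A x exactly but is only approximately sparse:
N = null_space(A)                            # basis of null(A), shape (n, n-m)
x_tilde = x0 + 0.01 * (N @ rng.standard_normal(N.shape[1]))

tail = np.sort(np.abs(x_tilde))[:-k]         # magnitudes of small components
print("||x_tilde - x0||       =", np.linalg.norm(x_tilde - x0))
print("||small components||_2 =", np.linalg.norm(tail))
```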
In this work, we propose an alternating low-rank decomposition (ALRD) approach and novel subspace algorithms for direction-of-arrival (DOA) estimation. In the ALRD scheme, the decomposition matrix for rank reduction is composed of a set of basis vectors. A low-rank auxiliary parameter vector is then employed to compute the output power spectrum. Alternating optimization strategies based on recursive least squares (RLS), denoted ALRD-RLS and modified ALRD-RLS (MALRD-RLS), are devised to compute the basis vectors and the auxiliary parameter vector. Simulations for large sensor arrays with both uncorrelated and correlated sources are presented, showing that the proposed algorithms are superior to existing techniques.