
Compressed sensing of low-rank plus sparse matrices

Added by Simon Vary
Publication date: 2020
Research language: English





Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011; Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016) amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012; Foucart and Rauhut, 2013) have established that data satisfying low-complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m\times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}(r(m+n-r)+s)\log(mn/s)$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed, where $p$ is the number of linear measurements. Additionally, we show that semidefinite programming and two hard-thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
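As a rough illustration of the hard-thresholding gradient descent approach referred to above, the sketch below alternates a gradient step with projections onto the rank-$r$ and $s$-sparse components. It is not the authors' NIHT/NAHT implementation: the Gaussian measurement operator, the problem sizes, and the conservative fixed step size are assumptions made for this example only.

import numpy as np

# Minimal sketch of hard-thresholding gradient descent for the low-rank plus
# sparse model (illustrative assumptions only, not the paper's NIHT/NAHT code).

rng = np.random.default_rng(0)
m, n, r, s, p = 20, 20, 2, 10, 350                 # matrix size, rank, sparsity, measurements

L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))    # rank-r component
S_true = np.zeros(m * n)
S_true[rng.choice(m * n, s, replace=False)] = 5 * rng.standard_normal(s)
S_true = S_true.reshape(m, n)                                          # s-sparse component

A = rng.standard_normal((p, m * n)) / np.sqrt(p)   # Gaussian measurement operator
b = A @ (L_true + S_true).ravel()                  # p linear measurements

def hard_threshold_rank(X, r):
    """Project onto rank-r matrices via a truncated SVD."""
    U, sig, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r]

def hard_threshold_sparse(X, s):
    """Keep the s largest-magnitude entries, zero the rest."""
    Y = np.zeros(X.size)
    idx = np.argsort(np.abs(X).ravel())[-s:]
    Y[idx] = X.ravel()[idx]
    return Y.reshape(X.shape)

L, S = np.zeros((m, n)), np.zeros((m, n))
step = 1.0 / np.linalg.norm(A, 2) ** 2             # conservative fixed step size
for _ in range(300):
    grad = (A.T @ (b - A @ (L + S).ravel())).reshape(m, n)
    L = hard_threshold_rank(L + step * grad, r)    # update the low-rank component
    grad = (A.T @ (b - A @ (L + S).ravel())).reshape(m, n)
    S = hard_threshold_sparse(S + step * grad, s)  # update the sparse component

err = np.linalg.norm(L + S - L_true - S_true) / np.linalg.norm(L_true + S_true)
print("relative recovery error:", err)

Recomputing the gradient between the two updates loosely mirrors an alternating scheme; the paper's normalized variants instead choose the step size adaptively at every iteration.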




Read More

We present fast numerical methods for computing the Hessenberg reduction of a unitary plus low-rank matrix $A = G + UV^H$, where $G \in \mathbb{C}^{n\times n}$ is a unitary matrix represented in some compressed format using $O(nk)$ parameters and $U$ and $V$ are $n\times k$ matrices with $k < n$. At the core of these methods is a certain structured decomposition, referred to as an LFR decomposition, of $A$ as the product of three possibly perturbed unitary $k$-Hessenberg matrices of size $n$. It is shown that in most interesting cases an initial LFR decomposition of $A$ can be computed very cheaply. Then we prove structural properties of LFR decompositions by giving conditions under which the LFR decomposition of $A$ implies its Hessenberg shape. Finally, we describe a bulge-chasing scheme for converting the initial LFR decomposition of $A$ into the LFR decomposition of a Hessenberg matrix by means of unitary transformations. The reduction can be performed at an overall computational cost of $O(n^2 k)$ arithmetic operations using $O(nk)$ storage. The computed LFR decomposition of the Hessenberg reduction of $A$ can be processed by the fast QR algorithm presented in [8] in order to compute the eigenvalues of $A$ within the same costs.
Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n\times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n\times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat{A}$ obtained by a certain embedding of the Hessenberg reduction of $A$ suitable to maintain its structural properties. We show that $\hat{A}$ can be factored as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first $k$ rows, and, moreover, such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
This note studies the worst-case recovery error of low-rank and bisparse matrices as a function of the number of one-bit measurements used to acquire them. First, by way of the concept of consistency width, precise estimates are given on how fast the recovery error can in theory decay. Next, an idealized recovery method is proved to reach the fourth-root of the optimal decay rate for Gaussian sensing schemes. This idealized method being impractical, an implementable recovery algorithm is finally proposed in the context of factorized Gaussian sensing schemes. It is shown to provide a recovery error decaying as the sixth-root of the optimal rate.
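For concreteness, the following sketch sets up the kind of one-bit Gaussian sensing model discussed above: a rank-$r$ bisparse matrix (low rank with row-sparse factors) observed only through the signs of Gaussian measurements. The sizes, the factor model, and the consistency check are illustrative assumptions; none of the note's recovery methods are reproduced here.

import numpy as np

# One-bit Gaussian sensing of a rank-r bisparse matrix (illustrative sketch).
rng = np.random.default_rng(1)
n1, n2, r, k1, k2, p = 30, 30, 2, 6, 6, 500

# Bisparse rank-r matrix: left/right factors supported on k1 and k2 rows.
U = np.zeros((n1, r)); U[rng.choice(n1, k1, replace=False)] = rng.standard_normal((k1, r))
V = np.zeros((n2, r)); V[rng.choice(n2, k2, replace=False)] = rng.standard_normal((k2, r))
X = U @ V.T
X /= np.linalg.norm(X)                     # one-bit measurements lose the overall scale

A = rng.standard_normal((p, n1 * n2))      # Gaussian sensing matrix
y = np.sign(A @ X.ravel())                 # one-bit (sign) measurements

# A nearby candidate is "consistent" with a measurement when it reproduces its
# sign; how far consistent candidates can stray is what the recovery error
# bounds above quantify.
Xhat = X + 1e-3 * rng.standard_normal((n1, n2))
Xhat /= np.linalg.norm(Xhat)
print("consistent measurements:", int(np.sum(np.sign(A @ Xhat.ravel()) == y)), "of", p)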
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computational complexity. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices were proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, e.g., the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. We propose necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation to bring a matrix to unitary form. A similar condition holds for Hermitian matrices: the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Based on these conditions we are able to identify the closest Hermitian plus low-rank and unitary plus low-rank matrix in the Frobenius and spectral norms, and a practical Lanczos iteration to detect the low-rank perturbation is presented. Numerical tests show that this straightforward algorithm is robust with respect to noise.
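The two rank conditions can be probed numerically. The sketch below (a simple illustration, not the paper's Lanczos-based detector) builds real unitary-plus-rank-$k$ and Hermitian-plus-rank-$k$ examples and counts the singular values away from 1, respectively the eigenvalues of the skew-Hermitian part away from 0; for a rank-$k$ perturbation at most $2k$ of them can deviate, since $A^HA - I$ and the skew-Hermitian part both have rank at most $2k$. The tolerance is an assumed parameter.

import numpy as np

rng = np.random.default_rng(2)
n, k, tol = 50, 3, 1e-8

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))    # orthogonal (real unitary) matrix
H = rng.standard_normal((n, n)); H = (H + H.T) / 2  # symmetric (real Hermitian) matrix
X = rng.standard_normal((n, k)); Y = rng.standard_normal((n, k))

A_unitary = Q + X @ Y.T          # unitary plus rank-k perturbation
A_hermitian = H + X @ Y.T        # Hermitian plus rank-k perturbation

# Unitary case: count singular values away from 1.
sing = np.linalg.svd(A_unitary, compute_uv=False)
print("singular values deviating from 1:", int(np.sum(np.abs(sing - 1) > tol)))

# Hermitian case: count eigenvalues of the skew-Hermitian part away from 0.
skew = (A_hermitian - A_hermitian.T) / 2
print("skew-part eigenvalues deviating from 0:",
      int(np.sum(np.abs(np.linalg.eigvals(skew)) > tol)))

The paper's characterization relates these deviation counts to the minimal rank of a perturbation restoring unitary or Hermitian structure.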
Quaternion matrices are employed successfully in many color image processing applications. In particular, a pure quaternion matrix can be used to represent the red, green and blue channels of a color image. A low-rank approximation of a pure quaternion matrix can be obtained by using the quaternion singular value decomposition. However, this approximation is not optimal in the sense that the resulting low-rank approximation matrix may not be pure quaternion, i.e., the low-rank matrix contains a real component, which is not useful for the representation of a color image. The main contribution of this paper is to find an optimal rank-$r$ pure quaternion matrix approximation for a pure quaternion matrix (a color image). Our idea is to use a projection onto the low-rank quaternion matrix manifold and a projection onto the set of quaternion matrices with zero real component, and to develop an alternating projections algorithm to find such an optimal low-rank pure quaternion matrix approximation. The convergence of the projection algorithm can be established by showing that the low-rank quaternion matrix manifold and the zero real component quaternion matrix manifold have a non-trivial intersection point. Numerical examples on synthetic pure quaternion matrices and color images are presented to illustrate that the projection algorithm can find optimal low-rank pure quaternion approximations for pure quaternion matrices or color images.
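As a concrete illustration of the alternating projections idea, the sketch below stores a quaternion matrix $Q = A + Bi + Cj + Dk$ through its complex-adjoint representation (an implementation choice assumed for this example; the paper works with the quaternion SVD directly), so that quaternion rank $r$ corresponds to complex rank $2r$, and alternates the two projections described above.

import numpy as np

# Alternating projections sketch for low-rank pure quaternion approximation,
# using the complex adjoint chi(Q) = [[Z1, Z2], [-conj(Z2), conj(Z1)]] with
# Z1 = A + Bi and Z2 = C + Di.

def chi(Z1, Z2):
    return np.block([[Z1, Z2], [-Z2.conj(), Z1.conj()]])

def blocks(M, m, n):
    """Average a 2m x 2n matrix back to the nearest chi-structured pair (Z1, Z2)."""
    Z1 = (M[:m, :n] + M[m:, n:].conj()) / 2
    Z2 = (M[:m, n:] - M[m:, :n].conj()) / 2
    return Z1, Z2

def project_rank(Z1, Z2, r):
    """Best quaternion rank-r approximation via a rank-2r truncation of chi."""
    m, n = Z1.shape
    U, s, Vt = np.linalg.svd(chi(Z1, Z2), full_matrices=False)
    return blocks((U[:, :2 * r] * s[:2 * r]) @ Vt[:2 * r], m, n)

def project_pure(Z1, Z2):
    """Zero the real component A = Re(Z1)."""
    return 1j * Z1.imag, Z2

# Synthetic pure quaternion matrix: B, C, D play the role of the R, G, B channels.
rng = np.random.default_rng(3)
m, n, r = 40, 40, 3
B, C, D = rng.random((3, m, n))
Z1, Z2 = 1j * B, C + 1j * D

for _ in range(100):                        # alternating projections
    Z1, Z2 = project_pure(Z1, Z2)
    Z1, Z2 = project_rank(Z1, Z2, r)

print("residual real component:", np.abs(Z1.real).max())

The printed residual measures how far the final rank-projected iterate is from the set of pure quaternion matrices.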