
Efficient Reduction of Compressed Unitary plus Low-rank Matrices to Hessenberg form

Publication date: 2019. Language: English.





We present fast numerical methods for computing the Hessenberg reduction of a unitary plus low-rank matrix $A = G + UV^H$, where $G \in \mathbb{C}^{n \times n}$ is a unitary matrix represented in some compressed format using $O(nk)$ parameters and $U$ and $V$ are $n \times k$ matrices with $k < n$. At the core of these methods is a certain structured decomposition, referred to as an LFR decomposition, of $A$ as a product of three possibly perturbed unitary $k$-Hessenberg matrices of size $n$. It is shown that in most interesting cases an initial LFR decomposition of $A$ can be computed very cheaply. Then we prove structural properties of LFR decompositions by giving conditions under which the LFR decomposition of $A$ implies its Hessenberg shape. Finally, we describe a bulge-chasing scheme for converting the initial LFR decomposition of $A$ into the LFR decomposition of a Hessenberg matrix by means of unitary transformations. The reduction can be performed at an overall computational cost of $O(n^2 k)$ arithmetic operations using $O(nk)$ storage. The computed LFR decomposition of the Hessenberg reduction of $A$ can be processed by the fast QR algorithm presented in [8] to compute the eigenvalues of $A$ at the same cost.
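To make the setting concrete, here is a minimal dense sketch (not the paper's structured algorithm): it builds a unitary plus rank-$k$ matrix, reduces it to Hessenberg form with a standard $O(n^3)$ routine, and checks that the Hessenberg form inherits the same unitary-plus-rank-$k$ structure that the LFR representation stores in $O(nk)$ parameters. All names and sizes below are illustrative choices.

```python
# Dense baseline for the problem solved here: Hessenberg reduction of
# A = G + U V^H with G unitary and U, V of size n x k.  This O(n^3)
# sketch only illustrates the structure; the paper's LFR-based method
# reaches the same Hessenberg form in O(n^2 k) time and O(nk) storage.
import numpy as np
from scipy.linalg import hessenberg, qr

rng = np.random.default_rng(0)
n, k = 200, 3

G, _ = qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))  # unitary G
U = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = G + U @ V.conj().T

H, Q = hessenberg(A, calc_q=True)      # H = Q^H A Q, upper Hessenberg
assert np.allclose(np.tril(H, -2), 0)  # only the first subdiagonal survives

# H is itself unitary plus rank k: H = Q^H G Q + (Q^H U)(Q^H V)^H, which is
# exactly the structure a compressed representation can exploit.
print(np.linalg.matrix_rank(H - Q.conj().T @ G @ Q, tol=1e-8))  # -> k
```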



Related research

Jared Tanner, Simon Vary (2020)
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011; Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016), amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012; Foucart and Rauhut, 2013) have established that data satisfying low-complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees, showing that $m \times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}\bigl((r(m+n-r)+s)\log(mn/s)\bigr)$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard-threshold gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
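For illustration of the low-rank-plus-sparse model only (this is a simplified alternating projection scheme in the fully observed, noiseless case, not the NIHT/NAHT measurement-based solvers analyzed in the paper), a minimal sketch: alternate a truncated SVD (projection onto rank-$r$ matrices) with hard thresholding (projection onto $s$-sparse matrices).

```python
# Minimal sketch of the model M = L + S with L rank-r and S s-sparse:
# alternating hard thresholding in the fully observed, noiseless case.
# Illustrative only; exact recovery requires incoherence-type assumptions.
import numpy as np

rng = np.random.default_rng(1)
m, n, r, s = 60, 50, 3, 100

L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r
S_true = np.zeros(m * n)
idx = rng.choice(m * n, size=s, replace=False)
S_true[idx] = 10 * rng.standard_normal(s)                            # s-sparse spikes
S_true = S_true.reshape(m, n)
M = L_true + S_true

def proj_rank(X, r):            # nearest rank-r matrix via truncated SVD
    U, sig, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r]

def proj_sparse(X, s):          # keep the s largest-magnitude entries
    Y = np.zeros_like(X).ravel()
    keep = np.argsort(np.abs(X).ravel())[-s:]
    Y[keep] = X.ravel()[keep]
    return Y.reshape(X.shape)

L, S = np.zeros((m, n)), np.zeros((m, n))
for _ in range(50):             # alternate the two projections
    L = proj_rank(M - S, r)
    S = proj_sparse(M - L, s)

print(np.linalg.norm(M - L - S) / np.linalg.norm(M))  # typically small
```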
Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n \times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n \times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat{A}$ obtained by a certain embedding of the Hessenberg reduction of $A$ suitable to maintain its structural properties. We show that $\hat{A}$ can be factored as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first $k$ rows, and, moreover, such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
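A small sketch of the starting structure (a dense check only, with illustrative sizes): the block companion matrix of a monic $k \times k$ matrix polynomial $P(z) = Iz^d + \sum_{j<d} A_j z^j$ can be written as $U + XY^H$ with $U$ the unitary block circulant downshift and a rank-$k$ correction in the last block column, and its eigenvalues satisfy $\det P(z) = 0$.

```python
# Sketch: block companion matrix A = U + X Y^H with U unitary block
# circulant, for a monic k x k matrix polynomial of degree d.  The
# eigenvalues of A are the roots of det P(z); this is the structure the
# fast eigensolvers exploit.  Dense verification only.
import numpy as np

rng = np.random.default_rng(2)
k, d = 4, 5                      # block size and polynomial degree
n = k * d
coeffs = [rng.standard_normal((k, k)) for _ in range(d)]   # A_0, ..., A_{d-1}

# U: unitary block circulant downshift (identity blocks on the subdiagonal
# and in the top-right corner).
U = np.zeros((n, n))
for j in range(d - 1):
    U[(j + 1) * k:(j + 2) * k, j * k:(j + 1) * k] = np.eye(k)
U[0:k, (d - 1) * k:] = np.eye(k)

# Rank-k correction replacing the last block column by [-A_0; ...; -A_{d-1}].
X = np.vstack([-coeffs[0] - np.eye(k)] + [-coeffs[j] for j in range(1, d)])
Y = np.zeros((n, k)); Y[(d - 1) * k:, :] = np.eye(k)
A = U + X @ Y.T                  # block companion matrix

# Every eigenvalue z of A satisfies det P(z) = 0 (small up to rounding).
for z in np.linalg.eigvals(A)[:3]:
    P = z**d * np.eye(k) + sum(z**j * coeffs[j] for j in range(d))
    print(abs(np.linalg.det(P)))
```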
We develop two fast algorithms for Hessenberg reduction of a structured matrix $A = D + UV^H$, where $D$ is a real or unitary $n \times n$ diagonal matrix and $U, V \in \mathbb{C}^{n \times k}$. The proposed algorithm for the real case exploits a two-stage approach by first reducing the matrix to a generalized Hessenberg form and then completing the reduction by annihilation of the unwanted subdiagonals. It is shown that the novel method requires $O(n^2 k)$ arithmetic operations and is significantly faster than other reduction algorithms for rank-structured matrices. The method is then extended to the unitary plus low-rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for the block tridiagonalization of $\Re(D)$ induces a structured reduction of $A$ into a block staircase CMV-type shape. Then, we present a numerically stable method for performing this reduction using unitary transformations, and we show how to generalize the subdiagonal elimination to this shape while still being able to provide a condensed representation for the reduced matrix. In this way the complexity remains linear in $k$, and, moreover, the resulting algorithm can be adapted to deal efficiently with block companion matrices.
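A dense sketch of the first stage only (reduction to generalized Hessenberg form with $k$ subdiagonals by Householder similarity transformations, in the real case): the paper performs this with structure-exploiting $O(n^2 k)$ bookkeeping, which this illustration omits.

```python
# First stage of the two-stage approach, dense version: a similarity
# reduction of A = D + U V^T to generalized Hessenberg form with k
# subdiagonals.  Plain Householder reflectors; shape illustration only.
import numpy as np

rng = np.random.default_rng(3)
n, k = 12, 2
D = np.diag(rng.standard_normal(n))
U = rng.standard_normal((n, k)); V = rng.standard_normal((n, k))
A = D + U @ V.T

for j in range(n - k - 1):
    x = A[j + k:, j].copy()               # zero this column below row j+k
    v = x.copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    if np.linalg.norm(v) == 0:
        continue
    v /= np.linalg.norm(v)
    A[j + k:, :] -= 2.0 * np.outer(v, v @ A[j + k:, :])   # apply from the left
    A[:, j + k:] -= 2.0 * np.outer(A[:, j + k:] @ v, v)   # and from the right

print(np.max(np.abs(np.tril(A, -(k + 1)))))  # ~ 0: only k subdiagonals remain
```

The second stage, the annihilation of the $k-1$ unwanted subdiagonals while preserving the condensed representation, is where the structured machinery of the paper comes in.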
Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computing complexity. Recently, fast and reliable eigensolvers dealing with low-rank perturbations of unitary and Hermitian matrices were proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, e.g., the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low-rank perturbation. We propose necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low-rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation needed to bring a matrix to unitary form. A similar condition holds for Hermitian matrices: the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Based on these conditions we are able to identify the closest Hermitian or unitary plus low-rank matrix in Frobenius and spectral norm, and a practical Lanczos iteration to detect the low-rank perturbation is presented. Numerical tests show that this straightforward algorithm is robust with respect to noise.
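A numerical sketch of the singular-value test in one direction (the easy one): if $A = Q + B$ with $Q$ unitary and $\mathrm{rank}(B) \le k$, then by Weyl-type interlacing at most $k$ singular values of $A$ lie above 1 and at most $k$ lie below 1, so counting the deviations bounds the perturbation rank. The converse, and the closest-matrix constructions, are the paper's contribution.

```python
# Count the singular values of A = Q + B deviating from 1.  For a rank-k
# perturbation B of a unitary Q, at most k exceed 1 and at most k fall
# below 1, so the counts reveal the perturbation rank.
import numpy as np
from scipy.linalg import qr, svdvals

rng = np.random.default_rng(4)
n, k = 100, 3
Q, _ = qr(rng.standard_normal((n, n)))                          # orthogonal Q
B = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))   # rank-k
A = Q + B

sig, tol = svdvals(A), 1e-10
above = int(np.sum(sig > 1 + tol))
below = int(np.sum(sig < 1 - tol))
print(above, below)   # both at most k; generically equal to k here
```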
Some variants of the (block) Gauss-Seidel iteration for the solution of linear systems with $M$-matrices in (block) Hessenberg form are discussed. Comparison results for the asymptotic convergence rate of some regular splittings are derived: in particular, we prove that for a lower-Hessenberg $M$-matrix $\rho(P_{GS}) \geq \rho(P_S) \geq \rho(P_{AGS})$, where $P_{GS}$, $P_S$, $P_{AGS}$ are the iteration matrices of the Gauss-Seidel, staircase, and anti-Gauss-Seidel methods, respectively. This is a result that does not seem to follow from classical comparison results, as these splittings are not directly comparable. It is shown that the concept of stair partitioning provides a powerful tool for the design of new variants that are suited for parallel computation.
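A quick numerical check of the outer inequality on a randomly generated lower-Hessenberg $M$-matrix (the staircase splitting in the middle of the chain is omitted here; the construction below is an illustrative choice):

```python
# Compare the Gauss-Seidel and anti-Gauss-Seidel spectral radii on a
# lower-Hessenberg M-matrix; the comparison result predicts
# rho(P_GS) >= rho(P_AGS).
import numpy as np

rng = np.random.default_rng(5)
n = 50

# Lower-Hessenberg M-matrix: nonpositive entries confined to the lower
# triangle and the first superdiagonal, strictly dominant diagonal.
A = -rng.random((n, n))
A = np.tril(A, 0) + np.triu(np.tril(A, 1), 1)   # lower part + 1st superdiagonal
np.fill_diagonal(A, 0)
np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1)  # diagonal dominance -> M-matrix

def rho(M, N):                                   # spectral radius of M^{-1} N
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M, N))))

L = np.tril(A)                                   # Gauss-Seidel: top-down sweep
rho_gs  = rho(L, L - A)                          # P_GS  = L^{-1} (L - A)
Uu = np.triu(A)                                  # anti-Gauss-Seidel: bottom-up
rho_ags = rho(Uu, Uu - A)                        # P_AGS = Uu^{-1} (Uu - A)

print(rho_gs, rho_ags)                           # expect rho_gs >= rho_ags
```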
