
When is a matrix unitary or Hermitian plus low rank?

Publication date: 2018
Language: English





Hermitian and unitary matrices are two representatives of the class of normal matrices whose full eigenvalue decomposition can be stably computed in quadratic computational complexity. Recently, fast and reliable eigensolvers dealing with low rank perturbations of unitary and Hermitian matrices were proposed. These structured eigenvalue problems appear naturally when computing roots, via confederate linearizations, of polynomials expressed in, e.g., the monomial or Chebyshev basis. Often, however, it is not known beforehand whether or not a matrix can be written as the sum of a Hermitian or unitary matrix plus a low rank perturbation. We propose necessary and sufficient conditions characterizing the class of Hermitian or unitary plus low rank matrices. The number of singular values deviating from 1 determines the rank of a perturbation bringing a matrix to unitary form. A similar condition holds for Hermitian matrices: the eigenvalues of the skew-Hermitian part differing from 0 dictate the rank of the perturbation. We prove that these relations are linked via the Cayley transform. Based on these conditions, we identify the closest Hermitian plus low rank and unitary plus low rank matrix in the Frobenius and spectral norms, and we present a practical Lanczos iteration to detect the low rank perturbation. Numerical tests demonstrate that this straightforward algorithm is robust with respect to noise.
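To make the two rank criteria concrete, here is a minimal NumPy sketch. It is our own illustration of the conditions stated above, not the paper's Lanczos-based detection procedure: it counts the singular values of a matrix that deviate from 1 and the nonzero eigenvalues of its skew-Hermitian part; the function names and the tolerance `tol` are illustrative choices.

```python
# Minimal sketch of the two rank criteria (illustrative dense computation,
# not the paper's Lanczos iteration). `tol` is an arbitrary threshold.
import numpy as np

def unitary_plus_low_rank_rank(A, tol=1e-10):
    """Number of singular values of A deviating from 1 beyond tol."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(np.abs(s - 1.0) > tol))

def hermitian_plus_low_rank_rank(A, tol=1e-10):
    """Number of eigenvalues of the skew-Hermitian part deviating from 0."""
    S = (A - A.conj().T) / 2.0        # skew-Hermitian part of A
    w = np.linalg.eigvalsh(1j * S)    # 1j*S is Hermitian, so real spectrum
    return int(np.sum(np.abs(w) > tol))

rng = np.random.default_rng(0)
# Orthogonal (hence unitary) matrix plus a rank-2 perturbation:
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
X, Y = rng.standard_normal((50, 2)), rng.standard_normal((50, 2))
print(unitary_plus_low_rank_rank(Q + X @ Y.T))    # at most 4 by interlacing

# Symmetric (hence Hermitian) matrix plus a rank-1 perturbation:
H = rng.standard_normal((50, 50)); H = H + H.T
u, v = rng.standard_normal((50, 1)), rng.standard_normal((50, 1))
print(hermitian_plus_low_rank_rank(H + u @ v.T))  # at most 2 here
```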



Related research

Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n\times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n\times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat{A}$ obtained by a certain embedding of the Hessenberg reduction of $A$ suitable to maintain its structural properties. We show that $\hat{A}$ can be factored as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first $k$ rows, and, moreover, such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
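As a small illustration of this setting (a dense baseline check, not the paper's fast structured algorithm), the sketch below builds $A = U + XY^H$ with $U$ unitary diagonal, the "remarkable case" mentioned above, and verifies that the Hessenberg reduction $H = P^H A P$ is again unitary plus rank $k$: $P^H U P$ stays unitary and $(P^H X)(P^H Y)^H$ keeps rank $k$. All variable names are our own.

```python
# Dense verification that Hessenberg reduction preserves the
# unitary-plus-rank-k structure (illustration only; the paper works with
# data-sparse factored representations instead of dense matrices).
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
n, k = 40, 3
U = np.diag(np.exp(2j * np.pi * rng.random(n)))   # unitary diagonal
X = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Y = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = U + X @ Y.conj().T

H, P = hessenberg(A, calc_q=True)                 # A = P H P^H, H Hessenberg
U_h = P.conj().T @ U @ P                          # unitary part of H
R = H - U_h                                       # remaining perturbation
print(np.linalg.matrix_rank(R, tol=1e-8))         # expected: k = 3
print(np.allclose(U_h.conj().T @ U_h, np.eye(n))) # unitary part check: True
```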
We present fast numerical methods for computing the Hessenberg reduction of a unitary plus low-rank matrix $A = G + UV^H$, where $G \in \mathbb{C}^{n\times n}$ is a unitary matrix represented in some compressed format using $O(nk)$ parameters and $U$ and $V$ are $n\times k$ matrices with $k < n$. At the core of these methods is a certain structured decomposition, referred to as an LFR decomposition, of $A$ as a product of three possibly perturbed unitary $k$-Hessenberg matrices of size $n$. It is shown that in most interesting cases an initial LFR decomposition of $A$ can be computed very cheaply. Then we prove structural properties of LFR decompositions by giving conditions under which the LFR decomposition of $A$ implies its Hessenberg shape. Finally, we describe a bulge chasing scheme for converting the initial LFR decomposition of $A$ into the LFR decomposition of a Hessenberg matrix by means of unitary transformations. The reduction can be performed at an overall computational cost of $O(n^2 k)$ arithmetic operations using $O(nk)$ storage. The computed LFR decomposition of the Hessenberg reduction of $A$ can be processed by the fast QR algorithm presented in [8] in order to compute the eigenvalues of $A$ at the same cost.
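The data-sparse building blocks behind such factorizations are unitary Hessenberg matrices stored by rotation parameters rather than by entries. The sketch below (our own parametrization, not the paper's LFR code) constructs a unitary upper Hessenberg matrix as a product of $n-1$ Givens rotations, i.e. $O(n)$ parameters instead of $n^2$ entries.

```python
# A unitary upper Hessenberg matrix as a descending product of Givens
# rotations G_1 G_2 ... G_{n-1} (Schur-type parametrization; illustrative).
import numpy as np

def givens(n, i, c, s):
    """n x n identity with a 2x2 unitary rotation in rows/columns i, i+1."""
    G = np.eye(n, dtype=complex)
    G[i, i], G[i, i + 1] = c, -np.conj(s)
    G[i + 1, i], G[i + 1, i + 1] = s, np.conj(c)
    return G

rng = np.random.default_rng(2)
n = 6
theta = rng.random(n - 1) * 2 * np.pi
Q = np.eye(n, dtype=complex)
for i in range(n - 1):                            # descending order yields
    Q = Q @ givens(n, i, np.cos(theta[i]), np.sin(theta[i]))  # Hessenberg form
print(np.allclose(Q.conj().T @ Q, np.eye(n)))     # unitary: True
print(np.allclose(np.tril(Q, -2), 0))             # upper Hessenberg: True
```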
Jared Tanner, Simon Vary (2020)
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011; Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016) amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012; Foucart and Rauhut, 2013) have established that data satisfying low complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m\times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}\big((r(m+n-r)+s)\log(mn/s)\big)$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
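The two projections that drive hard-thresholding iterations of the NIHT/NAHT type are easy to state: project onto rank-$r$ matrices by a truncated SVD, and onto $s$-sparse matrices by keeping the $s$ largest-magnitude entries. The sketch below alternates these projections in a simplified fully-observed setting; it is an illustration of the model, not the measured-operator algorithms analyzed in the paper.

```python
# The rank-r and s-sparse hard-thresholding projections, alternated on a
# fully observed matrix M = L + S (simplified illustration; the paper's
# NIHT/NAHT operate on linear measurements of M).
import numpy as np

def project_rank(Z, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def project_sparse(Z, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(Z)
    idx = np.unravel_index(np.argsort(np.abs(Z), axis=None)[-s:], Z.shape)
    out[idx] = Z[idx]
    return out

rng = np.random.default_rng(3)
m, n, r, s = 30, 40, 2, 25
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = project_sparse(rng.standard_normal((m, n)) * 5, s)
M = L_true + S_true

L, S = np.zeros((m, n)), np.zeros((m, n))
for _ in range(200):                  # alternate the two projections
    L = project_rank(M - S, r)
    S = project_sparse(M - L, s)
# Residual is typically small for such a well-separated instance.
print(np.linalg.norm(M - L - S) / np.linalg.norm(M))
```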
An-Bao Xu (2020)
This paper considers the completion problem for a tensor (also referred to as a multidimensional array) from limited sampling. Our greedy method is based on extending the low-rank approximation pursuit (LRAP) method for matrix completions to tensor completions. The method performs a tensor factorization using the tensor singular value decomposition (t-SVD), which extends the standard matrix SVD to tensors and leads to a notion of rank called the tubal-rank. The goal is to recover the tensor data from low resolution samples as accurately as possible; to complete a low resolution tensor successfully, we assume that the given tensor data has low tubal-rank. For tensors of low tubal-rank, we establish convergence results for our method based on the tensor restricted isometry property (TRIP). Our result with the TRIP condition for tensors is analogous to results for low-rank matrix completion under the RIP condition: the TRIP condition uses the t-SVD for low tubal-rank tensors, while the RIP uses the SVD for matrices. We show that a subgaussian measurement map satisfies the TRIP condition with high probability and gives an almost optimal bound on the number of required measurements. We compare the numerical performance of the proposed algorithm with those of state-of-the-art approaches on video recovery and color image recovery.
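The tubal-rank of the t-SVD is computable with standard tools: Fourier transform the tensor along its third mode, take an ordinary SVD of every frontal slice in the Fourier domain, and count the singular tubes that are not numerically zero. The sketch below follows this standard t-product construction; the tolerance and example sizes are our illustrative choices, not the paper's completion algorithm.

```python
# Tubal rank via the standard t-SVD construction: FFT along mode 3, then
# slice-wise SVDs in the Fourier domain (illustrative tolerance).
import numpy as np

def tubal_rank(T, tol=1e-10):
    That = np.fft.fft(T, axis=2)                  # FFT along the third mode
    svals = np.stack([np.linalg.svd(That[:, :, i], compute_uv=False)
                      for i in range(T.shape[2])], axis=1)
    # A singular tube is nonzero if it is nonzero in some Fourier slice.
    return int(np.sum(np.max(svals, axis=1) > tol))

# A tensor built as the t-product of two random factors has low tubal rank.
rng = np.random.default_rng(4)
n1, n2, n3, r = 10, 12, 5, 2
Ahat = np.fft.fft(rng.standard_normal((n1, r, n3)), axis=2)
Bhat = np.fft.fft(rng.standard_normal((r, n2, n3)), axis=2)
Chat = np.einsum('irk,rjk->ijk', Ahat, Bhat)      # slice-wise products
C = np.real(np.fft.ifft(Chat, axis=2))            # back to the spatial domain
print(tubal_rank(C))                              # expected: 2
```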
We consider adaptive approximations of the parameter-to-solution map for elliptic operator equations depending on a large or infinite number of parameters, comparing approximation strategies of different degrees of nonlinearity: sparse polynomial expansions, general low-rank approximations separating spatial and parametric variables, and hierarchical tensor decompositions separating all variables. We describe corresponding adaptive algorithms based on a common generic template and show their near-optimality with respect to natural approximability assumptions for each type of approximation. A central ingredient in the resulting bounds for the total computational complexity is a set of new operator compression results for the case of infinitely many parameters. We conclude with a comparison of the complexity estimates based on the actual approximability properties of classes of parametric model problems, which shows that the computational costs of optimized low-rank expansions can be significantly lower or higher than those of sparse polynomial expansions, depending on the particular type of parametric problem.
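A quick way to see the low-rank separation of spatial and parametric variables at work is to assemble a snapshot matrix of solutions of a simple parametric diffusion problem and inspect its singular value decay. The model problem, discretization, and sample sizes below are our own illustrative choices, unrelated to the specific problem classes analyzed in the paper.

```python
# Singular value decay of a snapshot matrix for -(a(x,y) u')' = 1 on (0,1)
# with a(x,y) = 1 + y*x: fast decay signals good low-rank separability of
# the spatial variable x and the parameter y (illustrative model problem).
import numpy as np

n, m = 100, 50                                # grid points, parameter samples
h = 1.0 / (n + 1)
ys = np.linspace(0.1, 1.0, m)
snapshots = []
for y in ys:
    a = 1.0 + y * np.linspace(0, 1, n + 1)    # coefficient at the half points
    main = (a[:-1] + a[1:]) / h**2            # finite-difference stencil
    off = -a[1:-1] / h**2
    T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    snapshots.append(np.linalg.solve(T, np.ones(n)))
S = np.column_stack(snapshots)                # n x m snapshot matrix
sv = np.linalg.svd(S, compute_uv=False)
print(sv[:5] / sv[0])                         # rapid decay => low-rank structure
```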