
Fast Hessenberg reduction of some rank structured matrices

Published by: Leonardo Robol
Publication date: 2016
Language: English





We develop two fast algorithms for Hessenberg reduction of a structured matrix $A = D + UV^H$, where $D$ is a real or unitary $n \times n$ diagonal matrix and $U, V \in \mathbb{C}^{n \times k}$. The proposed algorithm for the real case exploits a two-stage approach: first reduce the matrix to a generalized Hessenberg form, then complete the reduction by annihilating the unwanted subdiagonals. It is shown that the novel method requires $O(n^2 k)$ arithmetic operations and is significantly faster than other reduction algorithms for rank structured matrices. The method is then extended to the unitary plus low rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for the block tridiagonalization of $\Re(D)$ induces a structured reduction of $A$ into a block staircase CMV-type shape. We then present a numerically stable method for performing this reduction using unitary transformations, and we show how to generalize the subdiagonal elimination to this shape while still providing a condensed representation for the reduced matrix. In this way the complexity remains linear in $k$ and, moreover, the resulting algorithm can be adapted to deal efficiently with block companion matrices.
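To make the two-stage idea concrete, below is a minimal dense NumPy sketch of the first stage for the real diagonal case: block Lanczos applied to $D$ with starting block $U$ produces a unitary $Q$ such that $Q^H A Q$ is in generalized Hessenberg form with at most $2k-1$ nonzero subdiagonals. This is an $O(n^3)$ illustration of the structure only, not the paper's $O(n^2 k)$ algorithm, and the second stage (chasing away the extra subdiagonals with rotations) is omitted.

```python
import numpy as np

def block_lanczos(D, U):
    """Orthonormal Q such that Q^H diag(D) Q is block tridiagonal and Q^H U
    is supported on its first k rows. Full reorthogonalization is used for
    numerical safety; breakdown (rank-deficient blocks) is ignored here."""
    n, k = U.shape
    Q1, _ = np.linalg.qr(U)                # first block spans the columns of U
    blocks = [Q1]
    for _ in range(n // k - 1):
        W = D[:, None] * blocks[-1]        # apply diag(D) to the newest block
        basis = np.hstack(blocks)
        W -= basis @ (basis.conj().T @ W)  # orthogonalize against all blocks...
        W -= basis @ (basis.conj().T @ W)  # ...twice, to be safe
        Qj, _ = np.linalg.qr(W)
        blocks.append(Qj)
    return np.hstack(blocks)

rng = np.random.default_rng(0)
n, k = 12, 2
D = rng.standard_normal(n)                 # real diagonal part of A
U = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
A = np.diag(D) + U @ V.conj().T

Q = block_lanczos(D, U)
B = Q.conj().T @ A @ Q                     # generalized Hessenberg form
print(np.max(np.abs(np.tril(B, -2 * k))))  # ~1e-15: only 2k-1 subdiagonals survive
```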




Read also

We present fast numerical methods for computing the Hessenberg reduction of a unitary plus low-rank matrix $A = G + UV^H$, where $G \in \mathbb{C}^{n \times n}$ is a unitary matrix represented in some compressed format using $O(nk)$ parameters and $U$ and $V$ are $n \times k$ matrices with $k < n$. At the core of these methods is a certain structured decomposition of $A$, referred to as an LFR decomposition, as the product of three possibly perturbed unitary $k$-Hessenberg matrices of size $n$. It is shown that in most interesting cases an initial LFR decomposition of $A$ can be computed very cheaply. We then prove structural properties of LFR decompositions by giving conditions under which the LFR decomposition of $A$ implies its Hessenberg shape. Finally, we describe a bulge chasing scheme for converting the initial LFR decomposition of $A$ into the LFR decomposition of a Hessenberg matrix by means of unitary transformations. The reduction can be performed at an overall computational cost of $O(n^2 k)$ arithmetic operations using $O(nk)$ storage. The computed LFR decomposition of the Hessenberg reduction of $A$ can be processed by the fast QR algorithm presented in [8] in order to compute the eigenvalues of $A$ within the same costs.
Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n \times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n \times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat A$ obtained by a certain embedding of the Hessenberg reduction of $A$ suitable to maintain its structural properties. We show that $\hat A$ can be factored as a product of lower and upper unitary Hessenberg matrices, possibly perturbed in the first $k$ rows, and, moreover, such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
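Both abstracts above rely on compressed representations of unitary Hessenberg factors. A classical fact behind such $O(nk)$-parameter formats is that an $n \times n$ unitary Hessenberg matrix factors as a descending product of $n-1$ Givens rotations (the Schur parametrization). The sketch below, with a hypothetical helper givens, just verifies this numerically.

```python
import numpy as np

def givens(n, i, c, s):
    """Identity with the 2x2 rotation [[c, -conj(s)], [s, conj(c)]]
    embedded at rows/columns i, i+1 (hypothetical helper)."""
    G = np.eye(n, dtype=complex)
    G[i, i], G[i, i + 1] = c, -np.conj(s)
    G[i + 1, i], G[i + 1, i + 1] = s, np.conj(c)
    return G

rng = np.random.default_rng(1)
n = 6
H = np.eye(n, dtype=complex)
for i in range(n - 1):                     # descending product G_1 G_2 ... G_{n-1}
    theta = rng.uniform(0, 2 * np.pi)
    H = H @ givens(n, i, np.cos(theta), np.sin(theta))

print(np.allclose(H.conj().T @ H, np.eye(n)))  # True: H is unitary
print(np.allclose(np.tril(H, -2), 0))          # True: H is upper Hessenberg
```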
Some variants of the (block) Gauss-Seidel iteration for the solution of linear systems with $M$-matrices in (block) Hessenberg form are discussed. Comparison results for the asymptotic convergence rates of some regular splittings are derived: in particular, we prove that for a lower Hessenberg M-matrix $\rho(P_{GS}) \geq \rho(P_S) \geq \rho(P_{AGS})$, where $P_{GS}$, $P_S$, and $P_{AGS}$ are the iteration matrices of the Gauss-Seidel, staircase, and anti-Gauss-Seidel methods, respectively. This result does not seem to follow from classical comparison results, as these splittings are not directly comparable. It is shown that the concept of stair partitioning provides a powerful tool for the design of new variants that are suited for parallel computation.
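A quick numerical illustration of the claimed inequality (the staircase splitting is omitted): build a random lower Hessenberg M-matrix $A = sI - C$ with $C \geq 0$ lower Hessenberg and $s > \rho(C)$, then compare the spectral radii of the forward and anti (backward) Gauss-Seidel iteration matrices. The helper rho_of_splitting is ad hoc for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
C = np.tril(rng.uniform(0.0, 1.0, (n, n)), 1)   # nonnegative, lower Hessenberg
s = np.max(np.abs(np.linalg.eigvals(C))) + 0.5  # s > rho(C)
A = s * np.eye(n) - C                           # nonsingular lower Hessenberg M-matrix

def rho_of_splitting(M, A):
    """Spectral radius of the iteration matrix M^{-1} N for A = M - N
    (ad hoc helper for this sketch)."""
    P = np.linalg.solve(M, M - A)
    return np.max(np.abs(np.linalg.eigvals(P)))

rho_gs = rho_of_splitting(np.tril(A), A)    # forward Gauss-Seidel
rho_ags = rho_of_splitting(np.triu(A), A)   # anti (backward) Gauss-Seidel
print(rho_gs, rho_ags)                      # rho_gs >= rho_ags, as the paper proves
```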
Jouhayna Harmouch, 2017
We study the decomposition of a multivariate Hankel matrix $H_\sigma$ as a sum of Hankel matrices of small rank, in correlation with the decomposition of its symbol $\sigma$ as a sum of polynomial-exponential series. We present a new algorithm to compute the low rank decomposition of the Hankel operator and the decomposition of its symbol, exploiting the properties of the associated Artinian Gorenstein quotient algebra $A_\sigma$. A basis of $A_\sigma$ is computed from the Singular Value Decomposition of a sub-matrix of the Hankel matrix $H_\sigma$. The frequencies and the weights are deduced from the generalized eigenvectors of pencils of shifted sub-matrices of $H_\sigma$. Explicit formulas for the weights in terms of the eigenvectors allow us to avoid solving a Vandermonde system. This new method is a multivariate generalization of the so-called pencil method for solving Prony-type decomposition problems. We analyse its numerical behaviour in the presence of noisy input moments, and describe a rescaling technique which improves the numerical quality of the reconstruction for frequencies of high amplitude. We also present a new Newton iteration which converges locally to the closest multivariate Hankel matrix of low rank, and show its impact for correcting errors on input moments.
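For intuition, here is a univariate sketch of the pencil idea that the paper generalizes: recover nodes $z_i$ and weights $w_i$ of the sequence $\sigma_j = \sum_i w_i z_i^j$ from the SVD of a Hankel matrix and the eigenvalues of a shifted sub-matrix pencil. Unlike the paper, the weights here are obtained by solving a small Vandermonde system rather than from explicit eigenvector formulas.

```python
import numpy as np

rng = np.random.default_rng(3)
z = np.array([0.9 * np.exp(0.5j), 0.7 * np.exp(-1.2j), 0.5])   # true nodes
w = np.array([2.0, -1.0, 0.5])                                 # true weights
r, p = len(z), 8
moments = np.array([np.sum(w * z**j) for j in range(2 * p)])   # sigma_j

H0 = np.array([[moments[i + j] for j in range(p)] for i in range(p)])
H1 = np.array([[moments[i + j + 1] for j in range(p)] for i in range(p)])

# Rank-r truncated SVD of H0; the nodes are the eigenvalues of the
# reduced pencil matrix built from the shifted Hankel matrix H1.
U, S, Vh = np.linalg.svd(H0)
Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T
M = (Ur.conj().T @ H1 @ Vr) / Sr            # r x r, right-scaled by diag(Sr)^{-1}
z_est = np.linalg.eigvals(M)

# Weights from a small Vandermonde least-squares solve (the paper instead
# derives explicit formulas from the eigenvectors).
Vand = np.vander(z_est, 2 * p, increasing=True).T
w_est = np.linalg.lstsq(Vand, moments, rcond=None)[0]
print(z_est, w_est)
```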
Jared Tanner, Simon Vary, 2020
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011; Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016) amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012; Foucart and Rauhut, 2013) have established that data satisfying low complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m \times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}((r(m+n-r)+s)\log(mn/s))$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
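The following is a minimal alternating hard-thresholding sketch for splitting a fully observed matrix into rank-$r$ plus $s$-sparse parts. It only illustrates the low-rank-plus-sparse model itself; the paper's NIHT and NAHT algorithms instead operate on compressed linear measurements of the matrix.

```python
import numpy as np

def svd_truncate(X, r):
    """Best rank-r approximation of X (truncated SVD)."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * S[:r]) @ Vh[:r, :]

def sparse_truncate(X, s):
    """Keep only the s largest-magnitude entries of X."""
    Y = np.zeros_like(X)
    idx = np.unravel_index(np.argsort(np.abs(X), axis=None)[-s:], X.shape)
    Y[idx] = X[idx]
    return Y

rng = np.random.default_rng(4)
m, n, r, s = 30, 20, 2, 15
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = sparse_truncate(10 * rng.standard_normal((m, n)), s)
M = L_true + S_true

L = np.zeros((m, n))
S = np.zeros((m, n))
for _ in range(100):                       # alternate the two hard thresholds
    L = svd_truncate(M - S, r)
    S = sparse_truncate(M - L, s)

print(np.linalg.norm(M - L - S) / np.linalg.norm(M))   # small residual
```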