
Iterative optimal solutions of linear matrix equations for Hyperspectral and Multispectral image fusing

Added by An-Bao Xu
Publication date: 2021
Language: English





For a linear matrix function $f$ in $X \in \mathbb{R}^{m\times n}$ we consider inhomogeneous linear matrix equations $f(X) = E$ for $E \neq 0$ that have or do not have solutions. For such systems we compute optimal norm-constrained solutions iteratively using the Conjugate Gradient and Lanczos methods in combination with the Moré-Sorensen optimizer. We build codes for ten linear matrix equations, of Sylvester, Lyapunov, Stein and structured types and their …
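The paper's own codes are not reproduced here, but the core building block is easy to illustrate. Below is a minimal sketch (assuming SciPy, a single Sylvester equation $AX + XB = E$, and no norm constraint, so the Moré-Sorensen step is omitted) of matrix-free Conjugate Gradient applied to the normal equations $f^*(f(X)) = f^*(E)$:

```python
# A sketch, not the paper's code: matrix-free CG for the Sylvester
# equation A X + X B = E via the normal equations on vec(X).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

m, n = 30, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
E = rng.standard_normal((m, n))

def f(X):        # the linear matrix function f(X) = A X + X B
    return A @ X + X @ B

def f_adj(Y):    # its adjoint f*(Y) = A^T Y + Y B^T
    return A.T @ Y + Y @ B.T

def normal_matvec(x):
    # f*(f(X)) acting on vec(X); the mn x mn Kronecker matrix is never formed
    X = x.reshape(m, n)
    return f_adj(f(X)).ravel()

op = LinearOperator((m * n, m * n), matvec=normal_matvec, dtype=np.float64)
x, info = cg(op, f_adj(E).ravel(), maxiter=2000)
X = x.reshape(m, n)
print("CG converged:", info == 0, " residual:", np.linalg.norm(f(X) - E))
```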



Related research


Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $\mathbf{A}\mathbf{x}=\mathbf{b}$ that are inconsistent due to corruptions in the measurement vector $\mathbf{b}$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
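As an illustration only (not the authors' code), the following sketch implements one quantile-based variant in the spirit of the abstract: a randomized Kaczmarz update that is applied only when the selected row's residual lies below the $q$-quantile of the absolute residuals, so projections onto corrupted equations are skipped. The values of q, the iteration count, and the 5% corruption level are assumptions for the demo:

```python
# A sketch of a quantile-based Kaczmarz variant: skip the update when the
# chosen row's residual exceeds the q-quantile (likely a corrupted equation).
import numpy as np

def quantile_kaczmarz(A, b, q=0.7, iters=10000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    x = np.zeros(n)
    for _ in range(iters):
        r = np.abs(A @ x - b)          # full residual (subsampled in practice)
        thresh = np.quantile(r, q)
        i = rng.integers(m)
        if r[i] <= thresh:             # only project onto "trustworthy" rows
            x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
m, n = 500, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true
bad = rng.choice(m, m // 20, replace=False)    # corrupt 5% of the entries of b
b[bad] += 10.0 * rng.standard_normal(bad.size)
print("error:", np.linalg.norm(quantile_kaczmarz(A, b) - x_true))
```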
Federico Poloni (2020)
We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of \emph{doubling}: they construct the iterate $Q_k = X_{2^k}$ of another naturally-arising fixed-point iteration $(X_h)$ via a sort of repeated squaring. The equations we consider are Stein equations $X - A^*XA=Q$, Lyapunov equations $A^*X+XA+Q=0$, discrete-time algebraic Riccati equations $X=Q+A^*X(I+GX)^{-1}A$, continuous-time algebraic Riccati equations $Q+A^*X+XA-XGX=0$, palindromic quadratic matrix equations $A+QY+A^*Y^2=0$, and nonlinear matrix equations $X+A^*X^{-1}A=Q$. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory.
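In the simplest case, the Stein equation $X - A^*XA = Q$, doubling reduces to the squared Smith iteration: starting from $X_0 = Q$ and $A_0 = A$, set $X_{k+1} = X_k + A_k^* X_k A_k$ and $A_{k+1} = A_k^2$, so that $X_k$ equals the $2^k$-th iterate of the naive fixed point $X_{h+1} = Q + A^* X_h A$. A minimal sketch (assuming $\rho(A) < 1$ so the iteration converges):

```python
# A sketch of doubling (squared Smith) for the Stein equation X - A* X A = Q.
import numpy as np

def stein_doubling(A, Q, steps=30, tol=1e-12):
    X, Ak = Q.copy(), A.copy()
    for _ in range(steps):
        X_new = X + Ak.conj().T @ X @ Ak    # doubles the number of terms in the Smith series
        Ak = Ak @ Ak                        # repeated squaring of A
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(0)
n = 5
A = 0.2 * rng.standard_normal((n, n))       # spectral radius safely below 1
Q = np.eye(n)
X = stein_doubling(A, Q)
print("Stein residual:", np.linalg.norm(X - A.conj().T @ X @ A - Q))
```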
Yuye Feng, Qingbiao Wu (2020)
This paper introduces and analyzes a preconditioned modified Hermitian and skew-Hermitian splitting (PMHSS) iteration. Large sparse continuous Sylvester equations are solved by the PMHSS iterative algorithm based on non-Hermitian, complex, positive definite/semidefinite, and symmetric matrices. We prove that PMHSS converges under suitable conditions. In addition, we propose an accelerated PMHSS method consisting of two preconditioned matrices and two iteration parameters $\alpha$, $\beta$. Theoretical analysis shows that the convergence speed of the accelerated PMHSS is faster than that of PMHSS. The robustness and efficiency of the two proposed iterative algorithms are also demonstrated in numerical experiments.
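The PMHSS variant itself is not sketched here, but the splitting it builds on is the classical HSS iteration: write $A = H + S$ with Hermitian part $H = (A + A^*)/2$ and skew-Hermitian part $S = (A - A^*)/2$, and alternate two shifted half-steps. A minimal sketch for a single linear system $Ax = b$ (the test matrix, shift $\alpha$, and stopping rule are assumptions; the paper itself treats Sylvester equations with preconditioning):

```python
# A sketch of the classical HSS iteration for A x = b (not the paper's
# PMHSS for Sylvester equations): alternate shifted Hermitian and
# skew-Hermitian half-steps.
import numpy as np

def hss(A, b, alpha=1.0, iters=200, tol=1e-10):
    n = A.shape[0]
    H = (A + A.conj().T) / 2            # Hermitian part
    S = (A - A.conj().T) / 2            # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros_like(b)
    for _ in range(iters):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(0)
n = 50
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # positive definite, non-Hermitian
b = rng.standard_normal(n)
x = hss(A, b)
print("HSS residual:", np.linalg.norm(A @ x - b))
```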
The task of predicting missing entries of a matrix, from a subset of known entries, is known as \textit{matrix completion}. In today's data-driven world, data completion is essential whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account \emph{sparsity-based} structure in the missing entries. This notion of structure is motivated in many settings including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present a robust array of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable with the adjusted NNM on small-sized matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size $1000 \times 1000$.
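As a rough illustration of the gradient-projection IRLS idea (not the authors' algorithm; the log-det weighting, the step size tied to the smoothing parameter $\gamma$, and its decay schedule are assumptions), an sIRLS-style iteration alternates a gradient step on a smoothed rank surrogate with re-imposing the observed entries:

```python
# A rough sIRLS-style sketch for low-rank matrix completion: gradient step
# on the smoothed log-det rank surrogate, then re-impose observed entries.
import numpy as np

def sirls_complete(M_obs, mask, gamma=1.0, decay=0.95, iters=300):
    X = np.where(mask, M_obs, 0.0)               # zero-fill the missing entries
    m = X.shape[0]
    for _ in range(iters):
        W = np.linalg.inv(X @ X.T + gamma * np.eye(m))
        X = X - gamma * (W @ X)                  # step size tied to gamma for stability
        X[mask] = M_obs[mask]                    # projection onto the data constraints
        gamma = max(gamma * decay, 1e-6)         # anneal the smoothing parameter
    return X

rng = np.random.default_rng(0)
m, n, r = 60, 60, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5                  # observe roughly half the entries
X = sirls_complete(M, mask)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```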
Projection-based iterative methods for solving large overdetermined linear systems are well known for their simplicity and computational efficiency. It is also known that the correct choice of a sketching procedure (i.e., preprocessing steps that reduce the dimension of each iteration) can improve the performance of iterative methods in several ways: for example, by speeding up the convergence of the method by fighting inner correlations of the system, or by reducing the variance incurred by the presence of noise. In the current work, we show that sketching can also help us obtain better theoretical guarantees for projection-based methods. Specifically, we use good properties of Gaussian sketching to prove an accelerated convergence rate of the sketched relaxation (also known as Motzkin's) method. The new estimates hold for linear systems of arbitrary structure. We also provide numerical experiments in support of our theoretical analysis of the sketched relaxation method.
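A minimal sketch of a sketched Motzkin-type iteration (the Gaussian sketch size and iteration count are assumptions): each step draws a fresh Gaussian sketch of the system and projects onto the sketched row with the largest absolute residual:

```python
# A sketch of the sketched Motzkin (relaxation) method: draw a fresh
# Gaussian sketch, then project onto the sketched row of largest residual.
import numpy as np

def sketched_motzkin(A, b, s=10, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.standard_normal((s, m)) / np.sqrt(s)   # Gaussian sketch
        As, bs = S @ A, S @ b                          # sketched system
        r = As @ x - bs
        i = np.argmax(np.abs(r))                       # Motzkin rule: largest residual
        x -= r[i] / (As[i] @ As[i]) * As[i]            # orthogonal projection onto that row
    return x

rng = np.random.default_rng(1)
m, n = 300, 30
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                         # consistent (uncorrupted) system
print("error:", np.linalg.norm(sketched_motzkin(A, b) - x_true))
```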
