
Iterative and doubling algorithms for Riccati-type matrix equations: a comparative introduction

Posted by: Federico G. Poloni
Publication date: 2020
Research field: Informatics Engineering
Paper language: English
Author: Federico Poloni





We review a family of algorithms for Lyapunov- and Riccati-type equations which are all related to each other by the idea of \emph{doubling}: they construct the iterate $Q_k = X_{2^k}$ of another naturally-arising fixed-point iteration $(X_h)$ via a sort of repeated squaring. The equations we consider are Stein equations $X - A^*XA=Q$, Lyapunov equations $A^*X+XA+Q=0$, discrete-time algebraic Riccati equations $X=Q+A^*X(I+GX)^{-1}A$, continuous-time algebraic Riccati equations $Q+A^*X+XA-XGX=0$, palindromic quadratic matrix equations $A+QY+A^*Y^2=0$, and nonlinear matrix equations $X+A^*X^{-1}A=Q$. We draw comparisons among these algorithms, highlight the connections between them and to other algorithms such as subspace iteration, and discuss open issues in their theory.
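To make the doubling idea concrete, here is a minimal sketch for the simplest case above, the Stein equation $X - A^*XA = Q$ with $\rho(A) < 1$: the plain fixed-point iteration is $X_{h+1} = Q + A^*X_hA$ (started from $X_0 = 0$), and squaring $A$ at every step lets one jump directly from $X_{2^k}$ to $X_{2^{k+1}}$. The function name, stopping criterion, and indexing convention below are ours, not taken from the paper.

```python
import numpy as np

def stein_doubling(A, Q, k_max=20, tol=1e-14):
    """Squared-Smith / doubling iteration for the Stein equation X - A^* X A = Q.

    Assumes rho(A) < 1, so that X = sum_{j>=0} (A^*)^j Q A^j converges.
    After k passes of the loop, Qk equals the partial sum with 2^k terms,
    i.e. the iterate X_{2^k} of the fixed-point iteration X_{h+1} = Q + A^* X_h A
    started from X_0 = 0.
    """
    Ak, Qk = A.copy(), Q.copy()
    for _ in range(k_max):
        Qk_new = Qk + Ak.conj().T @ Qk @ Ak   # doubles the number of summed terms
        Ak = Ak @ Ak                          # A_{k+1} = A_k^2
        if np.linalg.norm(Qk_new - Qk, 'fro') <= tol * np.linalg.norm(Qk_new, 'fro'):
            return Qk_new
        Qk = Qk_new
    return Qk

# quick check on a random matrix scaled to have spectral radius 0.5
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A *= 0.5 / max(abs(np.linalg.eigvals(A)))
Q = np.eye(n)
X = stein_doubling(A, Q)
print(np.linalg.norm(X - A.conj().T @ X @ A - Q))  # residual near machine precision
```

Because $A_k = A^{2^k}$ decays rapidly when $\rho(A) < 1$, a handful of doubling steps typically suffices where the plain iteration would need thousands.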




Read also

In \emph{Guo et al., arXiv:2005.08288}, we propose a decoupled form of the structure-preserving doubling algorithm (dSDA). The method decouples the original two to four coupled recursions, enabling it to solve large-scale algebraic Riccati equations and other related problems. In this paper, we consider the numerical computation of the novel dSDA for solving large-scale continuous-time algebraic Riccati equations with low-rank structures (thus possessing numerically low-rank solutions). With the help of a new truncation strategy, the rank of the approximate solution is controlled. Consequently, large-scale problems can be treated efficiently. Illustrative numerical examples are presented to demonstrate and confirm our claims.
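The key implementation point in the paragraph above is keeping the iterates in low-rank factored form and truncating their rank as the iteration proceeds. The fragment below is a generic SVD-based compression of a factor $Z$ (so that $X \approx ZZ^*$); the helper name and relative tolerance are ours, and the actual truncation strategy used in the dSDA paper may differ.

```python
import numpy as np

def truncate_low_rank_factor(Z, tol=1e-12):
    """Generic rank-truncation step for a low-rank factor Z (X ~ Z Z^*):
    keep only the directions whose singular values exceed a relative
    threshold.  This is the standard SVD-based compression, shown here
    only as a plausible stand-in for the paper's truncation strategy."""
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r]          # Zt with Zt Zt^* approximating Z Z^*
```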
Frank Uhlig, An-Bao Xu (2021)
For a linear matrix function $f$ of $X \in R^{m\times n}$ we consider inhomogeneous linear matrix equations $f(X) = E$ with $E \neq 0$ that may or may not have solutions. For such systems we compute optimal norm-constrained solutions iteratively using the Conjugate Gradient and Lanczos methods in combination with the Moré-Sorensen optimizer. We build codes for ten linear matrix equations, of Sylvester, Lyapunov, Stein and structured types and their …
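The abstract above combines Krylov methods (CG, Lanczos) with a Moré-Sorensen trust-region step to enforce the norm constraint. As an illustration of just the Krylov building block, the sketch below applies matrix-free CG to the normal equations (CGLS) of a linear matrix operator $f$, with the Stein operator $f(X) = X - A^*XA$ as a stand-in example; the norm constraint and the Moré-Sorensen step are omitted, and all function names are ours.

```python
import numpy as np

def cgls_matrix(f, f_adj, E, X0, maxit=500, tol=1e-10):
    """Matrix-free CG on the normal equations f*(f(X)) = f*(E) (CGLS),
    minimising ||f(X) - E||_F for a linear matrix operator f whose
    adjoint in the trace inner product is f_adj."""
    X = X0.copy()
    R = E - f(X)                      # residual in the range of f
    S = f_adj(R)                      # negative gradient of 0.5*||f(X)-E||_F^2
    P = S.copy()
    gamma = np.vdot(S, S).real
    for _ in range(maxit):
        W = f(P)
        alpha = gamma / np.vdot(W, W).real
        X = X + alpha * P
        R = R - alpha * W
        S = f_adj(R)
        gamma_new = np.vdot(S, S).real
        if np.sqrt(gamma_new) <= tol:
            break
        P = S + (gamma_new / gamma) * P
        gamma = gamma_new
    return X

# stand-in example: the Stein operator and its adjoint
A = 0.3 * np.eye(4) + 0.1 * np.ones((4, 4))      # spectral radius 0.7 < 1
f = lambda X: X - A.conj().T @ X @ A
f_adj = lambda Y: Y - A @ Y @ A.conj().T
E = np.eye(4)
X = cgls_matrix(f, f_adj, E, np.zeros((4, 4)))
print(np.linalg.norm(f(X) - E))                   # small residual: system is consistent
```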
Yuye Feng, Qingbiao Wu (2020)
This paper introduces and analyzes a preconditioned modified Hermitian and skew-Hermitian splitting (PMHSS) iteration. Large sparse continuous Sylvester equations with non-Hermitian, complex, positive definite/semidefinite, and symmetric matrices are solved by the PMHSS iterative algorithm. We prove that PMHSS converges under suitable conditions. In addition, we propose an accelerated PMHSS method consisting of two preconditioning matrices and two iteration parameters $\alpha$, $\beta$. Theoretical analysis shows that the accelerated PMHSS converges faster than PMHSS. The robustness and efficiency of the two proposed iterative algorithms are also demonstrated in numerical experiments.
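For orientation, the basic (unweighted) MHSS iteration that PMHSS builds on can be stated for a complex symmetric linear system $(W + iT)x = b$, with $W$ symmetric positive definite and $T$ symmetric positive semidefinite; PMHSS replaces the shift $\alpha I$ by $\alpha V$ for a preconditioning matrix $V$, and the Sylvester-equation variant in the abstract is the analogous matrix-equation form. The sketch below is this textbook MHSS step, not the authors' algorithm.

```python
import numpy as np

def mhss(W, T, b, alpha, x0=None, maxit=200, tol=1e-10):
    """Basic MHSS iteration for (W + iT) x = b with W symmetric positive
    definite and T symmetric positive semidefinite; alpha > 0 is the shift.
    PMHSS would replace alpha*I below by alpha*V for a preconditioner V."""
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    M1, M2 = alpha * I + W, alpha * I + T
    for _ in range(maxit):
        x_half = np.linalg.solve(M1, (alpha * I - 1j * T) @ x + b)
        x = np.linalg.solve(M2, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x
```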
The task of predicting missing entries of a matrix, from a subset of known entries, is known as \textit{matrix completion}. In today's data-driven world, data completion is essential, whether it is the main goal or a pre-processing step. Structured matrix completion includes any setting in which data is not missing uniformly at random. In recent work, a modification to the standard nuclear norm minimization (NNM) for matrix completion has been developed to take into account \emph{sparsity-based} structure in the missing entries. This notion of structure is motivated in many settings, including recommender systems, where the probability that an entry is observed depends on the value of the entry. We propose adjusting an Iteratively Reweighted Least Squares (IRLS) algorithm for low-rank matrix completion to take into account sparsity-based structure in the missing entries. We also present an iterative gradient-projection-based implementation of the algorithm that can handle large-scale matrices. Finally, we present a robust array of numerical experiments on matrices of varying sizes, ranks, and levels of structure. We show that our proposed method is comparable with the adjusted NNM on small-sized matrices, and often outperforms the IRLS algorithm in structured settings on matrices up to size $1000 \times 1000$.
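To give a flavour of the gradient-projection component mentioned above (not the authors' structure-aware IRLS itself), the following is a standard SVP-style sketch: a gradient step on the data-fit term over the observed entries, followed by projection onto rank-$r$ matrices via a truncated SVD. The uniform (unweighted) treatment of missing entries and all parameter choices are ours.

```python
import numpy as np

def svp_completion(M_obs, mask, rank, step=1.0, maxit=500, tol=1e-8):
    """Gradient projection for low-rank matrix completion (SVP-style sketch):
    gradient step on 0.5*||P_Omega(X - M)||_F^2, then projection onto
    rank-<=r matrices via a truncated SVD.  `mask` is a boolean array of
    observed entries; unobserved values of M_obs are ignored."""
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(maxit):
        G = mask * (X - M_obs)                       # gradient of the data-fit term
        Y = X - step * G
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X_new = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        if np.linalg.norm(X_new - X, 'fro') <= tol * max(1.0, np.linalg.norm(X, 'fro')):
            return X_new
        X = X_new
    return X
```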
In this paper, we aim at solving the Biot model under stabilized finite element discretizations. To solve the resulting generalized saddle point linear systems, several iterative methods are proposed and compared. In the first method, we apply the GMRES algorithm as the outer iteration. In the second method, the Uzawa method with variable relaxation parameters is employed as the outer iteration. In the third approach, the Uzawa method is treated as a fixed-point iteration and the outer solver is the so-called Anderson acceleration. In all these methods, the inner solvers are preconditioners for the generalized saddle point problem. In the preconditioners, the Schur complement approximation is derived by a Fourier analysis approach. These preconditioners are implemented either exactly or inexactly. Extensive experiments are given to justify the performance of the proposed preconditioners and to compare all the algorithms.
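Of the three outer solvers compared above, Anderson acceleration is the least standard to state; a minimal generic implementation for a vector-valued fixed-point map $x = g(x)$ (window size $m$, no damping) is sketched below. It is not tied to the Biot discretization or the Schur-complement preconditioners in the paper; all names and defaults are ours.

```python
import numpy as np

def anderson(g, x0, m=5, maxit=100, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x = g(x).
    Keeps the last m residuals f_i = g(x_i) - x_i and combines the
    corresponding g(x_i) with weights minimising ||sum_i c_i f_i||_2
    subject to sum_i c_i = 1 (solved via an unconstrained reformulation)."""
    x = x0.copy()
    G_hist, F_hist = [], []                  # values g(x_i) and residuals f_i
    for _ in range(maxit):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) <= tol:
            return gx
        G_hist.append(gx); F_hist.append(f)
        if len(F_hist) > m:
            G_hist.pop(0); F_hist.pop(0)
        k = len(F_hist)
        if k == 1:
            x = gx                           # plain fixed-point step to start
            continue
        # unconstrained form: minimise ||f_last + dF @ gamma|| over gamma,
        # where dF has columns f_i - f_last for i < k-1
        dF = np.column_stack([F_hist[i] - F_hist[-1] for i in range(k - 1)])
        dG = np.column_stack([G_hist[i] - G_hist[-1] for i in range(k - 1)])
        gamma, *_ = np.linalg.lstsq(dF, -F_hist[-1], rcond=None)
        x = G_hist[-1] + dG @ gamma
    return x
```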