
A note on augmented unprojected Krylov subspace methods

Posted by Kirk M. Soodhalter
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Subspace recycling iterative methods and other subspace augmentation schemes are a successful extension to Krylov subspace methods in which a Krylov subspace is augmented with a fixed subspace spanned by vectors deemed to be helpful in accelerating convergence or conveying knowledge of the solution. Recently, a survey was published, in which a framework describing the vast majority of such methods was proposed [Soodhalter et al., GAMM-Mitt. 2020]. In many of these methods, the Krylov subspace is one generated by the system matrix composed with a projector that depends on the augmentation space. However, it is not a requirement that a projected Krylov subspace be used. There are augmentation methods built on using Krylov subspaces generated by the original system matrix, and these methods also fit into the general framework. In this note, we observe that one gains implementation benefits by considering such augmentation methods with unprojected Krylov subspaces in the general framework. We demonstrate this by applying the idea to the R$^3$GMRES method proposed in [Dong et al., ETNA 2014] to obtain a simplified implementation and to connect that algorithm to early augmentation schemes based on flexible preconditioning [Saad, SIMAX 1997].
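To make the contrast concrete, the following is a minimal NumPy sketch (not the R$^3$GMRES implementation described in the paper) of an unprojected augmented step: the Krylov space is generated by the original matrix A acting on the residual, and the residual norm is then minimized over the sum of that space and a fixed augmentation space range(U). The function names and the dense least-squares solve are illustrative assumptions, not part of the cited algorithms.

import numpy as np

def arnoldi(A, r0, m):
    # Orthonormal basis of the unprojected Krylov space K_m(A, r0)
    V = np.zeros((r0.size, m + 1))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        h = np.linalg.norm(w)
        if h < 1e-14:                             # breakdown: space exhausted
            return V[:, :j + 1]
        V[:, j + 1] = w / h
    return V

def augmented_unprojected_solve(A, b, U, m=20):
    # Minimize ||b - A x|| over x in range(U) + K_m(A, b); no projector is
    # applied to A when building the Krylov basis.
    V = arnoldi(A, b, m)
    Z = np.hstack([U, V])                         # combined search space
    y, *_ = np.linalg.lstsq(A @ Z, b, rcond=None)
    return Z @ y

A projected variant would instead run the Arnoldi process with (I - P)A for a projector P determined by the augmentation space; the implementation benefit highlighted in the note is that the unprojected version avoids forming and applying such a projector inside the Krylov basis construction.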




Read also

This paper introduces new solvers for the computation of low-rank approximate solutions to large-scale linear problems, with a particular focus on the regularization of linear inverse problems. Although Krylov methods incorporating explicit projections onto low-rank subspaces are already used for well-posed systems that arise from discretizing stochastic or time-dependent PDEs, we are mainly concerned with algorithms that solve the so-called nuclear norm regularized problem, where a suitable nuclear norm penalization on the solution is imposed alongside a fit-to-data term expressed in the 2-norm: this has the effect of implicitly enforcing low-rank solutions. By adopting an iteratively reweighted norm approach, the nuclear norm regularized problem is reformulated as a sequence of quadratic problems, which can then be efficiently solved using Krylov methods, giving rise to an inner-outer iteration scheme. Our approach differs from the other solvers available in the literature in that: (a) Kronecker product properties are exploited to define the reweighted 2-norm penalization terms; (b) efficient preconditioned Krylov methods replace gradient (projection) methods; (c) the regularization parameter can be efficiently and adaptively set along the iterations. Furthermore, we reformulate within the framework of flexible Krylov methods both the new inner-outer methods for nuclear norm regularization and some of the existing Krylov methods incorporating low-rank projections. This results in an even more computationally efficient (but heuristic) strategy, that does not rely on an inner-outer iteration scheme. Numerical experiments show that our new solvers are competitive with other state-of-the-art solvers for low-rank problems, and deliver reconstructions of increased quality with respect to other classical Krylov methods.
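As a rough illustration of the inner-outer idea (the precise weighting and parameter choices in the paper may differ), one common iteratively reweighted formulation replaces the nuclear norm by a quadratic term built from the current iterate $X_k$:

\min_X \tfrac{1}{2}\,\|\mathcal{A}(X) - b\|_2^2 + \lambda \|X\|_* ,\qquad
\|X\|_* \approx \|W_k X\|_F^2 ,\quad W_k = \bigl(X_k X_k^{\mathsf T} + \varepsilon_k I\bigr)^{-1/4},

where $\mathcal{A}$ and $b$ are the forward operator and data of the fit-to-data term and $\varepsilon_k$ is a small smoothing parameter introduced here only for illustration. Each outer step then requires solving a quadratic (Tikhonov-like) problem in $X$, which is the part handled by preconditioned or flexible Krylov methods.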
This survey concerns subspace recycling methods, a popular class of iterative methods that enable effective reuse of subspace information in order to speed up convergence and find good initial guesses over a sequence of linear systems with slowly changing coefficient matrices, multiple right-hand sides, or both. The subspace information that is recycled is usually generated during the run of an iterative method (usually a Krylov subspace method) on one or more of the systems. Following introduction of definitions and notation, we examine the history of early augmentation schemes along with deflation preconditioning schemes and their influence on the development of recycling methods. We then discuss a general residual constraint framework through which many augmented Krylov and recycling methods can both be viewed. We review several augmented and recycling methods within this framework. We then discuss some known effective strategies for choosing subspaces to recycle before taking the reader through more recent developments that have generalized recycling for (sequences of) shifted linear systems, some of them with multiple right-hand sides in mind. We round out our survey with a brief review of application areas that have seen benefit from subspace recycling methods.
Subspace recycling techniques have been used quite successfully for the acceleration of iterative methods for solving large-scale linear systems. These methods often work by augmenting a solution subspace generated iteratively by a known algorithm with a fixed subspace of vectors which are "useful" for solving the problem. Often, this has the effect of inducing a projected version of the original linear system to which the known iterative method is then applied, and this projection can act as a deflation preconditioner, accelerating convergence. Most often, these methods have been applied for the solution of well-posed problems. However, they have also begun to be considered for the solution of ill-posed problems. In this paper, we consider subspace augmentation-type iterative schemes applied to linear ill-posed problems in a continuous Hilbert space setting, based on a recently developed framework describing these methods. We show that under suitable assumptions, a recycling method satisfies the formal definition of a regularization, as long as the underlying scheme is itself a regularization. We then develop an augmented subspace version of the gradient descent method and demonstrate its effectiveness, both on an academic Gaussian blur model and on problems arising from the adaptive optics community for the resolution of large sky images by ground-based extremely large telescopes.
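One plausible form of such an augmented gradient descent step (not necessarily the exact scheme analyzed in the paper) is a plain gradient (Landweber) step on the least-squares functional followed by a residual-minimizing correction over the fixed augmentation space range(U); a minimal NumPy sketch, with all names chosen for illustration:

import numpy as np

def augmented_gradient_descent(A, b, U, omega, n_iter=50):
    # Gradient (Landweber) iteration for (1/2)||A x - b||^2, augmented with a
    # correction drawn from the fixed subspace range(U) after every step.
    # omega is the step size (0 < omega < 2 / ||A||^2 for convergence).
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * (A.T @ (b - A @ x))       # standard gradient step
        s, *_ = np.linalg.lstsq(A @ U, b - A @ x, rcond=None)
        x = x + U @ s                             # minimize the residual over range(U)
    return x

In the ill-posed setting the iteration count plays the role of the regularization parameter, which is the sense in which the regularization property of the augmented scheme is studied.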
Kirk M. Soodhalter, 2014
Many Krylov subspace methods for shifted linear systems take advantage of the invariance of the Krylov subspace under a shift of the matrix. However, exploiting this fact in the non-Hermitian case introduces restrictions; e.g., initial residuals must be collinear and this collinearity must be maintained at restart. Thus we cannot simultaneously solve shifted systems with unrelated right-hand sides using this strategy, and all shifted residuals cannot be simultaneously minimized over a Krylov subspace such that collinearity is maintained. It has been shown that this renders them generally incompatible with techniques of subspace recycling [Soodhalter et al. APNUM 14]. This problem, however, can be overcome. By interpreting a family of shifted systems as one Sylvester equation, we can take advantage of the known shift invariance of the Krylov subspace generated by the Sylvester operator. Thus we can simultaneously solve all systems over one block Krylov subspace using FOM or GMRES type methods, even when they have unrelated right-hand sides. Because residual collinearity is no longer a requirement at restart, these methods are fully compatible with subspace recycling techniques. Furthermore, we realize the benefits of block sparse matrix operations which arise in the context of high-performance computing applications. In this paper, we discuss exploiting this Sylvester equation point of view which has yielded methods for shifted systems which are compatible with unrelated right-hand sides. From this, we propose a recycled GMRES method for simultaneous solution of shifted systems. Numerical experiments demonstrate the effectiveness of the methods.
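The reformulation at the heart of this approach can be stated in one line: collecting the shifted systems column-wise turns them into a single Sylvester equation (sign conventions vary between papers),

(A + \sigma_i I)\, x_i = b_i,\ \ i = 1,\dots,L
\quad\Longleftrightarrow\quad
A X + X D = B,\qquad
X = [x_1 \cdots x_L],\ \ B = [b_1 \cdots b_L],\ \ D = \operatorname{diag}(\sigma_1,\dots,\sigma_L),

and the block Krylov space generated from B by the Sylvester operator $X \mapsto AX + XD$ coincides (generically) with the ordinary block Krylov space of A and B, so a single search space serves all shifts even when the right-hand sides are unrelated.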
An approach is given for solving large linear systems that combines Krylov methods with use of two different grid levels. Eigenvectors are computed on the coarse grid and used to deflate eigenvalues on the fine grid. GMRES-type methods are first used on both the coarse and fine grids. Then another approach is given that has a restarted BiCGStab (or IDR) method on the fine grid. While BiCGStab is generally considered to be a non-restarted method, it works well in this context with deflating and restarting. Tests show this new approach can be very efficient for difficult linear equations.