Polynomial Preconditioned GMRES to Reduce Communication in Parallel Computing

Posted by Jennifer Loe
Publication date: 2019
Research field: Informatics Engineering
Language: English

Polynomial preconditioning with the GMRES minimal residual polynomial has the potential to greatly reduce orthogonalization costs, making it useful for communication reduction. We implement polynomial preconditioning in the Belos package from Trilinos and show how it can be effective in both serial and parallel implementations. We further show it is a communication-avoiding technique and is a viable alternative to CA-GMRES for large-scale parallel computing.
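
To make the idea concrete, here is a minimal Python/SciPy sketch (not the paper's Belos/Trilinos implementation): a low-degree polynomial q(A) that approximates A^(-1) is fitted by least squares over a Krylov space and then supplied to SciPy's GMRES as a preconditioner. The test matrix, the degree, and the monomial-basis construction are assumptions made for brevity; higher degrees need a more careful construction, as the related papers below discuss.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, deg = 500, 10
    # Toy symmetric test matrix with spectrum roughly in [2, 6] (an assumption;
    # the paper targets large problems run in parallel through Trilinos/Belos).
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Fit q(z) = c[0] + c[1] z + ... + c[deg-1] z^(deg-1) so that A q(A) b ~ b,
    # i.e. minimize || b - A q(A) b || over the degree-deg Krylov space.
    K = np.zeros((n, deg))
    v = b.copy()
    for j in range(deg):
        v = A @ v                          # v = A^(j+1) b
        K[:, j] = v
    c, *_ = np.linalg.lstsq(K, b, rcond=None)

    def apply_q(x):
        # Horner evaluation of q(A) x; q(A) plays the role of A^(-1).
        y = c[-1] * x
        for cj in reversed(c[:-1]):
            y = A @ y + cj * x
        return y

    M = spla.LinearOperator((n, n), matvec=apply_q, dtype=np.float64)

    plain, poly = [], []
    x_plain, _ = spla.gmres(A, b, callback=lambda rk: plain.append(rk))
    x_poly, _ = spla.gmres(A, b, M=M, callback=lambda rk: poly.append(rk))
    print("GMRES inner iterations, unpreconditioned vs polynomial preconditioned:",
          len(plain), "vs", len(poly))

Each preconditioned iteration costs deg extra matrix-vector products, but for typical sparse matrices those communicate only with neighboring processes, while the orthogonalization steps it avoids require global reductions; that trade-off is the source of the communication savings described in the abstract.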


Read also

We present a polynomial preconditioner for solving large systems of linear equations. The polynomial is derived from the minimum residual polynomial and is straightforward to compute and implement. In this paper, we study the polynomial preconditioner applied to GMRES; however, it could be used with any Krylov solver. Stability control using added roots allows for high-degree polynomials. We discuss the effectiveness and challenges of root-adding and give an additional check for stability. This polynomial preconditioning algorithm can dramatically improve convergence for difficult problems and can reduce dot products by an even greater margin.
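
Under simplifying assumptions (a small symmetric test matrix, a modest degree, and no added roots), the sketch below illustrates the construction just described: the roots of the degree-d GMRES minimum residual polynomial p are the harmonic Ritz values of a d-step Arnoldi factorization, and the preconditioner q with p(z) = 1 - z q(z) is applied one factor at a time.

    import numpy as np
    import scipy.sparse as sp

    n, d = 400, 8
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.random.default_rng(0).standard_normal(n)

    # d steps of Arnoldi on (A, b): A Q_d = Q_{d+1} H with H of size (d+1) x d.
    Q = np.zeros((n, d + 1))
    H = np.zeros((d + 1, d))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(d):
        w = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]

    # Harmonic Ritz values = roots of the degree-d GMRES residual polynomial p.
    e = np.zeros(d)
    e[-1] = 1.0
    Hd = H[:d, :d]
    f = np.linalg.solve(Hd.T, e)            # Hd^{-T} e_d
    theta = np.sort(np.linalg.eigvals(Hd + H[d, d - 1] ** 2 * np.outer(f, e)).real)
    # (real because this test matrix is symmetric; complex conjugate pairs and
    #  root ordering/adding need extra care, as the abstract above discusses)

    def apply_q(v):
        # q(A) v from the factored form p(z) = prod_k (1 - z / theta_k),
        # using q(z) = sum_k pi_{k-1}(z) / theta_k with pi_k the partial products.
        out, prod = np.zeros_like(v), v.copy()
        for t in theta:
            out += prod / t
            prod -= (A @ prod) / t
        return out

    # b - A q(A) b = p(A) b, i.e. the GMRES(d) residual for this right-hand side.
    print("relative residual of q(A) as an approximate inverse:",
          np.linalg.norm(b - A @ apply_q(b)) / np.linalg.norm(b))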
Mark Embree, Jennifer A. Loe, 2018
Polynomial preconditioning can improve the convergence of the Arnoldi method for computing eigenvalues. Such preconditioning significantly reduces the cost of orthogonalization; for difficult problems, it can also reduce the number of matrix-vector products. Parallel computations can particularly benefit from the reduction of communication-intensive operations. The GMRES algorithm provides a simple and effective way of generating the preconditioning polynomial. For some problems, high-degree polynomials are especially effective, but they can lead to stability problems that must be mitigated. A two-level double polynomial preconditioning strategy provides an effective way to generate high-degree preconditioners.
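
As a rough illustration of the principle (with a stand-in polynomial, not the GMRES-generated, two-level construction of the paper), the sketch below applies ARPACK's Arnoldi iteration (scipy.sparse.linalg.eigsh) to a Chebyshev polynomial filter of A, which amplifies the wanted end of the spectrum so that fewer orthogonalization-heavy iterations are needed; eigenvalues of A are then recovered from the converged eigenvectors via Rayleigh quotients. The diagonal test matrix, filter degree, and cutoff are assumptions.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, deg, k = 1000, 8, 5
    lam = np.linspace(1.0, 100.0, n)         # known spectrum for this toy test
    A = sp.diags(lam, format="csr")
    lo, cut = 1.0, 90.0                      # damp [lo, cut], amplify (cut, 100]

    def cheb_filter(v):
        # Apply T_deg(l(A)) v, where l maps [lo, cut] onto [-1, 1]; the filter
        # stays bounded on the unwanted interval and grows rapidly beyond it.
        scale, shift = 2.0 / (cut - lo), (cut + lo) / (cut - lo)
        lA = lambda x: scale * (A @ x) - shift * x
        t_prev, t_cur = v, lA(v)             # T_0 v, T_1 v
        for _ in range(deg - 1):
            t_prev, t_cur = t_cur, 2.0 * lA(t_cur) - t_prev
        return t_cur

    P = spla.LinearOperator((n, n), matvec=cheb_filter, dtype=np.float64)
    _, vecs = spla.eigsh(P, k=k, which="LA")  # Arnoldi/Lanczos on the filtered operator

    # The filter changes eigenvalues but not eigenvectors, so eigenvalues of A
    # are recovered as Rayleigh quotients of the computed vectors.
    rq = np.sort([x @ (A @ x) / (x @ x) for x in vecs.T])
    print("largest eigenvalues of A via the filtered operator:", rq)
    print("exact values:", lam[-k:])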
In this paper, we study how to quickly compute the <-minimal monomial interpolating basis for a multivariate polynomial interpolation problem. We address the notion of a reverse reduced basis of linearly independent polynomials and design an algorithm for it. Based on this notion, for any monomial ordering < we present a new method to read off the <-minimal monomial interpolating basis from the monomials appearing in the polynomials representing the interpolation conditions.
Kirk M. Soodhalter, 2014
We analyze the convergence behavior of block GMRES and characterize the phenomenon of stagnation, which is then related to the behavior of the block FOM method. We generalize the block FOM method to generate well-defined approximations in the case that block FOM would normally break down, and these generalized solutions are used in our analysis. This behavior is also related to the principal angles between the column space of the previous block GMRES residual and the current minimum residual constraint space. At iteration $j$, it is shown that the proper generalization of GMRES stagnation to the block setting relates to the column space of the $j$th block Arnoldi vector. Our analysis covers both normal iterations and block Arnoldi breakdown, wherein dependent basis vectors are replaced with random ones. Numerical examples are given to illustrate what we have proven, including a small application problem to demonstrate the validity of the analysis in a less pathological case.
The programming paradigm Map-Reduce and its main open-source implementation, Hadoop, have had an enormous impact on large-scale data processing. Our goal in this expository writeup is two-fold: first, we want to present some complexity measures that allow us to talk about Map-Reduce algorithms formally, and second, we want to point out why this model is actually different from other models of parallel programming, most notably the PRAM (Parallel Random Access Machine) model. We are looking for complexity measures that are detailed enough to make fine-grained distinctions between different algorithms, but which also abstract away many of the implementation details.