Using the framework of operator (or Calderón) preconditioning, uniform preconditioners are constructed for elliptic operators discretized with continuous finite (or boundary) elements. The preconditioners are formed as the composition of an opposite-order operator, discretized on the same ansatz space, and two diagonal scaling operators.
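To make the shape of this construction concrete, the following toy sketch (our illustration, not the paper's code) applies a preconditioner of the form $P = D^{-1} B D^{-1}$ to a 1D finite-element stiffness matrix $K$. Lacking a genuine opposite-order discretization in this toy, $B$ is manufactured as $M K^{-1} M$, a stand-in for the Galerkin matrix of the inverse operator; $D$ is the lumped mass matrix. The point is only that the composed preconditioner yields mesh-independent conditioning.

```python
# Toy sketch (assumptions noted above): P = D^{-1} B D^{-1} with a diagonal
# (lumped-mass) scaling D and a manufactured opposite-order operator B.
import numpy as np

for n in [16, 64, 256]:
    h = 1.0 / (n + 1)
    # P1 stiffness (an order +2 operator) and mass matrices on a uniform mesh
    K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    M = h * (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / 6
    D = np.diag(M.sum(axis=1))        # lumped mass: the diagonal scaling
    B = M @ np.linalg.inv(K) @ M      # stand-in for an opposite-order operator
    P = np.linalg.inv(D) @ B @ np.linalg.inv(D)
    print(f"n={n:4d}  cond(K)={np.linalg.cond(K):10.1f}"
          f"  cond(PK)={np.linalg.cond(P @ K):6.2f}")
```

The unpreconditioned condition number grows like $h^{-2}$, while cond(PK) stays bounded across refinements, which is the uniformity the abstract refers to.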
Unless special conditions apply, the attempt to solve ill-conditioned systems of linear equations with standard numerical methods leads to uncontrollably high numerical error. Often, such systems arise from the discretization of operator equations with a large number of discrete variables. In this paper we show that the accuracy can be improved significantly if the equation is transformed before discretization, a process we call full operator preconditioning (FOP). It bears many similarities to traditional preconditioning for iterative methods but, crucially, transformations are applied at the operator level. We show that while condition-number improvements from traditional preconditioning generally do not improve the accuracy of the solution, FOP can. A number of topics in numerical analysis can be interpreted as implicitly employing FOP; we highlight (i) Chebyshev interpolation in polynomial approximation, and (ii) the Olver–Townsend spectral method, both of which produce solutions of dramatically improved accuracy over a naive problem formulation. In addition, we propose an FOP preconditioner based on integration for the solution of fourth-order differential equations with the finite-element method, show that the resulting linear system is well-conditioned regardless of the discretization size, and demonstrate its error-reduction capabilities on several examples. This work shows that FOP can improve accuracy beyond the standard limit for both direct and iterative methods.
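As a small self-contained illustration of point (i) (ours, not the paper's), consider polynomial interpolation posed as a linear system $Vc = f$: switching from the monomial basis at equispaced nodes to the Chebyshev basis at Chebyshev points is precisely a transformation of the problem before discretization, and the conditioning of the resulting system collapses from exponential in the degree to order one.

```python
# Illustrative sketch: interpolation as a linear system V c = f, in two
# formulations of the same problem.
import numpy as np
from numpy.polynomial import chebyshev

n = 30                                        # polynomial degree
x_eq = np.linspace(-1, 1, n + 1)              # equispaced nodes
x_ch = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points

V_mono = np.vander(x_eq, n + 1)               # monomials at equispaced nodes
V_cheb = chebyshev.chebvander(x_ch, n)        # Chebyshev basis at Chebyshev nodes

print(f"cond, monomial/equispaced : {np.linalg.cond(V_mono):.2e}")
print(f"cond, Chebyshev/Chebyshev : {np.linalg.cond(V_cheb):.2e}")
```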
We present a polynomial preconditioner for solving large systems of linear equations. The polynomial is derived from the minimum residual polynomial and is straightforward to compute and implement. In this paper, we study the polynomial preconditioner applied to GMRES; however, it could be used with any Krylov solver. Stability control using added roots allows for high-degree polynomials. We discuss the effectiveness and challenges of root-adding and give an additional check for stability. This polynomial preconditioning algorithm can dramatically improve convergence for difficult problems and can reduce dot products by an even greater margin.
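The following sketch shows one simple way to realise a minimum-residual polynomial preconditioner (our simplification; the paper's construction and its added-roots stability control differ in detail): fit the degree-$(d-1)$ polynomial $q$ minimising $\|b - A q(A) b\|_2$ on a Krylov basis, then pass $M = q(A) \approx A^{-1}$ to GMRES.

```python
# Minimal polynomial-preconditioned GMRES sketch (assumptions as stated above).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, d = 2000, 10
A = sp.diags(np.linspace(1, 100, n)) + sp.random(n, n, 0.001, random_state=rng)
A = sp.csr_matrix(A)
b = rng.standard_normal(n)

# Least-squares fit of the minimum residual polynomial on a Krylov basis:
# columns of V are b, Ab, ..., A^{d-1} b, and q(A) b = V @ c.
V = np.empty((n, d))
V[:, 0] = b
for i in range(1, d):
    V[:, i] = A @ V[:, i - 1]
c, *_ = np.linalg.lstsq(A @ V, b, rcond=None)

def apply_q(v):
    """Apply q(A) v via Horner's rule; q has coefficients c[0..d-1]."""
    r = c[-1] * v
    for ci in c[-2::-1]:
        r = A @ r + ci * v
    return r

M = LinearOperator((n, n), matvec=apply_q)

plain, poly = [], []
gmres(A, b, restart=30, callback=plain.append)
gmres(A, b, M=M, restart=30, callback=poly.append)
print(f"GMRES inner iterations: {len(plain)} plain, "
      f"{len(poly)} polynomial-preconditioned")
```

Each preconditioned iteration costs $d$ extra matrix-vector products, but the orthogonalisation work, and with it the number of dot products, shrinks with the iteration count, which is the trade-off the abstract highlights.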
There is growing awareness that errors in the model equations cannot be ignored in data assimilation methods such as four-dimensional variational assimilation (4D-Var). If allowed for, more information can be extracted from observations, longer time windows are possible, and the minimisation process is easier, at least in principle. Weak-constraint 4D-Var estimates the model error and minimises a series of linear least-squares cost functions, which can be achieved using the conjugate gradient (CG) method; minimising each cost function is called an inner loop. CG needs preconditioning to improve its performance. In previous work, limited-memory preconditioners (LMPs) have been constructed using approximations of the eigenvalues and eigenvectors of the Hessian in the previous inner loop. If the Hessian changes significantly between consecutive inner loops, the LMP may be of limited usefulness. To circumvent this, we propose using randomised methods for low-rank eigenvalue decomposition and use these approximations to cheaply construct LMPs using information from the current inner loop. Three randomised methods are compared. Numerical experiments in idealised systems show that the resulting LMPs perform better than the existing LMPs. Using these methods may allow more efficient and robust implementations of incremental weak-constraint 4D-Var.
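A minimal sketch of the ingredients (ours; the 4D-Var inner-loop machinery is not modelled): a randomised range finder in the spirit of Halko, Martinsson and Tropp extracts approximate leading eigenpairs of a synthetic SPD Hessian from matrix-vector products alone, and a spectral LMP built from them maps those eigenvalues towards one.

```python
# Hedged sketch: randomised eigenpairs feeding a spectral LMP.  S is a
# synthetic SPD stand-in for a 4D-Var Hessian (eigenvalues >= 1).
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 15
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
gaps = 1.0 + 1e3 * np.exp(-np.arange(n) / 4.0)   # fast-decaying spectrum
S = (Q0 * gaps) @ Q0.T                           # SPD stand-in Hessian

# Randomised range finder + Rayleigh-Ritz: k leading eigenpairs of S from
# matrix-vector products only (cheap within the current inner loop)
Omega = rng.standard_normal((n, k + 5))          # small oversampling
Q, _ = np.linalg.qr(S @ Omega)
lam, U = np.linalg.eigh(Q.T @ S @ Q)
lam, V = lam[-k:], (Q @ U)[:, -k:]

# Spectral LMP: sends the captured eigenvalues of S (approximately) to 1
P = np.eye(n) + V @ np.diag(1.0 / lam - 1.0) @ V.T

print(f"cond(S)   = {np.linalg.cond(S):9.1f}")
print(f"cond(P S) = {np.linalg.cond(P @ S):9.1f}")
```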
The paper focuses on developing and studying efficient block preconditioners based on classical algebraic multigrid (AMG) for the large-scale sparse linear systems arising from the fully coupled, implicit, cell-centered finite volume discretization of multi-group radiation diffusion equations, whose coefficient matrices can be rearranged into a $(G+2)\times(G+2)$ block form, where $G$ is the number of energy groups. The preconditioning techniques are based on the monolithic classical AMG method, a physical-variable-based two-level coarsening algorithm, and two types of block Schur-complement preconditioners. Classical AMG is applied to solve the subsystems that arise in the last three block preconditioners. The coupling strength and diagonal dominance are further explored to improve performance. We use representative one-group and twenty-group linear systems from capsule implosion simulations to test the robustness, efficiency, and strong and weak parallel scaling of the proposed methods. Numerical results demonstrate that the block preconditioners lead to mesh- and problem-independent convergence, and scale well both algorithmically and in parallel.
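The block structure can be illustrated schematically (our generic 2-by-2 toy, not the paper's $(G+2)\times(G+2)$ radiation-diffusion blocks, and with direct sub-solves standing in for the AMG sub-solves): a block upper-triangular preconditioner whose Schur complement is approximated through diag(A), so each application costs only two sub-solves.

```python
# Schematic block Schur-complement preconditioner for [[A, B], [C, D]].
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, splu, gmres

n = 400
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc") * n
A = lap + sp.identity(n, format="csc")     # one "energy group" block
D = 4 * lap + sp.identity(n, format="csc") # coupled block
B = 0.1 * sp.identity(n, format="csc")     # inter-group coupling
C = 0.1 * sp.identity(n, format="csc")
K = sp.bmat([[A, B], [C, D]], format="csc")

S = D - C @ sp.diags(1.0 / A.diagonal()) @ B   # approximate Schur complement
A_lu, S_lu = splu(sp.csc_matrix(A)), splu(sp.csc_matrix(S))

def apply_P(r):
    """Block upper-triangular solve: [[A, B], [0, S]] z = r."""
    r1, r2 = r[:n], r[n:]
    z2 = S_lu.solve(r2)
    z1 = A_lu.solve(r1 - B @ z2)
    return np.concatenate([z1, z2])

b = np.ones(2 * n)
res = []
x, info = gmres(K, b, M=LinearOperator((2 * n, 2 * n), matvec=apply_P),
                callback=res.append)
print(f"converged (info={info}) in {len(res)} iterations")
```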
This paper analyses the following question: let $\mathbf{A}_j$, $j=1,2$, be the Galerkin matrices corresponding to finite-element discretisations of the exterior Dirichlet problem for the heterogeneous Helmholtz equation $\nabla\cdot(A_j\nabla u_j) + k^2 n_j u_j = -f$. How small must $\|A_1 - A_2\|_{L^q}$ and $\|n_1 - n_2\|_{L^q}$ be (in terms of $k$-dependence) for GMRES applied to either $(\mathbf{A}_1)^{-1}\mathbf{A}_2$ or $\mathbf{A}_2(\mathbf{A}_1)^{-1}$ to converge in a $k$-independent number of iterations for arbitrarily large $k$? (In other words, for $\mathbf{A}_1$ to be a good left or right preconditioner for $\mathbf{A}_2$?) We prove results answering this question, give theoretical evidence for their sharpness, and give numerical experiments supporting the estimates. Our motivation for tackling this question comes from calculating quantities of interest for the Helmholtz equation with random coefficients $A$ and $n$. Such a calculation may require the solution of many deterministic Helmholtz problems, each with different $A$ and $n$, and the answer to the question above dictates to what extent a previously calculated inverse of one of the Galerkin matrices can be used as a preconditioner for other Galerkin matrices.
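The motivating use case admits a compact illustration (our toy, with a symmetric diffusion-like matrix rather than a Helmholtz discretisation): factorise $\mathbf{A}_1$ once, then reuse the factorisation as a preconditioner for a nearby $\mathbf{A}_2$.

```python
# Sketch: recycle one factorisation across nearby coefficient fields.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, splu, gmres

rng = np.random.default_rng(2)
n = 500
lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc") * n
A1 = lap + sp.diags(1.0 + rng.random(n))            # first coefficient field
A2 = A1 + sp.diags(0.01 * rng.standard_normal(n))   # small perturbation of it
b = rng.standard_normal(n)

A1_lu = splu(sp.csc_matrix(A1))                 # pay for one factorisation
M = LinearOperator((n, n), matvec=A1_lu.solve)  # reuse it: M ~ A1^{-1}

# The unpreconditioned baseline may hit the iteration cap; info > 0 reports that
for name, prec in [("no preconditioner", None), ("reuse A1^-1     ", M)]:
    res = []
    _, info = gmres(A2, b, M=prec, callback=res.append)
    print(f"{name}: {len(res)} inner iterations (info={info})")
```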