Unless special conditions apply, attempting to solve ill-conditioned systems of linear equations with standard numerical methods leads to uncontrollably high numerical error. Often, such systems arise from the discretization of operator equations with a large number of discrete variables. In this paper we show that the accuracy can be improved significantly if the equation is transformed before discretization, a process we call full operator preconditioning (FOP). It bears many similarities to traditional preconditioning for iterative methods but, crucially, the transformations are applied at the operator level. We show that while condition-number improvements from traditional preconditioning generally do not improve the accuracy of the solution, FOP can. A number of topics in numerical analysis can be interpreted as implicitly employing FOP; we highlight (i) Chebyshev interpolation in polynomial approximation and (ii) the Olver–Townsend spectral method, both of which produce solutions of dramatically improved accuracy over a naive problem formulation. In addition, we propose an FOP preconditioner based on integration for the solution of fourth-order differential equations with the finite-element method, show that the resulting linear system is well-conditioned regardless of the discretization size, and demonstrate its error-reduction capabilities on several examples. This work shows that FOP can improve accuracy beyond the standard limit for both direct and iterative methods.
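To make the conditioning point concrete, here is a small illustration of our own (not code from the paper): fitting a polynomial through equispaced points in the monomial basis yields an exponentially ill-conditioned Vandermonde system, while the Chebyshev reformulation, interpretable as implicit FOP, yields a matrix whose condition number stays near $\sqrt{2}$.

```python
# Illustration (ours, not from the paper): conditioning of two formulations
# of the same polynomial interpolation problem.
import numpy as np

for n in [10, 20, 40]:
    # Equispaced points, monomial basis: Vandermonde matrix V[i, j] = x_i**j.
    x_eq = np.linspace(-1, 1, n + 1)
    V = np.vander(x_eq, increasing=True)

    # Chebyshev points (first kind), Chebyshev basis: C[i, j] = T_j(x_i).
    x_ch = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    C = np.polynomial.chebyshev.chebvander(x_ch, n)

    print(f"n={n:3d}  cond(V)={np.linalg.cond(V):9.2e}  "
          f"cond(C)={np.linalg.cond(C):5.2f}")
```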
Using the framework of operator (or Calderón) preconditioning, uniform preconditioners are constructed for elliptic operators discretized with continuous finite (or boundary) elements. The preconditioners are built as the composition of an opposite-order operator, discretized on the same ansatz space, and two diagonal scaling operators.
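As a toy 1-D illustration of the opposite-order idea (our own sketch under simplifying assumptions, not the paper's construction): precondition the P1 stiffness matrix of $-u''$ (order $+2$) with a Galerkin matrix, on the same hat-function space, of the order $-2$ solution operator with Green's kernel $G(x,y)=\min(x,y)-xy$, sandwiched between inverses of the diagonal lumped mass matrix. The condition number of the preconditioned matrix should stay roughly constant under mesh refinement.

```python
# Toy 1-D operator preconditioning sketch (assumptions: P1 elements on [0,1],
# homogeneous Dirichlet data; an illustration only, not the paper's method).
import numpy as np

def hat_values(t, nodes, h):
    """P1 hat functions evaluated at the points t; returns shape (len(t), n)."""
    return np.maximum(0.0, 1.0 - np.abs(t[:, None] - nodes[None, :]) / h)

for n in [8, 16, 32, 64]:
    h = 1.0 / (n + 1)
    nodes = h * np.arange(1, n + 1)

    # Stiffness matrix of -u'' (an operator of order +2).
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h

    # Galerkin matrix of the order -2 operator with kernel G(x,y) = min(x,y) - x*y,
    # assembled with a composite midpoint rule on a fine grid.
    m = 20 * (n + 1)
    t = (np.arange(m) + 0.5) / m                 # quadrature points
    w = 1.0 / m                                  # uniform quadrature weight
    G = np.minimum.outer(t, t) - np.outer(t, t)
    Phi = hat_values(t, nodes, h)
    B = (w * Phi).T @ G @ (w * Phi)

    # Lumped mass matrix D = h*I; preconditioner P = D^{-1} B D^{-1}.
    P = B / h**2
    print(f"n={n:3d}  cond(A)={np.linalg.cond(A):9.2e}  "
          f"cond(PA)={np.linalg.cond(P @ A):7.2f}")
```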
Based on the geometric \emph{Triangle Algorithm} for testing membership of a point in a convex set, we present a novel iterative algorithm for testing the solvability of a real linear system $Ax=b$, where $A$ is an $m \times n$ matrix of arbitrary rank. Let $C_{A,r}$ be the ellipsoid determined as the image of the Euclidean ball of radius $r$ under the linear map $A$. The basic procedure in our algorithm computes a point in $C_{A,r}$ that is either within $\varepsilon$ distance of $b$, or acts as a certificate proving $b \notin C_{A,r}$. Each iteration takes $O(mn)$ operations, and when $b$ is well situated in $C_{A,r}$, the number of iterations is proportional to $\log(1/\varepsilon)$. If $Ax=b$ is solvable, the algorithm computes an approximate solution or the minimum-norm solution. Otherwise, it computes a certificate of unsolvability, or the minimum-norm least-squares solution. It is also applicable to complex input. In a computational comparison with the state-of-the-art algorithm BiCGSTAB (\emph{biconjugate gradient stabilized}), the Triangle Algorithm is very competitive. In fact, when the iterates of BiCGSTAB do not converge, our algorithm can verify that $Ax=b$ is unsolvable and approximate the minimum-norm least-squares solution. The Triangle Algorithm is robust, simple to implement, and requires no preconditioner, making it attractive to practitioners as well as researchers and educators.
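A minimal sketch of a Triangle-Algorithm-style iteration for this membership test (our simplification; names and tolerances are ours, and the paper's algorithm contains further refinements):

```python
# Sketch: test whether b lies in C_{A,r} = {Ax : ||x|| <= r}.
import numpy as np

def triangle_test(A, b, r, eps=1e-6, max_iter=10000):
    p = np.zeros_like(b)                         # current iterate, p in C_{A,r}
    for _ in range(max_iter):
        g = b - p
        if np.linalg.norm(g) <= eps:
            return "b is (approximately) in C_{A,r}", p
        u = A.T @ g                              # pivot direction
        nu = np.linalg.norm(u)
        if nu == 0:                              # b - p orthogonal to range(A)
            return "b is not in C_{A,r} (witness p)", p
        v = r * (A @ u) / nu                     # maximizer of <g, v> over C_{A,r}
        # v is a pivot iff ||v - b|| <= ||v - p||, i.e. <g, v> >= (|b|^2 - |p|^2)/2;
        # if even the maximizer fails this test, p certifies b not in C_{A,r}.
        if g @ v < (b @ b - p @ p) / 2:
            return "b is not in C_{A,r} (witness p)", p
        d = v - p                                # step to nearest point to b on [p, v]
        p = p + np.clip((g @ d) / (d @ d), 0.0, 1.0) * d
    return "max iterations reached", p

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
print(triangle_test(A, b, r=2.0 * np.linalg.norm(x_true))[0])
print(triangle_test(A, b, r=0.1 * np.linalg.norm(x_true))[0])
```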
We present two minimum residual methods for solving sequences of shifted linear systems: the right-preconditioned shifted GMRES and shifted recycled GMRES algorithms, which use a seed projection strategy often employed to solve multiple related problems. These methods are compatible with general preconditioning of all systems and, when restricted to right preconditioning, require no extra applications of the operator or preconditioner. The seed projection methods perform a minimum residual iteration for the base system while improving the approximations for the shifted systems at little additional cost. The iteration continues until the base system approximation is of satisfactory quality; the method is then called recursively for the remaining unconverged systems. We present both methods within a general framework that allows these techniques to be extended to the setting of flexible preconditioning and inexact Krylov methods, and we provide some analysis of such methods along with numerical experiments demonstrating the effectiveness of the algorithms we have derived.
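The structural fact such methods exploit is that Krylov subspaces are shift-invariant: a single Arnoldi factorization $AV_m = V_{m+1}\bar{H}_m$ serves every system $(A+\sigma I)x=b$, since $(A+\sigma I)V_m = V_{m+1}(\bar{H}_m + \sigma \bar{I}_m)$. A minimal unpreconditioned sketch of this reuse (ours, not the paper's seed-projection/recycling algorithm):

```python
# Sketch: one Arnoldi factorization serves all shifts (A + sigma*I)x = b.
import numpy as np

def shifted_gmres(A, b, shifts, m=60):
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):                           # Arnoldi with modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    I_bar = np.vstack([np.eye(m), np.zeros(m)])  # identity padded with a zero row
    xs = []
    for sigma in shifts:                         # reuse V and H for every shift
        y, *_ = np.linalg.lstsq(H + sigma * I_bar, e1, rcond=None)
        xs.append(V[:, :m] @ y)
    return xs

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n) + 3 * np.eye(n)
b = rng.standard_normal(n)
for sigma, x in zip([0.0, 0.5, 1.0], shifted_gmres(A, b, [0.0, 0.5, 1.0])):
    res = np.linalg.norm(b - (A + sigma * np.eye(n)) @ x)
    print(f"sigma={sigma:3.1f}  residual={res:.2e}")
```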
The well-known greedy randomized Kaczmarz (GRK) method uses a greedy selection rule based on maximum distance to determine a subset of the indices of working rows. In this paper, using a greedy selection rule based on maximum residual, we propose the greedy randomized Motzkin–Kaczmarz (GRMK) method for linear systems. A block version of the new method is also presented. We analyze the convergence of the two methods and provide the corresponding convergence factors. Extensive numerical experiments show that the GRMK method has almost the same performance as the GRK method for dense matrices and performs better in computing time for some sparse matrices.
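For orientation, a stripped-down sketch contrasting the two selection rules (our simplification: the actual GRK/GRMK rules build a threshold index set and sample randomly within it rather than always taking the argmax):

```python
# Simplified greedy Kaczmarz with the two selection rules (illustration only).
import numpy as np

def greedy_kaczmarz(A, b, rule="residual", iters=2000):
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    x = np.zeros(n)
    for _ in range(iters):
        r = b - A @ x
        if rule == "residual":                   # maximum residual (Motzkin-type)
            i = np.argmax(np.abs(r))
        else:                                    # maximum distance (GRK-type)
            i = np.argmax(r**2 / row_norms_sq)
        x += (r[i] / row_norms_sq[i]) * A[i]     # project onto hyperplane i
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((500, 50))
b = A @ rng.standard_normal(50)
for rule in ["residual", "distance"]:
    x = greedy_kaczmarz(A, b, rule)
    print(f"{rule:8s}  ||Ax - b|| = {np.linalg.norm(A @ x - b):.2e}")
```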
We propose a deterministic Kaczmarz method for solving linear systems $Ax=b$ with $A$ nonsingular. Instead of using orthogonal projections, we use reflections in the original Kaczmarz iterative method. This generates a sequence of points on a sphere $S$ centered at the solution $x_*=A^{-1}b$. We show that these points are well distributed on $S$, so taking the average of several of them leads to an effective approximation to the solution, and we show how to choose these points efficiently. Numerical tests show that in practice this deterministic scheme converges much faster than expected and can beat the (block) randomized Kaczmarz methods.
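The reflection step differs from the classical Kaczmarz projection only by a factor of two, and since the solution $x_*$ lies on every hyperplane $a_i^Tx=b_i$, each reflection preserves $\|x_k-x_*\|$: the iterates stay on a sphere centered at $x_*$, so their average drifts toward the center. A minimal sketch (ours, using plain cyclic sweeps; it may converge more slowly than the paper's point-selection scheme):

```python
# Reflection-based Kaczmarz with iterate averaging (cyclic-sweep sketch).
import numpy as np

def reflection_kaczmarz(A, b, sweeps=1000):
    m, n = A.shape
    row_norms_sq = np.sum(A * A, axis=1)
    x = np.zeros(n)
    total = np.zeros(n); count = 0
    for _ in range(sweeps):
        for i in range(m):
            # Reflect across the hyperplane a_i . x = b_i (2x the projection step);
            # this preserves the distance to the solution, which lies on the plane.
            x += 2.0 * (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
            total += x; count += 1
    return total / count                         # average of the points on the sphere

rng = np.random.default_rng(3)
n = 80
A = rng.standard_normal((n, n)) + 8 * np.eye(n)  # nonsingular, as assumed
x_true = rng.standard_normal(n)
x = reflection_kaczmarz(A, A @ x_true)
print(f"relative error = {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.2e}")
```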