Properties of Superiorized Preconditioned Conjugate Gradient (SupPCG) algorithms in image reconstruction from projections are examined. Least squares (LS) is usually chosen for measuring data-inconsistency in these inverse problems. Preconditioned Conjugate Gradient algorithms are fast methods for finding an LS solution. However, for ill-posed problems, such as image reconstruction, an LS solution may not provide good image quality. This can be taken care of by superiorization. A superiorized algorithm leads to images with the value of a secondary criterion (a merit function such as the total variation) improved as compared to images with similar data-inconsistency obtained by the algorithm without superiorization. Numerical experimentation shows that SupPCG can lead to high-quality reconstructions within a remarkably short time. A theoretical analysis is also provided.
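To make the superiorization idea above concrete, the following Python sketch interleaves total-variation-reducing perturbations with short preconditioned CG runs on the least-squares normal equations. The Jacobi preconditioner, the smoothed TV gradient, and the restart after each perturbation are simplifying assumptions for illustration only; this is not the authors' exact SupPCG algorithm.

```python
import numpy as np

def smoothed_tv_grad(img, eps=1e-6):
    """Gradient of a smoothed total variation of a 2-D image (forward differences)."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:-1, :] = img[1:, :] - img[:-1, :]
    gy[:, :-1] = img[:, 1:] - img[:, :-1]
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    g = -px                      # adjoint of the forward difference applied to (px, py)
    g[1:, :] += px[:-1, :]
    g -= py
    g[:, 1:] += py[:, :-1]
    return g

def superiorized_pcg(A, b, shape, n_outer=20, n_cg=5, beta0=1.0, kernel=0.5):
    """Illustrative superiorized PCG loop (not the authors' exact SupPCG)."""
    AtA = A.T @ A
    Atb = A.T @ b
    M_inv = 1.0 / np.maximum(np.diag(AtA), 1e-12)   # Jacobi preconditioner (assumption)
    x = np.zeros(A.shape[1])
    ell = 0
    for _ in range(n_outer):
        # superiorization: bounded, summable perturbation in a TV-nonascending direction
        g = smoothed_tv_grad(x.reshape(shape)).ravel()
        gn = np.linalg.norm(g)
        if gn > 0:
            x = x - (beta0 * kernel**ell) * g / gn
        ell += 1
        # a few PCG iterations on the normal equations, restarted after the perturbation
        r = Atb - AtA @ x
        z = M_inv * r
        p = z.copy()
        for _ in range(n_cg):
            if np.linalg.norm(r) < 1e-12:
                break
            Ap = AtA @ p
            alpha = (r @ z) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            z_new = M_inv * r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
    return x
```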
We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to the optimal solution of the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.
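A minimal sketch of what a superiorized incremental (ordered-subsets, EM-type scaled gradient) iteration can look like is given below; the subset splitting, the nonnegativity projection, and the generic secondary criterion phi_grad are assumptions for illustration, not the specific scheme proposed in the paper.

```python
import numpy as np

def superiorized_os_em(A_blocks, y_blocks, phi_grad, n_iter=10, beta0=1.0, kernel=0.5):
    """Sketch of a superiorized incremental (ordered-subsets EM-type) iteration.

    A_blocks / y_blocks: lists of subset system matrices and data;
    phi_grad: gradient (or a nonascending direction) of the secondary criterion phi.
    """
    n = A_blocks[0].shape[1]
    x = np.ones(n)
    ell = 0
    for _ in range(n_iter):
        for A_s, y_s in zip(A_blocks, y_blocks):
            # superiorization: bounded perturbation along a nonascending direction of phi
            g = phi_grad(x)
            gn = np.linalg.norm(g)
            if gn > 0:
                x = np.maximum(x - (beta0 * kernel**ell) * g / gn, 0.0)
            ell += 1
            # scaled-gradient (EM-type) update on the current data subset
            Ax = A_s @ x
            ratio = y_s / np.maximum(Ax, 1e-12)
            x = x * (A_s.T @ ratio) / np.maximum(A_s.T @ np.ones_like(y_s), 1e-12)
    return x
```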
Fast computation of demagnetization curves is essential for the computational design of soft magnetic sensors or permanent magnet materials. We show that a sparse preconditioner for a nonlinear conjugate gradient energy minimizer can lead to speed-ups by factors of 3 and 7 for computing hysteresis in soft magnetic and hard magnetic materials, respectively. As a preconditioner, an approximation of the Hessian of the Lagrangian is used, which takes only local field terms into account. Preconditioning requires a few additional sparse matrix-vector multiplications per iteration of the nonlinear conjugate gradient method, which is used for minimizing the energy for a given external field. The time to solution for computing the demagnetization curve scales almost linearly with problem size.
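The generic structure of such a preconditioned nonlinear conjugate gradient minimizer is sketched below; the energy, grad, and apply_precond_inv callbacks stand in for the micromagnetic energy, its gradient, and the sparse local-field approximation of the Hessian of the Lagrangian, none of which are reproduced here.

```python
import numpy as np

def preconditioned_ncg(energy, grad, apply_precond_inv, x0, max_iter=200, tol=1e-6):
    """Sketch of a preconditioned nonlinear conjugate gradient minimizer
    (Polak-Ribiere update with restart and a backtracking line search)."""
    x = x0.copy()
    g = grad(x)
    z = apply_precond_inv(g)          # apply the inverse of the sparse Hessian approximation
    d = -z
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search along d
        t, f0, slope = 1.0, energy(x), g @ d
        while energy(x + t * d) > f0 + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        z_new = apply_precond_inv(g_new)
        # preconditioned Polak-Ribiere coefficient, restarted when it turns negative
        beta = max(0.0, (g_new - g) @ z_new / (g @ z))
        d = -z_new + beta * d
        x, g, z = x_new, g_new, z_new
    return x
```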
The Fast Proximal Gradient Method (FPGM) and the Monotone FPGM (MFPGM) for minimization of nonsmooth convex functions are introduced and applied to tomographic image reconstruction. Convergence properties of the sequence of objective function values are derived, including an $O\left(1/k^{2}\right)$ non-asymptotic bound. The presented theory broadens current knowledge and explains the convergence behavior of certain methods that are known to exhibit good practical performance. Numerical experimentation involving computerized tomography image reconstruction shows the methods to be competitive in practical scenarios. Experimental comparisons with Algebraic Reconstruction Techniques are performed, uncovering behaviors of accelerated Proximal Gradient algorithms that apparently have not yet been noticed when these methods are applied to tomographic image reconstruction.
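For orientation, the following sketch shows a monotone accelerated proximal gradient loop in the MFISTA style, one standard way to obtain an $O\left(1/k^{2}\right)$ rate while enforcing monotone objective values; it illustrates the general idea and is not the paper's exact MFPGM.

```python
import numpy as np

def monotone_fpgm(grad_f, prox_g, F, x0, L, n_iter=100):
    """Monotone accelerated proximal gradient sketch (MFISTA-style).

    grad_f : gradient of the smooth term f (Lipschitz constant L),
    prox_g : proximal operator of the nonsmooth term g, called as prox_g(v, step),
    F      : full objective f + g, used to enforce monotonicity.
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        z = prox_g(y - grad_f(y) / L, 1.0 / L)          # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        x_next = z if F(z) <= F(x) else x               # monotone choice of the new iterate
        # extrapolation; the z - x_next term preserves the accelerated rate
        y = x_next + (t / t_next) * (z - x_next) + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
```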
The Preconditioned Conjugate Gradient method is often employed for the solution of linear systems of equations arising in numerical simulations of physical phenomena. While being widely used, the solver is also known for its lack of accuracy while computing the residual. In this article, we propose two algorithmic solutions that originate from the ExBLAS project to enhance the accuracy of the solver as well as to ensure its reproducibility in a hybrid MPI + OpenMP tasks programming environment. One is based on ExBLAS and preserves every bit of information until the final rounding, while the other relies upon floating-point expansions and, hence, expands the intermediate precision. Instead of converting the entire solver into its ExBLAS-related implementation, we identify those parts that violate reproducibility/non-associativity, secure them, and combine this with the sequential executions. These algorithmic strategies are reinforced with programmability suggestions to assure deterministic executions. Finally, we verify these approaches on two modern HPC systems.
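As an illustration of the floating-point-expansion idea, the sketch below implements a compensated dot product built from error-free transformations, the kind of reduction that would replace the non-associative dot products inside PCG; it is a generic textbook construction (Ogita-Rump-Oishi style), not the ExBLAS API.

```python
def two_sum(a, b):
    """Error-free transformation: a + b = s + e exactly (Knuth's TwoSum)."""
    s = a + b
    bp = s - a
    e = (a - (s - bp)) + (b - bp)
    return s, e

def split(a):
    """Dekker/Veltkamp splitting of a double into two non-overlapping halves."""
    c = 134217729.0 * a            # 2**27 + 1
    high = c - (c - a)
    return high, a - high

def two_prod(a, b):
    """Error-free transformation: a * b = p + e exactly (Dekker's product)."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def compensated_dot(x, y):
    """Dot product with roughly twice the working precision; accumulating the
    rounding errors separately makes the result far less sensitive to the
    (non-associative) summation order."""
    s, c = 0.0, 0.0
    for xi, yi in zip(x, y):
        p, ep = two_prod(xi, yi)
        s, es = two_sum(s, p)
        c += es + ep
    return s + c
```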
In this paper, we extend to the block case the a posteriori bound showing superlinear convergence of Conjugate Gradients developed in [J. Comput. Appl. Math., 48 (1993), pp. 327-341]; that is, we obtain similar bounds, but now for block Conjugate Gradients. We also present a series of computational experiments illustrating the validity of the bound developed here, as well as the bound from [SIAM Review, 47 (2005), pp. 247-272] using angles between subspaces. Using these bounds, we make some observations on the onset of superlinearity, and how this onset depends on the eigenvalue distribution and the block size.
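For reference, a plain block conjugate gradient iteration (in O'Leary's classical formulation) is sketched below to make the block quantities concrete; it omits the rank-deficiency handling a robust implementation needs and does not compute the bounds discussed in the paper.

```python
import numpy as np

def block_cg(A, B, X0=None, max_iter=100, tol=1e-8):
    """Block conjugate gradient for A X = B with s right-hand sides.
    A must be symmetric positive definite; B has shape (n, s)."""
    n, s = B.shape
    X = np.zeros((n, s)) if X0 is None else X0.copy()
    R = B - A @ X
    P = R.copy()
    RtR = R.T @ R
    for _ in range(max_iter):
        if np.sqrt(np.trace(RtR)) < tol:      # Frobenius norm of the block residual
            break
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, RtR)    # s x s step matrix
        X += P @ alpha
        R -= AP @ alpha
        RtR_new = R.T @ R
        beta = np.linalg.solve(RtR, RtR_new)      # s x s direction-update matrix
        P = R + P @ beta
        RtR = RtR_new
    return X
```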