Projection-based iterative methods for solving large over-determined linear systems are well known for their simplicity and computational efficiency. It is also known that the right choice of a sketching procedure (i.e., a preprocessing step that reduces the dimension of each iteration) can improve the performance of iterative methods in several ways, such as speeding up convergence by mitigating correlations within the system, or reducing the variance incurred by the presence of noise. In the current work, we show that sketching can also yield better theoretical guarantees for projection-based methods. Specifically, we exploit properties of Gaussian sketching to prove an accelerated convergence rate for the sketched relaxation (also known as Motzkin's) method. The new estimates hold for linear systems of arbitrary structure. We also provide numerical experiments in support of our theoretical analysis of the sketched relaxation method.
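As a rough illustration of the step described in this abstract, the following minimal sketch applies a fresh Gaussian sketch at every iteration and then takes a Motzkin (maximum-residual) projection on the sketched system; the function and parameter names (e.g. `sketched_motzkin`, `sketch_size`) are ours, and the details may differ from the paper's exact formulation.

```python
import numpy as np

def sketched_motzkin(A, b, sketch_size=10, iters=500, seed=0):
    """Illustrative sketch of a Gaussian-sketched Motzkin (relaxation) method.

    Each iteration draws a fresh Gaussian sketch S compressing the m-row
    system to sketch_size rows; the Motzkin rule then projects onto the
    sketched row with the largest absolute residual.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
        SA, Sb = S @ A, S @ b          # sketched system
        r = SA @ x - Sb                # sketched residual
        i = np.argmax(np.abs(r))       # Motzkin (max-residual) rule
        a = SA[i]
        x -= (r[i] / (a @ a)) * a      # orthogonal projection onto that row
    return x
```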
Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $\mathbf{A}\mathbf{x}=\mathbf{b}$ that are inconsistent due to corruptions in the measurement vector $\mathbf{b}$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
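A minimal sketch of the quantile idea, assuming a uniformly sampled row is projected onto only when its residual lies below the q-quantile of the absolute residual vector; names such as `quantile_kaczmarz` and `q` are placeholders, and the paper's variants may differ.

```python
import numpy as np

def quantile_kaczmarz(A, b, q=0.7, iters=2000, seed=0):
    """Illustrative quantile-based Kaczmarz sketch.

    A uniformly sampled row is used for a Kaczmarz projection only when its
    residual does not exceed the q-quantile of |Ax - b|, so rows whose
    entries of b look corrupted (unusually large residuals) are skipped.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        res = np.abs(A @ x - b)
        thresh = np.quantile(res, q)     # quantile of absolute residuals
        i = rng.integers(m)              # uniformly sampled row
        if res[i] <= thresh:             # skip suspected corruptions
            a = A[i]
            x += ((b[i] - a @ x) / (a @ a)) * a
    return x
```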
Using a quite different way of determining the working rows, we propose a novel greedy Kaczmarz method for solving consistent linear systems. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the greedy randomized Kaczmarz method and the relaxed greedy randomized Kaczmarz method introduced recently by Bai and Wu [Z.-Z. Bai and W.-T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), pp. A592--A606; Z.-Z. Bai and W.-T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), pp. 21--26] in terms of computing time.
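The abstract does not spell out the new row-selection rule, so the sketch below only shows the cited Bai-Wu greedy randomized Kaczmarz baseline as it is commonly described, for orientation; it is not the proposed method, and the helper name `greedy_randomized_kaczmarz` is ours.

```python
import numpy as np

def greedy_randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Sketch of the greedy randomized Kaczmarz (GRK) baseline of Bai and Wu,
    as commonly described; shown for orientation, not the proposed method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.einsum('ij,ij->i', A, A)   # squared row norms, precomputed
    frob2 = row_norms2.sum()                   # squared Frobenius norm of A
    for _ in range(iters):
        r = b - A @ x
        rn2 = r @ r
        if rn2 == 0:                           # system already solved exactly
            break
        w = r**2 / row_norms2                  # weighted squared residuals
        eps = 0.5 * (w.max() / rn2 + 1.0 / frob2)
        idx = np.where(w >= eps * rn2)[0]      # candidate working rows
        p = r[idx]**2
        i = rng.choice(idx, p=p / p.sum())     # sample within the candidates
        x += (r[i] / row_norms2[i]) * A[i]
    return x
```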
In this paper, combining the count sketch with the maximal weighted residual Kaczmarz method, we propose a fast randomized algorithm for large overdetermined linear systems. Convergence analysis of the new algorithm is provided. Numerical experiments show that, for the same accuracy, our method requires less computing time than the state-of-the-art algorithm.
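A hedged sketch of the combination described above: the system is first compressed with a count sketch (one random ±1 entry per original row), and the maximal weighted residual Kaczmarz rule is then run on the sketched system; the paper's exact scheme and parameter choices (e.g. `sketch_size`) may differ.

```python
import numpy as np

def count_sketch(A, b, sketch_size, rng):
    """Apply a count sketch (one random +/-1 per original row) to (A, b)."""
    m, n = A.shape
    buckets = rng.integers(sketch_size, size=m)    # hash each equation to a bucket
    signs = rng.choice([-1.0, 1.0], size=m)
    SA = np.zeros((sketch_size, n))
    Sb = np.zeros(sketch_size)
    np.add.at(SA, buckets, signs[:, None] * A)
    np.add.at(Sb, buckets, signs * b)
    return SA, Sb

def sketched_mwrk(A, b, sketch_size=200, iters=1000, seed=0):
    """Count sketch followed by maximal weighted residual Kaczmarz iterations
    on the sketched system (an illustrative combination, not necessarily the
    paper's exact algorithm)."""
    rng = np.random.default_rng(seed)
    SA, Sb = count_sketch(A, b, sketch_size, rng)
    keep = np.einsum('ij,ij->i', SA, SA) > 0       # drop any empty buckets
    SA, Sb = SA[keep], Sb[keep]
    row_norms2 = np.einsum('ij,ij->i', SA, SA)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = Sb - SA @ x
        i = np.argmax(r**2 / row_norms2)           # maximal weighted residual row
        x += (r[i] / row_norms2[i]) * SA[i]
    return x
```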
In this paper, we present a generic methodology for the efficient numerical approximation of the density function of McKean-Vlasov SDEs. The weak error analysis for the projected process motivates us to combine the iterative Multilevel Monte Carlo method for McKean-Vlasov SDEs \cite{szpruch2019} with non-interacting kernels and projection estimation of particle densities \cite{belomestny2018projected}. By exploiting the smoothness of the coefficients of the McKean-Vlasov SDEs, in the best-case scenario (i.e., $C^{\infty}$ coefficients), we obtain a complexity of order $O(\epsilon^{-2}|\log\epsilon|^4)$ for the approximation of expectations and $O(\epsilon^{-2}|\log\epsilon|^5)$ for density estimation.
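For orientation only, the sketch below shows a plain interacting-particle Euler scheme for a toy one-dimensional McKean-Vlasov SDE, replacing the law by the empirical mean of the particle cloud; it does not implement the paper's iterative MLMC method or the projected density estimator, and the drift is an assumed toy example.

```python
import numpy as np

def particle_mckean_vlasov(n_particles=1000, n_steps=200, T=1.0, seed=0):
    """Euler scheme for the toy McKean-Vlasov SDE
    dX_t = -(X_t - E[X_t]) dt + dW_t, with the law replaced by the empirical
    mean of the particle cloud.  Illustrates only the particle approximation,
    not the MLMC scheme or the projected density estimator of the paper."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = rng.standard_normal(n_particles)           # initial cloud ~ N(0, 1)
    for _ in range(n_steps):
        drift = -(X - X.mean())                    # mean-field interaction term
        X = X + drift * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    return X                                       # samples approximating Law(X_T)
```

A kernel or projection density estimator could then be fitted to the returned samples to approximate the density of $X_T$.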
We present a class of reduced basis (RB) methods for the iterative solution of parametrized symmetric positive-definite (SPD) linear systems. The essential ingredients are a Galerkin projection of the underlying parametrized system onto a reduced basis space to obtain a reduced system; an adaptive greedy algorithm to efficiently determine sampling parameters and associated basis vectors; an offline-online computational procedure and a multi-fidelity approach to decouple the construction and application phases of the reduced basis method; and solution procedures to employ the reduced basis approximation as a {\em stand-alone iterative solver} or as a {\em preconditioner} in the conjugate gradient method. We present numerical examples to demonstrate the performance of the proposed methods in comparison with multigrid methods. Numerical results show that, when applied to solve linear systems resulting from discretizing the Poisson equation, the speed of convergence of our methods matches or surpasses that of the multigrid-preconditioned conjugate gradient method, while their computational cost per iteration is significantly smaller, providing a feasible alternative when the multigrid approach is out of reach due to timing or memory constraints for large systems. Moreover, numerical results verify that this new class of reduced basis methods, when applied as a stand-alone solver or as a preconditioner, is capable of achieving accuracy at the level of the {\em truth approximation}, which is far beyond the RB level.
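A minimal sketch of the Galerkin projection step underlying the RB approximation, assuming a reduced basis matrix V with orthonormal columns has already been built (e.g. by the greedy procedure mentioned above); the offline-online decomposition, the multi-fidelity construction, and the preconditioned CG usage are not shown.

```python
import numpy as np

def reduced_basis_solve(A, b, V):
    """Galerkin projection of an SPD system onto a reduced basis space.

    V is an n-by-k matrix whose orthonormal columns span the reduced space
    (for instance, built from snapshot solutions by a greedy procedure).
    The k-by-k reduced system is solved and the result is lifted back."""
    A_r = V.T @ A @ V              # reduced stiffness matrix (k x k)
    b_r = V.T @ b                  # reduced right-hand side
    y = np.linalg.solve(A_r, b_r)  # small dense solve
    return V @ y                   # RB approximation in the full space
```

For affinely parametrized operators, the reduced matrix for a new parameter can typically be assembled online from precomputed parameter-independent blocks, so the online cost scales with the reduced dimension k rather than the full size n.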