
Greed Works: An Improved Analysis of Sampling Kaczmarz-Motzkin

 Added by Anna Ma
Publication date: 2019
Research language: English





Stochastic iterative algorithms have gained recent interest in machine learning and signal processing for solving large-scale systems of equations, $Ax=b$. One such example is the Randomized Kaczmarz (RK) algorithm, which acts on only a single row of the matrix $A$ at a time. While RK selects a row of $A$ at random, Motzkin's Method (MM) employs a greedy row selection. Connections between the two algorithms led to the Sampling Kaczmarz-Motzkin (SKM) algorithm, which samples a random subset of $\beta$ rows of $A$ and then greedily selects the best row of the subset. Despite their differing computational costs, all three algorithms have been proven to share the same theoretical upper bound on the convergence rate. In this work, an improved analysis of the range of random (RK) to greedy (MM) methods is presented. This analysis improves upon previously known convergence bounds for SKM, capturing the benefit of partially greedy selection schemes. This work also generalizes previously known results, removing the assumptions that $\beta$ must be fixed at every iteration and that $A$ must have normalized rows.
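As a rough sketch of the update the abstract describes, the following Python snippet implements one SKM step: sample $\beta$ rows of $A$ uniformly at random, pick the most violated row within the sample, and project the current iterate onto that row's hyperplane. Setting $\beta = 1$ recovers RK and $\beta = m$ recovers MM. The function name, the uniform sampling without replacement, and the small test problem are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def skm_step(A, b, x, beta, rng):
    """One Sampling Kaczmarz-Motzkin step: sample beta rows uniformly,
    pick the most violated row in the sample, project x onto it."""
    m = A.shape[0]
    sample = rng.choice(m, size=beta, replace=False)    # random subset of rows
    residuals = A[sample] @ x - b[sample]               # violations within the sample
    i = sample[np.argmax(np.abs(residuals))]            # greedy (Motzkin-style) choice
    a_i = A[i]
    # Orthogonal projection onto the hyperplane {y : a_i^T y = b_i}.
    return x - ((a_i @ x - b[i]) / (a_i @ a_i)) * a_i

# Hypothetical usage on a small consistent system.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x = np.zeros(50)
for _ in range(2000):
    x = skm_step(A, b, x, beta=20, rng=rng)
print(np.linalg.norm(x - x_true))
```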



Related research

The randomized sparse Kaczmarz method was recently proposed to recover sparse solutions of linear systems. In this work, we introduce a greedy variant of the randomized sparse Kaczmarz method by employing the sampling Kaczmarz-Motzkin method, and prove its linear convergence in expectation with respect to the Bregman distance in the noiseless and noisy cases. This greedy variant can be viewed as a unification of the sampling Kaczmarz-Motzkin method and the randomized sparse Kaczmarz method, and hence inherits the merits of both methods. Numerically, we report a couple of experimental results to demonstrate its superiority.
Yanjun Zhang, Hanyu Li (2020)
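A minimal sketch of how the two ingredients described above might combine, assuming the standard sparse Kaczmarz update (a Kaczmarz step on an auxiliary variable followed by soft thresholding) and SKM-style row selection; the names, the parameter lam, and the exact pairing of steps are illustrative, not the authors' pseudocode.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator that promotes sparsity."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def greedy_sparse_kaczmarz_step(A, b, x, z, beta, lam, rng):
    """One step of a sparse Kaczmarz iteration with SKM-style row selection
    (a sketch of the combination described in the abstract, not the exact method)."""
    m = A.shape[0]
    sample = rng.choice(m, size=beta, replace=False)
    i = sample[np.argmax(np.abs(A[sample] @ x - b[sample]))]   # max violation in the sample
    a_i = A[i]
    z = z - ((a_i @ x - b[i]) / (a_i @ a_i)) * a_i             # Kaczmarz step on the auxiliary variable
    x = soft_threshold(z, lam)                                 # sparse primal iterate
    return x, z
```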
The sampling Kaczmarz-Motzkin (SKM) method is a generalization of the randomized Kaczmarz and Motzkin methods. It first samples some rows of the coefficient matrix at random to build a set and then uses the maximum violation criterion within this set to determine a single constraint, which it enforces to make progress. In this paper, building on the framework of the SKM method and incorporating greedy strategies, we present two block sampling Kaczmarz-Motzkin methods for consistent linear systems. Specifically, we also first sample a subset of rows of the coefficient matrix and then determine an index in this set using the maximum violation criterion. Unlike the SKM method, the block methods then devise different greedy strategies to build index sets and make progress by enforcing the corresponding multiple constraints simultaneously. Theoretical analyses demonstrate that these block methods converge at least as quickly as the SKM method, and numerical experiments show that, for the same accuracy, our methods outperform the SKM method in terms of the number of iterations and computing time.
Yanjun Zhang, Hanyu Li (2020)
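The block step described above can be pictured as follows: after sampling, keep all sampled rows whose violation is within a fraction of the largest one and enforce them simultaneously via a least-squares (pseudoinverse) projection. The particular rule for building the block, governed here by the threshold theta, is an illustrative stand-in for the greedy strategies of the paper.

```python
import numpy as np

def block_skm_step(A, b, x, eta, theta, rng):
    """Sketch of a block step: sample eta rows, keep those whose violation is
    within a fraction theta of the largest one, then enforce that block at once."""
    sample = rng.choice(A.shape[0], size=eta, replace=False)
    viol = np.abs(A[sample] @ x - b[sample])
    block = sample[viol >= theta * viol.max()]        # greedy index set within the sample
    A_blk, b_blk = A[block], b[block]
    # Minimum-norm correction satisfying all selected constraints simultaneously.
    return x + np.linalg.pinv(A_blk) @ (b_blk - A_blk @ x)
```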
The well-known greedy randomized Kaczmarz (GRK) method uses a greedy selection rule based on maximum distance to determine a subset of the indices of working rows. In this paper, using a greedy selection rule based on the maximum residual, we propose the greedy randomized Motzkin-Kaczmarz (GRMK) method for linear systems. The block version of the new method is also presented. We analyze the convergence of the two methods and provide the corresponding convergence factors. Extensive numerical experiments show that the GRMK method has almost the same performance as the GRK method for dense matrices, and that the former performs better in computing time for some sparse matrices.
Yanjun Zhang, Hanyu Li (2021)
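For intuition, the maximum-residual rule underlying GRMK can be sketched as below: pick the row with the largest residual $|b_i - a_i x|$ and project onto it. The actual GRK/GRMK methods additionally sample randomly among rows whose residuals exceed a threshold; this simplified deterministic version only illustrates the selection criterion.

```python
import numpy as np

def max_residual_step(A, b, x):
    """Greedy selection on the maximum residual |b_i - a_i^T x| over all rows,
    followed by an orthogonal projection onto the chosen hyperplane."""
    r = b - A @ x
    i = int(np.argmax(np.abs(r)))          # row with the largest residual
    a_i = A[i]
    return x + (r[i] / (a_i @ a_i)) * a_i
```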
In this paper, we analyze the convergence behavior of the randomized extended Kaczmarz (REK) method for all types of linear systems (consistent or inconsistent, overdetermined or underdetermined, full-rank or rank-deficient). The analysis shows that the larger a singular value of $A$ is, the faster the error decays in the corresponding right singular vector space, and that as $k \rightarrow \infty$, $x_{k}-x_{\star}$ tends to the right singular vector corresponding to the smallest singular value of $A$, where $x_{k}$ is the $k$th approximation of the REK method and $x_{\star}$ is the minimum $\ell_2$-norm least squares solution. These results explain the phenomenon, found in extensive numerical experiments in the literature, that the REK method seems to converge faster in the beginning. A simple numerical example is provided to confirm the above findings.
Changpeng Shao (2021)
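A compact sketch of the REK iteration discussed above, assuming the usual form with rows and columns sampled proportionally to their squared norms: an auxiliary vector $z$ is driven toward the part of $b$ outside the range of $A$, while $x$ performs Kaczmarz steps toward the consistent system $Ax = b - z$. The function name and the fixed iteration count are illustrative.

```python
import numpy as np

def rek(A, b, iters, rng):
    """Randomized extended Kaczmarz (sketch): x converges toward the
    minimum l2-norm least-squares solution, even for inconsistent systems."""
    m, n = A.shape
    row_p = np.sum(A**2, axis=1); row_p /= row_p.sum()   # rows sampled proportional to squared norms
    col_p = np.sum(A**2, axis=0); col_p /= col_p.sum()   # columns likewise
    x, z = np.zeros(n), b.copy()
    for _ in range(iters):
        j = rng.choice(n, p=col_p)
        c = A[:, j]
        z -= ((c @ z) / (c @ c)) * c                     # push z toward the part of b outside range(A)
        i = rng.choice(m, p=row_p)
        a = A[i]
        x += ((b[i] - z[i] - a @ x) / (a @ a)) * a       # Kaczmarz step toward Ax = b - z
    return x
```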
We propose a deterministic Kaczmarz method for solving linear systems $Ax=b$ with $A$ nonsingular. Instead of using orthogonal projections, we use reflections in the original Kaczmarz iterative method. This generates a series of points on an $n$-sphere $S$ centered at the solution $x_*=A^{-1}b$. We show that these points are nicely distributed on $S$. Taking the average of several points will lead to an effective approximation to the solution. We will show how to choose these points efficiently. The numerical tests show that in practice this deterministic scheme converges much faster than we expected and can beat the (block) randomized Kaczmarz methods.
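A small sketch of the reflection idea, assuming cyclic row sweeps: replacing each orthogonal projection with a reflection keeps every iterate at a constant distance from the solution $x_* = A^{-1}b$, i.e. on the sphere $S$, so averaging several iterates approximates its center. The cyclic ordering and the number of averaged points are illustrative choices, not the paper's point-selection scheme.

```python
import numpy as np

def reflection_kaczmarz_average(A, b, num_points):
    """Reflection-based Kaczmarz sketch: reflect across each row's hyperplane
    instead of projecting, then average iterates lying on a sphere around x*."""
    m, n = A.shape
    x = np.zeros(n)
    points = []
    for _ in range(num_points):
        for i in range(m):                                 # one cyclic sweep of reflections
            a = A[i]
            x = x - 2.0 * ((a @ x - b[i]) / (a @ a)) * a   # reflect across {y : a^T y = b_i}
        points.append(x.copy())                            # each iterate stays on the sphere S
    return np.mean(points, axis=0)                         # the average approximates the center x*
```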