
Preconvergence of the randomized extended Kaczmarz method

Published by: Dr. Hanyu Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this paper, we analyze the convergence behavior of the randomized extended Kaczmarz (REK) method for all types of linear systems (consistent or inconsistent, overdetermined or underdetermined, full-rank or rank-deficient). The analysis shows that the larger the singular value of $A$ is, the faster the error decays in the corresponding right singular vector space, and that, as $k \rightarrow \infty$, $x_k - x_\star$ tends to the right singular vector corresponding to the smallest singular value of $A$, where $x_k$ is the $k$th approximation of the REK method and $x_\star$ is the minimum $\ell_2$-norm least squares solution. These results explain the phenomenon, observed in extensive numerical experiments in the literature, that the REK method seems to converge faster at the beginning. A simple numerical example is provided to confirm the above findings.
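To make the described behavior concrete, the following is a minimal illustrative sketch (not the paper's own code; the problem sizes, seed, and iteration count are arbitrary choices) that runs REK on a random inconsistent system and tracks the error in right singular vector coordinates: the component along $v_1$ dies out quickly while the component along $v_n$ lingers.

```python
# A minimal sketch (not the authors' code): run REK on a random inconsistent
# system and watch the error components in right singular vector coordinates.
# Problem sizes, the seed, and the iteration count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)                      # generically inconsistent
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum l2-norm least squares solution
U, s, Vt = np.linalg.svd(A, full_matrices=False)

row_p = np.sum(A**2, axis=1) / np.sum(A**2)     # row sampling probabilities
col_p = np.sum(A**2, axis=0) / np.sum(A**2)     # column sampling probabilities
x, z = np.zeros(n), b.copy()
for k in range(20001):
    j = rng.choice(n, p=col_p)                  # column step: drive z to the part of b outside range(A)
    z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
    i = rng.choice(m, p=row_p)                  # row step: project onto the i-th hyperplane of Ax = b - z
    x += (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    if k % 5000 == 0:
        e = Vt @ (x - x_star)                   # error in right singular vector coordinates
        print(k, abs(e[0]), abs(e[-1]))         # e[0]: largest singular value; e[-1]: smallest
# The e[0] component shrinks first; e[-1] eventually dominates x_k - x_star.
```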


Read also

Teng Zhang, Feng Yu (2020)
This paper investigates the convergence of the randomized Kaczmarz algorithm for the problem of phase retrieval of complex-valued objects. While this algorithm has been studied for the real-valued case, its generalization to the complex-valued case is nontrivial and has been left as a conjecture. This paper establishes the connection between the convergence of the algorithm and the convexity of an objective function. Based on this connection, it demonstrates that when the sensing vectors are sampled uniformly from a unit sphere and the number of sensing vectors $m$ satisfies $m > O(n \log n)$ as $n, m \rightarrow \infty$, the algorithm with a good initialization achieves linear convergence to the solution with high probability.
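As an illustration only, here is a minimal sketch of a Kaczmarz-type update for phaseless measurements of a complex signal; the sizes, the uniform row sampling, and the perturbation used as a stand-in for a "good initialization" are assumptions, not the paper's setup.

```python
# A minimal sketch (illustrative, not the paper's experiments) of a randomized
# Kaczmarz update for phase retrieval of a complex-valued object: each step
# borrows the phase of the current iterate on one measurement and projects.
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 200                                   # illustrative sizes with m well above n log n
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # sensing vectors uniform on the unit sphere
y = np.abs(A @ x_true)                           # phaseless (magnitude-only) measurements

# Stand-in for a "good initialization": a small perturbation of the true signal.
x = x_true + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
for k in range(20000):
    i = rng.integers(m)                          # rows have unit norm, so sample uniformly
    r = A[i] @ x
    phase = r / abs(r) if abs(r) > 0 else 1.0
    x += (y[i] * phase - r) * A[i].conj()        # Kaczmarz step toward |<a_i, x>| = y_i
alpha = (x.conj() @ x_true) / abs(x.conj() @ x_true)
print(np.linalg.norm(alpha * x - x_true))        # error up to the unavoidable global phase
```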
The randomized sparse Kaczmarz method was recently proposed to recover sparse solutions of linear systems. In this work, we introduce a greedy variant of the randomized sparse Kaczmarz method by employing the sampling Kaczmarz-Motzkin method, and prove its linear convergence in expectation with respect to the Bregman distance in both the noiseless and noisy cases. This greedy variant can be viewed as a unification of the sampling Kaczmarz-Motzkin method and the randomized sparse Kaczmarz method, and hence inherits the merits of both. Numerically, we report several experimental results to demonstrate its superiority.
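A rough sketch of the idea, under illustrative assumptions (the dimensions, threshold lam, batch size, and iteration count are arbitrary, not the authors' choices): sample a batch of rows, greedily pick the one with the largest residual as in the sampling Kaczmarz-Motzkin method, then apply the soft-thresholded (Bregman) update of the randomized sparse Kaczmarz method.

```python
# A rough sketch under illustrative assumptions (dimensions, threshold lam, and
# batch size are arbitrary, not the authors' choices): greedy row pick from a
# sampled batch, followed by the sparse Kaczmarz soft-thresholded dual update.
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(2)
m, n, k_sparse, lam, batch = 150, 300, 10, 5.0, 30
x_true = np.zeros(n)
x_true[rng.choice(n, k_sparse, replace=False)] = rng.standard_normal(k_sparse)
A = rng.standard_normal((m, n))
b = A @ x_true                                   # consistent system with a sparse solution

z = np.zeros(n)                                  # dual iterate
x = soft_threshold(z, lam)                       # primal iterate x_k = S_lam(z_k)
for k in range(20000):
    rows = rng.choice(m, batch, replace=False)   # sampling Kaczmarz-Motzkin: random batch ...
    i = rows[np.argmax(np.abs(b[rows] - A[rows] @ x))]   # ... then the row with largest residual
    z += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]        # sparse Kaczmarz step on the dual variable
    x = soft_threshold(z, lam)
print(np.linalg.norm(x - x_true))                # small if lam and m suit the sparsity level
```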
Hanyu Li, Yanjun Zhang (2020)
With a quite different way to determine the working rows, we propose a novel greedy Kaczmarz method for solving consistent linear systems. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the greedy randomized Kaczmarz method and the relaxed greedy randomized Kaczmarz method introduced recently by Bai and Wu [Z. Z. Bai and W. T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), pp. A592--A606; Z. Z. Bai and W. T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), pp. 21--26] in terms of computing time.
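For context, the following is a minimal sketch of the greedy randomized Kaczmarz (GRK) method of Bai and Wu that serves as the baseline above; the row-selection rule proposed in this paper is different and is not reproduced here, and the problem sizes are illustrative.

```python
# A minimal sketch (illustrative) of the Bai-Wu greedy randomized Kaczmarz (GRK)
# baseline: threshold the rows by relative residual, then sample among them.
import numpy as np

rng = np.random.default_rng(3)
m, n = 500, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                                   # consistent system
row_norm2 = np.sum(A**2, axis=1)
fro2 = row_norm2.sum()

x = np.zeros(n)
for k in range(2000):
    r = b - A @ x
    if r @ r < 1e-24:
        break
    eps = 0.5 * ((r**2 / row_norm2).max() / (r @ r) + 1.0 / fro2)   # greedy threshold
    p = np.where(r**2 >= eps * (r @ r) * row_norm2, r**2, 0.0)      # rows with large relative residual
    i = rng.choice(m, p=p / p.sum())             # sample among them, weighted by squared residual
    x += r[i] / row_norm2[i] * A[i]              # Kaczmarz projection onto the chosen hyperplane
print(np.linalg.norm(x - x_true))
```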
Yanjun Zhang, Hanyu Li (2020)
In this paper, combining the count sketch and the maximal weighted residual Kaczmarz method, we propose a fast randomized algorithm for large overdetermined linear systems. Convergence analysis of the new algorithm is provided. Numerical experiments show that, for the same accuracy, our method requires less computing time than the state-of-the-art algorithm.
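A minimal sketch of the general recipe, with the caveat that the exact combination, sketch size, and iteration count here are illustrative assumptions rather than the paper's algorithm: compress the tall system with a count sketch, then run maximal weighted residual Kaczmarz steps on the compressed system.

```python
# A minimal sketch of the general recipe, not the paper's algorithm (sketch size,
# dimensions, and iteration count are assumptions): compress the tall system with
# a count sketch, then run maximal weighted residual Kaczmarz on the small system.
import numpy as np

rng = np.random.default_rng(4)
m, n, d = 5000, 50, 500                          # sketch the m x n system down to d rows
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Count sketch: hash every original row into one of d buckets with a random sign.
buckets = rng.integers(0, d, size=m)
signs = rng.choice([-1.0, 1.0], size=m)
SA, Sb = np.zeros((d, n)), np.zeros(d)
np.add.at(SA, buckets, signs[:, None] * A)
np.add.at(Sb, buckets, signs * b)
row_norm2 = np.sum(SA**2, axis=1)
keep = row_norm2 > 0                             # drop any buckets that stayed empty
SA, Sb, row_norm2 = SA[keep], Sb[keep], row_norm2[keep]

x = np.zeros(n)
for k in range(2000):
    r = Sb - SA @ x
    i = np.argmax(r**2 / row_norm2)              # maximal weighted residual row selection
    x += r[i] / row_norm2[i] * SA[i]             # Kaczmarz projection onto that row
print(np.linalg.norm(x - x_true))
```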
Yanjun Zhang, Hanyu Li (2021)
The randomized Gauss--Seidel method and its extension have attracted much attention recently, and their convergence rates have been studied extensively. However, these rates are usually given as upper bounds, which cannot fully reflect the actual convergence. In this paper, we make a detailed analysis of their convergence behaviors. The analysis shows that the larger the singular value of $A$ is, the faster the error decays in the corresponding singular vector space, and that the convergence directions are mainly driven by the large singular values at the beginning, then gradually by the small singular values, and finally by the smallest nonzero singular value. These results explain the phenomenon found in extensive numerical experiments in the literature that these two methods seem to converge faster at the beginning. Numerical examples are provided to confirm the above findings.
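A minimal sketch, analogous to the REK example above and with illustrative sizes, of the randomized Gauss--Seidel (randomized coordinate descent) iteration for least squares, again tracking the error in right singular vector coordinates to expose the same "fast start" behavior.

```python
# A minimal sketch (illustrative sizes), analogous to the REK example above, of
# the randomized Gauss--Seidel / coordinate descent iteration for least squares,
# again tracking the error in right singular vector coordinates.
import numpy as np

rng = np.random.default_rng(5)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]    # least squares solution
U, s, Vt = np.linalg.svd(A, full_matrices=False)

col_p = np.sum(A**2, axis=0) / np.sum(A**2)      # column sampling probabilities
x, r = np.zeros(n), b.copy()                     # keep the residual r = b - A x up to date
for k in range(20001):
    j = rng.choice(n, p=col_p)
    delta = (A[:, j] @ r) / (A[:, j] @ A[:, j])  # exact minimization over coordinate j
    x[j] += delta
    r -= delta * A[:, j]
    if k % 5000 == 0:
        e = Vt @ (x - x_star)
        print(k, abs(e[0]), abs(e[-1]))          # large-singular-value component decays first
```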