
A strongly convergent Krasnoselskiĭ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators in Hilbert spaces

Published by Radu Ioan Bot
Publication date: 2019
Paper language: English





In this article, we propose a Krasnoselskiĭ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators $(T_n)_{n \geq 0}$ in Hilbert spaces. We formulate an asymptotic property which the family $(T_n)_{n \geq 0}$ has to fulfill such that the sequence generated by the algorithm converges strongly to the element in $\bigcap_{n \geq 0} \operatorname{Fix} T_n$ with minimum norm. Based on this, we derive a forward-backward algorithm that allows variable step sizes and generates a sequence of iterates that converges strongly to the zero with minimum norm of the sum of a maximally monotone operator and a cocoercive one. We demonstrate the superiority of the forward-backward algorithm with variable step sizes over the one with constant step size by means of numerical experiments on variational image reconstruction and split feasibility problems in infinite-dimensional Hilbert spaces.
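For orientation, the classical Krasnoselskiĭ-Mann update that the proposed algorithm builds on can be sketched in a few lines of Python. The toy operator `proj_unit_ball`, the constant relaxation parameter, and the iteration count are illustrative assumptions for a finite-dimensional demo; they are not the paper's construction, which works with a countable family $(T_n)$ and forces strong convergence to the minimum-norm common fixed point.

```python
import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, n_iter=200):
    """Classical Krasnoselskii-Mann iteration x_{k+1} = (1 - lam) x_k + lam T(x_k).

    For a nonexpansive T with fixed points, the iterates converge (weakly, in
    general Hilbert spaces) to some fixed point of T; the paper's variant adds
    structure so that convergence is strong and selects the minimum-norm point.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = (1 - lam) * x + lam * T(x)
    return x

# Toy nonexpansive map: projection onto the closed unit ball, whose
# fixed-point set is the ball itself.
def proj_unit_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

x_star = krasnoselskii_mann(proj_unit_ball, x0=np.array([3.0, 4.0]))
print(np.linalg.norm(x_star))  # -> approximately 1.0
```

Starting from $(3, 4)$, the iterates shrink along the ray toward the unit sphere and settle near $(0.6, 0.8)$, the point of the ball closest to the start.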




Read also

This paper investigates optimal error bounds and convergence rates for general Mann iterations for computing fixed points of nonexpansive maps in normed spaces. We look for iterations that achieve the smallest fixed-point residual after $n$ steps, by minimizing a worst-case bound $\|x^n - Tx^n\| \le R_n$ derived from a nested family of optimal transport problems. We prove that this bound is tight, so that minimizing $R_n$ yields optimal iterations. Inspired by numerical results, we identify iterations that attain the rate $R_n = O(1/n)$, which we also show to be the best possible. In particular, we prove that the classical Halpern iteration achieves this optimal rate for several alternative stepsizes, and we determine analytically the optimal stepsizes that attain the smallest worst-case residuals at every step $n$, with a tight bound $R_n \approx \frac{4}{n+4}$. We also determine the optimal Halpern stepsizes for affine nonexpansive maps, for which we get exactly $R_n = \frac{1}{n+1}$. Finally, we show that the best rate for the classical Krasnoselskiĭ-Mann iteration is $\Omega(1/\sqrt{n})$, and we present numerical evidence suggesting that even after introducing inertial terms one cannot reach the faster rate $O(1/n)$.
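The Halpern iteration discussed above is short enough to state in code. The sketch below uses the common stepsize $\beta_k = 1/(k+2)$; the rotation map and the zero anchor are toy choices for illustration, not taken from the paper.

```python
import numpy as np

def halpern(T, x0, anchor, n_iter):
    """Halpern iteration x_{k+1} = beta_k * anchor + (1 - beta_k) * T(x_k)
    with beta_k = 1/(k+2). For nonexpansive T it converges strongly to the
    fixed point of T nearest the anchor."""
    x = np.asarray(x0, dtype=float)
    anchor = np.asarray(anchor, dtype=float)
    for k in range(n_iter):
        beta = 1.0 / (k + 2)
        x = beta * anchor + (1 - beta) * T(x)
    return x

# Affine nonexpansive toy map: rotation by 90 degrees, an isometry whose
# unique fixed point is the origin.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x_n = halpern(lambda x: R @ x, x0=[1.0, 0.0], anchor=[0.0, 0.0], n_iter=200)
# With anchor 0 each step contracts the norm by (k+1)/(k+2), so the product
# telescopes and ||x_200|| = 1/201, an O(1/n) decay mirroring the 1/(n+1)
# rate for affine maps quoted above.
```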
Yizun Lin, Yuesheng Xu (2021)
We estimate convergence rates for fixed-point iterations of a class of nonlinear operators which are partially motivated from solving convex optimization problems. We introduce the notion of the generalized averaged nonexpansive (GAN) operator with a positive exponent, and provide a convergence rate analysis of the fixed-point iteration of the GAN operator. The proposed generalized averaged nonexpansiveness is weaker than the averaged nonexpansiveness while stronger than nonexpansiveness. We show that the fixed-point iteration of a GAN operator with a positive exponent converges to its fixed point and estimate the local convergence rate (the convergence rate in terms of the distance between consecutive iterates) according to the range of the exponent. We prove that the fixed-point iteration of a GAN operator with a positive exponent strictly smaller than 1 can achieve an exponential global convergence rate (the convergence rate in terms of the distance between an iterate and the solution). Furthermore, we establish the global convergence rate of the fixed-point iteration of a GAN operator, depending on both the exponent of generalized averaged nonexpansiveness and the exponent of the Hölder regularity, if the GAN operator is also Hölder regular. We then apply the established theory to three types of convex optimization problems that appear often in data science to design fixed-point iterative algorithms for solving these optimization problems and to analyze their convergence properties.
This paper is concerned with the variational inequality problem (VIP) over the fixed point set of a quasi-nonexpansive operator. We propose, in particular, an algorithm which entails, at each step, projecting onto a suitably chosen half-space, and prove that the sequences it generates converge to the unique solution of the VIP. We also present an application of our result to a hierarchical optimization problem.
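The half-space projection used at each step of such an algorithm has a simple closed form: for the half-space $\{y : \langle a, y \rangle \le b\}$ it is $x - \max(0, \langle a, x \rangle - b)\, a / \|a\|^2$. The vectors in the sketch below are illustrative.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Euclidean projection onto the half-space {y : <a, y> <= b}.

    If x already satisfies the inequality it is returned unchanged;
    otherwise it is moved along -a just far enough to reach the boundary.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x.copy()
    return x - (violation / np.dot(a, a)) * a

p = proj_halfspace([2.0, 2.0], a=[1.0, 0.0], b=1.0)
print(p)  # -> [1. 2.]: only the violating coordinate is pulled back
```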
We study a class of countably-infinite-dimensional linear programs (CILPs) whose feasible sets are bounded subsets of appropriately defined spaces of measures. The optimal value, optimal points, and minimal points of these CILPs can be approximated by solving finite-dimensional linear programs. We show how to construct finite-dimensional programs that lead to approximations with easy-to-evaluate error bounds, and we prove that the errors converge to zero as the size of the finite-dimensional programs approaches that of the original problem. We discuss the use of our methods in the computation of the stationary distributions, occupation measures, and exit distributions of Markov chains.
In this note, we first consider boundedness properties of a family of operators generalizing the Hilbert operator in the upper triangle case. In the diagonal case, we give the exact norm of these operators under some restrictions on the parameters. We then consider boundedness properties of a family of positive Bergman-type operators of the upper half-plane. We give necessary and sufficient conditions on the parameters under which these operators are bounded in the upper triangle case.