
A modified subgradient extragradient method for solving the variational inequality problem

Posted by Aviv Gibali
Publication date: 2018
Paper language: English





The subgradient extragradient method for solving the variational inequality (VI) problem, introduced by Censor et al. \cite{CGR}, replaces the second projection of the extragradient method, onto the feasible set of the VI, with a subgradient projection onto a constructible half-space. Since the method was introduced, many authors have proposed extensions and modifications with applications to various problems. In this paper, we introduce a modified subgradient extragradient method that improves the stepsize of its second step. Convergence of the proposed method is proved under standard and mild conditions, and preliminary numerical experiments illustrate the performance and advantage of this new subgradient extragradient variant.
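For orientation, the unmodified subgradient extragradient scheme can be sketched in a few lines of Python. Everything below is illustrative: the names F, project_C and the fixed stepsize tau (assumed smaller than 1/L for an L-Lipschitz, monotone F) are placeholders, and the paper's modified second-step stepsize is not reproduced here.

import numpy as np

def project_halfspace(z, a, b):
    """Euclidean projection of z onto the half-space {w : <a, w> <= b}."""
    nrm2 = a @ a
    viol = a @ z - b
    if nrm2 == 0.0 or viol <= 0.0:
        return z                      # z already lies in the half-space
    return z - (viol / nrm2) * a

def subgradient_extragradient(F, project_C, x0, tau=0.1, tol=1e-8, max_iter=10_000):
    """Minimal sketch of the (unmodified) subgradient extragradient method.

    F          -- monotone, Lipschitz operator of the VI (placeholder)
    project_C  -- Euclidean projection onto the feasible set C (placeholder)
    tau        -- fixed stepsize, assumed tau < 1/L for L-Lipschitz F
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        y = project_C(x - tau * Fx)          # first step: true projection onto C
        a = x - tau * Fx - y                 # outward normal of a half-space T containing C
        # second step: project onto T = {w : <a, w - y> <= 0} instead of onto C
        x_next = project_halfspace(y - tau * F(y), a, a @ y)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

The computational point is visible in the code: the second projection is onto a half-space and therefore has a closed form, no matter how expensive the projection onto C itself is.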




Read also

This paper is concerned with the variational inequality problem (VIP) over the fixed point set of a quasi-nonexpansive operator. We propose, in particular, an algorithm which entails, at each step, projecting onto a suitably chosen half-space, and prove that the sequences it generates converge to the unique solution of the VIP. We also present an application of our result to a hierarchical optimization problem.
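One standard way to obtain such a half-space (whether this matches the paper's exact choice is an assumption on our part) follows directly from quasi-nonexpansiveness: for a quasi-nonexpansive operator $T$, every $q \in \operatorname{Fix}(T)$ satisfies $\|Tx - q\| \le \|x - q\|$ for all $x$, and expanding the squared norms gives

\[
\operatorname{Fix}(T) \subseteq H(x) := \left\{ w : \left\langle x - Tx,\; w - \tfrac{x + Tx}{2} \right\rangle \le 0 \right\}.
\]

Each iteration can therefore project onto the explicitly computable half-space $H(x_n)$, whose projection has a closed form, instead of onto $\operatorname{Fix}(T)$ itself.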
Kui Zhu, Yutao Tang (2021)
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, we propose a projected primal-dual dynamics using only the objective functions' approximate subgradients. We first prove that the formulated optimization problem can only be solved with an approximate error depending upon the accuracy of the available subgradients. Then, we show the exact solvability of this optimization problem if the accumulated approximation error is not too large. After that, we also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of our algorithms is verified by a numerical example.
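As a rough illustration of where the approximation error enters, here is a generic discretized consensus-plus-projected-subgradient step with inexact subgradients; it is a textbook-style sketch with placeholder names (W, approx_subgrads, projections, alpha), not the authors' projected primal-dual dynamics.

import numpy as np

def distributed_inexact_subgradient_step(X, W, approx_subgrads, projections, alpha):
    """One generic consensus + projected (approximate) subgradient step.

    X                -- (n_agents, dim) stacked local iterates
    W                -- doubly stochastic mixing matrix of the network
    approx_subgrads  -- list of callables returning inexact local subgradients
    projections      -- list of projections onto the heterogeneous local sets
    alpha            -- stepsize
    """
    mixed = W @ X                                    # consensus averaging over neighbors
    for i in range(X.shape[0]):
        g = approx_subgrads[i](mixed[i])             # inexact subgradient; its error bounds the accuracy
        X[i] = projections[i](mixed[i] - alpha * g)  # projected descent step on the local set
    return X

The componentwise normalized variant mentioned in the abstract would, roughly speaking, rescale each entry of g before the update; we do not attempt to reproduce it here.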
This paper considers a general convex constrained problem setting where the functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is in finding a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
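For concreteness, a Euclidean instance of (weighted) dual averaging in the spirit of Nesterov (2009) can be sketched as follows; the schedules weights and betas are placeholders that must be chosen as in the analysis, and the function names are ours.

import numpy as np

def weighted_dual_averaging(subgrad, project_C, x0, weights, betas, n_iter):
    """Euclidean sketch of weighted dual averaging.

    subgrad   -- returns a subgradient of the objective at x
    project_C -- Euclidean projection onto the feasible set C
    weights   -- per-iteration weights lambda_k on the subgradients
    betas     -- nondecreasing regularization parameters beta_k
    """
    x0 = np.asarray(x0, dtype=float)
    x, z = x0.copy(), np.zeros_like(x0)
    xs, ws = [], []
    for k in range(n_iter):
        g = subgrad(x)
        z += weights[k] * g                  # weighted accumulation in the dual space
        x = project_C(x0 - z / betas[k])     # prox step with the squared-norm prox function
        xs.append(x); ws.append(weights[k])
    # the weighted average of the iterates carries the convergence guarantee
    return sum(w * xi for w, xi in zip(ws, xs)) / sum(ws)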
We study variational inequalities which are governed by a strongly monotone and Lipschitz continuous operator $F$ over a closed and convex set $S$. We assume that $S = C \cap A^{-1}(Q)$ is the nonempty solution set of a (multiple-set) split convex feasibility problem, where $C$ and $Q$ are both closed and convex subsets of two real Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and the operator $A$ acting between them is linear. We consider a modification of the gradient projection method, the main idea of which is to replace, at each step, the metric projection onto $S$ by another metric projection onto a half-space which contains $S$. We propose three variants of a method for constructing the above-mentioned half-spaces by employing the multiple-set and the split structure of the set $S$. For the split part we make use of the Landweber transform.
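To make the role of the Landweber transform concrete: a natural proximity function for the split part is $f(x) = \tfrac{1}{2}\|(I - P_Q)Ax\|^2$, whose gradient is exactly the Landweber transform $\nabla f(x) = A^{*}(I - P_Q)Ax$. Since $f$ is convex and vanishes on $S$, its linearization at the current iterate supplies a half-space containing $S$; this is a standard construction, and the paper's three variants presumably refine it:

\[
S \subseteq \left\{ w : f(x_n) + \langle \nabla f(x_n),\, w - x_n \rangle \le 0 \right\}.
\]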
We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in exactly the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.
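The flavor of such methods can be conveyed by a loopless-SVRG-style variance-reduced extragradient sketch for a finite-sum monotone operator $F = \frac{1}{n}\sum_i F_i$; the names and parameter choices below are illustrative and not a transcription of the paper's algorithms.

import numpy as np

def vr_extragradient(F_full, F_comp, n, z0, tau=0.05, p=0.05, n_iter=10_000, rng=None):
    """Generic variance-reduced extragradient sketch.

    F_full -- evaluates the full operator F (used only at snapshot refreshes)
    F_comp -- F_comp(i, z) evaluates the i-th component operator F_i
    tau    -- stepsize; p -- snapshot refresh probability (placeholders)
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z0, dtype=float)
    w, Fw = z.copy(), F_full(z)                   # snapshot point and its full operator value
    for _ in range(n_iter):
        i = rng.integers(n)
        g_z = Fw + F_comp(i, z) - F_comp(i, w)    # control-variate estimate of F(z)
        y = z - tau * g_z                         # extrapolation step
        g_y = Fw + F_comp(i, y) - F_comp(i, w)    # same control variate evaluated at y
        z = z - tau * g_y                         # extragradient update
        if rng.random() < p:                      # occasional full-operator snapshot refresh
            w, Fw = z.copy(), F_full(z)
    return z

For a matrix game with payoff matrix $A$, the monotone operator is $F(x, y) = (Ay, -A^{\top}x)$, and the components $F_i$ can be built from sampled rows and columns of $A$.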