
Resolvent Splitting for Sums of Monotone Operators with Minimal Lifting

Added by Matthew Tam
Publication date: 2021
Language: English





In this work, we study fixed point algorithms for finding a zero in the sum of $n \geq 2$ maximally monotone operators by using their resolvents. More precisely, we consider the class of such algorithms where each resolvent is evaluated only once per iteration. For any algorithm from this class, we show that the underlying fixed point operator is necessarily defined on a $d$-fold Cartesian product space with $d \geq n-1$. Further, we show that this bound is unimprovable by providing a family of examples for which $d = n-1$ is attained. This family includes the Douglas-Rachford algorithm as the special case when $n = 2$. Applications of the new family of algorithms in distributed decentralised optimisation and multi-block extensions of the alternating direction method of multipliers (ADMM) are discussed.
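To make the $n = 2$ special case concrete, the following is a minimal sketch of the Douglas-Rachford iteration applied to two convex functions through their proximal operators (the resolvents of their subdifferentials). The quadratic term and the box constraint are illustrative choices, not taken from the paper; the sketch only shows that each resolvent is evaluated once per iteration and that the governing variable lives in a single copy of the space ($d = n - 1 = 1$).

    import numpy as np

    def prox_f(x, t):
        # Resolvent of the subdifferential of f(x) = 0.5*||x - b||^2
        b = np.array([3.0, -1.0])
        return (x + t * b) / (1.0 + t)

    def prox_g(x, t):
        # Resolvent for g = indicator of the box [0, 1]^2, i.e. a projection
        return np.clip(x, 0.0, 1.0)

    def douglas_rachford(z, t=1.0, iters=200):
        # Each resolvent is evaluated exactly once per iteration
        for _ in range(iters):
            x = prox_f(z, t)
            y = prox_g(2.0 * x - z, t)
            z = z + (y - x)
        return prox_f(z, t)

    print(douglas_rachford(np.zeros(2)))  # approaches the projection of b onto the box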



Related research

Monotone operator splitting is a powerful paradigm that facilitates parallel processing for optimization problems in which the cost function can be split into two convex functions. We propose a generalized form of monotone operator splitting based on the Bregman divergence. We show that an appropriate design of the Bregman divergence leads to faster convergence than conventional splitting algorithms. The proposed Bregman monotone operator splitting (B-MOS) is applied to an example application to illustrate its effectiveness, and is found to significantly improve the convergence rate.
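As a point of reference for how a Bregman divergence replaces the Euclidean quadratic in a proximal step, here is a hedged sketch of the classical entropic (mirror) prox on the probability simplex. It is not the B-MOS scheme itself, and the gradient vector is an illustrative placeholder.

    import numpy as np

    def bregman_prox_linear(grad, x, t):
        # argmin_u <grad, u> + (1/t) * D_phi(u, x) over the simplex,
        # with phi(u) = sum_i u_i log u_i (negative entropy): the mirror step
        w = x * np.exp(-t * grad)
        return w / w.sum()

    x = np.ones(4) / 4.0                   # current point on the simplex
    g = np.array([0.3, -0.1, 0.5, 0.0])    # illustrative operator/gradient value
    print(bregman_prox_linear(g, x, t=0.5))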
Jinjian Chen, Yuchao Tang (2021)
Monotone inclusions play an important role in studying various convex minimization problems. In this paper, we propose a forward-partial inverse-half-forward splitting (FPIHFS) algorithm for finding a zero of the sum of a maximally monotone operator, a monotone Lipschitzian operator, a cocoercive operator, and the normal cone of a closed vector subspace. The FPIHFS algorithm is derived from a combination of the partial inverse method with the forward-backward-half-forward splitting algorithm. As applications, we employ the proposed algorithm to solve several composite monotone inclusion problems, including a finite sum of maximally monotone operators and the parallel sum of operators. In particular, we obtain a primal-dual splitting algorithm for solving a composite convex minimization problem, which has wide applications in many real-world problems. To verify the efficiency of the proposed algorithm, we apply it to the problem of projecting onto a Minkowski sum of convex sets and to the generalized Heron problem. Numerical results demonstrate the effectiveness of the proposed algorithm.
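For orientation, the forward-backward-half-forward building block mentioned above can be sketched as follows for a maximally monotone $A$ (handled by its resolvent), a cocoercive $B$ (plain forward step), and a monotone Lipschitz $C$ (forward step plus a correction). The operators and the step size are illustrative choices and this is not the FPIHFS scheme itself.

    import numpy as np

    proj_box = lambda x: np.clip(x, 0.0, 1.0)        # resolvent of A = normal cone of a box
    B = lambda x: x - np.array([2.0, -1.0])          # cocoercive (gradient of a quadratic)
    S = np.array([[0.0, 1.0], [-1.0, 0.0]])
    C = lambda x: S @ x                              # monotone and Lipschitz, not cocoercive

    def fbhf(x, gamma=0.2, iters=500):
        for _ in range(iters):
            z = proj_box(x - gamma * (B(x) + C(x)))  # backward step on A, forward on B + C
            x = z + gamma * (C(x) - C(z))            # "half-forward" correction on C only
        return x

    print(fbhf(np.zeros(2)))  # approximates a zero of A + B + C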
We consider monotone inclusion problems where the operators may be expectation-valued. A direct application of proximal and splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step, a concern that is addressed by sampling. Accordingly, we propose two avenues for addressing uncertainty in the mapping. (i) Variance-reduced stochastic proximal point method (vr-SPP). We develop one of the first variance-reduced stochastic proximal-point schemes that achieve deterministic rates of convergence in terms of solving proximal-point problems. In addition, the schemes are shown to be characterized by either optimal or near-optimal oracle (or sample) complexity guarantees. Finally, the generated sequences are shown to converge to a solution in an almost-sure sense in both monotone and strongly monotone regimes. (ii) Variance-reduced stochastic modified forward-backward splitting scheme (vr-SMFBS). In constrained settings, we consider structured problems in which the map can be decomposed into an expectation-valued map $A$ and a maximal monotone map $B$ with a tractable resolvent. Akin to (i), we show that the proposed schemes are equipped with a.s. convergence guarantees and with linear (strongly monotone $A$) and $\mathcal{O}(1/k)$ (monotone $A$) rates of convergence, while achieving optimal oracle complexity bounds. Of these, the rate statements in monotone regimes rely on leveraging the Fitzpatrick gap function for monotone inclusions. Furthermore, the schemes rely on weaker moment requirements on the noise and allow for weakening unbiasedness requirements on the oracles in strongly monotone regimes. Preliminary numerics reflect these findings and show that the variance-reduced schemes outperform stochastic approximation schemes, stochastic splitting and proximal point schemes, and sample-average approximation approaches.
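The following is a hedged sketch of a variance-reduced stochastic proximal-point step for a strongly monotone affine map $F(x) = \mathbb{E}[A(\xi)]x - b$, with the expectation approximated by a mini-batch whose size grows with the iteration count. The problem data and the batch schedule are illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    A_mean = np.array([[2.0, 0.5], [-0.5, 1.5]])   # strongly monotone in the mean
    b = np.array([1.0, -2.0])
    sample_A = lambda: A_mean + 0.1 * rng.standard_normal((2, 2))  # noisy realisations A(xi)

    def vr_spp(x, lam=1.0, iters=50):
        for k in range(1, iters + 1):
            batch = [sample_A() for _ in range(k * k)]  # polynomially growing mini-batch
            A_hat = sum(batch) / len(batch)
            # proximal-point step on the sampled map: solve (I + lam*A_hat) x+ = x + lam*b
            x = np.linalg.solve(np.eye(2) + lam * A_hat, x + lam * b)
        return x

    print(vr_spp(np.zeros(2)))          # close to the zero of F
    print(np.linalg.solve(A_mean, b))   # reference solution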
Cong D. Dang, Guanghui Lan (2013)
In this paper, we study a class of generalized monotone variational inequality (GMVI) problems whose operators are not necessarily monotone (e.g., pseudo-monotone). We present non-Euclidean extragradient (N-EG) methods for computing approximate strong solutions of these problems, and demonstrate how their iteration complexities depend on the global Lipschitz or Hölder continuity properties of their operators and on the smoothness properties of the distance generating function used in the N-EG algorithms. We also introduce a variant of this algorithm that incorporates a simple line-search procedure to deal with problems with more general continuous operators. Numerical studies are conducted to illustrate the significant advantages of the developed algorithms over existing ones for solving large-scale GMVI problems.
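For context, the Euclidean extragradient step underlying these methods can be sketched as below; the N-EG methods replace the projections with prox-mappings built from a general distance generating function. The operator and the feasible set are illustrative choices.

    import numpy as np

    S = np.array([[0.0, 1.0], [-1.0, 0.0]])
    F = lambda x: (np.eye(2) + S) @ x - np.array([0.5, 0.25])  # monotone, Lipschitz operator
    proj = lambda x: np.clip(x, 0.0, 1.0)                      # projection onto the unit box

    def extragradient(x, gamma=0.3, iters=300):
        for _ in range(iters):
            y = proj(x - gamma * F(x))   # extrapolation step
            x = proj(x - gamma * F(y))   # update using the operator at the extrapolated point
        return x

    print(extragradient(np.zeros(2)))    # an approximate strong solution of the VI on the box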
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator $T$ and a single-valued monotone, Lipschitz continuous, and expectation-valued operator $V$. We draw motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions and present a stochastic extension of the relaxed inertial forward-backward-forward (RISFBF) method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that RISFBF produces a sequence that converges weakly to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch size is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate, while the oracle complexity of computing an $\epsilon$-solution improves to $O(1/\epsilon)$. Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement allowing for far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of $O(1/k)$, while the oracle complexity of computing a suitably defined $\epsilon$-solution is $O(1/\epsilon^{1+a})$ where $a > 1$. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to stochastic forward-backward-forward (SFBF) and SA schemes.
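To fix ideas, here is a hedged sketch of the underlying stochastic forward-backward-forward (Tseng-type) step with a mini-batch estimate of the expectation-valued operator $V$; the relaxation and inertial terms of RISFBF are omitted, and the operators and batch schedule are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    S = np.array([[0.0, 2.0], [-2.0, 0.0]])
    V_mean = lambda x: S @ x + np.array([1.0, 0.0])              # monotone, Lipschitz mean operator
    V_sample = lambda x, m: V_mean(x) + rng.standard_normal((m, 2)).mean(axis=0)
    resolvent_T = lambda x: np.clip(x, -1.0, 1.0)                # J_T for T = normal cone of a box

    def sfbf(x, lam=0.2, iters=100):
        for k in range(1, iters + 1):
            v_x = V_sample(x, m=k * k)                           # mini-batch forward step at x
            y = resolvent_T(x - lam * v_x)                       # backward step on T
            v_y = V_sample(y, m=k * k)                           # second forward evaluation at y
            x = y + lam * (v_x - v_y)                            # Tseng correction step
        return x

    print(sfbf(np.zeros(2)))  # approximates a zero of T + V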