
A Relaxed Inertial Forward-Backward-Forward Algorithm for Solving Monotone Inclusions with Application to GANs

Added by Radu Ioan Bot
Publication date: 2020
Language: English

We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator. This work extends Tseng's forward-backward-forward method by employing both inertial effects and relaxation parameters. We first formulate a second-order dynamical system whose trajectories approach the solution set of the monotone inclusion problem to be solved, and provide an asymptotic analysis of these trajectories. For RIFBF, which follows by explicit time discretization of this system, we provide a convergence analysis in the general monotone case, as well as when the method is applied to solving pseudo-monotone variational inequalities. We illustrate the proposed method with applications to a bilinear saddle point problem, in the context of which we also emphasize the interplay between the inertial and relaxation parameters, and to the training of Generative Adversarial Networks (GANs).
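
The abstract does not spell out the iteration, but the scheme it describes (inertial extrapolation, Tseng's forward-backward-forward step, relaxation) can be sketched in a few lines. The following is a minimal Python illustration, not the authors' reference implementation; the routine names and all parameter values are assumptions.

import numpy as np

def rifbf_step(x, x_prev, V, resolvent, lam=0.1, alpha=0.3, rho=0.8):
    """One RIFBF step: inertial extrapolation, then Tseng's
    forward-backward-forward step, then relaxation."""
    z = x + alpha * (x - x_prev)        # inertial extrapolation
    Vz = V(z)
    y = resolvent(z - lam * Vz)         # backward (resolvent) step
    t = y + lam * (Vz - V(y))           # Tseng's forward correction
    x_next = (1.0 - rho) * z + rho * t  # relaxation
    return x_next, x

# Illustrative use on a bilinear saddle point min_u max_v u^T B v,
# where V(u, v) = (B v, -B^T u) is monotone and Lipschitz and, with no
# set-valued part (A = 0), the resolvent is the identity.
B = np.array([[1.0, 2.0], [0.0, 1.0]])
V = lambda x: np.concatenate([B @ x[2:], -B.T @ x[:2]])
resolvent = lambda x: x
x = x_prev = np.ones(4)
for _ in range(200):
    x, x_prev = rifbf_step(x, x_prev, V, resolvent)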



Related research

We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximally monotone operator $T$ and a single-valued monotone, Lipschitz continuous, and expectation-valued operator $V$. Drawing motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions, we present a relaxed inertial stochastic forward-backward-forward (RISFBF) method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that RISFBF produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch size is raised at a geometric (respectively, polynomial) rate, the rate statement can be strengthened to a linear (respectively, suitable polynomial) rate, while the oracle complexity of computing an $\epsilon$-solution improves to $O(1/\epsilon)$. Importantly, the latter claim allows for possibly biased oracles, a key theoretical advance that enables far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of $O(1/k)$, while the oracle complexity of computing a suitably defined $\epsilon$-solution is $O(1/\epsilon^{1+a})$ with $a > 1$. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method over stochastic forward-backward-forward (SFBF) and stochastic approximation (SA) schemes.
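
A minimal sketch of the mini-batch variance reduction idea described above, assuming a stochastic oracle sample_V(z, n) that returns the average of n i.i.d. evaluations of V at z; the batch schedule and all parameter values are illustrative, not the paper's:

def risfbf_step(x, x_prev, sample_V, resolvent, k,
                lam=0.1, alpha=0.3, rho=0.8, batch0=4, growth=1.1):
    """One stochastic RISFBF step with a geometrically increasing
    mini-batch, mirroring the rate statements in the abstract."""
    n_k = max(1, int(batch0 * growth ** k))  # mini-batch size at iteration k
    z = x + alpha * (x - x_prev)             # inertial extrapolation
    Vz = sample_V(z, n_k)                    # mini-batch estimate at z
    y = resolvent(z - lam * Vz)              # backward (resolvent) step
    Vy = sample_V(y, n_k)                    # fresh mini-batch estimate at y
    x_next = (1.0 - rho) * z + rho * (y + lam * (Vz - Vy))
    return x_next, x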
Jinjian Chen, Yuchao Tang (2021)
Monotone inclusions play an important role in studying various convex minimization problems. In this paper, we propose a forward-partial inverse-half-forward splitting (FPIHFS) algorithm for finding a zero of the sum of a maximally monotone operator, a monotone Lipschitzian operator, a cocoercive operator, and the normal cone of a closed vector subspace. The FPIHFS algorithm is derived by combining the partial inverse method with the forward-backward-half-forward splitting algorithm. As applications, we employ the proposed algorithm to solve several composite monotone inclusion problems, which involve a finite sum of maximally monotone operators and parallel sums of operators. In particular, we obtain a primal-dual splitting algorithm for solving a composite convex minimization problem, which has wide applications in many real-world problems. To verify the efficiency of the proposed algorithm, we apply it to the problem of projecting onto a Minkowski sum of convex sets and to the generalized Heron problem. Numerical results demonstrate the effectiveness of the proposed algorithm.
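
The partial inverse machinery that FPIHFS builds on admits a compact implementation: by Spingarn's classical identity, the resolvent of the partial inverse of T with respect to a closed subspace W requires only the resolvent of T and the projection onto W. A sketch, with illustrative routine names:

def resolvent_partial_inverse(s, resolvent_T, proj_W):
    """Resolvent of the partial inverse T_W of T w.r.t. a closed
    subspace W: if a = J_T(s) and b = s - a (so b lies in T(a)),
    then J_{T_W}(s) = P_W(a) + P_{W_perp}(b)."""
    a = resolvent_T(s)                  # a = J_T(s)
    b = s - a                           # b in T(a)
    return proj_W(a) + (b - proj_W(b))  # P_W(a) + P_{W_perp}(b)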
In this paper we propose a new operator splitting algorithm for distributed Nash equilibrium seeking under stochastic uncertainty, featuring relaxation and inertial effects. Our work is inspired by recent deterministic operator splitting methods designed for solving structured monotone inclusion problems. The algorithm is derived from a forward-backward-forward scheme for structured monotone inclusions featuring a Lipschitz continuous and monotone game operator. To the best of our knowledge, this is the first distributed (generalized) Nash equilibrium seeking algorithm to feature acceleration techniques in stochastic Nash games without assuming cocoercivity. Numerical examples illustrate the effect of inertia and relaxation on the performance of the proposed algorithm.
In infinite-dimensional Hilbert spaces we devise a class of strongly convergent primal-dual schemes for solving variational inequalities defined by a Lipschitz continuous and pseudo-monotone map. Our numerical scheme is based on Tseng's forward-backward-forward scheme, which is known to display only weak convergence unless very strong global monotonicity assumptions are made on the involved operators. We provide a simple augmentation of this algorithm which is computationally cheap and still guarantees strong convergence to a minimal norm solution of the underlying problem. We also give an adaptive extension of the algorithm, freeing us from requiring knowledge of the global Lipschitz constant. We test the performance of the method on the computationally challenging task of finding dynamic user equilibria in traffic networks and verify that our scheme is at least competitive with state-of-the-art solvers, and in some cases even improves upon them.
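
The adaptive extension dispenses with the global Lipschitz constant. A standard self-adaptive step-size rule of the kind used in such schemes is sketched below for a variational inequality over a set C with projection proj_C; the concrete rule in the paper may differ, and the parameter mu is illustrative.

import numpy as np

def adaptive_tseng_step(x, F, proj_C, lam, mu=0.9):
    """One Tseng forward-backward-forward step with a self-adaptive
    step size: lam is shrunk whenever the local Lipschitz estimate
    ||F(x) - F(y)|| / ||x - y|| indicates the step is too long."""
    y = proj_C(x - lam * F(x))      # forward-backward step with current lam
    Fx, Fy = F(x), F(y)
    x_next = y + lam * (Fx - Fy)    # Tseng's forward correction
    denom = np.linalg.norm(Fx - Fy)
    lam_next = min(lam, mu * np.linalg.norm(x - y) / denom) if denom > 0 else lam
    return x_next, lam_next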
Yaron Shoham, Gal Elidan (2021)
Despite seminal advances in reinforcement learning (RL) in recent years, many domains where the rewards are sparse, e.g., given only at task completion, remain quite challenging. In such cases, it can be beneficial to tackle the task both from its beginning and its end, and make the two ends meet. Existing approaches that do so, however, are not effective in the common scenario where the strategy needed near the end goal differs substantially from the one that is effective earlier on. In this work we propose a novel RL approach for such settings. In short, we first train a backward-looking agent with a simple relaxed goal, and then augment the state representation of the forward-looking agent with straightforward hint features. This allows the learned forward agent to leverage information from backward plans without mimicking their policy. We demonstrate the efficacy of our approach on the challenging game of Sokoban, where we substantially surpass learned solvers that generalize across levels and are competitive with the SOTA performance of the best highly-crafted systems. Impressively, we achieve these results while learning from a small number of practice levels and using simple RL techniques.
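
In code, the key mechanism reduces to concatenating hint features derived from a backward agent's plan onto the forward agent's observation. The sketch below is purely illustrative: the plan representation (a set of grid positions) and the feature choices are hypothetical, not the paper's implementation.

import numpy as np

def augment_with_hints(obs, backward_plan, agent_pos):
    """Append simple hint features from a (non-empty) backward plan to the
    forward agent's observation, so the forward policy can exploit the plan
    without imitating it. Feature choices here are illustrative only."""
    # Is the agent currently on a state visited by the backward plan?
    on_plan = float(tuple(agent_pos) in backward_plan)
    # Distance from the agent to the nearest state of the backward plan.
    dist = min(np.linalg.norm(np.asarray(agent_pos) - np.asarray(p))
               for p in backward_plan)
    hints = np.array([on_plan, dist], dtype=np.float32)
    return np.concatenate([np.asarray(obs, dtype=np.float32).ravel(), hints])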
