The alternating direction method of multipliers (ADMM) is a popular method for solving convex separable minimization problems with linear equality constraints. The generalization of the two-block ADMM to the three-block ADMM is not trivial, since the three-block ADMM does not converge in general. Many variants of the three-block ADMM with guaranteed convergence have been developed. Besides the ADMM, the alternating minimization algorithm (AMA) is also an important algorithm for solving convex separable minimization problems with linear equality constraints. The AMA was first proposed by Tseng, and it is equivalent to the forward-backward splitting algorithm applied to the corresponding dual problem. In this paper, we design a variant of the three-block AMA, derived by applying an inertial extension of the three-operator splitting algorithm to the dual problem. Compared with the three-block ADMM, the first subproblem of the proposed algorithm only minimizes the Lagrangian function. As a by-product, we obtain a relaxed version of the algorithm of Davis and Yin. Under mild conditions on the parameters, we establish the convergence of the proposed algorithm in infinite-dimensional Hilbert spaces. Finally, we conduct numerical experiments on the stable principal component pursuit (SPCP) problem to verify the efficiency and effectiveness of the proposed algorithm.
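For orientation, the following is a minimal sketch of the classical Davis and Yin three-operator splitting iteration that the relaxed/inertial variant above builds on, applied to a toy problem of the form min f(x) + g(x) + h(x) with h smooth. The problem data, step size, and relaxation value are assumptions for illustration, not the paper's setting.

```python
import numpy as np

# Sketch of the classical Davis-Yin three-operator splitting iteration.
# Toy instance (assumed): f = indicator of the nonnegative orthant,
# g = lam * ||x||_1, h = 0.5 * ||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.1

prox_f = lambda x, s: np.maximum(x, 0.0)                              # projection onto x >= 0
prox_g = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - s * lam, 0)  # soft thresholding
grad_h = lambda x: A.T @ (A @ x - b)                                   # gradient of the smooth part

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2/L, L = ||A||^2
relax = 1.0                               # relaxation parameter (assumed)

z = np.zeros(10)
for _ in range(500):
    x_g = prox_g(z, gamma)
    x_f = prox_f(2 * x_g - z - gamma * grad_h(x_g), gamma)
    z = z + relax * (x_f - x_g)           # governing-sequence update
```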
We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator. This work aims to extend Tseng's forward-backward-forward method by using both inertial effects and relaxation parameters. We first formulate a second-order dynamical system which approaches the solution set of the monotone inclusion problem to be solved, and provide an asymptotic analysis of its trajectories. For RIFBF, which follows by explicit time discretization, we provide a convergence analysis in the general monotone case as well as when applied to solving pseudo-monotone variational inequalities. We illustrate the proposed method by applications to a bilinear saddle point problem, in the context of which we also emphasize the interplay between the inertial and the relaxation parameters, and to the training of Generative Adversarial Networks (GANs).
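A hedged sketch of the iteration pattern described above (Tseng's forward-backward-forward step augmented with inertial extrapolation and relaxation), applied to a toy bilinear saddle point problem. The step size, inertia, and relaxation values below are illustrative assumptions, not the parameter conditions established in the paper.

```python
import numpy as np

# Bilinear saddle point min_u max_v u^T M v over boxes; its optimality system
# is a monotone, Lipschitz operator F plus a normal cone (resolvent = projection).
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
F = lambda z: np.concatenate([M @ z[20:], -M.T @ z[:20]])   # skew, hence monotone
proj = lambda z: np.clip(z, -1.0, 1.0)                       # projection onto the box

L = np.linalg.norm(M, 2)                 # Lipschitz constant of F
gamma, alpha, rho = 0.9 / L, 0.1, 0.8    # step size, inertia, relaxation (assumed)

z = rng.standard_normal(40)
z_prev = z.copy()
for _ in range(1000):
    w = z + alpha * (z - z_prev)             # inertial extrapolation
    y = proj(w - gamma * F(w))               # forward-backward step
    t = y + gamma * (F(w) - F(y))            # Tseng correction step
    z_prev, z = z, (1 - rho) * w + rho * t   # relaxed update
```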
In this paper, we propose a new operator splitting algorithm for distributed Nash equilibrium seeking under stochastic uncertainty, featuring relaxation and inertial effects. Our work is inspired by recent deterministic operator splitting methods designed for structured monotone inclusion problems. The algorithm is derived from a forward-backward-forward scheme for such inclusions, featuring a Lipschitz continuous and monotone game operator. To the best of our knowledge, this is the first distributed (generalized) Nash equilibrium seeking algorithm featuring acceleration techniques in stochastic Nash games without assuming cocoercivity. Numerical examples illustrate the effect of inertia and relaxation on the performance of our proposed algorithm.
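As an illustration of the objects named above, here is a minimal sketch of the pseudo-gradient "game operator" of a two-player quadratic game and a mini-batch stochastic estimate of it. The cost functions, coupling, and batch size are assumptions made for this example and are not taken from the paper.

```python
import numpy as np

# Two-player quadratic game: player i minimizes
# 0.5 x_i^T Q_i x_i + coupling with the other player + a sampled disturbance xi.
rng = np.random.default_rng(2)
Q1, Q2 = 2.0 * np.eye(3), 3.0 * np.eye(3)   # players' own curvature (positive definite)
C = 0.1 * rng.standard_normal((3, 3))        # weak coupling between the players

def game_operator(x1, x2, xi):
    """Stacked partial gradients of the players' sampled costs (pseudo-gradient)."""
    g1 = Q1 @ x1 + C @ x2 + xi      # d/dx1 of player 1's sampled cost
    g2 = Q2 @ x2 + C.T @ x1 - xi    # d/dx2 of player 2's sampled cost
    return np.concatenate([g1, g2])

# One stochastic forward evaluation via mini-batch sample averaging.
x1, x2 = np.zeros(3), np.zeros(3)
batch = [0.05 * rng.standard_normal(3) for _ in range(16)]
F_hat = np.mean([game_operator(x1, x2, xi) for xi in batch], axis=0)
```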
In this paper, we consider a class of nonsmooth nonconvex optimization problems whose objective is the sum of a block relative smooth function and a proper and lower semicontinuous block separable function. Although the analysis of block proximal gradient (BPG) methods for the class of block $L$-smooth functions has been successfully extended to Bregman BPG methods that deal with the class of block relative smooth functions, accelerated Bregman BPG methods are scarce and challenging to design. Taking our inspiration from Nesterov-type acceleration and the majorization-minimization scheme, we propose a block alternating Bregman Majorization-Minimization framework with Extrapolation (BMME). We prove subsequential convergence of BMME to a first-order stationary point under mild assumptions, and study its global convergence under stronger conditions. We illustrate the effectiveness of BMME on the penalized orthogonal nonnegative matrix factorization problem.
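The following is a minimal sketch of a proximal-gradient step with Nesterov-type extrapolation, i.e. the single-block Euclidean special case of the Bregman majorization-minimization step with extrapolation that BMME generalizes. The toy data, the extrapolation schedule, and the omission of the Bregman kernel and of the block-alternating structure are all simplifications for illustration.

```python
import numpy as np

# Toy instance (assumed): min 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 15))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the smooth part

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(15)
x_prev = x.copy()
for k in range(1, 300):
    beta = (k - 1) / (k + 2)                      # extrapolation weight (assumed schedule)
    y = x + beta * (x - x_prev)                   # extrapolated point
    grad = A.T @ (A @ y - b)                      # gradient of the smooth term at y
    x_prev, x = x, soft(y - grad / L, lam / L)    # majorization-minimization (prox) step
```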
In this paper, we develop a convergence analysis of a modified line search method for objective functions whose value is computed with noise and whose gradient estimates are inexact and possibly random. The noise is assumed to be bounded in absolute value without any additional assumptions. We extend the framework based on stochastic methods from [Cartis and Scheinberg, 2018], which was developed to provide analysis of a standard line search method with exact function values and random gradients, to the case of noisy functions. We introduce two alternative conditions on the gradient which, when satisfied with sufficiently large probability at each iteration, guarantee convergence properties of the line search method. We derive expected complexity bounds to reach a near-optimal neighborhood for convex, strongly convex and nonconvex functions. The exact dependence of the convergence neighborhood on the noise is specified.
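An illustrative noise-tolerant backtracking line search is sketched below: the sufficient-decrease test is relaxed by the bound eps_f on the absolute noise in the function values, so that noise alone cannot reject a good step. This is a sketch in the spirit of the framework described above, not the paper's exact conditions; the objective, noise model, and constants are assumptions.

```python
import numpy as np

def noisy_f(x, rng, eps_f=1e-3):
    return 0.5 * np.dot(x, x) + eps_f * rng.uniform(-1, 1)   # true value plus bounded noise

def inexact_grad(x, rng, sigma=1e-2):
    return x + sigma * rng.standard_normal(x.shape)          # true gradient plus error

rng = np.random.default_rng(4)
x = np.ones(5)
eps_f, c1 = 1e-3, 1e-4

for _ in range(50):
    g = inexact_grad(x, rng)
    d = -g                                 # descent direction from the inexact gradient
    alpha, fx = 1.0, noisy_f(x, rng)
    # Backtrack until the noise-relaxed Armijo condition holds.
    while noisy_f(x + alpha * d, rng) > fx + c1 * alpha * g @ d + 2 * eps_f:
        alpha *= 0.5
        if alpha < 1e-12:
            break
    x = x + alpha * d
```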
In this paper, we consider a class of nonconvex problems with linear constraints appearing frequently in the area of image processing. We solve this problem by the penalty method and propose an iteratively reweighted alternating minimization algorithm. To speed up the algorithm, we also apply a continuation strategy to the penalty parameter. A convergence result is proved for the algorithm. Compared with the nonconvex ADMM, the proposed algorithm enjoys both theoretical and computational advantages, such as weaker convergence requirements and faster speed. Numerical results demonstrate the efficiency of the proposed algorithm.
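A hedged sketch of the ingredients named above: a nonconvex $\ell_p$ ($0 < p < 1$) regularized problem handled by a quadratic penalty on the linear constraint $Ax = b$, iterative reweighting of the nonconvex term, and continuation (gradual increase) of the penalty parameter. The toy data, smoothing constant, and schedules are assumptions, not the paper's setting.

```python
import numpy as np

# Penalized surrogate of: min sum_i |x_i|^p  subject to  Ax = b.
rng = np.random.default_rng(5)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
p, eps, beta = 0.5, 1e-3, 1.0

x = np.zeros(50)
for outer in range(10):
    for inner in range(20):
        # Reweighting: majorize |x_i|^p by a weighted quadratic at the current iterate.
        w = p * (x ** 2 + eps) ** (p / 2 - 1)
        # Minimization step: solve min sum_i w_i x_i^2 + (beta/2) * ||Ax - b||^2.
        x = np.linalg.solve(np.diag(2 * w) + beta * A.T @ A, beta * A.T @ b)
    beta *= 2.0        # continuation: tighten the penalty on Ax = b
```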