
Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

Added by Chaobing Song
Publication date: 2021
Language: English





The optimization problems associated with training generative adversarial neural networks can be largely reduced to certain \emph{non-monotone} variational inequality problems (VIPs), whereas existing convergence results are mostly based on monotone or strongly monotone assumptions. In this paper, we propose \emph{optimistic dual extrapolation (OptDE)}, a method that performs only \emph{one} gradient evaluation per iteration. We show that OptDE is provably convergent to \emph{a strong solution} under different coherent non-monotone assumptions. In particular, when a \emph{weak solution} exists, the convergence rate of our method is $O(1/\epsilon^{2})$, which matches the best existing result for methods with two gradient evaluations. Further, when a \emph{$\sigma$-weak solution} exists, the convergence guarantee improves to the linear rate $O(\log\frac{1}{\epsilon})$. Along the way, as a byproduct of our inquiries into non-monotone variational inequalities, we provide the near-optimal $O\big(\frac{1}{\epsilon}\log\frac{1}{\epsilon}\big)$ convergence guarantee in terms of the restricted strong merit function for monotone variational inequalities. We also show how our results naturally generalize to the stochastic setting and obtain the corresponding new convergence results. Taken together, our results contribute to the broad landscape of variational inequalities, both non-monotone and monotone alike, by providing a novel and more practical algorithm with state-of-the-art convergence guarantees.
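
The practical appeal of the single-call property is best seen against extragradient-type methods, which need two operator evaluations per iteration. Below is a minimal illustrative sketch of a single-call optimistic update of the form $z_{t+1} = z_t - \eta\,(2F(z_t) - F(z_{t-1}))$ on an unconstrained bilinear saddle-point problem; the operator, step size, and test problem are assumptions chosen for illustration, and the sketch omits the dual extrapolation sequence that distinguishes the full OptDE method.

```python
import numpy as np

# Illustrative single-call optimistic update in the spirit of OptDE:
#   z_{t+1} = z_t - eta * (2*F(z_t) - F(z_{t-1}))
# NOTE: a sketch only; the full OptDE method additionally maintains a
# dual extrapolation sequence, which is omitted here.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))     # illustrative bilinear game min_x max_y x^T A y

def F(z):
    """VI operator of the bilinear saddle problem."""
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

eta = 0.1 / np.linalg.norm(A, 2)    # conservative step size (assumption)
z = rng.standard_normal(2 * n)
g_prev = F(z)                       # one warm-up evaluation

for t in range(2000):
    g = F(z)                        # the only gradient evaluation this iteration
    z = z - eta * (2.0 * g - g_prev)
    g_prev = g

print("residual ||F(z)||:", np.linalg.norm(F(z)))
```
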



Related Research

Cong D. Dang, Guanghui Lan (2013)
In this paper, we study a class of generalized monotone variational inequality (GMVI) problems whose operators are not necessarily monotone (e.g., pseudo-monotone). We present non-Euclidean extragradient (N-EG) methods for computing approximate strong solutions of these problems, and demonstrate how their iteration complexities depend on the global Lipschitz or Hölder continuity properties of their operators and the smoothness properties of the distance generating function used in the N-EG algorithms. We also introduce a variant of this algorithm that incorporates a simple line-search procedure to deal with problems with more general continuous operators. Numerical studies are conducted to illustrate the significant advantages of the developed algorithms over existing ones for solving large-scale GMVI problems.
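
For reference, with the Euclidean distance generating function $\frac{1}{2}\|x\|^2$, the N-EG scheme reduces to the classical Euclidean extragradient method. The sketch below shows that special case on a box-constrained affine VI; the operator, step size, and feasible set are assumptions for illustration, and the line-search variant is not reproduced.

```python
import numpy as np

# Euclidean extragradient: the N-EG scheme with the distance generating
# function 0.5*||x||^2, using two operator evaluations per iteration.

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
S = M - M.T + 0.1 * np.eye(n)            # monotone affine operator (assumption)
b = rng.standard_normal(n)

F = lambda x: S @ x + b
proj = lambda x: np.clip(x, -1.0, 1.0)   # projection onto the box [-1, 1]^n

eta = 0.5 / np.linalg.norm(S, 2)         # step size below 1/L
x = np.zeros(n)

for t in range(3000):
    y = proj(x - eta * F(x))             # extrapolation step (first evaluation)
    x = proj(x - eta * F(y))             # update step (second evaluation)

print("natural residual:", np.linalg.norm(x - proj(x - F(x))))
```
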
We provide improved convergence rates for constrained convex-concave min-max problems and monotone variational inequalities with higher-order smoothness. In min-max settings where the $p^{\text{th}}$-order derivatives are Lipschitz continuous, we give an algorithm HigherOrderMirrorProx that achieves an iteration complexity of $O(1/T^{\frac{p+1}{2}})$ when given access to an oracle for finding a fixed point of a $p^{\text{th}}$-order equation. We give analogous rates for the weak monotone variational inequality problem. For $p>2$, our results improve upon the iteration complexity of the first-order Mirror Prox method of Nemirovski [2004] and the second-order method of Monteiro and Svaiter [2012]. We further instantiate our entire algorithm in the unconstrained $p=2$ case.
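
The first-order ($p=1$) baseline being improved upon here is Nemirovski's Mirror Prox. As a point of reference, the following sketch runs first-order Mirror Prox with the entropy mirror map on a matrix game over the simplex; the game and step size are illustrative assumptions, not part of the higher-order algorithm above.

```python
import numpy as np

# First-order Mirror Prox (Nemirovski [2004]) with the entropy mirror map
# on the simplex, applied to an illustrative matrix game min_x max_y x^T A y.

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))

def prox(p, g, eta):
    """Entropic prox step: argmin_q <eta*g, q> + KL(q || p) over the simplex."""
    q = p * np.exp(-eta * g)
    return q / q.sum()

x = np.full(n, 1.0 / n)                  # min player's mixed strategy
y = np.full(n, 1.0 / n)                  # max player's mixed strategy
xs, ys = np.zeros(n), np.zeros(n)
eta = 0.1 / np.abs(A).max()              # step size for the l1/l_inf geometry
T = 3000

for t in range(T):
    xm = prox(x, A @ y, eta)             # extrapolation with current gradients
    ym = prox(y, -A.T @ x, eta)
    x = prox(x, A @ ym, eta)             # update with extrapolated gradients
    y = prox(y, -A.T @ xm, eta)
    xs += xm                             # the O(1/T) gap guarantee holds for
    ys += ym                             # the averaged midpoints

xs /= T
ys /= T
print("duality gap:", (A.T @ xs).max() - (A @ ys).min())
```
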
In this paper, we propose a unifying framework incorporating several momentum-related search directions for solving strongly monotone variational inequalities. The specific combinations of the search directions in the framework are made to guarantee the optimal iteration complexity bound of $\mathcal{O}\left(\kappa\ln(1/\epsilon)\right)$ to reach an $\epsilon$-solution, where $\kappa$ is the condition number. This framework provides the flexibility for algorithm designers to train, among different parameter combinations, the one that best suits the structure of the problem class at hand. The proposed framework includes the following iterative points and directions as its constituents: the extra-gradient, the optimistic gradient descent ascent (OGDA) direction (aka optimism), the heavy-ball direction, and Nesterov's extrapolation points. As a result, all the aforementioned methods become special cases under the general scheme of extra points. We also specialize this approach to strongly convex minimization, and show that a similar extra-point approach achieves the optimal iteration complexity bound of $\mathcal{O}(\sqrt{\kappa}\ln(1/\epsilon))$ for this class of problems.
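
To make the "extra-point" idea concrete, the schematic sketch below mixes the listed directions (an extra-gradient trial point, an optimistic correction, and a heavy-ball extrapolation) with freely chosen weights on a strongly monotone affine operator; the weights and test problem are illustrative assumptions, not the tuned combinations analyzed in the paper.

```python
import numpy as np

# Schematic "extra-point" update mixing the directions listed above with
# free weights; NOT the paper's analyzed parameter combinations.

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
S = M - M.T + np.eye(n)                  # strongly monotone affine operator (assumption)
F = lambda x: S @ x

eta = 0.2 / np.linalg.norm(S, 2)
a_og, a_hb = 0.5, 0.1                    # optimism and heavy-ball weights (assumptions)
x = rng.standard_normal(n)
x_prev = x.copy()
g_prev = F(x)

for t in range(2000):
    x_ext = x + a_hb * (x - x_prev)      # heavy-ball / extrapolation point
    y = x_ext - eta * F(x_ext)           # extra-gradient trial point
    g = F(y)
    d = g + a_og * (g - g_prev)          # extra-gradient plus optimistic correction
    x_prev, x = x, x_ext - eta * d
    g_prev = g

print("||F(x)||:", np.linalg.norm(F(x)))  # the unique solution is x = 0
```
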
In this paper we propose a primal-dual homotopy method for $\ell_1$-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints, and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a corresponding piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point, and then, by solving another small linear program, a new optimal dual solution is calculated, which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies.
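
To fix ideas, a single point on such a homotopy path can be computed directly as one (larger) linear program, which is exactly what the path-following method avoids solving repeatedly. The sketch below solves one instance of $\min \|x\|_1$ subject to $\|Ax-b\|_\infty \le \delta$ with scipy.optimize.linprog; the constraint form, data, and bound $\delta$ are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# One point on the homotopy path, solved directly as a single LP:
#   min ||x||_1  subject to  ||A x - b||_inf <= delta.
# The paper instead traces the whole piecewise-linear path in the bound
# via small auxiliary LPs; constraint form and data here are assumptions.

rng = np.random.default_rng(3)
m, n = 10, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[:3] = [1.5, -2.0, 0.7]            # sparse ground truth (assumption)
b = A @ x_true
delta = 0.05

# Variables z = [x, u] with |x_i| <= u_i; minimize sum(u) = ||x||_1.
c = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
Z = np.zeros((m, n))
A_ub = np.block([[ I, -I],               #  x - u <= 0
                 [-I, -I],               # -x - u <= 0
                 [ A,  Z],               #  A x <= b + delta
                 [-A,  Z]])              # -A x <= delta - b
b_ub = np.concatenate([np.zeros(2 * n), b + delta, delta - b])
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_hat = res.x[:n]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```
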
In this article, we study algorithms for nonnegative matrix factorization (NMF) in various applications involving streaming data. Exploiting the continual nature of the data, we develop a fast two-stage algorithm for highly efficient and accurate NMF. In the first stage, an alternating nonnegative least squares (ANLS) framework is used, in combination with an active-set method with a warm-start strategy for the solution of the subproblems. In the second stage, an interior point method is adopted to accelerate the local convergence. The convergence of the proposed algorithm is proved. The new algorithm is compared with existing algorithms in benchmark tests using both real-world and synthetic data. The results demonstrate the advantage of our algorithm in finding high-precision solutions.
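
A bare-bones version of the first (ANLS) stage looks as follows; the data and iteration counts are illustrative assumptions, and the active-set warm start and the interior-point second stage of the paper are omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Bare-bones first (ANLS) stage: alternate exact nonnegative least-squares
# solves for H and W.  The paper's algorithm adds an active-set warm start
# and an interior-point second stage, both omitted in this sketch.

rng = np.random.default_rng(4)
m, n, r = 30, 40, 5
V = np.abs(rng.standard_normal((m, n)))  # nonnegative data matrix (assumption)
W = np.abs(rng.standard_normal((m, r)))
H = np.abs(rng.standard_normal((r, n)))

for it in range(20):
    # Fix W, solve min_{H >= 0} ||V - W H||_F column by column.
    H = np.column_stack([nnls(W, V[:, j])[0] for j in range(n)])
    # Fix H, solve min_{W >= 0} ||V - W H||_F row by row via transposition.
    W = np.column_stack([nnls(H.T, V[i, :])[0] for i in range(m)]).T

print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```
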
