Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems


Abstract

We propose an efficient algorithm for finding first-order Nash equilibria in min-max problems of the form $\min_{x \in X} \max_{y \in Y} F(x,y)$, where the objective function is smooth in both variables and concave with respect to $y$; the sets $X$ and $Y$ are convex and projection-friendly, and $Y$ is compact. Our goal is to find an $(\varepsilon_x, \varepsilon_y)$-first-order Nash equilibrium with respect to a stationarity criterion that is stronger than the commonly used proximal gradient norm. The proposed approach is fairly simple: we perform approximate proximal-point iterations on the primal function, with the inexact oracle provided by Nesterov's algorithm run on the regularized function $F(x_t, \cdot)$, where $x_t$ is the current primal iterate. The resulting iteration complexity is $O(\varepsilon_x^{-2} \varepsilon_y^{-1/2})$ up to a logarithmic factor. As a byproduct, the choice $\varepsilon_y = O(\varepsilon_x^{2})$ allows for the $O(\varepsilon_x^{-3})$ complexity of finding an $\varepsilon_x$-stationary point for the standard Moreau envelope of the primal function. Moreover, when the objective is strongly concave with respect to $y$, the complexity estimate for our algorithm improves to $O(\varepsilon_x^{-2} \kappa_y^{1/2})$ up to a logarithmic factor, where $\kappa_y$ is the condition number appropriately adjusted for coupling. In both scenarios, the complexity estimates are the best known so far, and were previously available only for the (weaker) proximal gradient norm criterion. Meanwhile, our approach is user-friendly: (i) the algorithm is built upon running a variant of Nesterov's accelerated algorithm as a subroutine and avoids extragradient steps; (ii) the convergence analysis recycles the well-known results on accelerated methods with inexact oracle. Finally, we extend the approach to non-Euclidean proximal geometries.
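To make the structure of the scheme concrete, below is a minimal, hypothetical Python sketch of the two-level loop the abstract describes: an inner run of Nesterov's accelerated method on a regularized maximization in $y$, whose output supplies an inexact gradient for an outer update in $x$. All names (nesterov_max, inexact_prox_point, grad_x, grad_y, proj_X, proj_Y) are illustrative, not from the paper, and the outer update is simplified to a single projected gradient step, whereas the paper performs approximate proximal-point iterations with parameters dictated by its complexity analysis.

```python
import numpy as np

def nesterov_max(grad, y0, L, mu, proj_Y, iters):
    """Accelerated gradient ascent for an L-smooth, mu-strongly concave
    function over Y (standard Nesterov momentum for the strongly
    convex/concave regime); returns an approximate maximizer."""
    y, z = y0.copy(), y0.copy()
    q = mu / L
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))  # momentum coefficient
    for _ in range(iters):
        y_next = proj_Y(z + grad(z) / L)        # projected ascent step
        z = y_next + beta * (y_next - y)        # extrapolation
        y = y_next
    return y

def inexact_prox_point(grad_x, grad_y, x0, y0, proj_X, proj_Y,
                       L, eps_y, outer_iters, inner_iters):
    """Hypothetical sketch of the outer loop on the primal function
    phi(x) = max_y F(x, y). The inner problem is regularized by
    -(eps_y / 2) * ||y||^2 so that it is eps_y-strongly concave and
    Nesterov's method converges linearly; the resulting approximate
    maximizer y yields an inexact gradient grad_x F(x, y) for the
    x-update."""
    x, y = x0.copy(), y0.copy()
    for _ in range(outer_iters):
        # Inner solver: approximately maximize F(x, .) - (eps_y/2)||.||^2.
        g = lambda yy: grad_y(x, yy) - eps_y * yy
        y = nesterov_max(g, y, L + eps_y, eps_y, proj_Y, inner_iters)
        # Simplified outer step; the paper's method instead solves a
        # proximal subproblem in x to prescribed accuracy.
        x = proj_X(x - grad_x(x, y) / (2 * L))
    return x, y
```

In this reading, the trade-off in the abstract's $O(\varepsilon_x^{-2} \varepsilon_y^{-1/2})$ bound comes from balancing the number of outer iterations (driven by $\varepsilon_x$) against the inner iteration count, which grows as the regularization $\varepsilon_y$ shrinks; the concrete step sizes and stopping rules are where the paper's inexact-oracle analysis enters.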
