In this work, we introduce ADAPD, $\textbf{A}$ $\textbf{D}$ecentr$\textbf{A}$lized $\textbf{P}$rimal-$\textbf{D}$ual algorithmic framework for solving non-convex, smooth consensus optimization problems over a network of distributed agents. ADAPD uses an inexact ADMM-type update: during each iteration, each agent first inexactly solves a local strongly convex subproblem and then performs a neighbor communication while updating a set of dual variables. Two variants of ADAPD are presented, which allow agents to balance the communication and computation workload as they collaboratively solve the consensus optimization problem. The optimal convergence rate for non-convex, smooth consensus optimization problems is established: ADAPD achieves $\varepsilon$-stationarity in $\mathcal{O}(\varepsilon^{-1})$ iterations. Numerical experiments demonstrate the superiority of ADAPD over several existing decentralized methods.
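To make the update pattern concrete, here is a minimal Python sketch of one ADAPD-style round, assuming a gradient-descent inner solver, a penalty parameter $\rho$, and a doubly stochastic mixing matrix $W$; these are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def adapd_style_round(x, duals, grad, W, rho=1.0, inner_steps=10, lr=0.1):
    """One illustrative round: x, duals are (n_agents, d) arrays; grad(i, z)
    returns agent i's local gradient at z; W is a doubly stochastic mixing
    matrix encoding the communication graph."""
    n, d = x.shape
    x_new = x.copy()
    # 1) Each agent inexactly solves a local strongly convex subproblem,
    #    here via a few gradient steps on
    #    f_i(z) + <dual_i, z> + (rho/2)||z - x_i||^2.
    for i in range(n):
        z = x[i].copy()
        for _ in range(inner_steps):
            z -= lr * (grad(i, z) + duals[i] + rho * (z - x[i]))
        x_new[i] = z
    # 2) One neighbor communication: mix local iterates through W.
    x_mix = W @ x_new
    # 3) Dual update driven by the consensus violation.
    duals = duals + rho * (x_new - x_mix)
    return x_mix, duals
```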
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) method equipped with the powerball technique for acceleration. We show that the proposed algorithm achieves the linear-speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments, training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset, and comparing against state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
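For context, the powerball technique replaces a (stochastic) gradient $g$ elementwise by $\mathrm{sign}(g)\,|g|^{\gamma}$ with $\gamma \in (0,1)$. A minimal sketch of a local step follows, where $\gamma = 0.6$ and the plain SGD wrapper are illustrative assumptions:

```python
import numpy as np

def powerball(g, gamma=0.6):
    # Elementwise powerball transform sign(g) * |g|^gamma;
    # gamma = 1 recovers the untransformed gradient.
    return np.sign(g) * np.abs(g) ** gamma

def local_powerball_sgd_step(w, stoch_grad, lr=0.01, gamma=0.6):
    # One local step with the powerball-transformed stochastic gradient;
    # in the distributed algorithm, each node would follow a step like this
    # with a communication round.
    return w - lr * powerball(stoch_grad(w), gamma)
```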
We introduce a novel primal-dual flow for affinely constrained convex optimization problems. As a modification of the standard saddle-point system, our primal-dual flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. A class of primal-dual methods for the original optimization problem is then obtained from numerical discretizations of the continuous flow, and nonergodic convergence rates are established via a unified discrete Lyapunov function. Among these algorithms, we recover the (linearized) augmented Lagrangian method and the quadratic penalty method with a continuation technique. We also propose new methods whose inner problem is either a symmetric positive definite linear system or a nonlinear equation that can be solved efficiently via the semi-smooth Newton method. In particular, numerical tests on linearly constrained $l_1$-$l_2$ minimization show that our method outperforms the accelerated linearized Bregman method.
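For reference, the standard saddle-point flow for $\min_x f(x)$ subject to $Ax = b$, which the proposed flow modifies (the modification itself is not reproduced here), reads
\[
\dot{x}(t) = -\nabla f\bigl(x(t)\bigr) - A^{\top}\lambda(t),
\qquad
\dot{\lambda}(t) = A x(t) - b,
\]
and an explicit Euler discretization of this system yields Arrow-Hurwicz-type primal-dual iterations.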
This paper considers a general convex constrained problem setting in which the functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
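As a rough illustration of the method studied here, the following dual-averaging sketch uses the Euclidean prox $d(x) = \|x\|^2/2$, unit weights, and $\beta_k = \sqrt{k+1}$; all three are illustrative choices rather than the paper's analysis.

```python
import numpy as np

def dual_averaging(subgrad, proj, dim, T=1000):
    # Dual averaging: x_{k+1} = argmin_{x in X} <s_k, x> + beta_k ||x||^2 / 2,
    # where s_k accumulates subgradients; with the Euclidean prox this is the
    # projection of -s_k / beta_k onto the feasible set X.
    x = np.zeros(dim)
    s = np.zeros(dim)
    avg = np.zeros(dim)
    for k in range(T):
        s += subgrad(x)                  # accumulate (unit-weighted) subgradients
        x = proj(-s / np.sqrt(k + 1))    # beta_k = sqrt(k + 1)
        avg += (x - avg) / (k + 1)       # running average of the iterates
    return avg
```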
We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all considered classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. By using a mini-batching technique, we show that the proposed methods with stochastic oracles can additionally be parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
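The parallelization claim rests on mini-batching: each node averages a batch of independent stochastic gradients, and the batch members can be computed in parallel. A minimal sketch, with the batch size as an arbitrary illustration:

```python
import numpy as np

def minibatch_oracle(grad_sample, w, batch=32, rng=None):
    # Average `batch` independent stochastic gradients; the calls are
    # independent, so a node can issue them in parallel, and the variance of
    # the averaged estimate shrinks by a factor of `batch`.
    rng = rng if rng is not None else np.random.default_rng()
    return np.mean([grad_sample(w, rng) for _ in range(batch)], axis=0)
```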
We propose a new majorization-minimization (MM) method for non-smooth and non-convex programs that is general enough to include the existing MM methods. Besides the local majorization condition, we only require that the difference between the directional derivatives of the objective function and its surrogate function vanish as the number of iterations approaches infinity, which is a very weak condition. Our method can therefore use a surrogate function that directly approximates the non-smooth objective function; in comparison, all existing MM methods construct the surrogate by approximating the smooth component of the objective. We apply our relaxed MM methods to the robust matrix factorization (RMF) problem with different regularizations, where our locally majorant algorithm shows advantages over state-of-the-art approaches for RMF. This is the first algorithm for RMF that ensures, without extra assumptions, that any limit point of the iterates is a stationary point.
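As a concrete instance of the MM principle (the classical smooth-majorizer version, not the paper's relaxed variant for RMF), minimizing $\|Ax - b\|_1$ with the quadratic majorizer $|t| \le t^2/(2|t_k|) + |t_k|/2$, which is tight at $t_k$, yields iteratively reweighted least squares:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    # MM for min_x ||Ax - b||_1: each step minimizes the quadratic majorizer
    # built at the current iterate, i.e. solves the weighted least-squares
    # system A^T W A x = A^T W b with weights w_i = 1 / max(|r_i|, eps).
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)
        Aw = A * w[:, None]              # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```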