
Decentralized and Parallel Primal and Dual Accelerated Methods for Stochastic Convex Programming Problems

Posted by: Darina Dvinskikh
Published: 2019
Language: English

We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using the mini-batching technique, we show that the proposed methods with a stochastic oracle can be additionally parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
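As a rough illustration of the parallelization point, the sketch below (Python, with illustrative names such as `stochastic_grad` and `batch_size`; not the authors' code) averages independent stochastic gradient samples into a mini-batch estimate, which is the kind of oracle call that can be distributed across workers at each node.

```python
import numpy as np

def minibatch_gradient(stochastic_grad, x, batch_size, rng):
    """Average batch_size independent stochastic gradient samples at x.

    Each sample is an unbiased estimate of the true gradient; averaging
    cuts the variance by a factor of batch_size, and because the samples
    are independent they can be computed in parallel at each node.
    """
    samples = [stochastic_grad(x, rng) for _ in range(batch_size)]
    return np.mean(samples, axis=0)

# Toy usage: a noisy gradient oracle for f(x) = 0.5 * ||x||^2.
rng = np.random.default_rng(0)
noisy_grad = lambda x, rng: x + rng.normal(scale=0.1, size=x.shape)
g = minibatch_gradient(noisy_grad, np.ones(5), batch_size=64, rng=rng)
```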


Read also

This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm equipped with the powerball method to accelerate convergence. We show that the proposed algorithm achieves the linear speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments, training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset, and compare with state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
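For context on the acceleration mechanism: the powerball method transforms each gradient coordinate as sign(g)|g|^gamma with gamma in (0, 1) before taking the step. Below is a minimal single-node sketch of that transform applied to SGD; it is not the paper's distributed primal-dual variant, and all names are illustrative.

```python
import numpy as np

def powerball(g, gamma=0.5):
    """Elementwise powerball transform: sign(g) * |g|**gamma, 0 < gamma < 1."""
    return np.sign(g) * np.abs(g) ** gamma

def powerball_sgd_step(x, stochastic_grad, lr=0.01, gamma=0.5):
    """One SGD step taken along the powerball-transformed stochastic gradient."""
    return x - lr * powerball(stochastic_grad(x), gamma)
```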
This paper considers a general convex constrained problem setting where the functions are not assumed to be differentiable or Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
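A minimal sketch of weighted dual averaging in this spirit, assuming a Euclidean proximal setup over a ball; the equal weights and sqrt(k) proximal schedule here are common textbook choices, not necessarily the paper's:

```python
import numpy as np

def dual_averaging(subgrad, x0, radius, steps):
    """Weighted dual averaging over the Euclidean ball ||x|| <= radius.

    Keeps a running (here, equally weighted) sum z of subgradients and sets
        x_{k+1} = argmin_{||x|| <= radius} <z, x> + (beta_k / 2) * ||x||^2,
    whose closed form is the projection of -z / beta_k onto the ball.
    """
    x, z = x0.copy(), np.zeros_like(x0)
    for k in range(1, steps + 1):
        z += subgrad(x)
        beta = np.sqrt(k)            # a standard O(sqrt(k)) proximal schedule
        y = -z / beta
        norm = np.linalg.norm(y)
        x = y if norm <= radius else y * (radius / norm)
    return x

# Toy usage: minimize the nonsmooth f(x) = ||x - a||_1 over a ball.
a = np.array([0.5, -0.3, 0.8])
x_star = dual_averaging(lambda x: np.sign(x - a), np.zeros(3), radius=1.0, steps=2000)
```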
In this work, we introduce ADAPD, $\textbf{A}$ $\textbf{D}$ecentr$\textbf{A}$lized $\textbf{P}$rimal-$\textbf{D}$ual algorithmic framework for solving non-convex and smooth consensus optimization problems over a network of distributed agents. ADAPD makes use of an inexact ADMM-type update. During each iteration, each agent first inexactly solves a local strongly convex subproblem and then performs a neighbor communication while updating a set of dual variables. Two variations of ADAPD are presented. The variants allow agents to balance the communication and computation workload while they collaboratively solve the consensus optimization problem. The optimal convergence rate for non-convex and smooth consensus optimization problems is established; namely, ADAPD achieves $\varepsilon$-stationarity in $\mathcal{O}(\varepsilon^{-1})$ iterations. Numerical experiments demonstrate the superiority of ADAPD over several existing decentralized methods.
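The per-iteration pattern described above can be sketched schematically as follows. This is a plain-gradient approximation with an illustrative mixing matrix W, penalty rho, and inner loop, not the authors' exact inexact-ADMM updates:

```python
import numpy as np

def adapd_style_step(X, Y, grads, W, rho=1.0, lr=0.1, inner_iters=10):
    """One schematic iteration: each agent inexactly solves its local
    strongly convex subproblem by a few gradient steps, then performs a
    neighbor communication (mixing by W) and a dual update.

    X, Y : (n_agents, dim) arrays of primal and dual variables.
    W    : doubly stochastic mixing matrix of the network.
    grads: list of callables, grads[i](x) = gradient of agent i's objective.
    """
    X_new = X.copy()
    for i in range(len(grads)):
        x = X[i].copy()
        for _ in range(inner_iters):   # inexact local solve
            x -= lr * (grads[i](x) + Y[i] + rho * (x - X[i]))
        X_new[i] = x
    X_mixed = W @ X_new                  # neighbor communication
    Y_new = Y + rho * (X_new - X_mixed)  # dual variables push toward consensus
    return X_mixed, Y_new
```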
We study constrained stochastic programs where the decision vector at each time slot cannot be chosen freely but is tied to the realization of an underlying random state vector. The goal is to minimize a general objective function subject to linear constraints. A typical scenario where such programs appear is opportunistic scheduling over a network of time-varying channels, where the random state vector is the channel state observed, and the control vector is the transmission decision which depends on the current channel state. We consider a primal-dual type Frank-Wolfe algorithm that has a low complexity update during each slot and that learns to make efficient decisions without prior knowledge of the probability distribution of the random state vector. We establish convergence time guarantees for the case of both convex and non-convex objective functions. We also emphasize application of the algorithm to non-convex opportunistic scheduling and distributed non-convex stochastic optimization over a connected graph.
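A rough per-slot sketch in this spirit, assuming linear constraints A x <= b, a penalty weight V, and a user-supplied linear minimization oracle `lmo` (all illustrative; the paper's exact updates and state-dependence are not reproduced):

```python
import numpy as np

def primal_dual_fw_slot(x_avg, Q, grad, A, b, lmo, step, V=10.0):
    """One time-slot update in the spirit of a primal-dual Frank-Wolfe
    scheme for min E[f(x)] s.t. A x <= b: linearize the objective plus
    the dual penalty, call a cheap linear minimization oracle, average
    the iterate, and take a projected dual ascent step."""
    g = V * grad(x_avg) + A.T @ Q        # penalized linearization
    x = lmo(g)                           # argmin_{x in X} <g, x>: low-complexity update
    x_avg = (1.0 - step) * x_avg + step * x
    Q = np.maximum(Q + A @ x - b, 0.0)   # virtual-queue style dual update
    return x_avg, Q
```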
Shengjie Xu, 2021
The augmented Lagrangian method (ALM) is a fundamental tool for solving the canonical convex minimization problem with linear constraints, and implementing the original ALM efficiently and easily is of significant importance. Recently, He and Yuan have proposed a balanced version of ALM [B.S. He and X.M. Yuan, arXiv:2108.08554, 2021], which reshapes the original ALM by balancing its subproblems and makes the benchmark ALM easier to implement without any additional conditions. In practice, the balanced ALM updates the new iterate in a primal-dual order. In this note, exploiting the variational inequality structure of the most recent balanced ALM, we propose a dual-primal version of the balanced ALM for linearly constrained convex minimization problems. The proposed method generates the new iterate in a dual-primal order and enjoys the same computational difficulty as the original primal-dual balanced ALM. Furthermore, under the lens of the proximal point algorithm, we conduct the convergence analysis of the newly introduced method in the context of variational inequalities. Numerical tests on the basis pursuit problem demonstrate that the introduced method enjoys the same high efficiency as the prototype balanced ALM.
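To fix ideas, here is a minimal sketch of an ALM iteration carried out in the dual-then-primal order on a toy quadratic objective with linear equality constraints; the balanced subproblems of He and Yuan's method are not reproduced here:

```python
import numpy as np

def dual_primal_alm(A, b, c, beta=1.0, iters=200):
    """Dual-then-primal ALM sketch for min 0.5*||x - c||^2  s.t.  A x = b.

    Each iteration first updates the multiplier with the current residual,
    then solves the quadratic augmented-Lagrangian subproblem
        x = argmin 0.5*||x - c||^2 + <lam, A x - b> + (beta/2)*||A x - b||^2
    in closed form via a linear system.
    """
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    H = np.eye(n) + beta * A.T @ A       # subproblem Hessian (fixed)
    for _ in range(iters):
        lam = lam + beta * (A @ x - b)   # dual step comes first
        x = np.linalg.solve(H, c - A.T @ lam + beta * A.T @ b)
    return x, lam
```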