
On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers

Posted by: Ermin Wei
Publication date: 2013
Paper language: English





We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with O(1/sqrt(k)) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate O(1/k).
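
For reference, the general formulation described above can be written compactly as follows. This is only a sketch of the problem class; the symbols $f_i$, $x_i$, $D_i$ and $d$ are chosen here for illustration and are not the paper's notation.

$$\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{N} D_i x_i = d,$$

where $f_i$ is the local objective known only to agent $i$ and the matrices $D_i$ encode the linear coupling between the agents' decision variables. An $O(1/k)$ rate of this kind is typically stated for a suitably averaged iterate: the objective error and constraint violation after $k$ iterations are bounded by a constant divided by $k$.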




Read also

In this work, we consider the asynchronous distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (directed graph or digraph). First, we reformulate the optimization problem so that the Alternating Direction Method of Multipliers (ADMM) can be utilized. Then, we propose an algorithm, herein called Asynchronous Approximate Distributed Alternating Direction Method of Multipliers (AsyAD-ADMM), using finite-time asynchronous approximate ratio consensus, to solve the multi-node convex optimization problem, in which every node performs iterative computations and exchanges information with its neighbors asynchronously. More specifically, at every iteration of AsyAD-ADMM, each node solves a local convex optimization problem for one of the primal variables and utilizes a finite-time asynchronous approximate consensus protocol to obtain a value of the other variable that is close to the optimal value, since the cost function for the second primal variable is not decomposable. If the individual cost functions are convex but not necessarily differentiable, the proposed algorithm converges at a rate of $\mathcal{O}(1/k)$, where $k$ is the iteration counter. The efficacy of AsyAD-ADMM is exemplified via a proof-of-concept distributed least-squares optimization problem in which different performance-influencing factors are investigated.
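
To make the structure of such node updates concrete, below is a minimal Python sketch of a generic consensus-form ADMM iteration on a toy problem with quadratic local costs. All names, the quadratic costs, and the synchronous loop are illustrative assumptions; in particular, the exact averaging used for the coupling step merely stands in for the finite-time asynchronous approximate ratio consensus that AsyAD-ADMM employs, and the sketch is not the paper's algorithm.

import numpy as np

# Illustrative setup: N nodes, scalar local costs f_i(x) = 0.5 * (x - a[i])**2.
# The global problem min_x sum_i f_i(x) is rewritten as
#   min sum_i f_i(x_i)  s.t.  x_i = z for all i,
# so the x_i-updates decompose across nodes while the z-update does not.
rng = np.random.default_rng(0)
N, rho, iters = 5, 1.0, 50
a = rng.normal(size=N)          # private data defining each local cost
x = np.zeros(N)                 # local primal variables (one per node)
u = np.zeros(N)                 # scaled dual variables (one per node)
z = 0.0                         # coupled primal variable

for k in range(iters):
    # Local step: each node solves argmin_x f_i(x) + (rho/2)(x - z + u_i)^2,
    # available in closed form for the quadratic costs assumed here.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Coupling step: the z-update is an average over all nodes and is not
    # decomposable; here it is computed exactly, whereas AsyAD-ADMM obtains an
    # approximation of it via finite-time asynchronous approximate ratio consensus.
    z = np.mean(x + u)
    # Dual step: each node updates its own multiplier locally.
    u = u + x - z

print("consensus value:", z, " optimum:", np.mean(a))

The point of the sketch is the split: the first primal update decomposes across nodes, while the second (the average) is the non-decomposable step that a consensus protocol must supply.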
Quantization of the parameters of machine learning models, such as deep neural networks, requires solving constrained optimization problems, where the constraint set is formed by the Cartesian product of many simple discrete sets. For such optimization problems, we study the performance of the Alternating Direction Method of Multipliers for Quantization ($\texttt{ADMM-Q}$) algorithm, which is a variant of the widely-used ADMM method applied to our discrete optimization problem. We establish the convergence of the iterates of $\texttt{ADMM-Q}$ to certain $\textit{stationary points}$. To the best of our knowledge, this is the first analysis of an ADMM-type method for problems with discrete variables/constraints. Based on our theoretical insights, we develop a few variants of $\texttt{ADMM-Q}$ that can handle inexact update rules and have improved performance via the use of soft projection and injecting randomness into the algorithm. We empirically evaluate the efficacy of our proposed approaches.
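
As a rough illustration of the kind of splitting involved, the following Python sketch applies an ADMM-style iteration to a toy quadratic objective with an element-wise projection onto a finite quantization grid. The objective, the grid, the update order, and all variable names are assumptions made here for illustration; this is not a faithful reproduction of the $\texttt{ADMM-Q}$ updates analyzed in the paper.

import numpy as np

# Illustrative problem: min f(x) subject to each entry of x lying on a small
# grid of allowed quantization levels, with f(x) = 0.5 * ||x - w||^2 for some
# target vector w (e.g., pretrained weights). The constraint is split off via
# a copy z of x:   min f(x) + I_Q(z)  s.t.  x = z.
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # assumed quantization grid
rng = np.random.default_rng(1)
w = rng.normal(size=8)
rho, iters = 1.0, 100
x = w.copy()
z = np.zeros_like(w)
u = np.zeros_like(w)

def project_to_levels(v, levels):
    # Element-wise projection onto the discrete grid (nearest allowed level).
    return levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]

for k in range(iters):
    # x-step: continuous proximal problem, closed form for the quadratic f.
    x = (w + rho * (z - u)) / (1.0 + rho)
    # z-step: projection onto the discrete constraint set (the quantization).
    z = project_to_levels(x + u, levels)
    # Dual step.
    u = u + x - z

print("quantized solution:", z)
print("nearest-level baseline:", project_to_levels(w, levels))

Because the z-step projects onto a discrete (hence nonconvex) set, the usual convexity-based ADMM arguments do not apply, which is what makes a convergence analysis to stationary points nontrivial.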
In this work, we consider the distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (directed graph or digraph). First, we reformulate the optimization problem so that the Alternating Direction Method of Multipliers (ADMM) can be utilized. Then, we propose an algorithm, herein called Distributed Alternating Direction Method of Multipliers using Finite-Time Exact Ratio Consensus (D-ADMM-FTERC), to solve the multi-node convex optimization problem, in which every node performs iterative computations and exchanges information with its neighbors. At every iteration of D-ADMM-FTERC, each node solves a local convex optimization problem for one of the primal variables and utilizes a finite-time exact consensus protocol to obtain the optimal value of the other variable, since the cost function for the second primal variable is not decomposable. Since D-ADMM-FTERC requires knowledge of an upper bound on the number of nodes in the network, we furthermore propose a new algorithm, called the Fully D-ADMM Finite-Time Distributed Termination (FD-ADMM-FTDT) algorithm, which does not need any global information. If the individual cost functions are convex but not necessarily differentiable, the proposed algorithms converge at a rate of O(1/k), where k is the iteration counter. Additionally, if the global objective function is strongly convex and smooth, the proposed algorithms have an approximate R-linear convergence rate. The efficacy of FD-ADMM-FTDT is demonstrated via a distributed L1-regularized logistic regression optimization example. Additionally, comparisons with other state-of-the-art algorithms are provided on large-scale networks, showing the superior precision and time-efficient performance of FD-ADMM-FTDT.
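
For reference, the R-linear rate mentioned above has the following standard meaning (a textbook definition, not taken from the paper): a sequence $\{x^k\}$ converges R-linearly to $x^\star$ if

$$\|x^k - x^\star\| \le C\, q^k \quad \text{for some constants } C > 0 \text{ and } q \in (0,1),$$

i.e., the error is dominated by a geometrically decaying sequence.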
The alternating direction method of multipliers (ADMM) is one of the most widely used first-order optimisation methods in the literature owing to its simplicity, flexibility and efficiency. Over the years, numerous efforts have been made to improve the performance of the method, such as the inertial technique. By studying the geometric properties of ADMM, we discuss the limitations of current inertial accelerated ADMM and then present and analyze an adaptive acceleration scheme for the method. Numerical experiments on problems arising from image processing, statistics and machine learning demonstrate the advantages of the proposed acceleration approach.
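
In its generic form, the inertial technique mentioned above extrapolates along the previous direction of travel before applying the ADMM update. Writing one full ADMM sweep as a fixed-point map $\mathcal{F}$ acting on the iterate $z^k$, a sketch of the idea (the generic inertial step, not the paper's specific adaptive rule) is

$$\bar{z}^{k} = z^{k} + a_k\,(z^{k} - z^{k-1}), \qquad z^{k+1} = \mathcal{F}(\bar{z}^{k}),$$

where $a_k \ge 0$ is the inertial parameter. An adaptive scheme of the kind described above chooses $a_k$ from the observed behaviour of the iterates rather than fixing it in advance.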
In this paper, we aim to provide a comprehensive analysis of the linear rate of convergence of the alternating direction method of multipliers (ADMM) for solving linearly constrained convex composite optimization problems. Under a certain error bound condition, we establish the global linear rate of convergence for a more general semi-proximal ADMM with the dual steplength restricted to the open interval $(0, (1+\sqrt{5})/2)$. In our analysis, we assume neither strong convexity nor strict complementarity, only an error bound condition, which holds automatically for convex composite quadratic programming. This semi-proximal ADMM, which includes the classic ADMM, not only has the advantage of resolving the potential non-solvability of the subproblems in the classic ADMM but also possesses the ability to handle multi-block convex optimization problems efficiently. We use convex composite quadratic programming and quadratic semi-definite programming as important applications to demonstrate the significance of the obtained results. Of independent interest in second-order variational analysis, a complete characterization is provided of the isolated calmness of the nonlinear convex semi-definite optimization problem in terms of its second-order sufficient optimality condition and the strict Robinson constraint qualification, for the purpose of proving the linear rate of convergence of the semi-proximal ADMM when applied to two- and multi-block convex quadratic semi-definite programming.
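
For context, a semi-proximal ADMM for $\min_{x,y}\{\,f(x)+g(y) : Ax + By = c\,\}$ takes the following generic form (standard notation used for illustration; the paper's precise setting and assumptions may differ). With augmented Lagrangian $\mathcal{L}_\sigma(x,y;\lambda) = f(x) + g(y) + \langle \lambda, Ax + By - c\rangle + \tfrac{\sigma}{2}\|Ax + By - c\|^2$ and positive semidefinite matrices $S$ and $T$:

$$x^{k+1} = \arg\min_x\ \mathcal{L}_\sigma(x, y^k; \lambda^k) + \tfrac{1}{2}\|x - x^k\|_S^2,$$
$$y^{k+1} = \arg\min_y\ \mathcal{L}_\sigma(x^{k+1}, y; \lambda^k) + \tfrac{1}{2}\|y - y^k\|_T^2,$$
$$\lambda^{k+1} = \lambda^k + \tau\sigma\,(Ax^{k+1} + By^{k+1} - c),$$

where $\tau$ is the dual steplength restricted, as stated above, to $(0, (1+\sqrt{5})/2)$. Choosing $S = T = 0$ recovers the classic ADMM, while positive semidefinite $S$ and $T$ can be used to make otherwise difficult subproblems tractable.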