In this work, we first consider distributed convex constrained optimization problems where the objective function is encoded by multiple local and possibly nonsmooth objectives privately held by a group of agents, and propose a distributed subgradient method with double averaging (abbreviated as ${\rm DSA_2}$) that requires only peer-to-peer communication and local computation to solve the global problem. The algorithmic framework builds on dual methods and dynamic average consensus; the sequence of test points is formed by iteratively minimizing a local dual model of the overall objective whose coefficients, i.e., approximate subgradients of the objective, are supplied by the dynamic average consensus scheme. We theoretically show that ${\rm DSA_2}$ enjoys non-ergodic convergence, i.e., the local minimizing sequence itself is convergent, a distinct feature not found in existing results. Specifically, we establish a convergence rate of $O(\frac{1}{\sqrt{t}})$ in terms of the objective function error. Then, extensions are made to tackle distributed optimization problems with coupled functional constraints by combining ${\rm DSA_2}$ and dual decomposition. This is made possible by Lagrangian relaxation, which transforms the coupling in the constraints of the primal problem into coupling in the cost functions of the dual, thus allowing us to solve the dual problem via ${\rm DSA_2}$. Both the dual objective error and the quadratic penalty for the coupled constraint are proved to converge at a rate of $O(\frac{1}{\sqrt{t}})$, and the primal objective error asymptotically vanishes. Numerical experiments and comparisons are conducted to illustrate the advantages of the proposed algorithms and validate our theoretical findings.
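The mechanism described above, iteratively minimizing a local dual model whose coefficients are tracked subgradients, is a distributed form of dual averaging. Below is a minimal Python sketch in that spirit; it is not the authors' exact ${\rm DSA_2}$, and the `subgrads` interface, the doubly stochastic mixing matrix `W`, and the unit-ball constraint set are illustrative assumptions.

```python
import numpy as np

def project_ball(v, radius=1.0):
    # Euclidean projection onto a ball, standing in for the constraint set X.
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

def dsa2_sketch(subgrads, W, dim, T=500):
    """Sketch of distributed subgradient iterations with double averaging.

    subgrads : list of callables; subgrads[i](x) returns a subgradient of
               agent i's local objective at x (hypothetical interface).
    W        : doubly stochastic mixing matrix of the communication graph.
    """
    m = len(subgrads)
    x = np.zeros((m, dim))                       # local test points
    g = np.array([sg(x[i]) for i, sg in enumerate(subgrads)])
    s = g.copy()                                 # tracked average of local subgradients
    z = np.zeros((m, dim))                       # accumulated dual variables
    for t in range(1, T + 1):
        z = W @ z + s                            # first averaging: mix and accumulate
        alpha = 1.0 / np.sqrt(t)                 # diminishing step, matching the O(1/sqrt(t)) rate
        # Second averaging: each agent minimizes its local dual model
        #   <z_i, x> + ||x||^2 / (2 * alpha)  over X, i.e., projects -alpha * z_i onto X.
        x_new = np.array([project_ball(-alpha * z[i]) for i in range(m)])
        g_new = np.array([sg(x_new[i]) for i, sg in enumerate(subgrads)])
        s = W @ s + (g_new - g)                  # dynamic average consensus update
        x, g = x_new, g_new
    return x
```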
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case where the exact subgradients of the …
In this paper, we aim to solve a distributed optimization problem with coupling constraints in a multi-agent network based on the proximal gradient method, where the cost function of the agents is composed of smooth and possibly non-smooth parts. To solve …
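As a point of reference for the method mentioned above, here is a minimal centralized proximal gradient sketch for a smooth-plus-non-smooth composite; the LASSO instance and all data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, a common non-smooth part.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, T=1000):
    # Iterates x+ = prox_{step * g}(x - step * grad_f(x));
    # take step <= 1/L when grad_f is L-Lipschitz.
    x = x0.copy()
    for _ in range(T):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: LASSO with f(x) = 0.5 * ||Ax - b||^2 and g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad f
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          lambda v, s: soft_threshold(v, s * lam),
                          np.zeros(5), 1.0 / L)
```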
This work develops effective distributed strategies for the solution of constrained multi-agent stochastic optimization problems with coupled parameters across the agents. In this formulation, each agent is influenced by only a subset of the entries …
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need for aggregation of all information at a central location. In this paper, we study distributed dual subgradient methods …
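Abstracting away the power-system specifics, a dual subgradient scheme of the kind referenced above can be sketched as follows; the `local_solvers` interface and the constraint data are hypothetical.

```python
import numpy as np

def dual_subgradient(local_solvers, As, c, T=500):
    """Dual subgradient method for min sum_i f_i(x_i) s.t. sum_i A_i x_i <= c.

    local_solvers : local_solvers[i](lmbda) returns
                    argmin_{x_i} f_i(x_i) + lmbda @ (A_i @ x_i)  (hypothetical interface).
    """
    lmbda = np.zeros(c.shape)
    for t in range(1, T + 1):
        xs = [solve(lmbda) for solve in local_solvers]
        # A subgradient of the dual function is the coupled-constraint residual.
        residual = sum(A @ x for A, x in zip(As, xs)) - c
        lmbda = np.maximum(lmbda + residual / np.sqrt(t), 0.0)   # projected dual ascent
    return lmbda, xs
```

Each agent's subproblem decouples once the multiplier lmbda is fixed, which is what makes the architecture distributed: only the residual of the coupling constraint needs to be aggregated to update lmbda.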
This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents. This scenario arises in many machine learning and engineering …