We consider minimizing a sum of non-smooth objective functions with set constraints in a distributed manner. For this problem, we propose, for the first time, a distributed algorithm with an exponential convergence rate. Using the exact penalty method, we equivalently reformulate the problem as a standard distributed one without consensus constraints. We then design a distributed projected subgradient algorithm with the help of differential inclusions, and show that it converges to the optimal solution exponentially for strongly convex objective functions.
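As a rough illustration of the exact-penalty idea, the sketch below runs a discrete-time projected subgradient method on a penalized reformulation, in which consensus is enforced through a non-smooth penalty on neighbor disagreements rather than as a constraint. The ring graph, box constraint set, local costs, penalty weight, and step sizes are all illustrative assumptions, and the continuous-time differential-inclusion analysis of the abstract is not reproduced here.

```python
import numpy as np

# Hypothetical setup: n agents on a ring, each with a strongly convex
# non-smooth local cost f_i(x) = 0.5*||x - a_i||^2 + ||x||_1 and a box
# constraint X = [-1, 1]^d. Exact penalty: consensus is encouraged by
# rho * sum_{edges} ||x_i - x_j||_1 instead of a consensus constraint.

rng = np.random.default_rng(0)
n, d = 8, 3
a = rng.normal(size=(n, d))            # local data defining f_i
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph
rho, alpha0 = 2.0, 0.5                 # penalty weight, step-size scale

def subgrad_f(i, x):
    """A subgradient of f_i(x) = 0.5*||x - a_i||^2 + ||x||_1."""
    return (x - a[i]) + np.sign(x)

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box constraint set X."""
    return np.clip(x, lo, hi)

x = rng.normal(size=(n, d))            # each agent's local copy
for k in range(1, 2001):
    alpha = alpha0 / k                 # diminishing step size
    g = np.empty_like(x)
    for i in range(n):
        # subgradient of the penalized objective at agent i
        penalty = rho * sum(np.sign(x[i] - x[j]) for j in neighbors[i])
        g[i] = subgrad_f(i, x[i]) + penalty
    x = project_box(x - alpha * g)     # projected subgradient step

print("disagreement:", np.max(np.abs(x - x.mean(axis=0))))
```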
To solve distributed optimization problems efficiently under various constraints and with non-smooth objective functions, we propose a distributed mirror descent algorithm with embedded Bregman damping, as a generalization of conventional distributed projection-based algorithms.
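For context, here is a minimal sketch of the conventional distributed mirror descent baseline that such algorithms generalize, on the probability simplex with the negative-entropy mirror map (so the Bregman divergence is the KL divergence and the projection becomes a multiplicative update). The Bregman-damping mechanism itself is not reproduced; the mixing weights, linear local costs, and step size are illustrative assumptions.

```python
import numpy as np

# Distributed mirror descent on the simplex: agents average in the mirror
# (dual) space, then take an entropic mirror step on their local gradient.

rng = np.random.default_rng(1)
n, d = 6, 5
C = rng.normal(size=(n, d))                    # agent i minimizes f_i(x) = C[i] @ x
W = np.zeros((n, n))
for i in range(n):                             # doubly stochastic ring weights
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = np.full((n, d), 1.0 / d)                   # start at the simplex center
alpha = 0.1
for k in range(500):
    z = W @ np.log(x)                          # consensus in the mirror space
    g = C                                      # gradients of the linear costs
    x = np.exp(z - alpha * g)                  # mirror step ...
    x /= x.sum(axis=1, keepdims=True)          # ... and KL projection to simplex

print("consensus error:", np.abs(x - x.mean(axis=0)).max())
print("objective:", np.sum(C * x.mean(axis=0)))
```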
Considering a constrained stochastic optimization problem over a time-varying random network, where the agents collectively minimize a sum of objective functions subject to a common constraint set, we investigate the asymptotic properties of a distributed algorithm.
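Since the abstract is cut off before the algorithm is named, the following is only a generic sketch of the kind of update such asymptotic analyses typically study, not the paper's specific scheme: agents gossip over a randomly activated link, take a noisy gradient step on their local cost, and project onto a common constraint set. The ball constraint, quadratic costs, noise level, and gossip weights are assumptions for illustration.

```python
import numpy as np

# Projected stochastic gradient with consensus over a random network.

rng = np.random.default_rng(2)
n, d = 10, 4
a = rng.normal(size=(n, d))                    # f_i(x) = 0.5*||x - a_i||^2

def random_mixing_matrix():
    """Doubly stochastic gossip weights: one random link is activated."""
    W = np.eye(n)
    i, j = rng.choice(n, size=2, replace=False)
    W[i, i] = W[j, j] = 0.5
    W[i, j] = W[j, i] = 0.5
    return W

def project_ball(x, r=2.0):
    """Euclidean projection onto the common constraint set (a ball)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

x = rng.normal(size=(n, d))
for k in range(1, 3001):
    alpha = 1.0 / k                            # diminishing step size
    x = random_mixing_matrix() @ x             # random-network consensus step
    g = (x - a) + 0.1 * rng.normal(size=(n, d))  # noisy local gradients
    x = np.array([project_ball(x[i] - alpha * g[i]) for i in range(n)])

print("spread:", np.abs(x - x.mean(axis=0)).max())
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - a.mean(axis=0)))
```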
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) method equipped with the Powerball technique for acceleration. We show that the …
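The Powerball idea is to apply the elementwise map g -> sign(g) * |g|**gamma, with 0 < gamma < 1, to the stochastic gradient before the update. The sketch below grafts this transform onto a plain decentralized SGD loop rather than the paper's primal-dual construction; the ring weights, least-squares data, gamma, and step size are illustrative assumptions.

```python
import numpy as np

# Decentralized SGD with the Powerball gradient transform.

rng = np.random.default_rng(3)
n, d, gamma = 6, 4, 0.6
A = rng.normal(size=(n, 20, d))
b = rng.normal(size=(n, 20))                   # f_i: least squares on (A_i, b_i)

W = np.zeros((n, n))
for i in range(n):                             # ring mixing weights
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def powerball(g):
    """Elementwise sign(g)*|g|^gamma transform of the gradient."""
    return np.sign(g) * np.abs(g) ** gamma

x = np.zeros((n, d))
alpha = 0.05
for k in range(2000):
    idx = rng.integers(0, 20, size=n)          # one random sample per agent
    g = np.stack([
        (A[i, idx[i]] @ x[i] - b[i, idx[i]]) * A[i, idx[i]] for i in range(n)
    ])
    x = W @ x - alpha * powerball(g)           # consensus + Powerball-SGD step

print("consensus gap:", np.abs(x - x.mean(axis=0)).max())
```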
Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal algorithms …
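As a point of reference for the gradient-based proximal methods the abstract mentions, here is a minimal decentralized proximal-gradient sketch for a composite objective with an l1 regularizer: mix with neighbors, descend on the smooth local part, then apply soft-thresholding (the prox of the l1 norm). The data, weights, and parameters are illustrative, and this is a generic baseline, not the paper's algorithm.

```python
import numpy as np

# Decentralized proximal gradient for sum_i f_i(x) + lam*||x||_1.

rng = np.random.default_rng(4)
n, d, lam, alpha = 6, 8, 0.05, 0.1
A = rng.normal(size=(n, 12, d))
b = rng.normal(size=(n, 12))                   # smooth part: local least squares

W = np.zeros((n, n))
for i in range(n):                             # ring mixing weights
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def soft_threshold(v, t):
    """prox of t*||.||_1: shrink each coordinate toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n, d))
for k in range(1500):
    g = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n)])
    x = soft_threshold(W @ x - alpha * g, alpha * lam)   # prox-gradient step

print("nonzeros in consensus iterate:",
      int(np.count_nonzero(np.abs(x.mean(axis=0)) > 1e-6)))
```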
We investigate a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global constraints.