In this paper, we revisit a well-known distributed projected subgradient algorithm that minimizes a sum of cost functions subject to a common set constraint. In contrast to most existing results, the weight matrices of the time-varying multi-agent network are assumed to be more general: they are only required to be row stochastic rather than doubly stochastic. We analyze the convergence properties of this algorithm over general graphs. We first show that there generally exists a graph sequence for which the algorithm fails to converge when the network switches freely among finitely many general graphs. To guarantee convergence under every uniformly jointly strongly connected sequence of general graphs, we then provide a necessary and sufficient condition: the intersection of the optimal solution sets of all local optimization problems is nonempty. Furthermore, we find, surprisingly, that the algorithm converges for any periodically switching sequence of general graphs, and the limit minimizes a weighted sum of the local cost functions, where the weights depend on the Perron vectors of certain products of the weight matrices of the underlying periodically switching graphs. Finally, we consider the slightly broader class of quasi-periodically switching graph sequences and show that the algorithm converges for every quasi-periodic graph sequence if and only if the network switches between only two graphs.
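The abstract does not give the iteration explicitly, but the classical distributed projected subgradient update it revisits has the form x_i(k+1) = P_X(Σ_j a_ij(k) x_j(k) − α_k g_i(k)). The following is a minimal sketch on an assumed toy problem (quadratic local costs, box constraint, one fixed row-stochastic weight matrix); it also illustrates the abstract's point that with merely row-stochastic weights the agents agree on a minimizer of a Perron-weighted sum of the local costs, not necessarily the unweighted sum.

```python
import numpy as np

# Hypothetical toy instance: agent i minimizes f_i(x) = (x - c_i)^2 over the
# common constraint set X = [0, 1], with a fixed row-stochastic weight matrix A.
np.random.seed(0)
n = 4
c = np.array([0.1, 0.3, 0.6, 0.9])        # local unconstrained optima
A = np.random.rand(n, n)
A /= A.sum(axis=1, keepdims=True)         # row stochastic only, not doubly stochastic

def project(x):
    """Projection onto X = [0, 1]."""
    return np.clip(x, 0.0, 1.0)

x = np.random.rand(n)                     # initial local estimates
for k in range(1, 5001):
    alpha = 1.0 / k                       # diminishing step size
    grad = 2.0 * (x - c)                  # subgradient of each f_i at x_i
    x = project(A @ x - alpha * grad)     # consensus step, subgradient step, projection

# The agents reach consensus on a point in X; with row-stochastic A the limit
# minimizes a weighted sum of the f_i, the weights tied to the Perron vector of A.
print(x)
```

This is only a fixed-graph sketch; the paper's analysis concerns time-varying weight matrices A(k) drawn from switching graph sequences.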
We investigate a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and glob
A stochastic incremental subgradient algorithm for the minimization of a sum of convex functions is introduced. The method sequentially uses partial subgradient information, and the sequence of partial subgradients is determined by a general Markov chain
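The abstract stops short of the update rule, but an incremental subgradient method with Markov-chain component selection can be sketched as follows on an assumed toy problem: minimize f(x) = Σ_i |x − c_i|, where at each step the component whose subgradient is used is chosen by a Markov chain over the component indices rather than cyclically or i.i.d.

```python
import numpy as np

# Illustrative sketch (assumed toy problem, not the paper's setting):
# minimize f(x) = sum_i |x - c_i| by incremental subgradient steps,
# with the active component index driven by a Markov chain.
np.random.seed(1)
c = np.array([-1.0, 0.0, 2.0, 3.0])            # component "targets"
m = len(c)
P = np.full((m, m), 0.1) + 0.6 * np.eye(m)     # lazy transition matrix
P /= P.sum(axis=1, keepdims=True)              # rows sum to one

x, i = 0.0, 0
for k in range(1, 20001):
    g = np.sign(x - c[i])                      # subgradient of |x - c_i|
    x -= (1.0 / k) * g                         # incremental step, diminishing size
    i = np.random.choice(m, p=P[i])            # next component via the Markov chain
print(x)
```

Here the chain's stationary distribution is uniform, so the effective objective is the unweighted sum; a non-uniform stationary distribution would weight the components accordingly.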
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need for aggregation of all information at a central location. In this paper, we study distributed dual subgradient methods
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of th
Dual decomposition is widely utilized in distributed optimization of multi-agent systems. In practice, it is desirable for the dual decomposition algorithm to admit an asynchronous implementation due to imperfect communication, such as time delays and packet drops
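For context, the basic (synchronous) dual decomposition scheme that such asynchronous variants build on can be sketched on an assumed toy problem: each agent minimizes its local cost given the current dual variable, and a dual subgradient step then updates the price using the coupling-constraint residual.

```python
import numpy as np

# Illustrative sketch (not the paper's asynchronous algorithm):
# min sum_i (x_i - c_i)^2  subject to the coupling constraint sum_i x_i = b.
c = np.array([1.0, 2.0, 3.0])
b = 3.0
lam = 0.0                          # dual variable ("price") for the constraint
for _ in range(200):
    # Each agent minimizes (x_i - c_i)^2 + lam * x_i locally => x_i = c_i - lam/2.
    x = c - lam / 2.0
    # Dual ascent: step along the subgradient of the dual, i.e. the residual.
    lam += 0.5 * (x.sum() - b)
print(x, lam)
```

For this quadratic instance the dual update is a contraction, so lam converges to the optimal multiplier (here lam = 2, giving x = [0, 1, 2]); asynchrony, delays, and packet drops perturb exactly this price-exchange step.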