In this paper we consider a distributed convex optimization problem over time-varying undirected networks. We propose a dual method, primarily averaged network dual ascent (PANDA), which is proven to converge R-linearly to the optimal point provided that the agents' objective functions are strongly convex and have Lipschitz continuous gradients. Like dual decomposition, PANDA requires half the number of variable exchanges per iteration compared to methods based on DIGing, and can deliver improved practical performance, as demonstrated empirically.
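For orientation, here is a minimal sketch of the dual-ascent template such a method instantiates; the notation ($f_i$, $A_k$, $\lambda$, $\alpha$) is illustrative and not taken from the paper. The consensus problem and its dual updates read
\[
\min_{x_1,\dots,x_n} \ \sum_{i=1}^{n} f_i(x_i) \quad \text{s.t.} \quad A_k x = 0,
\]
\[
x^{t+1} = \arg\min_{x} \Big\{ \sum_{i=1}^{n} f_i(x_i) + (\lambda^{t})^{\top} A_k x \Big\}, \qquad \lambda^{t+1} = \lambda^{t} + \alpha A_k x^{t+1},
\]
where $A_k$ encodes the edges of the time-varying graph at time $k$, so that $A_k x = 0$ enforces consensus, and $\alpha > 0$ is a dual step size. This is a generic template, not the exact PANDA recursion.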
In this paper we consider distributed convex optimization over time-varying undirected graphs. We propose a linearized version of primarily averaged network dual ascent (PANDA) that requires lower computational cost. The proposed method, economic primarily averaged network dual ascent (Eco-PANDA), retains the convergence guarantees of PANDA at reduced per-iteration computational cost.
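As a hedged illustration of what linearization can buy here (again with assumed notation, not the paper's): the exact inner minimization in the dual-ascent template above can be replaced by a single gradient step,
\[
x^{t+1} = x^{t} - \eta \left( \nabla f(x^{t}) + A_k^{\top} \lambda^{t} \right), \qquad \lambda^{t+1} = \lambda^{t} + \alpha A_k x^{t+1},
\]
so each iteration costs one gradient evaluation instead of an inner solve, at the price of an extra step size $\eta > 0$ to tune.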
This paper considers a distributed convex optimization problem over a time-varying multi-agent network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints.
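A minimal sketch of the constraint-coupled formulation described here, with assumed symbols $f_i$, $X_i$, $g_i$:
\[
\min_{x_1,\dots,x_n} \ \sum_{i=1}^{n} f_i(x_i) \quad \text{s.t.} \quad x_i \in X_i \ \ \forall i, \qquad \sum_{i=1}^{n} g_i(x_i) \le 0,
\]
where $X_i$ collects agent $i$'s local constraints and the inequality $\sum_i g_i(x_i) \le 0$ is a global coupling constraint tying all decisions together (e.g., a shared resource budget).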
Decentralized optimization over time-varying graphs has become increasingly common in modern machine learning, with massive data stored on millions of mobile devices, as in federated learning. This paper revisits the widely used accelerated gradient tracking method and extends it to time-varying graphs.
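For context, a standard (non-accelerated) gradient tracking recursion over a time-varying graph, written with assumed notation ($W^t$ mixing matrices, $\eta$ step size), is
\[
x_i^{t+1} = \sum_{j} W_{ij}^{t} x_j^{t} - \eta\, y_i^{t}, \qquad y_i^{t+1} = \sum_{j} W_{ij}^{t} y_j^{t} + \nabla f_i(x_i^{t+1}) - \nabla f_i(x_i^{t}),
\]
where $y_i^{t}$ tracks the network-average gradient. Accelerated variants add momentum on top of this template; this is a generic sketch, not the paper's exact scheme.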