In this paper we consider distributed convex optimization over time-varying undirected graphs. We propose a linearized version of primal averaged network dual ascent (PANDA) that requires a lower computational cost. The proposed method, economic primal averaged network dual ascent (Eco-PANDA), provably converges at an R-linear rate to the optimal point, given that the agents' objective functions are strongly convex and have Lipschitz continuous gradients. The method is therefore competitive, in terms of type of rate, with both DIGing and PANDA. Moreover, it halves the communication cost of methods like DIGing while still converging R-linearly and having the same per-iterate complexity.
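Since the comparison baseline above is DIGing, a minimal NumPy sketch of its standard iteration (a consensus step plus gradient tracking over a time-varying mixing matrix) may help fix ideas. The quadratic local objectives, the random ring topology, and the step size below are illustrative assumptions, not taken from the paper.

import numpy as np

# DIGing sketch: n agents cooperatively minimize sum_i f_i(x),
# with f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative quadratics).
rng = np.random.default_rng(0)
n, d = 5, 3
A = rng.standard_normal((n, 10, d))
b = rng.standard_normal((n, 10))

def grad(i, x):
    """Gradient of the local objective f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

def mixing_matrix():
    """Doubly stochastic matrix for the current (time-varying) graph.
    Here: a random ring with lazy weights, as a simple stand-in."""
    W = np.zeros((n, n))
    perm = rng.permutation(n)
    for k in range(n):
        i, j = perm[k], perm[(k + 1) % n]
        W[i, j] = W[j, i] = 0.25
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

alpha = 0.02                      # step size (illustrative)
x = np.zeros((n, d))              # local primal iterates
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers
g_old = y.copy()

for _ in range(500):
    Wk = mixing_matrix()          # graph changes every iteration
    x = Wk @ x - alpha * y        # consensus step + descent along tracked gradient
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    y = Wk @ y + g_new - g_old    # dynamic average consensus on the gradients
    g_old = g_new

Note that each DIGing iteration communicates both x and y, which is the two-vector exchange that Eco-PANDA's halved communication cost refers to.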
In this paper we consider a distributed convex optimization problem over time-varying networks. We propose a dual method that converges R-linearly to the optimal point given that the agents' objective functions are strongly convex and have Lipschitz continuous gradients.
In this paper we consider a distributed convex optimization problem over time-varying undirected networks. We propose a dual method, primal averaged network dual ascent (PANDA), that is proven to converge R-linearly to the optimal point given that the agents' objective functions are strongly convex and have Lipschitz continuous gradients.
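The abstract does not spell out PANDA's updates; as orientation only, the following is a generic dual ascent on the consensus constraint of min_x sum_i f_i(x_i) subject to Lx = 0 (L a graph Laplacian), not the paper's actual method. The static ring graph and the quadratic objectives admitting a closed-form primal step are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 3
A = rng.standard_normal((n, 10, d))
b = rng.standard_normal((n, 10))
H = np.array([A[i].T @ A[i] for i in range(n)])  # Hessians of f_i(x) = 0.5*||A_i x - b_i||^2
c = np.array([A[i].T @ b[i] for i in range(n)])  # local linear terms

# Static ring-graph Laplacian as a stand-in (PANDA itself handles time-varying graphs).
Lap = 2.0 * np.eye(n)
for i in range(n):
    Lap[i, (i + 1) % n] -= 1.0
    Lap[i, (i - 1) % n] -= 1.0

alpha = 0.1                        # dual step size (illustrative)
lam = np.zeros((n, d))             # one dual vector per node
for _ in range(2000):
    pressure = Lap @ lam           # needs only neighbors' dual vectors
    # Primal step: each agent minimizes f_i(x_i) + <(Lap lam)_i, x_i> in closed form.
    x = np.array([np.linalg.solve(H[i], c[i] - pressure[i]) for i in range(n)])
    lam = lam + alpha * (Lap @ x)  # dual ascent on the consensus constraint Lap x = 0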
We investigate a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints.
Decentralized optimization over time-varying graphs has been increasingly common in modern machine learning with massive data stored on millions of mobile devices, such as in federated learning. This paper revisits the widely used accelerated gradient tracking method and extends it to time-varying graphs.
This paper considers a distributed convex optimization problem over a time-varying multi-agent network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints.
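Both of the preceding problem statements couple the agents through a global constraint; a compact sketch of plain dual decomposition on a toy resource-allocation instance shows the basic mechanism. The specific objectives, the single linear coupling constraint, and the centralized multiplier update are assumptions; distributed schemes of this kind replace that central update with consensus over the time-varying network.

import numpy as np

# Dual decomposition sketch for a constraint-coupled problem (illustrative instance):
#   min sum_i 0.5 * (x_i - t_i)^2   s.t.  0 <= x_i <= 1  (local constraints),
#                                         sum_i x_i = r  (global coupling constraint).
rng = np.random.default_rng(2)
n = 8
t = rng.uniform(0, 1, n)           # local targets
r = 3.0                            # shared resource budget

lam = 0.0                          # multiplier of the coupling constraint
alpha = 0.2                        # dual step size (illustrative)
for _ in range(300):
    # Each agent minimizes its Lagrangian term locally (closed form + projection).
    x = np.clip(t - lam, 0.0, 1.0)
    # Dual (sub)gradient step on the coupling constraint; in a distributed scheme
    # this sum would instead be tracked via consensus over the network.
    lam = lam + alpha * (x.sum() - r)

print(x, x.sum())                  # x.sum() approaches the budget r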