Optimization in distributed networks plays a central role in almost all distributed machine learning problems. In principle, distributed task allocation reduces computational time, allowing better response rates and higher data reliability. However, for these algorithms to run effectively on complex distributed systems, they must compensate for communication asynchrony, network node failures, and delayed nodes known as stragglers. These issues can change the effective connection topology of the network, which may vary over time, thereby hindering the optimization process. In this paper, we propose a new distributed algorithm for unconstrained minimization of a convex function that is adaptable to a parameter-server network. In particular, the worker nodes solve their local optimization problems and compute their local coded gradients, which they send to different server nodes. Within this parameter-server platform, each server node then aggregates the local gradients communicated to it, allowing convergence to the desired optimizer. The algorithm is robust to worker-node failures, disconnections, and stragglers; one way to overcome the straggler problem is to allow coding over the network. We further extend this coding framework to enhance the convergence of the proposed algorithm under such varying network topologies, and by exploiting evaluations of gradients with uniformly bounded delay we improve its performance further. Finally, we implement the proposed scheme in MATLAB and provide comparative results demonstrating the effectiveness of the proposed framework.
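To make the coded-gradient idea concrete, the following is a minimal sketch of straggler-tolerant gradient aggregation, not the paper's exact scheme: it uses a standard fractional-repetition gradient code on a least-squares objective, with one coordinating server rather than several. The objective, partition sizes, step size, and worker/straggler counts are all illustrative assumptions.

```python
# Sketch: gradient coding tolerating s stragglers (fractional repetition).
# Assumed setup: f(x) = (1/2)||Ax - b||^2 split into k data partitions,
# n workers, a single aggregating server; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_workers, s = 4, 1          # number of workers, stragglers tolerated
k = n_workers                # one data partition per worker index
d, m = 5, 40                 # parameter dimension, number of samples
A = rng.normal(size=(m, d))
b = rng.normal(size=m)
parts = np.array_split(np.arange(m), k)

# Encoding matrix B (n_workers x k): workers are grouped in blocks of
# s + 1; each block jointly covers its s + 1 partitions, so any
# n - s responders still cover every partition.
B = np.zeros((n_workers, k))
for g in range(n_workers // (s + 1)):
    rows = slice(g * (s + 1), (g + 1) * (s + 1))
    B[rows, rows] = 1.0

def partition_gradient(x, j):
    """Exact gradient of the j-th partition's partial objective."""
    Aj, bj = A[parts[j]], b[parts[j]]
    return Aj.T @ (Aj @ x - bj)

def coded_gradient(x, i):
    """Worker i sends a single coded vector: a B-weighted partition sum."""
    return sum(B[i, j] * partition_gradient(x, j) for j in range(k) if B[i, j])

def decode(responders, coded):
    """Server: solve v^T B[S] = 1^T, then v^T coded is the full gradient."""
    BS = B[responders]
    v, *_ = np.linalg.lstsq(BS.T, np.ones(k), rcond=None)
    return v @ coded

x = np.zeros(d)
for it in range(200):
    straggler = rng.integers(n_workers)   # one worker misses the deadline
    responders = [i for i in range(n_workers) if i != straggler]
    coded = np.array([coded_gradient(x, i) for i in responders])
    x -= 0.01 * decode(responders, coded)  # exact full-gradient step

print("gradient norm:", np.linalg.norm(A.T @ (A @ x - b)))
```

Because every subset of n - s workers spans the all-ones combination of partitions, the server recovers the exact full gradient each iteration despite the straggler; the paper's multi-server extension and bounded-delay gradient reuse build on this same decoding principle.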