Straggler-Robust Distributed Optimization with the Parameter Server Utilizing Coded Gradient


Abstract

Optimization in distributed networks plays a central role in almost all distributed machine learning problems. In principle, distributing computational tasks across nodes reduces computation time, enabling better response rates and higher data reliability. However, for such algorithms to run effectively in complex distributed systems, they must compensate for communication asynchrony as well as node failures and delays, the latter known as stragglers. These issues can change the effective connection topology of the network over time, hindering the optimization process. In this paper, we propose a new distributed algorithm for unconstrained minimization of a strongly convex function, adapted to a parameter-server network architecture. In particular, worker nodes solve their local optimization problems and compute local coded gradients, which they send to the server nodes. Each server node then aggregates the local gradients it receives, driving convergence to the desired optimizer. Coding the gradients over the network makes the algorithm robust to worker failures, disconnections, and stragglers. We further extend this coding framework to enhance the convergence of the proposed algorithm under such time-varying network topologies. Finally, we implement the proposed scheme in MATLAB and provide comparative results demonstrating the effectiveness of the proposed framework.
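To make the coding idea concrete, below is a minimal MATLAB sketch of one standard gradient-coding strategy, fractional repetition (in the style of Tandon et al.'s gradient coding); it is not the authors' implementation, and the specific scheme, objective (least squares), single-server setup, variable names, and parameter values are all illustrative assumptions. Each data block is replicated across a group of s+1 workers, so the server can recover the full gradient from the first reply in each group even if any s workers straggle.

    % Minimal sketch (not the authors' code): fractional-repetition gradient
    % coding for the assumed objective f(x) = 0.5*norm(A*x - b)^2, one server.
    rng(0);
    nWorkers = 6; s = 2;                % tolerate any s stragglers; (s+1) divides nWorkers
    nGroups  = nWorkers / (s + 1);
    m = 120; d = 5;
    A = randn(m, d); b = randn(m, 1);
    edges = round(linspace(0, m, nGroups + 1));   % data block boundaries, one block per group

    x = zeros(d, 1);
    for it = 1:200
        stragglers = randperm(nWorkers, s);       % simulate s random stragglers this round
        grad = zeros(d, 1);
        for g = 1:nGroups
            % workers (g-1)*(s+1)+1 .. g*(s+1) all hold block g, so the server
            % only needs the first non-straggler reply from each group
            group = (g-1)*(s+1)+1 : g*(s+1);
            assert(~isempty(setdiff(group, stragglers)), ...
                'group size s+1 guarantees a survivor for any s stragglers');
            rows = edges(g)+1 : edges(g+1);
            Ar = A(rows, :); br = b(rows);
            grad = grad + Ar' * (Ar * x - br);    % identical reply from every group member
        end
        x = x - 1e-3 * grad;                      % server's gradient step on the full gradient
    end
    fprintf('gradient norm at solution: %.2e\n', norm(A' * (A * x - b)));

Because every worker in a group computes the same block gradient, the aggregate at the server is exactly the uncoded full gradient in every iteration, so convergence matches standard gradient descent regardless of which s workers straggle.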
