Fundamental Limits of Approximate Gradient Coding


Abstract

It has been established that, when the gradient coding problem is distributed among $n$ servers, the computation load (number of stored data partitions) of each worker must be at least $s+1$ in order to resist $s$ stragglers. This scheme incurs a large overhead when the number of stragglers $s$ is large. In this paper, we focus on a new framework called \emph{approximate gradient coding} to mitigate stragglers in distributed learning. We show that, to exactly recover the gradient with high probability, the computation load is lower bounded by $O(\log(n)/\log(n/s))$. We also propose a code construction that exactly matches this lower bound. We further identify a fundamental three-fold tradeoff, $d \geq O(\log(1/\epsilon)/\log(n/s))$, for any approximate gradient coding scheme, where $d$ is the computation load and $\epsilon$ is the error of the gradient. We give an explicit code construction based on a random edge removal process that achieves this tradeoff. We implement our schemes and demonstrate the advantage of our approaches over the current fastest gradient coding strategies.
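As a rough numerical illustration (not taken from the paper itself), the following Python sketch compares the computation load $s+1$ of exact gradient coding with the $O(\log(n)/\log(n/s))$ and $O(\log(1/\epsilon)/\log(n/s))$ bounds stated above; the symbols $n$, $s$, $d$, and $\epsilon$ follow the abstract, while the function names and the example values of $n$, $s$, and $\epsilon$ are purely illustrative.

import math

def exact_gc_load(s: int) -> int:
    # Computation load (partitions per worker) for exact gradient coding
    # that tolerates s stragglers: s + 1.
    return s + 1

def approx_gc_load_bound(n: int, s: int) -> float:
    # Lower bound on the computation load for exact recovery with high
    # probability under approximate gradient coding: O(log(n) / log(n/s)).
    return math.log(n) / math.log(n / s)

def tradeoff_bound(n: int, s: int, epsilon: float) -> float:
    # Three-fold tradeoff: d >= O(log(1/epsilon) / log(n/s)).
    return math.log(1.0 / epsilon) / math.log(n / s)

if __name__ == "__main__":
    n, s = 100, 50  # hypothetical cluster size and straggler count
    print("exact coding load (s+1):              ", exact_gc_load(s))
    print("approx. coding load ~log(n)/log(n/s): ", round(approx_gc_load_bound(n, s), 2))
    print("load for gradient error eps = 0.1:    ", round(tradeoff_bound(n, s, 0.1), 2))

With these hypothetical values, the exact scheme needs each worker to store 51 partitions, while the approximate bound is roughly 6.6, illustrating the overhead gap the abstract describes when $s$ is large.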
