Federated Learning with Nesterov Accelerated Gradient Momentum Method


Abstract

Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model over a distributed dataset. Conventional FL employs the gradient descent algorithm, which may not be sufficiently efficient. It is well known that Nesterov Accelerated Gradient (NAG) is more advantageous in centralized training environments, but it remains unclear how to quantify the benefits of NAG in FL. In this work, we focus on a version of FL based on NAG (FedNAG) and provide a detailed convergence analysis. The result is compared with conventional FL based on gradient descent. One interesting conclusion is that, as long as the learning step size is sufficiently small, FedNAG outperforms FedAvg. Extensive experiments on real-world datasets verify our conclusions and confirm the better convergence performance of FedNAG.
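To make the idea concrete, below is a minimal sketch of how local NAG updates could be combined with federated averaging. This is not the authors' reference implementation: the function names, the hyperparameters (lr, gamma, tau), and the choice to average both the model weights and the momentum (velocity) vectors are illustrative assumptions, since the abstract does not spell out these details.

```python
import numpy as np

def local_nag_steps(w, v, grad_fn, lr=0.01, gamma=0.9, tau=5):
    """Run tau local Nesterov Accelerated Gradient steps on one worker.

    One common NAG form: evaluate the gradient at the look-ahead point,
    then update the velocity and the weights.
    """
    for _ in range(tau):
        g = grad_fn(w - gamma * v)   # gradient at the look-ahead point
        v = gamma * v + lr * g       # update momentum (velocity)
        w = w - v                    # update model weights
    return w, v

def fednag_round(weights, velocities, grad_fns, lr=0.01, gamma=0.9, tau=5):
    """One communication round (sketch): each worker runs tau local NAG
    steps, then the server averages both the models and the momenta.
    Averaging the momenta alongside the weights is an assumption here."""
    updated = [local_nag_steps(w, v, g, lr, gamma, tau)
               for w, v, g in zip(weights, velocities, grad_fns)]
    w_avg = np.mean([w for w, _ in updated], axis=0)
    v_avg = np.mean([v for _, v in updated], axis=0)
    return w_avg, v_avg

# Toy usage: two workers minimizing quadratics with different optima.
if __name__ == "__main__":
    targets = [np.array([1.0, -2.0]), np.array([3.0, 0.5])]
    grad_fns = [lambda w, t=t: w - t for t in targets]  # grad of 0.5*||w - t||^2
    w, v = np.zeros(2), np.zeros(2)
    for _ in range(50):
        w, v = fednag_round([w.copy(), w.copy()],
                            [v.copy(), v.copy()], grad_fns)
    print(w)  # approaches the average of the targets, [2.0, -0.75]
```

The toy run illustrates the abstract's setting: with a small step size the momentum-based local updates still converge to the minimizer of the aggregate objective after repeated averaging rounds.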
