Federated learning (FL) is a fast-developing technique that allows multiple workers to train a global model based on a distributed dataset. Conventional FL employs the gradient descent algorithm, which may not be efficient enough. It is well known that Nesterov Accelerated Gradient (NAG) is more advantageous in centralized training environments, but it has been unclear how to quantify the benefits of NAG in FL. In this work, we focus on a version of FL based on NAG (FedNAG) and provide a detailed convergence analysis. The result is compared with conventional FL based on gradient descent. One interesting conclusion is that, as long as the learning step size is sufficiently small, FedNAG outperforms FedAvg. Extensive experiments on real-world datasets are conducted, verifying our conclusions and confirming the better convergence performance of FedNAG.
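As a point of reference, here is a minimal sketch of the Nesterov update in the common look-ahead form, compared against plain gradient descent on a small quadratic. The objective, step size, and momentum value are illustrative assumptions, not the FedNAG paper's setup.

    import numpy as np

    # Quadratic objective f(w) = 0.5 * w^T A w - b^T w, with gradient A w - b.
    A = np.array([[3.0, 0.2], [0.2, 1.0]])
    b = np.array([1.0, 1.0])

    def grad(w):
        return A @ w - b

    def nag(steps=100, eta=0.1, gamma=0.9):
        # Nesterov accelerated gradient: evaluate the gradient at the
        # look-ahead point w + gamma * v before taking the step.
        w = np.zeros(2)
        v = np.zeros(2)
        for _ in range(steps):
            v = gamma * v - eta * grad(w + gamma * v)
            w = w + v
        return w

    def gd(steps=100, eta=0.1):
        w = np.zeros(2)
        for _ in range(steps):
            w = w - eta * grad(w)
        return w

    w_star = np.linalg.solve(A, b)  # exact minimizer for comparison
    print("NAG error:", np.linalg.norm(nag() - w_star))
    print("GD  error:", np.linalg.norm(gd() - w_star))

On such well-conditioned problems the look-ahead term typically drives the error down faster than plain gradient descent at the same step size, which is the centralized advantage the abstract refers to.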
In this paper, a Federated Learning (FL) simulation platform is introduced. The target scenario is Acoustic Model training based on this platform. To our knowledge, this is the first attempt to apply FL techniques to Speech Recognition tasks …
We consider the problem of decentralized deep learning where multiple agents collaborate to learn from a distributed dataset. While there exist several decentralized deep learning approaches, the majority consider a central parameter-server topology …
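For contrast with the parameter-server pattern, a small sketch of gossip-style decentralized averaging over a ring topology, which several server-free approaches build on. The topology, mixing weights, and round count here are assumptions for illustration only.

    import numpy as np

    # Gossip averaging on a ring: each agent mixes its parameter with its two
    # neighbors' each round, converging to the global average without a server.
    vals = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # one scalar parameter per agent
    n = len(vals)
    for _ in range(50):
        new = vals.copy()
        for i in range(n):
            left, right = vals[(i - 1) % n], vals[(i + 1) % n]
            new[i] = (vals[i] + left + right) / 3.0  # uniform, doubly stochastic mixing
        vals = new
    print(vals, "-> all close to the global mean", 2.0)

Because the mixing matrix is doubly stochastic, the global mean is preserved each round while the per-agent values contract toward it.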
Federated learning (FL) aims to train machine learning models in a decentralized system consisting of an enormous number of smart edge devices. Federated averaging (FedAvg), the fundamental algorithm in FL settings, proposes on-device training and …
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2017) is a classical federated learning algorithm in which clients run multiple local SGD steps before communicating their update to an orchestrating server. We propose a new federated learning algorithm …
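To make the local-steps-then-average pattern concrete, a minimal FedAvg-style sketch on synthetic linear-regression clients. The data, model, and hyperparameters are illustrative assumptions rather than any cited paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic linear-regression data split across clients (illustrative only).
    def make_client(n=50):
        X = rng.normal(size=(n, 3))
        y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
        return X, y

    clients = [make_client() for _ in range(5)]

    def local_sgd(w, X, y, steps=10, eta=0.01):
        # Each client runs several local SGD steps before communicating.
        w = w.copy()
        for _ in range(steps):
            i = rng.integers(len(y))
            g = (X[i] @ w - y[i]) * X[i]  # gradient of squared error on one sample
            w -= eta * g
        return w

    w = np.zeros(3)
    for _ in range(100):
        # Server broadcasts w; clients train locally; server averages the results.
        updates = [local_sgd(w, X, y) for X, y in clients]
        w = np.mean(updates, axis=0)
    print("learned weights:", w)

Running more local steps per round cuts communication at the cost of some drift between client models, which is the central trade-off FedAvg variants try to manage.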
While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients. Here we propose an alternative, where each client only federates with other clients …
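A hedged sketch of the selective-federation idea: each client averages in only those peer models that reduce its own validation loss. The selection rule, the synthetic non-IID clients, and the uniform weighting below are illustrative assumptions, not the paper's actual algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    # Four clients with two underlying tasks (non-IID): clients 0-1 are similar,
    # clients 2-3 are similar. Each holds a noisy local model and a validation set.
    true_ws = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.1, 0.0]),
               np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.1, 2.0])]
    models = [w + 0.3 * rng.normal(size=3) for w in true_ws]

    def make_val(w_true, n=30):
        X = rng.normal(size=(n, 3))
        return X, X @ w_true

    def val_loss(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    def personalize(k, models, X, y):
        # Client k keeps only peer models that lower its own validation loss,
        # then averages them with its current model (illustrative rule).
        base = val_loss(models[k], X, y)
        useful = [m for j, m in enumerate(models)
                  if j != k and val_loss(m, X, y) < base]
        if not useful:
            return models[k]
        return np.mean([models[k]] + useful, axis=0)

    X, y = make_val(true_ws[0])
    print("before:", val_loss(models[0], X, y))
    print("after: ", val_loss(personalize(0, models, X, y), X, y))

In this toy setup, client 0 tends to pull in client 1's model and ignore clients 2 and 3, illustrating how federating only with relevant peers can beat a single global average under non-IID data.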