Although 5G networks greatly alleviate the challenge of device connectivity, training latency remains an obstacle preventing Federated Learning (FL) from being widely adopted. One of the most fundamental causes of large latency is poor candidate selection for FL. In a dynamic environment, the mobile devices selected by existing reactive candidate-selection algorithms are likely to fail to complete the training and reporting phases of FL, because the FL parameter server knows only the currently observed resources of the candidates. To this end, this paper studies proactive candidate selection for FL. We first let each candidate device locally predict the quality of both its training and reporting phases using an LSTM. The proposed candidate-selection algorithm is then implemented within a Deep Reinforcement Learning (DRL) framework. Finally, real-world trace-driven experiments show that the proposed approach outperforms existing reactive algorithms.
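As a rough illustration of the prediction step this abstract describes, the sketch below shows how a candidate device might locally predict the quality of its next training and reporting phases from a short window of resource observations using an LSTM. This is a minimal sketch under assumed inputs (CPU load, free memory, uplink bandwidth, battery level) and dimensions, not the authors' implementation; all names are illustrative.

import torch
import torch.nn as nn

class PhaseQualityPredictor(nn.Module):
    # Predicts (training-phase quality, reporting-phase quality) for the
    # next FL round from a local resource trace. Illustrative sketch only.
    def __init__(self, n_features=4, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # two quality scores

    def forward(self, x):
        # x: (batch, time, n_features), e.g. CPU load, free memory,
        # uplink bandwidth, battery level sampled over recent rounds.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

# One device scoring itself from its last 20 observations.
predictor = PhaseQualityPredictor()
trace = torch.randn(1, 20, 4)
train_quality, report_quality = predictor(trace)[0]

Each device would report these two predicted scores to the parameter server, which then makes the selection decision (via DRL in the paper) over predicted rather than currently observed resources.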
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting …
Federated Learning (FL), arising as a novel secure learning paradigm, has received notable attention from the public. In each round of synchronous FL training, only a fraction of the available clients are chosen to participate, and the selection decision …
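To make the round structure this abstract refers to concrete, here is a minimal FedAvg-style sketch: in each synchronous round the server samples only a fraction of the available clients, collects their local updates, and averages them weighted by sample count. The uniform-random sampling, the dummy client, and all names are illustrative assumptions, not the paper's actual selection policy.

import random

def fedavg_round(global_w, clients, frac=0.1):
    k = max(1, int(frac * len(clients)))
    selected = random.sample(clients, k)  # the per-round selection decision
    updates = [c.local_train(global_w) for c in selected]
    n_total = sum(n for _, n in updates)
    # Sample-size-weighted average of the selected clients' models (FedAvg).
    return [sum(w[i] * n for w, n in updates) / n_total
            for i in range(len(global_w))]

class Client:
    def __init__(self, n_samples):
        self.n_samples = n_samples
    def local_train(self, w):
        # Stand-in for local SGD: return slightly perturbed weights.
        return [wi + random.gauss(0, 0.01) for wi in w], self.n_samples

clients = [Client(random.randint(50, 500)) for _ in range(100)]
w = [0.0] * 10
for _ in range(5):
    w = fedavg_round(w, clients)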
Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed …
Nowadays, devices are equipped with advanced sensors and ever-greater processing/computing capabilities. Further, widespread Internet availability enables communication among sensing devices. As a result, vast amounts of data are generated on edge devices …
We compare the communication efficiencies of two compelling distributed machine learning approaches: split learning and federated learning. We show useful settings under which each method outperforms the other in terms of communication efficiency. …
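The trade-off this abstract points to can be made concrete with a back-of-the-envelope calculation. Under common simplifying assumptions (not necessarily the paper's exact accounting), each FL client exchanges the full model twice per round, while each split-learning client exchanges cut-layer activations and their gradients once per training sample. All numbers below are illustrative.

# Federated learning: every selected client downloads and uploads the
# full model once per round.
def fl_comm_bytes(n_clients, n_params, bytes_per_val=4):
    return n_clients * 2 * n_params * bytes_per_val

# Split learning: each sample's forward activations at the cut layer go up,
# and the corresponding gradients come back down.
def split_comm_bytes(n_samples, cut_activation_dim, bytes_per_val=4):
    return n_samples * 2 * cut_activation_dim * bytes_per_val

# 100 clients, a 10M-parameter model, 50k samples, 4096-dim cut layer.
print(fl_comm_bytes(100, 10_000_000) / 1e9)   # 8.0 GB per round
print(split_comm_bytes(50_000, 4096) / 1e9)   # ~1.64 GB per epoch

Under these assumptions, split learning tends to win when the model is large relative to the dataset, and federated learning wins when the dataset (or cut-layer activation size) dominates, consistent with the abstract's claim that each method has settings where it outperforms the other.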