This paper proposes a client selection method for federated learning (FL) when the computation and communication resources of clients cannot be estimated; FL trains a machine learning (ML) model using the rich data and computational resources of mobile clients without collecting their data in central systems. Conventional FL with client selection estimates the time required for an FL round from each client's computation power and throughput and determines a client set that reduces the time consumed per round. However, it is difficult to obtain accurate resource information for all clients before the FL process is conducted, because the available computation and communication resources change easily with background computation tasks, background traffic, bottleneck links, etc. Consequently, the FL operator must select clients through exploration and exploitation. To this end, this paper proposes a multi-armed bandit (MAB)-based client selection method that resolves the exploration-exploitation trade-off and reduces the time consumption of FL in mobile networks. The proposed method balances the selection of clients whose resources are uncertain against those known to have abundant resources. Simulation results demonstrate that the proposed scheme requires less learning time than the conventional method in a resource-fluctuating scenario.
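A minimal sketch of how such exploration-exploitation client selection could work, assuming a UCB1-style bandit in which each client is an arm and the reward is derived from its measured round time. The abstract does not specify the exact bandit policy, so `UCBClientSelector`, the reward mapping, and `max_time` below are illustrative assumptions rather than the paper's algorithm:

```python
import math


class UCBClientSelector:
    """UCB1-style sketch: each client is an arm; the reward is the observed
    "speed" of the client's last FL round, so fast clients are exploited
    while rarely sampled clients still receive an exploration bonus."""

    def __init__(self, num_clients: int):
        self.counts = [0] * num_clients          # times each client was selected
        self.mean_reward = [0.0] * num_clients   # running mean of observed speed
        self.total_updates = 0                   # total number of feedback samples

    def select(self, k: int):
        """Return the indices of the k clients with the highest UCB score."""
        scores = []
        for i in range(len(self.counts)):
            if self.counts[i] == 0:
                score = float("inf")             # unexplored client: try it at least once
            else:
                bonus = math.sqrt(2 * math.log(self.total_updates) / self.counts[i])
                score = self.mean_reward[i] + bonus
            scores.append((score, i))
        scores.sort(reverse=True)
        return [i for _, i in scores[:k]]

    def update(self, client: int, round_time: float, max_time: float = 60.0):
        """Feed back the measured round time; faster rounds yield higher reward."""
        reward = max(0.0, 1.0 - round_time / max_time)   # map time to [0, 1]
        self.counts[client] += 1
        self.mean_reward[client] += (reward - self.mean_reward[client]) / self.counts[client]
        self.total_updates += 1


# Hypothetical usage: pick 5 of 100 clients each round, then report measured round times.
selector = UCBClientSelector(num_clients=100)
chosen = selector.select(k=5)
for c in chosen:
    observed_time = 30.0                         # placeholder for the measured round time
    selector.update(c, observed_time)
```

In this sketch the exploration bonus grows for clients that have rarely been selected, while the running mean favors clients whose past rounds finished quickly, which mirrors the exploration-exploitation balance described in the abstract.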
Federated Learning (FL), arising as a novel secure learning paradigm, has received notable attention from the public. In each round of synchronous FL training, only a fraction of available clients are chosen to participate and the selection decision
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications. Traditional cloud-based Mac
The issue of potential privacy leakage during centralized AI model training has drawn intensive concern from the public. A Parallel and Distributed Computing (PDC) scheme, termed Federated Learning (FL), has emerged as a new paradigm to cope with
Federated learning (FL) is a distributed machine learning paradigm that allows clients to collaboratively train a model over their own local data. FL preserves the privacy of clients, and its security can be strengthened by cryptographic methods such a
We investigate a cooperative federated learning framework among devices for mobile edge computing, named CFLMEC, where devices co-exist in a shared spectrum with interference. Keeping in view the time-average network throughput of cooperative federat