Blockchain-assisted federated learning (BFL) has been intensively studied as a promising technology to process data at the network edge in a distributed manner. In this paper, we focus on BFL over wireless environments with varying channels and energy harvesting at clients. We are interested in proposing dynamic resource allocation (i.e., transmit power, computation frequency for model training, and block mining for each client) and client scheduling (DRACS) to maximize the long-term time average (LTA) training data size subject to an LTA energy consumption constraint. Specifically, we first define the Lyapunov drift by converting the LTA energy consumption constraint into a queue stability constraint. Then, we construct a Lyapunov drift-plus-penalty ratio function to decouple the original stochastic problem into multiple deterministic optimizations over time. Our construction is capable of dealing with uneven durations of communication rounds. To make the one-shot deterministic optimization problem of combinatorial fractional form tractable, we next convert the fractional problem into a subtractive-form one via the Dinkelbach method, which yields the asymptotically optimal solution in an iterative manner. In addition, closed-form expressions for the optimal resource allocation and client scheduling are obtained in each iteration with low complexity. Furthermore, we conduct a performance analysis of the proposed algorithm and show that the LTA training data size and energy consumption obey an $[\mathcal{O}(1/V), \mathcal{O}(\sqrt{V})]$ trade-off. Our experimental results show that, on the MNIST and Fashion-MNIST datasets, the proposed algorithm provides both higher learning accuracy and faster convergence under limited time and energy consumption.
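To illustrate the fractional-to-subtractive conversion mentioned above, the following is a minimal sketch of the generic Dinkelbach iteration in Python. The objective f, the denominator g, and the inner solver solve_subtractive are hypothetical placeholders standing in for the paper's per-round quantities (e.g., scheduled training data size over round duration under the resource-allocation variables); this is not the paper's exact algorithm, only the standard ratio-maximization template it builds on.

import numpy as np

def dinkelbach(f, g, solve_subtractive, lam0=0.0, tol=1e-6, max_iter=50):
    """Solve max_x f(x)/g(x) with g(x) > 0 by iterating on the subtractive form."""
    lam = lam0
    for _ in range(max_iter):
        x = solve_subtractive(lam)           # argmax_x of f(x) - lam * g(x)
        residual = f(x) - lam * g(x)         # F(lam); equals zero at the optimum
        if abs(residual) < tol:
            break
        lam = f(x) / g(x)                    # updated ratio becomes the next lam
    return x, lam

# Toy usage (assumed example): maximize (2x + 1) / (x^2 + 1) over a grid of candidates.
grid = np.linspace(-5.0, 5.0, 10001)
f = lambda x: 2.0 * x + 1.0
g = lambda x: x ** 2 + 1.0
solve = lambda lam: grid[np.argmax(f(grid) - lam * g(grid))]
x_star, lam_star = dinkelbach(f, g, solve)   # lam_star approximates the maximum ratio

In the paper's setting, the inner subtractive problem is the step solved in closed form at each iteration, which is what keeps the per-round complexity low.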
With the development of federated learning (FL), mobile devices (MDs) are able to train their local models with private data and send them to a central server for aggregation, thereby preventing leakage of sensitive raw data. In this paper, we aim to i
There is increasing interest in a fast-growing machine learning technique called federated learning, in which model training is distributed over mobile user equipments (UEs), exploiting UEs' local computation and training data. Despite its adva
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server. The performance of uploaded models in such situations can vary widely due to imbalanced data distributions, potential demands
Federated learning (FL), as a distributed machine learning paradigm, promotes personal privacy by processing data locally at each client. However, relying on a centralized server for model aggregation, standard FL is vulnerable to server malfunctions,
To ensure uninterrupted services to cloud clients from federated cloud providers, it is important to guarantee an efficient allocation of cloud resources to users in order to improve the rate of client satisfaction and the quality of the service provi