Federated learning has attracted attention in recent years as a way to collaboratively train models on distributed devices while preserving privacy. The limited network capacity of mobile and IoT devices is seen as one of the major challenges for cross-device federated learning. Recent solutions focus on threshold-based client selection schemes to guarantee communication efficiency. However, we find that this approach can bias client selection and degrade performance. Moreover, we find that the network bottleneck may be overstated in some cases and that packet loss is not always harmful. In this paper, we explore loss-tolerant federated learning (LT-FL) in terms of aggregation, fairness, and personalization. We use ThrowRightAway (TRA) to accelerate data uploading for low-bandwidth devices by intentionally ignoring some packet losses. Our results suggest that, properly integrated, TRA and other algorithms can together guarantee personalization and fairness performance in the face of packet loss below a certain fraction (10%-30%).
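The abstract does not give TRA's exact aggregation rule, but a minimal sketch of the general loss-tolerant idea, averaging each coordinate only over the clients whose packets for that coordinate actually arrived, might look like the following. The function name `loss_tolerant_aggregate`, the boolean-mask representation of packet loss, and the per-client weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def loss_tolerant_aggregate(updates, masks, weights=None):
    """Hypothetical sketch of loss-tolerant aggregation.

    updates: list of 1-D np.ndarray client updates; entries whose packets
        were dropped are arbitrary and are ignored via the mask.
    masks: list of boolean arrays, True where the packet arrived.
    weights: optional per-client weights (e.g., local sample counts).
    Each coordinate is averaged only over the clients whose packet for
    that coordinate survived, so a small loss fraction is tolerated.
    """
    dim = updates[0].shape[0]
    if weights is None:
        weights = np.ones(len(updates))
    num = np.zeros(dim)
    den = np.zeros(dim)
    for u, m, w in zip(updates, masks, weights):
        num += w * np.where(m, u, 0.0)   # lost entries contribute nothing
        den += w * m                     # a client counts only if its packet arrived
    # Fall back to a zero update on coordinates that no client delivered.
    return np.divide(num, den, out=np.zeros(dim), where=den > 0)


# Toy usage: three clients, roughly 10% simulated packet loss each.
rng = np.random.default_rng(0)
updates = [rng.normal(size=100) for _ in range(3)]
masks = [rng.random(100) > 0.1 for _ in range(3)]
aggregated = loss_tolerant_aggregate(updates, masks)
```

Renormalizing per coordinate keeps the aggregate unbiased over the surviving packets, which is consistent with the paper's observation that losses below a modest fraction (10%-30%) need not harm the final model.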