The heterogeneity across devices usually hinders the optimization convergence and generalization performance of federated learning (FL) when the aggregation of devices' knowledge occurs in the gradient space. For example, devices may differ in data distribution, network latency, input/output space, and/or model architecture, which can easily lead to the misalignment of their local gradients. To improve the tolerance to heterogeneity, we propose a novel federated prototype learning (FedProto) framework in which the devices and server communicate class prototypes instead of gradients. FedProto aggregates the local prototypes collected from different devices and then sends the global prototypes back to all devices to regularize the training of local models. The training on each device aims to minimize the classification error on the local data while keeping the resulting local prototypes sufficiently close to the corresponding global ones. In experiments, we propose a benchmark setting tailored for heterogeneous FL, on which FedProto outperforms several recent FL approaches across multiple datasets.
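The local objective and server-side aggregation described above can be summarized in a short sketch. The PyTorch-style code below is one illustrative reading of the abstract, not the authors' implementation; the function names (compute_prototypes, local_loss, aggregate_prototypes) and the proto_weight parameter are assumptions for illustration only.

    # Illustrative sketch of the prototype-exchange idea, assuming mean-pooled
    # class embeddings as prototypes and an MSE regularizer toward global ones.
    import torch
    import torch.nn.functional as F

    def compute_prototypes(embeddings, labels, num_classes):
        # Average the embeddings of every sample per class to get local prototypes.
        protos = {}
        for c in range(num_classes):
            mask = labels == c
            if mask.any():
                protos[c] = embeddings[mask].mean(dim=0)
        return protos

    def local_loss(logits, embeddings, labels, global_protos, proto_weight=1.0):
        # Classification error on local data plus a term keeping each sample's
        # embedding close to the global prototype of its class (when available).
        ce = F.cross_entropy(logits, labels)
        reg = torch.tensor(0.0, device=logits.device)
        count = 0
        for i, y in enumerate(labels.tolist()):
            if y in global_protos:
                reg = reg + F.mse_loss(embeddings[i], global_protos[y])
                count += 1
        if count > 0:
            reg = reg / count
        return ce + proto_weight * reg

    def aggregate_prototypes(client_protos):
        # Server-side step: average local prototypes of the same class across devices,
        # then the merged dictionary is sent back to all devices.
        merged = {}
        for protos in client_protos:
            for c, p in protos.items():
                merged.setdefault(c, []).append(p)
        return {c: torch.stack(ps).mean(dim=0) for c, ps in merged.items()}

Because only per-class prototype vectors are exchanged, devices with different model architectures or input/output spaces can still participate, as long as their embeddings share a common dimensionality.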
Federated Learning (FL) is a decentralized machine-learning paradigm, in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges to FL, which c
The Internet of Things (IoT) revolution has shown potential to give rise to many medical applications with access to large volumes of healthcare data collected by IoT devices. However, the increasing demand for healthcare data privacy and security ma
This work investigates fault-resilient federated learning when the data samples are non-uniformly distributed across workers, and the number of faulty workers is unknown to the central server. In the presence of adversarially faulty workers who may s
Federated learning (FL) was proposed to achieve collaborative machine learning among various clients without uploading private data. However, due to model aggregation strategies, existing frameworks require strict model homogeneity, limiting the appl
Federated learning learns from scattered data by fusing collaborative models from local nodes. However, due to chaotic information distribution, the model fusion may suffer from structural misalignment with regard to unmatched parameters. In this wor