Federated learning (FL) enables multiple data-owning parties to train models collaboratively without sharing their data. Given the privacy regulations governing patients' healthcare data, learning-based systems in healthcare can greatly benefit from privacy-preserving FL approaches. However, typical model aggregation methods in FL are sensitive to faulty or malicious local model updates, which may lead to failure in learning a robust and accurate global model. In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data. Furthermore, we show that such methods can detect and discard faulty or malicious local clients during training. We run two sets of experiments using two real-world healthcare datasets for medical diagnosis classification tasks. Each dataset is used to simulate the performance of three different robust FL aggregation strategies under different poisoning attacks. The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques. In particular, we observed that using differential privacy (DP) did not significantly impact the final learning convergence of the different aggregation strategies.
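The abstract above does not name the specific aggregation strategies evaluated, but the combination it reports, Byzantine-robust aggregation together with client-side differential privacy, can be illustrated with a minimal sketch. Coordinate-wise median is used here purely as an example of a robust rule, and the clipping norm and noise multiplier are illustrative values rather than parameters from the paper:

```python
import numpy as np

def coordinate_wise_median(client_updates):
    """Byzantine-robust aggregation: per-coordinate median of client updates."""
    stacked = np.stack(client_updates)           # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

def gaussian_dp_noise(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip an update to a fixed L2 norm and add Gaussian noise
    (DP-SGD-style client-level differential privacy; values illustrative)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

def federated_round(client_updates):
    """One round: clients perturb updates locally, the server aggregates robustly."""
    noisy = [gaussian_dp_noise(u) for u in client_updates]
    return coordinate_wise_median(noisy)
```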
Recent attacks on federated learning demonstrate that keeping the training data on clients' devices does not provide sufficient privacy, as the model parameters shared by clients can leak information about their training data. A secure aggregation protocol …
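As an illustration of the secure aggregation idea referenced above, the following toy sketch uses pairwise additive masking; the function names, client identifiers, and seed function are hypothetical, and a real protocol would derive the shared seeds from a key exchange rather than computing them publicly:

```python
import numpy as np

def pairwise_mask(update, my_id, other_ids, dim, shared_seed):
    """Additively mask one client's update so the server only learns the sum.
    For every pair (i, j), both clients derive the same mask from a shared seed;
    the lower-id client adds it and the higher-id client subtracts it,
    so all masks cancel when the server sums the masked updates."""
    masked = update.copy()
    for other in other_ids:
        rng = np.random.default_rng(shared_seed(min(my_id, other), max(my_id, other)))
        mask = rng.normal(size=dim)
        masked += mask if my_id < other else -mask
    return masked

# Toy run with three clients and a public pair-seed function (illustrative only).
dim, clients = 4, [0, 1, 2]
seed_fn = lambda i, j: 1000 * i + j
updates = [np.ones(dim) * (c + 1) for c in clients]
masked = [pairwise_mask(u, c, [o for o in clients if o != c], dim, seed_fn)
          for u, c in zip(updates, clients)]
assert np.allclose(sum(masked), sum(updates))  # server recovers only the aggregate
```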
In this paper, we address the problem of privacy-preserving training and evaluation of neural networks in an $N$-party, federated learning setting. We propose a novel system, POSEIDON, the first of its kind in the regime of privacy-preserving neural network training …
Federated learning has emerged as a promising approach for collaborative and privacy-preserving learning. Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which …
Federated learning (FL) is an emerging paradigm that enables multiple organizations to jointly train a model without revealing their private data to each other. This paper studies vertical federated learning, which tackles the scenarios where (i) …
We present a robust aggregation approach to make federated learning robust to settings in which a fraction of the devices may be sending corrupted updates to the server. The proposed approach relies on a robust secure aggregation oracle based on the geometric median …
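Geometric-median aggregation of the kind mentioned above is commonly approximated with smoothed Weiszfeld iterations. The following is a minimal sketch under that assumption, operating on flattened client update vectors with optional per-client weights; it is not the exact oracle from the cited work:

```python
import numpy as np

def geometric_median(points, weights=None, eps=1e-6, max_iters=100):
    """Approximate the (weighted) geometric median of client updates
    using smoothed Weiszfeld iterations."""
    points = np.stack(points)                        # (n_clients, n_params)
    n = points.shape[0]
    weights = np.ones(n) if weights is None else np.asarray(weights, dtype=float)
    z = np.average(points, axis=0, weights=weights)  # start from the weighted mean
    for _ in range(max_iters):
        dists = np.linalg.norm(points - z, axis=1)
        # Smoothing avoids division by zero when z coincides with a client update.
        w = weights / np.maximum(dists, eps)
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z
```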