For model privacy, local model parameters in federated learning must be obfuscated before being sent to the remote aggregator, a technique referred to as secure aggregation. However, secure aggregation makes model poisoning attacks such as backdooring more convenient, given that existing anomaly detection methods mostly require access to plaintext local models. This paper proposes SAFELearning, which supports backdoor detection for secure aggregation. We achieve this through two new primitives: oblivious random grouping (ORG) and partial parameter disclosure (PPD). ORG partitions participants into one-time random subgroups whose configurations are oblivious to the participants; PPD allows secure partial disclosure of aggregated subgroup models for anomaly detection without leaking individual model privacy. SAFELearning significantly reduces backdoor model accuracy without jeopardizing main task accuracy under common backdoor strategies. Extensive experiments show that SAFELearning is robust against malicious and faulty participants, while being more efficient than the state-of-the-art secure aggregation protocol in terms of both communication and computation costs.
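As a rough illustration of the two primitives, the sketch below models ORG as server-side random partitioning driven by a secret seed, and PPD as an anomaly test that sees only aggregated subgroup models. The helper names (oblivious_random_grouping, partial_parameter_disclosure) and the median-distance test are assumptions for illustration, not SAFELearning's exact mechanism; the real protocol hides the group configuration cryptographically rather than trusting a seeded shuffle.

```python
import random

import numpy as np


def oblivious_random_grouping(participant_ids, group_size, seed):
    """ORG sketch: derive one-time random subgroups from a secret seed so
    participants cannot predict or influence which group they land in.
    (Illustrative only; the real protocol hides the configuration
    cryptographically instead of trusting a seeded shuffle.)"""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    return [shuffled[i:i + group_size]
            for i in range(0, len(shuffled), group_size)]


def partial_parameter_disclosure(subgroup_aggregates, threshold=3.0):
    """PPD-style check: only aggregated subgroup models are inspected,
    never an individual update. Flag subgroups whose aggregate strays
    far from the cross-group median (an assumed, illustrative test)."""
    stacked = np.stack(subgroup_aggregates)          # (n_groups, n_params)
    median = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median, axis=1)
    cutoff = threshold * np.median(dists)
    return [i for i, d in enumerate(dists) if d > cutoff]
```

The property this preserves is the one the abstract emphasizes: the detector only ever compares subgroup aggregates, so no single participant's model is disclosed.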
Recent attacks on federated learning demonstrate that keeping the training data on clients' devices does not provide sufficient privacy, as the model parameters shared by clients can leak information about their training data. A secure aggregation protocol …
Federated learning is a distributed framework for training machine learning models over data residing on mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users …
Secure aggregation is a critical component in federated learning, which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy …
Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same …
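Since this abstract turns on additive masking and a shared quantization range, a minimal sketch of that mechanism may help. The ring size MOD, the fixed-point SCALE, and the toy seed_for pairing function are all assumptions standing in for a real key-agreement step.

```python
import numpy as np

MOD = 2 ** 16    # size of the shared finite ring (assumption)
SCALE = 2 ** 8   # fixed-point quantization factor (assumption)


def quantize(update):
    """Every user must map real-valued updates into the same ring before
    masking -- the shared-quantization requirement the abstract refers to."""
    return np.round(update * SCALE).astype(np.int64) % MOD


def pairwise_masks(user_ids, dim, seed_for):
    """For each pair (u, v), u adds a shared random mask and v subtracts
    it, so all masks cancel in the sum. `seed_for` is a toy stand-in for
    a Diffie-Hellman-style key agreement."""
    masks = {u: np.zeros(dim, dtype=np.int64) for u in user_ids}
    for i, u in enumerate(user_ids):
        for v in user_ids[i + 1:]:
            rng = np.random.default_rng(seed_for(u, v))
            m = rng.integers(0, MOD, size=dim)
            masks[u] = (masks[u] + m) % MOD
            masks[v] = (masks[v] - m) % MOD
    return masks


users = [0, 1, 2]
dim = 4
updates = {u: np.random.randn(dim) for u in users}
seed_for = lambda u, v: 1000 * u + v  # toy seed agreement (assumption)
masks = pairwise_masks(users, dim, seed_for)
masked = {u: (quantize(updates[u]) + masks[u]) % MOD for u in users}

# Pairwise masks cancel: the server recovers only the quantized aggregate.
aggregate = sum(masked.values()) % MOD
assert np.array_equal(aggregate,
                      sum(quantize(updates[u]) for u in users) % MOD)
```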
Federated learning enables thousands of participants to construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a next-word predictor for keyboards without revealing …
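The joint-training setup described here is commonly realized with federated averaging (FedAvg); the hypothetical fedavg helper below, with toy values, sketches the server-side step of weighting each client's parameters by its local dataset size.

```python
import numpy as np


def fedavg(local_weights, num_examples):
    """FedAvg sketch: average client parameters weighted by local dataset
    size; raw training data never leaves the devices."""
    total = sum(num_examples)
    return sum(w * (n / total) for w, n in zip(local_weights, num_examples))


# Toy round with three clients (values are illustrative only).
clients = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
sizes = [100, 300, 200]
global_model = fedavg(clients, sizes)
```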