
Secure Aggregation with Heterogeneous Quantization in Federated Learning

Published by Ahmed Elkordy
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same level of quantization. This severely degrades their performance, since it prevents adaptation to the bandwidth available at different users. We propose three schemes that allow secure model aggregation while using heterogeneous quantization. This enables users to adjust their quantization in proportion to their available bandwidth, which can provide a substantially better trade-off between training accuracy and communication time. The proposed schemes are based on a grouping strategy: the network is partitioned into groups, and the local model update of each user is partitioned into segments. Instead of applying the aggregation protocol to the entire local model update vector, it is applied to segments, with specific coordination between users. We theoretically evaluate the quantization error of our schemes, and also demonstrate how they can be utilized to overcome Byzantine users.
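To make the grouping idea concrete, here is a minimal Python sketch, not the authors' actual protocol: users are grouped by bandwidth, each group quantizes at its own level, and additive masks cancel within a group so the server only learns the group sum. The modulus, group sizes, and quantization levels are illustrative assumptions.

```python
import numpy as np

P = 2**31 - 1  # modulus for additive masking (assumed field size)

def quantize(u, levels):
    # uniform quantizer for values in [-1, 1]
    return np.round((u + 1.0) / 2.0 * (levels - 1)).astype(np.int64)

def mask_group(q_updates):
    # additive masks that sum to zero mod P across the group
    masks = [np.random.randint(0, P, q.shape) for q in q_updates[:-1]]
    masks.append((-np.sum(masks, axis=0)) % P)
    return [(q + m) % P for q, m in zip(q_updates, masks)]

rng = np.random.default_rng(0)
d = 8                                      # toy model dimension
groups = {"high_bw": 256, "low_bw": 16}    # quantization levels per group
users = {g: [rng.uniform(-1, 1, d) for _ in range(3)] for g in groups}

total, n_total = np.zeros(d), 0
for g, levels in groups.items():
    q = [quantize(u, levels) for u in users[g]]
    group_sum = np.sum(mask_group(q), axis=0) % P   # masks cancel here
    # invert the quantizer on the sum: sum(u) ~= 2*sum(q)/(levels-1) - n
    total += 2.0 * group_sum / (levels - 1) - len(users[g])
    n_total += len(users[g])

avg_update = total / n_total   # approximates the true average update
```

Note how the low-bandwidth group sends far fewer bits per coordinate while still contributing an unbiased (up to quantization error) term to the aggregate.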




Read also

For model privacy, local model parameters in federated learning shall be obfuscated before being sent to the remote aggregator. This technique is referred to as secure aggregation. However, secure aggregation makes model poisoning attacks such as backdooring more convenient, considering that existing anomaly detection methods mostly require access to plaintext local models. This paper proposes SAFELearning, which supports backdoor detection for secure aggregation. We achieve this through two new primitives: oblivious random grouping (ORG) and partial parameter disclosure (PPD). ORG partitions participants into one-time random subgroups with group configurations oblivious to participants; PPD allows secure partial disclosure of aggregated subgroup models for anomaly detection without leaking individual model privacy. SAFELearning can significantly reduce backdoor model accuracy without jeopardizing the main task accuracy under common backdoor strategies. Extensive experiments show SAFELearning is robust against malicious and faulty participants, while being more efficient than the state-of-the-art secure aggregation protocol in terms of both communication and computation costs.
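As a rough illustration of the grouping idea only, the toy sketch below derives one-time random subgroups from a per-round seed and flags subgroups whose aggregate statistic is an outlier. The actual ORG primitive keeps group configurations oblivious to participants via cryptographic means, which this sketch does not attempt; all names and thresholds are assumptions.

```python
import hashlib
import random
import numpy as np

def one_time_groups(participant_ids, round_seed, group_size):
    # derive a fresh permutation per round from a (server-held) seed
    digest = hashlib.sha256(str(round_seed).encode()).hexdigest()
    rng = random.Random(digest)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return [ids[i:i + group_size] for i in range(0, len(ids), group_size)]

groups = one_time_groups(range(12), round_seed="round-7-secret", group_size=4)

# PPD in spirit: reveal only an aggregated statistic per subgroup (here
# the norm of each subgroup's summed update) and flag outliers, never
# disclosing any individual model.
sub_sums = [np.random.default_rng(i).normal(size=8) for i in range(len(groups))]
norms = np.array([np.linalg.norm(s) for s in sub_sums])
suspicious = np.where(norms > 3 * np.median(norms))[0]
```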
Federated learning enables one to train a common machine learning model across separate, privately-held datasets via distributed model training. During federated training, only intermediate model parameters are transmitted to a central server, which aggregates these parameters to create a new common model, thus exposing only intermediate parameters rather than the training data itself. However, some attacks (e.g. membership inference) are able to infer properties of local data from these intermediate model parameters. Hence, performing the aggregation of these client-specific model parameters in a secure way is required. Additionally, the communication cost is often the bottleneck of federated systems, especially for large neural networks, so limiting the number and size of communications is necessary to efficiently train large neural architectures. In this article, we present an efficient and secure protocol for performing secure aggregation over compressed model updates in the context of collaborative, few-party federated learning, a context common in medical, healthcare, and biotechnical use-cases of federated systems. By making compression-based federated techniques amenable to secure computation, we develop a secure aggregation protocol between multiple servers with very low communication and computation costs and without preprocessing overhead. Our experiments demonstrate the efficiency of this new approach for secure federated training of deep convolutional neural networks.
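The sketch below illustrates the generic combination this abstract describes, not the paper's protocol: a client top-k sparsifies its update, fixed-point encodes it, and additively shares the result between two non-colluding servers, so that each server alone sees only a random vector. The compressor, encoding scale, and modulus are assumptions.

```python
import numpy as np

P = 2**31 - 1
rng = np.random.default_rng(2)

def top_k(u, k):
    # keep only the k largest-magnitude coordinates (a common compressor)
    idx = np.argsort(np.abs(u))[-k:]
    return idx, u[idx]

u = rng.normal(size=100)                        # a client's model update
idx, vals = top_k(u, k=10)
q = np.round(vals * 1000).astype(np.int64) % P  # fixed-point; negatives wrap mod P
r = rng.integers(0, P, q.shape)                 # uniformly random share
share_a, share_b = r, (q - r) % P               # server A gets r, server B gets q - r
assert np.array_equal((share_a + share_b) % P, q)  # servers jointly recover q
```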
Jingheng Zheng, Wanli Ni, 2021
This paper investigates the model aggregation process in an over-the-air federated learning (AirFL) system, where an intelligent reflecting surface (IRS) is deployed to assist the transmission from users to the base station (BS). To compensate for the absence of security screening against malicious individuals, successive interference cancellation (SIC) is adopted as a basis for analyzing the statistical characteristics of model parameters from devices. The objective of this paper is to minimize the mean-squared error by jointly optimizing the receive beamforming vector at the BS, the transmit power allocation at the users, and the phase shift matrix of the IRS, subject to the transmit power constraint for devices, the unit-modulus constraint for reflecting elements, the SIC decoding order constraint, and a quality-of-service constraint. To address this complicated problem, alternating optimization is employed to decompose it into three subproblems: the optimal receive beamforming vector is obtained by solving the first subproblem with the Lagrange dual method; the convex relaxation method is then applied to the transmit power allocation subproblem to find a suboptimal solution; and the phase shift matrix subproblem is finally addressed by invoking semidefinite relaxation. Simulation results validate the benefit of the IRS and the effectiveness of the proposed scheme in improving federated learning performance.
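The following toy skeleton only illustrates the alternating-optimization structure described above, on a stand-in quadratic objective; the paper's actual subproblems use the Lagrange dual method, convex relaxation, and semidefinite relaxation, none of which are reproduced here.

```python
# toy stand-in for the MSE objective (an assumption, not the paper's)
def mse(w, p, theta):
    return (w - p) ** 2 + (p - theta) ** 2 + (theta - 1.0) ** 2

w = p = theta = 0.0
for _ in range(50):
    w = p                     # subproblem 1: argmin over w, others fixed
    p = (w + theta) / 2.0     # subproblem 2: argmin over p, others fixed
    theta = (p + 1.0) / 2.0   # subproblem 3: argmin over theta, others fixed
# (w, p, theta) converges to (1, 1, 1), where mse attains its minimum of 0
```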
Recent attacks on federated learning demonstrate that keeping the training data on clients' devices does not provide sufficient privacy, as the model parameters shared by clients can leak information about their training data. A secure aggregation protocol enables the server to aggregate clients' models in a privacy-preserving manner. However, existing secure aggregation protocols incur high computation/communication costs, especially when the number of model parameters is larger than the number of clients participating in an iteration -- a typical scenario in federated learning. In this paper, we propose a secure aggregation protocol, FastSecAgg, that is efficient in terms of computation and communication, and robust to client dropouts. The main building block of FastSecAgg is a novel multi-secret sharing scheme, FastShare, based on the Fast Fourier Transform (FFT), which may be of independent interest. FastShare is information-theoretically secure, and achieves a trade-off between the number of secrets, the privacy threshold, and dropout tolerance. Riding on the capabilities of FastShare, we prove that FastSecAgg is (i) secure against the server colluding with any subset of some constant fraction (e.g. $\sim 10\%$) of the clients in the honest-but-curious setting; and (ii) tolerant to dropouts of a random subset of some constant fraction (e.g. $\sim 10\%$) of the clients. FastSecAgg achieves significantly smaller computation cost than existing schemes while achieving the same (orderwise) communication cost. In addition, it guarantees security against adaptive adversaries, which can perform client corruptions dynamically during the execution of the protocol.
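For intuition about multi-secret sharing, here is a toy packed-sharing sketch in the spirit of FastShare, but built on plain Lagrange interpolation over a prime field rather than the paper's FFT construction; the prime and parameters are illustrative, and no security claims are intended.

```python
import random

P = 2_147_483_647  # prime modulus (assumption)

def lagrange_eval(points, x):
    # interpolate through `points` = [(xi, yi)] and evaluate at x (mod P)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def share(secrets, t, n):
    # pin the polynomial at k secret slots plus t random slots, so it has
    # degree < k + t; shares are its evaluations at public points 1..n
    k = len(secrets)
    anchors = [(-(i + 1) % P, s) for i, s in enumerate(secrets)]
    anchors += [(-(k + i + 1) % P, random.randrange(P)) for i in range(t)]
    return [(x, lagrange_eval(anchors, x)) for x in range(1, n + 1)]

def reconstruct(shares, k, t):
    pts = shares[: k + t]  # any k + t shares determine the polynomial
    return [lagrange_eval(pts, -(i + 1) % P) for i in range(k)]

shares = share([11, 22, 33], t=2, n=7)
assert reconstruct(shares, k=3, t=2) == [11, 22, 33]
```

The trade-off mentioned in the abstract is visible here: packing k secrets into one polynomial of degree < k + t buys efficiency, at the cost of needing k + t surviving shares and tolerating at most t colluding holders.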
Jinhyun So, Basak Guler, 2020
Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log N)$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly with the number of users, and provides up to a $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
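The sketch below shows only the additive secret sharing that Turbo-Aggregate builds on: each user splits its (quantized) update into shares that sum to the update mod P, so the server can recover the aggregate without seeing any individual model. The multi-group circular strategy and the coding for dropout recovery are omitted, and all sizes are illustrative.

```python
import numpy as np

P = 2**31 - 1
rng = np.random.default_rng(1)

def additive_shares(x, n):
    # split integer vector x into n shares summing to x mod P
    shares = [rng.integers(0, P, x.shape) for _ in range(n - 1)]
    shares.append((x - np.sum(shares, axis=0)) % P)
    return shares

n, d = 4, 6
updates = [rng.integers(0, 1000, d) for _ in range(n)]  # quantized updates

all_shares = [additive_shares(u, n) for u in updates]   # all_shares[i][j]
# user j sums the j-th share it received from every user i ...
partials = [np.sum([all_shares[i][j] for i in range(n)], axis=0) % P
            for j in range(n)]
# ... and the server sums the partial sums, recovering only the aggregate
total = np.sum(partials, axis=0) % P
assert np.array_equal(total, np.sum(updates, axis=0) % P)
```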