Federated learning has become prevalent in medical diagnosis due to its effectiveness in training a federated model among multiple health institutions (i.e., Data Islands (DIs)). However, increasingly large-scale DI-level poisoning attacks, which inject poisoned data into certain DIs to corrupt the availability of the federated model, have exposed a serious vulnerability in federated learning. Previous work on federated learning has been inadequate in ensuring both the privacy of DIs and the availability of the final federated model. In this paper, we design SFPA, a secure federated learning mechanism with multiple keys that prevents DI-level poisoning attacks in medical diagnosis. Concretely, SFPA provides privacy-preserving random-forest-based federated learning by using multi-key secure computation, which guarantees the confidentiality of DI-related information. Meanwhile, a secure defense strategy over encrypted locally-submitted models is proposed to defend against DI-level poisoning attacks. Finally, our formal security analysis and empirical tests on a public cloud platform demonstrate the security and efficiency of SFPA, as well as its capability to resist DI-level poisoning attacks.
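To make the defense idea concrete, below is a minimal plaintext sketch of one standard way to filter poisoned locally-submitted models before aggregation (trimming by distance to the coordinate-wise median). This is our illustration under stated assumptions, not SFPA's actual mechanism, which performs the defense over encrypted models using multi-key secure computation; the names robust_aggregate and trim_k are hypothetical.

import numpy as np

def robust_aggregate(updates, trim_k=1):
    # Stack per-DI model updates into a (num_DIs, dim) matrix.
    U = np.stack(updates)
    # The coordinate-wise median is a robust reference point.
    median = np.median(U, axis=0)
    # Score each DI's update by its distance to that reference.
    dists = np.linalg.norm(U - median, axis=1)
    # Drop the trim_k most distant (likely poisoned) updates, then average.
    keep = np.argsort(dists)[: len(updates) - trim_k]
    return U[keep].mean(axis=0)

# Two honest DIs submit small updates; one DI submits a poisoned model.
honest1, honest2 = np.zeros(4), np.full(4, 0.1)
poisoned = np.full(4, 10.0)
print(robust_aggregate([honest1, honest2, poisoned]))  # ~[0.05 0.05 0.05 0.05]

The poisoned update is excluded, so the aggregate stays close to the honest consensus, which is exactly the availability property such a defense targets.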
Federated machine learning, which enables resource-constrained node devices (e.g., mobile phones and IoT devices) to learn a shared model while keeping the training data local, can provide privacy, security, and economic benefits by designing an effective communication protocol.
Secure federated learning is a privacy-preserving framework for improving machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process in which, at each iteration, users update a global model using their local datasets.
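The iterative process just described can be pictured with a minimal FedAvg-style round on a toy least-squares model. This is a generic sketch of federated averaging, not the secure protocol of any particular paper; fedavg_round and the synthetic data are ours.

import numpy as np

def fedavg_round(global_w, local_datasets, lr=0.1):
    local_models = []
    for X, y in local_datasets:
        w = global_w.copy()
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        local_models.append(w - lr * grad)     # one local training step
    return np.mean(local_models, axis=0)       # server averages the updates

rng = np.random.default_rng(0)
w = np.zeros(3)
users = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
for _ in range(5):                             # five global iterations
    w = fedavg_round(w, users)
print(w)

In a secure variant, the server would only ever see masked or aggregated updates rather than each user's individual model.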
Recently, researchers have studied input-leakage problems in Federated Learning (FL), where a malicious party can reconstruct sensitive training inputs provided by users from the shared gradients. This raises concerns about FL, since such input leakage contradicts its privacy-preserving promise.
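The leakage is easy to see in a toy case. The sketch below is our illustration, not the reconstruction attack of any specific paper: for single-example linear regression, the gradient of the squared loss with respect to the weight matrix is a rank-one outer product, so every row of the shared gradient is a scaled copy of the private input.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)          # user's private training input
W = rng.normal(size=(3, 5))     # current model weights, known to the server
y = rng.normal(size=3)          # private label
residual = W @ x - y            # prediction error on this example
grad_W = np.outer(residual, x)  # gradient of 0.5*||Wx - y||^2 w.r.t. W

# Each row of grad_W equals residual[i] * x, so an honest-but-curious
# server recovers the input up to a scale factor from any single row.
row = grad_W[0]
print(np.allclose(row / row[0], x / x[0]))  # True: x is reconstructed

Deep networks require iterative optimization rather than a closed form, but the underlying observation, that shared gradients are functions of private inputs, is the same.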
In cloud computing environments with many virtual machines, containers, and other systems, an epidemic of malware can be highly threatening to business processes. In this vision paper, we introduce a hierarchical approach to performing malware detection.
Recently, a number of backdoor attacks against Federated Learning (FL) have been proposed. In such attacks, an adversary injects poisoned model updates into the federated model aggregation process with the goal of manipulating the aggregated model to provide targeted false predictions on attacker-chosen inputs.
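A well-known instance of such an injection is the model-replacement scaling attack (in the style of Bagdasaryan et al.). The sketch below is our illustration of that general idea, not the specific attacks or defenses of the paper summarized above: the adversary scales its update so that plain averaging lands exactly on its backdoored model.

import numpy as np

def model_replacement_update(global_w, backdoored_w, n_clients):
    # Scale the malicious delta so that averaging over n_clients
    # updates yields exactly the attacker's backdoored model.
    return n_clients * (backdoored_w - global_w) + global_w

global_w = np.zeros(4)
backdoored_w = np.array([1.0, -2.0, 0.5, 3.0])   # attacker's target model
n = 10
honest = [global_w] * (n - 1)                    # honest clients barely move
malicious = model_replacement_update(global_w, backdoored_w, n)
aggregate = np.mean(honest + [malicious], axis=0)
print(np.allclose(aggregate, backdoored_w))      # True: aggregate is hijacked

This is why unweighted averaging alone offers no robustness, and why defenses inspect or bound individual model updates before aggregation.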