
NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training

Posted by Adam Yala
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Balancing the needs of data privacy and predictive utility is a central challenge for machine learning in healthcare. In particular, privacy concerns have led to a dearth of public datasets, complicated the construction of multi-hospital cohorts, and limited the use of external machine learning resources. To remedy this, new methods are required that enable data owners, such as hospitals, to share their datasets publicly while preserving both patient privacy and modeling utility. We propose NeuraCrypt, a private encoding scheme based on random deep neural networks. NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data owner, and publishes both the encoded data and the associated labels publicly. From a theoretical perspective, we demonstrate that sampling from a sufficiently rich family of encoding functions offers a well-defined and meaningful notion of privacy against a computationally unbounded adversary with full knowledge of the underlying data distribution. We propose to approximate this family of encoding functions with random deep neural networks. Empirically, we demonstrate the robustness of our encoding against a suite of adversarial attacks and show that NeuraCrypt achieves accuracy competitive with non-private baselines on a variety of x-ray tasks. Moreover, we demonstrate that multiple hospitals, each using an independent private encoder, can collaborate to train improved x-ray models. Finally, we release a challenge dataset to encourage the development of new attacks on NeuraCrypt.
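As a rough illustration of the encoding step described above, the sketch below builds a small, randomly initialized patch encoder in PyTorch whose weights are frozen and kept secret by the data owner. The patch size, depth, hidden width, and patch shuffling shown here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RandomPatchEncoder(nn.Module):
    """Toy NeuraCrypt-style encoder: a frozen, randomly initialized network
    known only to the data owner. All hyperparameters here are illustrative."""

    def __init__(self, patch_size=16, in_channels=1, hidden_dim=256, depth=3, seed=0):
        super().__init__()
        torch.manual_seed(seed)  # the owner's private randomness
        layers = [nn.Conv2d(in_channels, hidden_dim, kernel_size=patch_size, stride=patch_size),
                  nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(hidden_dim, hidden_dim, kernel_size=1), nn.ReLU()]
        self.encoder = nn.Sequential(*layers)
        for p in self.parameters():   # weights stay fixed: they act as the secret key
            p.requires_grad_(False)

    def forward(self, x):
        z = self.encoder(x)               # (B, hidden_dim, H/ps, W/ps)
        z = z.flatten(2).transpose(1, 2)  # (B, num_patches, hidden_dim)
        perm = torch.randperm(z.size(1))  # shuffle patch positions
        return z[:, perm, :]

# Usage: encode an x-ray batch, then publish (encoded data, labels) publicly.
xray = torch.rand(4, 1, 224, 224)
encoded = RandomPatchEncoder()(xray)
print(encoded.shape)  # torch.Size([4, 196, 256])
```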




Read also

Emerging neural-network-based machine learning techniques such as deep learning and its variants have shown tremendous potential in many application domains. However, they raise serious privacy concerns due to the risk of leaking highly privacy-sensitive data when data collected from users is used to train neural network models for predictive tasks. To address these concerns, several privacy-preserving approaches have been proposed in the literature that use either secure multi-party computation (SMC) or homomorphic encryption (HE) as the underlying mechanism. However, neither of these cryptographic approaches provides an efficient solution for constructing a privacy-preserving machine learning model that supports both the training and inference phases. To tackle this issue, we propose CryptoNN, a framework that supports training a neural network model over encrypted data by using the emerging functional encryption scheme instead of SMC or HE. We also construct a functional encryption scheme for basic arithmetic computation to meet the requirements of the proposed CryptoNN framework. We present a performance evaluation and security analysis of the underlying crypto scheme and show through our experiments that CryptoNN achieves accuracy similar to that of the baseline neural network models on the MNIST dataset.
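To make the functional-encryption workflow concrete, the mock below illustrates the interface such a scheme exposes: the client encrypts a vector, an authority issues a key bound to one function (here an inner product), and the server learns only the function's output. This contains no real cryptography and is not CryptoNN's API; the class and method names are assumptions used purely to show the data flow.

```python
import numpy as np

class MockInnerProductFE:
    """Non-cryptographic mock of an inner-product functional encryption interface."""

    def setup(self):
        self._msk = "master-secret-key"              # stand-in for real key material
        return "public-params"

    def encrypt(self, x):
        return {"ct": np.asarray(x, dtype=float)}     # a real scheme would hide x

    def keygen(self, w):
        return {"fk": np.asarray(w, dtype=float)}     # function key bound to weights w

    def decrypt(self, fk, ct):
        return float(fk["fk"] @ ct["ct"])             # reveals only <w, x>, not x

fe = MockInnerProductFE()
fe.setup()
ct = fe.encrypt([0.2, 0.5, 0.1])   # client side: encrypt a feature vector
fk = fe.keygen([1.0, -2.0, 3.0])   # authority: issue a key for one weight row
print(fe.decrypt(fk, ct))          # server learns only the weighted sum <w, x> (about -0.5)
```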
Data hiding is the art of hiding secret data in a digital cover for covert communication. In this letter, we propose a novel method to disguise data hiding tools, including a data embedding tool and a data extraction tool, as a deep neural network (DNN) with an ordinary task. After training a DNN for both style transfer and data hiding, the network can transfer the style of an image to a target style, and it can also be used to hide secret data in a cover image or extract secret data from a stego image when the trigger signal is provided as input. In other words, the data hiding tools are themselves hidden to avoid arousing suspicion.
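A minimal sketch of the "one network, two behaviours" idea follows: without a trigger the model produces its ordinary output, and with a trigger it routes through a second head. The architecture, trigger mechanism, and training procedure are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class DualPurposeNet(nn.Module):
    """Toy illustration of a network disguised as an ordinary style-transfer model."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.style_head = nn.Conv2d(16, 3, 3, padding=1)   # ordinary task: stylised image
        self.secret_head = nn.Conv2d(16, 1, 3, padding=1)  # hidden task: recover payload map

    def forward(self, image, trigger=None):
        feats = self.backbone(image)
        if trigger is None:
            return self.style_head(feats)                        # normal behaviour
        return torch.sigmoid(self.secret_head(feats + trigger))  # extraction mode

net = DualPurposeNet()
cover = torch.rand(1, 3, 64, 64)
print(net(cover).shape)                                      # (1, 3, 64, 64)
print(net(cover, trigger=torch.rand(1, 16, 64, 64)).shape)   # (1, 1, 64, 64)
```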
How can multiple distributed entities collaboratively train a shared deep net on their private data while preserving privacy? This paper introduces InstaHide, a simple encryption of training images that can be plugged into existing distributed deep learning pipelines. The encryption is efficient, and applying it during training has only a minor effect on test accuracy. InstaHide encrypts each training image with a one-time secret key that consists of mixing a number of randomly chosen images and applying a random pixel-wise mask. Other contributions of this paper include: (a) using a large public dataset (e.g., ImageNet) for mixing during encryption, which improves security; (b) experimental results showing effectiveness in preserving privacy against known attacks with only minor effects on accuracy; (c) theoretical analysis showing that successfully attacking privacy requires attackers to solve a difficult computational problem; (d) demonstrating that the pixel-wise mask is important for security, since Mixup alone is shown to be insecure against some efficient attacks; and (e) release of a challenge dataset at https://github.com/Hazelsuko07/InstaHide_Challenge. Our code is available at https://github.com/Hazelsuko07/InstaHide
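The sketch below shows the two ingredients the abstract describes, mixup with random coefficients followed by a random pixel-wise sign mask, as a one-time encryption of a single image. The number of mixed public images and the use of a Dirichlet draw for the coefficients are illustrative choices, not InstaHide's exact recipe.

```python
import numpy as np

def instahide_encrypt(private_img, public_pool, k_public=2, rng=None):
    """Toy InstaHide-style one-time encryption of one private image."""
    if rng is None:
        rng = np.random.default_rng()
    # Mix in a few randomly chosen public images (e.g. ImageNet crops).
    publics = [public_pool[i] for i in rng.choice(len(public_pool), k_public, replace=False)]
    imgs = [private_img] + publics
    lam = rng.dirichlet(np.ones(len(imgs)))            # random mixing coefficients, sum to 1
    mixed = sum(l * im for l, im in zip(lam, imgs))
    sign_mask = rng.choice([-1.0, 1.0], size=mixed.shape)  # one-time pixel-wise mask
    return sign_mask * mixed                                # publish this; keep lam & mask secret

# Usage with random stand-ins for a private image and a public pool.
rng = np.random.default_rng(0)
private_img = rng.random((32, 32, 3))
public_pool = list(rng.random((100, 32, 32, 3)))
encrypted = instahide_encrypt(private_img, public_pool, rng=rng)
print(encrypted.shape)  # (32, 32, 3)
```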
Protecting the privacy of input data is of growing importance as machine learning methods reach new application domains. In this paper, we provide a unified training and inference framework for large DNNs that protects both input privacy and computation integrity. Our approach, called DarKnight, uses a novel data blinding strategy based on matrix masking to obfuscate inputs within a trusted execution environment (TEE). Our rigorous mathematical proof demonstrates that the blinding process provides an information-theoretic privacy guarantee by bounding information leakage. The obfuscated data can then be offloaded to any GPU to accelerate linear operations on the blinded data. The results of these linear operations are decoded before the non-linear operations are performed within the TEE. This cooperative execution allows DarKnight to exploit the computational power of GPUs for linear operations while relying on TEEs to protect input privacy. We implement DarKnight on an Intel SGX TEE augmented with a GPU to evaluate its performance.
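The core trick, blinding inputs with a secret matrix so that a linear layer can be computed on untrusted hardware and unblinded afterwards, can be sketched in a few lines. This toy omits the added noise terms and all TEE/SGX machinery, and the sizes are made up; it only shows why the unblinding step recovers the true result for linear operations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inside the TEE: blind a batch of private activations with a secret matrix A.
W = rng.standard_normal((64, 128))   # layer weights (may live outside the TEE)
X = rng.standard_normal((128, 8))    # batch of 8 private activation vectors
A = rng.standard_normal((8, 8))      # secret blinding matrix, kept inside the TEE
X_blinded = X @ A                    # the only thing the untrusted GPU ever sees

# On the untrusted GPU: the heavy linear algebra runs on blinded data only.
Y_blinded = W @ X_blinded

# Back inside the TEE: unblind before applying the non-linear activation.
Y = Y_blinded @ np.linalg.inv(A)
assert np.allclose(Y, W @ X)         # matches the result on the raw inputs
```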
Zhi Wang, Chaoge Liu, Xiang Cui (2021)
While artificial intelligence (AI) is widely applied in many areas, it is also being used maliciously. It is necessary to study and predict AI-powered attacks in order to prevent them in advance. Turning neural network models into stegomalware is one malicious use of AI: it exploits the characteristics of neural network models to hide malware while maintaining the models' performance. However, existing methods have a low malware embedding rate and a high impact on model performance, making them impractical. By analyzing the composition of neural network models, this paper therefore proposes new methods to embed malware in models with high capacity and no degradation in service quality. We used 19 malware samples and 10 mainstream models to build 550 malware-embedded models and analyzed the models' performance on the ImageNet dataset. A new evaluation method that combines the embedding rate, the impact on model performance, and the embedding effort is proposed to evaluate existing methods. This paper also designs a trigger and proposes an application scenario for attack tasks combining EvilModel with WannaCry. We further study the relationship between a neural network model's embedding capacity and its structure, layers, and size. With the widespread application of artificial intelligence, using neural networks for attacks is becoming an emerging trend. We hope this work can provide a reference scenario for defending against neural-network-assisted attacks.
