
DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks

 Added by Hanieh Hashemi
Publication date: 2020
Language: English





Protecting the privacy of input data is of growing importance as machine learning methods reach new application domains. In this paper, we provide a unified training and inference framework for large DNNs while protecting input privacy and computation integrity. Our approach, called DarKnight, uses a novel data blinding strategy based on matrix masking to obfuscate inputs within a trusted execution environment (TEE). A rigorous mathematical proof demonstrates that the blinding process provides an information-theoretic privacy guarantee by bounding information leakage. The obfuscated data can then be offloaded to any GPU to accelerate linear operations on the blinded data. The results of these linear operations are decoded inside the TEE before the non-linear operations are performed there. This cooperative execution lets DarKnight exploit the computational power of GPUs for linear operations while relying on TEEs to protect input privacy. We implement DarKnight on an Intel SGX TEE augmented with a GPU to evaluate its performance.
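To make the blinding idea concrete, below is a minimal NumPy sketch of matrix-masking blinding for one linear layer. The shapes, the single noise row, and names such as `X_blind` are illustrative assumptions; the paper's exact construction (and the parameters behind its information-theoretic guarantee) may differ.

```python
import numpy as np

# Hypothetical sketch of DarKnight-style matrix blinding for one linear
# layer; shapes and names are illustrative, not the paper's construction.
k = 4                                        # inputs blinded together
d_in, d_out = 8, 5
rng = np.random.default_rng(0)

# --- inside the TEE ---
X = rng.standard_normal((k, d_in))           # k private inputs (rows)
r = rng.standard_normal((1, d_in))           # random noise row
A = rng.standard_normal((k + 1, k + 1))      # invertible blinding matrix
X_blind = A @ np.vstack([X, r])              # random mixes of inputs + noise

# --- on the untrusted GPU ---
W = rng.standard_normal((d_in, d_out))       # layer weights (not secret here)
Y_blind = X_blind @ W                        # linear op sees only blinded data

# --- back inside the TEE ---
# Blinding commutes with the linear map, so inverting A recovers the true
# outputs; the noise row is simply discarded.
Y = (np.linalg.inv(A) @ Y_blind)[:k]
assert np.allclose(Y, X @ W)
```

The point the sketch captures is that blinding commutes with any linear map: the GPU only ever sees random combinations of inputs plus noise, while the TEE pays only the comparatively cheap cost of mixing before offload and unmixing afterward.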



Related research

Privacy protection in electronic healthcare applications is an important consideration due to the sensitive nature of personal health data. Internet of Health Things (IoHT) networks carry strict privacy requirements in healthcare settings, yet they face unique challenges: security requirements (integrity, authentication, privacy, and availability) must be balanced against the need for efficiency, since battery power is a significant limitation in IoHT devices and networks. Data are usually transferred without filtering or optimization, and this traffic can overload sensors and cause rapid battery drain, which in turn restricts the practical deployment of these devices. To address these issues, this paper proposes a privacy-preserving two-tier data inference framework that conserves battery power by inferring sensed data, reducing the volume that must be transmitted, while also protecting sensitive data from leakage to adversaries. Experimental evaluations of privacy show the validity of the proposed scheme and demonstrate significant data savings without compromising transmission accuracy, contributing to the energy efficiency of IoHT sensor devices.
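As a rough sketch of the "infer rather than transmit" idea (the paper's actual two-tier algorithm is not reproduced here), a sensor and its gateway can share a predictor and transmit a reading only when it deviates beyond a tolerance. The threshold, the last-value predictor, and all names below are assumptions for illustration:

```python
import random

# Illustrative tier-1 "data inference" sketch (not the paper's algorithm):
# transmit a reading only when it deviates from a predictor both sides share,
# so the gateway can infer the suppressed samples.
THRESHOLD = 0.5  # assumed tolerance

def run_sensor(readings):
    last_sent = None
    for t, value in enumerate(readings):
        prediction = last_sent               # trivial last-value predictor
        if prediction is None or abs(value - prediction) > THRESHOLD:
            yield (t, value)                 # transmit; gateway updates its copy
            last_sent = value
        # otherwise suppress; the gateway infers `prediction` for this step

readings = [20.0 + random.gauss(0, 0.2) for _ in range(100)]
sent = list(run_sensor(readings))
print(f"transmitted {len(sent)} of {len(readings)} samples")
```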
Xiaoyu Zhang, Chao Chen, Yi Xie (2021)
Deep Neural Networks (DNNs), among the most powerful machine learning algorithms, are increasingly leveraged to overcome the bottleneck of effectively exploring and analyzing massive data to advance scientific development. It is no surprise that cloud computing providers offer cloud-based DNNs as an out-of-the-box service. While cloud-based DNNs bring clear benefits, the interaction among two or more entities in the cloud inevitably introduces new privacy risks. This survey presents the most recent findings on privacy attacks and defenses that have appeared in cloud-based neural network services. We systematically and thoroughly review privacy attacks and defenses across the pipeline of a cloud-based DNN service, i.e., data manipulation, training, and prediction. In particular, a new theory, called the cloud-based ML privacy game, is distilled from the recently published literature to provide a deeper understanding of state-of-the-art research. Finally, challenges and future work are presented to help researchers continue to push forward the contest between privacy attackers and defenders.
Emerging neural-network-based machine learning techniques such as deep learning and its variants have shown tremendous potential in many application domains. However, they raise serious privacy concerns due to the risk of leaking highly privacy-sensitive data when data collected from users is used to train neural network models for predictive tasks. To tackle these concerns, several privacy-preserving approaches have been proposed in the literature that use either secure multi-party computation (SMC) or homomorphic encryption (HE) as the underlying mechanism. However, neither cryptographic approach offers an efficient way to construct a privacy-preserving machine learning model that supports both the training and inference phases. To address this, we propose CryptoNN, a framework that supports training a neural network model over encrypted data by using the emerging functional encryption paradigm instead of SMC or HE. We also construct a functional encryption scheme for basic arithmetic computation to meet the requirements of the proposed CryptoNN framework. We present a performance evaluation and security analysis of the underlying crypto scheme and show through experiments that CryptoNN achieves accuracy similar to that of baseline neural network models on the MNIST dataset.
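Because CryptoNN hinges on functional encryption for basic arithmetic, a toy DDH-style inner-product FE scheme (in the spirit of Abdalla et al.'s construction) may help fix ideas: encrypting a vector x and issuing a key for y lets the decryptor learn only the inner product of x and y. The parameters below are deliberately tiny and insecure, and this sketch is not the paper's scheme:

```python
import random

# Toy DDH-based inner-product functional encryption, for illustration only.
p = 2**61 - 1          # assumed small prime modulus (insecure, demo only)
g = 3                  # assumed generator

def setup(n):
    s = [random.randrange(1, p - 1) for _ in range(n)]   # master secret key
    h = [pow(g, si, p) for si in s]                      # public key
    return s, h

def encrypt(h, x):
    r = random.randrange(1, p - 1)
    ct0 = pow(g, r, p)
    cts = [pow(hi, r, p) * pow(g, xi, p) % p for hi, xi in zip(h, x)]
    return ct0, cts

def keygen(s, y):
    return sum(si * yi for si, yi in zip(s, y))          # sk_y = <s, y>

def decrypt(ct, sk_y, y, bound=10_000):
    ct0, cts = ct
    # prod c_i^{y_i} / ct0^{sk_y} = g^{<x, y>}
    num = 1
    for ci, yi in zip(cts, y):
        num = num * pow(ci, yi, p) % p
    target = num * pow(pow(ct0, sk_y, p), p - 2, p) % p  # modular inverse
    for v in range(bound):                               # brute-force small dlog
        if pow(g, v, p) == target:
            return v
    raise ValueError("inner product out of range")

s, h = setup(3)
x, y = [2, 3, 5], [1, 4, 2]          # plaintext vector, function vector
ct = encrypt(h, x)
print(decrypt(ct, keygen(s, y), y))  # 2*1 + 3*4 + 5*2 = 24
```

The decryptor holding sk_y learns the inner product but nothing else about x, which is exactly the kind of selective disclosure a training server needs for gradient-style computations.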
Yin Yang (2021)
Deep neural networks (DNNs) could be very useful in blockchain applications such as DeFi and NFT trading. However, training or running large-scale DNNs as part of a smart contract is infeasible on today's blockchain platforms, due to two fundamental design issues. First, blockchains nowadays typically require every node to maintain the complete world state at all times, meaning each node must execute all transactions in every block; this is prohibitively expensive for computationally intensive smart contracts involving DNNs. Second, existing blockchain platforms expect smart contract transactions to have deterministic, reproducible results and effects. In contrast, DNNs are usually trained or run lock-free on massively parallel devices such as GPUs, TPUs, or computing clusters, which often do not yield deterministic results. This paper proposes novel platform designs, collectively called A New Hope (ANH), that address these issues. The main ideas are (i) computation-intensive smart contract transactions are executed only by nodes that need their results, or by specialized service providers, and (ii) a non-deterministic smart contract transaction leads to uncertain results that can still be validated, though at a relatively high cost; specifically for DNNs, the validation cost can often be reduced by verifying properties of the results instead of their exact values. In addition, we discuss various implications of ANH, including its effects on token fungibility, sharding, private transactions, and the fundamental meaning of a smart contract.
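The "verify properties instead of exact values" idea can be sketched as follows. The function name, the tolerance, and the toy threshold model are assumptions for illustration, not ANH's actual validation protocol:

```python
from typing import Callable, Sequence, Tuple

def validate_training_tx(
    model: Callable[[float], int],           # submitted model (inference only)
    test_set: Sequence[Tuple[float, int]],   # public validation data
    claimed_accuracy: float,
    slack: float = 0.02,                     # assumed nondeterminism tolerance
) -> bool:
    """Accept the transaction if the claimed accuracy property holds,
    without demanding bit-identical training results."""
    correct = sum(model(x) == y for x, y in test_set)
    return correct / len(test_set) >= claimed_accuracy - slack

# Toy usage: a threshold "model" whose transaction claims 90% accuracy.
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (0.4, 0)]
model = lambda x: int(x > 0.5)
print(validate_training_tx(model, data, claimed_accuracy=0.9))  # True
```

Checking an accuracy property costs one inference pass over public data, which is far cheaper than re-running nondeterministic training in the hope of reproducing identical weights.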
We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) on private data from multiple sources using secret-shared secure function evaluation. In short, consider a virtual third party to whom every data holder sends their inputs and which computes the neural network: in our case, this virtual third party is actually a set of servers that individually learn nothing, even in the presence of a malicious (but non-colluding) adversary. Previous work in this area has been limited to a single activation function, ReLU, rendering the approach impractical for many use cases. For the first time, we provide fast and verifiable protocols for all common activation functions, optimized to run in a secret-shared manner. The ability to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid, and so on allows previously written NNs to be used without modification, vastly reducing developer effort and code complexity. ReLU has been found to converge faster and to be more computationally efficient than non-linear functions such as sigmoid or tanh. However, we argue that it would be remiss not to extend the mechanism to non-linear functions such as the logistic sigmoid, tanh, and softmax, which are fundamental thanks to their ability to express outputs as probabilities and their universal approximation property. Their role in RNNs and several recent advances also make them increasingly relevant.
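A minimal additive secret-sharing sketch shows the substrate such frameworks build on; S++'s actual protocols for sigmoid, softmax, and exponentiation are considerably more involved, and the modulus and server count here are arbitrary assumptions:

```python
import random

# Minimal additive secret sharing over Z_Q: each server holds one share
# and alone learns nothing about the underlying value.
Q = 2**32

def share(x, n_servers=3):
    shares = [random.randrange(Q) for _ in range(n_servers - 1)]
    shares.append((x - sum(shares)) % Q)     # shares sum to x mod Q
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def add_shares(a, b):
    # Addition is local: each server adds its own shares, no communication.
    return [(ai + bi) % Q for ai, bi in zip(a, b)]

a, b = share(12), share(30)
print(reconstruct(add_shares(a, b)))  # 42
```

Linear operations (like the addition above) stay local to each server; it is the non-linear activations that require the interactive protocols the paper contributes.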