Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market. As deep learning tasks are mostly computation-intensive, it has become common practice to process raw data on the device and send the deep neural network (DNN) features to the cloud, where they are further processed to produce final results. However, releasing features inevitably leaks information: an adversary can infer a significant amount about the original data from them. We propose a privacy-preserving reinforcement learning framework on top of the mobile-cloud infrastructure that operates on DNN structures. The framework learns a policy that modifies the base DNNs to prevent information leakage while maintaining high inference accuracy, and the policy can be readily transferred to larger DNNs to speed up learning. Extensive evaluations on a variety of DNNs show that our framework can find privacy-preserving DNN structures that defend against different privacy attacks.
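To make the setup concrete, here is a minimal sketch of this kind of policy search, assuming a REINFORCE-style policy that picks a per-layer modification level and is rewarded for high task accuracy minus an attacker's inference accuracy. The functions `task_accuracy` and `attacker_accuracy` are toy proxies standing in for the paper's evaluators, and all hyperparameters are illustrative.

```python
# Sketch: policy search over per-layer DNN "privatization" actions.
# Reward trades off utility (task accuracy) against leakage (attacker accuracy).
import numpy as np

rng = np.random.default_rng(0)
NUM_LAYERS, NUM_ACTIONS = 5, 3                # actions: 0=keep, 1=light, 2=heavy modification
logits = np.zeros((NUM_LAYERS, NUM_ACTIONS))  # per-layer policy parameters
LAMBDA, LR = 1.0, 0.5                         # privacy/utility trade-off, learning rate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def task_accuracy(actions):      # toy proxy: heavier modification hurts utility a little
    return 0.95 - 0.03 * actions.sum() + 0.01 * rng.standard_normal()

def attacker_accuracy(actions):  # toy proxy: heavier modification hurts the attacker more
    return 0.90 - 0.08 * actions.sum() + 0.01 * rng.standard_normal()

for step in range(200):
    probs = np.apply_along_axis(softmax, 1, logits)
    actions = np.array([rng.choice(NUM_ACTIONS, p=p) for p in probs])
    reward = task_accuracy(actions) - LAMBDA * attacker_accuracy(actions)
    for l, a in enumerate(actions):           # REINFORCE: reinforce rewarded actions
        grad = -probs[l]
        grad[a] += 1.0
        logits[l] += LR * reward * grad

print("learned per-layer modification policy:", probs.argmax(axis=1))
```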
This work presents Origami, which provides privacy-preserving inference for large deep neural network (DNN) models through a combination of enclave execution and cryptographic blinding, interspersed with accelerator-based computation. Origami partitions the model between trusted enclave execution and untrusted accelerator execution.
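The cryptographic blinding named in the abstract is, for a linear layer, the standard additive-blinding trick: the enclave hides the private input behind random noise, the untrusted accelerator does the heavy linear algebra on the blinded input, and the enclave removes the noise term afterward. A minimal sketch, assuming a trusted enclave and an untrusted accelerator; the variable names are illustrative, not Origami's API.

```python
# Sketch: additive blinding of one linear layer for offload to an untrusted accelerator.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 8))   # layer weights, known to the untrusted accelerator
x = rng.standard_normal(8)        # private input, known only to the enclave

# Inside the enclave: add a random blind r, so the accelerator sees only x + r.
r = rng.standard_normal(8)
blinded = x + r

# On the untrusted accelerator: the heavy linear computation on blinded data.
y_blinded = W @ blinded

# Back inside the enclave: subtract the (precomputable) term W @ r to de-blind.
y = y_blinded - W @ r

assert np.allclose(y, W @ x)      # linearity: W(x + r) - W r = W x
```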
This paper asks whether neural network pruning can be used as a tool to achieve differential privacy without losing much data utility. As a first step, it studies the relationship between neural network pruning and differential privacy.
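For reference, the pruning operation under study is simple in itself. A minimal sketch of global magnitude pruning, one common variant; the sparsity level and layer shape here are illustrative.

```python
# Sketch: global magnitude pruning keeps only the largest-magnitude weights.
import numpy as np

rng = np.random.default_rng(2)
weights = rng.standard_normal((64, 64))
SPARSITY = 0.9                                   # fraction of weights to remove

threshold = np.quantile(np.abs(weights), SPARSITY)
mask = np.abs(weights) >= threshold              # survivors: top 10% by magnitude
pruned = weights * mask

print(f"kept {mask.mean():.0%} of weights")
```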
The explosion of data collection has raised serious privacy concerns among users, since sharing data may also reveal sensitive information. The main goal of a privacy-preserving mechanism is to prevent a malicious third party from inferring such sensitive information.
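As a reference point for what such a mechanism looks like, here is a minimal sketch of the classic Laplace mechanism from differential privacy: a released statistic is perturbed with calibrated noise so that any single user's record has a bounded effect on what a third party observes. The epsilon, sensitivity, and data values are illustrative.

```python
# Sketch: Laplace mechanism for a counting query over hypothetical sensitive data.
import numpy as np

rng = np.random.default_rng(4)
incomes = rng.integers(20_000, 120_000, size=1_000)   # hypothetical sensitive records

EPSILON = 0.5
SENSITIVITY = 1.0                 # a counting query changes by at most 1 per user
true_count = int((incomes > 100_000).sum())
noisy_count = true_count + rng.laplace(scale=SENSITIVITY / EPSILON)

print(f"true: {true_count}, released: {noisy_count:.1f}")
```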
Mental health conditions remain underdiagnosed even in countries with widespread access to advanced medical care. The ability to accurately and efficiently predict mood from easily collectible data has several important implications for the early detection of such conditions.
Ensuring differential privacy of models learned from sensitive user data is an important goal that has been studied extensively in recent years. It is now known that for some basic learning problems, especially those involving high-dimensional data, producing an accurate private model may require substantially more data than its non-private counterpart.
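For context, the standard recipe for training a model with differential privacy is per-example gradient clipping plus Gaussian noise, as in DP-SGD. A minimal sketch on a toy linear model, assuming illustrative hyperparameters; this is the generic mechanism, not this paper's specific algorithm.

```python
# Sketch: DP-SGD-style training loop (clip each example's gradient, add noise).
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((256, 10))               # toy features, d = 10
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(256)
w = np.zeros(10)
CLIP, SIGMA, LR = 1.0, 1.0, 0.1                  # clip norm, noise multiplier, step size

for step in range(100):
    per_example = (X @ w - y)[:, None] * X       # per-example gradients, shape (n, d)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / CLIP)   # bound each example's influence
    noisy = clipped.sum(0) + SIGMA * CLIP * rng.standard_normal(10)
    w -= LR * noisy / len(X)

print("noisy private estimate of w:", np.round(w, 2))
```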