In this work, we formally study the membership privacy risk of generative models and propose a membership privacy estimation framework. We formulate the membership privacy risk as a statistical divergence between training samples and hold-out samples, and propose sample-based methods to estimate this divergence. Unlike previous works, our proposed metric and estimators make realistic and flexible assumptions. First, we offer a generalizable metric as an alternative to accuracy for imbalanced datasets. Second, our estimators can estimate the membership privacy risk given any scalar- or vector-valued attribute of the learned model, whereas prior work requires access to specific attributes. This allows our framework to provide data-driven certificates of membership privacy risk for trained generative models. Finally, we show a connection to differential privacy, which allows our estimators to be used to understand the privacy budget ε needed for differentially private generative models. We demonstrate the utility of our framework through experiments on different generative models using various model attributes, yielding new insights about membership leakage and model vulnerabilities.
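The divergence in question can be estimated directly from samples. As a minimal sketch (not the authors' actual estimator), the snippet below uses per-sample loss as a hypothetical scalar attribute and estimates the total variation distance between its empirical distributions on training and hold-out samples; the attribute arrays, the gamma-distributed toy losses, and the histogram binning are all assumptions for illustration.

```python
import numpy as np

def estimate_membership_divergence(train_attr, holdout_attr, bins=50):
    """Histogram-based estimate of the total variation distance between
    the attribute distributions of training and hold-out samples."""
    lo = min(train_attr.min(), holdout_attr.min())
    hi = max(train_attr.max(), holdout_attr.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(train_attr, bins=edges)
    q, _ = np.histogram(holdout_attr, bins=edges)
    p = p / p.sum()  # normalize counts to empirical probabilities
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

# Synthetic per-sample losses: members tend to have lower loss (hypothetical).
rng = np.random.default_rng(0)
train_loss = rng.gamma(shape=2.0, scale=0.5, size=5000)
holdout_loss = rng.gamma(shape=2.0, scale=0.8, size=5000)
print(estimate_membership_divergence(train_loss, holdout_loss))
```

A larger estimate means the chosen attribute separates members from non-members more cleanly, i.e., a higher membership privacy risk.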
The heavy computational demand of deep learning models and the limited compute resources of edge devices call for cooperation between the edge device and the cloud service by splitting a deep model into two halves. However, transferring the intermediate …
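The device/cloud split described in this abstract can be illustrated in a few lines of PyTorch. This is a minimal sketch under assumed names (the toy model, the split_point, and the input shape are hypothetical), not the paper's actual partitioning scheme:

```python
import torch
import torch.nn as nn

# Hypothetical model; any nn.Sequential can be sliced the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # runs on the edge device
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # runs in the cloud
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

split_point = 2                    # layers [0, split_point) stay on-device
edge_half = model[:split_point]
cloud_half = model[split_point:]

x = torch.randn(1, 3, 32, 32)
features = edge_half(x)            # intermediate features sent over the network
logits = cloud_half(features)      # cloud completes the forward pass
print(logits.shape)                # torch.Size([1, 10])
```

It is exactly these transmitted intermediate features that raise the concern the abstract goes on to discuss.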
Machine learning models are increasingly made available to the masses through public query interfaces. Recent academic work has demonstrated that malicious users who can query such models are able to infer sensitive information about records within t…
Membership inference attacks seek to infer the membership of individual training instances of a privately trained model. This paper presents a membership privacy analysis and evaluation system, called MPLens, with three unique contributions. First, t…
Growing public awareness and government regulation of data privacy motivate new paradigms for collecting and analyzing data that are transparent and acceptable to data owners. We present a new concept of privacy and corresponding data formats, mechanisms …
Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model. Prior work has shown that the attack is feasible when the model is overfitted to its training data or when the adversary …
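The simplest instantiation of such an attack exploits overfitting directly: training records tend to receive higher model confidence than unseen records. Below is a minimal, hypothetical sketch of a confidence-thresholding attack; the confidence values and the threshold are illustrative assumptions, not results from any of the papers above.

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Predict 'member' when the target model's confidence on a record
    exceeds the threshold; overfitted models are typically more confident
    on training records than on unseen ones."""
    return confidences >= threshold

# Hypothetical confidences obtained by querying the target model.
member_conf = np.array([0.98, 0.95, 0.97, 0.80])     # training records
nonmember_conf = np.array([0.70, 0.92, 0.55, 0.60])  # unseen records

preds = confidence_attack(np.concatenate([member_conf, nonmember_conf]))
labels = np.array([1] * 4 + [0] * 4)  # ground-truth membership
accuracy = (preds.astype(int) == labels).mean()
print(f"attack accuracy: {accuracy:.2f}")
```

Accuracy well above 50% on balanced member/non-member sets signals membership leakage, though, as the second abstract above argues, accuracy alone can mislead on imbalanced datasets.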