
Class Clown: Data Redaction in Machine Unlearning at Enterprise Scale

Added by David Saranchak
Publication date: 2020
Language: English





Individuals are gaining more control of their personal data through recent data privacy laws such as the General Data Protection Regulation and the California Consumer Privacy Act. One aspect of these laws is the ability to request that a business delete private information, the so-called right to be forgotten or right to erasure. These laws have serious financial implications for companies and organizations that train large, highly accurate deep neural networks (DNNs) on these valuable consumer data sets. However, a redaction request poses complex technical challenges: how should a business comply with the law while fulfilling core business operations? We introduce a DNN model lifecycle maintenance process that establishes how to handle specific data redaction requests while minimizing the need to completely retrain the model. Our process uses the membership inference attack as a compliance tool for every point in the training set. These attack models quantify the privacy risk of all training data points and form the basis of follow-on data redaction from an accurate deployed model; excision is implemented through incorrect label assignment within incremental model updates.
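As a rough illustration of the two ingredients the abstract describes, the sketch below scores training points with a simple confidence-based membership inference test and then "excises" a flagged point by pushing the model toward an incorrect label in an incremental update. This is a minimal sketch on synthetic data, assuming a scikit-learn setup; the helper names (`membership_scores`, `redact_by_relabel`) and the 95th-percentile risk cutoff are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: membership-inference risk scoring plus redaction
# via incorrect-label incremental updates (not the paper's exact code).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200, random_state=0)
model.fit(X_train, y_train)

def membership_scores(model, X, y):
    # Confidence the model assigns to the true label; unusually high
    # confidence relative to held-out data suggests memorization.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

train_scores = membership_scores(model, X_train, y_train)
test_scores = membership_scores(model, X_test, y_test)
threshold = np.quantile(test_scores, 0.95)   # illustrative risk cutoff
at_risk = np.where(train_scores > threshold)[0]

def redact_by_relabel(model, X_train, y_train, idx, n_updates=5):
    # Incremental updates push the model toward an incorrect label for
    # the redacted point, rather than retraining from scratch.
    wrong = 1 - y_train[idx]                 # binary task: flip the label
    for _ in range(n_updates):
        model.partial_fit(X_train[idx:idx + 1], [wrong])
    return model

if len(at_risk) > 0:
    model = redact_by_relabel(model, X_train, y_train, at_risk[0])
```

In the paper's process, such attack models would be maintained for every training point across the model's lifecycle; here a single threshold test stands in for the full per-point attack models.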


Related research

Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences that are chosen independently of the models that are published. If people choose to delete their data as a function of the published models (because they don't like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work which give guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, giving strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al. [2021] on CIFAR-10, MNIST, and Fashion-MNIST.
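For context on the attacked system, here is a minimal sketch of the shard-based deletion idea behind SISA: the training set is partitioned into shards, one constituent model is trained per shard, and deleting a point only requires retraining the shard that held it. The logistic-regression stand-in and names like `train_shard` and `delete_point` are illustrative assumptions, not the authors' API; the adaptive attack described above is not shown.

```python
# Sketch of SISA-style sharded training and cheap per-shard deletion.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
n_shards = 5
shard_idx = [np.arange(i, len(X), n_shards) for i in range(n_shards)]

def train_shard(idx):
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shard_idx]

def predict(x):
    # Aggregate by majority vote over the per-shard constituent models.
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return np.bincount(votes).argmax()

def delete_point(point_id):
    # Only the shard containing point_id is retrained; others are untouched.
    for s, idx in enumerate(shard_idx):
        if point_id in idx:
            shard_idx[s] = idx[idx != point_id]
            models[s] = train_shard(shard_idx[s])
            return

delete_point(42)   # unlearning cost: one shard retrained, not the full model
print(predict(X[0]))
```

The attack in the paper exploits exactly this structure: an adaptive deleter who has seen the published model can choose deletions that concentrate damage, which the non-adaptive guarantees do not cover.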
We present a large-scale characterization of attacker activity across 111 real-world enterprise organizations. We develop a novel forensic technique for distinguishing between attacker activity and benign activity in compromised enterprise accounts that yields few false positives and enables us to perform fine-grained analysis of attacker behavior. Applying our methods to a set of 159 compromised enterprise accounts, we quantify the duration of time attackers are active in accounts and examine thematic patterns in how attackers access and leverage these hijacked accounts. We find that attackers frequently dwell in accounts for multiple days to weeks, suggesting that delayed (non-real-time) detection can still provide significant value. Based on an analysis of the attackers' timing patterns, we observe two distinct modalities in how attackers access compromised accounts, which could be explained by the existence of a specialized market for hijacked enterprise accounts, in which one class of attackers focuses on compromising and selling account access to another class of attackers who exploit the access such hijacked accounts provide. Ultimately, our analysis sheds light on the state of enterprise account hijacking and highlights fruitful directions for a broader space of detection methods, ranging from new features that home in on malicious account behavior to the development of non-real-time detection methods that leverage malicious activity after an attack's initial point of compromise to more accurately identify attacks.
We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which, we argue, is suitable for the goal of exact unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a (maximal) coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
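The training primitive these TV-stable algorithms build on is noisy SGD. Below is a minimal sketch of one variant, applied to a least-squares objective; the step count, learning rate, and noise scale `sigma` are illustrative assumptions, and the coupling-based unlearning step from the paper is omitted.

```python
# Sketch of noisy SGD: each step uses a sampled point's gradient plus
# Gaussian noise, the source of the stability/privacy properties.
import numpy as np

def noisy_sgd(grad_fn, w0, data, steps=500, lr=0.1, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(steps):
        x = data[rng.integers(len(data))]          # sample one data point
        noise = sigma * rng.standard_normal(w.shape)
        w = w - lr * (grad_fn(w, x) + noise)       # noisy gradient step
    return w

# Example: least-squares on synthetic data; grad of 0.5 * (w.x - y)^2.
rng = np.random.default_rng(1)
data = [(rng.standard_normal(5), rng.standard_normal()) for _ in range(200)]
grad = lambda w, xy: (w @ xy[0] - xy[1]) * xy[0]
w_hat = noisy_sgd(grad, np.zeros(5), data)
```

Intuitively, the injected noise keeps the distribution over trained models close in total variation when one point changes, which is what lets an unlearning procedure couple the "with point" and "without point" training runs cheaply.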
Yang Liu, Zhuo Ma, Ximeng Liu (2020)
Nowadays, machine learning models, especially neural networks, have become prevalent in many real-world applications. These models are trained on a one-way trip from user data: once users contribute their data, there is no way to withdraw it, and it is well known that a neural network memorizes its training data. This contradicts the right to be forgotten clause of GDPR, potentially leading to law violations. To this end, machine unlearning has become a popular research topic, which allows users to eliminate memorization of their private data from a trained machine learning model. In this paper, we propose the first uniform metric, called forgetting rate, to measure the effectiveness of a machine unlearning method. It is based on the concept of membership inference and describes the transformation rate of the eliminated data from memorized to unknown after conducting unlearning. We also propose a novel unlearning method called Forsaken. It is superior to previous work in either utility or efficiency (when achieving the same forgetting rate). We benchmark Forsaken with eight standard datasets to evaluate its performance. The experimental results show that it can achieve more than 90% forgetting rate on average and causes less than 5% accuracy loss.
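To make the metric concrete, the sketch below computes a forgetting rate as the fraction of targeted points that a membership test flips from "member" to "non-member" after unlearning. The threshold-based membership call is a simplifying assumption for illustration; the paper's metric is built on a full membership inference attack.

```python
# Illustrative forgetting-rate computation in the spirit of the abstract.
import numpy as np

def is_member(confidences, threshold=0.9):
    # Call a point a "member" if the model is unusually confident on it.
    return confidences > threshold

def forgetting_rate(conf_before, conf_after, threshold=0.9):
    member_before = is_member(conf_before, threshold)
    member_after = is_member(conf_after, threshold)
    flipped = member_before & ~member_after    # memorized -> unknown
    return flipped.sum() / max(member_before.sum(), 1)

# Example: model confidences on the deleted points before/after unlearning.
before = np.array([0.99, 0.97, 0.95, 0.92, 0.91])
after  = np.array([0.60, 0.55, 0.93, 0.40, 0.50])
print(forgetting_rate(before, after))          # 0.8: 4 of 5 points forgotten
```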
The rise of IoT devices has led to the proliferation of smart buildings, offices, and homes worldwide. Although commodity IoT devices are employed by ordinary end users, complex environments such as smart buildings, smart offices, conference rooms, or hospitality require customized and highly reliable solutions. These systems, called Enterprise Internet of Things (EIoT) systems, connect such environments to the Internet and are professionally managed solutions usually offered by dedicated vendors. As EIoT systems require specialized training, software, and equipment to deploy, there has been very little research investigating the security of EIoT systems and their components. In effect, EIoT systems in smart settings such as smart buildings present an unprecedented and unexplored threat vector for an attacker. In this work, we explore EIoT system vulnerabilities and insecure development practices. Specifically, we focus on the use of drivers as an attack mechanism and introduce PoisonIvy, a set of novel attacks that demonstrate how an attacker can easily compromise EIoT system controllers using malicious drivers. Specifically, we show how drivers used to integrate third-party devices into EIoT systems can be misused in a systematic fashion. To demonstrate the capabilities of attackers, we implement and evaluate PoisonIvy using a testbed of real EIoT devices. We show that an attacker can perform DoS attacks, gain remote control, and maliciously abuse system resources of EIoT systems. To the best of our knowledge, this is the first work to analyze the (in)securities of EIoT deployment practices and demonstrate the associated vulnerabilities in this ecosystem. With this work, we raise awareness of the (in)secure development practices used for EIoT systems, the consequences of which can largely impact the security, privacy, reliability, and performance of millions of EIoT systems worldwide.
