
Universally Utility-Maximizing Privacy Mechanisms

Added by Mukund Sundararajan
Publication date: 2009
Language: English





A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether a given database row is included or excluded. The goal of this paper is strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a loss function). Our main result is that for each fixed count query and differential privacy level, there is a geometric mechanism $M^*$ -- a discrete variant of the simple and well-studied Laplace mechanism -- that is simultaneously expected-loss-minimizing for every possible user, subject to the differential privacy constraint. This is an extremely strong utility guarantee: every potential user $u$, no matter what its side information and preferences, derives as much utility from $M^*$ as from interacting with a differentially private mechanism $M_u$ that is optimally tailored to $u$.
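
To make the main result concrete, below is a minimal Python sketch of the two-sided geometric mechanism for a single count query (sensitivity 1). The trick of sampling the noise as the difference of two i.i.d. one-sided geometric variables is a standard construction rather than something specified in the abstract, and the function name is illustrative.

```python
import numpy as np

def geometric_mechanism(true_count: int, epsilon: float, rng=None) -> int:
    """Release a count query (sensitivity 1) under epsilon-differential
    privacy by adding two-sided geometric noise Z with
    P[Z = z] = (1 - alpha) / (1 + alpha) * alpha**abs(z), alpha = e^-epsilon."""
    rng = rng or np.random.default_rng()
    alpha = np.exp(-epsilon)
    # numpy's geometric counts trials until the first success, so subtract 1
    # to get the number of failures (support {0, 1, 2, ...}); the difference
    # of two such i.i.d. draws is two-sided geometric.
    x = rng.geometric(1.0 - alpha) - 1
    y = rng.geometric(1.0 - alpha) - 1
    return true_count + int(x - y)

# Example: release a true count of 42 at privacy level epsilon = 0.5.
print(geometric_mechanism(42, 0.5))
```

Since consecutive noise probabilities differ by exactly a factor of alpha = e^(-epsilon), changing the true count by one (the effect of adding or removing a row) changes each output's probability by at most a factor of e^(epsilon), which is precisely the differential privacy constraint.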



Related research

LDP (Local Differential Privacy) has been widely studied as a way to estimate statistics of personal data (e.g., the distribution underlying the data) while protecting users' privacy. Although LDP does not require a trusted third party, it regards all personal data as equally sensitive, which causes excessive obfuscation and hence a loss of utility. In this paper, we introduce the notion of ULDP (Utility-optimized LDP), which provides a privacy guarantee equivalent to LDP only for sensitive data. We first consider the setting where all users use the same obfuscation mechanism, and propose two mechanisms providing ULDP: utility-optimized randomized response and utility-optimized RAPPOR. We then consider the setting where the distinction between sensitive and non-sensitive data can differ from user to user. For this setting, we propose a personalized ULDP mechanism with semantic tags to estimate the distribution of personal data with high utility while keeping secret what is sensitive for each user. We show theoretically and experimentally that our mechanisms provide much higher utility than existing LDP mechanisms when a large portion of the data is non-sensitive. We also show that when most of the data are non-sensitive, our mechanisms provide almost the same utility as non-private mechanisms in the low privacy regime.
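For context, here is a minimal sketch of standard k-ary randomized response, the epsilon-LDP baseline that the utility-optimized variants above modify; the interface is illustrative, and the actual uRR/uRAPPOR mechanisms differ in how they treat non-sensitive values.

```python
import numpy as np

def k_randomized_response(value: int, k: int, epsilon: float, rng=None) -> int:
    """Standard k-ary randomized response over {0, ..., k-1} (k >= 2):
    report the true value with probability e^eps / (e^eps + k - 1),
    otherwise report a uniformly random *other* value. This satisfies
    epsilon-LDP because any two inputs yield output probabilities
    within a factor of e^eps of each other."""
    rng = rng or np.random.default_rng()
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return value
    other = int(rng.integers(k - 1))  # uniform over the k - 1 other values
    return other if other < value else other + 1

print(k_randomized_response(3, k=10, epsilon=1.0))
```

ULDP's utility-optimized randomized response keeps this guarantee only on the sensitive part of the domain, which is what lets non-sensitive values be reported with less distortion.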
Differential privacy (DP) and local differential privacy (LDP) are frameworks to protect sensitive information in data collections. Both are based on obfuscation. In DP, the noise is added to the result of queries on the dataset, whereas in LDP the noise is added directly to the individual records, before they are collected. The main advantage of LDP with respect to DP is that it does not need to assume a trusted third party. The main disadvantage is that the trade-off between privacy and utility is usually worse than in DP, and typically, retrieving reasonably good statistics from the locally sanitized data requires a huge collection of them. In this paper, we focus on the problem of estimating counting queries from collections of noisy answers, and we propose a variant of LDP based on the addition of geometric noise. Our main result is that geometric noise has better statistical utility than other LDP mechanisms from the literature.
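As a sketch of this counting-query setting (assuming, for illustration, that each user holds a 0/1 attribute), each user can add zero-mean two-sided geometric noise to their own answer before reporting, and the analyst simply sums the reports: the sum is an unbiased estimate of the true count. The exact mechanism and estimator in the paper may differ.

```python
import numpy as np

def local_geometric_report(bit: int, epsilon: float, rng) -> int:
    """Each user perturbs their 0/1 answer with two-sided geometric noise
    (difference of two i.i.d. one-sided geometric variables)."""
    alpha = np.exp(-epsilon)
    z = rng.geometric(1.0 - alpha) - rng.geometric(1.0 - alpha)
    return bit + int(z)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=10_000)  # hypothetical private 0/1 attributes
reports = [local_geometric_report(int(b), 1.0, rng) for b in data]
# The noise is zero-mean, so the raw sum of reports is already unbiased.
print("true count:", int(data.sum()), " estimate:", sum(reports))
```

The per-user noise variance is fixed, so the standard deviation of the estimate grows like sqrt(n) while the count itself grows like n; this is why a huge collection of locally sanitized answers is needed for good accuracy.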
Di Zhuang, J. Morris Chang (2020)
In the big data era, more and more cloud-based data-driven applications are developed that leverage individual data to provide certain valuable services (the utilities). On the other hand, since the same set of individual data could be utilized to infer individuals' sensitive information, this creates new channels to snoop on individuals' privacy. Hence it is of great importance to develop techniques that enable data owners to release privatized data that can still be utilized for a certain intended purpose. Existing data releasing approaches, however, are either privacy-emphasized (no consideration of utility) or utility-driven (no guarantees on privacy). In this work, we propose a two-step perturbation-based utility-aware privacy-preserving data releasing framework. First, certain predefined privacy and utility problems are learned from public domain data (background knowledge). Our approach then leverages the learned knowledge to precisely perturb the data owner's data into privatized data that can be successfully utilized for the intended purpose (learning to succeed), without jeopardizing the predefined privacy (training to fail). Extensive experiments have been conducted on the Human Activity Recognition, Census Income and Bank Marketing datasets to demonstrate the effectiveness and practicality of our framework.
We study the problem of matching agents who arrive at a marketplace over time and leave after d time periods. Agents can only be matched while they are present in the marketplace. Each pair of agents can yield a different match value, and the planner's goal is to maximize the total value over a finite time horizon. We study matching algorithms that perform well over any sequence of arrivals when there is no a priori information about the match values or arrival times. Our main contribution is a 1/4-competitive algorithm. The algorithm randomly selects a subset of agents who will wait until right before their departure to get matched, and maintains a maximum-weight matching with respect to the other agents; a simplified sketch appears below. The primal-dual analysis of the algorithm hinges on a careful comparison between the initial dual value associated with an agent when it first arrives and the final value after d time steps. It is also shown that no algorithm can be better than 1/2-competitive. We extend the model to the case in which departure times are drawn i.i.d. from a distribution with non-decreasing hazard rate, and establish a 1/8-competitive algorithm in this setting. Finally, we show on real-world data that a modified version of our algorithm performs well in practice.
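The following is a highly simplified rendering of the random-waiting idea, not the paper's exact primal-dual algorithm: each agent independently waits with some probability, non-waiting agents sit in a tentative maximum-weight matching that is recomputed every period, and a tentative pair is finalized only when one of its endpoints is about to depart. All names are illustrative, and networkx supplies the matching subroutine.

```python
import random
import networkx as nx

def online_match(arrivals, d, value, p_wait=0.5, seed=0):
    """arrivals: dict mapping period -> list of agent ids arriving then.
    value(u, v): nonnegative match value of the pair (u, v).
    Returns the total value collected by this simplified heuristic."""
    rng = random.Random(seed)
    arrive, waits, matched, total = {}, {}, set(), 0.0
    for t in range(max(arrivals) + d + 1):
        for a in arrivals.get(t, []):
            arrive[a] = t
            waits[a] = rng.random() < p_wait  # coin flip on arrival
        # Agents still present and unmatched; waiting agents join only
        # in their final period before departure.
        pool = [a for a in arrive if a not in matched and arrive[a] + d > t]
        active = [a for a in pool if not waits[a] or t == arrive[a] + d - 1]
        G = nx.Graph()
        G.add_weighted_edges_from(
            (u, v, value(u, v))
            for i, u in enumerate(active) for v in active[i + 1:])
        for u, v in nx.max_weight_matching(G):
            # Commit a tentative pair only once an endpoint must leave.
            if arrive[u] + d - 1 == t or arrive[v] + d - 1 == t:
                matched |= {u, v}
                total += value(u, v)
    return total

# Toy usage: agents 0..3 arrive over two periods and stay d = 2 periods.
vals = {(0, 1): 3.0, (0, 2): 1.0, (0, 3): 1.0,
        (1, 2): 2.0, (1, 3): 1.0, (2, 3): 4.0}
print(online_match({0: [0, 1], 1: [2, 3]}, d=2,
                   value=lambda u, v: vals[tuple(sorted((u, v)))]))
```

This keeps the spirit of the approach, delaying commitment so that later, possibly more valuable partners can displace earlier tentative matches, without reproducing the paper's dual bookkeeping.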
Fuzzy systems have good modeling capabilities in several data science scenarios, and can provide intelligence models with explainability and interpretability. In contrast to transaction data, which have been extensively studied, sequence data are more common in real-life applications. To obtain a human-explainable data intelligence model for decision making, in this study we investigate explainable fuzzy-theoretic utility mining on multi-sequences, and give a more normative formulation of the problem of fuzzy utility mining on sequences. By exploring fuzzy set theory for utility mining, we propose a novel method termed pattern-growth fuzzy utility mining (PGFUM) for mining fuzzy high-utility sequences with linguistic meaning. For sequence data, PGFUM reflects the fuzzy quantity and utility regions of sequences. To improve the efficiency and feasibility of PGFUM, we develop two compressed data structures with explainable fuzziness. Furthermore, one existing and two new upper bounds on the explainable fuzzy utility of candidates are adopted in three proposed pruning strategies to substantially reduce the search space and thus expedite the mining process. Finally, the proposed PGFUM algorithm is compared with PFUS, the only currently available method for the same task, through extensive experimental evaluation. The results demonstrate that PGFUM achieves not only human-explainable mining results but also high efficiency in terms of runtime and memory cost.