
The Privacy Paradox and Optimal Bias-Variance Trade-offs in Data Acquisition

Published by: Yu Su
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





While users claim to be concerned about privacy, often they do little to protect their privacy in their online actions. One prominent explanation for this privacy paradox is that when an individual shares her data, it is not just her privacy that is compromised; the privacy of other individuals with correlated data is also compromised. This information leakage encourages oversharing of data and significantly impacts the incentives of individuals in online platforms. In this paper, we study the design of mechanisms for data acquisition in settings with information leakage and verifiable data. We design an incentive compatible mechanism that optimizes the worst-case trade-off between bias and variance of the estimation subject to a budget constraint, where the worst-case is over the unknown correlation between costs and data. Additionally, we characterize the structure of the optimal mechanism in closed form and study monotonicity and non-monotonicity properties of the marketplace.
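To make the budgeted bias-variance tension concrete, here is a minimal Python sketch of a naive baseline, not the paper's incentive-compatible mechanism: buy verifiable data points from the cheapest agents until a budget runs out and report the sample mean. When costs correlate with data, the excluded agents skew the estimate, so reducing variance by buying more points trades off against worst-case bias. The function names, the [0, 1] data range, and the synthetic cost model are illustrative assumptions.

import numpy as np

def naive_acquisition(y, c, budget):
    """Toy budgeted data acquisition: purchase reports from the cheapest agents,
    then return the sample mean plus a crude worst-case bias / variance account.
    Assumes verifiable data in [0, 1]; this is a naive baseline, not the optimal
    mechanism characterized in the paper."""
    order = np.argsort(c)
    affordable = np.cumsum(c[order]) <= budget
    bought = order[affordable]
    if bought.size == 0:
        return None
    estimate = y[bought].mean()
    variance_proxy = 1.0 / bought.size            # shrinks as more data is bought
    worst_case_bias = 1.0 - bought.size / len(y)  # excluded mass under adversarial correlation
    return estimate, worst_case_bias, variance_proxy

# Costs that grow with the data value: cheap agents under-represent large values.
rng = np.random.default_rng(0)
y = rng.uniform(size=100)
c = 0.5 + y + rng.normal(scale=0.1, size=100)
print(naive_acquisition(y, c, budget=20.0))

The paper's mechanism instead optimizes this trade-off over incentive-compatible allocations and payments, with the worst case taken over the unknown correlation between costs and data.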


Read also

We consider the least-squares regression problem and provide a detailed asymptotic analysis of the performance of averaged constant-step-size stochastic gradient descent (a.k.a. least-mean-squares). In the strongly-convex case, we provide an asymptotic expansion up to explicit exponentially decaying terms. Our analysis leads to new insights into stochastic approximation algorithms: (a) it gives a tighter bound on the allowed step-size; (b) the generalization error may be divided into a variance term that decays as $O(1/n)$, independently of the step-size $\gamma$, and a bias term that decays as $O(1/(\gamma^2 n^2))$; (c) when allowing non-uniform sampling, the choice of a good sampling density depends on whether the variance or bias term dominates. In particular, when the variance term dominates, optimal sampling densities do not lead to much gain, while when the bias term dominates, we can choose larger step-sizes that lead to significant improvements.
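As a concrete reference point, the following numpy sketch implements the algorithm analyzed above, averaged constant-step-size SGD (least-mean-squares) on a synthetic least-squares problem. The step-size, problem dimensions, and noise level are illustrative choices, not values from the paper.

import numpy as np

def averaged_lms(X, y, gamma, theta0=None):
    """Constant-step-size SGD with Polyak-Ruppert averaging for the
    least-squares objective 0.5 * E[(x . theta - y)^2]."""
    n, d = X.shape
    theta = np.zeros(d) if theta0 is None else theta0.copy()
    theta_bar = np.zeros(d)
    for i in range(n):
        x_i, y_i = X[i], y[i]
        grad = (x_i @ theta - y_i) * x_i            # stochastic gradient at sample i
        theta -= gamma * grad                        # constant step-size update
        theta_bar += (theta - theta_bar) / (i + 1)   # running average of iterates
    return theta_bar

# The averaged iterate approaches the least-squares solution on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
theta_star = rng.normal(size=5)
y = X @ theta_star + 0.1 * rng.normal(size=10_000)
print(np.linalg.norm(averaged_lms(X, y, gamma=0.05) - theta_star))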
Many socially valuable activities depend on sensitive information, such as medical research, public health policies, political coordination, and personalized digital services. This is often posed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, a vast amount of effort has been directed toward overcoming this trade-off to enable productive uses of information without also enabling undesired misuse, a goal we term `structured transparency'. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, including characterizing the fundamental components -- input privacy, output privacy, input verification, output verification, and flow governance -- and fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. Recent progress in developing `privacy-enhancing technologies' (PETs), such as secure computation and federated learning, may substantially reduce lingering use-misuse trade-offs in a number of domains. We conclude with several illustrations of structured transparency -- in open research, energy management, and credit scoring systems -- and a discussion of the risks of misuse of these tools.
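A toy illustration of the input-privacy component mentioned above, in the spirit of federated learning: each client releases only a local aggregate, and the server combines the aggregates without ever seeing raw records. This is a deliberately minimal sketch (a federated mean rather than federated model training), with made-up client data.

import numpy as np

def local_summary(data):
    """Client-side step: release only an aggregate, never the raw records."""
    return data.mean(axis=0), len(data)

def federated_mean(client_datasets):
    """Server-side step: combine per-client aggregates into a global estimate."""
    summaries = [local_summary(d) for d in client_datasets]
    total = sum(n for _, n in summaries)
    return sum(mean * n for mean, n in summaries) / total

clients = [np.random.default_rng(i).normal(loc=i, size=(50, 3)) for i in range(4)]
print(federated_mean(clients))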
Xiaoming Duan, Zhe Xu, Rui Yan (2021)
We study privacy-utility trade-offs where users share privacy-correlated useful information with a service provider to obtain some utility. The service provider is adversarial in the sense that it can infer the users' private information based on the shared useful information. To minimize the privacy leakage while maintaining a desired level of utility, the users carefully perturb the useful information via a probabilistic privacy mapping before sharing it. We focus on the setting in which the adversary attempting an inference attack on the users' privacy has potentially biased information about the statistical correlation between the private and useful variables. This information asymmetry between the users and the limited adversary leads to better privacy guarantees than the case of the omniscient adversary under the same utility requirement. We first identify assumptions on the adversary's information so that the inference costs are well-defined and finite. Then, we characterize the impact of the information asymmetry and show that it increases the inference costs for the adversary. We further formulate the design of the privacy mapping against a limited adversary using a difference of convex functions program and solve it via the concave-convex procedure. When the adversary's information is not precisely available, we adopt a Bayesian view and represent the adversary's information by a probability distribution. In this case, the expected cost for the adversary does not admit a closed-form expression, and we establish and maximize a lower bound of the expected cost. We provide a numerical example regarding a census data set to illustrate the theoretical results.
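For intuition, here is a minimal randomized-response sketch of a probabilistic privacy mapping. It stands in for the optimized mapping in the paper (the difference-of-convex program and concave-convex procedure are not reproduced), and the correlation between the private bit and the useful variable is a made-up example.

import numpy as np

def randomized_response(u, eps, k, rng):
    """Toy probabilistic privacy mapping over k categories: keep the useful
    value u with probability p(eps), otherwise report a uniform random category."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    return u if rng.random() < p else int(rng.integers(k))

rng = np.random.default_rng(0)
k, eps, n = 4, 1.0, 100_000
u = rng.integers(k, size=n)                     # useful variable
s = (u >= k // 2).astype(int)                   # private bit, correlated with u
reports = np.array([randomized_response(ui, eps, k, rng) for ui in u])

utility = (reports == u).mean()                 # how often the useful value survives
adversary = (reports >= k // 2).astype(int)     # adversary exploits the known correlation
print("utility:", utility, "adversary accuracy:", (adversary == s).mean())

Sweeping eps traces out the privacy-utility curve: larger eps preserves the useful value more often but also raises the adversary's inference accuracy.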
Since the global spread of Covid-19 began to overwhelm the attempts of governments to conduct manual contact-tracing, there has been much interest in using the power of mobile phones to automate the contact-tracing process through the development of exposure notification applications. The rough idea is simple: use Bluetooth or other data-exchange technologies to record contacts between users, enable users to report positive diagnoses, and alert users who have been exposed to sick users. Of course, there are many privacy concerns associated with this idea. Much of the work in this area has been concerned with designing mechanisms for tracing contacts and alerting users that do not leak additional information about users beyond the existence of exposure events. However, although designing practical protocols is of crucial importance, it is essential to realize that notifying users about exposure events may itself leak confidential information (e.g. that a particular contact has been diagnosed). Luckily, while digital contact tracing is a relatively new task, the generic problem of privacy and data disclosure has been studied for decades. Indeed, the framework of differential privacy further permits provable query privacy by adding random noise. In this article, we translate two results from statistical privacy and social recommendation algorithms to exposure notification. We thus prove some naive bounds on the degree to which accuracy must be sacrificed if exposure notification frameworks are to be made more private through the injection of noise.
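As a pointer to the noise-injection idea discussed above, the sketch below applies the standard Laplace mechanism from differential privacy to a count query. The query phrasing and parameters are illustrative, and the article's specific accuracy bounds are not reproduced here.

import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count (e.g. a user's number of recorded exposure events) with
    Laplace noise of scale sensitivity/epsilon, the standard differentially
    private mechanism for low-sensitivity queries."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(scale=sensitivity / epsilon)

# Smaller epsilon -> more noise -> more privacy, less accuracy.
for eps in (0.1, 0.5, 2.0):
    print(eps, laplace_count(true_count=3, epsilon=eps, rng=np.random.default_rng(1)))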
How to contain the spread of the COVID-19 virus is a major concern for most countries. As the situation continues to change, various countries are making efforts to reopen their economies by lifting some restrictions and enforcing new measures to prevent the spread. In this work, we review approaches that have been adopted to contain the COVID-19 virus, such as contact tracing, cluster identification, movement restrictions, and status validation. Specifically, we classify available techniques based on characteristics such as technology, architecture, trade-offs (privacy vs. utility), and the phase of adoption. We present a novel approach for evaluating privacy using both qualitative and quantitative measures of privacy-utility assessment of contact tracing applications. In this new method, we classify utility at three distinct levels: no privacy, 100% privacy, and a level k, where k is set by the system providing the utility or privacy.