
Privacy-Utility Trade-Offs Against Limited Adversaries

Published by: Xiaoming Duan
Publication date: 2021
Research field: Information engineering
Research language: English





We study privacy-utility trade-offs where users share privacy-correlated useful information with a service provider to obtain some utility. The service provider is adversarial in the sense that it can infer the users' private information from the shared useful information. To minimize the privacy leakage while maintaining a desired level of utility, the users carefully perturb the useful information via a probabilistic privacy mapping before sharing it. We focus on the setting in which the adversary attempting an inference attack on the users' privacy has potentially biased information about the statistical correlation between the private and useful variables. This information asymmetry between the users and the limited adversary leads to better privacy guarantees than in the case of an omniscient adversary under the same utility requirement. We first identify assumptions on the adversary's information under which the inference costs are well defined and finite. Then, we characterize the impact of the information asymmetry and show that it increases the inference costs for the adversary. We further formulate the design of the privacy mapping against a limited adversary as a difference-of-convex-functions program and solve it via the concave-convex procedure. When the adversary's information is not precisely available, we adopt a Bayesian view and represent the adversary's information by a probability distribution. In this case, the expected cost for the adversary does not admit a closed-form expression, and we establish and maximize a lower bound on the expected cost. We provide a numerical example based on a census data set to illustrate the theoretical results.
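To give a feel for the kind of optimization the abstract describes, the following is a minimal numerical sketch, not the paper's difference-of-convex formulation or its concave-convex procedure: it picks a row-stochastic privacy mapping Q(y|x) that minimizes an illustrative leakage measure, the mutual information I(S; Y) between the secret and the released variable, subject to a distortion-based utility constraint. The joint distribution p_sx, the Hamming distortion, and the budget D_max are made-up assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up joint distribution p(s, x): S is the private variable, X the useful one.
p_sx = np.array([[0.30, 0.10, 0.10],
                 [0.05, 0.20, 0.25]])
p_x = p_sx.sum(axis=0)
p_s = p_sx.sum(axis=1)
n_s, n_x = p_sx.shape
n_y = n_x                        # the released variable Y shares X's alphabet
dist = 1.0 - np.eye(n_x)         # Hamming distortion d(x, y)
D_max = 0.3                      # utility requirement: E[d(X, Y)] <= D_max
eps = 1e-12

def q_of(theta):
    """Interpret the flat parameter vector as a mapping Q(y|x), one row per x."""
    return theta.reshape(n_x, n_y)

def leakage(theta):
    """Illustrative leakage measure: mutual information I(S; Y) under Q."""
    q = q_of(theta)
    p_sy = p_sx @ q                               # p(s, y) = sum_x p(s, x) Q(y|x)
    p_y = p_sy.sum(axis=0)
    ratio = p_sy / (np.outer(p_s, p_y) + eps)
    return float((p_sy * np.log(ratio + eps)).sum())

def distortion(theta):
    """Expected distortion E[d(X, Y)] between the true and released values."""
    return float((p_x[:, None] * q_of(theta) * dist).sum())

constraints = [
    {"type": "eq",   "fun": lambda t: q_of(t).sum(axis=1) - 1.0},  # rows of Q sum to 1
    {"type": "ineq", "fun": lambda t: D_max - distortion(t)},      # utility constraint
]
bounds = [(0.0, 1.0)] * (n_x * n_y)
theta0 = np.eye(n_x).flatten()    # start from the identity mapping (no perturbation)

res = minimize(leakage, theta0, method="SLSQP", bounds=bounds, constraints=constraints)

print("privacy mapping Q(y|x):\n", q_of(res.x).round(3))
print("leakage I(S;Y):", round(leakage(res.x), 4))
print("expected distortion:", round(distortion(res.x), 4))
```

In the paper the objective is instead the limited adversary's inference cost, which turns the program into a difference of convex functions; the concave-convex procedure then solves it by repeatedly linearizing the concave part and solving the resulting convex subproblem.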


Read also

We consider a user releasing her data, which contains some personal information, in return for a service. We model the user's personal information as two correlated random variables: one, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true hypothesis, albeit with different statistics. The user manages data release in an online fashion such that the maximum amount of information is revealed about the latent useful variable, while the confidence for the sensitive variable is kept below a predefined level. For the utility, we consider both the probability of correct detection of the useful variable and the mutual information (MI) between the useful variable and the released data. We formulate both problems as a Markov decision process (MDP) and numerically solve them by advantage actor-critic (A2C) deep reinforcement learning (RL).
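As a toy illustration of the sequential release setting above (not the authors' A2C implementation), the sketch below models the personal information as a pair of binary variables (S, U), uses two made-up release mechanisms, and lets a random policy stand in for the learned actor. The Bayesian belief update, the utility reward (certainty about U), and the confidence limit on the secret follow the abstract's description; all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up prior over the joint hypothesis (S, U) with binary secret S and
# binary useful variable U; hypotheses are ordered 00, 01, 10, 11.
p_joint = np.array([0.3, 0.2, 0.1, 0.4])
true_h = rng.choice(4, p=p_joint)

# Two illustrative release mechanisms, each a channel from the hypothesis to a
# binary observation: the first reveals mainly S, the second mainly U.
mechanisms = [
    np.array([[0.8, 0.2], [0.8, 0.2], [0.3, 0.7], [0.3, 0.7]]),
    np.array([[0.7, 0.3], [0.2, 0.8], [0.7, 0.3], [0.2, 0.8]]),
]
confidence_limit = 0.8   # the posterior on the secret must stay below this level

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def step(belief, action):
    """Release one observation through the chosen mechanism and update the belief."""
    obs = rng.choice(2, p=mechanisms[action][true_h])
    posterior = belief * mechanisms[action][:, obs]
    posterior /= posterior.sum()
    p_s = np.array([posterior[0] + posterior[1], posterior[2] + posterior[3]])
    p_u = np.array([posterior[0] + posterior[2], posterior[1] + posterior[3]])
    reward = -entropy(p_u)                   # utility: certainty about U
    done = p_s.max() >= confidence_limit     # privacy: stop if the secret is exposed
    return posterior, reward, done

belief = p_joint.copy()
for t in range(10):
    action = int(rng.integers(2))   # random policy; A2C would learn this choice
    belief, reward, done = step(belief, action)
    print(f"t={t} action={action} reward={reward:.3f} belief={belief.round(2)}")
    if done:
        print("confidence limit on the secret reached; the episode ends")
        break
```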
In this work we consider the communication of information in the presence of an online adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x = x_1, ..., x_n symbol by symbol over a communication channel. The adversarial jammer can view the transmitted symbols x_i one at a time and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. More generally, for a delay parameter 0 < d < 1, we study the scenario in which the jammer's decision on the corruption of x_i must depend solely on x_j for j < i - dn. In this work, we initiate the study of codes for online adversaries and present a tight characterization of the amount of information one can transmit in both the 0-delay and, more generally, the d-delay online setting. We prove tight results for both additive and overwrite jammers when the transmitted symbols are assumed to be over a sufficiently large field F. Finally, we extend our results to a jam-or-listen online model, where the online adversary can either jam a symbol or eavesdrop on it. We again provide a tight characterization of the achievable rate for several variants of this model. The rate regions we prove for each model are information-theoretic in nature and hold for computationally unbounded adversaries. The rate regions are characterized by simple piecewise linear functions of p and d. The codes we construct to attain the optimal rate for each scenario are computationally efficient.
Network coding is studied when an adversary controls a subset of nodes in the network of limited quantity but unknown location. This problem is shown to be more difficult than when the adversary controls a given number of edges in the network, in that linear codes are insufficient. To solve the node problem, the class of Polytope Codes is introduced. Polytope Codes are constant-composition codes operating over bounded polytopes in integer vector fields. The polytope structure creates additional complexity, but it induces properties on the marginal distributions of code vectors so that the validity of codewords can be checked by internal nodes of the network. It is shown that Polytope Codes achieve a cut-set bound for a class of planar networks. It is also shown that this cut-set bound is not always tight, and a tighter bound is given for an example network.
How to contain the spread of the COVID-19 virus is a major concern for most countries. As the situation continues to change, various countries are making efforts to reopen their economies by lifting some restrictions and enforcing new measures to prevent the spread. In this work, we review some approaches that have been adopted to contain the COVID-19 virus, such as contact tracing, cluster identification, movement restrictions, and status validation. Specifically, we classify available techniques based on characteristics such as technology, architecture, trade-offs (privacy vs. utility), and the phase of adoption. We present a novel approach for evaluating privacy using both qualitative and quantitative measures of privacy-utility assessment of contact tracing applications. In this new method, we classify utility at three distinct levels: no privacy, 100% privacy, and at a level k, where k is set by the system providing the utility or privacy.
Unlike traditional file transfer, where only the total delay matters, streaming applications impose delay constraints on each packet and require packets to arrive in order. To achieve fast in-order packet decoding, we have to compromise on the throughput. We study this trade-off between throughput and smoothness in packet decoding. We first consider point-to-point streaming and analyze how the trade-off is affected by the frequency of block-wise feedback, whereby the source receives full channel state feedback at periodic intervals. We show that frequent feedback can drastically improve the throughput-smoothness trade-off. Then we consider the problem of multicasting a packet stream to two users. For both point-to-point and multicast streaming, we propose a spectrum of coding schemes that span different throughput-smoothness trade-offs. One can choose an appropriate coding scheme from these, depending upon the delay sensitivity and bandwidth limitations of the application. This work introduces a novel style of analysis using renewal processes and Markov chains to analyze coding schemes.