The collection and sharing of individuals' data has become commonplace in many industries. Local differential privacy (LDP) is a rigorous approach to preserving data privacy even from a database administrator, unlike the more standard central differential privacy. To achieve LDP, one traditionally adds noise directly to each data dimension, but for high-dimensional data the level of noise required for sufficient anonymization all but destroys the data's utility. In this paper, we introduce a novel LDP mechanism that leverages representation learning to overcome the prohibitive noise requirements of direct methods. We demonstrate that, rather than simply estimating aggregate statistics of the privatized data, as is the norm in LDP applications, our method enables the training of performant machine learning models. Unique applications of our approach include private novel-class classification and the augmentation of clean datasets with additional privatized features. Methods that rely on central differential privacy are not applicable to such tasks. Our approach achieves significant performance gains on these tasks relative to state-of-the-art LDP benchmarks that noise the data directly.
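The "direct" baseline this abstract contrasts against can be made concrete with a small sketch. The code below is a minimal illustration, not the paper's mechanism: naive per-dimension Laplace noising under an $\epsilon$-LDP budget split evenly across $d$ features rescaled to $[0, 1]$. The function name and interface are hypothetical, chosen only to show why the required noise scale grows with dimensionality.

```python
# Minimal sketch (not the paper's mechanism): naive per-dimension
# Laplace noising for epsilon-LDP on a d-dimensional record.
# Assumes each feature has been clipped/rescaled to [0, 1], so the
# per-dimension L1 sensitivity is 1 and the budget is split evenly.
import numpy as np

def ldp_laplace_perturb(record, epsilon, rng=None):
    """Perturb one user's record locally, before it leaves the device."""
    rng = np.random.default_rng() if rng is None else rng
    record = np.clip(np.asarray(record, dtype=float), 0.0, 1.0)
    d = record.shape[0]
    # Splitting epsilon over d dimensions forces a noise scale of
    # d / epsilon per coordinate, so noise grows linearly with d.
    scale = d / epsilon
    return record + rng.laplace(loc=0.0, scale=scale, size=d)

# Example: with d = 100 and epsilon = 1, each coordinate receives
# Laplace noise of scale 100 -- far larger than the [0, 1] signal,
# which is the utility collapse the abstract refers to.
noisy = ldp_laplace_perturb(np.random.rand(100), epsilon=1.0)
```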
Privacy-preserving genomic data sharing is essential for increasing the pace of genomic research, and hence for paving the way towards personalized genomic medicine. In this paper, we introduce $(\epsilon, T)$-dependent local differential privacy (LDP) for
In deep learning with differential privacy (DP), the neural network typically achieves privacy at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of the DP
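The truncated abstract does not name the private training algorithm, but the convergence-versus-privacy trade-off it describes is commonly illustrated with a DP-SGD-style update: per-example gradient clipping followed by Gaussian noise. The sketch below is only such an illustration under that assumption, not the paper's algorithm or analysis, and every name in it is hypothetical.

```python
# Hedged sketch of one DP-SGD-style update: clip each example's gradient,
# average, then add Gaussian noise calibrated to the clipping norm.
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One noisy parameter update over a batch of per-example gradients."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # A larger noise_multiplier gives stronger privacy but noisier updates,
    # which is the slower convergence the abstract alludes to.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```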
Black-box machine learning models are used in critical decision-making domains, giving rise to several calls for more algorithmic transparency. The drawback is that model explanations can leak information about the training data and the explanation d
Traditional learning approaches for classification implicitly assume that each mistake has the same cost. In many real-world problems, though, the utility of a decision depends on the underlying context $x$ and decision $y$. However, directly incorpor
We consider the problem of reinforcing federated learning with formal privacy guarantees. We propose to employ Bayesian differential privacy, a relaxation of differential privacy for similarly distributed data, to provide sharper privacy loss bounds.