Side channels represent a broad class of security vulnerabilities that have been demonstrated in many applications. Because completely eliminating side channels often incurs prohibitively high overhead, a principled trade-off between cost and leakage is needed. In this paper, we make the case for using maximal leakage to analyze such trade-offs. Maximal leakage is an operationally interpretable leakage metric designed for side channels. We review its most useful theoretical properties from prior work and demonstrate empirically that conventional metrics such as mutual information and channel capacity underestimate the threat posed by side channels, whereas maximal leakage does not. We also formulate the cost-leakage trade-off as an optimization problem in terms of maximal leakage. We show that this problem can be cast as a linear program, and that optimal protection can be achieved by combining at most two deterministic schemes.
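To make the comparison concrete, here is a minimal sketch of the two metrics the abstract contrasts. Maximal leakage of a channel P(y|x) is log2 of the sum over outputs y of the largest column entry max_x P(y|x); mutual information additionally depends on the prior. The function names and the toy identity channel below are illustrative, not from the paper.

```python
import math

def maximal_leakage(channel):
    # channel[x][y] = P(y|x); maximal leakage in bits is
    # log2 of the sum over y of the largest entry in column y.
    return math.log2(sum(max(col) for col in zip(*channel)))

def mutual_information(prior, channel):
    # I(X;Y) in bits for a prior over X and channel P(y|x).
    ny = len(channel[0])
    py = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
          for y in range(ny)]
    mi = 0.0
    for x, px in enumerate(prior):
        for y in range(ny):
            pyx = channel[x][y]
            if px > 0 and pyx > 0:
                mi += px * pyx * math.log2(pyx / py[y])
    return mi
```

On an identity channel with a skewed prior such as [0.99, 0.01], mutual information is only H(X) ≈ 0.08 bits, even though the adversary learns the secret exactly; maximal leakage reports the full 1 bit, illustrating why the weaker metrics can underestimate the threat.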
Blowfish privacy is a recent generalisation of differential privacy that enables improved utility while maintaining privacy policies with semantic guarantees, a factor that has driven the popularity of differential privacy in computer science. This p
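For background on the baseline that Blowfish generalises, a minimal sketch of the standard ε-differentially-private Laplace mechanism (not Blowfish itself, whose policy-specific mechanisms differ) is shown below; the function name and sampling-by-difference-of-exponentials trick are implementation choices, not from the abstract.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    # epsilon-DP release of a numeric query: add Laplace noise with
    # scale sensitivity/epsilon, sampled here as the difference of
    # two exponential draws (a standard way to generate Laplace noise).
    scale = sensitivity / epsilon
    noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
    return true_value + noise
```

Blowfish relaxes this by letting a policy specify which pairs of secrets must stay indistinguishable, which is what allows less noise (better utility) for the pairs the policy does not protect.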
We study the information leakage to a guessing adversary in zero-error source coding. The source coding problem is defined by a confusion graph capturing the distinguishability between source symbols. The information leakage is measured by the ratio
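The zero-error constraint the abstract describes can be sketched as a graph-coloring problem: symbols joined by an edge in the confusion graph must receive distinct codewords, so any proper coloring is a valid zero-error code. The greedy heuristic below is an illustrative sketch (it need not use the minimum number of codewords, i.e., the chromatic number), and the function name is ours.

```python
def greedy_zero_error_code(n, edges):
    # Assign codewords (integers) to symbols 0..n-1 so that any two
    # confusable symbols (an edge in the confusion graph) get
    # distinct codewords: a greedy proper coloring.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    code = {}
    for v in range(n):
        used = {code[u] for u in adj[v] if u in code}
        c = 0
        while c in used:
            c += 1
        code[v] = c
    return code
```

On a 5-cycle confusion graph, for example, greedy coloring in vertex order uses 3 codewords, matching the cycle's chromatic number.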
Most methods for publishing data with privacy guarantees introduce randomness into datasets which reduces the utility of the published data. In this paper, we study the privacy-utility tradeoff by taking maximal leakage as the privacy measure and the
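A trade-off of this kind can be solved as a linear program: the maximal-leakage constraint sum_y max_x P(y|x) <= 2^ε is linearized with auxiliary variables t_y >= P(y|x). The sketch below (assuming NumPy and SciPy are available) follows this standard linearization; it is our illustration, not necessarily the paper's exact program.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_mechanism(prior, cost, max_leakage_bits):
    # Minimize expected cost over channels P(y|x) subject to the
    # maximal-leakage budget sum_y max_x P(y|x) <= 2**max_leakage_bits,
    # linearized with auxiliary variables t_y >= P(y|x) for all x.
    nx, ny = cost.shape
    nvar = nx * ny + ny                     # channel entries, then t_y
    c = np.zeros(nvar)
    for x in range(nx):
        c[x * ny:(x + 1) * ny] = prior[x] * cost[x]
    # equality constraints: each row of the channel sums to 1
    A_eq = np.zeros((nx, nvar))
    b_eq = np.ones(nx)
    for x in range(nx):
        A_eq[x, x * ny:(x + 1) * ny] = 1.0
    # inequalities: P(y|x) <= t_y, and sum_y t_y <= 2**budget
    A_ub, b_ub = [], []
    for x in range(nx):
        for y in range(ny):
            row = np.zeros(nvar)
            row[x * ny + y] = 1.0
            row[nx * ny + y] = -1.0
            A_ub.append(row)
            b_ub.append(0.0)
    budget = np.zeros(nvar)
    budget[nx * ny:] = 1.0
    A_ub.append(budget)
    b_ub.append(2.0 ** max_leakage_bits)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq)
    return res.fun, res.x[:nx * ny].reshape(nx, ny)
```

With a 0-1 cost, a uniform binary prior, and a 0-bit budget the channel must be independent of the secret (expected cost 0.5), while a 1-bit budget admits the identity channel at zero cost.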
In a federated learning system, parameter gradients are shared among participants and the central aggregator, while the original data never leave their protected source domain. However, the gradient itself might carry enough information for precise
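The simplest instance of this leakage can be sketched in a few lines: for a single-example squared loss on a linear model, the shared gradient is a scalar multiple of the private input, so its direction reveals the input up to sign and scale. The toy model and function names below are illustrative, not a real gradient-inversion attack.

```python
import math

def linear_grad(w, x, y):
    # gradient of the single-example loss 0.5 * (w.x - y)^2 w.r.t. w:
    # grad = (w.x - y) * x, i.e. a scalar multiple of the input x
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [err * xi for xi in x]

def recover_direction(grad):
    # a server seeing the gradient recovers the direction of the
    # private input x (up to sign and scale) by normalizing it
    norm = math.sqrt(sum(g * g for g in grad))
    return [g / norm for g in grad]
```

Deeper networks require iterative reconstruction, but the underlying principle is the same: gradients are functions of the private examples.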
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local device
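The aggregation step at the heart of this framework can be sketched as a size-weighted average of client updates, in the style of FedAvg; the flattened-parameter representation below is a simplification, and only these updates, never the raw data, reach the server.

```python
def fedavg(client_params, client_sizes):
    # size-weighted average of the clients' flattened model parameters;
    # client_params[k] is client k's parameter vector, client_sizes[k]
    # its local dataset size
    total = sum(client_sizes)
    n = len(client_params[0])
    return [sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
            for i in range(n)]
```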