
The Permute-and-Flip Mechanism is Identical to Report-Noisy-Max with Exponential Noise

Published by: Zeyu Ding
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The permute-and-flip mechanism is a recently proposed differentially private selection algorithm that was shown to outperform the exponential mechanism. In this paper, we show that permute-and-flip is equivalent to the well-known report noisy max algorithm with exponential noise.
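To make the comparison concrete, the two mechanisms can be sketched side by side. The snippet below is a minimal illustration, not code from the paper: it assumes scores with sensitivity `sens`, the exp(ε(q_i − q*)/(2Δ)) acceptance rule of the standard permute-and-flip presentation, and i.i.d. exponential noise of scale 2Δ/ε for report noisy max. Under the paper's claim, the two samplers induce the same output distribution.

```python
import numpy as np

def permute_and_flip(q, eps, sens=1.0, rng=None):
    # Permute-and-flip selection: visit candidates in random order and accept
    # candidate i with probability exp(eps * (q[i] - max(q)) / (2 * sens)).
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(q, dtype=float)
    q_max = q.max()
    for i in rng.permutation(len(q)):
        if rng.random() <= np.exp(eps * (q[i] - q_max) / (2.0 * sens)):
            return int(i)  # the argmax accepts with probability 1, so this always returns

def report_noisy_max_exp(q, eps, sens=1.0, rng=None):
    # Report noisy max: add i.i.d. exponential noise of scale 2*sens/eps, return the argmax.
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(q, dtype=float)
    return int(np.argmax(q + rng.exponential(scale=2.0 * sens / eps, size=len(q))))

# Quick empirical check of the claimed equivalence on a toy score vector.
scores, eps, trials = [3.0, 1.0, 2.5, 2.9], 1.0, 50_000
rng = np.random.default_rng(0)
pf  = np.bincount([permute_and_flip(scores, eps, rng=rng)     for _ in range(trials)], minlength=4) / trials
rnm = np.bincount([report_noisy_max_exp(scores, eps, rng=rng) for _ in range(trials)], minlength=4) / trials
print(pf)   # the two empirical distributions should agree up to sampling error
print(rnm)
```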


Read also

We consider the problem of differentially private selection. Given a finite set of candidate items and a quality score for each item, our goal is to design a differentially private mechanism that returns an item with a score that is as high as possible. The most commonly used mechanism for this task is the exponential mechanism. In this work, we propose a new mechanism for this task based on a careful analysis of the privacy constraints. The expected score of our mechanism is always at least as large as the exponential mechanism, and can offer improvements up to a factor of two. Our mechanism is simple to implement and runs in linear time.
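For reference, the exponential-mechanism baseline this abstract compares against can be sketched as follows; the abstract does not spell out the proposed mechanism itself, so only the baseline is shown, and the `sens` argument for the assumed score sensitivity is an illustrative parameter name.

```python
import numpy as np

def exponential_mechanism(q, eps, sens=1.0, rng=None):
    # Sample index i with probability proportional to exp(eps * q[i] / (2 * sens)).
    rng = np.random.default_rng() if rng is None else rng
    q = np.asarray(q, dtype=float)
    logits = eps * (q - q.max()) / (2.0 * sens)  # subtract the max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(q), p=probs))
```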
Private selection algorithms, such as the Exponential Mechanism, Noisy Max and Sparse Vector, are used to select items (such as queries with large answers) from a set of candidates, while controlling privacy leakage in the underlying data. Such algorithms serve as building blocks for more complex differentially private algorithms. In this paper we show that these algorithms can release additional information related to the gaps between the selected items and the other candidates for free (i.e., at no additional privacy cost). This free gap information can improve the accuracy of certain follow-up counting queries by up to 66%. We obtain these results from a careful privacy analysis of these algorithms. Based on this analysis, we further propose novel hybrid algorithms that can dynamically save additional privacy budget.
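The "free gap" idea can be illustrated with a rough sketch of a noisy-max variant that returns the gap between the two largest noisy scores alongside the winner. The Laplace noise and its 2Δ/ε scale here are assumptions for illustration, not the paper's exact instantiation or privacy analysis.

```python
import numpy as np

def noisy_max_with_gap(q, eps, sens=1.0, rng=None):
    # Add Laplace noise of scale 2*sens/eps, then return the winning index together
    # with the gap between the two largest noisy scores (the "free" extra output).
    # Assumes at least two candidates.
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.asarray(q, dtype=float) + rng.laplace(scale=2.0 * sens / eps, size=len(q))
    order = np.argsort(noisy)[::-1]
    winner, runner_up = int(order[0]), int(order[1])
    gap = float(noisy[winner] - noisy[runner_up])
    return winner, gap
```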
We introduce a new $(\epsilon_p, \delta_p)$-differentially private algorithm for the $k$-means clustering problem. Given a dataset in Euclidean space, the $k$-means clustering problem requires one to find $k$ points in that space such that the sum of squares of Euclidean distances between each data point and its closest respective point among the $k$ returned is minimised. Although there exist privacy-preserving methods with good theoretical guarantees to solve this problem [Balcan et al., 2017; Kaplan and Stemmer, 2018], in practice it is seen that it is the additive error which dictates the practical performance of these methods. By reducing the problem to a sequence of instances of maximum coverage on a grid, we are able to derive a new method that achieves lower additive error than previous works. For input datasets with cardinality $n$ and diameter $\Delta$, our algorithm has an $O(\Delta^2 (k \log^2 n \log(1/\delta_p)/\epsilon_p + k\sqrt{d \log(1/\delta_p)}/\epsilon_p))$ additive error whilst maintaining constant multiplicative error. We conclude with some experiments and find an improvement over previously implemented work for this problem.
When current flows through a magnetic tunnel junction (MTJ), there is spin accumulation at the electrode-barrier interfaces if the magnetic moments of the two ferromagnetic electrodes are not aligned. Here we report that such nonequilibrium spin accumulation generates its own characteristic low frequency noise (LFN). Past work viewed the LFN in MTJs as an equilibrium effect arising from resistance fluctuations ($S_R$) which a passively applied current ($I$) converts to measurable voltage fluctuations ($S_{V}=I^{2}S_{R}$). We treat the LFN associated with spin accumulation as a nonequilibrium effect, and find that the noise power can be fitted in terms of the spin-polarized current by $S_{I}f = aI\coth(\frac{I}{b}) - ab$, resembling the form of the shot noise for a tunnel junction, but with current now taking the role of the bias voltage, and spin-flip probability taking the role of tunneling probability.
Alexey M. Kulik, 2007
Mild sufficient conditions for exponential ergodicity of a Markov process, defined as the solution to an SDE with jump noise, are given. These conditions comprise three principal claims: a recurrence condition R, a topological irreducibility condition S, and a non-degeneracy condition N, the latter formulated in terms of a certain random subspace of $\mathbb{R}^m$ associated with the initial equation. Examples are given showing that, in general, none of the three principal claims can be removed without losing ergodicity of the process. The key point in the approach developed in the paper is that the local Doeblin condition can be derived from N and S via the stratification method and a criterion for convergence in variation of the family of induced measures on $\mathbb{R}^m$.