$f$-DP has recently been proposed as a generalization of classical definitions of differential privacy that allows a lossless analysis of composition, post-processing, and privacy amplification via subsampling. In the setting of $f$-DP, we propose the concept of a canonical noise distribution (CND), which captures whether an additive privacy mechanism is appropriately tailored to a given $f$, and we give a construction that produces a CND for an arbitrary tradeoff function $f$. We show that private hypothesis tests are intimately related to CNDs, allowing for the release of private $p$-values at no additional privacy cost as well as the construction of uniformly most powerful (UMP) tests for binary data. We apply our techniques to the problem of difference-of-proportions testing and construct a UMP unbiased (UMPU) semi-private test, which upper bounds the performance of any DP test. Using this as a benchmark, we propose a private test, based on the inversion of characteristic functions, which allows for optimal inference on the two population parameters and is nearly as powerful as the semi-private UMPU test. When specialized to the case of $(\epsilon,0)$-DP, we show empirically that our proposed test is more powerful than any $(\epsilon/\sqrt{2})$-DP test and has more accurate type I errors than the classic normal approximation test.
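As a simple point of reference for what an additive privacy mechanism looks like, the sketch below implements the classical Laplace mechanism for $(\epsilon,0)$-DP. This is not the abstract's CND construction (the Laplace distribution is not a CND for $(\epsilon,0)$-DP); the function name and the proportion example are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release value + Laplace(sensitivity / epsilon) noise.

    This is the classical additive mechanism satisfying (epsilon, 0)-DP,
    shown here only to illustrate additive noise mechanisms; it is not a
    canonical noise distribution in the sense of the abstract above.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

# Illustrative example: privately release a proportion from n binary responses.
# Changing one response moves the sample mean by at most 1/n, so the
# sensitivity of the mean is 1/n.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=1000)
private_prop = laplace_mechanism(data.mean(), sensitivity=1 / len(data),
                                 epsilon=1.0, rng=rng)
```

Larger $\epsilon$ means a smaller noise scale and hence a more accurate, less private release.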
We study the problem of estimating finite-sample confidence intervals for the mean of a normal population under the constraint of differential privacy. We consider both the known and unknown variance cases and construct differentially private algorith
For testing two random vectors for independence, we consider testing whether the distance of one vector from a center point is independent of the distance of the other vector from a center point by a univariate test. In this paper we provide condit
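One concrete instance of this reduction (a sketch under stated assumptions, not the paper's test, whose conditions are truncated here) maps each vector to its Euclidean distance from a chosen center and applies a permutation test of correlation between the two resulting univariate samples. The function name and the choice of correlation as the univariate statistic are illustrative.

```python
import numpy as np

def distance_independence_test(X, Y, cx, cy, n_perm=999, rng=None):
    """Reduce each random vector to its distance from a center point,
    then permutation-test the correlation between the two distances.

    X, Y: (n, d) arrays of paired observations; cx, cy: center points.
    Returns a permutation p-value for independence of the two distances.
    """
    rng = np.random.default_rng() if rng is None else rng
    dx = np.linalg.norm(X - cx, axis=1)
    dy = np.linalg.norm(Y - cy, axis=1)
    obs = abs(np.corrcoef(dx, dy)[0, 1])
    perms = [abs(np.corrcoef(dx, rng.permutation(dy))[0, 1])
             for _ in range(n_perm)]
    # Add-one correction keeps the p-value valid and strictly positive.
    return (1 + sum(p >= obs for p in perms)) / (1 + n_perm)
```

The permutation null is exact for independence of the two distances; whether rejecting it is informative about independence of the original vectors is precisely the kind of question the abstract's conditions address.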
We give a bijection between a quotient space of the parameters and the space of moments for any $A$-hypergeometric distribution. An algorithmic method to compute the inverse image of the map is proposed utilizing the holonomic gradient method and an
The prior distribution on parameters of a likelihood is the usual starting point for Bayesian uncertainty quantification. In this paper, we present a different perspective. Given a finite data sample $Y_{1:n}$ of size $n$ from an infinite population,
We develop a unified approach to hypothesis testing for various types of widely used functional linear models, such as scalar-on-function, function-on-function and function-on-scalar models. In addition, the proposed test applies to models of mixed t