
Optimal Identity Testing with High Probability

Published by: John Peebles
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





We study the problem of testing identity against a given distribution with a focus on the high confidence regime. More precisely, given samples from an unknown distribution $p$ over $n$ elements, an explicitly given distribution $q$, and parameters $0 < \epsilon, \delta < 1$, we wish to distinguish, \emph{with probability at least $1-\delta$}, whether the distributions are identical versus $\varepsilon$-far in total variation distance. Most prior work focused on the case that $\delta = \Omega(1)$, for which the sample complexity of identity testing is known to be $\Theta(\sqrt{n}/\epsilon^2)$. Given such an algorithm, one can achieve arbitrarily small values of $\delta$ via black-box amplification, which multiplies the required number of samples by $\Theta(\log(1/\delta))$. We show that black-box amplification is suboptimal for any $\delta = o(1)$, and give a new identity tester that achieves the optimal sample complexity. Our new upper and lower bounds show that the optimal sample complexity of identity testing is \[ \Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} + \log(1/\delta) \right)\right) \] for any $n$, $\varepsilon$, and $\delta$. For the special case of uniformity testing, where the given distribution is the uniform distribution $U_n$ over the domain, our new tester is surprisingly simple: to test whether $p = U_n$ versus $d_{\mathrm{TV}}(p, U_n) \geq \varepsilon$, we simply threshold $d_{\mathrm{TV}}(\widehat{p}, U_n)$, where $\widehat{p}$ is the empirical probability distribution. The fact that this simple plug-in estimator is sample-optimal is surprising, even in the constant-$\delta$ case. Indeed, it was believed that such a tester would not attain sublinear sample complexity even for constant values of $\varepsilon$ and $\delta$.
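The plug-in uniformity tester from the abstract can be sketched in a few lines: compute the total variation distance between the empirical distribution and $U_n$, then threshold it. The sketch below is illustrative only; the acceptance threshold is a hypothetical placeholder, whereas the paper's analysis sets it as a precise function of $n$, the sample size, and $\varepsilon$.

```python
import random
from collections import Counter

def empirical_tv_to_uniform(samples, n):
    """TV distance between the empirical distribution of `samples`
    (over the domain {0, ..., n-1}) and the uniform distribution U_n."""
    m = len(samples)
    counts = Counter(samples)
    # TV distance = (1/2) * sum_i |p_hat(i) - 1/n|; domain elements that
    # were never sampled contribute |0 - 1/n| each.
    seen = sum(abs(c / m - 1.0 / n) for c in counts.values())
    unseen = (n - len(counts)) * (1.0 / n)
    return 0.5 * (seen + unseen)

def plug_in_uniformity_test(samples, n, threshold):
    """Accept 'p = U_n' iff the empirical TV distance falls below the
    threshold (here a caller-supplied placeholder, not the paper's value)."""
    return empirical_tv_to_uniform(samples, n) < threshold
```

For example, a heavily skewed sample over a domain of size 10 yields an empirical TV distance near 0.9 and is rejected, while a large uniform sample yields a distance near 0 and is accepted.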




Read also

We investigate the problem of identity testing for multidimensional histogram distributions. A distribution $p: D \rightarrow \mathbb{R}_+$, where $D \subseteq \mathbb{R}^d$, is called a $k$-histogram if there exists a partition of the domain into $k$ axis-aligned rectangles such that $p$ is constant within each such rectangle. Histograms are one of the most fundamental nonparametric families of distributions and have been extensively studied in computer science and statistics. We give the first identity tester for this problem with \emph{sub-learning} sample complexity in any fixed dimension and a nearly-matching sample complexity lower bound. In more detail, let $q$ be an unknown $d$-dimensional $k$-histogram distribution in fixed dimension $d$, and $p$ be an explicitly given $d$-dimensional $k$-histogram. We want to correctly distinguish, with probability at least $2/3$, between the case that $p = q$ versus $\|p-q\|_1 \geq \epsilon$. We design an algorithm for this hypothesis testing problem with sample complexity $O((\sqrt{k}/\epsilon^2) 2^{d/2} \log^{2.5d}(k/\epsilon))$ that runs in sample-polynomial time. Our algorithm is robust to model misspecification, i.e., it succeeds even if $q$ is only promised to be \emph{close} to a $k$-histogram. Moreover, for $k = 2^{\Omega(d)}$, we show a sample complexity lower bound of $(\sqrt{k}/\epsilon^2) \cdot \Omega(\log(k)/d)^{d-1}$ when $d \geq 2$. That is, for any fixed dimension $d$, our upper and lower bounds are nearly matching. Prior to our work, the sample complexity of the $d=1$ case was well understood, but no algorithm with sub-learning sample complexity was known, even for $d=2$. Our new upper and lower bounds have interesting conceptual implications regarding the relation between learning and testing in this setting.
We study the fundamental problems of (i) uniformity testing of a discrete distribution, and (ii) closeness testing between two discrete distributions with bounded $\ell_2$-norm. These problems have been extensively studied in distribution testing and sample-optimal estimators are known for them~\cite{Paninski:08, CDVV14, VV14, DKN:15}. In this work, we show that the original collision-based testers proposed for these problems~\cite{GRdist:00, BFR+:00} are sample-optimal, up to constant factors. Previous analyses showed sample complexity upper bounds for these testers that are optimal as a function of the domain size $n$, but suboptimal by polynomial factors in the error parameter $\epsilon$. Our main contribution is a new tight analysis establishing that these collision-based testers are information-theoretically optimal, up to constant factors, both in the dependence on $n$ and in the dependence on $\epsilon$.
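The collision statistic at the heart of these testers is simple to state: the fraction of sample pairs that take the same value is an unbiased estimate of $\|p\|_2^2$, which equals $1/n$ exactly when $p = U_n$ and is strictly larger otherwise. A minimal sketch follows; the acceptance threshold $(1+\epsilon^2/2)/n$ is a hypothetical illustrative constant, not the one fixed by the tight analysis.

```python
import random
from collections import Counter

def collision_statistic(samples):
    """Fraction of unordered sample pairs that collide: an unbiased
    estimate of the squared l2-norm ||p||_2^2 of the source distribution."""
    m = len(samples)
    counts = Counter(samples)
    collisions = sum(c * (c - 1) // 2 for c in counts.values())
    return collisions / (m * (m - 1) / 2)

def collision_uniformity_test(samples, n, eps):
    """Accept 'p = U_n' iff the collision estimate is below a threshold
    slightly above 1/n (the constant eps**2 / 2 here is illustrative)."""
    return collision_statistic(samples) < (1 + eps ** 2 / 2) / n
```

Under $U_n$ the statistic concentrates near $1/n$, while a distribution far from uniform in $\ell_1$ pushes $\|p\|_2^2$, and hence the statistic, noticeably above $1/n$.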
In the group testing problem the aim is to identify a small set of $k \sim n^\theta$ infected individuals out of a population of size $n$, where $0 < \theta < 1$. We avail ourselves of a test procedure capable of testing groups of individuals, with the test returning a positive result iff at least one individual in the group is infected. The aim is to devise a test design with as few tests as possible so that the set of infected individuals can be identified correctly with high probability. We establish an explicit sharp information-theoretic/algorithmic phase transition $m_{\mathrm{inf}}$ for non-adaptive group testing, where all tests are conducted in parallel. Thus, with more than $m_{\mathrm{inf}}$ tests the infected individuals can be identified in polynomial time whp, while learning the set of infected individuals is information-theoretically impossible with fewer tests. In addition, we develop an optimal adaptive scheme where the tests are conducted in two stages.
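The non-adaptive setting above can be simulated in a few lines. The sketch below assumes a simple random Bernoulli test design and COMP-style decoding (any individual appearing in a negative test is declared healthy); both choices are illustrative placeholders, not the paper's optimal design, and the inclusion probability `1/k` is a hypothetical parameter.

```python
import random

def pooled_test(pool, infected):
    """A pooled test is positive iff at least one pooled individual is infected."""
    return any(i in infected for i in pool)

def bernoulli_design(n, k, num_tests, seed=0):
    """Random Bernoulli design: each of the n individuals joins each test
    independently with probability 1/k (an illustrative choice)."""
    rng = random.Random(seed)
    p = 1.0 / k
    return [[i for i in range(n) if rng.random() < p] for _ in range(num_tests)]

def comp_decode(n, pools, results):
    """COMP-style decoding: anyone appearing in a negative test is healthy;
    everyone else remains a candidate infected individual."""
    healthy = set()
    for pool, positive in zip(pools, results):
        if not positive:
            healthy.update(pool)
    return [i for i in range(n) if i not in healthy]
```

Since an infected individual can never appear in a negative test, the candidate set always contains the true infected set; with enough tests the false candidates are eliminated with high probability.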
Minimum storage regenerating (MSR) codes are MDS codes which allow for recovery of any single erased symbol with optimal repair bandwidth, based on the smallest possible fraction of the contents downloaded from each of the other symbols. Recently, certain Reed-Solomon codes were constructed which are MSR. However, the sub-packetization of these codes is exponentially large, growing like $n^{\Omega(n)}$ in the constant-rate regime. In this work, we study the relaxed notion of $\epsilon$-MSR codes, which incur a repair bandwidth a factor of $(1+\epsilon)$ higher than optimal, in the context of Reed-Solomon codes. We give constructions of constant-rate $\epsilon$-MSR Reed-Solomon codes with polynomial sub-packetization $n^{O(1/\epsilon)}$, thereby giving an explicit tradeoff between repair bandwidth and sub-packetization.
We propose a new setting for testing properties of distributions while receiving samples from several distributions, but few samples per distribution. Given samples from $s$ distributions, $p_1, p_2, \ldots, p_s$, we design testers for the following problems: (1) Uniformity Testing: testing whether all the $p_i$'s are uniform or $\epsilon$-far from being uniform in $\ell_1$-distance; (2) Identity Testing: testing whether all the $p_i$'s are equal to an explicitly given distribution $q$ or $\epsilon$-far from $q$ in $\ell_1$-distance; and (3) Closeness Testing: testing whether all the $p_i$'s are equal to a distribution $q$ to which we have sample access, or $\epsilon$-far from $q$ in $\ell_1$-distance. By assuming an additional natural condition on the source distributions, we provide sample-optimal testers for all of these problems.

