
Nonparametric Composite Hypothesis Testing in an Asymptotic Regime

Posted by Qunwei Li
Publication date: 2017
Research field: Information Engineering
Paper language: English





We investigate the nonparametric, composite hypothesis testing problem for arbitrary unknown distributions in the asymptotic regime where both the sample size and the number of hypotheses grow exponentially large. Such asymptotic analysis is important in many practical problems, where the number of variations that can exist within a family of distributions can be countably infinite. We introduce the notion of discrimination capacity, which captures the largest exponential growth rate of the number of hypotheses relative to the sample size such that there exists a test with asymptotically vanishing probability of error. Our approach is based on various distributional distance metrics in order to incorporate the generative model of the data. We analyze the error exponent of tests based on the maximum mean discrepancy (MMD) and the Kolmogorov-Smirnov (KS) distance and characterize the corresponding discrimination rates, i.e., lower bounds on the discrimination capacity, for these tests. Finally, an upper bound on the discrimination capacity based on Fano's inequality is developed. Numerical results are presented to validate the theoretical results.
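As a concrete illustration of the two distance metrics named in the abstract, the sketch below computes an unbiased estimate of the squared MMD with a Gaussian kernel and the two-sample KS statistic for two sets of observations. This is a minimal sketch of the statistics only, not the paper's test construction; the kernel bandwidth, sample sizes, and example distributions are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp  # two-sample Kolmogorov-Smirnov test

def mmd_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of squared MMD between 1-D samples x and y,
    using a Gaussian kernel with bandwidth sigma (an assumed choice)."""
    def kernel(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))
    kxx, kyy, kxy = kernel(x, x), kernel(y, y), kernel(x, y)
    n, m = len(x), len(y)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)   # samples under one hypothesis
y = rng.normal(0.5, 1.0, size=500)   # samples under another hypothesis

print("MMD^2 estimate:", mmd_unbiased(x, y))
print("KS statistic:  ", ks_2samp(x, y).statistic)
```

In a composite test, either statistic would be compared against a threshold chosen so that the error probability vanishes as the sample size grows.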




Read also

330 - Eli Haim, Yuval Kochman 2017
We consider the problem of distributed binary hypothesis testing of two sequences that are generated by an i.i.d. doubly-binary symmetric source. Each sequence is observed by a different terminal. The two hypotheses correspond to different levels of correlation between the two source components, i.e., the crossover probability between the two. The terminals communicate with a decision function via rate-limited noiseless links. We analyze the tradeoff between the exponential decay of the two error probabilities associated with the hypothesis test and the communication rates. We first consider the side-information setting where one encoder is allowed to send the full sequence. For this setting, previous work exploits the fact that a decoding error of the source does not necessarily lead to an erroneous decision upon the hypothesis. We provide improved achievability results by carrying out a tighter analysis of the effect of binning error; the results are also more complete as they cover the full exponent tradeoff and all possible correlations. We then turn to the setting of symmetric rates for which we utilize Korner-Marton coding to generalize the results, with little degradation with respect to the performance with a one-sided constraint (side-information setting).
In this paper, we propose a Bayesian Hypothesis Testing Algorithm (BHTA) for sparse representation. It uses the Bayesian framework to determine the active atoms in the sparse representation of a signal. The Bayesian hypothesis testing, based on three assumptions, determines the active atoms from the correlations and leads to the activity measure proposed in the Iterative Detection Estimation (IDE) algorithm. In fact, IDE uses an arbitrary decreasing sequence of thresholds, while the proposed algorithm is based on a sequence derived from hypothesis testing. Thus, the Bayesian hypothesis testing framework leads to an improved version of the IDE algorithm. The simulations show that the hard version of our suggested algorithm achieves one of the best results in terms of estimation accuracy among the algorithms implemented in our simulations, while it has the greatest complexity in terms of simulation time.
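A minimal sketch of the correlation-thresholding idea described in the abstract above (not the authors' BHTA or IDE implementation): atoms whose correlation with the current residual exceeds a decreasing threshold are marked active, and the coefficients of the active set are refit. The dictionary, threshold schedule, and least-squares refit are illustrative assumptions; the BHTA of the paper instead derives the thresholds from a Bayesian hypothesis test.

```python
import numpy as np

def iterative_threshold_detection(D, y, thresholds):
    """Detect active atoms by thresholding correlations with the residual.
    D: dictionary (n x k), y: observed signal, thresholds: decreasing sequence.
    A hypothetical sketch of IDE-style detection with fixed thresholds."""
    active = np.zeros(D.shape[1], dtype=bool)
    x_hat = np.zeros(D.shape[1])
    for tau in thresholds:
        residual = y - D @ x_hat
        corr = D.T @ residual
        active |= np.abs(corr) > tau           # atoms passing the current test
        if active.any():                       # refit coefficients on active set
            x_hat[:] = 0.0
            x_hat[active] = np.linalg.lstsq(D[:, active], y, rcond=None)[0]
    return x_hat, active

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -0.8, 0.6]
y = D @ x_true + 0.01 * rng.normal(size=64)
x_hat, active = iterative_threshold_detection(D, y, thresholds=[0.8, 0.5, 0.3])
print("detected atoms:", np.flatnonzero(active))
```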
We study a hypothesis testing problem in which data is compressed distributively and sent to a detector that seeks to decide between two possible distributions for the data. The aim is to characterize all achievable encoding rates and exponents of the type 2 error probability when the type 1 error probability is at most a fixed value. For related problems in distributed source coding, schemes based on random binning perform well and are often optimal. For distributed hypothesis testing, however, the use of binning is hindered by the fact that the overall error probability may be dominated by errors in the binning process. We show that despite this complication, binning is optimal for a class of problems in which the goal is to test against conditional independence. We then use this optimality result to give an outer bound for a more general class of instances of the problem.
We study the problem of mismatched binary hypothesis testing between i.i.d. distributions. We analyze the tradeoff between the pairwise error probability exponents when the actual distributions generating the observation are different from the distributions used in the likelihood ratio test, the sequential probability ratio test, and Hoeffding's generalized likelihood ratio test in the composite setting. When the real distributions are within a small divergence ball of the test distributions, we find the deviation of the worst-case error exponent of each test with respect to the matched error exponent. In addition, we consider the case where an adversary tampers with the observation, again within a divergence ball of the observation type. We show that the tests are more sensitive to distribution mismatch than to adversarial observation tampering.
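To make the notion of mismatch concrete, the following sketch simulates a fixed-length i.i.d. likelihood ratio test whose test distributions differ slightly from the distributions actually generating the data. The Bernoulli parameters, sample size, and zero threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Test distributions assumed by the detector (Bernoulli parameters).
p0_test, p1_test = 0.3, 0.7
# Actual distributions generating the data: slightly mismatched.
p0_true, p1_true = 0.33, 0.67

def llr(x, p0, p1):
    """Log-likelihood ratio of a Bernoulli sample vector under p1 vs p0."""
    return np.sum(x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0)))

n, trials = 200, 5000
errors_h0 = 0
for _ in range(trials):
    x = rng.binomial(1, p0_true, size=n)    # data truly from hypothesis 0
    if llr(x, p0_test, p1_test) > 0:        # mismatched LRT with zero threshold
        errors_h0 += 1
print("empirical type-I error under mismatch:", errors_h0 / trials)
```

Sweeping the gap between the true and test parameters shows how the empirical error exponent degrades relative to the matched case.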
71 - Lin Zhou, Yun Wei, Alfred Hero 2020
We revisit the universal outlier hypothesis testing framework (Li et al., TIT 2014) and derive fundamental limits for the optimal test. In outlying hypothesis testing, one is given multiple observed sequences, where most sequences are generated i.i.d. from a nominal distribution. The task is to discern the set of outlying sequences that are generated according to anomalous distributions. The nominal and anomalous distributions are unknown. We study the tradeoff among the probabilities of misclassification error, false alarm and false reject for tests that satisfy weak conditions on the rate of decrease of these error probabilities as a function of sequence length. Specifically, we propose a threshold-based universal test that ensures exponential decay of the misclassification error and false alarm probabilities. We study two constraints on the false reject probability: one is that it be a non-vanishing constant and the other is that it have an exponential decay rate. For both cases, we characterize bounds on the false reject probability, as a function of the threshold, for each pair of nominal and anomalous distributions, and demonstrate the optimality of our test in the generalized Neyman-Pearson sense. We first consider the case of at most one outlier and then generalize our results to the case of multiple outliers, where the number of outliers is unknown and each outlier can follow a different anomalous distribution.
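A minimal sketch of a threshold-based outlier test in this spirit (not the authors' exact statistic): each sequence's empirical distribution is compared, via KL divergence, against the empirical distribution pooled from the remaining sequences, and sequences exceeding a threshold are flagged. The alphabet size, threshold, and simulated nominal and anomalous distributions are illustrative assumptions.

```python
import numpy as np

def empirical_pmf(seq, alphabet_size):
    """Empirical distribution (type) of a discrete sequence."""
    counts = np.bincount(seq, minlength=alphabet_size).astype(float)
    return counts / counts.sum()

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q), with small smoothing to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def flag_outliers(sequences, alphabet_size, threshold):
    """Flag sequences whose type is far (in KL) from the type of all the others."""
    flags = []
    for i, seq in enumerate(sequences):
        rest = np.concatenate([s for j, s in enumerate(sequences) if j != i])
        d = kl(empirical_pmf(seq, alphabet_size), empirical_pmf(rest, alphabet_size))
        flags.append(d > threshold)
    return flags

rng = np.random.default_rng(3)
nominal = [0.5, 0.3, 0.2]      # unknown to the test; used only to simulate data
anomalous = [0.2, 0.3, 0.5]
seqs = [rng.choice(3, size=1000, p=nominal) for _ in range(5)]
seqs.append(rng.choice(3, size=1000, p=anomalous))   # one outlying sequence
print("outlier flags:", flag_outliers(seqs, alphabet_size=3, threshold=0.05))
```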