The achievable error-exponent pairs for the type I and type II errors are characterized in a hypothesis testing setup where the observation consists of independent and identically distributed samples from either a known joint probability distribution or an unknown product distribution. The empirical mutual information test, the Hoeffding test, and the generalized likelihood-ratio test are all shown to be asymptotically optimal. An expression based on a Rényi measure of dependence is shown to be the Fenchel biconjugate of the error-exponent function obtained by fixing one error exponent and optimizing the other. An example is provided where the error-exponent function is not convex and thus not equal to its Fenchel biconjugate.
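For orientation, the Fenchel biconjugate referred to above is the standard convex-analysis notion (the paper's specific error-exponent function and Rényi measure of dependence are not reproduced here). For a function $f$ on the reals,
$$ f^{*}(y) = \sup_{x}\,\{xy - f(x)\}, \qquad f^{**}(x) = \sup_{y}\,\{xy - f^{*}(y)\}, $$
and $f^{**}$ is the largest lower semicontinuous convex function that nowhere exceeds $f$; hence a non-convex error-exponent function cannot equal its biconjugate.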
This paper gives upper and lower bounds on the minimum error probability of Bayesian $M$-ary hypothesis testing in terms of the Arimoto–Rényi conditional entropy of an arbitrary order $\alpha$. The improved tightness of these bounds over their specialized […]
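As a point of reference, one common form of the Arimoto–Rényi conditional entropy of order $\alpha \in (0,1) \cup (1,\infty)$ for discrete $X$ and $Y$ is
$$ H_\alpha(X \mid Y) = \frac{\alpha}{1-\alpha} \log \sum_{y} P_Y(y) \Bigl( \sum_{x} P_{X \mid Y}(x \mid y)^{\alpha} \Bigr)^{1/\alpha}, $$
which recovers the Shannon conditional entropy $H(X \mid Y)$ as $\alpha \to 1$; the bounds discussed in the paper are not reproduced here.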
During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communication […]
For gambling on horses, a one-parameter family of utility functions is proposed, which contains Kelly's logarithmic criterion and the expected-return criterion as special cases. The strategies that maximize the utility function are derived […]
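For context, Kelly's logarithmic criterion for a horse race with win probabilities $p(x)$ and odds $o(x)$ chooses the betting fractions $b(x)$, with $\sum_x b(x) = 1$, that maximize the expected log growth rate
$$ W(b) = \sum_{x} p(x) \log\bigl(b(x)\, o(x)\bigr), $$
whose maximizer is proportional betting $b^{*}(x) = p(x)$; the one-parameter family of utility functions proposed in the paper is not reproduced here.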
Rényi divergence is related to Rényi entropy much like Kullback–Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback–Leibler divergence […]
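For reference, the Rényi divergence of order $\alpha \in (0,1) \cup (1,\infty)$ between discrete distributions $P$ and $Q$ is
$$ D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha}, $$
which tends to the Kullback–Leibler divergence $D(P \,\|\, Q)$ as $\alpha \to 1$.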
Two maximization problems of Rényi entropy rate are investigated: the maximization over all stochastic processes whose marginals satisfy a linear constraint, and the Burg-like maximization over all stochastic processes whose autocovariance function […]
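For completeness, for a real-valued process $\{X_k\}$ admitting densities, the Rényi entropy rate of order $\alpha$ is typically defined, whenever the limit exists, as
$$ \lim_{n \to \infty} \frac{1}{n}\, h_\alpha(X_1, \dots, X_n), \qquad h_\alpha(X_1^n) = \frac{1}{1-\alpha} \log \int f_{X_1^n}(x_1^n)^{\alpha}\, dx_1^n, $$
where $f_{X_1^n}$ is the joint density; the constraints and maximizing processes from the paper are not reproduced here.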