
Evaluating Multiple Guesses by an Adversary via a Tunable Loss Function

Published by: Gowtham Raghunath Kurri
Publication date: 2021
Research field: Information engineering
Paper language: English





We consider a problem of guessing, wherein an adversary is interested in knowing the value of the realization of a discrete random variable $X$ on observing another correlated random variable $Y$. The adversary can make multiple (say, $k$) guesses. The adversary's guessing strategy is assumed to minimize $\alpha$-loss, a class of tunable loss functions parameterized by $\alpha$. It has been shown before that this loss function captures well-known loss functions, including the exponential loss ($\alpha=1/2$), the log-loss ($\alpha=1$) and the $0$-$1$ loss ($\alpha=\infty$). We completely characterize the optimal adversarial strategy and the resulting expected $\alpha$-loss, thereby recovering known results for $\alpha=\infty$. We define an information leakage measure from the $k$-guesses setup and derive a condition under which the leakage is unchanged from a single guess.
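The abstract does not restate the loss itself, so as an illustration only, the sketch below assumes the standard single-guess form of $\alpha$-loss from the literature the paper builds on, $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\bigl(1 - p^{(\alpha-1)/\alpha}\bigr)$, where $p$ is the probability the strategy assigns to the realized value, and checks the three special cases numerically; the paper's $k$-guess strategy and leakage measure are not reproduced here.

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss incurred when the strategy assigns probability p to the
    realized value of X (assumed standard single-guess form; see above)."""
    if alpha == 1.0:
        return -np.log(p)                       # log-loss, as a limit
    if np.isinf(alpha):
        return 1.0 - p                          # alpha -> infinity: soft 0-1 loss
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

p = 0.7
print(alpha_loss(p, 0.5), 1 / p - 1)            # alpha = 1/2: exponential loss 1/p - 1
print(alpha_loss(p, 1.0), -np.log(p))           # alpha = 1: log-loss
print(alpha_loss(p, np.inf), 1 - p)             # alpha = inf: 0-1-type loss
print(alpha_loss(p, 1 + 1e-8))                  # continuity at alpha = 1
```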


Read also

We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct $\alpha$-GAN using a supervised loss function, namely $\alpha$-loss, which is a tunable loss function capturing several canonical losses. We show that $\alpha$-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996), and later studied by Liese and Vajda (2006). We posit that the holistic understanding that $\alpha$-GAN introduces will have the practical benefit of addressing both the issue of vanishing gradients and that of mode collapse.
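As a rough illustration of how a tunable supervised loss can replace the usual binary cross-entropy in a GAN's value function, the sketch below plugs the same assumed $\alpha$-loss form into a discriminator objective. This is not the paper's exact $\alpha$-GAN value function or its Arimoto-divergence analysis; it only shows that at $\alpha = 1$ such an objective collapses to the vanilla GAN discriminator objective.

```python
import numpy as np

def binary_alpha_loss(p, alpha):
    """alpha-loss when the correct label receives predicted probability p
    (same assumed form as above; alpha = 1 is the usual cross-entropy)."""
    if alpha == 1.0:
        return -np.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

def discriminator_objective(d_real, d_fake, alpha):
    """Objective the discriminator maximizes: negative alpha-loss on real
    samples (label 1) and generated samples (label 0)."""
    return -(np.mean(binary_alpha_loss(d_real, alpha))
             + np.mean(binary_alpha_loss(1.0 - d_fake, alpha)))

# At alpha = 1 this equals the vanilla GAN objective E[log D(x)] + E[log(1 - D(G(z)))].
d_real = np.array([0.9, 0.8]); d_fake = np.array([0.2, 0.3])
print(discriminator_objective(d_real, d_fake, 1.0))
print(np.mean(np.log(d_real)) + np.mean(np.log(1 - d_fake)))  # identical
```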
A loss function measures the discrepancy between the true values (observations) and their estimated fits, for a given instance of data. A loss function is said to be proper (unbiased, Fisher consistent) if the fits are defined over a unit simplex, and the minimizer of the expected loss is the true underlying probability of the data. Typical examples are the zero-one loss, the quadratic loss and the Bernoulli log-likelihood loss (log-loss). In this work we show that for binary classification problems, the divergence associated with smooth, proper and convex loss functions is bounded from above by the Kullback-Leibler (KL) divergence, up to a multiplicative normalization constant. It implies that by minimizing the log-loss (associated with the KL divergence), we minimize an upper bound to any choice of loss functions from this set. This property justifies the broad use of log-loss in regression, decision trees, deep neural networks and many other applications. In addition, we show that the KL divergence bounds from above any separable Bregman divergence that is convex in its second argument (up to a multiplicative normalization constant). This result introduces a new set of divergence inequalities, similar to the well-known Pinsker inequality.
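One instance of the claimed bound is easy to verify numerically. Under the usual construction, the divergence of a proper loss is the expected-loss gap $L(p,q) - L(p,p)$; for the quadratic (Brier) loss this works out to $(p-q)^2$ in the binary case, and Pinsker's inequality gives $(p-q)^2 \le \tfrac{1}{2}\,\mathrm{KL}(p\|q)$ in nats. The check below covers this single case only, not the paper's general proof.

```python
import numpy as np

def kl_binary(p, q):
    """KL divergence (in nats) between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# The divergence associated with the quadratic (Brier) loss in the binary
# case is (p - q)^2; Pinsker's inequality gives (p - q)^2 <= KL(p||q) / 2.
rng = np.random.default_rng(0)
p, q = rng.uniform(1e-3, 1 - 1e-3, size=(2, 100_000))
assert np.all((p - q) ** 2 <= 0.5 * kl_binary(p, q) + 1e-12)
print("quadratic-loss divergence bounded by KL/2 on all", p.size, "samples")
```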
If Alice must communicate with Bob over a channel shared with the adversarial Eve, then Bob must be able to validate the authenticity of the message. In particular, we consider the model where Alice and Eve share a discrete memoryless multiple access channel with Bob, thus allowing simultaneous transmissions from Alice and Eve. By traditional random coding arguments, we demonstrate an inner bound on the rate at which Alice may transmit while still granting Bob the ability to authenticate. Furthermore, this is accomplished in spite of Alice and Bob lacking a pre-shared key, as well as allowing Eve prior knowledge of both the codebook Alice and Bob share and the messages Alice transmits.
Lattice codes used under the Compute-and-Forward paradigm suggest an alternative strategy for the standard Gaussian multiple-access channel (MAC): the receiver successively decodes integer linear combinations of the messages until it can invert and recover all messages. In this paper, a multiple-access technique called CFMA (Compute-Forward Multiple Access) is proposed and analyzed. For the two-user MAC, it is shown that without time-sharing, the entire capacity region can be attained using CFMA with a single-user decoder as soon as the signal-to-noise ratios are above $1+\sqrt{2}$. A partial analysis is given for more than two users. Lastly, the strategy is extended to the so-called dirty MAC where two interfering signals are known non-causally to the two transmitters in a distributed fashion. Our scheme extends the previously known results and gives new achievable rate regions.
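The two-user threshold $1+\sqrt{2}$ comes from the paper's own analysis, which is not reproduced here. As a sketch of the ingredient CFMA builds on, the code below evaluates the standard compute-and-forward computation rate of Nazer and Gastpar (2011) for the sum coefficient vector $a=(1,1)$ on the symmetric two-user Gaussian MAC, and compares it with the sum capacity.

```python
import numpy as np

def compute_rate(h, a, P):
    """Compute-and-forward computation rate (Nazer-Gastpar, 2011) for
    decoding the integer combination a^T x over a Gaussian MAC with
    channel vector h at SNR P (rate in bits per channel use)."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    denom = a @ a - P * (h @ a) ** 2 / (1.0 + P * (h @ h))  # > 0 by Cauchy-Schwarz
    return max(0.0, 0.5 * np.log2(1.0 / denom))

for P in [1.0, 1 + np.sqrt(2), 10.0]:
    r_sum = compute_rate([1, 1], [1, 1], P)   # rate for decoding x1 + x2 first
    c_sum = 0.5 * np.log2(1 + 2 * P)          # two-user MAC sum capacity
    print(f"SNR={P:6.3f}: R(1,1)={r_sum:.3f}, sum capacity={c_sum:.3f}")
```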
Tal Philosof, Ram Zamir (2008)
For general memoryless systems, the typical information-theoretic solution, when it exists, has a single-letter form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme) generated using independent and identically distributed copies of some single-letter distribution. Is that the form of the solution of every (information-theoretic) problem? In fact, some counterexamples are known. The most famous is the two-help-one problem: Körner and Marton showed that if we want to decode the modulo-two sum of two binary sources from their independent encodings, then linear coding is better than random coding. In this paper we provide another counterexample, the doubly-dirty multiple access channel (MAC). Like the Körner-Marton problem, this is a multi-terminal scenario where side information is distributed among several terminals; each transmitter knows part of the channel interference, but the receiver is not aware of any part of it. We give an explicit solution for the capacity region of a binary version of the doubly-dirty MAC, demonstrate how the capacity region can be approached using a linear coding scheme, and prove that the best known single-letter region is strictly contained in it. We also state a conjecture regarding a similar rate loss of single-letter characterization in the Gaussian case.
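To make the Körner-Marton observation concrete, here is a toy sketch (an illustration, not the paper's construction): both encoders apply the same parity-check matrix $H$ of a (7,4) Hamming code to their sources, and the receiver XORs the two 3-bit syndromes to obtain the syndrome of $Z = X \oplus Y$, which it can decode whenever $Z$ has at most one 1 per block of 7. Each encoder thus sends 3 bits per 7 source bits for a sparse difference, which is the advantage of sharing a linear code noted above.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# expansion of j, so the syndrome of a weight-1 vector encodes its position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(v):
    return H @ v % 2

def decode_sparse(s):
    """Recover the unique vector of weight <= 1 with syndrome s."""
    z = np.zeros(7, dtype=int)
    pos = 4 * s[0] + 2 * s[1] + s[2]
    if pos:
        z[pos - 1] = 1
    return z

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)                  # source X: arbitrary binary block
z = np.eye(7, dtype=int)[rng.integers(7)]  # sparse difference Z (weight 1)
y = (x + z) % 2                            # correlated source Y = X xor Z

# Each encoder sends only its 3-bit syndrome; by linearity the receiver
# gets syndrome(z) = syndrome(x) xor syndrome(y) and decodes Z from it.
s_z = (syndrome(x) + syndrome(y)) % 2
assert np.array_equal(decode_sparse(s_z), z)
print("decoded Z =", decode_sparse(s_z))
```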