We consider a problem of guessing, wherein an adversary is interested in knowing the value of the realization of a discrete random variable $X$ on observing another correlated random variable $Y$. The adversary can make multiple (say, $k$) guesses. The adversary's guessing strategy is assumed to minimize $\alpha$-loss, a class of tunable loss functions parameterized by $\alpha$. It has been shown before that this loss function captures well-known loss functions including the exponential loss ($\alpha=1/2$), the log-loss ($\alpha=1$), and the $0$-$1$ loss ($\alpha=\infty$). We completely characterize the optimal adversarial strategy and the resulting expected $\alpha$-loss, thereby recovering known results for $\alpha=\infty$. We define an information leakage measure from the $k$-guesses setup and derive a condition under which the leakage is unchanged from that of a single guess.
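For concreteness, the following is a minimal Python sketch of the tunable loss family in question, assuming the usual $\alpha$-loss form $\ell_\alpha(p) = \frac{\alpha}{\alpha-1}\left(1 - p^{(\alpha-1)/\alpha}\right)$ applied to the probability $p$ assigned to the realized outcome; the function name and the test values are ours.

import numpy as np

def alpha_loss(p_y, alpha):
    # alpha-loss of assigning probability p_y to the realized outcome, assuming
    # l_alpha(p) = alpha/(alpha-1) * (1 - p**((alpha-1)/alpha)); the log-loss is
    # recovered in the limit alpha -> 1 and the soft 0-1 loss 1 - p at alpha = inf.
    p_y = np.asarray(p_y, dtype=float)
    if alpha == 1.0:
        return -np.log(p_y)
    if np.isinf(alpha):
        return 1.0 - p_y
    return alpha / (alpha - 1.0) * (1.0 - p_y ** ((alpha - 1.0) / alpha))

# Large alpha approaches the alpha = inf case, illustrating the interpolation.
p = 0.7
print(alpha_loss(p, 0.5), alpha_loss(p, 1.0), alpha_loss(p, 1000.0), alpha_loss(p, np.inf))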
We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct $\alpha$-GAN using a supervised loss function, namely, the tunable $\alpha$-loss.
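As a rough illustration only, the sketch below plugs a binary $\alpha$-loss into the standard two-player GAN value function (real samples labeled 1, generated samples labeled 0); this is our own toy rendering of the construction, not the paper's exact objective, and all names and scores are made up.

import numpy as np

def alpha_loss(p_y, alpha):
    # alpha-loss of the probability assigned to the correct label (see the sketch above)
    if alpha == 1.0:
        return -np.log(p_y)
    return alpha / (alpha - 1.0) * (1.0 - p_y ** ((alpha - 1.0) / alpha))

def alpha_gan_value(d_real, d_fake, alpha):
    # Toy alpha-GAN value: negative alpha-loss of the correct label on each side.
    # At alpha = 1 this reduces to the vanilla GAN value E[log D(X)] + E[log(1 - D(G(Z)))].
    return -np.mean(alpha_loss(np.asarray(d_real), alpha)) \
           - np.mean(alpha_loss(1.0 - np.asarray(d_fake), alpha))

# The discriminator maximizes and the generator minimizes this value.
rng = np.random.default_rng(0)
d_real = rng.uniform(0.6, 0.9, size=100)   # discriminator scores on real samples
d_fake = rng.uniform(0.1, 0.4, size=100)   # discriminator scores on generated samples
print(alpha_gan_value(d_real, d_fake, alpha=1.0), alpha_gan_value(d_real, d_fake, alpha=0.5))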
A loss function measures the discrepancy between the true values (observations) and their estimated fits, for a given instance of data. A loss function is said to be proper (unbiased, Fisher consistent) if the fits are defined over a unit simplex, and the minimizer of the expected loss is the true underlying probability of the data.
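As a small numerical illustration of properness (our own example, not from the abstract), the expected log-loss over a binary outcome is minimized exactly when the reported fit equals the true probability:

import numpy as np

# True probability of the outcome "1" and candidate fits q on the unit simplex.
p_true = 0.3
q_grid = np.linspace(0.01, 0.99, 99)

# Expected log-loss (cross-entropy) when the truth is p_true and we report q.
expected_log_loss = -(p_true * np.log(q_grid) + (1 - p_true) * np.log(1 - q_grid))

q_star = q_grid[np.argmin(expected_log_loss)]
print(q_star)   # approximately 0.3: the minimizer is the true probability, i.e. the log-loss is proper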
If Alice must communicate with Bob over a channel shared with the adversarial Eve, then Bob must be able to validate the authenticity of the message. In particular, we consider the model where Alice and Eve share a discrete memoryless multiple access channel to Bob.
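To make the shared-channel model concrete, here is a toy simulation of a discrete memoryless MAC whose output depends on both Alice's and Eve's inputs; the alphabet sizes and the randomly drawn transition kernel are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

# Toy discrete memoryless MAC P(y | x_alice, x_eve) with binary inputs and a ternary output.
# P[a, e] is a distribution over Bob's output alphabet; the kernel itself is arbitrary here.
P = rng.dirichlet(np.ones(3), size=(2, 2))

def mac_output(x_alice, x_eve):
    # Memoryless: each output symbol depends only on the current pair of input symbols.
    return np.array([rng.choice(3, p=P[a, e]) for a, e in zip(x_alice, x_eve)])

n = 8
x_alice = rng.integers(0, 2, size=n)   # Alice's transmitted codeword
x_eve   = rng.integers(0, 2, size=n)   # Eve's (possibly adversarial) input
print(mac_output(x_alice, x_eve))      # the sequence Bob observes and must authenticate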
Lattice codes used under the Compute-and-Forward paradigm suggest an alternative strategy for the standard Gaussian multiple-access channel (MAC): The receiver successively decodes integer linear combinations of the messages until it can invert and recover all messages.
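The inversion step at the end can be illustrated with a toy instance (the coefficients, prime, and messages below are our own choices): once the decoded integer combinations have a full-rank coefficient matrix over the message field $\mathbb{Z}_p$, the receiver solves for the individual messages by modular inversion.

import numpy as np

p = 7                                    # toy prime defining the message field Z_p
A = np.array([[1, 2],                    # integer coefficient vectors of the two
              [3, 1]]) % p               # decoded linear combinations (full rank mod p)
m = np.array([4, 5])                     # the two messages (unknown to the receiver)
c = (A @ m) % p                          # the decoded combinations c = A m mod p

# Invert A over Z_p (2x2 case): A^{-1} = det(A)^{-1} * adj(A) mod p.
det = int(A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]) % p
det_inv = pow(det, -1, p)                # modular inverse of the determinant
A_inv = (det_inv * np.array([[A[1, 1], -A[0, 1]],
                             [-A[1, 0], A[0, 0]]])) % p

print((A_inv @ c) % p)                   # prints [4 5]: the original messages are recovered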
For general memoryless systems, the typical information-theoretic solution, when it exists, has a single-letter form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme), generated using independent and identically distributed (i.i.d.) copies of a single-letter distribution.
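As a small sketch of what such a single-letter construction looks like operationally (the blocklength, rate, and distribution below are arbitrary), a random codebook is generated by drawing each of the $2^{nR}$ codewords i.i.d. from a fixed single-letter distribution $P_X$:

import numpy as np

rng = np.random.default_rng(0)

n, R = 10, 0.5                           # blocklength and rate (arbitrary toy values)
P_X = np.array([0.6, 0.4])               # single-letter input distribution on {0, 1}
num_codewords = int(2 ** (n * R))        # 2^{nR} messages

# Every symbol of every codeword is drawn i.i.d. from P_X: only the single-letter
# distribution (no multi-letter structure) is needed to generate the code.
codebook = rng.choice(len(P_X), size=(num_codewords, n), p=P_X)
print(codebook.shape)                    # (32, 10)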