We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct $\alpha$-GAN using a supervised loss function, namely $\alpha$-loss, a tunable loss function that captures several canonical losses. We show that $\alpha$-GAN is intimately related to the Arimoto divergence, which was first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We posit that the holistic understanding $\alpha$-GAN provides will have the practical benefit of addressing both vanishing gradients and mode collapse.
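Since $\alpha$-loss has a simple closed form in the supervised setting, a small numerical sketch helps make the interpolation concrete. The form below is the standard one from the $\alpha$-loss literature (log-loss at $\alpha = 1$, soft 0-1 loss at $\alpha = \infty$); the function name `alpha_loss` and the sanity checks are illustrative and not taken from the paper.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss evaluated at the probability assigned to the true label.

    Recovers log-loss as alpha -> 1 and the soft 0-1 loss (1 - p) at
    alpha = inf. `p_true` is the model's probability for the correct class.
    """
    p = np.asarray(p_true, dtype=float)
    if np.isinf(alpha):
        return 1.0 - p                      # soft 0-1 loss
    if np.isclose(alpha, 1.0):
        return -np.log(p)                   # log-loss (cross entropy)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))

# Sanity check: alpha near 1 approximates log-loss.
p = np.array([0.9, 0.5, 0.1])
print(alpha_loss(p, 1.001))   # ~ [0.105, 0.693, 2.303]
print(alpha_loss(p, np.inf))  # [0.1, 0.5, 0.9]
```

Tuning $\alpha$ below 1 makes the loss more sensitive to low-confidence predictions, while $\alpha \to \infty$ flattens it toward the 0-1 loss; this is the knob the GAN construction inherits.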
We consider a problem of guessing, wherein an adversary is interested in knowing the realization of a discrete random variable $X$ upon observing another, correlated random variable $Y$. The adversary can make multiple (say, $k$) guesses.
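As a concrete illustration of the setup (not a result from the paper), the adversary's natural strategy with $k$ guesses is to spend them on the $k$ most probable values of $X$ under the posterior induced by $Y = y$. The helper names below are hypothetical.

```python
import numpy as np

def top_k_guesses(posterior, k):
    """Return the k most probable values of X under the posterior P(X | Y=y).

    `posterior` is a 1-D array over the alphabet of X; the adversary's
    natural strategy is to spend its k guesses on the k likeliest values.
    """
    return np.argsort(posterior)[::-1][:k]

def prob_correct_within_k(posterior, k):
    """Probability that one of the k guesses equals the realization of X."""
    return float(np.sort(posterior)[::-1][:k].sum())

# Toy example: X takes 4 values, the adversary gets k = 2 guesses.
post = np.array([0.1, 0.5, 0.15, 0.25])
print(top_k_guesses(post, 2))           # [1, 3]
print(prob_correct_within_k(post, 2))   # 0.75
```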
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit.
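To make the algorithmic idea concrete, here is a minimal sketch of Shotgun-style parallel coordinate descent on the Lasso objective $\tfrac{1}{2}\|Ax - b\|^2 + \lambda\|x\|_1$, with the parallel updates simulated from a shared snapshot of the iterate. The function `shotgun_lasso` and its parameters are illustrative, not the authors' implementation; the problem-dependent limit mentioned above governs how many coordinates can safely be updated at once.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shotgun_lasso(A, b, lam, n_parallel=4, n_iters=200, seed=0):
    """Sketch of Shotgun-style parallel coordinate descent for the Lasso
    objective 0.5 * ||Ax - b||^2 + lam * ||x||_1.

    Each iteration picks `n_parallel` coordinates at random and applies
    their soft-thresholding updates simultaneously (here simulated from
    the same residual snapshot, mimicking stale parallel reads).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    col_sq = (A ** 2).sum(axis=0)              # per-coordinate curvature
    for _ in range(n_iters):
        residual = A @ x - b                   # shared snapshot of the iterate
        js = rng.choice(d, size=n_parallel, replace=False)
        for j in js:                           # updates computed from snapshot
            g = A[:, j] @ residual
            x[j] = soft_threshold(x[j] - g / col_sq[j], lam / col_sq[j])
    return x

# Toy usage on a small random problem with a sparse ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(shotgun_lasso(A, b, lam=0.1), 2))
```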
In this paper, we are interested in what we term the federated private bandits framework, which combines differential privacy with multi-agent bandit learning. We explore how differential-privacy-based Upper Confidence Bound (UCB) methods can be applied in this framework.
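As a rough single-agent illustration (the paper's federated, multi-agent protocol is not reproduced here), one can privatize UCB by adding Laplace noise to each arm's reward sum before computing the index. The sketch below assumes this simple noise model; practical DP-UCB variants use more careful noise accounting, e.g. tree-based aggregation.

```python
import numpy as np

def private_ucb(arm_means, horizon=5000, epsilon=1.0, seed=0):
    """Minimal single-agent sketch of a differentially private UCB.

    Laplace noise (scale 1/epsilon) is added to each arm's reward sum
    before computing its index. Real DP-UCB algorithms use tighter
    noise accounting, omitted here for brevity.
    """
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # play each arm once
        else:
            noisy_means = (sums + rng.laplace(0, 1 / epsilon, k)) / counts
            bonus = np.sqrt(2 * np.log(t) / counts)       # exploration bonus
            arm = int(np.argmax(noisy_means + bonus))
        reward = float(rng.random() < arm_means[arm])     # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total / horizon

# Average reward should approach the best arm's mean, 0.7.
print(private_ucb([0.3, 0.5, 0.7]))
```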
Change detection (CD) in time series data is a critical problem, as it reveals changes in the underlying generative processes driving the series. Despite the significant attention the problem has received, one important aspect remains unexplored: how to efficiently…
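For readers unfamiliar with CD, a canonical detector (generic, not this paper's method) is CUSUM, which flags a change when accumulated deviations from a nominal mean cross a threshold. The sketch below is purely illustrative.

```python
import numpy as np

def cusum_mean_shift(x, target_mean=0.0, drift=0.5, threshold=5.0):
    """Generic one-sided CUSUM detector for an upward shift in the mean.

    Accumulates deviations above `target_mean + drift` and flags a change
    when the cumulative statistic crosses `threshold`.
    """
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - target_mean - drift))
        if s > threshold:
            return t                     # index at which a change is declared
    return None

# Mean shifts from 0 to 2 at t = 100; detection follows shortly after.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(cusum_mean_shift(series))
```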
Sparsity-based subspace clustering algorithms have attracted significant attention thanks to their excellent performance in practical applications. A prominent example is the sparse subspace clustering (SSC) algorithm by Elhamifar and Vidal, which performs…
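A minimal sketch of the SSC pipeline as it is commonly described: express each point as a sparse combination of the other points (here a Lasso stands in for the $\ell_1$ program), symmetrize the coefficient magnitudes into an affinity matrix, and run spectral clustering on it. Names and parameters below are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, lam=0.01):
    """Minimal sketch of sparse subspace clustering (SSC).

    Each column of X is expressed as a sparse combination of the other
    columns; the coefficient magnitudes define an affinity matrix used
    for spectral clustering.
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)          # exclude the point itself
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                      # symmetric affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# Toy example: points drawn from two 1-D subspaces (lines) in R^3.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
X = np.hstack([np.outer(u, rng.standard_normal(15)),
               np.outer(v, rng.standard_normal(15))])
print(ssc(X, n_clusters=2))
```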