
Sparse Communication via Mixed Distributions

Added by António Farinhas
Publication date: 2021
Research language: English





Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call mixed random variables. Our starting point is a new direct sum base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic (sample-and-project) and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables.
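As a concrete illustration of the extrinsic (sample-and-project) strategy, the sketch below perturbs a score vector with Gaussian noise and projects the result onto the probability simplex with sparsemax, so that samples can land exactly on lower-dimensional faces of the simplex (some coordinates exactly zero). This is only an illustrative NumPy sketch: the function names and the choice of Gaussian perturbations are our assumptions, not the paper's construction.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of a score vector z onto the probability simplex.
    Unlike softmax, the result can put exactly zero mass on some coordinates,
    i.e. it lands on a (possibly lower-dimensional) face of the simplex."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = z_sorted - (cumsum - 1.0) / k > 0   # coordinates kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max       # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

def sample_and_project(scores, noise_scale=1.0, rng=None):
    """Extrinsic sampling sketch: perturb the scores with Gaussian noise and
    project onto the simplex; the sample may be one-hot, dense, or in between."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = scores + noise_scale * rng.standard_normal(len(scores))
    return sparsemax(noisy)

p = sample_and_project(np.array([2.0, 1.0, -0.5]))
print(p, p.sum())   # nonnegative, sums to 1, possibly with exact zeros
```

Repeatedly drawing such samples and recording which face of the simplex each one hits gives an empirical picture of how a mixed random variable splits its mass between vertices, edges, and the interior.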



Related research

We consider the problem of estimating sparse discrete distributions under local differential privacy (LDP) and communication constraints. We characterize the sample complexity for sparse estimation under LDP constraints up to a constant factor and the sample complexity under communication constraints up to a logarithmic factor. Our upper bounds under LDP are based on the Hadamard Response, a private coin scheme that requires only one bit of communication per user. Under communication constraints, we propose public coin schemes based on random hashing functions. Our tight lower bounds are based on the recently proposed method of chi squared contractions.
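To make the estimation setting concrete, here is a minimal NumPy sketch of locally private discrete distribution estimation using plain $k$-ary randomized response. This is only an illustrative baseline: it is not the Hadamard Response or the public-coin hashing schemes described above, it ignores the one-bit communication constraint, and all function names are ours.

```python
import numpy as np

def randomize(x, k, eps, rng):
    """eps-LDP k-ary randomized response: keep the true symbol with probability
    e^eps / (e^eps + k - 1), otherwise report one of the other symbols uniformly."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return x
    other = rng.integers(k - 1)
    return other if other < x else other + 1

def estimate(reports, k, eps):
    """Debias the empirical frequencies of the privatized reports, giving an
    unbiased estimate of the true distribution."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    p_other = 1.0 / (np.exp(eps) + k - 1)
    counts = np.bincount(reports, minlength=k) / len(reports)
    # E[counts_j] = p_keep * p_j + p_other * (1 - p_j); solve for p_j
    return (counts - p_other) / (p_keep - p_other)

rng = np.random.default_rng(0)
k, eps = 8, 1.0
true = np.array([0.5, 0.3, 0.2] + [0.0] * (k - 3))   # a sparse distribution
data = rng.choice(k, size=20000, p=true)
reports = np.array([randomize(x, k, eps, rng) for x in data])
print(np.round(estimate(reports, k, eps), 3))
```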
Exponential families are widely used in machine learning; they include many distributions in continuous and discrete domains (e.g., Gaussian, Dirichlet, Poisson, and categorical distributions via the softmax transformation). Distributions in each of these families have fixed support. In contrast, for finite domains, there has been recent work on sparse alternatives to softmax (e.g., sparsemax, $\alpha$-entmax, and fusedmax) and corresponding losses, which have varying support. This paper expands that line of work in several directions: first, it extends $\Omega$-regularized prediction maps and Fenchel-Young losses to arbitrary domains (possibly countably infinite or continuous). For linearly parametrized families, we show that minimization of Fenchel-Young losses is equivalent to moment matching of the statistics, generalizing a fundamental property of exponential families. When $\Omega$ is a Tsallis negentropy with parameter $\alpha$, we obtain deformed exponential families, which include $\alpha$-entmax and sparsemax ($\alpha = 2$) as particular cases. For quadratic energy functions in continuous domains, the resulting densities are $\beta$-Gaussians, an instance of elliptical distributions that contain as particular cases the Gaussian, biweight, triweight, and Epanechnikov densities, and for which we derive closed-form expressions for the variance, Tsallis entropy, and Fenchel-Young loss. When $\Omega$ is a total variation or Sobolev regularizer, we obtain a continuous version of the fusedmax. Finally, we introduce continuous-domain attention mechanisms, deriving efficient gradient backpropagation algorithms for $\alpha \in \{1, 4/3, 3/2, 2\}$. Using them, we demonstrate our sparse continuous distributions for attention-based audio classification and visual question answering, showing that they allow attending to time intervals and compact regions.
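For the finite-domain transformations mentioned above, $\alpha$-entmax can be computed by bisection on a threshold $\tau$: the output is $p_i = [(\alpha-1) z_i - \tau]_+^{1/(\alpha-1)}$ with $\tau$ chosen so that $p$ sums to one; $\alpha = 2$ recovers sparsemax and $\alpha \to 1$ approaches softmax. The NumPy sketch below is illustrative and uses our own naming, not the authors' code.

```python
import numpy as np

def entmax(z, alpha=1.5, n_iter=50):
    """alpha-entmax by bisection: p_i = [(alpha-1) * z_i - tau]_+ ** (1/(alpha-1)),
    with tau chosen so that p sums to 1.  For alpha > 1 the output can contain
    exact zeros; alpha = 2 gives sparsemax."""
    z = (alpha - 1.0) * np.asarray(z, dtype=float)
    lo, hi = z.max() - 1.0, z.max()            # the optimal tau lies in this interval
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        p = np.maximum(z - tau, 0.0) ** (1.0 / (alpha - 1.0))
        if p.sum() > 1.0:
            lo = tau                            # too much mass: raise the threshold
        else:
            hi = tau
    p = np.maximum(z - tau, 0.0) ** (1.0 / (alpha - 1.0))
    return p / p.sum()                          # tiny renormalization for safety

print(entmax([2.0, 1.0, -0.5], alpha=1.5))
print(entmax([2.0, 1.0, -0.5], alpha=2.0))      # matches sparsemax of the same scores
```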
106 - Tianyi Chen, Bo Ji, Yixin Shi, 2020
Compressing deep neural networks (DNNs) to reduce inference cost is increasingly important for meeting the realistic deployment requirements of various applications. There has been a significant amount of work on network compression, but most methods are heuristic and rule-based, or are difficult to incorporate into varying scenarios. On the other hand, sparse optimization naturally yields the sparse solutions that compression requires, but because sparse optimization has received limited study in stochastic learning, its extension and application to model compression remain largely unexplored. In this work, we propose a model compression framework based on recent progress in sparse stochastic optimization. Compared to existing model compression techniques, our method is effective, requires less additional engineering effort to incorporate into varying applications, and has been demonstrated numerically on benchmark compression tasks. In particular, we achieve up to 7.2x and 2.9x FLOPs reduction at the same level of evaluation accuracy on VGG16 for CIFAR10 and ResNet50 for ImageNet, respectively, compared to the baseline heavy models.
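As a rough sketch of how sparse stochastic optimization produces compression-friendly solutions (a generic proximal-SGD baseline, not the framework proposed in this paper), one can interleave stochastic gradient steps with the $l_1$ proximal operator, which sets small weights exactly to zero. The names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, thresh):
    """Proximal operator of the l1 norm: shrink weights toward zero and set
    those below the threshold exactly to zero, which creates sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

def prox_sgd_step(w, grad, lr=0.05, l1=0.05):
    """One proximal-SGD step: a stochastic gradient step on the data loss,
    followed by the l1 proximal operator with shrinkage lr * l1."""
    return soft_threshold(w - lr * grad, lr * l1)

# Toy example: sparse linear regression with mini-batch gradients.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.0, 0.5]
y = X @ true_w + 0.01 * rng.standard_normal(256)

w = np.zeros(20)
for _ in range(500):
    idx = rng.choice(256, size=32, replace=False)            # mini-batch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    w = prox_sgd_step(w, grad)
print("nonzero weights:", np.flatnonzero(w))
```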
159 - Shuang Liang, Yuanming Shi, 2021
Massive connectivity is a critical challenge of Internet of Things (IoT) networks. In this paper, we consider the grant-free uplink transmission of an IoT network with a multi-antenna base station (BS) and a large number of single-antenna IoT devices. Due to the sporadic nature of IoT devices, we formulate the joint activity detection and channel estimation (JADCE) problem as a group-sparse matrix estimation problem. Although many algorithms have been proposed to solve the JADCE problem, most of them are based on compressive sensing techniques and yield suboptimal solutions. In this paper, we first develop an efficient weighted $l_1$-norm minimization algorithm that approximates the group sparsity better than the existing mixed $l_1/l_2$-norm minimization. Although this achieves enhanced estimation performance in terms of mean squared error (MSE), the weighted $l_1$-norm minimization algorithm is still a convex relaxation of the original group-sparse matrix estimation problem and yields a suboptimal solution. To this end, we further reformulate the JADCE problem as a mixed integer programming (MIP) problem, which can be solved by the branch-and-bound method. As a result, we are able to obtain an optimal solution of the JADCE problem, which can serve as an upper bound for evaluating the effectiveness of existing algorithms. Moreover, we derive the minimum pilot sequence length required to fully recover the estimated matrix in the noiseless scenario. Simulation results show the performance gains of the proposed optimal algorithm over the proposed weighted $l_1$-norm algorithm and the conventional mixed $l_1/l_2$-norm algorithm. Results also show that the proposed algorithms require a shorter pilot sequence than the conventional algorithm to achieve the same estimation performance.
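The group sparsity at the heart of JADCE can be illustrated with the proximal operator of the conventional mixed $l_1/l_2$ (row-wise) norm, which shrinks entire rows of the device-channel matrix and zeroes out the rows of inactive devices. The NumPy sketch below runs a toy ISTA iteration with our own naming and dimensions; it illustrates the conventional relaxation discussed above, not the paper's weighted $l_1$ or MIP formulations.

```python
import numpy as np

def group_soft_threshold(X, thresh):
    """Proximal operator of thresh * sum_i ||X[i, :]||_2: each row is shrunk,
    and rows with norm below the threshold become exactly zero
    (interpreted here as 'device i is inactive')."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thresh / np.maximum(norms, 1e-12), 0.0)
    return scale * X

# Toy ISTA loop for Y = A @ X with a row-sparse X (only a few active devices).
rng = np.random.default_rng(0)
n_dev, pilot_len, n_ant = 50, 20, 8
A = rng.standard_normal((pilot_len, n_dev)) / np.sqrt(pilot_len)   # pilot matrix
X_true = np.zeros((n_dev, n_ant))
active = rng.choice(n_dev, size=3, replace=False)
X_true[active] = rng.standard_normal((3, n_ant))
Y = A @ X_true

X, step, lam = np.zeros((n_dev, n_ant)), 0.1, 0.1
for _ in range(500):
    grad = A.T @ (A @ X - Y)
    X = group_soft_threshold(X - step * grad, step * lam)
print("estimated active devices:", np.flatnonzero(np.linalg.norm(X, axis=1) > 1e-3))
print("true active devices:     ", np.sort(active))
```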
115 - Yuan Yin, Zhewei Wei, 2019
Graph embedding learns low-dimensional representations for the nodes in a graph while effectively preserving the graph structure. Recently, significant progress has been made in this emerging research area. However, several fundamental problems remain open. First, existing methods fail to preserve out-degree distributions on directed graphs. Second, many existing methods employ random-walk-based proximities and thus suffer from conflicting optimization goals on undirected graphs. Finally, existing factorization methods are unable to achieve scalability and non-linearity simultaneously. This paper presents an in-depth study of graph embedding techniques on both directed and undirected graphs. We analyze the fundamental reasons that lead to the distortion of out-degree distributions and to the conflicting optimization goals. We propose transpose proximity, a unified approach that solves both problems. Based on the concept of transpose proximity, we design STRAP, a factorization-based graph embedding algorithm that achieves scalability and non-linearity simultaneously. STRAP makes use of the backward push algorithm to efficiently compute the sparse Personalized PageRank (PPR) as its transpose proximities. By imposing the sparsity constraint, we are able to apply non-linear operations to the proximity matrix and perform efficient matrix factorization to derive the embedding vectors. Finally, we present an extensive experimental study that evaluates the effectiveness of various graph embedding algorithms, and we show that STRAP outperforms the state-of-the-art methods in terms of effectiveness and scalability.
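The backward push routine that STRAP relies on can be sketched as follows: starting from a target node, residual probability mass is pushed backwards along in-edges until every residual drops below a tolerance, yielding a sparse vector of Personalized PageRank contributions toward that target. This is an illustrative Python sketch with assumed data structures (adjacency lists and out-degrees as dicts), not the STRAP implementation.

```python
from collections import defaultdict, deque

def backward_push(in_nbrs, out_deg, target, alpha=0.15, r_max=1e-4):
    """Approximate the PPR contribution pi_v(target) for every source v.
    Residual mass r is pushed backwards along in-edges until every residual is
    below r_max, so the returned estimate p is sparse (only touched nodes appear)."""
    p = defaultdict(float)                 # estimated contributions
    r = defaultdict(float)                 # residual mass still to be pushed
    r[target] = 1.0
    queue = deque([target])
    while queue:
        v = queue.popleft()
        rv = r[v]
        if rv <= r_max:
            continue
        r[v] = 0.0
        p[v] += alpha * rv                 # keep an alpha fraction at v
        for u in in_nbrs.get(v, []):       # push the remainder to in-neighbors
            r[u] += (1.0 - alpha) * rv / out_deg[u]
            if r[u] > r_max:
                queue.append(u)
    return dict(p)

# Tiny directed graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
in_nbrs = {1: [0], 2: [0, 1], 0: [2]}
out_deg = {0: 2, 1: 1, 2: 1}
print(backward_push(in_nbrs, out_deg, target=2))
```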
