
Democratic Source Coding: An Optimal Fixed-Length Quantization Scheme for Distributed Optimization Under Communication Constraints

Published by Rajarshi Saha
Publication date: 2021
Research field: Information engineering
Paper language: English





The communication cost of distributed optimization algorithms is a major bottleneck in their scalability. This work considers a parameter-server setting in which the worker is constrained to communicate information to the server using only $R$ bits per dimension. We show that $\mathbf{democratic}$ $\mathbf{embeddings}$ from random matrix theory are highly effective for designing efficient and optimal vector quantizers that respect this bit budget. The resulting polynomial-complexity source coding schemes are used to design distributed optimization algorithms with convergence rates matching the minimax optimal lower bounds for (i) Smooth and Strongly-Convex objectives with access to an Exact Gradient oracle, as well as (ii) General Convex and Non-Smooth objectives with access to a Noisy Subgradient oracle. We further propose a relaxation of this coding scheme that is nearly minimax optimal. Numerical simulations validate our theoretical claims.
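For intuition, the following is a rough sketch (not the paper's exact construction) of how a democratic-embedding quantizer can be assembled: compute near-uniform-magnitude frame coefficients with a simple truncate-and-project loop in the spirit of Kashin/Lyubarskii-Vershynin representations, quantize each coefficient with a uniform $R$-bit scalar quantizer, and de-embed at the server. The frame redundancy, iteration count, and clipping constant below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np


def tight_frame(n, m, seed=0):
    """Random tight frame U (n x m, U @ U.T = I): first n rows of a random orthogonal matrix."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((m, m)))
    return q[:n, :]


def democratic_coefficients(frame, x, n_iters=20, clip_const=3.0):
    """Frame coefficients a with x ~= frame @ a and small l_inf norm,
    computed by a simple truncate-and-project loop (Kashin-style sketch)."""
    m = frame.shape[1]
    a = np.zeros(m)
    residual = x.copy()
    for _ in range(n_iters):
        b = frame.T @ residual                          # analysis coefficients of the residual
        level = clip_const * np.linalg.norm(residual) / np.sqrt(m)
        a += np.clip(b, -level, level)                  # keep coefficients "democratic"
        residual = x - frame @ a                        # part of x not yet represented
    return a


def quantize(a, bits, dyn_range):
    """Uniform scalar quantizer with 2**bits levels on [-dyn_range, dyn_range]."""
    levels = 2 ** bits
    step = 2.0 * dyn_range / (levels - 1)
    idx = np.round((np.clip(a, -dyn_range, dyn_range) + dyn_range) / step).astype(int)
    return idx, step


# Worker side: embed the vector, then send R-bit indices (plus one scalar for the range).
n, redundancy, R = 128, 2, 4
frame = tight_frame(n, redundancy * n)
x = np.random.default_rng(1).standard_normal(n)
a = democratic_coefficients(frame, x)
dyn_range = np.abs(a).max()
idx, step = quantize(a, R, dyn_range)

# Server side: dequantize the coefficients and de-embed.
a_hat = idx * step - dyn_range
x_hat = frame @ a_hat
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

Note that with frame redundancy $\lambda$ this toy spends $\lambda R$ bits per original dimension plus one scalar for the dynamic range; the paper's scheme is designed to meet the stated per-dimension budget, and its exact construction and analysis differ from this sketch.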




Read also

For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes. However, this decomposition can fall short in the finite bit-length regime, as it requires non-trivial tuning of hand-crafted codes and assumes infinite computational power for decoding. In this work, we propose to jointly learn the encoding and decoding processes using a new discrete variational autoencoder model. By adding noise into the latent codes to simulate the channel during training, we learn to both compress and error-correct given a fixed bit-length and computational budget. We obtain codes that are not only competitive against several separation schemes, but also learn useful robust representations of the data for downstream tasks such as classification. Finally, inference amortization yields an extremely fast neural decoder, almost an order of magnitude faster compared to standard decoding methods based on iterative belief propagation.
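As a toy illustration of the channel-simulation step only (the learned discrete variational autoencoder itself is not implemented here), one can corrupt discrete latent bits with a binary symmetric channel during training so that the decoder is forced to learn error correction alongside compression; the shapes and flip probability below are arbitrary assumptions.

```python
import numpy as np


def simulate_bsc(bits, flip_prob, rng):
    """Binary symmetric channel: flip each latent bit independently with probability flip_prob."""
    flips = rng.random(bits.shape) < flip_prob
    return np.bitwise_xor(bits, flips.astype(bits.dtype))


rng = np.random.default_rng(0)
latent = rng.integers(0, 2, size=(8, 64))              # a batch of 64-bit discrete latent codes
noisy_latent = simulate_bsc(latent, flip_prob=0.1, rng=rng)
# During training, the decoder would be fed noisy_latent instead of latent,
# so compression and error correction are learned jointly for this noise level.
print("empirical bit-flip rate:", np.mean(latent != noisy_latent))
```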
A central issue of distributed computing systems is how to optimally allocate computing and storage resources and design data shuffling strategies such that the total execution time for computing and data shuffling is minimized. This is extremely critical when the computation, storage and communication resources are limited. In this paper, we study the resource allocation and coding scheme for the MapReduce-type framework with limited resources. In particular, we focus on the coded distributed computing (CDC) approach proposed by Li et al. We first extend the asymmetric CDC (ACDC) scheme proposed by Yu et al. to the cascade case where each output function is computed by multiple servers. Then we demonstrate that whether CDC or ACDC is better depends on system parameters (e.g., number of computing servers) and task parameters (e.g., number of input files), implying that neither CDC nor ACDC is optimal. By merging the ideas of CDC and ACDC, we propose a hybrid scheme and show that it can strictly outperform CDC and ACDC. Furthermore, we derive an information-theoretic converse showing that for the MapReduce task using a type of weakly symmetric Reduce assignment, which includes the Reduce assignments of CDC and ACDC as special cases, the hybrid scheme with a corresponding resource allocation strategy is optimal, i.e., achieves the minimum execution time, for an arbitrary number of computing servers and storage memories.
Distributed source coding is the task of encoding an input in the absence of correlated side information that is only available to the decoder. Remarkably, Slepian and Wolf showed in 1973 that an encoder that has no access to the correlated side information can asymptotically achieve the same compression rate as when the side information is available at both the encoder and the decoder. While there is significant prior work on this topic in information theory, practical distributed source coding has been limited to synthetic datasets and specific correlation structures. Here we present a general framework for lossy distributed source coding that is agnostic to the correlation structure and can scale to high dimensions. Rather than relying on hand-crafted source-modeling, our method utilizes a powerful conditional deep generative model to learn the distributed encoder and decoder. We evaluate our method on realistic high-dimensional datasets and show substantial improvements in distributed compression performance.
Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download \emph{functions} of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.
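For reference, the storage-repair-bandwidth tradeoff mentioned above is commonly stated as a cut-set bound; in the usual $(n, k, d)$ regenerating-code notation (file size $B$, per-node storage $\alpha$, and $\beta$ bits downloaded from each of $d$ helper nodes during a repair), it reads as follows (restated from the regenerating-codes literature, not derived here):

```latex
% Cut-set bound on the file size B of an (n, k, d) regenerating code
% with per-node storage \alpha and per-helper repair download \beta:
B \le \sum_{i=0}^{k-1} \min\{\alpha,\ (d - i)\,\beta\}
% The extreme points of the achievable (\alpha, d\beta) curve are the
% minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating points.
```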
We study distributed estimation methods under communication constraints in a distributed version of the nonparametric random design regression model. We derive minimax lower bounds and exhibit methods that attain those bounds. Moreover, we show that adaptive estimation is possible in this setting.
