
Distributed Task Encoding

Published by Christoph Pfister
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





The rate region of the task-encoding problem for two correlated sources is characterized using a novel parametric family of dependence measures. The converse uses a new expression for the $\rho$-th moment of the list size, which is derived using the relative $\alpha$-entropy.
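As a rough illustration of the quantities involved, the sketch below simulates a toy single-source task-encoding scheme: task values are assigned to a fixed number of labels, and the decoder's list is the set of all values sharing the observed label. It estimates the $\rho$-th moment of the list size and prints the Rényi entropy of a chosen order as an example of a parametric entropy measure. The label assignment, source distribution, and parameters are invented for illustration; the dependence measures and the relative $\alpha$-entropy used in the paper are defined there and are not reproduced here.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (in nats), alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def list_size_moment(p, num_labels, rho, rng):
    """Empirical rho-th moment of the decoder's list size when each
    task value is assigned a uniformly random label (a toy scheme)."""
    n = len(p)
    labels = rng.integers(num_labels, size=n)               # encoder: value -> label
    list_sizes = np.bincount(labels, minlength=num_labels)  # |{x : f(x) = m}| per label m
    # E[|L(X)|^rho] = sum_x p(x) * (size of the list for the label of x)^rho
    return float(np.sum(p * list_sizes[labels] ** rho))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(64))   # a toy source distribution on 64 task values
rho = 1.0
print("Renyi entropy of order 1/(1+rho):", renyi_entropy(p, 1.0 / (1.0 + rho)))
print("E[|list|^rho] with 16 labels    :", list_size_moment(p, 16, rho, rng))
```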


Read also

In general coding theory, we often assume that error is observed in transferring or storing encoded symbols, while the process of encoding itself is error-free. Motivated by recent applications of coding theory, in this paper, we consider the case where the process of encoding is distributed and prone to error. We introduce the problem of distributed encoding, comprising $K \in \mathbb{N}$ isolated source nodes and $N \in \mathbb{N}$ encoding nodes. Each source node has one symbol from a finite field and sends it to all encoding nodes. Each encoding node stores an encoded symbol, as a function of the received symbols. However, some of the source nodes are controlled by the adversary and may send different symbols to different encoding nodes. Depending on the number of adversarial nodes, denoted by $\beta \in \mathbb{N}$, and the number of symbols that each one generates, denoted by $v \in \mathbb{N}$, the process of decoding from the encoded symbols could be impossible. Assume that a decoder connects to an arbitrary subset of $t \in \mathbb{N}$ encoding nodes and wants to decode the symbols of the honest nodes correctly, without necessarily identifying the sets of honest and adversarial nodes. In this paper, we study $t^* \in \mathbb{N}$, the minimum such $t$, which is a function of $K$, $N$, $\beta$, and $v$. We show that when the encoding nodes use linear coding, $t^*_{\textrm{linear}} = K + 2\beta(v-1)$ if $N \ge K + 2\beta(v-1)$, and $t^*_{\textrm{linear}} = N$ if $N \le K + 2\beta(v-1)$. In order to achieve $t^*_{\textrm{linear}}$, we use random linear coding and show that in any feasible solution that the decoder finds, the messages of the honest nodes are decoded correctly. For the converse of the fundamental limit, we show that when the adversary behaves in a particular way, it can always confuse the decoder between two feasible solutions that differ in the message of at least one honest node.
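The random-linear-coding ingredient can be illustrated without the adversary: each encoding node stores a random linear combination of the $K$ source symbols over a prime field, and a decoder that connects to $t = K$ nodes with linearly independent coefficient vectors recovers the messages by solving a linear system. The sketch below (toy field size and parameters, all honest sources) only shows this honest-case mechanics; handling adversarial symbols and achieving the $K + 2\beta(v-1)$ threshold requires the scheme from the paper.

```python
import random

P = 97  # a small prime field GF(P); toy parameters below

def solve_mod_p(A, b, p=P):
    """Solve A x = b over GF(p) by Gaussian elimination.
    Assumes A is square with nonzero determinant mod p."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]    # augmented matrix
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)                 # modular inverse of the pivot
        M[col] = [(v * inv) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                factor = M[r][col]
                M[r] = [(v - factor * w) % p for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

K, N = 3, 6
messages = [random.randrange(P) for _ in range(K)]                  # honest source symbols
coeffs = [[random.randrange(P) for _ in range(K)] for _ in range(N)]
stored = [sum(c * m for c, m in zip(row, messages)) % P for row in coeffs]

# Decoder connects to the first t = K encoding nodes (their random coefficient
# vectors are linearly independent with high probability over GF(P)).
t = K
decoded = solve_mod_p([coeffs[i] for i in range(t)], [stored[i] for i in range(t)])
print("recovered:", decoded == messages)
```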
The problem of $X$-secure $T$-colluding symmetric Private Polynomial Computation (PPC) from a coded storage system with $B$ Byzantine and $U$ unresponsive servers is studied in this paper. Specifically, a dataset consisting of $M$ files is stored across $N$ distributed servers according to an $(N, K+X)$ Maximum Distance Separable (MDS) code such that any group of up to $X$ colluding servers cannot learn anything about the data files. A user wishes to privately evaluate one out of a set of candidate polynomial functions over the $M$ files from the system, while guaranteeing that any $T$ colluding servers cannot learn anything about the identity of the desired function and the user cannot learn anything about the $M$ data files beyond the desired polynomial function, in the presence of $B$ Byzantine servers that can send arbitrary responses maliciously to confuse the user and $U$ unresponsive servers that will not respond at all. Two novel symmetric PPC schemes using Lagrange encoding are proposed. Both schemes achieve the same PPC rate $1-\frac{G(K+X-1)+T+2B}{N-U}$, secrecy rate $\frac{G(K+X-1)+T}{N-(G(K+X-1)+T+2B+U)}$, finite field size, and decoding complexity, where $G$ is the maximum degree over all the candidate polynomial functions. In particular, the first scheme handles the general case in which the candidate functions consist of arbitrary polynomials, while the second scheme restricts the candidate functions to a finite-dimensional vector space (or subspace) of polynomials over $\mathbb{F}_p$ but requires less upload cost, query complexity, and server computation complexity. Remarkably, the PPC setup studied in this paper generalizes all the previous MDS-coded PPC setups, and the two degraded schemes strictly outperform the best known schemes in terms of (asymptotic) PPC rate, which is the main concern of PPC schemes.
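A minimal sketch of the Lagrange-encoding step only (not the full PPC protocol): $K$ data symbols and $X$ random masks are viewed as evaluations of a polynomial of degree $K+X-1$ at fixed points, and server $i$ stores its evaluation at a distinct point $\alpha_i$. Any $K+X$ shares determine the polynomial by interpolation, while the $X$ masks hide the data from up to $X$ colluding servers. The field size, evaluation points, and parameters below are arbitrary toy choices; query construction and Byzantine/unresponsive handling are omitted.

```python
import random

P = 257  # toy prime field

def lagrange_eval(xs, ys, x, p=P):
    """Evaluate the unique polynomial through (xs[i], ys[i]) at x, over GF(p)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

K, X, N = 4, 2, 10
data = [random.randrange(P) for _ in range(K)]      # data symbols
masks = [random.randrange(P) for _ in range(X)]     # random masks for X-security
beta = list(range(1, K + X + 1))                    # interpolation points for data + masks
alpha = list(range(K + X + 1, K + X + 1 + N))       # distinct evaluation points (servers)

# Server i stores f(alpha[i]), where f interpolates data + masks at the beta points.
shares = [lagrange_eval(beta, data + masks, a) for a in alpha]

# Any K + X shares reconstruct f, hence the data (evaluate back at the beta points).
subset = random.sample(range(N), K + X)
recovered = [lagrange_eval([alpha[i] for i in subset],
                           [shares[i] for i in subset], b) for b in beta[:K]]
print("recovered:", recovered == data)
```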
Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited ensemble of signals that has a single common information component \cite{Baron:2009vd}. In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance given arbitrary types of common information, covering not only full common information but also a variety of partial common information. The theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require substantial a priori knowledge of the various types of common information in the signal ensemble to enhance the performance over the existing DCS. To deal with this problem, we propose a novel algorithm that can search for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for the case of arbitrarily correlated multi-signal ensembles.
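The correlation structure exploited here can be made concrete with a toy joint-sparsity model: each signal in the ensemble is a sum of a fully common sparse component, a partially common component shared only by a subgroup, and an individual sparse innovation, and each signal is measured separately with its own random matrix. The sketch below only builds this signal/measurement model under invented names and sizes; the GDCS recovery algorithm and the correlation-structure search are as described in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, J = 128, 40, 4          # signal length, measurements per signal, ensemble size

def sparse_vec(support_size):
    """A random vector with `support_size` nonzero Gaussian entries."""
    x = np.zeros(n)
    idx = rng.choice(n, size=support_size, replace=False)
    x[idx] = rng.standard_normal(support_size)
    return x

z_common = sparse_vec(5)       # full common information (shared by all signals)
z_partial = sparse_vec(3)      # partial common information (shared by the first half)
signals = []
for j in range(J):
    x = z_common + (z_partial if j < J // 2 else 0) + sparse_vec(4)  # + innovation
    signals.append(x)

# Independent per-signal measurements: y_j = Phi_j @ x_j (no inter-signal mixing).
Phis = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(J)]
measurements = [Phi @ x for Phi, x in zip(Phis, signals)]
print([y.shape for y in measurements])
```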
This chapter deals with the topic of designing reliable and efficient codes for the storage and retrieval of large quantities of data over storage devices that are prone to failure. For long, the traditional objective has been one of ensuring reliability against data loss while minimizing storage overhead. More recently, a third concern has surfaced, namely the need to efficiently recover from the failure of a single storage unit, corresponding to recovery from the erasure of a single code symbol. We explain here how coding theory has evolved to tackle this fresh challenge.
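As the simplest possible instance of recovering from the erasure of a single code symbol, the sketch below uses one XOR parity block across $k$ data blocks (a RAID-5-style toy): any single erased block is rebuilt from the survivors. The chapter's constructions (e.g. codes optimized for repair bandwidth or locality) are considerably richer; this only makes the single-symbol-repair objective concrete.

```python
import os
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

k, block_size = 4, 16
data = [os.urandom(block_size) for _ in range(k)]   # k data blocks
parity = xor_blocks(data)                           # one parity block
stripe = data + [parity]                            # stored across k + 1 devices

erased = 2                                          # any single block fails
survivors = [b for i, b in enumerate(stripe) if i != erased]
rebuilt = xor_blocks(survivors)                     # XOR of the survivors restores it
print("repaired:", rebuilt == stripe[erased])
```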
The goal of threshold group testing is to identify up to $d$ defective items among a population of $n$ items, where $d$ is usually much smaller than $n$. A test is positive if it has at least $u$ defective items and negative otherwise. Our objective is to identify defective items in time sublinear in the number of items, e.g., $\mathrm{poly}(d, \ln{n})$, using as few tests as possible. In this paper, we reduce the number of tests to $O\left( h \times \frac{d^2 \ln^2{n}}{\mathsf{W}^2(d \ln{n})} \right)$ and the decoding time to $O\left( \mathrm{dec}_0 \times h \right)$, where $\mathrm{dec}_0 = O\left( \frac{d^{3.57} \ln^{6.26}{n}}{\mathsf{W}^{6.26}(d \ln{n})} \right) + O\left( \frac{d^6 \ln^4{n}}{\mathsf{W}^4(d \ln{n})} \right)$, $h = O\left( \frac{d_0^2 \ln{\frac{n}{d_0}}}{(1-p)^2} \right)$, $d_0 = \max\{u, d - u\}$, $p \in [0, 1)$, and $\mathsf{W}(x) = \Theta\left( \ln{x} - \ln{\ln{x}} \right)$. If the number of tests is increased to $O\left( h \times \frac{d^2 \ln^3{n}}{\mathsf{W}^2(d \ln{n})} \right)$, the decoding complexity is reduced to $O\left( \mathrm{dec}_1 \times h \right)$, where $\mathrm{dec}_1 = \max\left\{ \frac{d^2 \ln^3{n}}{\mathsf{W}^2(d \ln{n})}, \frac{ud \ln^4{n}}{\mathsf{W}^3(d \ln{n})} \right\}$. Moreover, our proposed scheme is capable of handling errors in test outcomes.
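A short sketch of the threshold-testing model itself (not the paper's test design or decoder): a pool is positive exactly when it contains at least $u$ defective items, and an outcome may be flipped with some probability to mimic the noisy setting mentioned at the end. All parameter values below are toy choices.

```python
import random

def threshold_test(pool, defectives, u, flip_prob=0.0):
    """Positive iff the pool contains at least u defectives; the outcome is
    flipped with probability flip_prob to model erroneous test results."""
    outcome = len(pool & defectives) >= u
    if random.random() < flip_prob:
        outcome = not outcome
    return outcome

n, d, u = 1000, 6, 2
items = range(n)
defectives = set(random.sample(items, d))

# A few random pools, each of size about n/d, tested under the threshold-u rule.
for _ in range(5):
    pool = set(random.sample(items, n // d))
    print(threshold_test(pool, defectives, u, flip_prob=0.05))
```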
