
Error Correction for Index Coding With Coded Side Information

Added by Eimear Byrne
Publication date: 2015
Language: English





Index coding is a source coding problem in which a broadcaster seeks to meet the different demands of several users, each of whom is assumed to have some prior information on the data held by the sender. If the sender knows its clients' requests and their side-information sets, then the number of packet transmissions required to satisfy all users' demands can be greatly reduced by encoding the data before sending. The collection of side-information indices, together with the indices of the requested data, is described as an instance of the index coding with side-information (ICSI) problem. The encoding function is called the index code of the instance, and the number of transmissions employed by the code is referred to as its length. The main ICSI problem is to determine the optimal length of an index code for an instance. As this number is hard to compute, bounds approximating it are sought, as are algorithms to compute efficient index codes. Two interesting generalizations of the problem that have appeared in the literature are the subject of this work. The first is index coding with coded side information, in which linear combinations of the source data are both requested by users and held as their side information. The second is the introduction of error correction into the problem, in which the broadcast channel is subject to noise. In this paper we characterize the optimal length of a scalar or vector linear index code with coded side information (ICCSI) over a finite field in terms of a generalized min-rank, and give bounds on this number based on constructions of random codes for an arbitrary instance. We furthermore consider the length of an optimal error-correcting code for an instance of the ICCSI problem and obtain bounds on this number, both for the Hamming metric and for rank-metric errors. We describe decoding algorithms for both categories of errors.
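To make the min-rank characterization concrete, here is a minimal Python sketch (our own illustrative example, not this paper's generalized min-rank for coded side information) that brute-forces the min-rank of a small classical ICSI instance over GF(2): user i demands message i and already knows the messages indexed by side_info[i], and the optimal scalar linear index code length is the smallest GF(2) rank of a matrix whose diagonal entries are 1 and whose off-diagonal entries are nonzero only in side-information positions.

from itertools import product

def gf2_rank(rows):
    # Rank over GF(2) of a binary matrix given as a list of row bitmasks.
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot  # lowest set bit of the pivot row
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def min_rank(side_info):
    # Exhaustive search over all fittings of the side-information pattern;
    # exponential in the number of side-information entries, so toy instances only.
    n = len(side_info)
    free = [(i, j) for i in range(n) for j in side_info[i]]
    best = n
    for bits in product((0, 1), repeat=len(free)):
        rows = [1 << i for i in range(n)]  # diagonal entries fixed to 1
        for (i, j), b in zip(free, bits):
            if b:
                rows[i] |= 1 << j
        best = min(best, gf2_rank(rows))
    return best

# Three users, each knowing the other two messages: a single broadcast
# x1 + x2 + x3 satisfies everyone, and indeed the min-rank is 1.
print(min_rank([{1, 2}, {0, 2}, {0, 1}]))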



Related research

This letter investigates a new class of index coding problems. One sender broadcasts packets to multiple users, each of whom desires a subset of the packets, by exploiting the users' prior knowledge of linear combinations of the packets. We refer to this class of problems as index coding with coded side-information. Our aim is to characterize the minimum index code length that the sender needs to transmit to simultaneously satisfy all user requests. We show that the optimal binary vector index code length is equal to the minimum rank (minrank) of a matrix whose elements consist of the sets of desired packet indices and side-information encoding matrices. This is the natural extension of matrix minrank in the presence of coded side information. Using the derived expression, we propose a greedy randomized algorithm to minimize the rank of the derived matrix.
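The letter's greedy randomized algorithm is not reproduced here; the sketch below is only an assumption of ours about the general shape of such a heuristic: some matrix entries are fixed by the instance, the remaining positions are free, and we search for an assignment of the free entries that makes the GF(2) rank small. It reuses the gf2_rank helper from the previous sketch, and constructing the matrix from the desired packet indices and side-information encoding matrices is not shown.

import random

def rank_with_choice(fixed_rows, choice):
    # GF(2) rank after adding the chosen free entries to the fixed part.
    rows = list(fixed_rows)
    for (i, j), bit in choice.items():
        if bit:
            rows[i] |= 1 << j
    return gf2_rank(rows)  # gf2_rank as defined in the earlier sketch

def greedy_randomized_min_rank(fixed_rows, free_positions, restarts=50, seed=0):
    # Randomly fill the free entries, then greedily flip any single entry
    # that lowers the rank; keep the best result over several restarts.
    rng = random.Random(seed)
    best = gf2_rank(fixed_rows)  # baseline: all free entries set to zero
    for _ in range(restarts):
        choice = {pos: rng.randint(0, 1) for pos in free_positions}
        current = rank_with_choice(fixed_rows, choice)
        improved = True
        while improved:
            improved = False
            for pos in free_positions:
                choice[pos] ^= 1
                trial = rank_with_choice(fixed_rows, choice)
                if trial < current:
                    current, improved = trial, True
                else:
                    choice[pos] ^= 1  # undo the flip
        best = min(best, current)
    return best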
We consider linear network error correction (LNEC) coding when errors may occur on the edges of a communication network whose topology is known. In this paper, we first revisit and explore the framework of LNEC coding, and then unify two well-known LNEC coding approaches. Furthermore, by developing a graph-theoretic approach to the framework of LNEC coding, we obtain a significantly enhanced characterization of the error correction capability of LNEC codes in terms of the minimum distances at the sink nodes. In LNEC coding, the minimum required field size for the existence of LNEC codes, in particular LNEC maximum distance separable (MDS) codes, which form an important class of optimal codes, is an open problem not only of theoretical interest but also of practical importance, because it is closely related to the implementation of the coding scheme in terms of computational complexity and storage requirements. By applying the graph-theoretic approach, we obtain an improved upper bound on the minimum required field size. The improvement over the existing results is in general significant. The improved upper bound, which is graph-theoretic, depends only on the network topology and the required error correction capability, not on a specific code construction. However, this bound is not given in an explicit form. We thus develop an efficient algorithm that can compute the bound in linear time. In developing the upper bound and the efficient algorithm for computing it, various graph-theoretic concepts are introduced. These concepts appear to be of fundamental interest in graph theory and may have further applications in graph theory and beyond.
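As background for the role of sink-node minimum distances, the sketch below computes per-sink min-cuts on a small assumed unit-capacity network and evaluates the classical network Singleton-type bound (minimum distance at sink t at most min-cut minus rate plus one); the example network, the rate k, and the bound quoted are illustrative background, not the improved field-size bound or the linear-time algorithm of this paper.

import networkx as nx

# Assumed toy network: source s, relays a, b, c, and two sinks t1, t2.
G = nx.DiGraph()
edges = [("s", "a"), ("s", "b"), ("s", "c"),
         ("a", "t1"), ("b", "t1"), ("c", "t1"),
         ("a", "t2"), ("b", "t2"), ("c", "t2")]
G.add_edges_from(edges, capacity=1)  # unit capacities: min-cut = edge-disjoint paths

k = 2  # assumed number of source symbols transmitted per generation
for sink in ("t1", "t2"):
    cut = int(nx.maximum_flow_value(G, "s", sink))
    # Classical Singleton-type bound for network error correction at this sink:
    # minimum distance <= min-cut - k + 1.
    print(sink, "min-cut =", cut, " distance bound =", cut - k + 1)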
This paper focuses on the structural properties of test channels of Wyner's operational information rate distortion function (RDF), $\overline{R}(\Delta_X)$, of a tuple of multivariate correlated, jointly independent and identically distributed Gaussian random variables (RVs), $\{X_t, Y_t\}_{t=1}^{\infty}$, $X_t: \Omega \rightarrow \mathbb{R}^{n_x}$, $Y_t: \Omega \rightarrow \mathbb{R}^{n_y}$, with average mean-square error at the decoder, $\frac{1}{n}\mathbf{E}\sum_{t=1}^{n}\|X_t - \widehat{X}_t\|^2 \leq \Delta_X$, when $\{Y_t\}_{t=1}^{\infty}$ is the side information available to the decoder only. We construct optimal test channel realizations which achieve the informational RDF, $\overline{R}(\Delta_X) \triangleq \inf_{\mathcal{M}(\Delta_X)} I(X;Z|Y)$, where $\mathcal{M}(\Delta_X)$ is the set of auxiliary RVs $Z$ such that $\mathbf{P}_{Z|X,Y} = \mathbf{P}_{Z|X}$, $\widehat{X} = f(Y,Z)$, and $\mathbf{E}\{\|X - \widehat{X}\|^2\} \leq \Delta_X$. We show the following fundamental structural properties: (1) optimal test channel realizations that achieve the RDF $\overline{R}(\Delta_X)$ satisfy the conditional independence $\mathbf{P}_{X|\widehat{X},Y,Z} = \mathbf{P}_{X|\widehat{X},Y} = \mathbf{P}_{X|\widehat{X}}$ and $\mathbf{E}\big\{X \,\big|\, \widehat{X}, Y, Z\big\} = \mathbf{E}\big\{X \,\big|\, \widehat{X}\big\} = \widehat{X}$; and (2) similarly for the conditional RDF, $R_{X|Y}(\Delta_X) \triangleq \inf_{\mathbf{P}_{\widehat{X}|X,Y}:\, \mathbf{E}\{\|X - \widehat{X}\|^2\} \leq \Delta_X} I(X; \widehat{X}|Y)$, when $\{Y_t\}_{t=1}^{\infty}$ is available to both the encoder and decoder, and the equality $\overline{R}(\Delta_X) = R_{X|Y}(\Delta_X)$ holds.
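For orientation, the scalar special case of this equality is the well-known Gaussian Wyner-Ziv result, quoted here as standard background rather than taken from the paper: for jointly Gaussian sources under squared error, side information at the decoder only incurs no rate loss, and

\[
\overline{R}(\Delta_X) \;=\; R_{X|Y}(\Delta_X) \;=\; \max\!\left\{0,\ \tfrac{1}{2}\log\frac{\sigma_{X|Y}^{2}}{\Delta_X}\right\},
\]

where $\sigma_{X|Y}^{2}$ is the conditional variance of $X$ given $Y$.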
An encoder, subject to a rate constraint, wishes to describe a Gaussian source under squared error distortion. The decoder, besides receiving the encoder's description, also observes side information consisting of the uncompressed source symbols subject to slow fading and noise. The decoder knows the fading realization, but the encoder knows only its distribution. The rate-distortion function that simultaneously satisfies the distortion constraints for all fading states was derived by Heegard and Berger. A layered encoding strategy is considered in which each codeword layer targets a given fading state. When the side-information channel has two discrete fading states, the expected distortion is minimized by optimally allocating the encoding rate between the two codeword layers. For multiple fading states, the minimum expected distortion is formulated as the solution of a convex optimization problem with linearly many variables and constraints. Through a limiting process on the primal and dual solutions, it is shown that single-layer rate allocation is optimal when the fading probability density function is continuous and quasiconcave (e.g., Rayleigh, Rician, Nakagami, and log-normal). In particular, under Rayleigh fading, the optimal single codeword layer targets the least favorable state, as if the side information were absent.
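As a toy numerical illustration of the two-layer trade-off (our own simplified model, not the Heegard-Berger expressions used in the paper), the sketch below assumes exponential distortion-rate curves var * 2**(-2*rate) per state and an assumed efficiency factor ALPHA for how useful base-layer bits are to the strong-side-information decoder, then grid-searches the rate split; the resulting one-dimensional objective is convex, echoing the convex formulation mentioned above.

import numpy as np

P_WEAK, P_STRONG = 0.6, 0.4   # assumed probabilities of the two fading states
V_WEAK, V_STRONG = 1.0, 0.25  # assumed residual variances seen by each state
ALPHA = 0.3                   # assumed usefulness of base-layer bits in the strong state

def expected_distortion(r1, total_rate):
    # Layer 1 (rate r1) targets the weak state; layer 2 (rate total_rate - r1)
    # refines the reconstruction for the strong state only.
    r2 = total_rate - r1
    d_weak = V_WEAK * 2.0 ** (-2.0 * r1)
    d_strong = V_STRONG * 2.0 ** (-2.0 * (ALPHA * r1 + r2))
    return P_WEAK * d_weak + P_STRONG * d_strong

def best_split(total_rate, steps=2001):
    # Grid search over the layer-1 rate.
    grid = np.linspace(0.0, total_rate, steps)
    values = np.array([expected_distortion(r1, total_rate) for r1 in grid])
    i = int(values.argmin())
    return grid[i], values[i]

r1_opt, d_opt = best_split(total_rate=2.0)
print(f"layer-1 rate {r1_opt:.3f} of 2.0 bits, expected distortion {d_opt:.4f}")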
Decreasing transistor sizes and lower voltage swings cause two distinct problems for communication in integrated circuits. First, decreasing inter-wire spacing increases interline capacitive coupling, which adversely affects transmission energy and delay. Second, lower voltage swings render the transmission susceptible to various noise sources. Coding can be used to address both these problems. So-called crosstalk-avoidance codes mitigate capacitive coupling, and traditional error-correction codes introduce resilience against channel errors. Unfortunately, crosstalk-avoidance and error-correction codes cannot be combined in a straightforward manner. On the one hand, crosstalk-avoidance encoding followed by error-correction encoding destroys the crosstalk-avoidance property. On the other hand, error-correction encoding followed by crosstalk-avoidance encoding causes the crosstalk-avoidance decoder to fail in the presence of errors. Existing approaches circumvent this difficulty by using additional bus wires to protect the parities generated from the output of the error-correction encoder, and are therefore inefficient. In this work we propose a novel joint crosstalk-avoidance and error-correction coding and decoding scheme that provides higher bus transmission rates compared to existing approaches. Our joint approach carefully embeds the parities such that the crosstalk-avoidance property is preserved. We analyze the rate and minimum distance of the proposed scheme. We also provide a density evolution analysis and predict iterative decoding thresholds for reliable communication under random bus erasures. This density evolution analysis is nonstandard, since the crosstalk-avoidance constraints are inherently nonlinear.
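For readers unfamiliar with crosstalk-avoidance codes, the sketch below enumerates one well-known family, forbidden-pattern-free codewords (no 010 or 101 pattern across adjacent wires, which limits worst-case adjacent-wire coupling); it is a generic illustration only, not the joint crosstalk-avoidance and error-correction construction proposed in this work.

from itertools import product

def is_forbidden_pattern_free(word):
    # True if the binary tuple contains neither 010 nor 101 on adjacent wires.
    return all(word[i:i + 3] not in ((0, 1, 0), (1, 0, 1))
               for i in range(len(word) - 2))

def fpf_codebook(n_wires):
    # Enumerate all forbidden-pattern-free codewords (exponential; small n only).
    return [w for w in product((0, 1), repeat=n_wires) if is_forbidden_pattern_free(w)]

book = fpf_codebook(5)
# 16 of the 32 length-5 words survive, so 5 wires can carry 4 data bits
# per transfer while keeping the crosstalk-avoidance property.
print(len(book), "codewords ->", len(book).bit_length() - 1, "data bits per transfer")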