
Refined Upper Bounds on Stopping Redundancy of Binary Linear Codes

Added by Yauhen Yakimenka
Publication date: 2014
Language: English





The $l$-th stopping redundancy $\rho_l(\mathcal{C})$ of a binary $[n, k, d]$ code $\mathcal{C}$, $1 \le l \le d$, is defined as the minimum number of rows in a parity-check matrix of $\mathcal{C}$ such that the smallest stopping set has size at least $l$. The stopping redundancy $\rho(\mathcal{C})$ is defined as $\rho_d(\mathcal{C})$. In this work, we improve on the probabilistic analysis of stopping redundancy proposed by Han, Siegel and Vardy, which yields the best bounds known to date. In our approach, we judiciously select the first few rows of the parity-check matrix and then continue with the probabilistic method. Using similar techniques, we also improve on the best known bounds on $\rho_l(\mathcal{C})$ for $1 \le l \le d$. Our approach is compared to existing methods by numerical computations.
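To make the definition concrete: a stopping set of a parity-check matrix is a set of column positions such that no row, restricted to those positions, has Hamming weight exactly one. The Python sketch below is our own illustration (not from the paper); it finds the smallest non-empty stopping set by brute force, using a parity-check matrix of the $[7, 4, 3]$ Hamming code as a toy example.

```python
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set of H if no row of H, restricted to the
    columns in S, has Hamming weight exactly one."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def smallest_stopping_set_size(H, n):
    """Exhaustive search for the smallest non-empty stopping set.
    Exponential in n -- usable only for toy codes."""
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None  # no non-empty stopping set exists

# Parity-check matrix of the [7, 4, 3] Hamming code.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
print(smallest_stopping_set_size(H, 7))  # 3, matching d = 3
```

For this small matrix the smallest stopping set already reaches size $d = 3$, so no redundant rows are needed; for longer codes the brute-force search is infeasible, which is why bounds of the kind derived in the paper matter.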



Related research

Michael B. Baer (2007)
This paper presents new lower and upper bounds for the compression rate of binary prefix codes optimized over memoryless sources according to various nonlinear codeword length objectives. Like the best-known redundancy bounds for minimum average redundancy coding - Huffman coding - these are in terms of a form of entropy and/or the probability of an input symbol, often the most probable one. The bounds here, some of which are tight, improve on known bounds of the form $L \in [H, H+1)$, where $H$ is some form of entropy in bits (or, in the case of redundancy objectives, 0) and $L$ is the length objective, also in bits. The objectives explored here include exponential-average length, maximum pointwise redundancy, and exponential-average pointwise redundancy (also called $d$-th exponential redundancy). The first of these relates to various problems involving queueing, uncertainty, and lossless communications; the second relates to problems involving Shannon coding and universal modeling. For these two objectives, we also explore the related problem of necessary and sufficient conditions for the shortest codeword of a code having a specific length.
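As a concrete instance of the classical bound cited above, the following sketch (our own illustration; helper names are ours) builds a binary Huffman code for a small memoryless source and checks that the average length $L$ satisfies $H \le L < H + 1$.

```python
import heapq
from math import log2

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given
    probabilities. Heap entries carry a unique counter so that
    ties never fall through to comparing the leaf lists."""
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            lengths[leaf] += 1  # merged subtrees sink one level deeper
        heapq.heappush(heap, (p1 + p2, counter, leaves1 + leaves2))
        counter += 1
    return lengths

probs = [0.5, 0.25, 0.15, 0.1]
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * log2(p) for p in probs)
assert H <= L < H + 1  # L lies in [H, H + 1)
print(f"H = {H:.3f}, L = {L:.3f}")  # H = 1.743, L = 1.750
```

The objectives studied in the paper replace the average length with exponential and pointwise-redundancy criteria, but this $[H, H+1)$ form is the baseline being refined.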
In this paper, we revisit the problem of finding the longest systematic length $k$ for a linear minimum storage regenerating (MSR) code with optimal repair of only the systematic part, for a given per-node storage capacity $l$ and an arbitrary number of parity nodes $r$. We study the problem through a geometric analysis of linear subspaces and operators. First, a simple quadratic bound is given, which implies that $k = r + 2$ is the largest number of systematic nodes in the \emph{scalar} scenario. Second, an $r$-based-log bound is derived, which is superior to the base-$2$ logarithmic upper bound in prior work. Finally, an explicit upper bound depending on the value of $\frac{r^2}{l}$ is introduced, which further extends the corresponding result in the literature.
Stopping sets play a crucial role in failure events of iterative decoders over a binary erasure channel (BEC). The $\ell$-th stopping redundancy is the minimum number of rows in a parity-check matrix of a code such that the matrix contains no stopping sets of size up to $\ell$. In this work, a notion of coverable stopping sets is defined. In order to achieve maximum-likelihood performance under iterative decoding over the BEC, the parity-check matrix should contain no coverable stopping sets of size $\ell$, for $1 \le \ell \le n-k$, where $n$ is the code length and $k$ is the code dimension. By estimating the number of coverable stopping sets, we obtain upper bounds on the $\ell$-th stopping redundancy, $1 \le \ell \le n-k$. The bounds are derived for both specific codes and code ensembles. In the range $1 \le \ell \le d-1$, for specific codes, the new bounds improve on the results in the literature. Numerical calculations are also presented.
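To illustrate the covering mechanism: a row covers a set $S$ exactly when its restriction to $S$ has weight one, which is what disqualifies $S$ from being a stopping set. The greedy sketch below is our own illustration (the paper's bounds come from counting arguments, not from this greedy procedure); `pool` would typically hold dual codewords of the code.

```python
from itertools import combinations

def covers(row, S):
    """A row covers S if its restriction to S has weight exactly one."""
    return sum(row[j] for j in S) == 1

def greedy_row_count(pool, n, ell):
    """Greedily pick rows from `pool` until every non-empty set of
    size <= ell is covered. The number of rows picked is then an
    upper bound on the ell-th stopping redundancy achievable from
    this pool. Enumeration is exponential: toy-sized codes only."""
    alive = [S for size in range(1, ell + 1)
             for S in combinations(range(n), size)]
    chosen = []
    while alive:
        best = max(pool, key=lambda r: sum(covers(r, S) for S in alive))
        if not any(covers(best, S) for S in alive):
            return None  # pool cannot cover the remaining sets
        chosen.append(best)
        alive = [S for S in alive if not covers(best, S)]
    return len(chosen)
```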
We propose a novel soft-aided iterative decoding algorithm for product codes (PCs). The proposed algorithm, named iterative bounded distance decoding with combined reliability (iBDD-CR), enhances the conventional iterative bounded distance decoding (iBDD) of PCs by exploiting some level of soft information. In particular, iBDD-CR can be seen as a modification of iBDD where the hard decisions of the row and column decoders are made based on a reliability estimate of the BDD outputs. The reliability estimates are derived using extrinsic message passing for generalized low-density parity-check (GLDPC) ensembles, which encompass PCs. We perform a density evolution analysis of iBDD-CR for transmission over the additive white Gaussian noise channel for the GLDPC ensemble. We consider both binary transmission and bit-interleaved coded modulation with quadrature amplitude modulation. We show that iBDD-CR achieves performance gains of up to $0.51$ dB compared to iBDD with the same internal decoder data flow. This makes the algorithm an attractive solution for very high-throughput applications such as fiber-optic communications.
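The key modification can be pictured with the decision rule below. This is a hedged sketch of the general idea only: the exact iBDD-CR combining function and its weights are derived in the paper via density evolution, and the names `mu_bdd`, `l_channel`, and `w` are our own.

```python
def combined_decision(mu_bdd, l_channel, w):
    """Hard decision from a BDD outcome combined with channel reliability.
    mu_bdd in {-1, 0, +1}: bipolar BDD decision, 0 when BDD fails;
    l_channel: channel log-likelihood ratio; w: a scaling weight.
    Illustrative flavour only, not the exact iBDD-CR rule."""
    return +1 if w * mu_bdd + l_channel >= 0 else -1
```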
We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu = 2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu = 3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell\times\ell$ binary kernels such that polar codes constructed from these kernels achieve a scaling exponent $\mu(\ell)$ that tends to the optimal value of $2$ as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and $2$. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$.
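For intuition on the conventional $2\times 2$ kernel discussed above: on the BEC, one polarization step maps a channel with erasure probability $z$ to a degraded channel $2z - z^2$ and an upgraded channel $z^2$. The sketch below (our own illustration) tracks these recursions and reports the fraction of nearly noiseless synthetic channels, which tends to the capacity $1 - z$ as the number of levels grows.

```python
def polarize_bec(z, levels):
    """Evolve BEC erasure probabilities through `levels` steps of the
    2x2 kernel: each channel with erasure rate x splits into a
    degraded channel 2x - x**2 and an upgraded channel x**2."""
    chans = [z]
    for _ in range(levels):
        chans = [f(x) for x in chans
                 for f in (lambda x: 2 * x - x * x, lambda x: x * x)]
    return chans

chans = polarize_bec(0.5, 10)        # 2**10 = 1024 synthetic channels
good = sum(c < 1e-3 for c in chans)  # nearly noiseless channels
print(good / len(chans))             # tends to capacity 0.5 as levels grow
```

The slow convergence of this fraction toward capacity is what the scaling exponent $\mu = 3.63$ quantifies for the $2\times 2$ kernel; the large kernels constructed in the paper close the gap to the optimal $\mu = 2$.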
