
Projection decoding of some binary optimal linear codes of lengths 36 and 40

 Added by Jon-Lark Kim
 Publication date 2019
Research language: English





Practically good error-correcting codes should have good parameters and efficient decoding algorithms. Some algebraically defined good codes, such as cyclic codes, Reed-Solomon codes, and Reed-Muller codes, have nice decoding algorithms. However, many optimal linear codes have no efficient decoding algorithm other than general syndrome decoding, which requires a lot of memory. It is therefore natural to ask which optimal linear codes admit an efficient decoding algorithm. We show that two binary optimal $[36,19,8]$ linear codes and two binary optimal $[40,22,8]$ codes have an efficient decoding algorithm; no efficient decoding algorithm was previously known for binary optimal $[36,19,8]$ and $[40,22,8]$ codes. We project them onto the much shorter linear $[9,5,4]$ and $[10,6,4]$ codes over $GF(4)$, respectively. This decoding algorithm, called \emph{projection decoding}, can correct errors of weight up to 3. These $[36,19,8]$ and $[40,22,8]$ codes have more codewords than any optimal self-dual $[36,18,8]$ and $[40,20,8]$ codes of the same length and minimum weight, respectively, implying that these codes are more practical.
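
To make the projection step concrete, here is a small, hypothetical sketch in Python. It groups the 36 binary coordinates into 9 blocks of 4 bits and maps each block to one GF(4) symbol; the particular block-to-symbol map, the function names, and the encoding of GF(4) symbols as bit pairs are illustrative assumptions, not the construction used in the paper.

```python
# A hypothetical sketch of the projection step only. The block-to-symbol map
# below is an assumption for illustration; the paper defines its own projection
# onto the [9,5,4] code over GF(4). GF(4) symbols are written as bit pairs
# (a, b) standing for a + b*w with w^2 = w + 1.
import numpy as np

def project(binary_word, block=4):
    """Map a binary word of length 4*n to n GF(4) symbols (illustrative map)."""
    assert len(binary_word) % block == 0
    symbols = []
    for i in range(0, len(binary_word), block):
        b0, b1, b2, b3 = binary_word[i:i + block]
        symbols.append((b0 ^ b2, b1 ^ b3))  # assumed pairing of the four bits
    return symbols

# A weight-3 error in the binary [36,19,8] code disturbs at most 3 of the 9
# projected GF(4) symbols; projection decoding exploits this structure
# (the GF(4)-side decoding itself is not reproduced here).
sent = np.zeros(36, dtype=int)
received = sent.copy()
received[[2, 17, 30]] ^= 1
print(sum(s != t for s, t in zip(project(sent), project(received))))  # prints 3
```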



Related research


A framework for linear-programming (LP) decoding of nonbinary linear codes over rings is developed. This framework facilitates linear-programming based reception for coded modulation systems which use direct modulation mapping of coded symbols. It is proved that the resulting LP decoder has the maximum-likelihood certificate property. It is also shown that the decoder output is the lowest cost pseudocodeword. Equivalence between pseudocodewords of the linear program and pseudocodewords of graph covers is proved. It is also proved that if the modulator-channel combination satisfies a particular symmetry condition, the codeword error rate performance is independent of the transmitted codeword. Two alternative polytopes for use with linear-programming decoding are studied, and it is shown that for many classes of codes these polytopes yield a complexity advantage for decoding. These polytope representations lead to polynomial-time decoders for a wide variety of classical nonbinary linear codes. LP decoding performance is illustrated for the [11,6] ternary Golay code with ternary PSK modulation over AWGN, and in this case it is shown that the performance of the LP decoder is comparable to codeword-error-rate-optimum hard-decision based decoding. LP decoding is also simulated for medium-length ternary and quaternary LDPC codes with corresponding PSK modulations over AWGN.
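
As a rough illustration of the LP-decoding idea, the sketch below builds Feldman-style parity-polytope constraints for the binary $[7,4,3]$ Hamming code and solves the relaxation with scipy. This is a simplification: the framework above handles nonbinary codes over rings and coded modulation, which this toy does not, and the cost convention and names are assumptions for the example.

```python
# A minimal sketch of LP decoding for a binary code (illustrative only).
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the [7,4,3] Hamming code (rows = checks, columns = bits).
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def lp_decode(gamma):
    """gamma[i] = LLR-style cost of deciding bit i = 1; returns the relaxed solution."""
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        # One inequality per odd-sized subset S of the check's neighborhood:
        # sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1
        for r in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, r):
                a = np.zeros(n)
                a[list(nbrs)] = -1.0
                a[list(S)] = 1.0
                A_ub.append(a)
                b_ub.append(len(S) - 1)
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * n)
    return res.x

# Costs favouring the all-zero codeword except one unreliable position;
# an integral solution is the ML codeword (the certificate property above).
gamma = np.array([1.0, 0.8, -0.5, 0.9, 1.1, 0.7, 1.2])
print(np.round(lp_decode(gamma), 3))
```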
We prove that, for the binary erasure channel (BEC), the polar-coding paradigm gives rise to codes that not only approach the Shannon limit but do so under the best possible scaling of their block length as a function of the gap to capacity. This result exhibits the first known family of binary codes that attain both optimal scaling and quasi-linear complexity of encoding and decoding. Our proof is based on the construction and analysis of binary polar codes with large kernels. When communicating reliably at rates within $\varepsilon > 0$ of capacity, the code length $n$ often scales as $O(1/\varepsilon^{\mu})$, where the constant $\mu$ is called the scaling exponent. It is known that the optimal scaling exponent is $\mu=2$, and it is achieved by random linear codes. The scaling exponent of conventional polar codes (based on the $2\times 2$ kernel) on the BEC is $\mu=3.63$. This falls far short of the optimal scaling guaranteed by random codes. Our main contribution is a rigorous proof of the following result: for the BEC, there exist $\ell\times\ell$ binary kernels, such that polar codes constructed from these kernels achieve scaling exponent $\mu(\ell)$ that tends to the optimal value of $2$ as $\ell$ grows. We furthermore characterize precisely how large $\ell$ needs to be as a function of the gap between $\mu(\ell)$ and $2$. The resulting binary codes maintain the recursive structure of conventional polar codes, and thereby achieve construction complexity $O(n)$ and encoding/decoding complexity $O(n\log n)$.
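
For intuition about polarization on the BEC, the sketch below tracks the erasure probabilities of the synthetic channels for the conventional $2\times 2$ kernel, i.e. the baseline with scaling exponent $\mu = 3.63$; the large $\ell\times\ell$ kernels of the result above are not constructed here, and the names are illustrative.

```python
# A minimal sketch of channel polarization on the BEC with the 2x2 kernel.
def polarize(eps, levels):
    """Erasure probabilities of the 2**levels synthetic channels built from BEC(eps)."""
    probs = [eps]
    for _ in range(levels):
        nxt = []
        for z in probs:
            nxt.append(2 * z - z * z)  # "minus" transform: degraded synthetic channel
            nxt.append(z * z)          # "plus" transform: upgraded synthetic channel
        probs = nxt
    return probs

eps = 0.5                       # BEC(0.5) has capacity 1 - eps = 0.5
probs = polarize(eps, 10)       # n = 1024 synthetic channels
good = sum(p < 1e-3 for p in probs)
print(f"{good}/{len(probs)} synthetic channels are nearly noiseless "
      f"-> achievable rate about {good / len(probs):.3f}")
```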
The conventional theory of linear network coding (LNC) is only over acyclic networks. Convolutional network coding (CNC) applies to all networks. It is also a form of LNC, but the linearity is w.r.t. the ring of rational power series rather than the field of data symbols. CNC has been generalized to LNC w.r.t. any discrete valuation ring (DVR) for flexibility in applications. For a causal DVR-based code, all possible source-generated messages form a free module, while the incoming coding vectors to a receiver span the \emph{received submodule}. An existing \emph{time-invariant decoding} algorithm operates at a delay equal to the largest valuation among all invariant factors of the received submodule. This intrinsic algebraic attribute is herein proved to be the optimal decoding delay. Meanwhile, \emph{time-variant decoding} is formulated. The time-invariant decoding delay gains a new interpretation as a special case of its time-variant counterpart. The optimal delay turns out to be the same for time-variant decoding, but the decoding algorithm is more flexible in terms of decodability check and decoding matrix design. All results apply, in particular, to CNC.
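
As a toy numerical illustration of the delay statement (the matrix is an assumed example over the power-series DVR $F[[z]]$, not taken from the work above): if the incoming coding vectors of a receiver form the matrix $M = \begin{pmatrix} 1 & z \\ z & z^2 + z^3 \end{pmatrix}$, then subtracting $z$ times the first row from the second and $z$ times the first column from the second yields the Smith normal form $\mathrm{diag}(1, z^3)$. The invariant factors $1$ and $z^3$ have $z$-adic valuations $0$ and $3$, so the optimal decoding delay in this example is $3$.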
We propose a novel binary message passing decoding algorithm for product-like codes based on bounded distance decoding (BDD) of the component codes. The algorithm, dubbed iterative BDD with scaled reliability (iBDD-SR), exploits the channel reliabilities and is therefore soft in nature. However, the messages exchanged by the component decoders are binary (hard) messages, which significantly reduces the decoder data flow. The exchanged binary messages are obtained by combining the channel reliability with the BDD decoder output reliabilities, properly conveyed by a scaling factor applied to the BDD decisions. We perform a density evolution analysis for generalized low-density parity-check (GLDPC) code ensembles and spatially coupled GLDPC code ensembles, from which the scaling factors of the iBDD-SR for product and staircase codes, respectively, can be obtained. For the additive white Gaussian noise channel, we show performance gains up to $0.29$ dB and $0.31$ dB for product and staircase codes compared to conventional iterative BDD (iBDD) with the same decoder data flow. Furthermore, we show that iBDD-SR approaches the performance of ideal iBDD that prevents miscorrections.
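
A minimal sketch of the message combination described above, with assumed notation: the channel LLR of a bit, the BDD decision mapped to $\pm 1$ (or $0$ on decoding failure), and a scaling factor that density evolution would supply for the given ensemble.

```python
# A minimal sketch of the iBDD-SR message combination (notation assumed):
# l_ch is the channel LLR (positive favours bit 0), d is the BDD decision
# mapped to +1/-1 (0 if BDD declared a failure), and w is the scaling factor.
import numpy as np

def ibdd_sr_message(l_ch, d, w):
    """Binary (hard) outgoing message: sign of channel LLR plus scaled BDD decision."""
    return np.where(w * d + l_ch >= 0, 0, 1)

# A weak channel observation is overruled by a confident scaled decision,
# while a strong observation survives a disagreeing decision.
print(ibdd_sr_message(np.array([-0.3, 3.0]), np.array([+1, -1]), w=2.0))  # -> [0 0]
```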
We propose a binary message passing decoding algorithm for product codes based on generalized minimum distance decoding (GMDD) of the component codes, where the last stage of the GMDD makes a decision based on the Hamming distance metric. The proposed algorithm closes half of the gap between conventional iterative bounded distance decoding (iBDD) and turbo product decoding based on the Chase--Pyndiah algorithm, at the expense of some increase in complexity. Furthermore, the proposed algorithm entails only a limited increase in data flow compared to iBDD.
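
To unpack the GMDD ingredient, here is a toy sketch under strong assumptions: the component code is replaced by a hypothetical $[5,1,5]$ repetition code so that errors-and-erasures decoding reduces to a majority vote, and the surviving candidate is chosen by correlation with the channel LLRs (a soft surrogate; the algorithm above uses a Hamming-distance rule in its last stage).

```python
# A toy sketch of generalized minimum distance (GMD) decoding with a
# hypothetical repetition component code (illustrative only).
import numpy as np

def ee_decode_repetition(hard, erased):
    """Errors-and-erasures decoding of the repetition code: majority over kept bits."""
    kept = hard[~erased]
    if kept.size == 0:
        return None
    bit = int(2 * kept.sum() >= kept.size)
    return np.full(hard.size, bit)

def gmd_decode(llr, d=5):
    """Erase the 2j least reliable positions for j = 0, 1, ..., keep the best candidate."""
    hard = (llr < 0).astype(int)            # hard decisions from the channel
    order = np.argsort(np.abs(llr))         # least reliable positions first
    best, best_metric = None, -np.inf
    for j in range((d + 1) // 2):
        erased = np.zeros(llr.size, dtype=bool)
        erased[order[:2 * j]] = True
        cand = ee_decode_repetition(hard, erased)
        if cand is None:
            continue
        metric = np.sum((1 - 2 * cand) * llr)   # soft correlation with the channel
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Three weakly unreliable bits mislead the plain hard-decision majority,
# but the trial that erases the two least reliable positions recovers all-zeros.
print(gmd_decode(np.array([-0.1, -0.2, 2.0, -0.3, 1.8])))  # -> [0 0 0 0 0]
```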