
Linear-Programming Decoding of Nonbinary Linear Codes

Published by: Vitaly Skachek
Publication date: 2009
Research field: Information engineering
Paper language: English





A framework for linear-programming (LP) decoding of nonbinary linear codes over rings is developed. This framework facilitates linear-programming-based reception for coded modulation systems that use a direct modulation mapping of coded symbols. It is proved that the resulting LP decoder has the maximum-likelihood certificate property. It is also shown that the decoder output is the lowest-cost pseudocodeword. Equivalence between pseudocodewords of the linear program and pseudocodewords of graph covers is proved. It is also proved that if the modulator-channel combination satisfies a particular symmetry condition, the codeword error rate is independent of the transmitted codeword. Two alternative polytopes for use with linear-programming decoding are studied, and it is shown that for many classes of codes these polytopes yield a complexity advantage for decoding. These polytope representations lead to polynomial-time decoders for a wide variety of classical nonbinary linear codes. LP decoding performance is illustrated for the [11,6] ternary Golay code with ternary PSK modulation over AWGN, where the performance of the LP decoder is shown to be comparable to that of codeword-error-rate-optimal hard-decision decoding. LP decoding is also simulated for medium-length ternary and quaternary LDPC codes with corresponding PSK modulations over AWGN.
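To make the relaxation concrete, the sketch below sets up a Flanagan-style LP for a toy code over GF(3) and solves it with scipy: each symbol is relaxed to an indicator vector, and each check carries an auxiliary distribution over its local codewords. This is a minimal illustration under assumed names (H, gamma, f, w) and an assumed toy code, not the paper's exact formulation or polytope choice.

# A minimal sketch of nonbinary LP decoding in the spirit of the framework
# above (not the paper's exact formulation). Each GF(3) symbol x_i is
# relaxed to an indicator vector f_i in the simplex, and each check j gets
# an auxiliary distribution w_j over its local codewords. H, gamma, f, w
# and the toy code are illustrative assumptions.
import itertools
import numpy as np
from scipy.optimize import linprog

q = 3                              # alphabet size: GF(3)
H = np.array([[1, 1, 1],           # toy parity-check matrix over GF(3)
              [0, 1, 2]])
m, n = H.shape

# For every check, enumerate its local codewords: assignments to the
# symbols in the check's support whose weighted sum vanishes mod q.
checks = []
for j in range(m):
    supp = np.flatnonzero(H[j])
    local = [c for c in itertools.product(range(q), repeat=len(supp))
             if sum(int(H[j, i]) * ci for i, ci in zip(supp, c)) % q == 0]
    checks.append((supp, local))

nf = n * q                                      # indicator variables f[i, a]
nw = sum(len(local) for _, local in checks)     # auxiliary variables w[j, c]

def lp_decode(gamma):
    """gamma[i, a]: cost of deciding symbol i is a (e.g. -log P(y_i | a))."""
    cost = np.concatenate([gamma.ravel(), np.zeros(nw)])
    A_eq, b_eq = [], []
    for i in range(n):                          # each f_i sums to one
        row = np.zeros(nf + nw)
        row[i * q:(i + 1) * q] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    off = 0                                     # marginalization constraints:
    for supp, local in checks:                  # f[i, a] equals the w_j mass
        for k, i in enumerate(supp):            # on local codewords with c_k = a
            for a in range(q):
                row = np.zeros(nf + nw)
                row[i * q + a] = -1.0
                for t, c in enumerate(local):
                    if c[k] == a:
                        row[nf + off + t] = 1.0
                A_eq.append(row); b_eq.append(0.0)
        off += len(local)
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * (nf + nw), method="highs")
    return res.x[:nf].reshape(n, q)

gamma = np.array([[0.9, 0.1, 0.8],   # toy costs for a length-3 received word
                  [0.7, 0.6, 0.2],
                  [0.1, 0.9, 0.8]])
print(np.round(lp_decode(gamma), 3))

If every row of the returned f is integral (a 0/1 indicator), the decoded word is guaranteed to be the maximum-likelihood codeword, which is the ML certificate property mentioned above; a fractional output is a pseudocodeword.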




Read also

We consider transmission over a binary-input additive white Gaussian noise channel using low-density parity-check codes. One of the most popular techniques for decoding low-density parity-check codes is the linear programming decoder. In general, the linear programming decoder is suboptimal; that is, its word error rate is higher than that of the optimal maximum a posteriori decoder. In this paper we present a systematic approach to enhancing the linear programming decoder. More precisely, in the cases where the linear program outputs a fractional solution, we give a simple algorithm to identify the frustrated cycles that cause the output to be fractional. By adaptively adding constraints derived from these cycles to the basic linear program, we show improved word error rate performance.
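The skeleton below illustrates the adaptive mechanism: solve a basic LP, and whenever the solution violates some inequality, add it and re-solve. For a self-contained example it generates standard Feldman parity (forbidden-set) cuts on the (7,4) Hamming code rather than the paper's frustrated-cycle constraints; the code and all names are illustrative assumptions.

# A sketch of adaptive LP decoding: start from box constraints only, then
# repeatedly add the most violated parity inequality per check. This uses
# Feldman's forbidden-set cuts, not the paper's frustrated-cycle cuts.
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 1, 0, 1, 0, 0],   # parity-check matrix of the (7,4)
              [1, 1, 0, 1, 0, 1, 0],   # binary Hamming code
              [1, 0, 1, 1, 0, 0, 1]])
n = H.shape[1]

def most_violated_cut(x, row):
    """For one check, find the odd subset S maximizing
    sum_{i in S} x_i - sum_{i in N\\S} x_i; return a cut if violated."""
    nbrs = np.flatnonzero(row)
    S = {i for i in nbrs if x[i] > 0.5}
    if len(S) % 2 == 0:                 # fix parity by flipping the
        flip = min(nbrs, key=lambda i: abs(x[i] - 0.5))  # most ambiguous bit
        S.symmetric_difference_update({flip})
    lhs = sum(x[i] for i in S) - sum(x[i] for i in set(nbrs) - S)
    if lhs > len(S) - 1 + 1e-9:         # violated parity inequality
        a = np.zeros(n)
        for i in nbrs:
            a[i] = 1.0 if i in S else -1.0
        return a, len(S) - 1
    return None

def adaptive_lp_decode(gamma, max_rounds=50):
    A_ub, b_ub = [], []
    for _ in range(max_rounds):
        res = linprog(gamma,
                      A_ub=np.array(A_ub) if A_ub else None,
                      b_ub=np.array(b_ub) if b_ub else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        cuts = [c for c in (most_violated_cut(x, row) for row in H) if c]
        if not cuts:                    # no violated inequality: done
            return x
        for a, b in cuts:
            A_ub.append(a); b_ub.append(b)
    return x

gamma = np.array([-1.2, 0.8, -0.3, 0.9, -1.1, 0.4, 0.2])  # toy LLR costs
print(np.round(adaptive_lp_decode(gamma), 3))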
The conventional theory of linear network coding (LNC) applies only to acyclic networks. Convolutional network coding (CNC) applies to all networks. It is also a form of LNC, but the linearity is with respect to the ring of rational power series rather than the field of data symbols. CNC has been generalized to LNC with respect to an arbitrary discrete valuation ring (DVR) for flexibility in applications. For a causal DVR-based code, all possible source-generated messages form a free module, while the incoming coding vectors at a receiver span the received submodule. An existing time-invariant decoding algorithm operates at a delay equal to the largest valuation among the invariant factors of the received submodule. This intrinsic algebraic attribute is herein proved to be the optimal decoding delay. Meanwhile, time-variant decoding is formulated. The meaning of the time-invariant decoding delay gains a new interpretation as a special case of its time-variant counterpart. The optimal delay turns out to be the same for time-variant decoding, but the decoding algorithm is more flexible in terms of decodability checking and decoding matrix design. All results apply, in particular, to CNC.
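As a loose numerical analogy to the delay statement (an assumption-laden stand-in, using the integers with the 2-adic valuation in place of the rational-power-series DVR of CNC), the snippet below computes the invariant factors of a toy coding-vector matrix via the Smith normal form and reads off the largest valuation:

# Illustration only: invariant factors of a toy "received submodule" matrix
# and their p-adic valuations; the maximum plays the role of the optimal
# decoding delay described above. The matrix G is an assumption.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def p_valuation(a, p=2):
    """Largest e with p**e dividing a (the DVR valuation)."""
    e = 0
    while a % p == 0:
        a //= p
        e += 1
    return e

G = Matrix([[3, 2, 0],      # toy matrix of incoming coding vectors
            [2, 10, 8],
            [0, 8, 8]])
snf = smith_normal_form(G, domain=ZZ)
factors = [snf[i, i] for i in range(min(snf.shape)) if snf[i, i] != 0]
delays = [p_valuation(abs(int(d))) for d in factors]
print("invariant factors:", factors)                     # 1, 2, 8
print("optimal decoding delay (max valuation):", max(delays))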
High-quality data is essential in deep learning for training a robust model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, thus allowing potential data exploitation. Utilizing this fact, and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods incorporate machine-learning concepts with error decoding measures. For the BCH(63,36), (63,45) and (127,64) codes, with cycle-reduced parity-check matrices, improvements of up to 0.4 dB in the waterfall region and of up to 1.5 dB in the error-floor region in FER over the original WBP are demonstrated by smartly sampling the data, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a model by incorporating domain knowledge from the error-correction field into a deep learning model. These guidelines can be adapted to any other deep-learning-based communication block.
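A hedged sketch of the sampling idea follows: decoding examples are free to generate and label, so one can oversample and retain only the hard ones. The uncertainty score here (the least-reliable channel LLR), the block length, SNR, and keep fraction are illustrative choices, not the paper's actual measures.

# Free-data sampling for decoder training: oversample noisy codewords, then
# keep the examples a simple uncertainty proxy flags as informative. All
# constants and the scoring rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, snr_db, keep_frac = 15, 2.0, 0.25
sigma = np.sqrt(0.5 * 10 ** (-snr_db / 10))

def sample_batch(batch):
    """Generate labeled decoding examples for free: zero codeword + AWGN."""
    x = np.ones((batch, n))                 # BPSK image of the zero codeword
    y = x + sigma * rng.normal(size=(batch, n))
    llr = 2.0 * y / sigma ** 2              # channel LLRs (the decoder input)
    return llr

def select_informative(llr, frac):
    """Keep the examples whose least-reliable bit is most ambiguous."""
    uncertainty = -np.min(np.abs(llr), axis=1)   # higher = harder example
    k = max(1, int(frac * len(llr)))
    idx = np.argsort(uncertainty)[-k:]
    return llr[idx]

pool = sample_batch(4096)                   # oversample, then filter
train = select_informative(pool, keep_frac)
print(train.shape)                          # the retained training set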
Lucky Galvez, Jon-Lark Kim (2019)
Practically good error-correcting codes should have good parameters and efficient decoding algorithms. Some algebraically defined good codes, such as cyclic codes, Reed-Solomon codes, and Reed-Muller codes, have nice decoding algorithms. However, many optimal linear codes have no efficient decoding algorithm other than general syndrome decoding, which requires a lot of memory. It is therefore natural to ask which optimal linear codes admit efficient decoding. We show that two binary optimal $[36,19,8]$ linear codes and two binary optimal $[40,22,8]$ codes have an efficient decoding algorithm; no efficient decoding algorithm was previously known for these codes. We project them onto the much shorter linear $[9,5,4]$ and $[10,6,4]$ codes over $GF(4)$, respectively. This decoding algorithm, called projection decoding, can correct errors of weight up to 3. These $[36,19,8]$ and $[40,22,8]$ codes have more codewords than any optimal self-dual $[36,18,8]$ and $[40,20,8]$ codes, respectively, of the same length and minimum weight, implying that these codes are more practical.
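The memory argument is easy to check with back-of-the-envelope arithmetic: a generic syndrome decoder stores one coset leader per syndrome, i.e. q^(n-k) table entries, which the projection shrinks dramatically. The snippet below just tabulates those counts; it is not the paper's projection decoder.

# Syndrome-table sizes for the codes named above: q**(n-k) coset leaders.
codes = {"binary [36,19,8]": (2, 36, 19),
         "binary [40,22,8]": (2, 40, 22),
         "GF(4) [9,5,4]":    (4, 9, 5),
         "GF(4) [10,6,4]":   (4, 10, 6)}
for name, (q, n, k) in codes.items():
    print(f"{name}: {q ** (n - k):>7} syndrome-table entries")
# 2**17 = 131072 and 2**18 = 262144 entries versus 256 for either GF(4) code.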
The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results; the advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
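The sketch below illustrates the parameter-tying idea: a weighted min-sum decoder on the (7,4) Hamming code whose single edge-weight vector is reused at every iteration, i.e. a recurrent rather than unrolled architecture. The weights are fixed placeholders (training is omitted), and the code choice and names are assumptions.

# Tied-weight (recurrent) min-sum decoding: one weight per Tanner-graph
# edge, shared across all iterations instead of one set per unrolled layer.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],    # (7,4) Hamming parity checks
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
m, n = H.shape
edges = [(j, i) for j in range(m) for i in range(n) if H[j, i]]
w = np.full(len(edges), 0.8)   # tied edge weights, reused every iteration

def min_sum_decode(llr, iters=10):
    msg_vc = {e: llr[e[1]] for e in edges}        # variable-to-check messages
    for _ in range(iters):
        msg_cv = {}
        for (j, i) in edges:                      # check-to-variable update
            others = [msg_vc[(j, k)] for (jj, k) in edges if jj == j and k != i]
            sign = np.prod(np.sign(others))
            msg_cv[(j, i)] = sign * min(abs(v) for v in others)
        for t, (j, i) in enumerate(edges):        # weighted variable update:
            incoming = [w[tt] * msg_cv[(jj, i)]   # the SAME w every iteration
                        for tt, (jj, ii) in enumerate(edges) if ii == i and jj != j]
            msg_vc[(j, i)] = llr[i] + sum(incoming)
    marg = [llr[i] + sum(w[t] * msg_cv[(j, i)]
            for t, (j, ii) in enumerate(edges) if ii == i) for i in range(n)]
    return (np.array(marg) < 0).astype(int)       # hard decision per bit

llr = np.array([2.1, -0.4, 1.3, 0.9, 1.7, 0.2, 1.1])   # toy channel LLRs
print(min_sum_decode(llr))

Learning the entries of w (e.g. by gradient descent on simulated noisy codewords) and keeping them shared across iterations is what reduces the parameter count relative to a fully unrolled network.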