
Improved Linear Programming Decoding using Frustrated Cycles

Posted by: Shrinivas Kudekar
Publication date: 2011
Research field: Information Engineering
Paper language: English





We consider transmission over a binary-input additive white Gaussian noise channel using low-density parity-check codes. One of the most popular techniques for decoding low-density parity-check codes is the linear programming decoder. In general, the linear programming decoder is suboptimal; that is, its word error rate is higher than that of the optimal maximum a posteriori decoder. In this paper we present a systematic approach to enhance the linear programming decoder. More precisely, in the cases where the linear program outputs a fractional solution, we give a simple algorithm to identify the frustrated cycles that cause the output to be fractional. By adaptively adding these cycles to the basic linear program, we show improved word error rate performance.
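As a rough illustration of the baseline decoder this approach builds on, the sketch below performs Feldman-style LP decoding of a toy binary code in Python with scipy. The parity-check matrix, the LLR values, and the lp_decode helper are hypothetical, and the frustrated-cycle cuts proposed in the paper are not implemented; the sketch only shows the basic linear program whose fractional outputs those cuts would be added to tighten.

```python
# Minimal sketch of baseline LP decoding (Feldman's relaxation) for a toy
# binary code. H and the LLRs are illustrative stand-ins; the paper's
# frustrated-cycle cuts would be added as extra inequalities whenever the
# solution below comes out fractional.
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix (not from the paper)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def lp_decode(llr):
    """Minimize sum_i llr_i * x_i over Feldman's fundamental polytope."""
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for size in range(1, len(nbrs) + 1, 2):          # odd-sized subsets S
            for S in itertools.combinations(nbrs, size):
                a = np.zeros(n)
                a[list(S)] = 1.0                          # +x_i for i in S
                a[np.setdiff1d(nbrs, list(S))] = -1.0     # -x_i for i in N(j)\S
                A_ub.append(a)
                b_ub.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# Positive LLRs favor bit value 0 (noisy all-zero codeword over BI-AWGN).
llr = np.array([1.2, -0.3, 0.8, 0.5, 1.0, 0.9])
x = lp_decode(llr)
print(np.round(x, 3))   # integral output carries the ML certificate;
                        # a fractional output is where cycle cuts would be added
```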




Read also

A framework for linear-programming (LP) decoding of nonbinary linear codes over rings is developed. This framework facilitates linear-programming based reception for coded modulation systems which use direct modulation mapping of coded symbols. It is proved that the resulting LP decoder has the maximum-likelihood certificate property. It is also shown that the decoder output is the lowest cost pseudocodeword. Equivalence between pseudocodewords of the linear program and pseudocodewords of graph covers is proved. It is also proved that if the modulator-channel combination satisfies a particular symmetry condition, the codeword error rate performance is independent of the transmitted codeword. Two alternative polytopes for use with linear-programming decoding are studied, and it is shown that for many classes of codes these polytopes yield a complexity advantage for decoding. These polytope representations lead to polynomial-time decoders for a wide variety of classical nonbinary linear codes. LP decoding performance is illustrated for the [11,6] ternary Golay code with ternary PSK modulation over AWGN, and in this case it is shown that the performance of the LP decoder is comparable to codeword-error-rate-optimum hard-decision based decoding. LP decoding is also simulated for medium-length ternary and quaternary LDPC codes with corresponding PSK modulations over AWGN.
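For intuition only, here is a small Python sketch in the spirit of nonbinary LP decoding over GF(3): per-symbol indicator variables together with auxiliary weights over the local codewords of each check, solved with scipy. The toy parity-check matrix, the cost matrix gamma, and the variable layout are assumptions for illustration and do not reproduce the specific polytopes studied in the cited paper.

```python
# Rough sketch of LP decoding for a tiny ternary (GF(3)) code. Everything here
# (H, gamma, the variable layout) is an illustrative assumption.
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 1, 0],    # toy parity-check matrix over GF(3)
              [0, 1, 2, 1]])
q, n, m = 3, H.shape[1], H.shape[0]

# gamma[i][a]: cost of deciding symbol i equals a (e.g. negative log-likelihoods).
gamma = np.array([[0.1, 1.0, 1.2],
                  [0.2, 0.9, 1.1],
                  [1.5, 0.3, 1.0],
                  [0.4, 0.8, 1.3]])

# For each check, enumerate the local configurations satisfying it mod 3.
local = []
for j in range(m):
    nbrs = np.flatnonzero(H[j])
    cfgs = [c for c in itertools.product(range(q), repeat=len(nbrs))
            if np.dot(H[j, nbrs], c) % q == 0]
    local.append((nbrs, cfgs))

num_f = n * q
num_w = sum(len(cfgs) for _, cfgs in local)
w_off = [num_f]
for _, cfgs in local[:-1]:
    w_off.append(w_off[-1] + len(cfgs))

A_eq, b_eq = [], []
for i in range(n):                       # each symbol picks a distribution over GF(3)
    row = np.zeros(num_f + num_w)
    row[i * q:(i + 1) * q] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
for j, (nbrs, cfgs) in enumerate(local):
    row = np.zeros(num_f + num_w)        # local weights of this check sum to one
    row[w_off[j]:w_off[j] + len(cfgs)] = 1.0
    A_eq.append(row)
    b_eq.append(1.0)
    for pos, i in enumerate(nbrs):       # marginals of the local weights match f
        for a in range(q):
            row = np.zeros(num_f + num_w)
            row[i * q + a] = 1.0
            for t, c in enumerate(cfgs):
                if c[pos] == a:
                    row[w_off[j] + t] = -1.0
            A_eq.append(row)
            b_eq.append(0.0)

c_obj = np.concatenate([gamma.ravel(), np.zeros(num_w)])
res = linprog(c=c_obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0.0, 1.0)] * (num_f + num_w), method="highs")
print(np.round(res.x[:num_f].reshape(n, q), 3))  # per-symbol indicator values
```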
A product code with single parity-check component codes can be described via the tools of a multi-kernel polar code, where the rows of the generator matrix are chosen according to the constraints imposed by the product code construction. Following this observation, successive cancellation decoding of such codes is introduced. In particular, the error probability of single parity-check product codes over binary memoryless symmetric channels under successive cancellation decoding is characterized. A bridge with the analysis of product codes introduced by Elias is also established for the binary erasure channel. Successive cancellation list decoding of single parity-check product codes is then described. For the provided example, simulations over the binary-input additive white Gaussian channel show that successive cancellation list decoding outperforms belief propagation decoding applied to the code graph. Finally, the performance of the concatenation of a product code with a high-rate outer code is investigated via distance spectrum analysis. Examples of concatenations performing within $0.7$ dB from the random coding union bound are provided.
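The structural observation in the first sentence can be checked numerically; the following sketch (with an assumed toy pair of SPC component codes) builds the product-code generator as a Kronecker product of the component SPC generators and verifies the row and column parities of a codeword.

```python
# Quick illustration (assumed example, not from the paper): the generator matrix
# of a product code with single parity-check component codes is the Kronecker
# product of the component SPC generators, i.e. a row selection from a
# Kronecker (multi-kernel) construction.
import numpy as np

def spc_generator(n):
    """Generator of the (n, n-1) single parity-check code: identity | parity column."""
    return np.hstack([np.eye(n - 1, dtype=int), np.ones((n - 1, 1), dtype=int)])

G1, G2 = spc_generator(3), spc_generator(4)   # (3,2) and (4,3) SPC component codes
G = np.kron(G1, G2) % 2                       # generator of the product code

# Encode a random message and check that every row and column of the 3x4
# codeword array has even parity, as the product construction requires.
rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=G.shape[0])
cw = ((u @ G) % 2).reshape(3, 4)
print(cw)
print("row parities:", cw.sum(axis=1) % 2, "column parities:", cw.sum(axis=0) % 2)
```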
In this paper we investigate the structure of the fundamental polytope used in the Linear Programming decoding introduced by Feldman, Karger and Wainwright. We begin by showing that for expander codes, every fractional pseudocodeword always has at least a constant fraction of non-integral bits. We then prove that for expander codes, the active set of any fractional pseudocodeword is smaller by a constant fraction than the active set of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same complexity order as LP decoding that provably performs better, for any blocklength. It proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudocodewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudocodewords, all adjacent to the ML codeword on the LP decoding polytope, and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudocodewords above the ML codeword.
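A heavily simplified sketch of the facet-guessing idea, under assumed simplifications: when the plain LP returns a fractional vertex, re-solve the LP restricted to the facets x_i = 0 and x_i = 1 for the most fractional coordinate and keep the cheapest integral outcome. The toy code, the helper names, and the single-coordinate guess are illustrative and not the cited algorithm.

```python
# Toy illustration of guessing facets and re-solving the LP on them.
# H, the LLRs, and the single-coordinate "guess" are assumptions.
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]])   # toy parity-check matrix, example only

def feldman_constraints(H):
    n = H.shape[1]
    A, b = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for size in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, size):
                a = np.zeros(n)
                a[list(S)] = 1.0
                a[np.setdiff1d(nbrs, list(S))] = -1.0
                A.append(a)
                b.append(len(S) - 1)
    return np.array(A), np.array(b)

def lp_solve(llr, fixed=None):
    A, b = feldman_constraints(H)
    bounds = [(0.0, 1.0)] * len(llr)
    for i, v in (fixed or {}).items():
        bounds[i] = (v, v)                       # restrict to the facet x_i = v
    return linprog(llr, A_ub=A, b_ub=b, bounds=bounds, method="highs").x

def is_integral(x):
    return bool(np.all(np.minimum(x, 1 - x) < 1e-6))

def guess_and_decode(llr):
    x = lp_solve(llr)
    if is_integral(x):
        return np.round(x)
    i = int(np.argmin(np.abs(x - 0.5)))          # most fractional coordinate
    candidates = [lp_solve(llr, {i: v}) for v in (0.0, 1.0)]
    integral = [c for c in candidates if is_integral(c)]
    if not integral:
        return np.round(x)                       # give up in this toy sketch
    return np.round(min(integral, key=lambda c: float(llr @ c)))

llr = np.array([-0.9, 0.2, 0.1, -0.1, 0.7])
print(guess_and_decode(llr))
```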
Staircase codes play an important role as error-correcting codes in optical communications. In this paper, a low-complexity method for resolving stall patterns when decoding staircase codes is described. Stall patterns are the dominating contributor to the error floor in the original decoding method. Our improvement is based on locating stall patterns by intersecting non-zero syndromes and flipping the corresponding bits. The approach effectively lowers the error floor and allows for a new range of block sizes to be considered for optical communications at a certain rate or, alternatively, a significantly decreased error floor for the same block size. Further, an improved error floor analysis is introduced which provides a more accurate estimation of the contributions to the error floor.
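A toy Python illustration of the stall-resolution idea: mark the rows and columns whose component syndromes are non-zero, intersect them, and flip the bits at the intersections. The 8x8 block, the single-parity "component codes", and the constructed error pattern are assumptions for illustration, not the staircase construction itself.

```python
# Toy illustration (assumed setup; the real scheme uses algebraic component
# codes and the staircase structure) of resolving a stall-like pattern by
# intersecting non-zero row and column syndromes and flipping those bits.
import numpy as np

block = np.zeros((8, 8), dtype=int)          # all-zero codeword for simplicity

# A toy stall-like error pattern confined to rows {2,5,6} and columns {3,6,7}.
for r in [2, 5, 6]:
    for c in [3, 6, 7]:
        block[r, c] ^= 1

# A single even-parity check per row and per column plays the role of the
# component syndromes: a non-zero value marks a failed component.
failed_rows = np.flatnonzero(block.sum(axis=1) % 2)
failed_cols = np.flatnonzero(block.sum(axis=0) % 2)

# Flip every bit lying at the intersection of a failed row and a failed column.
for r in failed_rows:
    for c in failed_cols:
        block[r, c] ^= 1

print("remaining errors:", int(block.sum()))  # 0 for this constructed pattern
```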
The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
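As a rough sketch of the underlying idea, the numpy code below runs a weighted min-sum decoder whose per-edge weights are reused in every iteration, which is what tying parameters across iterations (the recurrent architecture mentioned above) amounts to. The weights here are fixed constants rather than trained, and the parity-check matrix and LLRs are examples.

```python
# Weighted min-sum with per-edge weights tied across iterations (a plain numpy
# stand-in for a trained recurrent neural decoder; weights are untrained here).
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
m, n = H.shape
edges = np.argwhere(H == 1)                 # (check, variable) index pairs
w = np.full(len(edges), 0.8)                # tied per-edge weights (example values)

def weighted_min_sum(llr, iters=10):
    msg_cv = np.zeros(len(edges))           # check-to-variable messages
    for _ in range(iters):                  # same w reused: "recurrent" decoder
        # Variable-to-check: channel LLR plus messages from the other checks.
        msg_vc = np.array([llr[v] + sum(msg_cv[k] for k, (c2, v2) in enumerate(edges)
                                        if v2 == v and c2 != c)
                           for (c, v) in edges])
        # Check-to-variable: weighted min-sum over the other edges of the check.
        new_cv = np.zeros(len(edges))
        for k, (c, v) in enumerate(edges):
            others = [msg_vc[k2] for k2, (c2, v2) in enumerate(edges)
                      if c2 == c and v2 != v]
            new_cv[k] = w[k] * np.prod(np.sign(others)) * np.min(np.abs(others))
        msg_cv = new_cv
    total = llr + np.array([sum(msg_cv[k] for k, (c, v) in enumerate(edges) if v == i)
                            for i in range(n)])
    return (total < 0).astype(int)          # hard decision: negative LLR -> bit 1

llr = np.array([2.1, -0.4, 1.3, 0.9, 1.7, 1.5])   # noisy all-zero codeword
print(weighted_min_sum(llr))
```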