
Design of Capacity-Approaching Low-Density Parity-Check Codes using Recurrent Neural Networks

Published by Eleni Nisioti
Publication date: 2020
Research language: English





In this paper, we model Density Evolution (DE) using Recurrent Neural Networks (RNNs) with the aim of designing capacity-approaching Irregular Low-Density Parity-Check (LDPC) codes for binary erasure channels. In particular, we present a method for determining the coefficients of the degree distributions, which characterize the structure of an LDPC code. We refer to our RNN architecture as Neural Density Evolution (NDE) and determine the weights of the RNN that correspond to optimal designs by minimizing a loss function that enforces the properties of asymptotically optimal design, as well as the desired structural characteristics of the code. This renders the LDPC design process highly configurable, as constraints can be added to meet application requirements by modifying the loss function. To train the RNN, we generate data corresponding to the expected channel noise. We analyze the complexity and optimality of NDE theoretically and compare it with traditional design methods that employ differential evolution. Simulations illustrate that NDE improves upon differential evolution in terms of both asymptotic performance and complexity. Although we focus on asymptotic settings, we evaluate designs found by NDE for finite codeword lengths and observe that performance remains satisfactory across a variety of channels.
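To make the recurrence concrete: for a BEC with erasure probability eps, density evolution tracks the per-iteration erasure probability x_{t+1} = eps * lambda(1 - rho(1 - x_t)), where lambda and rho are the edge-perspective variable- and check-degree distributions. The Python sketch below (our illustration, not the authors' code) unrolls this recursion like an RNN and learns the lambda coefficients by gradient descent; the loss, combining the residual erasure probability with a rate penalty, is an assumed stand-in for the paper's NDE loss, and the fixed check distribution rho(z) = z^6 is likewise an assumption.

import torch

eps = 0.45                                               # BEC erasure probability (assumed)
T = 50                                                   # number of unrolled DE iterations
degrees = torch.arange(2, 8, dtype=torch.float32)        # variable-node degrees 2..7
check_degrees = torch.arange(2, 8, dtype=torch.float32)  # check-node degrees 2..7
rho = torch.tensor([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])       # rho(z) = z^6: all checks have degree 7

logits = torch.zeros(6, requires_grad=True)              # trainable lambda_2..lambda_7
opt = torch.optim.Adam([logits], lr=0.05)

def unrolled_de(lam):
    # x_{t+1} = eps * lambda(1 - rho(1 - x_t)), unrolled for T iterations
    x = torch.tensor(eps)
    for _ in range(T):
        y = 1.0 - torch.sum(rho * (1.0 - x) ** (check_degrees - 1))
        x = eps * torch.sum(lam * y ** (degrees - 1))
    return x

for step in range(500):
    lam = torch.softmax(logits, dim=0)                   # valid distribution: nonnegative, sums to 1
    residual = unrolled_de(lam)                          # erasure probability after T iterations
    rate = 1.0 - torch.sum(rho / check_degrees) / torch.sum(lam / degrees)
    loss = residual + 10.0 * torch.relu(0.5 - rate)      # penalize designs with rate below 1/2
    opt.zero_grad()
    loss.backward()
    opt.step()

The softmax parameterization keeps lambda a valid probability distribution throughout training, so the only free choice is how the loss trades decoding threshold against rate.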




Read also

The concept and existence of sphere-bound-achieving and capacity-achieving lattices on AWGN channels was explained by Forney. LDPC lattices, introduced by Sadeghi, perform very well under iterative decoding algorithms. In this work, we focus on an ensemble of regular LDPC lattices. We produce and investigate an ensemble of LDPC lattices with known properties, and it is shown that these lattices are sphere-bound-achieving and capacity-achieving. As byproducts, we find the minimum distance, coding gain, kissing number, and an upper bound on the probability of error for this special ensemble of regular LDPC lattices.
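For reference, the coding gain the abstract alludes to is Forney's nominal coding gain, gamma_c(Lambda) = d_min(Lambda)^2 / V(Lambda)^(2/n) for an n-dimensional lattice with fundamental volume V(Lambda). A tiny helper (our illustration, not the paper's code):

def nominal_coding_gain(d_min: float, volume: float, n: int) -> float:
    # Forney's nominal coding gain of an n-dimensional lattice:
    # gamma_c = d_min^2 / V^(2/n); the integer lattice Z^n gives 1 (0 dB).
    return d_min ** 2 / volume ** (2.0 / n)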
Consider transmission over a binary additive white Gaussian noise channel using a fixed low-density parity-check code. We consider the posterior measure over the code bits and the corresponding correlation between two code bits, averaged over the noise realizations. We show that for low enough noise variance this average correlation decays exponentially fast with the graph distance between the code bits. One consequence of this result is that, for low enough noise variance, the GEXIT functions (further averaged over a standard code ensemble) of the belief-propagation and optimal decoders are the same.
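Schematically (our paraphrase, not the paper's exact statement), the decay claim has the form

| E_noise[ <x_i x_j> - <x_i> <x_j> ] |  <=  C * exp(-gamma * d(i, j)),   for sigma^2 below some threshold sigma_0^2,

where <.> denotes the posterior average, d(i, j) is the graph distance between code bits i and j on the Tanner graph, and C, gamma > 0 are constants.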
Min-Hsiu Hsieh, Todd A. Brun, 2009
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasi-cyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane (CSS) construction do not need to satisfy the dual-containing property as long as pre-shared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
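The girth condition is easy to test: a Tanner graph contains a 4-cycle exactly when two rows of the parity-check matrix H share ones in two or more columns, i.e., when H @ H.T has an off-diagonal entry of at least 2. A small check (our illustration, not the paper's code):

import numpy as np

def has_four_cycle(H: np.ndarray) -> bool:
    # overlap[i, j] counts the columns where rows i and j both have a 1
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)      # diagonal holds row weights; ignore it
    return bool((overlap >= 2).any())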
Process Mining consists of techniques where logs created by operative systems are transformed into process models. In process mining tools it is often desired to be able to classify ongoing process instances, e.g., to predict how long the process will still require to complete, or to classify process instances into different classes based only on the activities that have occurred in the process instance thus far. Recurrent neural networks and their subclasses, such as the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), have been demonstrated to be able to learn relevant temporal features for subsequent classification tasks. In this paper we apply recurrent neural networks to classifying process instances. The proposed model is trained in a supervised fashion using labeled process instances extracted from event log traces. This is, to our knowledge, the first time GRUs have been used in classifying business process instances. Our main experimental result shows that GRU outperforms LSTM remarkably in training time while giving almost identical accuracy to LSTM models. An additional contribution of our paper is improving the classification model training time by filtering infrequent activities, a technique commonly used, e.g., in Natural Language Processing (NLP).
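A minimal sketch of such a classifier (an assumed architecture and hyperparameters, not the paper's exact model): a GRU reads the activity sequence of a running instance and a linear head classifies it from the final hidden state.

import torch
import torch.nn as nn

class GRUTraceClassifier(nn.Module):
    def __init__(self, n_activities: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(n_activities, 32)   # activity id -> vector
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, traces):                        # traces: (batch, seq_len) activity ids
        _, h = self.gru(self.embed(traces))           # h: (1, batch, hidden)
        return self.head(h[-1])                       # class logits per trace

model = GRUTraceClassifier(n_activities=20)
logits = model(torch.randint(0, 20, (8, 15)))         # 8 running traces, 15 events each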
We consider the effect of log-likelihood ratio saturation on belief-propagation decoding of low-density parity-check codes. Saturation is commonly done in practice and is known to have a significant effect on error-floor performance. Our focus is on threshold analysis and the stability of density evolution. We analyze the decoder for standard low-density parity-check code ensembles and show that belief-propagation decoding generally degrades gracefully with saturation. The stability of density evolution is, on the other hand, rather strongly affected by saturation, and the asymptotic qualitative effect of saturation is similar to reducing the variable-node degree by one. We also show under what conditions the block threshold for saturated belief propagation coincides with the bit threshold.
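The saturation operation itself is just a clipping of messages; a sketch of a saturated variable-node update (our notation, not the paper's):

import numpy as np

def variable_node_update(channel_llr, incoming, L=15.0):
    # Extrinsic sum for each edge: channel LLR plus all incoming check
    # messages except the one on that edge, then saturated to [-L, L].
    total = channel_llr + incoming.sum(axis=-1, keepdims=True) - incoming
    return np.clip(total, -L, L)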
