Deep neural network (DNN)-based digital receivers can potentially operate in complex environments. However, the dynamic nature of communication channels implies that in some scenarios, DNN-based receivers should be periodically retrained in order to track temporal variations in the channel conditions. To this aim, frequent transmissions of lengthy pilot sequences are generally required, at the cost of substantial overhead. In this work we propose a DNN-aided symbol detector, Meta-ViterbiNet, that tracks channel variations with reduced overhead by integrating three complementary techniques: 1) We leverage domain knowledge to implement a model-based/data-driven equalizer, ViterbiNet, that operates with a relatively small number of trainable parameters; 2) We tailor a meta-learning procedure to the symbol detection problem, optimizing the hyperparameters of the learning algorithm to facilitate rapid online adaptation; and 3) We adopt a decision-directed approach based on coded communications to enable online training with short-length pilot blocks. Numerical results demonstrate that Meta-ViterbiNet operates accurately in rapidly-varying channels, outperforming the previous best approach, based on ViterbiNet or conventional recurrent neural networks without meta-learning, by a margin of up to 0.6dB in bit error rate in various challenging scenarios.
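
The rapid-adaptation idea above can be illustrated with a short sketch. This is a minimal, illustrative MAML-style loop, not the authors' Meta-ViterbiNet implementation: the SmallDetector module, the tensor shapes, and the single inner gradient step are assumptions made for the example.

```python
# Minimal MAML-style sketch: learn an initialization from which one gradient
# step on a short pilot block adapts the detector to the current channel.
# SmallDetector and the tensor shapes are illustrative placeholders.
import torch
import torch.nn as nn

class SmallDetector(nn.Module):
    """Compact network standing in for the learned part of ViterbiNet."""
    def __init__(self, n_states=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, n_states))

    def forward(self, y):                     # y: (N, 1) channel observations
        return self.net(y)                    # per-sample state scores

def adapt(model, y_pilot, s_pilot, lr_inner=0.1):
    """Inner loop: one gradient step on the pilots, returning adapted weights."""
    params = dict(model.named_parameters())
    loss = nn.functional.cross_entropy(
        torch.func.functional_call(model, params, (y_pilot,)), s_pilot)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {k: v - lr_inner * g for (k, v), g in zip(params.items(), grads)}

def meta_step(model, meta_opt, tasks):
    """Outer loop: tune the initialization so that adapting on the pilots of each
    past channel realization generalizes to its data block.
    Each task is (pilot observations, pilot symbols, data observations, data symbols)."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for y_pilot, s_pilot, y_data, s_data in tasks:
        adapted = adapt(model, y_pilot, s_pilot)
        logits = torch.func.functional_call(model, adapted, (y_data,))
        meta_loss = meta_loss + nn.functional.cross_entropy(logits, s_data)
    meta_loss.backward()
    meta_opt.step()
```

At deployment time only adapt() would be run on each short pilot block, which is what keeps the online retraining overhead small in this sketch.
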
Tail-biting convolutional codes extend the classical zero-termination convolutional codes: both encoding schemes force the start and end states to be equal, but under tail-biting any state is a valid termination. This paper proposes a machine-learning approach to improve the state-of-the-art decoding of tail-biting codes, focusing on the widely employed short-length regime as in the LTE standard, which also includes a CRC code. First, we parameterize the circular Viterbi algorithm (CVA), a baseline decoder that exploits the circular nature of the underlying trellis. An ensemble combines multiple such weighted decoders, each specializing in decoding words from a specific region of the channel-words distribution. A region corresponds to a subset of termination states, and the ensemble covers the entire state space. A non-learnable gating serves two goals: it filters easily decoded words and mitigates the overhead of executing multiple weighted decoders. The CRC criterion is further employed to choose only a subset of experts for decoding each word. Our method achieves an FER improvement of up to 0.75dB over the CVA in the waterfall region for multiple code lengths, while adding negligible computational complexity compared to the CVA at high SNRs.
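
The gated dispatch described above can be sketched at a high level. The decoder callables, CRC check, and region-selection rule below are hypothetical placeholders; only the control flow (baseline pass, CRC gate, then a CRC-chosen subset of experts) follows the description in the abstract.

```python
# High-level sketch of the gated ensemble: a baseline CVA pass first, a CRC
# gate that lets easy words exit early, and a CRC-guided choice of experts
# otherwise. All callables are placeholders, not the paper's implementation.
def decode_with_gated_ensemble(llrs, cva_decode, experts, crc_check, select_regions):
    """
    llrs           : channel LLRs of one received word
    cva_decode     : baseline circular Viterbi decoder (returns decoded bits)
    experts        : dict {region_id: weighted CVA decoder trained on that region}
    crc_check      : True if the decoded bits satisfy the CRC
    select_regions : CRC-based rule choosing a subset of termination-state regions
    """
    bits = cva_decode(llrs)
    if crc_check(bits):                        # gate: easily decoded words stop here
        return bits
    for region in select_regions(llrs, bits):  # run only the selected experts
        candidate = experts[region](llrs)
        if crc_check(candidate):
            return candidate
    return bits                                # fall back to the baseline output
```
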
Error correction codes are an integral part of communication applications, boosting the reliability of transmission. The optimal decoding of transmitted codewords is the maximum likelihood rule, which is NP-hard due to the curse of dimensionality. For practical realizations, sub-optimal decoding algorithms are employed; yet limited theoretical insight prevents one from exploiting the full potential of these algorithms. One such insight is the choice of permutation in permutation decoding. We present a data-driven framework for permutation selection, combining domain knowledge with machine learning concepts such as node embedding and self-attention. Significant and consistent improvements in the bit error rate are achieved for all simulated codes over the baseline decoders. To the best of the authors' knowledge, this work is the first to leverage the benefits of neural Transformer networks in physical layer communication systems.
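
As an illustration of how self-attention might score candidate permutations, the sketch below assumes per-bit features built from absolute LLRs and fixed node embeddings; the PermutationScorer architecture and the feature choice are assumptions for the example, not the paper's model.

```python
# Illustrative permutation scoring with a small self-attention encoder.
# feat_dim must equal 1 + the node-embedding dimension used below.
import torch
import torch.nn as nn

class PermutationScorer(nn.Module):
    def __init__(self, feat_dim=8, d_model=32, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, feats):                      # feats: (n_perms, n_bits, feat_dim)
        h = self.encoder(self.proj(feats))         # self-attention over code bits
        return self.head(h.mean(dim=1)).squeeze(-1)  # one score per permutation

def best_permutation(llrs, node_emb, perms, scorer):
    """Rank candidate permutations of a received word and return the best one.
    llrs: (n_bits,) channel LLRs; node_emb: (n_bits, emb_dim) fixed embeddings;
    perms: list of index tensors, e.g. drawn from the code's permutation group."""
    feats = torch.stack([
        torch.cat([llrs[p].abs().unsqueeze(-1), node_emb[p]], dim=-1)
        for p in perms
    ])
    scores = scorer(feats)
    return perms[scores.argmax().item()]
```
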
Ensemble models are widely used to solve complex tasks by decomposing them into multiple simpler tasks, each solved locally by a single member of the ensemble. Decoding of error-correction codes is a hard problem due to the curse of dimensionality, leading one to consider ensembles-of-decoders as a possible solution. Nonetheless, one must take complexity into account, especially in decoding. We suggest a low-complexity scheme in which a single member participates in the decoding of each word. First, the distribution of feasible words is partitioned into non-overlapping regions. Thereafter, specialized experts are formed by independently training each member on a single region. A classical hard-decision decoder (HDD) is employed to map every word to a single expert in an injective manner. FER gains of up to 0.4dB at the waterfall region, and of 1.25dB at the error-floor region, are achieved for the BCH(63,36) and BCH(63,45) codes with cycle-reduced parity-check matrices, compared to the previous best result of the paper Active Deep Decoding of Linear Codes.
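
A minimal sketch of the single-expert dispatch follows; the HDD, region assignment, and expert decoders are hypothetical placeholders used only to show how each received word ends up at exactly one ensemble member.

```python
# Sketch of HDD-based dispatch: each received word is routed to exactly one
# specialized expert. All callables are illustrative placeholders.
import numpy as np

def decode_one_word(llrs, hdd, region_of, experts):
    """
    llrs      : channel LLRs of one received word (numpy array)
    hdd       : classical hard-decision decoder, returns a candidate codeword
    region_of : maps the HDD output to a region id (one expert per region)
    experts   : dict {region_id: decoder trained only on words from that region}
    """
    hard_bits = (llrs < 0).astype(np.int8)   # hard decisions from the LLRs
    candidate = hdd(hard_bits)               # classical HDD pass
    region = region_of(candidate)            # every word maps to a single region
    return experts[region](llrs)             # only that expert decodes the word
```
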
High-quality data is essential for training a robust deep learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, allowing potential data exploitation. Utilizing this fact, and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For the BCH(63,36), (63,45) and (127,64) codes with cycle-reduced parity-check matrices, an FER improvement of up to 0.4dB at the waterfall region, and of up to 1.5dB at the error-floor region, over the original WBP is demonstrated by smartly sampling the data, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a deep learning model by incorporating domain knowledge from the error-correction field. These guidelines can be adapted to any other deep-learning-based communication block.
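
Since decoding data is free to generate and label, the sampling idea can be sketched as below; the difficulty criterion (baseline decoder failure or slow convergence) and the keep_ratio value are illustrative assumptions rather than the paper's exact measures.

```python
# Sketch of active-learning-style sampling: generate words freely and keep
# mostly the ones a baseline decoder finds hard. Criteria are illustrative.
import numpy as np

def sample_training_batch(simulate_word, bp_decode, batch_size, keep_ratio=0.25):
    """simulate_word: returns (llrs, true_bits); bp_decode: returns (bits, n_iters)."""
    pool = []
    while len(pool) < batch_size:
        llrs, true_bits = simulate_word()           # data is free to query and label
        decoded, n_iters = bp_decode(llrs)
        hard = (decoded != true_bits).any() or n_iters > 3   # difficulty measure
        if hard or np.random.rand() < keep_ratio:   # keep all hard words, few easy ones
            pool.append((llrs, true_bits))
    return pool
```
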