High-Throughput VLSI Architecture for GRAND Markov Order


Abstract

Guessing Random Additive Noise Decoding (GRAND) is a recently proposed Maximum Likelihood (ML) decoding technique. Irrespective of the structure of the error-correcting code, GRAND guesses the noise that corrupted the codeword, which allows it to decode any linear error-correcting block code. GRAND Markov Order (GRAND-MO) is a variant of GRAND suited to decoding error-correcting codes transmitted over communication channels with memory, which are vulnerable to burst noise. Interleavers and de-interleavers are commonly used in communication systems to mitigate the effects of channel memory, but interleaving and de-interleaving introduce undesirable latency that grows with the channel memory. To avoid this latency penalty, GRAND-MO can be applied directly to the hard-demodulated channel signals. This work reports the first GRAND-MO hardware architecture, which achieves an average throughput of up to $52$ Gbps and $64$ Gbps for code lengths of $128$ and $79$, respectively. Compared to GRANDAB, the hard-input variant of GRAND, the proposed architecture achieves a $3$ dB gain in decoding performance at a target FER of $10^{-5}$. Similarly, compared with a decoder tailored to a $(79,64)$ BCH code, the proposed architecture achieves $33\%$ higher worst-case throughput and a $2$ dB gain in decoding performance.
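To make the noise-guessing principle concrete, the following is a minimal Python sketch of hard-input GRAND: candidate error patterns are queried in order of increasing Hamming weight, and the first candidate whose syndrome is zero is returned. This is an illustrative sketch only; the function name, the `max_weight` abandonment parameter, and the weight-ordered query schedule are assumptions for the basic GRANDAB-style decoder, while GRAND-MO instead orders its queries according to a Markov (bursty) noise model.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_weight=3):
    """Hard-input GRAND sketch (hypothetical helper, not the paper's RTL).

    y          : received hard-decision word, shape (n,), dtype uint8 (bits)
    H          : binary parity-check matrix, shape (n-k, n)
    max_weight : abandon after testing all patterns up to this Hamming weight
    Returns (decoded_codeword, guessed_noise) or (None, None) on abandonment.
    """
    n = H.shape[1]
    # Query error patterns from most likely (weight 0) to less likely.
    # GRAND-MO would reorder these queries to favour burst patterns.
    for w in range(0, max_weight + 1):
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=np.uint8)
            e[list(positions)] = 1
            candidate = y ^ e
            # Zero syndrome means the candidate is a valid codeword.
            if not np.any((H @ candidate) % 2):
                return candidate, e
    return None, None
```

Under this model the decoder's complexity is dominated by the number of queries before a zero syndrome is found, which is why low-noise codewords decode in very few guesses and the hardware achieves its high average throughput.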
