Generalized Nearest Neighbor Decoding


Abstract

It is well known that for linear Gaussian channels, a nearest neighbor decoding rule, which seeks the minimum Euclidean distance between a codeword and the received channel output vector, is the maximum likelihood solution and hence capacity-achieving. Nearest neighbor decoding remains a convenient yet mismatched solution for general channels, and the key message of this paper is that the performance of nearest neighbor decoding can be improved by generalizing its decoding metric to incorporate channel-state-dependent output processing and codeword scaling. Using generalized mutual information, which is a lower bound to the mismatched capacity under an independent and identically distributed (i.i.d.) codebook ensemble, as the performance measure, this paper establishes the optimal generalized nearest neighbor decoding rule under Gaussian channel inputs. Several suboptimal but reduced-complexity generalized nearest neighbor decoding rules are also derived and compared with existing solutions. The results are illustrated through several case studies for channels with nonlinear effects, and for fading channels with receiver channel state information or with pilot-assisted training.
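To make the decoding rules concrete, the following is a minimal illustrative sketch, not the paper's exact construction: it contrasts the classical nearest neighbor metric with a generalized variant whose metric applies an output-processing function and a codeword-scaling function, both depending on a channel state known at the receiver. The names `f`, `a`, `nn_decode`, and `gnn_decode`, and the toy fading setup, are hypothetical placeholders standing in for the processing and scaling functions the paper optimizes.

```python
import numpy as np

def nn_decode(y, codebook):
    """Classical nearest neighbor rule: argmin_m ||y - x_m||^2."""
    dists = np.sum(np.abs(y - codebook) ** 2, axis=1)
    return int(np.argmin(dists))

def gnn_decode(y, s, codebook, f, a):
    """Generalized rule (illustrative): argmin_m sum_k |f(y_k, s_k) - a(s_k) x_{m,k}|^2."""
    z = f(y, s)        # channel-state-dependent output processing
    scale = a(s)       # channel-state-dependent codeword scaling
    dists = np.sum(np.abs(z - scale * codebook) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy usage: a fading channel y_k = s_k * x_k + noise with receiver CSI,
# where scaling each codeword symbol by the known fading state aligns the
# metric with the channel, unlike the plain Euclidean rule.
rng = np.random.default_rng(0)
n, M = 8, 4
codebook = rng.standard_normal((M, n))   # i.i.d. Gaussian codewords
m_true = 2
s = rng.rayleigh(size=n)                 # fading states, known at the receiver
y = s * codebook[m_true] + 0.1 * rng.standard_normal(n)

print(nn_decode(y, codebook))                                    # ignores fading
print(gnn_decode(y, s, codebook, lambda y, s: y, lambda s: s))   # scales codewords
```

Here the identity output processing and fading-proportional scaling are just one hedged choice; the paper's contribution is characterizing the optimal such pair under the generalized mutual information criterion.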
