Designing codes that combat noise in a communication medium remains a significant area of research in information theory and wireless communications. After over 60 years of research, asymptotically optimal channel codes have been developed for communication under canonical channel models. In many non-canonical channel settings, however, optimal codes are not known; codes designed for canonical models are adapted to these channels via heuristics and are thus not guaranteed to be optimal. In this work, we make significant progress on this problem by designing a fully end-to-end, jointly trained neural encoder and decoder, the Turbo Autoencoder (TurboAE), with the following contributions: ($a$) at moderate block lengths, TurboAE approaches state-of-the-art performance on canonical channels; ($b$) TurboAE outperforms state-of-the-art codes in reliability under non-canonical settings. TurboAE shows that channel code design can be automated via deep learning, with near-optimal performance.
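To make the end-to-end idea concrete, the following is a minimal channel-autoencoder sketch, not the TurboAE architecture itself (which uses convolutional codes with an interleaver): an MLP encoder maps message bits to power-constrained channel symbols, AWGN is added, and an MLP decoder is trained jointly with the encoder. All dimensions and hyperparameters here are illustrative assumptions.

```python
# Minimal channel-autoencoder sketch in PyTorch (illustrative, not TurboAE):
# encoder maps k message bits to n real channel symbols, AWGN is added,
# and the decoder recovers the bits; both are trained end-to-end.
import torch
import torch.nn as nn

k, n = 4, 8                              # bits per block, channel uses per block (assumed)
snr_db = 2.0
sigma = 10 ** (-snr_db / 20)             # noise std for unit signal power

encoder = nn.Sequential(nn.Linear(k, 64), nn.ELU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ELU(), nn.Linear(64, k))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    bits = torch.randint(0, 2, (256, k)).float()
    x = encoder(bits)
    x = x / x.pow(2).mean().sqrt()       # average power constraint
    y = x + sigma * torch.randn_like(x)  # AWGN channel
    loss = loss_fn(decoder(y), bits)
    opt.zero_grad(); loss.backward(); opt.step()
```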
Inspired by recent advances in deep learning (DL), this work presents a deep neural network-aided decoding algorithm for binary linear codes. Based on the concept of deep unfolding, we design a decoding network by unfolding the alternating direction method of multipliers (ADMM)-penalized decoder. In addition, we propose two improv
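The deep-unfolding pattern itself is easy to illustrate. The sketch below is hypothetical and deliberately simplified: it unrolls a first-order update of a penalized decoding objective rather than the paper's full ADMM iteration, and it omits the parity-check projection entirely. What it does show is the core idea: a fixed number of iterations become network layers, each with its own trainable step size and penalty weight.

```python
# Illustrative deep-unfolding skeleton (hypothetical; not the paper's exact
# ADMM-penalized update, and the parity-check projection is omitted):
# T iterations are unrolled into T layers with trainable per-layer parameters.
import torch
import torch.nn as nn

class UnfoldedDecoder(nn.Module):
    def __init__(self, n_iters: int = 10):
        super().__init__()
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))   # per-layer step size
        self.alpha = nn.Parameter(torch.full((n_iters,), 0.5))  # per-layer penalty weight
        self.n_iters = n_iters

    def forward(self, llr: torch.Tensor) -> torch.Tensor:
        # Soft bit estimates in [0, 1], initialized from the channel LLRs.
        x = torch.sigmoid(-llr)
        for t in range(self.n_iters):
            # Gradient of llr^T x plus a penalty alpha * x * (1 - x) whose
            # minimization pushes x toward integral (0/1) values.
            grad = llr + self.alpha[t] * (1.0 - 2.0 * x)
            x = torch.clamp(x - self.step[t] * grad, 0.0, 1.0)
        return x

dec = UnfoldedDecoder()
x_hat = dec(torch.randn(1, 7))   # e.g., LLRs for a length-7 code
```

The parameters would then be trained end-to-end from (LLR, codeword) pairs, exactly as one trains any feed-forward network.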
This paper applies error-exponent and dispersion-style analyses to derive finite-blocklength achievability bounds for low-density parity-check (LDPC) codes over the point-to-point channel (PPC) and the multiple access channel (MAC). The error-exponent analysis applies Gallager's error exponent to bound achievable symmetric and asymmetric rates in the MAC. The dispersion-style analysis begins with a generalization of the random coding union (RCU) bound from random code ensembles with i.i.d. codewords to ensembles in which codewords may be statistically dependent; this generalization is useful because the codewords of random linear codes, such as random LDPC codes, are dependent. Applying the RCU bound yields improved finite-blocklength error bounds and asymptotic achievability results for i.i.d. random codes, and new finite-blocklength error bounds and achievability results for LDPC codes. For discrete memoryless channels, these results show that LDPC codes achieve first- and second-order performance that is optimal for the PPC and matches the best prior results for the MAC.
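For reference, the two classical tools named above can be stated in their standard textbook forms (these are the i.i.d. versions, not the paper's dependent-codeword generalization):

```latex
% Gallager's random-coding bound for a DMC at rate R: P_e <= exp(-n E_r(R)), with
\[
  E_r(R) = \max_{0 \le \rho \le 1} \, \max_{Q} \big[ E_0(\rho, Q) - \rho R \big],
  \qquad
  E_0(\rho, Q) = -\ln \sum_{y} \Big( \sum_{x} Q(x)\, P(y \mid x)^{\frac{1}{1+\rho}} \Big)^{1+\rho}.
\]
% The RCU bound for an i.i.d. random code with M codewords:
\[
  \epsilon \le \mathbb{E}\!\left[ \min\Big\{ 1,\; (M-1)\,
  \Pr\big[\, i(\bar{X}; Y) \ge i(X; Y) \mid X, Y \,\big] \Big\} \right],
\]
% where i(x; y) is the information density and \bar{X} is an independent
% copy of X drawn from the same input distribution.
```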
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code, i.e., the one that minimizes the probability of bit error, can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. Perfect training, however, is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose using turbo codes as an implicit regularizer, i.e., concatenating a turbo code with an autoencoder. It is shown empirically that this design performs nearly as well as the hypothetical perfectly trained autoencoder, and we provide a theoretical explanation of why this is so. The proposed coding method is as bandwidth efficient as the outer (integrated) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to sparsity in neural networks. Our results show that at finite block lengths the proposed coding scheme outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method makes one-bit quantization operational even for 16-QAM.
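The one-bit constraint itself is worth seeing numerically. The following short sketch (not the proposed scheme, just the channel model it must cope with) sends uncoded BPSK over AWGN and lets the receiver observe only the sign of each sample; any code for this setting must recover reliability from these hard decisions alone.

```python
# Minimal sketch of the one-bit receiver constraint (not the proposed
# scheme): BPSK over AWGN, where the receiver observes only sign(y).
import numpy as np

rng = np.random.default_rng(0)
n, snr_db = 100_000, 4.0
sigma = 10 ** (-snr_db / 20)

bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1
y = x + sigma * rng.standard_normal(n)
y_q = np.sign(y)                      # one-bit quantization at the receiver

ber = np.mean((y_q < 0) != bits.astype(bool))
print(f"hard-decision BER at {snr_db} dB: {ber:.4f}")
```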
In a time-varying massive multiple-input multiple-output (MIMO) system, acquiring downlink channel state information at the base station (BS) is very challenging due to the prohibitively high overheads of downlink training and uplink feedback. In this paper, we consider a hybrid precoding structure at the BS and examine antenna-time domain channel extrapolation. We design a latent ordinary differential equation (ODE)-based network under the variational autoencoder (VAE) framework to learn the mapping from the partial uplink channels to the full downlink ones at the BS side. Specifically, a gated recurrent unit is adopted for the encoder and a fully connected neural network for the decoder. End-to-end learning is used to optimize the network parameters. Simulation results show that the designed network can efficiently infer the full downlink channels from the partial uplink ones, significantly reducing the channel training overhead.
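A deterministic, heavily simplified sketch of the latent-ODE extrapolation idea follows. All shapes are assumed placeholders, the variational (VAE) machinery is omitted, and the ODE solve is reduced to explicit Euler steps; the actual system operates on complex antenna-time channel matrices.

```python
# Simplified latent-ODE extrapolation sketch (hypothetical shapes, no VAE):
# a GRU encodes observed uplink snapshots into a latent state, an MLP defines
# latent dynamics integrated with Euler steps, and a fully connected decoder
# emits the extrapolated downlink snapshots.
import torch
import torch.nn as nn

feat, latent, t_obs, t_pred = 32, 16, 8, 4   # assumed dimensions

encoder = nn.GRU(input_size=feat, hidden_size=latent, batch_first=True)
dynamics = nn.Sequential(nn.Linear(latent, 64), nn.Tanh(), nn.Linear(64, latent))
decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, feat))

def extrapolate(uplink: torch.Tensor, dt: float = 0.1) -> torch.Tensor:
    """uplink: (batch, t_obs, feat) -> extrapolation: (batch, t_pred, feat)."""
    _, h = encoder(uplink)            # final hidden state summarizes history
    z = h.squeeze(0)                  # (batch, latent)
    outputs = []
    for _ in range(t_pred):
        z = z + dt * dynamics(z)      # one Euler step of the latent ODE
        outputs.append(decoder(z))
    return torch.stack(outputs, dim=1)

pred = extrapolate(torch.randn(4, t_obs, feat))
print(pred.shape)                     # torch.Size([4, 4, 32])
```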
In this paper, we investigate in detail the performance of turbo codes in quasi-static fading channels, both with and without antenna diversity. First, we develop a simple and accurate analytic technique for evaluating the performance of turbo codes in quasi-static fading channels. The technique relates the frame error rate of a turbo code to the convergence threshold of the iterative decoder, rather than to the turbo code's distance spectrum. We then compare the performance of various turbo codes in quasi-static fading channels and show that, in contrast to the AWGN channel, turbo codes with different interleaver sizes, or based on RSC constituent codes with different constraint lengths and generator polynomials, exhibit identical performance. Finally, we compare turbo codes and convolutional codes in quasi-static fading channels under the condition of identical decoding complexity. In particular, we show that turbo codes do not outperform convolutional codes in quasi-static fading channels without antenna diversity, and outperform them only when antenna diversity is present.
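The threshold-based technique reduces to one line: in quasi-static fading the whole frame sees a single channel realization, so FER is approximately the probability that the instantaneous SNR falls below the decoder's convergence threshold. For Rayleigh fading this outage probability is closed-form; the sketch below uses a hypothetical placeholder threshold.

```python
# Sketch of the threshold-based FER approximation for quasi-static Rayleigh
# fading: FER ~= Pr(instantaneous SNR < convergence threshold).
import numpy as np

gamma_th_db = 0.5                      # convergence threshold (hypothetical value)
gamma_bar_db = np.arange(0, 21, 2.0)   # average SNR sweep in dB

gamma_th = 10 ** (gamma_th_db / 10)
gamma_bar = 10 ** (gamma_bar_db / 10)

# Under Rayleigh fading the instantaneous SNR is exponential with mean
# gamma_bar, giving the closed-form outage probability below.
fer = 1.0 - np.exp(-gamma_th / gamma_bar)
for snr, f in zip(gamma_bar_db, fer):
    print(f"{snr:5.1f} dB  FER ~ {f:.3e}")
```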