
One-Bit OFDM Receivers via Deep Learning

Added by Eren Balevi
Publication date: 2018
Language: English





This paper develops novel deep learning-based architectures and design methodologies for an orthogonal frequency division multiplexing (OFDM) receiver under the constraint of one-bit complex quantization. One-bit quantization greatly reduces complexity and power consumption, but makes accurate channel estimation and data detection difficult. This is particularly true for multicarrier waveforms, which have a high peak-to-average power ratio in the time domain and fragile subcarrier orthogonality in the frequency domain. The severe distortion introduced by one-bit quantization typically results in an error floor even at moderately low signal-to-noise ratios (SNRs), e.g., 5 dB. For channel estimation (using pilots), we design a novel generative supervised deep neural network (DNN) that can be trained with a reasonable number of pilots. After channel estimation, a neural network-based receiver -- specifically, an autoencoder -- jointly learns a precoder and decoder for data symbol detection. Since the non-differentiable quantization blocks gradient backpropagation and thus prevents end-to-end training, we propose a two-step sequential training policy for this model. With synthetic data, our deep learning-based channel estimation can outperform least squares (LS) channel estimation for unquantized (full-resolution) OFDM at average SNRs up to 14 dB. For data detection, our proposed design achieves lower bit error rate (BER) in fading than unquantized OFDM at average SNRs up to 10 dB.
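To make the core difficulty concrete, here is a minimal numpy sketch (not from the paper; all parameters are illustrative) that passes one OFDM symbol through a multipath channel at 5 dB SNR and then applies one-bit complex quantization. After the sign operation, all amplitude information is gone, which is why conventional LS-style estimators break down.

import numpy as np

rng = np.random.default_rng(0)
N, cp = 64, 16                        # subcarriers and CP length (illustrative)

# QPSK symbols on each subcarrier
bits = rng.integers(0, 2, (2, N))
X = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

# OFDM modulation: IFFT plus cyclic prefix
x = np.fft.ifft(X) * np.sqrt(N)
x_cp = np.concatenate([x[-cp:], x])

# 4-tap Rayleigh multipath channel plus AWGN at 5 dB average SNR
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
y = np.convolve(x_cp, h)[: len(x_cp)]
sigma = np.sqrt(np.mean(np.abs(y) ** 2) / 10 ** (5 / 10) / 2)
y += sigma * (rng.standard_normal(len(y)) + 1j * rng.standard_normal(len(y)))

# One-bit complex quantization: only the signs of I and Q survive
y_q = np.sign(y.real) + 1j * np.sign(y.imag)

# Frequency-domain observation actually available to the receiver
Y = np.fft.fft(y_q[cp:]) / np.sqrt(N)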



Related research

Channel estimation and signal detection are essential steps to ensure the quality of end-to-end communication in orthogonal frequency-division multiplexing (OFDM) systems. In this paper, we develop DDLSD, a Data-driven Deep Learning approach for Signal Detection in OFDM systems. First, the OFDM system model is established. Then, a long short-term memory (LSTM) network is introduced into the OFDM system model. Wireless channel data are generated through simulation, and the preprocessed time-series features are fed into the LSTM for offline training. Finally, the trained model is used for online recovery of the transmitted signal. The difference between this scheme and existing OFDM receivers is that explicit channel state information (CSI) estimation is replaced by an implicit one, and the transmitted symbols are recovered directly. Simulation results show that the DDLSD scheme outperforms existing traditional methods in terms of channel estimation and signal detection performance.
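A minimal PyTorch sketch of this style of detector is shown below. The layer widths, input shaping, and sigmoid bit output are assumptions for illustration only; the paper's exact architecture is not reproduced here.

import torch
import torch.nn as nn

class LSTMDetector(nn.Module):
    # Maps received OFDM samples straight to bit estimates, folding
    # channel estimation implicitly into the network weights.
    def __init__(self, in_dim=256, hidden=128, out_bits=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_bits)

    def forward(self, y):                     # y: (batch, seq_len, in_dim)
        out, _ = self.lstm(y)
        return torch.sigmoid(self.head(out[:, -1]))   # bit probabilities

# Offline training on simulated channel data, online inference afterwards
model, loss_fn = LSTMDetector(), nn.BCELoss()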
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code, i.e., the one that minimizes the probability of bit error, can be obtained by perfectly training a special autoencoder, where perfect training means convergence to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose using turbo codes as an implicit regularizer, i.e., concatenating a turbo code with an autoencoder. It is shown empirically that this design performs nearly as well as the hypothetically perfectly trained autoencoder, and we provide a theoretical proof of why this is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to sparsity in neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method makes one-bit quantization operational even for 16-QAM.
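The sketch below shows the general shape of such an autoencoder code with a one-bit receiver. The outer turbo code is omitted, and the straight-through gradient surrogate used to train through sign() is a common workaround assumed here for illustration, not necessarily the paper's training procedure; all dimensions are likewise illustrative.

import torch
import torch.nn as nn

class OneBitAutoencoderCode(nn.Module):
    # Toy inner code; in the proposed scheme this would be concatenated
    # with an outer turbo code.
    def __init__(self, k=16, n=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
        self.dec = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))

    def forward(self, bits, noise_std=0.5):
        x = self.enc(bits)
        x = x * (x.shape[-1] ** 0.5) / x.norm(dim=-1, keepdim=True)  # power constraint
        y = x + noise_std * torch.randn_like(x)                      # AWGN channel
        y_hard = torch.sign(y)                                       # one-bit receiver
        y = y + (y_hard - y).detach()   # straight-through: hard forward, soft backward
        return torch.sigmoid(self.dec(y))                            # bit estimates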
Yan Sun, Chao Wang, Huan Cai (2020)
In this paper, we study equalizer design for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems with an insufficient cyclic prefix (CP). In particular, signal detection performance is severely impaired by inter-carrier interference (ICI) and inter-symbol interference (ISI) when the multipath delay spread exceeds the CP length. To tackle this problem, a deep learning-based equalizer is proposed to approximate maximum likelihood detection. Motivated by the dependency between adjacent subcarriers, a computationally efficient joint detection scheme is developed. Employing the proposed equalizer, an iterative receiver is also constructed, and its detection performance is evaluated through simulations over measured multipath channels. Our results reveal that the proposed receiver achieves significant performance improvement over two traditional baseline schemes.
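One plausible way to realize the joint detection over adjacent subcarriers is sketched below; the three-subcarrier window, hidden widths, and QPSK output are illustrative guesses rather than the paper's exact network.

import torch
import torch.nn as nn

class NeighborEqualizer(nn.Module):
    # Detects one subcarrier while also observing its two neighbors,
    # exploiting the local structure of the ICI caused by the short CP.
    def __init__(self, window=3, mod_order=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * window, hidden), nn.ReLU(),  # real+imag of the window
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, mod_order),              # per-symbol logits
        )

    def forward(self, y_window):          # y_window: (batch, 2*window), real-valued
        return self.net(y_window)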
Vasileios Nakos (2017)
Is it possible to obliviously construct a set of hyperplanes H such that one can approximate a unit vector x given only the side on which x lies with respect to every h in H? In the sparse recovery literature, where x is approximately k-sparse, this problem is called one-bit compressed sensing and has received a fair amount of attention over the last decade. In this paper we obtain the first scheme that achieves almost optimal measurements and sublinear decoding time for one-bit compressed sensing in the non-uniform case. For a large range of parameters, we improve the state of the art in both the number of measurements and the decoding time.
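The measurement model is easy to state in code. The numpy sketch below draws Gaussian hyperplanes, records only the side of each one, and recovers x with the simple linear estimator H^T b followed by hard thresholding; this baseline decoder runs in linear time and is not the paper's sublinear-time scheme, and all sizes are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 200, 10               # ambient dimension, measurements, sparsity

# k-sparse unit vector x
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)

# Oblivious hyperplanes: rows of a random Gaussian matrix
H = rng.standard_normal((m, n))
b = np.sign(H @ x)                    # only the side of each hyperplane is seen

# Baseline decoder: correlate the signs with the rows, then keep the top k
x_hat = H.T @ b
idx = np.argsort(np.abs(x_hat))[-k:]
x_rec = np.zeros(n)
x_rec[idx] = x_hat[idx]
x_rec /= np.linalg.norm(x_rec)
print("cosine similarity:", float(x @ x_rec))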
In this paper, we investigate model-driven deep learning (DL) for MIMO detection. In particular, the MIMO detector is designed by unfolding an iterative algorithm and adding trainable parameters. Since the number of trainable parameters is much smaller than in a data-driven DL-based signal detector, the model-driven DL-based MIMO detector can be trained rapidly with a much smaller data set. The proposed MIMO detector can easily be extended to soft-input soft-output detection. Furthermore, we investigate joint MIMO channel estimation and signal detection (JCESD), where the detector accounts for the channel estimation error and channel statistics, while the channel estimation is refined by the detected data and accounts for the detection error. Numerical results show that the model-driven DL-based MIMO detector significantly improves the performance of the corresponding traditional iterative detector, outperforms other DL-based MIMO detectors, and exhibits superior robustness to various mismatches.
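A minimal instance of this unfolding idea is sketched below: T iterations of gradient descent on the least-squares objective, with one trainable step size per layer. Real model-driven detectors (e.g., OAMP-Net) add considerably more structure; the tanh projection toward +/-1 symbols and all sizes here are illustrative assumptions.

import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    # Each layer is one iteration of a classical algorithm; only the
    # per-layer step sizes are learned, so little training data is needed.
    def __init__(self, T=10):
        super().__init__()
        self.steps = nn.Parameter(torch.full((T,), 0.1))

    def forward(self, y, H):              # y: (m,), H: (m, n), real-valued model
        x = torch.zeros(H.shape[-1], dtype=y.dtype)
        for lr in self.steps:
            grad = H.T @ (H @ x - y)      # gradient of 0.5 * ||y - Hx||^2
            x = torch.tanh(x - lr * grad) # soft projection toward BPSK symbols
        return x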
