Deep learning (DL)-based autoencoders are a promising architecture for implementing end-to-end communication systems. One fundamental problem of such systems is how to increase the transmission rate. Two new schemes are proposed to address the limited data rate issue: an adaptive transmission scheme and a generalized data representation (GDR) scheme. In the first scheme, adaptive transmission selects the transmission vectors that maximize the data rate under different channel conditions; its block error rate (BLER) is 80% lower than that of the conventional one-hot vector scheme, implying that a higher data rate can be achieved. In the second scheme, the GDR replaces the conventional one-hot representation and achieves a higher data rate than the one-hot vector scheme with comparable BLER performance. For example, when the vector size is eight, the proposed GDR scheme can double the data rate of the one-hot vector scheme. Moreover, combining the two proposed schemes yields further gains. The effect of the signal-to-noise ratio (SNR) on these DL-based communication systems is also analyzed. Numerical results show that training the autoencoder on a data set with various SNR values attains robust BLER performance under different channel conditions.
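The rate advantage of a generalized data representation over one-hot vectors can be illustrated with a minimal sketch. The exact GDR construction is specified in the paper; here a simplified multi-hot variant is assumed purely for illustration: placing m equal non-zero entries (normalized to unit sum) in a length-M vector lets it index C(M, m) messages instead of M, so each transmitted vector carries more bits.

```python
import numpy as np
from itertools import combinations
from math import comb, log2

def one_hot(msg, M=8):
    # Conventional one-hot representation: one of M messages,
    # so each length-M vector carries log2(M) bits.
    v = np.zeros(M)
    v[msg] = 1.0
    return v

def gdr(msg, M=8, m=2):
    # Illustrative (assumed) GDR: m equal non-zero entries,
    # normalized to unit sum, indexing C(M, m) messages.
    support = list(combinations(range(M), m))[msg]
    v = np.zeros(M)
    v[list(support)] = 1.0 / m
    return v

M = 8
bits_one_hot = log2(M)          # 3 bits per length-8 vector
bits_gdr = log2(comb(M, 2))     # log2(28) ≈ 4.81 bits per vector
```

With M = 8 the one-hot scheme carries 3 bits per vector, while even this simple two-hot variant carries about 4.8 bits; richer GDR constructions in the paper push this further, at the cost of a harder decoding task for the receiver network.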
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input and multiple-output systems. To solve this problem, we exploit a le
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding the iterative algorithm. Some trainable parameters are optimized throu
Ensemble models are widely used to solve complex tasks by their decomposition into multiple simpler tasks, each one solved locally by a single member of the ensemble. Decoding of error-correction codes is a hard problem due to the curse of dimensiona
This paper investigates a machine learning-based power allocation design for secure transmission in a cognitive radio (CR) network. In particular, a neural network (NN)-based approach is proposed to maximize the secrecy rate of the secondary receiver
By implementing machine learning at the network edge, edge learning trains models by leveraging rich data distributed across edge devices (e.g., smartphones and sensors), in return endowing them with capabilities of seeing, listening, and reasoning. In edg