
Data-Rate Driven Transmission Strategy for Deep Learning Based Communication Systems

Added by Xiao Chen
Publication date: 2018
Research language: English





Deep learning (DL) based autoencoders are a promising architecture for implementing end-to-end communication systems. One fundamental problem of such systems is how to increase the transmission rate. Two new schemes are proposed to address the limited data rate issue: an adaptive transmission scheme and a generalized data representation (GDR) scheme. In the first scheme, the adaptive transmission selects the transmission vectors that maximize the data rate under different channel conditions. The block error rate (BLER) of the first scheme is 80% lower than that of the conventional one-hot vector scheme, which implies that a higher data rate can be achieved by the adaptive transmission. In the second scheme, the GDR replaces the conventional one-hot representation. The GDR scheme achieves a higher data rate than the one-hot vector scheme with comparable BLER performance; for example, when the vector size is eight, the proposed GDR scheme doubles the data rate of the one-hot vector scheme. Moreover, combining the two proposed schemes yields further gains. The effect of signal-to-noise ratio (SNR) is also analyzed for these DL-based communication systems. Numerical results show that training the autoencoder on a data set with various SNR values attains robust BLER performance under different channel conditions.
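To make the autoencoder setup concrete, below is a minimal PyTorch sketch of an end-to-end link trained over an AWGN channel. The layer sizes, block length, and training SNR are illustrative assumptions, and the GDR input is simplified to a normalized multi-hot vector; the point is only to show how a one-hot block and a multi-hot block feed the same encoder/channel/decoder chain while carrying different numbers of information bits.

```python
# Minimal sketch (PyTorch) of an autoencoder-based end-to-end link over AWGN.
# Layer widths, block length n_channel and the multi-hot GDR are assumptions.
import torch
import torch.nn as nn

M, n_channel = 8, 7                     # message vector size and channel uses (assumed)

class AutoencoderLink(nn.Module):
    def __init__(self, m=M, n=n_channel):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(m, m), nn.ReLU(), nn.Linear(m, n))
        self.decoder = nn.Sequential(nn.Linear(n, m), nn.ReLU(), nn.Linear(m, m))

    def forward(self, x, snr_db=7.0):
        z = self.encoder(x)
        # Normalize each codeword to unit average power per channel use.
        z = (z.shape[1] ** 0.5) * z / z.norm(dim=1, keepdim=True)
        noise_std = 10 ** (-snr_db / 20)
        y = z + noise_std * torch.randn_like(z)      # AWGN channel
        return self.decoder(y)

# One-hot input: a single active position out of M.
one_hot = torch.eye(M)[torch.randint(0, M, (32,))]
# GDR-style input: two active positions of value 1/2, so one block carries more bits.
idx = torch.rand(32, M).topk(2, dim=1).indices
gdr = torch.zeros(32, M).scatter_(1, idx, 0.5)

model = AutoencoderLink()
logits = model(gdr)                                   # train against gdr targets
```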

Related research

Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
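As a rough illustration of this structure, the sketch below shows one learned denoising-AMP layer for a measurement model y = A h + n, with a small convolutional denoiser standing in for the DnCNN network and a Monte-Carlo divergence estimate for the Onsager correction. The shapes, denoiser depth, and layer interface are assumptions for illustration, not the paper's exact design.

```python
# One illustrative LDAMP-style layer: denoise the pseudo-data, then update the
# residual with an Onsager correction (Monte-Carlo divergence estimate).
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    """Lightweight stand-in for the DnCNN denoiser inside each LDAMP layer."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv1d(width, 1, 3, padding=1))

    def forward(self, x):                              # x: (batch, length)
        return x - self.net(x.unsqueeze(1)).squeeze(1)  # residual denoising

def ldamp_layer(h_hat, z, y, A, denoiser):
    """y: (M,), A: (M, N), h_hat: (N,), z: (M,) -> updated (h_hat, z)."""
    r = h_hat + A.t() @ z                               # pseudo-data vector
    h_new = denoiser(r.unsqueeze(0)).squeeze(0)
    # Monte-Carlo estimate of the denoiser divergence for the Onsager term.
    eps, probe = 1e-3, torch.randn_like(r)
    d_eps = denoiser((r + eps * probe).unsqueeze(0)).squeeze(0)
    div = (probe * (d_eps - h_new)).sum() / eps
    z_new = y - A @ h_new + (div / y.numel()) * z
    return h_new, z_new
```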
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative algorithm, and a small set of trainable parameters is optimized through deep learning techniques to improve the detection performance. Since the number of trainable variables equals the number of layers, the network can be trained within a very short time. Furthermore, the network can handle time-varying channels after a single training. Numerical results show that the proposed approach significantly improves the performance of the underlying iterative algorithm under Rayleigh and correlated MIMO channels.
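The unfolding idea can be sketched as follows: a classical iteration is unrolled into a fixed number of layers, and only a handful of scalars are learned. The example below unrolls a plain gradient-descent detector with one trainable step size per layer; the network in the paper unfolds a more elaborate iteration, so this is only an illustration of why training is cheap and fast.

```python
# Hedged sketch of algorithm unfolding for MIMO detection:
# x_{t+1} = x_t + gamma_t * H^T (y - H x_t), with gamma_t learned per layer.
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    def __init__(self, num_layers=10):
        super().__init__()
        # One trainable parameter per layer, so training needs little data or time.
        self.gamma = nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, y, H):
        x = torch.zeros(H.shape[1])
        for t in range(self.gamma.numel()):
            x = x + self.gamma[t] * (H.t() @ (y - H @ x))   # unfolded iteration
            x = torch.tanh(x)                               # soft projection onto +/-1 (BPSK, assumed)
        return x
```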
Ensemble models are widely used to solve complex tasks by decomposing them into multiple simpler tasks, each solved locally by a single member of the ensemble. Decoding of error-correction codes is a hard problem due to the curse of dimensionality, which leads one to consider ensembles of decoders as a possible solution. Nonetheless, complexity must be taken into account, especially in decoding. We suggest a low-complexity scheme where a single member participates in the decoding of each word. First, the distribution of feasible words is partitioned into non-overlapping regions. Thereafter, specialized experts are formed by independently training each member on a single region. A classical hard-decision decoder (HDD) is employed to map every word to a single expert, as shown in the sketch below. FER gains of up to 0.4 dB in the waterfall region and 1.25 dB in the error-floor region are achieved for the BCH(63,36) and BCH(63,45) codes with cycle-reduced parity-check matrices, compared to the previous best result of the paper Active Deep Decoding of Linear Codes.
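A hedged sketch of this gating logic follows: the hard-decision decoder output selects exactly one region, and only that region's expert decoder is invoked. The names hdd_decode and experts, and the hash-based region mapping, are placeholders for illustration rather than the paper's exact partition rule.

```python
# Route one received word to a single expert decoder via a classical HDD gate.
import numpy as np

def decode_with_experts(llr, hdd_decode, experts, num_regions):
    """llr: channel LLR vector; experts: list of per-region neural decoders."""
    hard = (llr < 0).astype(np.int8)        # hard decisions from the channel LLRs
    hdd_word = hdd_decode(hard)             # classical HDD output (placeholder callable)
    # Map the HDD result to a single region index (hash-based mapping assumed here).
    region = hash(hdd_word.tobytes()) % num_regions
    return experts[region](llr)             # only one ensemble member decodes this word
```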
This paper investigates a machine learning-based power allocation design for secure transmission in a cognitive radio (CR) network. In particular, a neural network (NN)-based approach is proposed to maximize the secrecy rate of the secondary receiver under constraints on the total transmit power of the secondary transmitter and the interference leakage to the primary receiver; three different regularization schemes are developed within this framework. The key advantage of the proposed algorithm over conventional approaches is its ability to solve the power allocation problem with both perfect and imperfect channel state information, whereas a conventional setting requires two completely different optimization frameworks, namely the robust and non-robust designs. Furthermore, conventional algorithms are often based on iterative techniques and hence require a considerable number of iterations, rendering them less suitable for future wireless networks with very stringent delay constraints. To meet the unprecedented requirements of future ultra-reliable low-latency networks, we propose an NN-based approach that determines the power allocation in a CR network with significantly reduced computational time and complexity. As the trained NN only requires a small number of linear operations to yield the required power allocations, the approach can also be extended to different delay-sensitive applications and services in future wireless networks. When evaluated against conventional approaches on a suitable test set, the proposed approach achieves more than 94% of the secrecy rate performance with less than 1% of the computation time, while satisfying the interference leakage constraints in more than 93% of cases. These results are obtained with a significant reduction in computational time, which we believe makes the approach suitable for future real-time wireless applications.
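The following sketch illustrates the general idea of replacing an iterative optimizer with a small feed-forward network that maps channel gains directly to a power allocation. The layer sizes, the choice of inputs, and the softmax-based handling of the total-power budget are assumptions for illustration, not the paper's exact architecture or constraint handling.

```python
# Small feed-forward network mapping channel gains to a power allocation whose
# entries are non-negative and sum to the total-power budget.
import torch
import torch.nn as nn

class PowerAllocNet(nn.Module):
    def __init__(self, num_subchannels=16, p_total=1.0):
        super().__init__()
        self.p_total = p_total
        self.net = nn.Sequential(
            nn.Linear(3 * num_subchannels, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_subchannels), nn.Softmax(dim=-1))

    def forward(self, gains):
        # gains: secondary-link, eavesdropper and interference channel gains, concatenated.
        return self.p_total * self.net(gains)   # powers sum to the total-power budget

alloc = PowerAllocNet()
powers = alloc(torch.rand(3 * 16))              # inference is a few linear operations
```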
Yuqing Du, Kaibin Huang (2018)
By implementing machine learning at the network edge, edge learning trains models by leveraging rich data distributed at edge devices (e.g., smartphones and sensors) and in return endows them with capabilities of seeing, listening, and reasoning. In edge learning, the need for high-mobility wireless data acquisition arises in scenarios where edge devices (or even servers) are mounted on ground or aerial vehicles. In this paper, we present a novel solution, called fast analog transmission (FAT), for high-mobility data acquisition in edge-learning systems, which has several key features. First, FAT incurs low latency: it requires no source-and-channel coding and no channel training thanks to the proposed technique of Grassmann analog encoding (GAE), which encodes data samples into subspace matrices. Second, FAT supports spatial multiplexing by directly transmitting analog vector data over an antenna array. Third, FAT can be seamlessly integrated with edge learning (i.e., training of a classifier model in this work). In particular, by applying a Grassmannian-classification algorithm from computer vision, the received GAE-encoded data can be used directly for training the model without decoding and conversion. Simulations show that this design outperforms conventional schemes in learning accuracy due to its robustness against data distortion induced by fast fading.
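A toy sketch of the GAE step is given below: a block of data samples is orthonormalized (here via a QR factorization) into a point on the Grassmann manifold, so the subspace spanned by the transmitted block, which carries the information, is unchanged by the unknown fading matrix acting on it, removing the need for channel training. The block dimensions and the QR-based construction are assumptions; the paper's exact encoding may differ.

```python
# Toy Grassmann analog encoding: map a data block to a matrix with orthonormal columns.
import numpy as np

def grassmann_analog_encode(samples):
    """samples: (T, d) real data block -> (T, d) matrix with orthonormal columns."""
    q, _ = np.linalg.qr(samples)        # orthonormal basis of the sample subspace
    return q                            # analog symbols, one column per spatial stream

block = np.random.randn(16, 4)          # 16 channel uses, 4 spatial streams (assumed)
tx = grassmann_analog_encode(block)
```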