Deep learning based physical layer design, i.e., using dense neural networks as encoders and decoders, has received considerable interest recently. However, while such an approach is naturally training data-driven, actions of the wireless channel are mimicked using standard channel models, which only partially reflect the physical ground truth. Very recently, neural network based mutual information (MI) estimators have been proposed that directly extract channel actions from the input-output measurements and feed these outputs into the channel encoder. This is a promising direction as such a new design paradigm is fully adaptive and training data-based. This paper implements further recent improvements of such MI estimators, analyzes theoretically their suitability for the channel coding problem, and compares their performance. To this end, a new MI estimator using a "reverse Jensen" approach is proposed.
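The neural MI estimators mentioned above typically maximize the Donsker-Varadhan (DV) lower bound over a critic network. As a minimal, self-contained sketch (our own toy setup, not taken from the paper), the following evaluates the DV bound on correlated Gaussian data using the analytic log density ratio as the critic, the function a MINE-style network would have to learn; the bound is then tight up to sampling noise:

```python
import numpy as np

# Donsker-Varadhan (DV) lower bound on mutual information, the objective
# that MINE-style neural estimators maximize over a critic T:
#   I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))]
# Toy setup: jointly Gaussian (X, Y) with correlation rho, for which the
# optimal critic is the exact log density ratio written out below.

rho = 0.5
true_mi = -0.5 * np.log(1.0 - rho**2)  # exact MI for correlated Gaussians

def critic(x, y):
    """Optimal DV critic: log p(x,y) - log p(x)p(y) for the Gaussian pair."""
    return (-0.5 * np.log(1.0 - rho**2)
            - rho**2 * (x**2 + y**2) / (2.0 * (1.0 - rho**2))
            + rho * x * y / (1.0 - rho**2))

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

joint_term = critic(x, y).mean()            # expectation under p(x,y)
y_shuffled = rng.permutation(y)             # shuffling simulates p(x)p(y)
marginal_term = np.log(np.exp(critic(x, y_shuffled)).mean())

mi_dv = joint_term - marginal_term          # close to true_mi (~0.144 nats)
```

In a real MINE implementation the critic would be a trained neural network and the `log-mean-exp` term is the source of the bias and variance issues that the improved estimators discussed in the paper address.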
In this paper, we investigate the impacts of transmitter and receiver windows on orthogonal time-frequency space (OTFS) modulation and propose a window design to improve the OTFS channel estimation performance. Assuming ideal pulse shaping filters at
Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurat
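The Fisher-information asymptotics mentioned in the abstract can be illustrated with a deliberately simple example of our own (not from the paper): a scalar Gaussian stimulus observed through Gaussian noise, where the asymptotic formula and the exact mutual information can both be written in closed form and compared:

```python
import numpy as np

# Toy illustration of a Fisher-information based asymptotic MI formula.
# For a scalar stimulus theta ~ N(0, s^2) observed as R = theta + noise,
# noise ~ N(0, sigma^2), the Fisher information is J = 1/sigma^2 and the
# asymptotic approximation
#   I(theta; R) ~= H(theta) - (1/2) * E[log(2*pi*e / J)]
# reduces to (1/2) log(s^2 / sigma^2), while the exact Gaussian-channel
# MI is (1/2) log(1 + s^2 / sigma^2). The two agree at high SNR.

s, sigma = 10.0, 1.0
exact_mi = 0.5 * np.log(1.0 + s**2 / sigma**2)   # exact value, in nats
fisher_mi = 0.5 * np.log(s**2 / sigma**2)        # asymptotic approximation
gap = exact_mi - fisher_mi                       # shrinks as SNR grows
```

At this SNR the approximation error is below 0.005 nats; at low SNR the Fisher-based formula degrades, which is the regime where the corrections studied in work like the above become relevant.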
Estimators for mutual information are typically biased. However, in the case of the Kozachenko-Leonenko estimator for metric spaces, a type of nearest neighbour estimator, it is possible to calculate the bias explicitly.
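For concreteness, here is a minimal sketch (our own, in one dimension with k = 1) of the Kozachenko-Leonenko nearest-neighbour estimator whose bias the abstract discusses, tested on standard Gaussian data whose true differential entropy is known:

```python
import numpy as np

# Kozachenko-Leonenko (KL) nearest-neighbour entropy estimator, sketched
# in 1D with k = 1. For samples x_1..x_N with 1-NN distances eps_i, the
# estimate in nats is
#   H_hat = psi(N) - psi(1) + log(V_1) + (1/N) * sum_i log(eps_i),
# where V_1 = 2 is the volume of the 1D unit ball and psi(N) - psi(1)
# equals the harmonic number H_{N-1}.

rng = np.random.default_rng(1)
n = 5000
x = np.sort(rng.standard_normal(n))   # standard Gaussian test data

gaps = np.diff(x)
# 1-NN distance of each point: the smaller of its left and right gap
eps = np.minimum(np.concatenate(([np.inf], gaps)),
                 np.concatenate((gaps, [np.inf])))

harmonic = np.sum(1.0 / np.arange(1, n))          # psi(N) - psi(1)
h_hat = harmonic + np.log(2.0) + np.mean(np.log(eps))

true_h = 0.5 * np.log(2.0 * np.pi * np.e)         # ~1.419 nats for N(0,1)
```

The finite-sample deviation of `h_hat` from `true_h` is exactly the kind of bias that, for this estimator, admits an explicit calculation.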
Mutual information (MI) is a widely used measure of the dependency between two random variables in information theory, statistics, and machine learning. Recently, several MI estimators have been proposed that can achieve parametric MSE converg
We point out a limitation of mutual information neural estimation (MINE) in which the network fails to learn during the initial training phase, leading to slow convergence in terms of the number of training iterations. To solve this problem, we propose a faster