
Meta-ViterbiNet: Online Meta-Learned Viterbi Equalization for Non-Stationary Channels

Posted by Tomer Raviv
Publication date: 2021
Paper language: English





Digital receivers based on deep neural networks (DNNs) can potentially operate in complex environments. However, the dynamic nature of communication channels implies that in some scenarios, DNN-based receivers should be periodically retrained in order to track temporal variations in the channel conditions. To this end, frequent transmissions of lengthy pilot sequences are generally required, at the cost of substantial overhead. In this work we propose a DNN-aided symbol detector, Meta-ViterbiNet, that tracks channel variations with reduced overhead by integrating three complementary techniques: 1) We leverage domain knowledge to implement a model-based/data-driven equalizer, ViterbiNet, that operates with a relatively small number of trainable parameters; 2) We tailor a meta-learning procedure to the symbol detection problem, optimizing the hyperparameters of the learning algorithm to facilitate rapid online adaptation; and 3) We adopt a decision-directed approach based on coded communications to enable online training with short-length pilot blocks. Numerical results demonstrate that Meta-ViterbiNet operates accurately in rapidly-varying channels, outperforming the previous best approaches, based on either ViterbiNet or conventional recurrent neural networks without meta-learning, by a margin of up to 0.6 dB in bit error rate in various challenging scenarios.
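To make the online-adaptation idea concrete, here is a minimal, first-order meta-adaptation sketch in PyTorch: a meta-learned initialization of a small stand-in detector network is cloned and refined with a few gradient steps on a short pilot block. The names (SmallDetector, adapt_online) and all sizes are illustrative assumptions, not the authors' ViterbiNet code or meta-training procedure.

```python
import torch
import torch.nn as nn


class SmallDetector(nn.Module):
    """Tiny network mapping a window of received samples to symbol log-scores."""

    def __init__(self, window=4, n_symbols=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 32), nn.ReLU(),
            nn.Linear(32, n_symbols),
        )

    def forward(self, y):
        return self.net(y)


def adapt_online(meta_model, pilot_y, pilot_x, steps=5, lr=1e-2):
    """Clone the meta-learned initialization and take a few gradient steps on a
    short pilot block -- the rapid online adaptation the abstract describes."""
    model = SmallDetector()
    model.load_state_dict(meta_model.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(pilot_y), pilot_x).backward()
        opt.step()
    return model


# Toy usage: random "pilot" windows and BPSK labels stand in for a real channel.
meta_model = SmallDetector()          # in practice: the meta-trained initialization
pilot_y = torch.randn(64, 4)          # 64 received windows of length 4
pilot_x = torch.randint(0, 2, (64,))  # corresponding transmitted-symbol labels
adapted = adapt_online(meta_model, pilot_y, pilot_x)
```

In the paper's setting the adapted network would replace the channel-likelihood computation inside the Viterbi recursion; here a toy classifier and random pilots stand in for that pipeline.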




Read also

Wireless communications over fast fading channels are challenging, requiring either frequent channel tracking or complicated signaling schemes such as orthogonal time frequency space (OTFS) modulation. In this paper, we propose low-complexity frequency domain equalizations to combat fast fading, based on novel discrete delay-time and frequency-Doppler channel models. Exploiting the circular stripe diagonal nature of the frequency-Doppler channel matrix, we introduce low-complexity frequency domain minimum mean square error (MMSE) equalization for OTFS systems with fully resolvable Doppler spreads. We also demonstrate that the proposed MMSE equalization is applicable to conventional orthogonal frequency division multiplexing (OFDM) and single carrier frequency domain equalization (SC-FDE) systems with short signal frames and partially resolvable Doppler spreads. After generalizing the input-output data symbol relationship, we analyze the equalization performance via channel matrix eigenvalue decomposition and derive a closed-form expression for the theoretical bit-error-rate. Simulation results for OTFS, OFDM, and SC-FDE modulations verify that the proposed low-complexity frequency domain equalization methods can effectively exploit the time diversity over fast fading channels. Even with partially resolvable Doppler spread, the conventional SC-FDE can achieve performance close to OTFS, especially in fast fading channels with a dominating line-of-sight path.
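As a concrete point of reference for the equalization step, the sketch below applies single-tap frequency-domain MMSE weights to one cyclic-prefix block over a time-invariant channel; the paper's delay-time and frequency-Doppler models generalize this to doubly-dispersive (fast-fading) channels, so the channel taps, SNR, and block length here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # block length (one CP block)
h = np.array([1.0, 0.5, 0.2])             # illustrative channel impulse response
snr_lin = 10 ** (15 / 10)                 # assumed 15 dB SNR

x = rng.choice([-1.0, 1.0], size=N)       # BPSK data block
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N))          # circular convolution
y = y + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * snr_lin)

H = np.fft.fft(h, N)                      # per-tone channel frequency response
W = np.conj(H) / (np.abs(H) ** 2 + 1 / snr_lin)            # single-tap MMSE weights
x_hat = np.fft.ifft(W * np.fft.fft(y)).real

print("toy-block BER:", np.mean(np.sign(x_hat) != x))
```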
Wireless applications that use high-reliability low-latency links depend critically on the capability of the system to predict link quality. This dependence is especially acute at the high carrier frequencies used by mmWave and THz systems, where the links are susceptible to blockages. Predicting blockages with high reliability requires a large number of data samples to train effective machine learning modules. With the aim of mitigating data requirements, we introduce a framework based on meta-learning, whereby data from distinct deployments are leveraged to optimize a shared initialization that decreases the data set size necessary for any new deployment. Predictors of two different events are studied: (1) at least one blockage occurs in a time window, and (2) the link is blocked for the entire time window. The results show that an RNN-based predictor trained using meta-learning is able to predict blockages after observing fewer samples than predictors trained using standard methods.
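For illustration, a minimal RNN predictor of the kind described above might look as follows: a GRU reads a window of past link-quality samples and outputs the probability that a blockage event occurs in the next window. The architecture, feature dimension, and names are assumptions for the sketch, not the paper's model; the meta-learned shared initialization would simply be the starting weights of such a network.

```python
import torch
import torch.nn as nn


class BlockagePredictor(nn.Module):
    """GRU over past link-quality samples -> probability of a future blockage."""

    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 1) past observations
        _, h = self.gru(x)
        return torch.sigmoid(self.head(h[-1]))   # P(blockage in the next window)


model = BlockagePredictor()
past = torch.randn(8, 20, 1)              # 8 links, 20 past samples each
print(model(past).shape)                  # torch.Size([8, 1])
```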
Tail-biting convolutional codes extend the classical zero-termination convolutional codes: both encoding schemes force the equality of start and end states, but under tail-biting every state is a valid termination. This paper proposes a machine-learning approach to improve the state-of-the-art decoding of tail-biting codes, focusing on the widely employed short-length regime as in the LTE standard, which also includes a CRC code. First, we parameterize the circular Viterbi algorithm (CVA), a baseline decoder that exploits the circular nature of the underlying trellis. An ensemble then combines multiple such weighted decoders, each specializing in decoding words from a specific region of the distribution of channel words. A region corresponds to a subset of termination states, and the ensemble covers the entire state space. A non-learnable gating mechanism satisfies two goals: it filters easily decoded words and mitigates the overhead of executing multiple weighted decoders. The CRC criterion is employed to choose only a subset of experts for decoding. Our method achieves an FER improvement of up to 0.75 dB over the CVA in the waterfall region for multiple code lengths, adding negligible computational complexity compared to the circular Viterbi algorithm at high SNRs.
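The gating idea can be summarized schematically as follows. Here `circular_viterbi`, `weighted_decoders`, and `crc_ok` are hypothetical placeholders for the baseline decoder, the ensemble of weighted CVA experts, and the CRC check; the snippet shows one possible way the gating and CRC filtering could be combined, not the paper's exact expert-selection rule.

```python
def decode_with_gating(llrs, circular_viterbi, weighted_decoders, crc_ok):
    """Try the plain circular Viterbi pass first; invoke the ensemble only on
    words that fail the CRC, keeping the extra complexity off the easy words."""
    candidate = circular_viterbi(llrs)
    if crc_ok(candidate):                 # easy word: accept, no ensemble cost
        return candidate
    # hard word: let each weighted expert decode, keep CRC-passing candidates
    survivors = [decode(llrs) for decode in weighted_decoders]
    survivors = [c for c in survivors if crc_ok(c)]
    return survivors[0] if survivors else candidate
```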
Jiuyu Liu, 2021
In this paper, a novel spatially non-stationary channel model is proposed for link-level computer simulations of massive multiple-input multiple-output (mMIMO) with extremely large aperture array (ELAA). The proposed channel model allows a mix of non-line-of-sight (NLoS) and LoS links between a user and service antennas. The NLoS/LoS state of each link is characterized by a binary random variable, which obeys a correlated Bernoulli distribution. The correlation is described in the form of an exponentially decaying window. In addition, the proposed model incorporates shadowing effects which are non-identical for NLoS and LoS states. It is demonstrated, through computer emulation, that the proposed model can capture almost all spatially non-stationary fading behaviors of the ELAA-mMIMO channel. Moreover, it has a low implementational complexity. With the proposed channel model, Monte-Carlo simulations are carried out to evaluate the channel capacity of ELAA-mMIMO. It is shown that the ELAA-mMIMO channel capacity has considerably different stochastic characteristics from the conventional mMIMO due to the presence of channel spatial non-stationarity.
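One simple way to draw correlated Bernoulli NLoS/LoS states with an exponentially decaying correlation window is a Gaussian-copula thresholding, sketched below. This is an illustrative construction under assumed parameter values, not necessarily the exact generation procedure of the proposed model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
M = 256                                   # number of ELAA service antennas (assumed)
p_los = 0.3                               # assumed marginal LoS probability
decay = 0.05                              # assumed correlation decay along the array

idx = np.arange(M)
R = np.exp(-decay * np.abs(idx[:, None] - idx[None, :]))    # exp. decaying window
L = np.linalg.cholesky(R + 1e-9 * np.eye(M))
g = L @ rng.standard_normal(M)            # correlated standard-normal field
los_state = (g < norm.ppf(p_los)).astype(int)               # 1 = LoS, 0 = NLoS

print("LoS fraction:", los_state.mean())
```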
Typically, loss functions, regularization mechanisms and other important aspects of training parametric models are chosen heuristically from a limited set of options. In this paper, we take the first step towards automating this process, with the view of producing models which train faster and more robustly. Concretely, we present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures. We develop a pipeline for meta-training such loss functions, targeted at maximizing the performance of the model trained under them. The loss landscape produced by our learned losses significantly improves upon the original task-specific losses in both supervised and reinforcement learning tasks. Furthermore, we show that our meta-learning framework is flexible enough to incorporate additional information at meta-train time. This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time. We make our code available at https://sites.google.com/view/mlthree.
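A minimal example of what a parametric loss can look like is sketched below: a small network maps (prediction, target) pairs to a non-negative penalty, so the loss itself carries trainable parameters that an outer meta-learner could optimize. The class name and architecture are assumptions for illustration; the authors' actual meta-training pipeline is available at the linked site.

```python
import torch
import torch.nn as nn


class ParametricLoss(nn.Module):
    """A tiny MLP mapping (prediction, target) pairs to a non-negative penalty."""

    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # keep the penalty >= 0
        )

    def forward(self, pred, target):
        z = torch.stack([pred, target], dim=-1)
        return self.net(z).mean()


loss_fn = ParametricLoss()
pred = torch.randn(32, requires_grad=True)
target = torch.randn(32)
loss = loss_fn(pred, target)
loss.backward()     # gradients flow to the predictions and to the loss parameters
print(float(loss))
```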