
Learning-Based Multi-Channel Access in 5G and Beyond Networks with Fast Time-Varying Channels

Posted by: Tiejun Lv
Publication date: 2020
Research field: Electronic engineering
Paper language: English





We propose a learning-based scheme to investigate the dynamic multi-channel access (DMCA) problem in fifth-generation (5G) and beyond networks with fast time-varying channels whose parameters are unknown. The proposed learning-based scheme maintains near-optimal performance over long periods, even under sharply changing channels. The scheme greatly reduces processing delay and effectively alleviates the decision-lag error caused by the non-immediacy of information acquisition and processing. After introducing the network model with unknown channel parameters and the streaming model, we first propose a psychology-based personalized quality-of-service model. Two access criteria are then presented, one for the live streaming model and one for the buffered streaming model, and their corresponding optimization problems are formulated. These problems are solved by the learning-based DMCA scheme, which combines a recurrent neural network with deep reinforcement learning. Within this scheme, the agent invokes the proposed prediction-based deep deterministic policy gradient algorithm as its learning algorithm. As a novel technical paradigm, the scheme has strong universality, since it can easily be extended to other problems in wireless communications. Simulation results based on real channel data validate that the performance of the learning-based scheme approaches that of exhaustive search when a decision is made at each time slot, and surpasses exhaustive search when decisions are made only every few time slots.
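To make the scheme's building blocks concrete, the minimal sketch below pairs a GRU history encoder with a DDPG-style actor-critic over channel-access decisions. The channel count, history length, layer sizes, and class names are illustrative assumptions; the paper's exact prediction-based DDPG architecture is not reproduced here.

```python
# Minimal sketch, assuming a GRU history encoder feeding a DDPG-style
# actor-critic. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 8   # assumed number of selectable channels
HIST_LEN = 16    # assumed length of the observation history fed to the RNN

class Actor(nn.Module):
    """Maps a history of channel observations to channel-access weights."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CHANNELS)

    def forward(self, obs_seq):
        _, h = self.rnn(obs_seq)        # h: (layers, batch, hidden)
        # softmax yields a continuous relaxation of the channel choice,
        # so DDPG's deterministic actor applies
        return torch.softmax(self.head(h[-1]), dim=-1)

class Critic(nn.Module):
    """Scores a (history, action) pair with an estimated Q-value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + N_CHANNELS, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs_seq, action):
        _, h = self.rnn(obs_seq)
        return self.head(torch.cat([h[-1], action], dim=-1))

actor, critic = Actor(), Critic()
obs_hist = torch.randn(1, HIST_LEN, N_CHANNELS)  # dummy observation history
action = actor(obs_hist)                         # access weights per channel
q_value = critic(obs_hist, action)               # estimated long-term return
```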




Read also

This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing only a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms conventional IA in beam prediction accuracy in both line-of-sight (LoS) and non-line-of-sight (NLoS) mmWave channel conditions.
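A minimal sketch of the DeepIA-style mapping, under assumed sizes (an 8-beam measurement subset, a 64-beam codebook, and arbitrary layer widths): the DNN turns a vector of RSSs into logits over the full codebook, and the predicted beam is the argmax.

```python
# Illustrative RSS-subset -> best-beam classifier; all sizes are assumptions.
import torch
import torch.nn as nn

N_SWEPT_BEAMS = 8    # assumed reduced beam subset swept at test time
N_TOTAL_BEAMS = 64   # assumed full beam codebook size

beam_predictor = nn.Sequential(
    nn.Linear(N_SWEPT_BEAMS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_TOTAL_BEAMS))  # logits over the full codebook

rss = torch.randn(1, N_SWEPT_BEAMS)             # RSSs from the swept subset
best_beam = beam_predictor(rss).argmax(dim=-1)  # index of the predicted beam
print(int(best_beam))
```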
In this work, we focus on the model-mismatch problem in model-based subspace channel tracking for the correlated underwater acoustic channel. A model based on the underwater acoustic channel's correlation can serve as the state-space model in the Kalman filter, improving channel tracking compared to tracking without a model. Even though the data support the assumption that the model is slowly varying and uncorrelated to some degree, the model-mismatch problem cannot be ignored if tracking performance is to improve further, because most channel models encounter this problem in the underwater acoustic channel. We therefore provide a dynamic time-variant state-space model for underwater acoustic channel tracking that is tolerant to the slight correlation remaining after decorrelation. Moreover, a forward-backward Kalman filter is incorporated to further improve tracking performance. The performance of the proposed algorithm is demonstrated on the same at-sea data used for conventional channel tracking. Compared with conventional algorithms, the proposed algorithm shows significant improvement, especially in rough sea conditions where the channels are fast-varying.
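The forward-backward idea can be illustrated on a scalar toy model: a forward Kalman pass followed by a backward Rauch-Tung-Striebel smoothing pass. The AR(1) state-space model and noise levels below are stand-in assumptions, not the paper's dynamic time-variant model.

```python
# Forward Kalman pass + backward RTS smoothing on a toy AR(1) channel tap.
import numpy as np

rng = np.random.default_rng(0)
T, a, q, r = 200, 0.98, 0.01, 0.1     # steps, AR coeff, process/obs noise
h = np.zeros(T); y = np.zeros(T)      # true tap and noisy observations
for t in range(1, T):
    h[t] = a * h[t-1] + np.sqrt(q) * rng.standard_normal()
    y[t] = h[t] + np.sqrt(r) * rng.standard_normal()

# forward Kalman pass
x_f = np.zeros(T); P_f = np.zeros(T); x_p = np.zeros(T); P_p = np.zeros(T)
P_f[0] = 1.0                          # assumed prior variance
for t in range(1, T):
    x_p[t] = a * x_f[t-1]; P_p[t] = a**2 * P_f[t-1] + q   # predict
    k = P_p[t] / (P_p[t] + r)                             # Kalman gain
    x_f[t] = x_p[t] + k * (y[t] - x_p[t]); P_f[t] = (1 - k) * P_p[t]

# backward (RTS) smoothing pass
x_s = x_f.copy()
for t in range(T - 2, -1, -1):
    g = P_f[t] * a / P_p[t+1]
    x_s[t] = x_f[t] + g * (x_s[t+1] - a * x_f[t])

print("filter MSE:  ", np.mean((x_f - h) ** 2))
print("smoother MSE:", np.mean((x_s - h) ** 2))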
In this paper, we propose a joint radio and core resource allocation framework for NFV-enabled networks. In the proposed system model, the goal is to maximize energy efficiency (EE) while guaranteeing end-to-end (E2E) quality of service (QoS) for different service types. To this end, we formulate an optimization problem in which power and spectrum resources are allocated in the radio part, while in the core part the chaining, placement, and scheduling of functions are performed to ensure the QoS of all users. This joint optimization problem is modeled as a Markov decision process (MDP) that accounts for the time-varying characteristics of the available resources and wireless channels. A soft actor-critic deep reinforcement learning (SAC-DRL) algorithm based on the maximum entropy framework is then used to solve this MDP. Numerical results reveal that the proposed joint approach based on the SAC-DRL algorithm significantly reduces energy consumption compared to the case in which the radio (R-RA) and core (NFV-RA) resource allocation problems are optimized separately.
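As a sketch of the maximum-entropy machinery involved, the snippet below computes a discrete-action SAC actor loss, which trades the Q-value against policy entropy. The state/action dimensions, the tiny networks, and the entropy weight are assumptions for illustration, not the paper's configuration.

```python
# Discrete-action SAC actor objective: E_pi[alpha * log pi(a|s) - Q(s, a)].
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, ALPHA = 16, 4, 0.2   # assumed sizes; ALPHA = entropy weight

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, N_ACTIONS))
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))

state = torch.randn(32, STATE_DIM)         # dummy batch of states
logits = policy(state)
probs = torch.softmax(logits, dim=-1)
log_probs = torch.log_softmax(logits, dim=-1)

# Q is detached so the gradient updates only the policy
actor_loss = (probs * (ALPHA * log_probs - q_net(state).detach())).sum(-1).mean()
actor_loss.backward()
print(float(actor_loss))
```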
Today's wireless networks allocate radio resources to users based on the orthogonal multiple access (OMA) principle. However, as the number of users increases, OMA-based approaches may not meet the stringent emerging requirements, including very high spectral efficiency, very low latency, and massive device connectivity. The non-orthogonal multiple access (NOMA) principle emerges as a solution that improves spectral efficiency while allowing some degree of multiple access interference at receivers. In this tutorial-style paper, we aim to provide a unified model for NOMA, covering uplink and downlink transmissions, along with extensions to multiple-input multiple-output and cooperative communication scenarios. Through numerical examples, we compare the performance of OMA and NOMA networks. Implementation aspects and open issues are also detailed.
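The spectral-efficiency claim can be checked with the standard two-user downlink example: under superposition coding with successive interference cancellation (SIC), NOMA can beat time-shared OMA on the same channel. The channel gains and power split below are illustrative numbers, not taken from the paper.

```python
# Two-user downlink NOMA (with SIC) versus OMA (time sharing); rates in bit/s/Hz.
import numpy as np

P, N0 = 1.0, 1.0      # total transmit power and noise power
g1, g2 = 10.0, 0.5    # channel gains: user 1 strong, user 2 weak
a1, a2 = 0.4, 0.6     # NOMA power split (more power to the weak user)

# NOMA: user 1 removes user 2's signal via SIC; user 2 treats user 1 as noise
r1_noma = np.log2(1 + a1 * P * g1 / N0)
r2_noma = np.log2(1 + a2 * P * g2 / (a1 * P * g2 + N0))

# OMA: each user gets half the time with full power
r1_oma = 0.5 * np.log2(1 + P * g1 / N0)
r2_oma = 0.5 * np.log2(1 + P * g2 / N0)

print(f"NOMA rates: {r1_noma:.2f}, {r2_noma:.2f}  sum = {r1_noma + r2_noma:.2f}")
print(f"OMA  rates: {r1_oma:.2f}, {r2_oma:.2f}  sum = {r1_oma + r2_oma:.2f}")
```

With these numbers each user's individual rate is at least as high under NOMA, and the sum rate is noticeably higher, illustrating the interference-tolerant gain the abstract describes.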
Deep learning provides powerful means to learn from spectrum data and solve complex tasks in 5G and beyond, such as beam selection for initial access (IA) in mmWave communications. To establish IA between the base station (e.g., gNodeB) and user equipment (UE) for directional transmissions, a deep neural network (DNN) can predict the beam that is best aligned with each UE by using the received signal strengths (RSSs) from a subset of possible narrow beams. While improving the latency and reliability of beam selection compared to conventional IA, which sweeps all beams, the DNN itself is susceptible to adversarial attacks. We present an adversarial attack that generates adversarial perturbations to manipulate the over-the-air captured RSSs fed as input to the DNN. This attack reduces the IA performance significantly and fools the DNN into choosing beams with small RSSs, compared to jamming attacks with Gaussian or uniform noise.
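One common way to realize such an input perturbation is a fast-gradient-sign (FGSM-style) step on the RSS vector, sketched below. The paper's exact attack construction may differ; the untrained toy network and the epsilon value are assumptions for the sketch.

```python
# FGSM-style perturbation of the RSS input to a beam-selection DNN.
import torch
import torch.nn as nn

N_SWEPT_BEAMS, N_TOTAL_BEAMS, EPS = 8, 64, 0.1  # sizes and step are assumed

dnn = nn.Sequential(nn.Linear(N_SWEPT_BEAMS, 64), nn.ReLU(),
                    nn.Linear(64, N_TOTAL_BEAMS))

rss = torch.randn(1, N_SWEPT_BEAMS, requires_grad=True)  # captured RSS input
logits = dnn(rss)
clean_beam = logits.argmax(dim=-1)                       # DNN's clean decision

# one signed-gradient step that increases the loss of the clean decision,
# pushing the DNN away from the beam it would otherwise pick
loss = nn.functional.cross_entropy(logits, clean_beam)
loss.backward()
adv_rss = rss + EPS * rss.grad.sign()

print("clean beam:", int(clean_beam), "-> attacked beam:", int(dnn(adv_rss).argmax()))
```

Against a trained beam predictor, perturbations of this form can be far more damaging than Gaussian or uniform jamming of the same power, which is the comparison the abstract draws.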
