
Learning-based Load Balancing Handover in Mobile Millimeter Wave Networks

Published by: Sara Khosravi
Publication date: 2020
Research field: Electronic Engineering
Paper language: English





Millimeter-wave (mmWave) communication is a promising solution to the high data rate demands of the upcoming 5G and beyond communication networks. When it comes to supporting seamless connectivity in mobile scenarios, resource and handover management are two of the main challenges in mmWave networks. In this paper, we address these two problems jointly and propose a learning-based load balancing handover in multi-user mobile mmWave networks. Our handover algorithm selects a backup base station and allocates resources to maximize the sum rate of all users while ensuring a target rate threshold and preventing excessive handovers. We model the user association as a non-convex optimization problem. Then, by applying a deep deterministic policy gradient (DDPG) method, we approximate the solution of the optimization problem. Through simulations, we show that our proposed algorithm minimizes the number of events where a user's rate falls below its minimum rate requirement and minimizes the number of handovers while increasing the sum rate of all users.
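As an illustration of how such a DDPG-based handover policy could be set up, the sketch below (in PyTorch) pairs a small actor and critic, encodes the rate-threshold and handover-penalty idea from the abstract in a toy reward, and runs one update step. The state/action dimensions, network sizes, reward weights, and the stand-in transition are all illustrative assumptions, not the paper's exact formulation; target networks and the replay buffer are omitted for brevity.

```python
# Minimal DDPG sketch (PyTorch). State = per-user, per-BS SINR estimates;
# action = soft association / resource-share weights. Everything here is a
# placeholder illustration, not the paper's implementation.
import torch
import torch.nn as nn

N_USERS, N_BS = 4, 3
STATE_DIM = N_USERS * N_BS       # e.g. per-user, per-BS SINR estimates
ACTION_DIM = N_USERS * N_BS      # soft association / resource-share weights

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, ACTION_DIM), nn.Sigmoid())  # weights in [0, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def reward(rates, prev_assoc, assoc, r_min=1.0, ho_penalty=0.5):
    """Sum rate, penalized when a user falls below the target rate or hands over."""
    below = (rates < r_min).float().sum()
    handovers = (prev_assoc != assoc).float().sum()
    return rates.sum() - below - ho_penalty * handovers

actor, critic = Actor(), Critic()
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# One illustrative update on a single stand-in transition (no replay buffer).
s = torch.rand(1, STATE_DIM)
a = (actor(s) + 0.1 * torch.randn(1, ACTION_DIM)).clamp(0, 1)  # exploration noise
rates = 2.0 * torch.rand(N_USERS)                    # toy achieved user rates
prev_assoc = torch.randint(0, N_BS, (N_USERS,))      # previous serving BSs
assoc = a.view(N_USERS, N_BS).argmax(dim=1)          # hard association from the action
r = reward(rates, prev_assoc, assoc).view(1, 1)
s_next = torch.rand(1, STATE_DIM)

with torch.no_grad():
    q_target = r + 0.99 * critic(s_next, actor(s_next))
critic_loss = nn.functional.mse_loss(critic(s, a.detach()), q_target)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

actor_loss = -critic(s, actor(s)).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
```

In a full training loop, transitions would come from the mobile mmWave environment and be replayed from a buffer, with slowly updated target networks stabilizing the bootstrapped critic target.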




Read also

Millimeter-wave (mmWave) communication is considered a key enabler of ultra-high data rates in future cellular and wireless networks. The need for directional communication between base stations (BSs) and users in mmWave systems, which is achieved through beamforming, increases the complexity of channel estimation. Moreover, in order to provide better coverage, dense deployment of BSs is required, which causes frequent handovers and increased association overhead. In this paper, we present an approach that jointly addresses the beamforming and handover problems. Our solution entails an efficient beamforming method with a minimum number of pilots and a learning-based handover method supporting mobile scenarios. We use a reinforcement learning algorithm to learn the optimal choices of backup BSs at different locations of a mobile user. We show that our method provides high rate and reliability at all locations along the user's trajectory with a minimal number of handovers. Simulation results in an outdoor environment based on geometric mmWave channel modeling and real building map data show the superior performance of our proposed solution in achievable instantaneous rate and trajectory rate.
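A compact way to picture the learning step described above is tabular Q-learning over a discretized trajectory: the state is a location bin, the action is the backup BS to prepare, and the reward is the rate obtained after switching to it. The sketch below is only a hedged illustration of this idea; the bin count, BS count, and rate model are made-up placeholders, not the paper's simulation setup.

```python
# Tabular Q-learning over location bins of a user trajectory (toy illustration).
import numpy as np

rng = np.random.default_rng(0)
N_BINS, N_BS = 20, 5                 # trajectory bins and candidate backup BSs
Q = np.zeros((N_BINS, N_BS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def achieved_rate(loc_bin, bs):
    """Stand-in for the rate observed after handing over to `bs` at this location."""
    return rng.random() + (bs == loc_bin % N_BS)   # favour one BS per region

for episode in range(500):
    loc = 0
    while loc < N_BINS - 1:
        a = rng.integers(N_BS) if rng.random() < eps else int(Q[loc].argmax())
        r = achieved_rate(loc, a)
        nxt = loc + 1                # user moves along the trajectory
        Q[loc, a] += alpha * (r + gamma * Q[nxt].max() - Q[loc, a])
        loc = nxt

backup_bs = Q.argmax(axis=1)         # learned backup BS per location bin
print(backup_bs)
```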
Zhuo Li, Xu Zhou, Junruo Gao (2021)
Aiming at the local overload of multi-controller deployments in software-defined networks, a load balancing mechanism for SDN controllers based on reinforcement learning is designed. The initial pairing of migrate-out and migrate-in domains is obtained by calculating the load ratio deviation between the controllers; a preliminary migration triplet, which contains the migration domains mentioned above and a group of switches subordinate to the migrate-out domain, makes the migration efficiency reach a local optimum. Under the constraints of the best overall migration efficiency and no migration conflicts, multiple sets of triplets are then selected based on reinforcement learning as the final migration of this round, attaining globally optimal controller load balancing at minimum cost. The experimental results illustrate that the mechanism can make full use of the controllers' resources, quickly balance the load between controllers, reduce unnecessary migration overhead, and achieve a faster response to packet-in requests.
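As a minimal, hypothetical illustration of the first step described above (pairing a migrate-out domain with a migrate-in domain from load-ratio deviations), consider the sketch below; the controller names and load values are invented, and the subsequent triplet selection via reinforcement learning is not shown.

```python
# Pick the most overloaded controller as migrate-out and the least loaded as
# migrate-in, based on deviation from the mean load ratio (toy values).
loads = {"c1": 0.92, "c2": 0.35, "c3": 0.60, "c4": 0.18}
mean_load = sum(loads.values()) / len(loads)
deviation = {c: load - mean_load for c, load in loads.items()}

migrate_out = max(deviation, key=deviation.get)   # most overloaded controller
migrate_in = min(deviation, key=deviation.get)    # least loaded controller
print(migrate_out, migrate_in)                    # -> c1 c4
```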
Integrating efficient connectivity, positioning and sensing functionalities into 5G New Radio (NR) and beyond mobile cellular systems is one timely research paradigm, especially at mm-wave and sub-THz bands. In this article, we address the radio-based sensing and environment mapping prospect with specific emphasis on the user equipment (UE) side. We first describe an efficient l1-regularized least-squares (LS) approach to obtain sparse range-angle charts at individual measurement or sensing locations. For the subsequent environment mapping, we then introduce a novel state model for mapping diffuse and specular scattering, which allows efficient tracking of individual scatterers over time using an interacting multiple model (IMM) extended Kalman filter and smoother. We provide extensive numerical indoor mapping results at the 28 GHz band deploying an OFDM-based 5G NR uplink waveform with 400 MHz channel bandwidth, covering both accurate ray-tracing based as well as actual RF measurement results. The results illustrate the superiority of the dynamic tracking-based solutions compared to static reference methods, while overall demonstrating the excellent prospects of radio-based mobile environment sensing and mapping in future mm-wave networks.
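The l1-regularized LS step can be pictured as solving min_x (1/2)||y - Ax||_2^2 + lam*||x||_1, where the columns of A would form a dictionary over range-angle grid points and x is the sparse chart. The sketch below uses plain ISTA (iterative soft thresholding) on a random stand-in dictionary purely for illustration; the article's actual dictionary construction, regularization weight, and measurement model are not reproduced here.

```python
# ISTA for l1-regularized least squares on a toy range-angle dictionary.
import numpy as np

rng = np.random.default_rng(1)
M, N = 64, 256                        # measurements vs. range-angle grid size
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = 1.0   # a few dominant scatterers
y = A @ x_true + 0.01 * rng.standard_normal(M)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(200):
    grad = A.T @ (A @ x - y)                    # gradient of (1/2)||y - Ax||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

print(np.flatnonzero(np.abs(x) > 0.1))          # recovered sparse support
```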
This article investigates beam alignment for multi-user millimeter wave (mmWave) massive multi-input multi-output systems. Unlike existing works using machine learning (ML), an alignment method with partial beams using ML (AMPBML) is proposed without any prior knowledge such as user location information. The neural network (NN) for the AMPBML is trained offline using simulated environments according to the mmWave channel model and is then deployed online to predict the beam distribution vector using partial beams. Afterwards, the beams for all users are aligned simultaneously based on the indices of the dominant entries of the obtained beam distribution vector. Simulation results demonstrate that the AMPBML outperforms existing methods, including adaptive compressed sensing, hierarchical search, and multi-path decomposition and recovery, in terms of the total training time slots and the spectral efficiency.
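A toy version of the offline-training / online-prediction pipeline is sketched below in PyTorch: an MLP maps measurements from a subset of beams to a score per codebook beam, and the dominant entries give the aligned beams. The codebook size, subset size, loss choice, and synthetic labels are assumptions for illustration only, not the AMPBML network or training data.

```python
# Toy "partial beams -> beam distribution vector" predictor.
import torch
import torch.nn as nn

N_BEAMS, N_PARTIAL, N_USERS = 64, 16, 4
net = nn.Sequential(
    nn.Linear(N_PARTIAL, 128), nn.ReLU(),
    nn.Linear(128, N_BEAMS), nn.Sigmoid())       # beam distribution vector in [0, 1]
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Offline training on simulated (partial measurement, beam indicator) pairs.
for _ in range(200):
    x = torch.rand(32, N_PARTIAL)                    # partial beam measurements
    target = (torch.rand(32, N_BEAMS) > 0.9).float() # stand-in beam indicators
    loss = loss_fn(net(x), target)
    opt.zero_grad(); loss.backward(); opt.step()

# Online: predict the beam distribution vector and take the dominant entries.
with torch.no_grad():
    v = net(torch.rand(1, N_PARTIAL)).squeeze(0)
aligned_beams = torch.topk(v, N_USERS).indices
print(aligned_beams)
```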
In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one computational task from three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
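A rough way to picture the multi-stack idea is epsilon-greedy Q-learning in which the agent additionally keeps a bounded memory (the "stacks") of already-visited (state, action) pairs and steers exploration away from them. The sketch below is a hedged toy, not the paper's algorithm: the state and action spaces, the delay model, and the single-agent setting are all simplifications.

```python
# Q-learning with a bounded memory of explored (state, action) schemes.
import random
from collections import deque

N_STATES, N_ACTIONS = 10, 6          # e.g. request patterns x (subcarrier, power) choices
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
history = deque(maxlen=50)           # bounded memory of explored (state, action) pairs

def pick_action(state, eps=0.2):
    unexplored = [a for a in range(N_ACTIONS) if (state, a) not in history]
    if random.random() < eps and unexplored:
        return random.choice(unexplored)        # explore, but only new schemes
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def delay(state, action):
    """Stand-in for the max computation + transmission delay of a scheme."""
    return abs(state - action) + random.random()

alpha, gamma = 0.1, 0.9
for step in range(2000):
    s = random.randrange(N_STATES)
    a = pick_action(s)
    r = -delay(s, a)                 # minimizing delay = maximizing its negative
    s_next = random.randrange(N_STATES)
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    history.append((s, a))
```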