
Radio Resource and Beam Management in 5G mmWave Using Clustering and Deep Reinforcement Learning

Added by Medhat Elsayed
Publication date: 2021
Language: English





To optimally cover users in millimeter-wave (mmWave) networks, clustering is needed to identify the number and direction of beams. The mobility of users motivates the need for an online clustering scheme to maintain up-to-date beams towards those clusters. Furthermore, mobility of users leads to varying patterns of clusters (i.e., users move from the coverage of one beam to another), causing dynamic traffic load per beam. As such, efficient radio resource allocation and beam management is needed to address the dynamicity that arises from the mobility of users and their traffic. In this paper, we consider the coexistence of Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) users in 5G mmWave networks and propose a Quality-of-Service (QoS) aware clustering and resource allocation scheme. Specifically, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is used for online clustering of users and for selecting the number of beams. In addition, a Long Short-Term Memory (LSTM)-based Deep Reinforcement Learning (DRL) scheme is used for resource block allocation. The performance of the proposed scheme is compared to a baseline that uses K-means and priority-based proportional fairness for clustering and resource allocation, respectively. Our simulation results show that the proposed scheme outperforms the baseline algorithm in terms of the latency, reliability, and rate of URLLC users as well as the rate of eMBB users.
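
To make the clustering step concrete, the following is a minimal sketch of DBSCAN-based online beam selection, assuming users are represented by 2-D positions with the gNB at the origin and using scikit-learn's DBSCAN; the eps/min_samples values and the azimuth-from-centroid beam direction are illustrative assumptions rather than the paper's exact configuration. Re-running the routine every scheduling interval keeps the number of beams and their directions up to date as users move.

# Minimal sketch: DBSCAN-based online clustering of user positions to pick the
# number and direction of mmWave beams. The eps/min_samples values and the 2-D
# position model are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_users(positions, eps=25.0, min_samples=3):
    """positions: (N, 2) array of user coordinates in meters, gNB at the origin."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(positions)
    beams = []
    for c in sorted(set(labels) - {-1}):          # -1 marks DBSCAN noise points
        members = positions[labels == c]
        centroid = members.mean(axis=0)
        azimuth = np.degrees(np.arctan2(centroid[1], centroid[0]))
        beams.append({"cluster": int(c), "azimuth_deg": float(azimuth),
                      "num_users": int(len(members))})
    return labels, beams                          # one beam per detected cluster

# Example: re-run per scheduling interval as users move
rng = np.random.default_rng(0)
positions = np.vstack([rng.normal([50, 10], 5, (8, 2)),
                       rng.normal([-30, 40], 5, (6, 2))])
_, beams = cluster_users(positions)
print(beams)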



Related research

Network slicing has emerged as a new business model for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with user activity in each slice. In that regard, deep reinforcement learning (DRL), which focuses on interacting with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is considered a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate the application of DRL to typical resource management problems in network slicing scenarios, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we also discuss the possible challenges of applying DRL to network slicing from a general perspective.
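
As a rough illustration of how a DRL agent can slice radio resources, here is a minimal DQN-style sketch assuming a toy setup of three slices and a discretized split of 10 resource-block groups; the state features, reward-free action selection, and network sizes are illustrative assumptions and not the paper's formulation.

# Minimal sketch of a DQN-style agent for radio resource slicing, assuming a toy
# setup (3 slices, discretized bandwidth splits); names and features are
# illustrative, not the paper's exact formulation.
import random, itertools
import torch
import torch.nn as nn

# All ways of splitting 10 resource-block groups across 3 slices in steps of 2
SPLITS = [s for s in itertools.product(range(0, 11, 2), repeat=3) if sum(s) == 10]

class QNet(nn.Module):
    def __init__(self, state_dim=6, n_actions=len(SPLITS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

qnet = QNet()

def select_split(state, eps=0.1):
    """state: per-slice traffic demand and queue length (illustrative features)."""
    if random.random() < eps:                      # epsilon-greedy exploration
        return random.randrange(len(SPLITS))
    with torch.no_grad():
        return int(qnet(torch.tensor(state, dtype=torch.float32)).argmax())

# Example: demands + queue lengths for (eMBB, URLLC, mMTC) slices
action = select_split([0.6, 0.2, 0.1, 3.0, 1.0, 0.5])
print("bandwidth split (RB groups per slice):", SPLITS[action])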
For millimeter-wave networks, this paper presents a paradigm shift for leveraging time-consecutive camera images in handover decision problems. While making handover decisions, it is important to proactively predict future long-term performance, e.g., the cumulative sum of time-varying data rates, to avoid making myopic decisions. However, this study experimentally observes that the time-variation of the received powers is not necessarily informative for proactively predicting the rapid degradation of data rates caused by moving obstacles. To overcome this challenge, this study proposes a proactive framework wherein handover timings are optimized while obstacle-caused data rate degradations are predicted before they occur. The key idea is to expand the state space to include time-consecutive camera images, which comprise informative features for predicting such data rate degradations. To overcome the difficulty of handling the large dimensionality of the expanded state space, we use deep reinforcement learning to decide the handover timings. Evaluations performed on experimentally obtained camera images and received powers demonstrate that the expanded state space facilitates (i) the prediction of obstacle-caused data rate degradations 500 ms before the degradations occur and (ii) superior performance compared to a handover framework without the state space expansion.
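
A minimal sketch of the state-space expansion idea follows, assuming grayscale camera frames and a scalar received power per frame time; the stack depth, frame size, and the choice to broadcast powers onto extra channels are illustrative assumptions, with the resulting tensor intended as input to a deep RL handover agent.

# Minimal sketch: stack time-consecutive camera frames together with recent
# received powers into one observation for a DRL handover agent. Frame size,
# stack depth, and the power-broadcasting channels are illustrative assumptions.
from collections import deque
import numpy as np

K = 4                                   # number of consecutive frames kept in the state
frames = deque(maxlen=K)                # grayscale frames, each (H, W) in [0, 1]
powers = deque(maxlen=K)                # received powers (dBm), one per frame time

def push_observation(frame, rx_power_dbm):
    frames.append(frame.astype(np.float32))
    powers.append(np.float32(rx_power_dbm))

def expanded_state():
    """Returns a (2K, H, W) tensor: K image channels plus K constant power channels."""
    img = np.stack(frames)                                        # (K, H, W)
    pwr = np.stack([np.full_like(frames[0], p) for p in powers])  # broadcast scalars
    return np.concatenate([img, pwr], axis=0)

# Example: fill the buffer with dummy 64x64 frames, then build one state
for t in range(K):
    push_observation(np.random.rand(64, 64), rx_power_dbm=-70.0 - t)
state = expanded_state()
print(state.shape)                      # (8, 64, 64), fed to e.g. a CNN Q-network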
Highly directional millimeter wave (mmWave) radios need to perform beam management to establish and maintain reliable links. To do so, existing solutions mostly rely on explicit coordination between the transmitter (TX) and the receiver (RX), which significantly reduces the airtime available for communication and further complicates the network protocol design. This paper advances the state of the art by presenting DeepBeam, a framework for beam management that does not require pilot sequences from the TX, nor any beam sweeping or synchronization from the RX. This is achieved by inferring (i) the Angle of Arrival (AoA) of the beam and (ii) the actual beam being used by the transmitter through waveform-level deep learning on ongoing transmissions between the TX and other receivers. In this way, the RX can associate Signal-to-Noise-Ratio (SNR) levels to beams without explicit coordination with the TX. This is possible because different beam patterns introduce different impairments to the waveform, which can subsequently be learned by a convolutional neural network (CNN). We conduct an extensive experimental data collection campaign in which we collect more than 4 TB of mmWave waveforms with (i) 4 phased array antennas at 60.48 GHz; (ii) 2 codebooks containing 24 one-dimensional beams and 12 two-dimensional beams; (iii) 3 receiver gains; (iv) 3 different AoAs; and (v) multiple TX and RX locations. Moreover, we collect waveform data with two custom-designed mmWave software-defined radios with fully digital beamforming architectures at 58 GHz. Results show that DeepBeam (i) achieves an accuracy of up to 96%, 84%, and 77% with a 5-beam, 12-beam, and 24-beam codebook, respectively, and (ii) reduces latency by up to 7x with respect to the 5G NR initial beam sweep in a default configuration and with a 12-beam codebook. The waveform dataset and the full DeepBeam code repository are publicly available.
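
To illustrate the waveform-level inference step, the following is a minimal sketch of a 1-D convolutional classifier that maps raw I/Q samples to a TX beam index; the layer sizes, 1024-sample window, and 24-beam codebook are assumptions for illustration and do not reproduce the released DeepBeam architecture.

# Minimal sketch of waveform-level beam inference: a 1-D CNN that classifies the
# TX beam index from raw I/Q samples. Layer sizes, window length, and the 24-beam
# codebook are illustrative assumptions, not the released DeepBeam architecture.
import torch
import torch.nn as nn

class BeamCNN(nn.Module):
    def __init__(self, n_beams=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2), nn.ReLU(),   # I/Q as 2 channels
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(32 * 8, n_beams)

    def forward(self, iq):               # iq: (batch, 2, n_samples)
        z = self.features(iq)
        return self.classifier(z.flatten(1))

model = BeamCNN()
iq_window = torch.randn(1, 2, 1024)      # one captured window of I/Q samples
beam_logits = model(iq_window)
print("predicted TX beam index:", int(beam_logits.argmax(dim=1)))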
The paper presents a reinforcement learning solution to dynamic resource allocation for 5G radio access network slicing. Available communication resources (frequency-time blocks and transmit powers) and computational resources (processor usage) are allocated to stochastic arrivals of network slice requests. Each request arrives with priority (weight), throughput, computational resource, and latency (deadline) requirements, and if feasible, it is served with the available communication and computational resources allocated over its requested duration. As each resource allocation decision makes some of the resources temporarily unavailable for future requests, a myopic solution that optimizes only the current resource allocation becomes ineffective for network slicing. Therefore, a Q-learning solution is presented to maximize the network utility, in terms of the total weight of granted network slicing requests over a time horizon, subject to communication and computational constraints. Results show that reinforcement learning provides major improvements in the 5G network utility relative to myopic, random, and first-come-first-served solutions. While reinforcement learning sustains scalable performance as the number of served users increases, it can also be effectively used to assign resources to network slices when 5G needs to share the spectrum with incumbent users that may dynamically occupy some of the frequency-time blocks.
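
As a simplified illustration of the Q-learning idea, the following tabular sketch treats the remaining resource budget as the state and admit/reject of the current slice request as the action, with the request weight as reward; the toy request model and hyperparameters are illustrative assumptions rather than the paper's full communication and computational model.

# Minimal sketch of tabular Q-learning for slice admission: state = remaining
# resource budget (discretized), action = admit (1) or reject (0) the current
# request, reward = request weight if admitted within budget. Toy model only.
import random

CAPACITY, EPISODES = 10, 5000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(CAPACITY + 1)]       # Q[free_resources][action]

def sample_request():
    return random.choice([(1, 1.0), (2, 2.5), (4, 3.0)])   # (resource demand, weight)

for _ in range(EPISODES):
    free = CAPACITY
    for _step in range(8):                           # fixed-length episode of arrivals
        demand, weight = sample_request()
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda x: Q[free][x])
        if a == 1 and demand <= free:                # admit if feasible
            reward, nxt = weight, free - demand
        else:
            reward, nxt = 0.0, free                  # reject (or infeasible admit)
        Q[free][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[free][a])
        free = nxt

print("admit-value vs reject-value when 3 units remain:", Q[3][1], Q[3][0])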
LoRa wireless networks are considered a key enabling technology for next-generation Internet of Things (IoT) systems. New IoT deployments (e.g., smart city scenarios) can have thousands of devices per square kilometer, leading to a huge amount of power consumption to provide connectivity. In this paper, we investigate green LoRa wireless networks powered by a hybrid of the grid and renewable energy sources, which can benefit from harvested energy while dealing with an intermittent supply. This paper proposes resource management schemes for the limited number of channels and spreading factors (SFs) with the objective of improving the LoRa gateway energy efficiency. First, the problem of grid power consumption minimization while satisfying the system's quality-of-service demands is formulated. Specifically, both the uncorrelated and time-correlated channel scenarios are investigated. The optimal resource management problem is solved by decoupling the formulated problem into two sub-problems: a channel and SF assignment problem and an energy management problem. Since the optimal solution is obtained with high complexity, online resource management heuristic algorithms that minimize the grid energy consumption are proposed. Finally, taking into account the channel and energy correlation, adaptable resource management schemes based on Reinforcement Learning (RL) are developed. Simulation results show that the proposed resource management schemes offer efficient use of renewable energy in LoRa wireless networks.
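
As a loose illustration of learning-based SF assignment, the following sketch swaps in a simple epsilon-greedy bandit that steers a device toward spreading factors with lower estimated energy cost per transmission; the airtime-energy model and delivery probabilities are illustrative assumptions and not the paper's RL scheme.

# Minimal sketch of a learning-based spreading-factor (SF) choice for a LoRa
# device: an epsilon-greedy bandit preferring SFs with lower estimated energy
# cost, penalizing failed deliveries. Toy cost and success models only.
import random

SFS = [7, 8, 9, 10, 11, 12]
est_cost = {sf: 0.0 for sf in SFS}       # running estimate of energy cost per transmission
counts = {sf: 0 for sf in SFS}

def airtime_energy(sf):                  # toy model: energy grows with time-on-air ~ 2**sf
    return (2 ** sf) / 1000.0

def choose_sf(eps=0.1):
    if random.random() < eps:
        return random.choice(SFS)
    return min(SFS, key=lambda sf: est_cost[sf] if counts[sf] else 0.0)

def update(sf, delivered):
    cost = airtime_energy(sf) / (1.0 if delivered else 0.5)   # failures cost more
    counts[sf] += 1
    est_cost[sf] += (cost - est_cost[sf]) / counts[sf]

for _ in range(2000):                    # simulate: higher SF -> more robust but more energy
    sf = choose_sf()
    delivered = random.random() < min(1.0, 0.5 + 0.08 * (sf - 7))
    update(sf, delivered)

print("preferred SF:", choose_sf(eps=0.0))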
