Radio access network (RAN) slicing is an important part of network slicing in 5G. The evolving network architecture requires the orchestration of multiple network resources such as radio and cache resources. In recent years, machine learning (ML) techniques have been widely applied for network slicing. However, most existing works do not take advantage of the knowledge transfer capability of ML. In this paper, we propose a transfer reinforcement learning (TRL) scheme for joint radio and cache resource allocation to serve 5G RAN slicing. We first define a hierarchical architecture for the joint resource allocation. Then we propose two TRL algorithms: Q-value transfer reinforcement learning (QTRL) and action selection transfer reinforcement learning (ASTRL). In the proposed schemes, learner agents utilize the expert agents' knowledge to improve their performance on target tasks. The proposed algorithms are compared with both the model-free Q-learning and the model-based priority proportional fairness and time-to-live (PPF-TTL) algorithms. Compared with Q-learning, QTRL and ASTRL achieve 23.9% lower delay for the Ultra-Reliable Low-Latency Communications (URLLC) slice and 41.6% higher throughput for the enhanced Mobile Broadband (eMBB) slice, while converging significantly faster. Moreover, 40.3% lower URLLC delay and almost twice the eMBB throughput are observed with respect to PPF-TTL.
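
As a rough illustration of the Q-value transfer idea behind QTRL (not the paper's implementation), the sketch below warm-starts a learner's Q-table from a weighted copy of an expert's Q-table; the state/action sizes, toy reward, and transfer weight are placeholder assumptions.

```python
import numpy as np

# Q-value transfer sketch: the learner's Q-table is initialized from a
# weighted copy of the expert's Q-table instead of zeros, so the learner
# starts the target task with the expert's prior knowledge.
# State/action sizes and the toy reward are illustrative placeholders.

N_STATES, N_ACTIONS = 16, 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def toy_reward(state, action):
    # Placeholder reward; in the paper this would reflect slice QoS (delay/throughput).
    return 1.0 if action == state % N_ACTIONS else 0.0

def q_learning(episodes, q_init=None):
    q = np.zeros((N_STATES, N_ACTIONS)) if q_init is None else q_init.copy()
    for _ in range(episodes):
        s = rng.integers(N_STATES)
        for _ in range(50):
            a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q[s]))
            r = toy_reward(s, a)
            s_next = rng.integers(N_STATES)            # toy transition model
            q[s, a] += ALPHA * (r + GAMMA * np.max(q[s_next]) - q[s, a])
            s = s_next
    return q

expert_q = q_learning(episodes=500)                     # expert trained on the source task
transfer_weight = 0.8                                   # how much expert knowledge to reuse (assumed)
learner_q = q_learning(episodes=100, q_init=transfer_weight * expert_q)  # QTRL-style warm start
```
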
Appliance-level load forecasting plays a critical role in residential energy management, besides having significant importance for ancillary services performed by the utilities. In this paper, we propose to use an LSTM-based sequence-to-sequence (seq2seq) learning model that can capture the load profiles of appliances. We use a real dataset collected from four residential buildings and compare our proposed scheme with three other techniques, namely VARMA, Dilated One-Dimensional Convolutional Neural Network, and an LSTM model. The results show that the proposed LSTM-based seq2seq model outperforms the other techniques in terms of prediction error in most cases.
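
A minimal sketch of an LSTM-based seq2seq forecaster of the kind described above, written in PyTorch; the window length, horizon, hidden size, and toy data are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Encoder-decoder LSTM: the encoder summarizes the past load window, and the
# decoder rolls out the forecast horizon autoregressively.

class Seq2SeqLSTM(nn.Module):
    def __init__(self, hidden=64, horizon=24):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, past_load):                  # past_load: (batch, window, 1)
        _, (h, c) = self.encoder(past_load)        # summarize the history
        dec_in = past_load[:, -1:, :]              # seed the decoder with the last observation
        preds = []
        for _ in range(self.horizon):              # autoregressive decoding
            dec_out, (h, c) = self.decoder(dec_in, (h, c))
            step = self.out(dec_out)               # (batch, 1, 1)
            preds.append(step)
            dec_in = step
        return torch.cat(preds, dim=1)             # (batch, horizon, 1)

model = Seq2SeqLSTM()
history = torch.randn(8, 168, 1)                   # e.g., one week of hourly readings (toy data)
forecast = model(history)                          # 24-step-ahead appliance load forecast
```
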
5G is regarded as a revolutionary mobile network, which is expected to satisfy a vast number of novel services, ranging from remote health care to smart cities. However, the heterogeneous Quality of Service (QoS) requirements of different services and the limited spectrum make radio resource allocation a challenging problem in 5G. In this paper, we propose a multi-agent reinforcement learning (MARL) method for radio resource slicing in 5G. We model each slice as an intelligent agent that competes for the limited radio resources, and correlated Q-learning is applied for inter-slice resource block (RB) allocation. The proposed correlated Q-learning based inter-slice RB allocation (COQRA) scheme is compared with Nash Q-learning (NQL), Latency-Reliability-Throughput Q-learning (LRTQ), and the priority proportional fairness (PPF) algorithm. Our simulation results show that the proposed COQRA achieves 32.4% lower latency and 6.3% higher throughput compared with LRTQ, and 5.8% lower latency and 5.9% higher throughput than NQL. Significantly higher throughput and a lower packet drop rate (PDR) are observed in comparison to PPF.
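
The correlated-equilibrium step at the heart of such a scheme can be illustrated as a linear program over the joint action distribution. The two-slice sketch below is an assumption-laden toy (a 2x3 joint action space with made-up Q-values), not the COQRA implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Utilitarian correlated equilibrium: given each slice agent's Q-values over
# the joint RB-allocation actions, find a probability distribution over joint
# actions that maximizes total expected Q subject to "no profitable deviation"
# constraints for every agent.

def correlated_equilibrium(q1, q2):
    """q1, q2: (n1, n2) Q-value tables of agents 1 and 2 over joint actions."""
    n1, n2 = q1.shape
    n = n1 * n2                                   # joint actions, flattened row-major
    c = -(q1 + q2).flatten()                      # maximize total expected Q

    A_ub, b_ub = [], []
    # Agent 1 rationality: no gain from deviating a1 -> a1p when a1 is recommended.
    for a1 in range(n1):
        for a1p in range(n1):
            if a1p == a1:
                continue
            row = np.zeros(n)
            for a2 in range(n2):
                row[a1 * n2 + a2] = q1[a1p, a2] - q1[a1, a2]
            A_ub.append(row)
            b_ub.append(0.0)
    # Agent 2 rationality: no gain from deviating a2 -> a2p.
    for a2 in range(n2):
        for a2p in range(n2):
            if a2p == a2:
                continue
            row = np.zeros(n)
            for a1 in range(n1):
                row[a1 * n2 + a2] = q2[a1, a2p] - q2[a1, a2]
            A_ub.append(row)
            b_ub.append(0.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, 1)] * n)
    return res.x.reshape(n1, n2)                  # joint action distribution p(a1, a2)

# Toy Q-values: each entry is a slice's estimated value of one joint RB split.
q_urllc = np.array([[3.0, 1.0, 0.5], [2.0, 2.5, 1.0]])
q_embb  = np.array([[0.5, 2.0, 3.0], [1.0, 2.0, 2.5]])
p = correlated_equilibrium(q_urllc, q_embb)
print(np.round(p, 3))                             # agents sample their joint RB action from p
```
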
In this paper, we address inter-beam inter-cell interference mitigation in 5G networks that employ millimeter-wave (mmWave), beamforming, and non-orthogonal multiple access (NOMA) techniques. These techniques play a key role in improving network capacity and spectral efficiency by multiplexing users in both the spatial and power domains. In addition, the coverage areas of multiple beams from different cells can intersect, allowing more flexibility in user-cell association. However, the intersection of coverage areas also implies increased inter-beam inter-cell interference, i.e., interference among beams formed by nearby cells. Therefore, joint user-cell association and inter-beam power allocation stand as a promising solution to mitigate inter-beam inter-cell interference. In this paper, we consider a 5G mmWave network and propose a reinforcement learning algorithm that performs joint user-cell association and inter-beam power allocation to maximize the sum rate of the network. The proposed algorithm is compared to a uniform power allocation that equally divides power among the beams of each cell. Simulation results show a sum-rate enhancement of 13-30%, corresponding to the lowest and highest traffic loads, respectively.
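
To make the optimization target concrete, the sketch below evaluates the network sum rate for a given user-beam association and power allocation, together with the uniform-power baseline mentioned above; the channel gains, sizes, and noise level are toy assumptions rather than the paper's mmWave system model.

```python
import numpy as np

# Sum-rate objective under inter-beam inter-cell interference, plus the
# uniform-power, best-gain-association baseline the RL agent is compared to.

rng = np.random.default_rng(1)
N_CELLS, N_BEAMS, N_USERS = 3, 4, 8
P_MAX, NOISE = 10.0, 1e-3                         # per-cell power budget, noise power (toy)

# gain[c, b, u]: effective beam gain from beam b of cell c towards user u (toy values).
gain = rng.rayleigh(scale=0.1, size=(N_CELLS, N_BEAMS, N_USERS)) ** 2

def sum_rate(power, assoc):
    """power: (N_CELLS, N_BEAMS) transmit powers; assoc: list of (cell, beam) per user."""
    total = 0.0
    for u, (c, b) in enumerate(assoc):
        signal = power[c, b] * gain[c, b, u]
        interference = np.sum(power * gain[:, :, u]) - signal   # all other beams, all cells
        total += np.log2(1.0 + signal / (interference + NOISE))
    return total

# Baseline: uniform power split per cell, association to the strongest beam.
uniform_power = np.full((N_CELLS, N_BEAMS), P_MAX / N_BEAMS)
best_assoc = [np.unravel_index(np.argmax(gain[:, :, u]), (N_CELLS, N_BEAMS))
              for u in range(N_USERS)]
print(f"uniform-power sum rate: {sum_rate(uniform_power, best_assoc):.2f} bit/s/Hz")
# An RL agent would instead search over discrete power levels and associations
# to maximize sum_rate(...).
```
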
To optimally cover users in millimeter-wave (mmWave) networks, clustering is needed to identify the number and direction of beams. The mobility of users motivates the need for an online clustering scheme to maintain up-to-date beams towards those clusters. Furthermore, user mobility leads to varying cluster patterns (i.e., users move from the coverage of one beam to another), causing dynamic traffic load per beam. As such, efficient radio resource allocation and beam management are needed to address the dynamicity that arises from the mobility of users and their traffic. In this paper, we consider the coexistence of Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) users in 5G mmWave networks and propose a Quality-of-Service (QoS) aware clustering and resource allocation scheme. Specifically, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is used for online clustering of users and selection of the number of beams, while a Long Short-Term Memory (LSTM)-based Deep Reinforcement Learning (DRL) scheme is used for resource block allocation. The performance of the proposed scheme is compared to a baseline that uses K-means and priority-based proportional fairness for clustering and resource allocation, respectively. Our simulation results show that the proposed scheme outperforms the baseline in terms of latency, reliability, and rate of URLLC users, as well as the rate of eMBB users.
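
A minimal sketch of the online clustering step, assuming scikit-learn's DBSCAN and toy user positions; the eps/min_samples values and the mapping from clusters to beam directions are illustrative choices, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# DBSCAN groups user positions; the number of clusters (excluding noise) sets
# the number of beams, and cluster centroids give the beam directions.

rng = np.random.default_rng(2)
# Toy snapshot of user coordinates (meters) around three hotspots.
users = np.vstack([rng.normal(loc=center, scale=5.0, size=(20, 2))
                   for center in [(20, 5), (60, 10), (30, 70)]])

labels = DBSCAN(eps=10.0, min_samples=4).fit_predict(users)
clusters = [users[labels == k] for k in sorted(set(labels)) if k != -1]

n_beams = len(clusters)                                 # one beam per detected cluster
beam_centers = np.array([c.mean(axis=0) for c in clusters])
beam_angles = np.degrees(np.arctan2(beam_centers[:, 1], beam_centers[:, 0]))  # gNB assumed at the origin
print(n_beams, np.round(beam_angles, 1))
# Re-running this on each position update keeps beams aligned with moving clusters;
# RB allocation per beam is then left to the LSTM-based DRL agent.
```
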
A plethora of demanding services and use cases mandate a revolutionary shift in the management of future wireless network resources. Indeed, when the tight quality of service demands of applications are combined with the increased complexity of the network, legacy network management routines will become unfeasible in 6G. Artificial Intelligence (AI) is emerging as a fundamental enabler to orchestrate network resources from bottom to top. AI-enabled radio access and an AI-enabled core will open up new opportunities for the automated configuration of 6G. On the other hand, there are many challenges in AI-enabled networks that need to be addressed. Long convergence time, memory complexity, and the complex behaviour of machine learning algorithms under uncertainty, as well as the highly dynamic channel, traffic, and mobility conditions of the network, all contribute to these challenges. In this paper, we survey the state-of-the-art research on utilizing machine learning techniques to improve the performance of wireless networks. In addition, we identify challenges and open issues to provide a roadmap for researchers.
Microgrids (MGs) are anticipated to be important players in the future smart grid. For the proper operation of MGs, an Energy Management System (EMS) is essential. The EMS of an MG can be rather complicated when renewable energy resources (RER), an energy storage system (ESS), and demand side management (DSM) need to be orchestrated. Furthermore, these systems may belong to different entities, and competition may exist between them. The Nash equilibrium is most commonly used for the coordination of such entities; however, the existence and convergence of a Nash equilibrium cannot always be guaranteed. To this end, we use the correlated equilibrium, whose convergence can be guaranteed, to coordinate agents. In this paper, we build an energy trading model based on the mid-market rate and propose a correlated Q-learning (CEQ) algorithm to maximize the revenue of each agent. Our results show that CEQ is able to balance the revenue of agents without harming the total benefit. In addition, compared with Q-learning without correlation, CEQ saves 19.3% in cost for the DSM agent and yields 44.2% more benefit for the ESS agent.
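
As a hedged illustration only, the snippet below implements one common mid-market-rate settlement rule (trade internally at the midpoint of the grid buy/sell prices, with any surplus or shortfall settled with the grid); the prices are placeholders and the paper's trading model may differ in detail.

```python
# One common mid-market-rate (MMR) formulation, sketched as an assumption of
# how internal microgrid trading could be priced. Prices are illustrative (per kWh).

GRID_SELL = 0.30      # retail price when buying from the grid
GRID_BUY  = 0.10      # feed-in tariff when selling to the grid
MID_RATE  = (GRID_SELL + GRID_BUY) / 2.0

def settle(generation_kwh, demand_kwh):
    """Return (seller_revenue, buyer_cost) for one trading interval."""
    traded = min(generation_kwh, demand_kwh)          # energy traded inside the microgrid
    seller_revenue = traded * MID_RATE
    buyer_cost = traded * MID_RATE
    if generation_kwh > demand_kwh:                   # surplus exported to the grid
        seller_revenue += (generation_kwh - demand_kwh) * GRID_BUY
    else:                                             # shortfall imported from the grid
        buyer_cost += (demand_kwh - generation_kwh) * GRID_SELL
    return seller_revenue, buyer_cost

print(settle(generation_kwh=12.0, demand_kwh=8.0))    # surplus case
print(settle(generation_kwh=5.0, demand_kwh=8.0))     # shortfall case
```
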
Microgrid (MG) energy management is an important part of MG operation. Various entities are generally involved in the energy management of an MG, e.g., the energy storage system (ESS), renewable energy resources (RER), and the load of users, and it is crucial to coordinate these entities. Considering the significant potential of machine learning techniques, this paper proposes a correlated deep Q-learning (CDQN) based technique for MG energy management. Each electrical entity is modeled as an agent with a neural network that predicts its own Q-values, after which the correlated Q-equilibrium is used to coordinate the operation among the agents. A Long Short-Term Memory (LSTM) based deep Q-learning algorithm is introduced, and the correlated equilibrium is used to coordinate the agents. Simulation results show 40.9% and 9.62% higher profit for the ESS agent and the photovoltaic (PV) agent, respectively.
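
A minimal sketch of one agent's LSTM-based Q-network in such a CDQN-style setup; each entity (e.g., ESS, PV, load) would own a network like this, with the agents' Q-values then coordinated through a correlated-equilibrium step such as the linear-program sketch shown earlier. All sizes and features are illustrative assumptions.

```python
import torch
import torch.nn as nn

# LSTM-based Q-network: maps a recent observation history to Q-values over an
# agent's discrete actions (e.g., charge/discharge levels for the ESS agent).

class LSTMQNetwork(nn.Module):
    def __init__(self, obs_dim=4, hidden=32, n_actions=5):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):                   # obs_seq: (batch, time, obs_dim)
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1])              # Q-values from the last hidden state

ess_q_net = LSTMQNetwork()                         # one such network per electrical entity
history = torch.randn(1, 24, 4)                    # last 24 intervals of price/load/SoC features (toy)
q_values = ess_q_net(history)                      # per-action Q-values for the ESS agent
```
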
In this paper, we address interference mitigation in 5G millimeter-wave (mmWave) communications by employing beamforming and Non-Orthogonal Multiple Access (NOMA) techniques, with the aim of improving the network's aggregate rate. Despite the potential capacity gains of mmWave and NOMA, many technical challenges might hinder that performance gain. In particular, the performance of Successive Interference Cancellation (SIC) diminishes rapidly as the number of users per beam increases, which leads to higher intra-beam interference. Furthermore, intersection regions between adjacent cells give rise to inter-beam inter-cell interference. To mitigate both interference levels, optimal selection of the number of beams, in addition to the best allocation of users to those beams, is essential. In this paper, we address the problem of joint user-cell association and selection of the number of beams for the purpose of maximizing the aggregate network capacity. We propose three machine learning-based algorithms, namely transfer Q-learning (TQL), Q-learning, and Best-SINR association with Density-Based Spatial Clustering of Applications with Noise (BSDC), and compare their performance under different scenarios. Under mobility, TQL and Q-learning demonstrate a 12% rate improvement over BSDC at the highest offered traffic load. For stationary scenarios, Q-learning and BSDC outperform TQL; however, TQL achieves about 29% faster convergence than Q-learning.