
Multi-Agent Reinforcement Learning Based Resource Allocation for UAV Networks

Posted by Jingjing Cui
Publication date: 2018
Research field: Electronic engineering
Paper language: English





Unmanned aerial vehicles (UAVs) are capable of serving as aerial base stations (BSs) that provide cost-effective, on-demand wireless communications. This article investigates dynamic resource allocation in multi-UAV-enabled communication networks with the goal of maximizing long-term rewards. In particular, each UAV communicates with a ground user by automatically selecting the user it serves, its power level and its subchannel, without any information exchange among UAVs. To model the uncertainty of the environment, we formulate the long-term resource allocation problem as a stochastic game for maximizing the expected rewards, in which each UAV becomes a learning agent and each resource allocation solution corresponds to an action taken by the UAVs. We then develop a multi-agent reinforcement learning (MARL) framework in which each agent discovers its best strategy from its local observations through learning. More specifically, we propose an agent-independent method, in which all agents run a decision algorithm independently but share a common structure based on Q-learning. Finally, simulation results reveal that: 1) appropriately chosen exploration and exploitation parameters enhance the performance of the proposed MARL-based resource allocation algorithm; and 2) the proposed MARL algorithm achieves performance close to that of the case with complete information exchange among UAVs, thereby striking a good tradeoff between performance gains and information-exchange overhead.
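The abstract does not include an implementation, but the agent-independent structure it describes maps naturally onto independent Q-learners. Below is a minimal, hypothetical sketch under that reading: each UAV holds its own Q-table over (served user, power level, subchannel) actions and learns from local observations only. All class names, dimensions and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

class UAVAgent:
    """One independent learner; all UAVs share this structure but never
    exchange information, matching the agent-independent method above."""
    def __init__(self, n_states, n_users, n_powers, n_channels,
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        # One action = a (served user, power level, subchannel) triple.
        self.n_actions = n_users * n_powers * n_channels
        self.q = np.zeros((n_states, self.n_actions))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # ε-greedy choice: the exploration/exploitation balance the paper
        # reports as critical to performance.
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update, applied independently by each agent
        # using only its local observation and reward.
        td_target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])

# A fleet of independent learners sharing a common Q-learning structure.
agents = [UAVAgent(n_states=16, n_users=4, n_powers=3, n_channels=5)
          for _ in range(3)]
```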


Read also

This paper investigates the problem of age-of-information (AoI) aware radio resource management for a platooning system. Multiple autonomous platoons exploit cellular wireless vehicle-to-everything (C-V2X) communication technology to disseminate cooperative awareness messages (CAMs) to their followers while ensuring timely delivery of safety-critical messages to the Road-Side Unit (RSU). Owing to dynamic channel conditions, centralized resource management schemes that require global information are inefficient and incur large signaling overheads. Hence, we exploit a distributed resource allocation framework based on multi-agent reinforcement learning (MARL), where each platoon leader (PL) acts as an agent and interacts with the environment to learn its optimal policy. Existing MARL algorithms consider a holistic reward function for the group's collective success, which often yields unsatisfactory results and cannot guarantee an optimal policy for each agent. Consequently, motivated by the existing RL literature, we propose a novel MARL framework that trains two critics: a global critic, which estimates the global expected reward and motivates the agents toward cooperative behavior, and an exclusive local critic for each agent, which estimates that agent's individual reward. Furthermore, based on the tasks each agent has to accomplish, the individual reward of each agent is decomposed into multiple sub-reward functions, and task-wise value functions are learned separately. Numerical results indicate our proposed algorithm's effectiveness compared with the conventional RL methods applied in this area.
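As a rough illustration of the two-critic design described above (a sketch, not the authors' architecture), the snippet below pairs one shared global critic with per-agent local critics and blends the two advantage signals in each actor's loss. Every dimension, network shape and the mixing weight beta are assumptions.

```python
import torch.nn as nn

def mlp(inp, out):
    # Small two-layer network used for all actors and critics in this sketch.
    return nn.Sequential(nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out))

n_agents, obs_dim, act_dim = 4, 10, 6                        # illustrative sizes
actors        = [mlp(obs_dim, act_dim) for _ in range(n_agents)]
local_critics = [mlp(obs_dim, 1) for _ in range(n_agents)]   # V_i(o_i)
global_critic = mlp(n_agents * obs_dim, 1)                   # V(o_1, ..., o_N)

def actor_loss(i, obs_i, joint_obs, logp_action, r_local, r_global, beta=0.5):
    # Blend a cooperative (global) advantage with the agent's own (local)
    # advantage, so group success and individual tasks both shape the policy.
    adv_global = r_global - global_critic(joint_obs).squeeze(-1)
    adv_local  = r_local - local_critics[i](obs_i).squeeze(-1)
    advantage  = beta * adv_global + (1.0 - beta) * adv_local
    return -(logp_action * advantage.detach()).mean()
```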
In this article, we study a Radio Resource Allocation (RRA) problem formulated as a non-convex optimization problem whose main aim is to maximize spectral efficiency subject to satisfaction guarantees in multiservice wireless systems. This problem has already been investigated in the literature, and efficient heuristics have been proposed. However, in order to assess the performance of Machine Learning (ML) algorithms when solving optimization problems in the context of RRA, we revisit that problem and propose a solution based on a Reinforcement Learning (RL) framework. Specifically, a distributed optimization method based on multi-agent deep RL is developed, where each agent makes its decisions to find a policy by interacting with the local environment until convergence is reached. Thus, this article focuses on an application of RL, and our main proposal consists of a new deep-RL-based approach that jointly handles RRA, satisfaction guarantees and Quality of Service (QoS) constraints in multiservice cellular networks. Lastly, through computational simulations, we compare our proposal with the state-of-the-art solutions from the literature and show that it achieves near-optimal performance in terms of throughput and outage rate.
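The abstract does not spell out how the satisfaction guarantees enter the learning signal. One common pattern (an assumption here, not the authors' formulation) is to penalize the per-step reward whenever the satisfaction ratio falls below the guaranteed level, as in this hypothetical sketch:

```python
def rra_reward(spectral_eff, satisfied_users, total_users,
               min_satisfaction=0.9, penalty=5.0):
    # Reward = spectral efficiency, minus a penalty that grows with how far
    # the satisfaction ratio falls below the guaranteed level (illustrative
    # threshold and weight; not taken from the paper).
    satisfaction = satisfied_users / total_users
    if satisfaction < min_satisfaction:
        return spectral_eff - penalty * (min_satisfaction - satisfaction)
    return spectral_eff
```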
This article investigates cache-enabling unmanned aerial vehicle (UAV) cellular networks with massive access capability supported by non-orthogonal multiple access (NOMA). The delivery of a large volume of multimedia content to ground users is assisted by a mobile UAV base station, which caches some popular content to offload wireless backhaul link traffic. In cache-enabling UAV NOMA networks, the caching placement in the content caching phase and the radio resource allocation in the content delivery phase are crucial for network performance. To cope with the dynamic UAV locations and content requests of practical scenarios, we formulate the long-term caching placement and resource allocation optimization problem for content delivery delay minimization as a Markov decision process (MDP). The UAV acts as an agent taking actions for caching placement and resource allocation, which include the user scheduling of content requests and the power allocation of NOMA users. To tackle the MDP, we propose a Q-learning based caching placement and resource allocation algorithm, where the UAV learns and selects actions with a soft ε-greedy strategy to search for the optimal match between actions and states. Since the state-action table of Q-learning grows with the number of states in dynamic networks, we further propose a function approximation based algorithm combining stochastic gradient descent and deep neural networks, which is suitable for large-scale networks. Finally, numerical results show that the proposed algorithms perform well compared to benchmark algorithms and obtain a trade-off between network performance and computational complexity.
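To make the function-approximation step concrete, here is a minimal, hypothetical sketch of replacing the Q-table with a small neural network trained by stochastic gradient descent on the temporal-difference error. All sizes and hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 12, 0.95              # illustrative sizes
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
opt = torch.optim.SGD(q_net.parameters(), lr=1e-3)

def td_step(s, a, r, s_next):
    # One stochastic-gradient step on the temporal-difference error,
    # replacing the per-entry table update of tabular Q-learning.
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=-1).values
    pred = q_net(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```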
This paper investigates the application of deep deterministic policy gradient (DDPG) to intelligent reflecting surface (IRS) based, unmanned aerial vehicle (UAV) assisted non-orthogonal multiple access (NOMA) downlink networks. The deployment of a UAV equipped with an IRS is important, as the UAV significantly increases the flexibility of the IRS, especially for users who have no line-of-sight (LoS) path to the base station (BS). The aim of this letter is therefore to maximize the sum rate by jointly optimizing the power allocation of the BS, the phase shifts of the IRS and the horizontal position of the UAV. Because the formulated problem is not convex, the DDPG algorithm is utilized to solve it. Computer simulation results are provided to show the superior performance of the proposed DDPG based algorithm.
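Since DDPG works over a continuous action space, the joint variables of this letter (BS power allocation, IRS phase shifts, UAV horizontal position) can be stacked into a single action vector. The sketch below illustrates that mapping together with DDPG's soft target update; all dimensions and network shapes are assumptions, not the letter's implementation.

```python
import torch.nn as nn

n_users, n_irs_elements = 4, 16                       # illustrative sizes
act_dim = n_users + n_irs_elements + 2   # BS powers + IRS phases + UAV (x, y)
obs_dim = 32                             # stacked channel-state features

actor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                      nn.Linear(128, act_dim), nn.Tanh())  # outputs in [-1, 1]
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                       nn.Linear(128, 1))                  # Q(s, a)

def soft_update(target, source, tau=0.005):
    # Polyak averaging of the target networks, the stabilizer DDPG relies on.
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)
```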
This article investigates the energy efficiency (EE) issue in non-orthogonal multiple access (NOMA) enhanced Internet-of-Things (IoT) networks, where a mobile unmanned aerial vehicle (UAV) is exploited as a flying base station to collect data from ground devices via the NOMA protocol. With the aim of maximizing network energy efficiency, we formulate a joint problem of UAV deployment, device scheduling and resource allocation. First, we formulate the joint device scheduling and spectrum allocation problem as a three-sided matching problem and propose a novel low-complexity near-optimal algorithm. We also introduce the novel concept of 'exploration' into the matching game for further performance improvement. Through algorithm analysis, we prove the convergence and stability of the final matching state. Second, to allocate proper transmit power to IoT devices, we adopt Dinkelbach's algorithm to obtain the optimal power allocation solution. Furthermore, we provide a simple but effective approach based on the disk covering problem to determine the optimal number and locations of the UAV's stop points, ensuring that all IoT devices can be fully covered by the UAV via line-of-sight (LoS) links for better channel conditions. Numerical results unveil that: i) the proposed joint UAV deployment, device scheduling and resource allocation scheme achieves much higher EE than the predefined stationary UAV deployment and fixed power allocation benchmarks, with acceptable complexity; and ii) UAV-aided IoT networks with NOMA greatly outperform the OMA case in terms of the number of accessed devices.
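Energy efficiency is a ratio (achievable rate over power consumed), which is exactly the fractional-programming shape Dinkelbach's method handles: it repeatedly solves an easier parametric problem max_x f(x) - λ·g(x) and updates λ to the achieved ratio. The sketch below shows the generic iteration; the inner solver, f, g and the tolerance are placeholders, not the paper's specific power-allocation routine.

```python
def dinkelbach(f, g, argmax_inner, lam=0.0, tol=1e-6, max_iter=100):
    """Maximize f(x)/g(x), with g(x) > 0, via Dinkelbach's iteration.
    For the power-allocation step above, f would be the sum rate and
    g the total power consumption (an assumption, for illustration)."""
    for _ in range(max_iter):
        x = argmax_inner(lam)              # solve max_x f(x) - lam * g(x)
        if abs(f(x) - lam * g(x)) < tol:   # F(lam) = 0 at the optimal ratio
            break
        lam = f(x) / g(x)                  # update the ratio (EE) estimate
    return x, lam
```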