
Path Planning for UAV-Mounted Mobile Edge Computing with Deep Reinforcement Learning

Posted by: Long Shi
Publication date: 2020
Paper language: English





In this letter, we study an unmanned aerial vehicle (UAV)-mounted mobile edge computing network, where the UAV executes computational tasks offloaded from mobile terminal users (TUs) and the motion of each TU follows a Gauss-Markov random model. To ensure the quality of service (QoS) of each TU, the energy-limited UAV dynamically plans its trajectory according to the locations of the mobile TUs. Towards this end, we formulate the problem as a Markov decision process, in which the UAV trajectory and the UAV-TU association are the parameters to be optimized. To maximize the system reward and meet the QoS constraint, we develop a QoS-based action selection policy within the proposed algorithm, which is built on a double deep Q-network (DDQN). Simulations show that the proposed algorithm converges more quickly and achieves a higher sum throughput than conventional algorithms.
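As a rough illustration of the abstract's two moving parts, the numpy-only sketch below shows a Gauss-Markov velocity/position update for one terminal user, a QoS-based action filter that restricts the greedy choice to actions whose estimated throughput meets the QoS threshold, and a tabular double Q-learning update standing in for the paper's double deep Q-network. The parameter names, the throughput estimate, and the tiny tabular state space are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_markov_step(pos, vel, alpha=0.8, mean_vel=(0.0, 0.0), sigma=1.0):
    """One Gauss-Markov update of a TU's velocity, then its position (unit time step)."""
    mean_vel = np.asarray(mean_vel)
    vel = (alpha * vel + (1.0 - alpha) * mean_vel
           + sigma * np.sqrt(1.0 - alpha ** 2) * rng.standard_normal(2))
    return pos + vel, vel

def qos_action(q_row, throughput_est, qos_min):
    """Greedy action restricted to choices whose estimated throughput meets the QoS bound."""
    feasible = np.flatnonzero(throughput_est >= qos_min)
    cand = feasible if feasible.size else np.arange(q_row.size)   # fall back if nothing is feasible
    return cand[np.argmax(q_row[cand])]

def double_q_update(q_online, q_target, s, a, r, s_next, gamma=0.9, lr=0.1):
    """Double Q-learning: the online table picks the next action, the target table scores it."""
    a_next = int(np.argmax(q_online[s_next]))
    td_target = r + gamma * q_target[s_next, a_next]
    q_online[s, a] += lr * (td_target - q_online[s, a])

# toy usage: 5 grid states, 4 flight actions, random throughput estimates
q_on, q_tg = np.zeros((5, 4)), np.zeros((5, 4))
act = qos_action(q_on[0], throughput_est=rng.uniform(0, 2, 4), qos_min=1.0)
double_q_update(q_on, q_tg, s=0, a=act, r=1.0, s_next=1)
```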




Read also

Yuwen Qian, Feifei Wang, Jun Li (2019)
Mobile edge computing (MEC) provides computational services at the edge of networks by offloading tasks from user equipments (UEs). This letter employs an unmanned aerial vehicle (UAV) as the edge computing server to execute tasks offloaded from the ground UEs. We jointly optimize the user association, the UAV trajectory, and the uploading power of each UE to maximize the sum of bits offloaded from all UEs to the UAV, subject to the energy constraint of the UAV and the quality-of-service (QoS) requirement of each UE. To address the non-convex optimization problem, we first decompose it into three subproblems, which are solved with integer programming and successive convex optimization methods, respectively. Then, we tackle the overall problem with a multi-variable iterative optimization algorithm. Simulations show that the proposed algorithm achieves better performance than other baseline schemes.
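The iterative structure described in this abstract can be sketched generically: fix two blocks of variables, solve for the third, and repeat until the objective stops improving. The skeleton below uses trivial stand-in "solvers" and a separable toy objective purely to show the loop; the paper's actual subproblems are an integer program (association) and successive-convex-approximation problems (trajectory and power), which are not reproduced here.

```python
def iterate_blocks(solve_assoc, solve_traj, solve_power, assoc, traj, power,
                   objective, max_iter=50, tol=1e-6):
    """Block-iterative optimization: update one block at a time with the others fixed."""
    prev = objective(assoc, traj, power)
    for _ in range(max_iter):
        assoc = solve_assoc(traj, power)      # user-association block (integer program in the paper)
        traj = solve_traj(assoc, power)       # UAV-trajectory block (SCA in the paper)
        power = solve_power(assoc, traj)      # uplink-power block (convex in the paper)
        obj = objective(assoc, traj, power)
        if abs(obj - prev) < tol:             # stop when the offloaded-bits objective stabilizes
            break
        prev = obj
    return assoc, traj, power, obj

# toy usage: one scalar per block, separable objective maximized at (1, 2, 3)
obj = lambda a, t, p: -(a - 1) ** 2 - (t - 2) ** 2 - (p - 3) ** 2
print(iterate_blocks(lambda t, p: 1.0, lambda a, p: 2.0, lambda a, t: 3.0,
                     0.0, 0.0, 0.0, obj))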
Age of Information (AoI), defined as the time elapsed since the generation of the latest received update, is a promising performance metric for measuring data freshness in real-time status monitoring. In many applications, status information needs to be extracted through computing, which can be processed at an edge server enabled by mobile edge computing (MEC). In this paper, we aim to minimize the average AoI within a given deadline by jointly scheduling the transmissions and computations of a series of update packets with deterministic transmission and computing times. The main analytical results are summarized as follows. Firstly, the minimum deadline that guarantees the successful transmission and computing of all packets is given. Secondly, a no-wait computing policy, which intuitively attains the minimum AoI, is introduced, and the feasibility condition of the policy is derived. Finally, a closed-form optimal scheduling policy is obtained under the condition that the deadline exceeds a certain threshold. The behavior of the optimal transmission and computing policy is illustrated by numerical results with different values of the deadline, which validate the analytical results.
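A worked toy example may help make the AoI bookkeeping concrete: each update is first transmitted (transmissions serialized), then computed at the edge server (computations serialized), and the time-average AoI is the area under the resulting sawtooth curve. The generation times, service times, and deadline below are made-up numbers, and the average is taken from the first completion to the deadline for simplicity; this is a plain simulation, not the paper's closed-form policy.

```python
def average_aoi(gen, trans, comp, deadline):
    """gen[i]: generation time; trans[i]/comp[i]: deterministic transmission/computing times."""
    tx_free, cpu_free = 0.0, 0.0
    done = []                                    # (completion time, generation time)
    for g, t, c in zip(gen, trans, comp):
        start_tx = max(g, tx_free)               # transmissions are serialized on the channel
        arrive = start_tx + t
        tx_free = arrive
        start_cpu = max(arrive, cpu_free)        # computing is serialized at the edge server
        finish = start_cpu + c
        cpu_free = finish
        done.append((finish, g))
    # integrate the sawtooth AoI curve between consecutive completions
    area, prev_t, prev_g = 0.0, done[0][0], done[0][1]
    for fin, g in done[1:] + [(deadline, None)]:
        area += 0.5 * ((fin - prev_g) ** 2 - (prev_t - prev_g) ** 2)
        if g is not None:
            prev_t, prev_g = fin, g
    return area / (deadline - done[0][0])

print(average_aoi(gen=[0, 2, 4], trans=[1, 1, 1], comp=[1, 1, 1], deadline=12))
```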
Zhifei Lin, Feng Wang (2021)
This paper considers an energy harvesting (EH) based multiuser mobile edge computing (MEC) system, where each user utilizes the energy harvested from renewable sources to execute its computation tasks via computation offloading and local computing. Towards maximizing the system's weighted computation rate (i.e., the weighted sum of the users' computed bits within a finite time horizon) subject to the users' energy causality constraints due to dynamic energy arrivals, the decision for joint computation offloading and local computing is optimized over time. Assuming that the profile of channel state information and dynamic task arrivals at the users is known in advance, the weighted computation rate maximization problem becomes a convex optimization problem. Building on the Lagrange duality method, the well-structured optimal solution is analytically obtained. Both the users' local computing and offloading rates are shown to have a monotonically increasing structure. Numerical results show that the proposed design scheme achieves a significant performance gain over the alternative benchmark schemes.
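Since the point of this abstract is that, with known energy arrivals, the weighted computation-rate maximization becomes a convex program, a tiny CVXPY sketch of such a program is given below (it assumes the cvxpy package is available). The rate models (logarithmic for offloading, square-root for local computing), the per-slot energy arrivals, and the weights are made-up stand-ins; the paper instead derives a closed-form solution through Lagrange duality rather than calling a numerical solver.

```python
import numpy as np
import cvxpy as cp

T = 4
harvest = np.array([2.0, 1.0, 3.0, 0.5])     # harvested energy per slot (toy values)
w_off, w_loc = 1.0, 0.8                      # weights on offloaded vs. locally computed bits

e_off = cp.Variable(T, nonneg=True)          # energy spent on offloading in each slot
e_loc = cp.Variable(T, nonneg=True)          # energy spent on local computing in each slot

# toy concave rate models: offloading ~ log(1 + e), local computing ~ sqrt(e)
rate = w_off * cp.sum(cp.log(1 + e_off)) + w_loc * cp.sum(cp.sqrt(e_loc))

# energy causality: energy consumed up to slot t cannot exceed energy harvested up to slot t
causality = [cp.sum(e_off[: t + 1] + e_loc[: t + 1]) <= harvest[: t + 1].sum() for t in range(T)]

problem = cp.Problem(cp.Maximize(rate), causality)
problem.solve()
print(problem.value, e_off.value, e_loc.value)
```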
Yang Wang, Zhen Gao, Jun Zhang (2021)
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a sophisticated three-dimensional (3D) environment, where the UAV's trajectory is optimized to efficiently collect data from multiple IoT ground nodes. Unlike existing approaches that focus on a simplified two-dimensional scenario and assume the availability of perfect channel state information (CSI), this paper considers a practical 3D urban environment with imperfect CSI, where the UAV's trajectory is designed to minimize the data collection completion time subject to practical throughput and flight movement constraints. Specifically, inspired by state-of-the-art deep reinforcement learning approaches, we leverage the twin-delayed deep deterministic policy gradient (TD3) to design the UAV's trajectory and present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm. In particular, we introduce an additional piece of information, i.e., the merged pheromone, to represent the state of the UAV and its environment and to serve as a reference for the reward, which facilitates the algorithm design. By taking the service statuses of the IoT nodes, the UAV's position, and the merged pheromone as input, the proposed algorithm can continuously and adaptively learn how to adjust the UAV's movement strategy. By interacting with the external environment in the corresponding Markov decision process, the proposed algorithm can achieve a near-optimal navigation strategy. Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning-based baseline methods.
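Since the TD3 algorithm is named but not spelled out in this abstract, a compact PyTorch sketch of the core TD3 update rules (target policy smoothing, twin-critic minimum, delayed actor update, Polyak-averaged targets) may be useful. The state/action dimensions, network sizes, noise parameters, and toy batch below are placeholders, not the TD3-TDCTM settings, and the merged-pheromone state representation is not modeled.

```python
import copy
import torch
import torch.nn as nn

s_dim, a_dim, a_max = 6, 2, 1.0
mlp = lambda i, o: nn.Sequential(nn.Linear(i, 64), nn.ReLU(), nn.Linear(64, o))
actor, q1, q2 = mlp(s_dim, a_dim), mlp(s_dim + a_dim, 1), mlp(s_dim + a_dim, 1)
actor_t, q1_t, q2_t = map(copy.deepcopy, (actor, q1, q2))            # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_q = torch.optim.Adam(list(q1.parameters()) + list(q2.parameters()), lr=1e-3)

def td3_update(s, a, r, s2, done, step, gamma=0.99, sigma=0.2, clip=0.5, tau=0.005, delay=2):
    with torch.no_grad():
        noise = (torch.randn_like(a) * sigma).clamp(-clip, clip)      # target policy smoothing
        a2 = (torch.tanh(actor_t(s2)) * a_max + noise).clamp(-a_max, a_max)
        q_next = torch.min(q1_t(torch.cat([s2, a2], 1)), q2_t(torch.cat([s2, a2], 1)))
        y = r + gamma * (1 - done) * q_next                           # twin-critic target
    sa = torch.cat([s, a], 1)
    critic_loss = ((q1(sa) - y) ** 2).mean() + ((q2(sa) - y) ** 2).mean()
    opt_q.zero_grad(); critic_loss.backward(); opt_q.step()
    if step % delay == 0:                                             # delayed actor update
        actor_loss = -q1(torch.cat([s, torch.tanh(actor(s)) * a_max], 1)).mean()
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (q1, q1_t), (q2, q2_t)):   # Polyak-averaged targets
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)

# toy batch with made-up shapes, just to show the call
B = 8
s, s2 = torch.randn(B, s_dim), torch.randn(B, s_dim)
a, r, d = torch.rand(B, a_dim) * 2 - 1, torch.randn(B, 1), torch.zeros(B, 1)
td3_update(s, a, r, s2, d, step=0)
```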
Reconfigurable intelligent surface (RIS) has emerged as a promising technology for achieving high spectrum and energy efficiency in future wireless communication networks. In this paper, we investigate an RIS-aided single-cell multi-user mobile edge computing (MEC) system, where an RIS is deployed to support the communication between a base station (BS) equipped with MEC servers and multiple single-antenna users. To utilize the scarce frequency resource efficiently, we assume that the users communicate with the BS via a non-orthogonal multiple access (NOMA) protocol. Each user has a computation task which can be computed locally or partially/fully offloaded to the BS. We aim to minimize the sum energy consumption of all users by jointly optimizing the passive phase shifters, the size of the transmission data, the transmission rate, the power control, the transmission time, and the decoding order. Since the resulting problem is non-convex, we use the block coordinate descent method to alternately optimize two separate subproblems. More specifically, we use the dual method to tackle one subproblem for given phase shifts and obtain a closed-form solution; we then utilize a penalty method to solve the other subproblem for a given power control. Moreover, to demonstrate the effectiveness of our proposed algorithm, we consider three benchmark schemes: the time-division multiple access (TDMA)-MEC scheme, the full local computing scheme, and the full offloading scheme, where the TDMA-based transmission problem is solved with an alternating one-dimensional search method together with the dual method. Numerical results demonstrate that the proposed scheme improves the energy efficiency and achieves significant performance gains over the three benchmark schemes.
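To make the role of the passive phase shifters concrete, the numpy snippet below shows the single-user special case, where co-phasing each reflected path with the direct link maximizes the effective channel gain |h_d + g^T diag(theta) h_r| in closed form. This is a simplification for illustration only: the paper optimizes the phases jointly for multiple NOMA users via a penalty method, and the channel values here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                                                      # number of RIS elements
h_d = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)    # direct user-BS link
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # user-RIS link
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)    # RIS-BS link

cascade = g * h_r                                               # per-element cascaded channel
theta_opt = np.exp(1j * (np.angle(h_d) - np.angle(cascade)))    # co-phase with the direct link
theta_rand = np.exp(1j * rng.uniform(0, 2 * np.pi, N))          # random phase baseline

gain = lambda theta: np.abs(h_d + np.sum(theta * cascade)) ** 2
print("random phases :", gain(theta_rand))
print("aligned phases:", gain(theta_opt))
```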