
Neural Combinatorial Deep Reinforcement Learning for Age-optimal Joint Trajectory and Scheduling Design in UAV-assisted Networks

Posted by: Mohamed A. Abd-Elmagid
Publication date: 2020
Research field: Informatics Engineering
Paper language: English

In this paper, an unmanned aerial vehicle (UAV)-assisted wireless network is considered in which a battery-constrained UAV is assumed to move towards energy-constrained ground nodes to receive status updates about their observed processes. The UAV's flight trajectory and the scheduling of status updates are jointly optimized with the objective of minimizing the normalized weighted sum of Age of Information (NWAoI) values for different physical processes at the UAV. The problem is first formulated as a mixed-integer program. Then, for a given scheduling policy, a convex optimization-based solution is proposed to derive the UAV's optimal flight trajectory and the time instants of status updates. However, finding the optimal scheduling policy is challenging due to the combinatorial nature of the formulated problem. Therefore, to complement the proposed convex optimization-based solution, a finite-horizon Markov decision process (MDP) is used to find the optimal scheduling policy. Since the state space of the MDP is extremely large, a novel neural combinatorial-based deep reinforcement learning (NCRL) algorithm using a deep Q-network (DQN) is proposed to obtain the optimal policy. However, for large-scale scenarios with numerous nodes, the DQN architecture can no longer efficiently learn the optimal scheduling policy. Motivated by this, a long short-term memory (LSTM)-based autoencoder is proposed to map the state space to a fixed-size vector representation in such large-scale scenarios. A lower bound on the minimum NWAoI is analytically derived, which provides system design guidelines on the appropriate choice of importance weights for different nodes. The numerical results also demonstrate that the proposed NCRL approach can significantly improve the achievable NWAoI per process compared to the baseline policies, such as weight-based and discretized-state DQN policies.
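For the large-scale case, the key step is compressing a variable number of per-node states into a fixed-size input for the DQN. Below is a minimal sketch of that idea, assuming PyTorch: only the encoder half of an LSTM autoencoder is shown, and the per-node features (AoI, importance weight, distance) and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch (PyTorch assumed): the encoder half of an LSTM autoencoder maps a
# variable-length sequence of per-node states to a fixed-size vector, and a small
# Q-network head scores the candidate scheduling actions. Feature choice and layer
# sizes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class LSTMStateEncoder(nn.Module):
    def __init__(self, feat_dim=3, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, node_states):            # (batch, num_nodes, feat_dim)
        _, (h_n, _) = self.lstm(node_states)   # final hidden state summarizes all nodes
        return h_n.squeeze(0)                  # (batch, hidden_dim) fixed-size embedding

class SchedulingDQN(nn.Module):
    def __init__(self, num_actions, feat_dim=3, hidden_dim=64):
        super().__init__()
        self.encoder = LSTMStateEncoder(feat_dim, hidden_dim)
        self.q_head = nn.Sequential(
            nn.Linear(hidden_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),       # one Q-value per candidate node to schedule
        )

    def forward(self, node_states):
        return self.q_head(self.encoder(node_states))

# Example: 5 ground nodes, each described by (AoI, importance weight, distance to UAV).
q_net = SchedulingDQN(num_actions=5)
state = torch.rand(1, 5, 3)
action = q_net(state).argmax(dim=-1)           # greedy scheduling decision
```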


Read also

Unmanned aerial vehicles (UAVs) are expected to be a key component of next-generation wireless systems. Due to their deployment flexibility, UAVs are being considered an efficient solution for collecting data from ground nodes and transmitting it wirelessly to the network. In this paper, a UAV-assisted wireless network is studied, in which energy-constrained ground nodes are deployed to observe different physical processes. In this network, a UAV whose operation time is constrained by its limited battery moves towards the ground nodes to receive status update packets about their observed processes. The flight trajectory of the UAV and the scheduling of status update packets are jointly optimized with the objective of minimizing the weighted sum of the age-of-information (AoI) values of different processes at the UAV, referred to as the weighted sum-AoI. The problem is modeled as a finite-horizon Markov decision process (MDP) with finite state and action spaces. Since the state space is extremely large, a deep reinforcement learning (RL) algorithm is proposed to obtain the optimal policy that minimizes the weighted sum-AoI, referred to as the age-optimal policy. Several simulation scenarios are considered to showcase the convergence of the proposed deep RL algorithm. Moreover, the results also demonstrate that the proposed deep RL approach can significantly improve the achievable sum-AoI per process compared to the baseline policies, such as the distance-based and random walk policies. The impact of various system design parameters on the optimal achievable sum-AoI per process is also shown through extensive simulations.
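The reward the deep RL agent optimizes is the negative weighted sum-AoI accumulated over the finite horizon: every slot all ages grow, and a received update resets the corresponding node's age. A minimal sketch of that bookkeeping, with assumed weights and a single-slot time step:

```python
# Minimal sketch: age-of-information bookkeeping for the finite-horizon MDP.
# When node k's update is received, its AoI resets; otherwise all ages grow by
# one slot. The per-step reward is the negative weighted sum-AoI. Weights and
# slot granularity here are illustrative assumptions.
import numpy as np

def step_aoi(aoi, served_node=None):
    """Advance AoI by one slot; reset the served node's age to 1."""
    aoi = aoi + 1
    if served_node is not None:
        aoi[served_node] = 1
    return aoi

weights = np.array([0.5, 0.3, 0.2])            # per-process importance (assumed)
aoi = np.ones(3)                               # initial ages
reward = 0.0
for served in [None, 0, 2, 1]:                 # an arbitrary schedule over four slots
    aoi = step_aoi(aoi, served)
    reward -= weights @ aoi                    # RL reward: negative weighted sum-AoI
print(aoi, reward)
```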
Sixian Li, Bin Duo, Xiaojun Yuan (2019)
Thanks to line-of-sight (LoS) transmission and flexibility, unmanned aerial vehicles (UAVs) effectively improve the throughput of wireless networks. Nevertheless, the LoS links are prone to severe deterioration in complex propagation environments, especially in urban areas. Reconfigurable intelligent surfaces (RISs), as a promising technique, can significantly improve the propagation environment and enhance communication quality by intelligently reflecting the received signals. Motivated by this, the joint design of the UAV trajectory and the RIS's passive beamforming for a novel RIS-assisted UAV communication system is investigated to maximize the average achievable rate in this letter. To tackle the formulated non-convex problem, we divide it into two subproblems, namely, passive beamforming and trajectory optimization. We first derive a closed-form phase-shift solution for any given UAV trajectory to achieve phase alignment of the received signals from different transmission paths. Then, with the optimal phase-shift solution, we obtain a suboptimal trajectory solution by using the successive convex approximation (SCA) method. Numerical results demonstrate that the proposed algorithm can considerably improve the average achievable rate of the system.
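The closed-form phase shifts rotate each cascaded UAV-to-RIS-to-user path so that it adds in phase with the direct link. A simplified NumPy sketch of this alignment step, using random placeholder channels rather than the letter's channel and geometry model:

```python
# Simplified sketch of the phase-alignment idea behind the closed-form RIS
# solution: each element's phase shift cancels the phase of its cascaded
# (UAV-to-RIS times RIS-to-user) path relative to the direct path, so all
# reflected copies add constructively. Channels below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 32                                           # number of RIS elements (assumed)
h_direct = rng.normal(size=2) @ [1, 1j]          # UAV -> user direct link
g = rng.normal(size=(N, 2)) @ [1, 1j]            # UAV -> RIS per-element links
h_r = rng.normal(size=(N, 2)) @ [1, 1j]          # RIS -> user per-element links

# Closed-form phases: align each cascaded path with the direct path.
theta = np.angle(h_direct) - np.angle(g * h_r)
combined = h_direct + np.sum(np.exp(1j * theta) * g * h_r)

print(abs(combined) >= abs(h_direct))            # alignment never reduces the channel gain
```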
Yao Tang, Man Hon Cheung (2019)
Unmanned aerial vehicles (UAVs) can enhance the performance of cellular networks, due to their high mobility and efficient deployment. In this paper, we present a first study of how user mobility affects the UAVs' trajectories in a multiple-UAV-assisted wireless communication system. Specifically, we consider UAVs deployed as aerial base stations to serve ground users who move between different regions. We maximize the throughput of ground users in the downlink communication by optimizing the UAVs' trajectories, while taking into account the impact of user mobility, propulsion energy consumption, and the UAVs' mutual interference. We formulate the problem as a route selection problem in an acyclic directed graph. Each vertex represents a task associated with a reward on the average user throughput at a region-time point, while each edge is associated with a cost on the propulsion energy consumption during flying and hovering. For the centralized trajectory design, we first propose the shortest path scheme that determines the optimal trajectory for the single-UAV case. We also propose the centralized route selection (CRS) scheme to systematically compute the optimal trajectories for the more general multiple-UAV case. Due to the NP-hardness of the centralized problem, we consider a distributed trajectory design in which each UAV selects its trajectory autonomously and propose the distributed route selection (DRS) scheme, which converges to a pure strategy Nash equilibrium within a finite number of iterations.
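For the single-UAV case, the shortest path scheme amounts to a dynamic program over the acyclic task graph: accumulate vertex rewards, subtract edge energy costs, and trace back the best route. A minimal sketch on a toy graph whose rewards and costs are illustrative assumptions:

```python
# Minimal sketch of the single-UAV case as a best-path problem on a DAG:
# vertices are (region, time) tasks with throughput rewards, edges carry
# propulsion-energy costs, and a dynamic program over a topological order
# recovers the best route. Graph values below are illustrative assumptions.
from collections import defaultdict

reward = {"s": 0, "a": 5, "b": 3, "c": 6, "t": 0}           # vertex rewards
edges = {                                                   # edge: energy cost
    ("s", "a"): 1, ("s", "b"): 2, ("a", "c"): 1,
    ("b", "c"): 1, ("c", "t"): 1, ("a", "t"): 4,
}
topo = ["s", "a", "b", "c", "t"]                            # a topological order

best = defaultdict(lambda: float("-inf"))
best["s"] = reward["s"]
parent = {}
for u in topo:
    for (x, v), cost in edges.items():
        if x == u and best[u] + reward[v] - cost > best[v]:
            best[v] = best[u] + reward[v] - cost
            parent[v] = u

# Recover the optimal trajectory from source "s" to sink "t".
path, v = ["t"], "t"
while v != "s":
    v = parent[v]
    path.append(v)
print(path[::-1], best["t"])
```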
Mengjie Yi, Xijun Wang, Juan Liu (2020)
Due to their flexibility and low operational cost, dispatching unmanned aerial vehicles (UAVs) to collect information from distributed sensors is expected to be a promising solution in the Internet of Things (IoT), especially for time-critical applications. How to maintain information freshness is a challenging issue. In this paper, we investigate the fresh data collection problem in UAV-assisted IoT networks. Particularly, the UAV flies towards the sensors to collect status update packets within a given duration while maintaining a non-negative residual energy. We formulate a Markov Decision Process (MDP) to find the optimal flight trajectory of the UAV and transmission scheduling of the sensors that minimizes the weighted sum of the age of information (AoI). A UAV-assisted data collection algorithm based on deep reinforcement learning (DRL) is further proposed to overcome the curse of dimensionality. Extensive simulation results demonstrate that the proposed DRL-based algorithm can significantly reduce the weighted sum of the AoI compared to other baseline algorithms.
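A single MDP transition here couples the UAV's movement, the polled sensor's AoI reset, and the non-negative residual-energy constraint. A minimal sketch of such a transition, with illustrative energy costs that are not taken from the paper:

```python
# Minimal sketch of one MDP transition for the data-collection problem: the
# action is a (move direction, sensor-to-poll) pair, flying and receiving both
# drain the battery, and the episode must end before the residual energy goes
# negative. Energy figures below are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    pos: tuple          # UAV grid cell
    energy: float       # residual battery
    aoi: tuple          # per-sensor ages

def transition(s, move, poll, fly_cost=2.0, rx_cost=0.5):
    x, y = s.pos
    dx, dy = move
    aoi = tuple(1 if k == poll else a + 1 for k, a in enumerate(s.aoi))
    energy = s.energy - fly_cost - (rx_cost if poll is not None else 0.0)
    return replace(s, pos=(x + dx, y + dy), energy=energy, aoi=aoi), energy <= 0

s = State(pos=(0, 0), energy=10.0, aoi=(1, 1, 1))
s, done = transition(s, move=(1, 0), poll=2)
print(s, done)          # sensor 2 reset to age 1, others aged, energy reduced
```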
In this paper, the problem of trajectory design of unmanned aerial vehicles (UAVs) for maximizing the number of satisfied users is studied in a UAV-based cellular network, where the UAV works as a flying base station that serves users, and a user indicates its satisfaction in terms of the completion of its data request within an allowable maximum waiting time. The trajectory design is formulated as an optimization problem whose goal is to maximize the number of satisfied users. To solve this problem, a machine learning framework based on the double Q-learning algorithm is proposed. The algorithm enables the UAV to find the optimal trajectory that maximizes the number of satisfied users. Compared to traditional learning algorithms, such as Q-learning, which selects and evaluates actions using the same Q-table, the proposed algorithm decouples action selection from evaluation, thereby avoiding the overestimation that leads to sub-optimal policies. Simulation results show that the proposed algorithm can achieve up to 19.4% and 14.1% gains in terms of the number of satisfied users compared to the random algorithm and the Q-learning algorithm, respectively.
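The distinguishing step is the double Q-learning update, which selects the greedy next action with one table and evaluates it with the other. A minimal sketch of that update rule, with assumed state/action counts and learning parameters:

```python
# Minimal sketch of the double Q-learning update that decouples action
# selection from evaluation: one table picks the greedy next action, the other
# scores it, which removes the overestimation bias of plain Q-learning.
# State/action sizes and learning parameters are illustrative assumptions.
import random
import numpy as np

n_states, n_actions = 10, 4
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95

def double_q_update(s, a, r, s_next):
    if random.random() < 0.5:
        best = Q_a[s_next].argmax()                                       # select with Q_a ...
        Q_a[s, a] += alpha * (r + gamma * Q_b[s_next, best] - Q_a[s, a])  # ... evaluate with Q_b
    else:
        best = Q_b[s_next].argmax()                                       # select with Q_b ...
        Q_b[s, a] += alpha * (r + gamma * Q_a[s_next, best] - Q_b[s, a])  # ... evaluate with Q_a

double_q_update(s=0, a=1, r=1.0, s_next=3)
print(Q_a[0], Q_b[0])
```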