
Deep Reinforcement Learning for Fresh Data Collection in UAV-assisted IoT Networks

Added by Xijun Wang
Publication date: 2020
Language: English


Due to their flexibility and low operational cost, dispatching unmanned aerial vehicles (UAVs) to collect information from distributed sensors is expected to be a promising solution in the Internet of Things (IoT), especially for time-critical applications. Maintaining information freshness, however, is a challenging issue. In this paper, we investigate the fresh data collection problem in UAV-assisted IoT networks. In particular, the UAV flies towards the sensors to collect status update packets within a given duration while maintaining a non-negative residual energy. We formulate a Markov Decision Process (MDP) to find the optimal flight trajectory of the UAV and the transmission scheduling of the sensors that minimize the weighted sum of the age of information (AoI). A UAV-assisted data collection algorithm based on deep reinforcement learning (DRL) is further proposed to overcome the curse of dimensionality. Extensive simulation results demonstrate that the proposed DRL-based algorithm significantly reduces the weighted sum of the AoI compared to baseline algorithms.
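To make the formulation concrete, the sketch below shows one way such an MDP could be set up: the state combines the UAV's position, the per-sensor AoI values, and the residual energy; actions either move the UAV on a grid or poll a sensor; and the reward is the negative weighted sum of AoI. This is an illustrative toy model, not the authors' code, and every parameter (grid size, sensor locations, energy costs, horizon) is hypothetical.

```python
# Illustrative sketch (not the paper's code): a minimal AoI-MDP for a UAV
# collecting status updates on a grid. All parameters are hypothetical.
import numpy as np

class UAVAoIEnv:
    def __init__(self, grid=10, sensors=((2, 3), (7, 8)), weights=(0.5, 0.5),
                 horizon=100, energy=200.0, move_cost=1.0, hover_cost=0.5):
        self.grid, self.sensors, self.w = grid, list(sensors), np.array(weights)
        self.horizon, self.energy0 = horizon, energy
        self.move_cost, self.hover_cost = move_cost, hover_cost
        self.reset()

    def reset(self):
        self.pos = np.array([0, 0])             # UAV starts at a corner
        self.aoi = np.ones(len(self.sensors))   # one AoI value per sensor
        self.energy, self.t = self.energy0, 0
        return self._state()

    def _state(self):
        # state = (UAV position, per-sensor AoI, residual energy)
        return np.concatenate([self.pos, self.aoi, [self.energy]])

    def step(self, action):
        # actions 0-3: move N/S/E/W; action 4+k: hover and poll sensor k
        moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
        if action < 4:
            self.pos = np.clip(self.pos + moves[action], 0, self.grid - 1)
            self.energy -= self.move_cost
        else:
            k = action - 4
            self.energy -= self.hover_cost
            # an update succeeds only if the UAV is at sensor k's location
            if tuple(self.pos) == self.sensors[k]:
                self.aoi[k] = 0.0
        self.aoi += 1.0                          # all AoI values age by one slot
        self.t += 1
        reward = -float(self.w @ self.aoi)       # negative weighted sum of AoI
        done = self.t >= self.horizon or self.energy <= 0
        return self._state(), reward, done
```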

Related Research

Unmanned aerial vehicles (UAVs) are expected to be a key component of next-generation wireless systems. Due to their deployment flexibility, UAVs are considered an efficient solution for collecting data from ground nodes and transmitting it wirelessly to the network. In this paper, a UAV-assisted wireless network is studied, in which energy-constrained ground nodes are deployed to observe different physical processes. In this network, a UAV that has a time constraint for its operation, due to its limited battery, moves towards the ground nodes to receive status update packets about their observed processes. The flight trajectory of the UAV and the scheduling of status update packets are jointly optimized with the objective of achieving the minimum weighted sum of the age-of-information (AoI) values of the different processes at the UAV, referred to as the weighted sum-AoI. The problem is modeled as a finite-horizon Markov decision process (MDP) with finite state and action spaces. Since the state space is extremely large, a deep reinforcement learning (RL) algorithm is proposed to obtain the optimal policy that minimizes the weighted sum-AoI, referred to as the age-optimal policy. Several simulation scenarios are considered to showcase the convergence of the proposed deep RL algorithm. Moreover, the results demonstrate that the proposed deep RL approach can significantly improve the achievable sum-AoI per process compared to baseline policies, such as the distance-based and random walk policies. The impact of various system design parameters on the optimal achievable sum-AoI per process is also shown through extensive simulations.
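For concreteness, the weighted sum-AoI objective over a finite horizon can be written as follows; this is an assumed formalization consistent with the description above, not notation taken from the paper.

```latex
% Assumed formalization: minimize the expected weighted sum-AoI over horizon T
% for K processes, with w_k the importance weight and A_k(t) the AoI of process k.
\min_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T}\sum_{k=1}^{K} w_k\, A_k(t)\right]
\quad \text{s.t.} \quad
A_k(t+1) =
\begin{cases}
1, & \text{if an update from node } k \text{ is received in slot } t,\\
A_k(t) + 1, & \text{otherwise.}
\end{cases}
```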
Yang Wang, Zhen Gao, Jun Zhang (2021)
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a sophisticated three-dimensional (3D) environment, where the UAV's trajectory is optimized to efficiently collect data from multiple IoT ground nodes. Unlike existing approaches that focus only on a simplified two-dimensional scenario and assume the availability of perfect channel state information (CSI), this paper considers a practical 3D urban environment with imperfect CSI, where the UAV's trajectory is designed to minimize the data collection completion time subject to practical throughput and flight movement constraints. Specifically, inspired by state-of-the-art deep reinforcement learning approaches, we leverage the twin-delayed deep deterministic policy gradient (TD3) to design the UAV's trajectory and present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm. In particular, we introduce an additional piece of information, i.e., the merged pheromone, to represent the state of the UAV and the environment as a reference for the reward, which facilitates the algorithm design. By taking the service statuses of the IoT nodes, the UAV's position, and the merged pheromone as input, the proposed algorithm can continuously and adaptively learn how to adjust the UAV's movement strategy. By interacting with the external environment in the corresponding Markov decision process, the proposed algorithm can achieve a near-optimal navigation strategy. Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning-based baseline methods.
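The core of TD3 is the combination of twin critics, target policy smoothing, and delayed actor updates. The sketch below shows a minimal TD3 update step in PyTorch; it is not the TD3-TDCTM implementation, and the network sizes, state/action dimensions, and hyperparameters are all assumed for illustration.

```python
# Minimal TD3 update step (illustrative sketch, not the TD3-TDCTM code).
import copy
import torch
import torch.nn as nn

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, out))

state_dim, act_dim, max_act = 8, 2, 1.0   # hypothetical: e.g., 2D velocity command
actor = mlp(state_dim, act_dim)
critic1, critic2 = mlp(state_dim + act_dim, 1), mlp(state_dim + act_dim, 1)
actor_t, critic1_t, critic2_t = (copy.deepcopy(m) for m in (actor, critic1, critic2))
opt_a = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_c = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=3e-4)

def td3_update(batch, step, gamma=0.99, tau=0.005, noise=0.2, clip=0.5, delay=2):
    s, a, r, s2, done = batch  # tensors sampled from a replay buffer
    with torch.no_grad():
        # target policy smoothing: clipped noise on the target action
        eps = (torch.randn_like(a) * noise).clamp(-clip, clip)
        a2 = (torch.tanh(actor_t(s2)) * max_act + eps).clamp(-max_act, max_act)
        # twin critics: take the minimum to curb overestimation
        q_next = torch.min(critic1_t(torch.cat([s2, a2], 1)),
                           critic2_t(torch.cat([s2, a2], 1)))
        y = r + gamma * (1 - done) * q_next
    q1, q2 = critic1(torch.cat([s, a], 1)), critic2(torch.cat([s, a], 1))
    c_loss = nn.functional.mse_loss(q1, y) + nn.functional.mse_loss(q2, y)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    if step % delay == 0:                 # delayed actor and target updates
        a_pred = torch.tanh(actor(s)) * max_act
        a_loss = -critic1(torch.cat([s, a_pred], 1)).mean()
        opt_a.zero_grad(); a_loss.backward(); opt_a.step()
        for net, tgt in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(1 - tau).add_(tau * p.data)  # soft target update
```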
This paper explores the feasibility of leveraging concepts from deep reinforcement learning (DRL) to enable dynamic resource management in Wi-Fi networks implementing distributed multi-user MIMO (D-MIMO). D-MIMO is a technique by which a set of wireless access points are synchronized and grouped together to jointly serve multiple users simultaneously. This paper addresses two dynamic resource management problems pertaining to D-MIMO Wi-Fi networks: (i) channel assignment of D-MIMO groups, and (ii) deciding how to cluster access points to form D-MIMO groups, in order to maximize user throughput performance. These problems are known to be NP-hard, and only heuristic solutions exist in the literature. We construct a DRL framework through which a learning agent interacts with a D-MIMO Wi-Fi network, learns about the network environment, and successfully converges to policies that address the aforementioned problems. Through extensive simulations and online training based on D-MIMO Wi-Fi networks, this paper demonstrates the efficacy of DRL in achieving an improvement of 20% in user throughput performance compared to heuristic solutions, particularly when network conditions are dynamic. This work also showcases the effectiveness of DRL in meeting multiple network objectives simultaneously, for instance, maximizing the throughput of users as well as the fairness of throughput among them.
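As a simplified illustration of how channel assignment can be cast as a learning problem, the sketch below treats the joint assignment of channels to D-MIMO groups as a discrete action space and learns a value for each assignment with a stateless (bandit-style) update; a full DRL agent would additionally condition on network state. The simulator stub, group/channel counts, and learning parameters are hypothetical, not the paper's framework.

```python
# Sketch (assumed, not the paper's framework): channel assignment for D-MIMO
# groups as a discrete action space with a stateless value-learning loop.
import itertools
import numpy as np

n_groups, n_channels = 3, 4
# each action is one complete channel assignment for all D-MIMO groups
actions = list(itertools.product(range(n_channels), repeat=n_groups))  # 4^3 = 64

def throughput(assignment, rng):
    # stand-in for the network simulator: co-channel groups interfere
    base = rng.uniform(5.0, 10.0, size=n_groups)
    for g in range(n_groups):
        interferers = sum(1 for h in range(n_groups)
                          if h != g and assignment[h] == assignment[g])
        base[g] /= (1 + interferers)
    return base.sum()

rng = np.random.default_rng(0)
q = np.zeros(len(actions))                   # learned value per assignment
for step in range(2000):
    # epsilon-greedy exploration over assignments
    a = rng.integers(len(actions)) if rng.random() < 0.1 else int(q.argmax())
    r = throughput(actions[a], rng)          # reward = total user throughput
    q[a] += 0.05 * (r - q[a])                # incremental value update
print("best assignment:", actions[int(q.argmax())])
```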
Unmanned Aerial Vehicles (UAVs) have been emerging as an effective solution for IoT data collection networks thanks to their outstanding flexibility, mobility, and low operating costs. However, due to limited energy and the uncertainty of the data collection process, speed control is one of the most important factors in optimizing the energy usage efficiency and performance of UAV collectors. This work aims to develop a novel autonomous speed control approach to address this issue. To that end, we first formulate the dynamic speed control task of a UAV as a Markov decision process, taking into account its energy status and location. In this way, the Q-learning algorithm can be adopted to obtain the optimal speed control policy for the UAV. To further improve the system performance, we develop a highly effective deep dueling double Q-learning algorithm that utilizes the strengths of deep neural networks together with an advanced dueling architecture to quickly stabilize the learning process and obtain the optimal policy. Through simulation results, we show that our proposed solution can achieve up to 40% greater performance compared with other conventional methods. Importantly, the simulation results also reveal the significant impact of the UAV's energy and charging time on the system performance.
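The two ingredients named above are a dueling network head (separate value and advantage streams) and the double-Q target (action selected by the online network, evaluated by the target network). A minimal PyTorch sketch of both follows; the state dimension, number of discrete speed levels, and hidden sizes are assumed for illustration, not taken from the paper.

```python
# Sketch of the dueling head and double-Q target used in deep dueling double
# Q-learning (illustrative only; all dimensions and parameters are assumed).
import copy
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=4, n_actions=5):    # e.g., 5 discrete speed levels
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)               # state-value stream V(s)
        self.adv = nn.Linear(128, n_actions)         # advantage stream A(s, a)

    def forward(self, s):
        h = self.body(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=1, keepdim=True)   # Q = V + (A - mean A)

online = DuelingQNet()
target = copy.deepcopy(online)

def double_q_target(r, s2, done, gamma=0.99):
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1, keepdim=True)  # select with online net
        q_next = target(s2).gather(1, a_star)            # evaluate with target net
        return r + gamma * (1 - done) * q_next
```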
In this paper, an unmanned aerial vehicle (UAV)-assisted wireless network is considered in which a battery-constrained UAV is assumed to move towards energy-constrained ground nodes to receive status updates about their observed processes. The UAV's flight trajectory and the scheduling of status updates are jointly optimized with the objective of minimizing the normalized weighted sum of Age of Information (NWAoI) values for the different physical processes at the UAV. The problem is first formulated as a mixed-integer program. Then, for a given scheduling policy, a convex optimization-based solution is proposed to derive the UAV's optimal flight trajectory and the time instants of updates. However, finding the optimal scheduling policy is challenging due to the combinatorial nature of the formulated problem. Therefore, to complement the proposed convex optimization-based solution, a finite-horizon Markov decision process (MDP) is used to find the optimal scheduling policy. Since the state space of the MDP is extremely large, a novel neural combinatorial-based deep reinforcement learning (NCRL) algorithm using a deep Q-network (DQN) is proposed to obtain the optimal policy. However, for large-scale scenarios with numerous nodes, the DQN architecture can no longer efficiently learn the optimal scheduling policy. Motivated by this, a long short-term memory (LSTM)-based autoencoder is proposed to map the state space to a fixed-size vector representation in such large-scale scenarios. A lower bound on the minimum NWAoI is analytically derived, which provides system design guidelines on the appropriate choice of importance weights for different nodes. The numerical results also demonstrate that the proposed NCRL approach can significantly improve the achievable NWAoI per process compared to baseline policies, such as the weight-based and discretized-state DQN policies.
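To illustrate the role of the LSTM-based autoencoder, the sketch below compresses a variable-length sequence of per-node features into a fixed-size embedding that a DQN could then consume. It is an assumed construction, not the NCRL code; the per-node features and latent dimension are hypothetical.

```python
# Sketch (assumed, not the NCRL implementation): an LSTM autoencoder mapping a
# variable-length sequence of per-node features to a fixed-size state vector.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, feat_dim=3, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, feat_dim)

    def forward(self, x):                    # x: (batch, n_nodes, feat_dim)
        _, (h, _) = self.encoder(x)          # final hidden state summarizes the sequence
        z = h.squeeze(0)                     # fixed-size embedding, any n_nodes
        # repeat z as the decoder input to reconstruct the node sequence
        dec_in = z.unsqueeze(1).expand(-1, x.size(1), -1)
        y, _ = self.decoder(dec_in)
        return self.out(y), z

model = LSTMAutoencoder()
x = torch.randn(8, 50, 3)                    # 50 nodes; features e.g. (AoI, weight, distance)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)      # train to reconstruct; feed z to the DQN
```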