
Distributed Reinforcement Learning for Age of Information Minimization in Real-Time IoT Systems

Added by Sihua Wang
Publication date: 2021
Language: English





In this paper, the problem of minimizing the weighted sum of the age of information (AoI) and the total energy consumption of Internet of Things (IoT) devices is studied. In the considered model, each IoT device monitors a physical process that follows nonlinear dynamics. As the dynamics of the physical process vary over time, each device must find an optimal sampling frequency with which to sample the real-time dynamics of the physical system and send the sampled information to a base station (BS). Due to limited wireless resources, the BS can select only a subset of devices to transmit their sampled information. Thus, edge devices must cooperatively sample their monitored dynamics based on their local observations, and the BS must collect the sampled information from the devices immediately, thereby avoiding additional time and energy spent on sampling and information transmission. To this end, it is necessary to jointly optimize the sampling policy of each device and the device selection scheme of the BS so as to accurately monitor the dynamics of the physical process with minimum energy. This problem is formulated as an optimization problem whose goal is to minimize the weighted sum of the AoI cost and the energy consumption. To solve this problem, we propose a novel distributed reinforcement learning (RL) approach to optimize the sampling policy. The proposed algorithm enables edge devices to cooperatively find the globally optimal sampling policy using only their own local observations. Given the sampling policy, the device selection scheme can then be optimized so as to minimize the weighted sum of the AoI and energy consumption of all devices. Simulations with real PM 2.5 pollution data show that the proposed algorithm can reduce the sum of AoI by up to 17.8% and 33.9%, and the total energy consumption by up to 13.2% and 35.1%, compared to a conventional deep Q-network method and a uniform sampling policy, respectively.
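To make the system model concrete, the following is a minimal, self-contained simulation sketch of the per-slot objective described above: each device decides whether to sample its process, the BS selects at most K sampled devices for transmission, and the cost is the weighted sum of AoI and energy. The weights, energy costs, K, the random sampling rule, and the max-AoI selection rule are illustrative assumptions, not the paper's learned policies.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 6, 2, 10_000               # devices, uplink channels, horizon (assumed)
W_AOI, W_EN = 1.0, 0.5               # weights in the objective (assumed)
E_SAMPLE, E_TX = 0.1, 0.4            # energy per sample / transmission (assumed)

aoi = np.ones(N)
total_cost = 0.0
for t in range(T):
    sampled = rng.random(N) < 0.5    # stand-in for the learned sampling policy
    # BS selects up to K sampled devices with the largest AoI
    cand = np.flatnonzero(sampled)
    chosen = cand[np.argsort(aoi[cand])[-K:]] if cand.size else cand
    aoi += 1                         # all processes age by one slot
    aoi[chosen] = 1                  # fresh samples delivered to the BS
    energy = E_SAMPLE * sampled.sum() + E_TX * len(chosen)
    total_cost += W_AOI * aoi.sum() + W_EN * energy

print(f"avg weighted cost per slot: {total_cost / T:.2f}")
```

Replacing the random sampling rule with per-device learned policies driven by local observations is where the paper's distributed RL approach would slot in.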



Related research

Xiongwei Wu, Xiuhua Li, Jun Li (2020)
In most Internet of Things (IoT) networks, edge nodes are commonly used as relays that cache sensing data generated by IoT sensors and provide communication services for data consumers. However, a critical issue in IoT sensing is that data are usually transient, which necessitates updating cached content items over time, while frequent cache updates can lead to considerable energy costs and shorten the lifetime of IoT sensors. To address this issue, we adopt the Age of Information (AoI) to quantify data freshness and propose an online cache update scheme that achieves an effective tradeoff between average AoI and energy cost. Specifically, we first characterize the transmission energy consumption at IoT sensors by incorporating a successful-transmission condition. Then, we model cache updating as a Markov decision process to minimize the average weighted cost, with judicious definitions of state, action, and reward. Since user preference for content items is usually unknown and often evolves over time, we develop a deep reinforcement learning (DRL) algorithm to enable intelligent cache updates. Through trial-and-error exploration, an effective caching policy can be learned without requiring exact knowledge of content popularity. Simulation results demonstrate the superiority of the proposed framework.
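The cache-update MDP above can be illustrated with a tabular Q-learning stand-in for the paper's DRL agent (a deliberate simplification): the state is the discretized AoI of a single cached item, the action is update-or-hold, and the reward trades freshness against update energy. All constants (ALPHA, GAMMA, EPS, W_AOI, UPDATE_COST, MAX_AOI) are assumed for illustration.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1       # learning rate, discount, exploration
W_AOI, UPDATE_COST, MAX_AOI = 1.0, 5.0, 20

Q = defaultdict(float)                   # Q[(aoi, action)], action: 1 = update

def step(aoi, update):
    """AoI resets on a cache update, otherwise grows (capped)."""
    return 1 if update else min(aoi + 1, MAX_AOI)

aoi = 1
for t in range(50_000):
    # epsilon-greedy action selection over {hold, update}
    if random.random() < EPS:
        action = random.randint(0, 1)
    else:
        action = max((0, 1), key=lambda a: Q[(aoi, a)])
    nxt = step(aoi, action)
    reward = -(W_AOI * nxt + UPDATE_COST * action)   # freshness vs. energy
    best_next = max(Q[(nxt, 0)], Q[(nxt, 1)])
    Q[(aoi, action)] += ALPHA * (reward + GAMMA * best_next - Q[(aoi, action)])
    aoi = nxt

# the learned threshold-like policy: update whenever Q favors it
policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(1, MAX_AOI + 1)}
print(policy)
```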
Scheduling the transmission of time-sensitive information from a source node to multiple users over error-prone communication channels is studied with the goal of minimizing the long-term average age of information (AoI) at the users. A long-term average resource constraint is imposed on the source, which limits the average number of transmissions. The source can transmit to only a single user in each time slot; after each transmission, it receives instantaneous ACK/NACK feedback from the intended receiver and decides when, and to which user, to transmit the next update. Assuming the channel statistics are known, the optimal scheduling policy is studied for both the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols. Then, a reinforcement learning (RL) approach is introduced to find a near-optimal policy that does not assume any a priori information about the random processes governing the channel states. Several RL methods, including average-cost SARSA with linear function approximation (LFA), upper confidence reinforcement learning (UCRL2), and deep Q-networks (DQN), are applied and compared through numerical simulations.
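As a concrete baseline for this scheduling setup, here is a minimal simulation: N users with known channel success probabilities, instantaneous ACK/NACK feedback, and a greedy serve-the-stalest-user rule. The greedy rule and all constants are illustrative assumptions; it stands in for, but is not, the (H)ARQ-aware optimal and RL policies studied in the paper, and the average resource constraint is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 100_000
p_succ = np.array([0.9, 0.7, 0.5, 0.3])   # per-user channel statistics (assumed)
aoi = np.ones(N)
total_aoi = 0.0

for t in range(T):
    u = int(np.argmax(aoi))               # transmit to the stalest user
    ack = rng.random() < p_succ[u]        # instantaneous ACK/NACK feedback
    aoi += 1                              # every user ages by one slot
    if ack:
        aoi[u] = 1                        # fresh update delivered
    total_aoi += aoi.mean()

print(f"long-term average AoI: {total_aoi / T:.2f}")
```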
Unmanned aerial vehicles (UAVs) are expected to be a key component of next-generation wireless systems. Due to their deployment flexibility, UAVs are considered an efficient solution for collecting information from ground nodes and transmitting it wirelessly to the network. In this paper, a UAV-assisted wireless network is studied, in which energy-constrained ground nodes are deployed to observe different physical processes. In this network, a UAV, whose operation time is constrained by its limited battery, travels to the ground nodes to receive status update packets about their observed processes. The flight trajectory of the UAV and the scheduling of status update packets are jointly optimized with the objective of minimizing the weighted sum of the age-of-information (AoI) values of the different processes at the UAV, referred to as the weighted sum-AoI. The problem is modeled as a finite-horizon Markov decision process (MDP) with finite state and action spaces. Since the state space is extremely large, a deep reinforcement learning (RL) algorithm is proposed to obtain the optimal policy that minimizes the weighted sum-AoI, referred to as the age-optimal policy. Several simulation scenarios are considered to showcase the convergence of the proposed deep RL algorithm. The results also demonstrate that the proposed deep RL approach can significantly improve the achievable sum-AoI per process compared to baseline policies, such as the distance-based and random walk policies. The impact of various system design parameters on the optimal achievable sum-AoI per process is also shown through extensive simulations.
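The weighted sum-AoI objective above can be written down in a few lines: given a hypothetical visit schedule (which node's update the UAV receives in each slot, with -1 denoting a travel slot), accumulate the weighted AoI of all processes over the finite horizon. The weights and the schedule below are assumed placeholders, not outputs of the paper's age-optimal policy.

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])       # process importance weights (assumed)
schedule = [-1, 0, -1, 1, -1, 2, 0, -1]   # hypothetical flight/serve plan

aoi = np.ones(len(weights))
cost = 0.0
for node in schedule:
    aoi += 1                              # all processes age each slot
    if node >= 0:
        aoi[node] = 1                     # status update received at the UAV
    cost += float(weights @ aoi)          # per-slot weighted sum-AoI

print(f"weighted sum-AoI over the horizon: {cost:.1f}")
```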
In delay-sensitive industrial Internet of Things (IIoT) applications, the age of information (AoI) is employed to characterize the freshness of information. Meanwhile, emerging network function virtualization provides flexibility and agility for service providers to deliver a given network service as a sequence of virtual network functions (VNFs). However, suitable VNF placement and scheduling in these schemes is NP-hard, and finding a globally optimal solution with traditional approaches is complex. Recently, deep reinforcement learning (DRL) has emerged as a viable way to solve such problems. In this paper, we first utilize a single-agent, low-complexity compound-action actor-critic RL method that covers both discrete and continuous actions and jointly minimizes VNF cost and AoI in terms of network resources under end-to-end quality-of-service constraints. To overcome the capacity limitation of single-agent learning, we then extend our solution to a multi-agent DRL scheme in which agents collaborate with one another. Simulation results demonstrate that the single-agent schemes significantly outperform a greedy algorithm in terms of average network cost and AoI. Moreover, the multi-agent solution further decreases the average cost by dividing the tasks among the agents, although it requires more iterations to learn due to the need for agent collaboration.
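The compound-action idea above, a discrete placement decision plus a continuous resource allocation produced by one policy, can be sketched as a two-headed actor-critic network. This is an assumed PyTorch illustration: the layer sizes, state encoding, and head names are not from the paper.

```python
import torch
import torch.nn as nn

class CompoundActorCritic(nn.Module):
    def __init__(self, state_dim=16, num_nodes=8, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.place_logits = nn.Linear(hidden, num_nodes)  # discrete: host node
        self.alloc_mu = nn.Linear(hidden, 1)              # continuous: CPU share
        self.alloc_log_std = nn.Parameter(torch.zeros(1))
        self.value = nn.Linear(hidden, 1)                 # critic head

    def forward(self, state):
        h = self.body(state)
        place = torch.distributions.Categorical(logits=self.place_logits(h))
        alloc = torch.distributions.Normal(self.alloc_mu(h),
                                           self.alloc_log_std.exp())
        return place, alloc, self.value(h)

net = CompoundActorCritic()
s = torch.randn(1, 16)                     # dummy state encoding
place_dist, alloc_dist, v = net(s)
node = place_dist.sample()                 # VNF placement decision
share = torch.sigmoid(alloc_dist.sample()) # resource fraction squashed to (0, 1)
print(node.item(), share.item(), v.item())
```

Sampling both heads jointly yields one compound action per step, which is what lets a single agent handle the mixed discrete-continuous action space the abstract describes.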
Timely status updating is crucial for future applications that involve remote monitoring and control, such as autonomous driving and the Industrial Internet of Things (IIoT). The Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it cannot capture critical system context information that indicates the time-varying importance of status information and the dynamic evolution of the status. In this paper, we propose a context-based metric, namely the Urgency of Information (UoI), to evaluate the timeliness of status updates. Compared to AoI, the new metric incorporates both time-varying context information and dynamic status evolution, which enables the analysis of context-based adaptive status update schemes, as well as more effective remote monitoring and control. The minimization of the average UoI for a status update terminal with an updating-frequency constraint is investigated, and an update-index-based adaptive scheme is proposed. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity.
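In the spirit of the UoI metric above, here is a minimal sketch in which the per-slot penalty scales the age by a time-varying context weight, and the terminal updates whenever an index (here simply the would-be penalty) exceeds a threshold. The multiplicative form, the two-level context process, and the threshold are assumptions for illustration; the paper's exact metric and update index may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
T, THRESH = 10_000, 6.0                # horizon and index threshold (assumed)
age, updates, total_uoi = 1, 0, 0.0

for t in range(T):
    # time-varying context: occasionally the status becomes very important
    context = 2.0 if rng.random() < 0.2 else 0.5
    if context * age > THRESH:         # update-index rule: refresh when urgent
        age = 1
        updates += 1
    total_uoi += context * age         # realized per-slot UoI penalty
    age += 1

print(f"avg UoI: {total_uoi / T:.2f}, update rate: {updates / T:.3f}")
```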
