
Optimal Status Update for Caching Enabled IoT Networks: A Dueling Deep R-Network Approach

 Added by Chao Xu
Publication date: 2021
Language: English





In Internet of Things (IoT) networks, caching is a promising technique to alleviate the energy consumption of sensors by responding to users' data requests with the data packets cached in the edge caching node (ECN). However, without an efficient status update strategy, the information obtained by users may be stale, which in turn would inevitably deteriorate the accuracy and reliability of decisions derived for real-time applications. In this paper, we focus on striking a balance between the information freshness, in terms of age of information (AoI), experienced by users and the energy consumed by sensors, by appropriately activating sensors to update their current status. Particularly, we first characterize the evolution of the AoI of each sensor from different users' perspectives over time steps of non-uniform duration, which are determined by both the users' data requests and the ECN's status update decisions. Then, we formulate a non-uniform-time-step dynamic status update optimization problem to minimize the long-term average cost, jointly considering the average AoI and the energy consumption. To this end, the problem is cast as a Markov Decision Process, and a dueling deep R-network based dynamic status update algorithm is devised by combining the dueling deep Q-network with tabular R-learning, which addresses the challenges posed by the curse of dimensionality and the unknown environment dynamics. Finally, extensive simulations are conducted to validate the effectiveness of the proposed algorithm by comparing it with five baseline deep reinforcement learning algorithms and policies.
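To make the two ingredients named in the abstract concrete, the following Python sketch (not the authors' code) shows a dueling Q-network head combined with an R-learning style average-cost target. The network dimensions, the cost sign convention, and the scaling of the average cost rho by the non-uniform step duration are illustrative assumptions; the paper's exact formulation may differ.

# Illustrative sketch only: dueling value/advantage streams plus an
# average-cost (R-learning) style target for non-uniform time steps.
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value_head = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage_head = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value_head(h)
        a = self.advantage_head(h)
        # Subtracting the mean advantage keeps the V/A decomposition identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

def r_learning_target(q_net: DuelingQNetwork, cost: torch.Tensor,
                      next_state: torch.Tensor, rho: float,
                      duration: torch.Tensor) -> torch.Tensor:
    """Average-cost target: observed cost minus rho (scaled here by the step
    duration, an assumption) plus the best, i.e. lowest-cost, next Q-value."""
    with torch.no_grad():
        next_q = q_net(next_state).min(dim=-1).values
    return cost - rho * duration + next_q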

Related research

Caching has been regarded as a promising technique to alleviate the energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching-enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor, which, however, has not been well addressed. In this paper, we first depict the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, the status updating procedure is cast as a Markov Decision Process (MDP), and a model-free reinforcement learning algorithm is proposed to cope with the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of the proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
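The per-user AoI evolution that both abstracts rely on can be illustrated with a minimal sketch, under the common convention that age grows by one each slot and, when a user's request is answered, drops to the age of the packet held in the ECN cache. The unit-slot granularity and all names below are illustrative assumptions, not the papers' notation.

# Illustrative AoI bookkeeping per slot.
def next_aoi(current_aoi: int, request_served: bool, cached_packet_age: int) -> int:
    if request_served:
        # Age resets to that of the cached packet, which ages one slot during delivery.
        return cached_packet_age + 1
    # No delivery this slot: the information at the user keeps aging.
    return current_aoi + 1

def next_cache_age(cache_age: int, sensor_updated: bool) -> int:
    # If the ECN activates the sensor, the cached packet is refreshed;
    # otherwise the cached copy simply ages.
    return 1 if sensor_updated else cache_age + 1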
You Li, Yuan Zhuang, Xin Hu (2020)
The Internet of Things (IoT) has started to empower the future of many industrial and mass-market applications. Localization techniques are becoming key to adding location context to IoT data without human perception and intervention. Meanwhile, the newly emerged Low-Power Wide-Area Network (LPWAN) technologies have advantages such as long range, low power consumption, low cost, massive connections, and the capability for communication in both indoor and outdoor areas. These features make LPWAN signals strong candidates for mass-market localization applications. However, various error sources have limited the localization performance achievable with such IoT signals. This paper reviews IoT localization systems in the following sequence: IoT localization system review -- localization data sources -- localization algorithms -- localization error sources and mitigation -- localization performance evaluation. Compared to related surveys, this paper provides a more comprehensive and state-of-the-art review of IoT localization methods, an original review of IoT localization error sources and mitigation, an original review of IoT localization performance evaluation, and a more comprehensive review of IoT localization applications, opportunities, and challenges. Thus, this survey provides comprehensive guidance for peers who are interested in enabling localization ability in existing IoT systems, using IoT systems for localization, or integrating IoT signals with existing localization sensors.
We consider a star-topology wireless network for status updates, where a central node collects status data from a large number of distributed machine-type terminals that share a wireless medium. The Age of Information (AoI) minimization scheduling problem is formulated as a restless multi-armed bandit. A widely proven near-optimal solution, i.e., the Whittle index, is derived in closed form, and the corresponding indexability is established. The index is then generalized to incorporate stochastic, periodic packet arrivals and unreliable channels. Inspired by index scheduling policies, which achieve near-optimal AoI but require heavy signaling overhead, a contention-based random access scheme, namely Index-Prioritized Random Access (IPRA), is further proposed. Under IPRA, terminals that are not urgent to update, as indicated by their indices, are barred from accessing the wireless medium, thus improving access timeliness. Computer simulations show that IPRA's performance is close to the optimal AoI in this setting and outperforms standard random access schemes. In addition, for applications with hard AoI deadlines, we provide a deadline-guarantee analysis. Closed-form achievable AoI stationary distributions under Bernoulli packet arrivals are derived, so that an AoI deadline can be ensured with high reliability by calculating the maximum number of supportable terminals and allocating system resources proportionally.
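The index-based scheduling idea in this abstract can be sketched in a few lines: each terminal carries an index that grows with its age, and the scheduler (or, under IPRA, the contention rule) favours the terminal with the largest index. The quadratic index below is a common illustrative choice for reliable channels, not the closed form derived in the paper, which also covers stochastic arrivals and unreliable channels.

# Illustrative index-based AoI scheduling.
def whittle_like_index(aoi: int) -> float:
    # Index increases with age, so stale terminals become progressively more urgent.
    return 0.5 * aoi * (aoi + 1)

def pick_terminal(aois: dict) -> str:
    """Schedule the terminal whose index is currently largest."""
    return max(aois, key=lambda terminal: whittle_like_index(aois[terminal]))

# Example: with ages {"t1": 3, "t2": 7, "t3": 1}, terminal "t2" is scheduled.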
Model-based reliable communication with low latency is of paramount importance for time-critical wireless control systems. In this work, we study the downlink (DL) controller-to-actuator scheduling problem in a wireless industrial network so as to minimize the outage probability. In contrast to the existing literature based on well-known stationary fading channel models, we assume an arbitrary and unknown channel fading model that is available only via samples. To overcome the issue of limited data samples, we invoke the generative adversarial network framework and propose an online data-driven approach that jointly schedules the DL transmissions and learns the channel distributions in an online manner. Numerical results show that the proposed approach can effectively learn any arbitrary channel distribution and further achieve the optimal performance by using the predicted outage probability.
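The decision step this abstract alludes to can be sketched as follows: once a generative model of the unknown fading channel is available, each candidate DL schedule's outage probability can be estimated by Monte Carlo and the schedule with the lowest estimate selected. The generator interface, the SNR threshold, and the sample count are illustrative assumptions, not the paper's GAN architecture or scheduling rule.

# Illustrative Monte Carlo outage estimation from a learned channel generator.
import numpy as np

def estimated_outage(generator, schedule, snr_threshold: float, n_samples: int = 1000) -> float:
    """Fraction of generated channel-gain samples that fall below the outage threshold."""
    gains = np.asarray(generator(schedule, n_samples))  # hypothetical sampler of channel gains
    return float(np.mean(gains < snr_threshold))

def best_schedule(generator, candidates, snr_threshold: float):
    """Pick the candidate schedule whose estimated outage probability is smallest."""
    return min(candidates, key=lambda s: estimated_outage(generator, s, snr_threshold))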
We study a problem of fundamental importance to information-centric networks (ICNs), namely, minimizing routing costs by jointly optimizing caching and routing decisions over an arbitrary network topology. We consider both source routing and hop-by-hop routing settings. The respective offline problems are NP-hard. Nevertheless, we show that there exist polynomial-time approximation algorithms producing solutions within a constant factor of the optimal. We also provide distributed, adaptive algorithms with the same approximation guarantees. We simulate our adaptive algorithms over a broad array of different topologies. Our algorithms reduce routing costs by several orders of magnitude compared to prior art, including algorithms that optimize caching under fixed routing.