
Data-Driven Predictive Scheduling in Ultra-Reliable Low-Latency Industrial IoT: A Generative Adversarial Network Approach

Published by Chen-Feng Liu
Publication date: 2020
Paper language: English





Reliable, low-latency communication is of paramount importance for time-critical wireless control systems. In this work, we study the downlink (DL) controller-to-actuator scheduling problem in a wireless industrial network such that the outage probability is minimized. In contrast to the existing literature based on well-known stationary fading channel models, we assume an arbitrary and unknown channel fading model, which is available only via samples. To overcome the issue of limited data samples, we invoke the generative adversarial network framework and propose an online data-driven approach to jointly schedule the DL transmissions and learn the channel distributions in an online manner. Numerical results show that the proposed approach can effectively learn any arbitrary channel distribution and further achieve the optimal performance by using the predicted outage probability.
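The link between a learned channel model and scheduling is the predicted outage probability: once a generative model can produce samples that match the true fading distribution, outage is just the fraction of sampled SNRs below the decoding threshold. The sketch below illustrates only that last step, using exponential SNR samples (Rayleigh fading) as a stand-in for GAN output; it is not the paper's scheduling algorithm, and the threshold and mean SNR values are illustrative.

```python
import numpy as np

def predicted_outage(snr_samples, snr_threshold):
    """Empirical outage probability: fraction of sampled SNRs below threshold."""
    return float(np.mean(snr_samples < snr_threshold))

rng = np.random.default_rng(0)
mean_snr = 2.0       # illustrative average SNR
gamma_th = 1.0       # illustrative decoding threshold
# Stand-in for generator output: Rayleigh fading yields exponential SNR.
samples = rng.exponential(mean_snr, size=200_000)

est = predicted_outage(samples, gamma_th)
exact = 1.0 - np.exp(-gamma_th / mean_snr)  # closed form for this channel
```

For this channel the Monte Carlo estimate converges to the closed-form outage, which is what lets a scheduler rank links by predicted outage without ever knowing the fading law analytically.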




Read also

Considering a Manhattan mobility model in vehicle-to-vehicle networks, this work studies a power minimization problem subject to second-order statistical constraints on latency and reliability, captured by a network-wide maximal data queue length. We invoke results in extreme value theory to characterize statistics of extreme events in terms of the maximal queue length. Subsequently, leveraging Lyapunov stochastic optimization to deal with network dynamics, we propose two queue-aware power allocation solutions. In contrast with the baseline, our approaches achieve lower mean and variance of the maximal queue length.
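A standard extreme-value-theory diagnostic for tail statistics like the maximal queue length is the mean excess over a threshold: for a light (exponential) tail it stays flat at the scale parameter, i.e. the fitted generalized Pareto shape is near zero. The snippet below is a generic peaks-over-threshold check on synthetic queue samples, not the paper's power-allocation method; the scale and thresholds are illustrative.

```python
import numpy as np

def mean_excess(samples, threshold):
    """Average exceedance above the threshold (peaks-over-threshold statistic)."""
    excess = samples[samples > threshold] - threshold
    return float(excess.mean())

rng = np.random.default_rng(1)
scale = 3.0  # illustrative queue-length scale
queue_samples = rng.exponential(scale, size=500_000)

# For an exponential tail, the mean excess is constant at the scale
# regardless of the threshold (GPD shape parameter ~ 0).
me_low = mean_excess(queue_samples, 2.0)
me_high = mean_excess(queue_samples, 8.0)
```

A rising mean-excess curve would instead indicate a heavy tail, which is exactly the regime where constraining only the average queue length understates latency risk.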
Industrial automation deployments constitute challenging environments where moving IoT machines may produce high-definition video and other heavy sensor data during surveying and inspection operations. Transporting massive contents to the edge network infrastructure and then eventually to the remote human operator requires reliable and high-rate radio links supported by intelligent data caching and delivery mechanisms. In this work, we address the challenges of contents dissemination in characteristic factory automation scenarios by proposing to engage moving industrial machines as device-to-device (D2D) caching helpers. With the goal to improve reliability of high-rate millimeter-wave (mmWave) data connections, we introduce the alternative contents dissemination modes and then construct a novel mobility-aware methodology that helps develop predictive mode selection strategies based on the anticipated radio link conditions. We also conduct a thorough system-level evaluation of representative data dissemination strategies to confirm the benefits of predictive solutions that employ D2D-enabled collaborative caching at the wireless edge to lower contents delivery latency and improve data acquisition reliability.
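The core of predictive mode selection is comparing dissemination modes by their anticipated link conditions rather than instantaneous ones. A minimal sketch of that decision rule, assuming each mode is summarized by a predicted blockage probability and a nominal rate (both values below are illustrative, not from the paper):

```python
def select_mode(p_block_direct, rate_direct, p_block_d2d, rate_d2d):
    """Pick the dissemination mode with the higher expected delivery rate."""
    expected_direct = (1.0 - p_block_direct) * rate_direct
    expected_d2d = (1.0 - p_block_d2d) * rate_d2d
    if expected_direct >= expected_d2d:
        return "infrastructure", expected_direct
    return "d2d", expected_d2d

# A mmWave infrastructure link with high anticipated blockage can lose to a
# slower but more reliable D2D caching helper nearby.
mode, expected_rate = select_mode(0.4, 10.0, 0.1, 8.0)
```

Here the direct link's expected rate is 6.0 against 7.2 for the helper, so the D2D mode wins despite its lower nominal rate.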
Ultra-reliable communication (URC) is a key enabler for supporting immersive and mission-critical 5G applications. Meeting the strict reliability requirements of these applications is challenging due to the absence of accurate statistical models tailored to URC systems. In this letter, the wireless connectivity over dynamic channels is characterized via statistical learning methods. In particular, model-based and data-driven learning approaches are proposed to estimate the non-blocking connectivity statistics over a set of training samples with no knowledge of the dynamic channel statistics. Using principles of survival analysis, the reliability of wireless connectivity is measured in terms of the probability of channel blocking events. Moreover, the maximum transmission duration for a given reliable non-blocking connectivity is predicted in conjunction with the confidence of the inferred transmission duration. Results show that the accuracy of detecting channel blocking events is higher using the model-based method for low to moderate reliability targets requiring low sample complexity. In contrast, the data-driven method shows higher detection accuracy for higher reliability targets at the cost of 100$\times$ sample complexity.
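The survival-analysis view treats a connection's non-blocking lifetime as a time-to-event variable, for which the Kaplan-Meier estimator gives a nonparametric survival curve even when some connections are still alive (censored) at the end of observation. The implementation below is a generic textbook estimator, not the letter's specific method, and the synthetic exponential lifetimes are illustrative:

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.

    durations: observed lifetimes; observed: True if a blocking event was
    seen, False if the connection was censored (still unblocked).
    """
    order = np.argsort(durations)
    t, d = durations[order], observed[order]
    n = len(t)
    times, surv = [], []
    s, at_risk, i = 1.0, n, 0
    while i < n:
        j, deaths = i, 0
        while j < n and t[j] == t[i]:   # group tied event times
            deaths += int(d[j])
            j += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            times.append(t[i])
            surv.append(s)
        at_risk -= j - i
        i = j
    return np.array(times), np.array(surv)

# Demo: uncensored exponential lifetimes, so S(t) should track exp(-t).
rng = np.random.default_rng(3)
durations = rng.exponential(1.0, size=100_000)
observed = np.ones(100_000, dtype=bool)
times, surv = kaplan_meier(durations, observed)
s_at_1 = surv[np.searchsorted(times, 1.0)]
```

Reading the curve at a reliability target (e.g. the largest t with S(t) ≥ 0.999) is one way to turn it into a maximum safe transmission duration.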
To overcome devices limitations in performing computation-intense applications, mobile edge computing (MEC) enables users to offload tasks to proximal MEC servers for faster task computation. However, current MEC system design is based on average-based metrics, which fails to account for the ultra-reliable low-latency requirements in mission-critical applications. To tackle this, this paper proposes a new system design, where probabilistic and statistical constraints are imposed on task queue lengths, by applying extreme value theory. The aim is to minimize users' power consumption while trading off the allocated resources for local computation and task offloading. Due to wireless channel dynamics, users are re-associated to MEC servers in order to offload tasks using higher rates or accessing proximal servers. In this regard, a user-server association policy is proposed, taking into account the channel quality as well as the servers' computation capabilities and workloads. By marrying tools from Lyapunov optimization and matching theory, a two-timescale mechanism is proposed, where a user-server association is solved in the long timescale while a dynamic task offloading and resource allocation policy is executed in the short timescale. Simulation results corroborate the effectiveness of the proposed approach by guaranteeing highly-reliable task computation and lower delay performance, compared to several baselines.
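The short-timescale part of such designs typically follows the Lyapunov drift-plus-penalty recipe: each slot, pick the action minimizing V x (power cost) minus (queue backlog) x (service rate), so a larger V saves power at the price of longer queues. The toy single-user loop below illustrates that recipe only; the power grid, arrival rate, and V are illustrative and this is not the paper's two-timescale mechanism.

```python
import numpy as np

def dpp_power(queue, gain, V, power_levels):
    """Drift-plus-penalty: minimize V*power - queue*rate over a power grid."""
    rates = np.log2(1.0 + power_levels * gain)
    return power_levels[np.argmin(V * power_levels - queue * rates)]

rng = np.random.default_rng(2)
power_levels = np.linspace(0.0, 2.0, 21)
V = 10.0              # trades average power against queue backlog
queue, max_queue = 0.0, 0.0
for _ in range(5000):
    arrival = rng.poisson(0.8)       # bits arriving this slot
    gain = rng.exponential(1.0)      # fading channel gain
    p = dpp_power(queue, gain, V, power_levels)
    service = np.log2(1.0 + p * gain)
    queue = max(queue + arrival - service, 0.0)
    max_queue = max(max_queue, queue)
```

Note the opportunistic behavior this induces: the rule transmits only when the backlog-weighted rate beats the power penalty, so deep fades are skipped and the queue stays bounded as long as arrivals are within the service capacity.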
65 - Chao Xu, Yiping Xie, Xijun Wang 2021
In the Internet of Things (IoT) networks, caching is a promising technique to alleviate energy consumption of sensors by responding to users' data requests with the data packets cached in the edge caching node (ECN). However, without an efficient status update strategy, the information obtained by users may be stale, which in return would inevitably deteriorate the accuracy and reliability of derived decisions for real-time applications. In this paper, we focus on striking the balance between the information freshness, in terms of age of information (AoI), experienced by users and energy consumed by sensors, by appropriately activating sensors to update their current status. Particularly, we first depict the evolutions of the AoI with each sensor from different users' perspectives with time steps of non-uniform duration, which are determined by both the users' data requests and the ECN's status update decision. Then, we formulate a non-uniform time step based dynamic status update optimization problem to minimize the long-term average cost, jointly considering the average AoI and energy consumption. To this end, a Markov Decision Process is formulated and further, a dueling deep R-network based dynamic status update algorithm is devised by combining dueling deep Q-network and tabular R-learning, with which challenges from the curse of dimensionality and the unknown environmental dynamics can be addressed. Finally, extensive simulations are conducted to validate the effectiveness of our proposed algorithm by comparing it with five baseline deep reinforcement learning algorithms and policies.
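Underlying any AoI formulation is a simple recursion: the age grows by one each slot and resets to one in the slot a fresh update is delivered. The snippet below shows only that bookkeeping on a fixed, illustrative update schedule; the actual scheduling of updates (the paper's learning problem) is exactly the part left out here.

```python
import numpy as np

def simulate_aoi(update_slots, horizon):
    """Age of information per slot: resets to 1 on a delivered update,
    otherwise grows by 1 each slot."""
    age, ages = 0, []
    for t in range(horizon):
        age = 1 if t in update_slots else age + 1
        ages.append(age)
    return np.array(ages)

# Periodic updates every 4 slots over a 12-slot horizon.
ages = simulate_aoi({0, 4, 8}, 12)
```

With period-4 updates the age cycles through 1, 2, 3, 4, giving an average AoI of 2.5; updating more often lowers this average but costs more sensor energy, which is precisely the trade-off the status-update policy must balance.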