
ACK-Less Rate Adaptation for IEEE 802.11bc Enhanced Broadcast Services Using Sim-to-Real Deep Reinforcement Learning

Submitted by: Takamochi Kanda
Publication date: 2021
Research field: Information engineering
Language: English





In IEEE 802.11bc, the broadcast mode for wireless local area networks (WLANs), data rate control based on the acknowledgement (ACK) mechanism used in current IEEE 802.11 WLANs is not applicable, because no ACK mechanism is implemented. This paper addresses this challenge by proposing ACK-less data rate adaptation methods that capture non-broadcast uplink frames of stations (STAs). IEEE 802.11bc assumes a use case in which some of the broadcast recipient STAs are also associated with non-broadcast access points (APs), and such STAs periodically transmit uplink frames, including ACK frames. The proposed method is based on the idea that, by overhearing such uplink frames, the broadcast AP can survey the channel conditions of a subset of the STAs and thereby set appropriate data rates for them. Furthermore, to avoid reception failures at a large portion of STAs, this paper proposes a deep reinforcement learning (DRL)-based data rate adaptation framework that uses a sim-to-real approach: information on reception success/failure at broadcast recipient STAs, which cannot be reported to the broadcast AP in real deployments, is made available beforehand through simulation, and data rate adaptation strategies are formed from it. Numerical results show that utilizing overheard uplink frames of recipients makes it feasible to manage data rates in ACK-less broadcast WLANs, and that the sim-to-real DRL framework can reduce reception failures.
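To make the sim-to-real idea concrete, here is a minimal sketch (not the authors' code): a tabular RL agent learns an SNR-to-rate mapping in simulation, where per-STA reception success/failure is observable, and is then deployed ACK-less, driven only by the SNR estimated from overheard uplink frames. The channel model, rate set, and reward shape are illustrative assumptions.

```python
# Sketch: train with simulated success/failure feedback, deploy ACK-less.
import random

MCS_RATES = [6, 12, 24, 48]          # Mb/s, illustrative rate set
SNR_BINS = range(0, 40, 5)           # coarse SNR states (dB)

def snr_to_state(snr_db):
    return min(int(snr_db // 5), len(SNR_BINS) - 1)

def sim_reception_ok(snr_db, mcs):
    # Toy channel: each MCS step up needs ~6 dB more SNR (assumption).
    return snr_db >= 8 + 6 * mcs

Q = [[0.0] * len(MCS_RATES) for _ in SNR_BINS]

# Training in simulation: reception success/failure is available here.
for episode in range(20000):
    snr = random.uniform(0, 40)                  # sampled channel condition
    s = snr_to_state(snr)
    a = random.randrange(len(MCS_RATES)) if random.random() < 0.1 \
        else max(range(len(MCS_RATES)), key=lambda i: Q[s][i])
    # Reward throughput on success, penalize failures heavily to avoid
    # reception losses at many broadcast recipients.
    r = MCS_RATES[a] if sim_reception_ok(snr, a) else -50
    Q[s][a] += 0.05 * (r - Q[s][a])              # bandit-style update

# Deployment: only SNR overheard from uplink frames is available.
def select_rate(overheard_snr_db):
    s = snr_to_state(overheard_snr_db)
    return MCS_RATES[max(range(len(MCS_RATES)), key=lambda i: Q[s][i])]

print(select_rate(12.0), select_rate(30.0))  # conservative vs. fast rate
```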




Read also

Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). We evaluate recently proposed RL-based ABR methods in Facebook's web-based video streaming platform. Real-world ABR poses several challenges that require customized designs beyond off-the-shelf RL algorithms: we implement a scalable neural network architecture that supports videos with arbitrary bitrate encodings; we design a training method to cope with the variance resulting from the stochasticity of network conditions; and we leverage constrained Bayesian optimization for reward shaping in order to optimize the conflicting QoE objectives. In a week-long worldwide deployment with more than 30 million video streaming sessions, our RL approach outperforms the existing human-engineered ABR algorithms.
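One common way to support arbitrary bitrate encodings is to score each candidate bitrate with shared weights, so the network does not fix the action count. The sketch below shows that pattern under stated assumptions (state features, layer sizes, and the bitrate ladder are all illustrative); the paper's actual architecture is not reproduced here.

```python
# Hedged sketch: one policy network scores any number of candidate bitrates.
import torch
import torch.nn as nn

class BitrateScorer(nn.Module):
    def __init__(self, state_dim=6, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, bitrates):
        # state: (state_dim,) player/network features; bitrates: (K,) Mb/s.
        # Score every candidate with the same weights, then softmax over K.
        k = bitrates.shape[0]
        x = torch.cat([state.expand(k, -1), bitrates.unsqueeze(1)], dim=1)
        return torch.softmax(self.net(x).squeeze(1), dim=0)

policy = BitrateScorer()
probs = policy(torch.randn(6), torch.tensor([0.3, 0.75, 1.2, 2.4, 4.8]))
print(probs)  # a distribution over however many encodings the video has
```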
Deep reinforcement learning has recently seen huge success across multiple areas of the robotics domain. Owing to the limitations of gathering real-world data, namely sample inefficiency and the cost of collection, simulation environments are utilized for training agents. This not only provides a potentially infinite data source, but also alleviates safety concerns with real robots. Nonetheless, the gap between the simulated and real worlds degrades the performance of policies once the models are transferred to real robots. Multiple research efforts are therefore being directed towards closing this sim-to-real gap and accomplishing more efficient policy transfer. Recent years have seen the emergence of multiple methods applicable to different domains, but, to the best of our knowledge, there is no comprehensive review summarizing and putting the different methods into context. In this survey paper, we cover the fundamental background of sim-to-real transfer in deep reinforcement learning and overview the main methods currently in use: domain randomization, domain adaptation, imitation learning, meta-learning, and knowledge distillation. We categorize some of the most relevant recent works and outline the main application scenarios. Finally, we discuss the main opportunities and challenges of the different approaches and point to the most promising directions.
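As a flavor of one surveyed method, the sketch below shows a domain-randomization training loop: physical parameters are resampled every episode so the learned policy must cover a distribution that hopefully contains the real system. The parameter names, ranges, and the stubbed training function are assumptions for illustration.

```python
# Illustrative domain-randomization loop (one of the surveyed methods).
import random

def sample_sim_params():
    # Assumed randomization ranges; real ranges are tuned per robot/task.
    return {
        "mass_kg":   random.uniform(0.8, 1.2),
        "friction":  random.uniform(0.5, 1.5),
        "obs_noise": random.uniform(0.0, 0.02),
    }

def train_episode(policy, params):
    """Roll out `policy` in a simulator instantiated with `params`
    and apply one RL update (stubbed here)."""
    ...

policy = None  # stands in for any RL agent (e.g., a PPO policy)
for episode in range(10_000):
    train_episode(policy, sample_sim_params())
```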
The combination of cloud computing capabilities at the network edge and artificial intelligence promises to turn future mobile networks into service- and radio-aware entities, able to address the requirements of upcoming latency-sensitive applications. In this context, a challenging research goal is to exploit edge intelligence to dynamically and optimally manage Radio Access Network (RAN) Slicing (a less mature and more complex technology than fifth-generation Network Slicing) and Radio Resource Management, a very complex task due to the mostly unpredictable nature of the wireless channel. This paper presents a novel architecture that leverages Deep Reinforcement Learning at the edge of the network to address RAN Slicing and Radio Resource Management optimization in support of latency-sensitive applications. The effectiveness of our proposal against baseline methodologies is investigated through computer simulation, considering an autonomous-driving use case.
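For intuition, one possible encoding of a joint slicing/resource action for a DRL agent is shown below: the agent splits a fixed budget of physical resource blocks (PRBs) among slices each scheduling interval, and the reward penalizes latency SLA violations. The slice names, PRB budget, granularity, and SLA targets are illustrative assumptions, not the paper's design.

```python
# Rough sketch of a discrete action space for joint slicing / RRM.
import itertools

N_PRBS = 50
SLICES = ["urllc", "embb", "mmtc"]
SLA_MS = {"urllc": 5, "embb": 50, "mmtc": 500}   # assumed latency targets

# Every split of the PRB budget over the slices, in steps of 10 PRBs.
ACTIONS = [a for a in itertools.product(range(0, N_PRBS + 1, 10),
                                        repeat=len(SLICES))
           if sum(a) == N_PRBS]

def reward(latency_ms):
    # Penalize SLA violations; the tight URLLC target dominates.
    return -sum(max(0.0, latency_ms[s] - SLA_MS[s]) for s in SLICES)

print(len(ACTIONS))
print(reward({"urllc": 4.0, "embb": 60.0, "mmtc": 100.0}))  # -10.0
```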
Learning robotic control policies in the real world gives rise to challenges in data efficiency, safety, and controlling the initial condition of the system. On the other hand, simulations are a useful alternative, as they provide an abundant source of data without the restrictions of the real world. Unfortunately, simulations often fail to accurately model complex real-world phenomena. Traditional system identification techniques are limited in expressiveness by the analytical model parameters, and are usually not sufficient to capture such phenomena. In this paper we propose a general framework for improving the analytical model by optimizing state-dependent generalized forces. State-dependent generalized forces are expressive enough to model constraints in the equations of motion, while maintaining a clear physical meaning and intuition. We use reinforcement learning to efficiently optimize the mapping from states to generalized forces over a discounted infinite horizon. We show that using only minutes of real-world data improves sim-to-real control policy transfer. We demonstrate the feasibility of our approach by validating it on a nonprehensile manipulation task with a Sawyer robot.
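Conceptually, the idea is to add a learned residual term f(q, q̇) to the analytical equations of motion so the simulated dynamics better match real data. The sketch below illustrates that structure; the network size, 7-DoF setting, and simplified dynamics (inertia only, no gravity/Coriolis terms) are placeholders, not the paper's implementation.

```python
# Conceptual sketch: state-dependent generalized forces as a learned
# residual in the equations of motion.
import torch
import torch.nn as nn

class ResidualForce(nn.Module):
    def __init__(self, dof=7):                     # e.g. a 7-DoF arm
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dof, 64), nn.Tanh(),
                                 nn.Linear(64, dof))

    def forward(self, q, qdot):
        return self.net(torch.cat([q, qdot], dim=-1))

def step(q, qdot, tau, force_net, M_inv, dt=0.01):
    # qddot = M^-1 (tau + f(q, qdot)); gravity/Coriolis omitted for brevity.
    qddot = M_inv @ (tau + force_net(q, qdot))
    return q + dt * qdot, qdot + dt * qddot

f = ResidualForce()
q, qdot = torch.zeros(7), torch.zeros(7)
q, qdot = step(q, qdot, torch.ones(7), f, torch.eye(7))
```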
Caching and rate allocation are two promising approaches to supporting video streaming over wireless networks. However, existing rate allocation designs do not fully exploit the advantages of the two approaches. This paper investigates the problem of cache-enabled, QoE-driven video rate allocation. We establish a mathematical model for this problem and point out that it is difficult to solve with traditional dynamic programming. We therefore propose a deep reinforcement learning approach: we first model the problem as a Markov decision process, and then present a deep Q-learning algorithm with a special knowledge transfer process to find an effective allocation policy. Finally, numerical results demonstrate that the proposed solution can effectively maintain a high-quality experience for a mobile user moving among small cells. We also investigate the impact of the configuration of critical parameters on the performance of our algorithm.
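A toy deep-Q sketch of this setup follows: the state combines channel quality with whether the requested segment is cached at the small cell, and the action is the video rate. The state features, rate ladder, and reward are illustrative assumptions, and the paper's knowledge transfer process is omitted here.

```python
# Toy deep-Q sketch for QoE-driven, cache-aware rate allocation.
import random
import torch
import torch.nn as nn

RATES = [1.0, 2.5, 5.0, 8.0]                    # Mb/s, assumed ladder
qnet = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, len(RATES)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

def env_step(snr, cached, a):
    capacity = snr * (2.0 if cached else 1.0)   # cache hit: served locally
    return RATES[a] if RATES[a] <= capacity else -5.0  # rebuffer penalty

for step in range(5000):
    snr, cached = random.uniform(0.5, 10.0), random.random() < 0.3
    s = torch.tensor([snr, float(cached)])
    a = random.randrange(len(RATES)) if random.random() < 0.1 \
        else int(qnet(s).argmax())              # epsilon-greedy action
    target = torch.tensor(env_step(snr, cached, a))
    loss = (qnet(s)[a] - target) ** 2           # 1-step, no bootstrapping
    opt.zero_grad(); loss.backward(); opt.step()
```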