Millimeter-wave (mmWave) beam tracking based on machine learning enables the development of accurate tracking policies while obviating the need to periodically solve beam-optimization problems. However, its applicability remains questionable when there are training-test gaps in the environmental parameters that affect the node dynamics. Motivated by this concern, the contribution of this study is twofold. First, through an example scenario, we confirm that the training-test gap adversely affects beam-tracking performance. More specifically, we consider nodes placed on overhead messenger wires, where the node dynamics are affected by several environmental parameters, e.g., the wire mass and tension. Although this is a particular scenario, it yields insight into the validation of training-test gap problems. Second, we demonstrate the feasibility of zero-shot adaptation as a solution, wherein a learning agent adapts to environmental parameters that were unseen during training. This is achieved by leveraging a robust adversarial reinforcement learning (RARL) technique, in which such training-test gaps are regarded as disturbances induced by adversaries that are trained jointly with a legitimate beam-tracking agent. Numerical evaluations demonstrate that the beam-tracking policy learned via RARL can be applied to a wide range of environmental parameters without severely degrading the received power.
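To make the RARL mechanism concrete, the following is a minimal sketch of the alternating two-player training loop, assuming a hypothetical BeamTrackingEnv whose step() accepts both the tracker's beam action and an adversarial disturbance; the class names and the collect_rollouts() helper are illustrative, not taken from the paper.

def rarl_train(env, protagonist, adversary, n_iters=100, horizon=200):
    # Alternate updates: the beam tracker maximizes received power while
    # the adversary, which injects disturbances (e.g., perturbed wire
    # tension or wind forces), tries to minimize it.
    for _ in range(n_iters):
        # Phase 1: improve the protagonist against the current adversary.
        rollouts = collect_rollouts(env, protagonist, adversary, horizon)
        protagonist.update(rollouts, maximize=True)
        # Phase 2: improve the adversary against the current protagonist.
        rollouts = collect_rollouts(env, protagonist, adversary, horizon)
        adversary.update(rollouts, maximize=False)

def collect_rollouts(env, protagonist, adversary, horizon):
    # Roll out trajectories in which both players act at every step.
    obs = env.reset()
    trajectory = []
    for _ in range(horizon):
        beam_action = protagonist.act(obs)   # e.g., select a beam direction
        disturbance = adversary.act(obs)     # e.g., force applied to the wire
        next_obs, reward, done, _ = env.step(beam_action, disturbance)
        trajectory.append((obs, beam_action, disturbance, reward))
        obs = env.reset() if done else next_obs
    return trajectory

Because the adversary is retrained alongside the protagonist, the tracker is continually exposed to the worst disturbances the current adversary can find, which is what allows the learned policy to tolerate environmental parameters outside the training distribution.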
This paper discusses the feasibility of beam tracking under the dynamics of millimeter-wave (mmWave) nodes placed on overhead messenger wires, including wind-forced perturbations and disturbances caused by impulsive forces on the wires. Our main contribution …
Deep neural networks, including reinforcement learning agents, have been proven vulnerable to small adversarial changes in the input, making the deployment of such networks in the real world problematic. In this paper, we propose RADIAL-RL, a method to …
Although deep reinforcement learning (deep RL) methods have many strengths that make them attractive for autonomous driving, real deep RL applications in autonomous driving have been slowed by the modeling gap between the source (training) …
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed. The Robust RL formulation tackles this by adding worst-case …
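For reference, the worst-case formulation alluded to here is commonly written as a zero-sum game between the control policy $\mu_{\theta}$ and an adversary $\nu_{\phi}$; the notation below is the standard robust-RL convention, not taken from this abstract:
\[
\max_{\theta} \min_{\phi} \; \mathbb{E}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r\big(s_t, a_t^{\mu}, a_t^{\nu}\big)\right],
\qquad a_t^{\mu} \sim \mu_{\theta}(\cdot \mid s_t), \quad a_t^{\nu} \sim \nu_{\phi}(\cdot \mid s_t),
\]
where the adversary's action $a_t^{\nu}$ models the worst-case perturbation of the system dynamics that the policy must withstand.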
We introduce a new RL problem where the agent is required to generalize to a previously unseen environment characterized by a subtask graph, which describes a set of subtasks and their dependencies. Unlike existing hierarchical multitask RL approaches …
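As a purely hypothetical illustration of such a subtask graph (the subtask names below are invented for exposition, not taken from the paper), the dependency structure can be represented as a mapping from each subtask to its prerequisites:

# Each subtask maps to the list of subtasks that must be completed first.
subtask_graph = {
    "collect_wood": [],                               # no prerequisites
    "collect_rope": [],
    "make_plank":   ["collect_wood"],                 # requires wood first
    "build_raft":   ["make_plank", "collect_rope"],   # requires both
}

def eligible(subtask, completed):
    # A subtask becomes executable once all of its prerequisites are done.
    return all(dep in completed for dep in subtask_graph[subtask])

Generalizing to an unseen environment then means acting well under a dependency mapping of this form that was never observed during training.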