
Deep Reinforcement Learning-Based Beam Tracking for Low-Latency Services in Vehicular Networks

Added by Zhiyuan Jiang
Publication date: 2020
Language: English





Ultra-Reliable and Low-Latency Communications (URLLC) services in vehicular networks on millimeter-wave bands present a significant challenge, given the need to constantly adjust the beam directions. Conventional methods are mostly based on classical control theory, e.g., the Kalman filter and its variants, which mainly address stationary scenarios. They therefore face severe limitations, especially in complicated, dynamic Vehicle-to-Everything (V2X) channels. This paper presents a thorough study of this problem: it first modifies the classical approaches, e.g., the Extended Kalman Filter (EKF) and the Particle Filter (PF), for non-stationary scenarios, and then proposes a Reinforcement Learning (RL)-based approach that can meet the URLLC requirements in a typical intersection scenario. Simulation results based on a commercial ray-tracing simulator show that the enhanced EKF and PF methods incur packet delays of more than $10$ ms, whereas the proposed deep RL-based method reduces the latency to about $6$ ms by extracting context information from the training data.
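As a point of reference for the classical baselines, beam tracking can be cast as state estimation: the beam angle and its angular rate form the state of a Kalman filter under a constant-angular-velocity model. The sketch below is a minimal illustration of that idea; the noise parameters `q` and `r` are hypothetical placeholders, and this is not the paper's enhanced EKF/PF.

```python
import numpy as np

def kalman_beam_tracker(measurements, dt=0.01, q=1e-4, r=1e-2):
    """Track a beam angle (rad) with a constant-angular-velocity Kalman filter.

    State x = [angle, angular_rate]; only the angle is observed.
    q and r are assumed process/measurement noise levels (illustrative).
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])               # observe the angle only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new angle measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

Because the transition model contains the angular rate, a steadily turning beam (a ramp in angle) is tracked with vanishing lag; it is exactly the non-stationary, abruptly changing V2X geometry that breaks this assumption and motivates the RL approach.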



Related research

In intelligent transportation systems (ITS), vehicles are expected to feature advanced applications and services that demand ultra-high data rates and low-latency communications. To this end, millimeter wave (mmWave) communication has emerged as a very promising solution. However, incorporating mmWave into ITS is particularly challenging due to the high mobility of vehicles and the inherent sensitivity of mmWave beams to dynamic blockages. This article addresses these problems by developing an optimal beam association framework for mmWave vehicular networks under high mobility. Specifically, we use the semi-Markov decision process to capture the dynamics and uncertainty of the environment. The Q-learning algorithm can then be used to find the optimal policy; however, Q-learning is notorious for its slow convergence. Instead of adopting deep reinforcement learning structures (like most works in the literature), we leverage the fact that there are usually multiple vehicles on the road to speed up the learning process. To that end, we develop a lightweight yet very effective parallel Q-learning algorithm that quickly obtains the optimal policy by simultaneously learning from multiple vehicles. Extensive simulations demonstrate that our proposed solution can increase the data rate by 47% and reduce the disconnection probability by 29% compared to other solutions.
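The core idea of learning from several vehicles at once can be sketched with a shared Q-table that all vehicles write into, so experience accumulates in parallel. The toy MDP below (a short chain of states with a goal) and all of its parameters are illustrative assumptions, not the paper's semi-Markov beam-association model.

```python
import random

def parallel_q_learning(n_vehicles=4, episodes=200, n_states=5,
                        alpha=0.1, gamma=0.9, eps=0.1):
    """Toy sketch of parallel Q-learning: several vehicles update one
    shared Q-table (illustrative chain MDP, not the paper's model)."""
    Q = [[0.0, 0.0] for _ in range(n_states)]  # shared table, 2 actions
    goal = n_states - 1

    def step(s, a):
        s2 = min(s + 1, goal) if a == 1 else s  # action 1 advances
        r = 1.0 if s2 == goal else 0.0
        return s2, r

    for _ in range(episodes):
        states = [0] * n_vehicles               # each vehicle has its own state
        for _ in range(3 * n_states):
            for v in range(n_vehicles):
                s = states[v]
                # epsilon-greedy action from the shared table
                if random.random() < eps:
                    a = random.randrange(2)
                else:
                    a = int(Q[s][1] >= Q[s][0])
                s2, r = step(s, a)
                # every vehicle writes into the same shared Q-table
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                states[v] = 0 if s2 == goal else s2
    return Q
```

With `n_vehicles` agents feeding the same table, the number of updates per wall-clock step scales linearly with the number of vehicles, which is the intuition behind the speed-up over single-agent Q-learning.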
Mushu Li, Jie Gao, Lian Zhao (2020)
Mobile edge computing (MEC) is a promising technology to support mission-critical vehicular applications, such as intelligent path planning and safety applications. In this paper, a collaborative edge computing framework is developed to reduce the computing service latency and improve service reliability for vehicular networks. First, a task partition and scheduling algorithm (TPSA) is proposed to decide the workload allocation and schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy. Second, an artificial intelligence (AI) based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process. A deep reinforcement learning technique, i.e., deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. With our approach, the service cost, which includes computing service latency and service failure penalty, can be minimized via the optimal workload assignment and server selection in collaborative computing. Simulation results show that the proposed AI-based collaborative computing approach can adapt to a highly dynamic environment with outstanding performance.
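The flavor of the workload-allocation step can be illustrated with a simple greedy list scheduler: place each task on whichever edge server would finish it earliest. This is only a sketch under assumed inputs (task sizes in CPU cycles, server speeds in cycles per second); the paper's TPSA and DDPG-based policy are considerably more elaborate.

```python
def greedy_offload(tasks, server_speeds):
    """Greedy sketch of workload allocation across edge servers.

    tasks: list of task sizes (cycles); server_speeds: cycles/second.
    Returns (plan, makespan), where plan[i] is the server chosen for the
    i-th task in largest-first order. Illustrative only, not the TPSA.
    """
    finish = [0.0] * len(server_speeds)   # time each server becomes free
    plan = []
    for cycles in sorted(tasks, reverse=True):   # schedule largest tasks first
        # pick the server that completes this task earliest
        best = min(range(len(server_speeds)),
                   key=lambda k: finish[k] + cycles / server_speeds[k])
        finish[best] += cycles / server_speeds[best]
        plan.append(best)
    return plan, max(finish)
```

The makespan returned here corresponds to the latency component of the service cost; the paper additionally accounts for a service failure penalty and learns the offloading decisions rather than fixing a greedy rule.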
Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Due to the constraints imposed by analog processing, designing CS matrices that efficiently exploit the channel structure is, however, challenging. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use the random phase shift-based design.
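The analog-processing constraint mentioned above means each entry of the CS matrix is realized by a phase shifter, i.e., it must have unit modulus. A minimal sketch of that setup, with a random-phase matrix (the baseline the paper improves on) and a correlation-based beam pick over an assumed DFT codebook:

```python
import numpy as np

def random_phase_matrix(m, n, rng):
    """Analog CS matrix: entries constrained to unit modulus (phase
    shifters only), scaled so columns have unit norm on average."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, size=(m, n))) / np.sqrt(n)

def pick_beam(y, A, codebook):
    """Choose the codebook beam whose compressed response best matches y.

    Correlates the measurement y against A @ codebook, i.e., each beam's
    expected signature in the low-dimensional measurement space.
    """
    scores = np.abs((A @ codebook).conj().T @ y)
    return int(np.argmax(scores))
```

With far fewer measurements than antennas (m < n), a single-path channel aligned with one codebook beam is typically recovered from the compressed measurements alone; the paper's contribution is to learn a structured matrix matched to the vehicular channel distribution instead of using random phases.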
The combination of cloud computing capabilities at the network edge and artificial intelligence promises to turn future mobile networks into service- and radio-aware entities, able to address the requirements of upcoming latency-sensitive applications. In this context, a challenging research goal is to exploit edge intelligence to dynamically and optimally manage Radio Access Network Slicing (a less mature and more complex technology than fifth-generation Network Slicing) and Radio Resource Management, a very complex task due to the largely unpredictable nature of the wireless channel. This paper presents a novel architecture that leverages Deep Reinforcement Learning at the edge of the network to address Radio Access Network Slicing and Radio Resource Management optimization in support of latency-sensitive applications. The effectiveness of our proposal against baseline methodologies is investigated through computer simulation, considering an autonomous-driving use case.
Vehicle tracking has become one of the key applications of wireless sensor networks (WSNs) in the fields of rescue, surveillance, traffic monitoring, etc. However, increased tracking accuracy requires more energy consumption. In this letter, a decentralized vehicle tracking strategy is conceived to improve both tracking accuracy and energy saving by adjusting the intersection area between the fixed sensing area and the dynamic activation area. Two deep reinforcement learning (DRL)-aided solutions are then proposed, relying on dynamic selection of the activation area radius. Finally, simulation results show the superiority of our DRL-aided design.
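The quantity being adjusted, the overlap between a fixed sensing disc and a dynamic activation disc, is the standard two-circle intersection ("lens") area. A self-contained helper, assuming both areas are modeled as discs:

```python
import math

def circle_intersection_area(r1, r2, d):
    """Area of overlap between a sensing disc (radius r1) and an
    activation disc (radius r2) whose centres are distance d apart."""
    if d >= r1 + r2:
        return 0.0                      # discs are disjoint
    if d <= abs(r1 - r2):
        r = min(r1, r2)
        return math.pi * r * r          # one disc contained in the other
    # half-angles subtended by the chord at each centre
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    # sum of the two circular segments forming the lens
    return (r1 * r1 * (a1 - math.sin(2 * a1) / 2)
            + r2 * r2 * (a2 - math.sin(2 * a2) / 2))
```

In a DRL formulation of the kind described above, the activation radius r2 would be the action and this overlap would enter the reward, trading tracking accuracy against the energy cost of activating sensors.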
