
Deep Reinforcement Learning Based Mode Selection and Resource Allocation for Cellular V2X Communications

Added by Xinran Zhang
Publication date: 2020
Research language: English





Cellular vehicle-to-everything (V2X) communication is crucial to support future diverse vehicular applications. However, for safety-critical applications, unstable vehicle-to-vehicle (V2V) links and the high signalling overhead of centralized resource allocation approaches become bottlenecks. In this paper, we investigate a joint optimization problem of transmission mode selection and resource allocation for cellular V2X communications. In particular, the problem is formulated as a Markov decision process, and a deep reinforcement learning (DRL) based decentralized algorithm is proposed to maximize the sum capacity of vehicle-to-infrastructure users while meeting the latency and reliability requirements of V2V pairs. Moreover, considering the training limitations of local DRL models, a two-timescale federated DRL algorithm is developed to obtain a robust model: a graph-theory-based vehicle clustering algorithm is executed on the large timescale, and the federated learning algorithm is conducted on the small timescale. Simulation results show that the proposed DRL-based algorithm outperforms other decentralized baselines and validate the superiority of the two-timescale federated DRL algorithm for newly activated V2V pairs.
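To make the two-timescale structure concrete, here is a minimal sketch under assumptions of our own (a small per-agent Q-network, a discrete action that jointly encodes transmission mode and resource block, plain parameter averaging within a fixed cluster) of how a decentralized V2V agent could select its mode and resource block and how federated averaging could run on the small timescale. It is an illustrative skeleton, not the authors' algorithm.

```python
import numpy as np

# Illustrative skeleton (not the paper's code): each V2V agent holds a small
# Q-network whose discrete action jointly encodes a transmission mode
# (e.g., reuse a cellular link vs. a dedicated sidelink resource) and a
# resource-block choice.
N_MODES, N_RB = 2, 4                 # assumed action space: 2 modes x 4 resource blocks
STATE_DIM, HIDDEN = 8, 32            # assumed size of the local observation

def init_weights(rng):
    return {
        "W1": rng.normal(0, 0.1, (STATE_DIM, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0, 0.1, (HIDDEN, N_MODES * N_RB)),
        "b2": np.zeros(N_MODES * N_RB),
    }

def q_values(w, state):
    h = np.maximum(0.0, state @ w["W1"] + w["b1"])   # ReLU hidden layer
    return h @ w["W2"] + w["b2"]                     # one Q-value per (mode, RB) pair

def select_action(w, state, rng, eps=0.1):
    """Epsilon-greedy choice of (transmission mode, resource block)."""
    if rng.random() < eps:
        a = int(rng.integers(N_MODES * N_RB))
    else:
        a = int(np.argmax(q_values(w, state)))
    return divmod(a, N_RB)                           # -> (mode index, RB index)

def federated_average(cluster_weights):
    """Small-timescale step: average model parameters within one vehicle cluster."""
    return {k: np.mean([w[k] for w in cluster_weights], axis=0)
            for k in cluster_weights[0]}

# Large-timescale step (placeholder): the graph-based clustering would regroup
# agents here; a single fixed cluster of three agents is kept for illustration.
rng = np.random.default_rng(0)
agents = [init_weights(rng) for _ in range(3)]
global_model = federated_average(agents)
mode, rb = select_action(global_model, rng.normal(size=STATE_DIM), rng)
print(f"selected mode={mode}, resource block={rb}")
```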



Related research

The research efforts on cellular vehicle-to-everything (V2X) communications are gaining momentum with each passing year. It is considered a paradigm-altering approach to connect a large number of vehicles with minimal cost of deployment and maintenance. This article aims to further push the state of the art of cellular V2X communications by providing an optimization framework for wireless charging, power allocation, and resource block assignment. Specifically, we design a network model where roadside objects use wireless power from the RF signals of electric vehicles for charging and information processing. Moreover, due to the resource-constrained nature of cellular V2X, power allocation and resource block assignment are performed to use the resources efficiently. The proposed optimization framework shows an improvement in the overall energy efficiency of the network when compared with the baseline technique. The performance gains of the proposed solution clearly demonstrate its feasibility and utility for cellular V2X communications.
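As a rough illustration of the energy-efficiency metric such a framework would optimize, the sketch below computes the sum rate per unit of net consumed power for a toy set of V2X links; the linear RF harvesting model, per-link powers, channel gains, and 180 kHz resource-block bandwidth are assumptions of ours, not values from the article.

```python
import numpy as np

# Toy energy-efficiency calculation (illustrative assumptions only): a few V2X
# links transmit over assigned resource blocks, and roadside objects harvest a
# fraction of the radiated RF power, reducing the net power the network spends.
bandwidth_hz = 180e3                       # assumed per-resource-block bandwidth
tx_power_w = np.array([0.2, 0.5, 0.1])     # assumed per-link transmit powers
channel_gain = np.array([1e-6, 5e-7, 2e-6])
noise_w = 1e-13
harvest_eff = 0.3                          # assumed linear RF harvesting efficiency

snr = tx_power_w * channel_gain / noise_w
sum_rate = np.sum(bandwidth_hz * np.log2(1.0 + snr))   # total throughput in bits/s
harvested = harvest_eff * 0.01 * np.sum(tx_power_w)    # assume 1% of radiated power reaches harvesters
net_power = np.sum(tx_power_w) - harvested             # power actually drawn from batteries/grid
print(f"energy efficiency ~ {sum_rate / net_power:.2e} bits/Joule")
```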
Vehicular edge computing (VEC) is envisioned as a promising approach to process the explosive computation tasks of vehicular users (VUs). In a VEC system, each VU allocates power to process part of its tasks through offloading and the remaining tasks through local execution. During offloading, each VU adopts a multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) channel to improve spectrum efficiency and capacity. However, the channel condition is uncertain due to the interference among VUs caused by the MIMO-NOMA channel and the time-varying path loss caused by each VU's mobility. In addition, the task arrival of each VU is stochastic in the real world. The stochastic task arrivals and uncertain channel conditions greatly affect the power consumption and task latency of each VU. It is therefore critical to design a power allocation scheme that accounts for stochastic task arrivals and channel variation in order to optimize the long-term reward, comprising power consumption and latency, in MIMO-NOMA VEC. Different from traditional centralized deep reinforcement learning (DRL) based schemes, this paper constructs a decentralized DRL framework to formulate the power allocation optimization problem, where local observations are selected as the state. The deep deterministic policy gradient (DDPG) algorithm is adopted to learn the optimal power allocation scheme based on the decentralized DRL framework. Simulation results demonstrate that the proposed power allocation scheme outperforms existing schemes.
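A minimal sketch of the kind of decentralized actor such a DDPG scheme would train, under our own assumptions (a tiny two-layer policy, Gaussian exploration noise, a 1 W power budget, a six-dimensional local observation); it only illustrates how a local observation could be mapped to offloading and local-execution power levels, not the paper's implementation.

```python
import numpy as np

# Illustrative decentralized actor for power allocation (not the paper's DDPG
# code): each vehicular user maps its local observation to two power levels,
# one for task offloading over the MIMO-NOMA channel and one for local execution.
OBS_DIM, HIDDEN, P_MAX = 6, 32, 1.0        # assumed sizes and power budget (W)

rng = np.random.default_rng(0)
actor = {
    "W1": rng.normal(0, 0.1, (OBS_DIM, HIDDEN)), "b1": np.zeros(HIDDEN),
    "W2": rng.normal(0, 0.1, (HIDDEN, 2)),       "b2": np.zeros(2),
}

def act(obs, noise_std=0.05):
    """Deterministic policy plus Gaussian exploration noise, squashed into [0, P_MAX]."""
    h = np.tanh(obs @ actor["W1"] + actor["b1"])
    raw = h @ actor["W2"] + actor["b2"] + rng.normal(0, noise_std, 2)
    p_offload, p_local = P_MAX * (np.tanh(raw) + 1.0) / 2.0
    return p_offload, p_local

obs = rng.normal(size=OBS_DIM)             # stand-in for queue length, channel gain, etc.
p_off, p_loc = act(obs)
print(f"offloading power {p_off:.3f} W, local-execution power {p_loc:.3f} W")
```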
Cellular vehicle-to-everything (V2X) communication is expected to herald the age of autonomous vehicles in the coming years. With the integration of blockchain in such networks, information of all granularity levels, from complete blocks to individual transactions, would be accessible to vehicles at any time. Specifically, blockchain technology is expected to improve the security, immutability, and decentralization of cellular V2X communication through smart contracts and distributed ledgers. Although blockchain-based cellular V2X networks hold promise, many challenges need to be addressed to enable the future interoperability and accessibility of such large-scale platforms. One such challenge is the offloading of mining tasks in cellular V2X networks. While transportation authorities may try to balance the network mining load, vehicles may simply select the nearest mining clusters to offload a task. This can cause congestion and disproportionate use of vehicular network resources. To address this issue, we propose a game-theoretic approach for balancing the load at mining clusters while maintaining fairness among offloading vehicles. Keeping in mind the low-latency requirements of vehicles, we consider finite-blocklength transmission, which is more practical than the use of infinite-blocklength codes. The simulation results obtained with our proposed offloading framework show improved performance over the conventional nearest-mining-cluster selection technique.
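To illustrate the load-balancing idea in isolation, the sketch below runs a simple best-response loop in which vehicles repeatedly switch to the mining cluster with the lowest delay-plus-congestion cost. The cost model and the 6-vehicle/3-cluster setup are assumptions for illustration, and the finite-blocklength rate model used in the paper is omitted.

```python
import numpy as np

# Best-response sketch of mining-load balancing (the paper's actual game
# formulation is not reproduced here; payoffs and sizes below are assumptions).
rng = np.random.default_rng(1)
n_vehicles, n_clusters = 6, 3
delay_to_cluster = rng.uniform(1.0, 5.0, (n_vehicles, n_clusters))  # assumed access delay (ms)
congestion_per_task = 2.0                                           # assumed queuing delay per offloaded task (ms)

choice = np.argmin(delay_to_cluster, axis=1)         # start from "nearest cluster" selection
for _ in range(20):                                  # best-response iterations
    changed = False
    for v in range(n_vehicles):
        load = np.bincount(np.delete(choice, v), minlength=n_clusters)
        cost = delay_to_cluster[v] + congestion_per_task * load      # own delay + congestion from others
        best = int(np.argmin(cost))
        if best != choice[v]:
            choice[v], changed = best, True
    if not changed:                                  # pure-strategy equilibrium reached
        break

print("cluster chosen by each vehicle:", choice.tolist())
print("load per cluster:", np.bincount(choice, minlength=n_clusters).tolist())
```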
LoRa wireless networks are considered a key enabling technology for next-generation Internet of Things (IoT) systems. New IoT deployments (e.g., smart city scenarios) can have thousands of devices per square kilometer, leading to a huge amount of power consumption to provide connectivity. In this paper, we investigate green LoRa wireless networks powered by a hybrid of the grid and renewable energy sources, which can benefit from harvested energy while dealing with its intermittent supply. This paper proposes resource management schemes for the limited number of channels and spreading factors (SFs) with the objective of improving the LoRa gateway's energy efficiency. First, the problem of minimizing grid power consumption while satisfying the system's quality-of-service demands is formulated. Specifically, both the uncorrelated and the time-correlated channel scenarios are investigated. The optimal resource management problem is solved by decoupling the formulated problem into two sub-problems: a channel and SF assignment problem and an energy management problem. Since the optimal solution is obtained with high complexity, online resource management heuristic algorithms that minimize the grid energy consumption are proposed. Finally, taking into account the channel and energy correlation, adaptable resource management schemes based on reinforcement learning (RL) are developed. Simulation results show that the proposed resource management schemes offer efficient use of renewable energy in LoRa wireless networks.
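A toy sketch of the kind of SF assignment such schemes decide on, under our own simplifying assumptions (typical LoRa receiver sensitivities, rough airtimes for a short payload, a fixed gateway power draw, and a single renewable-energy budget). It greedily picks the lowest feasible SF per device and estimates the residual grid energy, rather than reproducing the paper's optimal or RL-based schemes.

```python
import numpy as np

# Illustrative heuristic (not the paper's algorithm): give each LoRa device the
# lowest spreading factor whose sensitivity its link budget can support, then
# estimate how much of the gateway's energy draw the harvested supply covers.
SF_LIST = [7, 8, 9, 10, 11, 12]
SENSITIVITY_DBM = {7: -123, 8: -126, 9: -129, 10: -132, 11: -134.5, 12: -137}  # typical values

rng = np.random.default_rng(2)
rx_power_dbm = rng.uniform(-140, -115, size=10)      # assumed received powers for 10 devices

def assign_sf(p_rx):
    """Pick the lowest SF (shortest airtime, lowest energy) that still closes the link."""
    for sf in SF_LIST:
        if p_rx >= SENSITIVITY_DBM[sf]:
            return sf
    return None                                      # device out of range

assignment = [assign_sf(p) for p in rx_power_dbm]
airtime_ms = {7: 61, 8: 113, 9: 206, 10: 371, 11: 741, 12: 1319}   # rough airtimes, short payload
energy_j = sum(0.4 * airtime_ms[sf] / 1e3 for sf in assignment if sf)  # assume 0.4 W receive power draw
harvested_j = 1.5                                    # assumed renewable energy available this slot
print("SF assignment:", assignment)
print(f"grid energy needed ~ {max(0.0, energy_j - harvested_j):.3f} J")
```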
In the emerging high-mobility Vehicle-to-Everything (V2X) communications using millimeter wave (mmWave) and sub-THz bands, Multiple-Input Multiple-Output (MIMO) channel estimation is an extremely challenging task. At mmWave/sub-THz frequencies, MIMO channels exhibit a few leading paths in the space-time domain (i.e., directions of arrival/departure and delays). Algebraic low-rank (LR) channel estimation exploits space-time channel sparsity through the computation of position-dependent MIMO channel eigenmodes, leveraging recurrent training vehicle passages in the coverage cell. LR requires vehicles' geographical positions and tens to hundreds of training vehicle passages for each position, leading to significant complexity and control signalling overhead. Here we design a DL-based LR channel estimation method to infer MIMO channel eigenmodes in V2X urban settings, starting from a single LS channel estimate and without needing vehicle position information. Numerical results show that the proposed method attains Mean Squared Error (MSE) performance comparable to the position-based LR. Moreover, we show that the proposed model can be trained on a reference scenario and effectively transferred to urban contexts with different space-time channel features, providing comparable MSE performance without an explicit transfer learning procedure. This result eases deployment in arbitrary dense urban scenarios.
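The sketch below illustrates the low-rank projection that the DL model is trained to emulate, under assumed parameters (16 antennas, 3 multipath components, 200 past channel samples): it builds a sample spatial covariance, keeps the dominant eigenmodes, and denoises a single noisy LS estimate by projecting it onto that subspace.

```python
import numpy as np

# Illustrative low-rank (LR) projection step (assumed setup, not the paper's
# pipeline): past channel samples for one position give the dominant spatial
# eigenmodes, and a single noisy LS estimate is projected onto that subspace.
rng = np.random.default_rng(3)
n_ant, n_paths, n_samples, rank = 16, 3, 200, 3

steering = rng.normal(size=(n_ant, n_paths)) + 1j * rng.normal(size=(n_ant, n_paths))
def draw_channel():
    gains = (rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)) / np.sqrt(2)
    return steering @ gains                       # sparse multipath channel realization

training = np.stack([draw_channel() for _ in range(n_samples)])
cov = training.conj().T @ training / n_samples    # sample spatial covariance
eigvals, eigvecs = np.linalg.eigh(cov)
U = eigvecs[:, -rank:]                            # dominant eigenmodes

h_true = draw_channel()
h_ls = h_true + 0.5 * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))  # noisy LS estimate
h_lr = U @ (U.conj().T @ h_ls)                    # project the LS estimate onto the LR subspace

mse = lambda a, b: np.mean(np.abs(a - b) ** 2)
print(f"LS MSE = {mse(h_ls, h_true):.3f}")
print(f"LR MSE = {mse(h_lr, h_true):.3f}")
```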