
Conflict-free Cooperation Method for Connected and Automated Vehicles at Unsignalized Intersections: Graph-based Modeling and Optimality Analysis

Posted by Chaoyi Chen
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Connected and automated vehicles have shown great potential in improving traffic mobility and reducing emissions, especially at unsignalized intersections. Previous research has shown that the vehicle passing order is the key factor influencing intersection traffic mobility. In this paper, we propose a graph-based cooperation method to formalize the conflict-free scheduling problem at an unsignalized intersection. Based on graphical analysis, the vehicles' trajectory conflict relationships are modeled as a conflict directed graph and a coexisting undirected graph. Then, two graph-based methods are proposed to find the vehicle passing order. The first is an improved depth-first spanning tree algorithm, which aims to find a locally optimal passing order vehicle by vehicle. The other, novel method is a minimum clique cover algorithm, which identifies the globally optimal solution. Finally, a distributed control framework and communication topology are presented to realize the conflict-free cooperation of vehicles. Extensive numerical simulations are conducted for various numbers of vehicles and traffic volumes, and the simulation results prove the effectiveness of the proposed algorithms.
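The graph construction described above can be illustrated with a short sketch. The snippet below is a rough illustration, not the paper's implementation: it encodes hypothetical pairwise trajectory conflicts, builds the coexisting undirected graph, and derives a passing order by covering that graph with cliques. Since a clique cover of a graph is exactly a proper coloring of its complement, a greedy coloring heuristic is reused. The vehicle IDs and conflict pairs are assumed, and the paper's directed conflict graph is simplified to an undirected one here.

```python
# A minimal sketch (not the paper's implementation): model conflict-free
# scheduling at an unsignalized intersection with two graphs and derive a
# passing order via a greedy clique cover of the coexisting graph.
import networkx as nx

# Hypothetical vehicles and pairwise trajectory conflicts (assumed data).
vehicles = ["v1", "v2", "v3", "v4", "v5"]
conflicts = [("v1", "v2"), ("v1", "v4"), ("v2", "v3"), ("v4", "v5")]

# Coexisting undirected graph: an edge means two vehicles can pass together.
coexist = nx.Graph()
coexist.add_nodes_from(vehicles)
coexist.add_edges_from(
    (a, b) for a in vehicles for b in vehicles
    if a < b and (a, b) not in conflicts and (b, a) not in conflicts
)

# A clique cover of the coexisting graph corresponds to a proper coloring of
# its complement (here, an undirected conflict graph), so we reuse a greedy
# coloring heuristic.
conflict_graph = nx.complement(coexist)
coloring = nx.greedy_color(conflict_graph, strategy="largest_first")

# Group vehicles by color: each group is a clique of mutually compatible
# vehicles that may enter the intersection in the same passing "slot".
passing_order = {}
for vehicle, slot in coloring.items():
    passing_order.setdefault(slot, []).append(vehicle)

for slot in sorted(passing_order):
    print(f"slot {slot}: {sorted(passing_order[slot])}")
```

With the assumed conflict pairs above, the sketch groups v1, v3, and v5 into one slot and v2 and v4 into the next, which is the kind of compact passing order the clique-cover view is meant to expose.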


Read also

Connected vehicles will change the modes of future transportation management and organization, especially at intersections without traffic lights. Centralized coordination methods globally coordinate vehicles approaching the intersection from all sections by considering their states altogether. However, they need substantial computation resources since they rely on a centralized controller to optimize the trajectories of all approaching vehicles in real time. In this paper, we propose a centralized coordination scheme for automated vehicles at an intersection without traffic signals using reinforcement learning (RL) to address the low computation efficiency suffered by current centralized coordination methods. We first propose an RL training algorithm, model accelerated proximal policy optimization (MA-PPO), which incorporates a prior model into the proximal policy optimization (PPO) algorithm to accelerate the learning process in terms of sample efficiency. Then we present the design of the state, action, and reward to formulate centralized coordination as an RL problem. Finally, we train a coordination policy in a simulation setting and compare computing time and traffic efficiency with a coordination scheme based on model predictive control (MPC). Results show that our method spends only 1/400 of the computing time of MPC and increases the efficiency of the intersection by 4.5 times.
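As a rough sketch of how such a coordination task can be cast as an RL problem (the exact state, action, and reward design of the MA-PPO paper is not reproduced here; the quantities and constants below are illustrative assumptions), one might encode the approaching vehicles' kinematics as the state, acceleration commands as the action, and reward speed tracking while penalizing conflicts:

```python
# Illustrative sketch only: one plausible state/action/reward encoding for
# centralized intersection coordination, not the formulation used in the paper.
import numpy as np

N_VEHICLES = 4    # assumed number of simultaneously coordinated vehicles
V_REF = 10.0      # assumed reference speed [m/s]
ZONE_HALF = 2.0   # assumed half-length of the shared conflict zone [m]

def make_state(dist_to_zone, speeds):
    """State: distance to the conflict zone and speed of every approaching vehicle."""
    return np.concatenate([dist_to_zone, speeds]).astype(np.float32)

def apply_action(dist_to_zone, speeds, accels, dt=0.1):
    """Action: one longitudinal acceleration command per vehicle (kinematic update)."""
    speeds = np.clip(speeds + accels * dt, 0.0, None)
    dist_to_zone = dist_to_zone - speeds * dt
    # Crude conflict test: more than one vehicle inside the shared zone at once.
    collided = np.count_nonzero(np.abs(dist_to_zone) < ZONE_HALF) > 1
    return dist_to_zone, speeds, collided

def step_reward(speeds, collided):
    """Reward: track the reference speed, heavily penalize any conflict."""
    return -float(np.mean((speeds - V_REF) ** 2)) - (100.0 if collided else 0.0)
```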
We propose a fully distributed control system architecture, amenable to in-vehicle implementation, that aims to safely coordinate connected and automated vehicles (CAVs) at road intersections. For control purposes, we build upon a fully distributed model predictive control approach, in which the agents solve a nonconvex optimal control problem (OCP) locally and synchronously, and exchange their optimized trajectories via vehicle-to-vehicle (V2V) communication. To enable a fast solution of the nonconvex OCPs, we apply the penalty convex-concave procedure, which solves a convexified version of the original OCP. For experimental evaluation, we complement the predictive controller with a localization layer, which is in charge of self-localization and the estimation of joint collision points with other agents. Moreover, we design a proprietary communication protocol to exchange trajectories with other agents. Experimental tests reveal the efficacy of the proposed control system architecture.
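The synchronous solve-and-exchange pattern described above can be sketched structurally as follows. This is only an illustration under simplifying assumptions: each agent's local OCP is replaced by a trivial constant-speed plan that backs off when a higher-priority agent's communicated plan gets too close, and the V2V exchange is a plain dictionary; neither the nonconvex OCP nor the penalty convex-concave procedure is implemented here.

```python
# Structural sketch only: synchronous "solve locally, then exchange trajectories"
# loop of a distributed intersection coordinator. The local solver below is a
# placeholder, not the nonconvex OCP or the penalty convex-concave procedure.
import numpy as np

HORIZON, DT, D_MIN = 20, 0.2, 6.0   # assumed horizon steps, step size [s], spacing [m]

def local_solve(agent, others):
    """Placeholder local 'OCP': constant-speed plan, slowed down if a higher-priority
    agent's communicated plan comes too close along the horizon."""
    speed = agent["speed"]
    plan = agent["dist"] - speed * DT * np.arange(HORIZON)
    for other_id, other_plan in others.items():
        if other_id < agent["id"]:               # crude fixed priority rule
            if np.any(np.abs(other_plan - plan) < D_MIN):
                speed *= 0.5                     # back off; a real OCP would optimize this
                plan = agent["dist"] - speed * DT * np.arange(HORIZON)
    return plan

agents = [{"id": i, "dist": 30.0 + 5.0 * i, "speed": 10.0} for i in range(3)]
shared = {a["id"]: local_solve(a, {}) for a in agents}   # initial broadcast

for _ in range(5):                               # synchronous negotiation rounds
    shared = {a["id"]: local_solve(a, {k: v for k, v in shared.items() if k != a["id"]})
              for a in agents}

for i, traj in shared.items():
    print(f"agent {i}: planned distance {traj[0]:.1f} m -> {traj[-1]:.1f} m")
```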
In this paper, we address the much-anticipated deployment of connected and automated vehicles (CAVs) in society by modeling and analyzing the social-mobility dilemma in a game-theoretic approach. We formulate this dilemma as a normal-form game of players making a binary decision, whether to travel with a CAV (CAV travel) or not (non-CAV travel), and construct an intuitive payoff function inspired by the socially beneficial outcomes of a mobility system consisting of CAVs. We show that the game is equivalent to the Prisoner's dilemma, which implies that the rational collective decision is the opposite of the social optimum. We present two different solutions to tackle this phenomenon: one with a preference structure and the other with institutional arrangements. In the first approach, we implement a social mechanism that incentivizes players to choose non-CAV travel and derive a lower bound on the number of players that ensures an equilibrium of non-CAV travel. In the second approach, we investigate the possibility of players bargaining to create an institution that enforces non-CAV travel and show that as the number of players increases, the incentive ratio of non-CAV travel over CAV travel tends to zero. We conclude by showcasing the last result with a numerical study.
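To make the dilemma structure concrete, the short sketch below encodes a two-player normal-form game with hypothetical payoffs (the numbers are illustrative, not taken from the paper) and checks that CAV travel is each player's dominant strategy even though mutual non-CAV travel yields the higher joint payoff, which is exactly the Prisoner's dilemma pattern the abstract refers to.

```python
# Minimal sketch (illustrative payoffs, not the paper's model): verify that a
# 2x2 CAV / non-CAV game has the Prisoner's dilemma structure, i.e. CAV travel is
# each player's dominant strategy while mutual non-CAV travel is socially better.
from itertools import product

ACTIONS = ("CAV", "non-CAV")

# payoff[(a1, a2)] = (payoff to player 1, payoff to player 2); assumed values.
payoff = {
    ("CAV", "CAV"): (1, 1),
    ("CAV", "non-CAV"): (4, 0),
    ("non-CAV", "CAV"): (0, 4),
    ("non-CAV", "non-CAV"): (3, 3),
}

def is_dominant(action, player):
    """True if `action` is at least as good as the alternative for `player`
    against every choice of the opponent."""
    other = [a for a in ACTIONS if a != action][0]
    for opp in ACTIONS:
        profile_a = (action, opp) if player == 0 else (opp, action)
        profile_b = (other, opp) if player == 0 else (opp, other)
        if payoff[profile_a][player] < payoff[profile_b][player]:
            return False
    return True

social_optimum = max(product(ACTIONS, repeat=2), key=lambda p: sum(payoff[p]))

print("CAV dominant for player 1:", is_dominant("CAV", 0))   # True
print("CAV dominant for player 2:", is_dominant("CAV", 1))   # True
print("Social optimum profile:", social_optimum)             # ('non-CAV', 'non-CAV')
```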
Connected and Automated Hybrid Electric Vehicles have the potential to reduce fuel consumption and travel time in real-world driving conditions. The eco-driving problem seeks to design optimal speed and power usage profiles based upon look-ahead information from connectivity and advanced mapping features. Recently, Deep Reinforcement Learning (DRL) has been applied to the eco-driving problem. While previous studies combine simulators and model-free DRL to reduce online computation, this work proposes a Safe Off-policy Model-Based Reinforcement Learning algorithm for the eco-driving problem. The advantages over the existing literature are three-fold. First, the combination of off-policy learning and the use of a physics-based model improves the sample efficiency. Second, the training does not require any extrinsic rewarding mechanism for constraint satisfaction. Third, the feasibility of the trajectory is guaranteed by using a safe set approximated by deep generative models. The performance of the proposed method is benchmarked against a baseline controller representing human drivers, a previously designed model-free DRL strategy, and the wait-and-see optimal solution. In simulation, the proposed algorithm leads to a policy with a higher average speed and better fuel economy compared to the model-free agent. Compared to the baseline controller, the learned strategy reduces fuel consumption by more than 21% while keeping the average speed comparable.
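A heavily simplified illustration of the safe-set idea mentioned above: before applying a learned acceleration, a physics-based rollout checks whether the action keeps the vehicle able to stop before an assumed hard constraint (for example, a stop line), and falls back to braking otherwise. The constraint, the model, and the fallback action are assumptions for illustration; the paper approximates its safe set with deep generative models rather than this closed-form check.

```python
# Simplified sketch of a safety filter for eco-driving (illustrative assumptions;
# the paper approximates the safe set with deep generative models, not this check).
A_MIN = -3.0   # assumed maximum braking deceleration [m/s^2]
DT = 0.5       # assumed control step [s]

def can_stop_before(dist_to_stop, speed):
    """Physics-based check: stopping distance v^2 / (2*|a_min|) must fit in the gap."""
    return speed ** 2 / (2.0 * abs(A_MIN)) <= dist_to_stop

def safe_action(proposed_accel, dist_to_stop, speed):
    """Apply the learned action only if the next state remains recoverable."""
    next_speed = max(0.0, speed + proposed_accel * DT)
    next_dist = dist_to_stop - 0.5 * (speed + next_speed) * DT
    if next_dist > 0.0 and can_stop_before(next_dist, next_speed):
        return proposed_accel          # proposed action keeps the state in the safe set
    return A_MIN                       # fall back to maximum braking

# Example: 40 m from the stop line at 15 m/s, the policy proposes mild acceleration;
# the filter rejects it and commands braking instead.
print(safe_action(proposed_accel=0.5, dist_to_stop=40.0, speed=15.0))
```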
For the foreseeable future, autonomous vehicles (AVs) will operate in traffic together with human-driven vehicles. Their planning and control systems need extensive testing, including early-stage testing in simulations where the interactions among autonomous/human-driven vehicles are represented. Motivated by the need for such simulation tools, we propose a game-theoretic approach to modeling vehicle interactions, in particular, for urban traffic environments with unsignalized intersections. We develop traffic models with heterogeneous (in terms of their driving styles) and interactive vehicles based on our proposed approach, and use them for virtual testing, evaluation, and calibration of AV control systems. For illustration, we consider two AV control approaches, analyze their characteristics and performance based on the simulation results with our developed traffic models, and optimize the parameters of one of them.