Prior research has extensively explored Autonomous Vehicle (AV) navigation in the presence of other vehicles; however, navigation among pedestrians, the most vulnerable road users in urban environments, has received less attention. This paper explores AV navigation in crowded, unsignalized intersections. We compare the performance of different deep reinforcement learning methods trained on our reward function and state representation. The performance of these methods and a standard rule-based approach is evaluated in two ways: first at the unsignalized intersection on which the methods were trained, and second at a previously unseen unsignalized intersection with a different topology. In both scenarios, the rule-based method achieves less than 40% collision-free episodes, whereas our methods achieve approximately 100%. Of the three methods used, DDQN/PER outperforms the other two, while also showing the smallest average intersection crossing time, the highest average speed, and the greatest distance from the closest pedestrian.
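To make the comparison concrete, the following is a minimal sketch of the core update behind a DDQN/PER agent: a transition is sampled in proportion to its priority, the double-DQN target decouples action selection from evaluation, and an importance-sampling weight corrects the bias from prioritized sampling. All names, parameter values, and the toy tabular Q-functions are illustrative assumptions; the paper's actual reward function, state representation, and network architecture are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-functions represented as tables over 5 discrete states and 3 actions
# (stand-ins for the online and target networks).
q_online = rng.normal(size=(5, 3))
q_target = rng.normal(size=(5, 3))

# Replay buffer of (s, a, r, s_next, done) transitions with proportional priorities.
buffer = [(0, 1, 0.5, 1, False), (1, 2, -1.0, 2, False), (2, 0, 1.0, 3, True)]
priorities = np.array([1.0, 1.0, 1.0])
alpha, beta, gamma = 0.6, 0.4, 0.99

# Sample a transition with probability proportional to priority^alpha.
probs = priorities ** alpha / np.sum(priorities ** alpha)
idx = rng.choice(len(buffer), p=probs)
s, a, r, s_next, done = buffer[idx]

# Double-DQN target: the online network selects the action,
# the target network evaluates it.
a_star = int(np.argmax(q_online[s_next]))
td_target = r + (0.0 if done else gamma * q_target[s_next, a_star])
td_error = td_target - q_online[s, a]

# Importance-sampling weight corrects the bias introduced by prioritized sampling.
weight = (len(buffer) * probs[idx]) ** (-beta)
q_online[s, a] += 0.1 * weight * td_error  # stand-in for a gradient step

# Update the priority of the sampled transition from the new TD error.
priorities[idx] = abs(td_error) + 1e-6
```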
We propose a safe deep reinforcement learning (DRL) approach for autonomous vehicle (AV) navigation through crowds of pedestrians while making a left turn at an unsignalized intersection. Our method uses two long short-term memory (LSTM) models trained to generate the perceived state of the environment and the future trajectories of pedestrians from noisy observations of their movement. A future collision prediction algorithm, based on the predicted trajectories of the ego vehicle and the pedestrians, masks unsafe actions whenever the system predicts a collision. The performance of our approach is evaluated in two experiments using the high-fidelity CARLA simulation environment: the first tests our method at intersections similar to the training intersection, and the second tests it at intersections with a different topology. In both experiments, our method never collides with a pedestrian while still navigating the intersection at a reasonable speed.
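The action-masking idea can be illustrated with a short sketch: candidate ego actions are rolled out against predicted pedestrian trajectories, and any action whose rollout comes within a safety radius of a pedestrian is masked. The constant-velocity ego rollout, the placeholder pedestrian forecast, and all parameter values are assumptions for illustration; the paper's LSTM predictors are not reproduced.

```python
import numpy as np

def ego_rollout(pos, heading, speed, horizon, dt=0.1):
    """Predict ego positions for a candidate speed over the horizon."""
    steps = np.arange(1, horizon + 1) * dt
    direction = np.array([np.cos(heading), np.sin(heading)])
    return pos + speed * np.outer(steps, direction)

def mask_unsafe_actions(ego_pos, heading, candidate_speeds,
                        pedestrian_trajs, safety_radius=1.5, horizon=20):
    """Return a boolean mask: True where the action keeps every pedestrian
    outside the safety radius at every predicted time step."""
    mask = []
    for speed in candidate_speeds:
        ego_traj = ego_rollout(ego_pos, heading, speed, horizon)      # (T, 2)
        gaps = np.linalg.norm(pedestrian_trajs - ego_traj[None], axis=-1)
        mask.append(bool(np.min(gaps) > safety_radius))
    return np.array(mask)

# Toy example: one pedestrian forecast crossing the ego path at x = 5 m.
ped_traj = np.stack([np.linspace(5, 5, 20), np.linspace(-2, 2, 20)], axis=-1)
mask = mask_unsafe_actions(np.zeros(2), 0.0, [0.0, 2.0, 6.0], ped_traj[None])
print(mask)  # the two slower speeds remain allowed; the fast one is masked out
```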
The decision-making module enables autonomous vehicles to select appropriate maneuvers in complex urban environments, especially at intersections. This work proposes a deep reinforcement learning (DRL) based left-turn decision-making framework for autonomous vehicles at unsignalized intersections. The objective of the studied automated vehicle is to perform an efficient and safe left-turn maneuver at a four-way unsignalized intersection. The exploited DRL methods include deep Q-learning (DQL) and double DQL. Simulation results indicate that the presented decision-making strategy effectively reduces the collision rate and improves transport efficiency. This work also shows that the constructed left-turn control structure has great potential for real-time application.
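The difference between the two exploited methods can be stated explicitly. Standard DQL bootstraps from a max over the target network, which tends to overestimate action values, whereas double DQL lets the online network (parameters θ) select the action and the target network (parameters θ⁻) evaluate it. A sketch of the two targets in conventional notation (not taken from the paper):

```latex
y_t^{\mathrm{DQL}}  = r_t + \gamma \max_{a'} Q(s_{t+1}, a'; \theta^-)
\qquad
y_t^{\mathrm{DDQL}} = r_t + \gamma \, Q\!\left(s_{t+1}, \operatorname*{arg\,max}_{a'} Q(s_{t+1}, a'; \theta); \theta^-\right)
```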
This paper describes a novel method that allows an autonomous ground vehicle to predict the intent of other agents in an urban environment. This method, termed the cognitive driving framework, models both the intent and the potentially false beliefs of an obstacle vehicle. By modeling the relationships between these variables as a dynamic Bayesian network, the framework can filter to estimate both the intent of the obstacle vehicle and its belief about the environment. This joint knowledge can be exploited to plan safer and more efficient trajectories when navigating in an urban environment. Simulation results demonstrate that the proposed method can infer the intent of obstacle vehicles as an autonomous vehicle navigates a road intersection, so that preventative maneuvers can be taken to avoid imminent collisions.
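A minimal sketch of the filtering step over a discrete intent variable is given below, assuming three hypothetical intents and a toy observation model based on observed deceleration; the paper's full dynamic Bayesian network additionally tracks the obstacle vehicle's (possibly false) belief about the environment, which is omitted here.

```python
import numpy as np

intents = ["straight", "turn", "yield"]
belief = np.full(3, 1.0 / 3.0)            # uniform prior over intents

# Intent is mostly persistent from step to step (toy transition model).
transition = np.array([[0.95, 0.03, 0.02],
                       [0.03, 0.95, 0.02],
                       [0.02, 0.03, 0.95]])

# P(observed deceleration bin | intent): yielding vehicles brake more often.
likelihood = {"none": np.array([0.7, 0.5, 0.1]),
              "mild": np.array([0.2, 0.3, 0.3]),
              "hard": np.array([0.1, 0.2, 0.6])}

for obs in ["none", "mild", "hard", "hard"]:  # simulated measurements
    belief = transition.T @ belief            # predict step
    belief *= likelihood[obs]                 # update step
    belief /= belief.sum()
    print(dict(zip(intents, belief.round(3))))
```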
We study a novel principle for safe and efficient collision avoidance that adopts a mathematically elegant and general framework, abstracting as much as possible from the dynamics of the controlled vehicle and of its environment. Vehicle dynamics is characterized by pre-computed functions for accelerating and braking to a given speed. The environment is modeled by a function of time giving the free distance ahead of the controlled vehicle, under the assumption that obstacles are either fixed or moving in the same direction. The main result is a control policy that enforces the vehicle's speed so as to avoid collisions and efficiently use the free distance ahead, provided some initial safety condition holds. The studied principle is applied to the design of two discrete controllers, one synchronous and one asynchronous. We show that both controllers are safe by construction. Furthermore, we show that their efficiency strictly increases as the granularity of discretization decreases. We present implementations of the two controllers, evaluate them experimentally in the CARLA autonomous driving simulator, and investigate various performance issues.
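The safety principle can be sketched as follows: at each control period the controller picks the fastest reachable speed whose braking distance still fits within the free distance ahead. Constant maximum acceleration and braking are assumed here so that the pre-computed braking function reduces to v²/(2b); the paper's controllers admit arbitrary pre-computed accelerate/brake profiles and a time-varying free-distance function.

```python
# Minimal sketch of the safe-speed principle under constant a_max / b_max.

def braking_distance(v, b_max=6.0):
    """Distance (m) needed to brake from speed v (m/s) to a full stop."""
    return v * v / (2.0 * b_max)

def safe_speed(free_distance, v_current, a_max=3.0, b_max=6.0, dt=0.1):
    """Fastest speed reachable within one control period whose braking
    distance still fits inside the free distance ahead (safety invariant)."""
    accelerate = v_current + a_max * dt
    keep = v_current
    brake = max(0.0, v_current - b_max * dt)
    for v in (accelerate, keep, brake):
        if braking_distance(v, b_max) <= free_distance:
            return v
    # If even braking violates the bound, keep braking; the initial safety
    # condition guarantees the invariant can be re-established.
    return brake

# Example: with 12 m of free road at 10 m/s the controller accelerates;
# with only 5 m of free road it brakes instead.
print(safe_speed(free_distance=12.0, v_current=10.0))
print(safe_speed(free_distance=5.0, v_current=10.0))
```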
For the foreseeable future, autonomous vehicles (AVs) will operate in traffic together with human-driven vehicles. Their planning and control systems need extensive testing, including early-stage testing in simulations that represent the interactions between autonomous and human-driven vehicles. Motivated by the need for such simulation tools, we propose a game-theoretic approach to modeling vehicle interactions, in particular for urban traffic environments with unsignalized intersections. We develop traffic models with heterogeneous (in terms of their driving styles) and interactive vehicles based on the proposed approach, and use them for virtual testing, evaluation, and calibration of AV control systems. For illustration, we consider two AV control approaches, analyze their characteristics and performance based on simulation results with our developed traffic models, and optimize the parameters of one of them.
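As a minimal illustration of the game-theoretic flavor (not the paper's actual models), the sketch below casts a single interaction as a two-vehicle "go vs. yield" matrix game in which a hypothetical aggressiveness parameter stands in for heterogeneous driving styles, and each vehicle plays a best response to its belief about the other.

```python
import numpy as np

def payoffs(aggressiveness):
    """Row player's payoff matrix over (go, yield) x (go, yield).
    Both going risks a collision; yielding costs a small delay."""
    collision = -10.0 + 5.0 * aggressiveness   # aggressive drivers discount the risk
    delay, progress = -1.0, 2.0
    return np.array([[collision, progress],
                     [delay,     delay]])

def best_response(aggressiveness, p_other_goes):
    """Choose 'go' or 'yield' against a belief that the other vehicle goes."""
    m = payoffs(aggressiveness)
    expected = m @ np.array([p_other_goes, 1.0 - p_other_goes])
    return "go" if expected[0] >= expected[1] else "yield"

# A cautious and an aggressive driver facing the same 30% chance the other goes.
print(best_response(aggressiveness=0.0, p_other_goes=0.3))  # yield
print(best_response(aggressiveness=1.0, p_other_goes=0.3))  # go
```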