Decision-making at Unsignalized Intersection for Autonomous Vehicles: Left-turn Maneuver with Deep Reinforcement Learning

Posted by: Teng Liu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English


The decision-making module enables autonomous vehicles to select appropriate maneuvers in complex urban environments, especially at intersections. This work proposes a deep reinforcement learning (DRL) based left-turn decision-making framework for autonomous vehicles at unsignalized intersections. The objective of the studied automated vehicle is to make an efficient and safe left-turn maneuver at a four-way unsignalized intersection. The exploited DRL methods include deep Q-learning (DQL) and double DQL. Simulation results indicate that the presented decision-making strategy effectively reduces the collision rate and improves transport efficiency. This work also shows that the constructed left-turn control structure has great potential for real-time application.
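As an illustration of the DQL/double-DQL idea behind this framework, the snippet below sketches how a double deep Q-learning target could be computed for a small discrete action set. The network size, state dimension, discount factor, and action set are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a double deep Q-learning (DDQL) target for a discrete
# decision problem such as the left-turn maneuver. Dimensions are assumed.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 3, 0.99  # assumed state size, action count, discount

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(online_net.state_dict())

def ddql_target(reward, next_state, done):
    """Double DQL: the online net picks the next action, the target net scores it.

    reward/done: float tensors of shape (batch,), next_state: (batch, STATE_DIM).
    """
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, next_action).squeeze(1)
    return reward + GAMMA * (1.0 - done) * next_q
```

Decoupling action selection (online net) from action evaluation (target net) is what distinguishes double DQL from plain DQL and reduces the overestimation of Q-values.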




Read also

Teng Liu, Hong Wang, Bing Lu (2020)
A decision-making strategy for autonomous vehicles describes a sequence of driving maneuvers that achieves a certain navigational mission. This paper utilizes the deep reinforcement learning (DRL) method to address the continuous-horizon decision-making problem on the highway. First, the vehicle kinematics and the driving scenario on the freeway are introduced. The running objective of the ego automated vehicle is to execute an efficient and smooth policy without collision. Then, the particular algorithm, proximal policy optimization (PPO)-enhanced DRL, is illustrated. To overcome the challenges of slow training and sample inefficiency, the applied algorithm achieves high learning efficiency and excellent control performance. Finally, the PPO-DRL-based decision-making strategy is evaluated from multiple perspectives, including optimality, learning efficiency, and adaptability. Its potential for online application is discussed by applying it to similar driving scenarios.
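The clipped surrogate objective at the core of PPO, which the PPO-enhanced DRL strategy above builds on, can be sketched as follows; the clipping range and tensor shapes are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the PPO clipped surrogate loss.
import torch

CLIP_EPS = 0.2  # commonly used clipping range (assumed here)

def ppo_clip_loss(log_probs_new, log_probs_old, advantages):
    """Clipped policy-gradient objective: penalizes probability ratios far from 1."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - CLIP_EPS, 1.0 + CLIP_EPS) * advantages
    # Taking the element-wise minimum keeps the update conservative.
    return -torch.min(unclipped, clipped).mean()
```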
Hong Shu, Teng Liu, Xingyu Mu (2020)
Knowledge transfer is a promising concept for achieving real-time decision-making for autonomous vehicles. This paper constructs a transfer deep reinforcement learning framework to transfer driving tasks across intersection environments. The driving missions at the unsignalized intersection are cast as a left turn, a right turn, and going straight for automated vehicles. The goal of the autonomous ego vehicle (AEV) is to drive through the intersection efficiently and safely. This objective encourages the studied vehicle to increase its speed and avoid crashing into other vehicles. The decision-making policy learned from one driving task is transferred to and evaluated in another driving mission. Simulation results reveal that decision-making strategies for similar tasks are transferable, indicating that the presented control framework can reduce time consumption and enable online implementation.
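A minimal sketch of the knowledge-transfer step described above: a policy network trained on one intersection task initializes the network for another task before fine-tuning. The file name, architecture, and task pairing are hypothetical, used only to show the mechanics of weight transfer.

```python
# Sketch of transferring a learned policy between intersection tasks.
import torch
import torch.nn as nn

def build_policy(state_dim=8, n_actions=3):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

source_policy = build_policy()
# ... train source_policy on the "going straight" task, then save its weights:
torch.save(source_policy.state_dict(), "straight_task_policy.pt")

# Initialize the left-turn policy from the source task instead of from scratch,
# so fine-tuning starts from already-useful intersection behavior.
left_turn_policy = build_policy()
left_turn_policy.load_state_dict(torch.load("straight_task_policy.pt"))
```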
We propose a safe DRL approach for autonomous vehicle (AV) navigation through crowds of pedestrians while making a left turn at an unsignalized intersection. Our method uses two long short-term memory (LSTM) models that are trained to generate the perceived state of the environment and the future trajectories of pedestrians given noisy observations of their movement. A future-collision prediction algorithm based on the predicted trajectories of the ego vehicle and pedestrians is used to mask unsafe actions whenever the system predicts a collision. The performance of our approach is evaluated in two experiments using the high-fidelity CARLA simulation environment. The first experiment tests our method at intersections similar to the training intersection, and the second tests it at intersections with a different topology. In both experiments, our method navigates the intersection at a reasonable speed without colliding with a pedestrian.
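The action-masking safety layer described above can be sketched as follows; the trajectory-prediction inputs stand in for the paper's LSTM models, and the safety radius is an assumed parameter.

```python
# Sketch of masking unsafe actions using predicted ego and pedestrian trajectories.
import numpy as np

SAFETY_RADIUS = 2.0  # assumed minimum ego-pedestrian distance in meters

def predicts_collision(ego_traj, ped_trajs, radius=SAFETY_RADIUS):
    """True if any predicted pedestrian comes within `radius` of the ego path.

    ego_traj: (T, 2) array of future ego positions; ped_trajs: list of (T, 2) arrays.
    """
    for ped_traj in ped_trajs:
        dists = np.linalg.norm(ego_traj - ped_traj, axis=1)
        if np.any(dists < radius):
            return True
    return False

def masked_greedy_action(q_values, ego_rollouts, ped_trajs):
    """Pick the highest-value action whose predicted rollout is collision-free."""
    q_values = np.array(q_values, dtype=float)
    for a, rollout in enumerate(ego_rollouts):
        if predicts_collision(rollout, ped_trajs):
            q_values[a] = -np.inf  # mask unsafe action
    return int(np.argmax(q_values))
```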
Teng Liu, Bing Huang, Xingyu Mu (2020)
Deep reinforcement learning (DRL) is becoming a prevalent and powerful methodology for addressing artificial intelligence problems. Owing to its tremendous potential for self-learning and self-improvement, DRL is broadly applied in many research fields. This article conducts a comprehensive comparison of multiple DRL approaches on the freeway decision-making problem for autonomous vehicles. These techniques include the common deep Q-learning (DQL), double DQL (DDQL), dueling DQL, and prioritized replay DQL. First, the reinforcement learning (RL) framework is introduced. As an extension, the implementations of the above-mentioned DRL methods are established mathematically. Then, the freeway driving scenario for the automated vehicles is constructed, wherein the decision-making problem is transformed into a control optimization problem. Finally, a series of simulation experiments is conducted to evaluate the control performance of these DRL-enabled decision-making strategies. A comparative analysis connects the autonomous driving results with the learning characteristics of these DRL techniques.
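To make one of the compared variants concrete, the sketch below shows a dueling Q-network head, which splits the Q-value estimate into a state value and per-action advantages. Layer sizes and dimensions are assumptions, not taken from the article.

```python
# Sketch of a dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=8, n_actions=3):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)               # state value V(s)
        self.advantage = nn.Linear(64, n_actions)   # action advantages A(s, a)

    def forward(self, state):
        h = self.feature(state)
        v, adv = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + adv - adv.mean(dim=1, keepdim=True)
```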
Prior research has extensively explored autonomous vehicle (AV) navigation in the presence of other vehicles; however, navigation among pedestrians, who are the most vulnerable road users in urban environments, has been less examined. This paper explores AV navigation in crowded, unsignalized intersections. We compare the performance of different deep reinforcement learning methods trained on our reward function and state representation. The performance of these methods and a standard rule-based approach was evaluated in two ways: first at the unsignalized intersection on which the methods were trained, and second at an unknown unsignalized intersection with a different topology. For both scenarios, the rule-based method achieves less than 40% collision-free episodes, whereas our methods achieve approximately 100%. Of the three methods used, DDQN/PER outperforms the other two while also showing the smallest average intersection crossing time, the greatest average speed, and the greatest distance from the closest pedestrian.
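The prioritized replay component of the best-performing DDQN/PER variant can be sketched as follows; the buffer capacity, priority exponent, and interface are illustrative assumptions.

```python
# Sketch of prioritized experience replay (PER): transitions with larger TD
# errors are sampled more often during training.
import numpy as np

class PrioritizedReplayBuffer:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.transitions, self.priorities = [], []

    def add(self, transition, td_error):
        # New transitions receive a priority proportional to their TD error.
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Sampling probabilities are the normalized priorities.
        probs = np.array(self.priorities) / np.sum(self.priorities)
        idx = np.random.choice(len(self.transitions), size=batch_size, p=probs)
        return [self.transitions[i] for i in idx]
```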
