
Feedback Enhanced Motion Planning for Autonomous Vehicles

Added by Ke Sun
Publication date: 2020
Language: English





In this work, we address the motion planning problem for autonomous vehicles through a new lattice planning approach, called the Feedback Enhanced Lattice Planner (FELP). Existing lattice planners have two major limitations, namely the high dimensionality of the lattice and the lack of modeling of agent vehicle behaviors. We propose to apply the Intelligent Driver Model (IDM) as a speed feedback policy to address both of these limitations. IDM both enables responsive behavior of the agent vehicles and uniquely determines the acceleration and speed profile of the ego vehicle on a given path. Therefore, only a spatial lattice is needed, and discretization of higher-order dimensions is no longer required. Additionally, we propose a directed-graph map representation to support the implementation and execution of lattice planners. The map reflects the local geometric structure, embeds the traffic rules attached to the road, and is efficient to construct and update. Through a runtime complexity analysis, we show that FELP is more efficient than other existing lattice planners, and we propose two variants of FELP that further reduce the complexity to polynomial time. We demonstrate the improvement by comparing FELP with an existing spatiotemporal lattice planner in simulations of a merging scenario and of continuous highway traffic. We also study the performance of FELP under different traffic densities.
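
For context on the speed feedback policy, IDM is the standard Intelligent Driver Model, whose acceleration law depends only on the ego speed, the gap to the leading vehicle, and the closing speed. The Python sketch below illustrates how such a policy can roll out a unique speed profile along a fixed spatial path, which is what allows a planner of this kind to enumerate only a spatial lattice; the parameter values and the lead_state query are assumptions for illustration, not the paper's implementation.

import math

def idm_acceleration(v, v_lead, gap,
                     v_desired=30.0,  # desired speed [m/s] (assumed value)
                     a_max=1.5,       # maximum acceleration [m/s^2]
                     b_comf=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,          # minimum standstill gap [m]
                     T=1.5,           # desired time headway [s]
                     delta=4.0):      # acceleration exponent
    """Standard IDM: a = a_max * (1 - (v/v_desired)**delta - (s_star/gap)**2),
    with desired gap s_star = s0 + max(0, v*T + v*dv / (2*sqrt(a_max*b_comf)))."""
    dv = v - v_lead  # closing speed w.r.t. the leading vehicle
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b_comf)))
    return a_max * (1.0 - (v / v_desired) ** delta - (s_star / max(gap, 1e-6)) ** 2)

def speed_profile_on_path(path_length, v0, lead_state, dt=0.1, horizon=120.0):
    """Integrate the IDM feedback policy along a fixed spatial path; since the
    speed profile follows uniquely from the policy, only the spatial path has
    to be enumerated by the lattice. lead_state(s) is a hypothetical query
    returning (lead speed, gap) at arc length s."""
    s, v, profile = 0.0, v0, [(0.0, v0)]
    for _ in range(int(horizon / dt)):
        v_lead, gap = lead_state(s)
        a = idm_acceleration(v, v_lead, gap)
        v = max(0.0, v + a * dt)
        s += v * dt
        profile.append((s, v))
        if s >= path_length:
            break
    return profile

# Example: free road ahead (no lead vehicle within 1 km).
# profile = speed_profile_on_path(300.0, v0=5.0, lead_state=lambda s: (30.0, 1000.0))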



Related research

Autonomous vehicles integrating onto roadways with human traffic participants must understand and adapt to the participants' intentions and driving styles, responding in predictable ways without explicit communication. This paper proposes a reinforcement learning (RL) based negotiation-aware motion planning framework, which uses RL to adjust the driving style of the planner by dynamically modifying its prediction horizon length in real time in response to changes in the environment, typically triggered by traffic participants switching intents under different driving styles. The framework models the interaction between the autonomous vehicle and other traffic participants as a Markov Decision Process. A temporal sequence of occupancy grid maps is taken as input to the RL module to embed implicit intention reasoning. Curriculum learning is employed to enhance the training efficiency and robustness of the algorithm. We apply our method to narrow-lane navigation in both simulation and the real world to demonstrate that the proposed method outperforms the common alternative, owing to its advantage in alleviating the social dilemma problem with proper negotiation skills.
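
As a rough illustration of the core idea above, namely letting an RL policy pick the planner's prediction horizon from a temporal stack of occupancy grids, the sketch below uses a toy linear policy over a small set of candidate horizons. The names (HORIZONS, featurize, HorizonPolicy) and the hand-crafted features are assumptions for illustration; the paper's network, training procedure, and planner interface are not reproduced here.

import numpy as np

# Candidate prediction-horizon lengths in seconds (assumed values).
HORIZONS = [1.0, 2.0, 4.0, 6.0]

def featurize(grid_stack):
    """Collapse a temporal stack of occupancy grids with shape (T, H, W) into a
    tiny feature vector; a trained encoder would replace this in practice."""
    occ = grid_stack.mean(axis=(1, 2))           # occupancy ratio per time step
    motion = float(np.abs(np.diff(occ)).sum())   # crude proxy for scene dynamics
    return np.array([occ[-1], motion])

class HorizonPolicy:
    """Toy linear policy over the candidate horizons; it stands in for an RL
    agent trained (e.g. with curriculum learning) to switch driving styles."""
    def __init__(self, n_features=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(len(HORIZONS), n_features))

    def select_horizon(self, grid_stack):
        scores = self.w @ featurize(grid_stack)
        return HORIZONS[int(np.argmax(scores))]

# Usage: at each replanning step, the selected horizon would parameterize the
# downstream motion planner (planner interface omitted here).
policy = HorizonPolicy()
grids = (np.random.rand(5, 64, 64) > 0.8).astype(float)  # stand-in grids
print("prediction horizon [s]:", policy.select_horizon(grids))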
Fei Ye, Shen Zhang, Pin Wang, 2021
In this survey, we systematically summarize the current literature on studies that apply reinforcement learning (RL) to the motion planning and control of autonomous vehicles. Many existing contributions can be attributed to the pipeline approach, which consists of many hand-crafted modules, each with a functionality selected for ease of human interpretation. However, this approach does not automatically guarantee maximal performance, due to the lack of system-level optimization. This paper therefore also presents a growing trend of work that falls into the end-to-end approach, which typically offers better performance and smaller system scales; its performance, however, suffers from the lack of expert data and from generalization issues. Finally, the remaining challenges of applying deep RL algorithms to autonomous driving are summarized, and future research directions are presented to tackle these challenges.
In this paper, we develop a new algorithm, called T$^*$-Lite, that enables fast time-risk optimal motion planning for variable-speed autonomous vehicles. The T$^*$-Lite algorithm is a significantly faster version of the previously developed T$^*$ algorithm. T$^*$-Lite uses the novel time-risk cost function of T$^*$; however, instead of a grid-based approach, it uses an asymptotically optimal sampling-based motion planner. Furthermore, it utilizes the recently developed Generalized Multi-speed Dubins Motion-model (GMDM) for sample-to-sample kinodynamic motion planning. The sample-based approach and GMDM significantly reduce the computational burden of T$^*$ while providing reasonable solution quality. The sample points are drawn from a four-dimensional configuration space consisting of two position coordinates plus vehicle heading and speed. Specifically, T$^*$-Lite enables the motion planner to select the vehicle speed and direction based on its proximity to the obstacle to generate faster and safer paths. In this paper, T$^*$-Lite is developed using the RRT$^*$ motion planner, but adaptation to other motion planners is straightforward and depends on the needs of the planner.
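
To make the four-dimensional configuration space concrete, the sketch below samples states of the form (x, y, heading, speed) and scores an edge with a simple time-plus-risk style cost in which being fast and close to obstacles is penalized. The cost weights and the clearance helper are assumptions for illustration; they do not reproduce the T$^*$ time-risk cost function or GMDM steering.

import math
import random

def sample_state(x_max, y_max, v_min, v_max):
    """Draw one sample from the 4D configuration space:
    position (x, y), heading theta, and speed v."""
    return (random.uniform(0.0, x_max),
            random.uniform(0.0, y_max),
            random.uniform(-math.pi, math.pi),
            random.uniform(v_min, v_max))

def edge_cost(p, q, clearance, risk_weight=5.0):
    """Illustrative time-plus-risk cost of moving between sampled states p and q.
    Travel time rewards higher speed, while the risk term penalizes high speed
    near obstacles; clearance(x, y) is an assumed helper returning the distance
    to the nearest obstacle."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    v_avg = max(0.5 * (p[3] + q[3]), 1e-3)
    travel_time = dist / v_avg
    mid_clearance = clearance(0.5 * (p[0] + q[0]), 0.5 * (p[1] + q[1]))
    risk = v_avg / max(mid_clearance, 1e-3)  # fast and close to obstacles is risky
    return travel_time + risk_weight * risk

# Example: open 100 m x 100 m area with one circular obstacle of radius 5 m at (50, 50).
clearance = lambda x, y: max(math.hypot(x - 50.0, y - 50.0) - 5.0, 0.0)
p, q = sample_state(100, 100, 1.0, 10.0), sample_state(100, 100, 1.0, 10.0)
print(edge_cost(p, q, clearance))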
The Institute of Measurement, Control and Microtechnology at Ulm University has investigated advanced driver assistance systems for decades and concentrates in large part on autonomous driving. It is well known that motion planning is a key technology for autonomous driving. It is first and foremost responsible for the safety of the vehicle passengers as well as of all surrounding traffic participants, but a further task consists in providing smooth and comfortable driving behavior. In Ulm, we have the valuable opportunity to test our algorithms under real conditions in public traffic and in diversified scenarios. In this paper, we give readers an insight into our work: the vehicle, the test track, and the related problems, challenges, and solutions. We describe the motion planning system and explain the implemented functionalities. Furthermore, we show how our vehicle moves through public road traffic and how it deals with challenging scenarios such as driving through roundabouts and intersections.
This paper presents a novel algorithm, called $\epsilon^*$+, for online coverage path planning of unknown environments using energy-constrained autonomous vehicles. Due to limited battery size, energy-constrained vehicles have a limited duration of operation. Therefore, while executing a coverage trajectory, the vehicle has to return to the charging station for a recharge before the battery runs out. To this end, the $\epsilon^*$+ algorithm enables the vehicle to retreat to the charging station based on the remaining energy, which is monitored throughout the coverage process. This is followed by an advance trajectory that takes the vehicle to a nearby unexplored waypoint to restart coverage, instead of taking it back to the point where the retreat trajectory left off, thus reducing the overall coverage time. The proposed $\epsilon^*$+ algorithm is an extension of the $\epsilon^*$ algorithm, which utilizes an Exploratory Turing Machine (ETM) as a supervisor to navigate the vehicle along a back-and-forth trajectory for complete coverage. The performance of the $\epsilon^*$+ algorithm is validated on complex scenarios using Player/Stage, a high-fidelity robotic simulator.
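
The retreat/advance behavior described above reduces to a small supervisory decision rule: retreat while just enough energy remains to reach the charger, and after recharging advance to a nearby unexplored waypoint rather than backtracking. The sketch below is a simplified illustration under an assumed straight-line energy model; it is not the ETM supervisor of $\epsilon^*$+.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def energy_to(a, b, energy_per_meter=1.0):
    """Assumed energy model: consumption proportional to straight-line distance."""
    return energy_per_meter * dist(a, b)

def next_action(pose, energy, charger, unexplored, covering, margin=1.05):
    """One supervisory decision of an energy-aware coverage loop (illustrative).
    Returns an (action, target) pair."""
    # Retreat: if continuing would leave too little energy to reach the charging
    # station (plus a small safety margin), head back for a recharge.
    if covering and energy <= margin * energy_to(pose, charger):
        return ("retreat", charger)
    # Advance: after a recharge, resume coverage at the unexplored waypoint
    # nearest the charger instead of backtracking to where the retreat left off.
    if not covering and unexplored:
        return ("advance", min(unexplored, key=lambda w: dist(charger, w)))
    # Otherwise keep following the back-and-forth coverage pattern.
    return ("cover", None)

# Example: low battery while covering far from the charger triggers a retreat.
print(next_action(pose=(40.0, 10.0), energy=40.0, charger=(0.0, 0.0),
                  unexplored=[(42.0, 12.0), (5.0, 5.0)], covering=True))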
