
Efficient Behavior-aware Control of Automated Vehicles at Crosswalks using Minimal Information Pedestrian Prediction Model

Added by Lionel Robert
Publication date: 2020
Language: English





For automated vehicles (AVs) to reliably navigate through crosswalks, they need to understand pedestrians' crossing behaviors. Simple and reliable pedestrian behavior models aid real-time AV control by allowing the AVs to predict future pedestrian behaviors. In this paper, we present a Behavior-aware Model Predictive Controller (B-MPC) for AVs that incorporates long-term predictions of pedestrian crossing behavior using a previously developed pedestrian crossing model. The model incorporates pedestrians' gap-acceptance behavior and utilizes minimal pedestrian information, namely their position and speed, to predict pedestrians' crossing behaviors. The B-MPC controller is validated through simulations and compared to a rule-based controller. By incorporating predictions of pedestrian behavior, the B-MPC controller is able to efficiently plan over longer horizons and handle a wider range of pedestrian interaction scenarios than the rule-based controller. Results demonstrate the applicability of the controller for safe and efficient navigation in crossing scenarios.
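
To make the behavior-aware planning idea concrete, the sketch below (in Python, not the paper's B-MPC implementation) rolls out a handful of candidate acceleration profiles against a constant-velocity pedestrian prediction gated by a simple gap-acceptance rule, and keeps the cheapest collision-free one. The horizon, critical gap, crosswalk geometry, and cost weights are illustrative assumptions.

```python
# Minimal sketch (not the paper's B-MPC implementation): roll out candidate
# acceleration profiles against a constant-velocity pedestrian prediction gated
# by a simple gap-acceptance rule, and keep the cheapest collision-free one.
# Horizon, critical gap, crosswalk geometry, and cost weights are assumptions.
import numpy as np

DT, HORIZON = 0.2, 25                     # 5 s horizon at 0.2 s steps (assumed)
CROSSWALK_S = 30.0                        # crosswalk location along the AV path [m]
ACCELS = np.linspace(-4.0, 2.0, 13)       # candidate constant accelerations [m/s^2]

def pedestrian_will_cross(ped_dist_to_curb, ped_speed, av_gap_s):
    """Crude gap-acceptance gate: a pedestrian already walking is predicted to
    cross; a waiting pedestrian crosses only if the AV's time to the crosswalk
    exceeds an assumed critical gap."""
    CRITICAL_GAP = 4.0                    # [s], illustrative value
    return ped_speed > 0.5 or (av_gap_s > CRITICAL_GAP and ped_dist_to_curb < 3.0)

def plan_accel(av_s, av_v, ped_dist_to_curb, ped_speed, lane_width=3.5):
    """Pick the acceleration whose rollout is collision-free and cheapest."""
    av_gap = (CROSSWALK_S - av_s) / max(av_v, 0.1)
    crossing = pedestrian_will_cross(ped_dist_to_curb, ped_speed, av_gap)
    best_a, best_cost = -4.0, np.inf      # fall back to hard braking
    for a in ACCELS:
        t = np.arange(1, HORIZON + 1) * DT
        v = np.clip(av_v + a * t, 0.0, None)
        s = av_s + np.cumsum(v) * DT
        if crossing:
            # Constant-speed prediction of when the pedestrian occupies the lane.
            t_in = ped_dist_to_curb / max(ped_speed, 0.1)
            t_out = (ped_dist_to_curb + lane_width) / max(ped_speed, 0.1)
            in_conflict = (s > CROSSWALK_S - 2.0) & (s < CROSSWALK_S + 2.0)
            if np.any(in_conflict & (t >= t_in) & (t <= t_out)):
                continue                  # rollout violates the safety constraint
        cost = np.sum((v - 10.0) ** 2) + 5.0 * a ** 2   # track 10 m/s, penalize effort
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

print(plan_accel(av_s=0.0, av_v=8.0, ped_dist_to_curb=1.0, ped_speed=1.4))
```

A full B-MPC would solve a constrained optimization at each step rather than enumerating accelerations, but the structure of the decision is the same: predict the crossing from position and speed, then constrain the vehicle's trajectory accordingly.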



Related research

For safe navigation around pedestrians, automated vehicles (AVs) need to plan their motion by accurately predicting pedestrians' trajectories over long time horizons. Current approaches to AV motion planning around crosswalks predict only over short time horizons (1-2 s) and are based on data from pedestrian interactions with human-driven vehicles (HDVs). In this paper, we develop a hybrid systems model that uses pedestrians' gap-acceptance behavior and constant-velocity dynamics for long-term pedestrian trajectory prediction when interacting with AVs. Results demonstrate the applicability of the model for long-term (> 5 s) pedestrian trajectory prediction at crosswalks. Further, we compared measures of pedestrian crossing behaviors in the immersive virtual environment (when interacting with AVs) to those in the real world (results of published studies of pedestrians interacting with HDVs), and found similarities between the two. These similarities demonstrate the applicability of the hybrid model of AV interactions, developed from an immersive virtual environment (IVE), to real-world scenarios for both AVs and HDVs.
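
A minimal sketch of such a hybrid predictor is given below, assuming two discrete modes (waiting at the curb, crossing at constant speed) and a fixed critical gap; the mode logic, critical-gap value, and walking speed are illustrative, not the paper's parameters.

```python
# Minimal sketch of a hybrid (discrete mode + constant-velocity) pedestrian
# predictor; the gap-acceptance rule and all numeric values are assumptions.
import numpy as np

def predict_pedestrian(ped_pos, ped_speed, veh_gap_s, horizon_s=6.0, dt=0.1,
                       critical_gap=4.0, walk_speed=1.4):
    """Predict 1-D positions along the crossing direction.

    Discrete modes: 'waiting' at the curb and 'crossing' at constant velocity.
    A waiting pedestrian starts crossing if the current vehicle gap exceeds the
    critical gap, or once the vehicle has passed; a moving pedestrian keeps crossing.
    """
    accepted = ped_speed > 0.2 or veh_gap_s > critical_gap
    positions, pos = [], ped_pos
    for k in range(int(horizon_s / dt)):
        t = k * dt
        crossing = accepted or t > veh_gap_s      # otherwise cross behind the vehicle
        pos += walk_speed * dt if crossing else 0.0
        positions.append(pos)
    return np.array(positions)

traj = predict_pedestrian(ped_pos=0.0, ped_speed=0.0, veh_gap_s=6.0)
print(traj[-1])   # predicted lateral progress after 6 s
```
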
Prior research has extensively explored Autonomous Vehicle (AV) navigation in the presence of other vehicles; however, navigation among pedestrians, who are the most vulnerable road users in urban environments, has been examined less. This paper explores AV navigation in crowded, unsignalized intersections. We compare the performance of different deep reinforcement learning methods trained on our reward function and state representation. The performance of these methods and of a standard rule-based approach was evaluated in two ways: first at the unsignalized intersection on which the methods were trained, and second at an unknown unsignalized intersection with a different topology. In both scenarios, the rule-based method achieves fewer than 40% collision-free episodes, whereas our methods achieve approximately 100%. Of the three methods used, DDQN/PER outperforms the other two while also achieving the smallest average intersection crossing time, the highest average speed, and the greatest distance from the closest pedestrian.
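
As a rough illustration of the PER component mentioned above, the following is a minimal proportional prioritized replay buffer; the capacity, alpha, and beta values are illustrative assumptions, and this is not the study's implementation.

```python
# Minimal proportional prioritized experience replay (the "PER" in DDQN/PER).
# Capacity, alpha, and beta are illustrative assumptions.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition, td_error=1.0):
        """Store a (s, a, r, s', done) tuple with priority |TD error|^alpha."""
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = (abs(td_error) + 1e-6) ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=32, beta=0.4):
        """Sample proportionally to priority and return importance weights."""
        p = self.priorities[:len(self.data)]
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], weights

buf = PrioritizedReplay()
buf.add(("state", 0, 1.0, "next_state", False), td_error=0.5)
idx, batch, w = buf.sample(batch_size=1)
```
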
This paper presents a teleoperation system that includes robot perception and intent prediction from hand gestures. The perception module identifies the objects present in the robot's workspace, and the intent prediction module predicts which object the user likely wants to grasp. This architecture allows the approach to rely on traded control instead of direct control: hand gestures specify the goal objects for a sequential manipulation task, and the robot then autonomously generates a grasping or retrieving motion using trajectory optimization. The perception module relies on a model-based tracker to precisely track the 6D pose of the objects and uses a state-of-the-art learning-based object detection and segmentation method to initialize the tracker by automatically detecting objects in the scene. Goal objects are identified from user hand gestures using a trained multi-layer perceptron classifier. After presenting all the components of the system and their empirical evaluation, we present experimental results comparing our pipeline to a direct control approach (i.e., one that does not use prediction), showing that using intent prediction reduces the overall task execution time.
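
The gesture-to-goal-object classification step can be sketched as below with scikit-learn's MLPClassifier on synthetic hand-pose features; the feature dimensionality, class count, and network sizes are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of classifying the intended goal object from hand-pose features
# with a multi-layer perceptron; data here is synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Pretend features: 21 hand keypoints with (x, y, z) coordinates -> 63 dims.
X = rng.normal(size=(300, 63))
y = rng.integers(0, 4, size=300)          # 4 candidate goal objects (assumed)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)

new_gesture = rng.normal(size=(1, 63))
print("predicted goal object id:", clf.predict(new_gesture)[0])
```
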
Automated driving in urban scenarios requires efficient planning algorithms able to handle complex situations in real time. A popular approach is to use graph-based planning methods to obtain a rough trajectory that is subsequently optimized. A key aspect is generating trajectories that implement comfortable and safe behavior already during the graph search while keeping computation times low. To capture this aspect, this work presents, on the one hand, a branching strategy that improves both the quality of the resulting trajectories and the runtime; on the other hand, it presents admissible heuristics that guide the graph search efficiently while keeping the solution optimal.
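
A toy example of graph search with an admissible heuristic in this spirit is shown below: A* over a coarse station lattice, where the heuristic is the remaining distance divided by the maximum speed and therefore never overestimates the true travel time. The lattice resolution and speed set are illustrative assumptions.

```python
# A* over a 1-D station lattice with an admissible (optimistic) time heuristic.
# Grid step, speed options, and goal distance are illustrative assumptions.
import heapq

V_MAX, GOAL_S, DS = 15.0, 100.0, 5.0      # max speed [m/s], goal station [m], step [m]

def heuristic(s):
    return (GOAL_S - s) / V_MAX           # admissible: optimistic remaining travel time

def search(start_s=0.0):
    open_set = [(heuristic(start_s), 0.0, start_s)]
    best_cost = {start_s: 0.0}
    while open_set:
        f, g, s = heapq.heappop(open_set)
        if s >= GOAL_S:
            return g                      # optimal travel time thanks to admissibility
        for v in (5.0, 10.0, V_MAX):      # branch over a few target speeds
            s_next = s + DS
            g_next = g + DS / v           # edge cost: traversal time at speed v
            if g_next < best_cost.get(s_next, float("inf")):
                best_cost[s_next] = g_next
                heapq.heappush(open_set, (g_next + heuristic(s_next), g_next, s_next))
    return None

print("optimal travel time [s]:", search())
```
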
In this paper, we present an Efficient Planning System for automated vehicles In highLy interactive envirONments (EPSILON). EPSILON is an efficient interaction-aware planning system for automated driving, and is extensively validated in both simulation and real-world dense city traffic. It follows a hierarchical structure with an interactive behavior planning layer and an optimization-based motion planning layer. The behavior planning is formulated as a partially observable Markov decision process (POMDP), but is much more efficient than naively applying a POMDP to the decision-making problem. The key to efficiency is guided branching in both the action space and the observation space, which decomposes the original problem into a limited number of closed-loop policy evaluations. Moreover, we introduce a new driver model with a safety mechanism to overcome the risk induced by the potential imperfection of prior knowledge. For motion planning, we employ a spatio-temporal semantic corridor (SSC) to model the constraints posed by complex driving environments in a unified way. Based on the SSC, a safe and smooth trajectory is optimized, complying with the decision provided by the behavior planner. We validate our planning system in both simulation and real-world dense traffic, and the experimental results show that EPSILON achieves smooth, safe, human-like driving behavior in highly interactive traffic flow without being over-conservative compared to existing planning methods.
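
The "limited number of closed-loop policy evaluations" idea can be sketched as follows: forward-simulate a few semantic ego policies against a simple constant-velocity model of another agent and select the cheapest safe one. The policy set, dynamics, and cost weights are illustrative assumptions, not EPSILON's actual driver model or guided-branching scheme.

```python
# Evaluate a handful of semantic ego policies by closed-loop-style rollout and
# pick the cheapest safe one; all models and parameters here are assumptions.
import numpy as np

DT, STEPS = 0.2, 40
POLICIES = {"yield": -2.0, "keep": 0.0, "accelerate": 1.0}   # target accelerations

def rollout(accel, ego_s=0.0, ego_v=8.0, other_s=25.0, other_v=6.0):
    """Simulate one policy; return accumulated cost, or infinity if unsafe."""
    cost = 0.0
    for _ in range(STEPS):
        ego_v = max(ego_v + accel * DT, 0.0)
        ego_s += ego_v * DT
        other_s += other_v * DT              # other agent: constant velocity
        gap = other_s - ego_s
        if 0.0 < gap < 5.0:                  # too close: treat rollout as unsafe
            return np.inf
        cost += (ego_v - 10.0) ** 2 * DT     # deviation from desired speed
    return cost

best = min(POLICIES, key=lambda name: rollout(POLICIES[name]))
print("selected behavior:", best)
```
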