As advanced driver assistance system (ADAS) functions become more sophisticated, strategies that properly coordinate interaction and communication among them are required for autonomous driving. This paper proposes a derivative-free-optimization-based imitation learning method for the decision maker that coordinates the appropriate ADAS functions. The proposed method makes timely decisions on multi-lane highways using LIDAR data. A simulation-based evaluation verifies that the proposed method achieves the desired performance.
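The core idea of derivative-free imitation learning can be illustrated with a minimal sketch: instead of back-propagating gradients, candidate decision-rule parameters are perturbed at random and kept only when they agree more often with the expert. The features, labels, and search scheme below are illustrative stand-ins, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "LIDAR-derived" features and expert binary decisions
# (a hypothetical linear expert rule stands in for the real decision maker).
X = rng.normal(size=(200, 2))
expert_w = np.array([1.5, -2.0])
y = (X @ expert_w > 0).astype(int)

def imitation_accuracy(w):
    """Fraction of samples where the candidate rule matches the expert."""
    return np.mean((X @ w > 0).astype(int) == y)

# Derivative-free search: perturb the parameters, keep improvements.
w, best = np.zeros(2), imitation_accuracy(np.zeros(2))
for _ in range(500):
    cand = w + rng.normal(scale=0.5, size=2)
    score = imitation_accuracy(cand)
    if score > best:
        w, best = cand, score
```

Because only function evaluations are needed, the same loop works for decision makers whose outputs are discrete and non-differentiable, which is the usual motivation for derivative-free methods.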
Discretionary lane change (DLC) is a basic but complex driving maneuver that aims at reaching a higher speed or better driving conditions, e.g., a longer line of sight or better ride quality. Although many DLC decision-making models have been studied in traffic engineering and autonomous driving, the impact of human factors, an integral part of current and future traffic flow, is largely ignored in the existing literature. In autonomous driving, ignoring the human factors of surrounding vehicles leads to poor interaction between the ego vehicle and its surroundings and, thus, a high risk of accidents. Human factors are also crucial for simulating human-like traffic flow in traffic engineering. In this paper, we integrate human factors, represented by driving styles, into a new DLC decision-making model. Specifically, our proposed model takes into consideration not only the contextual traffic information but also the driving styles of surrounding vehicles, and makes lane-change/keep decisions accordingly. Moreover, the model can imitate human drivers' decision-making maneuvers to the greatest extent by learning the driving style of the ego vehicle. Our evaluation results show that the proposed model closely follows human decision-making maneuvers, achieving 98.66% prediction accuracy with respect to human drivers' decisions against the ground truth. In addition, the lane-change impact analysis demonstrates that our model even outperforms human drivers in terms of improving the safety and speed of traffic.
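The shape of such a style-aware decision rule can be sketched as follows: contextual features (potential speed gain, rear gap in the target lane) are combined with a driving-style code for the surrounding vehicle before thresholding. The weights, features, and style encoding here are hypothetical, not learned from the paper's data.

```python
import numpy as np

def decide(speed_gain, rear_gap, style_aggressiveness,
           w=(1.0, 0.8, -1.5), bias=-1.0):
    """Return True for lane change, False for lane keep (illustrative weights)."""
    z = w[0] * speed_gain + w[1] * rear_gap + w[2] * style_aggressiveness + bias
    return 1.0 / (1.0 + np.exp(-z)) > 0.5

# Same context, different neighbor style: a calm neighbor permits the
# maneuver, an aggressive neighbor in the target lane suppresses it.
calm = decide(speed_gain=2.0, rear_gap=1.5, style_aggressiveness=0.0)
aggressive = decide(speed_gain=2.0, rear_gap=1.5, style_aggressiveness=2.0)
```

The point of the sketch is the input signature: the style term changes the decision even when the geometric context is identical, which is exactly the human-factor effect the abstract argues is missing from context-only models.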
Fast recognition of drivers' lane-change decision-making styles plays a pivotal role in safety-oriented and personalized vehicle control system design. This paper presents a time-efficient recognition method that integrates k-means clustering (kMC) with K-nearest neighbor (KNN), called kMC-KNN. Mathematical morphology is implemented to automatically label the decision-making data into three styles (moderate, vague, and aggressive), while the integration of kMC and KNN improves the recognition speed and accuracy. Our mathematical morphology-based clustering algorithm is then validated by comparison with agglomerative hierarchical clustering. Experimental results demonstrate that the developed kMC-KNN method, compared with the traditional KNN, shortens the recognition time by over 72.67% while maintaining a recognition accuracy of 90%-98%. In addition, our kMC-KNN method also outperforms the support vector machine (SVM) in recognition accuracy and stability. The developed time-efficient recognition approach has great application potential for in-vehicle embedded solutions with restricted design specifications.
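The two-stage kMC-KNN idea can be sketched in a few lines: k-means assigns style labels to unlabeled decision data, and a K-nearest-neighbor vote then classifies new samples against that labeled set. The 1-D feature, the three synthetic style centers, and the quantile-based initialization are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic decision features drawn around three style centers
# (moderate ~0, vague ~2, aggressive ~4).
data = np.concatenate([rng.normal(c, 0.3, 60) for c in (0.0, 2.0, 4.0)])[:, None]

def kmeans(X, k, iters=20):
    """Plain Lloyd iterations with quantile-spread initial centers."""
    centers = np.quantile(X, np.linspace(0.1, 0.9, k), axis=0)
    for _ in range(iters):
        labels = np.argmin(np.abs(X - centers.ravel()), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

centers, labels = kmeans(data, 3)   # stage 1: unsupervised style labeling

def knn_predict(x, X, y, k=5):
    """Stage 2: majority vote among the k nearest labeled samples."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

style = knn_predict(np.array([3.9]), data, labels)
```

The speed-up reported in the abstract comes from the same division of labor: the expensive labeling happens once offline, while the online KNN query only touches the compact labeled set.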
Interpretation of common-yet-challenging interaction scenarios can support well-founded decisions for autonomous vehicles. Previous research achieved this using prior knowledge of specific scenarios with predefined models, which limits adaptive capability. This paper describes a Bayesian nonparametric approach that leverages continuous (i.e., Gaussian processes) and discrete (i.e., Dirichlet processes) stochastic processes to reveal underlying interaction patterns between the ego vehicle and other nearby vehicles. Our model relaxes the dependency on the number of surrounding vehicles by developing an acceleration-sensitive velocity field based on Gaussian processes. The experimental results demonstrate that the velocity field can represent the spatial interactions between the ego vehicle and its surroundings. Then, a discrete Bayesian nonparametric model, integrating Dirichlet processes and hidden Markov models, is developed to learn the interaction patterns over the temporal space by automatically segmenting and clustering the sequential interaction data into interpretable granular patterns. We then evaluate our approach in highway lane-change scenarios using the highD dataset collected from real-world settings. Results demonstrate that our proposed Bayesian nonparametric approach provides insight into the complicated lane-change interactions of the ego vehicle with multiple surrounding traffic participants, based on the interpretable interaction patterns and their transition properties in temporal relationships. Our proposed approach also sheds light on efficiently analyzing other kinds of multi-agent interactions, such as vehicle-pedestrian interactions. View the demos via https://youtu.be/z_vf9UHtdAM.
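Why a Gaussian-process velocity field removes the dependency on vehicle count can be seen in a minimal sketch: observed (position, velocity) pairs from however many nearby vehicles are interpolated into one smooth field that can be queried anywhere. The RBF kernel, length scale, and synthetic data below are illustrative; the paper's acceleration-sensitive construction is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, ell=3.0):
    """Squared-exponential kernel between position sets A (n,2) and B (m,2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Positions (x, y) on a 10 m x 10 m patch and longitudinal speeds that
# increase downstream (larger x) -- a synthetic stand-in for vehicle tracks.
P = rng.uniform(0, 10, size=(30, 2))
v = 20.0 + 0.5 * P[:, 0] + 0.1 * rng.normal(size=30)

K = rbf(P, P) + 1e-4 * np.eye(len(P))      # jitter for numerical stability
alpha = np.linalg.solve(K, v - v.mean())

def velocity_field(q):
    """GP posterior-mean speed at query positions q of shape (n, 2)."""
    return rbf(q, P) @ alpha + v.mean()

ahead = float(velocity_field(np.array([[8.0, 5.0]])))
behind = float(velocity_field(np.array([[2.0, 5.0]])))
```

The field is defined over space rather than over a fixed slot of neighbors, so adding or removing a surrounding vehicle only changes the training set, not the model structure.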
Learning from demonstrations has made great progress over the past few years. However, it is generally data-hungry and task-specific. In other words, it requires a large amount of data to train a decent model on a particular task, and the model often fails to generalize to new tasks that have a different distribution. In practice, demonstrations from new tasks will be continuously observed, and the data might be unlabeled or only partially labeled. Therefore, it is desirable for the trained model to adapt to new tasks for which only limited data samples are available. In this work, we build an adaptable imitation learning model based on the integration of Meta-learning and Adversarial Inverse Reinforcement Learning (Meta-AIRL). We exploit the adversarial learning and inverse reinforcement learning mechanisms to learn policies and reward functions simultaneously from the available training tasks, and then adapt them to new tasks with the meta-learning framework. Simulation results show that the adapted policy trained with Meta-AIRL can effectively learn from a limited number of demonstrations and quickly reach performance comparable to that of the experts on unseen tasks.
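The joint learning of policies and reward functions rests on the AIRL discriminator structure, which can be checked numerically in a few lines: with D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a|s)), the recovered reward log D - log(1 - D) reduces exactly to f(s, a) - log pi(a|s). The numeric values below are arbitrary.

```python
import numpy as np

# f(s, a): learned reward/advantage estimates for three state-action pairs.
f = np.array([0.5, -1.2, 2.0])
# log pi(a | s): the current policy's log-probabilities for the same pairs.
log_pi = np.array([-0.7, -2.3, -0.1])

# AIRL discriminator: expert-likeness of (s, a) under the learned reward.
D = np.exp(f) / (np.exp(f) + np.exp(log_pi))

# Reward recovered from the discriminator output.
recovered_reward = np.log(D) - np.log(1.0 - D)
# recovered_reward equals f - log_pi up to floating-point error.
```

This identity is what lets the discriminator update and the reward update be the same computation, which in turn is the quantity the meta-learning outer loop adapts across tasks.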
Multi-agent path finding (MAPF) is an essential component of many large-scale, real-world robot deployments, from aerial swarms to warehouse automation. However, despite the community's continued efforts, most state-of-the-art MAPF planners still rely on centralized planning and scale poorly past a few hundred agents. Such planning approaches are maladapted to real-world deployments, where noise and uncertainty often require that paths be recomputed online, which is impossible when planning times range from seconds to minutes. We present PRIMAL, a novel framework for MAPF that combines reinforcement and imitation learning to teach fully-decentralized policies, where agents reactively plan paths online in a partially-observable world while exhibiting implicit coordination. This framework extends our previous work on distributed learning of collaborative policies by introducing demonstrations from an expert MAPF planner during training, as well as careful reward shaping and environment sampling. Once learned, the resulting policy can be copied onto any number of agents and naturally scales to different team sizes and world dimensions. We present results on randomized worlds with up to 1024 agents and compare success rates against state-of-the-art MAPF planners. Finally, we experimentally validate the learned policies in a hybrid simulation of a factory mockup, involving both real-world and simulated robots.
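The "copy one policy onto any number of agents" execution model can be sketched minimally: a single shared policy (here a hand-coded greedy rule standing in for PRIMAL's trained network) is applied independently per agent using only local information, so the same code runs unchanged for any team size. The grid, goals, and rule are hypothetical.

```python
# Candidate single-cell moves; staying put is the default fallback.
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def policy(pos, goal, occupied):
    """Greedily pick a free neighboring cell that reduces distance to goal."""
    best, best_d = (0, 0), abs(goal[0] - pos[0]) + abs(goal[1] - pos[1])
    for dx, dy in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy)
        d = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
        if nxt not in occupied and d < best_d:
            best, best_d = (dx, dy), d
    return best

def step(positions, goals):
    """One synchronous step: every agent runs the same policy locally."""
    occupied = set(positions)
    out = []
    for pos, goal in zip(positions, goals):
        dx, dy = policy(pos, goal, occupied - {pos})
        out.append((pos[0] + dx, pos[1] + dy))
    return out

# Two agents with crossing objectives on a 5x5 grid.
positions, goals = [(0, 0), (4, 4)], [(4, 0), (0, 4)]
for _ in range(10):
    positions = step(positions, goals)
```

Adding a third agent only means appending to `positions` and `goals`; no planner state is shared, which is the property that lets the learned policy scale to 1024 agents.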