Robust sensing and perception in adverse weather conditions remains one of the biggest challenges for realizing reliable autonomous vehicle mobility services. Prior work has established that rainfall rate is a useful measure of the adversity of atmospheric weather conditions. This work presents a probabilistic hierarchical Bayesian model that infers rainfall rate from automotive lidar point cloud sequences with high accuracy and reliability. The model is a hierarchical mixture-of-experts model, or a probabilistic decision tree, with gating and expert nodes consisting of variational logistic and linear regression models. Experimental data used to train and evaluate the model is collected in a large-scale rainfall experiment facility from both stationary and moving vehicle platforms. The results show prediction accuracy comparable to the measurement resolution of a disdrometer and demonstrate the soundness and usefulness of the uncertainty estimates. The model achieves an RMSE of 2.42 mm/h after filtering out uncertain predictions. The error is comparable to the mean rainfall rate change of 3.5 mm/h between measurements. Model parameter studies show how predictive performance changes with tree depth, sampling duration, and crop box dimensions. A second experiment demonstrates that rainfall rates above 300 mm/h can be predicted using a different lidar sensor, indicating sensor independence.
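To make the model structure concrete, the sketch below illustrates a depth-1 probabilistic decision tree: a logistic gate routes a lidar-derived feature vector between two linear experts, and the prediction is the gate-weighted mixture. The abstract describes variational Bayesian regressors at the gating and expert nodes; this sketch substitutes point-estimate regressors, and the feature names and weight values are purely hypothetical.

# Minimal sketch of a depth-1 hierarchical mixture of experts for rainfall-rate
# regression from lidar-derived features (point-estimate stand-in for the
# variational logistic/linear regression nodes; all numbers are hypothetical).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate_prob(x, w_gate):
    # Probability of routing the feature vector x to the left expert.
    return sigmoid(x @ w_gate)

def moe_predict(x, w_gate, w_left, w_right):
    # Gate-weighted mixture of two linear experts: E[y | x].
    g = gate_prob(x, w_gate)
    return g * (x @ w_left) + (1.0 - g) * (x @ w_right)

# Hypothetical 3-dimensional feature vector (e.g. echo count, mean intensity, bias term).
x = np.array([0.4, 1.2, 1.0])
w_gate  = np.array([ 2.0, -0.5, 0.1])   # gating node (logistic regression weights)
w_left  = np.array([ 5.0,  1.0, 0.3])   # expert for low rainfall rates
w_right = np.array([12.0,  4.0, 1.5])   # expert for high rainfall rates
print(moe_predict(x, w_gate, w_left, w_right))  # predicted rainfall rate in mm/h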
In this work, we propose the use of radar with advanced deep segmentation models to identify open space in parking scenarios. A publicly available dataset of radar observations, called SCORP, was collected. Deep models are evaluated with various radar input representations. Our proposed approach achieves low memory usage and real-time processing speeds, and is thus well suited for embedded deployment.
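As an illustration of semantic segmentation on a radar input representation, the sketch below runs a tiny fully convolutional network over a range-azimuth tensor and produces per-cell open-space logits. This is not the architecture evaluated on SCORP; the input shape, layer sizes, and two-class output are illustrative assumptions only.

# Minimal PyTorch sketch of a fully convolutional segmentation head on a
# range-azimuth radar tensor (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class TinyRadarSegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-cell class logits
        )

    def forward(self, x):
        return self.body(x)

# One radar frame: batch of 1, single channel, 64 range bins x 128 azimuth bins (assumed shape).
frame = torch.randn(1, 1, 64, 128)
logits = TinyRadarSegNet()(frame)
print(logits.shape)  # torch.Size([1, 2, 64, 128]) -> per-cell open-space scores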
Support estimation (SE) of a sparse signal refers to finding the location indices of the non-zero elements in a sparse representation. Most of the traditional approaches dealing with the SE problem are iterative algorithms based on greedy methods or optimization techniques. Indeed, a vast majority of them use sparse signal recovery techniques to obtain support sets instead of directly mapping the non-zero locations from denser measurements (e.g., compressively sensed measurements). This study proposes a novel approach for learning such a mapping from a training set. To accomplish this objective, the Convolutional Support Estimator Networks (CSENs), each with a compact configuration, are designed. The proposed CSEN can be a crucial tool for the following scenarios: (i) real-time and low-cost support estimation can be applied in any mobile and low-power edge device for anomaly localization, simultaneous face recognition, etc.; (ii) the CSEN outputs can directly be used as prior information to improve the performance of sparse signal recovery algorithms. The results over the benchmark datasets show that state-of-the-art performance levels can be achieved by the proposed approach with a significantly reduced computational complexity.
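The sketch below shows the general idea of a learned support estimator: a coarse proxy of the sparse signal (for instance, a pseudo-inverse applied to the compressive measurements) is passed through a small CNN that outputs a per-element probability of belonging to the support. The layer sizes and the 2D proxy shape are assumptions for illustration, not the compact CSEN configurations from the paper.

# Minimal sketch of a convolutional support estimator: proxy in, per-element
# support probabilities out (illustrative layer sizes and input shape).
import torch
import torch.nn as nn

class TinySupportEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-element support probability
        )

    def forward(self, proxy):
        return self.net(proxy)

proxy = torch.randn(1, 1, 16, 16)          # coarse estimate reshaped to 2D (assumed)
support_prob = TinySupportEstimator()(proxy)
support_mask = (support_prob > 0.5)        # thresholded support estimate
print(support_mask.sum().item(), "elements predicted in the support")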
With increasing focus on privacy protection, alternative methods to identify the vehicle operator without the use of biometric identifiers have gained traction for automotive data analysis. The wide variety of sensors installed on modern vehicles enables autonomous driving, reduces accidents, and improves vehicle handling. On the other hand, the data these sensors collect reflect drivers' habits. Drivers' use of turn indicators, following distance, rate of acceleration, etc., can be transformed into an embedding that is representative of their behavior and identity. In this paper, we develop a deep learning architecture (Driver2vec) to map a short interval of driving data into an embedding space that represents the driver's behavior to assist in driver identification. We develop a custom model that leverages the performance gains of temporal convolutional networks, the embedding separation power of triplet loss, and the classification accuracy of gradient boosting decision trees. Trained on a dataset of 51 drivers provided by Nervtech, Driver2vec is able to accurately identify the driver from a short 10-second interval of sensor data, achieving an average pairwise driver identification accuracy of 83.1%, which is considerably higher than the performance obtained in previous studies. We then analyze the performance of Driver2vec to show that it is consistent across scenarios and that the modeling choices are sound.
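The sketch below illustrates the core training step of such an approach: a temporal convolutional encoder maps a multichannel sensor window to an embedding, and a triplet margin loss pulls windows from the same driver together. The channel count, window length, and layer sizes are assumptions, and the downstream gradient-boosting classifier is omitted.

# Minimal sketch of a TCN-style encoder trained with triplet loss for driver
# embeddings (illustrative sizes; gradient-boosting stage not shown).
import torch
import torch.nn as nn

class TinyTCNEncoder(nn.Module):
    def __init__(self, in_channels=8, embed_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2, dilation=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=4, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(32, embed_dim)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x).mean(dim=-1)        # global average pooling over time
        return self.head(h)

encoder = TinyTCNEncoder()
anchor, positive, negative = (torch.randn(4, 8, 100) for _ in range(3))
loss = nn.TripletMarginLoss(margin=1.0)(
    encoder(anchor), encoder(positive), encoder(negative)
)
loss.backward()  # same-driver embeddings pulled together, other drivers pushed apart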
Lidar sensors are often used in mobile robots and autonomous vehicles to complement camera, radar and ultrasonic sensors for environment perception. Typically, perception algorithms are trained to detect moving and static objects and to estimate the ground surface, but intentionally ignore weather effects to reduce false detections. In this work, we present an in-depth analysis of automotive lidar performance under harsh weather conditions, i.e., heavy rain and dense fog. An extensive data set has been recorded for various fog and rain conditions, which forms the basis for the conducted in-depth analysis of the point cloud under changing environmental conditions. In addition, we introduce a novel approach to detect and classify rain or fog with lidar sensors only, achieving a mean intersection over union of 97.14% on a data set recorded in controlled environments. The analysis of weather influences on the performance of lidar sensors and the weather detection are important steps towards improving safety levels for autonomous driving in adverse weather conditions by providing reliable information to adapt vehicle behavior.
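As a rough illustration of frame-wise weather classification from lidar data, the sketch below extracts a few hand-crafted statistics per scan and feeds them to an off-the-shelf classifier. The chosen features, the classifier, and the dummy training data are assumptions for illustration, not the feature set or model reported in the paper.

# Minimal sketch of lidar-only weather classification from per-frame point cloud
# statistics (illustrative features and classifier; dummy data and labels).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(points):
    # points: (N, 4) array of x, y, z, intensity for one lidar scan.
    ranges = np.linalg.norm(points[:, :3], axis=1)
    return np.array([len(points), ranges.mean(), points[:, 3].mean()])

rng = np.random.default_rng(0)
frames = [rng.random((rng.integers(500, 2000), 4)) for _ in range(20)]
labels = rng.integers(0, 3, size=20)             # 0 = clear, 1 = rain, 2 = fog (dummy labels)
X = np.stack([frame_features(f) for f in frames])
clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
print(clf.predict(X[:3]))                        # predicted weather class per frame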
Automotive radar has been an integral part of advanced driver assistance systems due to its robustness to adverse weather and various lighting conditions. Conventional automotive radars use digital signal processing (DSP) algorithms to process raw data into sparse radar pins that do not provide information regarding the size and orientation of the objects. In this paper, we propose a deep-learning-based algorithm for radar object detection. The algorithm takes in radar data in its raw tensor representation and places probabilistic oriented bounding boxes around the detected objects in bird's-eye-view space. We created a new multimodal dataset with 102,544 frames of raw radar and synchronized LiDAR data. To reduce human annotation effort, we developed a scalable pipeline to automatically annotate ground truth using LiDAR as reference. Based on this dataset, we developed a vehicle detection pipeline using raw radar data as the only input. Our best-performing radar detection model achieves 77.28% AP under an oriented IoU of 0.3. To the best of our knowledge, this is the first attempt to investigate object detection with raw radar data for conventional corner automotive radars.
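To clarify the oriented-IoU criterion used in the evaluation, the sketch below scores two bird's-eye-view boxes parameterized as (x, y, width, length, yaw) by the IoU of their rotated rectangles and compares it against a 0.3 threshold. The polygon handling via shapely and the example box values are assumptions for illustration, not the paper's evaluation code.

# Minimal sketch of oriented IoU between two bird's-eye-view bounding boxes.
import numpy as np
from shapely.geometry import Polygon

def bev_box_polygon(x, y, w, l, yaw):
    # Axis-aligned corners, then rotate by yaw and translate to (x, y).
    corners = np.array([[-l/2, -w/2], [l/2, -w/2], [l/2, w/2], [-l/2, w/2]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    return Polygon(corners @ rot.T + np.array([x, y]))

def oriented_iou(box_a, box_b):
    pa, pb = bev_box_polygon(*box_a), bev_box_polygon(*box_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)

pred = (10.0, 2.0, 1.8, 4.5, 0.10)    # hypothetical predicted vehicle box
gt   = (10.3, 2.1, 1.8, 4.6, 0.05)    # hypothetical ground-truth box
print(oriented_iou(pred, gt) >= 0.3)  # True -> counts as a detection at IoU 0.3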