
Deep, spatially coherent Occupancy Maps based on Radar Measurements

Added by Daniel Bauer
Publication date: 2019
Language: English





One essential step in realizing modern driver assistance technology is accurate knowledge of the location of static objects in the environment. In this work, we use artificial neural networks to predict the occupancy state of a whole scene in an end-to-end manner. This stands in contrast to the traditional approach of accumulating each detection's influence on the occupancy state, and it allows the network to learn spatial priors that can be used to interpolate the environment's occupancy state. We show that these priors make our method suitable for predicting dense occupancy estimates from sparse, highly uncertain inputs, as given by automotive radars, even for complex urban scenarios. Furthermore, we demonstrate that these estimates can be used for large-scale mapping applications.
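
The contrast between detection accumulation and end-to-end prediction can be illustrated with a small convolutional encoder-decoder that maps a sparse radar detection grid to a dense occupancy estimate. This is a minimal sketch with illustrative layer sizes, not the authors' actual architecture:

```python
# Minimal sketch of end-to-end occupancy prediction from a sparse radar
# grid (illustrative layer sizes, not the paper's architecture).
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: aggregate sparse detections into spatial features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to full grid resolution, so learned
        # spatial priors can fill in cells that received no detections.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, radar_grid):
        # radar_grid: (B, 1, H, W) detection counts or amplitudes per cell.
        # Returns a per-cell occupancy probability in [0, 1].
        return torch.sigmoid(self.decoder(self.encoder(radar_grid)))

net = OccupancyNet()
sparse_input = torch.zeros(1, 1, 64, 64)
sparse_input[0, 0, 10, 20] = 1.0   # a single radar detection
occupancy = net(sparse_input)      # dense (1, 1, 64, 64) estimate
```

Because the decoder reconstructs the full grid from pooled features, the network can assign occupancy probabilities to cells without any detections, which is what the learned spatial priors amount to in practice.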



Related research

Huan Yin, Yue Wang, Li Tang (2020)
Radar and lidar are two different range sensors, each with pros and cons for various perception tasks on mobile robots and autonomous vehicles. In this paper, a Monte Carlo system is used to localize a robot carrying a rotating radar sensor on 2D lidar maps. We first train a conditional generative adversarial network to transfer raw radar data to lidar-like data and obtain reliable radar points from the generator. An efficient radar odometry is then included in the Monte Carlo system. Combining the initial guess from odometry, a measurement model is proposed to match the radar data against prior lidar maps for final 2D positioning. We demonstrate the effectiveness of the proposed localization framework on a public multi-session dataset. The experimental results show that our system can achieve high accuracy for long-term localization in outdoor scenes.
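
The role of the measurement model can be sketched as a per-particle weight, computed by projecting the (generator-transferred) radar points into the map frame and reading off map likelihoods. A minimal sketch assuming a likelihood-field-style model; function names are illustrative:

```python
# Hedged sketch of a Monte Carlo localization measurement step: radar
# points (assumed already transferred to lidar-like points) are scored
# against a prior lidar occupancy grid. Not the paper's exact model.
import numpy as np

def particle_weight(pose, points, lidar_map, resolution):
    """pose: (x, y, theta); points: (N, 2) in the sensor frame;
    lidar_map: 2D array of occupancy probabilities."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    world = points @ R.T + np.array([x, y])        # sensor -> map frame
    cells = np.floor(world / resolution).astype(int)
    h, w = lidar_map.shape
    valid = (cells[:, 0] >= 0) & (cells[:, 0] < w) & \
            (cells[:, 1] >= 0) & (cells[:, 1] < h)
    # Weight: product of per-point map likelihoods (log-sum for stability).
    probs = lidar_map[cells[valid, 1], cells[valid, 0]]
    return np.exp(np.log(probs + 1e-9).sum())
```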
Huan Yin, Yue Wang, Rong Xiong (2021)
We present a heterogeneous localization framework for solving radar global localization and pose tracking on pre-built lidar maps. To bridge the gap between sensing modalities, deep neural networks are constructed to create a shared embedding space for radar scans and lidar maps. The learned feature embeddings support similarity measurement, thus improving map retrieval and data matching respectively. On the RobotCar and MulRan datasets, we demonstrate the effectiveness of the proposed framework in comparison to Scan Context and RaLL. In addition, the proposed pose-tracking pipeline uses fewer neural networks than the original RaLL.
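
Once both modalities live in a shared embedding space, map retrieval reduces to nearest-neighbor search over learned vectors. A minimal sketch assuming the radar and map encoders (deep networks producing D-dimensional embeddings) already exist and are not shown:

```python
# Illustrative sketch of place retrieval in a shared embedding space:
# a radar scan embedding is matched to the most similar lidar-map
# embedding by cosine similarity. The encoders themselves are assumed.
import numpy as np

def retrieve(radar_emb, map_embs):
    """radar_emb: (D,) query vector; map_embs: (M, D) database of
    lidar-map embeddings. Returns the index of the best-matching place."""
    q = radar_emb / np.linalg.norm(radar_emb)
    db = map_embs / np.linalg.norm(map_embs, axis=1, keepdims=True)
    return int(np.argmax(db @ q))   # highest cosine similarity
```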
We present a novel method for generating, predicting, and using Spatiotemporal Occupancy Grid Maps (SOGMs), which embed future information about dynamic scenes. Our automated generation process creates ground-truth SOGMs from previous navigation data. We build on prior work to annotate lidar points based on their dynamic properties, which are then projected onto time-stamped 2D grids: SOGMs. We design a 3D-2D feedforward architecture, trained to predict the future time steps of SOGMs given 3D lidar frames as input. Our pipeline is entirely self-supervised, thus enabling lifelong learning for robots. The network is composed of a 3D back-end that extracts rich features and enables semantic segmentation of the lidar frames, and a 2D front-end that predicts the future information embedded in the SOGMs for use in planning. We also design a navigation pipeline that uses these predicted SOGMs. We provide both quantitative and qualitative insights into the predictions and validate our network design choices with a comparison to the state of the art and ablation studies.
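
Ground-truth SOGM generation essentially amounts to binning labeled lidar points into a time-stamped stack of 2D grids. A hedged sketch with illustrative shapes and class labels, not the paper's exact pipeline:

```python
# Minimal sketch of SOGM ground-truth generation: lidar points that
# already carry dynamic/static labels are binned into time-stamped
# 2D grids. Shapes and label encoding are illustrative assumptions.
import numpy as np

def build_sogm(points, labels, times, n_steps, grid_size, resolution):
    """points: (N, 2) xy positions; labels: (N,) class ids
    (e.g. 0 static, 1 dynamic); times: (N,) time-step indices in
    [0, n_steps). Returns (n_steps, grid_size, grid_size, n_classes)."""
    n_classes = int(labels.max()) + 1
    sogm = np.zeros((n_steps, grid_size, grid_size, n_classes))
    # Shift cell indices so the robot sits at the grid center.
    cells = np.floor(points / resolution).astype(int) + grid_size // 2
    ok = (cells >= 0).all(axis=1) & (cells < grid_size).all(axis=1)
    sogm[times[ok], cells[ok, 1], cells[ok, 0], labels[ok]] = 1.0
    return sogm
```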
Deep learning (DL) has recently attracted increasing interest for improving object type classification with automotive radar. In addition to high accuracy, it is crucial for decision making in autonomous vehicles to evaluate the reliability of the predictions; however, the decisions of DL networks are non-transparent. Current DL research has investigated how uncertainties of predictions can be quantified, and in this article we evaluate the potential of these methods for safe automotive radar perception. In particular, we evaluate how uncertainty quantification can support radar perception under (1) domain shift, (2) corruptions of input signals, and (3) the presence of unknown objects. We find that, in agreement with phenomena observed in the literature, deep radar classifiers are overly confident, even in their wrong predictions. This raises concerns about using the confidence values for decision making under uncertainty, as the model fails to signal when it cannot handle an unknown situation. Accurate confidence values would allow optimal integration of multiple information sources, e.g. via sensor fusion. We show that applying state-of-the-art post-hoc uncertainty calibration significantly improves the quality of the confidence measures, thereby partially resolving the over-confidence problem. Our investigation shows that further research into training and calibrating DL networks is necessary and offers great potential for safe automotive object classification with radar sensors.
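
Temperature scaling is one widely used post-hoc calibration method of the kind evaluated here: a single scalar T is fitted on held-out data so that softmax confidences better match empirical accuracy. A generic sketch, not necessarily the exact method used in the article:

```python
# Generic temperature-scaling sketch: fit one scalar T on held-out
# logits so that softmax(logits / T) is better calibrated.
import torch
import torch.nn.functional as F

def fit_temperature(logits, targets, steps=200, lr=0.01):
    """logits: (N, C) detached held-out network outputs;
    targets: (N,) integer labels. Returns the fitted T > 0."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Minimize NLL of the temperature-scaled logits.
        loss = F.cross_entropy(logits / log_t.exp(), targets)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Usage: confidences = softmax(logits / T) instead of softmax(logits).
```

Note that temperature scaling rescales all confidences monotonically, so it leaves the predicted class unchanged while reducing over-confidence.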
Research on localization and perception for autonomous driving is mainly focused on camera and LiDAR datasets, and rarely on radar data. Manually labeling sparse radar point clouds is challenging. For dataset generation, we propose the cross-sensor Radar Artifact Labeling Framework (RALF). Automatically generated labels for automotive radar data help to cure radar shortcomings such as artifacts for artificial intelligence applications. RALF provides plausibility labels for raw radar detections, distinguishing between artifacts and targets. The optical evaluation backbone consists of a generalized monocular depth image estimation from surround-view cameras plus LiDAR scans. Modern car sensor sets of cameras and LiDAR make it possible to calibrate image-based relative depth information in overlapping sensing areas. K-nearest-neighbor matching relates the optical perception point cloud to the raw radar detections. In parallel, a temporal tracking evaluation considers the radar detections' transient behavior. Based on the distance between matches, and respecting both sensor and model uncertainties, we propose a plausibility rating for every radar detection. We validate the results by evaluating error metrics on a semi-manually labeled ground-truth dataset of $3.28\cdot10^6$ points. Besides generating plausible radar detections, the framework enables further labeled low-level radar signal datasets for perception and autonomous driving learning tasks.
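
The core matching step can be sketched as a nearest-neighbor query of radar detections against the fused optical point cloud, with the match distance mapped to a plausibility score. The Gaussian falloff below is an illustrative choice, not RALF's exact rating:

```python
# Hedged sketch of cross-sensor plausibility scoring: each raw radar
# detection is compared to its nearest neighbor in the camera/LiDAR
# point cloud; close matches suggest a real target, far ones an artifact.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def plausibility(radar_pts, optical_pts, sigma=0.5):
    """radar_pts: (N, 2) or (N, 3); optical_pts: (M, 2) or (M, 3) fused
    camera+LiDAR cloud. Returns a score in (0, 1] per radar detection."""
    nn = NearestNeighbors(n_neighbors=1).fit(optical_pts)
    dist, _ = nn.kneighbors(radar_pts)
    # Gaussian falloff of the match distance; sigma models the combined
    # sensor and calibration uncertainty (illustrative value).
    return np.exp(-0.5 * (dist[:, 0] / sigma) ** 2)
```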
