
Using Machine Learning to Detect Ghost Images in Automotive Radar

Published by: Florian Kraus
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Radar sensors are an important part of driver assistance systems and intelligent vehicles due to their robustness against all kinds of adverse conditions, e.g., fog, snow, rain, or even direct sunlight. This robustness stems from the substantially larger wavelength of radar compared to light-based sensors such as cameras or lidars. As a side effect, many surfaces act like mirrors at this wavelength, resulting in unwanted ghost detections. In this article, we present a novel approach to detecting these ghost objects by applying data-driven machine learning algorithms. For this purpose, we use a large-scale automotive data set with annotated ghost objects. We show that a state-of-the-art automotive radar classifier can be used to detect ghost objects alongside real objects. Furthermore, we are able to reduce the number of false positive detections caused by ghost images in some settings.
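The abstract above does not spell out the classifier internals, so the following is only a minimal sketch of the core idea: treating ghost suppression as an extra class in a per-detection classifier. The feature set (range, azimuth, Doppler velocity, RCS), the random-forest model, and the synthetic placeholder labels are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical per-detection features: range [m], azimuth [rad],
# radial (Doppler) velocity [m/s], radar cross-section [dBsm].
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(1, 100, n),     # range
    rng.uniform(-1.2, 1.2, n),  # azimuth
    rng.normal(0, 10, n),       # Doppler velocity
    rng.normal(5, 8, n),        # RCS
])
# Placeholder labels: 0 = real object, 1 = ghost (multipath mirror image).
y = rng.integers(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["real", "ghost"]))
```

On real annotated data, the interesting design choice is the one the paper describes: the ghost class is learned jointly with the regular object classes instead of being handled by a separate filtering step.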




Read also

Radar-based road user detection is an important topic in the context of autonomous driving applications. The resolution of conventional automotive radar sensors results in a sparse data representation which is difficult to refine during subsequent signal processing. On the other hand, a new sensor generation is waiting in the wings for its application in this challenging field. In this article, two sensors of different radar generations are evaluated against each other. The evaluation criterion is the performance on moving road user object detection and classification tasks. To this end, two data sets originating from an off-the-shelf radar and a high-resolution next-generation radar are compared. Special attention is given to how the two data sets are assembled in order to make them comparable. The utilized object detector consists of a clustering algorithm, a feature extraction module, and a recurrent neural network ensemble for classification. For the assessment, all components are evaluated both individually and, for the first time, as a whole. This allows for indicating where overall performance improvements have their origin in the pipeline. Furthermore, the generalization capabilities of both data sets are evaluated and important comparison metrics for automotive radar object detection are discussed. Results show clear benefits of the next-generation radar. Interestingly, those benefits stem not from better performance at the classification stage, but from vast improvements at the clustering stage.
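As a rough, self-contained illustration of the first two pipeline stages just described (clustering, then per-cluster feature extraction), here is a sketch on toy data. DBSCAN and the particular features are assumptions for illustration, and the recurrent neural network ensemble that performs the actual classification is omitted.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy radar point cloud: columns are x [m], y [m], Doppler velocity [m/s].
rng = np.random.default_rng(1)
points = np.vstack([
    rng.normal([10.0, 2.0, 5.0], 0.3, (40, 3)),    # moving object 1
    rng.normal([25.0, -4.0, -3.0], 0.3, (30, 3)),  # moving object 2
    rng.uniform([0.0, -20.0, -1.0], [50.0, 20.0, 1.0], (20, 3)),  # clutter
])

# Stage 1: group detections; DBSCAN labels sparse clutter as noise (-1).
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)

# Stage 2: simple per-cluster features. In the paper's pipeline, feature
# sequences over time are then fed to a recurrent network ensemble.
for cid in sorted(set(labels) - {-1}):
    cluster = points[labels == cid]
    print(f"cluster {cid}: n={len(cluster)}, "
          f"extent_xy={np.ptp(cluster[:, :2], axis=0).round(2)}, "
          f"mean_doppler={cluster[:, 2].mean():.2f}")
```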
Mutual interference among automotive radars has become a problem of wide concern. In this paper, a decentralized spectrum allocation approach is presented to avoid mutual interference among automotive radars. Although decentralized spectrum allocation has been extensively studied in cognitive radio sensor networks, two challenges arise for automotive radar sensors. First, the allocation approach must be dynamic, as all radars are mounted on moving vehicles. Second, the radars do not communicate with each other, so each one has quite limited information. A machine learning technique, reinforcement learning, is utilized because it can learn a decision-making policy in an unknown dynamic environment. As a single radar observation is incomplete, a long short-term memory recurrent network is used to aggregate radar observations over time, so that each radar can learn to choose a frequency subband by combining both present and past observations. Simulation experiments comparing the proposed approach with common spectrum allocation methods such as the random and myopic policies indicate that our approach outperforms the others.
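To make the observation-aggregation idea concrete, here is a minimal sketch of a deep recurrent Q-network: an LSTM summarizes the radar's partial observations over time, and a linear head scores each candidate subband. The dimensions are invented and the training machinery (replay buffer, exploration, target network) is omitted; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

N_SUBBANDS = 4  # hypothetical number of frequency subbands
OBS_DIM = 8     # hypothetical size of a per-step interference observation

class DRQNAgent(nn.Module):
    """LSTM aggregates incomplete observations; head outputs Q-values."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(OBS_DIM, 64, batch_first=True)
        self.q_head = nn.Linear(64, N_SUBBANDS)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, OBS_DIM) -- present and past observations.
        hidden, _ = self.lstm(obs_seq)
        return self.q_head(hidden[:, -1])  # one Q-value per subband

agent = DRQNAgent()
obs = torch.randn(1, 10, OBS_DIM)          # a window of 10 observations
action = agent(obs).argmax(dim=-1).item()  # greedy subband choice
print(f"selected subband: {action}")
```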
Autonomous radar has been an integral part of advanced driver assistance systems due to its robustness to adverse weather and various lighting conditions. Conventional automotive radars use digital signal processing (DSP) algorithms to process raw data into sparse radar pins that do not provide information regarding the size and orientation of the objects. In this paper, we propose a deep-learning based algorithm for radar object detection. The algorithm takes in radar data in its raw tensor representation and places probabilistic oriented bounding boxes around the detected objects in birds-eye-view space. We created a new multimodal dataset with 102544 frames of raw radar and synchronized LiDAR data. To reduce human annotation effort we developed a scalable pipeline to automatically annotate ground truth using LiDAR as reference. Based on this dataset we developed a vehicle detection pipeline using raw radar data as the only input. Our best performing radar detection model achieves 77.28% AP under oriented IoU of 0.3. To the best of our knowledge, this is the first attempt to investigate object detection with raw radar data for conventional corner automotive radars.
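Because the reported accuracy hinges on oriented IoU, a small sketch of how the overlap of two rotated birds-eye-view boxes can be computed may help (using shapely for the polygon intersection). The box parameters below are made up, and the paper's evaluation code may differ.

```python
import numpy as np
from shapely.geometry import Polygon

def oriented_box(cx, cy, w, l, theta):
    """Corner polygon of a box centered at (cx, cy), width w, length l,
    rotated by theta radians in birds-eye view."""
    c, s = np.cos(theta), np.sin(theta)
    half = np.array([[ l/2,  w/2], [ l/2, -w/2],
                     [-l/2, -w/2], [-l/2,  w/2]])
    return Polygon(half @ np.array([[c, s], [-s, c]]) + [cx, cy])

def oriented_iou(a, b):
    inter = a.intersection(b).area
    return inter / (a.area + b.area - inter)

pred = oriented_box(10.0, 5.0, 1.9, 4.6, np.deg2rad(12))
gt = oriented_box(10.3, 5.2, 1.8, 4.5, np.deg2rad(8))
iou = oriented_iou(pred, gt)
print(f"oriented IoU = {iou:.2f}, match at 0.3 threshold: {iou >= 0.3}")
```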
Camera and Lidar processing have been revolutionized by the rapid development of deep learning model architectures. Automotive radar is one of the crucial elements of automated driver assistance and autonomous driving systems. Unlike camera- and Lidar-based methods, radar still relies on traditional signal processing techniques. We believe this is the missing link to achieve the most robust perception system. Identifying drivable space and occupied space is the first step in any autonomous decision-making task. An occupancy grid map representation of the environment is often used for this purpose. In this paper, we propose PolarNet, a deep neural model that processes radar information in the polar domain for open space segmentation. We explore various input-output representations. Our experiments show that PolarNet is an effective way to process radar data, achieving state-of-the-art performance and processing speeds while maintaining a compact size.
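As a sketch of what processing radar in the polar domain can mean on the input side, the following rasterizes 2D detections into a range-azimuth grid of the kind a polar segmentation network would consume. Grid resolution and field of view are arbitrary assumptions; PolarNet's actual input representation may differ.

```python
import numpy as np

# Hypothetical polar grid: 64 range bins x 48 azimuth bins covering
# 50 m and a +/-60 degree field of view.
N_RANGE, N_AZ = 64, 48
R_MAX = 50.0
AZ_MIN, AZ_MAX = -np.pi / 3, np.pi / 3

def to_polar_grid(xy):
    """Bin 2D detections (x, y) into a range-azimuth count grid."""
    r = np.hypot(xy[:, 0], xy[:, 1])
    az = np.arctan2(xy[:, 1], xy[:, 0])
    keep = (r < R_MAX) & (az > AZ_MIN) & (az < AZ_MAX)
    r_bin = (r[keep] / R_MAX * N_RANGE).astype(int)
    az_bin = ((az[keep] - AZ_MIN) / (AZ_MAX - AZ_MIN) * N_AZ).astype(int)
    grid = np.zeros((N_RANGE, N_AZ))
    np.add.at(grid, (r_bin, az_bin), 1.0)
    return grid

detections = np.random.default_rng(2).uniform([0.0, -30.0], [50.0, 30.0],
                                              (200, 2))
grid = to_polar_grid(detections)
print(grid.shape, int(grid.sum()), "detections binned")
```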
We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images.
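The variation of information mentioned above is the sum of the two conditional entropies between partitions, VI(A, B) = H(A|B) + H(B|A), and is zero exactly when the two segmentations agree. A compact sketch, computed from two toy label images:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A); lower is better, 0 means identical."""
    a, b = np.ravel(seg_a), np.ravel(seg_b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)          # contingency table
    p = joint / joint.sum()                # joint label distribution
    pa, pb = p.sum(axis=1), p.sum(axis=0)  # marginals
    i, j = np.nonzero(p)
    pj = p[i, j]
    return (-np.sum(pj * np.log2(pj / pb[j]))    # H(A|B)
            - np.sum(pj * np.log2(pj / pa[i])))  # H(B|A)

truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1]])
over = np.array([[0, 2, 1, 1],  # over-segmented copy of `truth`
                 [0, 2, 1, 1]])
print(f"VI(truth, truth) = {variation_of_information(truth, truth):.3f}")
print(f"VI(truth, over)  = {variation_of_information(truth, over):.3f}")
```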
