
Mapping with Reflection -- Detection and Utilization of Reflection in 3D Lidar Scans

Added by Zhijie Yang
Publication date: 2019
Language: English





This paper presents a method to detect reflections in 3D light detection and ranging (Lidar) scans and uses them to classify points and to map objects outside the line of sight. Our software analyzes the point cloud with several approaches, including intensity peak detection, dual-return detection, plane fitting, and boundary extraction. Together, these approaches classify the point cloud and detect the reflections in it. By mirroring the reflected points on the detected window pane and attaching classification labels to the points, we improve the map quality in a Simultaneous Localization and Mapping (SLAM) framework. Experiments using real scan data and ground truth data demonstrate the effectiveness of our method.
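The mirroring step lends itself to a short illustration. The sketch below (plain numpy; the function name and parameters are hypothetical, not the authors' code) reflects detected reflection points across a fitted window-pane plane so they land at the true positions of the mirrored objects:

    import numpy as np

    def mirror_points(points, plane_normal, plane_point):
        """Reflect 3D points across a plane, e.g. a detected window pane.

        points:       (N, 3) array of reflection points
        plane_normal: normal vector of the fitted plane
        plane_point:  any point lying on the plane
        """
        n = plane_normal / np.linalg.norm(plane_normal)
        # Signed distance of each point from the plane
        d = (points - plane_point) @ n
        # Move each point back across the plane by twice its distance
        return points - 2.0 * d[:, None] * n

    # Example: returns seen "through" a vertical window pane at x = 2
    pts = np.array([[3.0, 1.0, 1.5], [4.0, -0.5, 1.0]])
    mirrored = mirror_points(pts, np.array([1.0, 0.0, 0.0]),
                             np.array([2.0, 0.0, 0.0]))

The same transform, applied after plane fitting has located the pane, is what allows objects outside the line of sight to be placed correctly in the map.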

Related research

Ego-motion estimation is a fundamental requirement for most mobile robotic applications. Through sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimations. In this paper we introduce a tightly coupled lidar-IMU fusion method. By jointly minimizing the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO) performs well with acceptable drift over long-term experiments, even in challenging cases where the lidar measurements are degraded. In addition, to obtain more reliable estimates of the lidar poses, a rotation-constrained refinement algorithm (LIO-mapping) is proposed to further align the lidar poses with the global map. The experimental results demonstrate that the proposed method can estimate the poses of the sensor pair at the IMU update rate with high precision, even under fast motion conditions or with insufficient features.
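The phrase "jointly minimizing the cost" can be made concrete with a schematic objective. The sketch below is a generic tightly coupled formulation, not the paper's exact cost: both residual terms depend on the same state, so lidar and IMU constraints are optimized together rather than filtered sequentially. All names are illustrative:

    import numpy as np

    def joint_cost(x, lidar_residuals, imu_residuals, W_lidar, W_imu):
        """Tightly coupled cost: both sensor terms share the state x.

        lidar_residuals(x): e.g. point-to-plane distances of the scan
        imu_residuals(x):   e.g. preintegration errors between poses
        W_lidar, W_imu:     information (inverse covariance) matrices
        """
        r_l = lidar_residuals(x)
        r_i = imu_residuals(x)
        return r_l @ W_lidar @ r_l + r_i @ W_imu @ r_i

    # Stand-in residual functions, for illustration only
    c = joint_cost(np.zeros(3),
                   lambda x: np.array([0.02, -0.01]),
                   lambda x: np.array([0.005]),
                   np.eye(2), np.eye(1))

Because a single optimizer sees both terms, degraded lidar residuals can be outvoted by the IMU term, which is what makes the tight coupling robust in feature-poor scenes.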
Simultaneous Localization and Mapping (SLAM) has been considered a solved problem thanks to the progress made in the past few years. However, the great majority of LiDAR-based SLAM algorithms are designed for a specific type of payload and therefore do not generalize across different platforms. In practice, this drawback makes the development, deployment, and maintenance of an algorithm difficult. Consequently, our work focuses on improving compatibility across different sensing payloads. Specifically, we extend the Cartographer SLAM library to handle different types of LiDAR, including fixed or rotating, 2D or 3D LiDARs. By replacing the localization module of Cartographer and maintaining the sparse pose graph (SPG), the proposed framework can create high-quality 3D maps in real time on different sensing payloads. Additionally, it brings the benefit of simplicity, with only a few parameters needing to be adjusted for each sensor type.
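One way to read "handle different types of LiDAR" is as a thin normalization layer that converts every sensor's output into a common Cartesian point cloud before the shared pose-graph backend runs. The sketch below is a hypothetical illustration of such a layer, not Cartographer's actual API:

    import numpy as np

    def polar_2d_to_cloud(ranges, angle_min, angle_increment):
        """Convert a planar 2D laser scan into an (N, 3) point cloud."""
        angles = angle_min + angle_increment * np.arange(len(ranges))
        valid = np.isfinite(ranges)
        x = ranges[valid] * np.cos(angles[valid])
        y = ranges[valid] * np.sin(angles[valid])
        return np.column_stack([x, y, np.zeros(valid.sum())])

    def cloud_3d(points):
        """3D lidars already deliver Cartesian points; only validate."""
        points = np.asarray(points, dtype=float)
        assert points.ndim == 2 and points.shape[1] == 3
        return points

With every payload reduced to the same representation, the sparse pose graph and the map builder never need to know which sensor produced the data.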
In this paper, we present INertial Lidar Localisation Autocalibration And MApping (IN2LAAMA): an offline probabilistic framework for localisation, mapping, and extrinsic calibration based on a 3D lidar and a 6-DoF IMU. Most of today's lidars collect geometric information about the surrounding environment by sweeping lasers across their field of view. Consequently, 3D points in one lidar scan are acquired at different timestamps. If the sensor trajectory is not accurately known, the scans are affected by the phenomenon known as motion distortion. The proposed method leverages preintegration with a continuous representation of the inertial measurements to characterise the system's motion at any point in time. It enables precise correction of the motion distortion without relying on any explicit motion model. The system's pose, velocity, biases, and time-shift are estimated via a full batch optimisation that includes automatically generated loop-closure constraints. The autocalibration and the registration of lidar data rely on planar and edge features matched across pairs of scans. The performance of the framework is validated through simulated and real-data experiments.
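Motion-distortion correction ("de-skewing") can be illustrated with a simplified version of the idea: interpolate the sensor pose at each point's timestamp and re-express the point in the scan-start frame. The paper uses a continuous preintegrated-IMU representation; the sketch below substitutes plain slerp/linear interpolation between two known poses, and all names are illustrative:

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def deskew_scan(points, timestamps, t0, t1, pose0, pose1):
        """Undistort a scan by interpolating the sensor pose per point.

        pose0, pose1: (Rotation, translation) at scan start t0 and end t1.
        """
        R0, p0 = pose0
        R1, p1 = pose1
        slerp = Slerp([t0, t1], Rotation.concatenate([R0, R1]))
        alphas = (timestamps - t0) / (t1 - t0)
        out = np.empty_like(points)
        for i, (pt, a, t) in enumerate(zip(points, alphas, timestamps)):
            R_t = slerp([t])[0]                  # rotation at time t
            p_t = (1 - a) * p0 + a * p1          # translation at time t
            world = R_t.apply(pt) + p_t          # point in world frame
            out[i] = R0.inv().apply(world - p0)  # back to scan-start frame
        return out

    # Example: identity start pose, slight yaw by the end of the sweep
    pts = deskew_scan(np.random.rand(5, 3), np.linspace(0.0, 0.1, 5),
                      0.0, 0.1,
                      (Rotation.identity(), np.zeros(3)),
                      (Rotation.from_euler("z", 0.05), np.array([0.1, 0.0, 0.0])))

IN2LAAMA's continuous inertial representation plays the role of the interpolator here, which is why it needs no explicit motion model.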
Qiang Cheng, Qing-Feng Sun (2021)
We propose a universal method to detect specular Andreev reflection, taking the simple two-dimensional Weyl nodal-line semimetal-superconductor double-junction structure as an example. Quasiclassical quantization conditions are established for the energy levels of bound states formed in the middle semimetal along a closed path. The conditions are based entirely on the intrinsic character of the specularly reflected hole, which has the same sign relation between its wave vector and group velocity as the incident electron. This brings about a periodic oscillation of the conductance with the length of the middle semimetal, an oscillation that is absent for the retro-Andreev-reflected hole, which has the opposite sign relation to the incident electron. The positions of the conductance peaks and the oscillation period can be precisely predicted by the quantization conditions. Our detection method is independent of the details of the materials, which may promote the experimental detection of, and further research on, specular Andreev reflection as well as its applications in superconducting electronics.
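The reasoning behind the oscillation can be made explicit with a schematic Bohr-Sommerfeld-type condition; the form below is illustrative, with the junction-dependent reflection phases lumped into a single phi, and is not the paper's exact formula:

    % Schematic bound-state condition over the closed electron-hole path
    % in the middle semimetal of length L (phi: lumped reflection phases)
    \[
      (k_e - k_h)\,L + \phi = 2\pi n, \qquad n \in \mathbb{Z},
    \]
    % which would give a conductance oscillation in L with period
    \[
      \Delta L = \frac{2\pi}{k_e - k_h}.
    \]

For retro-Andreev reflection the hole nearly retraces the electron path (k_h close to k_e near the Fermi level), so the L-dependent phase almost cancels and no oscillation appears; for specular reflection the sign relation between wave vector and group velocity keeps the phase growing with L, so the bound-state energies, and hence the conductance, oscillate with a period set by the condition above.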
We present a method for detecting and mapping trees in noisy stereo camera point clouds, using a learned 3-D object detector. Inspired by recent advancements in 3-D object detection using a pseudo-lidar representation for stereo data, we train a PointRCNN detector to recognize trees in forest-like environments. We generate detector training data with a novel automatic labeling process that clusters a fused global point cloud. This process annotates large stereo point cloud training data sets with minimal user supervision and, unlike previous pseudo-lidar detection pipelines, requires no 3-D ground truth from other sensors such as lidar. Our mapping system additionally uses a Kalman filter to associate detections and consistently estimate the positions and sizes of trees. We collect a data set for tree detection consisting of 8680 stereo point clouds, and validate our method on an outdoor test sequence. Our results demonstrate robust tree recognition in noisy stereo data at ranges of up to 7 meters, on 720p-resolution images from a Stereolabs ZED 2 camera. Code and data are available at https://github.com/brian-h-wang/pseudolidar-tree-detection.
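The tracking component can be sketched with a minimal constant-position Kalman filter per tree; the class below is a hypothetical illustration, not the authors' implementation, using identity state and measurement models:

    import numpy as np

    class TreeTrack:
        """Constant-position Kalman filter for one tree's (x, y) location."""

        def __init__(self, z0, meas_var=0.25):
            self.x = np.asarray(z0, dtype=float)  # position estimate
            self.P = np.eye(2)                    # estimate covariance
            self.R = np.eye(2) * meas_var         # measurement noise

        def update(self, z):
            # Kalman gain for identity state/measurement models
            K = self.P @ np.linalg.inv(self.P + self.R)
            self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
            self.P = (np.eye(2) - K) @ self.P

    # Associate each new detection with its nearest track, then update
    track = TreeTrack([4.0, 1.0])
    track.update([4.2, 0.9])

Repeated updates shrink the covariance, so a tree's estimated position stabilizes even though individual stereo detections are noisy.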
