
A New Approach to 3D ICP Covariance Estimation

Posted by Martin Brossard
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





In mobile robotics, scan matching of point clouds using the Iterative Closest Point (ICP) algorithm allows estimating sensor displacements. It may prove important to assess the associated uncertainty about the obtained rigid transformation, especially for sensor fusion purposes. In this paper we propose a novel approach to the 3D uncertainty of ICP that accounts for all the sources of error listed in Censi's pioneering work [1], namely wrong convergence, underconstrained situations, and sensor noise. Our approach builds on two facts. First, the uncertainty of the ICP output fully depends on the initialization accuracy; thus, speaking of the covariance of ICP makes sense only in relation to the initialization uncertainty, which generally stems from odometry errors. We capture this using the unscented transform, which also reflects correlations between initial and final uncertainties. Second, assuming white sensor noise leads to over-optimism, as ICP is biased owing to, e.g., calibration biases, which we also account for. Our solution is tested on publicly available real data ranging from structured to unstructured environments, where our algorithm predicts results consistent with the actual uncertainty and compares favorably to previous methods.
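To make the idea concrete, here is a minimal sketch of how initialization uncertainty could be propagated through ICP with an unscented transform, in the spirit of the approach described above. It assumes a hypothetical run_icp callable (taking and returning a 6-vector pose) and treats pose perturbations as additive; a faithful implementation would perturb on SE(3) via the exponential map and would additionally model the sensor-noise and bias terms, which this sketch omits.

```python
import numpy as np

def icp_covariance_unscented(run_icp, pose_init, P_init, kappa=0.0):
    """Propagate initialization uncertainty through ICP with an unscented transform.

    run_icp   -- hypothetical callable: maps a perturbed initial pose (6-vector)
                 to the ICP estimate (6-vector); stands in for a real ICP library.
    pose_init -- initial guess [rx, ry, rz, tx, ty, tz], typically from odometry.
    P_init    -- 6x6 covariance of the initial guess (odometry uncertainty).
    """
    d = pose_init.size
    # Square root of the scaled prior covariance defines the sigma-point spread.
    L = np.linalg.cholesky((d + kappa) * P_init)

    # 2d + 1 sigma points around the initial guess.
    sigma_points = [pose_init]
    for i in range(d):
        sigma_points.append(pose_init + L[:, i])
        sigma_points.append(pose_init - L[:, i])
    weights = np.array([kappa / (d + kappa)] + [0.5 / (d + kappa)] * (2 * d))

    # Push every sigma point through ICP, then recover the mean and covariance
    # of the registration result, i.e. the ICP covariance w.r.t. initialization.
    results = np.array([run_icp(p) for p in sigma_points])
    mean = weights @ results
    centered = results - mean
    cov = (weights[:, None] * centered).T @ centered
    return mean, cov
```

Because the same sigma points carry both the initial and the resulting poses, the empirical cross-covariance between inputs and outputs can be computed from them as well, which is what makes the unscented transform suited to capturing the correlation between initial and final uncertainties.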


Read also

A new belief space planning algorithm, called covariance steering Belief RoadMap (CS-BRM), is introduced, which is a multi-query algorithm for motion planning of dynamical systems under simultaneous motion and observation uncertainties. CS-BRM extends the probabilistic roadmap (PRM) approach to belief spaces and is based on the recently developed theory of covariance steering (CS) that enables guaranteed satisfaction of terminal belief constraints in finite time. The nodes in the CS-BRM are sampled in belief space and represent distributions of the system states. A covariance steering controller steers the system from one BRM node to another, thus acting as an edge controller of the corresponding belief graph that ensures belief constraint satisfaction. After the edge controller is computed, a specific edge cost is assigned to that edge. The CS-BRM algorithm allows the sampling of non-stationary belief nodes, and thus is able to explore the velocity space and find efficient motion plans. The performance of CS-BRM is evaluated and compared to a previous belief space planning method, demonstrating the benefits of the proposed approach.
We present semi-supervised deep learning approaches for traversability estimation from fisheye images. Our method, GONet, and the proposed extensions leverage Generative Adversarial Networks (GANs) to effectively predict whether the area seen in the input image(s) is safe for a robot to traverse. These methods are trained with many positive images of traversable places, but just a small set of negative images depicting blocked and unsafe areas. This makes the proposed methods practical. Positive examples can be collected easily by simply operating a robot through traversable spaces, while obtaining negative examples is time consuming, costly, and potentially dangerous. Through extensive experiments and several demonstrations, we show that the proposed traversability estimation approaches are robust and can generalize to unseen scenarios. Further, we demonstrate that our methods are memory efficient and fast, allowing for real-time operation on a mobile robot with single or stereo fisheye cameras. As part of our contributions, we open-source two new datasets for traversability estimation. These datasets are composed of approximately 24h of videos from more than 25 indoor environments. Our methods outperform baseline approaches for traversability estimation on these new datasets.
In this self-contained chapter, we revisit a fundamental problem of multivariate statistics: estimating covariance matrices from finitely many independent samples. Based on massive Multiple-Input Multiple-Output (MIMO) systems we illustrate the necessity of leveraging structure and considering quantization of samples when estimating covariance matrices in practice. We then provide a selective survey of theoretical advances of the last decade focusing on the estimation of structured covariance matrices. This review is spiced up by some yet unpublished insights on how to benefit from combined structural constraints. Finally, we summarize the findings of our recently published preprint Covariance estimation under one-bit quantization to show how guaranteed covariance estimation is possible even under coarse quantization of the samples. (An illustrative sketch of the classical arcsine-law idea for sign-quantized samples is given after this list.)
We demonstrate practically approximation-free electrostatic calculations of micromesh detectors that can be extended to any other type of micropattern detector. Using the newly developed Boundary Element Method called the Robin Hood Method, we can easily handle objects with a huge number of boundary elements (hundreds of thousands) without any compromise in numerical accuracy. In this paper we show how such calculations can be applied to Micromegas detectors by comparing electron transparencies and gains for four different types of meshes. We demonstrate the inclusion of dielectric material by calculating the electric field around different types of dielectric spacers.
Fusing data from LiDAR and camera is conceptually attractive because of their complementary properties. For instance, camera images are higher resolution and have colors, while LiDAR data provide more accurate range measurements and have a wider Field Of View (FOV). However, the sensor fusion problem remains challenging since it is difficult to find reliable correlations between data of very different characteristics (geometry vs. texture, sparse vs. dense). This paper proposes an offline LiDAR-camera fusion method to build dense, accurate 3D models. Specifically, our method jointly solves a bundle adjustment (BA) problem and a cloud registration problem to compute camera poses and the sensor extrinsic calibration. In experiments, we show that our method can achieve an averaged accuracy of 2.7mm and resolution of 70 points per square cm by comparing to the ground truth data from a survey scanner. Furthermore, the extrinsic calibration result is discussed and shown to outperform the state-of-the-art method.
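As an aside on the one-bit covariance estimation work summarized above, the following minimal sketch illustrates the classical arcsine-law recovery of a correlation matrix from sign-quantized Gaussian samples. It is an illustration of the general principle under simplifying assumptions (zero mean, unit variances, no dithering), not the exact estimator studied in that preprint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth correlation matrix (unit diagonal: one-bit samples lose the scale).
R = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])

n = 20000
x = rng.multivariate_normal(np.zeros(3), R, size=n)  # full-precision Gaussian samples
y = np.sign(x)                                       # one-bit (sign) quantization

# Arcsine law: E[y_i y_j] = (2/pi) * arcsin(R_ij), so invert the sign-correlations.
A_hat = (y.T @ y) / n
R_hat = np.sin(0.5 * np.pi * A_hat)

print(np.round(R_hat, 3))  # close to R for large n
```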
