With the dominance of keyframe-based SLAM in robotics, the poses of the frames between keyframes have typically been sacrificed in favor of faster algorithms that enable online operation. However, such approaches can be insufficient for applications that require refined poses of all frames, not just the keyframes, which are sparse relative to the full set of input frames. This paper proposes a novel algorithm to correct the frames between keyframes after the keyframes have been updated by a back-end optimization process. The correction model is derived from conservation of the measurement constraint between landmarks and the robot pose. The proposed algorithm is designed to be easily integrated into existing keyframe-based SLAM systems while exhibiting robust and accurate performance superior to existing interpolation methods. The algorithm also requires little computation and therefore places minimal burden on the overall SLAM pipeline. We evaluate the proposed pose correction algorithm against existing interpolation methods in various vector spaces, and our method demonstrates excellent accuracy on both the KITTI and EuRoC datasets.
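To make the setting concrete, the sketch below shows the simplest baseline that such a correction refines: after the back-end updates a keyframe, each regular frame anchored to it is re-anchored through the relative transform it had before the update. The pose layout (camera-to-world 4x4 SE(3) matrices) and the function name are our own assumptions for illustration, not the paper's interface.

```python
import numpy as np

def propagate_keyframe_update(T_kf_old, T_kf_new, frame_poses):
    """Re-anchor non-keyframe poses after a keyframe is optimized.

    T_kf_old, T_kf_new : 4x4 camera-to-world poses of the keyframe
                         before and after back-end optimization.
    frame_poses        : list of 4x4 poses of the regular frames that
                         were tracked against this keyframe.
    """
    # The relative transform keyframe->frame is held fixed, so each
    # frame's world pose is left-multiplied by the keyframe's correction.
    correction = T_kf_new @ np.linalg.inv(T_kf_old)
    return [correction @ T_f for T_f in frame_poses]
```

The paper's measurement-constraint-based model replaces this rigid re-anchoring with a correction that conserves the measurement constraint between landmarks and the robot pose, which is what it compares against interpolation in various vector spaces.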
In this paper, we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater
In this paper, we present the RISE-SLAM algorithm for performing visual-inertial simultaneous localization and mapping (SLAM), while improving estimation consistency. Specifically, in order to achieve real-time operation, existing approaches often as
In object-based Simultaneous Localization and Mapping (SLAM), 6D object poses offer a compact representation of landmark geometry useful for downstream planning and manipulation tasks. However, measurement ambiguity then arises as objects may possess
Simultaneous localization and mapping (SLAM) in a real indoor environment is still a challenging task. Traditional SLAM approaches rely heavily on low-level geometric constraints like corners or lines, which may lead to tracking failure in textureless
We present a new paradigm for real-time object-oriented SLAM with a monocular camera. Contrary to previous approaches that rely on object-level models, we construct category-level models from CAD collections, which are now widely available. To allevi