
Contact Inertial Odometry: Collisions are your Friends

Added by Thomas Lew
Publication date: 2019
Language: English





Autonomous exploration of unknown environments with aerial vehicles remains a challenge, especially in perceptually degraded conditions. Dust, fog, or a lack of visual or LiDAR-based features causes severe difficulties for state estimation algorithms, whose failure can be catastrophic. In this work, we show that it is possible to navigate in such conditions without any exteroceptive sensing by exploiting collisions instead of treating them as constraints. To this end, we present a novel contact-based inertial odometry (CIO) algorithm: it uses estimated external forces to detect collisions with the environment and to generate pseudo-measurements of the robot's velocity, enabling autonomous flight. To fully exploit this method, we first model a hybrid ground and aerial vehicle that can withstand collisions at moderate speeds, and develop an external wrench estimation algorithm for it. We then present our CIO algorithm and design a reactive planner and control law that encourage exploration by bouncing off obstacles. All components of this framework are validated in hardware experiments, and we demonstrate that a quadrotor can traverse a cluttered environment using an IMU only. This work can be used on drones to recover from visual-inertial odometry failure, or on micro-drones that lack the payload capacity to carry cameras, LiDARs, or powerful computers.
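As a rough illustration of the core idea (a minimal sketch, not the paper's implementation), the Python snippet below estimates the external force from rigid-body dynamics, declares a collision when its norm exceeds a threshold, and converts the contact into a pseudo-measurement of the velocity along the contact normal. The mass, threshold, and frame conventions are placeholder assumptions.

import numpy as np

MASS = 0.5                         # vehicle mass [kg] (assumed)
G = np.array([0.0, 0.0, -9.81])    # gravity in the world frame
F_COLLISION = 4.0                  # external-force threshold [N] (assumed)

def external_force(accel_world, R_wb, thrust_body):
    # Newton's law: m*a = m*g + R*f_thrust + f_ext,
    # so f_ext = m*(a - g) - R*f_thrust.
    return MASS * (accel_world - G) - R_wb @ thrust_body

def collision_pseudo_measurement(f_ext, velocity_prior):
    # On contact, the velocity component into the obstacle drops to ~0,
    # which yields a pseudo-measurement for the inertial filter.
    if np.linalg.norm(f_ext) < F_COLLISION:
        return None                          # no collision detected
    n = f_ext / np.linalg.norm(f_ext)        # contact normal (away from surface)
    innovation = 0.0 - float(n @ velocity_prior)
    return innovation, n                     # residual and measurement direction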




Related Research

Advances in micro-electro-mechanical systems (MEMS) have made inertial measurement units (IMUs) small, cheap, energy efficient, and widely used in smartphones, robots, and drones. Exploiting inertial data for accurate and reliable navigation and localization has attracted significant research and industrial interest, as IMU measurements are completely ego-centric and largely environment-agnostic. Recent studies have shown that the notorious issue of drift can be significantly alleviated by using deep neural networks (DNNs), e.g., IONet. However, the lack of sufficient labelled data for training and testing various architectures limits the adoption of DNNs for IMU-based tasks. In this paper, we propose and release the Oxford Inertial Odometry Dataset (OxIOD), a first-of-its-kind data collection for inertial-odometry research in which every sequence has ground-truth labels. Our dataset contains 158 sequences covering more than 42 km in total distance, much larger than previous inertial datasets. Another notable feature of this dataset is its diversity, reflecting the complex motions of phone-based IMUs in everyday usage. The measurements were collected with four different attachments (handheld, in the pocket, in the handbag, and on the trolley), four motion modes (halting, walking slowly, walking normally, and running), five different users, four types of off-the-shelf consumer phones, and large-scale localization sequences in office buildings. Deep inertial tracking experiments were conducted to show the effectiveness of our dataset for training deep neural network models and to evaluate learning-based and model-based algorithms. The OxIOD Dataset is available at: http://deepio.cs.ox.ac.uk
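To illustrate the kind of model such a dataset is meant to train, here is a minimal IONet-style sketch in PyTorch: a recurrent network maps a window of IMU samples to a polar displacement increment. The layer sizes, window length, and sampling rate below are assumptions, not the published architecture.

import torch
import torch.nn as nn

class InertialOdometryNet(nn.Module):
    # Maps a window of accelerometer+gyroscope samples to (delta_l, delta_psi),
    # a step length and heading change that are chained into a trajectory.
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(imu_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, imu_window):
        # imu_window: (batch, 200, 6), i.e. 2 s at an assumed 100 Hz
        features, _ = self.lstm(imu_window)
        return self.head(features[:, -1])    # regress from the last time step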
Ego-motion estimation is a fundamental requirement for most mobile robotic applications. Through sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimates. In this paper we introduce a tightly coupled lidar-IMU fusion method. By jointly minimizing the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO) performs well with acceptable drift over long experiments, even in challenging cases where the lidar measurements are degraded. In addition, to obtain more reliable estimates of the lidar poses, a rotation-constrained refinement algorithm (LIO-mapping) is proposed to further align the lidar poses with the global map. Experimental results demonstrate that the proposed method can estimate the poses of the sensor pair at the IMU update rate with high precision, even under fast motion or with insufficient features.
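A minimal sketch of the joint minimization, assuming point-to-plane lidar residuals and preintegrated IMU factors over a sliding window; the factor interfaces and weights are placeholders, not LIO-mapping's actual formulation.

from dataclasses import dataclass
import numpy as np

@dataclass
class State:
    R: np.ndarray    # world-from-body rotation, 3x3
    p: np.ndarray    # position in the world frame, shape (3,)

def lio_cost(states, lidar_factors, imu_factors):
    # Sum of squared point-to-plane lidar residuals plus Mahalanobis
    # norms of preintegrated IMU residuals between consecutive states.
    cost = 0.0
    for i, point, n, d, w in lidar_factors:          # plane: n.x + d = 0
        p_world = states[i].R @ point + states[i].p  # scan point in world frame
        cost += w * (n @ p_world + d) ** 2
    for i, j, residual_fn, info in imu_factors:
        r = residual_fn(states[i], states[j])        # preintegration error
        cost += float(r @ info @ r)
    return cost

In a full system this cost is minimized over all window states with an iterative solver (e.g., Gauss-Newton), re-linearizing the residuals at each step.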
To achieve robust motion estimation in visually degraded environments, thermal odometry has attracted attention in the robotics community. However, most thermal odometry methods are based purely on classical feature extractors, which makes it difficult to establish robust correspondences between successive frames due to sudden photometric changes and large thermal noise. To solve this problem, we propose ThermalPoint, a lightweight feature detection network specifically tailored to producing keypoints on thermal images, providing notable anti-noise improvements over other state-of-the-art methods. We then combine ThermalPoint with a novel radiometric feature tracking method, which directly makes use of full radiometric data and establishes reliable correspondences between sequential frames. Finally, taking advantage of an optimization-based visual-inertial framework, a deep feature-based thermal-inertial odometry (TP-TIO) framework is proposed and evaluated thoroughly in various visually degraded environments. Experiments show that our method outperforms state-of-the-art visual and laser odometry methods in smoke-filled environments and achieves competitive accuracy in normal environments.
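The radiometric tracking idea can be sketched as follows: track features on the full radiometric (e.g., 14-bit) image rather than the auto-gain-controlled 8-bit stream, so that brightness constancy survives gain jumps between frames. The normalization bounds and tracker settings are assumptions, and a plain OpenCV pyramidal Lucas-Kanade tracker stands in for the paper's ThermalPoint pipeline.

import cv2
import numpy as np

def track_radiometric(prev_raw, curr_raw, prev_pts):
    # Shared normalization bounds keep the two frames photometrically
    # consistent, unlike per-frame automatic gain control.
    lo, hi = np.percentile(np.concatenate([prev_raw.ravel(), curr_raw.ravel()]), (1, 99))

    def to_u8(raw):
        img = np.clip((raw.astype(np.float32) - lo) / max(hi - lo, 1.0), 0.0, 1.0)
        return (img * 255.0).astype(np.uint8)

    pts0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(
        to_u8(prev_raw), to_u8(curr_raw), pts0, None,
        winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1                 # keep successfully tracked points
    return pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)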
Bo Xu, Peng Wang, Yijia He (2021)
Leveraging line features can improve the localization accuracy of point-based monocular Visual-Inertial Odometry (VIO) systems, as lines provide additional constraints. Moreover, in man-made environments, some straight lines are parallel to each other. In this paper, we design a VIO system based on points and lines that divides lines into structural lines (i.e., lines parallel to each other) and non-structural lines. In addition, unlike the orthonormal representation, which uses four parameters to represent a 3D line, we use only two parameters for a minimal representation of structural and non-structural lines. Furthermore, we design a line matching strategy based on sampled points to improve the efficiency and success rate of line matching. The effectiveness of our method is verified on the public EuRoC and TUM VI benchmarks and compared with other state-of-the-art algorithms.
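One plausible reading of the two-parameter encoding (an assumption for illustration, not necessarily the paper's exact parameterization): a structural line is parallel to a known dominant direction, so only its 2-DoF offset in the plane orthogonal to that direction needs to be estimated.

import numpy as np

def structural_line(a, b, d):
    # d: known dominant (e.g., Manhattan) direction; (a, b) are the only
    # estimated parameters, locating the line in the plane orthogonal to d.
    d = d / np.linalg.norm(d)
    tmp = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(d, tmp)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)                  # (e1, e2) span the orthogonal plane
    point = a * e1 + b * e2               # closest point of the line to the origin
    return point, d                       # the line is {point + t * d}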
Motion estimation by fusing data from at least a camera and an Inertial Measurement Unit (IMU) enables many applications in robotics. However, among the multitude of Visual-Inertial Odometry (VIO) methods, few efficiently estimate device motion with consistent covariance and calibrate sensor parameters online to handle data from consumer sensors. This paper addresses the gap with a Keyframe-based Structureless Filter (KSF). For efficiency, landmarks are not included in the filter's state vector. For robustness, KSF associates feature observations and manages state variables using the concept of keyframes. For flexibility, KSF supports anytime calibration of IMU systematic errors, as well as the extrinsic, intrinsic, and temporal parameters of each camera. Estimator consistency and observability of sensor parameters were analyzed by simulation. Sensitivity to design options, e.g., the feature matching method and camera count, was studied with the EuRoC benchmark. Sensor parameter estimation was evaluated on raw TUM VI sequences and smartphone data. Moreover, pose estimation accuracy was evaluated on EuRoC and TUM VI sequences against recent VIO methods. These tests confirm that KSF reliably calibrates sensor parameters when the data contain adequate motion and consistently estimates motion with accuracy rivaling recent VIO methods. Our implementation runs at 42 Hz with stereo camera images on a consumer laptop.
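The structureless idea can be sketched in the MSCKF tradition: eliminate each landmark by projecting its measurement residual onto the left nullspace of the landmark Jacobian, leaving an update that constrains only the keyframe states. This illustrates the general technique; KSF's exact formulation may differ.

import numpy as np

def structureless_update(r, H_x, H_f):
    # r: stacked reprojection residuals, shape (m,)
    # H_x: Jacobian w.r.t. the state vector, shape (m, n)
    # H_f: Jacobian w.r.t. the 3D landmark, shape (m, 3), full column rank
    Q, _ = np.linalg.qr(H_f, mode='complete')
    A = Q[:, H_f.shape[1]:]     # columns spanning the left nullspace of H_f
    r0 = A.T @ r                # landmark-free residual
    H0 = A.T @ H_x              # landmark-free Jacobian
    return r0, H0               # feed these into a standard EKF update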
