State estimation for robots navigating in GPS-denied and perceptually-degraded environments, such as underground tunnels, mines and planetary subsurface voids, remains a challenging problem in robotics. Towards this goal, we present LION (Lidar-Inertial Observability-Aware Navigator), which is part of the state estimation framework developed by Team CoSTAR for the DARPA Subterranean Challenge, where the team achieved second and first places in the Tunnel and Urban Circuits in August 2019 and February 2020, respectively. LION provides high-rate odometry estimates by fusing high-frequency inertial data from an IMU and low-rate relative pose estimates from a lidar via a fixed-lag sliding window smoother. LION does not require prior knowledge of the relative positioning between lidar and IMU, as the extrinsic calibration is estimated online. In addition, LION is able to self-assess its performance using an observability metric that evaluates whether the pose estimate is geometrically ill-constrained. Odometry and confidence estimates are used by HeRO, a supervisory algorithm that provides robust estimates by switching between different odometry sources. In this paper we benchmark the performance of LION in perceptually-degraded subterranean environments, demonstrating its high technology readiness level for deployment in the field.
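The observability self-assessment described above amounts to checking whether the lidar geometry constrains all directions of motion. A minimal sketch of such a check, assuming point-to-plane registration and illustrative names and thresholds (not the paper's actual implementation), is:

```python
# Hedged sketch of a geometric observability check in the spirit of LION's
# self-assessment. Names and the threshold are illustrative assumptions.
import numpy as np

def observability_score(normals: np.ndarray) -> float:
    """Score how well translation is constrained by one lidar scan.

    normals: (N, 3) surface normals from a point-to-plane registration.
    The translational block of the registration Hessian is sum_i n_i n_i^T;
    a small minimum eigenvalue means some direction (e.g. along a long,
    featureless tunnel) is poorly constrained.
    """
    H = normals.T @ normals                  # 3x3 information matrix
    eigvals = np.linalg.eigvalsh(H)          # ascending order
    return float(eigvals[0] / eigvals[-1])   # ratio in [0, 1]

# Example: normals dominated by tunnel walls (x, z) with almost none along
# the tunnel axis (y) -> low score, flag the pose estimate as degenerate.
rng = np.random.default_rng(0)
walls = rng.normal(scale=[1.0, 0.05, 1.0], size=(500, 3))
walls /= np.linalg.norm(walls, axis=1, keepdims=True)
if observability_score(walls) < 0.1:         # illustrative threshold
    print("pose weakly constrained; defer to another odometry source")
```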
We propose Super Odometry, a high-precision multi-modal sensor fusion framework, providing a simple but effective way to fuse multiple sensors such as LiDAR, camera, and IMU sensors and achieve robust state estimation in perceptually-degraded environments. Different from traditional sensor-fusion methods, Super Odometry employs an IMU-centric data processing pipeline, which combines the advantages of loosely coupled methods with tightly coupled methods and recovers motion in a coarse-to-fine manner. The proposed framework is composed of three parts: IMU odometry, visual-inertial odometry, and laser-inertial odometry. The visual-inertial odometry and laser-inertial odometry provide the pose prior to constrain the IMU bias and receive the motion prediction from IMU odometry. To ensure high performance in real-time, we apply a dynamic octree that only consumes 10% of the running time compared with a static KD-tree. The proposed system was deployed on drones and ground robots as part of Team Explorer's effort in the DARPA Subterranean Challenge, where the team won $1^{st}$ and $2^{nd}$ place in the Tunnel and Urban Circuits, respectively.
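The "motion prediction from IMU odometry" step boils down to dead-reckoning the IMU kinematics between keyframes. A minimal sketch, assuming simple Euler integration and illustrative variable names (the actual system uses proper preintegration with estimated biases), is:

```python
# Hedged sketch of IMU motion prediction between lidar/camera keyframes.
# Names and the gravity constant are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, -9.81])

def propagate_imu(p, v, q, accel_meas, gyro_meas, accel_bias, gyro_bias, dt):
    """One Euler integration step of the IMU kinematics.

    p, v: position and velocity in the world frame.
    q: orientation as a scipy Rotation (body -> world).
    accel_meas, gyro_meas: raw IMU samples in the body frame.
    """
    accel_world = q.apply(accel_meas - accel_bias) + GRAVITY
    p_next = p + v * dt + 0.5 * accel_world * dt**2
    v_next = v + accel_world * dt
    q_next = q * R.from_rotvec((gyro_meas - gyro_bias) * dt)
    return p_next, v_next, q_next
```

In an IMU-centric pipeline this prediction seeds the visual-inertial and laser-inertial modules, which in turn feed back pose priors that keep the integrated biases from drifting.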
In this paper we present a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing $\sim$290 million points). Using the map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the pose of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable systematic evaluation which many similar datasets have lacked. The dataset combines built environments, open spaces and vegetated areas so as to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition. The dataset is available at: ori.ox.ac.uk/datasets/newer-college-dataset
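With per-scan 6-DoF ground truth available, a typical evaluation scores an estimated trajectory by rigidly aligning it to the ground truth and computing absolute trajectory error. A minimal sketch, assuming already-associated position pairs and making no claims about the dataset's actual file format, is:

```python
# Hedged sketch: Kabsch/Umeyama-style alignment followed by translational
# ATE (RMSE). The function and variable names are assumptions.
import numpy as np

def align_and_ate(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """RMSE translational ATE after a best-fit rigid alignment.

    est_xyz, gt_xyz: (N, 3) arrays of time-associated positions.
    """
    mu_e, mu_g = est_xyz.mean(0), gt_xyz.mean(0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ S @ U.T                      # rotation mapping estimate -> ground truth
    t = mu_g - Rm @ mu_e
    aligned = est_xyz @ Rm.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))
```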
We present an algorithm to characterize the space of identifiable inertial parameters in system identification of an articulated robot. The algorithm can be applied to general open-chain kinematic trees ranging from industrial manipulators to legged robots. It is the first solution for this case that is provably correct and does not rely on symbolic techniques. The high-level operation of the algorithm is based on a key observation: Undetectable changes in inertial parameters can be interpreted as sequences of inertial transfers across the joints. Drawing on the exponential parameterization of rigid-body kinematics, undetectable inertial transfers are analyzed in terms of linear observability. This analysis can be applied recursively, yielding an overall complexity of $O(N)$ to characterize parameter identifiability for a system of N bodies. MATLAB source code for the new algorithm is provided.
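For contrast, the common numerical baseline that this $O(N)$, symbolic-free algorithm improves upon finds undetectable parameter directions from the null space of a stacked torque regressor. A minimal sketch of that baseline, where `compute_regressor` is a placeholder for whatever dynamics library supplies the regressor $Y(q,\dot q,\ddot q)$, is:

```python
# Hedged sketch of the stacked-regressor / SVD baseline (not the paper's
# recursive algorithm). `compute_regressor` is a hypothetical callback.
import numpy as np

def unidentifiable_directions(compute_regressor, n_params, n_joints,
                              n_samples=200, tol=1e-8):
    rng = np.random.default_rng(1)
    rows = []
    for _ in range(n_samples):
        q, qd, qdd = (rng.standard_normal(n_joints) for _ in range(3))
        rows.append(compute_regressor(q, qd, qdd))   # (n_joints, n_params)
    Y = np.vstack(rows)
    _, s, Vt = np.linalg.svd(Y, full_matrices=False)
    null_mask = np.concatenate([s, np.zeros(n_params - len(s))]) < tol * s[0]
    return Vt[null_mask]   # each row spans an undetectable parameter change
```

Each returned direction corresponds to a combination of inertial parameters whose change leaves the joint torques unchanged, i.e. exactly the kind of inertial transfer the paper characterizes analytically.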
Most routing algorithms for unmanned vehicles that arise in data-gathering and monitoring applications in the literature rely on Global Positioning System (GPS) information for localization. However, disruption of GPS signals, whether intentional or unintentional, can render these algorithms inapplicable. In this article, we present a novel approach to this difficulty that combines cooperative localization and routing. In particular, the article formulates a fundamental combinatorial optimization problem to plan routes for an unmanned vehicle in a GPS-restricted environment while enabling localization for the vehicle. We also develop algorithms to compute optimal paths for the vehicle using the proposed formulation. Extensive simulation results are presented to corroborate the effectiveness and performance of the proposed formulation and algorithms.
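To make the coupling between routing and localization concrete, a toy version of the problem can be stated as: choose a visiting order so the vehicle never travels more than a dead-reckoning budget without passing a landmark where it can re-localize. The brute-force sketch below only illustrates that idea under assumed names and constants; it is not the paper's formulation or algorithm.

```python
# Hedged toy illustration: route length is minimized subject to a
# dead-reckoning budget that resets near known landmarks. All names,
# the landmark radius, and the brute-force search are assumptions.
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_route(depot, targets, landmarks, budget, fix_radius=1.0):
    best = (math.inf, None)
    for order in permutations(targets):
        route = [depot, *order, depot]
        length, since_fix, feasible = 0.0, 0.0, True
        for a, b in zip(route, route[1:]):
            leg = dist(a, b)
            length += leg
            since_fix += leg
            if any(dist(b, lm) < fix_radius for lm in landmarks):
                since_fix = 0.0               # re-localization resets drift
            if since_fix > budget:
                feasible = False
                break
        if feasible and length < best[0]:
            best = (length, route)
    return best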
Continuous robot operation in extreme scenarios such as underground mines or sewers is difficult because exteroceptive sensors may fail due to fog, darkness, dirt or malfunction. To enable autonomous navigation in these kinds of situations, we have developed a proprioceptive localization method that exploits the foot contacts made by a quadruped robot to localize against a prior map of an environment, without any camera or LIDAR sensor. The proposed method enables the robot to accurately re-localize itself after making a sequence of contact events over a terrain feature. The method is based on Sequential Monte Carlo and can support both 2.5D and 3D prior map representations. We have tested the approach online and onboard the ANYmal quadruped robot in two different scenarios: the traversal of a custom-built wooden terrain course and a wall probing and following task. In both scenarios, the robot is able to effectively achieve a localization match and to execute a desired pre-planned path. The method keeps the localization error down to 10 cm on feature-rich terrain using only foot contact, kinematic and inertial sensing.
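The core of such a Sequential Monte Carlo localizer is the measurement update triggered by each foot contact. A minimal sketch, assuming a 2.5D elevation-map lookup, planar particle poses, and an illustrative Gaussian noise model (all assumptions, not the paper's exact implementation), is:

```python
# Hedged sketch of one particle-filter measurement update from a foot
# contact against a 2.5D elevation map. The map callable, state layout,
# and noise level are illustrative assumptions.
import numpy as np

def contact_update(particles, weights, contact_body, contact_height,
                   elevation_map, sigma=0.03):
    """particles: (N, 3) array of (x, y, yaw) poses;
    contact_body: (x, y) of the foot in the base frame;
    contact_height: measured z of the contact point."""
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    cx = particles[:, 0] + c * contact_body[0] - s * contact_body[1]
    cy = particles[:, 1] + s * contact_body[0] + c * contact_body[1]
    expected = elevation_map(cx, cy)          # vectorized 2.5D height lookup
    likelihood = np.exp(-0.5 * ((contact_height - expected) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()
```

Repeating this update over a sequence of contacts on distinctive terrain is what collapses the particle set onto the correct pose, which is why the approach works best on feature-rich ground.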