
Ground Encoding: Learned Factor Graph-based Models for Localizing Ground Penetrating Radar

Added by Alexander Baikovitz
Publication date: 2021
Language: English





We address the problem of robot localization using ground penetrating radar (GPR) sensors. Current approaches for localization with GPR sensors require a priori maps of the system's environment as well as access to approximate global positioning (GPS) during operation. In this paper, we propose a novel, real-time GPR-based localization system for unknown and GPS-denied environments. We model the localization problem as inference over a factor graph. Our approach combines 1D single-channel GPR measurements to form 2D image submaps. To use these GPR images in the graph, we need sensor models that can map noisy, high-dimensional image measurements into the state space. These are challenging to obtain a priori, since image generation has a complex dependency on subsurface composition and radar physics, which itself varies across sensors and with subsurface electromagnetic properties. Our key idea is to instead learn relative sensor models directly from GPR data that map non-sequential GPR image pairs to relative robot motion. These models are incorporated as factors within the factor graph, with relative motion predictions correcting for accumulated drift in the position estimates. We demonstrate our approach over datasets collected across multiple locations using a custom-designed experimental rig. We show reliable, real-time localization using only GPR and odometry measurements for varying trajectories in three distinct GPS-denied environments. For our supplementary video, see https://youtu.be/HXXgdTJzqyw.
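As a concrete illustration of the factor-graph structure the abstract describes, here is a minimal sketch using the GTSAM library. This is not the authors' code: `predict_relative_motion` is a hypothetical stand-in for the paper's learned GPR sensor model, and all noise values are placeholders rather than the paper's settings.

```python
# Minimal sketch: odometry factors between consecutive poses plus learned
# relative-motion factors between non-sequential poses (from GPR image pairs).
import numpy as np
import gtsam

def build_graph(odometry, gpr_pairs, predict_relative_motion):
    """odometry: list of (dx, dy, dtheta) between consecutive poses.
    gpr_pairs: list of (i, j, submap_i, submap_j) for non-sequential
    GPR image pairs whose relative motion the learned model predicts."""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()

    # Anchor the first pose at the origin.
    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))
    graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
    initial.insert(0, gtsam.Pose2(0, 0, 0))

    # Odometry factors between consecutive poses (drift accumulates here).
    odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))
    pose = gtsam.Pose2(0, 0, 0)
    for i, (dx, dy, dth) in enumerate(odometry, start=1):
        delta = gtsam.Pose2(dx, dy, dth)
        graph.add(gtsam.BetweenFactorPose2(i - 1, i, delta, odom_noise))
        pose = pose.compose(delta)
        initial.insert(i, pose)

    # Learned GPR factors: relative motion predicted from image pairs,
    # added between non-sequential poses to correct accumulated drift.
    gpr_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
    for i, j, img_i, img_j in gpr_pairs:
        dx, dy, dth = predict_relative_motion(img_i, img_j)  # learned model
        graph.add(gtsam.BetweenFactorPose2(i, j, gtsam.Pose2(dx, dy, dth),
                                           gpr_noise))

    return gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```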



Related research

There has been exciting recent progress in using radar as a sensor for robot navigation due to its increased robustness to varying environmental conditions. However, within these different radar perception systems, ground penetrating radar (GPR) remains under-explored. By measuring structures beneath the ground, GPR can provide stable features that are less variant to ambient weather, scene, and lighting changes, making it a compelling choice for long-term spatio-temporal mapping. In this work, we present the CMU-GPR dataset, an open-source ground penetrating radar dataset for research in subsurface-aided perception for robot navigation. In total, the dataset contains 15 distinct trajectory sequences in 3 GPS-denied, indoor environments. Measurements from a GPR, wheel encoder, RGB camera, and inertial measurement unit were collected with ground-truth positions from a robotic total station. In addition to the dataset, we also provide utility code to convert raw GPR data into processed images. This paper describes our recording platform, the data format, utility scripts, and proposed methods for using this data.
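To make the GPR-to-image conversion concrete, here is a minimal illustration (not the CMU-GPR utility scripts) of how 1D traces are commonly stacked into a 2D B-scan image; the background-removal and gain steps are standard preprocessing assumptions, not the dataset's documented pipeline.

```python
# Stack 1D GPR traces into a 2D B-scan: subtract the mean trace to suppress
# background, then apply a time-varying gain to compensate for attenuation.
import numpy as np

def traces_to_image(traces, gain_power=1.5):
    """traces: array of shape (num_traces, samples_per_trace)."""
    img = np.asarray(traces, dtype=float).T        # rows = depth, cols = position
    img -= img.mean(axis=1, keepdims=True)         # mean-trace background removal
    depth = np.arange(img.shape[0], dtype=float)
    img *= (1.0 + depth[:, None]) ** gain_power    # boost deeper, weaker returns
    img /= np.abs(img).max() + 1e-12               # normalize to [-1, 1]
    return img
```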
Multistatic ground-penetrating radar (GPR) signals can be imaged tomographically to produce three-dimensional distributions of image intensities. In the absence of objects of interest, these intensities can be considered to be estimates of clutter. These clutter intensities spatially vary over several orders of magnitude, and vary across different arrays, which makes direct comparison of these raw intensities difficult. However, by gathering statistics on these intensities and their spatial variation, a variety of metrics can be determined. In this study, the clutter distribution is found to fit better to a two-parameter Weibull distribution than Gaussian or lognormal distributions. Based upon the spatial variation of the two Weibull parameters, scale and shape, more information may be gleaned from these data. How well the GPR array is illuminating various parts of the ground, in depth and cross-track, may be determined from the spatial variation of the Weibull scale parameter, which may in turn be used to estimate an effective attenuation coefficient in the soil. The transition in depth from clutter-limited to noise-limited conditions (which is one possible definition of GPR penetration depth) can be estimated from the spatial variation of the Weibull shape parameter. Finally, the underlying clutter distributions also provide an opportunity to standardize image intensities to determine when a statistically significant deviation from background (clutter) has occurred, which is convenient for developing buried-threat detection algorithms that need to be robust across multiple different arrays.
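A hedged sketch of the distribution comparison this abstract describes: fit a two-parameter Weibull (location fixed at 0) to clutter intensities and compare its log-likelihood against Gaussian and lognormal fits. The synthetic data here is a placeholder, not the study's measurements.

```python
import numpy as np
from scipy import stats

intensities = np.random.weibull(1.4, size=5000) * 3.0  # placeholder clutter data

# Two-parameter Weibull: fix location at 0, estimate shape and scale.
shape, _, scale = stats.weibull_min.fit(intensities, floc=0)
ll_weibull = stats.weibull_min.logpdf(intensities, shape, 0, scale).sum()
ll_normal = stats.norm.logpdf(intensities, *stats.norm.fit(intensities)).sum()
s, loc, sc = stats.lognorm.fit(intensities, floc=0)
ll_lognorm = stats.lognorm.logpdf(intensities, s, loc, sc).sum()

print(f"Weibull shape={shape:.2f} scale={scale:.2f}")
print(f"log-likelihoods: weibull={ll_weibull:.1f}, "
      f"normal={ll_normal:.1f}, lognormal={ll_lognorm:.1f}")
```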
The three electromagnetic properties appearing in Maxwell's equations are dielectric permittivity, electrical conductivity, and magnetic permeability. The study of point diffractors in a homogeneous, isotropic, linear medium suggests the use of logarithms to describe the variations of electromagnetic properties in the earth. A small anomaly in electrical properties (permittivity and conductivity) responds to an incident electromagnetic field as an electric dipole, whereas a small anomaly in the magnetic property responds as a magnetic dipole. Neither property variation can be neglected without justification. Considering radiation patterns of the different diffracting points, diagnostic interpretation of electric and magnetic variations is theoretically feasible but is not an easy task using ground penetrating radar. However, using an effective electromagnetic impedance and an effective electromagnetic velocity to describe a medium, the radiation patterns of a small anomaly behave completely differently with source-receiver offset. Zero-offset reflection data give a direct image of impedance variations while large-offset reflection data contain information on velocity variations.
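As an aside not drawn from the paper itself: in the low-loss limit, the effective impedance and velocity it refers to reduce to the standard expressions below, and the normal-incidence reflection coefficient depends only on the impedance contrast, which is why zero-offset data image impedance variations.

```latex
% Low-loss (small conductivity) approximations for a medium with
% permittivity \varepsilon and permeability \mu.
\[
  Z = \sqrt{\frac{\mu}{\varepsilon}}, \qquad
  v = \frac{1}{\sqrt{\mu \varepsilon}}, \qquad
  R = \frac{Z_2 - Z_1}{Z_2 + Z_1} .
\]
```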
We are interested in the problem of estimating object states from touch during manipulation under occlusions. In this work, we address the problem of estimating object poses from touch during planar pushing. Vision-based tactile sensors provide rich, local image measurements at the point of contact. A single such measurement, however, contains limited information and multiple measurements are needed to infer latent object state. We solve this inference problem using a factor graph. In order to incorporate tactile measurements in the graph, we need local observation models that can map high-dimensional tactile images onto a low-dimensional state space. Prior work has used low-dimensional force measurements or engineered functions to interpret tactile measurements. These methods, however, can be brittle and difficult to scale across objects and sensors. Our key insight is to directly learn tactile observation models that predict the relative pose of the sensor given a pair of tactile images. These relative poses can then be incorporated as factors within a factor graph. We propose a two-stage approach: first we learn local tactile observation models supervised with ground truth data, and then integrate these models along with physics and geometric factors within a factor graph optimizer. We demonstrate reliable object tracking using only tactile feedback for 150 real-world planar pushing sequences with varying trajectories across three object shapes. Supplementary video: https://youtu.be/y1kBfSmi8w0
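The first stage of this two-stage approach, a learned observation model that regresses relative sensor pose from an image pair, could look like the following PyTorch sketch. This is not the authors' architecture; the layer sizes and the (dx, dy, dtheta) output parameterization are illustrative assumptions.

```python
# A shared CNN encoder embeds each tactile image; an MLP head regresses the
# relative sensor pose between the two contacts.
import torch
import torch.nn as nn

class RelativePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both images
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 3),                 # (dx, dy, dtheta)
        )

    def forward(self, img_a, img_b):
        z = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return self.head(z)

# Trained with ground-truth relative poses; at inference, each prediction
# becomes a between-pose factor in the graph.
model = RelativePoseNet()
pred = model(torch.zeros(8, 1, 64, 64), torch.zeros(8, 1, 64, 64))  # (8, 3)
```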
Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote sensing modality for buried target detection (e.g., landmines). In this context, raw FLGPR data is beamformed into images and then computerized algorithms are applied to automatically detect subsurface buried targets. Most existing algorithms are supervised, meaning they are trained to discriminate between labeled target and non-target imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, thus far it is unclear which are the most effective. The first goal of this work is to provide a comprehensive comparison of detection performance using existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing each feature is also considered. The second goal of this work is to investigate two modern feature learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that the new feature learning approaches outperform existing methods. Results also show that fusion between existing features and new features yields little additional performance improvement.
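A hedged sketch of the bag-of-visual-words pipeline named above (the paper's exact descriptors and settings are not reproduced here): cluster local patch descriptors into a visual vocabulary, then represent each image as a normalized histogram of its patches' nearest visual words, which then feeds a classifier.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, size=8, stride=4):
    """Flattened local patches as simple stand-in descriptors."""
    h, w = image.shape
    return np.array([image[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

rng = np.random.default_rng(0)
train_images = rng.normal(size=(20, 64, 64))        # placeholder FLGPR imagery

# Build the visual vocabulary by clustering patch descriptors.
vocab = KMeans(n_clusters=32, n_init=10, random_state=0)
vocab.fit(np.vstack([extract_patches(im) for im in train_images]))

def bovw_feature(image):
    words = vocab.predict(extract_patches(image))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()                        # L1-normalized histogram

feature = bovw_feature(train_images[0])             # input to a classifier
```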