Substantial research has been devoted to the development of algorithms that automate buried threat detection (BTD) with ground penetrating radar (GPR) data, resulting in a large number of proposed algorithms. One popular algorithm for GPR-based BTD, originally applied by Torrione et al. (2012), is based on the Histogram of Oriented Gradients (HOG) feature. In a recent large-scale comparison among five veteran institutions, a modified version of HOG, referred to here as gprHOG, performed poorly compared to other modern algorithms. In this paper, we provide experimental evidence demonstrating that the modifications to HOG that comprise gprHOG result in a substantially better-performing algorithm. These results, in conjunction with the large-scale algorithm comparison, suggest that HOG is not competitive with modern GPR-based BTD algorithms. Given HOG's popularity, these results raise questions about the findings of many existing studies, and suggest that gprHOG (and especially HOG) should be employed with caution in future work.
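As a rough illustration of the underlying descriptor (not the gprHOG variant, nor Torrione et al.'s exact implementation), a minimal histogram-of-oriented-gradients computation over non-overlapping image cells might look like the following; the function name, cell size, and bin count are illustrative choices, and real HOG implementations add block normalization and interpolation:

```python
import numpy as np

def hog_cell_histograms(patch, n_bins=9, cell=8):
    """Minimal HOG sketch: orientation histograms of gradient magnitude
    over non-overlapping cells, concatenated and L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = patch.shape
    hists = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            hists.append(hist)
    feat = np.concatenate(hists)
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat
```

For a 32x32 patch with 8x8 cells and 9 bins, this yields a 144-dimensional feature vector, which would then feed a downstream classifier.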
Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote sensing modality for buried target detection (e.g., landmines). In this context, raw FLGPR data is beamformed into images, and then computerized algorithms are applied to automatically detect subsurface buried targets. Most existing algorithms are supervised, meaning they are trained to discriminate between labeled target and non-target imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, it is thus far unclear which are the most effective. The first goal of this work is to provide a comprehensive comparison of detection performance using existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing each feature is also considered. The second goal of this work is to investigate two modern feature learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that the new feature learning approaches outperform existing methods. Results also show that fusion between existing features and new features yields little additional performance improvement.
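The bag-of-visual-words encoding mentioned above can be sketched as follows, assuming a codebook of cluster centers has already been learned offline (e.g., by k-means over training descriptors); `bovw_histogram` is a hypothetical helper name, not a function from the paper:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words encoding: assign each local descriptor to its
    nearest codeword and return the normalized count histogram."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = d2.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what a supervised classifier would consume; the Fisher vector replaces the hard counts with gradients of a Gaussian-mixture log-likelihood, giving a richer encoding.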
The three electromagnetic properties appearing in Maxwell's equations are dielectric permittivity, electrical conductivity, and magnetic permeability. The study of point diffractors in a homogeneous, isotropic, linear medium suggests the use of logarithms to describe the variations of electromagnetic properties in the earth. A small anomaly in electrical properties (permittivity and conductivity) responds to an incident electromagnetic field as an electric dipole, whereas a small anomaly in the magnetic property responds as a magnetic dipole. Neither property variation can be neglected without justification. Considering the radiation patterns of the different diffracting points, diagnostic interpretation of electric and magnetic variations is theoretically feasible but is not an easy task using Ground Penetrating Radar. However, when a medium is described by an effective electromagnetic impedance and an effective electromagnetic velocity, the radiation patterns of a small anomaly behave completely differently with source-receiver offset. Zero-offset reflection data give a direct image of impedance variations, while large-offset reflection data contain information on velocity variations.
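For the lossless (conductivity-free) special case, which is a simplification of the abstract's more general analysis, the effective quantities reduce to the familiar plane-wave expressions, and the logarithmic parameterization becomes transparent:

```latex
Z = \sqrt{\frac{\mu}{\varepsilon}}, \qquad
v = \frac{1}{\sqrt{\mu\varepsilon}},
```

so that small perturbations satisfy

```latex
\delta \ln Z = \tfrac{1}{2}\left(\delta \ln \mu - \delta \ln \varepsilon\right),
\qquad
\delta \ln v = -\tfrac{1}{2}\left(\delta \ln \mu + \delta \ln \varepsilon\right).
```

In log variables the impedance and velocity perturbations are simple linear combinations of the property perturbations, which is one way to motivate the abstract's suggestion that logarithms are the natural description of property variations.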
Autonomous radar has been an integral part of advanced driver assistance systems due to its robustness to adverse weather and various lighting conditions. Conventional automotive radars use digital signal processing (DSP) algorithms to process raw data into sparse radar pins that do not provide information regarding the size and orientation of the objects. In this paper, we propose a deep-learning-based algorithm for radar object detection. The algorithm takes in radar data in its raw tensor representation and places probabilistic oriented bounding boxes around the detected objects in bird's-eye-view space. We created a new multimodal dataset with 102,544 frames of raw radar and synchronized LiDAR data. To reduce human annotation effort, we developed a scalable pipeline to automatically annotate ground truth using LiDAR as reference. Based on this dataset, we developed a vehicle detection pipeline using raw radar data as the only input. Our best-performing radar detection model achieves 77.28% AP under an oriented IoU threshold of 0.3. To the best of our knowledge, this is the first attempt to investigate object detection with raw radar data for conventional corner automotive radars.
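Evaluating detections against an oriented-IoU threshold requires polygon intersection between rotated boxes (e.g., via Sutherland-Hodgman clipping); as a simplified sketch of the metric itself, the axis-aligned special case is:

```python
def iou_axis_aligned(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
    The oriented case in the abstract additionally requires clipping
    one rotated rectangle against the other; this sketch covers only
    the axis-aligned special case."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0
```

A detection then counts as a true positive when its IoU with some unmatched ground-truth box exceeds the chosen threshold (0.3 in the abstract's evaluation).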
Multistatic ground-penetrating radar (GPR) signals can be imaged tomographically to produce three-dimensional distributions of image intensities. In the absence of objects of interest, these intensities can be considered to be estimates of clutter. These clutter intensities vary spatially over several orders of magnitude, and vary across different arrays, which makes direct comparison of the raw intensities difficult. However, by gathering statistics on these intensities and their spatial variation, a variety of metrics can be determined. In this study, the clutter distribution is found to fit better to a two-parameter Weibull distribution than to Gaussian or lognormal distributions. Based upon the spatial variation of the two Weibull parameters, scale and shape, more information may be gleaned from these data. How well the GPR array is illuminating various parts of the ground, in depth and cross-track, may be determined from the spatial variation of the Weibull scale parameter, which may in turn be used to estimate an effective attenuation coefficient in the soil. The transition in depth from clutter-limited to noise-limited conditions (which is one possible definition of GPR penetration depth) can be estimated from the spatial variation of the Weibull shape parameter. Finally, the underlying clutter distributions also provide an opportunity to standardize image intensities and thereby determine when a statistically significant deviation from background (clutter) has occurred, which is convenient for developing buried threat detection algorithms that must be robust across multiple different arrays.
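The model comparison described above can be sketched with SciPy; the intensities below are synthetic stand-ins for real beamformed clutter values, and fixing the location parameter at zero yields the two-parameter Weibull fit:

```python
from scipy import stats

# Synthetic stand-in for beamformed clutter intensities; real values would
# come from the tomographic GPR image volume described in the abstract.
clutter = stats.weibull_min.rvs(c=1.4, scale=2.0, size=5000, random_state=0)

# Two-parameter Weibull fit: fixing the location at zero leaves only the
# shape (c) and scale parameters to be estimated.
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(clutter, floc=0)

# Compare candidate clutter models by total log-likelihood (higher is better).
ll_weibull = stats.weibull_min.logpdf(clutter, c_hat, loc_hat, scale_hat).sum()
mu, sigma = stats.norm.fit(clutter)
ll_gauss = stats.norm.logpdf(clutter, mu, sigma).sum()
```

Repeating the fit in local spatial windows would give the depth and cross-track maps of the shape and scale parameters that the abstract analyzes.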
We introduce the histogram of oriented gradients (HOG), a tool developed for machine vision that we propose as a new metric for the systematic characterization of observations of atomic and molecular gas and the study of molecular cloud formation models. In essence, the HOG technique takes as input extended spectral-line observations from two tracers and provides an estimate of their spatial correlation across velocity channels. We characterize HOG using synthetic observations of HI and $^{13}$CO(J=1-0) emission from numerical simulations of MHD turbulence leading to the formation of molecular gas after the collision of two atomic clouds. We find a significant spatial correlation between the two tracers in velocity channels where $v_{HI}\approx v_{^{13}CO}$, independent of the orientation of the collision with respect to the line of sight. We use HOG to investigate the spatial correlation of the HI emission, from the THOR survey, and the $^{13}$CO(J=1-0) emission, from the GRS, toward the portion of the Galactic plane $33.75^{\circ} < l < 35.25^{\circ}$ and $|b| < 1.25^{\circ}$. We find a significant spatial correlation between the tracers in extended portions of the studied region. Although some of the regions with high spatial correlation are associated with HI self-absorption features, suggesting that the correlation is produced by the cold atomic gas, it is not exclusive to this kind of region. The HOG results also indicate significant differences between individual regions: some show spatial correlation in channels around $v_{HI}\approx v_{^{13}CO}$, while others present this correlation in velocity channels separated by a few km/s. We associate these velocity offsets with the effect of feedback and with the presence of physical conditions that are not included in the atomic-cloud-collision simulations, such as more general magnetic field configurations, shear, and global gas infall.
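In outline, the HOG correlation between a pair of channel maps reduces to statistics of the relative gradient orientation. The sketch below computes a projected Rayleigh-type statistic from those relative angles; it omits the derivative smoothing, masking, and looping over velocity-channel pairs of the full method, and `hog_correlation` is an illustrative name rather than the published interface:

```python
import numpy as np

def hog_correlation(map_a, map_b):
    """Relative-orientation statistic between two intensity maps.
    phi is the angle between the local gradients; cos(2*phi) treats
    parallel and anti-parallel gradients alike (unsigned orientation).
    Values of V much greater than zero indicate aligned gradient
    orientations, i.e., spatial correlation between the maps."""
    ga_y, ga_x = np.gradient(map_a.astype(float))
    gb_y, gb_x = np.gradient(map_b.astype(float))
    # Signed angle between the two gradient vectors at each pixel.
    phi = np.arctan2(ga_x * gb_y - ga_y * gb_x, ga_x * gb_x + ga_y * gb_y)
    cos2 = np.cos(2.0 * phi)
    n = cos2.size
    return cos2.sum() / np.sqrt(n / 2.0)
```

Applied to every (HI channel, $^{13}$CO channel) pair, this yields the correlation-versus-velocity plane from which the velocity offsets discussed above are read off.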