
Mitigation of Through-Wall Distortions of Frontal Radar Images using Denoising Autoencoders

Posted by: Shelly Vishwakarma
Publication date: 2019
Research field: Electronic engineering
Language: English





Radar images of humans and other concealed objects are considerably distorted by attenuation, refraction, and multipath clutter in indoor through-wall environments. While several methods have been proposed for removing target-independent static and dynamic clutter, mitigating target-dependent clutter remains challenging, especially when exact propagation characteristics or an analytical framework are unavailable. In this work, we focus on mitigating wall effects using a machine learning based solution -- denoising autoencoders -- that requires no prior information about the wall parameters or room geometry. Instead, the method relies on a large volume of training radar images gathered in through-wall conditions and the corresponding clean images captured in line-of-sight conditions. During the training phase, the autoencoder learns to denoise the corrupted through-wall images so that they resemble the free-space images. We validate the performance of the proposed solution for both static and dynamic human subjects. The frontal radar images of static targets are obtained by processing wideband planar array measurement data with two-dimensional array and range processing. The frontal radar images of dynamic targets are simulated using narrowband planar array data processed with two-dimensional array and Doppler processing. In both the simulation and measurement processes, we incorporate considerable diversity in the target and propagation conditions. Our experimental results, from both simulation and measurement data, show that the denoised images are considerably more similar to the free-space images than the original through-wall images.
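The paired-image training idea can be summarized in a short sketch. Below is a minimal PyTorch illustration: a small convolutional autoencoder is regressed toward the clean line-of-sight image given the corresponding through-wall image. The architecture, the 64x64 single-channel image size, and the tensor names are illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class RadarDAE(nn.Module):
    """Convolutional denoising autoencoder: maps a corrupted
    through-wall radar image to an estimate of its free-space twin."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(             # 1x64x64 -> 32x16x16
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(             # 32x16x16 -> 1x64x64
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),                         # assumes images scaled to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = RadarDAE()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a batch of (through-wall, free-space) image pairs.
# `tw_batch` and `fs_batch` are placeholders for the paired dataset.
tw_batch = torch.rand(8, 1, 64, 64)   # corrupted through-wall images
fs_batch = torch.rand(8, 1, 64, 64)   # clean line-of-sight targets
optim.zero_grad()
loss = loss_fn(model(tw_batch), fs_batch)
loss.backward()
optim.step()
```

The pixel-wise MSE objective is the simplest choice for this kind of paired denoising; it drives the network to reproduce the free-space image given only the through-wall observation.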




Read also

Narrowband and broadband indoor radar images significantly deteriorate in the presence of target-dependent and target-independent static and dynamic clutter arising from walls. A stacked and sparse denoising autoencoder (StackedSDAE) is proposed for mitigating wall clutter in indoor radar images. The algorithm relies on the availability of clean images and corresponding noisy images during training and requires no additional information regarding the wall characteristics. The algorithm is evaluated on simulated Doppler-time spectrograms and high range resolution profiles generated for diverse radar frequencies and wall characteristics in around-the-corner radar (ACR) scenarios. Additional experiments are performed on range-enhanced frontal images generated from measurements gathered with a wideband RF imaging sensor. The results show that the StackedSDAE successfully reconstructs images that closely resemble those that would be obtained in free-space conditions. Further, incorporating sparsity and depth in the hidden-layer representations within the autoencoder makes the algorithm more robust to low signal-to-noise ratio (SNR) and to label mismatch between clean and corrupt data during training than the conventional single-layer DAE. For example, the denoised ACR signatures show a structural similarity above 0.75 to clean free-space images at an SNR of -10 dB and a label mismatch error of 50%.
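As a rough illustration of the sparsity-plus-depth ingredient, the hedged PyTorch sketch below trains one fully connected DAE layer whose hidden code carries an L1 penalty. The L1 term, layer sizes, and variable names are assumptions standing in for whatever sparsity formulation and dimensions the paper actually uses; stacked training would repeat this layer-wise and feed each layer's code to the next.

```python
import torch
import torch.nn as nn

class SparseDAE(nn.Module):
    """One fully connected DAE layer with a sparsity-regularized code;
    sizes are illustrative, not taken from the paper."""
    def __init__(self, in_dim=1024, code_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.dec = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

model = SparseDAE()
mse = nn.MSELoss()
lam = 1e-4                                # sparsity weight (assumed)

noisy = torch.rand(16, 1024)              # flattened corrupted signature
clean = torch.rand(16, 1024)              # matching free-space signature
recon, code = model(noisy)
# Reconstruction loss plus an L1 penalty that keeps the hidden code sparse.
loss = mse(recon, clean) + lam * code.abs().mean()
loss.backward()
```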
In this paper, we used a deep learning approach to perform two-dimensional, multi-target locating in through-the-wall radar under conditions where the wall is modeled as a complex electromagnetic medium. We assumed 5 models for the wall and 3 modes for the number of targets. The target modes are single, double, and triple. The wall scenarios are a homogeneous wall, a wall with an air gap, an inhomogeneous wall, an anisotropic wall, and an inhomogeneous-anisotropic wall. For this purpose, we used a deep neural network algorithm. Using the Python FDTD library, we generated a dataset and then modeled it with deep learning. Assuming the wall to be a complex electromagnetic medium, we achieved 97.7% accuracy for single-target 2D locating, and for two and three targets we achieved accuracies of 94.1% and 62.2%, respectively.
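The reported accuracies suggest the network learns target positions from simulated field data. Purely as a hedged illustration (the paper's exact input encoding, architecture, and output formulation are not given here), the sketch below regresses a single target's (x, y) position from a flattened FDTD-style trace; all sizes and names are hypothetical, and a grid-cell classification head would be a natural alternative.

```python
import torch
import torch.nn as nn

# Minimal locating network: a fully connected model mapping a flattened
# received-field trace to a 2-D target position. Input length and layer
# widths are assumptions; in the paper the inputs come from FDTD
# simulations of the various wall models.
locator = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2),                 # predicted (x, y) behind the wall
)

traces = torch.rand(32, 512)          # placeholder simulated traces
positions = torch.rand(32, 2)         # ground-truth target coordinates
loss = nn.MSELoss()(locator(traces), positions)
loss.backward()
```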
Jiabao Yu, Aiqun Hu, Fen Zhou (2019)
Radio Frequency Fingerprinting (RFF) is one of the promising passive authentication approaches for improving the security of the Internet of Things (IoT). However, with the proliferation of low-power IoT devices, it becomes imperative to improve the identification accuracy in low-SNR scenarios. To address this problem, this paper proposes a general Denoising AutoEncoder (DAE)-based model for deep learning RFF techniques. In addition, a partially stacking method is designed to appropriately combine the semi-steady and steady-state RFFs of ZigBee devices. The proposed Partially Stacking-based Convolutional DAE (PSC-DAE) aims at reconstructing a high-SNR signal as well as performing device identification. Experimental results demonstrate that, compared to a Convolutional Neural Network (CNN), PSC-DAE improves the identification accuracy by 14% to 23.5% at low SNRs (from -10 dB to 5 dB) under Additive White Gaussian Noise (AWGN) corrupted channels. Even at SNR = 10 dB, the identification accuracy is as high as 97.5%.
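The AWGN corruption used for training and evaluation at a target SNR is straightforward to reproduce. Below is a small NumPy sketch (the function name and the toy capture are illustrative, not from the paper) that scales complex Gaussian noise to hit a requested SNR in dB.

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Corrupt a complex baseband capture with AWGN at a target SNR."""
    sig_power = np.mean(np.abs(signal) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    # Split noise power equally between the I and Q components.
    noise = np.sqrt(noise_power / 2) * (
        np.random.randn(*signal.shape) + 1j * np.random.randn(*signal.shape)
    )
    return signal + noise

# Example: a placeholder complex tone corrupted at -10 dB SNR, the
# hardest operating point reported in the paper.
capture = np.exp(1j * 2 * np.pi * 0.01 * np.arange(1024))
noisy = add_awgn(capture, snr_db=-10.0)
```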
Bird's-Eye-View (BEV) maps have emerged as one of the most powerful representations for scene understanding due to their ability to provide rich spatial context while being easy to interpret and process. However, generating BEV maps requires complex multi-stage paradigms that encapsulate a series of distinct tasks such as depth estimation, ground plane estimation, and semantic segmentation. These sub-tasks are often learned in a disjoint manner, which prevents the model from holistic reasoning and results in erroneous BEV maps. Moreover, existing algorithms only predict the semantics in the BEV space, which limits their use in applications where the notion of object instances is critical. In this work, we present the first end-to-end learning approach for directly predicting dense panoptic segmentation maps in the BEV, given a single monocular image in the frontal view (FV). Our architecture follows the top-down paradigm and incorporates a novel dense transformer module consisting of two distinct transformers that learn to independently map vertical and flat regions in the input image from the FV to the BEV. Additionally, we derive a mathematical formulation for the sensitivity of the FV-BEV transformation, which allows us to intelligently weight pixels in the BEV space to account for the varying descriptiveness across the FV image. Extensive evaluations on the KITTI-360 and nuScenes datasets demonstrate that our approach exceeds the state-of-the-art in the PQ metric by 3.61 pp and 4.93 pp, respectively.
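To make the two-transformer idea concrete, here is a heavily simplified, hedged PyTorch sketch: frontal-view features are split by a vertical/flat mask and routed through two independent transformer encoders whose outputs are fused into a BEV feature grid. The mask, shapes, and fusion by addition are assumptions for illustration; the paper's dense transformer module is considerably more sophisticated.

```python
import torch
import torch.nn as nn

d_model, hw = 64, 32 * 32   # feature width and flattened spatial size (assumed)

# Two distinct encoders: one for vertical regions (e.g. cars, buildings),
# one for flat regions (e.g. road surface).
vert_tf = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
flat_tf = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)

fv_feats = torch.rand(1, hw, d_model)               # flattened FV features
vert_mask = (torch.rand(1, hw, 1) > 0.5).float()    # toy vertical/flat split

# Route each region type through its own transformer, then fuse into a
# single BEV-bound feature map (fusion by addition is an assumption).
bev_feats = vert_tf(fv_feats * vert_mask) + flat_tf(fv_feats * (1 - vert_mask))
```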
In this work, we propose the use of radar with advanced deep segmentation models to identify open space in parking scenarios. A publicly available dataset of radar observations called SCORP was collected. Deep models are evaluated with various radar input representations. Our proposed approach achieves low memory usage and real-time processing speeds, and is thus very well suited for embedded deployment.
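As a rough, hedged illustration of per-cell open-space labeling on a radar grid (the input representation, network shape, and names below are assumptions, not the SCORP baselines), a tiny fully convolutional classifier suffices:

```python
import torch
import torch.nn as nn

# Small fully convolutional network that labels each grid cell as
# open space or not. A 1x128x128 occupancy-style input is assumed;
# SCORP offers several alternative radar encodings.
seg_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),              # per-cell logits: open vs. not open
)

grid = torch.rand(1, 1, 128, 128)     # placeholder radar input frame
logits = seg_net(grid)                # (1, 2, 128, 128) class scores
mask = logits.argmax(dim=1)           # predicted open-space mask
```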