
Radio Frequency Fingerprint Identification Based on Denoising Autoencoders

Published by: Jiabao Yu
Publication date: 2019
Research language: English





Radio Frequency Fingerprinting (RFF) is one of the promising passive authentication approaches for improving the security of the Internet of Things (IoT). However, with the proliferation of low-power IoT devices, it becomes imperative to improve identification accuracy in low-SNR scenarios. To address this problem, this paper proposes a general Denoising AutoEncoder (DAE)-based model for deep-learning RFF techniques. In addition, a partially stacking method is designed to appropriately combine the semi-steady and steady-state RFFs of ZigBee devices. The proposed Partially Stacking-based Convolutional DAE (PSC-DAE) aims at both reconstructing a high-SNR signal and identifying the device. Experimental results demonstrate that, compared to a Convolutional Neural Network (CNN), PSC-DAE improves the identification accuracy by 14% to 23.5% at low SNRs (from -10 dB to 5 dB) under Additive White Gaussian Noise (AWGN)-corrupted channels. Even at SNR = 10 dB, the identification accuracy is as high as 97.5%.
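As a rough illustration of the idea above, the sketch below shows a 1-D convolutional denoising autoencoder whose bottleneck feeds both a signal-reconstruction decoder and a device classifier, trained with a joint reconstruction plus cross-entropy loss. The layer sizes, loss weighting, and class/model names are illustrative assumptions, not the paper's exact PSC-DAE (which additionally applies partial stacking of semi-steady and steady-state RFFs).

```python
# Minimal sketch (not the paper's exact PSC-DAE): a 1-D convolutional denoising
# autoencoder whose bottleneck also feeds a device-identification classifier.
import torch
import torch.nn as nn

class ConvDAEClassifier(nn.Module):
    def __init__(self, n_devices: int):
        super().__init__()
        # Encoder: compress the noisy I/Q signal (2 channels: I and Q).
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        # Decoder: reconstruct a clean (high-SNR) version of the signal.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=7, stride=2, padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 2, kernel_size=7, stride=2, padding=3, output_padding=1),
        )
        # Classifier head on the bottleneck features for device identification.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_devices)
        )

    def forward(self, noisy):
        z = self.encoder(noisy)
        return self.decoder(z), self.classifier(z)

# Joint loss: reconstruction (denoising) + cross-entropy (identification).
model = ConvDAEClassifier(n_devices=10)
noisy = torch.randn(8, 2, 1024)    # batch of noisy I/Q segments (assumed length)
clean = torch.randn(8, 2, 1024)    # corresponding high-SNR segments
labels = torch.randint(0, 10, (8,))
recon, logits = model(noisy)
loss = nn.functional.mse_loss(recon, clean) + nn.functional.cross_entropy(logits, labels)
loss.backward()
```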


Read also

Radio frequency fingerprint identification (RFFI) is an emerging device authentication technique that relies on the intrinsic hardware characteristics of wireless devices. We designed an RFFI scheme for Long Range (LoRa) systems based on spectrograms and a convolutional neural network (CNN). Specifically, we used spectrograms to represent the fine-grained time-frequency characteristics of LoRa signals. In addition, we revealed that the instantaneous carrier frequency offset (CFO) drifts, which results in misclassification and significantly compromises system stability; we demonstrated that CFO compensation is an effective mitigation. Finally, we designed a hybrid classifier that can adjust CNN outputs with the estimated CFO. The mean value of the CFO remains relatively stable, so it can be used to rule out CNN predictions whose estimated CFO falls out of range. We performed experiments in real wireless environments using 20 LoRa devices under test (DUTs) and a Universal Software Radio Peripheral (USRP) N210 receiver. Compared with IQ-based and FFT-based RFFI schemes, our spectrogram-based scheme achieves the best classification accuracy, i.e., 97.61% for 20 LoRa DUTs.
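A minimal sketch of the two ingredients described above: converting IQ samples into a spectrogram for the CNN, and gating CNN predictions with the estimated CFO. The sampling rate, the per-device CFO table, the tolerance, and the function names are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of the spectrogram input and the CFO-gated ("hybrid") decision rule.
import numpy as np
from scipy.signal import spectrogram

def iq_to_spectrogram(iq: np.ndarray, fs: float = 1e6, nperseg: int = 64):
    """Time-frequency representation of a complex baseband LoRa capture."""
    f, t, Sxx = spectrogram(iq, fs=fs, nperseg=nperseg, noverlap=nperseg // 2,
                            return_onesided=False)
    return 10 * np.log10(np.abs(Sxx) + 1e-12)   # log-magnitude image for the CNN

def hybrid_decision(cnn_probs: np.ndarray, cfo_est: float,
                    cfo_table: dict, tol_hz: float = 200.0) -> int:
    """Pick the most probable device whose enrolled mean CFO is consistent
    with the CFO estimated from the received packet."""
    for dev in np.argsort(cnn_probs)[::-1]:      # best CNN score first
        if abs(cfo_est - cfo_table[int(dev)]) <= tol_hz:
            return int(dev)
    return int(np.argmax(cnn_probs))             # fall back to the raw CNN output
```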
Radio frequency fingerprint identification (RFFI) is a promising device authentication technique based on transmitter hardware impairments. In this paper, we propose a scalable and robust RFFI framework achieved by a deep-learning-powered radio frequency fingerprint (RFF) extractor. Specifically, we leverage deep metric learning to train an RFF extractor, which has excellent generalization ability and can extract RFFs from previously unseen devices. Any device can be enrolled via the pre-trained RFF extractor, and the RFF database can be maintained efficiently, allowing devices to join and leave. The wireless channel impacts RFF extraction; this is tackled by exploiting a channel-independent feature and data augmentation. We carried out extensive experimental evaluation involving 60 commercial off-the-shelf LoRa devices and a USRP N210 software-defined radio platform. The results successfully demonstrate that our framework achieves excellent generalization for device classification and rogue device detection as well as effective channel mitigation.
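To make the enrolment and identification flow concrete, here is a hedged sketch built around a metric-learning RFF extractor: embeddings of a few packets are averaged per device at enrolment and matched by cosine similarity at identification, with a threshold used for rogue-device detection. The extractor interface, database layout, and threshold value are assumptions, not the paper's exact framework.

```python
# Sketch of enrolment/identification around a metric-learning RFF extractor.
import torch
import torch.nn.functional as F

triplet_loss = torch.nn.TripletMarginLoss(margin=1.0)  # used to train the extractor
rff_db = {}                                            # device id -> mean RFF embedding

def enroll(extractor, device_id, packets):
    """Average the embeddings of a few packets from a (possibly unseen) device."""
    with torch.no_grad():
        emb = F.normalize(extractor(packets), dim=1)   # unit-norm RFFs, shape (N, D)
    rff_db[device_id] = emb.mean(dim=0)

def identify(extractor, packet, threshold=0.7):
    """Nearest-neighbour match; below the threshold, flag a rogue device."""
    with torch.no_grad():
        emb = F.normalize(extractor(packet.unsqueeze(0)), dim=1)[0]
    scores = {d: float(torch.dot(emb, ref)) for d, ref in rff_db.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "rogue"
```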
Narrowband and broadband indoor radar images significantly deteriorate in the presence of target-dependent and target-independent static and dynamic clutter arising from walls. A stacked and sparse denoising autoencoder (StackedSDAE) is proposed for mitigating wall clutter in indoor radar images. The algorithm relies on the availability of clean images and corresponding noisy images during training and requires no additional information regarding the wall characteristics. The algorithm is evaluated on simulated Doppler-time spectrograms and high range resolution profiles generated for diverse radar frequencies and wall characteristics in around-the-corner radar (ACR) scenarios. Additional experiments are performed on range-enhanced frontal images generated from measurements gathered with a wideband RF imaging sensor. The results show that the StackedSDAE successfully reconstructs images that closely resemble those that would be obtained in free-space conditions. Further, the incorporation of sparsity and depth in the hidden-layer representations within the autoencoder makes the algorithm more robust to low signal-to-noise ratio (SNR) and to label mismatch between clean and corrupt data during training than the conventional single-layer DAE. For example, the denoised ACR signatures show a structural similarity above 0.75 to clean free-space images at an SNR of -10 dB and a label mismatch error of 50%.
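A minimal sketch, assuming fully connected layers and an L1 penalty on the hidden activations, of how a stacked sparse denoising autoencoder of this kind might be set up; the layer widths, image size, and penalty weight are illustrative, not the StackedSDAE used in the paper.

```python
# Sketch of a stacked denoising autoencoder with a sparsity penalty.
import torch
import torch.nn as nn

class StackedSparseDAE(nn.Module):
    def __init__(self, n_pixels: int, hidden=(512, 128)):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(n_pixels, hidden[0]), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(hidden[0], hidden[1]), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
                                 nn.Linear(hidden[0], n_pixels))

    def forward(self, noisy_image):
        h1 = self.enc1(noisy_image)
        h2 = self.enc2(h1)               # deep, sparse hidden representation
        return self.dec(h2), h2

model = StackedSparseDAE(n_pixels=64 * 64)          # assumed 64x64 radar images
noisy, clean = torch.rand(4, 64 * 64), torch.rand(4, 64 * 64)
recon, hidden = model(noisy)
loss = nn.functional.mse_loss(recon, clean) + 1e-4 * hidden.abs().mean()  # L1 sparsity
loss.backward()
```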
Radar images of humans and other concealed objects are considerably distorted by attenuation, refraction, and multipath clutter in indoor through-wall environments. While several methods have been proposed for removing target-independent static and dynamic clutter, considerable challenges remain in mitigating target-dependent clutter, especially when knowledge of the exact propagation characteristics or an analytical framework is unavailable. In this work we focus on mitigating wall effects using a machine-learning-based solution, denoising autoencoders, that does not require prior information about the wall parameters or room geometry. Instead, the method relies on the availability of a large volume of training radar images gathered in through-wall conditions and the corresponding clean images captured in line-of-sight conditions. During the training phase, the autoencoder learns how to denoise the corrupted through-wall images so that they resemble the free-space images. We have validated the performance of the proposed solution for both static and dynamic human subjects. The frontal radar images of static targets are obtained by processing wideband planar array measurement data with two-dimensional array and range processing. The frontal radar images of dynamic targets are simulated using narrowband planar array data processed with two-dimensional array and Doppler processing. In both simulation and measurement processes, we incorporate considerable diversity in the target and propagation conditions. Our experimental results, from both simulation and measurement data, show that the denoised images are considerably more similar to the free-space images than the original through-wall images are.
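The paired-training idea described above can be sketched as a simple regression loop in which each through-wall image is mapped onto its line-of-sight counterpart; the train_denoiser helper, the data iterator, and the hyperparameters below are assumptions, not the authors' training code.

```python
# Sketch of paired training: corrupted through-wall images are regressed onto
# their clean free-space counterparts with a mean-squared-error objective.
import torch

def train_denoiser(autoencoder, pairs, epochs=10, lr=1e-3):
    """pairs yields (through_wall_image, free_space_image) tensor batches."""
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    for _ in range(epochs):
        for through_wall, free_space in pairs:
            recon = autoencoder(through_wall)                      # denoised estimate
            loss = torch.nn.functional.mse_loss(recon, free_space)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return autoencoder
```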
This paper presents a new solution for reconstructing missing data in power system measurements. An Enhanced Denoising Autoencoder (EDAE) is proposed to reconstruct the missing data through input vector space reconstruction based on the correlation of neighboring values and Long Short-Term Memory (LSTM) networks. The proposed LSTM-EDAE is able to remove noise, extract the principal features of the dataset, and reconstruct the missing information for new inputs. The paper shows that utilizing neighbor correlation performs better in missing-data reconstruction. Trained with LSTM networks, the EDAE is more effective in coping with big data in power systems and obtains better performance than the neural network in a conventional Denoising Autoencoder. A random data sequence and simulated Phasor Measurement Unit (PMU) data from a power system are used to verify the effectiveness of the proposed LSTM-EDAE.
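A minimal sketch of an LSTM-based autoencoder trained to fill gaps in a measurement sequence, in the spirit of the LSTM-EDAE described above; the dimensions, the zero-masking of missing samples, and the toy signal are assumptions rather than the paper's exact architecture.

```python
# Sketch of an LSTM encoder-decoder that reconstructs missing samples
# in a measurement sequence (missing values are zero-masked in the input).
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        enc_out, _ = self.encoder(x)       # x: (batch, time, features)
        dec_out, _ = self.decoder(enc_out)
        return self.out(dec_out)

model = LSTMAutoencoder()
complete = torch.sin(torch.linspace(0, 6.28, 100)).view(1, 100, 1)  # toy PMU-like signal
corrupted = complete.clone()
corrupted[:, 40:50, :] = 0.0               # simulate a burst of missing data
loss = nn.functional.mse_loss(model(corrupted), complete)
loss.backward()
```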
