
Penetrating RF Fingerprinting-based Authentication with a Generative Adversarial Attack

Posted by Samurdhi Karunaratne
Publication date: 2020
Paper language: English





Physical layer authentication relies on detecting unique imperfections in the signals transmitted by radio devices to isolate their fingerprints. Recently, deep learning-based authenticators have increasingly been proposed to classify devices using these fingerprints, as they achieve higher accuracies than traditional approaches. However, it has been shown in other domains that adding carefully crafted perturbations to legitimate inputs can fool such classifiers, which can undermine the security provided by the authenticator. Unlike adversarial attacks applied in other domains, however, an adversary in a wireless setting has no control over the propagation environment. Therefore, to investigate the severity of this type of attack in wireless communications, we consider an unauthorized transmitter attempting to have its signals classified as authorized by a deep learning-based authenticator. We demonstrate a reinforcement learning-based attack in which the impersonator, using only the authenticator's binary authentication decision, distorts its signals in order to penetrate the system. Extensive simulations and experiments on a software-defined radio testbed indicate that, under appropriate channel conditions and bounded by a maximum distortion level, it is possible to reliably fool the authenticator with a success rate of more than 90%.
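The black-box loop described above can be pictured with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: the `authenticate` function is a stand-in for the victim system's binary decision, the L2 distortion bound and all constants are assumptions, and a simple evolution-strategies update stands in for whichever reinforcement learning method the authors actually use to learn from 1-bit rewards.

```python
# Hypothetical sketch of a black-box impersonation loop: the attacker
# perturbs its IQ samples and learns only from the authenticator's
# accept/reject bit. All names and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 256                 # complex baseband samples per transmission
MAX_DISTORTION = 0.1    # assumed L2 bound on the perturbation

TARGET = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # dummy "fingerprint"

def authenticate(signal):
    """Stand-in for the deep-learning authenticator's binary decision."""
    return np.real(np.vdot(TARGET, signal)) > 25.0

def project(p):
    """Clip the perturbation to the maximum allowed distortion."""
    norm = np.linalg.norm(p)
    return p if norm <= MAX_DISTORTION else p * (MAX_DISTORTION / norm)

# Evolution-strategies policy driven only by 1-bit rewards.
mean = np.zeros(2 * N)          # real/imag parts of the learned perturbation
sigma, lr, pop = 0.05, 0.2, 32
base = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # impersonator signal

for step in range(200):
    noise = rng.standard_normal((pop, 2 * N))
    rewards = np.empty(pop)
    for i in range(pop):
        cand = mean + sigma * noise[i]
        pert = project(cand[:N] + 1j * cand[N:])
        rewards[i] = float(authenticate(base + pert))   # 1 = accepted
    if rewards.std() > 0:                               # normalized ES step
        mean += lr / (pop * sigma) * noise.T @ ((rewards - rewards.mean()) / rewards.std())
```

The key design point the sketch captures is that no gradient of the authenticator is needed: the attacker only queries the decision, which matches the paper's threat model of an adversary with binary feedback and a bounded distortion budget.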




Read also

Guanlin Li, Guowen Xu, Han Qiu (2021)
This paper presents a novel fingerprinting scheme for the Intellectual Property (IP) protection of Generative Adversarial Networks (GANs). Prior solutions for classification models adopt adversarial examples as the fingerprints, which can raise stealthiness and robustness problems when they are applied to GAN models. Our scheme constructs a composite deep learning model from the target GAN and a classifier. We then generate stealthy fingerprint samples from this composite model and register them to the classifier for effective ownership verification. This scheme inspires three concrete methodologies to practically protect modern GAN models. Theoretical analysis proves that these methods can satisfy the different security requirements necessary for IP protection. We also conduct extensive experiments to show that our solutions outperform existing strategies in terms of stealthiness, functionality preservation, and unremovability.
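A minimal sketch may help picture the composite-model construction. The PyTorch snippet below is an illustrative assumption of how a generator could be chained into a verification classifier so that registered latent codes act as fingerprints; the modules, dimensions, and labels are hypothetical, not the authors' architecture.

```python
# Illustrative sketch (assumption): chain the protected GAN's generator
# into a classifier so that registered latent codes map to a known
# verification label, enabling ownership checks.
import torch
import torch.nn as nn

class CompositeModel(nn.Module):
    def __init__(self, generator: nn.Module, classifier: nn.Module):
        super().__init__()
        self.generator = generator      # the GAN to be protected
        self.classifier = classifier    # used for ownership verification

    def forward(self, z):
        return self.classifier(self.generator(z))

# Toy stand-ins for the real models (assumed shapes).
generator = nn.Sequential(nn.Linear(64, 128), nn.Tanh())
classifier = nn.Sequential(nn.Linear(128, 2))
composite = CompositeModel(generator, classifier)

# A "fingerprint" latent code registered to a target verification label.
z_fp = torch.randn(1, 64)
target = torch.tensor([1])
logits = composite(z_fp)
print(torch.argmax(logits, dim=1) == target)  # ownership check passes if True
```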
Previous studies have demonstrated that commonly studied (vanilla) touch-based continuous authentication systems (V-TCAS) are susceptible to population attacks. This paper proposes a novel Generative Adversarial Network assisted TCAS (G-TCAS) framework, which shows greater resilience to the population attack. The G-TCAS framework was tested on a dataset of 117 users who interacted with a smartphone and tablet pair. On average, the increase in the false accept rate (FAR) for V-TCAS was much higher (22%) than for G-TCAS (13%) on the smartphone. Likewise, the increase in the FAR for V-TCAS was 25% compared to G-TCAS (6%) on the tablet.
Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that uses deep learning models to generate lucrative revenue, Intellectual Property Rights (IPR) have become a major concern, because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third party. To the best of our knowledge, one of the most prominent deep learning models, the Generative Adversarial Network (GAN), which has been widely used to create photorealistic images, is totally unprotected despite the existence of pioneering IPR protection methodology for Convolutional Neural Networks (CNNs). This paper therefore presents a complete protection framework in both black-box and white-box settings to enforce IPR protection on GANs. Empirically, we show that the proposed method does not compromise the original GAN's performance (i.e., image generation, image super-resolution, style transfer), while at the same time it is able to withstand both removal and ambiguity attacks against the embedded watermarks.
RF devices can be identified by unique imperfections embedded in the signals they transmit, called RF fingerprints. The closed-set classification of such devices, where the identification must be made among an authorized set of transmitters, has been well explored. However, the much more difficult problem of open-set classification, where the classifier needs to reject unauthorized transmitters while recognizing authorized ones, has only recently been visited. So far, efforts at open-set classification have largely relied on signal samples captured from a known set of unauthorized transmitters to help the classifier learn unauthorized transmitter fingerprints. Since acquiring new transmitters to use as known transmitters is highly expensive, we propose to use generative deep learning methods to emulate unauthorized signal samples for the augmentation of training datasets. We develop two different data augmentation techniques: one that exploits a limited number of known unauthorized transmitters, and one that does not require any unauthorized transmitters at all. Experiments conducted on a dataset captured from a WiFi testbed indicate that data augmentation allows for significant increases in open-set classification accuracy, especially when the authorized set is small.
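As a rough illustration of this augmentation idea, the hedged PyTorch sketch below adds generator-emulated signals to the training set under an extra "unauthorized" label. The shapes, the stand-in generator, and the placeholder data are all assumptions for illustration, not the paper's pipeline.

```python
# Sketch (assumption): augment an open-set training set with synthetic
# "unauthorized" samples so the classifier learns an explicit reject class.
import torch
import torch.nn as nn

NUM_AUTHORIZED = 5            # authorized transmitter classes (assumed)
REJECT = NUM_AUTHORIZED       # extra label for "unauthorized"

# Stand-in generator emulating unauthorized RF fingerprints.
generator = nn.Sequential(nn.Linear(32, 256), nn.Tanh())

# Real captures (placeholder random tensors) labeled 0..4.
x_auth = torch.randn(500, 256)
y_auth = torch.randint(0, NUM_AUTHORIZED, (500,))

# Emulated unauthorized samples all share the rejection label.
with torch.no_grad():
    x_fake = generator(torch.randn(200, 32))
y_fake = torch.full((200,), REJECT, dtype=torch.long)

# Augmented training set: the classifier maps anything resembling the
# generated samples to the reject class instead of an authorized one.
x_train = torch.cat([x_auth, x_fake])
y_train = torch.cat([y_auth, y_fake])
classifier = nn.Linear(256, NUM_AUTHORIZED + 1)
loss = nn.functional.cross_entropy(classifier(x_train), y_train)
```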
Shaohao Lu, Yuqiao Xian, Ke Yan (2021)
Deep Neural Networks are vulnerable to adversarial examples (Figure 1): DNN-based systems can be made to collapse by adding inconspicuous perturbations to the input images. Most existing works on adversarial attack are gradient-based and suffer from latency inefficiencies and heavy GPU memory loads. Generative-based adversarial attacks can get rid of this limitation, and some related works propose approaches based on GANs. However, owing to the difficulty of training a GAN to convergence, the resulting adversarial examples have either poor attack ability or poor visual quality. In this work, we find that the discriminator may not be necessary for generative-based adversarial attacks, and propose the Symmetric Saliency-based Auto-Encoder (SSAE) to generate the perturbations, composed of a saliency-map module and an angle-norm disentanglement-of-features module. The advantage of our proposed method is that it does not depend on a discriminator, and it uses the generated saliency map to pay more attention to label-relevant regions. Extensive experiments across various tasks, datasets, and models demonstrate that the adversarial examples generated by SSAE not only make widely used models collapse, but also achieve good visual quality. The code is available at https://github.com/BravoLu/SSAE.
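A heavily simplified sketch of a discriminator-free, saliency-guided perturbation generator follows. It is an assumption-laden illustration of the general idea only, not the authors' SSAE: the angle-norm disentanglement loss is omitted, and all modules, shapes, and the simple misclassification objective are hypothetical.

```python
# Sketch (assumption): an auto-encoder emits a bounded perturbation and a
# saliency map that focuses the attack on label-relevant regions; no
# discriminator is involved in training.
import torch
import torch.nn as nn

class SaliencyPerturber(nn.Module):
    def __init__(self, dim=784):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.pert_head = nn.Linear(128, dim)        # perturbation branch
        self.saliency_head = nn.Linear(128, dim)    # saliency-map branch

    def forward(self, x):
        h = self.encoder(x)
        pert = torch.tanh(self.pert_head(h))        # bounded perturbation
        sal = torch.sigmoid(self.saliency_head(h))  # per-feature importance
        return x + 0.1 * sal * pert                 # attack focused by saliency

victim = nn.Linear(784, 10)                   # frozen stand-in target model
for p in victim.parameters():
    p.requires_grad_(False)

attacker = SaliencyPerturber()
x = torch.rand(8, 784)
y = victim(x).argmax(dim=1)                   # victim's clean predictions

# Training signal: push the victim's output away from the original label
# (the angle/norm feature terms of the real SSAE are omitted here).
adv = attacker(x)
loss = -nn.functional.cross_entropy(victim(adv), y)
loss.backward()                               # gradients reach the attacker only
```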
