Penetrating RF Fingerprinting-based Authentication with a Generative Adversarial Attack


Abstract

Physical layer authentication relies on detecting unique imperfections in the signals transmitted by radio devices in order to isolate each device's fingerprint. Recently, deep learning-based authenticators have increasingly been proposed to classify devices using these fingerprints, as they achieve higher accuracy than traditional approaches. However, it has been shown in other domains that adding carefully crafted perturbations to legitimate inputs can fool such classifiers, which can undermine the security provided by the authenticator. Unlike adversarial attacks in other domains, a wireless adversary has no control over the propagation environment. Therefore, to investigate the severity of this type of attack in wireless communications, we consider an unauthorized transmitter attempting to have its signals classified as authorized by a deep learning-based authenticator. We demonstrate a reinforcement learning-based attack in which the impersonator, using only the authenticator's binary authentication decision, distorts its signals in order to penetrate the system. Extensive simulations and experiments on a software-defined radio testbed show that, under appropriate channel conditions and within a bounded maximum distortion level, the authenticator can be fooled reliably, with success rates exceeding 90%.
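To make the attack setting concrete, the following is a minimal, self-contained sketch of a black-box impersonation loop that uses only a binary accept/reject decision and a bounded distortion budget. Everything here is hypothetical: the `authenticator`, the hidden `FINGERPRINT`, the toy signal dimension, and the simple accept-guided random search (a stand-in for the paper's reinforcement learning agent) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 4              # toy signal length (illustrative, not from the paper)
MAX_DISTORTION = 2.0 # bound on the perturbation's L2 norm

# Hypothetical black-box authenticator standing in for the deep learning
# model: it accepts a signal only if the deviation from the clean waveform
# resembles a hidden authorized-device fingerprint.
FINGERPRINT = np.array([0.3, -0.3, 0.3, -0.3])

def authenticator(signal, clean):
    """Exposes only a binary accept/reject decision, as in the paper."""
    return bool(np.linalg.norm((signal - clean) - FINGERPRINT) < 0.8)

def clip_norm(delta, bound):
    """Project the perturbation back inside the distortion budget."""
    n = np.linalg.norm(delta)
    return delta if n <= bound else delta * (bound / n)

def attack(clean, trials=5000, sigma=0.5):
    """Search for an accepted perturbation using only binary feedback.

    A simplified stand-in for the RL-based impersonator: explore randomly,
    and concentrate the search around the last accepted perturbation.
    """
    best = None
    for _ in range(trials):
        center = best if best is not None else np.zeros(DIM)
        delta = clip_norm(center + sigma * rng.normal(size=DIM),
                          MAX_DISTORTION)
        if authenticator(clean + delta, clean):
            best, sigma = delta, sigma * 0.9  # exploit: shrink the search
    return best

clean = rng.normal(size=DIM)
delta = attack(clean)
```

On this toy problem the loop typically finds an accepted perturbation within the distortion budget; the real attack must additionally cope with an uncontrolled propagation channel between the impersonator and the authenticator, which is exactly the difficulty the paper's reinforcement learning formulation addresses.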
