
On the benefits of robust models in modulation recognition

Posted by Javier Maroto
Publication date: 2021
Paper language: English





Given the rapid changes in telecommunication systems and their growing dependence on artificial intelligence, it is increasingly important to have models that can perform well under different, possibly adverse, conditions. Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many communication tasks. However, in other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations: imperceptible, crafted noise that, when added to the data, fools the model into misclassification. This calls into question the security of DNNs in communication tasks, and in particular in modulation recognition. We propose a novel framework to test the robustness of current state-of-the-art models, in which the adversarial perturbation strength depends on the signal strength and is measured with the signal-to-perturbation ratio (SPR). We show that current state-of-the-art models are susceptible to these perturbations. In contrast to research on image classification, modulation recognition gives us easily accessible insight into the usefulness of the features learned by DNNs by looking at the constellation space. When analyzing these vulnerable models, we found that adversarial perturbations do not shift the symbols towards the nearest classes in constellation space. This shows that DNNs base their decisions not on signal statistics that are important for the Bayes-optimal modulation recognition model, but on spurious correlations in the training data. Our feature analysis and proposed framework can help in finding better models for communication systems.
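The framework ties the perturbation budget to the signal power through the SPR. A minimal sketch of that rescaling, assuming SPR is defined in dB as 10·log10(P_signal / P_perturbation) (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def scale_to_spr(signal, perturbation, spr_db):
    """Rescale a perturbation so the signal-to-perturbation ratio,
    SPR = 10 * log10(P_signal / P_perturbation) in dB, equals spr_db.
    Powers are taken as mean squared magnitude over the samples."""
    p_signal = np.mean(np.abs(signal) ** 2)
    p_pert = np.mean(np.abs(perturbation) ** 2)
    target_p = p_signal / (10 ** (spr_db / 10))
    return perturbation * np.sqrt(target_p / p_pert)
```

A higher SPR means a relatively weaker perturbation, so sweeping SPR downward probes models under progressively stronger attacks.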


Read also

In communication systems, many tasks, like modulation recognition, rely on Deep Neural Network (DNN) models. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions about security, but also about general trust in model predictions. We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition (AMC) models. We show that current state-of-the-art models benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We use adversarial perturbations to visualize the features learned, and we find that in robust models the signal symbols are shifted towards the nearest classes in constellation space, as in maximum-likelihood methods. This confirms that robust models are not only more secure, but also more interpretable, building their decisions on signal statistics that are relevant to modulation recognition.
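The fine-tuning loop described above can be sketched with a toy differentiable classifier standing in for the AMC model; the FGSM-style inner attack, the logistic-regression stand-in, and every name here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fgsm_perturb(grad, eps):
    """FGSM-style step: move each input in the sign of its loss gradient."""
    return eps * np.sign(grad)

def adversarial_train_step(w, x, y, eps, lr):
    """One adversarial-training step on a binary logistic-regression
    stand-in for the AMC model:
      1. compute the input gradient of the loss at (x, y),
      2. craft an FGSM perturbation within the eps budget,
      3. take a weight-gradient step on the perturbed batch."""
    p = 1.0 / (1.0 + np.exp(-x @ w))          # forward pass (sigmoid scores)
    grad_x = np.outer(p - y, w)               # d(BCE loss)/d(input)
    x_adv = x + fgsm_perturb(grad_x, eps)     # adversarial batch
    p_adv = 1.0 / (1.0 + np.exp(-x_adv @ w))  # re-forward on perturbed batch
    grad_w = x_adv.T @ (p_adv - y) / len(y)   # d(BCE loss)/d(weights)
    return w - lr * grad_w
```

Replacing the stand-in with a convolutional AMC model and the FGSM step with a stronger attack (e.g. PGD) gives the usual adversarial-training recipe.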
Adversarially robust models have been shown to learn more robust and interpretable features than standard trained models. As shown in [Tsipras et al., 2018], such robust models inherit useful interpretable properties: the gradient aligns perceptually well with images, and adding a large targeted adversarial perturbation leads to an image resembling the target class. We perform experiments to show that interpretable and perceptually aligned gradients are present even in models that do not show high robustness to adversarial attacks. Specifically, we perform adversarial training with attacks under different max-perturbation bounds. Adversarial training with a low max-perturbation bound results in models that have interpretable features with only a slight drop in performance on clean samples. In this paper, we leverage models with interpretable, perceptually aligned features and show that adversarial training with a low max-perturbation bound can improve the performance of models on zero-shot and weakly supervised localization tasks.
We consider a wireless communication system that consists of a transmitter, a receiver, and an adversary. The transmitter transmits signals with different modulation types, while the receiver classifies its received signals to modulation types using a deep learning-based classifier. In the meantime, the adversary makes over-the-air transmissions that are received superimposed with the transmitter's signals to fool the classifier at the receiver into making errors. While this evasion attack has received growing interest recently, the channel effects from the adversary to the receiver have so far been ignored, so that the previous attack mechanisms cannot be applied under realistic channel effects. In this paper, we present how to launch a realistic evasion attack by considering channels from the adversary to the receiver. Our results show that modulation classification is vulnerable to an adversarial attack over a wireless channel that is modeled as Rayleigh fading with path loss and shadowing. We present various adversarial attacks with respect to the availability of information about the channel, the transmitter input, and the classifier architecture. First, we present two types of adversarial attacks, namely a targeted attack (with minimum power) and a non-targeted attack, which aim to change the classification to a target label or to any label other than the true label, respectively. Both are white-box attacks that are transmitter input-specific and use channel information. Then we introduce an algorithm to generate adversarial attacks using limited channel information, where the adversary only knows the channel distribution. Finally, we present a black-box universal adversarial perturbation (UAP) attack, where the adversary has limited knowledge about both the channel and the transmitter input.
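The key modeling point above is that the adversary's perturbation reaches the receiver through its own channel, not verbatim. A minimal sketch of that receiver-side superposition, assuming a single-tap flat Rayleigh fade plus an illustrative path-loss factor (simplified relative to the paper's Rayleigh-with-path-loss-and-shadowing model; names are hypothetical):

```python
import numpy as np

def received_with_attack(signal, perturbation, rng, path_loss_db=0.0):
    """Receiver-side view: the transmitter's signal plus the adversary's
    perturbation, where only the perturbation passes through the
    adversary-to-receiver channel. The channel is a single complex
    Rayleigh tap with unit average power, scaled by a path-loss factor."""
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    gain = 10 ** (-path_loss_db / 20)
    return signal + gain * h * perturbation
```

Because `h` is random and unknown in the black-box setting, a perturbation crafted without channel knowledge arrives rotated and scaled, which is exactly why the paper designs channel-aware attacks.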
Modulation classification, recognized as the intermediate step between signal detection and demodulation, is widely deployed in several modern wireless communication systems. Although many approaches have been studied in the last decades for identifying the modulation format of an incoming signal, they often reveal the obstacle of learning radio characteristics for most traditional machine learning algorithms. To overcome this drawback, we propose an accurate modulation classification method that exploits deep learning and is compatible with constellation diagrams. In particular, a convolutional neural network is developed to proficiently learn the most relevant radio characteristics of a gray-scale constellation image. The deep network is specified by multiple processing blocks, where several grouped and asymmetric convolutional layers in each block are organized in a flow-in-flow structure for feature enrichment. These blocks are connected via skip-connections to prevent the vanishing gradient problem while effectively preserving the information identity throughout the network. In intensive simulations on a constellation image dataset of eight digital modulations, the proposed deep network achieves a remarkable classification accuracy of approximately 87% at 0 dB signal-to-noise ratio (SNR) under a multipath Rayleigh fading channel, and further outperforms some state-of-the-art deep models for constellation-based modulation classification.
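The gray-scale constellation image such a network consumes can be built from complex IQ samples with a 2-D histogram; the bin count, extent, and normalization below are illustrative choices, not the paper's exact preprocessing:

```python
import numpy as np

def constellation_image(iq, bins=32, extent=2.0):
    """Map complex IQ samples to a gray-scale constellation image by
    2-D histogramming the in-phase (real) and quadrature (imag) parts
    over a fixed square region, then normalizing to [0, 1]."""
    hist, _, _ = np.histogram2d(
        iq.real, iq.imag,
        bins=bins, range=[[-extent, extent], [-extent, extent]],
    )
    return hist / hist.max() if hist.max() > 0 else hist
```

For a clean QPSK burst this produces exactly four bright bins, one per constellation point; noise and fading smear each point into a cloud that the CNN then learns to discriminate.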
Modulation Classification (MC) refers to the problem of classifying the modulation class of a wireless signal. In the wireless communications pipeline, MC is the first operation performed on the received signal and is critical for reliable decoding. This paper considers the problem of secure modulation classification, where a transmitter (Alice) wants to maximize MC accuracy at a legitimate receiver (Bob) while minimizing MC accuracy at an eavesdropper (Eve). The contribution of this work is to design novel adversarial learning techniques for secure MC. In particular, we present adversarial filtering based algorithms for secure MC, in which Alice uses a carefully designed adversarial filter to mask the transmitted signal, maximizing MC accuracy at Bob while minimizing MC accuracy at Eve. We present two filtering based algorithms, namely a gradient ascent filter (GAF) and a fast gradient filter method (FGFM), with varying levels of complexity. Our proposed adversarial filtering based approaches significantly outperform additive adversarial perturbations (used in the traditional ML community and in other prior works on secure MC) and also have several other desirable properties. In particular, the GAF and FGFM algorithms are a) computationally efficient (allowing fast decoding at Bob), b) power-efficient (not requiring excessive transmit power at Alice), and c) SNR-efficient (i.e., they perform well even at low SNR values at Bob).
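The filtering idea, masking the transmitted signal with a short FIR filter rather than adding noise, can be sketched as follows; the sign-step tap update mirrors the fast-gradient flavor of FGFM, and every interface here is an assumption, not the paper's code:

```python
import numpy as np

def apply_adversarial_filter(signal, taps):
    """Mask the transmitted signal by convolving it with a short FIR
    filter; mode='same' keeps the transmitted length unchanged, and the
    identity filter [1.0] leaves the signal untouched."""
    return np.convolve(signal, taps, mode="same")

def fgfm_step(taps, grad_taps, eps):
    """One fast-gradient-style update of the filter taps: a sign step
    scaled by eps, analogous to FGSM but in tap space. grad_taps would
    be the gradient of the attack objective w.r.t. the taps."""
    return taps + eps * np.sign(grad_taps)
```

Starting from the identity filter and iterating `fgfm_step` on an objective that trades off Bob's and Eve's losses yields small tap deviations, which is what keeps the scheme power-efficient compared with additive perturbations.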
