
SafeAMC: Adversarial training for robust modulation recognition models

Posted by: Javier Maroto
Publication date: 2021
Research language: English





In communication systems, many tasks, such as modulation recognition, rely on Deep Neural Network (DNN) models. However, these models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification. This raises questions about the security of, and more generally the trust in, model predictions. We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition (AMC) models. We show that current state-of-the-art models benefit from adversarial training, which mitigates the robustness issues for some families of modulations. We use adversarial perturbations to visualize the learned features, and we find that in robust models the signal symbols are shifted towards the nearest classes in constellation space, as in maximum-likelihood methods. This confirms that robust models are not only more secure but also more interpretable, basing their decisions on signal statistics that are relevant to modulation recognition.
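To make the fine-tuning step concrete, here is a minimal sketch of PGD-style adversarial fine-tuning for a modulation classifier, assuming a PyTorch model over I/Q tensors; the model, data loader, and all hyperparameters are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step, iters):
    """Iterative gradient-sign steps on the loss, projected into an L-inf ball."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step in the gradient-sign direction, then project back into the ball.
        x_adv = (x + (x_adv + step * grad.sign() - x).clamp(-eps, eps)).detach()
    return x_adv

def adversarial_finetune(model, loader, eps=0.01, step=0.0025, iters=7, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps, step, iters)  # craft perturbed batch
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()        # fine-tune on perturbed data
        opt.step()
```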




Read also

Given the rapid changes in telecommunication systems and their higher dependence on artificial intelligence, it is increasingly important to have models that can perform well under different, possibly adverse, conditions. Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many communication tasks. However, in other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations: imperceptible, crafted noise that, when added to the data, fools the model into misclassification. This puts into question the security of DNNs in communication tasks, and in particular in modulation recognition. We propose a novel framework to test the robustness of current state-of-the-art models, in which the adversarial perturbation strength is dependent on the signal strength and measured with the signal-to-perturbation ratio (SPR). We show that current state-of-the-art models are susceptible to these perturbations. In contrast to current research on image classification, modulation recognition gives us easily accessible insights into the usefulness of the features learned by DNNs by looking at the constellation space. When analyzing these vulnerable models, we found that adversarial perturbations do not shift the symbols towards the nearest classes in constellation space. This shows that DNNs do not base their decisions on signal statistics that are important for the Bayes-optimal modulation recognition model, but on spurious correlations in the training data. Our feature analysis and proposed framework can help in the task of finding better models for communication systems.
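The SPR constraint above can be illustrated with a short sketch, assuming SPR is defined as 10·log10(‖x‖² / ‖δ‖²) in dB and I/Q tensors of shape (batch, 2, N); both the definition and the shapes are assumptions, not details taken from the paper:

```python
import torch

def scale_to_spr(x, delta, spr_db):
    """Rescale `delta` so that 10*log10(||x||^2 / ||delta||^2) equals `spr_db`."""
    sig_energy = x.pow(2).sum(dim=(1, 2), keepdim=True)      # per-example signal energy
    target_energy = sig_energy / (10.0 ** (spr_db / 10.0))   # allowed perturbation energy
    cur_energy = delta.pow(2).sum(dim=(1, 2), keepdim=True)
    return delta * (target_energy / cur_energy.clamp_min(1e-12)).sqrt()
```

A perturbation direction from any attack (for example a gradient sign) can then be rescaled with, say, `scale_to_spr(x, grad.sign(), spr_db=20.0)` so that all models are compared at the same perturbation budget relative to the signal power.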
This paper explores the use of adversarial examples in training speech recognition systems to increase the robustness of deep neural network acoustic models. During training, the fast gradient sign method is used to generate adversarial examples that augment the original training data. Unlike conventional data augmentation based on data transformations, these examples are dynamically generated from the current acoustic model parameters. We assess the impact of adversarial data augmentation in experiments on the Aurora-4 and CHiME-4 single-channel tasks, showing improved robustness against noise and channel variation. Further improvement is obtained when combining adversarial examples with teacher/student training, leading to a 23% relative word error rate reduction on Aurora-4.
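A minimal sketch of this kind of on-the-fly FGSM data augmentation, assuming a PyTorch acoustic model with cross-entropy targets; the 1:1 clean/adversarial mix and the epsilon value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_augmented_step(model, opt, feats, targets, eps=0.1):
    # Craft adversarial features from the *current* model parameters.
    feats_req = feats.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(feats_req), targets)
    grad = torch.autograd.grad(loss, feats_req)[0]
    feats_adv = (feats_req + eps * grad.sign()).detach()
    # Train on the clean batch augmented with its adversarial counterpart.
    opt.zero_grad()
    both = torch.cat([feats, feats_adv])
    F.cross_entropy(model(both), torch.cat([targets, targets])).backward()
    opt.step()
```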
Modulation Classification (MC) refers to the problem of classifying the modulation class of a wireless signal. In the wireless communications pipeline, MC is the first operation performed on the received signal and is critical for reliable decoding. This paper considers the problem of secure modulation classification, where a transmitter (Alice) wants to maximize MC accuracy at a legitimate receiver (Bob) while minimizing MC accuracy at an eavesdropper (Eve). The contribution of this work is to design novel adversarial learning techniques for secure MC. In particular, we present adversarial filtering based algorithms for secure MC, in which Alice uses a carefully designed adversarial filter to mask the transmitted signal, maximizing MC accuracy at Bob while minimizing it at Eve. We present two filtering based algorithms, namely the gradient ascent filter (GAF) and the fast gradient filter method (FGFM), with varying levels of complexity. Our proposed adversarial filtering based approaches significantly outperform additive adversarial perturbations (used in the traditional ML community and in prior work on secure MC) and have several other desirable properties. In particular, the GAF and FGFM algorithms are a) computationally efficient (allowing fast decoding at Bob), b) power-efficient (not requiring excessive transmit power at Alice), and c) SNR-efficient (i.e., they perform well even at low SNR values at Bob).
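A hedged sketch of a GAF-style objective: learn short FIR filter taps by gradient steps so that Bob's classifier stays accurate while Eve's degrades. The filter length, per-channel conv1d filtering, identity initialization, and the loss weighting `lam` are all illustrative assumptions, not the paper's algorithm:

```python
import torch
import torch.nn.functional as F

def gradient_ascent_filter(bob, eve, x, y, taps=11, steps=50, lr=0.01, lam=1.0):
    # Identity initialization: a unit impulse at the center tap, one filter
    # per I/Q channel (input x assumed to have shape (batch, 2, N)).
    h = torch.zeros(2, 1, taps)
    h[:, 0, taps // 2] = 1.0
    h.requires_grad_(True)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        x_f = F.conv1d(x, h, padding=taps // 2, groups=2)  # filter the signal
        # Keep Bob accurate (minimize his loss) while confusing Eve
        # (maximize hers, hence the minus sign).
        loss = F.cross_entropy(bob(x_f), y) - lam * F.cross_entropy(eve(x_f), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return h.detach()
```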
Adversarial training (AT) has been demonstrated as one of the most promising defense methods against various adversarial attacks. To our knowledge, existing AT-based methods usually train with the locally most adversarial perturbed points and treat all perturbed points equally, which may lead to considerably weaker adversarially robust generalization on test data. In this work, we introduce a new adversarial training framework that considers the diversity as well as the characteristics of the perturbed points in the vicinity of benign samples. To realize the framework, we propose a Regional Adversarial Training (RAT) defense method that first uses the attack path generated by the typical iterative attack method of projected gradient descent (PGD) and constructs an adversarial region based on that path. RAT then efficiently samples diverse perturbed training points inside this region and uses a distance-aware label smoothing mechanism to capture our intuition that perturbed points at different locations should have a different impact on model performance. Extensive experiments on several benchmark datasets show that RAT consistently and significantly improves on standard adversarial training (SAT) and exhibits better robust generalization.
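A rough sketch of the RAT idea, assuming the attack path is the sequence of intermediate PGD iterates, one path point is sampled per batch, and label smoothing grows linearly with the sampled point's distance from the benign input; the smoothing schedule and normalization are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pgd_path(model, x, y, eps, step, iters):
    """Return the intermediate PGD iterates as the attack path."""
    path, x_adv = [], x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x + (x_adv + step * grad.sign() - x).clamp(-eps, eps)).detach()
        path.append(x_adv)
    return path

def rat_loss(model, x, y, num_classes, eps=8/255, step=2/255, iters=10, max_smooth=0.3):
    path = pgd_path(model, x, y, eps, step, iters)
    x_s = path[torch.randint(len(path), (1,)).item()]  # sample a point on the path
    # Distance-aware label smoothing: farther from the benign input -> softer label.
    dist = (x_s - x).flatten(1).norm(dim=1, keepdim=True)
    max_dist = eps * (x[0].numel() ** 0.5)             # L2 radius of the L-inf ball
    alpha = max_smooth * dist / max_dist
    target = (1 - alpha) * F.one_hot(y, num_classes).float() + alpha / num_classes
    return -(target * F.log_softmax(model(x_s), dim=1)).sum(dim=1).mean()
```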
The goal of this work is to train robust speaker recognition models without speaker labels. Recent works on unsupervised speaker representations are based on contrastive learning, in which within-utterance embeddings are encouraged to be similar and cross-utterance embeddings to be dissimilar. However, since within-utterance segments share the same acoustic characteristics, it is difficult to separate the speaker information from the channel information. To this end, we propose an augmentation adversarial training strategy that trains the network to be discriminative for the speaker information while invariant to the augmentation applied. Since the augmentation simulates the acoustic characteristics, training the network to be invariant to augmentation also encourages it to be invariant to the channel information in general. Extensive experiments on the VoxCeleb and VOiCES datasets show significant improvements over previous works using self-supervision, and the performance of our self-supervised models far exceeds that of humans.
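A minimal sketch of the augmentation-adversarial idea: a gradient reversal layer makes the encoder unable to predict which augmentation was applied, while the main self-supervised objective stays discriminative. The module names and the placeholder contrastive loss are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def aat_step(encoder, aug_head, contrastive_loss, opt, x, aug_label, lam=1.0):
    emb = encoder(x)
    spk_loss = contrastive_loss(emb)  # self-supervised speaker objective (assumed)
    # The augmentation classifier trains normally, but the reversed gradient
    # pushes the encoder toward augmentation (and hence channel) invariance.
    aug_loss = F.cross_entropy(aug_head(GradReverse.apply(emb, lam)), aug_label)
    opt.zero_grad()
    (spk_loss + aug_loss).backward()
    opt.step()
```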
