This work aims to assess the reality and feasibility of adversarial attacks against cardiac diagnosis systems powered by machine learning algorithms. To this end, we introduce adversarial beats: adversarial perturbations tailored specifically to an electrocardiogram (ECG) beat-by-beat classification system. We first formulate an algorithm that generates adversarial examples for the ECG classification neural network model, and study its attack success rate. Next, to evaluate its feasibility in a physical environment, we mount a hardware attack by designing a malicious signal generator that injects adversarial beats into ECG sensor readings. To the best of our knowledge, our work is the first to evaluate the effectiveness of adversarial examples for ECGs in a physical setup. Our real-world experiments demonstrate that adversarial beats successfully manipulated the diagnosis results 3-5 times out of 40 attempts over the course of 2 minutes. Finally, we discuss the overall feasibility and impact of the attack, clearly defining the motives and constraints of expected attackers alongside our experimental results.
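The abstract does not spell out the crafting algorithm, so the following is only a minimal sketch of how such adversarial beats could be generated, assuming a PyTorch 1-D CNN beat classifier and standard projected gradient descent (PGD) under an L-infinity budget; the model, the 360-sample beat length, and all names below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BeatClassifier(nn.Module):
    # Toy 1-D CNN over a single ECG beat (here, 360 samples).
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def adversarial_beat(model, beat, label, eps=0.05, alpha=0.01, steps=40):
    # Perturb `beat` (shape [1, 1, T]) within an L-inf ball of radius eps,
    # ascending the classification loss on the true label (untargeted PGD).
    loss_fn = nn.CrossEntropyLoss()
    delta = torch.zeros_like(beat, requires_grad=True)
    for _ in range(steps):
        loss_fn(model(beat + delta), label).backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project back onto the ball
        delta.grad.zero_()
    return (beat + delta).detach()

model = BeatClassifier().eval()
beat = torch.randn(1, 1, 360)   # stand-in for a real, normalized ECG beat
label = torch.tensor([0])
adv = adversarial_beat(model, beat, label)
print(model(beat).argmax(1), model(adv).argmax(1))

In the physical attack described above, eps would additionally be constrained by what the malicious signal generator can inject without the distortion becoming obvious on the ECG trace.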
We propose a new ensemble method for detecting and classifying adversarial examples generated by state-of-the-art attacks, including DeepFool and C&W. Our method works by training the members of an ensemble to have low classification error on random …
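This abstract is cut off before the training scheme is described, so it cannot be reconstructed faithfully here. As a generic illustration of ensemble-based detection (not the authors' method), one can flag an input as suspicious when the members' top-1 votes disagree, since clean inputs tend to receive consistent votes while adversarial ones often split the ensemble:

import torch

def ensemble_flags(models, x, min_agreement=0.8):
    # Mark inputs whose top-1 votes agree with the majority class
    # less often than `min_agreement`.
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # [M, B]
    majority = votes.mode(dim=0).values                        # [B]
    agreement = (votes == majority).float().mean(dim=0)        # [B]
    return agreement < min_agreement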
Adversarial examples have become one of the greatest challenges that machine learning models, especially neural network classifiers, face. These adversarial examples break the assumption of an attack-free scenario and fool state-of-the-art (SOTA) classifiers …
The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit …
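A minimal sketch of the transfer setting this abstract refers to: the attacker crafts perturbations in white-box fashion on a local surrogate model and measures how often they also fool a black-box target that is only queried for predictions. FGSM stands in for whichever white-box attack is actually used; the function names and the 8/255 budget are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm(surrogate, x, y, eps=8 / 255):
    # One-step FGSM on the surrogate; the hope is that the
    # perturbation transfers to the unseen target model.
    x = x.detach().clone().requires_grad_(True)
    F.cross_entropy(surrogate(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(surrogate, target, x, y):
    # Fraction of examples crafted on `surrogate` that also
    # flip the prediction of the black-box `target`.
    x_adv = fgsm(surrogate, x, y)
    with torch.no_grad():
        return (target(x_adv).argmax(1) != y).float().mean().item()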
Despite the remarkable success of deep neural networks, significant concerns have emerged about their robustness to adversarial perturbations of inputs. While most attacks aim to ensure that these perturbations are imperceptible, physical perturbation attacks typically …
Though deep neural networks have achieved huge success in recent studies and applications, they remain vulnerable to adversarial perturbations that are imperceptible to humans. To address this problem, we propose a novel network called ReabsNet to …