
Vulnerability of quantum classification to adversarial perturbations

Posted by: Nana Liu
Publication date: 2019
Research field: Physics
Paper language: English





High-dimensional quantum systems are vital for quantum technologies and are essential in demonstrating practical quantum advantage in quantum computing, simulation and sensing. Since dimensionality grows exponentially with the number of qubits, the potential power of noisy intermediate-scale quantum (NISQ) devices over classical resources also stems from entangled states in high dimensions. Classification tasks form an important family of quantum protocols that can take advantage of a high-dimensional Hilbert space. These include quantum machine learning algorithms, witnesses in quantum information processing and certain decision problems. However, due to counter-intuitive geometrical properties that emerge in high dimensions, classification problems are vulnerable to adversarial attacks. We demonstrate that the amount of perturbation needed for an adversary to induce a misclassification scales inversely with dimensionality. This is shown to be a fundamental feature independent of the details of the classification protocol. Furthermore, this leads to a trade-off between the security of the classification algorithm against adversarial attacks and the quantum advantage we expect for high-dimensional problems. In fact, protection against these adversarial attacks requires extra resources that scale at least polynomially with the Hilbert space dimension of the system, which can erase any significant quantum advantage that we might expect from a quantum protocol. This has wide-ranging implications for the use of both near-term and future quantum technologies for classification.
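As a rough classical analogue of the geometric effect described above (not the construction used in the paper), the short Python sketch below samples points uniformly on the unit sphere in R^d and measures their typical distance to a fixed linear decision boundary. That distance is the smallest perturbation an adversary needs, and it shrinks as the dimension d grows. The linear boundary and uniform sampling are illustrative assumptions only and do not reproduce the paper's quantitative bound.

import numpy as np

rng = np.random.default_rng(0)

def typical_margin(d, n_samples=2000):
    # Sample points uniformly on the unit sphere S^{d-1}.
    x = rng.normal(size=(n_samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # Fixed decision boundary: the hyperplane w.x = 0 with unit normal w.
    w = np.zeros(d)
    w[0] = 1.0
    # |w.x| is the distance to the boundary, i.e. the smallest perturbation
    # (in Euclidean norm) that changes the predicted class.
    return np.mean(np.abs(x @ w))

for d in [2, 16, 128, 1024, 8192]:
    print(f"d = {d:5d}   typical perturbation needed ~ {typical_margin(d):.4f}")

Running this shows the typical margin collapsing toward zero as d increases, which is the concentration-of-measure intuition behind the inverse scaling claimed in the abstract.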




Read also

Adversarial attacks against machine learning models have threatened various real-world applications such as spam filtering and sentiment analysis. In this paper, we propose a novel framework, learning to DIScriminate Perturbations (DISP), to identify and adjust malicious perturbations, thereby blocking adversarial attacks for text classification models. To identify adversarial attacks, a perturbation discriminator validates how likely a token in the text is perturbed and provides a set of potential perturbations. For each potential perturbation, an embedding estimator learns to restore the embedding of the original word based on the context and a replacement token is chosen based on approximate kNN search. DISP can block adversarial attacks for any NLP model without modifying the model structure or training procedure. Extensive experiments on two benchmark datasets demonstrate that DISP significantly outperforms baseline methods in blocking adversarial attacks for text classification. In addition, in-depth analysis shows the robustness of DISP across different situations.
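The sketch below is a hypothetical illustration of the kNN recovery step described in that abstract: given an embedding predicted for a token flagged as perturbed, the nearest vocabulary embeddings (by cosine similarity) propose the replacement. The toy vocabulary, the random embedding table and the stand-in for the embedding estimator are assumptions made here for illustration and are not DISP's actual components.

import numpy as np

rng = np.random.default_rng(1)

# Toy vocabulary and a random embedding table (stand-ins for a real model's).
vocab = ["good", "great", "terrible", "movie", "film"]
emb = rng.normal(size=(len(vocab), 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

def knn_replacement(estimated_vec, k=1):
    # Return the k vocabulary tokens whose embeddings are closest, by cosine
    # similarity, to the embedding predicted for the flagged token.
    v = estimated_vec / np.linalg.norm(estimated_vec)
    top = np.argsort(-(emb @ v))[:k]
    return [vocab[i] for i in top]

# Stand-in for the embedding estimator's output at a flagged position:
# here, simply a noisy copy of the embedding of "good".
estimated = emb[0] + 0.1 * rng.normal(size=8)
print(knn_replacement(estimated, k=2))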
The major imminent investments in quantum technologies will bring concepts like a global quantum Internet and quantum Internet-of-Things, closer to reality. Our findings reveal a new form of vulnerability that will enable hostile groups of quantum-enabled adversaries to inflict maximal disruption on the global quantum state in such systems. These attacks will be practically impossible to detect since they introduce no change in the Hamiltonian and no loss of purity; they require no real-time communication; and they can be over within a second. We also predict that such attacks will be amplified by the statistical character of modern extremist, insurgent and terrorist groups. A countermeasure could be to embed future quantum technologies within redundant classical networks.
The local density of states, or its Fourier transform, usually called the fidelity amplitude, are important measures of quantum irreversibility due to imperfect evolution. In this Rapid Communication we study both quantities in a paradigmatic many-body system, the Dicke Hamiltonian, where a single-mode bosonic field interacts with an ensemble of N two-level atoms. This model exhibits a quantum phase transition in the thermodynamic limit, while finite instances of the system undergo a transition from quasi-integrability to quantum chaos. We show that the width of the local density of states clearly reveals the imprints of the transition from integrability to chaos, but no trace remains of the quantum phase transition. The connection with the decay of the fidelity amplitude is also established.
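For reference, the fidelity amplitude and the local density of states mentioned in that abstract are conventionally defined as follows; the abstract gives no formulas, so these are the standard definitions and may differ in detail from the paper's conventions. For a perturbed Hamiltonian H = H_0 + V and an initial state |\psi\rangle that is an eigenstate of H_0 with energy E_0,

f(t) = \langle \psi |\, e^{i H_0 t/\hbar}\, e^{-i H t/\hbar} \,| \psi \rangle
     = \sum_k |\langle E_k | \psi \rangle|^2 \, e^{-i (E_k - E_0) t/\hbar},

P(E) = \sum_k |\langle E_k | \psi \rangle|^2 \, \delta\big(E - (E_k - E_0)\big),

where |E_k\rangle are the eigenstates of H; the local density of states P(E) and the fidelity amplitude f(t) are then a Fourier-transform pair.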
The goal of this paper is to analyze an intriguing phenomenon recently discovered in deep networks, namely their instability to adversarial perturbations (Szegedy et al., 2014). We provide a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and show fundamental upper bounds on the robustness of classifiers. Specifically, we establish a general upper bound on the robustness of classifiers to adversarial perturbations, and then illustrate the obtained upper bound on the families of linear and quadratic classifiers. In both cases, our upper bound depends on a distinguishability measure that captures the notion of difficulty of the classification task. Our results for both classes imply that in tasks involving small distinguishability, no classifier in the considered set will be robust to adversarial perturbations, even if a good accuracy is achieved. Our theoretical framework moreover suggests that the phenomenon of adversarial instability is due to the low flexibility of classifiers, compared to the difficulty of the classification task (captured by the distinguishability). Moreover, we show the existence of a clear distinction between the robustness of a classifier to random noise and its robustness to adversarial perturbations. Specifically, the former is shown to be larger than the latter by a factor that is proportional to sqrt(d) (with d being the signal dimension) for linear classifiers. This result gives a theoretical explanation for the discrepancy between the two robustness properties in high dimensional problems, which was empirically observed in the context of neural networks. To the best of our knowledge, our results provide the first theoretical work that addresses the phenomenon of adversarial instability recently observed for deep networks. Our analysis is complemented by experimental results on controlled and real-world data.
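The factor-of-sqrt(d) gap for linear classifiers can be checked numerically. The sketch below is an illustrative toy, with an arbitrary random classifier and test point rather than anything taken from the paper: it compares the norm of the smallest sign-flipping perturbation, |w.x + b| / ||w||, with the typical norm needed along a random direction, and the ratio grows in proportion to sqrt(d).

import numpy as np

rng = np.random.default_rng(0)

for d in [10, 100, 1000]:
    w = rng.normal(size=d)           # arbitrary linear classifier sign(w.x + b)
    b = 0.0
    x = rng.normal(size=d)           # arbitrary test point
    adv = abs(w @ x + b) / np.linalg.norm(w)   # smallest adversarial perturbation
    # Norm needed along a random unit direction u to flip the sign:
    # |w.x + b| / |w.u| (choosing the sign of the step appropriately).
    u = rng.normal(size=(500, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    rand = np.median(np.abs(w @ x + b) / np.abs(u @ w))
    print(f"d = {d:5d}   random/adversarial ratio = {rand / adv:7.1f}"
          f"   sqrt(d) = {np.sqrt(d):5.1f}")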
Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has proven empirically to be very intricate to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of the robustness also provides insights into key properties of generative models, such as their smoothness and the dimensionality of their latent space. We conclude with numerical experimental results showing that our bounds provide informative baselines to the maximal achievable robustness on several datasets.