
Robust in Practice: Adversarial Attacks on Quantum Machine Learning

Added by: Haoran Liao
Publication date: 2020
Field: Physics
Language: English





State-of-the-art classical neural networks are observed to be vulnerable to small, crafted adversarial perturbations. A more severe vulnerability has been noted for quantum machine learning (QML) models classifying Haar-random pure states. This stems from the concentration of measure phenomenon, a property of the metric space when sampled probabilistically, and is independent of the classification protocol. To provide insight into the adversarial robustness of a quantum classifier on real-world classification tasks, we focus on the adversarial robustness in classifying a subset of encoded states that are smoothly generated from a Gaussian latent space. We show that the vulnerability of this task is considerably weaker than that of classifying Haar-random pure states. In particular, we find only a mildly polynomially decreasing robustness in the number of qubits, in contrast to the exponentially decreasing robustness when classifying Haar-random pure states, suggesting that QML models can be useful for real-world classification tasks.
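A minimal numpy sketch of the contrast the abstract describes (my own illustration, not the paper's construction): the expectation of a fixed observable concentrates exponentially fast over Haar-random states as the qubit count grows, while states generated smoothly from a low-dimensional Gaussian latent space, here a hypothetical product-state encoder, need not concentrate at all.

import numpy as np

rng = np.random.default_rng(0)

def haar_state(n_qubits):
    # Haar-random pure state: normalized complex Gaussian vector.
    d = 2 ** n_qubits
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def latent_state(n_qubits, z):
    # Hypothetical smooth encoder from a 2-d Gaussian latent vector z to a product state.
    angles = 0.5 * np.pi * np.tanh(z[0] + 0.3 * z[1] * np.arange(n_qubits))
    psi = np.array([1.0 + 0j])
    for a in angles:
        psi = np.kron(psi, np.array([np.cos(a), np.sin(a)], dtype=complex))
    return psi

def z_expectation(psi):
    # <Z> on the first qubit: +1 on the first half of the amplitudes, -1 on the second.
    d = len(psi)
    signs = np.concatenate([np.ones(d // 2), -np.ones(d // 2)])
    return float(np.real(np.vdot(psi, signs * psi)))

for n in range(2, 9):
    var_haar = np.var([z_expectation(haar_state(n)) for _ in range(500)])
    var_latent = np.var([z_expectation(latent_state(n, rng.normal(size=2))) for _ in range(500)])
    print(f"n={n}  Var<Z> Haar: {var_haar:.1e}   latent: {var_latent:.1e}")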




Related research

High-precision operation of quantum computing systems must be robust to uncertainties and noise in the quantum hardware. In this paper, we show that through a game played between the uncertainties (or noise) and the controls, adversarial uncertainty samples can be generated to find highly robust controls through the search for Nash equilibria (NE). We propose a broad family of adversarial learning algorithms, namely a-GRAPE algorithms, which include two effective learning schemes referred to as the best-response approach and the better-response approach in game-theoretic terminology, providing options for rapidly learning robust controls. Numerical experiments demonstrate that the balance between fidelity and robustness depends on the details of the chosen adversarial learning algorithm, which can effectively lead to a significant enhancement of control robustness while attaining high fidelity.
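As a rough illustration of the game-theoretic loop described above, the sketch below alternates a better-response step for the uncertainty with one for the control on a toy one-parameter fidelity model; the fidelity function, step sizes, and bound are assumptions for illustration, not the a-GRAPE algorithm itself.

import numpy as np

def fidelity(u, eps):
    # Hypothetical smooth fidelity of a single control parameter u under a noise shift eps.
    return np.cos(u + eps) ** 2

def num_grad(f, x, h=1e-5):
    # Central-difference gradient of a scalar function.
    return (f(x + h) - f(x - h)) / (2 * h)

u, eps = 0.7, 0.0
lr, eps_bound = 0.2, 0.3
for _ in range(200):
    # Adversary (better response): nudge eps to lower the fidelity, within its bound.
    eps = np.clip(eps - lr * num_grad(lambda e: fidelity(u, e), eps), -eps_bound, eps_bound)
    # Controller (better response): nudge u to recover the fidelity under that eps.
    u = u + lr * num_grad(lambda x: fidelity(x, eps), u)

print(f"control u = {u:.3f}, worst-case fidelity = {fidelity(u, eps):.3f}")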
This paper proposes adversarial attacks for reinforcement learning (RL) and then uses them to improve the robustness of deep reinforcement learning (DRL) algorithms to parameter uncertainties. We show that even a naively engineered attack successfully degrades the performance of a DRL algorithm. We further improve the attack using gradient information from an engineered loss function, which leads to further degradation in performance. These attacks are then leveraged during training to improve the robustness of RL within a robust control framework. We show that this adversarial training of DRL algorithms such as Deep Double Q-learning and Deep Deterministic Policy Gradients leads to a significant increase in robustness to parameter variations on RL benchmarks such as the Cart-pole, Mountain Car, Hopper, and Half Cheetah environments.
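A generic gradient-based observation attack of the kind this line of work studies might look like the sketch below; the policy_net argument and the cross-entropy surrogate loss are assumptions for illustration, not the paper's specific engineered loss.

import torch
import torch.nn.functional as F

def attack_observation(policy_net, obs, epsilon=0.05):
    # obs: (batch, obs_dim) tensor of environment observations.
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy_net(obs)              # action preferences of the trained policy
    preferred = logits.argmax(dim=-1)
    # Increase the loss of the currently preferred action, then step in the sign of
    # the gradient so that action becomes less likely (an FGSM-style perturbation).
    F.cross_entropy(logits, preferred).backward()
    return (obs + epsilon * obs.grad.sign()).detach()

# Example with a stand-in linear policy for a 4-dimensional observation space.
policy = torch.nn.Linear(4, 2)
adv_obs = attack_observation(policy, torch.randn(8, 4))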
Yusen Wu, Hao Chen, Xin Wang (2021)
Adversarial attacks attempt to disrupt the training, retraining, and use of artificial intelligence and machine learning models in large-scale distributed machine learning systems, posing security risks to their prediction outcomes. For example, attackers may poison a model by presenting inaccurate, misrepresentative data or by altering the model's parameters. In addition, Byzantine faults, including software, hardware, and network issues, occur in distributed systems and also negatively affect the prediction outcome. In this paper, we propose a novel distributed training algorithm, partial synchronous stochastic gradient descent (ParSGD), which defends against adversarial attacks and/or tolerates Byzantine faults. We demonstrate the effectiveness of our algorithm under three common adversarial attacks against ML models and a Byzantine fault during the training phase. Our results show that with ParSGD, ML models can still produce accurate predictions, as if they were neither being attacked nor experiencing failures, even when almost half of the nodes are compromised or have failed. We report experimental evaluations of ParSGD in comparison with other algorithms.
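The abstract does not spell out ParSGD's aggregation rule, so the sketch below only illustrates the general idea of tolerating poisoned or faulty workers, using a coordinate-wise trimmed mean, one standard Byzantine-robust aggregator; it is not ParSGD itself.

import numpy as np

def trimmed_mean_aggregate(worker_grads, trim_frac=0.25):
    # worker_grads: (n_workers, n_params). Sort each coordinate across workers and
    # drop the largest and smallest trim_frac fraction before averaging.
    g = np.sort(np.asarray(worker_grads), axis=0)
    k = int(len(g) * trim_frac)
    return g[k:len(g) - k].mean(axis=0)

# Example: 6 honest workers near the true gradient (~1.0), 2 poisoned workers.
rng = np.random.default_rng(1)
honest = 1.0 + rng.normal(0.0, 0.05, size=(6, 4))
poisoned = np.full((2, 4), -100.0)
print(trimmed_mean_aggregate(np.vstack([honest, poisoned])))  # stays close to 1.0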
Over the past few years, several quantum machine learning algorithms have been proposed that promise quantum speed-ups over their classical counterparts. Most of these learning algorithms either assume quantum access to data (making it unclear whether quantum speed-ups still exist without such strong assumptions) or are heuristic in nature with no provable advantage over classical algorithms. In this paper, we establish a rigorous quantum speed-up for supervised classification using a general-purpose quantum learning algorithm that only requires classical access to data. Our quantum classifier is a conventional support vector machine that uses a fault-tolerant quantum computer to estimate a kernel function. Data samples are mapped to a quantum feature space, and the kernel entries can be estimated as the transition amplitude of a quantum circuit. We construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely believed hardness of the discrete logarithm problem. Meanwhile, the quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.
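The overall structure of such a quantum-kernel classifier can be sketched as follows: each sample is mapped to a feature state, the kernel entries are the overlaps a quantum circuit would estimate as transition amplitudes, and a standard SVM consumes the precomputed Gram matrix. The single-qubit angle encoding below is a classical stand-in for illustration, not the paper's discrete-logarithm-based feature map.

import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    # Hypothetical single-qubit angle encoding as a classical stand-in for the circuit.
    return np.array([np.cos(x[0] / 2), np.exp(1j * x[1]) * np.sin(x[0] / 2)])

def quantum_kernel(A, B):
    # Entry (i, j) is |<phi(a_i)|phi(b_j)>|^2, the overlap a circuit would estimate.
    return np.array([[abs(np.vdot(feature_state(a), feature_state(b))) ** 2 for b in B]
                     for a in A])

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (X[:, 0] > np.pi / 2).astype(int)

clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print("training accuracy:", clf.score(quantum_kernel(X, X), y))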
Adversarial examples are perturbed inputs designed, using a deep learning network's (DLN) parameter gradients, to mislead the DLN at test time. Intuitively, constraining the dimensionality of a network's inputs or parameters reduces the space in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations, for minimal loss in test accuracy. Furthermore, we find that binary neural networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs improves robustness further, even removing the need for adversarial training for certain perturbation magnitudes. We evaluate the effect of discretization on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of just ~1-2% compared to clean test accuracy.
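A minimal sketch of the 2-bit input discretization described above, assuming inputs already scaled to [0, 1]: pixel values are snapped to four evenly spaced levels before being fed to the network, shrinking the space in which adversarial perturbations can live.

import numpy as np

def discretize(images, bits=2):
    # images: float array in [0, 1]; quantize to 2**bits evenly spaced levels.
    levels = 2 ** bits - 1
    return np.round(images * levels) / levels

x = np.random.default_rng(0).random((1, 28, 28))   # e.g. an MNIST-sized input
print(np.unique(discretize(x)))                    # four levels: 0, 1/3, 2/3, 1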
