
Robustness Verification of Quantum Classifiers

Posted by: Ji Guan
Publication date: 2020
Paper language: English





Several important models of machine learning algorithms have been successfully generalized to the quantum world, with potential speedups over training classical classifiers and applications to data analytics in quantum physics that can be implemented on near-future quantum computers. However, quantum noise is a major obstacle to the practical implementation of quantum machine learning. In this work, we define a formal framework for the robustness verification and analysis of quantum machine learning algorithms against noise. A robust bound is derived, and an algorithm is developed to check whether or not a quantum machine learning algorithm is robust with respect to its quantum training data. In particular, this algorithm can find adversarial examples during checking. Our approach is implemented on Google's TensorFlow Quantum and can verify the robustness of quantum machine learning algorithms with respect to small disturbances caused by noise from the surrounding environment. The effectiveness of our robust bound and algorithm is confirmed by experimental results, including quantum bit classification as a 'Hello World' example, quantum phase recognition and cluster excitation detection from real-world intractable physical problems, and the classification of MNIST from the classical world.
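A robust bound of this kind can be made concrete with a standard trace-distance argument: for any measurement operator 0 <= M <= I, the class probabilities of two states differ by at most their trace distance, so a gap of more than 2*eps between the top two class probabilities guarantees the label is unchanged by any eps-bounded perturbation. The numpy sketch below illustrates this sufficient condition; the function names and toy classifier are illustrative only, and the paper's exact bound and verification algorithm (implemented on TensorFlow Quantum) may differ in detail.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = (1/2) ||rho - sigma||_1 for Hermitian rho, sigma."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def is_robust(povm, rho, eps):
    """Sufficient condition: if the two largest class probabilities differ
    by more than 2*eps, no state within trace distance eps of rho can
    change the predicted label (each probability moves by at most eps)."""
    probs = sorted((np.real(np.trace(m @ rho)) for m in povm), reverse=True)
    return probs[0] - probs[1] > 2 * eps

# Toy one-qubit classifier: measure in the computational basis.
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.diag([0.9, 0.1])                             # mostly |0>
noisy = 0.95 * rho + 0.05 * np.eye(2) / 2             # slight depolarisation
print("perturbation size:", trace_distance(rho, noisy))    # 0.02
print("robust at eps = 0.05:", is_robust(povm, rho, 0.05))  # True
```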


Read also

Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies. However, noise has often played beneficial roles, from enhancing weak signals in stochastic resonance to protecting the privacy of data in differential privacy. It is then natural to ask: can we harness the power of quantum noise in ways that benefit quantum computing? An important current direction for quantum computing is its application to machine learning, such as classification problems. One outstanding problem in machine learning for classification is its sensitivity to adversarial examples. These are small, nearly undetectable perturbations of the original data that cause otherwise extremely accurate classifiers to misclassify the perturbed data completely. They can also be considered as 'worst-case' perturbations by unknown noise sources. We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived, where the robustness improves with increasing noise. This robustness property is intimately connected with an important security concept called differential privacy, which can be extended to quantum differential privacy. For the protection of quantum data, this is the first quantum protocol that can be used against the most general adversaries. Furthermore, we show that while the robustness in the classical case can be sensitive to the details of the classification model, in the quantum case the details of the classification model are absent, thus also providing a potential quantum advantage for classical data that is independent of quantum speedups. This opens the opportunity to explore other ways in which quantum noise can be used in our favour, as well as to identify other ways in which quantum algorithms can be helpful independently of quantum speedups.
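One intuition for why depolarising noise helps: the channel rho -> (1 - p) * rho + p * I/d contracts every output-probability difference by a factor of (1 - p), shrinking the effect any bounded adversarial perturbation can have on the classifier's output. The sketch below demonstrates this contraction numerically; it is a simplified stand-in for the paper's differential-privacy argument, which bounds probability ratios rather than differences.

```python
import numpy as np

def depolarise(rho, p):
    """Depolarising channel: rho -> (1 - p) * rho + p * I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

M = np.diag([1.0, 0.0])                              # POVM element, class 0
rho = np.array([[0.95, 0.0], [0.0, 0.05]])           # clean input state
adv = np.array([[0.70, 0.0], [0.0, 0.30]])           # adversarially shifted

for p in (0.0, 0.25, 0.5):
    shift = abs(np.trace(M @ (depolarise(rho, p) - depolarise(adv, p))))
    print(f"p = {p:.2f}  adversarial probability shift = {shift:.4f}")
# The shift scales as (1 - p): the noisier the circuit, the less any
# bounded perturbation can move the classifier's output probabilities.
```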
Boxin Wang, Boyuan Pan, Xin Li (2020)
Recent advances in large-scale language representation models such as BERT have improved state-of-the-art performance on many NLP tasks. Meanwhile, character-level Chinese NLP models, including BERT for Chinese, have also demonstrated that they can outperform existing models. In this paper, however, we show that such BERT-based models are vulnerable to character-level adversarial attacks. We propose a novel Chinese character-level attack method against BERT-based classifiers. Essentially, we generate small perturbations at the character level in the embedding space and use them to guide the character-substitution procedure. Extensive experiments show that the classification accuracy on a Chinese news dataset drops from 91.8% to 0% under the proposed attack by manipulating fewer than 2 characters on average. Human evaluations also confirm that our generated Chinese adversarial examples barely affect human performance on these NLP tasks.
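A rough sketch of the general idea, gradient-guided character substitution, is below. The toy embedding layer and linear classifier are stand-ins for illustration only; the paper's method targets character-level Chinese BERT models and uses its own substitution procedure.

```python
import torch

torch.manual_seed(0)
vocab_size, emb_dim, seq_len = 50, 8, 6
emb = torch.nn.Embedding(vocab_size, emb_dim)      # stand-in for char embeddings
clf = torch.nn.Linear(emb_dim, 2)                  # stand-in classifier

tokens = torch.randint(0, vocab_size, (seq_len,))
label = torch.tensor([1])

# 1. Gradient of the loss with respect to the input embeddings.
x = emb(tokens).detach().requires_grad_(True)
logits = clf(x.mean(dim=0, keepdim=True))          # (1, 2)
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# 2. Pick the position with the largest gradient norm, then the replacement
#    character whose embedding most increases the loss (first-order estimate).
pos = x.grad.norm(dim=1).argmax().item()
scores = emb.weight.detach() @ x.grad[pos]         # alignment with gradient
scores[tokens[pos]] = float("-inf")                # forbid keeping the char
tokens[pos] = scores.argmax()
print(f"substituted position {pos} -> token {tokens[pos].item()}")
```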
Many recent works have proposed methods to train classifiers with local robustness properties, which can provably eliminate classes of evasion attacks for most inputs, but not all inputs. Since data distribution shift is very common in security applications, e.g., often observed in malware detection, local robustness cannot guarantee that the property holds for inputs unseen at the time the classifier is deployed. It is therefore more desirable to enforce global robustness properties that hold for all inputs, which is strictly stronger than local robustness. In this paper, we present a framework and tools for training classifiers that satisfy global robustness properties. We define new notions of global robustness that are more suitable for security classifiers. We design a novel booster-fixer training framework to enforce global robustness properties. We structure our classifier as an ensemble of logic rules and design a new verifier to verify the properties. In our training algorithm, the booster increases the classifier's capacity, and the fixer enforces verified global robustness properties following counterexample-guided inductive synthesis. We show that we can train classifiers to satisfy different global robustness properties for three security datasets, and even multiple properties at the same time, with modest impact on the classifiers' performance. For example, we train a Twitter spam account classifier to satisfy five global robustness properties, with a 5.4% decrease in true positive rate and a 0.1% increase in false positive rate, compared to a baseline XGBoost model that doesn't satisfy any property.
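The sketch below illustrates what checking one global robustness property on a rule-ensemble classifier can look like. The rules, features, and monotonicity property are invented for illustration, and brute-force enumeration stands in for the paper's dedicated verifier and booster-fixer (counterexample-guided) training loop.

```python
# Classifier: a weighted ensemble of logic rules over two small features.
rules = [
    (lambda f: f["age_days"] < 30 and f["msgs_per_day"] > 50, +2.0),  # spam-like
    (lambda f: f["age_days"] >= 365, -1.5),                          # old account
]

def score(f):
    return sum(w for rule, w in rules if rule(f))

# Global property (monotonicity): sending MORE messages per day must never
# LOWER the spam score, for every input, not just inputs near the data.
def check_monotone(feature, values, other, other_values):
    for o in other_values:
        prev = None
        for v in values:
            s = score({feature: v, other: o})
            if prev is not None and s < prev:
                return {feature: v, other: o}     # counterexample found
            prev = s
    return None                                   # property verified

cex = check_monotone("msgs_per_day", range(0, 200, 10),
                     "age_days", range(0, 730, 30))
print("property holds" if cex is None else f"violated at {cex}")
```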
Zhengfeng Ji (2015)
We present a classical interactive protocol that verifies the validity of a quantum witness state for the local Hamiltonian problem. It follows from this protocol that approximating the non-local value of a multi-player one-round game to inverse polynomial precision is QMA-hard. Our work makes an interesting connection between the theory of QMA-completeness and Hamiltonian complexity on one hand and the study of non-local games and Bell inequalities on the other.
Distributed quantum systems and especially the Quantum Internet have the ever-increasing potential to fully demonstrate the power of quantum computation. This is particularly true given that developing a general-purpose quantum computer is much more difficult than connecting many small quantum devices. One major challenge of implementing distributed quantum systems is programming them and verifying their correctness. In this paper, we propose a CSP-like distributed programming language to facilitate the specification and verification of such systems. After presenting its operational and denotational semantics, we develop a Hoare-style logic for distributed quantum programs and establish its soundness and (relative) completeness with respect to both partial and total correctness. The effectiveness of the logic is demonstrated by its applications in the verification of quantum teleportation and local implementation of non-local CNOT gates, two important algorithms widely used in distributed quantum systems.
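As a complement to the Hoare-logic proof mentioned above, teleportation correctness can also be checked numerically: simulate the protocol and confirm that, for every measurement outcome, the receiver's qubit ends up in the sender's state after the corresponding correction. The numpy sketch below does exactly that; it is runtime testing on a simulator, not the paper's formal verification.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                      # random state to teleport

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |Phi+> shared on qubits 1, 2
state = np.kron(psi, bell)                      # qubit 0 holds psi

# Sender: CNOT from qubit 0 to qubit 1, then H on qubit 0.
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

# For each outcome (m0, m1) of measuring qubits 0 and 1, the receiver
# applies Z^m0 X^m1 to qubit 2; the result must match psi (fidelity 1).
for m0 in (0, 1):
    for m1 in (0, 1):
        branch = state.reshape(4, 2)[2 * m0 + m1]   # qubit-2 amplitudes
        fixed = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ branch
        fixed = fixed / np.linalg.norm(fixed)
        print(f"outcome {m0}{m1}: fidelity = {abs(np.vdot(psi, fixed)):.6f}")
```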
