Hardening Quantum Machine Learning Against Adversaries

Published by Nathan Wiebe
Publication date: 2017
Paper language: English

Security for machine learning has begun to become a serious issue for present-day applications. An important question remaining is whether emerging quantum technologies will help or hinder the security of machine learning. Here we discuss a number of ways that quantum information can be used to help make quantum classifiers more secure or private. In particular, we demonstrate a form of robust principal component analysis that, under some circumstances, can provide an exponential speedup relative to robust methods used at present. To demonstrate this approach we introduce a linear-combinations-of-unitaries (LCU) Hamiltonian simulation method that we show still functions when given an imprecise Hamiltonian oracle, which may be of independent interest. We also introduce a new quantum approach for bagging and boosting that can use quantum superposition over the classifiers or splits of the training set to aggregate over many more models than would be possible classically. Finally, we provide a private form of $k$-means clustering that can be used to prevent an all-powerful adversary from learning more than a small fraction of a bit from any user. These examples show the role that quantum technologies can play in the security of machine learning and vice versa. This illustrates that quantum computing can provide useful advantages to machine learning apart from speedups.
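For intuition about the privacy goal, the sketch below is a purely classical randomised-response analogue of private k-means assignment, not the quantum protocol of the paper; the parameter p_flip is a hypothetical illustration of how a single report can reveal only a small fraction of a bit about a user.

```python
import numpy as np

def private_assignment(point, centroids, p_flip=0.45, rng=None):
    """Randomised-response vote for the nearest centroid.

    With probability p_flip the user reports a uniformly random cluster label,
    otherwise the true nearest centroid, so an observer of a single report
    learns far less than one bit about the user's data.
    """
    rng = rng or np.random.default_rng()
    k = len(centroids)
    true_idx = int(np.argmin(np.linalg.norm(centroids - point, axis=1)))
    if rng.random() < p_flip:
        return int(rng.integers(k))
    return true_idx

# Toy round: the server only sees noisy votes; averaged over many users the
# bias towards the true assignment still lets the centroids be re-estimated.
rng = np.random.default_rng(0)
points = rng.normal(size=(200, 2))
centroids = rng.normal(size=(3, 2))
votes = [private_assignment(p, centroids, rng=rng) for p in points]
```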

Read also

In privacy amplification, two mutually trusting parties aim to amplify the secrecy of an initial shared secret $X$ in order to establish a shared private key $K$ by exchanging messages over an insecure communication channel. If the channel is authenticated, the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries. In the case that the channel is not authenticated, Dodis and Wichs (STOC '09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor. We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS '12) and is able to extract from sources of min-entropy rate larger than $1/2$. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, shown by Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.
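To illustrate the single-round, authenticated-channel case mentioned above, here is a minimal sketch of privacy amplification with a two-universal (Toeplitz) strong extractor; the non-malleable extractor constructed in the paper for the unauthenticated setting is considerably more involved and is not reproduced here.

```python
import numpy as np

def toeplitz_extract(x_bits, seed_bits, m):
    """Strong extractor built from a random binary Toeplitz matrix.

    x_bits    : length-n 0/1 array, the weak shared secret X
    seed_bits : length m+n-1 0/1 array, the public uniformly random seed
    m         : output length of the extracted key K
    """
    n = len(x_bits)
    assert len(seed_bits) == m + n - 1
    # T[i, j] depends only on i - j, i.e. T is Toeplitz.
    T = np.array([[seed_bits[i - j + n - 1] for j in range(n)]
                  for i in range(m)], dtype=np.uint8)
    return T.dot(x_bits) % 2

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=64)              # shared secret with partial secrecy
seed = rng.integers(0, 2, size=16 + 64 - 1)  # seed sent over the public channel
key = toeplitz_extract(x, seed, m=16)        # extracted (shorter) private key
```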
Noise in quantum information processing is often viewed as a disruptive and difficult-to-avoid feature, especially in near-term quantum technologies. However, noise has often played beneficial roles, from enhancing weak signals in stochastic resonance to protecting the privacy of data in differential privacy. It is then natural to ask: can we harness the power of quantum noise that is beneficial to quantum computing? An important current direction for quantum computing is its application to machine learning, such as classification problems. One outstanding problem in machine learning for classification is its sensitivity to adversarial examples. These are small, undetectable perturbations of the original data that are completely misclassified by otherwise extremely accurate classifiers. They can also be considered as 'worst-case' perturbations by unknown noise sources. We show that by taking advantage of depolarisation noise in quantum circuits for classification, a robustness bound against adversaries can be derived, where the robustness improves with increasing noise. This robustness property is intimately connected with an important security concept called differential privacy, which can be extended to quantum differential privacy. For the protection of quantum data, this is the first quantum protocol that can be used against the most general adversaries. Furthermore, we show how the robustness in the classical case can be sensitive to the details of the classification model, whereas in the quantum case the details of the classification model are absent, thus also providing a potential quantum advantage for classical data that is independent of quantum speedups. This opens the opportunity to explore other ways in which quantum noise can be used in our favour, as well as to identify other ways quantum algorithms can be helpful independently of quantum speedups.
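A rough classical caricature of the mechanism described above: mixing a classifier's output distribution with the uniform distribution (the analogue of depolarising noise) bounds how much any input perturbation can change the output probabilities, which is exactly a differential-privacy-style condition. The epsilon computed below is a generic illustrative bound, not the paper's quantum result.

```python
import numpy as np

def depolarise(probs, lam):
    """Mix a k-class output distribution with the uniform distribution,
    mimicking depolarising noise of strength lam on the measurement statistics."""
    k = len(probs)
    return (1.0 - lam) * np.asarray(probs) + lam / k

def dp_epsilon(k, lam):
    """Crude differential-privacy parameter: after depolarising, every class
    probability lies in [lam/k, 1 - lam + lam/k], so the log-ratio of outputs
    for any two inputs is bounded by this epsilon."""
    return np.log((1.0 - lam + lam / k) / (lam / k))

p_clean = np.array([0.97, 0.02, 0.01])   # confident (and attackable) prediction
p_noisy = depolarise(p_clean, lam=0.3)   # noise shrinks the achievable ratio
print(p_noisy, dp_epsilon(k=3, lam=0.3))
```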
Adversarial examples are perturbed inputs that are designed (from a deep learning network's (DLN) parameter gradients) to mislead the DLN at test time. Intuitively, constraining the dimensionality of inputs or parameters of a network reduces the space in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations with minimal loss in test accuracy. Furthermore, we find that Binary Neural Networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs improves robustness further, even removing the need for adversarial training for certain magnitudes of perturbation. We evaluate the effect of discretization on the MNIST, CIFAR10, CIFAR100 and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of just ~1-2% compared to clean test accuracy.
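The preprocessing step is easy to state: quantise each normalised pixel to 4 levels (2-bit) before feeding it to the network, so that small adversarial perturbations often snap back to the same level. A minimal NumPy sketch of this discretisation with an illustrative perturbation budget follows; the network, training, and datasets are omitted.

```python
import numpy as np

def discretize(x, bits=2):
    """Quantise inputs in [0, 1] to 2**bits evenly spaced levels (4 levels for 2-bit)."""
    levels = 2 ** bits
    return np.round(x * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
image = rng.random((28, 28))                          # stand-in for a normalised image
delta = 0.05 * rng.choice([-1.0, 1.0], image.shape)   # small L-infinity perturbation

# Most perturbed pixels round back to the same 2-bit level, so the downstream
# network sees (nearly) the same discretised input.
same = np.mean(discretize(image + delta) == discretize(image))
print(f"fraction of pixels unchanged after discretisation: {same:.2f}")
```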
In this work we consider the communication of information in the presence of an online adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword x = x_1, ..., x_n symbol by symbol over a communication channel. The adversarial jammer can view the transmitted symbols x_i one at a time and can change up to a p-fraction of them. However, the decisions of the jammer must be made in an online or causal manner. More generally, for a delay parameter 0 < d < 1, we study the scenario in which the jammer's decision on the corruption of x_i must depend solely on x_j for j < i - dn. In this work, we initiate the study of codes for online adversaries and present a tight characterization of the amount of information one can transmit in both the 0-delay and, more generally, the d-delay online setting. We prove tight results for both additive and overwrite jammers when the transmitted symbols are assumed to be over a sufficiently large field F. Finally, we extend our results to a jam-or-listen online model, where the online adversary can either jam a symbol or eavesdrop on it. We again provide a tight characterization of the achievable rate for several variants of this model. The rate regions we prove for each model are information-theoretic in nature and hold for computationally unbounded adversaries. The rate regions are characterized by simple piecewise linear functions of p and d. The codes we construct to attain the optimal rate for each scenario are computationally efficient.
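To make the channel model concrete, the toy simulation below (a sketch of the adversary model only, not the paper's coding scheme) enforces the two stated constraints: the jammer's decision about symbol i may depend only on x_j for j < i - dn, and at most a p-fraction of the n symbols may be changed. The greedy additive strategy and the field size q are illustrative assumptions.

```python
import random

def causal_additive_jam(codeword, p, d, q=257, seed=0):
    """Toy online (causal) additive jammer over the integers mod q.

    The decision about symbol i uses only the jammer's causal view
    codeword[: i - d*n], and at most floor(p*n) symbols are corrupted.
    """
    rng = random.Random(seed)
    n = len(codeword)
    budget = int(p * n)
    received = list(codeword)
    for i in range(n):
        view = codeword[: max(0, i - int(d * n))]   # what a d-delay jammer has seen
        if budget > 0 and view:                      # deliberately simple greedy strategy
            received[i] = (received[i] + rng.randrange(1, q)) % q
            budget -= 1
    return received

codeword = [random.Random(1).randrange(257) for _ in range(100)]
noisy = causal_additive_jam(codeword, p=0.2, d=0.1)
```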
We introduce TensorFlow Quantum (TFQ), an open source library for the rapid prototyping of hybrid quantum-classical models for classical or quantum data. This framework offers high-level abstractions for the design and training of both discriminative and generative quantum models under TensorFlow and supports high-performance quantum circuit simulators. We provide an overview of the software architecture and building blocks through several examples and review the theory of hybrid quantum-classical neural networks. We illustrate TFQ functionalities via several basic applications including supervised learning for quantum classification, quantum control, simulating noisy quantum circuits, and quantum approximate optimization. Moreover, we demonstrate how one can apply TFQ to tackle advanced quantum learning tasks including meta-learning, layerwise learning, Hamiltonian learning, sampling thermal states, variational quantum eigensolvers, classification of quantum phase transitions, generative adversarial networks, and reinforcement learning. We hope this framework provides the necessary tools for the quantum computing and machine learning research communities to explore models of both natural and artificial quantum systems, and ultimately discover new quantum algorithms which could potentially yield a quantum advantage.
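As a flavour of the high-level abstractions described above, the snippet below builds a one-qubit parameterised circuit and wraps it in a Keras model with TFQ's PQC layer, following the pattern in the library's documentation; exact APIs may differ between TensorFlow Quantum and Cirq versions.

```python
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")

# Trainable one-qubit circuit and the observable that is read out.
model_circuit = cirq.Circuit(cirq.rx(theta).on(qubit))
readout = cirq.Z(qubit)

# Hybrid quantum-classical model: input circuits arrive as serialised strings.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, readout),
])

# Evaluate the expectation value of Z on an empty input circuit.
inputs = tfq.convert_to_tensor([cirq.Circuit()])
print(model(inputs))
```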
