
Quantum Learnability is Arbitrarily Distillable

Publication date: 2021
Research field: Physics
Language: English





Quantum learning (in metrology and machine learning) involves estimating unknown parameters from measurements of quantum states. The quantum Fisher information matrix can bound the average amount of information learnt about the unknown parameters per experimental trial. In several scenarios, it is advantageous to concentrate information in as few states as possible. Here, we present two go-go theorems proving that negativity, a narrower nonclassicality concept than noncommutation, enables unbounded and lossless distillation of Fisher information about multiple parameters in quantum learning.
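For reference, the statement that the quantum Fisher information matrix bounds the information gained per trial is usually expressed through the multiparameter quantum Cramér-Rao bound; a standard textbook form, quoted here for orientation rather than taken from this paper, reads

\[
  \mathrm{Cov}\!\left(\hat{\boldsymbol{\theta}}\right) \;\succeq\; \frac{1}{N}\, F_Q(\boldsymbol{\theta})^{-1},
\]

where \(\hat{\boldsymbol{\theta}}\) is any unbiased estimator of the unknown parameters, \(F_Q\) is the quantum Fisher information matrix, \(N\) is the number of experimental trials, and \(\succeq\) denotes a matrix inequality.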




Read also

Given a quantum circuit, a quantum computer can sample the output distribution exponentially faster in the number of bits than classical computers. A similar exponential separation has yet to be established in generative models through quantum sample learning: given samples from an n-qubit computation, can we learn the underlying quantum distribution using models whose number of training parameters scales polynomially in n under a fixed training time? We study four kinds of generative models: the Deep Boltzmann Machine (DBM), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) networks, and the Autoregressive GAN, on learning quantum data sets generated by deep random circuits. We demonstrate the leading performance of the LSTM in learning quantum samples, and thus the autoregressive structure present in the underlying quantum distribution generated by random quantum circuits. Both numerical experiments and a theoretical proof in the case of the DBM show that the number of learning-agent parameters required to achieve a fixed accuracy grows exponentially with n. Finally, we establish a connection between learnability and the complexity of generative models by benchmarking learnability against different sets of samples drawn from probability distributions of varying degrees of complexity in their quantum and classical representations.
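For orientation, the autoregressive structure mentioned above refers to the standard chain-rule factorization of the measured bitstring distribution, which an LSTM models one outcome at a time; this is a generic formulation rather than a result of the paper:

\[
  p(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid x_1, \dots, x_{i-1}\right), \qquad x_i \in \{0,1\},
\]

with one conditional factor per qubit of the n-qubit computation.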
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme, which remains largely unknown due to the non-convex optimization landscape, the measurement error, and the unavoidable gate errors introduced by noisy intermediate-scale quantum (NISQ) machines. Our contributions in this paper are multi-fold. First, we derive the utility bounds of QNN towards empirical risk minimization and show that large gate noise, few quantum measurements, and deep circuit depth lead to poor utility bounds. This result also applies to variational quantum circuits with gradient-based classical optimization and may be of independent interest. Second, we prove that the QNN can be treated as a differentially private (DP) model. Third, we show that if a concept class can be efficiently learned by QNN, then it can also be effectively learned by QNN even with gate noise. This result implies the same learnability of QNN whether it is implemented on noiseless or noisy quantum machines. Finally, we show that the quantum statistical query (QSQ) model can be effectively simulated by a noisy QNN. Since the QSQ model can tackle certain tasks with runtime speedup, our result suggests that the modified QNN implemented on NISQ devices will retain the quantum advantage. Numerical simulations support the theoretical results.
Quantum friction, the electromagnetic fluctuation-induced frictional force decelerating an atom that moves past a macroscopic dielectric body, has so far eluded experimental evidence despite more than three decades of theoretical studies. Inspired by the recent finding that dynamical corrections to such an atom's internal dynamics are enhanced by one order of magnitude for vertical motion -- compared to the paradigmatic setup of parallel motion -- we generalize quantum friction calculations to arbitrary angles between the atom's direction of motion and the surface in front of which it moves. Motivated by the disagreement between quantum friction calculations based on Markovian quantum master equations and those based on time-dependent perturbation theory, we carry out our derivations of the quantum frictional force for arbitrary angles employing both methods and compare them.
Generative modeling with machine learning has provided a new perspective on the data-driven task of reconstructing quantum states from a set of qubit measurements. As increasingly large experimental quantum devices are built in laboratories, the question of how these machine learning techniques scale with the number of qubits is becoming crucial. We empirically study the scaling of restricted Boltzmann machines (RBMs) applied to reconstruct ground-state wavefunctions of the one-dimensional transverse-field Ising model from projective measurement data. We define a learning criterion via a threshold on the relative error in the energy estimator of the machine. With this criterion, we observe that the number of RBM weight parameters required for an accurate representation of the ground state in the worst case - near criticality - scales quadratically with the number of qubits. By pruning small parameters of the trained model, we find that the number of weights can be significantly reduced while still retaining an accurate reconstruction. This provides evidence that over-parametrization of the RBM is required to facilitate the learning process.
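A minimal reading of the learning criterion described above, with the tolerance \(\epsilon\) left as an unspecified assumption (the abstract does not give its value), is

\[
  \frac{\bigl|\langle H \rangle_{\mathrm{RBM}} - E_0\bigr|}{\bigl|E_0\bigr|} \;\leq\; \epsilon,
\]

where \(\langle H \rangle_{\mathrm{RBM}}\) is the energy estimated from the trained RBM and \(E_0\) is the exact ground-state energy of the transverse-field Ising chain.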
Lin Chen, Yi-Xin Chen (2008)
We prove that any bipartite entangled state of rank three is distillable. Hence there is no rank-three bipartite bound entangled state. Using this fact, we present some families of rank-four states that are distillable. We also analyze the relation between low-rank states and the Werner state.
