
Online Learning of Quantum States

Posted by Xinyi Chen
Publication date: 2018
Research language: English





Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_{1}$, then other copies using a measurement $E_{2}$, and so on. At each stage $t$, we generate a current hypothesis $\sigma_{t}$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\operatorname{Tr}(E_{i}\sigma_{t}) - \operatorname{Tr}(E_{i}\rho)|$, the error in our prediction for the next measurement, is at least $\varepsilon$ at most $O\!\left(n/\varepsilon^2\right)$ times. Even in the non-realizable setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most $O\!\left(\sqrt{Tn}\right)$ times on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results (using convex optimization, quantum postselection, and sequential fat-shattering dimension), which have different advantages in terms of parameters and portability.
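For readers who want a concrete picture of the online-learning loop described above, here is a minimal NumPy sketch in the spirit of the convex-optimization (matrix-multiplicative-weights) route: a hypothesis density matrix is maintained and updated by an exponentiated-gradient step after each two-outcome measurement. The L1 prediction loss, the fixed learning rate eta, and the function name mmw_online_learner are illustrative assumptions, not the paper's exact construction.

# Minimal sketch of an online learner for quantum states, in the spirit of the
# matrix-multiplicative-weights approach; loss, learning rate, and measurement
# model are illustrative assumptions, not the authors' exact construction.
import numpy as np
from scipy.linalg import expm

def mmw_online_learner(measurements, outcomes, n_qubits, eta=0.1):
    """Maintain a hypothesis density matrix sigma_t and update it after each
    two-outcome measurement E_t with observed acceptance frequency b_t."""
    d = 2 ** n_qubits
    grad_sum = np.zeros((d, d), dtype=complex)   # running sum of loss (sub)gradients
    sigma = np.eye(d) / d                        # start from the maximally mixed state
    hypotheses = []
    for E, b in zip(measurements, outcomes):
        hypotheses.append(sigma)
        # L1 loss |Tr(E sigma) - b| has (sub)gradient sign(Tr(E sigma) - b) * E
        grad = np.sign(np.trace(E @ sigma).real - b) * E
        grad_sum += grad
        # Exponentiated-gradient step, mapped back to a density matrix by
        # normalising the trace (the matrix exponential is already PSD).
        sigma = expm(-eta * grad_sum)
        sigma = sigma / np.trace(sigma).real
    return hypotheses

In the regret bounds quoted above, the learning rate would be tuned as a function of $n$ and $T$; the fixed eta here is only a placeholder.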




Read also

The exploration of quantum algorithms that possess quantum advantages is a central topic in quantum computation and quantum information processing. One potential candidate in this area is quantum generative adversarial learning (QuGAL), which conceptually has exponential advantages over classical adversarial networks. However, the corresponding learning algorithm remains obscure. In this paper, we propose the first quantum generative adversarial learning algorithm, the quantum multiplicative matrix weight algorithm (QMMW), which enables the efficient processing of fundamental tasks. The computational complexity of QMMW is polynomially proportional to the number of training rounds and logarithmically proportional to the input size. The core concept of the proposed algorithm combines QuGAL with online learning. We exploit the implementation of QuGAL with parameterized quantum circuits, and numerical experiments on the task of entanglement testing for pure states are provided to support our claims.
This is the documentation for generating random samples from the quantum state space in accordance with a specified distribution, associated with this webpage: http://tinyurl.com/QSampling . Ready-made samples (each with at least a million points) from various distributions are available for download, or one can generate one's own samples from a chosen distribution using the provided source codes. The sampling relies on the Hamiltonian Monte Carlo algorithm as described in New J. Phys. 17, 043018 (2015). The random samples are reposited in the hope that they would be useful for a variety of tasks in quantum information and quantum computation. Constructing credible regions for tomographic data, optimizing a function over the quantum state space with a complicated landscape, testing the typicality of entanglement among states from a multipartite quantum system, or computing the average of some quantity of interest over a subset of quantum states are but some exemplary applications among many.
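As a rough illustration of what sampling from the quantum state space means, here is a minimal sketch that draws density matrices from the Hilbert-Schmidt distribution via the standard Ginibre construction. It is deliberately simpler than, and not equivalent to, the Hamiltonian Monte Carlo sampler described above; the function name hilbert_schmidt_sample is our own.

# Minimal sketch: random density matrices from the Hilbert-Schmidt distribution
# via the Ginibre construction.  A standard, simpler alternative to the HMC
# sampler the documentation above actually provides.
import numpy as np

def hilbert_schmidt_sample(dim, n_samples, rng=None):
    """Return an array of shape (n_samples, dim, dim) of random density matrices."""
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty((n_samples, dim, dim), dtype=complex)
    for k in range(n_samples):
        # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
        g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        rho = g @ g.conj().T
        samples[k] = rho / np.trace(rho).real   # normalise to unit trace
    return samples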
We explore whether quantum advantages can be found for the zeroth-order online convex optimization problem, which is also known as bandit convex optimization with multi-point feedback. In this setting, given access to zeroth-order oracles (that is, the loss function is accessed as a black box that returns the function value for any queried input), a player attempts to minimize a sequence of adversarially generated convex loss functions. This procedure can be described as a $T$-round iterative game between the player and the adversary. In this paper, we present quantum algorithms for the problem and show for the first time that potential quantum advantages are possible for problems of online convex optimization. Specifically, our contributions are as follows. (i) When the player is allowed to query zeroth-order oracles $O(1)$ times in each round as feedback, we give a quantum algorithm that achieves $O(\sqrt{T})$ regret without additional dependence on the dimension $n$, which outperforms the known optimal classical algorithm, which only achieves $O(\sqrt{nT})$ regret. Note that the regret of our quantum algorithm matches the lower bound for classical first-order methods. (ii) We show that for strongly convex loss functions, the quantum algorithm can achieve $O(\log T)$ regret with $O(1)$ queries as well, which means that the quantum algorithm can achieve the same regret bound as classical algorithms in the full-information setting.
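For context on the classical baseline this abstract compares against, the following is a minimal sketch of bandit convex optimization with two-point feedback: a gradient is estimated from two function values per round and fed to online gradient descent. The smoothing radius delta, the step size eta, and both function names are illustrative assumptions, not the paper's quantum algorithm.

# Minimal sketch of the classical two-point zeroth-order gradient estimator
# used in bandit convex optimisation with multi-point feedback.
import numpy as np

def two_point_gradient(loss, x, delta, rng):
    """Estimate grad loss(x) from two function values along a random direction."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)
    n = x.size
    return (n / (2 * delta)) * (loss(x + delta * u) - loss(x - delta * u)) * u

def zeroth_order_ogd(losses, x0, eta=0.05, delta=1e-3, seed=0):
    """Online gradient descent driven only by zeroth-order (function value) feedback."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    iterates = []
    for loss in losses:          # one adversarially chosen convex loss per round
        iterates.append(x.copy())
        g = two_point_gradient(loss, x, delta, rng)
        x = x - eta * g          # unconstrained descent step; projection omitted
    return iterates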
We propose a learning method for estimating unknown pure quantum states. The basic idea of our method is to learn a unitary operation $\hat{U}$ that transforms a given unknown state $|\psi_\tau\rangle$ to a known fiducial state $|f\rangle$. Then, after completion of the learning process, we can estimate and reproduce $|\psi_\tau\rangle$ based on the learned $\hat{U}$ and $|f\rangle$. To realize this idea, we cast a random-based learning algorithm, called `single-shot measurement learning', in which the learning rule is based on an intuitive and reasonable criterion: the greater the number of successes (or failures), the smaller (or larger) the changes imposed. Remarkably, the learning proceeds by means of single-shot measurement outcomes. We demonstrate that our method works effectively, i.e., the learning is completed with a \emph{finite} number, say $N$, of unknown-state copies. Most surprisingly, our method allows the maximum statistical accuracy to be achieved for large $N$, namely $\simeq O(N^{-1})$ scaling of the average infidelity. This result is comparable to those yielded by the standard quantum tomographic method in the case where additional information is available. It highlights a non-trivial message: a random-based adaptive strategy can potentially be as accurate as other standard statistical approaches.
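The abstract does not spell out the update rule, so the toy sketch below should be read only as one plausible reading of the stated criterion (shrink the random change after a simulated success, enlarge it after a failure). The function names and the 0.9/1.1 factors are pure assumptions, not the authors' single-shot measurement learning rule.

# Toy sketch: iteratively adjust a unitary U so that it maps an unknown state
# |psi> to a fiducial state |f>, using only simulated single-shot outcomes.
# The specific update rule is a guess at the criterion stated above.
import numpy as np
from scipy.linalg import expm

def random_hermitian(dim, rng):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

def single_shot_learning(psi, f, n_copies, step=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    dim = psi.size
    U = np.eye(dim, dtype=complex)
    for _ in range(n_copies):                       # one fresh copy of |psi> per round
        p_success = abs(f.conj() @ (U @ psi)) ** 2  # Born probability of outcome |f>
        if rng.random() < p_success:                # simulated single-shot outcome
            step *= 0.9                             # success: impose smaller changes
        else:
            step *= 1.1                             # failure: impose larger changes
            U = expm(-1j * step * random_hermitian(dim, rng)) @ U
    return U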
Finding the ground state of a quantum mechanical system can be formulated as an optimal control problem. In this formulation, the drift of the optimally controlled process is chosen to match the distribution of paths in the Feynman--Kac (FK) representation of the solution of the imaginary time Schrödinger equation. This provides a variational principle that can be used for reinforcement learning of a neural representation of the drift. Our approach is a drop-in replacement for path integral Monte Carlo, learning an optimal importance sampler for the FK trajectories. We demonstrate the applicability of our approach to several problems of one-, two-, and many-particle physics.
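For context, a standard form of the Feynman--Kac representation referred to above is the following (written with $\hbar = m = 1$; the paper's conventions may differ):

% Imaginary-time Schroedinger equation and its Feynman--Kac representation
% (standard textbook form; a sketch, not necessarily the paper's normalisation).
\begin{align}
  \partial_\tau \psi(x,\tau) &= \tfrac{1}{2}\,\Delta\psi(x,\tau) - V(x)\,\psi(x,\tau),\\
  \psi(x,\tau) &= \mathbb{E}\!\left[\,\psi(B_\tau,0)\,
      \exp\!\left(-\int_0^\tau V(B_s)\,\mathrm{d}s\right)\,\middle|\,B_0 = x\right].
\end{align}

Here $B_s$ is a standard Brownian motion started at $x$; the optimally controlled drift is chosen so that the controlled paths reproduce this weighted path distribution.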

