
A Survey of Quantum Learning Theory

Posted by Srinivasan Arunachalam
Publication date: 2017
Paper language: English





This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
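For orientation, two standard objects recur in all of the abstracts below; neither definition is specific to this survey. A PAC learner, given i.i.d. examples $(x, c(x))$ with $x \sim D$, must output a hypothesis $h$ satisfying $$\Pr\Big[\Pr_{x\sim D}[h(x)\neq c(x)] \leq \varepsilon\Big] \geq 1-\delta,$$ and a quantum example for a concept $c$ under distribution $D$ is the coherent state $$|\psi_c\rangle = \sum_{x\in\{0,1\}^n} \sqrt{D(x)}\,|x, c(x)\rangle.$$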


Read also

We propose a learning model called the quantum statistical query (QSQ) model, which extends the SQ learning model introduced by Kearns to the quantum setting. Our model can also be seen as a restriction of the quantum PAC learning model: here, the learner does not have direct access to quantum examples, but can only obtain estimates of measurement statistics on them. Theoretically, this model provides a simple yet expressive setting in which to explore the power of quantum examples in machine learning. From a practical perspective, since simpler operations are required, learning algorithms in the QSQ model are more feasible to implement on near-term quantum devices. We prove a number of results about the QSQ learning model. We first show that parity functions, $(\log n)$-juntas and polynomial-sized DNF formulas are efficiently learnable in the QSQ model, in contrast to the classical setting, where these problems are provably hard. This implies that many of the advantages of quantum PAC learning can be realized even in the more restricted quantum SQ learning model. It is well known that the weak statistical query dimension, denoted $\mathrm{WSQDIM}(C)$, characterizes the complexity of learning a concept class $C$ in the classical SQ model. We show that $\log(\mathrm{WSQDIM}(C))$ is a lower bound on the complexity of QSQ learning, and furthermore that it is tight for certain concept classes $C$. Additionally, we show that this quantity provides strong lower bounds for the small-bias quantum communication model under product distributions. Finally, we introduce the notion of private quantum PAC learning, in which a quantum PAC learner is required to be differentially private. We show that learnability in the QSQ model implies learnability in the quantum private PAC model. Additionally, we show that in the private PAC learning setting, the classical and quantum sample complexities are equal, up to constant factors.
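To make the model concrete, here is a minimal classical simulation of a QSQ oracle in the sense described above: the learner supplies an observable $M$ and a tolerance $\tau$, and receives $\mathrm{tr}(M|\psi_c\rangle\langle\psi_c|)$ up to additive error $\tau$, without ever holding the state itself. The function names, the uniform input distribution, and the noise model are illustrative assumptions, not the paper's construction.

    import numpy as np

    # Toy QSQ oracle: the learner asks for tr(M |psi_c><psi_c|) to tolerance tau,
    # where |psi_c> = sum_x 2^{-n/2} |x, c(x)> is a uniform quantum example.
    def quantum_example_state(c, n):
        psi = np.zeros(2 ** (n + 1))
        for x in range(2 ** n):
            psi[(x << 1) | c(x)] = 2 ** (-n / 2)  # index = n input bits, then label bit
        return psi

    def qsq_oracle(psi, M, tau, rng):
        exact = float(np.real(psi @ M @ psi))
        return exact + rng.uniform(-tau, tau)  # any answer within tau is allowed

    n = 3
    c = lambda x: bin(x & 0b101).count("1") % 2  # parity of input bits 0 and 2
    psi = quantum_example_state(c, n)
    rng = np.random.default_rng(0)

    # Query the weight of basis state |x=0, label=0>; exact value is 2^{-n} = 0.125.
    M = np.zeros((2 ** (n + 1), 2 ** (n + 1)))
    M[0, 0] = 1.0
    print(qsq_oracle(psi, M, tau=0.01, rng=rng))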
In learning theory, the VC dimension $d$ of a concept class $C$ is the most common way to measure its richness. In the PAC model, $$\Theta\Big(\frac{d}{\varepsilon} + \frac{\log(1/\delta)}{\varepsilon}\Big)$$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\varepsilon$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c\in C$, we know that $$\Theta\Big(\frac{d}{\varepsilon^2} + \frac{\log(1/\delta)}{\varepsilon^2}\Big)$$ examples are necessary and sufficient to output a hypothesis $h\in C$ whose error is at most $\varepsilon$ worse than that of the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting quantum examples cannot be much more powerful: the required number of quantum examples is $$\Omega\Big(\frac{d^{1-\eta}}{\varepsilon} + d + \frac{\log(1/\delta)}{\varepsilon}\Big)\ \mbox{for all}\ \eta > 0.$$ Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds, and yields the same bounds for quantum sample complexity up to a $\log(d/\varepsilon)$ factor. We then give a second approach that avoids this log-factor loss, based on analyzing the behavior of the Pretty Good Measurement on the quantum state identification problems that correspond to learning. This shows that classical and quantum sample complexity are equal up to constant factors.
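Since the theorem pins these sample complexities down only up to constant factors, a back-of-envelope calculator is the honest way to instantiate them; the constant $C$ below is a placeholder assumption, not a value from the paper.

    from math import ceil, log

    # Evaluate the Theta-bounds above for concrete (d, eps, delta).
    # C is an unspecified constant; the bounds hold only up to such factors.
    def pac_samples(d, eps, delta, C=1.0):
        return ceil(C * (d / eps + log(1 / delta) / eps))

    def agnostic_samples(d, eps, delta, C=1.0):
        return ceil(C * (d / eps ** 2 + log(1 / delta) / eps ** 2))

    print(pac_samples(d=10, eps=0.1, delta=0.05))       # 130
    print(agnostic_samples(d=10, eps=0.1, delta=0.05))  # 1300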
Gus Gutoski, John Watrous (2006)
We study properties of quantum strategies, which are complete specifications of a given party's actions in any multiple-round interaction involving the exchange of quantum information with one or more other parties. In particular, we focus on a representation of quantum strategies that generalizes the Choi-Jamiołkowski representation of quantum operations. This new representation associates with each strategy a positive semidefinite operator acting only on the tensor product of its input and output spaces. Various facts about such representations are established, and two applications are discussed: the first is a new and conceptually simple proof of Kitaev's lower bound for strong coin-flipping, and the second is a proof of the exact characterization QRG = EXP of the class of problems having quantum refereed games.
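The Choi-Jamiołkowski representation mentioned here is easy to compute for an ordinary quantum channel, which is the one-round special case of the strategies framework. A minimal numpy sketch, assuming a unitary qubit channel as the example; the paper's strategy-level representation generalizes this one-round case.

    import numpy as np

    # Choi-Jamiolkowski matrix of a channel Phi:
    #   J(Phi) = sum_{i,j} |i><j| (tensor) Phi(|i><j|).
    def choi_matrix(channel, dim):
        J = np.zeros((dim * dim, dim * dim), dtype=complex)
        for i in range(dim):
            for j in range(dim):
                E = np.zeros((dim, dim), dtype=complex)
                E[i, j] = 1.0
                J += np.kron(E, channel(E))
        return J

    # Example: the unitary channel rho -> H rho H^dagger for the Hadamard H.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    J = choi_matrix(lambda rho: H @ rho @ H.conj().T, dim=2)

    # For any completely positive map the Choi matrix is positive semidefinite,
    # mirroring the positive semidefinite operators associated with strategies.
    print(np.all(np.linalg.eigvalsh(J) >= -1e-12))  # True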
We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random classical examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's lemma for the special case of sparse functions. Second, we show that if a concept class $\mathcal{C}$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\mathcal{C}|\right)$ classical membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a $\log Q$ factor.
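The power of the uniform quantum examples in the first result comes from Fourier sampling: Hadamard-transforming such an example and measuring returns, with probability $1/2$, a set $S$ drawn with probability $\hat f(S)^2$, so all informative draws land on the (sparse) Fourier support. A small classical simulation of that sampling distribution, with an illustrative parity as the sparse function:

    import numpy as np
    from itertools import product

    n = 4
    f = lambda x: x[0] ^ x[2]  # a parity: 1-Fourier-sparse, all weight on S = (1,0,1,0)

    xs = list(product([0, 1], repeat=n))
    Ss = xs
    signs = np.array([(-1) ** f(x) for x in xs], dtype=float)

    # fhat(S) = 2^{-n} sum_x (-1)^{f(x) + S.x}; the squared values sum to 1 (Parseval).
    def fhat(S):
        chi = np.array([(-1) ** sum(s * xi for s, xi in zip(S, x)) for x in xs])
        return float(signs @ chi) / 2 ** n

    probs = np.array([fhat(S) ** 2 for S in Ss])

    # Fourier sampling: each quantum example yields one draw from this distribution.
    rng = np.random.default_rng(0)
    draws = rng.choice(len(Ss), size=3, p=probs)
    print([Ss[i] for i in draws])  # [(1, 0, 1, 0), (1, 0, 1, 0), (1, 0, 1, 0)]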
We generalize the PAC (probably approximately correct) learning model to the quantum world by generalizing the concepts from classical functions to quantum processes, defining the problem of \emph{PAC learning quantum processes}, and studying its sample complexity. In this problem, we want to learn an $\epsilon$-approximation of an unknown quantum process $c^*$ from a known finite concept class $C$ with probability $1-\delta$, using samples $\{(x_1,c^*(x_1)),(x_2,c^*(x_2)),\dots\}$, where $\{x_1,x_2,\dots\}$ are computational basis states sampled from an unknown distribution $D$ and $\{c^*(x_1),c^*(x_2),\dots\}$ are the (possibly mixed) quantum states output by $c^*$. The special case of PAC learning quantum processes under a constant input reduces to a natural problem which we call \emph{approximate state discrimination}: given copies of an unknown quantum state $c^*$ from a known finite set $C$, we want to learn, with probability $1-\delta$, an $\epsilon$-approximation of $c^*$ from as few copies of $c^*$ as possible. We show that the problem of PAC learning quantum processes can be solved with $$O\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right)$$ samples when the outputs are pure states, and with $$O\left(\frac{\log^3|C|\,(\log|C|+\log(1/\delta))}{\epsilon^2}\right)$$ samples if the outputs can be mixed. Among the implications of our results: we can PAC-learn a polynomial-sized quantum circuit from polynomially many samples, and approximate state discrimination can be solved with polynomially many samples even when the concept class size $|C|$ is exponential in the number of qubits, an exponential improvement over full state tomography.
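The approximate state discrimination problem defined here can be illustrated with a naive (not sample-optimal) qubit procedure: estimate Pauli expectations from measurements on the copies, then output the candidate in $C$ whose Bloch vector is nearest. Everything below, the candidate set, copy count, and decision rule, is an illustrative assumption, not the paper's algorithm.

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [X, Y, Z]

    def bloch(rho):
        return np.array([np.real(np.trace(P @ rho)) for P in paulis])

    def estimate_bloch(rho, copies, rng):
        # Measuring Pauli P on one copy gives +-1 with Pr[+1] = (1 + tr(P rho)) / 2.
        est = []
        for P in paulis:
            p_plus = (1 + np.real(np.trace(P @ rho))) / 2
            est.append(rng.choice([1.0, -1.0], size=copies // 3,
                                  p=[p_plus, 1 - p_plus]).mean())
        return np.array(est)

    ket0 = np.array([1, 0], dtype=complex)
    ketp = np.array([1, 1], dtype=complex) / np.sqrt(2)
    C = [np.outer(k, k.conj()) for k in (ket0, ketp)]  # known candidate set

    rng = np.random.default_rng(1)
    est = estimate_bloch(C[1], copies=300, rng=rng)  # unknown state is |+><+|
    print(min(range(len(C)), key=lambda i: np.linalg.norm(est - bloch(C[i]))))  # 1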
