
Efficient State Read-out for Quantum Machine Learning Algorithms

Added by Min-Hsiu Hsieh
Publication date: 2020
Fields: Physics
Language: English





Many quantum machine learning (QML) algorithms that claim speed-ups over their classical counterparts only generate quantum states as solutions rather than their final classical description. The additional step of decoding the quantum state into a classical vector will normally destroy the quantum advantage, because all existing tomographic methods require runtime that is polynomial in the state dimension. In this Letter, we present an efficient readout protocol that yields the classical vector form of the generated state, thereby achieving an end-to-end advantage for those quantum algorithms. Our protocol suits the case in which the output state lies in the row space of the input matrix, of rank $r$, that is stored in quantum random access memory. The quantum resources for decoding the state to $\epsilon$ error in $\ell_2$-norm are $\text{poly}(r,1/\epsilon)$ copies of the output state and $\text{poly}(r,\kappa^r,1/\epsilon)$ queries to the input oracles, where $\kappa$ is the condition number of the input matrix. With our readout protocol, we completely characterise the end-to-end resources for quantum linear equation solvers and quantum singular value decomposition. One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormalization procedure, which, we believe, will be of independent interest.
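As a point of reference for the Gram-Schmidt subroutine mentioned above, the classical version of the orthonormalization is sketched below in NumPy. This only illustrates the underlying linear-algebraic step; the Letter's contribution is a quantum algorithm that performs it on amplitude-encoded states, and the function name and example matrix here are ours.

```python
import numpy as np

def gram_schmidt(rows, tol=1e-12):
    """Classical Gram-Schmidt orthonormalization of a set of row vectors."""
    basis = []
    for v in rows:
        w = v.astype(float).copy()
        for q in basis:
            w -= np.dot(q, w) * q          # remove the component along q
        norm = np.linalg.norm(w)
        if norm > tol:                      # keep only linearly independent directions
            basis.append(w / norm)
    return np.array(basis)

# Example: a rank-2 input matrix; the basis spans its row space.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])   # third row is the sum of the first two
Q = gram_schmidt(A)
print(Q.shape)  # (2, 3): rank r = 2
```

The returned basis has one row per independent direction, matching the rank $r$ that governs the resource counts quoted in the abstract.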



Related research

Quantum simulators make it possible to explore static and dynamical properties of otherwise intractable quantum many-body systems. In many instances, however, it is the read-out that limits such quantum simulations. In this work, we introduce a new paradigm of experimental read-out that exploits coherent non-interacting dynamics to extract otherwise inaccessible observables. Specifically, we present a novel tomographic recovery method that allows indirect measurement of second moments of relative density fluctuations in one-dimensional superfluids, which have until now eluded direct measurement. We achieve this by relating second moments of relative phase fluctuations measured at different evolution times through the known dynamical equations of unitary, non-interacting multi-mode dynamics. Applying methods from signal processing, we reconstruct the full matrix of second moments, including the relative density fluctuations. We employ the method to investigate equilibrium states, to study the dynamics of phonon occupation numbers, and even to predict recurrences. The method opens a new window for quantum simulations with one-dimensional superfluids, enabling a deeper analysis of their equilibration and thermalization dynamics.
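To make the recovery idea concrete, here is a deliberately simplified single-mode toy in NumPy (not the paper's multi-mode reconstruction): under free evolution the phase quadrature rotates into the density quadrature, so phase variances measured at several times depend linearly on the three unknown equal-time second moments, which a least-squares fit then recovers. The frequency, times, noise level, and variable names are hypothetical.

```python
import numpy as np

# Toy single-mode model: phi(t) = phi0*cos(w t) + drho0*sin(w t) in dimensionless
# units, so the measurable phase variance at time t is a linear combination of
# the three unknown equal-time second moments.
rng = np.random.default_rng(0)
omega = 2 * np.pi * 1.0                      # hypothetical mode frequency
true_moments = np.array([0.8, 0.3, 0.1])     # <phi0^2>, <drho0^2>, symmetrized cross moment

times = np.linspace(0.0, 0.4, 8)             # evolution times at which <phi(t)^2> is "measured"
c, s = np.cos(omega * times), np.sin(omega * times)
design = np.column_stack([c**2, s**2, 2 * c * s])

measured = design @ true_moments + 0.01 * rng.standard_normal(len(times))  # noisy data

# Least-squares recovery of all three second moments, including the density
# fluctuations that are never measured directly.
recovered, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(recovered)   # close to true_moments
```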
Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground state properties of gapped Hamiltonians in finite spatial dimensions, after learning from data obtained by measuring other Hamiltonians in the same quantum phase of matter. In contrast, under widely accepted complexity theory assumptions, classical algorithms that do not learn from data cannot achieve the same guarantee. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a many-body quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state. Extensive numerical experiments corroborate our theoretical results in a variety of scenarios, including Rydberg atom systems, 2D random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
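For readers unfamiliar with the classical-shadow construction this argument relies on, a minimal single-qubit random-Pauli version is sketched below. It is only an illustration of the idea, not the protocol behind the paper's many-body numerics; the shot count and example state are arbitrary.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

# Basis-change unitaries for measuring in the X, Y, and Z bases.
UNITARIES = [H, H @ S.conj().T, I]

def shadow_estimate(rho, observable, n_shots=20000, rng=np.random.default_rng(1)):
    """Single-qubit random-Pauli classical-shadow estimate of tr(O rho)."""
    est = 0.0
    for _ in range(n_shots):
        U = UNITARIES[rng.integers(3)]
        probs = np.real(np.diag(U @ rho @ U.conj().T))   # Born-rule outcome probabilities
        b = rng.choice(2, p=probs / probs.sum())
        ket = U.conj().T[:, b].reshape(2, 1)             # U^dagger |b>
        snapshot = 3 * (ket @ ket.conj().T) - I          # inverse of the depolarizing channel
        est += np.real(np.trace(observable @ snapshot))
    return est / n_shots

rho = 0.5 * (I + 0.6 * X + 0.3 * Z)                      # example mixed state
print(shadow_estimate(rho, X))                           # approx 0.6
```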
We generalize the PAC (probably approximately correct) learning model to the quantum world by extending the concepts from classical functions to quantum processes, defining the problem of \emph{PAC learning quantum processes}, and studying its sample complexity. In this problem, we want to learn an $\epsilon$-approximation of an unknown quantum process $c^*$ from a known finite concept class $C$ with probability $1-\delta$, using samples $\{(x_1,c^*(x_1)),(x_2,c^*(x_2)),\dots\}$, where $\{x_1,x_2,\dots\}$ are computational basis states sampled from an unknown distribution $D$ and $\{c^*(x_1),c^*(x_2),\dots\}$ are the (possibly mixed) quantum states output by $c^*$. The special case of PAC learning a quantum process under constant input reduces to a natural problem we name approximate state discrimination: given copies of an unknown quantum state $c^*$ from a known finite set $C$, learn with probability $1-\delta$ an $\epsilon$-approximation of $c^*$ using as few copies of $c^*$ as possible. We show that PAC learning a quantum process can be solved with $O\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right)$ samples when the outputs are pure states and $O\left(\frac{\log^3|C|\,(\log|C|+\log(1/\delta))}{\epsilon^2}\right)$ samples if the outputs can be mixed. One implication of our results is that we can PAC-learn a polynomial-sized quantum circuit with polynomially many samples; another is that approximate state discrimination can be solved with polynomially many samples even when the concept class size $|C|$ is exponential in the number of qubits, an exponential improvement over full state tomography.
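As a quick numerical reading of the pure-state bound, the helper below evaluates the stated scaling $(\log|C| + \log(1/\delta))/\epsilon^2$ with constant factors suppressed; the example parameters are ours and merely illustrate that even an exponentially large concept class needs only polynomially many samples.

```python
import numpy as np

def pac_pure_state_samples(concept_class_size, epsilon, delta):
    """Scaling of the pure-state bound O((log|C| + log(1/delta)) / eps^2).

    Constant factors are suppressed; this reproduces only the asymptotic
    scaling stated in the abstract.
    """
    return (np.log(concept_class_size) + np.log(1.0 / delta)) / epsilon**2

# Even for |C| exponential in the number of qubits n, the bound grows only
# linearly in n, e.g. |C| = 2**50 candidate processes on 50 qubits:
print(pac_pure_state_samples(2**50, epsilon=0.1, delta=0.01))
```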
Kernel methods are powerful for machine learning, as they can represent data in feature spaces in which similarities between samples are faithfully captured. Recently, it has been realized that machine learning enhanced by quantum computing is closely related to kernel methods, where the exponentially large Hilbert space becomes a feature space more expressive than classical ones. In this paper, we generalize quantum kernel methods by encoding data into continuous-variable quantum states, which can benefit from the infinite-dimensional Hilbert space of continuous variables. Specifically, we propose squeezed-state encoding, in which data is encoded in either the amplitude or the phase. The kernels can be calculated on a quantum computer and then combined with classical machine learning, e.g. a support vector machine, for training and prediction tasks. Comparisons with classical kernels are also addressed. Lastly, we discuss physical implementations of squeezed-state encoding for machine learning on quantum platforms such as trapped ions.
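The classical half of such a pipeline, a support vector machine consuming a kernel matrix computed elsewhere (e.g. on a quantum device), can be sketched with scikit-learn's precomputed-kernel interface. The Gaussian kernel below is only a stand-in for a quantum-evaluated squeezed-state kernel, and the toy dataset is ours.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs in 2D.
X_train = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.vstack([rng.normal(-1, 0.5, (5, 2)), rng.normal(1, 0.5, (5, 2))])
y_test = np.array([0] * 5 + [1] * 5)

def stand_in_kernel(A, B, gamma=1.0):
    """Placeholder kernel; in the quantum-kernel setting these entries would be
    overlaps of squeezed-state encodings estimated on a quantum device."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

K_train = stand_in_kernel(X_train, X_train)
K_test = stand_in_kernel(X_test, X_train)    # rows: test points, columns: training points

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.score(K_test, y_test))
```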
Hybrid quantum-classical algorithms are a promising candidate for developing practical uses of NISQ devices. In particular, Parametrised Quantum Circuits (PQCs) paired with classical optimizers have been used as a basis for quantum chemistry and quantum optimization problems. Training PQCs relies on methods to overcome the fact that their gradients can vanish exponentially in the size of the circuits used. Tensor network methods are increasingly used both as a classical machine learning tool and as a tool for studying quantum systems. We introduce a circuit pre-training method based on matrix product state machine learning methods, and demonstrate that it accelerates training of PQCs for supervised learning, energy minimization, and combinatorial optimization.
