Hybrid quantum-classical algorithms are a promising candidate for developing applications for NISQ devices. In particular, Parametrised Quantum Circuits (PQCs) paired with classical optimizers have been used as a basis for quantum chemistry and quantum optimization problems. Training PQCs, however, relies on methods to overcome the fact that their gradients vanish exponentially in the size of the circuits used. Tensor network methods are increasingly being used as a classical machine learning tool, as well as a tool for studying quantum systems. We introduce a circuit pre-training method based on matrix product state machine learning methods, and demonstrate that it accelerates the training of PQCs for supervised learning, energy minimization, and combinatorial optimization.
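To make the pre-training idea concrete, the sketch below embeds a classically trained, left-canonical MPS tensor of bond dimension 2 into a two-qubit unitary that could serve as the starting point for a PQC. This is a minimal sketch under stated assumptions: the tensor shapes, function names, and the random example tensor are illustrative, not the authors' implementation.

```python
# Minimal sketch: embed a left-canonical MPS tensor into a two-qubit unitary
# that can initialise a parametrised quantum circuit. Illustrative only;
# real-valued tensors are used for simplicity.
import numpy as np

def isometry_to_unitary(A):
    """Embed an MPS tensor A[s, l, r] (physical, left bond, right bond),
    assumed left-canonical, into a unitary on d*D_l dimensions by appending
    an orthonormal complement of its column space."""
    d, Dl, Dr = A.shape
    V = A.reshape(d * Dl, Dr)                  # isometry: V^T V = identity
    # Orthonormal complement obtained from a QR decomposition of [V | random].
    full, _ = np.linalg.qr(np.hstack([V, np.random.randn(d * Dl, d * Dl - Dr)]))
    U = np.hstack([V, full[:, Dr:]])           # keep V's columns, add complement
    assert np.allclose(U.T @ U, np.eye(d * Dl))
    return U                                   # 4x4 unitary -> initial two-qubit gate

# Example: a random left-canonical tensor with d = 2 and bond dimension 2.
Q, _ = np.linalg.qr(np.random.randn(4, 2))
A = Q.reshape(2, 2, 2)
U_init = isometry_to_unitary(A)
print(U_init.shape)                            # (4, 4): pre-trained circuit block
```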
In recent years, a close connection has been established between the description of open quantum systems, the input-output formalism of quantum optics, and continuous matrix product states in quantum field theory. So far, however, this connection has not been extended to the condensed-matter context. In this work, we substantially develop and apply the machinery of continuous matrix product states (cMPS) to perform tomography of transport experiments. We first extend the tomographic possibilities of cMPS by showing that reconstruction schemes need not be based on low-order correlation functions alone, but can also make use of low-order counting probabilities. We then show that fermionic quantum transport settings can be formulated within the cMPS framework. This allows us to present a reconstruction scheme, based on the measurement of low-order correlation functions, that provides access to quantities that are not directly measurable with present technology. Emblematic examples are high-order correlation functions and waiting time distributions (WTDs). The latter are of particular interest since they offer insights into short-time-scale physics. We demonstrate the functioning of the method with actual data, opening the way to accessing WTDs within the quantum regime.
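For reference, a commonly used form of the waiting time distribution in the quantum-jump (counting-statistics) picture is given below, written with the Liouvillian $\mathcal{L}$, the jump superoperator $\mathcal{J}\rho = L\rho L^{\dagger}$, and the stationary state $\rho_{\mathrm{ss}}$. This is standard notation for the general framework, not necessarily the exact expression used in the work above:

$$ w(\tau) \;=\; \frac{\mathrm{Tr}\!\left[\mathcal{J}\, e^{(\mathcal{L}-\mathcal{J})\tau}\, \mathcal{J}\,\rho_{\mathrm{ss}}\right]}{\mathrm{Tr}\!\left[\mathcal{J}\,\rho_{\mathrm{ss}}\right]} . $$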
We demonstrate that the optimal states in lossy quantum interferometry may be efficiently simulated using low-rank matrix product states. We argue that this should be expected in all realistic quantum metrological protocols with uncorrelated noise and is related to the elusive nature of Heisenberg precision scaling in the presence of decoherence.
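For concreteness, a matrix product state of bond dimension $D$ over $N$ sites (or modes) has the standard form

$$ |\psi\rangle \;=\; \sum_{s_1,\ldots,s_N} \mathrm{Tr}\!\left[A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[N]}_{s_N}\right] |s_1 s_2 \ldots s_N\rangle , $$

with each $A^{[k]}_{s_k}$ a $D\times D$ matrix; "low rank" above refers to a small bond dimension $D$ sufficing to represent the optimal probe states.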
In stochastic modeling, there has been a significant effort towards finding models that predict a stochastic process's future using minimal information from its past. Meanwhile, in condensed matter physics, matrix product states (MPS) are known as a particularly efficient representation of 1D spin chains. In this Letter, we associate each stochastic process with a suitable quantum state of a spin chain. We then show that the optimal predictive model for the process leads directly to an MPS representation of the associated quantum state. Conversely, MPS methods offer a systematic construction of the best known quantum predictive models. This connection yields an improved method for computing the quantum memory needed to generate optimal predictions. We prove that this memory coincides with the entanglement of the associated spin chain across the past-future bipartition.
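A minimal NumPy sketch of the last statement: contract a small, translation-invariant MPS into the associated spin-chain state, Schmidt-decompose it across the past-future cut, and read the quantum memory off as the entanglement entropy. The tensor values, boundary vectors, and chain length are purely illustrative assumptions.

```python
import numpy as np

def mps_to_state(A, n):
    """Contract n copies of the MPS tensor A[s, l, r] with simple boundary
    vectors into a full state vector; only feasible for small n."""
    d, D, _ = A.shape
    vl = np.ones(D) / np.sqrt(D)
    vr = np.ones(D) / np.sqrt(D)
    psi = vl.reshape(1, D)                         # (physical block, right bond)
    for _ in range(n):
        psi = np.einsum('pl,slr->psr', psi, A).reshape(-1, D)
    psi = psi @ vr                                 # close the right boundary
    return psi / np.linalg.norm(psi)

def memory_entropy(psi, n_past, d=2):
    """Entanglement entropy (in bits) across the past/future bipartition."""
    M = psi.reshape(d ** n_past, -1)
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Illustrative bond-dimension-2 tensor (a noisy-memory-like process; values made up).
A = np.zeros((2, 2, 2))
A[0, 0, 0], A[1, 0, 1] = np.sqrt(0.9), np.sqrt(0.1)
A[1, 1, 1], A[0, 1, 0] = np.sqrt(0.9), np.sqrt(0.1)

psi = mps_to_state(A, n=10)
print(memory_entropy(psi, n_past=5))               # quantum memory, in bits
```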
Many quantum machine learning (QML) algorithms that claim a speed-up over their classical counterparts only generate quantum states as solutions instead of their final classical description. The additional step of decoding quantum states into classical vectors will normally destroy the quantum advantage, because all existing tomographic methods require runtime that is polynomial in the state dimension. In this Letter, we present an efficient readout protocol that yields the classical vector form of the generated state, thereby achieving an end-to-end advantage for those quantum algorithms. Our protocol applies when the output state lies in the row space of the input matrix, of rank $r$, that is stored in the quantum random access memory. The quantum resources for decoding the state in $\ell_2$-norm with $\epsilon$ error require $\text{poly}(r,1/\epsilon)$ copies of the output state and $\text{poly}(r, \kappa^r, 1/\epsilon)$ queries to the input oracles, where $\kappa$ is the condition number of the input matrix. With our readout protocol, we completely characterise the end-to-end resources for quantum linear equation solvers and quantum singular value decomposition. One of our technical tools is an efficient quantum algorithm for performing the Gram-Schmidt orthonormalization procedure, which we believe will be of independent interest.
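For orientation, the sketch below shows the classical Gram-Schmidt orthonormalization that the quantum subroutine mentioned above mirrors; it is only the classical counterpart, shown for reference, and the function name and tolerance are illustrative choices.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt orthonormalization of the rows of `vectors`;
    returns an orthonormal basis of their span (rank-revealing). The quantum
    routine in the abstract acts on states stored in QRAM; this is not it."""
    basis = []
    for v in np.asarray(vectors, dtype=float):
        w = v.copy()
        for b in basis:
            w -= np.dot(b, w) * b              # remove component along b
        norm = np.linalg.norm(w)
        if norm > tol:                          # skip (near-)dependent vectors
            basis.append(w / norm)
    return np.array(basis)

# Example: three vectors spanning a rank-2 subspace.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(Q.shape)                                  # (2, 3): basis of the row space
print(np.allclose(Q @ Q.T, np.eye(len(Q))))     # True: orthonormal rows
```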
Kernel methods are powerful for machine learning, as they can represent data in feature spaces in which similarities between samples are faithfully captured. Recently, it has been realized that machine learning enhanced by quantum computing is closely related to kernel methods, where the exponentially large Hilbert space becomes a feature space more expressive than classical ones. In this paper, we generalize quantum kernel methods by encoding data into continuous-variable quantum states, which can benefit from the infinite-dimensional Hilbert space of continuous variables. Specifically, we propose squeezed-state encoding, in which data is encoded in either the amplitude or the phase. The kernels can be calculated on a quantum computer and then combined with classical machine learning, e.g. a support vector machine, for training and prediction tasks. Comparisons with other classical kernels are also addressed. Lastly, we discuss physical implementations of squeezed-state encoding for machine learning in quantum platforms such as trapped ions.
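A minimal sketch of the phase variant of squeezed-state encoding: a data point is encoded in the phase of a single-mode squeezed vacuum with fixed squeezing strength $r$, the kernel is the squared overlap of two such states (using the standard squeezed-vacuum overlap formula, which is an assumption about the encoding rather than the paper's exact choice), and the resulting Gram matrix is handed to a classical support vector machine. Function names and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def squeezed_kernel(x1, x2, r=1.0):
    """k(x1, x2) = |<zeta(x1)|zeta(x2)>|^2 for single-mode squeezed vacua with
    equal squeezing strength r and the data encoded in the phase."""
    return 1.0 / np.abs(np.cosh(r) ** 2
                        - np.exp(1j * (x2 - x1)) * np.sinh(r) ** 2)

def gram_matrix(X1, X2, r=1.0):
    """Kernel (Gram) matrix between two sets of 1D data points."""
    return np.array([[squeezed_kernel(a, b, r) for b in X2] for a in X1])

# Toy 1D classification problem: two well-separated phase clusters.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.5, 0.2, 50), rng.normal(2.5, 0.2, 50)])
y = np.array([0] * 50 + [1] * 50)

K = gram_matrix(X, X)
clf = SVC(kernel='precomputed').fit(K, y)       # classical SVM on the quantum kernel
X_test = np.array([0.4, 2.6])
print(clf.predict(gram_matrix(X_test, X)))      # should recover the two clusters
```

In an actual experiment the entries of the Gram matrix would be estimated on the quantum device (e.g. via overlap measurements) rather than computed from the closed-form expression used here.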