
On the experimental feasibility of quantum state reconstruction via machine learning

 Added by Sanjaya Lohani
 Publication date 2020
Research language: English





We determine the resource scaling of machine learning-based quantum state reconstruction methods, in terms of inference and training, for systems of up to four qubits when constrained to pure states. Further, we examine system performance in the low-count regime, likely to be encountered in the tomography of high-dimensional systems. Finally, we implement our quantum state reconstruction method on an IBM Q quantum computer, and compare against both unconstrained and constrained MLE state reconstruction.
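
As a rough illustration of the kind of training data such a reconstruction model consumes, the sketch below simulates finite-shot Pauli measurement frequencies for random pure states, contrasting the low-count regime mentioned above with a high-count one. It is illustrative only; the function names and the single-qubit restriction are assumptions, not taken from the paper.

# Illustrative only: simulate the finite-shot Pauli frequencies an ML
# reconstruction model could be trained on; names and the single-qubit
# restriction are assumptions, not taken from the paper.
import numpy as np

PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def random_pure_state(rng):
    # Haar-random single-qubit pure state.
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def simulate_frequencies(psi, shots, rng):
    # Finite-shot estimates of <X>, <Y>, <Z> for the state psi.
    estimates = []
    for P in PAULIS.values():
        evals, evecs = np.linalg.eigh(P)                 # eigenvalues ascend: -1, +1
        p_plus = float(np.clip(np.abs(np.vdot(evecs[:, 1], psi)) ** 2, 0.0, 1.0))
        n_plus = rng.binomial(shots, p_plus)
        estimates.append(2 * n_plus / shots - 1)
    return np.array(estimates)

rng = np.random.default_rng(0)
psi = random_pure_state(rng)
print("low-count estimate (10 shots)   :", simulate_frequencies(psi, 10, rng))
print("high-count estimate (10k shots) :", simulate_frequencies(psi, 10_000, rng))
print("exact expectation values        :",
      np.array([np.real(np.vdot(psi, P @ psi)) for P in PAULIS.values()]))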




Read More

Complete characterization of states and processes that occur within quantum devices is crucial for understanding and testing their potential to outperform classical technologies for communications and computing. However, solving this task with current state-of-the-art techniques becomes unwieldy for large and complex quantum systems. Here we realize and experimentally demonstrate a method for complete characterization of a quantum harmonic oscillator based on an artificial neural network known as the restricted Boltzmann machine. We apply the method to optical homodyne tomography and show it to allow full estimation of quantum states based on a smaller amount of experimental data compared to state-of-the-art methods. We link this advantage to reduced overfitting. Although our experiment is in the optical domain, our method provides a way of exploring quantum resources in a broad class of large-scale physical systems, such as superconducting circuits, atomic and molecular ensembles, and optomechanical systems.
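
For readers unfamiliar with the ansatz, the sketch below shows how a restricted Boltzmann machine can parameterize a wavefunction over computational basis states. It is not the authors' code: it uses a real-parameter simplification that ignores phases, two visible spins, and random placeholder parameters instead of values fitted to homodyne data.

# Not the authors' code: a real-parameter RBM ansatz (phases ignored) over
# two visible spins, with random placeholder parameters instead of fitted ones.
import itertools
import numpy as np

def rbm_amplitude(v, a, b, W):
    # Unnormalized amplitude psi(v) = exp(a.v) * prod_j 2*cosh(b_j + (W^T v)_j).
    return np.exp(a @ v) * np.prod(2 * np.cosh(b + W.T @ v))

rng = np.random.default_rng(1)
n_visible, n_hidden = 2, 3
a = rng.normal(scale=0.1, size=n_visible)                # visible biases
b = rng.normal(scale=0.1, size=n_hidden)                 # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))    # couplings

# Enumerate the 2^n basis configurations and normalize into a state vector.
basis = list(itertools.product([0, 1], repeat=n_visible))
amplitudes = np.array([rbm_amplitude(np.array(v), a, b, W) for v in basis])
psi = amplitudes / np.linalg.norm(amplitudes)
print({"".join(map(str, v)): round(float(p), 4) for v, p in zip(basis, psi)})
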
We propose a new quantum state reconstruction method that combines ideas from compressed sensing, non-convex optimization, and acceleration methods. The algorithm, called Momentum-Inspired Factored Gradient Descent (MiFGD), extends the applicability of quantum tomography to larger systems. Despite being a non-convex method, MiFGD converges provably to the true density matrix at a linear rate, in the absence of experimental and statistical noise and under common assumptions. With this manuscript, we present the method, prove its convergence property, and provide Frobenius-norm bound guarantees with respect to the true density matrix. From a practical point of view, we benchmark the algorithm's performance against other existing methods, in both synthetic and real experiments performed on an IBM quantum processing unit. We find that the proposed algorithm performs orders of magnitude faster than state-of-the-art approaches, with the same or better accuracy. In both synthetic and real experiments, we observed accurate and robust reconstruction despite experimental and statistical noise in the tomographic data. Finally, we provide ready-to-use code for state tomography of multi-qubit systems.
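
The core update is easy to state: parameterize the density matrix as rho = U U^dagger, take a gradient step on the measurement residuals, and add a momentum term. The following is an illustrative NumPy re-implementation of that idea for a two-qubit pure target with noiseless Pauli expectations; the step size, momentum value, and variable names are assumptions, and the authors' released code should be preferred for real use.

# Hedged sketch of momentum-accelerated factored gradient descent for low-rank
# state reconstruction; an illustrative re-implementation, not the released code.
import numpy as np

def pauli_basis(n):
    # All n-qubit Pauli strings as matrices (exponential in n; small n only).
    single = [np.eye(2), np.array([[0, 1], [1, 0]]),
              np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])]
    ops = [np.array([[1.0 + 0j]])]
    for _ in range(n):
        ops = [np.kron(o, s) for o in ops for s in single]
    return ops

n_qubits, rank, eta, beta = 2, 1, 0.05, 0.7      # qubits, rank, step size, momentum
dim = 2 ** n_qubits
rng = np.random.default_rng(2)

# Target: a random pure two-qubit state and its exact Pauli expectation values.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())
paulis = pauli_basis(n_qubits)
y = np.array([np.real(np.trace(P @ rho_true)) for P in paulis])

# Factored iterate rho = U U^dagger, updated with a momentum (lookahead) term.
U = rng.normal(size=(dim, rank)) + 1j * rng.normal(size=(dim, rank))
U /= np.linalg.norm(U)
Z = U.copy()
for _ in range(500):
    residuals = np.array([np.real(np.trace(P @ (Z @ Z.conj().T))) for P in paulis]) - y
    grad = sum(r * (P @ Z) for r, P in zip(residuals, paulis))
    U_next = Z - eta * grad                      # gradient step on the factor
    Z = U_next + beta * (U_next - U)             # momentum step
    U = U_next

rho_hat = U @ U.conj().T
rho_hat /= np.real(np.trace(rho_hat))            # enforce unit trace
print("Frobenius error of the reconstruction:", np.linalg.norm(rho_hat - rho_true))
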
The classification of big data usually requires a mapping onto new data clusters which can then be processed by machine learning algorithms by means of more efficient and feasible linear separators. Recently, Lloyd et al. have advanced the proposal to embed classical data into quantum states: these live in the more complex Hilbert space, where they can be split into linearly separable clusters. Here, we implement these ideas by engineering two different experimental platforms, based on quantum optics and ultra-cold atoms respectively, where we adapt and numerically optimize the quantum embedding protocol by deep learning methods and test it on some trial classical data. We also perform a similar analysis on the Rigetti superconducting quantum computer. We find that the quantum embedding approach also works at the experimental level and, in particular, we show how different platforms could work in a complementary fashion to achieve this task. These studies might pave the way for future investigations of quantum machine learning techniques, especially those based on hybrid quantum technologies.
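
As a toy picture of what embedding classical data into Hilbert space means, the sketch below maps real-valued features to single-qubit states with a rotation feature map and evaluates the induced overlap kernel. The specific feature map and the toy data are assumptions, not the protocol optimized in the paper.

# Assumed, simplified embedding: a single-qubit rotation feature map and the
# overlap kernel it induces; not the optimized protocol from the paper.
import numpy as np

def embed(x):
    # Map a real feature x to the single-qubit state RY(x)|0>.
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    # Overlap |<psi(x1)|psi(x2)>|^2 between two embedded data points.
    return np.abs(embed(x1) @ embed(x2)) ** 2

class_a = [0.1, 0.2, 0.3]                        # toy data from two classes
class_b = [2.9, 3.0, 3.1]
print("overlap within class A :", quantum_kernel(class_a[0], class_a[1]))
print("overlap across classes :", quantum_kernel(class_a[0], class_b[0]))
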
Recent advances in quantum computing have drawn considerable attention to building realistic applications for, and with, quantum computers. However, designing a suitable quantum circuit architecture requires expert knowledge. For example, it is non-trivial to design a quantum gate sequence that generates a particular quantum state with as few gates as possible. We propose a quantum architecture search framework powered by deep reinforcement learning (DRL) to address this challenge. In the proposed framework, the DRL agent can access only the Pauli-$X$, $Y$, $Z$ expectation values and a predefined set of quantum operations for learning the target quantum state, and is optimized by the advantage actor-critic (A2C) and proximal policy optimization (PPO) algorithms. We demonstrate the successful generation of quantum gate sequences for multi-qubit GHZ states without encoding any knowledge of quantum physics in the agent. The design of our framework is rather general and can be employed with other DRL architectures or optimization methods to study gate synthesis and compilation for many quantum states.
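
A minimal version of such a search environment might look like the sketch below: observations are single-qubit Pauli expectation values, actions are gates drawn from a predefined set, and the reward is fidelity with the 3-qubit GHZ target. The random policy used here is only a baseline stand-in for the A2C/PPO agents described in the abstract; the gate set and episode length are assumptions.

# A stand-in search environment of the kind a DRL agent could be trained on;
# the gate set, episode length, and the random baseline policy are assumptions.
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
PAULI = {"X": np.array([[0, 1], [1, 0]]),
         "Y": np.array([[0, -1j], [1j, 0]]),
         "Z": np.diag([1.0, -1.0])}
CX = np.eye(4)[[0, 1, 3, 2]]                     # CNOT as a permutation matrix

def place_gate(gate, first_wire, n=3):
    # Embed a 1- or 2-qubit gate on consecutive wires starting at first_wire.
    mats, q = [], 0
    while q < n:
        if q == first_wire:
            mats.append(gate)
            q += int(np.log2(gate.shape[0]))
        else:
            mats.append(I2)
            q += 1
    full = mats[0]
    for m in mats[1:]:
        full = np.kron(full, m)
    return full

ACTIONS = [place_gate(H, 0), place_gate(H, 1), place_gate(H, 2),
           place_gate(CX, 0), place_gate(CX, 1)]        # predefined operation set
GHZ = np.zeros(8); GHZ[0] = GHZ[7] = 1 / np.sqrt(2)     # 3-qubit target state

def observation(psi):
    # Per-wire <X>, <Y>, <Z> values: what the agent would condition on.
    return np.array([np.real(np.vdot(psi, place_gate(P, q) @ psi))
                     for q in range(3) for P in PAULI.values()])

rng = np.random.default_rng(3)
best = 0.0
for episode in range(500):                       # random-policy baseline rollouts
    psi = np.zeros(8, dtype=complex); psi[0] = 1.0
    for step in range(4):                        # fixed episode length of 4 gates
        obs = observation(psi)                   # unused here; a policy would act on it
        psi = ACTIONS[rng.integers(len(ACTIONS))] @ psi
    best = max(best, np.abs(np.vdot(GHZ, psi)) ** 2)    # reward: GHZ fidelity
print("best GHZ fidelity found by random search:", round(float(best), 3))
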
Distributed training across several quantum computers could significantly reduce training time, and if we could share the learned model rather than the data, it could also improve data privacy, since training would happen where the data is located. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to purely quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a pre-trained classical convolutional model. Our distributed federated learning scheme achieves nearly the same trained-model accuracy while making distributed training significantly faster. This demonstrates a promising research direction for both scaling and data privacy.
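
The communication pattern being described is federated averaging: each client trains locally and only model weights are exchanged. The sketch below illustrates that pattern with a plain logistic-regression head standing in for the hybrid QNN model; the client data, local optimizer, and all names are assumptions rather than details from the paper.

# Assumed training scheme (federated averaging), not the authors' code: a plain
# logistic-regression head stands in for the QNN on top of pre-trained features.
import numpy as np

def local_training(w, X, y, lr=0.1, epochs=20):
    # A few epochs of gradient descent on one client's local data only.
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))             # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)      # logistic-loss gradient step
    return w

rng = np.random.default_rng(4)
clients = []
for shift in (-1.0, +1.0):                       # two clients, non-IID features
    X = rng.normal(loc=shift, size=(50, 2))
    X = np.hstack([X, np.ones((50, 1))])         # bias feature
    y = (X[:, 0] > X[:, 1]).astype(float)        # shared labeling rule
    clients.append((X, y))

w_global = np.zeros(3)
for communication_round in range(10):
    # Each client trains locally; only the weights travel, never the data.
    local_weights = [local_training(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)    # federated averaging

X_all = np.vstack([c[0] for c in clients])
y_all = np.hstack([c[1] for c in clients])
accuracy = np.mean((1 / (1 + np.exp(-X_all @ w_global)) > 0.5) == y_all)
print("averaged-model accuracy on pooled data:", float(accuracy))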
