
Quantum tomography benchmarking

Added by Boris Bantysh
Publication date: 2020
Fields: Physics
Language: English





Recent advances in quantum computers and simulators are steadily leading us towards full-scale quantum computing devices. Because debugging is necessary to create any computing device, quantum tomography (QT) is a critical milestone on this path. In practice, choosing between different QT methods is hampered by the lack of a comparison methodology. Modern research provides a wide range of QT methods, which differ in their application areas as well as in their experimental and computational complexity. Such methods are also tested under different conditions, and various efficiency measures are applied. Moreover, many methods have complex programming implementations, so comparison becomes extremely difficult. In this study, we have developed a general methodology for comparing quantum state tomography methods. The methodology is based on an estimate of the resources needed to achieve a required accuracy. We have developed a software library (in MATLAB and Python) that makes it easy to analyze any QT method implementation through a series of numerical experiments. The conditions of such a simulation are set by the number of measurement trials, corresponding to a real physical experiment. As a validation of the proposed methodology and software, we analyzed and compared a set of QT methods. The analysis revealed some method-specific features and provided estimates of the relative efficiency of the methods.
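
As a rough illustration of the kind of numerical experiment such a benchmark runs, the Python sketch below estimates how the infidelity of a simple single-qubit tomography method (linear inversion from Pauli measurements) scales with the number of measurement trials. The function names and the estimator are illustrative assumptions, not the API of the authors' library.

# Sketch of a resource-vs-accuracy benchmark for a simple QT method
# (single-qubit linear inversion). Names are illustrative only.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def random_pure_state(rng):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def simulate_pauli_means(rho, n_shots, rng):
    """Sample n_shots projective measurements in each Pauli basis."""
    means = []
    for P in (X, Y, Z):
        p_up = np.clip(np.real(np.trace(rho @ (I2 + P) / 2)), 0.0, 1.0)
        ups = rng.binomial(n_shots, p_up)
        means.append(2 * ups / n_shots - 1)      # estimated <P>
    return means

def linear_inversion(m):
    """Reconstruct rho from the estimated Bloch vector (may be unphysical)."""
    return (I2 + m[0] * X + m[1] * Y + m[2] * Z) / 2

def infidelity(rho_true, rho_est):
    # For a pure true state, F = Tr(rho_true rho_est).
    return 1 - np.real(np.trace(rho_true @ rho_est))

rng = np.random.default_rng(0)
rho = random_pure_state(rng)
for n_shots in (100, 1000, 10_000, 100_000):
    est = linear_inversion(simulate_pauli_means(rho, n_shots, rng))
    print(f"trials per basis: {n_shots:7d}   infidelity: {infidelity(rho, est):.2e}")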

Related research

Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements, including implementing each element of the Clifford group as a single "atomic" pulse and using custom control hardware to enable large-overhead protocols. We show a robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by randomized benchmarking.
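
The sequence design at the heart of such protocols can be sketched generically: generate the 24 single-qubit Cliffords by closure from H and S, draw a random sequence, and append the single recovery gate that returns the ideal net operation to the identity, so state-preparation and measurement errors affect only the constants of the measured decay, not its rate. The Python sketch below illustrates this construction; it is not the pulse-level implementation described above.

# Build the single-qubit Clifford group (modulo global phase) and a random,
# self-inverting sequence -- the SPAM-insensitive construction behind RB-style
# protocols.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def equal_up_to_phase(U, V, atol=1e-8):
    """True if U = exp(i*phi) * V for some global phase phi."""
    overlap = np.vdot(V, U)
    if abs(overlap) < atol:
        return False
    return np.allclose(U, (overlap / abs(overlap)) * V, atol=atol)

# Closure under the generators H and S yields all 24 Cliffords.
group = [np.eye(2, dtype=complex)]
frontier = list(group)
while frontier:
    new = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            if not any(equal_up_to_phase(V, W) for W in group):
                group.append(V)
                new.append(V)
    frontier = new
assert len(group) == 24

# Random sequence plus the single recovery gate that undoes it.
rng = np.random.default_rng(1)
net = np.eye(2, dtype=complex)
for i in rng.integers(len(group), size=20):
    net = group[i] @ net
recovery = next(U for U in group if equal_up_to_phase(U, net.conj().T))
assert equal_up_to_phase(recovery @ net, np.eye(2))
print("20 random Cliffords + 1 recovery gate compose to the identity")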
We train convolutional neural networks to predict whether or not a set of measurements is informationally complete to uniquely reconstruct any given quantum state with no prior information. In addition, we perform fidelity benchmarking based on this measurement set without explicitly carrying out state tomography. The networks are trained to recognize the fidelity and a reliable measure for informational completeness. By gradually accumulating measurements and data, these trained convolutional networks can efficiently establish a compressive quantum-state characterization scheme by accelerating runtime computation and greatly reducing systematic drifts in experiments. We confirm the potential of this machine-learning approach by presenting experimental results for both spatial-mode and multiphoton systems of large dimensions. These predictions are further shown to improve when the networks are trained with additional bootstrapped training sets from real experimental data. Using a realistic beam-profile displacement error model for Hermite-Gaussian sources, we further demonstrate numerically that the orders-of-magnitude reduction in certification time with trained networks greatly increases the computation yield of a large-scale quantum processor using these sources, before state fidelity deteriorates significantly.
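
A minimal PyTorch sketch of the kind of network described, under assumed input and architecture choices (a 2D grid of accumulated measurement frequencies, two convolutional layers, and two outputs for the predicted fidelity and an informational-completeness score); it is not the architecture used in the cited work.

# Assumed illustrative architecture: a small CNN mapping a grid of measurement
# frequencies to (i) a predicted fidelity and (ii) a completeness score.
import torch
import torch.nn as nn

class TomographyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 2),          # [fidelity logit, completeness logit]
        )

    def forward(self, x):
        out = self.head(self.features(x))
        fidelity = torch.sigmoid(out[:, 0])       # predicted fidelity in [0, 1]
        completeness = torch.sigmoid(out[:, 1])   # probability the set is IC
        return fidelity, completeness

model = TomographyCNN()
batch = torch.rand(8, 1, 16, 16)      # 8 synthetic measurement grids
fid, ic = model(batch)
print(fid.shape, ic.shape)            # torch.Size([8]) torch.Size([8])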
Concomitant with the rapid development of quantum technologies, challenging demands arise concerning the certification and characterization of devices. The promises of the field can only be achieved if stringent levels of precision of components can be reached and their functioning guaranteed. This review provides a brief overview of the known characterization methods of certification, benchmarking, and tomographic recovery of quantum states and processes, as well as their applications in quantum computing, simulation, and communication.
We introduce the concept of quantum field tomography, the efficient and reliable reconstruction of unknown quantum fields from correlation-function data. The analysis is based on continuous matrix product states, a complete set of variational states capturing states in quantum field theory. We develop a practical method, making use of and extending estimation-theory tools from the context of compressed sensing, such as Prony methods and matrix pencils, that allows us to faithfully reconstruct quantum field states from low-order correlation functions. In the absence of a phase reference, we highlight how specific higher-order correlation functions can still be predicted. We exemplify the approach by reconstructing randomised continuous matrix product states from their correlation data and study the robustness of the reconstruction under different noise models. We also apply the method to data generated by simulations based on continuous matrix product states and the time-dependent variational principle. The presented approach is expected to open up a new window into the experimental study of continuous quantum systems, such as those encountered in experiments with ultra-cold atoms on atom chips. By virtue of the analogy with the input-output formalism of quantum optics, it also allows for studying open quantum systems.
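
The matrix-pencil step referred to above can be illustrated on toy data: recovering the complex poles of a signal that is a sum of damped exponentials, which is the role such estimation-theory tools play when correlation functions are inverted. The Python sketch below is a generic matrix-pencil estimator on synthetic data, not the cMPS reconstruction pipeline itself.

# Toy matrix-pencil (Prony-type) recovery of complex exponentials.
import numpy as np

rng = np.random.default_rng(0)
# Toy "correlation data": two damped complex exponentials plus weak noise.
true_poles = np.array([0.9 * np.exp(0.4j), 0.7 * np.exp(-1.1j)])
amplitudes = np.array([1.0, 0.5])
steps = np.arange(64)
signal = (amplitudes[:, None] * true_poles[:, None] ** steps).sum(axis=0)
signal = signal + 1e-4 * (rng.normal(size=steps.size) + 1j * rng.normal(size=steps.size))

# Hankel matrix and its two shifted pencil blocks.
L = 20                                              # pencil parameter
Y = np.array([signal[i : i + L + 1] for i in range(steps.size - L)])
Y1, Y2 = Y[:, :-1], Y[:, 1:]

# Rank-truncated pseudoinverse (SVD kept to the assumed model order K),
# so the K nonzero eigenvalues of pinv(Y1) @ Y2 estimate the poles.
K = true_poles.size
U_, s_, Vh_ = np.linalg.svd(Y1, full_matrices=False)
pinv_trunc = (Vh_[:K].conj().T / s_[:K]) @ U_[:, :K].conj().T
eigvals = np.linalg.eigvals(pinv_trunc @ Y2)
estimated = eigvals[np.argsort(-np.abs(eigvals))][:K]

print("true poles:     ", np.sort_complex(true_poles))
print("estimated poles:", np.sort_complex(estimated))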
A key requirement for scalable quantum computing is that elementary quantum gates can be implemented with sufficiently low error. One method for determining the error behavior of a gate implementation is to perform process tomography. However, standard process tomography is limited by errors in state preparation, measurement, and one-qubit gates. It suffers from inefficient scaling with the number of qubits and does not detect adverse error compounding when gates are composed in long sequences. An additional problem is that desirable error probabilities for scalable quantum computing are of the order of 0.0001 or lower, and experimentally proving such low errors is challenging. We describe a randomized benchmarking method that yields estimates of the computationally relevant errors without relying on accurate state preparation and measurement. Since it involves long sequences of randomly chosen gates, it also verifies that error behavior is stable when used in long computations. We implemented randomized benchmarking on trapped atomic ion qubits, establishing a one-qubit error probability per randomized pi/2 pulse of 0.00482(17) in a particular experiment. We expect this error probability to be readily improved with straightforward technical modifications.
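
The analysis step of such a benchmark is a fit of the average sequence fidelity to an exponential decay, F(m) = A p^m + B, with the average error per gate obtained as r = (1 - p)(d - 1)/d (d = 2 for one qubit). The Python sketch below fits synthetic data; the numbers are illustrative, not the trapped-ion results quoted above.

# Fit the standard randomized-benchmarking decay and extract the error per gate.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(0)
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
p_true, A_true, B_true = 0.995, 0.5, 0.5                 # assumed "true" values
fidelities = rb_decay(lengths, A_true, p_true, B_true)
fidelities = fidelities + rng.normal(scale=0.003, size=lengths.size)  # shot noise

(A, p, B), _ = curve_fit(rb_decay, lengths, fidelities, p0=[0.5, 0.99, 0.5])
error_per_gate = (1 - p) * (2 - 1) / 2                   # r = (1 - p)(d - 1)/d, d = 2
print(f"p = {p:.5f}, error per gate r = {error_per_gate:.2e}")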
