We introduce maximum likelihood fragment tomography (MLFT) as an improved circuit cutting technique for running clustered quantum circuits on quantum devices with a limited number of qubits. In addition to minimizing the classical computing overhead of circuit cutting methods, MLFT finds the most likely probability distribution for the output of a quantum circuit, given the measurement data obtained from the circuit's fragments. We demonstrate the benefits of MLFT for accurately estimating the output of a fragmented quantum circuit with numerical experiments on random unitary circuits. Finally, we show that circuit cutting can estimate the output of a clustered circuit with higher fidelity than full circuit execution, thereby motivating the use of circuit cutting as a standard tool for running clustered circuits on quantum hardware.
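As a loose illustration of recovering a "most likely probability distribution" from unphysical data, the sketch below (an assumption for illustration, not the MLFT algorithm itself) projects a quasi-probability vector of the kind naive fragment recombination can produce onto the probability simplex; MLFT works at the level of fragment tomography, so this is only a stand-in for its physicality constraint.

```python
import numpy as np

def project_to_simplex(q):
    """Euclidean projection of a quasi-probability vector onto the simplex.

    Naive recombination of circuit fragments can yield negative entries;
    the projection returns the closest valid probability distribution.
    """
    u = np.sort(q)[::-1]                       # sort entries in descending order
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(q) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)             # uniform shift that restores sum = 1
    return np.maximum(q - theta, 0.0)

quasi = np.array([0.55, 0.50, -0.05, 0.00])    # hypothetical recombined output
p = project_to_simplex(quasi)
print(p, p.sum())
```

The projection clips the negative entry and redistributes the excess weight so the result is a genuine distribution.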
Maximum likelihood quantum state tomography yields estimators that are consistent, provided that the likelihood model is correct, but the maximum likelihood estimators may have bias for any finite data set. The bias of an estimator is the difference between the expected value of the estimate and the true value of the parameter being estimated. This paper investigates bias in widely used maximum likelihood quantum state tomography. Our goal is to understand how the amount of bias depends on factors such as the purity of the true state, the number of measurements performed, and the number of different bases in which the system is measured. To this end, we run numerical experiments that simulate optical homodyne detection under various conditions, perform tomography on the simulated data, and estimate the bias in the purity of the reconstructed state. We find that estimates of higher purity states exhibit considerable bias, such that the estimates have lower purity than the true states.
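The definition of bias used above can be checked numerically with a much simpler estimator. In this toy sketch (my example, not from the paper), the maximum likelihood estimator of a Gaussian variance divides by N rather than N-1 and is therefore biased downward for finite samples, mirroring the downward purity bias the paper studies:

```python
import numpy as np

rng = np.random.default_rng(0)

true_var = 1.0       # variance of the underlying Gaussian
n_samples = 10       # small sample size, where bias is pronounced
n_trials = 20000     # Monte Carlo repetitions to approximate the expectation

# np.var with default ddof=0 is the MLE of the variance; it divides by N,
# not N-1, and so underestimates the true variance on average.
mle_vars = [np.var(rng.normal(0.0, np.sqrt(true_var), n_samples))
            for _ in range(n_trials)]

# Bias = E[estimate] - true value; theory predicts -true_var / n_samples.
bias = np.mean(mle_vars) - true_var
print(f"estimated bias: {bias:.3f}")
```

The Monte Carlo average recovers the known analytic bias of about -0.1 for these parameters.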
Quantum state tomography aims to determine the quantum state of a system from measured data and is an essential tool for quantum information science. When dealing with continuous-variable quantum states of light, tomography is often performed by measuring the field amplitudes at different optical phases using homodyne detection. The quadrature-phase homodyne measurement outputs a continuous variable, so to reduce the computational cost of tomography, researchers often discretize the measurements. This paper studies different strategies for determining the histogram bin widths and shows that discretization can be done without significantly degrading the fidelity between the estimated state and the true state. In particular, computation time can be significantly reduced with little loss in the fidelity of the estimated state when the measurement operators corresponding to each histogram bin are integrated over the bin width.
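The discretization step described above can be sketched as follows. This is a hypothetical example (the bin width, phase, and state are my assumptions): continuous quadrature samples are reduced to histogram counts, and each bin would then be paired with a measurement operator integrated over that bin's width.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical quadrature samples at one optical phase; a vacuum state gives
# Gaussian-distributed quadratures (variance 1/2 in these units).
samples = rng.normal(0.0, np.sqrt(0.5), 100_000)

# Discretize the continuous measurement record into histogram bins.  In the
# paper's approach, the measurement operator for each bin is integrated over
# the bin width rather than evaluated at the bin center.
bin_width = 0.25
edges = np.arange(-4.0, 4.0 + bin_width, bin_width)
counts, _ = np.histogram(samples, bins=edges)

# Empirical probabilities per bin feed the likelihood in place of raw samples.
probs = counts / counts.sum()
print(f"{len(edges) - 1} bins, total probability {probs.sum():.3f}")
```

Coarser bins shrink the likelihood evaluation from one term per sample to one term per bin, which is the source of the computational savings.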
I propose an iterative expectation maximization algorithm for reconstructing a quantum optical ensemble from a set of balanced homodyne measurements performed on an optical state. The algorithm applies directly to the acquired data, bypassing the intermediate step of calculating marginal distributions. The advantages of the new method are made manifest by comparing it with the traditional inverse Radon transformation technique.
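A minimal sketch of an expectation-maximization-style iteration for ML state reconstruction (the R·rho·R update) is shown below. This uses single-qubit Pauli projectors and hypothetical measured frequencies as a finite-dimensional stand-in for the continuous homodyne POVM, so it illustrates the iteration's structure rather than the paper's actual optical reconstruction:

```python
import numpy as np

# Projectors for X, Y, Z measurements on one qubit: a finite-dimensional
# stand-in for the homodyne POVM, used only to illustrate the iteration.
def pauli_projectors():
    vecs = [np.array([1,  1]) / np.sqrt(2),    # X, outcome +
            np.array([1, -1]) / np.sqrt(2),    # X, outcome -
            np.array([1, 1j]) / np.sqrt(2),    # Y, outcome +
            np.array([1, -1j]) / np.sqrt(2),   # Y, outcome -
            np.array([1, 0]),                  # Z, outcome 0
            np.array([0, 1])]                  # Z, outcome 1
    return [np.outer(v, v.conj()) for v in vecs]

def iterative_mle(freqs, projs, n_iter=300):
    """R*rho*R iteration: rho <- normalize(R(rho) @ rho @ R(rho))."""
    rho = np.eye(2, dtype=complex) / 2          # start from the maximally mixed state
    for _ in range(n_iter):
        probs = np.array([np.trace(rho @ p).real for p in projs])
        R = sum(f / p * pr for f, p, pr in zip(freqs, probs, projs))
        rho = R @ rho @ R
        rho /= np.trace(rho).real               # renormalize to unit trace
    return rho

projs = pauli_projectors()
# Hypothetical relative frequencies for a state close to |0>, with each of
# the three bases measured equally often (hence the division by 3).
freqs = np.array([0.52, 0.48, 0.47, 0.53, 0.98, 0.02]) / 3
rho_hat = iterative_mle(freqs, projs)
print(np.round(rho_hat.real, 3))
```

Note that the update acts directly on the observed frequencies, with no intermediate marginal-distribution (inverse Radon) step.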
Several candidate quantum algorithms have recently been found that may be implementable on near-term devices for estimating the amplitude of a given quantum state, a core subroutine in various computing tasks such as Monte Carlo methods. One of these algorithms is based on maximum likelihood estimation with parallelized quantum circuits; in this paper, we extend the method so that it can handle realistic noise effects. The validity of the proposed noise model is supported by an experimental demonstration on an IBM Q device, which enables us to predict the basic requirements on hardware components (particularly the gate error) for quantum computers to realize a quantum speedup in the amplitude estimation task.
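The likelihood-based amplitude estimation that the paper builds on can be sketched as follows. This is a noiseless toy version with hypothetical parameters; the paper's contribution is extending this likelihood with a noise model (roughly, damping the oscillating term with a depth-dependent factor), which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ideal hit probability after m rounds of amplitude amplification, where the
# target amplitude is a = sin(theta)^2.  A realistic-noise extension would
# damp this oscillation as the Grover depth m grows.
def p_hit(theta, m):
    return np.sin((2 * m + 1) * theta) ** 2

theta_true = 0.3            # hypothetical true parameter
depths = [0, 1, 2, 4, 8]    # parallelized circuits of increasing Grover depth
shots = 1000

# Simulate the measurement counts at each depth.
hits = [rng.binomial(shots, p_hit(theta_true, m)) for m in depths]

# Maximum likelihood estimate of theta by grid search over the log-likelihood,
# combining the binomial likelihoods of all depths.
grid = np.linspace(1e-3, np.pi / 2 - 1e-3, 20000)
log_lik = np.zeros_like(grid)
for m, h in zip(depths, hits):
    p = np.clip(p_hit(grid, m), 1e-12, 1 - 1e-12)
    log_lik += h * np.log(p) + (shots - h) * np.log(1 - p)
theta_hat = grid[np.argmax(log_lik)]
print(f"true theta: {theta_true:.3f}, ML estimate: {theta_hat:.4f}")
```

Deeper circuits sharpen the likelihood around the true value, which is where the quantum speedup over direct sampling comes from.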
We formulate maximum likelihood (ML) channel decoding as a quadratic unconstrained binary optimization (QUBO) problem and simulate the decoding on a current commercial quantum annealing machine, the D-Wave 2000Q. We prepared two implementations with Ising-model formulations, generated from the generator matrix and the parity-check matrix, respectively. We evaluated these implementations of ML decoding for low-density parity-check (LDPC) codes, analyzing the numbers of spins and connections and comparing the decoding performance with belief propagation (BP) decoding and brute-force ML decoding on classical computers. The results show that these implementations outperform BP decoding for relatively short codes, and while performance deteriorates for longer codes, the parity-check matrix formulation still works up to code length 1000 with fewer spins and connections than the generator matrix formulation, owing to the sparsity of LDPC parity-check matrices.
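To make the parity-check formulation concrete, here is a hypothetical toy example (a length-6 code with three checks; the matrix, penalty weight, and error pattern are my assumptions, not the paper's LDPC instances). Each degree-3 parity check is encoded as a squared penalty with one ancilla variable, and brute-force minimization of the resulting QUBO energy stands in for the annealer:

```python
import itertools
import numpy as np

# Toy parity-check matrix (3 checks, 6 bits); each row lists one parity check.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

codeword = np.array([1, 0, 1, 1, 1, 0])       # satisfies H @ codeword % 2 == 0
received = codeword.copy()
received[3] ^= 1                              # single bit flip from the channel

PENALTY = 2.0   # weight of the parity constraints relative to the distance term

def qubo_energy(x, anc):
    """Hamming distance to the received word plus squared parity penalties.

    Each check x_i + x_j + x_k is forced to be even via one ancilla a:
    (x_i + x_j + x_k - 2a)^2 vanishes iff the parity is satisfied.
    """
    dist = np.sum(x ^ received)
    pen = 0.0
    for row, a in zip(H, anc):
        s = np.dot(row, x)
        pen += (s - 2 * a) ** 2
    return dist + PENALTY * pen

# Brute-force minimization over all bit/ancilla assignments stands in for the
# annealer in this sketch (feasible only because the code is tiny).
best = min(
    ((np.array(x), np.array(a)) for x in itertools.product([0, 1], repeat=6)
     for a in itertools.product([0, 1], repeat=3)),
    key=lambda xa: qubo_energy(*xa),
)
decoded = best[0]
print("decoded:", decoded, "matches codeword:", np.array_equal(decoded, codeword))
```

The ancilla count grows with the check degree, which is why the sparsity of LDPC parity-check matrices keeps the spin and connection counts manageable.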