Quantum computers are capable of efficiently contracting unitary tensor networks, a task that is likely to remain difficult for classical computers. For instance, networks based on matrix product states or the multi-scale entanglement renormalization ansatz (MERA) can be contracted on a small quantum computer to aid the simulation of a large quantum system. However, without the ability to selectively reset qubits, the associated spatial cost can be exorbitant. In this paper, we propose a protocol that can unitarily reset qubits when the circuit has a common convolutional form, thus dramatically reducing the spatial cost of implementing the contraction algorithm on general near-term quantum computers. This protocol generates fresh qubits from used ones by partially applying the time-reversed quantum circuit over qubits that are no longer in use. In the absence of noise, we prove that the state of a subset of these qubits becomes $|0\ldots 0\rangle$, up to an error exponentially small in the number of gates applied. We also provide numerical evidence that the protocol works in the presence of noise, and formulate a condition under which the noise resilience follows rigorously.
Quantum computers possess quantum parallelism and offer great computing power over classical computers \cite{er1,er2}. As is well known, a moving quantum object passing through a double slit exhibits particle-wave duality. A quantum computer is static and lacks this duality property. The recently proposed duality computer exploits this particle-wave duality, and it may offer additional computing power \cite{r1}. Simply put, a duality computer is a moving quantum computer passing through a double slit. A duality computer offers the capability to perform separate operations on the sub-waves coming out of the different slits, in the so-called duality parallelism. Here we show that an $n$-dubit duality computer can be modeled by an $(n+1)$-qubit quantum computer. In the duality mode, computing operations are not necessarily unitary. An $n$-qubit quantum computer can be used as an $n$-bit reversible classical computer and is energy efficient. Our result further enables an $(n+1)$-qubit quantum computer to run classical algorithms of an $O(2^n)$-bit classical computer. The duality mode provides a natural link between classical computing and quantum computing. Here we also propose a recycling computing mode in which a quantum computer continues to compute until the result is obtained. These two modes provide new tools for algorithm design. A search algorithm for the unsorted database search problem is designed.
We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with the accuracy of the computed ground state energy serving as the primary benchmark metric. We further parameterize this benchmark suite on the trial circuit type, the level of symmetry reduction, and error mitigation strategies. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post-processing with specific error mitigation techniques. In particular, the adaptation of McWeeny purification of noisy density matrices dramatically improves the accuracy of quantum computations, which, along with the adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
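As a sketch of the purification step mentioned above: McWeeny purification iterates the map $\rho \mapsto 3\rho^2 - 2\rho^3$, whose attracting fixed points are idempotent matrices, so a noisy but nearly pure density matrix is driven back toward the nearest pure-state projector. The matrices and noise model below are hypothetical illustrations, not data from the benchmark.

```python
import numpy as np

def mcweeny_purify(rho, iterations=10):
    """Iterate rho <- 3 rho^2 - 2 rho^3, driving eigenvalues toward {0, 1}."""
    for _ in range(iterations):
        rho2 = rho @ rho
        rho = 3 * rho2 - 2 * rho2 @ rho
    return rho

# Hypothetical example: a single-qubit pure-state projector with a small
# admixture of the maximally mixed state, mimicking depolarizing noise.
pure = np.array([[0.9, 0.3],
                 [0.3, 0.1]])                  # idempotent: pure @ pure == pure
noisy = 0.95 * pure + 0.05 * np.eye(2) / 2     # eigenvalues ~ {0.975, 0.025}
purified = mcweeny_purify(noisy)

# The iteration restores idempotency (purity) to machine precision.
print(np.allclose(purified @ purified, purified, atol=1e-6))
```

Because the map is a fixed polynomial in $\rho$, it can be applied entirely in classical post-processing to a measured (tomographically reconstructed) density matrix, which is what makes it attractive as cheap error mitigation.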
Readout errors are a significant source of noise for near-term quantum computers. A variety of methods have been proposed to mitigate these errors using classical post-processing. For a system with $n$ qubits, the entire readout error profile is specified by a $2^n\times 2^n$ matrix. Recent proposals to use sub-exponential approximations rely on small and/or short-ranged error correlations. In this paper, we introduce and demonstrate a methodology to categorize and quantify multiqubit readout error correlations. Two distinct types of error correlations are considered: sensitivity of the measurement of a given qubit to the state of nearby spectator qubits, and measurement operator covariances. We deploy this methodology on IBMQ quantum computers, finding that error correlations are indeed small compared to the single-qubit readout errors on IBMQ Melbourne (15 qubits) and IBMQ Manhattan (65 qubits), but that correlations on IBMQ Melbourne are long-ranged and do not decay with inter-qubit distance.
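The second correlation type above, measurement operator covariance, can be estimated directly from repeated calibration shots: prepare a fixed basis state, record the measured bits, and compute $\mathrm{Cov}(m_i, m_j) = E[m_i m_j] - E[m_i]E[m_j]$. A minimal sketch with simulated shots (the flip probabilities and the correlated-event model are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 100_000

# Hypothetical calibration run: prepare |00> and record the measured bits.
# Each qubit flips independently with probability p, and additionally both
# flip together with probability p_corr (a correlated readout event).
p, p_corr = 0.02, 0.01
corr_event = rng.random(shots) < p_corr
m0 = (rng.random(shots) < p) | corr_event
m1 = (rng.random(shots) < p) | corr_event

# Measurement-operator covariance: Cov(m0, m1) = E[m0 m1] - E[m0] E[m1].
cov = np.mean(m0 & m1) - np.mean(m0) * np.mean(m1)
print(f"covariance = {cov:.4f}")   # near zero iff readout errors are independent
```

For independent flips the covariance vanishes up to shot noise, so a statistically significant nonzero value for a qubit pair flags exactly the kind of correlation that invalidates tensor-product (sub-exponential) approximations of the readout error matrix.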
Readout errors on near-term quantum computers can introduce significant error to the empirical probability distribution sampled from the output of a quantum circuit. These errors can be mitigated by classical postprocessing given access to an experimental \emph{response matrix} that describes the error associated with measurement of each computational basis state. However, the resources required to characterize a complete response matrix and to compute the corrected probability distribution scale exponentially in the number of qubits $n$. In this work, we modify standard matrix inversion techniques using two perturbative approximations with significantly reduced complexity and bounded error when the likelihood of high-order bitflip events is strongly suppressed. Given a characteristic error rate $q$, our first method recovers the probability of the all-zeros bitstring $p_0$ by sampling only a small subspace of the response matrix before inverting the readout error, resulting in a relative speedup of $\text{poly}\left(2^{n} / \binom{n}{w}\right)$, which we motivate using a simplified error model for which the approximation incurs only $O(q^w)$ error for some integer $w$. We then provide a generalized technique to efficiently recover full output distributions with $O(q^w)$ error in the perturbative limit. These approximate techniques for readout error correction may greatly accelerate near-term quantum computing applications.
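The baseline these perturbative methods accelerate is full response-matrix inversion: build the $2^n \times 2^n$ matrix $R$ mapping true to measured distributions, then solve $R\,p_{\text{corrected}} = p_{\text{meas}}$. A minimal sketch on a two-qubit toy problem, assuming (for illustration only) an uncorrelated readout model so the full matrix is a tensor product of single-qubit confusion matrices:

```python
import numpy as np

# Hypothetical single-qubit confusion matrix: columns index the prepared
# state, rows the measured outcome, with asymmetric 0<->1 flip rates.
a = np.array([[0.97, 0.05],
              [0.03, 0.95]])

# Assumed-uncorrelated readout: the full 4x4 response matrix for n = 2
# qubits is a Kronecker product (this is the exponentially large object
# that the perturbative methods avoid constructing in full).
R = np.kron(a, a)

p_true = np.array([0.70, 0.10, 0.15, 0.05])   # ideal output distribution
p_meas = R @ p_true                           # distribution after readout error

# Standard mitigation: invert the response matrix on the measured counts.
p_corrected = np.linalg.solve(R, p_meas)
print(np.allclose(p_corrected, p_true))
```

Even this toy makes the scaling problem visible: characterizing $R$ requires preparing all $2^n$ basis states, and the solve costs $O(2^{3n})$, which is what motivates sampling only a small subspace of $R$ when high-order bitflips are suppressed.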
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears to be one of the most promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on \emph{What would you do with 1000 qubits?}, we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could also benefit from this hybrid quantum-classical framework.