
Scalar Quantum Field Theories as a Benchmark for Near-Term Quantum Computers

Posted by Eugen Dumitrescu
Publication date: 2018
Research field: Physics
Paper language: English





Quantum field theory (QFT) simulations are a potentially important application for noisy intermediate scale quantum (NISQ) computers. The ability of a quantum computer to emulate a QFT, therefore, constitutes a natural application-centric benchmark. Foundational quantum algorithms to simulate QFT processes rely on fault-tolerant computational resources, but to be useful on NISQ machines, error-resilient algorithms are required. Here we outline and implement a hybrid algorithm to calculate the lowest energy levels of the paradigmatic (1+1)-dimensional interacting scalar QFT. We calculate energy splittings and compare results with experimental values obtained on currently available quantum hardware. We show that the accuracy of mass-renormalization calculations represents a useful metric with which near-term hardware may be benchmarked. We also discuss the prospects of scaling the algorithm to full simulation of interacting QFTs on future hardware.
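For a concrete picture of the hybrid workflow, the following is a minimal classical sketch of the variational loop for a truncated lattice phi^4 Hamiltonian. The two-site lattice, four-level field truncation per site, the simple rotation ansatz, and the exact-linear-algebra stand-in for the hardware expectation value are all illustrative assumptions, not the construction used in the paper; the lowest two levels whose splitting serves as the benchmark quantity are printed at the end.

```python
# Minimal sketch of a hybrid variational loop for a truncated lattice phi^4 theory.
import numpy as np
from scipy.optimize import minimize

n_levels, n_sites = 4, 2            # bosonic truncation and lattice size (assumptions)
m2, lam, a = 1.0, 0.5, 1.0          # bare mass squared, quartic coupling, lattice spacing

# Single-site operators in the truncated Fock basis
adag = np.diag(np.sqrt(np.arange(1, n_levels)), -1)   # creation operator
a_op = adag.T                                         # annihilation operator
phi = (adag + a_op) / np.sqrt(2.0)                    # field operator
pi2 = -((adag - a_op) @ (adag - a_op)) / 2.0          # conjugate momentum squared

def site_op(op, site):
    """Embed a single-site operator into the full two-site Hilbert space."""
    ops = [np.eye(n_levels)] * n_sites
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Lattice phi^4 Hamiltonian (open boundary):
# H = sum_x [ pi_x^2/2 + m^2 phi_x^2/2 + lam phi_x^4/4! ] + (phi_0 - phi_1)^2 / (2 a^2)
H = sum(site_op(pi2, x) / 2
        + m2 * site_op(phi @ phi, x) / 2
        + lam * site_op(phi @ phi @ phi @ phi, x) / 24
        for x in range(n_sites))
dphi = site_op(phi, 0) - site_op(phi, 1)
H = H + dphi @ dphi / (2 * a**2)

dim = n_levels ** n_sites

def energy(theta):
    """Classical stand-in for the hardware estimate of <psi(theta)|H|psi(theta)>."""
    psi = np.zeros(dim)
    psi[0] = 1.0
    for k, t in enumerate(theta):     # hypothetical ansatz: rotations out of the Fock vacuum
        i, j = 0, k + 1
        rot = np.eye(dim)
        rot[i, i] = rot[j, j] = np.cos(t)
        rot[i, j], rot[j, i] = -np.sin(t), np.sin(t)
        psi = rot @ psi
    return float(psi @ H @ psi)

result = minimize(energy, x0=0.1 * np.ones(4), method="Nelder-Mead")
e0_exact, e1_exact = np.linalg.eigvalsh(H)[:2]
print("variational E0:", result.fun)
print("exact E0, E1 (their splitting is the benchmark quantity):", e0_exact, e1_exact)
```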


Read also

We present a quantum chemistry benchmark for noisy intermediate-scale quantum computers that leverages the variational quantum eigensolver, active space reduction, a reduced unitary coupled cluster ansatz, and reduced density purification as error mitigation. We demonstrate this benchmark on the 20-qubit IBM Tokyo and 16-qubit Rigetti Aspen processors via the simulation of alkali metal hydrides (NaH, KH, RbH), with the accuracy of the computed ground state energy serving as the primary benchmark metric. We further parameterize this benchmark suite on the trial circuit type, the level of symmetry reduction, and error mitigation strategies. Our results demonstrate the characteristically high noise level present in near-term superconducting hardware, but provide a relevant baseline for future improvement of the underlying hardware, and a means for comparison across near-term hardware types. We also demonstrate how to reduce the noise in post-processing with specific error mitigation techniques. In particular, the adaptation of McWeeny purification of noisy density matrices dramatically improves the accuracy of quantum computations, which, along with an adjustable active space, significantly extends the range of accessible molecular systems. We demonstrate that for specific benchmark settings, the accuracy metric can reach chemical accuracy when computing over the cloud on certain quantum computers.
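As an illustration of the purification step mentioned in this abstract, the sketch below applies McWeeny purification, rho -> 3 rho^2 - 2 rho^3, to a toy single-qubit density matrix. The depolarizing noise rate and the observable are assumptions chosen only to show how the iteration pulls an expectation value back toward its noiseless value.

```python
# Minimal sketch of McWeeny purification applied to a noisy density matrix.
import numpy as np

def mcweeny_purify(rho, iterations=10):
    """Iterate rho -> 3 rho^2 - 2 rho^3, which drives a nearly pure density
    matrix toward the projector onto its dominant eigenvector."""
    for _ in range(iterations):
        rho = 3 * rho @ rho - 2 * rho @ rho @ rho
    return rho

# Ideal pure state |+> and its density matrix
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_ideal = np.outer(psi, psi)

# "Measured" state: 20% depolarizing noise mixed in (an assumed rate)
rho_noisy = 0.8 * rho_ideal + 0.2 * np.eye(2) / 2
rho_purified = mcweeny_purify(rho_noisy)

# Toy observable: the purified expectation value returns close to the ideal one
H = np.array([[1.0, 0.5], [0.5, -1.0]])
for label, rho in (("ideal", rho_ideal), ("noisy", rho_noisy), ("purified", rho_purified)):
    print(f"{label:9s} <H> = {np.trace(rho @ H):.4f}")
```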
We point out that the realization of quantum communication protocols on programmable quantum computers provides a deep benchmark for the capabilities of real quantum hardware. In particular, it is promising to focus on measurements of entropy-based characteristics of the performance and to explore whether a quantum regime is preserved. We perform proof-of-principle implementations of superdense coding and BB84 quantum key distribution using the 5- and 16-qubit superconducting quantum processors of the IBM Quantum Experience. We focus on the ability of these quantum machines to provide an efficient transfer of information between distant parts of the processors by placing Alice and Bob at different qubits of the devices. We also examine the ability of quantum devices to serve as quantum memory and to store entangled states used in quantum communication. Another issue we address is error mitigation. Although it is at odds with benchmarking, this problem is nevertheless of importance in the general context of quantum computation with noisy quantum devices. We perform such mitigation and noticeably improve some results.
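To make the communication-protocol benchmark concrete, here is a noiseless statevector sketch of superdense coding in plain NumPy. It is only a stand-in for the hardware experiments described above; the encoding convention (Z for the first bit, X for the second) and the deterministic readout are assumptions that hold only in the absence of noise.

```python
# Minimal statevector sketch of superdense coding (no hardware or noise modeled).
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def superdense(b0, b1):
    """Encode two classical bits on Alice's half of a Bell pair and decode them."""
    # Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2); qubit 0 is Alice's
    state = CNOT @ np.kron(H, I2) @ np.array([1.0, 0.0, 0.0, 0.0])
    # Alice encodes: Z for the first bit, X for the second (a common convention)
    enc = I2
    if b1:
        enc = X @ enc
    if b0:
        enc = Z @ enc
    state = np.kron(enc, I2) @ state
    # Alice sends her qubit; Bob decodes with CNOT then H and measures both qubits
    state = np.kron(H, I2) @ CNOT @ state
    outcome = int(np.argmax(np.abs(state) ** 2))   # deterministic without noise
    return (outcome >> 1) & 1, outcome & 1

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, "->", superdense(*bits))
```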
Quantum computers are capable of efficiently contracting unitary tensor networks, a task that is likely to remain difficult for classical computers. For instance, networks based on matrix product states or the multi-scale entanglement renormalization ansatz (MERA) can be contracted on a small quantum computer to aid the simulation of a large quantum system. However, without the ability to selectively reset qubits, the associated spatial cost can be exorbitant. In this paper, we propose a protocol that can unitarily reset qubits when the circuit has a common convolutional form, thus dramatically reducing the spatial cost of implementing the contraction algorithm on general near-term quantum computers. This protocol generates fresh qubits from used ones by partially applying the time-reversed quantum circuit over qubits that are no longer in use. In the absence of noise, we prove that the state of a subset of these qubits becomes $|0\ldots 0\rangle$, up to an error exponentially small in the number of gates applied. We also provide numerical evidence that the protocol works in the presence of noise, and formulate a condition under which the noise resilience follows rigorously.
Readout errors on near-term quantum computers can introduce significant error into the empirical probability distribution sampled from the output of a quantum circuit. These errors can be mitigated by classical postprocessing given access to an experimental response matrix that describes the error associated with measurement of each computational basis state. However, the resources required to characterize a complete response matrix and to compute the corrected probability distribution scale exponentially in the number of qubits $n$. In this work, we modify standard matrix inversion techniques using two perturbative approximations with significantly reduced complexity and bounded error when the likelihood of high-order bit-flip events is strongly suppressed. Given a characteristic error rate $q$, our first method recovers the probability of the all-zeros bitstring $p_0$ by sampling only a small subspace of the response matrix before inverting readout error, resulting in a relative speedup of $\text{poly}\left(2^{n} / \binom{n}{w}\right)$, which we motivate using a simplified error model for which the approximation incurs only $O(q^w)$ error for some integer $w$. We then provide a generalized technique to efficiently recover full output distributions with $O(q^w)$ error in the perturbative limit. These approximate techniques for readout error correction may greatly accelerate near-term quantum computing applications.
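The sketch below illustrates the truncated-inversion idea on a toy tensor-product readout-error model: the response matrix is restricted to bitstrings of Hamming weight at most w before solving for the corrected probabilities. The qubit count, per-qubit flip rate, and "true" output distribution are assumptions, and the full response matrix is built explicitly only because the example is small; the point of the real technique is to avoid that exponential object.

```python
# Minimal sketch of perturbative readout-error mitigation by subspace inversion.
import numpy as np

n, q, w = 6, 0.03, 2       # qubits, per-qubit readout flip rate, truncation order (assumptions)

# Tensor-product response matrix R[i, j] = P(measure i | prepared j); small n only
r1 = np.array([[1 - q, q], [q, 1 - q]])
R = r1
for _ in range(n - 1):
    R = np.kron(R, r1)

# Assumed "true" circuit output, concentrated on low-Hamming-weight bitstrings
p_true = np.zeros(2 ** n)
p_true[0b000000], p_true[0b000001], p_true[0b000011] = 0.90, 0.06, 0.04
p_noisy = R @ p_true       # distribution the device would actually sample

# Truncation: keep only bitstrings of Hamming weight <= w
keep = [i for i in range(2 ** n) if bin(i).count("1") <= w]
R_sub = R[np.ix_(keep, keep)]

p0_full = np.linalg.solve(R, p_noisy)[0]              # exponential-cost full inversion
p0_trunc = np.linalg.solve(R_sub, p_noisy[keep])[0]   # inversion on the small subspace
print("true p_0:      ", p_true[0])
print("noisy p_0:     ", p_noisy[0])
print("full inversion:", p0_full, "  truncated:", p0_trunc)
```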
Variational algorithms are a promising paradigm for utilizing near-term quantum devices to model electronic states of molecular systems. However, previous bounds on the measurement time required have suggested that the application of these techniques to larger molecules might be infeasible. We present a measurement strategy based on a low-rank factorization of the two-electron integral tensor. Our approach provides a cubic reduction in term groupings over the prior state of the art and enables measurement times three orders of magnitude smaller than those suggested by commonly referenced bounds for the largest systems we consider. Although our technique requires execution of a linear-depth circuit prior to measurement, this is compensated for by eliminating challenges associated with sampling non-local Jordan-Wigner-transformed operators in the presence of measurement error, while enabling a powerful form of error mitigation based on efficient postselection. We numerically characterize these benefits with noisy quantum circuit simulations for the ground state energies of strongly correlated electronic systems.
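As a minimal illustration of the low-rank factorization, the sketch below reshapes a synthetic two-electron tensor g[p,q,r,s] into a matrix over composite indices, eigendecomposes it, and keeps only the dominant factors; each retained factor corresponds to a one-body operator that can be measured after a basis rotation. The random planted-rank tensor is an assumed stand-in for real molecular integrals, which would typically be truncated at a tolerance rather than reproduced exactly.

```python
# Minimal sketch of a low-rank factorization of a two-electron integral tensor.
import numpy as np

rng = np.random.default_rng(0)
n_orb = 4                                   # number of orbitals (assumption)

# Synthetic two-electron tensor with planted low rank: g_pqrs = sum_l M^l_pq M^l_rs
M = rng.normal(size=(3, n_orb, n_orb))
M = (M + M.transpose(0, 2, 1)) / 2          # symmetric one-body factors
g = np.einsum("lpq,lrs->pqrs", M, M)

# Reshape to a matrix over composite indices (pq), (rs) and eigendecompose
G = g.reshape(n_orb**2, n_orb**2)
vals, vecs = np.linalg.eigh(G)
order = np.argsort(-np.abs(vals))
vals, vecs = vals[order], vecs[:, order]

# Keep only the dominant factors (here, the 3 planted ones survive the cutoff)
L = int(np.sum(np.abs(vals) > 1e-10))
g_lowrank = sum(vals[l] * np.outer(vecs[:, l], vecs[:, l]) for l in range(L))
g_lowrank = g_lowrank.reshape(n_orb, n_orb, n_orb, n_orb)

print("factors retained:", L, "of", n_orb**2)
print("max reconstruction error:", np.max(np.abs(g - g_lowrank)))
```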