
Quantum equation of motion for computing molecular excitation energies on a noisy quantum processor

Posted by Pauline Ollitrault
Publication date: 2019
Research field: Physics
Paper language: English





The computation of molecular excitation energies is essential for predicting photo-induced reactions of chemical and technological interest. While the classical computing resources needed for this task scale poorly, quantum algorithms emerge as promising alternatives. In particular, the extension of the variational quantum eigensolver algorithm to the computation of the excitation energies is an attractive option. However, there is currently a lack of such algorithms for correlated molecular systems that are amenable to near-term, noisy hardware. In this work, we propose an extension of the well-established classical equation of motion approach to a quantum algorithm for the calculation of molecular excitation energies on noisy quantum computers. In particular, we demonstrate the efficiency of this approach in the calculation of the excitation energies of the LiH molecule on an IBM Quantum computer.
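For orientation, the classical equation-of-motion approach referenced above determines the excitation energies $E_{0n} = E_n - E_0$ from a generalized eigenvalue problem over a chosen set of excitation operators $\hat{E}_\mu$. A minimal sketch under standard EOM conventions (illustrative notation, not necessarily the paper's exact formulation) is

$$ \begin{pmatrix} M & Q \\ Q^{*} & M^{*} \end{pmatrix} \begin{pmatrix} X_n \\ Y_n \end{pmatrix} = E_{0n} \begin{pmatrix} V & W \\ -W^{*} & -V^{*} \end{pmatrix} \begin{pmatrix} X_n \\ Y_n \end{pmatrix}, \qquad M_{\mu\nu} = \langle 0 | \big[ \hat{E}_\mu^{\dagger}, \hat{H}, \hat{E}_\nu \big] | 0 \rangle , $$

where the blocks $M$, $Q$, $V$, $W$ are (double-)commutator expectation values over the ground state $|0\rangle$. On a quantum computer these expectation values can be measured on the VQE-prepared ground state, while the resulting small eigenvalue problem is solved classically.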


Read also

The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally-measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
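To make the mechanism in the preceding abstract concrete, below is a minimal numerical sketch of the Pauli-twirling step at the core of randomized compiling. It is a stand-alone illustration rather than the authors' implementation: the helper name twirl_cnot and the single-CNOT setting are assumptions, and a full randomized-compiling protocol additionally randomizes every hard cycle of a circuit and averages results over many random compilations.

"""Sketch: dress a CNOT with random Paulis chosen so the ideal circuit is
unchanged, which averages coherent errors on the CNOT into stochastic noise."""
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS_1Q = {"I": I2, "X": X, "Y": Y, "Z": Z}

# All 16 two-qubit Pauli labels and matrices.
PAULIS_2Q = {a + b: np.kron(PAULIS_1Q[a], PAULIS_1Q[b])
             for a, b in itertools.product("IXYZ", repeat=2)}

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def twirl_cnot(rng):
    """Pick a random two-qubit Pauli to insert before the CNOT and return the
    Pauli that must follow it so the ideal operation is unchanged."""
    before = rng.choice(list(PAULIS_2Q))
    # CNOT is Clifford, so conjugating a Pauli gives another Pauli up to a sign.
    conj = CNOT @ PAULIS_2Q[before] @ CNOT.conj().T
    for label, mat in PAULIS_2Q.items():
        if np.allclose(conj, mat) or np.allclose(conj, -mat):
            return before, label
    raise RuntimeError("conjugation left the Pauli group (should not happen)")

rng = np.random.default_rng(0)
for _ in range(4):
    before, after = twirl_cnot(rng)
    net = PAULIS_2Q[after] @ CNOT @ PAULIS_2Q[before]
    # The dressed cycle equals the bare CNOT up to an irrelevant global sign.
    assert np.allclose(net, CNOT) or np.allclose(net, -CNOT)
    print(f"twirl: apply {before} before and {after} after the CNOT")

Averaging a noisy CNOT over such random dressings tailors its coherent error into an effective stochastic Pauli channel, which is what makes algorithm performance predictable from error rates measured by protocols such as cycle benchmarking.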
The discovery of topological order has revolutionized the understanding of quantum matter in modern physics and provided the theoretical foundation for many quantum error correcting codes. Realizing topologically ordered states has proven to be extremely challenging in both condensed matter and synthetic quantum systems. Here, we prepare the ground state of the toric code Hamiltonian using an efficient quantum circuit on a superconducting quantum processor. We measure a topological entanglement entropy near the expected value of $\ln 2$, and simulate anyon interferometry to extract the braiding statistics of the emergent excitations. Furthermore, we investigate key aspects of the surface code, including logical state injection and the decay of the non-local order parameter. Our results demonstrate the potential for quantum processors to provide key insights into topological quantum matter and quantum error correction.
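For context, the topological entanglement entropy quoted above is commonly extracted from subsystem von Neumann entropies via the Kitaev-Preskill combination (one standard convention, not necessarily the exact construction used in this work):

$$ S_{\mathrm{topo}} = S_A + S_B + S_C - S_{AB} - S_{BC} - S_{CA} + S_{ABC} , $$

in which the boundary-law contributions cancel and only the universal constant survives; for the toric code ground state its magnitude is $\ln 2$.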
We present a quantum kernel method for high-dimensional data analysis using Google's universal quantum processor, Sycamore. This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features with no dimensionality reduction and without vanishing kernel elements. Instead of using a synthetic dataset of low dimension or pre-processing the data with a classical machine learning algorithm to reduce the data dimension, this experiment demonstrates that machine learning with real, high-dimensional data is possible using a quantum processor; but it requires careful attention to shot statistics and mean kernel element size when constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify 67-dimensional data - significantly higher dimensionality than the largest prior quantum kernel experiments - resulting in classification accuracy that is competitive with noiseless simulation and comparable classical techniques.
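As a toy illustration of the kernel pipeline described in the preceding abstract (a sketch only: the stand-in feature map, dataset, and dimensions below are hypothetical, not the 17-qubit Sycamore circuit or the supernova data), the quantum processor's role is to estimate a Gram matrix of state overlaps, which a classical support vector machine then consumes as a precomputed kernel:

import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    # Hypothetical stand-in feature map: product-state angle encoding, one
    # qubit per feature (the real experiment uses a hardware circuit instead).
    psi = np.array([1.0])
    for xi in x:
        psi = np.kron(psi, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return psi

def kernel_matrix(XA, XB):
    # K[i, j] = |<phi(a_i)|phi(b_j)>|^2; on hardware this overlap is estimated
    # from measurement shots rather than computed exactly.
    SA = np.array([feature_state(a) for a in XA])
    SB = np.array([feature_state(b) for b in XB])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(0, np.pi, size=(40, 4))
y_train = (X_train.sum(axis=1) > 2 * np.pi).astype(int)
X_test = rng.uniform(0, np.pi, size=(10, 4))

# The classical SVM only ever sees the precomputed (quantum) Gram matrices.
clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
print(clf.predict(kernel_matrix(X_test, X_train)))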
Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite lifetime. Hybrid algorithms leveraging classical resources have demonstrated promising initial results in the efficient calculation of Hamiltonian ground states--an important eigenvalue problem in the physical sciences that is often classically intractable. In these protocols, a Hamiltonian is parsed and evaluated term-wise with a shallow quantum circuit, and the resulting energy minimized using classical resources. This reduces the number of consecutive logical operations that must be performed on the quantum hardware before the onset of decoherence. We demonstrate a complete implementation of the Variational Quantum Eigensolver (VQE), augmented with a novel Quantum Subspace Expansion, to calculate the complete energy spectrum of the H2 molecule with near chemical accuracy. The QSE also enables the mitigation of incoherent errors, potentially allowing the implementation of larger-scale algorithms without complex quantum error correction techniques.
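As a brief sketch of the subspace-expansion idea referenced above (standard formulation with illustrative notation, not necessarily the paper's): given the VQE ground state $|\psi\rangle$ and a set of expansion operators $\{\hat{O}_i\}$, one measures the matrix elements

$$ \tilde{H}_{ij} = \langle \psi | \hat{O}_i^{\dagger} \hat{H} \hat{O}_j | \psi \rangle , \qquad \tilde{S}_{ij} = \langle \psi | \hat{O}_i^{\dagger} \hat{O}_j | \psi \rangle , $$

and then solves the small classical generalized eigenvalue problem $\tilde{H} C = \tilde{S} C E$. Its eigenvalues approximate the low-lying spectrum, and the projection onto the measured subspace is what helps suppress some incoherent errors.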
For variational algorithms on near-term quantum computing hardware, it is highly desirable to use very accurate ansatze with low implementation cost. Recent studies have shown that the antisymmetrized geminal power (AGP) wavefunction can be an excellent starting point for ansatze describing systems with strong pairing correlations, such as those occurring in superconductors. In this work, we show how AGP can be efficiently implemented on a quantum computer with circuit depth, number of CNOTs, and number of measurements being linear in system size. Using AGP as the initial reference, we propose and implement a unitary correlator on AGP and benchmark it on the ground state of the pairing Hamiltonian. The results show highly accurate ground state energies in all correlation regimes of this model Hamiltonian.
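For reference, the antisymmetrized geminal power with $N$ pairs distributed over levels $p$ is conventionally written (a common convention; the paper's notation may differ) as

$$ |\mathrm{AGP}\rangle \;\propto\; \big( \Gamma^{\dagger} \big)^{N} |\mathrm{vac}\rangle , \qquad \Gamma^{\dagger} = \sum_{p} \eta_p \, c_{p\uparrow}^{\dagger} c_{p\downarrow}^{\dagger} , $$

and the pairing (reduced BCS) Hamiltonian used as the benchmark typically takes the form $\hat{H} = \sum_{p} \epsilon_p \hat{N}_p - G \sum_{pq} P_p^{\dagger} P_q$ with $P_p^{\dagger} = c_{p\uparrow}^{\dagger} c_{p\downarrow}^{\dagger}$.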