
What limits the simulation of quantum computers?

Posted by: Xavier Waintal
Publication date: 2020
Research field: Physics
Paper language: English





It is imperative that useful quantum computers be very difficult to simulate classically; otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits $N$ or the depth $D$ of the circuit. Real quantum computing devices, however, are characterized by an exponentially decaying fidelity $\mathcal{F} \sim (1-\epsilon)^{ND}$ with an error rate $\epsilon$ per operation as small as $\approx 1\%$ for current devices. In this work, we demonstrate that real quantum computers can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wavefunctions using matrix product states (MPS), which capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate $\epsilon$ so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with $N$ and $D$. We illustrate our algorithms with simulations of random circuits for qubits connected in both one and two dimensional lattices. We find that $\epsilon$ can be decreased at a polynomial cost in computing power down to a minimum error $\epsilon_\infty$. Getting below $\epsilon_\infty$ requires computing resources that increase exponentially with $\epsilon_\infty/\epsilon$. For a two dimensional array of $N=54$ qubits and a circuit with Control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor three for similar computing time.
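To make the compression step concrete, here is a minimal NumPy sketch (not the authors' code; the function names and the bond-dimension cap chi_max are illustrative) of applying a two-qubit gate to an MPS and truncating the resulting bond, with the discarded weight playing the role of the per-gate error $\epsilon$:

```python
import numpy as np

def zero_state_mps(n_qubits):
    """MPS for |0...0>: one (left, physical, right) = (1, 2, 1) tensor per qubit."""
    site = np.zeros((1, 2, 1), dtype=complex)
    site[0, 0, 0] = 1.0
    return [site.copy() for _ in range(n_qubits)]

def apply_two_qubit_gate(mps, gate, i, chi_max):
    """Apply a 4x4 gate to neighbouring qubits (i, i+1), then truncate the bond
    to chi_max. Returns the discarded weight, i.e. the local fidelity loss."""
    A, B = mps[i], mps[i + 1]
    l, _, m = A.shape
    _, _, r = B.shape
    # Merge the two sites into a (left, physical-pair, right) block and apply the gate.
    theta = np.tensordot(A, B, axes=(2, 0)).reshape(l, 4, r)
    theta = np.tensordot(gate, theta, axes=(1, 1)).transpose(1, 0, 2)
    # Split back with an SVD, keeping at most chi_max singular values.
    u, s, vh = np.linalg.svd(theta.reshape(l * 2, 2 * r), full_matrices=False)
    keep = min(chi_max, len(s))
    discarded = float(np.sum(s[keep:] ** 2) / np.sum(s ** 2))
    s = s[:keep] / np.linalg.norm(s[:keep])      # renormalise after truncation
    mps[i] = u[:, :keep].reshape(l, 2, keep)
    mps[i + 1] = (np.diag(s) @ vh[:keep, :]).reshape(keep, 2, r)
    return discarded

# Example: a CZ gate on a product state costs nothing (no entanglement yet).
cz = np.diag([1, 1, 1, -1]).astype(complex)
mps = zero_state_mps(4)
print(apply_two_qubit_gate(mps, cz, 1, chi_max=8))   # 0.0
```

Sweeping such gate applications over the lattice and accumulating the discarded weights gives a running estimate of the overall state fidelity, which is roughly the quantity the abstract compares to $(1-\epsilon)^{ND}$.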




Read also

Traditional algorithms for simulating quantum computers on classical ones require an exponentially large amount of memory, and so typically cannot simulate general quantum circuits with more than about 30 or so qubits on a typical PC-scale platform with only a few gigabytes of main memory. However, more memory-efficient simulations are possible, requiring only polynomial or even linear space in the size of the quantum circuit being simulated. In this paper, we describe one such technique, which was recently implemented at FSU in the form of a C++ program called SEQCSim, which we are releasing publicly. We also discuss the potential benefits of this simulation in quantum computing research and education, and outline some possible directions for further progress.
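For readers unfamiliar with how a simulation can use only linear space, the following is a generic path-sum sketch in Python (not SEQCSim itself, which is a C++ program): it computes a single output amplitude by recursing backwards through the gate list, trading memory for time.

```python
import numpy as np

def amplitude(gates, x, y):
    """<y| U_D ... U_1 |x> for gates = [(matrix, qubit_list), ...].
    Memory is linear in the circuit size; time may grow exponentially."""
    def amp(k, bits):
        if k == 0:
            return 1.0 + 0.0j if bits == x else 0.0j
        gate, qubits = gates[k - 1]
        nloc = len(qubits)
        # Row of the gate is fixed by the output bits on its qubits
        # (first listed qubit is the most significant bit).
        row = sum(bits[q] << (nloc - 1 - i) for i, q in enumerate(qubits))
        total = 0.0j
        for col in range(2 ** nloc):          # sum over input configurations
            coeff = gate[row, col]
            if coeff == 0.0:
                continue
            prev = list(bits)
            for i, q in enumerate(qubits):
                prev[q] = (col >> (nloc - 1 - i)) & 1
            total += coeff * amp(k - 1, tuple(prev))
        return total
    return amp(len(gates), tuple(y))

# Example: Hadamard then CNOT on |00> prepares a Bell state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
circuit = [(H, [0]), (CNOT, [0, 1])]
print(amplitude(circuit, (0, 0), (1, 1)))   # ~0.7071
```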
Quantum simulation of quantum field theory is a flagship application of quantum computers that promises to deliver capabilities beyond classical computing. The realization of quantum advantage will require methods to accurately predict error scaling as a function of the resolution and parameters of the model that can be implemented efficiently on quantum hardware. In this paper, we address the representation of lattice bosonic fields in a discretized field amplitude basis, develop methods to predict error scaling, and present efficient qubit implementation strategies. A low-energy subspace of the bosonic Hilbert space, defined by a boson occupation cutoff, can be represented with exponentially good accuracy by a low-energy subspace of a finite size Hilbert space. The finite representation construction and the associated errors are directly related to the accuracy of the Nyquist-Shannon sampling and the finite Fourier transforms of the boson number states in the field and the conjugate-field bases. We analyze the relation between the boson mass, the discretization parameters used for wavefunction sampling and the finite representation size. Numerical simulations of small size $\Phi^4$ problems demonstrate that the boson mass optimizing the sampling of the ground state wavefunction is a good approximation to the optimal boson mass yielding the minimum low-energy subspace size. However, we find that accurate sampling of general wavefunctions does not necessarily result in accurate representation. We develop methods for validating and adjusting the discretization parameters to achieve more accurate simulations.
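To make the sampling criterion concrete, here is a small numerical illustration (under assumed unit boson mass and frequency; it is not the paper's construction): sample the first few harmonic-oscillator number states on a finite field-amplitude grid and measure how far their discrete overlaps deviate from orthonormality.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def number_state(n, phi):
    """Harmonic-oscillator eigenfunction psi_n(phi) with m = omega = 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / np.sqrt(2.0 ** n * factorial(n) * np.sqrt(np.pi))
    return norm * hermval(phi, coeffs) * np.exp(-phi ** 2 / 2)

n_max = 8                  # boson occupation cutoff of the low-energy subspace
phi_max, n_grid = 6.0, 64  # field-amplitude range and number of grid points
phi = np.linspace(-phi_max, phi_max, n_grid)
dphi = phi[1] - phi[0]

states = np.array([number_state(n, phi) for n in range(n_max)])
gram = states @ states.T * dphi               # discrete overlap matrix
print(np.max(np.abs(gram - np.eye(n_max))))   # small when the grid samples well
```

Shrinking the grid (fewer points or a narrower field range) relative to the occupation cutoff makes the deviation grow, which is the sampling-versus-representation trade-off the abstract discusses.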
Intermediate-scale quantum technologies provide unprecedented opportunities for scientific discoveries while posing the challenge of identifying important problems that can take advantage of them through algorithmic innovations. A major open problem in quantum many-body physics is the table-top generation and detection of emergent excitations analogous to gravitons -- the elusive mediators of gravitational force in a quantum theory of gravity. In solid-state materials, fractional quantum Hall phases are one of the leading platforms for realizing graviton-like excitations. However, their direct observation remains an experimental challenge. Here, we generate these excitations on the IBM quantum processor. We first identify an effective one-dimensional model that captures the geometric properties and graviton dynamics of fractional quantum Hall states. We then develop an efficient, optimal-control-based variational quantum algorithm to simulate geometric quench and the subsequent graviton dynamics, which we successfully implement on the IBM quantum computer. Our results open a new avenue for studying the emergence of gravitons in a new class of tractable models that lend themselves to direct implementations on the existing quantum hardware.
Tobias J. Osborne, 2019
We describe a general procedure to give effective continuous descriptions of quantum lattice systems in terms of quantum fields. There are two key novelties of our method: firstly, it is framed in the hamiltonian setting and applies equally to distinguishable quantum spins, bosons, and fermions and, secondly, it works for arbitrary variational tensor network states and can easily produce computable non-gaussian quantum field states. Our construction extends the mean-field fluctuation formalism of Hepp and Lieb (developed later by Verbeure and coworkers) to identify emergent continuous large-scale degrees of freedom - the continuous degrees of freedom are not identified beforehand. We apply the construction to tensor network states, including matrix product states and projected entangled-pair states, where we recover their recently introduced continuous counterparts, and also for tree tensor networks and the multi-scale entanglement renormalisation ansatz. Finally, extending the continuum limit to include dynamics we obtain a strict light cone for the propagation of information.
Quantum simulation represents the most promising quantum application to demonstrate quantum advantage on near-term noisy intermediate-scale quantum (NISQ) computers, yet available quantum simulation algorithms are prone to errors and thus difficult to realize. Herein, we propose a novel scheme to utilize intrinsic gate errors of NISQ devices to enable controllable simulation of open quantum system dynamics without ancillary qubits or explicit bath engineering, thus turning unwanted quantum noises into useful quantum resources. Specifically, we simulate energy transfer process in a photosynthetic dimer system on IBM-Q cloud. By employing designed decoherence-inducing gates, we show that quantum dissipative dynamics can be simulated efficiently across coherent-to-incoherent regimes with results comparable to those of the numerically-exact classical method. Moreover, we demonstrate a calibration routine that enables consistent and predictive simulations of open-quantum system dynamics in the intermediate coupling regime. This work provides a new direction for quantum advantage in the NISQ era.
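As context for what a classical reference calculation of such dissipative dynamics looks like (this is not the authors' device scheme or calibration routine, and the parameters are illustrative), a minimal Lindblad dephasing model of a two-site dimer already shows the coherent-to-incoherent crossover as the dephasing rate is raised:

```python
import numpy as np

# Illustrative dimer parameters (not from the paper): site energy gap,
# electronic coupling, and pure-dephasing rate.
eps, J, gamma = 1.0, 0.5, 0.2
H = np.array([[eps / 2, J], [J, -eps / 2]], dtype=complex)
L = np.diag([1.0, -1.0]).astype(complex)          # dephasing (sigma_z) operator

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # excitation starts on site 1
dt, steps = 0.01, 2000
site1_population = []
for _ in range(steps):
    unitary = -1j * (H @ rho - rho @ H)
    dissipative = gamma * (L @ rho @ L - rho)     # L^2 = identity here
    rho = rho + dt * (unitary + dissipative)      # simple Euler step
    site1_population.append(rho[0, 0].real)
# Larger gamma damps the oscillation faster: coherent -> incoherent transfer.
```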