
Simulating Large Quantum Circuits on a Small Quantum Computer

Submitted by Xiaodi Wu
Publication date: 2019
Research field: Physics
Paper language: English





Limited quantum memory is one of the most important constraints for near-term quantum devices. Understanding whether a small quantum computer can simulate a larger quantum system, or execute an algorithm requiring more qubits than available, is of both theoretical and practical importance. In this Letter, we introduce cluster parameters $K$ and $d$ of a quantum circuit. The tensor network of such a circuit can be decomposed into clusters of size at most $d$ with at most $K$ qubits of inter-cluster quantum communication. We propose a cluster simulation scheme that can simulate any $(K,d)$-clustered quantum circuit on a $d$-qubit machine in time roughly $2^{O(K)}$, with further speedups possible when taking more fine-grained circuit structure into account. We show how our scheme can be used to simulate clustered quantum systems -- such as large molecules -- that can be partitioned into multiple significantly smaller clusters with weak interactions among them. By using a suitable clustered ansatz, we also experimentally demonstrate that a variational quantum eigensolver can still achieve the desired performance for estimating the energy of the BeH$_2$ molecule while running on a physical quantum device with half the number of required qubits.
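To make the cut-and-recombine idea concrete, here is a toy numpy sketch (an illustrative example of the general principle, not the authors' scheme): a single cross-cluster CZ gate is expanded as a sum of boundary-local terms, $CZ = (I\otimes I + I\otimes Z + Z\otimes I - Z\otimes Z)/2$, so an observable of a 4-qubit circuit can be recovered from 2-qubit cluster simulations. The cluster gates and observables below are hypothetical choices made only for the demonstration.

```python
import numpy as np

# Single-qubit building blocks.
I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])
X  = np.array([[0.0, 1.0], [1.0, 0.0]])
H  = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Two 2-qubit clusters: A = (a0, a1), B = (b0, b1); a1 and b0 are the
# boundary qubits joined by the cross-cluster CZ.  The gates are arbitrary.
U_A = kron(H, H)
U_B = kron(H, I2)
psi_A = U_A @ np.array([1, 0, 0, 0], dtype=complex)   # U_A |00>
psi_B = U_B @ np.array([1, 0, 0, 0], dtype=complex)   # U_B |00>

# Final observable O_A (x) O_B: Z on a1 and X on b0.
O_A = kron(I2, Z)
O_B = kron(X, I2)

# Cross-cluster CZ written as a sum of boundary-local terms.
terms = [(I2, I2, +1.0), (I2, Z, +1.0), (Z, I2, +1.0), (Z, Z, -1.0)]

# Each term only requires a 2-qubit ("d-qubit") simulation per cluster.
branch_A = [kron(I2, P) @ psi_A for (P, _, _) in terms]   # P acts on a1
branch_B = [kron(Q, I2) @ psi_B for (_, Q, _) in terms]   # Q acts on b0

# Recombine: the full expectation value is a double sum of local overlaps.
expval = 0.0
for j, (_, _, cj) in enumerate(terms):
    for k, (_, _, ck) in enumerate(terms):
        expval += 0.25 * cj * ck \
            * (branch_A[j].conj() @ O_A @ branch_A[k]) \
            * (branch_B[j].conj() @ O_B @ branch_B[k])

# Reference: brute-force 4-qubit simulation of the same circuit.
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
full = kron(I2, CZ, I2) @ kron(U_A, U_B)          # CZ couples a1 and b0
psi = full @ np.eye(16, dtype=complex)[:, 0]      # act on |0000>
ref = psi.conj() @ kron(O_A, O_B) @ psi

print(np.real(expval), np.real(ref))              # the two values agree
```

In this toy setting the number of branch pairs to recombine grows exponentially with the number of cut qubits, which is the $2^{O(K)}$-type overhead described in the abstract.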


Related research

We present efficient quantum algorithms for simulating time-dependent Hamiltonian evolution of general input states using an oracular model of a quantum computer. Our algorithms use either constant or adaptively chosen time steps and are significant because they are the first to have time complexities comparable to the best known methods for simulating time-independent Hamiltonian evolution, provided appropriate smoothness criteria on the Hamiltonian are satisfied. We provide a thorough cost analysis of these algorithms that considers discretization errors in both the time and the representation of the Hamiltonian. In addition, we provide the first upper bounds for the error in Lie-Trotter-Suzuki approximations to unitary evolution operators that use adaptively chosen time steps.
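As a rough illustration of the non-adaptive version of this idea (a minimal sketch of our own, not the paper's algorithms), the snippet below evolves a single-qubit time-dependent Hamiltonian $H(t) = X + f(t)\,Z$ by freezing the drive at the midpoint of each slice and applying a first-order Lie-Trotter step; the Hamiltonian, the drive $f$, and the step sizes are assumptions chosen only for the demonstration.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical time-dependent single-qubit Hamiltonian H(t) = X + f(t) Z.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
f = lambda t: np.sin(t)            # assumed smooth drive

def trotter_evolve(psi, T, dt):
    """First-order Lie-Trotter evolution with fixed step size dt,
    freezing the drive f at the midpoint of each slice."""
    t = 0.0
    while t < T - 1e-12:
        h = min(dt, T - t)
        mid = t + h / 2.0
        psi = expm(-1j * h * X) @ (expm(-1j * h * f(mid) * Z) @ psi)
        t += h
    return psi

psi0 = np.array([1.0, 0.0], dtype=complex)
coarse = trotter_evolve(psi0, T=2.0, dt=0.2)
fine   = trotter_evolve(psi0, T=2.0, dt=0.002)
print(abs(coarse.conj() @ fine))   # overlap approaches 1 as dt shrinks
```

An adaptive variant would instead shrink the step wherever the drive varies rapidly, concentrating effort where the discretization error is largest.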
Daochen Wang (2019)
In a recent breakthrough, Bravyi, Gosset and König (BGK) [Science, 2018] proved that simulating constant-depth quantum circuits requires classical circuits of depth $\Omega(\log n)$. In our paper, we first formalise their notion of simulation, which we call possibilistic simulation. Then, from well-known results, we deduce that their circuits can be simulated in depth $O(\log^{2} n)$. Separately, we construct explicit classical circuits that can simulate any depth-$d$ quantum circuit with Clifford and $t$ $T$-gates in depth $O(d+t)$. Our classical circuits use $\{\text{NOT, AND, OR}\}$ gates of fan-in $\leq 2$.
Algorithms for triangle-finding, the smallest nontrivial instance of the k-clique problem, have been proposed for quantum computers. Still, those algorithms assume the use of fixed-access-time quantum RAM (QRAM). We present a practical gate-based approach to both the triangle-finding problem and its NP-hard k-clique generalization. We examine both constant factors for near-term implementation on a Noisy Intermediate-Scale Quantum (NISQ) device, and the scaling of the problem to evaluate long-term use of quantum computers. We compare the time complexity and circuit practicality of the theoretical approach and actual implementation. We propose and apply two different strategies to the k-clique problem, examining the circuit size of Qiskit implementations. We analyze our implementations by simulating triangle finding with various error models, observing the effect on the amplitude of the correct answer, and compare to execution on six real IBMQ machines. Finally, we estimate the date when the methods proposed can run effectively on an actual device, based on IBM's quantum volume exponential growth forecast and the results of our error analysis.
Jose Luis Rosales (2015)
Modern cryptography is largely based on complexity assumptions; for example, the ubiquitous RSA is based on the supposed complexity of the prime factorization problem. Thus, it is of fundamental importance to understand how a quantum computer would eventually weaken these algorithms. In this paper, one follows Feynman's prescription for a computer to simulate the physics corresponding to the algorithm of factoring a large number $N$ into primes. Using Dirac-Jordan transformation theory, one translates factorization into the language of quantum Hermitian operators acting on the vectors of the Hilbert space. This leads to obtaining the ensemble of factorizations of $N$ in terms of the Euler function $\varphi(N)$, that is quantized. On the other hand, considering $N$ as a parameter of the computer, a Quantum Mechanical Prime Counting Function $\pi_{QM}(x)$, where $x$ factorizes $N$, is derived. This function converges to $\pi(x)$ when $N \gg x$. It has no counterpart in analytic number theory and its derivation relies on semiclassical quantization alone.
Maksim Levental (2021)
Most research in quantum computing today is performed against simulations of quantum computers rather than true quantum computers. Simulating a quantum computer entails implementing all of the unitary operators corresponding to the quantum gates as tensors. For high numbers of qubits, performing tensor multiplications for these simulations becomes quite expensive, since $N$-qubit gates correspond to $2^{N}$-dimensional tensors. One way to accelerate such a simulation is to use field-programmable gate array (FPGA) hardware to efficiently compute the matrix multiplications. Though FPGAs can efficiently perform tensor multiplications, they are memory bound, having relatively small block random access memory. One way to potentially reduce the memory footprint of a quantum computing system is to represent it as a tensor network; tensor networks are a formalism for representing compositions of tensors wherein economical tensor contractions are readily identified. Thus we explore tensor networks as a means of reducing the memory footprint of quantum computing systems and broadly accelerating simulations of such systems.
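A minimal sketch of the tensor picture described here (an assumed example, not the paper's FPGA pipeline): the $n$-qubit statevector is stored as a rank-$n$ tensor of shape $(2,\dots,2)$, and a $k$-qubit gate is applied by contracting only its $k$ target legs, so no $2^{n} \times 2^{n}$ matrix is ever materialized.

```python
import numpy as np

def apply_gate(state, gate, targets):
    """Apply a k-qubit gate (a 2^k x 2^k matrix) to the target legs of a
    statevector stored as a rank-n tensor of shape (2,) * n."""
    k = len(targets)
    gate = gate.reshape((2,) * (2 * k))                 # rank-2k tensor
    state = np.tensordot(gate, state,
                         axes=(list(range(k, 2 * k)), list(targets)))
    # tensordot puts the gate's output legs first; move them back in place.
    return np.moveaxis(state, list(range(k)), list(targets))

n = 3
state = np.zeros((2,) * n, dtype=complex)
state[(0,) * n] = 1.0                                   # |000>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.eye(4, dtype=complex)[:, [0, 1, 3, 2]]        # |c,t> -> |c, t xor c>

state = apply_gate(state, H, [0])
state = apply_gate(state, CNOT, [0, 1])
print(state.reshape(-1))   # Bell pair on qubits 0,1: (|000> + |110>) / sqrt(2)
```

Contracting such a network in a good order is what keeps intermediate tensors, and hence memory, small, which is the property the tensor-network approach above aims to exploit.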