
Quantum algorithm for time-dependent Hamiltonian simulation by permutation expansion

Posted by Yi-Hsiang Chen
Publication date: 2021
Research field: Physics
Paper language: English





We present a quantum algorithm for the dynamical simulation of time-dependent Hamiltonians. Our method involves expanding the interaction-picture Hamiltonian as a sum of generalized permutations, which leads to an integral-free Dyson series of the time-evolution operator. Under this representation, we perform a quantum simulation for the time-evolution operator by means of the linear combination of unitaries technique. We optimize the time steps of the evolution based on the Hamiltonian's dynamical characteristics, leading to a gate count with $L^1$-norm-like scaling with respect only to the norm of the interaction Hamiltonian, rather than that of the total Hamiltonian. We demonstrate that the cost of the algorithm is independent of the Hamiltonian's frequencies, implying its advantage for systems with highly oscillating components, and that for time-decaying systems the cost does not scale with the total evolution time asymptotically. In addition, our algorithm retains the near-optimal $\log(1/\epsilon)/\log\log(1/\epsilon)$ scaling with simulation error $\epsilon$.
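The building block of the expansion is easy to see in matrix form: every Pauli string is a generalized permutation matrix (exactly one nonzero entry per row and column), and for a time-independent Hamiltonian the Dyson series reduces to a Taylor series whose tail decays super-exponentially in the truncation order, which is where the $\log(1/\epsilon)/\log\log(1/\epsilon)$ scaling originates. The NumPy sketch below is our own illustration of these two ingredients, not the authors' implementation; the toy Hamiltonian and its coefficients are made up.

```python
import math
import numpy as np
from scipy.linalg import expm

# Pauli matrices: every Pauli string is a generalized permutation matrix
# (exactly one nonzero entry per row and column).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A toy 2-qubit Hamiltonian written as a sum of generalized permutations.
terms = [(0.5, np.kron(Z, Z)), (0.3, np.kron(X, I)),
         (0.3, np.kron(I, X)), (0.2, np.kron(Y, Y))]
for _, P in terms:
    assert all(np.count_nonzero(row) == 1 for row in P)

H = sum(c * P for c, P in terms)
t = 1.0
U_exact = expm(-1j * H * t)

# Truncated series for the propagator (for time-independent H the Dyson
# series reduces to Taylor): the error decays super-exponentially in K.
for K in range(1, 9):
    U_K = sum(np.linalg.matrix_power(-1j * H * t, k) / math.factorial(k)
              for k in range(K + 1))
    print(K, np.linalg.norm(U_K - U_exact, 2))
```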




Read also

We study how parallelism can speed up quantum simulation. A parallel quantum algorithm is proposed for simulating the dynamics of a large class of Hamiltonians with good sparse structures, called uniform-structured Hamiltonians, including various Hamiltonians of practical interest such as local Hamiltonians and Pauli sums. Given oracle access to the target sparse Hamiltonian, in both query and gate complexity, the running time of our parallel quantum simulation algorithm measured by the quantum circuit depth has a doubly (poly-)logarithmic dependence $\operatorname{polylog}\log(1/\epsilon)$ on the simulation precision $\epsilon$. This presents an exponential improvement over the $\operatorname{polylog}(1/\epsilon)$ dependence of previous optimal sparse Hamiltonian simulation algorithms without parallelism. To obtain this result, we introduce a novel notion of parallel quantum walk, based on Childs' quantum walk. The target evolution unitary is approximated by a truncated Taylor series, which is obtained by combining these quantum walks in a parallel way. A lower bound $\Omega(\log\log(1/\epsilon))$ is established, showing that the $\epsilon$-dependence of the gate depth achieved in this work cannot be significantly improved. Our algorithm is applied to simulating three physical models: the Heisenberg model, the Sachdev-Ye-Kitaev model, and a quantum chemistry model in second quantization. By explicitly calculating the gate complexity for implementing the oracles, we show that on all these models the total gate depth of our algorithm has a $\operatorname{polylog}\log(1/\epsilon)$ dependence in the parallel setting.
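The doubly logarithmic dependence rests on a counting fact shared with the truncated-Taylor-series approach: if each evolution segment has size at most $\ln 2$, the truncation order $K$ needed for tail error $\epsilon$ grows like $\log(1/\epsilon)/\log\log(1/\epsilon)$. A quick numerical check (ours, with the standard segment size assumed):

```python
import math

def taylor_tail(x, K):
    # Tail of the exponential series sum_{k>K} x^k / k! (truncated far out).
    return sum(x**k / math.factorial(k) for k in range(K + 1, K + 60))

x = math.log(2)  # per-segment evolution size used in Taylor-series LCU
for eps in [1e-3, 1e-6, 1e-9, 1e-12, 1e-15]:
    K = next(k for k in range(1, 60) if taylor_tail(x, k) <= eps)
    denom = math.log(1 / eps) / math.log(math.log(1 / eps))
    print(f"eps={eps:.0e}  K={K:2d}  K / [log(1/eps)/loglog(1/eps)] = {K/denom:.2f}")
```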
Dong An, Di Fang, Lin Lin (2020)
The accuracy of quantum dynamics simulation is usually measured by the error of the unitary evolution operator in the operator norm, which in turn depends on a certain norm of the Hamiltonian. For unbounded operators, after suitable discretization, the norm of the Hamiltonian can be very large, which significantly increases the simulation cost. However, the operator norm measures the worst-case error of the quantum simulation, while practical simulation concerns the error with respect to a given initial vector at hand. We demonstrate that, under suitable assumptions on the Hamiltonian and the initial vector, if the error is measured in terms of the vector norm, the computational cost of Trotter-type methods may not increase at all as the norm of the Hamiltonian increases. In this sense, our result outperforms all previous error bounds in the quantum simulation literature. Our result extends that of [Jahnke, Lubich, BIT Numer. Math. 2000] to the time-dependent setting. We also clarify the existence and the importance of commutator scalings of Trotter and generalized Trotter methods for time-dependent Hamiltonian simulations.
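The distinction is easy to reproduce numerically. In the sketch below (our own toy setup, with a hypothetical grid, potential, and initial state), the kinetic term's norm grows as the grid is refined, and the first-order Trotter error in the operator norm grows with it, while the error measured on a fixed smooth initial vector stays roughly flat:

```python
import numpy as np
from scipy.linalg import expm

# 1D Schrodinger operator H = A + B on a periodic grid: A is the discrete
# Laplacian (norm grows like 1/h^2 under refinement), B a smooth potential.
def trotter_errors(n):
    h = 2 * np.pi / n
    x = h * np.arange(n)
    lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
           + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / h**2
    A = -0.5 * lap                       # kinetic term, unbounded in the limit
    B = np.diag(1 + np.cos(x))           # bounded smooth potential
    dt = 0.01
    U_exact = expm(-1j * (A + B) * dt)
    U_trot = expm(-1j * A * dt) @ expm(-1j * B * dt)   # first-order Trotter
    psi = np.exp(-10 * (x - np.pi) ** 2).astype(complex)
    psi /= np.linalg.norm(psi)           # smooth (low-frequency) initial state
    E = U_trot - U_exact
    return np.linalg.norm(E, 2), np.linalg.norm(E @ psi)

for n in [32, 64, 128, 256]:
    op_err, vec_err = trotter_errors(n)
    print(f"n={n:4d}  operator-norm error {op_err:.2e}  vector-norm error {vec_err:.2e}")
```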
Amir Kalev, Itay Hen (2020)
We propose an efficient quantum algorithm for simulating the dynamics of general Hamiltonian systems. Our technique is based on a power series expansion of the time-evolution operator in its off-diagonal terms. The expansion decouples the dynamics due to the diagonal component of the Hamiltonian from the dynamics generated by its off-diagonal part, which we encode using the linear combination of unitaries technique. Our method has an optimal dependence on the desired precision and, as we illustrate, generally requires considerably fewer resources than the current state of the art. We provide an analysis of resource costs for several sample models.
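The decoupling can be checked directly on a small dense matrix. In the interaction picture generated by the diagonal part $D$, the off-diagonal part evolves as $V_I(t) = e^{iDt} V e^{-iDt}$, which merely attaches phases $e^{i(d_j - d_k)t}$ to the entries of $V$; the sketch below (our illustration on arbitrary random matrices, not the paper's algorithm) integrates the interaction-picture equation and recovers the exact propagator:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 8

# Split a random Hermitian H into diagonal D and off-diagonal V.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2
D = np.diag(np.diag(H))
V = H - D
d = np.diag(D).real

t, steps = 1.0, 2000
dt = t / steps

# Integrate i dU_I/dt = V_I(t) U_I with an exponential midpoint rule; the
# conjugation by e^{iDt} is cheap because D is diagonal.
U_I = np.eye(n, dtype=complex)
for s in range(steps):
    tm = (s + 0.5) * dt
    phase = np.exp(1j * d * tm)
    V_I = phase[:, None] * V * phase.conj()[None, :]
    U_I = expm(-1j * V_I * dt) @ U_I

U_recovered = expm(-1j * D * t) @ U_I    # undo the interaction picture
print("deviation:", np.linalg.norm(U_recovered - expm(-1j * H * t), 2))
```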
Hamiltonian simulation is one of the most important problems in quantum computation, and quantum singular value transformation (QSVT) is an efficient way to simulate a general class of Hamiltonians. However, the QSVT circuit typically involves multiple ancilla qubits and multi-qubit control gates. We propose a drastically simplified quantum circuit called the minimal QSVT circuit, which uses only one ancilla qubit to simulate a class of $n$-qubit random Hamiltonians. We formulate a simple metric called the quantum unitary evolution score (QUES), which is a scalable quantum benchmark and can be verified without any need for classical computation. We demonstrate that QUES is directly related to the circuit fidelity and to the classical hardness of an associated quantum circuit sampling problem. Theoretical analysis suggests that, under suitable assumptions, there exists an optimal simulation time $t^{\text{opt}} \approx 4.81$, at which even a noisy quantum device may be sufficient to demonstrate classical hardness.
Shi Jin, Xiantao Li (2021)
Given the Hamiltonian, the evaluation of unitary operators has been at the heart of many quantum algorithms. Motivated by existing deterministic and random methods, we present a hybrid approach, where Hamiltonian terms with large amplitude are evaluated at each time step, while the remaining terms are evaluated at random. A bound on the mean square error is obtained, together with a concentration bound. The mean square error consists of a variance term and a bias term, arising respectively from the random sampling of the Hamiltonian terms and the operator-splitting error. Leveraging the bias-variance trade-off, the error can be minimized by balancing the two. The concentration bound provides an estimate of the number of gates. The estimates are verified by numerical experiments on classical computers.
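A toy version of the hybrid scheme is straightforward to simulate classically. In the sketch below (our own; the Pauli terms, coefficients, amplitude threshold, and step counts are all hypothetical), the large-amplitude part is exponentiated exactly at every step, while the small terms are sampled one at a time with qDRIFT-style weights, so each step is correct to first order in expectation and the residual error splits into the bias and variance pieces described above:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Toy 2-qubit Pauli-sum Hamiltonian: two large-amplitude terms, four small.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [np.kron(Z, Z), np.kron(X, I), np.kron(I, X),
          np.kron(Z, I), np.kron(I, Z), np.kron(X, X)]
coeffs = np.array([1.0, 0.8, 0.05, 0.04, 0.03, 0.02])
H = sum(c * P for c, P in zip(coeffs, paulis))

t, steps = 1.0, 100
dt = t / steps

big = coeffs > 0.1                       # amplitude threshold (hypothetical)
A = sum(c * P for c, P, b in zip(coeffs, paulis, big) if b)
small_c = coeffs[~big]
small_P = [P for P, b in zip(paulis, big) if not b]
lam = small_c.sum()                      # qDRIFT weight of the sampled part
probs = small_c / lam

U_big = expm(-1j * A * dt)               # deterministic large-amplitude step
U_small = [expm(-1j * lam * dt * P) for P in small_P]  # sampled small steps

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
psi_exact = expm(-1j * H * t) @ psi0

errs = []
for _ in range(200):
    psi = psi0.copy()
    for _ in range(steps):
        psi = U_big @ psi
        psi = U_small[rng.choice(len(U_small), p=probs)] @ psi
    errs.append(np.linalg.norm(psi - psi_exact))
print("mean state error over runs:", np.mean(errs))
```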