
Parallel Quantum Algorithm for Hamiltonian Simulation

Added by Zhicheng Zhang
Publication date: 2021
Fields: Physics
Language: English





We study how parallelism can speed up quantum simulation. A parallel quantum algorithm is proposed for simulating the dynamics of a large class of Hamiltonians with good sparse structures, called uniform-structured Hamiltonians, which includes various Hamiltonians of practical interest such as local Hamiltonians and Pauli sums. Given oracle access to the target sparse Hamiltonian, in both query and gate complexity, the running time of our parallel quantum simulation algorithm measured by quantum circuit depth has a doubly (poly-)logarithmic dependence $\operatorname{polylog}\log(1/\epsilon)$ on the simulation precision $\epsilon$. This presents an exponential improvement over the $\operatorname{polylog}(1/\epsilon)$ dependence of previous optimal sparse Hamiltonian simulation algorithms without parallelism. To obtain this result, we introduce a novel notion of parallel quantum walk, based on Childs' quantum walk. The target evolution unitary is approximated by a truncated Taylor series, which is obtained by combining these quantum walks in a parallel way. A lower bound $\Omega(\log\log(1/\epsilon))$ is established, showing that the $\epsilon$-dependence of the gate depth achieved in this work cannot be significantly improved. Our algorithm is applied to simulating three physical models: the Heisenberg model, the Sachdev-Ye-Kitaev model, and a quantum chemistry model in second quantization. By explicitly calculating the gate complexity of implementing the oracles, we show that on all these models the total gate depth of our algorithm has a $\operatorname{polylog}\log(1/\epsilon)$ dependence in the parallel setting.
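The truncation behind the algorithm can be illustrated classically. The sketch below (Python/NumPy; the two-qubit Heisenberg-type Hamiltonian, evolution time, and tolerance are assumptions chosen only for demonstration, and it works with dense matrices rather than the quantum circuit) shows the truncated Taylor series $\sum_{k=0}^{K}(-iHt)^k/k!$ that the parallel quantum walks combine, and how modest a truncation order $K$ suffices to reach precision $\epsilon$.

```python
# Classical illustration of the truncated Taylor series used by the algorithm.
# Toy Hamiltonian, time, and tolerance are assumptions chosen for demonstration.
import numpy as np

def truncated_taylor(H, t, K):
    """Return sum_{k=0}^{K} (-iHt)^k / k! as a dense matrix."""
    d = H.shape[0]
    U = np.eye(d, dtype=complex)
    term = np.eye(d, dtype=complex)
    for k in range(1, K + 1):
        term = term @ (-1j * t * H) / k   # accumulate (-iHt)^k / k!
        U = U + term
    return U

# Toy 2-qubit Heisenberg-type Hamiltonian H = XX + YY + ZZ.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z
H = sum(np.kron(P, P) for P in (X, Y, Z)).real

t, eps = 1.0, 1e-10
w, V = np.linalg.eigh(H)
U_exact = (V * np.exp(-1j * w * t)) @ V.conj().T   # exact exp(-iHt)

for K in range(1, 40):
    err = np.linalg.norm(truncated_taylor(H, t, K) - U_exact, ord=2)
    if err < eps:
        print(f"truncation order K = {K} gives spectral-norm error {err:.2e}")
        break
```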



Related research

We present a quantum algorithm for the dynamical simulation of time-dependent Hamiltonians. Our method involves expanding the interaction-picture Hamiltonian as a sum of generalized permutations, which leads to an integral-free Dyson series of the time-evolution operator. Under this representation, we perform a quantum simulation of the time-evolution operator by means of the linear combination of unitaries technique. We optimize the time steps of the evolution based on the Hamiltonian's dynamical characteristics, leading to a gate count with an $L^1$-norm-like scaling with respect only to the norm of the interaction Hamiltonian, rather than that of the total Hamiltonian. We demonstrate that the cost of the algorithm is independent of the Hamiltonian's frequencies, implying its advantage for systems with highly oscillating components, and that for time-decaying systems the cost does not scale with the total evolution time asymptotically. In addition, our algorithm retains the near-optimal $\log(1/\epsilon)/\log\log(1/\epsilon)$ scaling with simulation error $\epsilon$.
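The step-size idea can be sketched with a toy calculation. Assuming only a scalar profile for the interaction-picture norm $\|H_I(s)\|$ (the decaying profile and per-step budget below are illustrative assumptions, not the paper's construction), segmenting $[0,T]$ so that each step carries the same integrated norm makes the step count track $\int_0^T \|H_I(s)\|\,ds$ rather than $T$ times the full Hamiltonian's norm:

```python
# Sketch of the step-size idea: cut [0, T] so that each step carries the same
# integrated interaction norm, so the step count follows the L1-norm integral of
# ||H_I(s)|| instead of T times the full Hamiltonian's norm.
# The decaying norm profile below is an assumption used only for illustration.
import numpy as np

def l1_norm_steps(norm_of_HI, T, norm_per_step, grid=10_000):
    """Return step boundaries with equal integrated norm per segment."""
    s = np.linspace(0.0, T, grid)
    cumulative = np.concatenate(([0.0], np.cumsum(norm_of_HI(s)[:-1] * np.diff(s))))
    total = cumulative[-1]                       # ~ integral_0^T ||H_I(s)|| ds
    n_steps = int(np.ceil(total / norm_per_step))
    targets = np.linspace(0.0, total, n_steps + 1)
    return np.interp(targets, cumulative, s), n_steps

# Toy time-decaying interaction norm ||H_I(s)|| = 2 e^{-s} (assumption).
boundaries, n_steps = l1_norm_steps(lambda s: 2.0 * np.exp(-s), T=50.0, norm_per_step=0.5)
print(n_steps, "steps; the count saturates as T grows because the norm decays")
```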
Hamiltonian simulation is one of the most important problems in quantum computation, and quantum singular value transformation (QSVT) is an efficient way to simulate a general class of Hamiltonians. However, the QSVT circuit typically involves multiple ancilla qubits and multi-qubit control gates. We propose a drastically simplified quantum circuit called the minimal QSVT circuit, which uses only one ancilla qubit to simulate a class of $n$-qubit random Hamiltonians. We formulate a simple metric called the quantum unitary evolution score (QUES), which is a scalable quantum benchmark and can be verified without any need for classical computation. We demonstrate that QUES is directly related to the circuit fidelity and to the classical hardness of an associated quantum circuit sampling problem. Theoretical analysis suggests that, under suitable assumptions, there exists an optimal simulation time $t^{\text{opt}} \approx 4.81$, at which even a noisy quantum device may be sufficient to demonstrate the classical hardness.
Shi Jin, Xiantao Li (2021)
Given the Hamiltonian, the evaluation of unitary operators has been at the heart of many quantum algorithms. Motivated by existing deterministic and random methods, we present a hybrid approach, where Hamiltonian terms with large amplitude are evaluated at each time step, while the remaining terms are evaluated at random. The bound for the mean square error is obtained, together with a concentration bound. The mean square error consists of a variance term and a bias term, arising respectively from the random sampling of the Hamiltonian terms and the operator-splitting error. Leveraging the bias/variance trade-off, the error can be minimized by balancing the two. The concentration bound provides an estimate of the number of gates. The estimates are verified by numerical experiments on classical computers.
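A minimal sketch of the hybrid splitting, assuming a Pauli-sum Hamiltonian $H=\sum_j h_j P_j$ (the threshold, sample count, and toy terms below are illustrative assumptions): terms with amplitude above the threshold are exponentiated deterministically in every step, while the remaining small terms are drawn at random with probability proportional to $|h_j|$, qDRIFT-style.

```python
# Hedged sketch of a hybrid deterministic/random product step for H = sum_j h_j P_j.
# Threshold, sample count, and the toy terms are illustrative assumptions only.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P, theta):
    """exp(-i * theta * P) for a Pauli string P (uses P^2 = I)."""
    d = P.shape[0]
    return np.cos(theta) * np.eye(d, dtype=complex) - 1j * np.sin(theta) * P

# Toy two-qubit Hamiltonian terms (coefficient, Pauli string).
terms = [(0.9, np.kron(Z, Z)), (0.8, np.kron(X, I2)),
         (0.05, np.kron(I2, X)), (0.03, np.kron(X, X))]

def hybrid_step(terms, dt, threshold=0.5, n_samples=4, rng=np.random.default_rng(0)):
    large = [(h, P) for h, P in terms if abs(h) >= threshold]
    small = [(h, P) for h, P in terms if abs(h) < threshold]
    U = np.eye(terms[0][1].shape[0], dtype=complex)
    for h, P in large:                       # large-amplitude terms: deterministic
        U = pauli_exp(P, h * dt) @ U
    if small:                                # small terms: randomly sampled
        lam = sum(abs(h) for h, _ in small)
        probs = [abs(h) / lam for h, _ in small]
        for j in rng.choice(len(small), size=n_samples, p=probs):
            sign = np.sign(small[j][0])
            U = pauli_exp(small[j][1], sign * lam * dt / n_samples) @ U
    return U

U = hybrid_step(terms, dt=0.1)               # one hybrid step of length dt
```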
Hamiltonian learning is crucial to the certification of quantum devices and quantum simulators. In this paper, we propose a hybrid quantum-classical Hamiltonian learning algorithm to find the coefficients of the Pauli operator components of the Hamiltonian. Its main subroutine is a practical log-partition function estimation algorithm, which is based on the minimization of the free energy of the system. Concretely, we devise a stochastic variational quantum eigensolver (SVQE) to diagonalize the Hamiltonians and then exploit the obtained eigenvalues to compute the free energy's global minimum using convex optimization. Our approach not only avoids the challenge of estimating von Neumann entropy in free energy minimization, but also reduces the quantum resources via importance sampling in Hamiltonian diagonalization, facilitating the implementation of our method on near-term quantum devices. Finally, we demonstrate our approach's validity by conducting numerical experiments with Hamiltonians of interest in quantum many-body physics.
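Once (approximate) eigenvalues are available from the diagonalization subroutine, the free-energy minimization in the log-partition step has a closed form: the minimizer is the Gibbs distribution, and the minimum equals $-\tfrac{1}{\beta}\log\sum_i e^{-\beta E_i}$. A short sketch with hypothetical eigenvalues (not SVQE output):

```python
# Hedged sketch of the log-partition step: given eigenvalues E_i,
# min_p sum_i p_i E_i - (1/beta) S(p) is attained at the Gibbs distribution,
# giving F = -(1/beta) * log sum_i exp(-beta * E_i).
# The eigenvalues below are hypothetical placeholders, not SVQE output.
import numpy as np

def free_energy_minimum(eigenvalues, beta):
    """Closed-form minimum of the free energy over probability distributions p."""
    E = np.asarray(eigenvalues, dtype=float)
    shifted = -beta * (E - E.min())              # numerically stable log-sum-exp
    log_Z = np.log(np.exp(shifted).sum()) - beta * E.min()
    return -log_Z / beta

print(free_energy_minimum([-1.2, -0.3, 0.4, 1.1], beta=2.0))
```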
Shibei Xue, Rebing Wu, Dewei Li (2019)
In this paper, we present a gradient algorithm for identifying unknown parameters in an open quantum system from measurements of time traces of local observables. The open-system dynamics is described by a general Markovian master equation, based on which the Hamiltonian identification problem can be formulated as minimizing the distance between the real time traces of the observables and those predicted by the master equation. The unknown parameters can then be learned with a gradient descent algorithm from the measurement data. We verify the effectiveness of our algorithm in a circuit QED system described by a Jaynes-Cummings model, whose Hamiltonian identification has rarely been considered. We also show that our gradient algorithm can learn the spectrum of a non-Markovian environment based on an augmented system model.
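A minimal closed-system sketch of the fitting loop, assuming a single-qubit model $H(\theta)=\theta_1\sigma_z+\theta_2\sigma_x$ and the observable $\langle\sigma_z\rangle$ (the paper itself works with a Markovian master equation and a circuit-QED system; the model, hyperparameters, and "data" below are assumptions): the parameters are learned by gradient descent on the squared distance between measured and predicted time traces.

```python
# Hedged sketch of gradient-based Hamiltonian identification on a toy closed system.
# Model, observable, and hyperparameters are assumptions; the paper uses a Markovian
# master equation and a circuit-QED (Jaynes-Cummings) system.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)
times = np.linspace(0, 5, 50)

def time_trace(theta):
    """Predicted <sigma_z>(t) under H(theta) = theta[0]*Z + theta[1]*X."""
    H = theta[0] * Z + theta[1] * X
    w, V = np.linalg.eigh(H)
    trace = []
    for t in times:
        psi = (V * np.exp(-1j * w * t)) @ V.conj().T @ psi0
        trace.append(np.real(psi.conj() @ Z @ psi))
    return np.array(trace)

measured = time_trace(np.array([0.7, 0.4]))       # "data" from assumed true parameters

def loss(theta):
    return np.mean((time_trace(theta) - measured) ** 2)

theta = np.array([0.2, 0.9])                      # initial guess
lr, h = 0.2, 1e-5
for _ in range(1500):                             # numerical-gradient descent
    grad = np.array([(loss(theta + h * e) - loss(theta - h * e)) / (2 * h)
                     for e in np.eye(2)])
    theta -= lr * grad
# Note: for this observable the trace is symmetric under swapping the two
# parameters, so the fit may recover them in either order.
print("estimated parameters:", theta)
```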