The evaluation of the unitary evolution operator generated by a given Hamiltonian lies at the heart of many quantum algorithms. Motivated by existing deterministic and random methods, we present a hybrid approach, in which Hamiltonian terms with large amplitude are evaluated deterministically at each time step, while the remaining terms are sampled at random. A bound for the mean square error is obtained, together with a concentration bound. The mean square error consists of a variance term and a bias term, arising respectively from the random sampling of the Hamiltonian terms and from the operator splitting error. Leveraging the bias/variance trade-off, the error can be minimized by balancing the two. The concentration bound provides an estimate of the number of gates. The estimates are verified by numerical experiments on classical computers.
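As a rough illustration of the hybrid splitting described above, the following classical sketch evolves a toy two-qubit Pauli-sum Hamiltonian: terms above a hypothetical amplitude threshold `delta` are exponentiated deterministically each step, while the remaining small terms are sampled qDRIFT-style. The Hamiltonian, threshold, and step count are invented for illustration; this is a sketch of the general idea, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy H = sum_j h_j P_j: two large-amplitude terms, three small ones
terms = [(1.00, np.kron(Z, Z)), (0.90, np.kron(X, I2)),
         (0.05, np.kron(I2, X)), (0.03, np.kron(Z, I2)), (0.02, np.kron(X, X))]

def hybrid_step(dt, delta, rng):
    """One hybrid step: Trotter the large terms, sample one small term."""
    big = [(h, P) for h, P in terms if abs(h) >= delta]
    small = [(h, P) for h, P in terms if abs(h) < delta]
    U = np.eye(4, dtype=complex)
    for h, P in big:                       # deterministic first-order splitting
        U = expm(-1j * h * P * dt) @ U
    lam = sum(abs(h) for h, _ in small)    # qDRIFT-style random sampling
    j = rng.choice(len(small), p=[abs(h) / lam for h, _ in small])
    h, P = small[j]
    return expm(-1j * np.sign(h) * lam * P * dt) @ U

rng = np.random.default_rng(0)
t, n = 1.0, 200
H = sum(h * P for h, P in terms)
U = np.eye(4, dtype=complex)
for _ in range(n):
    U = hybrid_step(t / n, 0.5, rng) @ U
print(np.linalg.norm(U - expm(-1j * H * t)))  # error of one random realization
```

Averaging the printed error over many random realizations and varying `delta` would exhibit the bias/variance trade-off the abstract refers to.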
Random batch algorithms are constructed for quantum Monte Carlo simulations. The main objective is to alleviate the computational cost associated with the calculation of two-body interactions, including the pairwise interactions in the potential energy and the two-body terms in the Jastrow factor. In the framework of variational Monte Carlo methods, the random batch algorithm is constructed based on the over-damped Langevin dynamics, so that updating the position of each particle in an $N$-particle system requires only $\mathcal{O}(1)$ operations; the computational cost per time step for $N$ particles is thus reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. For diffusion Monte Carlo methods, the random batch algorithm uses an energy decomposition to avoid the computation of the total energy in the branching step. The effectiveness of the random batch method is demonstrated using a system of liquid ${}^4$He atoms interacting with a graphite surface.
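The $\mathcal{O}(N^2) \to \mathcal{O}(N)$ reduction comes from letting each particle interact only with a small random batch per time step, with the batch force rescaled to remain unbiased. Below is a minimal sketch of one random-batch over-damped Langevin sweep, assuming 1D particles, an invented pairwise kernel `pair_force`, and batch size $p = 2$; the Jastrow factor and the rest of the quantum Monte Carlo machinery are omitted.

```python
import numpy as np

def pair_force(xi, xj):
    # invented smooth kernel standing in for the true pairwise interaction
    r = xi - xj
    return r / (1.0 + r * r) ** 2

def rbm_step(x, dt, p, beta, rng):
    """One over-damped Langevin step with random-batch pair interactions."""
    N = len(x)
    idx = rng.permutation(N)               # fresh random batches every step
    f = np.zeros(N)
    for b in range(0, N - N % p, p):
        batch = idx[b:b + p]
        for i in batch:
            for j in batch:
                if i != j:                 # unbiased rescaling (N-1)/(p-1)
                    f[i] += (N - 1) / (p - 1) * pair_force(x[i], x[j])
    noise = rng.standard_normal(N)
    return x + f * dt + np.sqrt(2 * dt / beta) * noise

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 64)
for _ in range(1000):
    x = rbm_step(x, dt=1e-3, p=2, beta=1.0, rng=rng)
```

Each step evaluates $N(p-1)$ pair forces instead of $N(N-1)/2$, which is the source of the linear per-step cost.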
While there has been extensive previous work on efficient quantum algorithms for linear differential equations, analogous progress for nonlinear differential equations has been severely limited due to the linearity of quantum mechanics. Despite this obstacle, we develop a quantum algorithm for initial value problems described by dissipative quadratic $n$-dimensional ordinary differential equations. Assuming $R < 1$, where $R$ is a parameter characterizing the ratio of the nonlinearity to the linear dissipation, this algorithm has complexity $T^2\,\mathrm{poly}(\log T, \log n, \log 1/\epsilon)/\epsilon$, where $T$ is the evolution time and $\epsilon$ is the allowed error in the output quantum state. This is an exponential improvement over the best previous quantum algorithms, whose complexity is exponential in $T$. We achieve this improvement using the method of Carleman linearization, for which we give a novel convergence theorem. This method maps a system of nonlinear differential equations to an infinite-dimensional system of linear differential equations, which we discretize, truncate, and solve using the forward Euler method and the quantum linear system algorithm. We also provide a lower bound on the worst-case complexity of quantum algorithms for general quadratic differential equations, showing that the problem is intractable for $R \ge \sqrt{2}$. Finally, we discuss potential applications of this approach to problems arising in biology as well as in fluid and plasma dynamics.
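To make the Carleman step concrete, here is a purely classical sketch for the scalar toy problem $\dot u = -u + a u^2$, where the dissipation is linear and $a$ stands in for the $R < 1$ regime: the lifted variables $y_k \approx u^k$ satisfy $\dot y_k = -k\,y_k + a k\,y_{k+1}$, the chain is truncated at level $K$, and the resulting linear system is integrated by forward Euler. The truncation level, step size, and closed-form reference solution are choices made for this example.

```python
import numpy as np

a, K = 0.3, 6             # nonlinearity strength (stands in for R < 1), truncation level
u0, T, n = 0.8, 2.0, 2000

# Lifted variables y_k ~ u^k obey dy_k/dt = -k y_k + a k y_{k+1};
# truncation at level K drops the coupling to y_{K+1}.
A = np.zeros((K, K))
for k in range(1, K + 1):
    A[k - 1, k - 1] = -k
    if k < K:
        A[k - 1, k] = a * k

y = np.array([u0 ** k for k in range(1, K + 1)])
dt = T / n
for _ in range(n):        # forward Euler on the truncated linear system
    y = y + dt * (A @ y)

exact = u0 * np.exp(-T) / (1 - a * u0 * (1 - np.exp(-T)))  # Bernoulli closed form
print(y[0], exact)
```

The first lifted component tracks the exact solution; the quantum algorithm replaces the Euler solve of the (much larger) lifted system with the quantum linear system algorithm.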
The LINPACK benchmark reports the performance of a computer for solving a system of linear equations with dense random matrices. Although this task was not designed with a real application directly in mind, the LINPACK benchmark has been used to define the list of TOP500 supercomputers since the debut of the list in 1993. We propose that a similar benchmark, called the quantum LINPACK benchmark, could be used to measure the whole-machine performance of quantum computers. Success on the quantum LINPACK benchmark should be viewed as a minimal requirement for a quantum computer to perform a useful task of solving linear algebra problems, such as linear systems of equations. We propose an input model called the RAndom Circuit Block-Encoded Matrix (RACBEM), which is a proper generalization of a dense random matrix in the quantum setting. The RACBEM model can be implemented efficiently on a quantum computer, and can be designed to optimally adapt to any given quantum architecture, relying on a black-box quantum compiler. Besides solving linear systems, the RACBEM model can be used to perform a variety of linear algebra tasks relevant to many physical applications, such as computing spectral measures, time series generated by a Hamiltonian simulation, and thermal averages of the energy. We implement these linear algebra operations on IBM Q quantum devices as well as quantum virtual machines, and demonstrate their performance in solving scientific computing problems.
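A classical caricature of the RACBEM construction, assuming a Haar-random unitary as a stand-in for a random circuit: the top-left block $A = (\langle 0|\otimes I)\,U\,(|0\rangle\otimes I)$ of a unitary $U$ on system-plus-ancilla qubits is automatically block-encoded, has singular values in $[0,1]$, and can then be fed to linear algebra tasks such as solving $Ax = b$. Sizes and the solve step below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sys, n_anc = 3, 1
N, M = 2 ** n_sys, 2 ** (n_sys + n_anc)

G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
U, _ = np.linalg.qr(G)        # Haar-distributed stand-in for a random circuit

A = U[:N, :N]                 # block-encoded matrix A = (<0| x I) U (|0> x I)
print(np.linalg.svd(A, compute_uv=False))  # singular values lie in [0, 1]

# "Quantum LINPACK" task, done classically: solve A x = b for a random b
b = rng.standard_normal(N)
b /= np.linalg.norm(b)
x = np.linalg.solve(A, b)
print(np.linalg.norm(A @ x - b))
```

On hardware, $U$ would instead be a random circuit compiled to the device's native gate set, which is what lets the benchmark adapt to a given architecture.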
We study how parallelism can speed up quantum simulation. A parallel quantum algorithm is proposed for simulating the dynamics of a large class of Hamiltonians with good sparse structures, called uniform-structured Hamiltonians, including various Hamiltonians of practical interest such as local Hamiltonians and Pauli sums. Given oracle access to the target sparse Hamiltonian, in both query and gate complexity, the running time of our parallel quantum simulation algorithm, measured by quantum circuit depth, has a doubly (poly-)logarithmic dependence $\operatorname{polylog}\log(1/\epsilon)$ on the simulation precision $\epsilon$. This presents an exponential improvement over the $\operatorname{polylog}(1/\epsilon)$ dependence of previous optimal sparse Hamiltonian simulation algorithms without parallelism. To obtain this result, we introduce a novel notion of parallel quantum walk, based on Childs' quantum walk. The target evolution unitary is approximated by a truncated Taylor series, which is obtained by combining these quantum walks in a parallel way. A lower bound $\Omega(\log\log(1/\epsilon))$ is established, showing that the $\epsilon$-dependence of the gate depth achieved in this work cannot be significantly improved. Our algorithm is applied to simulating three physical models: the Heisenberg model, the Sachdev-Ye-Kitaev model, and a quantum chemistry model in second quantization. By explicitly calculating the gate complexity for implementing the oracles, we show that on all these models the total gate depth of our algorithm has a $\operatorname{polylog}\log(1/\epsilon)$ dependence in the parallel setting.
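The parallelism itself cannot be meaningfully reproduced classically, but the truncated Taylor series at the core of the construction can. The sketch below picks the truncation order $K$ from the target precision $\epsilon$ (the familiar $K = \mathcal{O}(\log(1/\epsilon)/\log\log(1/\epsilon))$ scaling) and checks the series against the exact propagator for a small random Hermitian matrix; the paper's contribution, evaluating such a series in $\operatorname{polylog}\log(1/\epsilon)$ depth via parallel quantum walks, is not captured here.

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
n, t, eps = 8, 0.5, 1e-10
B = rng.standard_normal((n, n))
H = (B + B.T) / 2                        # toy dense Hermitian "Hamiltonian"

K = 1                                    # smallest K with (||H|| t)^(K+1)/(K+1)! <= eps
while (np.linalg.norm(H, 2) * t) ** (K + 1) / math.factorial(K + 1) > eps:
    K += 1

U = np.zeros((n, n), dtype=complex)
term = np.eye(n, dtype=complex)
for k in range(K + 1):                   # sum_{k=0}^{K} (-i H t)^k / k!
    U += term
    term = term @ (-1j * H * t) / (k + 1)

print(K, np.linalg.norm(U - expm(-1j * H * t), 2))
```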
This article analyses the convergence of the Lie-Trotter splitting scheme for the stochastic Manakov equation, a system arising in the study of pulse propagation in randomly birefringent optical fibers. First, we prove that the strong order of the numerical approximation is $1/2$ if the nonlinear term in the system is globally Lipschitz. Then, we show that the splitting scheme has convergence order $1/2$ in probability and almost-sure order $1/2^-$ in the case of a cubic nonlinearity. We provide several numerical experiments illustrating the aforementioned results and the efficiency of the Lie-Trotter splitting scheme. Finally, we numerically investigate the possible blowup of solutions for some power-law nonlinearities.
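For readers unfamiliar with the scheme, a Lie-Trotter step alternates an exact solve of the stochastic linear (dispersive) flow with an exact pointwise solve of the cubic flow. The sketch below applies this to a scalar stochastic cubic NLS surrogate on a periodic grid, with the Manakov system's Brownian polarization mixing collapsed to a single random dispersion coefficient; grid sizes, signs, and conventions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

L, M, dt = 2 * np.pi, 256, 1e-3
x = np.linspace(0, L, M, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
u = np.exp(-(x - np.pi) ** 2).astype(complex)    # initial pulse
rng = np.random.default_rng(3)

def lie_trotter_step(u):
    # (1) stochastic linear flow i u_t + (1 + dW/dt) u_xx = 0, exact in Fourier
    dW = np.sqrt(dt) * rng.standard_normal()     # Brownian increment
    u = np.fft.ifft(np.exp(-1j * k ** 2 * (dt + dW)) * np.fft.fft(u))
    # (2) cubic flow i u_t + |u|^2 u = 0, exact pointwise (|u| is conserved)
    return u * np.exp(1j * np.abs(u) ** 2 * dt)

for _ in range(1000):
    u = lie_trotter_step(u)
print(np.max(np.abs(u)))   # crude monitor of the kind used to probe blowup
```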