Gradient-based algorithms, popular strategies for solving optimization problems, are essential to many modern machine-learning techniques. In theory, extreme points of certain cost functions can be found iteratively by following the gradient direction. The time required to calculate the gradient of a $d$-dimensional problem scales as $\mathcal{O}(\mathrm{poly}(d))$, which quantum techniques could reduce, benefiting high-dimensional data processing, especially modern machine-learning engineering where the number of optimized parameters runs into the billions. Here, we propose a quantum gradient algorithm for optimizing general polynomials with the dressed amplitude encoding, aiming to solve fast-convergence polynomial problems with both time and memory consumption in $\mathcal{O}(\mathrm{poly}(\log d))$. Furthermore, numerical simulations are carried out to inspect the performance of this protocol under noise or perturbations arising from initialization, operation, and truncation. Given its potential value in high-dimensional optimization, this quantum gradient algorithm is expected to facilitate polynomial optimization as a subroutine for future practical quantum computers.
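The classical baseline the abstract targets can be made concrete with a minimal sketch of gradient descent on a polynomial cost, where each step costs $\mathcal{O}(d)$ for the coordinate-wise gradient. The cost function, target point `c`, step size, and iteration count below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Classical gradient descent on f(x) = sum((x - c)^2) + 0.1 * sum(x^4),
# an example polynomial cost; each gradient evaluation is O(d).

def grad_f(x, c):
    # Gradient of f: 2*(x - c) + 0.4*x^3, computed coordinate-wise.
    return 2.0 * (x - c) + 0.4 * x**3

def gradient_descent(c, steps=200, lr=0.05):
    x = np.zeros_like(c)
    for _ in range(steps):
        x = x - lr * grad_f(x, c)
    return x

c = np.array([0.5, -0.3, 0.2])
x_star = gradient_descent(c)
# x_star approximately minimizes f, so the gradient there is near zero.
print(np.linalg.norm(grad_f(x_star, c)))
```

The quantum algorithm of the abstract aims to replace the $\mathcal{O}(\mathrm{poly}(d))$ per-step cost of such a loop with $\mathcal{O}(\mathrm{poly}(\log d))$ resources.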
In this paper, we present a gradient algorithm for identifying unknown parameters in an open quantum system from measurements of time traces of local observables. The open-system dynamics is described by a general Markovian master equation, based on which the Hamiltonian identification problem can be formulated as minimizing the distance between the measured time traces of the observables and those predicted by the master equation. The unknown parameters can then be learned with a gradient descent algorithm from the measurement data. We verify the effectiveness of our algorithm in a circuit QED system described by a Jaynes-Cummings model, whose Hamiltonian identification has rarely been considered. We also show that our gradient algorithm can learn the spectrum of a non-Markovian environment based on an augmented system model.
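A toy version of this identification scheme can be sketched with a single unknown parameter: a Rabi frequency `w` recovered by gradient descent on the squared distance between measured and model-predicted time traces. The closed-form trace $\langle\sigma_z(t)\rangle=\cos(\omega t)$ for $H=\omega\sigma_x/2$ acting on $|0\rangle$ stands in for integrating a master equation; all numerical values are illustrative.

```python
import numpy as np

# Fit an unknown frequency w to a measured observable trace by gradient
# descent on the least-squares cost C(w) = sum_t (model(t) - data(t))^2.

t = np.linspace(0.0, 2.0, 50)
w_true = 1.7
data = np.cos(w_true * t)           # noiseless "measurement record"

def cost_grad(w):
    model = np.cos(w * t)
    resid = model - data
    cost = np.sum(resid**2)
    grad = np.sum(2.0 * resid * (-t * np.sin(w * t)))  # analytic dC/dw
    return cost, grad

w = 1.2                             # initial guess
for _ in range(500):
    _, g = cost_grad(w)
    w -= 0.01 * g
print(w)                            # converges toward w_true = 1.7
```

In the paper's setting the model prediction comes from the master equation and several parameters are fitted jointly, but the descent loop has the same shape.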
In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., GRAPE) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that GRAPE can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-GRAPE) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-GRAPE algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit CNOT gate.
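To fix ideas, a minimal single-qubit GRAPE-style loop is sketched below: piecewise-constant control amplitudes `u[k]` drive $H = u_k\,\sigma_x$ for a slice of duration `dt`, and gradient ascent (here by finite differences, rather than GRAPE's analytic gradients) maximizes the fidelity with a target X gate. This is an illustrative toy, not the paper's d-GRAPE; all parameter values are arbitrary choices.

```python
import numpy as np

# Piecewise-constant controls u[k] generate U = prod_k exp(-i*u[k]*dt*sigma_x);
# gradient ascent pushes the phase-insensitive fidelity |Tr(X^dag U)|/2 to 1.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
X_target = sx
N, dt = 10, 0.1

def evolve(u):
    U = np.eye(2, dtype=complex)
    for uk in u:
        theta = uk * dt
        # exp(-i*theta*sigma_x) has a closed form since sigma_x^2 = I
        U = (np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sx) @ U
    return U

def fidelity(u):
    return abs(np.trace(X_target.conj().T @ evolve(u))) / 2

u = 0.5 * np.ones(N)
eps, lr = 1e-6, 2.0
for _ in range(300):
    grad = np.array([(fidelity(u + eps * np.eye(N)[k]) - fidelity(u)) / eps
                     for k in range(N)])
    u += lr * grad
print(fidelity(u))   # approaches 1 as the total rotation angle reaches pi/2
```

d-GRAPE replaces the idealized model inside `evolve` with information learned from process-tomography data, so that the same gradient loop compensates deterministic model errors.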
Using quantum algorithms to simulate complex physical processes and correlations in quantum matter has been a major direction of quantum computing research, towards the promise of a quantum advantage over classical approaches. In this work we develop a generalized quantum algorithm to simulate any dynamical process represented by either the operator sum representation or the Lindblad master equation. We then demonstrate the quantum algorithm by simulating the dynamics of the Fenna-Matthews-Olson (FMO) complex on the IBM QASM quantum simulator. This work represents a first demonstration of a quantum algorithm for open quantum dynamics with a moderately sophisticated dynamical process involving a realistic biological structure. We discuss the complexity of the quantum algorithm relative to the classical method for the same purpose, presenting a decisive query complexity advantage of the quantum approach based on the unique property of quantum measurement. An accurate yet tractable quantum algorithm for the description of complex open quantum systems (like the FMO complex) has a myriad of significant applications from catalytic chemistry and correlated materials physics to descriptions of hybrid quantum systems.
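The operator-sum representation the algorithm simulates maps a density matrix as $\rho \mapsto \sum_k K_k \rho K_k^\dagger$ with $\sum_k K_k^\dagger K_k = I$. A classical sketch using the amplitude-damping channel (an illustrative example, not the FMO dynamics) shows the structure:

```python
import numpy as np

# Apply a Kraus map rho -> sum_k K_k rho K_k^dag; amplitude damping with
# decay probability gamma is the example channel.

gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def apply_channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0, 0], [0, 1]], dtype=complex)   # excited state |1><1|
rho_out = apply_channel(rho, [K0, K1])
print(rho_out.real)
# Population gamma has relaxed to the ground state; the trace stays 1.
```

The quantum algorithm realizes such maps coherently via dilation to a larger unitary, rather than by the explicit matrix products used here.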
Visual tracking (VT) is the process of locating a moving object of interest in a video. It is a fundamental problem in computer vision, with various applications in human-computer interaction, security and surveillance, robot perception, traffic control, etc. In this paper, we address this problem for the first time in the quantum setting, and present a quantum algorithm for VT based on the framework proposed by Henriques et al. [IEEE Trans. Pattern Anal. Mach. Intell., 7, 583 (2015)]. Our algorithm comprises two phases: training and detection. In the training phase, in order to discriminate the object from the background, the algorithm trains a ridge regression classifier in quantum state form, where the optimal fitting parameters of ridge regression are encoded in the amplitudes. In the detection phase, the classifier is then employed to generate a quantum state whose amplitudes encode the responses of all the candidate image patches. The algorithm is shown to scale polylogarithmically when the image data matrices have low condition numbers, and therefore may achieve exponential speedup over the best classical counterpart. However, only quadratic speedup can be achieved when the algorithm is applied to implement the ultimate task of Henriques's framework, i.e., detecting the object position. We also discuss two other important applications related to VT: (1) object disappearance detection and (2) motion behavior matching, where much more significant speedup over the classical methods can be achieved. This work demonstrates the power of quantum computing in solving computer vision problems.
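The classical classifier at the core of the framework has a closed form: ridge regression weights $w = (X^\top X + \lambda I)^{-1} X^\top y$, whose entries the quantum algorithm encodes as state amplitudes. A sketch on synthetic patch features (the data and dimensions below are illustrative, not from the paper):

```python
import numpy as np

# Closed-form ridge regression: train weights on feature rows (one per
# training patch), then compute detection-phase responses X @ w.

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.standard_normal((n, d))      # one feature row per training patch
w_true = rng.standard_normal(d)
y = X @ w_true                       # regression targets (desired responses)

lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

responses = X @ w                    # detection-phase responses
print(np.linalg.norm(w - w_true))    # small: ridge nearly recovers w_true
```

Henriques et al. additionally exploit circulant structure to solve this system with FFTs; the quantum speedup instead comes from preparing $w$ and the responses as amplitudes in polylogarithmic time.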
For two unknown quantum states $\rho$ and $\sigma$ in an $N$-dimensional Hilbert space, computing their fidelity $F(\rho,\sigma)$ is a basic problem with many important applications in quantum computing and quantum information, for example the verification and characterization of the outputs of a quantum computer, and the design and analysis of quantum algorithms. In this Letter, we propose a quantum algorithm that solves this problem in $\text{poly}(\log N, r)$ time, where $r$ is the smaller of the ranks of $\rho$ and $\sigma$. This algorithm exhibits an exponential improvement over the best-known algorithm (based on quantum state tomography), which runs in $\text{poly}(N, r)$ time.
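For reference, the quantity being computed admits a direct classical evaluation at $\text{poly}(N)$ cost, sketched below under the square-root convention $F(\rho,\sigma)=\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}$ (some authors define fidelity as the square of this):

```python
import numpy as np

# Classical fidelity F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) ),
# built from an eigendecomposition-based PSD matrix square root.

def sqrtm_psd(A):
    # Matrix square root of a positive semidefinite Hermitian matrix.
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def fidelity(rho, sigma):
    s = sqrtm_psd(rho)
    return np.trace(sqrtm_psd(s @ sigma @ s)).real

rho = np.array([[0.5, 0], [0, 0.5]], dtype=complex)   # maximally mixed qubit
psi = np.array([1, 0], dtype=complex)
sigma = np.outer(psi, psi.conj())                     # pure state |0><0|
print(fidelity(rho, rho))      # ≈ 1
print(fidelity(rho, sigma))    # ≈ sqrt(1/2) ≈ 0.7071
```

The eigendecompositions make this $\mathcal{O}(N^3)$, and tomography to obtain $\rho$ and $\sigma$ in the first place costs $\text{poly}(N, r)$; the proposed quantum algorithm avoids both bottlenecks.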