We introduce a new tool for quantum algorithms called quantum fast-forwarding (QFF). The tool uses quantum walks as a means to quadratically fast-forward a reversible Markov chain. More specifically, with $P$ the Markov chain transition matrix and $D = \sqrt{P\circ P^T}$ its discriminant matrix ($D=P$ if $P$ is symmetric), we construct a quantum walk algorithm that for any quantum state $|v\rangle$ and integer $t$ returns a quantum state $\epsilon$-close to the state $D^t|v\rangle/\|D^t|v\rangle\|$. The algorithm uses $O\Big(\|D^t|v\rangle\|^{-1}\sqrt{t\log\big(\epsilon\|D^t|v\rangle\|\big)^{-1}}\Big)$ expected quantum walk steps and $O(\|D^t|v\rangle\|^{-1})$ expected reflections around $|v\rangle$. This shows that quantum walks can accelerate the transient dynamics of Markov chains, complementing the line of results that proves the acceleration of their limit behavior. We show that this tool leads to speedups on random walk algorithms in a very natural way. Specifically, we consider random walk algorithms for testing graph expansion and clusterability, and show that we can quadratically improve the dependency of the classical property testers on the random walk runtime. Moreover, our quantum algorithm exponentially improves the space complexity of the classical tester to logarithmic. As a subroutine of independent interest, we use QFF to determine whether a given pair of nodes lies in the same cluster or in separate clusters. This solves a robust version of $s$-$t$ connectivity, relevant in a learning context for classifying objects among a set of examples. The different algorithms crucially rely on the quantum speedup of the transient behavior of random walks.
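Roughly speaking, the quadratic saving in $t$ rests on the approximation-theoretic fact that the monomial $x^t$ is $\epsilon$-approximated on $[-1,1]$ by a polynomial of degree $O(\sqrt{t\log(1/\epsilon)})$; QFF implements a truncation of this kind for $D^t$ using quantum walk operators. The following is a minimal classical sketch of just this truncation step, with illustrative values of $t$ and $\epsilon$; it is not the quantum algorithm itself.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t, eps = 100, 1e-3                                  # walk length and target accuracy (illustrative)
d = int(np.ceil(np.sqrt(2 * t * np.log(2 / eps)))) # degree ~ sqrt(t log(1/eps)) suffices

mono = np.zeros(t + 1)
mono[t] = 1.0                                       # x^t in the monomial basis
cheb = C.poly2cheb(mono)                            # same polynomial in the Chebyshev basis

trunc = cheb.copy()
trunc[d + 1:] = 0.0                                 # keep only degrees 0..d

xs = np.linspace(-1, 1, 2001)
err = np.max(np.abs(C.chebval(xs, cheb) - C.chebval(xs, trunc)))
print(f"degree {d} instead of {t}, max truncation error on [-1,1]: {err:.1e}")
```

For $t=100$ the truncated polynomial has degree below $40$ while its error on $[-1,1]$ stays around the target $10^{-3}$, which is the kind of square-root compression that the quantum walk exploits.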
Expansion testing aims to decide whether an $n$-node graph has expansion at least $\Phi$, or is far from any such graph. We propose a quantum expansion tester with complexity $\widetilde{O}(n^{1/3}\Phi^{-1})$. This accelerates the $\widetilde{O}(n^{1/2}\Phi^{-2})$ classical tester by Goldreich and Ron [Algorithmica 02], and combines the $\widetilde{O}(n^{1/3}\Phi^{-2})$ and $\widetilde{O}(n^{1/2}\Phi^{-1})$ quantum speedups by Ambainis, Childs and Liu [RANDOM 11] and Apers and Sarlette [QIC 19], respectively. The latter approach builds on a quantum fast-forwarding scheme, which we improve upon by initially growing a seed set in the graph. To grow this seed set we use a so-called evolving set process from the graph clustering literature, which allows us to grow an appropriately local seed set.
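For context, the classical tester of Goldreich and Ron counts collisions among the endpoints of many lazy random walks started from a random vertex, using walk lengths of order $\Phi^{-2}\log n$ and about $\sqrt{n}$ walks per start vertex. Below is a rough sketch of one such collision test, assuming adjacency-list access to a $d$-regular graph through a hypothetical neighbors(v) callback; the walk length, number of walks, collision threshold and repetition schedule are placeholders, not the tuned values from the paper.

```python
import random
from collections import Counter

def expansion_test(neighbors, n, walk_len, num_walks, threshold):
    """Collision-count heuristic in the spirit of the Goldreich-Ron tester.

    neighbors(v) is assumed to return the adjacency list of vertex v of a
    d-regular graph on vertex set {0, ..., n-1}.
    """
    start = random.randrange(n)
    endpoints = []
    for _ in range(num_walks):
        v = start
        for _ in range(walk_len):
            if random.random() < 0.5:          # lazy walk: move with prob. 1/2, else stay
                v = random.choice(neighbors(v))
        endpoints.append(v)
    # Count pairwise collisions among walk endpoints; few collisions indicate
    # that the endpoint distribution is close to uniform, i.e. good expansion.
    counts = Counter(endpoints)
    collisions = sum(c * (c - 1) // 2 for c in counts.values())
    return collisions <= threshold
```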
A language $L$ has a property tester if there exists a probabilistic algorithm that, given an input $x$, queries only a small number of bits of $x$ and distinguishes between the case that $x$ is in $L$ and the case that $x$ has large Hamming distance from all $y$ in $L$. We define a similar notion of quantum property testing and show that there exist languages with quantum property testers but no good classical testers. We also show that there exist languages which require a large number of queries even for quantum testing.
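One standard way to make this quantitative (the $2/3$ success probability and the relative-distance convention are the usual textbook choices, not specified here): a $(q,\epsilon)$-tester queries at most $q$ bits of $x$ and satisfies
$$ x \in L \;\Rightarrow\; \Pr[\text{tester accepts } x] \ge \tfrac{2}{3}, \qquad d_H(x,y) \ge \epsilon\,|x| \ \text{for all } y \in L \ \Rightarrow\ \Pr[\text{tester rejects } x] \ge \tfrac{2}{3}. $$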
We revisit the task of quantum state redistribution in the one-shot setting, and design a protocol for this task whose communication cost is given in terms of a measure of distance from quantum Markov chains. More precisely, the distance is defined in terms of the quantum max-relative entropy and the quantum hypothesis testing entropy. Our result is the first to operationally connect quantum state redistribution and quantum Markov chains, and it provides an operational interpretation for a possible one-shot analogue of quantum conditional mutual information. The communication cost of our protocol is lower than that of all previously known protocols and asymptotically achieves the well-known rate of quantum conditional mutual information. Thus, our work takes a step towards the important open question of a near-optimal characterization of one-shot quantum state redistribution.
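For reference, the asymptotic rate alluded to is the Devetak-Yard result: in the i.i.d. limit, redistributing a system $A$ from the sender to a receiver who holds side information $B$, with $R$ purifying the overall state, requires a quantum communication rate of
$$ Q = \tfrac{1}{2}\, I(A:R\mid B)_\rho $$
qubits per copy. The system labels follow the standard convention and may differ from the labelling used in the paper itself.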
We prove that any one-dimensional (1D) quantum state with small quantum conditional mutual information in all tripartite splits of the system, which we call a quantum approximate Markov chain, can be well approximated by a Gibbs state of a short-range quantum Hamiltonian. Conversely, we also derive an upper bound on the (quantum) conditional mutual information of Gibbs states of 1D short-range quantum Hamiltonians: the conditional mutual information between two regions $A$ and $C$, conditioned on the middle region $B$, decays exponentially with the square root of the length of $B$. These two results constitute a variant of the Hammersley-Clifford theorem (which characterizes Markov networks, i.e. probability distributions with vanishing conditional mutual information, as Gibbs states of classical short-range Hamiltonians) for 1D quantum systems. The result can be seen as a strengthening, for 1D systems, of the mutual information area law for thermal states. It directly implies an efficient preparation of any 1D Gibbs state at finite temperature by a constant-depth quantum circuit.
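Written out, the decay statement says that for a split of the chain into contiguous regions $A$, $B$, $C$, with $B$ of length $\ell$ separating $A$ from $C$, the Gibbs state $\rho$ of a 1D short-range Hamiltonian satisfies
$$ I(A:C\mid B)_{\rho} \le c_1\, e^{-c_2 \sqrt{\ell}}, $$
where $c_1, c_2$ are placeholder constants depending only on the Hamiltonian and the temperature (their precise form is given in the paper, not here).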
We discuss a non-reversible Markov chain Monte Carlo (MCMC) algorithm for particle systems, in which the direction of motion evolves deterministically. This sequential direction-sweep MCMC generalizes the widespread MCMC sweep methods for particle or spin indices. The sequential direction-sweep MCMC can be applied to a wide range of underlying reversible or non-reversible Markov chains, such as the Metropolis algorithm or the event-chain Monte Carlo algorithm. For a simplified two-dimensional dipole model, we show rigorously that sequential MCMC leaves the stationary probability distribution unchanged, yet profoundly modifies the Markov-chain trajectory. Long excursions with persistent rotation in one direction alternate with long sequences of rapid zigzags that result in persistent rotation in the opposite direction. We show that sequential MCMC can have shorter mixing times than the corresponding algorithms with random updates of the direction. We point out possible applications of sequential MCMC in polymer physics and in molecular simulation.
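To make the sweep idea concrete, here is a generic sketch of a direction-sweep Metropolis step for a 2D particle system, assuming a user-supplied energy(positions) function and periodic boundaries; the sweep order, step size and the dipole-specific details of the paper are not reproduced. Each fixed-direction kernel is reversible with respect to the Boltzmann distribution, so applying the kernels in a deterministic order still leaves that distribution invariant, even though the composite chain is non-reversible.

```python
import math
import random

import numpy as np

# Deterministic sweep over move directions (angles in radians); illustrative choice.
SWEEP_DIRECTIONS = [k * math.pi / 4 for k in range(8)]

def direction_sweep_step(positions, energy, beta, step=0.1, box=1.0):
    """One full sweep: for each direction in a fixed order, attempt a Metropolis
    move of a randomly chosen particle along that direction only."""
    for theta in SWEEP_DIRECTIONS:
        unit = np.array([math.cos(theta), math.sin(theta)])
        i = random.randrange(len(positions))
        old_energy = energy(positions)
        old_pos = positions[i].copy()
        # Symmetric proposal along the current sweep direction, with periodic wrap.
        positions[i] = (positions[i] + random.uniform(-step, step) * unit) % box
        delta = energy(positions) - old_energy
        if delta > 0 and random.random() >= math.exp(-beta * delta):
            positions[i] = old_pos          # reject: restore the old position
    return positions
```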