
Quantum supremacy and random circuits

Added by Ramis Movassagh
Publication date: 2019
Field: Physics
Language: English





As Moore's law reaches its limits, quantum computers are emerging with the promise of dramatically outperforming classical computers. We have witnessed the advent of quantum processors with over $50$ quantum bits (qubits), which are expected to be beyond the reach of classical simulation. Quantum supremacy is the event at which the old Extended Church-Turing Thesis is overturned: a quantum computer performs a task that is practically impossible for any classical (super)computer. The demonstration requires both a solid theoretical guarantee and an experimental realization. The lead candidate is Random Circuit Sampling (RCS), which is the task of sampling from the output distribution of random quantum circuits. Google recently announced a $53$-qubit experimental demonstration of RCS. Soon after, classical algorithms appeared that challenge the supremacy of random circuits by estimating their outputs. How hard is it to classically simulate the output of random quantum circuits? We prove that estimating the output probabilities of random quantum circuits is formidably hard ($\#P$-hard) for any classical computer. This makes RCS the strongest candidate for demonstrating quantum supremacy relative to all other proposals. The robustness to estimation error that we prove may serve as a new hardness criterion for the performance of classical algorithms. To achieve this, we introduce the Cayley path interpolation between any two gates of a quantum computation and combine recent advances in quantum complexity and information with probability and random matrix theory. Furthermore, we apply algebraic geometry to generalize the well-known Berlekamp-Welch algorithm that is widely used in coding theory and cryptography. Our results imply that there is an exponential hardness barrier for the classical simulation of most quantum circuits.
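
Below is a minimal numerical sketch of the idea behind a Cayley-path interpolation between two gates. The specific parameterization (writing $U_2U_1^\dagger$ as a Cayley transform of a Hermitian matrix $h$ and scaling $h$ by $\theta$) is an illustrative assumption, not necessarily the exact construction used in the paper; it does, however, exhibit the property the proof relies on, namely that every entry of the interpolated gate is a low-degree rational function of the path parameter $\theta$.

```python
import numpy as np

def random_unitary(n, rng):
    # Haar-random unitary via QR of a complex Gaussian matrix (Mezzadri's recipe).
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix the column phases

def cayley_path(U1, U2, theta):
    """Interpolate U(theta) with U(0) = U1 and U(1) = U2 via a Cayley transform.

    Writes U2 @ U1^dagger = (I + i h)(I - i h)^{-1} with h Hermitian, then
    replaces h by theta*h, so every entry of U(theta) is a rational function
    of theta.  Sketch only: assumes I + U2 @ U1^dagger is invertible, and the
    paper's exact parameterization may differ.
    """
    n = U1.shape[0]
    I = np.eye(n)
    V = U2 @ U1.conj().T
    h = 1j * (I - V) @ np.linalg.inv(I + V)                    # Hermitian generator
    return (I + 1j * theta * h) @ np.linalg.inv(I - 1j * theta * h) @ U1

rng = np.random.default_rng(0)
U1, U2 = random_unitary(4, rng), random_unitary(4, rng)
print(np.allclose(cayley_path(U1, U2, 0.0), U1))    # endpoint U(0) = U1
print(np.allclose(cayley_path(U1, U2, 1.0), U2))    # endpoint U(1) = U2
U = cayley_path(U1, U2, 0.3)
print(np.allclose(U.conj().T @ U, np.eye(4)))       # unitary along the path
```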



Related research

It is believed that random quantum circuits are difficult to simulate classically. These have been used to demonstrate quantum supremacy: the execution of a computational task on a quantum computer that is infeasible for any classical computer. The task underlying the assertion of quantum supremacy by Arute et al. (Nature 574, 505--510 (2019)) was initially estimated to require approximately 10,000 years on Summit, the world's most powerful supercomputer today. The same task was performed on the Sycamore quantum processor in only 200 seconds. In this work, we present a tensor network-based classical simulation algorithm. Using a Summit-comparable cluster, we estimate that our simulator can perform this task in less than 20 days. On moderately-sized instances, we reduce the runtime from years to minutes, running several times faster than Sycamore itself. These estimates are based on explicit simulations of parallel subtasks, and leave no room for hidden costs. The simulator's key ingredient is identifying and optimizing the stem of the computation: a sequence of pairwise tensor contractions that dominates the computational cost. This orders-of-magnitude reduction in classical simulation time, together with proposals for further significant improvements, indicates that achieving quantum supremacy may require a period of continuing quantum hardware developments without an unequivocal first demonstration.
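
The cost bottleneck described here, choosing a good order of pairwise tensor contractions, can be illustrated with a toy matrix chain (this is not the paper's simulator, just a reminder that the contraction order changes the operation count but not the result):

```python
import numpy as np

# Matrix-chain toy: A (10 x 100), B (100 x 5), C (5 x 50).
rng = np.random.default_rng(1)
A = rng.random((10, 100))
B = rng.random((100, 5))
C = rng.random((5, 50))

# Order 1: (A B) C  ->  10*100*5 + 10*5*50  =  7,500 scalar multiplications.
res1 = (A @ B) @ C

# Order 2: A (B C)  ->  100*5*50 + 10*100*50 = 75,000 scalar multiplications.
res2 = A @ (B @ C)

print(np.allclose(res1, res2))   # True: the value is order-independent.
# The cost is not order-independent; for the irregular tensor networks of a
# 53-qubit random circuit the spread between contraction orders is vastly
# larger, which is why identifying and optimizing the "stem" matters.
```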
Ramis Movassagh (2018)
One-parameter interpolations between any two unitary matrices (e.g., quantum gates) $U_1$ and $U_2$ along efficient paths contained in the unitary group are constructed. Motivated by applications, we propose the continuous unitary path $U(\theta)$ obtained from the QR-factorization \[ U(\theta)R(\theta)=(1-\theta)A+\theta B, \] where $U_1 R_1=A$ and $U_2 R_2=B$ are the QR-factorizations of $A$ and $B$, and $U(\theta)$ is unitary for all $\theta$ with $U(0)=U_1$ and $U(1)=U_2$. The QR-algorithm is modified to output, instead of $U(\theta)$, a matrix whose columns are proportional to the corresponding columns of $U(\theta)$ and whose entries are polynomial or rational functions of $\theta$. By an extension of the Berlekamp-Welch algorithm we show that rational functions can be efficiently and exactly interpolated with respect to $\theta$. We then construct probability distributions over unitaries that are arbitrarily close to the Haar measure. Demonstration of computational advantages of NISQ devices over classical computers is an imperative near-term goal, especially with the exuberant experimental frontier in academia and industry (e.g., IBM and Google). A candidate for quantum computational supremacy is Random Circuit Sampling (RCS), which is the task of sampling from the output distribution of a random circuit. The aforementioned mathematical results provide a new way of scrambling quantum circuits and are applied to prove that exact RCS is $\#P$-hard on average, which is a simpler alternative to Bouland et al.'s proof. (Dis)proving the quantum supremacy conjecture requires approximate average-case hardness; this remains an open problem for all quantum supremacy proposals.
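
A minimal NumPy sketch of the QR-based path defined by the display equation above. To make the endpoints reproduce $U_1$ and $U_2$ exactly, the sketch fixes the usual phase ambiguity of the QR factorization by forcing the diagonal of $R$ to be real and positive; the paper's modified QR algorithm (which outputs rational-function entries) is not reproduced here.

```python
import numpy as np

def qr_positive(M):
    """QR factorization with the diagonal of R real and positive (unique)."""
    Q, R = np.linalg.qr(M)
    phases = np.diag(R) / np.abs(np.diag(R))
    return Q * phases, R / phases[:, None]

def qr_path(A, B, theta):
    """U(theta) from U(theta) R(theta) = (1 - theta) A + theta B."""
    U, _ = qr_positive((1 - theta) * A + theta * B)   # assumes the mixture is nonsingular
    return U

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U1, _ = qr_positive(A)    # A = U1 R1
U2, _ = qr_positive(B)    # B = U2 R2

print(np.allclose(qr_path(A, B, 0.0), U1))        # U(0) = U1
print(np.allclose(qr_path(A, B, 1.0), U2))        # U(1) = U2
U = qr_path(A, B, 0.5)
print(np.allclose(U.conj().T @ U, np.eye(4)))     # unitary along the path
```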
We report a cluster of results on $k$-QSAT, the problem of quantum satisfiability for $k$-qubit projectors, which generalizes classical satisfiability with $k$-bit clauses to the quantum setting. First, we define the NP-complete problem of product satisfiability and give a geometrical criterion for deciding when a QSAT interaction graph is product satisfiable with positive probability. We show that the same criterion suffices to establish quantum satisfiability for all projectors. Second, we apply these results to the random graph ensemble with generic projectors and obtain improved lower bounds on the location of the SAT--unSAT transition. Third, we present numerical results on random, generic satisfiability which provide estimates for the location of the transition for $k=3$ and $k=4$ and mild evidence for the existence of a phase which is satisfiable by entangled states alone.
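
For concreteness, a $k$-QSAT instance given by projectors $\{\Pi_i\}$ is satisfiable exactly when the positive semidefinite Hamiltonian $H=\sum_i \Pi_i$ has a zero-energy ground state. The brute-force check below (exponential in the number of qubits, so illustration only) uses rank-1 projectors on neighbouring pairs of a small chain; the clause structure and projector ranks are arbitrary choices, not the random-graph ensemble studied in the paper.

```python
import numpy as np
from functools import reduce

def random_rank1_projector(dim, rng):
    """Projector |psi><psi| onto a random dim-dimensional state."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def embed_on_consecutive(P, first, k, n):
    """Embed a k-qubit operator P acting on qubits first..first+k-1 of n qubits."""
    left = np.eye(2 ** first)
    right = np.eye(2 ** (n - first - k))
    return reduce(np.kron, (left, P, right))

# A toy 2-QSAT instance: one random rank-1 projector on each neighbouring pair.
n, k = 4, 2
rng = np.random.default_rng(3)
H = sum(embed_on_consecutive(random_rank1_projector(2 ** k, rng), i, k, n)
        for i in range(n - k + 1))

# Zero ground energy <=> some state is annihilated by every projector.
# (Here rank(H) <= 3 < 2^n, so this sparse instance is trivially satisfiable;
# denser instances can be unsatisfiable.)
ground_energy = np.linalg.eigvalsh(H)[0]
print(ground_energy, bool(abs(ground_energy) < 1e-10))
```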
Motivated by the recent experimental demonstrations of quantum supremacy, proving the hardness of the output of random quantum circuits is an imperative near-term goal. We prove, under the complexity-theoretic assumption of the non-collapse of the polynomial hierarchy, that approximating the output probabilities of random quantum circuits to within $\exp(-\Omega(m\log m))$ additive error is hard for any classical computer, where $m$ is the number of gates in the quantum computation. More precisely, we show that the above problem is $\#\mathsf{P}$-hard under $\mathsf{BPP}^{\mathsf{NP}}$ reduction. In the recent experiments, the quantum circuit has $n$ qubits and the architecture is a two-dimensional grid of size $\sqrt{n}\times\sqrt{n}$. Indeed, for constant-depth circuits, approximating the output probabilities to within $2^{-\Omega(n\log n)}$ is hard. For circuits of depth $\log n$ or $\sqrt{n}$, for which the anti-concentration property holds, approximating the output probabilities to within $2^{-\Omega(n\log^2 n)}$ and $2^{-\Omega(n^{3/2}\log n)}$, respectively, is hard. We strove to find the simplest proofs and prove these results from first principles, without using standard techniques such as the Berlekamp--Welch algorithm, the usual form of Paturi's lemma, and Rakhmanov's result.
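
To put these error scales in perspective, a typical output probability of an $n$-qubit circuit is of order $2^{-n}$, while the additive errors tolerated by these hardness results shrink like $2^{-\Omega(n\log n)}$ or faster. The back-of-the-envelope comparison below sets every constant hidden in the $\Omega(\cdot)$ to $1$ purely for illustration:

```python
import math

n = 53                               # qubits, as in the Sycamore experiment
log2_typical = -n                    # log2 of a typical output probability, 2^{-n}

# log2 of the tolerated additive errors quoted above (Omega-constants set to 1):
scales = {
    "constant depth": -n * math.log2(n),            # 2^{-Omega(n log n)}
    "depth log n":    -n * math.log2(n) ** 2,       # 2^{-Omega(n log^2 n)}
    "depth sqrt(n)":  -(n ** 1.5) * math.log2(n),   # 2^{-Omega(n^{3/2} log n)}
}

print(f"typical output probability ~ 2^{log2_typical}")
for name, log2_err in scales.items():
    print(f"{name:>15}: tolerated additive error ~ 2^{log2_err:.0f} "
          f"(a factor 2^{log2_typical - log2_err:.0f} below 2^-{n})")
```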
Random quantum circuits have played a central role in establishing the computational advantages of near-term quantum computers over their conventional counterparts. Here, we use ensembles of low-depth random circuits with local connectivity in $D\ge 1$ spatial dimensions to generate quantum error-correcting codes. For random stabilizer codes and the erasure channel, we find strong evidence that a depth $O(\log N)$ random circuit is necessary and sufficient to converge (with high probability) to zero failure probability for any finite amount below the optimal erasure threshold, set by the channel capacity, for any $D$. Previous results on random circuits have only shown that $O(N^{1/D})$ depth suffices or that $O(\log^3 N)$ depth suffices for all-to-all connectivity ($D\to\infty$). We then study the critical behavior of the erasure threshold in the so-called moderate deviation limit, where both the failure probability and the distance to the optimal threshold converge to zero with $N$. We find that the requisite depth scales like $O(\log N)$ only for dimensions $D\ge 2$, and that random circuits require $O(\sqrt{N})$ depth for $D=1$. Finally, we introduce an expurgation algorithm that uses quantum measurements to remove logical operators that cause the code to fail by turning them into additional stabilizers or gauge operators. With such targeted measurements, we can achieve sub-logarithmic depth in $D\ge 2$ below capacity without increasing the maximum weight of the check operators. We find that for any rate beneath the capacity, high-performing codes with thousands of logical qubits are achievable with depth 4-8 expurgated random circuits in $D=2$ dimensions. These results indicate that finite-rate quantum codes are practically relevant for near-term devices and may significantly reduce the resource requirements to achieve fault tolerance for near-term applications.
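
For reference, the optimal erasure threshold mentioned above follows from the capacity of the quantum erasure channel, $\max(0, 1-2p)$: a code of rate $R=k/N$ is below capacity exactly when the erasure probability satisfies $p<(1-R)/2$. A small script tabulating this threshold and the quoted depth scalings (with unit constants and base-2 logarithms chosen arbitrarily for illustration):

```python
import math

def erasure_capacity_threshold(rate):
    """Erasure probability below which a rate-R code can in principle succeed.

    The quantum erasure channel has capacity max(0, 1 - 2p), so a rate-R code
    is below capacity exactly when p < (1 - R) / 2.
    """
    return (1.0 - rate) / 2.0

for rate in (0.05, 0.10, 0.25, 0.50):
    print(f"rate R = {rate:.2f}  ->  optimal erasure threshold p_c = "
          f"{erasure_capacity_threshold(rate):.3f}")

# Depth scalings quoted above (unit constants, purely illustrative):
for N in (10 ** 2, 10 ** 3, 10 ** 4):
    print(f"N = {N:>6}:  D >= 2 needs depth ~ log N = {math.log2(N):.0f},  "
          f"D = 1 needs depth ~ sqrt(N) = {math.sqrt(N):.0f}")
```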