The only known way to study quantum field theories in non-perturbative regimes is through numerical calculations regulated on discrete space-time lattices. Such computations, however, often face exponential signal-to-noise challenges that render key physics studies untenable even with next-generation classical computing. Here, a method is presented by which the output of small-scale quantum computations on Noisy Intermediate-Scale Quantum (NISQ) era hardware can be used to accelerate larger-scale classical field-theory calculations through the construction of optimized interpolating operators. The method is implemented and studied in the context of the 1+1-dimensional Schwinger model, a simple field theory that shares key features with the Standard Model of nuclear and particle physics.
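To make the idea concrete, here is a minimal numerical sketch of the interpolating-operator construction. It is not the paper's implementation: a 4-site transverse-field Ising chain stands in for the Schwinger model, exact diagonalization plays the role of the small quantum computation, and all names are illustrative. The point is only that a source optimized for overlap with the target state yields an effective-mass plateau at earlier Euclidean times, which is what improves the signal-to-noise of a subsequent large-scale calculation.

```python
import numpy as np

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4  # toy system small enough for exact diagonalization
H = -sum(op_at(Z, i, n) @ op_at(Z, (i + 1) % n, n) for i in range(n))
H -= 0.7 * sum(op_at(X, i, n) for i in range(n))

E, V = np.linalg.eigh(H)        # stands in for the small quantum computation
vac, target = V[:, 0], V[:, 1]  # "vacuum" and the state we want to isolate

# Two candidate interpolating operators applied to the vacuum.
sources = [op_at(Z, i, n) @ vac for i in (0, 1)]

def effective_mass(src, ts):
    """m_eff(t) from the correlator C(t) = <src| exp(-(H - E0) t) |src>."""
    amp = V.conj().T @ src      # decompose the source in the energy basis
    C = np.array([np.sum(np.abs(amp)**2 * np.exp(-(E - E[0]) * t)) for t in ts])
    return np.log(C[:-1] / C[1:]) / (ts[1] - ts[0])

# "Optimized" operator: the linear combination with maximal overlap onto the
# target state, found here by least squares (the quantum device's role).
A = np.stack(sources, axis=1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

ts = np.arange(0.0, 3.0, 0.3)
print("naive     m_eff:", np.round(effective_mass(sources[0], ts), 3))
print("optimized m_eff:", np.round(effective_mass(A @ coef, ts), 3))
print("exact gap E1-E0:", round(E[1] - E[0], 3))
```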
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks that surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away; we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.
We discuss the successes and limitations of statistical sampling for a sequence of models studied in the context of lattice QCD and emphasize the need for new methods to deal with finite-density and real-time evolution. We show that these lattice models can be reformulated using tensorial methods where the field integrations in the path-integral formalism are replaced by discrete sums. These formulations involve various types of duality and provide exact coarse-graining formulas which can be combined with truncations to obtain practical implementations of the Wilson renormalization group program. Tensor reformulations are naturally discrete and provide manageable transfer matrices. Combining truncations with the time continuum limit, we derive Hamiltonians suitable to perform quantum simulation experiments, for instance using cold atoms, or to be programmed on existing quantum computers. We review recent progress concerning the tensor field theory treatment of non-compact scalar models, supersymmetric models, economical four-dimensional algorithms, noise-robust enforcement of Gauss's law, symmetry-preserving truncations, and topological considerations.
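As a concrete illustration of "exact coarse-graining formulas combined with truncations", the following sketch implements one of the simplest such schemes, Levin-Nave tensor renormalization (TRG), for the 2D Ising model. It is a generic textbook example written under assumed index conventions, not code from this review: each step exactly rewrites the partition function as a contraction of three-leg tensors obtained by SVD, then truncates the new bond to dimension chi.

```python
import numpy as np

def ising_tensor(beta):
    """Local tensor T[u,r,d,l] of the 2D Ising model: the spin sums are
    discrete from the start, as in the tensor reformulations above."""
    c, s = np.sqrt(np.cosh(beta)), np.sqrt(np.sinh(beta))
    W = np.array([[c, s], [c, -s]])     # exp(b*s*s') = sum_k W[s,k] W[s',k]
    return np.einsum('au,ar,ad,al->urdl', W, W, W, W)

def split(M, chi):
    """Truncated SVD of a D^2 x D^2 matrix into two three-leg tensors."""
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    k = min(chi, S.size)
    D = int(round(np.sqrt(M.shape[0])))
    A = (U[:, :k] * np.sqrt(S[:k])).reshape(D, D, k)
    B = (np.sqrt(S[:k])[:, None] * Vh[:k, :]).reshape(k, D, D)
    return A, B

def trg_step(T, chi):
    """One Levin-Nave step: exact rewriting plus rank-chi truncation."""
    D = T.shape[0]
    F1, F2 = split(T.reshape(D * D, D * D), chi)                        # (u,r)|(d,l) cut
    F3, F4 = split(T.transpose(3, 0, 1, 2).reshape(D * D, D * D), chi)  # (l,u)|(r,d) cut
    # Contract four triangles around a plaquette into the coarse tensor.
    return np.einsum('zwl,wxd,rxy,uyz->urdl', F1, F3, F2, F4)

beta, chi, steps = 0.5, 16, 12
T, lnZ, sites = ising_tensor(beta), 0.0, 1.0
for _ in range(steps):                  # each step halves the number of sites
    norm = np.abs(T).max()
    lnZ += np.log(norm) / sites
    T, sites = trg_step(T / norm, chi), 2.0 * sites
lnZ += np.log(np.einsum('urur->', T)) / sites  # rough single-tensor closure
print("ln Z per site:", lnZ)            # compare with Onsager's exact solution
```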
As NISQ devices have several physical limitations and unavoidably noisy quantum operations, only small circuits can be executed on a quantum machine with reliable results. This leads to under-utilization of the quantum hardware. Here, we address this problem and improve quantum hardware throughput by proposing a multiprogramming approach that executes multiple quantum circuits on the hardware simultaneously. We first introduce a parallelism manager to select an appropriate number of circuits to be executed at the same time. Second, we present two qubit partitioning algorithms, one greedy and one heuristic, to allocate reliable partitions to multiple circuits. Third, we use the Simultaneous Randomized Benchmarking protocol to characterize crosstalk and account for it during qubit partitioning, avoiding crosstalk effects during simultaneous executions. Finally, we enhance the mapping transition algorithm to make circuits executable on hardware with a reduced number of inserted gates. We demonstrate the performance of our multiprogramming approach by executing circuits of different sizes on IBM quantum hardware simultaneously. We also apply the method to the VQE algorithm to reduce its overhead.
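For illustration, here is a hypothetical sketch of the greedy flavor of qubit partitioning: grow a connected block of physical qubits from every available seed, score blocks with calibration data, and allocate the best block to each circuit while excluding qubits already in use. The coupling map, error rates, and function names below are invented for the example and are not the paper's API.

```python
# Toy coupling map (adjacency list of a 7-qubit device) and per-qubit
# readout error, standing in for real calibration data.
coupling = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 5], 4: [5], 5: [3, 4, 6], 6: [5]}
readout_err = {0: 0.03, 1: 0.01, 2: 0.04, 3: 0.02, 4: 0.05, 5: 0.01, 6: 0.02}

def grow_partition(seed, size, used):
    """Greedily grow a connected qubit set from `seed`, always absorbing
    the lowest-error neighbor, skipping already-allocated qubits."""
    part = {seed}
    while len(part) < size:
        frontier = {n for q in part for n in coupling[q]} - part - used
        if not frontier:
            return None                 # cannot reach the requested size
        part.add(min(frontier, key=lambda q: readout_err[q]))
    return part

def allocate(circuit_sizes):
    """Assign each circuit a reliable partition; largest circuits first."""
    used, assignment = set(), {}
    for idx in sorted(range(len(circuit_sizes)), key=lambda i: -circuit_sizes[i]):
        candidates = [grow_partition(s, circuit_sizes[idx], used)
                      for s in coupling if s not in used]
        candidates = [p for p in candidates if p]
        if not candidates:
            raise RuntimeError("hardware cannot host all circuits at once")
        best = min(candidates, key=lambda p: sum(readout_err[q] for q in p))
        assignment[idx] = best
        used |= best
    return assignment

# Two 2-qubit circuits run side by side on disjoint, low-error blocks.
print(allocate([2, 2]))   # e.g. {0: {1, 3}, 1: {5, 6}}
```

A real partitioner would score blocks with two-qubit gate errors and the measured crosstalk pairs as well; the readout-error score above is only the simplest stand-in.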
Tunneling in quantum field theory is worth understanding properly, not least because it controls the long-term fate of our universe. There are, however, a number of features of tunneling-rate calculations that lack a desirable transparency, such as the necessity of analytic continuation, the appropriateness of using an effective instead of a classical potential, and the sensitivity to short-distance physics. This paper attempts to review in pedagogical detail the physical origin of tunneling and its connection to the path integral. Both the traditional potential-deformation method and a recent, more direct propagator-based method are discussed. Some new insights from using approximate semi-classical solutions are presented. In addition, we explore the sensitivity of the lifetime of our universe to short-distance physics, such as quantum gravity, emphasizing a number of important subtleties.
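For context, the semiclassical result at the center of such calculations can be stated compactly; this is the standard Callan-Coleman bounce formula for a single scalar field in four Euclidean dimensions, reproduced from the literature rather than derived in this abstract:

```latex
% Euclidean action and false-vacuum decay rate per unit volume.
% \phi_b is the bounce: the O(4)-symmetric solution of the Euclidean
% equations of motion that starts and ends at the false vacuum \phi_{fv}.
S_E[\phi] = \int \mathrm{d}^4x \,\Big[ \tfrac{1}{2}\,(\partial_\mu \phi)^2 + V(\phi) \Big],
\qquad
\frac{\Gamma}{V}
  = \left(\frac{S_E[\phi_b]}{2\pi\hbar}\right)^{2}
    \left|
      \frac{\det'\!\big[-\partial^2 + V''(\phi_b)\big]}
           {\det\!\big[-\partial^2 + V''(\phi_{fv})\big]}
    \right|^{-1/2}
    e^{-S_E[\phi_b]/\hbar}.
% det' omits the four translational zero modes (already accounted for by
% the (S_E/2\pi\hbar)^2 prefactor); the bounce's single negative mode is
% what forces the analytic continuation discussed in the abstract above.
```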
This is the report of the Computing Frontier working group on Lattice Field Theory prepared for the proceedings of the 2013 Community Summer Study (Snowmass). We present the future computing needs and plans of the U.S. lattice gauge theory community and argue that continued support of the U.S. (and worldwide) lattice-QCD effort is essential to fully capitalize on the enormous investment in the high-energy physics experimental program. We first summarize the dramatic progress of numerical lattice-QCD simulations in the past decade, with some emphasis on calculations carried out under the auspices of the U.S. Lattice-QCD Collaboration, and describe a broad program of lattice-QCD calculations that will be relevant for future experiments at the intensity and energy frontiers. We then present details of the computational hardware and software resources needed to undertake these calculations.