We present a first attempt to design a quantum circuit for the determination of the parton content of the proton through the estimation of parton distribution functions (PDFs), in the context of high-energy physics (HEP). The growing interest in quantum computing and the recent development of new algorithms and quantum hardware devices motivate the study of such methodologies applied to HEP. In this work we identify architectures of variational quantum circuits suitable for PDF representation (qPDFs). We present experiments on the deployment of qPDFs on real quantum devices, taking current experimental limitations into account. Finally, we perform a global qPDF determination from collider data using quantum computer simulation on classical hardware, and we compare the resulting partons and related phenomenological predictions for hadronic processes to modern PDFs.
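A minimal classical sketch may help convey what a qPDF-style ansatz looks like. The toy model below is our own construction for illustration, not the circuits identified in this work: the single-qubit layer structure, the logarithmic $x$-dependence of the rotation angles, and the map from $\langle Z\rangle$ to a non-negative PDF-like value are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def ry(angle):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def qpdf_state(theta, x, layers=3):
    """|psi(x)> = prod_l Ry(theta[2l] + theta[2l+1] * log x) |0>."""
    state = np.array([1.0, 0.0])
    for l in range(layers):
        state = ry(theta[2 * l] + theta[2 * l + 1] * np.log(x)) @ state
    return state

def qpdf_value(theta, x):
    """Map <Z> in [-1, 1] to a non-negative PDF-like number."""
    state = qpdf_state(theta, x)
    z_exp = abs(state[0]) ** 2 - abs(state[1]) ** 2
    return (1.0 - z_exp) / (1.0 + z_exp + 1e-9)

# Fit the ansatz to a toy valence-like shape on a grid in x.
xs = np.linspace(0.01, 0.9, 40)
target = xs ** -0.5 * (1 - xs) ** 3

def loss(theta):
    pred = np.array([qpdf_value(theta, x) for x in xs])
    return np.mean((pred - target) ** 2)

res = minimize(loss, x0=np.zeros(6), method="Nelder-Mead")
print("final loss:", res.fun)
```

On real hardware the expectation $\langle Z\rangle$ would be estimated from repeated measurements rather than computed exactly, which is where the experimental limitations mentioned above enter.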
Forthcoming exascale digital computers will further advance our knowledge of quantum chromodynamics, but formidable challenges will remain. In particular, Euclidean Monte Carlo methods are not well suited for studying real-time evolution in hadronic collisions, or the properties of hadronic matter at nonzero temperature and chemical potential. Digital computers may never be able to achieve accurate simulations of such phenomena in QCD and other strongly-coupled field theories; quantum computers will do so eventually, though I'm not sure when. Progress toward quantum simulation of quantum field theory will require the collaborative efforts of quantumists and field theorists, and though the physics payoff may still be far away, it's worthwhile to get started now. Today's research can hasten the arrival of a new era in which quantum simulation fuels rapid progress in fundamental physics.
We investigate the feasibility of constraining parton distribution functions in the proton through a comparison with data on semi-inclusive deep-inelastic lepton-nucleon scattering. Specifically, we reweight replicas of these distributions according to how well they reproduce recent, very precise charged-kaon multiplicity measurements, and we analyze how this procedure optimizes the determination of the sea-quark densities and reduces their uncertainties. The results can help to shed new light on the long-standing question of the size of the flavor and charge symmetry breaking among quarks of radiative origin. An iterative method is proposed and adopted to account for the inevitable correlation with what is assumed about the parton-to-hadron fragmentation functions in the reweighting procedure. It is shown how the fragmentation functions can be optimized simultaneously at each step of the iteration. As a first case study, we implement this method to analyze kaon-production data.
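To make the reweighting step concrete, here is a minimal sketch of Giele-Keller/NNPDF-style Bayesian reweighting; the replicas, data, and diagonal uncertainties are toy stand-ins, and only the weight formula $w_k \propto (\chi^2_k)^{(n-1)/2}\, e^{-\chi^2_k/2}$ and the effective-replica count follow the standard prescription.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep, n_dat = 100, 20

# Toy "theory predictions" of each replica for n_dat data points,
# plus toy data with uncorrelated uncertainties.
predictions = 1.0 + 0.1 * rng.standard_normal((n_rep, n_dat))
data = 1.0 + 0.05 * rng.standard_normal(n_dat)
sigma = 0.05 * np.ones(n_dat)

# chi^2 of each replica against the new data set.
chi2 = np.sum(((predictions - data) / sigma) ** 2, axis=1)

# w_k ~ chi2_k^((n-1)/2) exp(-chi2_k / 2), computed in log space
# for numerical stability and normalized to sum to one.
log_w = 0.5 * (n_dat - 1) * np.log(chi2) - 0.5 * chi2
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Reweighted prediction and the effective number of replicas,
# which quantifies how much information the new data carry.
mean_rw = weights @ predictions
nz = weights[weights > 0]
n_eff = np.exp(-np.sum(nz * np.log(nz)))
print(f"N_eff = {n_eff:.1f} of {n_rep} replicas")
```

In the iterative method described above, one would refit the fragmentation functions after each such step before recomputing the $\chi^2$ values.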
Recently, two photon PDF sets based on implementations of the LUX ansatz in the CT18 global analysis were released. In CT18lux, the photon PDF is calculated directly using the LUX master formula for all scales, $\mu$. In an alternative realization, CT18qed, the photon PDF is initialized at the starting scale, $\mu_0$, using the LUX formulation and evolved to higher scales $\mu\,(>\mu_0)$ with a combined QED+QCD kernel at $\mathcal{O}(\alpha)$, $\mathcal{O}(\alpha\alpha_s)$, and $\mathcal{O}(\alpha^2)$. In the small-$x$ region, the photon PDF uncertainty is mainly induced by the quark and gluon PDFs, through the perturbative DIS structure functions. In comparison, the large-$x$ photon uncertainty comes from various low-energy, nonperturbative contributions, including variations of the inelastic structure functions in the resonance and continuum regions, higher-twist and target-mass corrections, and elastic electromagnetic form factors of the proton. We take the production of doubly-charged Higgs pairs, $(H^{++}H^{--})$, as an example of scenarios beyond the Standard Model to illustrate the phenomenological implications of these photon PDFs at the LHC.
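As a rough illustration of the kind of QED evolution entering CT18qed, the sketch below evaluates only the leading $\mathcal{O}(\alpha)$ inhomogeneous source term $(\alpha/2\pi)\sum_q e_q^2\, P_{\gamma q}\otimes q$, with $P_{\gamma q}(z)=[1+(1-z)^2]/z$. The quark distributions are made up, and the $P_{\gamma\gamma}$ term and all higher-order kernels are omitted; this is not the released CT18 code.

```python
import numpy as np

ALPHA = 1.0 / 137.0
EQ2 = {"u": 4.0 / 9.0, "d": 1.0 / 9.0}   # squared quark charges

def quark(flavor, x):
    """Toy quark-plus-antiquark distribution (made up)."""
    return {"u": 2.0, "d": 1.0}[flavor] * x ** -0.5 * (1 - x) ** 3

def p_gamma_q(z):
    """LO q -> gamma splitting function."""
    return (1 + (1 - z) ** 2) / z

def photon_source(x, n=2000):
    """(alpha/2pi) sum_q e_q^2 int_x^1 dz/z P_{gamma q}(z) q(x/z)."""
    z = np.linspace(x, 1.0, n, endpoint=False)
    dz = z[1] - z[0]
    total = 0.0
    for fl, e2 in EQ2.items():
        total += e2 * np.sum(p_gamma_q(z) * quark(fl, x / z) / z) * dz
    return ALPHA / (2 * np.pi) * total

print(photon_source(0.01))
```

At small $x$ this source term inherits the quark-PDF uncertainties directly, consistent with the behaviour described above.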
We present a novel framework for simulating matrix models on a quantum computer. Supersymmetric matrix models have natural applications to superstring/M-theory and gravitational physics, in an appropriate limit of parameters. Furthermore, for certain states in the Berenstein-Maldacena-Nastase (BMN) matrix model, several supersymmetric quantum field theories dual to superstring/M-theory can be realized on a quantum device. Our prescription consists of four steps: regularization of the Hilbert space, adiabatic state preparation, simulation of real-time dynamics, and measurements. Regularization is performed for the BMN matrix model by introducing an energy cut-off via truncation of the Fock space. We use the Wan-Kim algorithm for fast digital adiabatic state preparation to prepare the low-energy eigenstates of this model, as well as the thermofield double state. Then, we provide an explicit construction for simulating real-time dynamics utilizing techniques of block-encoding, qubitization, and quantum signal processing. Lastly, we present a set of measurements and experiments that can be carried out on a quantum computer to further our understanding of superstring/M-theory beyond analytic results.
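The first step, regularization via Fock-space truncation, is easy to illustrate classically. The sketch below uses a single bosonic mode with a made-up quartic Hamiltonian as a stand-in for the BMN matrix model, building the truncated ladder operators and showing the ground-state energy converge as the cutoff grows.

```python
import numpy as np

def annihilation(cutoff):
    """Truncated annihilation operator on the Fock basis |0>, ..., |cutoff-1>."""
    return np.diag(np.sqrt(np.arange(1, cutoff)), k=1)

def toy_hamiltonian(cutoff, g=0.1):
    """H = a^dag a + g (a + a^dag)^4 -- a toy single-mode stand-in."""
    a = annihilation(cutoff)
    x = a + a.conj().T
    return a.conj().T @ a + g * np.linalg.matrix_power(x, 4)

for cutoff in (4, 8, 16, 32):
    e0 = np.linalg.eigvalsh(toy_hamiltonian(cutoff))[0]
    print(f"cutoff = {cutoff:2d}: E0 = {e0:.6f}")
```

For the BMN model the same idea applies mode by mode, with the cutoff chosen so that the states of interest sit well below the truncation energy.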
Atomic nuclei are important laboratories for exploring and testing new insights into the universe, such as experiments to directly detect dark matter or explore properties of neutrinos. The targets of interest are often heavy, complex nuclei that challenge our ability to reliably model them (as well as to quantify the uncertainty of those models) with classical computers. Hence there is great interest in applying quantum computation to nuclear structure for these applications. As an early step in this direction, especially with regard to the uncertainties in the relevant quantum calculations, we develop circuits to implement variational quantum eigensolver (VQE) algorithms for the Lipkin-Meshkov-Glick model, which is often used in the nuclear physics community as a testbed for many-body methods. We present quantum circuits for VQE for 2 and 3 particles and discuss the construction of circuits for more particles. Implementing the VQE for a 2-particle system on the IBM Quantum Experience, we identify initialization and two-qubit gates as the largest sources of error. We find that error-mitigation procedures reduce the errors in the results significantly, but additional quantum hardware improvements are needed for quantum calculations to be sufficiently accurate to be competitive with the best current classical methods.
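For the 2-particle case the model is small enough to emulate the VQE classically. In the $m=\pm 1$ block of the $j=1$ multiplet, the LMG Hamiltonian $H=\epsilon J_z - \tfrac{V}{2}(J_+^2+J_-^2)$ (in one common sign convention) reduces to the single-qubit operator $-\epsilon Z - V X$, with exact ground energy $-\sqrt{\epsilon^2+V^2}$. The sketch below minimizes the energy of an assumed $R_y$ ansatz over one angle; it is a noiseless emulation, not the hardware circuits of this work.

```python
import numpy as np
from scipy.optimize import minimize_scalar

EPS, V = 1.0, 0.5
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = -EPS * Z - V * X          # N = 2 LMG, reduced to one qubit

def energy(theta):
    """<psi(theta)|H|psi(theta)> with |psi> = Ry(theta)|0>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

res = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
print(f"VQE energy  : {res.fun:.6f}")
print(f"exact energy: {-np.sqrt(EPS**2 + V**2):.6f}")
```

On hardware, each evaluation of the cost function requires estimating $\langle Z\rangle$ and $\langle X\rangle$ from measurement statistics, which is where the initialization and gate errors identified above enter.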