Quantum ESPRESSO is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves, and renowned for its performance on a wide range of hardware architectures, from laptops to massively parallel computers, as well as for the breadth of its applications. In this paper we present the motivation for, and a brief review of, the ongoing effort to port Quantum ESPRESSO onto heterogeneous architectures based on hardware accelerators, which promise to overcome the energy constraints currently hindering progress towards exascale computing.
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more. 2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze these large and complex datasets. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
In galaxy clusters, modern radio interferometers observe non-thermal radio sources with unprecedented spatial and spectral resolution. For the first time, these data allow us to infer the structure of intra-cluster magnetic fields on small scales via Faraday tomography. This leap forward demands new numerical models for the amplification of magnetic fields during cosmic structure formation - the cosmological magnetic dynamo. Here we present a novel numerical approach to astrophysical MHD simulations aimed at resolving this small-scale dynamo in future cosmological simulations. As a first step, we implement a fifth-order WENO scheme in the new code WOMBAT. We show that this scheme doubles the effective resolution of the simulation and is thus less expensive than commonly used second-order schemes. WOMBAT uses a novel approach to parallelization and load balancing developed in collaboration with performance engineers at Cray Inc. This will allow us to scale simulations to the exaflop regime and to achieve kpc resolution in future cosmological simulations of galaxy clusters. Here we demonstrate the excellent scaling properties of the code and argue that resolved simulations of the cosmological small-scale dynamo within the whole virial radius will be possible in the coming years.
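For illustration, the core of a fifth-order WENO reconstruction can be sketched in a few lines. The Python/NumPy snippet below reconstructs left-biased interface states from 1D cell averages using the standard Jiang-Shu smoothness indicators and ideal weights. It is a minimal sketch for exposition only, not the Fortran MHD implementation used in WOMBAT; the array names and the value of eps are illustrative choices.

```python
# Illustrative fifth-order WENO (Jiang-Shu) reconstruction of left-biased
# interface states from 1D cell averages; a minimal sketch, not the
# implementation used in WOMBAT.
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Return the reconstructed value at interface i+1/2 for each interior cell i.

    v   : 1D array of cell averages (needs at least 5 cells)
    eps : regularization that avoids division by zero in smooth regions
    """
    # Shifted views: vm2 = v[i-2], ..., vp2 = v[i+2] for i = 2 .. len(v)-3
    vm2, vm1, v0, vp1, vp2 = v[:-4], v[1:-3], v[2:-2], v[3:-1], v[4:]

    # Candidate third-order reconstructions on the three sub-stencils
    p0 = (2.0 * vm2 - 7.0 * vm1 + 11.0 * v0) / 6.0
    p1 = (-vm1 + 5.0 * v0 + 2.0 * vp1) / 6.0
    p2 = (2.0 * v0 + 5.0 * vp1 - vp2) / 6.0

    # Smoothness indicators (large where a sub-stencil crosses a discontinuity)
    b0 = 13.0/12.0 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13.0/12.0 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13.0/12.0 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2

    # Nonlinear weights: recover the ideal weights (1/10, 6/10, 3/10) in smooth
    # regions, suppress oscillatory stencils near shocks
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s
```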
Drip-line nuclei have very different properties from those in the valley of stability, as they are weakly bound and resonant. Therefore, the models devised for stable nuclei can no longer be applied to them. Hence, a new theoretical tool, the Gamow Shell Model (GSM), has been developed to study the many-body states occurring at the limits of the nuclear chart. GSM is a configuration interaction model based on the so-called Berggren basis, which contains bound, resonant, and scattering states, so that inter-nucleon correlations are fully taken into account and the asymptotic behavior of extended many-body wave functions is precisely handled. However, large complex symmetric matrices must be diagonalized in this framework, so the use of very powerful parallel machines is needed. In order to take full advantage of their power, a 2D partitioning scheme using hybrid MPI/OpenMP parallelization has been developed in our GSM code. The specificities of the 2D partitioning scheme in the GSM framework will be described and illustrated with numerical examples. It will then be shown that the introduction of this scheme in the GSM code greatly enhances its capabilities.
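As a rough illustration of the idea behind a 2D partitioning, the sketch below distributes a complex symmetric matrix over a square process grid and assembles the matrix-vector product that dominates a Lanczos-type diagonalization. It shows only the MPI layer (the OpenMP threading inside each block is omitted), uses mpi4py rather than the actual GSM code, and the grid shape, matrix dimension, and random block contents are illustrative assumptions.

```python
# A minimal sketch (not the GSM code) of a 2D block partitioning of a complex
# symmetric matrix-vector product y = H x, the kernel of an iterative
# diagonalization.  Assumes the number of ranks is a perfect square q*q and
# that q divides the global dimension n; data are random placeholders.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
q = int(round(size ** 0.5))           # q x q process grid
prow, pcol = divmod(rank, q)

# Communicators along the grid's rows and columns
row_comm = comm.Split(color=prow, key=pcol)
col_comm = comm.Split(color=pcol, key=prow)

n = 1024                              # global dimension (illustrative)
nb = n // q                           # block size per rank

# Each rank owns the (prow, pcol) block of H and the pcol-th slice of x.
rng = np.random.default_rng(rank)
H_local = rng.standard_normal((nb, nb)) + 1j * rng.standard_normal((nb, nb))
x_local = rng.standard_normal(nb) + 1j * rng.standard_normal(nb)

# Local contribution, then a reduction across the process row so that every
# rank in row prow holds the prow-th slice of y = H x.
y_partial = H_local @ x_local
y_local = np.empty_like(y_partial)
row_comm.Allreduce(y_partial, y_local, op=MPI.SUM)
```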
The numerical simulation of quantum circuits is an indispensable tool for the development, verification, and validation of hybrid quantum-classical algorithms on near-term quantum co-processors. The emergence of exascale high-performance computing (HPC) platforms presents new opportunities for pushing the boundaries of quantum circuit simulation. We present a modernized version of the Tensor Network Quantum Virtual Machine (TNQVM), which serves as a quantum circuit simulation backend in the eXtreme-scale ACCelerator (XACC) framework. The new version is based on the general-purpose, scalable tensor network processing library ExaTN, and provides multiple configurable quantum circuit simulators enabling either exact quantum circuit simulation via full tensor network contraction, or approximate quantum state representations via suitable tensor factorizations. When necessary, stochastic noise from real quantum processors is incorporated into the simulations by modeling quantum channels with Kraus tensors. By combining the portable XACC quantum programming frontend and the scalable ExaTN numerical backend, we introduce an end-to-end virtual quantum development environment that can scale from laptops to future exascale platforms. We report initial benchmarks of our framework, which include a demonstration of distributed execution, incorporation of quantum decoherence models, and simulation of the random quantum circuits used for the certification of quantum supremacy on the Google Sycamore superconducting architecture.
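To illustrate the exact-simulation mode conceptually, the snippet below contracts a two-qubit circuit (a Hadamard followed by a CNOT) as a small tensor network with numpy.einsum. It is a toy sketch of full tensor network contraction, not the TNQVM/ExaTN API, and the index labels are arbitrary.

```python
# Toy example: contract a 2-qubit circuit (H on qubit 0, then CNOT) as a
# tensor network with numpy.einsum.  Illustrates exact simulation by full
# tensor contraction; not the TNQVM/ExaTN interface.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # rank-2 gate tensor
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)   # rank-4 gate tensor

psi0 = np.zeros((2, 2)); psi0[0, 0] = 1.0             # |00> as a rank-2 tensor

# Contract the network: a, b are the input legs of the state, c is the
# intermediate leg after H, and i, j are the output legs.
psi = np.einsum('ijcb,ca,ab->ij', CNOT, H, psi0)
print(psi.reshape(4))   # amplitudes of (|00> + |11>)/sqrt(2)
```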
With quantum computers of significant size now on the horizon, we should understand how to best exploit their initially limited abilities. To this end, we aim to identify a practical problem that is beyond the reach of current classical computers, but that requires the fewest resources for a quantum computer. We consider quantum simulation of spin systems, which could be applied to understand condensed matter phenomena. We synthesize explicit circuits for three leading quantum simulation algorithms, employing diverse techniques to tighten error bounds and optimize circuit implementations. Quantum signal processing appears to be preferred among algorithms with rigorous performance guarantees, whereas higher-order product formulas prevail if empirical error estimates suffice. Our circuits are orders of magnitude smaller than those for the simplest classically infeasible instances of factoring and quantum chemistry.
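As a small numerical illustration of the product-formula approach (not the optimized circuits constructed in this work), the Python sketch below compares first- and second-order Trotter approximations of exp(-iHt) for a three-site Heisenberg chain against the exact propagator; the chain length, evolution time, and step count are arbitrary illustrative choices.

```python
# Compare first- and second-order Trotter product formulas against the exact
# propagator for a small Heisenberg chain; an illustrative sketch only.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def two_site(op, i, n=3):
    """Tensor product with op acting on sites i and i+1 of an n-site chain."""
    mats = [I] * n
    mats[i] = mats[i + 1] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def trotter_step(terms, dt, order):
    """One step approximating exp(-i*dt*sum(terms)) at first or second order."""
    U = np.eye(terms[0].shape[0], dtype=complex)
    if order == 1:
        for h in terms:
            U = expm(-1j * h * dt) @ U
        return U
    # second order (Strang splitting): forward half-steps, then reversed order
    for h in terms:
        U = expm(-1j * h * dt / 2) @ U
    for h in reversed(terms):
        U = expm(-1j * h * dt / 2) @ U
    return U

# Nearest-neighbor Heisenberg terms H_i = X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}
terms = [sum(two_site(P, i) for P in (X, Y, Z)) for i in (0, 1)]
H = sum(terms)
t, r = 1.0, 10                       # total time, number of Trotter steps
U_exact = expm(-1j * H * t)
for order in (1, 2):
    U = np.linalg.matrix_power(trotter_step(terms, t / r, order), r)
    print(f"order {order}: spectral-norm error {np.linalg.norm(U - U_exact, 2):.2e}")
```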