In galaxy clusters, modern radio interferometers observe non-thermal radio sources with unprecedented spatial and spectral resolution. For the first time, these new data allow us to infer the structure of the intra-cluster magnetic fields on small scales via Faraday tomography. This leap forward demands new numerical models for the amplification of magnetic fields in cosmic structure formation: the cosmological magnetic dynamo. Here we present a novel numerical approach to astrophysical MHD simulations aimed at resolving this small-scale dynamo in future cosmological simulations. As a first step, we implement a fifth-order WENO scheme in the new code WOMBAT. We show that this scheme doubles the effective resolution of the simulation and is thus less expensive than common second-order schemes of comparable accuracy. WOMBAT uses a novel approach to parallelization and load balancing developed in collaboration with performance engineers at Cray Inc. This will allow us to scale simulations to the exaflop regime and achieve kpc resolution in future cosmological simulations of galaxy clusters. Here we demonstrate the excellent scaling properties of the code and argue that resolved simulations of the cosmological small-scale dynamo within the whole virial radius are possible in the coming years.
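To see why doubling the effective resolution makes a higher-order scheme cheaper, consider the back-of-the-envelope scaling this claim rests on (a sketch under standard assumptions: a 3D uniform grid with CFL-limited time steps; the symbols N, N_t, c_2, and c_5 are ours, not from the abstract):

    \frac{\mathrm{cost}_{2}(2N)}{\mathrm{cost}_{5}(N)}
      = \frac{(2N)^3 \cdot 2N_t \cdot c_2}{N^3 \cdot N_t \cdot c_5}
      = 16 \, \frac{c_2}{c_5}

Here N is the number of zones per dimension, the number of time steps N_t scales linearly with N through the CFL condition, and c_2, c_5 are the per-zone-update costs of the second- and fifth-order schemes. At matched solution quality, the fifth-order scheme therefore wins whenever one of its zone updates costs less than about 16 second-order updates.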
Due to the increase in computing power, high-order Eulerian schemes will likely become instrumental for simulations of turbulence and magnetic field amplification in astrophysical fluids in the coming years. We present the implementation of a fifth-order weighted essentially non-oscillatory (WENO) scheme for constrained-transport magnetohydrodynamics in the code WOMBAT. We establish the correctness of our implementation with an extensive number of tests. We find that the fifth-order scheme run at half the resolution performs as accurately as a common second-order scheme. We argue that for a given solution quality the new scheme is more computationally efficient than lower-order schemes in three dimensions. We also establish the performance characteristics of the solver in the WOMBAT framework. Our implementation fully vectorizes using flattened arrays in thread-local memory. It performs at about 0.6 million zones per second per node on Intel Broadwell. We present scaling tests of the code up to 98 thousand cores on the Cray XC40 machine Hazel Hen, with a sustained performance of about 5 percent of peak at scale.
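For readers unfamiliar with the scheme, the following is a minimal C sketch of the classic fifth-order WENO reconstruction of Jiang & Shu (1996) for a scalar quantity. It is not the WOMBAT implementation, which reconstructs the characteristic variables of the constrained-transport MHD system and is organized around flattened, vectorizable arrays; the value of eps and the scalar interface here are illustrative choices.

    #include <math.h>

    /* Fifth-order WENO reconstruction (Jiang & Shu 1996) of the
     * left-biased interface value q_{i+1/2} from the five-point
     * stencil q[0..4] = {q_{i-2}, ..., q_{i+2}}. */
    double weno5_reconstruct(const double q[5])
    {
        const double eps = 1.0e-6;  /* guards against division by zero */

        /* Third-order candidate reconstructions on the three substencils */
        double p0 = ( 2.0*q[0] - 7.0*q[1] + 11.0*q[2]) / 6.0;
        double p1 = (    -q[1] + 5.0*q[2] +  2.0*q[3]) / 6.0;
        double p2 = ( 2.0*q[2] + 5.0*q[3] -      q[4]) / 6.0;

        /* Smoothness indicators: large where a substencil is rough */
        double b0 = 13.0/12.0*pow(q[0] - 2.0*q[1] + q[2], 2)
                  +  0.25    *pow(q[0] - 4.0*q[1] + 3.0*q[2], 2);
        double b1 = 13.0/12.0*pow(q[1] - 2.0*q[2] + q[3], 2)
                  +  0.25    *pow(q[1] - q[3], 2);
        double b2 = 13.0/12.0*pow(q[2] - 2.0*q[3] + q[4], 2)
                  +  0.25    *pow(3.0*q[2] - 4.0*q[3] + q[4], 2);

        /* Nonlinear weights, biased toward the ideal weights {1,6,3}/10 */
        double a0 = 0.1/((eps + b0)*(eps + b0));
        double a1 = 0.6/((eps + b1)*(eps + b1));
        double a2 = 0.3/((eps + b2)*(eps + b2));

        return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2);
    }

In smooth regions the nonlinear weights approach the ideal weights and the reconstruction is fifth-order accurate; near a discontinuity the smoothness indicators suppress the offending substencil, which is what keeps the scheme essentially non-oscillatory.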
We investigate the applicability of curvilinear grids in the context of astrophysical simulations and WENO schemes. With the non-smooth mapping functions from Calhoun et al. (2008), we can tackle many astrophysical problems that were out of reach with the standard grids of numerical astrophysics. We describe the difficulties that arise when implementing curvilinear coordinates in our WENO code and how we overcome them. We illustrate the theoretical results with numerical data. The WENO finite-difference scheme works only for high Mach number flows and smooth mapping functions, whereas the finite-volume scheme gives accurate results even for low Mach number flows and on non-smooth grids.
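As a concrete example of such a grid, here is a minimal C sketch of a non-smooth square-to-disk mapping in the spirit of Calhoun et al. (2008): concentric squares in the logically rectangular coordinate are projected radially onto concentric circles. The function name and the treatment of the center are our illustrative choices; the kinks of the mapping along the square diagonals are precisely the kind of non-smoothness that challenges the finite-difference WENO scheme.

    #include <math.h>

    /* Map a logically rectangular coordinate (xi, eta) in [-1,1]^2 onto
     * the unit disk: the square of L-infinity "radius" d is projected
     * radially onto the circle of radius d. */
    void square_to_disk(double xi, double eta, double *x, double *y)
    {
        double d = fmax(fabs(xi), fabs(eta));  /* L-inf radius of the square */
        double r = sqrt(xi*xi + eta*eta);      /* Euclidean radius */

        if (r < 1.0e-14) {                     /* avoid 0/0 at the center */
            *x = 0.0;
            *y = 0.0;
            return;
        }

        /* Scale along the ray through (xi, eta) so the point lands at
         * Euclidean radius d. */
        *x = d * xi  / r;
        *y = d * eta / r;
    }

The mapping is continuous everywhere but its Jacobian jumps across the diagonals, so a finite-volume discretization, which only needs cell volumes and face areas, handles it more gracefully than a finite-difference one.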
The architecture of exascale computing facilities, which will involve millions of heterogeneous processing units, will deeply impact scientific applications. Future astrophysical HPC applications must be designed to exploit such computing systems. The EU-funded H2020 project ExaNeSt aims to design and develop an exascale-ready prototype based on low-energy-consumption ARM64 cores and FPGA accelerators. We participate in the design of the platform and in the validation of the prototype with cosmological N-body and hydrodynamical codes suited to perform large-scale, high-resolution numerical simulations of the formation and evolution of cosmic structures. We discuss our work on adapting astrophysical applications to take advantage of the underlying architecture.
Quantum ESPRESSO is an open-source distribution of computer codes for quantum-mechanical materials modeling, based on density-functional theory, pseudopotentials, and plane waves, and renowned for its performance on a wide range of hardware architectures, from laptops to massively parallel computers, as well as for the breadth of its applications. In this paper we present the motivation for, and a brief review of, the ongoing effort to port Quantum ESPRESSO onto heterogeneous architectures based on hardware accelerators, which promise to overcome the energy constraints that currently stand in the way of exascale computing.
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows.
1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more.
2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed.
3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets.
4) A close integration of HPC simulation and data analysis will aid greatly in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows.
5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To make the best use of ASCR HPC resources, the experimental HEP program needs: a) an established long-term plan for access to ASCR computational and data resources; b) the ability to map workflows onto HPC resources; c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members; d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities; and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.