
SWIFT: task-based hydrodynamics and gravity for cosmological simulations

Added by: Matthieu Schaller
Publication date: 2015
Field: Physics
Language: English
Authors: Tom Theuns





Simulations of galaxy formation follow the gravitational and hydrodynamical interactions between gas, stars and dark matter through cosmic time. The huge dynamic range of such calculations severely limits the strong-scaling behaviour of the community codes in use, with load imbalance, cache inefficiencies and poor vectorisation limiting performance. The new SWIFT code exploits task-based parallelism, designed for many-core compute nodes interacting via MPI with asynchronous communication, to improve speed and scaling. A graph-based domain decomposition schedules interdependent tasks over the available resources. Strong-scaling tests on realistic particle distributions yield excellent parallel efficiency, and efficient cache usage provides a large speed-up compared to current codes even on a single core. SWIFT is designed to be easy to use by shielding the astronomer from computational details such as the construction of the tasks or the MPI communication. The techniques and algorithms used in SWIFT may benefit other areas of computational physics as well, for example compressible hydrodynamics. For details of this open-source project, see www.swiftsim.com.
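As a toy illustration of the task-based approach (a minimal sketch, not SWIFT's scheduler; the task names and dependency graph below are invented for illustration), interdependent tasks can be launched as soon as all of their dependencies have completed:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Toy task graph: each task lists the tasks it depends on.
# The names (sort, density, force, kick) are illustrative only.
GRAPH = {
    "sort":    [],
    "density": ["sort"],
    "force":   ["density"],
    "kick":    ["force"],
}

def run_task(name):
    # Placeholder for real work (e.g. a particle-pair interaction).
    return name

def schedule(graph):
    """Run each task as soon as all of its dependencies have finished."""
    done, order = set(), []
    pending = {}  # future -> task name
    with ThreadPoolExecutor(max_workers=4) as pool:
        while len(done) < len(graph):
            # Launch every not-yet-started task whose dependencies are met.
            for name, deps in graph.items():
                if (name not in done and name not in pending.values()
                        and all(d in done for d in deps)):
                    pending[pool.submit(run_task, name)] = name
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                done.add(pending.pop(fut))
                order.append(fut.result())
    return order

print(schedule(GRAPH))  # -> ['sort', 'density', 'force', 'kick']
```

A real scheduler adds conflict handling and work stealing; the point here is only that no global synchronisation barrier is needed between phases.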



Related research

The architecture of Exascale computing facilities, which will involve millions of heterogeneous processing units, will deeply impact scientific applications. Future astrophysical HPC applications must be designed to exploit such computing systems. The ExaNeSt H2020 EU-funded project aims to design and develop an exascale-ready prototype based on low-energy-consumption ARM64 cores and FPGA accelerators. We participate in the design of the platform and in the validation of the prototype with cosmological N-body and hydrodynamical codes suited to performing large-scale, high-resolution numerical simulations of cosmic structure formation and evolution. We discuss our work on adapting astrophysical applications to take advantage of the underlying architecture.
This article describes a data center hosting a web portal for accessing and sharing the output of large, cosmological, hydrodynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data center has a multi-layer structure: a web portal, a job-control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing highly compounded and elaborate queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on the chosen object. The job-control layer is responsible for handling and performing the analysis jobs, which are executed on the computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) ClusterInspect visualizes properties of member galaxies of a selected galaxy cluster; (II) SimCut returns the raw data of a sub-volume around a selected object from a simulation, containing all the original hydrodynamical quantities; (III) Smac creates idealised 2D maps of various physical quantities and observables of a selected object; (IV) Phox generates virtual X-ray observations with specifications of various current and upcoming instruments.
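The layered flow described above (portal request, job-control queue, compute worker reading the storage layer) can be sketched as a toy pipeline; all names here are hypothetical, not the portal's actual API:

```python
import queue

jobs = queue.Queue()  # job-control layer: decouples the portal from the cluster

def portal_submit(service, object_id):
    """Outer layer: a user picks an object and a service (e.g. 'SimCut')."""
    jobs.put({"service": service, "object": object_id})

def worker_run():
    """Computing-cluster layer: executes one queued analysis job."""
    job = jobs.get()
    # Placeholder for reading raw simulation data from the HPC storage layer.
    return f"{job['service']} result for object {job['object']}"

portal_submit("SimCut", 42)
print(worker_run())  # -> "SimCut result for object 42"
```

The queue is what lets long-running analysis jobs proceed asynchronously while the web portal stays responsive.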
Emiliano Merlin, 2009
We present EvoL, the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution. In this paper the basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic and sub-galactic scales. EvoL is a fully Lagrangian, self-adaptive code, based on the classical oct-tree and on the Smoothed Particle Hydrodynamics (SPH) algorithm. It includes special features such as adaptive softening lengths with correcting extra terms, and modern formulations of SPH and artificial viscosity. It is designed to run in parallel on multiple CPUs to optimise performance and save computational time. We describe the code in detail and present the results of a number of standard hydrodynamical tests.
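The core SPH operation, estimating a field by kernel-weighted sums over neighbouring particles, can be sketched in one dimension (a minimal illustration using the standard cubic-spline kernel, not EvoL's actual implementation):

```python
def w_cubic(r, h):
    """Standard cubic-spline SPH kernel in 1D (normalisation 2/(3h))."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0  # compact support: no contribution beyond 2h

def density(x, positions, masses, h):
    """Estimate the density at x as a kernel-weighted sum of neighbour masses."""
    return sum(m * w_cubic(x - xj, h) for xj, m in zip(positions, masses))

# Uniform particle line with spacing 0.1 and mass 0.1 per particle:
# the estimate should recover mass/spacing = 1.0.
xs = [0.1 * i for i in range(-40, 41)]
ms = [0.1] * len(xs)
print(density(0.0, xs, ms, h=0.2))  # -> 1.0 (up to floating-point error)
```

In a production code the smoothing length h adapts to the local particle density, which is what makes SPH self-adaptive in resolution.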
In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed-particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerical magnetic divergence. For instabilities arising from errors related to a non-vanishing divergence of B, we find the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. (2002) to give good results and to prevent numerical artifacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic-field regularisation give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergence-free by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution, including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. At extremely high resolution, however, a cleaning scheme is needed to prevent the growth of numerical errors at small scales.
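In schematic form (a summary of the published scheme, not this paper's notation), the Dedner et al. (2002) approach couples a scalar cleaning field $\psi$ to the ideal-MHD induction equation, so that divergence errors are propagated away at a speed $c_h$ (hyperbolic part) and damped on a timescale $\tau$ (parabolic part):

```latex
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}) - \nabla \psi,
\qquad
\frac{\partial \psi}{\partial t} = -c_h^2 \, \nabla \cdot \mathbf{B} - \frac{\psi}{\tau}.
```

Where $\nabla \cdot \mathbf{B} = 0$ holds exactly, $\psi$ decays to zero and the induction equation reduces to its standard form.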
N-body simulations are essential tools in physical cosmology for understanding the large-scale structure (LSS) formation of the Universe. Large-scale simulations at high resolution are important for exploring the substructure of the universe and for determining fundamental physical parameters such as the neutrino mass. However, traditional particle-mesh (PM) based algorithms use considerable amounts of memory, which limits the scalability of simulations. We therefore designed CUBE, a two-level PM algorithm aimed at optimal memory consumption. By using a fixed-point compression technique, CUBE reduces the memory consumption per N-body particle to 6 bytes, an order of magnitude lower than traditional PM-based algorithms. We scaled CUBE to 512 nodes (20,480 cores) on an Intel Cascade Lake based supercomputer with ≈95% weak-scaling efficiency. This scaling test was performed in Cosmo-π, a cosmological LSS simulation using ≈4.4 trillion particles, tracing the evolution of the universe over ≈13.7 billion years. To the best of our knowledge, Cosmo-π is the largest completed cosmological N-body simulation. We believe CUBE has great potential to scale on exascale supercomputers for even larger simulations.
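The fixed-point idea can be sketched as follows (a toy example, not CUBE's actual format): store each coordinate as a 16-bit offset inside its coarse-mesh cell, so a 3D position costs 6 bytes plus cell metadata shared by all particles in the cell:

```python
CELL = 1.0   # coarse-mesh cell size (illustrative units)
BITS = 16    # 2 bytes per coordinate -> 6 bytes per 3D position

def compress(x, cell=CELL):
    """Split a coordinate into (cell index, 16-bit fixed-point offset)."""
    i = int(x // cell)                 # which cell (stored once per cell)
    frac = (x - i * cell) / cell       # position inside the cell, in [0, 1)
    return i, int(frac * (1 << BITS))  # quantise the offset to 16 bits

def decompress(i, q, cell=CELL):
    """Reconstruct the coordinate at the mid-point of its quantum."""
    return (i + (q + 0.5) / (1 << BITS)) * cell

i, q = compress(3.14159)
x = decompress(i, q)
print(abs(x - 3.14159) < CELL / (1 << BITS))  # -> True: error below one quantum
```

Because the cell index is shared across the many particles in a cell, the per-particle cost approaches the 2 bytes per coordinate of the quantised offset.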
