In magnetohydrodynamics (MHD), the magnetic field is evolved by the induction equation and coupled to the gas dynamics by the Lorentz force. We perform numerical smoothed particle magnetohydrodynamics (SPMHD) simulations and study the influence of a numerically non-vanishing divergence of the magnetic field. For instabilities arising from $\nabla\cdot\mathbf{B}$-related errors, we find that the hyperbolic/parabolic cleaning scheme suggested by Dedner et al. (2002) gives good results and prevents numerical artifacts from growing. Additionally, we demonstrate that certain current SPMHD implementations of magnetic field regularization give rise to unphysical instabilities in long-time simulations. We also find this effect when employing Euler potentials (divergence-free by definition), which are not able to follow the winding-up process of magnetic field lines properly. Furthermore, we present cosmological simulations of galaxy cluster formation at extremely high resolution, including the evolution of magnetic fields. We show synthetic Faraday rotation maps and derive structure functions to compare them with observations. Comparing all the simulations with and without divergence cleaning, we are able to confirm the results of previous simulations performed with the standard implementation of MHD in SPMHD at normal resolution. However, at extremely high resolution, a cleaning scheme is needed to prevent the growth of numerical errors at small scales.
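For orientation, the cleaning approach couples an extra scalar field $\psi$ to the induction equation; a schematic continuum form of the hyperbolic/parabolic cleaning equations, written here in our own notation rather than the exact discretisation used in the simulations, is
$$\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t} = \left(\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t}\right)_{\mathrm{ind}} - \nabla\psi, \qquad \frac{\mathrm{d}\psi}{\mathrm{d}t} = -c_h^{2}\,\nabla\cdot\mathbf{B} - \frac{\psi}{\tau},$$
where $c_h$ is the cleaning wave speed and $\tau$ a damping time-scale: divergence errors are propagated away as waves (hyperbolic term) and damped locally (parabolic term).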
We present an updated constrained hyperbolic/parabolic divergence cleaning algorithm for smoothed particle magnetohydrodynamics (SPMHD) that remains conservative with wave cleaning speeds which vary in space and time. This is accomplished by evolving the quantity $\psi / c_h$ instead of $\psi$. Doing so allows each particle to carry an individual wave cleaning speed, $c_h$, that can evolve in time without needing an explicit prescription for how it should evolve, preventing circumstances which we demonstrate could lead to runaway energy growth related to variable wave cleaning speeds. This modification requires only a minor adjustment to the cleaning equations and is trivial to adopt in existing codes. Finally, we demonstrate that our constrained hyperbolic/parabolic divergence cleaning algorithm, run for a large number of iterations, can reduce the divergence of the field to an arbitrarily small value, achieving $\nabla \cdot \mathbf{B} = 0$ to machine precision.
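A schematic form of the modified evolution equation, as we read it from the description above (the exact discrete operators and compressibility term should be taken from the paper itself), is
$$\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\psi}{c_h}\right) = -\,c_h\,\nabla\cdot\mathbf{B} \;-\; \frac{1}{\tau}\,\frac{\psi}{c_h} \;-\; \frac{1}{2}\,\frac{\psi}{c_h}\,\nabla\cdot\mathbf{v},$$
so that only a single power of the particle-dependent, time-dependent $c_h$ appears in the hyperbolic term and no explicit evolution equation for $c_h$ itself is required.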
This article describes a data center hosting a web portal for accessing and sharing the output of large, cosmological, hydrodynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data center has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing complex, compound queries, or graphically by plotting arbitrary combinations of object properties. The user can then run analysis tools on the chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling the analysis jobs, which are executed on the computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) {\sc ClusterInspect} visualizes properties of the member galaxies of a selected galaxy cluster; (II) {\sc SimCut} returns the raw data of a sub-volume around a selected object, containing all the original hydrodynamical quantities; (III) {\sc Smac} creates idealised 2D maps of various physical quantities and observables of a selected object; (IV) {\sc Phox} generates virtual X-ray observations matching the specifications of various current and upcoming instruments.
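To make the layering concrete, the following is a purely hypothetical sketch of how a user-facing workflow across the layers could look; the base URL, endpoint names, parameters and simulation label are invented for illustration and are not the portal's actual API.

```python
# Hypothetical sketch of the portal's layered workflow (all endpoint names,
# parameters and labels are invented; this is not the portal's real API).
import requests

BASE = "https://example-simulation-portal.org/api"   # placeholder URL

# Web-portal layer: query the object catalogue for massive clusters.
clusters = requests.get(f"{BASE}/objects",
                        params={"simulation": "example_box", "type": "cluster",
                                "min_mass": 1e14}).json()

# Job-control layer: submit a SimCut-like cut-out job for the first match;
# the job is queued and run on the computing cluster, which reads the raw
# snapshot from the HPC storage system.
job = requests.post(f"{BASE}/jobs",
                    json={"service": "SimCut",
                          "object_id": clusters[0]["id"],
                          "radius_kpc": 500}).json()

# Poll until the result (a raw-data sub-volume) is ready for download.
status = requests.get(f"{BASE}/jobs/{job['id']}").json()
print(status["state"], status.get("download_url"))
```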
We present an implementation of smoothed particle hydrodynamics (SPH) with improved accuracy for simulations of galaxies and the large-scale structure. In particular, we combine, implement, modify and test a large number of SPH improvement techniques in the latest instalment of the GADGET code. We use the Wendland kernel functions, a particle wake-up time-step limiting mechanism and a time-dependent scheme for artificial viscosity, which includes a high-order gradient computation and a shear flow limiter. Additionally, we include a novel prescription for time-dependent artificial conduction, which corrects for gravitationally induced pressure gradients and greatly improves the SPH performance in capturing the development of gas-dynamical instabilities. We extensively test our new implementation in a wide range of standard hydrodynamical tests, including weak and strong shocks, shear flows, turbulent spectra, gas mixing, hydrostatic equilibria and self-gravitating gas clouds. We jointly employ all modifications; however, when necessary, we study the performance of individual code modules. We approximate hydrodynamical states more accurately and with significantly less noise than standard SPH. Furthermore, the new implementation promotes the mixing of entropy between different fluid phases, also within cosmological simulations. Finally, we study the performance of the hydrodynamical solver in the context of radiative galaxy formation and non-radiative galaxy cluster formation. We find galactic disks to be colder, thinner and more extended, and our results on galaxy clusters show entropy cores instead of steadily declining entropy profiles. In summary, we demonstrate that our improved SPH implementation overcomes most of the undesirable limitations of standard SPH, making it the core of an efficient code for large cosmological simulations.
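As one concrete ingredient, below is a minimal sketch of a Wendland C6 kernel in three dimensions (normalisation as in Dehnen & Aly 2012); the kernel variant and smoothing-length convention actually adopted in the code may differ.

```python
import numpy as np

def wendland_c6_3d(r, h):
    """Wendland C6 kernel in 3D with compact support radius h (q = r/h in [0, 1]).

    Normalisation 1365 / (64 pi h^3) follows Dehnen & Aly (2012); smoothing-length
    conventions differ between codes, so this is an illustrative choice.
    """
    q = np.asarray(r, dtype=float) / h
    sigma = 1365.0 / (64.0 * np.pi * h**3)
    w = sigma * (1.0 - q)**8 * (32.0*q**3 + 25.0*q**2 + 8.0*q + 1.0)
    return np.where(q < 1.0, w, 0.0)

# Sanity check: the kernel should integrate to unity over 3D space.
r = np.linspace(0.0, 1.0, 100001)
print(np.trapz(4.0 * np.pi * r**2 * wendland_c6_3d(r, 1.0), r))  # ~1.0
```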
Simulations of galaxy formation follow the gravitational and hydrodynamical interactions between gas, stars and dark matter through cosmic time. The huge dynamic range of such calculations severely limits the strong scaling behaviour of the community codes in use, with load imbalance, cache inefficiencies and poor vectorisation limiting performance. The new SWIFT code exploits task-based parallelism designed for many-core compute nodes, which interact via asynchronous MPI communication, to improve speed and scaling. A graph-based domain decomposition schedules interdependent tasks over the available resources. Strong scaling tests on realistic particle distributions yield excellent parallel efficiency, and efficient cache usage provides a large speed-up compared to current codes even on a single core. SWIFT is designed to be easy to use by shielding the astronomer from computational details such as the construction of the tasks or the MPI communication. The techniques and algorithms used in SWIFT may also benefit other areas of computational physics, for example compressible hydrodynamics. For details of this open-source project, see www.swiftsim.com.
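As a toy illustration of the task-based idea (not SWIFT's actual scheduler, which is written in C; the per-cell task names and dependencies below are invented for this sketch), independent tasks of one type run concurrently while dependencies are expressed by chaining results:

```python
# Toy task-based parallelism: per-cell "sort -> density -> force" chains.
from concurrent.futures import ThreadPoolExecutor

def sort_cell(cell):                 # e.g. build the particle list of one cell
    return f"sorted({cell})"

def density(cell, sorted_data):      # depends on the sort task of the same cell
    return f"density({cell})"

def force(cell, rho):                # depends on the density task
    return f"force({cell})"

cells = ["cell0", "cell1", "cell2", "cell3"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # Submit all independent sort tasks first; later tasks wait only on the
    # futures they actually depend on (a very simplified task graph).
    sorted_f  = {c: pool.submit(sort_cell, c) for c in cells}
    density_f = {c: pool.submit(density, c, sorted_f[c].result()) for c in cells}
    force_f   = {c: pool.submit(force, c, density_f[c].result()) for c in cells}
    results = {c: force_f[c].result() for c in cells}

print(results)
```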
We present a new self-consistent method for incorporating dark matter annihilation feedback (DMAF) in cosmological N-body simulations. The power generated by DMAF is evaluated at each dark matter (DM) particle, which allows for flexible energy injection into the surrounding gas based on the specific DM annihilation model under consideration. Adaptive, individual time-steps for gas and DM particles are supported, and a new time-step limiter, derived from the propagation of a Sedov--Taylor blast wave, is introduced. We compare this donor-based approach with a receiver-based approach used in recent studies and illustrate the differences by means of a toy example. Furthermore, we consider an isolated halo and a cosmological simulation and show that, for these realistic cases, both methods agree well with each other. The extension of our implementation to scenarios such as non-local energy injection, velocity-dependent annihilation cross-sections and DM decay is straightforward.
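A minimal sketch of how a donor-based energy injection might be evaluated per DM particle is given below; the prefactors (e.g. the factor for self-conjugate DM and the fraction of annihilation energy absorbed by the gas) and the numerical values are assumptions of this illustration, not the paper's calibration.

```python
import numpy as np

# Illustrative, donor-based estimate of the annihilation power carried by each
# DM simulation particle.  Prefactor conventions (1/2 for self-conjugate DM,
# absorbed-energy fraction) are omitted and must be matched to the DM model.
C_LIGHT = 2.998e10               # speed of light [cm/s]
SIGMA_V = 3.0e-26                # <sigma v> [cm^3/s], thermal-relic-like value (assumed)
M_CHI   = 100.0 * 1.783e-24      # DM particle (WIMP) mass, 100 GeV in grams (assumed)

def annihilation_power(rho_dm, m_part):
    """Power [erg/s] injected per DM particle: P_i ~ <sigma v>/m_chi * rho_i * m_i * c^2."""
    return SIGMA_V / M_CHI * rho_dm * m_part * C_LIGHT**2

rho_dm = np.array([1e-25, 1e-24])   # local DM densities at the particles [g/cm^3]
m_part = 1e40                       # N-body particle mass [g], toy value
print(annihilation_power(rho_dm, m_part))
```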