Gravitational N-body simulations, that is, numerical solutions of the equations of motion for N particles interacting gravitationally, are widely used tools in astrophysics, with applications ranging from few-body or Solar-System-like systems all the way up to galactic and cosmological scales. In this article we present a summary review of the field, highlighting the main methods for N-body simulations and the astrophysical contexts in which they are usually applied.
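The basic machinery behind such simulations is a pairwise force evaluation coupled to a time integrator. The following is a minimal illustrative sketch in Python (not the implementation of any particular code reviewed here; the function names, softening value, and units are our own choices), integrating N softened point masses with the standard kick-drift-kick leapfrog:

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations with Plummer softening."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                     # separation vectors to every particle
        r2 = (d ** 2).sum(axis=1) + eps**2   # softened squared distances; the self
                                             # term has d = 0 and contributes nothing
        acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog, a standard symplectic N-body integrator."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                # half kick
        pos += dt * vel                      # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc                # half kick
    return pos, vel
```

Because the forces are pairwise and antisymmetric, the scheme conserves total momentum; the O(N^2) cost of `accelerations` is what tree, mesh, and hybrid methods are designed to reduce.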
The gravitational softening length is one of the key parameters in properly setting up a cosmological $N$-body simulation. In this paper, we perform a large suite of high-resolution $N$-body simulations to revise the optimal softening scheme proposed by Power et al. (P03). We find that the P03 optimal scheme works well but is overly conservative. Using softening lengths smaller than those of P03 achieves higher spatial resolution and numerically convergent results for both circular velocity and density profiles. However, an excessively small softening length overpredicts the matter density in the innermost region of dark matter haloes. We empirically explore a better optimal softening scheme based on the P03 form and find that a small modification works well. This work will be useful for setting up cosmological simulations.
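For concreteness, the role of the softening length can be seen in the Plummer-softened force law commonly used in such simulations. The sketch below is illustrative only (the function name and parameter values are ours, not taken from P03):

```python
def plummer_accel(r, m, eps, G=1.0):
    """Acceleration magnitude from a point mass m at distance r,
    with Plummer softening length eps."""
    return G * m * r / (r**2 + eps**2) ** 1.5
```

At separations r much larger than eps this approaches the Newtonian G m / r^2, while at r below eps the force is suppressed. An overly large eps therefore degrades spatial resolution, while an overly small one exposes the discreteness effects that bias the inner density profile.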
Commercial graphics processors (GPUs) have high compute capacity at very low cost, which makes them attractive for general-purpose scientific computing. In this paper we show how graphics processors can be used for N-body simulations to obtain improvements in performance over current-generation CPUs. We have developed a highly optimized algorithm for performing the O(N^2) force calculations that constitute the major part of stellar and molecular dynamics simulations. In some of the calculations, we achieve sustained performance of nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to that of specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the cost. Furthermore, the wide availability of GPUs has significant implications for cluster computing and for distributed computing efforts such as Folding@Home.
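The O(N^2) all-pairs computation maps naturally onto data-parallel hardware because every particle-particle interaction is independent. As a hedged sketch of that structure (written in NumPy rather than GPU code; the function name and softening parameter are illustrative assumptions):

```python
import numpy as np

def all_pairs_accel(pos, mass, G=1.0, eps=1e-3):
    """Evaluate the full N x N interaction matrix at once -- the same
    embarrassingly parallel structure a GPU kernel exploits."""
    d = pos[None, :, :] - pos[:, None, :]   # (N, N, 3): d[i, j] = pos[j] - pos[i]
    r2 = (d ** 2).sum(axis=-1) + eps**2     # softened squared distances
    # acc_i = G * sum_j m_j * d_ij / (r_ij^2 + eps^2)^(3/2)
    return G * np.einsum('j,ijk,ij->ik', mass, d, r2 ** -1.5)
```

The full N x N matrix costs O(N^2) memory, so practical GPU implementations instead stream the interactions in tiles held in fast on-chip memory; the arithmetic per pair is identical.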
In the next decade, cosmological surveys will have the statistical power to detect the absolute neutrino mass scale. N-body simulations of large-scale structure formation play a central role in interpreting data from such surveys. Yet these simulations are Newtonian in nature. We provide a quantitative study of the limitations of treating neutrinos, implemented as N-body particles, in N-body codes, focusing on the error introduced by neglecting special relativistic effects. Special relativistic effects are potentially important due to the large thermal velocities of neutrino particles in the simulation box. We derive a self-consistent theory of linear perturbations for Newtonian, non-relativistic neutrinos and use it to demonstrate that N-body simulations overestimate the neutrino free-streaming scale and introduce errors in the matter power spectrum that depend on the initial redshift of the simulations. For $z_{i} \lesssim 100$ and neutrino masses within the currently allowed range, this error is $\lesssim 0.5\%$, though it represents up to a $\sim 10\%$ correction to the shape of the neutrino-induced suppression of the cold dark matter power spectrum. We argue that the simulations accurately model the non-linear clustering of neutrinos, so that the error is confined to linear scales.
The aim of this paper is to clarify the notion and cause of overmerging in N-body simulations, and to present analytical estimates of its timescale. Overmerging is the disruption of subhaloes within embedding haloes due to {\it numerical} problems connected with the discreteness of N-body dynamics. It is shown that the process responsible for overmerging is particle-subhalo two-body heating. Various solutions to the overmerging problem are discussed.
It is shown that the historical summary in this review of the growth in size of N-body simulations, as measured by particle number, is missing some key milestones. Size matters because particle number, combined with appropriate force smoothing, is a key means of suppressing unwanted discreteness, so that the initial conditions and equations of motion remain appropriate to growth by gravitational instability in the Poisson-Vlasov system describing a Universe with dark matter. Published strong constraints on what can be achieved are not included in the review.