
On the Reliability of N-body Simulations

Added by Tjarda Boekholt
Publication date: 2014
Fields: Physics
Language: English





The general consensus in the N-body community is that statistical results of an ensemble of collisional N-body simulations are accurate, even though individual simulations are not. A way to test this hypothesis is to make a direct comparison of an ensemble of solutions obtained by conventional methods with an ensemble of true solutions. To make this possible, we wrote an N-body code called Brutus, which uses arbitrary-precision arithmetic. In combination with the Bulirsch-Stoer method, Brutus is able to obtain converged solutions, which are true up to a specified number of digits. We perform simulations of democratic 3-body systems, in which, after a sequence of resonances and ejections, a final configuration is reached consisting of a permanent binary and an escaping star. We do this both with conventional double-precision methods and with Brutus, using the same set of initial conditions and initial realisations for each. The ensemble of solutions from the conventional simulations is compared directly to that of the converged simulations, both as an ensemble and on an individual basis, to determine the distribution of the errors. We find that on average at least half of the conventional simulations diverge from the converged solution, such that the two solutions are microscopically incomparable. For the solutions which have not diverged significantly, we observe that if the integrator has a bias in energy and angular momentum, this propagates to a bias in the statistical properties of the binaries. In the case where the conventional solution has diverged onto an entirely different trajectory in phase space, we find that the errors are centred around zero and symmetric: the error due to divergence is unbiased, provided that the time-step parameter eta <= 2^(-5) and that simulations which violate energy conservation by more than 10% are excluded.
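The convergence criterion is simple to state: integrate the same initial conditions again at higher arithmetic precision and smaller time step, and accept the solution once successive runs agree in their leading digits. Below is a minimal Python sketch of that loop using mpmath. It is not the Brutus code (which is built around a Bulirsch-Stoer integrator); a fixed-step leapfrog stands in for the integrator, and the Pythagorean 3-body initial conditions, step counts, and tolerances are illustrative assumptions.

```python
# Sketch of a Brutus-style convergence test (illustrative, not the real code):
# rerun the same initial conditions at doubled precision and halved time step
# until two successive runs agree to roughly `digits` decimal places.
from mpmath import mp, mpf

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations, G = 1."""
    n = len(mass)
    acc = [[mpf(0)] * 3 for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            inv_r3 = (d[0]**2 + d[1]**2 + d[2]**2) ** mpf('-1.5')
            for k in range(3):
                acc[i][k] += mass[j] * d[k] * inv_r3
    return acc

def integrate(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog; a stand-in for Bulirsch-Stoer here."""
    for _ in range(steps):
        acc = accelerations(pos, mass)
        for i in range(len(mass)):
            for k in range(3):
                vel[i][k] += dt / 2 * acc[i][k]
                pos[i][k] += dt * vel[i][k]
        acc = accelerations(pos, mass)
        for i in range(len(mass)):
            for k in range(3):
                vel[i][k] += dt / 2 * acc[i][k]
    return pos, vel

def run(precision, steps, t_end=2):
    mp.dps = precision
    mass = [mpf(3), mpf(4), mpf(5)]        # Pythagorean (Burrau) problem
    pos = [[mpf(1), mpf(3), mpf(0)],
           [mpf(-2), mpf(-1), mpf(0)],
           [mpf(1), mpf(-1), mpf(0)]]
    vel = [[mpf(0)] * 3 for _ in mass]
    return integrate(pos, vel, mass, mpf(t_end) / steps, steps)

digits, precision, steps = 4, 20, 2000
prev = run(precision, steps)
while True:
    cur = run(2 * precision, 2 * steps)
    err = max(abs(a - b) for p, q in zip(prev[0], cur[0]) for a, b in zip(p, q))
    if err < mpf(10) ** (-digits):         # agreement to ~`digits` decimal places
        break
    prev, precision, steps = cur, 2 * precision, 2 * steps
```

Doubling both the precision and the step count each iteration mirrors the idea behind Brutus: the result is declared converged only when making the computation strictly more accurate no longer changes the leading digits of the final phase-space coordinates.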



Related research

Commercial graphics processors (GPUs) have high compute capacity at very low cost, which makes them attractive for general purpose scientific computing. In this paper we show how graphics processors can be used for N-body simulations to obtain improvements in performance over current generation CPUs. We have developed a highly optimized algorithm for performing the O(N^2) force calculations that constitute the major part of stellar and molecular dynamics simulations. In some of the calculations, we achieve sustained performance of nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the cost. Furthermore, the wide availability of GPUs has significant implications for cluster computing and distributed computing efforts like Folding@Home.
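The kernel being accelerated here is the all-pairs gravitational force sum. The paper's implementation is written for the GPU; the following NumPy sketch shows the same O(N^2) arithmetic on the CPU, with the Plummer softening `eps` chosen purely for illustration.

```python
# All-pairs O(N^2) gravitational accelerations, G = 1 (a CPU sketch of the
# kernel the paper maps onto the GPU; eps is Plummer softening).
import numpy as np

def pairwise_accel(pos, mass, eps=1e-4):
    """pos: (N, 3) positions, mass: (N,) masses -> (N, 3) accelerations."""
    dx = pos[None, :, :] - pos[:, None, :]      # dx[i, j] = pos[j] - pos[i]
    r2 = (dx**2).sum(axis=-1) + eps**2          # softened squared separations
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)               # remove the i == j self-term
    return (dx * (mass[None, :] * inv_r3)[..., None]).sum(axis=1)

# example: 1024 equal-mass particles in a unit cube
rng = np.random.default_rng(42)
pos = rng.random((1024, 3))
acc = pairwise_accel(pos, np.full(1024, 1.0 / 1024))
```

Because every one of the N^2 interactions is independent, this sum is embarrassingly parallel, which is exactly why it maps so well onto GPU hardware.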
Nilanjan Banik, Jo Bovy (2021)
Stellar tidal streams are sensitive tracers of the properties of the gravitational potential in which they orbit, and detailed observations of their density structure can be used to place stringent constraints on fluctuations in the potential caused by, e.g., the expected populations of dark matter subhalos in the standard cold dark matter (CDM) paradigm. Simulations of the evolution of stellar streams in live $N$-body halos without low-mass dark-matter subhalos, however, indicate that streams exhibit significant perturbations on small scales even in the absence of substructure. Here we demonstrate, using high-resolution $N$-body simulations combined with sophisticated semi-analytic and simple analytic models, that the mass resolutions of $10^4$--$10^5\,\mathrm{M}_\odot$ commonly used to perform such simulations cause spurious stream density variations with a similar magnitude on large scales as those expected from a CDM-like subhalo population, and an order of magnitude larger on small, yet observable, scales. We estimate that mass resolutions of $\approx 100\,\mathrm{M}_\odot$ ($\approx 1\,\mathrm{M}_\odot$) are necessary for spurious, numerical density variations to be well below the CDM subhalo expectation on large (small) scales. That streams are sensitive to a simulation's particle mass down to such small masses indicates that streams are sensitive to dark matter clustering down to these low masses if a significant fraction of the dark matter is clustered or concentrated in this way, for example in MACHO models with masses of $10$--$100\,\mathrm{M}_\odot$.
Cosmology is entering an era of percent-level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the influence of the initial conditions, namely the pre-initial configuration (preIC), the order of the Lagrangian perturbation theory (LPT), and the initial redshift, on the statistics associated with the large-scale structures of the universe, such as the halo mass function, the density power spectrum, and the maximal extent of the large-scale structures. We find that glass or grid pre-initial conditions give similar results at $z \lesssim 2$. However, the initial excess of power in the glass initial conditions yields a subtle difference in the power spectra and the mass function at high redshifts. The LPT order used to generate the ICs of the simulations is found to play a crucial role. First-order LPT (1LPT) simulations underestimate the number of massive haloes with respect to second-order (2LPT) ones, typically by 2% at $10^{14}\,h^{-1}\,M_\odot$ for an initial redshift of 23, and the small-scale power with an underestimation of 6% near the Nyquist frequency for $z_\mathrm{ini} = 23$. Moreover, at higher redshifts, the high-mass end of the mass function is significantly underestimated in 1LPT simulations. On the other hand, when the LPT order is fixed, the starting redshift has a systematic impact on the low-mass end of the halo mass function.
In the next decade, cosmological surveys will have the statistical power to detect the absolute neutrino mass scale. N-body simulations of large-scale structure formation play a central role in interpreting data from such surveys. Yet these simulations are Newtonian in nature. We provide a quantitative study of the limitations of treating neutrinos, implemented as N-body particles, in N-body codes, focusing on the error introduced by neglecting special relativistic effects. Special relativistic effects are potentially important due to the large thermal velocities of neutrino particles in the simulation box. We derive a self-consistent theory of linear perturbations in Newtonian and non-relativistic neutrinos and use this to demonstrate that N-body simulations overestimate the neutrino free-streaming scale and cause errors in the matter power spectrum that depend on the initial redshift of the simulations. For $z_i \lesssim 100$, and neutrino masses within the currently allowed range, this error is $\lesssim 0.5\%$, though it represents an up to $\sim 10\%$ correction to the shape of the neutrino-induced suppression of the cold dark matter power spectrum. We argue that the simulations accurately model non-linear clustering of neutrinos, so that the error is confined to linear scales.
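The source of the error is concrete: neutrino particles receive thermal momenta drawn from the relativistic Fermi-Dirac distribution, but a Newtonian code converts momentum to velocity as v = p/m rather than v = pc^2/E, which overestimates the speed (it can even exceed c) and hence the free-streaming length. The sketch below illustrates the comparison; the unit choice m c^2 = 3 k_B T_nu (roughly a 0.05 eV neutrino at z ~ 100) and the rejection-sampling envelope are assumptions for illustration, not the paper's numbers.

```python
# Compare Newtonian and relativistic velocities for neutrino simulation
# particles with momenta from the Fermi-Dirac distribution. Units are
# illustrative: momenta in k_B*T_nu/c; m*c^2 = 3 k_B*T_nu (assumption).
import numpy as np

m = 3.0  # neutrino mass in units of k_B*T_nu/c^2 (illustrative assumption)

def sample_fd_momenta(n, rng):
    """Rejection-sample p from f(p) ~ p^2 / (exp(p) + 1)."""
    out = np.empty(0)
    while out.size < n:
        p = rng.uniform(0.0, 20.0, size=n)         # tail beyond 20 is negligible
        f = p**2 / (np.exp(p) + 1.0)
        keep = rng.uniform(0.0, 0.65, size=n) < f  # 0.65 bounds max(f) ~ 0.48
        out = np.concatenate([out, p[keep]])
    return out[:n]

rng = np.random.default_rng(0)
p = sample_fd_momenta(100_000, rng)
v_newtonian = p / m                        # v/c = p/(m c): can exceed 1
v_relativistic = p / np.sqrt(p**2 + m**2)  # v/c = pc/E: always below 1
print(f"mean v/c: Newtonian {v_newtonian.mean():.3f}, "
      f"relativistic {v_relativistic.mean():.3f}")
```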
We examine the deviation of Cold Dark Matter particle trajectories from the Newtonian result as the size of the region under study becomes comparable to or exceeds the particle horizon. To first order in the gravitational potential, the general relativistic result coincides with the Zeldovich approximation and hence the Newtonian prediction on all scales. At second order, General Relativity predicts corrections which overtake the corresponding second-order Newtonian terms above a certain scale of the order of the Hubble radius. However, since second-order corrections are very much suppressed on such scales, we conclude that simulations which exceed the particle horizon but use Newtonian equations to evolve the particles reproduce the correct trajectories very well. The dominant relativistic corrections to the power spectrum on scales close to the horizon are at most of the order of $\sim 10^{-5}$ at $z=49$ and $\sim 10^{-3}$ at $z=0$. The positions of real-space features are affected at a level below $10^{-6}$ at both redshifts. Our analysis also clarifies the relation of N-body results to relativistic considerations.
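The Zeldovich approximation referenced here is the first-order Lagrangian solution x(q, t) = q + D(t) Psi(q), with the displacement field fixed by the initial density contrast through delta = -d Psi/dq (in 1D). A minimal sketch follows; the toy power-law spectrum, box size, and growth factor are assumptions, not from the paper.

```python
# 1D Zeldovich approximation: straight-line particle trajectories
# x(q, t) = q + D(t) * psi(q), with psi_k = i * delta_k / k initially.
import numpy as np

N, L = 256, 100.0                          # grid points, box size (illustrative)
rng = np.random.default_rng(1)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L/N)  # angular wavenumbers

# Gaussian initial density contrast with a toy power-law spectrum P(k) ~ 1/k
delta_k = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
delta_k[0] = 0.0                           # enforce zero mean density contrast
delta_k[1:] *= k[1:]**-0.5

# displacement field from the continuity equation delta = -d psi / dq
psi_k = np.zeros_like(delta_k)
psi_k[1:] = 1j * delta_k[1:] / k[1:]
psi = np.fft.irfft(psi_k, n=N)

q = np.linspace(0.0, L, N, endpoint=False)  # Lagrangian (initial) coordinates
D = 1.0                                     # linear growth factor at the target epoch
x = (q + D * psi) % L                       # Eulerian positions, periodic box
```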