
PPPM and TreePM Methods on GRAPE Systems for Cosmological N-body Simulations

Added by Kohji Yoshikawa
Publication date: 2005
Field: Physics
Language: English





We present Particle-Particle-Particle-Mesh (PPPM) and Tree Particle-Mesh (TreePM) implementations on the GRAPE-5 and GRAPE-6A systems, special-purpose hardware accelerators for gravitational many-body simulations. In our PPPM and TreePM implementations on GRAPE, the computational time is significantly reduced compared with conventional implementations without GRAPE, especially under strong particle clustering, and is almost constant irrespective of the degree of particle clustering. We carry out a survey of two simulation parameters, the PM grid spacing and the tree opening parameter, to find the combination that best balances force accuracy and computational speed. We also describe the parallelization of these implementations on a PC-GRAPE cluster, in which each node has one GRAPE board, and present the optimal configuration of simulation parameters for good parallel scalability.
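The core idea shared by PPPM and TreePM is to split the 1/r^2 force into a long-range part computed on the PM grid and a short-range part computed pairwise (or via a tree). A minimal sketch of the standard Gaussian-smoothed split used by TreePM-type codes, not the paper's actual GRAPE implementation, with `r_s` the hypothetical split scale:

```python
import math

def short_range_force(r, r_s):
    """Magnitude of the short-range (tree/PP) part of the 1/r^2 force
    in the standard TreePM split (G = m1 = m2 = 1); the PM grid supplies
    the complementary smooth long-range part."""
    x = r / (2.0 * r_s)
    return (math.erfc(x)
            + (2.0 * x / math.sqrt(math.pi)) * math.exp(-x * x)) / r**2

# Well inside r_s the split force is essentially Newtonian;
# well outside r_s it vanishes and the mesh takes over.
```

Because the short-range sum is confined to a few grid spacings, its cost stays bounded even under strong clustering, which is the behavior the abstract describes.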



Related research

Direct $N$-body simulations of star clusters are accurate but expensive, largely due to the numerous $\mathcal{O}(N^2)$ pairwise force calculations. To solve the post-million-body problem, it will be necessary to use approximate force solvers, such as tree codes. In this work, we adapt a tree-based, optimized Fast Multipole Method (FMM) to the collisional $N$-body problem. The use of a rotation-accelerated translation operator and an error-controlled cell opening criterion leads to a code that can be tuned to arbitrary accuracy. We demonstrate that our code, Taichi, can be as accurate as direct summation when $N > 10^4$. This opens up the possibility of performing large-$N$, star-by-star simulations of massive stellar clusters, and would permit large parameter space studies that would require years with the current generation of direct summation codes. Using a series of tests and idealized models, we show that Taichi can accurately model collisional effects, such as dynamical friction and the core-collapse time of idealized clusters, producing results in strong agreement with benchmarks from other collisional codes such as NBODY6++GPU or PeTar. Parallelized using OpenMP and AVX, Taichi is demonstrated to be more efficient than other CPU-based direct $N$-body codes for simulating large systems. With future improvements to the handling of close encounters and binary evolution, we clearly demonstrate the potential of an optimized FMM for the modeling of collisional stellar systems, opening the door to accurate simulations of massive globular clusters, super star clusters, and even galactic nuclei.
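The accuracy/cost trade-off behind tree and FMM codes can be illustrated with the crudest possible far-field approximation, a single monopole. This is a toy sketch, not Taichi's rotation-accelerated operators; higher multipole orders plus an error-controlled opening criterion drive the error shown here down to any tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-0.5, 0.5, size=(1000, 3))  # particle cloud of unit extent
m = np.ones(len(pos))

def direct_potential(x):
    """O(N) exact potential at x (G = 1)."""
    return -np.sum(m / np.linalg.norm(pos - x, axis=1))

def monopole_potential(x):
    """O(1) far-field approximation: the whole cloud as one point mass."""
    com = np.average(pos, axis=0, weights=m)
    return -m.sum() / np.linalg.norm(com - x)

# At ten cloud radii, even the monopole is already sub-percent accurate.
x_far = np.array([10.0, 0.0, 0.0])
rel_err = abs(monopole_potential(x_far) / direct_potential(x_far) - 1.0)
```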
Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the influence of the initial conditions, namely the pre-initial configuration (preIC), the order of the Lagrangian perturbation theory (LPT), and the initial redshift, on the statistics associated with the large scale structures of the universe such as the halo mass function, the density power spectrum, and the maximal extent of the large scale structures. We find that glass or grid pre-initial conditions give similar results at $z \lesssim 2$. However, the initial excess of power in the glass initial conditions yields a subtle difference in the power spectra and the mass function at high redshifts. The LPT order used to generate the ICs of the simulations is found to play a crucial role. First-order LPT (1LPT) simulations underestimate the number of massive haloes with respect to second-order (2LPT) ones, typically by 2% at $10^{14}\, h^{-1} M_\odot$ for an initial redshift of 23, and the small-scale power with an underestimation of 6% near the Nyquist frequency for $z_\mathrm{ini} = 23$. Moreover, at higher redshifts, the high-mass end of the mass function is significantly underestimated in 1LPT simulations. On the other hand, when the LPT order is fixed, the starting redshift has a systematic impact on the low-mass end of the halo mass function.
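What "1LPT initial conditions" means in practice can be made concrete with the first-order (Zel'dovich) displacement field: solve a Poisson equation for the density contrast and displace particles down the potential gradient. A minimal spectral sketch (2LPT adds a second-order correction built from second derivatives of this potential):

```python
import numpy as np

def zeldovich_displacement(delta):
    """1LPT displacement psi from a density contrast grid delta:
    solve  lap(phi) = delta  in Fourier space, then  psi = -grad(phi),
    so that  div(psi) = -delta  (the first-order LPT relation)."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # avoid dividing by zero for the mean mode,
    phi_k = -np.fft.fftn(delta) / k2
    phi_k[0, 0, 0] = 0.0              # which carries no displacement anyway
    return np.stack(
        [np.fft.ifftn(-1j * kk * phi_k).real for kk in (kx, ky, kz)])
```

For a single plane wave $\delta = \cos(k_0 x)$ this returns the analytic answer $\psi_x = -\sin(k_0 x)/k_0$ to machine precision.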
(Abridged) We use high resolution cosmological N-body simulations to study the growth of intermediate to supermassive black holes from redshift 49 to zero. We track the growth of black holes from the seeds of population III stars to black holes in the range of 10^3 < M < 10^7 Msun -- not quasars, but rather IMBHs to low-mass SMBHs. These lower mass black holes are the primary observable for the Laser Interferometer Space Antenna (LISA). The large-scale dynamics of the black holes are followed accurately within the simulation down to scales of 1 kpc; thereafter, we follow the merger analytically from the last dynamical friction phase to black hole coalescence. We find that the merger rate of these black holes is R ~ 25 per year between 8 < z < 11 and R = 10 per year at z = 3. Before the merger occurs, the incoming IMBH may be observed with the next generation of X-ray telescopes as a ULX source, with a rate of about ~ 3 - 7 per year for 1 < z < 5. We develop an analytic prescription that captures the most important black hole growth mechanisms: galaxy merger-driven gas accretion and black hole coalescence. Using this, we find that the SMBH at the center of a Milky Way-type galaxy was in place, with most of its mass, by z = 4.7, and most of the growth was driven by gas accretion excited by major mergers. Hundreds of black holes have failed to coalesce with the SMBH by z = 0, some with masses of 10000 Msun, orbiting within the dark matter halo with luminosities up to ~ 30000 Lsun. These X-ray sources can easily be observed with Chandra at ~ 100 kpc.
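The "last dynamical friction phase" mentioned above is typically handled with a Chandrasekhar-type inspiral-time estimate. A rough sketch for a circular orbit in a singular isothermal halo, using the standard textbook form rather than the paper's exact prescription (the Coulomb logarithm `ln_lambda` is an assumed illustrative value):

```python
def dynamical_friction_time_gyr(r_kpc, v_c_kms, m_msun, ln_lambda=10.0):
    """Chandrasekhar inspiral time for a point mass m on a circular orbit
    of radius r in an isothermal halo with circular velocity v_c
    (Binney & Tremaine form: t ~ 1.17 r^2 v_c / (G m ln Lambda))."""
    G = 4.301e-6                  # kpc (km/s)^2 / Msun
    t_kpc_per_kms = 1.17 * r_kpc**2 * v_c_kms / (G * m_msun * ln_lambda)
    return t_kpc_per_kms * 0.978  # 1 kpc/(km/s) ~ 0.978 Gyr

# A 1e4 Msun black hole orbiting at ~100 kpc in a Milky Way-sized halo
# has an inspiral time vastly exceeding a Hubble time, consistent with
# the stalled ~10000 Msun satellites described above.
```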
Gravitational softening length is one of the key parameters in properly setting up a cosmological $N$-body simulation. In this paper, we perform a large suite of high-resolution $N$-body simulations to revise the optimal softening scheme proposed by Power et al. (P03). Our finding is that the P03 optimal scheme works well but is overly conservative. Using smaller softening lengths than those of P03 can achieve higher spatial resolution and numerically convergent results on both circular velocity and density profiles. However, using an overly small softening length overpredicts the matter density in the innermost regions of dark matter haloes. We empirically explore a better optimal softening scheme based on the P03 form and find that a small modification works well. This work will be useful for setting up cosmological simulations.
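A Plummer-softened force law, and the softening choice the P03 criterion is usually quoted as, can be sketched as follows; the 4 r200 / sqrt(N200) scaling is the commonly cited form, so treat the constant as illustrative rather than the paper's revised value:

```python
import math

def plummer_accel(m, r, eps):
    """Plummer-softened radial acceleration (G = 1): Newtonian at r >> eps,
    bounded as r -> 0, which suppresses spurious two-body scattering."""
    return m * r / (r**2 + eps**2) ** 1.5

def p03_softening(r200, n200):
    """Optimal softening of order 4 r200 / sqrt(N200) (Power et al. 2003);
    the paper above finds somewhat smaller values still converge."""
    return 4.0 * r200 / math.sqrt(n200)
```

The trade-off in the abstract is visible here: shrinking `eps` sharpens the resolved force at small `r`, but below some scale discreteness noise biases the inner density profile.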
We use gauge-invariant cosmological perturbation theory to calculate the displacement field that sets the initial conditions for $N$-body simulations. Using first- and second-order fully relativistic perturbation theory in the synchronous-comoving gauge allows us to go beyond the Newtonian predictions and to calculate relativistic corrections to them. We use an Einstein--de Sitter model, including both growing and decaying modes in our solutions. The impact of our results should be assessed through the implementation of the featured displacement in cosmological $N$-body simulations.
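For reference, the growing and decaying modes mentioned above take a particularly simple form in the Einstein--de Sitter model; this is the standard Newtonian linear-growth result, not the paper's relativistic corrections:

$$\ddot{D} + 2\frac{\dot{a}}{a}\dot{D} = 4\pi G \bar{\rho}\, D, \qquad a \propto t^{2/3} \;\Rightarrow\; D_+ \propto a \propto t^{2/3}, \quad D_- \propto t^{-1} \propto a^{-3/2}.$$

First-order initial conditions scale the displacement field by $D_+$; including the decaying mode $D_-$ as well, as done above, changes how transients in the simulation die away.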
