
The optimal gravitational softening length for cosmological N-body simulations

Posted by Tianchi Zhang
Publication date: 2018
Research field: Physics
Paper language: English





The gravitational softening length is one of the key parameters in properly setting up a cosmological $N$-body simulation. In this paper, we perform a large suite of high-resolution $N$-body simulations to revise the optimal softening scheme proposed by Power et al. (P03). We find that the P03 optimal scheme works well but is overly conservative. Using softening lengths smaller than those of P03 achieves higher spatial resolution and numerically convergent results for both circular velocity and density profiles. However, an excessively small softening length overpredicts the matter density in the innermost region of dark matter haloes. We empirically explore a better optimal softening scheme based on the P03 form and find that a small modification works well. This work will be useful for setting up cosmological simulations.
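
As a concrete illustration, here is a minimal sketch of how such a softening prescription is evaluated, assuming the commonly quoted P03 form $\epsilon \simeq 4\, r_{200}/\sqrt{N_{200}}$; the shrink factor applied to it below is purely illustrative, since the revised coefficient is given in the paper itself, not here.

```python
import numpy as np

def p03_softening(r200_kpc, n200):
    """Power et al. (2003) optimal softening: eps ~ 4 * r200 / sqrt(N200),
    where N200 is the number of particles inside the virial radius r200."""
    return 4.0 * r200_kpc / np.sqrt(n200)

# Hypothetical shrink factor: the paper finds P03 is overly conservative,
# so somewhat smaller softenings still converge. Illustrative value only.
SHRINK = 0.5

def revised_softening(r200_kpc, n200, shrink=SHRINK):
    return shrink * p03_softening(r200_kpc, n200)

# Example: a Milky Way-sized halo (r200 ~ 200 kpc) resolved by 1e6 particles.
print(p03_softening(200.0, 1e6))      # 0.8 kpc
print(revised_softening(200.0, 1e6))  # 0.4 kpc
```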




Read also

N-body simulations are essential tools in physical cosmology for understanding the large-scale structure (LSS) formation of the Universe. Large-scale simulations with high resolution are important for exploring the substructure of the universe and for determining fundamental physical parameters like neutrino mass. However, traditional particle-mesh (PM) based algorithms use considerable amounts of memory, which limits the scalability of simulations. We therefore designed a two-level PM algorithm, CUBE, aimed at minimal memory consumption. By using a fixed-point compression technique, CUBE reduces the memory consumption per N-body particle toward 6 bytes, an order of magnitude lower than traditional PM-based algorithms. We scaled CUBE to 512 nodes (20,480 cores) on an Intel Cascade Lake based supercomputer with $\simeq$95% weak-scaling efficiency. This scaling test was performed in Cosmo-$\pi$ -- a cosmological LSS simulation using $\simeq$4.4 trillion particles, tracing the evolution of the universe over $\simeq$13.7 billion years. To the best of our knowledge, Cosmo-$\pi$ is the largest completed cosmological N-body simulation. We believe CUBE has great potential to scale on exascale supercomputers for larger simulations.
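
A minimal sketch of the fixed-point idea behind this memory saving, assuming each coordinate is stored as an 8-bit offset within a coarse mesh cell whose index is implicit in the memory layout; the actual bit widths and layout in CUBE may differ.

```python
import numpy as np

def compress(x, cell_size):
    """Split each coordinate into a coarse-cell index (implicit in the
    memory layout, so not stored per particle) and an 8-bit fixed-point
    offset within that cell."""
    cell = np.floor(x / cell_size).astype(np.int64)
    frac = x / cell_size - cell                    # offset in [0, 1)
    return cell, np.floor(frac * 256.0).astype(np.uint8)

def decompress(cell, q, cell_size):
    """Recover positions to within half a quantization step."""
    return (cell + (q + 0.5) / 256.0) * cell_size

x = np.array([12.34, 99.999, 0.5])
cell, q = compress(x, cell_size=1.0)
print(decompress(cell, q, cell_size=1.0))  # close to x, error < cell_size/512
```

At one byte per coordinate, position plus velocity come to six bytes per particle, which is the order of magnitude the abstract reports.
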
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock, and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report a comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
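
The two leading terms mentioned above are easy to evaluate at this order; a sketch assuming the weak-field expressions $z_\mathrm{grav} \simeq \Delta\phi/c^2$ and $z_\mathrm{TD} \simeq v^2/2c^2$, with illustrative cluster-scale numbers that are not taken from the paper.

```python
C = 299792.458  # speed of light in km/s

def grav_redshift(phi_emit, phi_obs):
    """Weak-field gravitational redshift: z ~ (phi_obs - phi_emit) / c^2,
    with potentials in (km/s)^2; light climbing out of a deeper well
    (more negative phi_emit) is redshifted."""
    return (phi_obs - phi_emit) / C**2

def transverse_doppler(v_kms):
    """Transverse Doppler shift, entering at the same order: z ~ v^2 / 2c^2."""
    return 0.5 * (v_kms / C) ** 2

# Illustrative: a galaxy at a cluster centre with |phi| ~ (1000 km/s)^2
# relative to a distant observer, moving transversely at 600 km/s.
print(grav_redshift(-1000.0**2, 0.0))  # ~1.1e-5, i.e. a few km/s
print(transverse_doppler(600.0))       # ~2.0e-6
```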
Cosmology is entering an era of percent-level precision due to current large observational surveys. This precision in observation now demands greater accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the influence of the initial conditions, namely the pre-initial configuration (preIC), the order of the Lagrangian perturbation theory (LPT), and the initial redshift, on the statistics associated with the large-scale structures of the universe, such as the halo mass function, the density power spectrum, and the maximal extent of the large-scale structures. We find that glass and grid pre-initial conditions give similar results at $z \lesssim 2$. However, the initial excess of power in the glass initial conditions yields a subtle difference in the power spectra and the mass function at high redshifts. The LPT order used to generate the ICs of the simulations is found to play a crucial role. First-order LPT (1LPT) simulations underestimate the number of massive haloes with respect to second-order (2LPT) ones, typically by 2% at $10^{14}\,h^{-1}\,M_\odot$ for an initial redshift of 23, and underestimate the small-scale power by 6% near the Nyquist frequency for $z_\mathrm{ini} = 23$. Moreover, at higher redshifts, the high-mass end of the mass function is significantly underestimated in 1LPT simulations. On the other hand, when the LPT order is fixed, the starting redshift has a systematic impact on the low-mass end of the halo mass function.
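
For reference, a sketch of the first-order (Zel'dovich) displacement field that 1LPT initial conditions are built from, solved in Fourier space as $\psi_{\mathbf{k}} = i\mathbf{k}\,\delta_{\mathbf{k}}/k^2$; 2LPT adds a quadratic correction on top of this. The grid setup and conventions here are illustrative, not those of the paper's IC generator.

```python
import numpy as np

def zeldovich_displacement(delta, box_size):
    """1LPT (Zel'dovich) displacement on a periodic grid: psi = -grad(phi)
    with laplacian(phi) = delta, i.e. psi_k = i k delta_k / k^2."""
    n = delta.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid dividing by zero at k = 0
    dk = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        psi_k = 1j * ki * dk / k2
        psi_k[0, 0, 0] = 0.0               # no mean displacement
        psi.append(np.fft.ifftn(psi_k).real)
    return np.stack(psi)                   # shape (3, n, n, n)
```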
We present a general framework for obtaining robust bounds on the nature of dark matter using cosmological $N$-body simulations and Lyman-alpha forest data. We construct an emulator of hydrodynamical simulations, which is a flexible, accurate and computationally efficient model for predicting the response of the Lyman-alpha forest flux power spectrum to different dark matter models, the state of the intergalactic medium (IGM) and the primordial power spectrum. The emulator combines a flexible parameterization of the small-scale suppression in the matter power spectrum arising in non-cold dark matter models with an improved IGM model. We then demonstrate how to optimize the emulator for the case of ultra-light axion dark matter, presenting tests of convergence. We also carry out cross-validation tests of the accuracy of the flux power spectrum prediction. This framework can be optimized for the analysis of many other dark matter candidates, e.g., warm or interacting dark matter. Our work demonstrates that the combination of an optimized emulator and cosmological effective theories, where many models are described by a single set of equations, is a powerful approach for robust and computationally efficient inference from the cosmic large-scale structure.
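
A minimal sketch of the emulator idea, assuming a Gaussian-process regressor that maps model parameters to flux power spectra; the parameter count, array shapes, and kernel below are placeholders, not the paper's actual choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training set: theta holds (suppression scale, IGM temperature,
# primordial amplitude) per simulation; pk holds the corresponding flux power
# spectra over k bins. Random placeholders stand in for simulation outputs.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=(30, 3))   # 30 simulations, 3 parameters
pk = rng.normal(size=(30, 20))                # 20 k-bins each

# One GP over all outputs is the simplest choice; real emulators often
# decompose the spectra onto a basis (e.g. PCA) before regressing.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2] * 3),
                              normalize_y=True).fit(theta, pk)

# Predict the flux power spectrum, with uncertainty, at a new parameter point.
pk_pred, pk_std = gp.predict(rng.uniform(0.0, 1.0, size=(1, 3)),
                             return_std=True)
```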
We present a new method for generating initial conditions for numerical cosmological simulations in which massive neutrinos are treated as an extra set of N-body (collisionless) particles. It allows us to accurately follow the density field of both Cold Dark Matter (CDM) and neutrinos at both high and low redshifts. At high redshifts, the new method reduces the shot noise in the neutrino power spectrum by a factor of more than $10^7$ compared to previous methods, in which the power spectrum was dominated by shot noise on all scales. We find that our new approach also helps to reduce the noise in the total matter power spectrum on large scales, whereas on small scales the results agree with previous simulations. The method further allows for a systematic study of the clustering of the low-velocity tail of the neutrino distribution function, and of the evolution of the overall velocity distribution as a function of the environment determined by the CDM field.
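
A sketch of the one ingredient any particle-based neutrino treatment needs: drawing thermal speeds from the relativistic Fermi-Dirac distribution via a tabulated inverse CDF. The velocity normalization below is an assumed illustrative value, and the paper's shot-noise suppression goes well beyond this plain sampling.

```python
import numpy as np

def sample_fd_speeds(n, v0, seed=0):
    """Draw speeds from the Fermi-Dirac form f(p) dp ~ p^2 / (exp(p) + 1) dp
    by tabulating and inverting its CDF; v0 sets the physical scale."""
    p = np.linspace(1e-4, 20.0, 4096)       # dimensionless momentum grid
    pdf = p**2 / (np.exp(p) + 1.0)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    u = np.random.default_rng(seed).random(n)
    return v0 * np.interp(u, cdf, p)

# Assumed scale: roughly 50 km/s today for a 1 eV neutrino (v ~ kT_nu c / m c^2).
speeds = sample_fd_speeds(100_000, v0=50.0)
print(speeds.mean())  # ~3.15 * v0, the mean of the FD momentum distribution
```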