
Accurate estimators of power spectra in N-body simulations

 Submitted by Andrew H. Jaffe
 Publication date: 2008
 Research field: Physics
 Paper language: English
 Author: Stephane Colombi





[Abridged] A method to rapidly estimate the Fourier power spectrum of a point distribution is presented. This method relies on a Taylor expansion of the trigonometric functions. It yields the Fourier modes from a number of FFTs, which is controlled by the order N of the expansion and by the dimension D of the system. In three dimensions, for the practical value N=3, the number of FFTs required is 20. We apply the method to the measurement of the power spectrum of a periodic point distribution that is a local Poisson realization of an underlying stationary field. We derive explicit analytic expressions for the spectrum, which allow us to quantify, and correct for, the biases induced by discreteness and by the truncation of the Taylor expansion, and to bound the unknown effects of aliasing of the power spectrum. We show that these aliasing effects decrease rapidly with the order N. The only remaining significant source of error is the unavoidable cosmic/sample variance due to the finite size of the sample. The analytical calculations are successfully checked against a cosmological N-body experiment. We also consider the initial conditions of this simulation, which correspond to a perturbed grid. This allows us to test a case where the local Poisson assumption is incorrect. Even in that extreme situation, the third-order Fourier-Taylor estimator behaves well. We also show how to reach an arbitrarily large dynamic range in Fourier space (i.e., high wavenumbers), while keeping statistical errors under control, by appropriately folding the particle distribution.
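For orientation, the sketch below shows a plain grid-based power spectrum estimator for a periodic point set (nearest-grid-point assignment, one FFT, shot-noise subtraction), together with the folding trick mentioned at the end of the abstract. It is not the paper's Fourier-Taylor estimator: function names, grid size, and binning are illustrative assumptions, and no correction is applied for the assignment window or for aliasing, which is precisely what the Fourier-Taylor expansion is designed to control.

```python
import numpy as np

def power_spectrum_ngp(positions, box_size, n_grid=128, n_bins=32):
    """Minimal sketch: isotropic P(k) of a periodic point set via
    nearest-grid-point assignment, a single FFT of the density contrast,
    spherical binning, and subtraction of the Poisson shot noise V/N."""
    n_part = len(positions)
    # NGP assignment of particles to a cubic grid
    idx = np.floor(positions / box_size * n_grid).astype(int) % n_grid
    density = np.zeros((n_grid, n_grid, n_grid))
    np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = density / (n_part / n_grid**3) - 1.0
    # Fourier transform and squared modulus, normalized to the box volume
    delta_k = np.fft.rfftn(delta) / n_grid**3
    pk_grid = np.abs(delta_k)**2 * box_size**3
    # wavenumber of each grid mode
    kx = np.fft.fftfreq(n_grid, d=box_size / n_grid) * 2 * np.pi
    kz = np.fft.rfftfreq(n_grid, d=box_size / n_grid) * 2 * np.pi
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    # spherical bins up to the Nyquist wavenumber, with shot-noise subtraction
    k_edges = np.linspace(2 * np.pi / box_size, np.pi * n_grid / box_size, n_bins + 1)
    which = np.digitize(kmag.ravel(), k_edges) - 1
    kmean, pk = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        sel = which == b
        if sel.any():
            kmean[b] = kmag.ravel()[sel].mean()
            pk[b] = pk_grid.ravel()[sel].mean() - box_size**3 / n_part
    return kmean, pk

def fold_positions(positions, box_size, n_fold=1):
    """Sketch of the folding idea: remapping x -> x mod (L / 2**n_fold)
    lets the same FFT grid probe wavenumbers 2**n_fold times higher,
    at the price of aliasing power from larger scales."""
    folded_box = box_size / 2**n_fold
    return positions % folded_box, folded_box
```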




Read also

71 - Shaun Cole 1996
Tormen and Bertschinger have presented an algorithm which allows the dynamic range of N-body simulations to be extended by adding long-wavelength power to an evolved N-body simulation. This procedure is of considerable interest as it will enable mock galaxy catalogues to be constructed with volumes as large as those of the next generation of galaxy redshift surveys. Their algorithm, however, neglects the coupling between long-wavelength linear modes and short-wavelength non-linear modes. The growth of structure on small scales is coupled to the amplitude of long-wavelength density perturbations via their effect on the local value of the density parameter Omega_0. The effect of neglecting this coupling is quantified using a set of specially tailored N-body simulations. It is shown that the large-scale clustering of objects defined in the evolved density field, such as galaxy clusters, is strongly underestimated by their algorithm. An adaptation to their algorithm is proposed that, at the expense of additional complexity, remedies the shortcomings of the original one. Methods of constructing biased mock galaxy catalogues which utilise the basic algorithm of Tormen and Bertschinger but avoid these pitfalls are discussed.
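To make "adding long-wavelength power to an evolved simulation" concrete, here is a minimal hedged sketch of superposing a single long-wavelength linear mode on a particle set via a Zel'dovich displacement and its matching linear velocity. It illustrates the basic operation only; it is not Tormen and Bertschinger's algorithm, nor the adaptation proposed here, and it ignores the very mode coupling discussed above. The parameter aHf (= a H f, with f the linear growth rate) and the displacement amplitude are illustrative assumptions.

```python
import numpy as np

def add_long_wavelength_mode(pos, vel, box_size, amp, k_vec, aHf):
    """Superpose one long-wavelength plane-wave mode on an evolved particle set.

    pos, vel : (N, 3) arrays of positions and peculiar velocities
    amp      : displacement amplitude of the mode (same units as pos)
    k_vec    : wavevector of the mode (units 1/length)
    aHf      : a * H * f, converting a linear displacement into a velocity

    The added displacement psi = amp * (k/|k|) * sin(k.x) corresponds, at
    linear order, to a density contrast delta = -div(psi) = -amp*|k|*cos(k.x).
    """
    k_vec = np.asarray(k_vec, dtype=float)
    k_norm = np.linalg.norm(k_vec)
    phase = pos @ k_vec                                   # k . x for each particle
    psi = amp * np.sin(phase)[:, None] * (k_vec / k_norm)
    new_pos = (pos + psi) % box_size                      # keep periodic wrapping
    new_vel = vel + aHf * psi                             # matching linear velocity
    return new_pos, new_vel
```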
The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterises these models. In this work we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with 0.1% accuracy or better for all redshifts relevant to nonlinear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. one-percent level, numerical simulations for this cosmological scenario.
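The back-scaling step described above amounts to multiplying the target-redshift linear spectrum by the square of a scale-dependent growth ratio. A minimal sketch, assuming D(k, z) has already been tabulated (for example from a two-fluid model as in the abstract); the function name and arguments are illustrative, not the public code's interface:

```python
import numpy as np

def backscale_linear_pk(pk_target, D_init_over_target):
    """Rescale a target (low-redshift) linear power spectrum P(k, z_target)
    to the starting redshift z_i using the scale-dependent growth ratio
    D(k, z_i) / D(k, z_target), tabulated on the same k grid.
    With massive neutrinos this ratio depends on k, so using a single
    scale-independent growth factor would bias the initial conditions."""
    return np.asarray(pk_target) * np.asarray(D_init_over_target)**2
```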
In the next decade, cosmological surveys will have the statistical power to detect the absolute neutrino mass scale. N-body simulations of large-scale structure formation play a central role in interpreting data from such surveys. Yet these simulations are Newtonian in nature. We provide a quantitative study of the limitations of treating neutrinos, implemented as N-body particles, in N-body codes, focusing on the error introduced by neglecting special relativistic effects. Special relativistic effects are potentially important due to the large thermal velocities of neutrino particles in the simulation box. We derive a self-consistent theory of linear perturbations for Newtonian, non-relativistic neutrinos and use this to demonstrate that N-body simulations overestimate the neutrino free-streaming scale and cause errors in the matter power spectrum that depend on the initial redshift of the simulations. For $z_i \lesssim 100$, and neutrino masses within the currently allowed range, this error is $\lesssim 0.5\%$, though it represents an up to $\sim 10\%$ correction to the shape of the neutrino-induced suppression of the cold dark matter power spectrum. We argue that the simulations accurately model non-linear clustering of neutrinos, so that the error is confined to linear scales.
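A quick way to see why special relativistic effects can matter is to evaluate the thermal speed assigned to neutrino N-body particles at the starting redshift. The back-of-the-envelope check below is my own illustration, not taken from the paper; it uses the mean momentum of a relativistic Fermi-Dirac distribution, <p> ≈ 3.15 T_nu(z), and the naive non-relativistic relation v = p/m.

```python
C_KM_S = 299_792.458              # speed of light [km/s]
T_NU0_EV = 1.95 * 8.617e-5        # neutrino temperature today, ~1.95 K, in eV

def neutrino_thermal_velocity(m_nu_ev, z):
    """Naive thermal speed v = <p>/m of a neutrino N-body particle at
    redshift z, with <p> ~ 3.15 * T_nu(z) for a relativistic Fermi-Dirac
    distribution. Values approaching or exceeding C_KM_S indicate that a
    purely Newtonian, non-relativistic particle treatment is being pushed
    beyond its validity at that starting redshift."""
    p_mean_ev = 3.15 * T_NU0_EV * (1.0 + z)
    return (p_mean_ev / m_nu_ev) * C_KM_S

# e.g. at z_i = 99 a 0.05 eV neutrino formally gets v > c, which is why
# the error depends on the initial redshift of the simulation
print(neutrino_thermal_velocity(0.05, 99.0))
```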
180 - Eelco van Kampen 2000
The aim of this paper is to clarify the notion and cause of overmerging in N-body simulations, and to present analytical estimates for its timescale. Overmerging is the disruption of subhaloes within embedding haloes due to {\it numerical} problems connected with the discreteness of N-body dynamics. It is shown that the process responsible for overmerging is particle-subhalo two-body heating. Various solutions to the overmerging problem are discussed.
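For context, a standard textbook estimate (not necessarily the paper's own expression) for the two-body relaxation time that drives this kind of discreteness heating is
$$ t_{\rm relax} \simeq \frac{0.1\, N}{\ln N}\, t_{\rm cross}, $$
with $N$ the number of particles in the subhalo and $t_{\rm cross}$ its crossing time, so poorly resolved subhaloes are heated, and artificially disrupted, within a few crossing times.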
121 - M. Trenti 2008
Gravitational N-body simulations, that is, numerical solutions of the equations of motion for N particles interacting gravitationally, are widely used tools in astrophysics, with applications from few-body or solar-system-like systems all the way up to galactic and cosmological scales. In this article we present a summary review of the field, highlighting the main methods for N-body simulations and the astrophysical contexts in which they are usually applied.