
UNIT project: Universe $N$-body simulations for the Investigation of Theoretical models from galaxy surveys

Posted by: Chia-Hsun Chuang
Publication date: 2018
Research field: Physics
Paper language: English





We present the UNIT $N$-body cosmological simulations project, designed to provide precise predictions for nonlinear statistics of the galaxy distribution. We focus on characterizing statistics relevant to emission line and luminous red galaxies in the current and upcoming generation of galaxy surveys. We use a suite of precise particle mesh simulations (FastPM), as well as full $N$-body calculations with a mass resolution of $\sim 1.2\times10^{9}\,h^{-1}$M$_{\odot}$, to investigate the recently suggested technique of Angulo & Pontzen 2016 to suppress the variance of cosmological simulations. We study redshift space distortions, cosmic voids, and higher order statistics from $z=2$ down to $z=0$. We find that both two- and three-point statistics are unbiased. Over the scales of interest for baryon acoustic oscillations and redshift-space distortions, we find that the variance is greatly reduced in the two-point statistics and in the cross correlation between halos and cosmic voids, but is not reduced significantly for the three-point statistics. We demonstrate that the accuracy of the two-point correlation function for a galaxy survey with an effective volume of 20 ($h^{-1}$Gpc)$^3$ is improved by about a factor of 40, indicating that two pairs of simulations with a volume of 1 ($h^{-1}$Gpc)$^3$ lead to the equivalent variance of $\sim$150 such simulations. The $N$-body simulations presented here thus provide an effective survey volume of about seven times the effective survey volume of DESI or Euclid. The data from this project, including dark matter fields, halo catalogues, and their clustering statistics, are publicly available at http://www.unitsims.org.
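The variance-suppression technique referenced above (Angulo & Pontzen 2016) combines "fixing" the initial Fourier amplitudes to their ensemble expectation with "pairing" each realization with a sign-flipped copy of the initial density field. The sketch below is a minimal illustration of how such paired-fixed Gaussian initial conditions could be set up on a small grid; the grid size, toy power spectrum, and function name are illustrative assumptions, not the UNIT pipeline.

    import numpy as np

    def paired_fixed_fields(n=64, boxsize=1000.0, seed=42):
        """Generate a 'fixed' Gaussian field and its sign-flipped pair.

        Amplitudes |delta_k| are set to sqrt(P(k)) exactly (no Rayleigh
        scatter); only the phases are random. The toy P(k) and units are
        placeholders for illustration only.
        """
        rng = np.random.default_rng(seed)
        kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
        kmag = np.sqrt(kx**2 + ky**2 + kz**2)
        kmag[0, 0, 0] = 1.0                  # avoid division by zero at k = 0

        pk = 1.0 / kmag**1.5                 # toy power spectrum (assumption)
        pk[0, 0, 0] = 0.0                    # no DC mode

        phases = rng.uniform(0.0, 2.0 * np.pi, size=kmag.shape)
        delta_k = np.sqrt(pk) * np.exp(1j * phases)   # fixed amplitude, random phase

        # Taking the real part is a shortcut that sidesteps enforcing Hermitian
        # symmetry explicitly; good enough for a toy field.
        field_a = np.fft.ifftn(delta_k).real
        # Pairing: the partner run starts from the sign-flipped linear field;
        # the two only diverge under nonlinear gravitational evolution.
        field_b = -field_a
        return field_a, field_b

    field_a, field_b = paired_fixed_fields()

Averaging statistics measured from such a pair cancels the leading-order deviations from the ensemble mean, which is the source of the variance reduction quoted in the abstract.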


Read also

We use N-body simulations to examine whether a characteristic turnaround radius, as predicted from the spherical collapse model in a $\rm \Lambda CDM$ Universe, can be meaningfully identified for galaxy clusters, in the presence of full three-dimensional effects. We use The Dark Sky Simulations and Illustris-TNG dark-matter-only cosmological runs to calculate radial velocity profiles around collapsed structures, extending out to many times the virial radius $R_{200}$. There, the turnaround radius can be unambiguously identified as the largest non-expanding scale around a center of gravity. We find that: (a) Indeed, a single turnaround scale can meaningfully describe strongly non-spherical structures. (b) For halos of masses $M_{200}>10^{13}M_\odot$, the turnaround radius $R_{ta}$ scales with the enclosed mass $M_{ta}$ as $M_{ta}^{1/3}$, as predicted by the spherical collapse model. (c) The deviation of $R_{ta}$ in simulated halos from the spherical collapse model prediction is insensitive to halo asphericity. Rather, it is sensitive to the tidal forces due to massive neighbors when such are present. (d) Halos exhibit a characteristic average density within the turnaround scale. This characteristic density is dependent on cosmology and redshift. For the present cosmic epoch and for concordance cosmological parameters ($\Omega_m \sim 0.3$; $\Omega_\Lambda \sim 0.7$) turnaround structures exhibit an average matter density contrast with the background Universe of $\delta \sim 11$. Thus $R_{ta}$ is equivalent to $R_{11}$ -- in a way analogous to defining the virial radius as $R_{200}$ -- with the advantage that $R_{11}$ is shown in this work to correspond to a kinematically relevant scale in N-body simulations.
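As a quick consistency check of the $R_{ta} \equiv R_{11}$ characterization above, the density contrast can be inverted to give the turnaround radius for a halo of a given mass. The short sketch below does this arithmetic for a hypothetical $10^{14}\,h^{-1}M_\odot$ halo; the cosmological parameters and the reading of the enclosed density as 11 times the background matter density (per the $R_{11}$ naming) are illustrative assumptions based on the abstract, not values taken from the paper.

    import numpy as np

    # Illustrative assumptions (not from the paper's tables):
    Omega_m  = 0.3                    # matter density parameter
    rho_crit = 2.775e11               # critical density in h^2 Msun / Mpc^3
    rho_mean = Omega_m * rho_crit     # mean matter density, h^2 Msun / Mpc^3
    delta_ta = 11.0                   # enclosed density / background density

    M_ta = 1.0e14                     # hypothetical halo mass in h^-1 Msun

    # M = (4/3) pi R^3 delta_ta rho_mean  =>  R = (3 M / (4 pi delta_ta rho_mean))^(1/3)
    R_ta = (3.0 * M_ta / (4.0 * np.pi * delta_ta * rho_mean)) ** (1.0 / 3.0)
    print(f"Turnaround radius ~ {R_ta:.1f} Mpc/h")   # roughly 3 Mpc/h for this mass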
We provide in-depth MCMC comparisons of two different models for the halo redshift space power spectrum, namely a variant of the commonly applied Taruya-Nishimichi-Saito (TNS) model and an effective field theory of large scale structure (EFTofLSS) inspired model. Using many simulation realisations and Stage IV survey-like specifications for the covariance matrix, we check each model's range of validity by testing for bias in the recovery of the fiducial growth rate of structure formation. The robustness of the determined range of validity is then tested by performing additional MCMC analyses using higher order multipoles, a larger survey volume and a more highly biased tracer catalogue. We find that under all tests, the TNS model's range of validity remains robust and is found to be much higher than previous estimates. The EFTofLSS model fails to capture the spectra for highly biased tracers, as well as becoming biased at higher wavenumbers when considering a very large survey volume. Further, we find that the marginalised constraints on $f$ for all analyses are stronger when using the TNS model.
L. Guzzo, J. Bel, D. Bianchi 2018
Galaxy redshift surveys are one of the pillars of the current standard cosmological model and remain a key tool in the experimental effort to understand the origin of cosmic acceleration. To this end, the next generation of surveys aims at achieving sub-percent precision in the measurement of the equation of state of dark energy $w(z)$ and the growth rate of structure $f(z)$. This however requires comparable control over systematic errors, stressing the need for improved modelling methods. In this contribution we review at the introductory level some highlights of the work done in this direction by the {\it Darklight} project. Supported by an ERC Advanced Grant, {\it Darklight} developed novel techniques for clustering analysis, which were tested through numerical simulations before being finally applied to galaxy data, in particular those of the recently completed VIPERS redshift survey. We focus in particular on: (a) advances in estimating the growth rate of structure from redshift-space distortions; (b) parameter estimation through global Bayesian reconstruction of the density field from survey data; (c) the impact of massive neutrinos on large-scale structure measurements. Overall, {\it Darklight} has contributed to paving the way for forthcoming high-precision experiments, such as {\it Euclid}, the next ESA cosmological mission.
We introduce and demonstrate the power of a method to speed up current iterative techniques for N-body modified gravity simulations. Our method is based on the observation that the accuracy of the final result is not compromised if the calculation of the fifth force becomes less accurate, but substantially faster, in high-density regions where it is weak due to screening. We focus on the nDGP model which employs Vainshtein screening, and test our method by running AMR simulations in which the solutions on the finer levels of the mesh (high density) are not obtained iteratively, but instead interpolated from coarser levels. We show that the impact this has on the matter power spectrum is below $1\%$ for $k < 5\,h/{\rm Mpc}$ at $z = 0$, and even smaller at higher redshift. The impact on halo properties is also small ($\lesssim 3\%$ for abundance, profiles, mass; and $\lesssim 0.05\%$ for positions and velocities). The method can boost the performance of modified gravity simulations by more than a factor of 10, which allows them to be pushed to resolution levels that were previously hard to achieve.
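To make the coarse-to-fine idea above concrete, the snippet below sketches, in one dimension, how a field solved on a coarse AMR level could simply be interpolated onto a finer level instead of being re-solved iteratively there. The grid sizes, the toy coarse solution, and the cubic-spline interpolator are illustrative choices, not the scheme actually used in the paper.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Coarse-level solution of some scalar field (placeholder values); in the
    # actual method this would be the fifth-force potential on a coarse level.
    x_coarse = np.linspace(0.0, 1.0, 33)           # coarse cell centres (assumption)
    phi_coarse = np.sin(2.0 * np.pi * x_coarse)    # toy solution on the coarse level

    # Fine level covering a screened, high-density sub-region: rather than
    # running Gauss-Seidel iterations here, interpolate the coarse solution down.
    x_fine = np.linspace(0.4, 0.6, 65)
    phi_fine = CubicSpline(x_coarse, phi_coarse)(x_fine)

    # The interpolation error is tolerable precisely because the fifth force is
    # suppressed by screening in such regions, as argued in the abstract.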
Sownak Bose, Baojiu Li 2016
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied $f(R)$ gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of $f(R)$ simulations. For example, a test simulation with $512^3$ particles in a box of size $512\,\mathrm{Mpc}/h$ is now 5 times faster than before, while a Millennium-resolution simulation for $f(R)$ gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
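The efficiency gain described above comes from replacing per-cell Newton-Gauss-Seidel iterations with a closed-form solve of the discretised equation. As a purely schematic illustration (the coefficients below are placeholders, not the actual discretised $f(R)$ equation), the sketch solves a depressed cubic $u^3 + p\,u + q = 0$ analytically on every cell, which is the kind of per-cell closed-form update such a method relies on.

    import numpy as np

    def solve_depressed_cubic(p, q):
        """Real root of u^3 + p*u + q = 0 via Cardano's formula, element-wise.

        Assumes a single real root (positive discriminant), which keeps the
        illustration simple; the real f(R) solver has its own equation and
        branch handling.
        """
        disc = (q / 2.0) ** 2 + (p / 3.0) ** 3
        sqrt_disc = np.sqrt(disc)
        return np.cbrt(-q / 2.0 + sqrt_disc) + np.cbrt(-q / 2.0 - sqrt_disc)

    # Placeholder per-cell coefficients on a small grid; in a real solver these
    # would be built from the density field and neighbouring cell values.
    rng = np.random.default_rng(0)
    p = np.abs(rng.normal(1.0, 0.1, size=(16, 16, 16)))   # p > 0 => one real root
    q = rng.normal(0.0, 0.1, size=(16, 16, 16))

    u = solve_depressed_cubic(p, q)   # one closed-form pass, no iterations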