
sCOLA: The N-body COLA Method Extended to the Spatial Domain

Posted by Svetlin Tassev
Publication date: 2015
Research field: Physics
Paper language: English





We present sCOLA -- an extension of the N-body COmoving Lagrangian Acceleration (COLA) method to the spatial domain. Similar to the original temporal-domain COLA, sCOLA is an N-body method for solving for large-scale structure in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory. Incorporating the sCOLA method in an N-body code allows one to gain computational speed by capturing the gravitational potential from the far field using perturbative techniques, while letting the N-body code solve only for the near field. The far and near fields are completely decoupled, effectively localizing gravity for the N-body side of the code. Thus, running an N-body code for a small simulation volume using sCOLA can reproduce the results of a standard N-body run for the same small volume embedded inside a much larger simulation. We demonstrate that sCOLA can be safely combined with the original temporal-domain COLA. sCOLA can be used as a method for performing zoom-in simulations. It also allows N-body codes to be made embarrassingly parallel, thus allowing for efficiently tiling a volume of interest using grid computing. Moreover, sCOLA can be useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering. Surveys that will benefit the most are ones with large aspect ratios, such as pencil-beam surveys, where sCOLA can easily capture the effects of large-scale transverse modes without the need to substantially increase the simulated volume. As an illustration of the method, we present proof-of-concept zoom-in simulations using a freely available sCOLA-based N-body code.
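For orientation, the frame change can be written schematically as follows (the notation here is ours, not the paper's): temporal COLA decomposes each trajectory into its LPT prediction plus a residual that the N-body solver integrates,

$$\mathbf{x}(t) = \mathbf{x}_{\rm LPT}(t) + \delta\mathbf{x}(t), \qquad T[\delta\mathbf{x}] = -\nabla\Phi - T[\mathbf{x}_{\rm LPT}],$$

where $T$ denotes the time-differential operator acting on comoving displacements. sCOLA applies the same logic in space, splitting the potential as $\Phi = \Phi_{\rm far} + \Phi_{\rm near}$, with $\Phi_{\rm far}$ supplied by perturbation theory for the large-scale modes and only $\Phi_{\rm near}$ solved by the N-body code inside the small volume.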


Read also

The growth rate and expansion history of the Universe can be measured from large galaxy redshift surveys using the Alcock-Paczynski effect. We validate the Redshift Space Distortion models used in the final analysis of the Sloan Digital Sky Survey (SDSS) extended Baryon Oscillation Spectroscopic Survey (eBOSS) Data Release 16 quasar clustering sample, in configuration and Fourier space, using a series of HOD mock catalogues generated using the OuterRim N-body simulation. We test three models on a series of non-blind mocks, in the OuterRim cosmology, and blind mocks, which have been rescaled to new cosmologies, and investigate the effects of redshift smearing and catastrophic redshifts. We find that for the non-blind mocks, the models are able to recover $f\sigma_8$ to within 3% and $\alpha_\parallel$ and $\alpha_\bot$ to within 1%. The scatter in the measurements is larger for the blind mocks, due to the assumption of an incorrect fiducial cosmology. From this mock challenge, we find that all three models perform well, with similar systematic errors on $f\sigma_8$, $\alpha_\parallel$ and $\alpha_\bot$ at the level of $\sigma_{f\sigma_8}=0.013$, $\sigma_{\alpha_\parallel}=0.012$ and $\sigma_{\alpha_\bot}=0.008$. The systematic error on the combined consensus is $\sigma_{f\sigma_8}=0.011$, $\sigma_{\alpha_\parallel}=0.008$ and $\sigma_{\alpha_\bot}=0.005$, which is used in the final DR16 analysis. For BAO fits in configuration and Fourier space, we take conservative systematic errors of $\sigma_{\alpha_\parallel}=0.010$ and $\sigma_{\alpha_\bot}=0.007$.
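For reference, the dilation parameters quoted above follow the convention standard in BOSS/eBOSS analyses (not restated in the abstract):

$$\alpha_\parallel = \frac{H^{\rm fid}(z)\, r_d^{\rm fid}}{H(z)\, r_d}, \qquad \alpha_\bot = \frac{D_A(z)\, r_d^{\rm fid}}{D_A^{\rm fid}(z)\, r_d},$$

where $H(z)$ is the Hubble rate, $D_A(z)$ the angular diameter distance, $r_d$ the sound horizon at the drag epoch, and the superscript "fid" marks the fiducial cosmology; $f\sigma_8$ is the linear growth rate times the amplitude of matter fluctuations in $8\,h^{-1}$Mpc spheres.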
We develop a series of N-body data challenges, functional to the final analysis of the extended Baryon Oscillation Spectroscopic Survey (eBOSS) Data Release 16 (DR16) galaxy sample. The challenges are primarily based on high-fidelity catalogs constructed from the Outer Rim simulation, a large box-size realization (3 Gpc/h) characterized by an unprecedented combination of volume and mass resolution, down to $1.85\times 10^{9}\,M_\odot/h$. We generate synthetic galaxy mocks by populating Outer Rim halos with a variety of halo occupation distribution (HOD) schemes of increasing complexity, spanning different redshift intervals. We then assess the performance of three complementary redshift space distortion (RSD) models in configuration and Fourier space, adopted for the analysis of the complete DR16 eBOSS sample of Luminous Red Galaxies (LRGs). We find all the methods mutually consistent, with comparable systematic errors on the Alcock-Paczynski parameters and the growth of structure, and robust to different HOD prescriptions, thus validating the robustness of the models and the pipelines used for the baryon acoustic oscillation (BAO) and full-shape clustering analysis. In particular, all the techniques are able to recover $\alpha_\parallel$ and $\alpha_\perp$ to within 0.9%, and $f\sigma_8$ to within 1.5%. As a by-product of our work, we are also able to gain interesting insights on the galaxy-halo connection. Our study is relevant for the final eBOSS DR16 'consensus' cosmology, as the systematic error budget is informed by testing the results of analyses against these high-resolution mocks. In addition, it is also useful for future large-volume surveys, since similar mock-making techniques and systematic corrections can be readily extended to model, for instance, the Dark Energy Spectroscopic Instrument (DESI) galaxy sample.
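As a concrete illustration of how halos are populated in such HOD mocks, the sketch below applies a common Zheng et al. (2005)-style parameterization (smooth central step plus power-law satellites); the parameter values and toy halo masses are placeholders and do not correspond to the Outer Rim challenge catalogs.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(42)

def mean_ncen(log_m, log_m_min=13.0, sigma_logm=0.3):
    """Mean central occupation: smooth step in log halo mass."""
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma_logm))

def mean_nsat(log_m, log_m0=13.2, log_m1=14.1, alpha=1.0):
    """Mean satellite occupation: truncated power law in halo mass."""
    m, m0, m1 = 10.0 ** log_m, 10.0 ** log_m0, 10.0 ** log_m1
    nsat = np.zeros_like(m)
    mask = m > m0
    nsat[mask] = ((m[mask] - m0) / m1) ** alpha
    return nsat

# Toy halo catalogue: log10 masses in M_sun/h (placeholder values, not Outer Rim halos)
log_mass = rng.uniform(12.0, 15.0, size=100_000)

# Bernoulli draw for centrals; Poisson satellites, here conditioned on hosting a central
has_central = rng.random(log_mass.size) < mean_ncen(log_mass)
n_satellites = rng.poisson(mean_nsat(log_mass) * has_central)

print("mock galaxies:", has_central.sum() + n_satellites.sum())
```

Increasingly complex schemes (velocity bias, assembly bias, non-Poisson satellite statistics) modify these occupation functions and the way satellites are placed inside halos.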
Cosmological growth can be measured from the redshift-space clustering of galaxies targeted by spectroscopic surveys. Accurately predicting galaxy clustering requires an understanding of galaxy physics, which is a hard, highly non-linear problem. Approximate models of redshift space distortions (RSD) instead take a perturbative approach to the evolution of dark matter and galaxies in the Universe. In this paper we focus on eBOSS emission line galaxies (ELGs), which live in intermediate-mass haloes. We create a series of mock catalogues using haloes from the MultiDark and Outer Rim dark-matter-only N-body simulations. Our mock catalogues include various effects inspired by baryonic physics, such as assembly bias and satellite galaxy kinematics, dynamics and statistics that deviate from those of the dark matter particles. We analyse these mocks using the TNS RSD model in Fourier space and CLPT in configuration space. We conclude that these two RSD models provide an unbiased measurement of redshift space distortions within the statistical error of our mocks. We obtain conservative theoretical systematic uncertainties of 3.3%, 1.8% and 1.5% in $f\sigma_8$, $\alpha_{\parallel}$ and $\alpha_{\bot}$, respectively, for the TNS and CLPT models. The estimated theoretical systematic error is an order of magnitude smaller than the statistical error of the eBOSS ELG sample and is hence negligible for the current eBOSS ELG analysis.
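For orientation, the TNS model referenced above predicts the redshift-space galaxy power spectrum schematically as (our shorthand for Taruya, Nishimichi & Saito 2010):

$$P^s(k,\mu) = D_{\rm FoG}(k\mu\sigma_v)\left[b^2 P_{\delta\delta}(k) + 2bf\mu^2 P_{\delta\theta}(k) + f^2\mu^4 P_{\theta\theta}(k) + A(k,\mu) + B(k,\mu)\right],$$

where $\mu$ is the cosine of the angle to the line of sight, $b$ the galaxy bias, $f$ the growth rate, $P_{\delta\delta}$, $P_{\delta\theta}$ and $P_{\theta\theta}$ the density and velocity-divergence spectra, $A$ and $B$ the TNS correction terms, and $D_{\rm FoG}$ a Fingers-of-God damping factor; CLPT provides the analogous prediction for the correlation function in configuration space.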
We present a general parallelized and easy-to-use code to perform numerical simulations of structure formation using the COLA (COmoving Lagrangian Acceleration) method for cosmological models that exhibit scale-dependent growth at the level of first- and second-order Lagrangian perturbation theory. For modified gravity theories we also include screening using a fast approximate method that covers all the main examples of screening mechanisms in the literature. We test the code by comparing it to full simulations of two popular modified gravity models, namely $f(R)$ gravity and nDGP, and find good agreement in the modified gravity boost-factors relative to $\Lambda$CDM even when using a fairly small number of COLA time steps.
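For context, the scale-dependent growth referred to above enters, in the quasi-static limit, through a modified Poisson equation of the schematic form (notation ours)

$$k^2\Phi = -4\pi G\, a^2\, \mu(k,a)\,\bar{\rho}\,\delta, \qquad \mu_{f(R)}(k,a) = 1 + \frac{1}{3}\frac{k^2}{k^2 + a^2 m^2(a)},$$

where $m(a)$ is the scalaron mass; for nDGP the corresponding $\mu(a)$ is scale-independent. The first- and second-order LPT growth factors derived from such a $\mu$ become $k$-dependent, while screening drives $\mu \to 1$ in dense environments, which the code approximates with the fast method mentioned above.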
This work discusses the main analogies and differences between the deterministic approach underlying most cosmological N-body simulations and the probabilistic interpretation of the problem that is often considered in mathematics and statistical mechanics. In practice, we advocate for averaging over an ensemble of $S$ independent simulations with $N$ particles each in order to study the evolution of the one-point probability density $\Psi$ of finding a particle at a given location of phase space $(\mathbf{x},\mathbf{v})$ at time $t$. The proposed approach is extremely efficient from a computational point of view, with modest CPU and memory requirements, and it provides an alternative to traditional N-body simulations when the goal is to study the average properties of N-body systems, at the cost of abandoning the notion of well-defined trajectories for each individual particle. In one spatial dimension, our results, fully consistent with those previously reported in the literature for the standard deterministic formulation of the problem, highlight the differences between the evolution of the one-point probability density $\Psi(x,v,t)$ and the predictions of the collisionless Boltzmann (Vlasov-Poisson) equation, as well as the relatively subtle dependence on the actual finite number $N$ of particles in the system. We argue that understanding this dependence on $N$ may actually shed more light on the dynamics of real astrophysical systems than the limit $N\to\infty$.
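As a minimal sketch of the proposed estimator, the toy script below evolves $S$ independent one-dimensional realizations (self-gravitating sheets, the usual 1D analogue) and accumulates their phase-space coordinates into a histogram approximating the one-point density $\Psi(x,v,t)$. Every numerical choice here (sheet-gravity normalization, initial conditions, grid, time step) is ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def accelerations(x):
    """1D sheet gravity: each sheet is pulled toward the side holding more mass.
    With the total mass normalized to unity, a_i = (N_right - N_left) / N."""
    ranks = np.argsort(np.argsort(x))        # number of sheets to the left of sheet i
    n = x.size
    return (n - 1 - 2 * ranks) / n

def evolve(x, v, dt, n_steps):
    """Kick-drift-kick leapfrog integration of a single realization."""
    a = accelerations(x)
    for _ in range(n_steps):
        v += 0.5 * dt * a
        x += dt * v
        a = accelerations(x)
        v += 0.5 * dt * a
    return x, v

S, N = 200, 256                              # ensemble size and particles per realization
edges = (np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
hist = np.zeros((60, 60))

for _ in range(S):
    x = rng.normal(0.0, 1.0, N)              # cold start: Gaussian positions, zero velocity
    v = np.zeros(N)
    x, v = evolve(x, v, dt=0.01, n_steps=300)
    counts, _, _ = np.histogram2d(x, v, bins=edges)
    hist += counts

psi = hist / (S * N)                         # Monte Carlo estimate of Psi(x, v, t)
print("fraction of mass inside the histogram window:", psi.sum())
```

Averaging over ever larger ensembles tightens the estimate of $\Psi$ without following any individual trajectory, which is the trade-off the paper advocates.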
