
Assessing non-linear models for galaxy clustering III: Theoretical accuracy for Stage IV surveys

Submitted by Benjamin Bose
Publication date: 2019
Research field: Physics
Paper language: English





We provide in-depth MCMC comparisons of two different models for the halo redshift-space power spectrum, namely a variant of the commonly applied Taruya-Nishimichi-Saito (TNS) model and an effective field theory of large scale structure (EFTofLSS) inspired model. Using many simulation realisations and Stage IV survey-like specifications for the covariance matrix, we check each model's range of validity by testing for bias in the recovery of the fiducial growth rate of structure formation. The robustness of the determined range of validity is then tested by performing additional MCMC analyses using higher-order multipoles, a larger survey volume and a more highly biased tracer catalogue. We find that under all tests, the TNS model's range of validity remains robust and is found to be much higher than previous estimates. The EFTofLSS model fails to capture the spectra for highly biased tracers, as well as becoming biased at higher wavenumbers when considering a very large survey volume. Further, we find that the marginalised constraints on $f$ for all analyses are stronger when using the TNS model.
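As a rough illustration of the kind of validity test described in this abstract, the sketch below fits a toy Kaiser-like redshift-space model to mock multipole data with emcee and checks whether the recovered growth rate is biased relative to the fiducial value; the model, mock data, covariance and fiducial numbers are placeholders, not the TNS/EFTofLSS pipeline used in the paper.

```python
# Minimal sketch of the bias-recovery test: fit a toy redshift-space model to
# mock multipole data with MCMC and check whether the posterior on the growth
# rate f is consistent with the fiducial value. The Kaiser model, covariance
# and fiducial numbers are illustrative placeholders only.
import numpy as np
import emcee

k = np.linspace(0.01, 0.2, 20)            # wavenumbers [h/Mpc] below k_max
p_lin = 2.0e4 * (k / 0.05) ** -1.5        # toy linear power spectrum
f_fid, b_fid = 0.55, 1.3                  # fiducial growth rate and linear bias

def multipoles(f, b):
    """Kaiser monopole and quadrupole (toy stand-in for the full model)."""
    p0 = (b**2 + 2.0 / 3.0 * b * f + 0.2 * f**2) * p_lin
    p2 = (4.0 / 3.0 * b * f + 4.0 / 7.0 * f**2) * p_lin
    return np.concatenate([p0, p2])

# Mock "measured" multipoles and a diagonal Gaussian covariance.
rng = np.random.default_rng(0)
cov = np.diag((0.03 * multipoles(f_fid, b_fid)) ** 2)
data = rng.multivariate_normal(multipoles(f_fid, b_fid), cov)
icov = np.linalg.inv(cov)

def log_prob(theta):
    f, b = theta
    if not (0.0 < f < 2.0 and 0.0 < b < 5.0):
        return -np.inf
    r = data - multipoles(f, b)
    return -0.5 * r @ icov @ r

ndim, nwalkers = 2, 32
p0 = np.array([f_fid, b_fid]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)

f_mean, f_std = chain[:, 0].mean(), chain[:, 0].std()
print(f"recovered f = {f_mean:.3f} +/- {f_std:.3f} (fiducial {f_fid})")
# The model counts as "valid" up to the k_max at which |f_mean - f_fid|
# stays well within the statistical error (e.g. below one sigma).
```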


Read also

In recent years forecasting activities have become a very important tool for designing and optimising large scale structure surveys. To predict the performance of such surveys, the Fisher matrix formalism is frequently used as a fast and easy way to compute constraints on cosmological parameters. Among them lies the study of the properties of dark energy, which is one of the main goals of modern cosmology. As such, a metric for the power of a survey to constrain dark energy is provided by the Figure of Merit (FoM). This is defined as the inverse of the surface contour given by the joint variance of the dark energy equation of state parameters $\{w_0, w_a\}$ in the Chevallier-Polarski-Linder parameterisation, which can be evaluated from the covariance matrix of the parameters. This covariance matrix is obtained as the inverse of the Fisher matrix. Inversion of an ill-conditioned matrix can result in large errors on the covariance coefficients if the elements of the Fisher matrix have been estimated with insufficient precision. The condition number is a metric providing a mathematical lower limit to the required precision for a reliable inversion, but it is often too stringent in practice for Fisher matrices with size larger than $2\times 2$. In this paper we propose a general numerical method to guarantee a certain precision on the inferred constraints, like the FoM. It consists of randomly vibrating (perturbing) the Fisher matrix elements with Gaussian perturbations of a given amplitude, and then evaluating the maximum amplitude that keeps the FoM within the chosen precision. The steps used in the numerical derivatives and integrals involved in the calculation of the Fisher matrix elements can then be chosen accordingly in order to keep the precision of the Fisher matrix elements below this maximum amplitude...
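As a rough sketch of the proposed "vibration" procedure (under the assumption that the perturbation amplitude is expressed relative to each Fisher element), the snippet below perturbs a synthetic Fisher matrix with Gaussian noise and measures the resulting scatter in the FoM:

```python
# Sketch of the Fisher-matrix "vibration" test: perturb the Fisher elements
# with Gaussian noise of a given relative amplitude and check how much the
# dark-energy Figure of Merit scatters. The 4x4 Fisher matrix is synthetic;
# in practice it would come from an actual survey forecast.
import numpy as np

def figure_of_merit(fisher, iw0, iwa):
    """FoM = 1 / sqrt(det) of the (w0, wa) block of the parameter covariance."""
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_([iw0, iwa], [iw0, iwa])]
    det = np.linalg.det(block)
    return 1.0 / np.sqrt(det) if det > 0 else np.nan

def fom_scatter(fisher, iw0, iwa, amp, n_trials=2000, seed=0):
    """Relative FoM scatter when each element is vibrated by N(0, amp * |F_ij|)."""
    rng = np.random.default_rng(seed)
    fom0 = figure_of_merit(fisher, iw0, iwa)
    foms = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, amp, size=fisher.shape) * np.abs(fisher)
        noise = 0.5 * (noise + noise.T)          # keep the matrix symmetric
        foms.append(figure_of_merit(fisher + noise, iw0, iwa))
    return np.nanstd(foms) / fom0

# Synthetic positive-definite Fisher matrix; parameters ordered (w0, wa, Om, h).
rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4))
fisher = a @ a.T + 4.0 * np.eye(4)

# The largest amplitude that keeps the scatter below the target precision (say 1%)
# sets the precision required on the numerically computed Fisher elements.
for amp in [1e-4, 1e-3, 1e-2]:
    print(f"amp = {amp:.0e}: relative FoM scatter = {fom_scatter(fisher, 0, 1, amp):.4f}")
```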
When analyzing galaxy clustering in multi-band imaging surveys, there is a trade-off between selecting the largest galaxy samples (to minimize the shot noise) and selecting samples with the best photometric redshift (photo-z) precision, which generally include only a small subset of galaxies. In this paper, we systematically explore this trade-off. Our analysis is targeted towards the third year data of the Dark Energy Survey (DES), but our methods hold generally for other data sets. Using a simple Gaussian model for the redshift uncertainties, we carry out a Fisher matrix forecast for cosmological constraints from angular clustering in the redshift range $z = 0.2-0.95$. We quantify the cosmological constraints using a Figure of Merit (FoM) that measures the combined constraints on $\Omega_m$ and $\sigma_8$ in the context of $\Lambda$CDM cosmology. We find that the trade-off between sample size and photo-z precision is sensitive to 1) whether cross-correlations between redshift bins are included or not, and 2) the ratio of the redshift bin width $\delta z$ and the photo-z precision $\sigma_z$. When cross-correlations are included and the redshift bin width is allowed to vary, the highest FoM is achieved when $\delta z \sim \sigma_z$. We find that for the typical case of $5-10$ redshift bins, optimal results are reached when we use larger, less precise photo-z samples, provided that we include cross-correlations. For samples with higher $\sigma_{z}$, the overlap between redshift bins is larger, leading to higher cross-correlation amplitudes. This leads to the self-calibration of the photo-z parameters and therefore tighter cosmological constraints. These results can be used to help guide galaxy sample selection for clustering analysis in ongoing and future photometric surveys.
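The intuition behind the $\delta z \sim \sigma_z$ result can be illustrated with a toy calculation: for Gaussian photo-z scatter, the true-redshift distribution of a photo-z-selected bin is a smeared top hat, and the overlap between neighbouring bins, which sources the cross-correlation signal used for self-calibration, grows with $\sigma_z/\delta z$. The bin edges, scatter values and overlap measure below are illustrative choices, not those of the DES analysis.

```python
# Toy illustration of the delta_z vs sigma_z trade-off: for Gaussian photo-z
# errors the true-redshift distribution of a photo-z bin is a smeared top hat,
# and the overlap between neighbouring bins grows with sigma_z / delta_z.
# Bin edges, scatter values and the overlap measure are illustrative choices.
import numpy as np
from scipy.stats import norm

zgrid = np.linspace(0.0, 2.0, 4001)

def true_z_dist(z, zlo, zhi, sigma_z):
    """P(true z | photo-z in [zlo, zhi]) for Gaussian scatter and a flat n(z)."""
    p = norm.cdf(zhi, loc=z, scale=sigma_z) - norm.cdf(zlo, loc=z, scale=sigma_z)
    return p / (zhi - zlo)                    # integrates to ~1 away from the edges

def bin_overlap(edges1, edges2, sigma_z):
    """Bhattacharyya overlap between the true-z distributions of two bins."""
    p1 = true_z_dist(zgrid, *edges1, sigma_z)
    p2 = true_z_dist(zgrid, *edges2, sigma_z)
    return np.sum(np.sqrt(p1 * p2)) * (zgrid[1] - zgrid[0])

delta_z = 0.05                                # photo-z bin width
bin1, bin2 = (0.50, 0.55), (0.55, 0.60)       # two adjacent bins
for sigma_z in [0.01, 0.05, 0.10]:
    print(f"sigma_z/delta_z = {sigma_z/delta_z:.1f}: "
          f"overlap = {bin_overlap(bin1, bin2, sigma_z):.2f}")
```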
We present the UNIT $N$-body cosmological simulations project, designed to provide precise predictions for nonlinear statistics of the galaxy distribution. We focus on characterizing statistics relevant to emission line and luminous red galaxies in the current and upcoming generation of galaxy surveys. We use a suite of precise particle mesh simulations (FastPM) as well as full $N$-body calculations with a mass resolution of $\sim 1.2\times10^9\,h^{-1}$M$_{\odot}$ to investigate the recently suggested technique of Angulo & Pontzen 2016 to suppress the variance of cosmological simulations. We study redshift space distortions, cosmic voids, and higher order statistics from $z=2$ down to $z=0$. We find that both two- and three-point statistics are unbiased. Over the scales of interest for baryon acoustic oscillations and redshift-space distortions, we find that the variance is greatly reduced in the two-point statistics and in the cross correlation between halos and cosmic voids, but is not reduced significantly for the three-point statistics. We demonstrate that the accuracy of the two-point correlation function for a galaxy survey with effective volume of 20 ($h^{-1}$Gpc)$^3$ is improved by about a factor of 40, indicating that two pairs of simulations with a volume of 1 ($h^{-1}$Gpc)$^3$ lead to the equivalent variance of $\sim$150 such simulations. The $N$-body simulations presented here thus provide an effective survey volume of about seven times the effective survey volume of DESI or Euclid. The data from this project, including dark matter fields, halo catalogues, and their clustering statistics, are publicly available at http://www.unitsims.org.
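The variance-suppression technique of Angulo & Pontzen 2016 referred to above amounts to fixing the modulus of every initial Fourier mode to the input power spectrum and pairing each realisation with its phase-reversed copy. A minimal sketch of such "fixed and paired" initial conditions, on a toy grid with a placeholder power spectrum and with FFT normalisation constants omitted, is:

```python
# Minimal sketch of "fixed and paired" initial conditions (Angulo & Pontzen 2016):
# the modulus of every Fourier mode is fixed by the input power spectrum (only
# the phases are random), and each realisation is paired with its phase-reversed
# copy delta_k -> -delta_k. Grid size and power spectrum are toy placeholders,
# and overall FFT normalisation constants are omitted.
import numpy as np

def fixed_paired_ics(ngrid, box, pk, seed=0):
    rng = np.random.default_rng(seed)
    kfreq = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1.0                       # avoid division by zero at k = 0

    # Hermitian random phases taken from the FFT of a real white-noise field.
    wk = np.fft.fftn(rng.standard_normal((ngrid,) * 3))
    phases = wk / np.abs(wk)

    delta_k = np.sqrt(pk(kmag)) * phases      # fixed amplitude, random phase
    delta_k[0, 0, 0] = 0.0                    # zero-mean field
    delta_a = np.fft.ifftn(delta_k).real      # realisation A
    delta_b = np.fft.ifftn(-delta_k).real     # paired realisation B (phases + pi)
    return delta_a, delta_b

pk = lambda k: 1.0e3 * k / (1.0 + (k / 0.1) ** 3)   # toy P(k)
delta_a, delta_b = fixed_paired_ics(ngrid=64, box=500.0, pk=pk)
# In the initial conditions the paired fields are exact mirror images; the
# variance reduction appears once both are evolved and their statistics averaged.
print("pair correlation:", np.corrcoef(delta_a.ravel(), delta_b.ravel())[0, 1])
```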
Peder Norberg (IfA), 2011
For galaxy clustering to provide robust constraints on cosmological parameters and galaxy formation models, it is essential to make reliable estimates of the errors on clustering measurements. We present a new technique, based on a spatial Jackknife (JK) resampling, which provides an objective way to estimate errors on clustering statistics. Our approach allows us to set the appropriate size for the Jackknife subsamples. The method also provides a means to assess the impact of individual regions on the measured clustering, and thereby to establish whether or not a given galaxy catalogue is dominated by one or several large structures, preventing it from being considered a fair sample. We apply this methodology to the two- and three-point correlation functions measured from a volume limited sample of M* galaxies drawn from data release seven of the Sloan Digital Sky Survey (SDSS). The frequency of jackknife subsample outliers in the data is shown to be consistent with that seen in large N-body simulations of clustering in the cosmological constant plus cold dark matter cosmology. We also present a comparison of the three-point correlation function in SDSS and 2dFGRS using this approach and find consistent measurements between the two samples.
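A minimal sketch of the delete-one-region jackknife idea (with a toy point set and a crude pair-count statistic standing in for the actual correlation-function estimators) is:

```python
# Sketch of spatial jackknife errors on a clustering statistic: split the
# survey footprint into N_sub regions, recompute the statistic with one region
# removed at a time, and scale the scatter by (N_sub - 1). The data and the
# "statistic" (a crude pair-count histogram) are toy placeholders.
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(points, r_edges):
    """Toy statistic: pair counts per separation bin, normalised per point."""
    tree = cKDTree(points)
    counts = tree.count_neighbors(tree, r_edges, cumulative=False)[1:]
    return counts / len(points)

def jackknife_covariance(points, labels, statistic, *args):
    """Delete-one-region jackknife covariance for an arbitrary statistic."""
    regions = np.unique(labels)
    n = len(regions)
    stats = np.array([statistic(points[labels != r], *args) for r in regions])
    diff = stats - stats.mean(axis=0)
    return (n - 1.0) / n * diff.T @ diff      # standard jackknife scaling

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 100.0, size=(5000, 2))           # toy 2D "survey"
labels = (points[:, 0] // 25).astype(int) * 4 + (points[:, 1] // 25).astype(int)
r_edges = np.array([0.0, 1.0, 2.0, 4.0, 8.0])

cov = jackknife_covariance(points, labels, pair_counts, r_edges)
print("jackknife errors per bin:", np.sqrt(np.diag(cov)))
```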
We study the importance of gravitational lensing in the modelling of the number counts of galaxies. We confirm previous results for photometric surveys, showing that lensing cannot be neglected in a survey like LSST, since doing so would induce a significant shift of cosmological parameters. For a spectroscopic survey like SKA2, we find that neglecting lensing in the monopole, quadrupole and hexadecapole of the correlation function also induces an important shift of parameters. For $\Lambda$CDM parameters, the shift is moderate, of the order of $0.6\sigma$ or less. However, for a model-independent analysis that measures the growth rate of structure in each redshift bin, neglecting lensing introduces a shift of up to $2.3\sigma$ at high redshift. Since the growth rate is directly used to test the theory of gravity, such a strong shift would wrongly be interpreted as a breakdown of General Relativity. This shows the importance of including lensing in the analysis of future surveys. On the other hand, for a survey like DESI, we find that lensing is not important, mainly due to the value of the magnification bias parameter of DESI, $s(z)$, which strongly reduces the lensing contribution at high redshift. We also propose a way of improving the analysis of spectroscopic surveys by including the cross-correlations between different redshift bins (which are usually neglected in spectroscopic analyses), either from the spectroscopic survey itself or from a different photometric sample. We show that including the cross-correlations in the SKA2 analysis does not improve the constraints. On the other hand, replacing the cross-correlations from SKA2 by cross-correlations measured with LSST improves the constraints by 10 to 20%. Interestingly, for $\Lambda$CDM parameters, we find that LSST and SKA2 are highly complementary, since they are affected differently by degeneracies between parameters.
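The role of the magnification bias parameter can be made explicit with the standard linear-theory expression (not quoted from the paper itself) for the lensing contribution to the observed number counts,
$$\Delta^{\rm lens}(\mathbf{n},z) = \bigl(5\,s(z)-2\bigr)\,\kappa(\mathbf{n},z),$$
where $\kappa$ is the lensing convergence integrated along the line of sight. For a tracer with $s(z)\simeq 0.4$ the prefactor nearly vanishes, which is why the lensing term is strongly suppressed for a sample such as DESI, while tracers with $s(z)$ far from $0.4$ retain a large lensing contribution.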