
Statistical Analysis of Galaxy Surveys-IV: An objective way to quantify the impact of superstructures on galaxy clustering statistics

Published by: Peder Norberg
Publication date: 2011
Research field: Physics
Paper language: English
Authors: Peder Norberg (IfA)





For galaxy clustering to provide robust constraints on cosmological parameters and galaxy formation models, it is essential to make reliable estimates of the errors on clustering measurements. We present a new technique, based on spatial Jackknife (JK) resampling, which provides an objective way to estimate errors on clustering statistics. Our approach allows us to set the appropriate size for the jackknife subsamples. The method also provides a means to assess the impact of individual regions on the measured clustering, and thereby to establish whether or not a given galaxy catalogue is dominated by one or several large structures, which would prevent it from being considered a fair sample. We apply this methodology to the two- and three-point correlation functions measured from a volume-limited sample of M* galaxies drawn from data release seven of the Sloan Digital Sky Survey (SDSS). The frequency of jackknife subsample outliers in the data is shown to be consistent with that seen in large N-body simulations of clustering in the cosmological constant plus cold dark matter ($\Lambda$CDM) cosmology. We also present a comparison of the three-point correlation function in SDSS and 2dFGRS using this approach and find consistent measurements between the two samples.
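
As a concrete illustration of the leave-one-region-out scheme summarised above, the sketch below (Python, NumPy only) measures a toy two-point correlation function in a periodic box, builds the jackknife covariance over an nside^3 grid of subvolumes, and flags the single most influential region. The grid size, the toy catalogue, and the natural DD/RR - 1 estimator with analytic randoms are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def pair_counts(pos, edges, boxsize):
    """Brute-force pair counts in radial bins (toy O(N^2) implementation)."""
    d = pos[None, :, :] - pos[:, None, :]
    d -= boxsize * np.round(d / boxsize)            # periodic wrapping
    r = np.sqrt((d ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pos), k=1)             # unique pairs only
    return np.histogram(r[iu], bins=edges)[0]

def xi_natural(pos, edges, boxsize):
    """Natural estimator xi = DD/RR - 1, with analytic RR for the box.
    NB: for simplicity RR always assumes the full box; a real analysis
    recomputes the random counts on the geometry left after masking."""
    n = len(pos)
    dd = pair_counts(pos, edges, boxsize)
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rr = 0.5 * n * (n - 1) * shell / boxsize ** 3
    return dd / rr - 1.0

def jackknife_xi(pos, edges, boxsize, nside=3):
    """Leave-one-region-out jackknife over an nside^3 spatial grid."""
    cell = (pos / (boxsize / nside)).astype(int).clip(0, nside - 1)
    region = cell[:, 0] * nside ** 2 + cell[:, 1] * nside + cell[:, 2]
    nreg = nside ** 3
    xi_k = np.array([xi_natural(pos[region != k], edges, boxsize)
                     for k in range(nreg)])
    mean = xi_k.mean(axis=0)
    diff = xi_k - mean
    cov = (nreg - 1) / nreg * diff.T @ diff         # standard JK covariance
    return mean, cov, xi_k

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(1000, 3))       # toy 'galaxy' catalogue
edges = np.linspace(5.0, 25.0, 6)
mean, cov, xi_k = jackknife_xi(pos, edges, boxsize=100.0)
print("xi(r):", mean)
print("sigma:", np.sqrt(np.diag(cov)))
# Flag the subvolume whose removal shifts the measurement the most
print("most influential region:", np.argmax(np.abs(xi_k - mean).sum(axis=1)))
```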


Read also

Peder Norberg (2008)
We present a test of different error estimators for 2-point clustering statistics, appropriate for present and future large galaxy redshift surveys. Using an ensemble of very large dark matter LambdaCDM N-body simulations, we compare internal error estimators (jackknife and bootstrap) to external ones (Monte Carlo realizations). For 3-dimensional clustering statistics, we find that none of the internal error methods investigated is able to reproduce the errors of the external estimators, either accurately or robustly, on 1 to 25 Mpc/h scales. The standard bootstrap overestimates the variance of xi(s) by ~40% on all scales probed, but recovers, in a robust fashion, the principal eigenvectors of the underlying covariance matrix. The jackknife returns the correct variance on large scales, but significantly overestimates it on smaller scales. This scale dependence in the jackknife affects the recovered eigenvectors, which tend to disagree on small scales with the external estimates. Our results have important implications for the use of galaxy clustering in placing constraints on cosmological parameters. For example, in a 2-parameter fit to the projected correlation function, we find that the standard bootstrap systematically overestimates the 95% confidence interval, while the jackknife method remains biased, but to a lesser extent. The scatter we find between realizations, for Gaussian statistics, implies that a 2-sigma confidence interval, as inferred from an internal estimator, could correspond in practice to anything from 1-sigma to 3-sigma. Finally, by oversampling sub-volumes, it is possible to obtain bootstrap variances and confidence intervals that agree with external error estimates, but it is not clear whether this prescription will work in the general case.
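
The sketch below contrasts the internal estimators discussed here (jackknife and bootstrap over subvolumes) with the external one (ensemble scatter across mocks) on a deliberately idealised toy statistic: the mean of independent Gaussian subvolumes, for which all three agree by construction. The abstract's point is precisely that for correlated clustering measurements they do not. All sample sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
nsub, nmock, nboot = 27, 500, 1000    # subvolumes, mocks, bootstrap draws

def internal_errors(sub_vals):
    """Jackknife and bootstrap variances built from per-subvolume values."""
    n = len(sub_vals)
    jk = np.array([np.delete(sub_vals, k).mean() for k in range(n)])
    var_jk = (n - 1) / n * ((jk - jk.mean()) ** 2).sum()
    boot = rng.choice(sub_vals, size=(nboot, n), replace=True).mean(axis=1)
    var_bs = boot.var(ddof=1)
    return var_jk, var_bs

# External 'truth': scatter of the full-volume statistic across many mocks
mocks = rng.normal(0.0, 1.0, size=(nmock, nsub))
external = mocks.mean(axis=1).var(ddof=1)

var_jk, var_bs = np.mean([internal_errors(m) for m in mocks], axis=0)
print(f"external {external:.4f}  jackknife {var_jk:.4f}  bootstrap {var_bs:.4f}")
```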
We examine the impact of fiber assignment on clustering measurements from fiber-fed spectroscopic galaxy surveys. We identify new effects which were absent in previous, relatively shallow galaxy surveys such as the Baryon Oscillation Spectroscopic Survey. Specifically, we consider deep surveys covering a wide redshift range from z=0.6 to z=2.4, as in the Subaru Prime Focus Spectrograph survey. Such surveys will have more target galaxies than can be assigned fibers. This leads to two effects. First, it eliminates fluctuations with wavelengths longer than the size of the field of view, as the number of observed galaxies per field is nearly fixed to the number of available fibers. We find that we can recover the long-wavelength fluctuations by weighting the galaxies in each field by the number of target galaxies. Second, it leads to the preferential selection of galaxies in under-dense regions. We mitigate this effect by weighting galaxies by the so-called individual inverse probability. Correcting for these two effects, we recover the underlying correlation function at better than 1 percent accuracy on scales greater than 10 Mpc/h.
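
A minimal sketch of the individual-inverse-probability idea: rerun the fiber assignment many times to estimate each target's selection probability, then weight the observed galaxies by 1/p. The priority-free random assignment and the target/fiber numbers are illustrative assumptions, not the survey's actual targeting algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
ntarget, nfiber, nrun = 500, 300, 2000   # targets, fibers per field, reruns

# Estimate each target's selection probability by rerunning the assignment.
# Here the 'assignment' is a priority-free random draw, purely illustrative.
hits = np.zeros(ntarget)
for _ in range(nrun):
    hits[rng.choice(ntarget, size=nfiber, replace=False)] += 1
p = hits / nrun                          # per-target selection probability

# One realised observation: weight each observed galaxy by 1/p
obs = rng.choice(ntarget, size=nfiber, replace=False)
weights = 1.0 / p[obs]
print("true number of targets:", ntarget)
print("IIP-weighted estimate :", round(weights.sum(), 1))
```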
Galaxy clusters are a recent cosmological probe. The precision and accuracy of the cosmological parameters inferred from these objects are affected by the knowledge of cluster physics, which enters the analysis through the mass-observable scaling relations, and by the theoretical description of their mass and redshift distribution, modelled by the mass function. In this work, we forecast the impact of different modelling choices for these ingredients for clusters detected by future optical and near-IR surveys. We consider the standard cosmological scenario and the case with a time-dependent equation of state for dark energy. We analyse the effect of increasing accuracy in the scaling relation calibration, finding improved constraints on the cosmological parameters. This higher accuracy exposes the impact of the mass function evaluation, which is a subdominant source of systematics for current data. We compare two different evaluations of the mass function. In both cosmological scenarios, the use of different mass functions leads to biases in the parameter constraints. For the $\Lambda$CDM model, we find a $1.6\,\sigma$ shift in the $(\Omega_m,\sigma_8)$ parameter plane and a discrepancy of $\sim 7\,\sigma$ for the redshift evolution of the scatter of the scaling relations. For the scenario with a time-evolving dark energy equation of state, the assumption of different mass functions results in a $\sim 8\,\sigma$ tension in the $w_0$ parameter. These results demonstrate the impact of the interplay between the redshift evolution of the mass function and that of the scaling relations in the cosmological analysis of galaxy clusters, and the necessity of modelling it precisely.
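
To make the dependence of cluster counts on the mass function concrete, the toy sketch below integrates a Schechter-like dn/dM above a redshift-dependent mass limit standing in for the mass-observable selection. Both functional forms and all numbers are placeholders, not the calibrated mass functions compared in the text.

```python
import numpy as np

def dndM(M, A=1e-5, alpha=-1.9, Mstar=2e14):
    """Toy Schechter-like mass function [per Mpc^3 per Msun]."""
    return A * (M / Mstar) ** alpha * np.exp(-M / Mstar) / Mstar

def mass_limit(z, M0=1e14, beta=0.5):
    """Toy mass-observable selection: the survey mass limit grows with z."""
    return M0 * (1.0 + z) ** beta

volume_per_z = 1e8                       # toy comoving volume per unit z [Mpc^3]
for z in (0.2, 0.5, 1.0):
    lnM = np.linspace(np.log(mass_limit(z)), np.log(1e16), 400)
    M = np.exp(lnM)
    n = np.trapz(dndM(M) * M, lnM)       # integrate dn/dM dM in log mass
    print(f"z = {z}: expected counts ~ {n * volume_per_z:.0f}")
```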
We provide in-depth MCMC comparisons of two different models for the halo redshift-space power spectrum, namely a variant of the commonly applied Taruya-Nishimichi-Saito (TNS) model and a model inspired by the effective field theory of large-scale structure (EFTofLSS). Using many simulation realisations and Stage IV survey-like specifications for the covariance matrix, we check each model's range of validity by testing for bias in the recovery of the fiducial growth rate of structure formation. The robustness of the determined range of validity is then tested by performing additional MCMC analyses using higher-order multipoles, a larger survey volume and a more highly biased tracer catalogue. We find that under all tests, the TNS model's range of validity remains robust and is found to be much higher than previous estimates. The EFTofLSS model fails to capture the spectra for highly biased tracers, as well as becoming biased at higher wavenumbers when considering a very large survey volume. Further, we find that the marginalised constraints on $f$ are stronger for all analyses when using the TNS model.
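
As a highly simplified stand-in for redshift-space power spectrum modelling of this kind, the sketch below projects a Kaiser term with a Lorentzian Fingers-of-God damping onto Legendre multipoles. The real TNS model adds perturbative A and B correction terms, and the EFTofLSS model its counterterms; the toy P(k), bias, growth rate and velocity dispersion here are assumptions for illustration only.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def p_lin(k):
    """Toy linear power spectrum (placeholder shape, arbitrary units)."""
    return 2e4 * k / (1.0 + (k / 0.02) ** 2) ** 1.2

def p_s(k, mu, b=2.0, f=0.5, sigv=4.0):
    """Kaiser term damped by a Lorentzian Fingers-of-God factor."""
    kaiser = (b + f * mu ** 2) ** 2 * p_lin(k)
    return kaiser / (1.0 + 0.5 * (k * mu * sigv) ** 2)

def multipole(k, ell, nmu=201):
    """Project P(k, mu) onto the Legendre multipole of order ell."""
    mu = np.linspace(-1.0, 1.0, nmu)
    coeffs = np.zeros(ell + 1)
    coeffs[ell] = 1.0                    # select L_ell in the Legendre basis
    L = legval(mu, coeffs)
    return (2 * ell + 1) / 2.0 * np.trapz(p_s(k[:, None], mu) * L, mu, axis=1)

k = np.logspace(-2, 0, 20)
for ell in (0, 2, 4):
    print(f"P_{ell}(k) at the first three k:", multipole(k, ell)[:3])
```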
We study the importance of gravitational lensing in the modelling of the number counts of galaxies. We confirm previous results for photometric surveys, showing that lensing cannot be neglected in a survey like LSST, since doing so would induce a significant shift of cosmological parameters. For a spectroscopic survey like SKA2, we find that neglecting lensing in the monopole, quadrupole and hexadecapole of the correlation function also induces an important shift of parameters. For $\Lambda$CDM parameters, the shift is moderate, of the order of $0.6\sigma$ or less. However, for a model-independent analysis that measures the growth rate of structure in each redshift bin, neglecting lensing introduces a shift of up to $2.3\sigma$ at high redshift. Since the growth rate is directly used to test the theory of gravity, such a strong shift would wrongly be interpreted as a breakdown of General Relativity. This shows the importance of including lensing in the analysis of future surveys. On the other hand, for a survey like DESI, we find that lensing is not important, mainly due to the value of the magnification bias parameter of DESI, $s(z)$, which strongly reduces the lensing contribution at high redshift. We also propose a way of improving the analysis of spectroscopic surveys by including the cross-correlations between different redshift bins (which are usually neglected in spectroscopic analyses), either from the spectroscopic survey itself or from a different photometric sample. We show that including the cross-correlations in the SKA2 analysis does not improve the constraints. On the other hand, replacing the cross-correlations from SKA2 with cross-correlations measured with LSST improves the constraints by 10 to 20%. Interestingly, for $\Lambda$CDM parameters, we find that LSST and SKA2 are highly complementary, since they are affected differently by degeneracies between parameters.
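
The strength of the lensing term in the observed number counts is set by the magnification bias slope $s(z)$: the convergence enters with a prefactor of $(5s - 2)$, which vanishes at $s = 0.4$ and changes sign around it, which is why a survey's $s(z)$ can strongly suppress the lensing contribution. The minimal sketch below simply tabulates this prefactor; the s values are illustrative, not measured survey slopes.

```python
def lensing_prefactor(s):
    """Coefficient multiplying the lensing convergence kappa in the counts."""
    return 5.0 * s - 2.0

for s in (0.0, 0.2, 0.4, 0.6, 1.0):
    print(f"s = {s:.1f}: (5s - 2) = {lensing_prefactor(s):+.1f}")
```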
