
Statistical Analysis of Galaxy Surveys - I. Robust error estimation for 2-point clustering statistics

Posted by: Peder Norberg
Publication date: 2008
Research field: Physics
Paper language: English
Author: Peder Norberg





We present a test of different error estimators for 2-point clustering statistics, appropriate for present and future large galaxy redshift surveys. Using an ensemble of very large LambdaCDM dark matter N-body simulations, we compare internal error estimators (jackknife and bootstrap) to external ones (Monte-Carlo realizations). For 3-dimensional clustering statistics, we find that none of the internal error methods investigated is able to reproduce, either accurately or robustly, the errors of the external estimators on scales of 1 to 25 Mpc/h. The standard bootstrap overestimates the variance of xi(s) by ~40% on all scales probed, but recovers, in a robust fashion, the principal eigenvectors of the underlying covariance matrix. The jackknife returns the correct variance on large scales, but significantly overestimates it on smaller scales. This scale dependence in the jackknife affects the recovered eigenvectors, which tend to disagree with the external estimates on small scales. Our results have important implications for the use of galaxy clustering in placing constraints on cosmological parameters. For example, in a 2-parameter fit to the projected correlation function, we find that the standard bootstrap systematically overestimates the 95% confidence interval, while the jackknife method remains biased, but to a lesser extent. The scatter we find between realizations, for Gaussian statistics, implies that a 2-sigma confidence interval, as inferred from an internal estimator, could correspond in practice to anything from 1-sigma to 3-sigma. Finally, by oversampling sub-volumes it is possible to obtain bootstrap variances and confidence intervals that agree with external error estimates, although it is not clear whether this prescription will work in the general case.
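The two internal estimators compared above can be written down compactly. Below is a minimal sketch, assuming the clustering statistic has already been measured once per delete-one jackknife copy (for the jackknife) and once per sub-volume (for the bootstrap); the function names and the oversample parameter are illustrative, not the paper's code:

```python
import numpy as np

def jackknife_covariance(xi_del1):
    """Delete-one jackknife covariance.

    xi_del1 : (N_sub, N_bins) array, the statistic re-measured with each
    of the N_sub sub-volumes omitted in turn.
    """
    n = xi_del1.shape[0]
    diff = xi_del1 - xi_del1.mean(axis=0)
    # The (n - 1)/n prefactor compensates for the strong overlap
    # between the delete-one copies of the survey.
    return (n - 1) / n * diff.T @ diff

def bootstrap_covariance(xi_sub, n_boot=1000, oversample=1, seed=None):
    """Bootstrap covariance from resampling sub-volumes with replacement.

    xi_sub : (N_sub, N_bins) per-sub-volume measurements. Setting
    oversample > 1 draws oversample * N_sub sub-volumes per realization,
    mimicking the oversampling prescription mentioned in the abstract.
    Averaging per-sub-volume estimates is a simplification: in practice
    the pair counts of the drawn regions are combined before forming xi.
    """
    rng = np.random.default_rng(seed)
    n = xi_sub.shape[0]
    draws = rng.integers(0, n, size=(n_boot, oversample * n))
    xi_boot = xi_sub[draws].mean(axis=1)   # one statistic per realization
    return np.cov(xi_boot, rowvar=False)
```

The paper's headline result is then a statement about these two matrices: the bootstrap one has its diagonal inflated by ~40% relative to the external (Monte-Carlo) estimate, while the jackknife one is inflated only at small separations.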




Read also

Peder Norberg (IfA), 2011
For galaxy clustering to provide robust constraints on cosmological parameters and galaxy formation models, it is essential to make reliable estimates of the errors on clustering measurements. We present a new technique, based on spatial Jackknife (JK) resampling, which provides an objective way to estimate errors on clustering statistics. Our approach allows us to set the appropriate size for the Jackknife subsamples. The method also provides a means to assess the impact of individual regions on the measured clustering, and thereby to establish whether or not a given galaxy catalogue is dominated by one or several large structures, preventing it from being considered a fair sample. We apply this methodology to the two- and three-point correlation functions measured from a volume-limited sample of M* galaxies drawn from Data Release Seven of the Sloan Digital Sky Survey (SDSS). The frequency of jackknife subsample outliers in the data is shown to be consistent with that seen in large N-body simulations of clustering in the cosmological constant plus cold dark matter cosmology. We also present a comparison of the three-point correlation function in SDSS and 2dFGRS using this approach, and find consistent measurements between the two samples.
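A minimal sketch of the region-influence diagnostic described above, assuming the delete-one jackknife measurements are already in hand; the threshold criterion is illustrative, not the paper's actual outlier statistic:

```python
import numpy as np

def flag_influential_regions(xi_del1, nsigma=3.0):
    """Flag jackknife regions whose removal shifts the statistic by more
    than nsigma times the jackknife scatter in any separation bin.

    xi_del1 : (N_sub, N_bins) delete-one measurements.
    Returns the indices of the flagged sub-volumes; several flags hint
    that the catalogue is dominated by a few large structures.
    """
    mean = xi_del1.mean(axis=0)
    scatter = xi_del1.std(axis=0, ddof=1)
    shift = np.abs(xi_del1 - mean) / scatter   # per-region, per-bin offset
    return np.where((shift > nsigma).any(axis=1))[0]
```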
E. Gaztanaga, 2005
We present new results for the 3-point correlation function, zeta, measured as a function of scale, luminosity and colour from the final version of the Two-degree Field Galaxy Redshift Survey (2dFGRS). The reduced three-point correlation function, Q_3, is estimated for different triangle shapes and sizes, employing a full covariance analysis. The form of Q_3 is consistent with the expectations for the Lambda-cold dark matter model, confirming that the primary influence shaping the distribution of galaxies is gravitational instability acting on Gaussian primordial fluctuations. However, we find a clear offset in amplitude between Q_3 for galaxies and the predictions for the dark matter. We are able to rule out the scenario in which galaxies are unbiased tracers of the mass at the 9-sigma level. On weakly non-linear scales, we can interpret our results in terms of galaxy bias parameters. We find a linear bias term that is consistent with unity, b_1 = 0.93^{+0.10}_{-0.08}, and a quadratic bias c_2 = b_2/b_1 = -0.34^{+0.11}_{-0.08}. This is the first significant detection of a non-zero quadratic bias, indicating a small but important non-gravitational contribution to the three-point function. Our estimate of the linear bias from the three-point function is independent of the normalisation of the underlying density fluctuations, so we can combine this with the measurement of the power spectrum of 2dFGRS galaxies to constrain the amplitude of matter fluctuations. We find that the rms linear theory variance in spheres of radius 8 Mpc/h is sigma_8 = 0.88^{+0.12}_{-0.10}, providing an independent confirmation of values derived from other techniques. On non-linear scales, where xi>1, we find that Q_3 has a strong dependence on scale, colour and luminosity.
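For reference, the standard definitions behind these quantities (textbook relations, not taken from the abstract itself): the reduced three-point function normalizes zeta by products of two-point functions, and a local quadratic bias $\delta_g = b_1\delta + (b_2/2)\,\delta^2$ relates the galaxy and mass amplitudes on weakly non-linear scales:

```latex
Q_3(r_{12}, r_{23}, r_{31}) =
  \frac{\zeta(r_{12}, r_{23}, r_{31})}
       {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31})
        + \xi(r_{31})\,\xi(r_{12})},
\qquad
Q_3^{\rm gal} = \frac{1}{b_1}\left(Q_3^{\rm mass} + c_2\right),
\quad c_2 \equiv \frac{b_2}{b_1}.
```

This is why the measured Q_3 constrains b_1 independently of the amplitude of the underlying density fluctuations.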
We perform an ensemble of $N$-body simulations with $2048^3$ particles for 101 flat $w$CDM cosmological models sampled based on a maximin-distance Sliced Latin Hypercube Design. Using the halo catalogs extracted at multiple redshifts in the range $z=[0,1.48]$, we develop Dark Emulator, which enables fast and accurate computation of the halo mass function, halo-matter cross-correlation, and halo auto-correlation as a function of halo mass, redshift, separation and cosmological model, based on Principal Component Analysis and Gaussian Process Regression for the large-dimensional input and output data vectors. We assess the performance of the emulator using a validation set of $N$-body simulations that are not used in training the emulator. We show that, for typical halos hosting CMASS galaxies in the Sloan Digital Sky Survey, the emulator predicts the halo-matter cross-correlation, relevant for galaxy-galaxy weak lensing, with an accuracy better than $2\%$, and the halo auto-correlation, relevant for galaxy clustering, with an accuracy better than $4\%$. We give several demonstrations of the emulator. It can be used to study properties of halo mass density profiles such as the mass-concentration relation and splashback radius for different cosmologies. The emulator outputs can be combined with an analytical prescription for the halo-galaxy connection, such as the halo occupation distribution at the equation level, instead of using mock catalogs, to make accurate predictions of galaxy clustering statistics such as galaxy-galaxy weak lensing and the projected correlation function for any model within the $w$CDM cosmologies, in a few CPU seconds.
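A minimal sketch of the emulation strategy just described (PCA compression plus one Gaussian process per retained component), written with scikit-learn for illustration; the released Dark Emulator's interface, kernels and training details differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def train_emulator(X, Y, n_components=5):
    """X : (n_models, n_params) cosmological parameters of the training
    simulations; Y : (n_models, n_bins) data vectors, e.g. the log of
    the halo-matter cross-correlation in radial bins."""
    pca = PCA(n_components=n_components)
    coeffs = pca.fit_transform(Y)          # PC coefficients per model
    kernel = ConstantKernel() * RBF(length_scale=np.ones(X.shape[1]))
    # One independent GP maps cosmology to each PC coefficient.
    gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
           .fit(X, coeffs[:, i]) for i in range(n_components)]
    return pca, gps

def emulate(pca, gps, x_new):
    """Predict the data vector at a new point in parameter space."""
    c = np.array([gp.predict(np.atleast_2d(x_new))[0] for gp in gps])
    return pca.inverse_transform(c[None, :])[0]
```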
The colour-magnitude diagrams of resolved single stellar populations, such as open and globular clusters, have provided the best natural laboratories to test stellar evolution theory. Whilst a variety of techniques have been used to infer the basic properties of these simple populations, systematic uncertainties arise from the purely geometrical degeneracy produced by the similar shape of isochrones of different ages and metallicities. Here we present an objective and robust statistical technique which lifts this degeneracy to a great extent through the use of a key observable: the number of stars along the isochrone. Through extensive Monte Carlo simulations we show that, for instance, we can infer the four main parameters (age, metallicity, distance and reddening) in an objective way, along with robust confidence intervals and their full covariance matrix. We show that systematic uncertainties due to field contamination, unresolved binaries, and the initial or present-day stellar mass function are either negligible or well under control. This technique provides, for the first time, a proper way to infer with unprecedented accuracy the fundamental properties of simple stellar populations, in an easy-to-implement algorithm.
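Since the key observable is a number count, the fit lends itself to a Poisson likelihood. The sketch below illustrates that idea only; the paper's actual statistic and its Monte-Carlo-calibrated confidence intervals are not reproduced, and the binning and grid search are hypothetical:

```python
import numpy as np

def poisson_log_likelihood(obs_counts, model_counts):
    """Log-likelihood of observed star counts in bins along the isochrone,
    given the counts a model population would place in the same bins.
    Using the *number* of stars, not just the isochrone shape, is what
    lifts the age-metallicity-distance-reddening degeneracy."""
    m = np.clip(model_counts, 1e-12, None)      # guard against log(0)
    return np.sum(obs_counts * np.log(m) - m)   # constant ln(n!) dropped

def best_fit(obs_counts, model_grid, params):
    """Brute-force search: model_grid is (n_models, n_bins) predicted
    counts; params is (n_models, 4), one (age, metallicity, distance,
    reddening) set per model."""
    lnl = np.array([poisson_log_likelihood(obs_counts, m)
                    for m in model_grid])
    return params[np.argmax(lnl)], lnl
```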
When analyzing galaxy clustering in multi-band imaging surveys, there is a trade-off between selecting the largest galaxy samples (to minimize the shot noise) and selecting samples with the best photometric redshift (photo-z) precision, which generally include only a small subset of galaxies. In this paper, we systematically explore this trade-off. Our analysis is targeted towards the third year data of the Dark Energy Survey (DES), but our methods hold generally for other data sets. Using a simple Gaussian model for the redshift uncertainties, we carry out a Fisher matrix forecast for cosmological constraints from angular clustering in the redshift range $z = 0.2-0.95$. We quantify the cosmological constraints using a Figure of Merit (FoM) that measures the combined constraints on $\Omega_m$ and $\sigma_8$ in the context of $\Lambda$CDM cosmology. We find that the trade-off between sample size and photo-z precision is sensitive to 1) whether cross-correlations between redshift bins are included or not, and 2) the ratio of the redshift bin width $\delta z$ and the photo-z precision $\sigma_z$. When cross-correlations are included and the redshift bin width is allowed to vary, the highest FoM is achieved when $\delta z \sim \sigma_z$. We find that for the typical case of $5-10$ redshift bins, optimal results are reached when we use larger, less precise photo-z samples, provided that we include cross-correlations. For samples with higher $\sigma_z$, the overlap between redshift bins is larger, leading to higher cross-correlation amplitudes. This leads to the self-calibration of the photo-z parameters and therefore tighter cosmological constraints. These results can be used to help guide galaxy sample selection for clustering analysis in ongoing and future photometric surveys.
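The FoM mentioned above is conventionally the inverse area of the marginalized confidence ellipse. A minimal sketch, assuming a Fisher matrix has already been assembled from the angular clustering data vector (the function and parameter names are illustrative):

```python
import numpy as np

def figure_of_merit(fisher, names, pair=("Omega_m", "sigma_8")):
    """Invert the full Fisher matrix (which marginalizes over all other
    parameters, including the photo-z nuisance parameters), extract the
    2x2 block for the chosen pair, and return 1/sqrt(det), which is
    proportional to the inverse area of the confidence ellipse."""
    cov = np.linalg.inv(fisher)                # marginalized covariance
    i, j = names.index(pair[0]), names.index(pair[1])
    block = cov[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(block))
```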