
Mitigating the impact of fiber assignment on clustering measurements from deep galaxy redshift surveys

Posted by: Tomomi Sunayama
Publication date: 2019
Research field: Physics
Paper language: English

We examine the impact of fiber assignment on clustering measurements from fiber-fed spectroscopic galaxy surveys. We identify new effects that were absent in previous, relatively shallow galaxy surveys such as the Baryon Oscillation Spectroscopic Survey. Specifically, we consider deep surveys covering a wide redshift range from z=0.6 to z=2.4, as in the Subaru Prime Focus Spectrograph survey. Such surveys will have more target galaxies than available fibers. This leads to two effects. First, fluctuations with wavelengths longer than the size of the field of view are suppressed, because the number of observed galaxies per field is nearly fixed to the number of available fibers. We find that we can recover these long-wavelength fluctuations by weighting galaxies in each field by the number of target galaxies. Second, galaxies in under-dense regions are preferentially selected. We mitigate this effect by weighting galaxies by the so-called individual inverse probability. Correcting for these two effects, we recover the underlying correlation function to better than 1 percent accuracy on scales greater than 10 Mpc/h.
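The two corrections lend themselves to a short numerical sketch. The Python below is a toy illustration, not the survey pipeline: `assign_fibers` stands in for the actual fiber-assignment algorithm, and all function names are hypothetical. It estimates each target's selection probability by rerunning the assignment many times, weights observed galaxies by the inverse of that probability (the individual inverse probability weight), and upweights each field by its ratio of targets to fibers to restore the long-wavelength modes.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_fibers(fields, n_fibers, rng):
    """Toy fiber assignment: in each field (a list of target IDs),
    place fibers on at most `n_fibers` randomly chosen targets.
    A stand-in for the real survey assignment algorithm."""
    return [rng.choice(f, size=min(n_fibers, len(f)), replace=False)
            for f in fields]

def fiber_assignment_weights(fields, n_targets, n_fibers, n_real=1000):
    """Individual inverse probability (IIP) weights: rerun the
    assignment `n_real` times and weight target i by 1/p_i, where
    p_i is the fraction of realizations in which it got a fiber.
    Targets that are never observable keep weight zero."""
    hits = np.zeros(n_targets)
    for _ in range(n_real):
        for selected in assign_fibers(fields, n_fibers, rng):
            hits[selected] += 1
    p = hits / n_real
    iip = np.divide(1.0, p, out=np.zeros(n_targets), where=p > 0)
    # Per-field weight restoring fluctuations longer than the field
    # of view: upweight each field by targets per fiber placed.
    field_weight = [len(f) / min(n_fibers, len(f)) for f in fields]
    return iip, field_weight
```

In a clustering estimator, each observed galaxy would then carry the product of its IIP weight and the weight of the field it was observed in.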




Read also

We present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczynski (AP) test. Our physical model of the non-linearly evolved density field, as probed by galaxy surveys, employs Lagrangian perturbation theory (LPT) to connect Gaussian initial conditions to the final density field, followed by a coordinate transformation to obtain the redshift-space representation for comparison with data. We generate realizations of primordial and present-day matter fluctuations given a set of observations. This hierarchical approach encodes a novel AP test, extracting several orders of magnitude more information from the cosmological expansion than classical approaches, to infer cosmological parameters and jointly reconstruct the underlying 3D dark matter density field. The novelty of this AP test lies in constraining the comoving-redshift transformation to infer the cosmology that yields isotropic correlations of the galaxy density field, with the underlying assumption being purely the cosmological principle. Such an AP test does not rely explicitly on modelling the full statistics of the field. We verify in depth via simulations that this renders our test robust to model misspecification. This leads to another crucial advantage, namely that the cosmological parameters exhibit extremely weak dependence on the currently unresolved phenomenon of galaxy bias, thereby circumventing a potentially key limitation. This is consequently among the first methods to extract a large fraction of information from statistics other than direct density-contrast correlations, without being sensitive to the amplitude of density fluctuations. We perform several statistical efficiency and consistency tests on a mock galaxy catalogue, using the SDSS-III survey as a template.
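At the heart of this approach is the comoving-redshift transformation itself. As a rough sketch (assuming flat LCDM; the LPT forward model and Bayesian sampling machinery are not reproduced, and the function names are illustrative), mapping angular positions and redshifts to comoving Cartesian coordinates under a trial cosmology looks like:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, omega_m):
    """Line-of-sight comoving distance in Mpc/h for flat LCDM:
    D_C(z) = (c/H0) * int_0^z dz'/E(z'), with H0 = 100h km/s/Mpc."""
    E = lambda zp: np.sqrt(omega_m * (1 + zp) ** 3 + 1 - omega_m)
    return (C_KM_S / 100.0) * quad(lambda zp: 1.0 / E(zp), 0, z)[0]

def to_cartesian(ra_deg, dec_deg, z, omega_m):
    """Map (RA, Dec, z) to comoving Cartesian coordinates under a
    trial cosmology. A wrong omega_m stretches or squashes the
    line-of-sight direction relative to the transverse ones, so the
    AP test varies the trial parameters until the galaxy
    correlations measured in these coordinates come out isotropic."""
    d = np.array([comoving_distance(zi, omega_m) for zi in np.atleast_1d(z)])
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack((d * np.cos(dec) * np.cos(ra),
                            d * np.cos(dec) * np.sin(ra),
                            d * np.sin(dec)))
```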
When analyzing galaxy clustering in multi-band imaging surveys, there is a trade-off between selecting the largest galaxy samples (to minimize the shot noise) and selecting samples with the best photometric redshift (photo-z) precision, which generally include only a small subset of galaxies. In this paper, we systematically explore this trade-off. Our analysis is targeted towards the third year data of the Dark Energy Survey (DES), but our methods hold generally for other data sets. Using a simple Gaussian model for the redshift uncertainties, we carry out a Fisher matrix forecast for cosmological constraints from angular clustering in the redshift range $z = 0.2-0.95$. We quantify the cosmological constraints using a Figure of Merit (FoM) that measures the combined constraints on $\Omega_m$ and $\sigma_8$ in the context of $\Lambda$CDM cosmology. We find that the trade-off between sample size and photo-z precision is sensitive to 1) whether cross-correlations between redshift bins are included or not, and 2) the ratio of the redshift bin width $\delta z$ and the photo-z precision $\sigma_z$. When cross-correlations are included and the redshift bin width is allowed to vary, the highest FoM is achieved when $\delta z \sim \sigma_z$. We find that for the typical case of $5-10$ redshift bins, optimal results are reached when we use larger, less precise photo-z samples, provided that we include cross-correlations. For samples with higher $\sigma_z$, the overlap between redshift bins is larger, leading to higher cross-correlation amplitudes. This leads to the self-calibration of the photo-z parameters and therefore tighter cosmological constraints. These results can be used to help guide galaxy sample selection for clustering analysis in ongoing and future photometric surveys.
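For concreteness, here is a minimal sketch of the forecasting machinery behind such numbers, assuming a Gaussian likelihood with parameter-independent covariance; `model` is a caller-supplied prediction of the binned angular clustering data vector, and the FoM follows the usual inverse-ellipse-area definition:

```python
import numpy as np

def fisher_matrix(model, theta, invcov, eps=1e-4):
    """Gaussian Fisher matrix F_ab = (dmu/dtheta_a)^T C^-1 (dmu/dtheta_b),
    with derivatives of the model mean taken by central differences
    around the fiducial parameter vector `theta`."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for a in range(theta.size):
        h = eps * max(abs(theta[a]), 1.0)
        step = np.zeros_like(theta)
        step[a] = h
        derivs.append((model(theta + step) - model(theta - step)) / (2 * h))
    D = np.array(derivs)            # shape (n_params, n_data)
    return D @ invcov @ D.T

def figure_of_merit(fisher, i, j):
    """FoM for parameters i and j (here Omega_m and sigma_8):
    inverting the full Fisher matrix marginalizes over all other
    parameters (photo-z nuisances, bias, ...); the FoM is the
    inverse square root of the determinant of the remaining 2x2
    covariance block, i.e. the inverse ellipse area up to a
    constant factor."""
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_([i, j], [i, j])]
    return 1.0 / np.sqrt(np.linalg.det(block))
```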
Galaxy clusters are a recent cosmological probe. The precision and accuracy of the cosmological parameters inferred from these objects are affected by the knowledge of cluster physics, entering the analysis through the mass-observable scaling relations, and by the theoretical description of their mass and redshift distribution, modelled by the mass function. In this work, we forecast the impact of different modelling choices for these ingredients for clusters detected by future optical and near-IR surveys. We consider the standard cosmological scenario and the case with a time-dependent equation of state for dark energy. We analyse the effect of increasing accuracy in the scaling relation calibration, finding improved constraints on the cosmological parameters. This higher accuracy exposes the impact of the mass function evaluation, which is a subdominant source of systematics for current data. We compare two different evaluations of the mass function. In both cosmological scenarios, the use of different mass functions leads to biases in the parameter constraints. For the $\Lambda$CDM model, we find a $1.6\,\sigma$ shift in the $(\Omega_m, \sigma_8)$ parameter plane and a discrepancy of $\sim 7\,\sigma$ for the redshift evolution of the scatter of the scaling relations. For the scenario with a time-evolving dark energy equation of state, the assumption of different mass functions results in a $\sim 8\,\sigma$ tension in the $w_0$ parameter. These results show the impact of the interplay between the redshift evolution of the mass function and that of the scaling relations in the cosmological analysis of galaxy clusters, and the need to model it precisely.
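The place where the mass function enters the analysis is the predicted number counts. A schematic sketch (the mass function and volume element are caller-supplied here; in a real analysis the observable-mass scaling relation and its scatter would be convolved in as well):

```python
import numpy as np
from scipy.integrate import quad

def expected_counts(z_bins, mass_function, dV_dz_dOmega,
                    lnM_min, lnM_max, sky_area_sr):
    """Predicted cluster counts per redshift bin:
    N_i = Omega_sky * int_{z bin} dz (dV/dz/dOmega)
                    * int dlnM (dn/dlnM).
    `mass_function(lnM, z)` is the differential mass function
    (e.g. Tinker et al. 2008 vs. an alternative calibration) and
    `dV_dz_dOmega(z)` the comoving volume element; swapping the
    former is exactly the comparison whose parameter shifts are
    forecast above."""
    counts = []
    for z_lo, z_hi in z_bins:
        n_of_z = lambda z: dV_dz_dOmega(z) * quad(
            lambda lnM: mass_function(lnM, z), lnM_min, lnM_max)[0]
        counts.append(sky_area_sr * quad(n_of_z, z_lo, z_hi)[0])
    return np.array(counts)
```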
Peder Norberg (IfA), 2011
For galaxy clustering to provide robust constraints on cosmological parameters and galaxy formation models, it is essential to make reliable estimates of the errors on clustering measurements. We present a new technique, based on spatial jackknife (JK) resampling, which provides an objective way to estimate errors on clustering statistics. Our approach allows us to set the appropriate size for the jackknife subsamples. The method also provides a means to assess the impact of individual regions on the measured clustering, and thereby to establish whether or not a given galaxy catalogue is dominated by one or several large structures, preventing it from being considered a fair sample. We apply this methodology to the two- and three-point correlation functions measured from a volume-limited sample of M* galaxies drawn from data release seven of the Sloan Digital Sky Survey (SDSS). The frequency of jackknife subsample outliers in the data is shown to be consistent with that seen in large N-body simulations of clustering in the cosmological constant plus cold dark matter cosmology. We also present a comparison of the three-point correlation function in SDSS and 2dFGRS using this approach and find consistent measurements between the two samples.
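The jackknife covariance itself is compact enough to sketch. Below, each row of the input holds the statistic (e.g. the two-point correlation function in bins) re-measured with one spatial subregion deleted; the outlier check is a simplified stand-in for the paper's region-influence diagnostic, not its exact criterion:

```python
import numpy as np

def jackknife_covariance(delete_one_estimates):
    """Spatial jackknife covariance from delete-one measurements
    x_k (shape N_sub x N_bins):
    C = (N-1)/N * sum_k (x_k - xbar)(x_k - xbar)^T."""
    x = np.asarray(delete_one_estimates, dtype=float)
    n = x.shape[0]
    d = x - x.mean(axis=0)
    return (n - 1) / n * d.T @ d

def flag_influential_regions(delete_one_estimates, n_sigma=3.0):
    """Flag subregions whose removal shifts the statistic by more
    than `n_sigma` times the delete-one scatter in any bin -- a
    quick test of whether a few large structures dominate the
    catalogue, preventing it from being a fair sample."""
    x = np.asarray(delete_one_estimates, dtype=float)
    shift = np.abs(x - x.mean(axis=0))
    scatter = x.std(axis=0, ddof=1)
    return np.any(shift > n_sigma * scatter, axis=1)
```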
We present a deep machine learning (ML)-based technique for accurately determining $\sigma_8$ and $\Omega_m$ from mock 3D galaxy surveys. The mock surveys are built from the AbacusCosmos suite of $N$-body simulations, which comprises 40 cosmological volume simulations spanning a range of cosmological models, and we account for uncertainties in galaxy formation scenarios through the use of generalized halo occupation distributions (HODs). We explore a trio of ML models: a 3D convolutional neural network (CNN), a power-spectrum-based fully connected network, and a hybrid approach that merges the two to combine physically motivated summary statistics with flexible CNNs. We describe best practices for training a deep model on a suite of matched-phase simulations, and we test our model on a completely independent sample that uses previously unseen initial conditions, cosmological parameters, and HOD parameters. Despite the fact that the mock observations are quite small ($\sim 0.07\,h^{-3}\,\mathrm{Gpc}^3$) and the training data span a large parameter space (6 cosmological and 6 HOD parameters), the CNN and hybrid CNN can constrain $\sigma_8$ and $\Omega_m$ to $\sim 3\%$ and $\sim 4\%$, respectively.
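As an illustration of the hybrid idea (a toy sketch, not the paper's architecture; layer sizes are arbitrary and PyTorch is assumed), a 3D CNN branch on the gridded galaxy field can be concatenated with a dense branch on the measured power spectrum before regressing the two parameters:

```python
import torch
import torch.nn as nn

class HybridCosmoNet(nn.Module):
    """Toy hybrid regressor: a small 3D CNN on the density grid
    plus a dense branch on binned P(k), merged into one head that
    outputs (sigma_8, Omega_m)."""
    def __init__(self, n_pk_bins=20):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())  # -> (batch, 32)
        self.pk = nn.Sequential(
            nn.Linear(n_pk_bins, 32), nn.ReLU())    # -> (batch, 32)
        self.head = nn.Linear(64, 2)                # -> 2 parameters

    def forward(self, field, pk):
        # field: (batch, 1, N, N, N) galaxy density grid
        # pk:    (batch, n_pk_bins) measured power spectrum
        return self.head(torch.cat([self.cnn(field), self.pk(pk)], dim=1))
```

The power-spectrum branch injects a physically motivated summary statistic, while the CNN branch is free to pick up non-Gaussian information that the power spectrum misses.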