
Optimizing galaxy samples for clustering measurements in photometric surveys

Published by: Dimitrios Tanoglidis
Publication date: 2019
Research field: Physics
Language: English





When analyzing galaxy clustering in multi-band imaging surveys, there is a trade-off between selecting the largest galaxy samples (to minimize shot noise) and selecting samples with the best photometric redshift (photo-z) precision, which generally include only a small subset of galaxies. In this paper, we systematically explore this trade-off. Our analysis is targeted towards the third year data of the Dark Energy Survey (DES), but our methods hold generally for other data sets. Using a simple Gaussian model for the redshift uncertainties, we carry out a Fisher matrix forecast for cosmological constraints from angular clustering in the redshift range $z = 0.2-0.95$. We quantify the cosmological constraints using a Figure of Merit (FoM) that measures the combined constraints on $\Omega_m$ and $\sigma_8$ in the context of $\Lambda$CDM cosmology. We find that the trade-off between sample size and photo-z precision is sensitive to 1) whether cross-correlations between redshift bins are included or not, and 2) the ratio of the redshift bin width $\delta z$ and the photo-z precision $\sigma_z$. When cross-correlations are included and the redshift bin width is allowed to vary, the highest FoM is achieved when $\delta z \sim \sigma_z$. We find that for the typical case of $5-10$ redshift bins, optimal results are reached when we use larger, less precise photo-z samples, provided that we include cross-correlations. For samples with higher $\sigma_{z}$, the overlap between redshift bins is larger, leading to higher cross-correlation amplitudes. This leads to the self-calibration of the photo-z parameters and therefore tighter cosmological constraints. These results can be used to help guide galaxy sample selection for clustering analysis in ongoing and future photometric surveys.
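
As a rough illustration of the forecasting machinery, here is a minimal sketch of how such a FoM is typically computed from a Fisher matrix: invert the full Fisher matrix to marginalize over nuisance parameters, take the $(\Omega_m, \sigma_8)$ block, and apply the common convention FoM $= 1/\sqrt{\det \mathrm{Cov}}$. The matrix entries and the single photo-z nuisance parameter below are placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical 3x3 Fisher matrix for (Omega_m, sigma_8, photo-z shift dz);
# the numbers are illustrative placeholders only.
F = np.array([
    [8.0e4, 2.0e4, 1.0e3],
    [2.0e4, 6.0e4, 5.0e2],
    [1.0e3, 5.0e2, 4.0e2],
])

cov = np.linalg.inv(F)      # inverting F marginalizes over all parameters
cov_2d = cov[:2, :2]        # (Omega_m, sigma_8) block
fom = 1.0 / np.sqrt(np.linalg.det(cov_2d))
print(f"FoM(Omega_m, sigma_8) = {fom:.1f}")
```

Tightening the constraint on the photo-z nuisance parameter (e.g. adding a prior to the corresponding diagonal Fisher entry) raises the FoM, which mimics in miniature the self-calibration effect described above.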




Read also

We study the importance of gravitational lensing in the modelling of the number counts of galaxies. We confirm previous results for photometric surveys, showing that lensing cannot be neglected in a survey like LSST, since neglecting it would induce a significant shift of cosmological parameters. For a spectroscopic survey like SKA2, we find that neglecting lensing in the monopole, quadrupole and hexadecapole of the correlation function also induces an important shift of parameters. For $\Lambda$CDM parameters, the shift is moderate, of the order of $0.6\sigma$ or less. However, for a model-independent analysis that measures the growth rate of structure in each redshift bin, neglecting lensing introduces a shift of up to $2.3\sigma$ at high redshift. Since the growth rate is directly used to test the theory of gravity, such a strong shift would wrongly be interpreted as the breakdown of General Relativity. This shows the importance of including lensing in the analysis of future surveys. On the other hand, for a survey like DESI, we find that lensing is not important, mainly due to the value of the magnification bias parameter of DESI, $s(z)$, which strongly reduces the lensing contribution at high redshift. We also propose a way of improving the analysis of spectroscopic surveys by including the cross-correlations between different redshift bins (which are neglected in spectroscopic surveys), taken either from the spectroscopic survey itself or from a different photometric sample. We show that including the cross-correlations in the SKA2 analysis does not improve the constraints. On the other hand, replacing the cross-correlations from SKA2 by cross-correlations measured with LSST improves the constraints by 10 to 20%. Interestingly, for $\Lambda$CDM parameters, we find that LSST and SKA2 are highly complementary, since they are affected differently by degeneracies between parameters.
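
The role of $s(z)$ mentioned above can be made concrete: in the standard weak-lensing limit, the magnification contribution to the observed number counts enters with a prefactor $(5s - 2)$, which vanishes at $s = 0.4$. A minimal sketch with illustrative $s$ values (not DESI's actual $s(z)$):

```python
import numpy as np

# Toy illustration: the lensing (magnification) term in galaxy number
# counts scales with (5s - 2), so it vanishes for s = 0.4. The s values
# below are illustrative, not measured magnification biases.
s = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
for si, pi in zip(s, 5.0 * s - 2.0):
    print(f"s = {si:.1f}: lensing prefactor (5s - 2) = {pi:+.1f}")
```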
The accuracy of photometric redshifts (photo-zs) particularly affects the results of analyses of galaxy clustering with photometrically selected galaxies (GCph) and weak lensing. In the next decade, space missions like Euclid will collect photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this paper, we explore how the tomographic redshift binning and the depth of ground-based observations will affect the cosmological constraints expected from Euclid. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine-learning photo-z algorithm. We use the Fisher matrix formalism and these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with equal width in redshift provide a higher Figure of Merit (FoM) than equipopulated bins, and that increasing the number of redshift bins from 10 to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase of the survey depth provides a higher FoM, but the addition of faint galaxies beyond the limit of the spectroscopic training data decreases the FoM due to spurious photo-zs. When combining both probes, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. We conclude that there is more information that can be extracted beyond the nominal 10 tomographic redshift bins of Euclid, and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
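
To make the binning comparison concrete, here is a minimal sketch contrasting equal-width and equipopulated tomographic bin edges for a mock redshift sample (the distribution and bin count are illustrative, not drawn from the Flagship simulation):

```python
import numpy as np

# Mock photo-z sample; the gamma-shaped distribution is illustrative only.
rng = np.random.default_rng(0)
z = rng.gamma(shape=3.0, scale=0.25, size=100_000)
z = z[(z > 0.2) & (z < 2.0)]
n_bins = 10

# Equal width in redshift:
edges_width = np.linspace(z.min(), z.max(), n_bins + 1)

# Equipopulated (equal number of galaxies per bin), via quantiles:
edges_pop = np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))

print("equal-width edges:   ", np.round(edges_width, 2))
print("equipopulated edges: ", np.round(edges_pop, 2))
```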
We examine the impact of fiber assignment on clustering measurements from fiber-fed spectroscopic galaxy surveys. We identify new effects which were absent in previous, relatively shallow galaxy surveys such as the Baryon Oscillation Spectroscopic Survey. Specifically, we consider deep surveys covering a wide redshift range from z=0.6 to z=2.4, as in the Subaru Prime Focus Spectrograph survey. Such surveys will have more target galaxies than we can place fibers on. This leads to two effects. First, it eliminates fluctuations with wavelengths longer than the size of the field of view, as the number of observed galaxies per field is nearly fixed to the number of available fibers. We find that we can recover the long-wavelength fluctuations by weighting galaxies in each field by the number of target galaxies. Second, it leads to the preferential selection of galaxies in under-dense regions. We mitigate this effect by weighting galaxies using the so-called individual inverse probability. Correcting these two effects, we recover the underlying correlation function to better than 1 percent accuracy on scales greater than 10 Mpc/h.
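
A minimal sketch of the individual inverse probability idea, with a toy random draw standing in for a real fiber-assignment code: rerun the assignment many times, estimate each target's selection probability $p_i$, and up-weight observed galaxies by $1/p_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_fibers, n_real = 500, 300, 2000   # illustrative numbers

# Estimate each target's selection probability from repeated toy
# "assignments" (random subsets here; a real code would be deterministic
# given its tiling and priorities).
selected = np.zeros(n_targets)
for _ in range(n_real):
    selected[rng.choice(n_targets, size=n_fibers, replace=False)] += 1
p = selected / n_real                           # selection probability p_i

# Inverse-probability weights for observed galaxies.
w = np.where(p > 0, 1.0 / np.maximum(p, 1e-12), 0.0)

# One "observed" realization: the weighted count recovers the full
# target population in expectation.
obs = rng.choice(n_targets, size=n_fibers, replace=False)
print(f"weighted count: {w[obs].sum():.0f}  (true targets: {n_targets})")
```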
Contamination by interloper galaxies due to misidentified emission lines can be a significant issue in spectroscopic galaxy clustering surveys, especially in future high-precision observations. We propose a statistical method based on the cross-correlations between two redshift bins of the observational data itself to efficiently reduce this effect; it can also determine the interloper fraction f_i in a redshift bin with a high level of accuracy. The ratio of cross to auto angular correlation functions or power spectra between redshift bins is suggested as an estimator of f_i, and the key equations are derived for theoretical discussion. In order to explore and prove the feasibility and effectiveness of this method, we also run simulations, generate mock data, and perform cosmological constraints considering systematics based on the observations of the China Space Station Telescope (CSST). We find that this method can effectively reduce the interloper effect and accurately constrain the cosmological parameters for f_i < 1%-10%, which is suitable for most future surveys. This method can also be applied to other kinds of galaxy clustering surveys, such as line intensity mapping.
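
As a toy version of the cross/auto ratio idea (not the paper's key equations), assume the observed bin-1 field is $d_1 = s_1 + f\,s_2$, the bin-2 field is clean ($d_2 = s_2$), and the true fields are uncorrelated; then $\langle d_1 d_2 \rangle \approx f \langle s_2 s_2 \rangle$, so the ratio of cross to auto correlations estimates $f$:

```python
import numpy as np

# Toy first-order interloper model on uncorrelated Gaussian "maps".
rng = np.random.default_rng(2)
n_pix, f_true = 200_000, 0.05

s1 = rng.standard_normal(n_pix)      # true bin-1 overdensity
s2 = rng.standard_normal(n_pix)      # true bin-2 overdensity
d1 = s1 + f_true * s2                # bin-1 map with interlopers from bin 2
d2 = s2                              # clean bin-2 map

# Cross/auto ratio as an estimator of the interloper fraction.
f_hat = np.mean(d1 * d2) / np.mean(d2 * d2)
print(f"estimated f_i = {f_hat:.4f}  (input {f_true})")
```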
Measurements of redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realisations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments ('multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter ('clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As the main result of this paper, we give explicit formulae for the Fourier- and configuration-space covariance matrices. To validate our model, we create synthetic HOD galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
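
For the Fourier-space case, the diagonal of the Gaussian covariance of the monopole takes the familiar mode-counting form $\mathrm{Var}[P(k)] = 2\,(P(k) + 1/\bar{n})^2 / N_{\mathrm{modes}}(k)$ with $N_{\mathrm{modes}}(k) = 4\pi k^2 \Delta k\, V/(2\pi)^3$; the sketch below evaluates it for illustrative survey numbers (not the paper's configuration):

```python
import numpy as np

# Toy diagonal Gaussian covariance of the power spectrum monopole.
# Volume, number density, and the power spectrum shape are placeholders.
V = 1.0e9                              # survey volume in (Mpc/h)^3
nbar = 3.0e-4                          # galaxy density in (h/Mpc)^3
k = np.linspace(0.01, 0.3, 30)         # wavenumbers in h/Mpc
dk = k[1] - k[0]
P = 2.0e4 / (1.0 + (k / 0.02) ** 2)    # toy power spectrum in (Mpc/h)^3

n_modes = 4.0 * np.pi * k**2 * dk * V / (2.0 * np.pi) ** 3
var_P = 2.0 * (P + 1.0 / nbar) ** 2 / n_modes   # shot noise via 1/nbar
for ki, vi, pi in zip(k[::10], var_P[::10], P[::10]):
    print(f"k = {ki:.3f}: P = {pi:9.1f}, sigma_P/P = {np.sqrt(vi)/pi:.3f}")
```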