
Self-consistent redshift estimation using correlation functions without a spectroscopic reference sample

Posted by: Dr Ben Hoyle
Publication date: 2018
Research field: Physics
Paper language: English





We present a new method to estimate redshift distributions and galaxy-dark matter bias parameters using correlation functions in a fully data-driven and self-consistent manner. Unlike other machine learning, template, or correlation redshift methods, this approach does not require a reference sample with known redshifts. By measuring the projected cross- and auto-correlations of different galaxy sub-samples, e.g., as chosen by simple cells in color-magnitude space, we are able to estimate the galaxy-dark matter bias model parameters and the shape of the redshift distribution of each sub-sample. The method fully marginalises over a flexible parameterisation of the redshift distributions and galaxy-dark matter bias parameters of the sub-samples, and thus provides a general Bayesian framework to incorporate redshift uncertainty into cosmological analyses in a data-driven, consistent, and reproducible manner. The result is further improved by an order of magnitude by including cross-correlations with the CMB and with galaxy-galaxy lensing. We showcase how this method could be applied to real galaxies. Using idealised data vectors, in which all galaxy-dark matter model parameters and redshift distributions are known, the method is demonstrated to recover unbiased estimates of important quantities, such as the offset $\Delta_z$ between the means of the true and estimated redshift distributions and the 68%, 95%, and 99.5% widths of the redshift distribution, to an accuracy required by current and future surveys.
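As a rough illustration only (the paper's actual parameterisation, likelihood, and sampling are not reproduced here), the sketch below builds the kind of forward model the abstract implies: the projected cross-correlation between two sub-samples is approximated by the overlap of their binned redshift distributions, scaled by one effective galaxy-dark matter bias per sub-sample and a placeholder dark-matter clustering amplitude per redshift slice. All function names and numerical values are hypothetical; in the paper these quantities are sampled and marginalised over in a Bayesian fit.

```python
import numpy as np

def model_w(n_i, n_j, b_i, b_j, w_dm, dz):
    """Toy model of the projected correlation between sub-samples i and j.

    n_i, n_j : binned redshift distributions (normalised to unit integral)
    b_i, b_j : effective galaxy-dark matter bias of each sub-sample
    w_dm     : placeholder dark-matter clustering amplitude per redshift bin
    dz       : redshift bin width
    """
    return b_i * b_j * np.sum(n_i * n_j * w_dm) * dz

# Two hypothetical sub-samples with overlapping Gaussian n(z)
z = np.arange(0.0, 2.0, 0.01)
dz = z[1] - z[0]
n1 = np.exp(-0.5 * ((z - 0.5) / 0.1) ** 2)
n2 = np.exp(-0.5 * ((z - 0.6) / 0.1) ** 2)
n1 /= np.trapz(n1, z)
n2 /= np.trapz(n2, z)
w_dm = 1.0 / (1.0 + z) ** 2  # placeholder amplitude, not a real model

print(model_w(n1, n2, 1.5, 1.8, w_dm, dz))  # one model cross-correlation data point
```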


Read also

All estimators of the two-point correlation function are based on a random catalogue, a set of points with no intrinsic clustering following the selection function of a survey. High-accuracy estimates require the use of large random catalogues, which imply a high computational cost. We propose to replace the standard random catalogues by glass-like point distributions or glass catalogues, which are characterized by a power spectrum $P(k) \propto k^4$ and exhibit significantly less power than a Poisson distribution with the same number of points on scales larger than the mean inter-particle separation. We show that these distributions can be obtained by iteratively applying the technique of Zeldovich reconstruction commonly used in studies of baryon acoustic oscillations (BAO). We provide a modified version of the widely used Landy-Szalay estimator of the correlation function adapted to the use of glass catalogues and compare its performance with the results obtained using random samples. Our results show that glass-like samples do not add any bias with respect to the results obtained using Poisson distributions. On scales larger than the mean inter-particle separation of the glass catalogues, the modified estimator leads to a significant reduction of the variance of the Legendre multipoles $\xi_\ell(s)$ with respect to the standard Landy-Szalay results with the same number of points. The size of the glass catalogue required to achieve a given accuracy in the correlation function is significantly smaller than when using random samples. Even considering the small additional cost of constructing the glass catalogues, their use could help to drastically reduce the computational cost of configuration-space clustering analysis of future surveys while maintaining high-accuracy requirements.
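For context, the baseline Landy-Szalay estimator that the paper modifies combines data-data, data-random, and random-random pair counts. The sketch below is a minimal, unoptimised version using a KD-tree on Cartesian positions; the paper's modified estimator would substitute a glass-like catalogue for the random catalogue (and adjust the RR term accordingly), which is not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_counts(a, b, edges):
    """Pair counts between point sets a and b in separation bins `edges`."""
    cumulative = cKDTree(a).count_neighbors(cKDTree(b), edges)
    return np.diff(cumulative)

def landy_szalay(data, randoms, edges):
    """Standard Landy-Szalay xi(r); `randoms` could instead be a glass catalogue."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, data, edges) / (nd * (nd - 1))
    rr = pair_counts(randoms, randoms, edges) / (nr * (nr - 1))
    dr = pair_counts(data, randoms, edges) / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr
```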
C. Davis, E. Rozo, A. Roodman (2017)
Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of $\Delta z \sim \pm 0.01$. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
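The core step of such a cross-correlation (clustering-redshift) calibration is that the unknown sample's redshift distribution is proportional to its angular cross-correlation with reference tracers in narrow redshift slices, once the bias and dark-matter clustering terms are divided out. The sketch below shows only that proportionality; the bias evolution that the paper marginalises over is treated as known here, and every input is a placeholder rather than a DES or redMaPPer measurement.

```python
import numpy as np

def recover_nz(w_cross, b_ref, w_dm, z, b_unknown=1.0):
    """Estimate n(z) of an unknown sample from cross-correlation amplitudes.

    w_cross   : measured cross-correlation with reference tracers per z slice
    b_ref     : reference-sample bias per z slice (e.g. a cluster-like tracer)
    w_dm      : dark-matter clustering amplitude per z slice
    b_unknown : bias of the unknown sample (held fixed in this sketch)
    """
    nz = w_cross / (b_ref * b_unknown * w_dm)
    return nz / np.trapz(nz, z)  # normalise to unit integral
```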
Multiphase estimation is a paradigmatic example of a multiparameter problem. When measuring multiple phases embedded in interferometric networks, specially tailored input quantum states achieve enhanced sensitivities compared with both single-parameter and classical estimation schemes. Significant attention has been devoted to defining the optimal strategies for the scenario in which all of the phases are evaluated with respect to a common reference mode, in terms of optimal probe states and optimal measurement operators. Moreover, these strategies assume unlimited external resources, which is experimentally unrealistic. Here, we optimize a generalized scenario that treats all of the phases on an equal footing and takes into account the resources provided by external references. We show that the absence of an external reference mode reduces the number of simultaneously estimatable parameters, owing to the immeasurability of global phases, and that the symmetries of the parameters being estimated dictate the symmetries of the optimal probe states. Finally, we provide insight for constructing optimal measurements in this generalized scenario. The experimental viability of this work underlies its immediate practical importance beyond fundamental physics.
We report on the small-scale ($0.5 < r < 40\,h^{-1}\,$Mpc) clustering of 78,895 massive ($M_* \sim 10^{11.3}\,M_\odot$) galaxies at $0.2 < z < 0.4$ from the first two years of data from the Baryon Oscillation Spectroscopic Survey (BOSS), to be released as part of SDSS Data Release 9 (DR9). We describe the sample selection, basic properties of the galaxies, and caveats for working with the data. We calculate the real- and redshift-space two-point correlation functions of these galaxies, fit these measurements using Halo Occupation Distribution (HOD) modeling within dark matter cosmological simulations, and estimate the errors using mock catalogs. These galaxies lie in massive halos, with a mean halo mass of $5.2\times10^{13}\,h^{-1}\,M_\odot$, a large-scale bias of $\sim 2.0$, and a satellite fraction of $12\pm2\%$. Thus, these galaxies occupy halos with average masses in between those of the higher-redshift BOSS CMASS sample and the original SDSS I/II LRG sample.
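A common parameterisation used in HOD fits of this kind (the five-parameter form of Zheng et al. 2005) splits the mean halo occupation into central and satellite terms. The sketch below uses that generic form with placeholder parameter values, not the fitted BOSS values quoted in the abstract.

```python
import numpy as np
from scipy.special import erf

def n_central(log_m, log_m_min=13.1, sigma_logm=0.4):
    """Mean number of central galaxies as a function of log10 halo mass."""
    return 0.5 * (1.0 + erf((np.asarray(log_m, dtype=float) - log_m_min) / sigma_logm))

def n_satellite(log_m, log_m0=13.3, log_m1=14.2, alpha=1.0,
                log_m_min=13.1, sigma_logm=0.4):
    """Mean number of satellite galaxies, truncated below the cutoff mass M0."""
    m = 10.0 ** np.asarray(log_m, dtype=float)
    sat = (np.clip(m - 10.0 ** log_m0, 0.0, None) / 10.0 ** log_m1) ** alpha
    return n_central(log_m, log_m_min, sigma_logm) * sat

# e.g. mean total occupation of a 10^14 h^-1 Msun halo with these placeholder values
print(n_central(14.0) + n_satellite(14.0))
```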
We present an $8.1\sigma$ detection of the non-Gaussian 4-Point Correlation Function (4PCF) using a sample of $N_{\rm g} \approx 8\times 10^5$ galaxies from the BOSS CMASS dataset. Our measurement uses the $\mathcal{O}(N_{\rm g}^2)$ NPCF estimator of Philcox et al. (2021), including a new modification to subtract the disconnected 4PCF contribution (arising from the product of two 2PCFs) at the estimator level. This approach is unlike previous work and ensures that our signal is a robust detection of gravitationally-induced non-Gaussianity. The estimator is validated with a suite of lognormal simulations, and the analytic form of the disconnected contribution is discussed. Due to the high dimensionality of the 4PCF, data compression is required; we use a signal-to-noise-based scheme calibrated from theoretical covariance matrices to restrict to $\sim 100$ basis vectors. The compression has minimal impact on the detection significance and facilitates traditional $\chi^2$-like analyses using a suite of mock catalogs. The significance is stable with respect to different treatments of noise in the sample covariance (arising from the limited number of mocks), but decreases to $4.7\sigma$ when a minimum galaxy separation of $14\,h^{-1}\,\mathrm{Mpc}$ is enforced on the 4PCF tetrahedra (such that the statistic can be modelled more easily). The detectability of the 4PCF in the quasi-linear regime implies that it will become a useful tool in constraining cosmological and galaxy formation parameters from upcoming spectroscopic surveys.
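The "disconnected" contribution subtracted at the estimator level is the Gaussian part of the 4-point function, which by Wick's (Isserlis') theorem reduces to sums of products of 2-point functions. The small numerical check below illustrates that identity for a Gaussian random vector; it is not the paper's NPCF estimator, only the statistical statement behind the subtraction.

```python
import numpy as np

# Wick/Isserlis check: for zero-mean Gaussian variables,
# <d1 d2 d3 d4> = C12*C34 + C13*C24 + C14*C23 (the "disconnected" part),
# so subtracting these products isolates any connected, non-Gaussian signal.
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5, 0.3, 0.2],
                [0.5, 1.0, 0.4, 0.1],
                [0.3, 0.4, 1.0, 0.6],
                [0.2, 0.1, 0.6, 1.0]])
d = rng.multivariate_normal(np.zeros(4), cov, size=1_000_000)
four_point = np.mean(d[:, 0] * d[:, 1] * d[:, 2] * d[:, 3])
disconnected = cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3] + cov[0, 3] * cov[1, 2]
print(four_point, disconnected)  # agree up to sampling noise; connected part ~ 0
```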