The ESA Euclid mission will produce photometric galaxy samples over 15 000 square degrees of the sky that will be a rich resource for clustering and weak lensing statistics. The accuracy of the cosmological constraints derived from these measurements will depend on how well the underlying redshift distributions are known from photometric redshift calibrations. We propose a new approach that uses the stacked spectra from Euclid slitless spectroscopy to augment the broad-band photometric information and constrain the redshift distribution with spectral energy distribution fitting. The high spectral resolution available in the stacked spectra complements the photometry, helping to break the colour-redshift degeneracy and constrain the redshift distribution of galaxy samples. We model the stacked spectra as a linear mixture of spectral templates. The mixture can be inverted to infer the underlying redshift distribution using constrained regression algorithms. We demonstrate the method on simulated Vera C. Rubin Observatory and Euclid mock survey data sets based on the Euclid Flagship mock galaxy catalogue. We assess the accuracy of the reconstruction by considering the inference of the baryon acoustic scale from angular two-point correlation function measurements. We select mock photometric galaxy samples at redshift z>1 using the self-organizing map algorithm. Considering the idealized case without dust attenuation, we find that the redshift distributions of these samples can be recovered with 0.5% accuracy on the baryon acoustic scale. The estimates are not significantly degraded by the spectroscopic measurement noise, owing to the large sample size. However, the error degrades to 2% when the dust attenuation model is left free. We find that the colour degeneracies introduced by attenuation limit the achievable accuracy, given the wavelength coverage of the Euclid near-infrared spectroscopy.
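As a concrete illustration of the inversion step, the sketch below recovers a relative redshift distribution from a stacked spectrum with non-negative least squares, one simple form of constrained regression. All names (wave_obs, wave_rest, templates, z_grid) are placeholders, and a real pipeline would additionally handle noise weighting, flux calibration, and the dust attenuation parameters discussed above.

```python
# Minimal sketch: invert a stacked spectrum modelled as a linear mixture
# of redshifted spectral templates. Placeholder inputs, not the actual
# Euclid pipeline.
import numpy as np
from scipy.optimize import nnls

def build_mixture_matrix(wave_obs, wave_rest, templates, z_grid):
    """One column per (template, z) pair: the rest-frame template
    evaluated at wave_obs / (1 + z), i.e. redshifted to the observer."""
    cols = []
    for tmpl in templates:
        for z in z_grid:
            cols.append(np.interp(wave_obs / (1.0 + z), wave_rest, tmpl,
                                  left=0.0, right=0.0))
    return np.column_stack(cols)

def invert_stack(stack, A, n_templates, n_z):
    """Non-negativity is the constraint: coefficients >= 0. Summing the
    coefficients over templates gives the relative dN/dz per z bin."""
    coeffs, _ = nnls(A, stack)
    nz = coeffs.reshape(n_templates, n_z).sum(axis=0)
    return nz / nz.sum()
```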
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of the single-value redshift estimates and redshift probability distributions (PDZs) of 13 photo-z codes on a common set of data, focusing particularly on the 0.2-2.6 redshift range that the Euclid mission will probe. We design a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data are divided into two samples: a calibration sample, for which both photometry and redshifts were provided to the participants, and a validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag indicating sources they consider unfit for use in cosmological analyses. The performance of each method is assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by the participants are efficient in removing strong outliers, i.e. sources for which the photo-z deviates by more than 0.15(1+z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. [abridged]
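For reference, the strong-outlier criterion quoted above, together with the robust scatter and bias commonly reported alongside it, can be computed as in the sketch below; the function name and inputs are illustrative, not the challenge's actual scoring code.

```python
# Point-estimate metrics of the kind used to score photo-z codes.
import numpy as np

def point_estimate_metrics(z_phot, z_spec):
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outlier_frac = np.mean(np.abs(dz) > 0.15)     # strong-outlier rate
    bias = np.median(dz)
    nmad = 1.4826 * np.median(np.abs(dz - bias))  # robust (NMAD) scatter
    return {"outlier_fraction": outlier_frac, "bias": bias,
            "sigma_nmad": nmad}
```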
Low-density regions are less affected by nonlinear structure formation and baryonic physics. They are ideal places for probing the nature of dark energy, a possible explanation for the cosmic acceleration. Unlike void lensing, which requires the identification of individual voids, we study the stacked lensing signals around low-density positions (LDPs), defined as places that are devoid of foreground bright galaxies in projection. The method allows a direct comparison with numerical results by drawing a correspondence between the bright galaxies and halos. It leads to lensing signals that are significant enough to differentiate between several dark energy models. In this work, we use the CFHTLenS catalogue to define LDPs and to measure the background lensing signals around them. We consider several different definitions of the foreground bright galaxies (redshift range and magnitude cut). Regarding the cosmological model, we run six simulations: the first set of simulations shares the same initial conditions, with $w_{\rm de}=-1, -0.5, -0.8, -1.2$; the second set includes a slightly different $\Lambda$CDM model and a w(z) model from \cite{2017NatAs...1..627Z}. The lensing results indicate that the models with $w_{\rm de}=-0.5, -0.8$ are not favored, while the other four models all achieve comparable agreement with the data.
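A minimal sketch of the LDP selection, assuming a flat-sky approximation (no cos Dec correction) and illustrative names; in practice the radius is tied to the definition of the foreground bright-galaxy sample.

```python
# Flag low-density positions: grid points whose projected distance to the
# nearest foreground bright galaxy exceeds a chosen radius.
import numpy as np
from scipy.spatial import cKDTree

def find_ldps(gal_ra, gal_dec, grid_ra, grid_dec, radius_deg):
    tree = cKDTree(np.column_stack([gal_ra, gal_dec]))
    grid = np.column_stack([np.ravel(grid_ra), np.ravel(grid_dec)])
    dist, _ = tree.query(grid, k=1)   # distance to nearest bright galaxy
    return grid[dist > radius_deg]    # points devoid of bright neighbours
```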
We perform a forecast analysis of how well a Euclid-like photometric galaxy cluster survey will constrain the total neutrino mass and the effective number of neutrino species. We base our analysis on the Markov chain Monte Carlo (MCMC) technique, combining information from cluster number counts and the cluster power spectrum. We find that combining cluster data with CMB measurements from Planck improves the constraint on neutrino masses by more than an order of magnitude compared to each probe used independently. For the LCDM+M_nu model, the 2 sigma upper limit on the total neutrino mass shifts from M_nu < 0.35 eV using cluster data alone to M_nu < 0.031 eV when combined with CMB data. When a non-standard model with N_eff neutrino species is considered, we estimate N_eff < 3.14 (95% CL), while the bounds on the neutrino mass are relaxed to M_nu < 0.040 eV. This accuracy would be sufficient for a 2 sigma detection of the neutrino mass even in the minimal normal hierarchy scenario. We also consider scenarios with a constant dark energy equation of state and a non-vanishing curvature. When these models are considered, the error on M_nu is only slightly affected, while the impact on the 2 sigma error bar of N_eff is larger, of order ~15% and ~20%, respectively, with respect to the standard case. We also treat the LCDM+M_nu+N_eff case with free nuisance parameters, which parameterize the uncertainties in the cluster mass determination. In this case, the upper bound on M_nu is relaxed by a factor larger than two, M_nu < 0.083 eV (95% CL), compromising the possibility of detecting the total neutrino mass with good significance. We thus confirm the potential that a large optical/near-IR cluster survey, like that to be carried out by Euclid, has for constraining neutrino properties. [abridged]
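The way the two probes combine in an MCMC can be illustrated with a toy Metropolis-Hastings sampler: a Poisson likelihood for the number counts multiplied by a Gaussian likelihood for the power spectrum. The one-parameter amplitude model below is a deliberate placeholder for the full cosmology-dependent predictions used in the paper.

```python
# Toy joint likelihood: Poisson cluster counts x Gaussian power spectrum,
# sampled with Metropolis-Hastings over a single amplitude parameter.
import numpy as np

rng = np.random.default_rng(0)

counts_fid = np.array([120.0, 70.0, 30.0, 10.0])  # toy counts per mass bin
pk_fid = np.array([2.0e4, 1.2e4, 6.0e3])          # toy P(k) bins
pk_err = 0.1 * pk_fid
counts_obs = rng.poisson(counts_fid)              # mock observations
pk_obs = pk_fid + pk_err * rng.normal(size=pk_fid.size)

def log_like(amp):
    mu = amp * counts_fid
    lnL = np.sum(counts_obs * np.log(mu) - mu)    # Poisson (constants dropped)
    lnL -= 0.5 * np.sum(((pk_obs - amp * pk_fid) / pk_err) ** 2)
    return lnL

def metropolis(n_steps=20000, step=0.02, amp=1.0):
    lnp, chain = log_like(amp), []
    for _ in range(n_steps):
        prop = amp + step * rng.normal()
        lnp_prop = log_like(prop) if prop > 0 else -np.inf
        if np.log(rng.uniform()) < lnp_prop - lnp:  # accept/reject
            amp, lnp = prop, lnp_prop
        chain.append(amp)
    return np.array(chain)

chain = metropolis()[5000:]                       # drop burn-in
print(f"amplitude = {chain.mean():.3f} +/- {chain.std():.3f}")
```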
We present redshift probability distributions for galaxies in the SDSS DR8 imaging data. We used the nearest-neighbor weighting algorithm presented in Lima et al. (2008) and Cunha et al. (2009) to derive the ensemble redshift distribution N(z) and individual redshift probability distributions P(z) for galaxies with r < 21.8. As part of this technique, we calculated weights for a set of training galaxies with known redshifts such that their density distribution in five-dimensional color-magnitude space was proportional to that of the photometry-only sample, producing a nearly fair sample in that space. We then estimated the ensemble N(z) of the photometric sample by constructing a weighted histogram of the training-set redshifts. We derived P(z)s for individual objects using the same technique, but restricted to training-set objects from the local color-magnitude space around each photometric object. Using the P(z) for each galaxy, rather than an ensemble N(z), can reduce the statistical error in measurements that depend on the redshifts of individual galaxies. The spectroscopic training sample is substantially larger than that used for the DR7 release, and the newly added PRIMUS catalog is now by a wide margin the most important training set used in this analysis. We expect the primary source of error in the N(z) reconstruction to be sample variance: the training sets are drawn from relatively small volumes of space. Using simulations, we estimate the uncertainty in N(z) at a given redshift to be 10-15%. The uncertainty on calculations incorporating N(z) or P(z) depends on how they are used; we discuss the case of weak lensing measurements. The P(z) catalog is publicly available from the SDSS website.
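The weighting step can be sketched as follows, in the spirit of Lima et al. (2008): each training galaxy is weighted by the local ratio of photometric to training densities, estimated from its k nearest training neighbors. The value of k and the array layout are illustrative choices, not those of the DR8 analysis.

```python
# Nearest-neighbor weights in 5D color-magnitude space, then a weighted
# redshift histogram as the ensemble N(z) estimate.
import numpy as np
from scipy.spatial import cKDTree

def nn_weights(train_cm, photo_cm, k=100):
    """train_cm, photo_cm: (N, 5) arrays of colors plus magnitude."""
    train_tree = cKDTree(train_cm)
    photo_tree = cKDTree(photo_cm)
    dist, _ = train_tree.query(train_cm, k=k)  # k-th training neighbor
    radius = dist[:, -1]
    # photometric objects inside the same radius -> local density ratio
    n_photo = np.array([len(photo_tree.query_ball_point(p, r))
                        for p, r in zip(train_cm, radius)])
    w = n_photo / float(k)
    return w / w.sum()

def weighted_nz(train_z, weights, z_bins):
    nz, _ = np.histogram(train_z, bins=z_bins, weights=weights)
    return nz / (nz.sum() * np.diff(z_bins))   # normalized to unit area
```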
Accurately characterizing the redshift distributions of galaxies is essential for analysing deep photometric surveys and testing cosmological models. We present a technique to simultaneously infer redshift distributions and individual redshifts from photometric galaxy catalogues. Our model constructs a piecewise constant representation (effectively a histogram) of the distribution of galaxy types and redshifts, the parameters of which are efficiently inferred from noisy photometric flux measurements. This approach can be seen as a generalization of template-fitting photometric redshift methods and relies on a library of spectral templates to relate the photometric fluxes of individual galaxies to their redshifts. We illustrate this technique on simulated galaxy survey data, and demonstrate that it delivers correct posterior distributions on the underlying type and redshift distributions, as well as on the individual types and redshifts of galaxies. We show that even with uninformative priors, large photometric errors and parameter degeneracies, the redshift and type distributions can be recovered robustly thanks to the hierarchical nature of the model, which is not possible with common photometric redshift estimation techniques. As a result, redshift uncertainties can be fully propagated in cosmological analyses for the first time, fulfilling an essential requirement for the current and future generations of surveys.
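One way to realize such a hierarchical model is a Gibbs sampler that alternates between assigning each galaxy to a (type, redshift) bin and redrawing the bin amplitudes from their conjugate Dirichlet posterior. The sketch below assumes the per-galaxy template-fitting likelihoods over the bins have already been computed from the fluxes; it is a simplified stand-in for the full method.

```python
# Gibbs sampler for a piecewise-constant (histogram) type-redshift
# distribution, given per-galaxy likelihoods L[i, b] = p(flux_i | bin b).
import numpy as np

rng = np.random.default_rng(1)

def gibbs_nz(L, n_iter=500, alpha=1.0):
    """Returns posterior samples of the normalized bin amplitudes f."""
    n_gal, n_bins = L.shape
    f = np.full(n_bins, 1.0 / n_bins)
    samples = []
    for _ in range(n_iter):
        # 1) sample each galaxy's bin: p(b | flux_i, f) prop. to f[b]*L[i, b]
        p = f * L
        p /= p.sum(axis=1, keepdims=True)
        cum = np.cumsum(p, axis=1)
        u = rng.uniform(size=(n_gal, 1))
        assign = (u > cum).sum(axis=1)           # inverse-CDF draw per galaxy
        assign = np.minimum(assign, n_bins - 1)  # guard against float round-off
        counts = np.bincount(assign, minlength=n_bins)
        # 2) conjugate update: f | assignments ~ Dirichlet(alpha + counts)
        f = rng.dirichlet(alpha + counts)
        samples.append(f)
    return np.array(samples)
```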