
Euclid: The importance of galaxy clustering and weak lensing cross-correlations within the photometric Euclid survey

Added by Isaac Tutusaus
Publication date: 2020
Field: Physics
Language: English





The data from the Euclid mission will enable the measurement of the photometric redshifts, angular positions, and weak lensing shapes for over a billion galaxies. This large dataset will allow for cosmological analyses using the angular clustering of galaxies and cosmic shear. The cross-correlation (XC) between these probes can tighten constraints and it is therefore important to quantify their impact for Euclid. In this study we carefully quantify the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim to understand the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias or intrinsic alignments (IA). We follow the formalism presented in Euclid Collaboration: Blanchard et al. (2019) and make use of the codes validated therein. We show that XC improves the dark energy Figure of Merit (FoM) by a factor $\sim 5$, whilst it also reduces the uncertainties on galaxy bias by $\sim 17\%$ and the uncertainties on IA by a factor $\sim 4$. We observe that the role of XC on the final parameter constraints is qualitatively the same irrespective of the galaxy bias model used. We also show that XC can help in distinguishing between different IA models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. We find that the XC terms are necessary to extract the full information content from the data in future analyses. They help in better constraining the cosmological model, and lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC helps in constraining the mean of the photometric-redshift distributions, but it requires a more precise knowledge of this mean in order not to degrade the final FoM. [Abridged]
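To illustrate why combining probes tightens the dark energy Figure of Merit, the sketch below adds two toy Fisher matrices (as one would for independent probes) and computes the FoM as the inverse square root of the determinant of the marginalised $(w_0, w_a)$ covariance. The matrices, parameter ordering, and the magnitude of the improvement are invented for illustration only; the actual 3x2pt analysis also includes the XC blocks in the full data covariance rather than a simple sum.

```python
import numpy as np

def dark_energy_fom(fisher, idx=(0, 1)):
    """FoM = 1/sqrt(det C), with C the marginalised (w0, wa) covariance."""
    cov = np.linalg.inv(fisher)
    sub = cov[np.ix_(idx, idx)]
    return 1.0 / np.sqrt(np.linalg.det(sub))

# Toy 3-parameter Fisher matrices (w0, wa, one nuisance parameter);
# the numbers are illustrative, not Euclid forecasts.
f_wl = np.array([[ 40., -10.,  5.],
                 [-10.,   8., -2.],
                 [  5.,  -2., 20.]])
f_gc = np.array([[ 30.,  -8.,  4.],
                 [ -8.,   6., -3.],
                 [  4.,  -3., 25.]])

# For independent probes the Fisher matrices simply add; the added
# information shrinks the marginalised covariance and raises the FoM.
f_combined = f_wl + f_gc
print(dark_energy_fom(f_wl), dark_energy_fom(f_gc), dark_energy_fom(f_combined))
```

The combined FoM always exceeds each individual one, since adding a positive semi-definite Fisher matrix can only shrink the marginalised covariance.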




Related research

The accuracy of photometric redshifts (photo-zs) particularly affects the results of the analyses of galaxy clustering with photometrically-selected galaxies (GCph) and weak lensing. In the next decade, space missions like Euclid will collect photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this paper, we explore how the tomographic redshift binning and depth of ground-based observations will affect the cosmological constraints expected from Euclid. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We use the Fisher matrix formalism and these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with equal width in redshift provide a higher Figure of Merit (FoM) than equipopulated bins and that increasing the number of redshift bins from 10 to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, an increase of the survey depth provides a higher FoM. But the addition of faint galaxies beyond the limit of the spectroscopic training data decreases the FoM due to the spurious photo-zs. When combining both probes, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. We conclude that there is more information that can be extracted beyond the nominal 10 tomographic redshift bins of Euclid and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
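The two binning schemes compared above can be sketched in a few lines: equal-width bins use uniform edges in redshift, while equipopulated bins place edges at quantiles of the sample so each bin holds roughly the same number of galaxies. The redshift distribution below is a stand-in, not the Flagship simulation's n(z).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy photometric redshift sample; the gamma shape is illustrative only.
z = rng.gamma(shape=2.0, scale=0.45, size=100_000)
z = z[(z > 0.001) & (z < 2.5)]

n_bins = 13

# Equal-width bins: uniform edges in redshift.
edges_width = np.linspace(z.min(), z.max(), n_bins + 1)

# Equipopulated bins: edges at sample quantiles.
edges_pop = np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))

counts_width, _ = np.histogram(z, bins=edges_width)
counts_pop, _ = np.histogram(z, bins=edges_pop)
# The equipopulated counts are nearly constant; the equal-width counts
# follow n(z) and vary strongly across bins.
print(counts_width.std(), counts_pop.std())
```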
Euclid is an ESA mission designed to constrain the properties of dark energy and gravity via weak gravitational lensing and galaxy clustering. It will carry out a wide-area imaging and spectroscopy survey (EWS) in the visible and near-infrared, covering roughly 15,000 square degrees of extragalactic sky over six years. The wide-field telescope and instruments are optimized for a pristine PSF and reduced straylight, producing very crisp images. This paper presents the building of the Euclid reference survey: the sequence of pointings of the EWS, Deep fields, and Auxiliary fields for calibrations, together with the spacecraft movements followed by Euclid as it operates in a step-and-stare mode from its orbit around the Lagrange point L2. Each EWS pointing has four dithered frames; we simulate the dither pattern at pixel level to analyse the effective coverage. We use up-to-date models for the sky background to define the Euclid region of interest (RoI). The building of the reference survey is highly constrained by calibration cadences, spacecraft constraints, and background levels; synergies with ground-based coverage are also considered. Via purpose-built software optimized to prioritize the best sky areas, produce a compact coverage, and ensure thermal stability, we generate a schedule for the Auxiliary and Deep field observations and schedule the RoI with EWS transit observations. The resulting reference survey RSD_2021A fulfills all constraints and is a good proxy for the final solution. Its wide survey covers 14,500 square degrees. The limiting AB magnitudes ($5\sigma$, point-like source) achieved in its footprint are estimated to be 26.2 (visible) and 24.5 (near-infrared); for spectroscopy, the H$\alpha$ line flux limit is $2\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ at 1600 nm; and for diffuse emission the surface brightness limits are 29.8 (visible) and 28.4 (near-infrared) mag arcsec$^{-2}$.
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes, assessing both their single-value redshift estimates and their redshift probability distributions (PDZs) on a common set of data, focusing particularly on the 0.2--2.6 redshift range that the Euclid mission will probe. We design a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data are divided into two samples: one calibration sample for which photometry and redshifts are provided to the participants; and the validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag that indicates sources they consider unfit for use in cosmological analyses. The performance of each method is assessed through a set of informative metrics, using cross-matched spectroscopic and highly accurate photometric redshifts as the ground truth. We show that the rejection criteria set by participants are efficient in removing strong outliers, that is, sources for which the photo-z deviates by more than 0.15(1+z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. [abridged]
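The strong-outlier criterion quoted above, |z_phot - z_spec| > 0.15(1+z_spec), is straightforward to evaluate. A minimal sketch on a mock catalogue with injected catastrophic outliers (the catalogue and noise model are purely illustrative):

```python
import numpy as np

def outlier_fraction(z_phot, z_spec, threshold=0.15):
    """Fraction of sources with |z_phot - z_spec| > threshold * (1 + z_spec)."""
    dz = np.abs(z_phot - z_spec)
    return np.mean(dz > threshold * (1.0 + z_spec))

# Toy catalogue: Gaussian scatter around the truth plus 1% catastrophic outliers.
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.2, 2.6, size=10_000)
z_phot = z_spec + rng.normal(0.0, 0.03 * (1 + z_spec), size=z_spec.size)
z_phot[:100] += 1.5  # inject 100 strong outliers

print(outlier_fraction(z_phot, z_spec))  # close to 0.01
```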
We study the importance of gravitational lensing in the modelling of the number counts of galaxies. We confirm previous results for photometric surveys, showing that lensing cannot be neglected in a survey like LSST, since neglecting it would induce a significant shift of the cosmological parameters. For a spectroscopic survey like SKA2, we find that neglecting lensing in the monopole, quadrupole and hexadecapole of the correlation function also induces an important shift of parameters. For $\Lambda$CDM parameters, the shift is moderate, of the order of $0.6\sigma$ or less. However, for a model-independent analysis that measures the growth rate of structure in each redshift bin, neglecting lensing introduces a shift of up to $2.3\sigma$ at high redshift. Since the growth rate is directly used to test the theory of gravity, such a strong shift would wrongly be interpreted as the breakdown of General Relativity. This shows the importance of including lensing in the analysis of future surveys. On the other hand, for a survey like DESI, we find that lensing is not important, mainly due to the value of the magnification bias parameter of DESI, $s(z)$, which strongly reduces the lensing contribution at high redshift. We also propose a way of improving the analysis of spectroscopic surveys, by including the cross-correlations between different redshift bins (which are usually neglected in spectroscopic analyses) from the spectroscopic survey or from a different photometric sample. We show that including the cross-correlations in the SKA2 analysis does not improve the constraints. On the other hand, replacing the cross-correlations from SKA2 by cross-correlations measured with LSST improves the constraints by 10 to 20%. Interestingly, for $\Lambda$CDM parameters, we find that LSST and SKA2 are highly complementary, since they are affected differently by degeneracies between parameters.
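The role of the magnification bias parameter $s(z)$ mentioned above comes from the standard expression for the magnification contribution to galaxy number counts, which carries a prefactor $5s-2$: the lensing term vanishes at $s=0.4$ and flips sign across it. A minimal sketch (the sample values are illustrative, not the published survey curves):

```python
import numpy as np

def magnification_prefactor(s):
    """Prefactor of the lensing (magnification) term in the number counts.

    In the standard weak-lensing expression this term scales as (5s - 2),
    so it cancels exactly for a magnification bias of s = 0.4.
    """
    return 5.0 * np.asarray(s) - 2.0

# Illustrative values: near s = 0.4 the lensing contribution is suppressed.
for s in (0.2, 0.4, 1.0):
    print(s, magnification_prefactor(s))
```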
Galaxy cluster counts in bins of mass and redshift have been shown to be a competitive probe to test cosmological models. This method requires an efficient blind detection of clusters from surveys with a well-known selection function and robust mass estimates. The Euclid wide survey will cover 15000 deg$^2$ of the sky in the optical and near-infrared bands, down to magnitude 24 in the $H$-band. The resulting data will make it possible to detect a large number of galaxy clusters spanning a wide range of masses up to redshift $\sim 2$. This paper presents the final results of the Euclid Cluster Finder Challenge (CFC). The objective of these challenges was to select the cluster detection algorithms that best meet the requirements of the Euclid mission. The final CFC included six independent detection algorithms, based on different techniques, such as photometric redshift tomography, optimal filtering, hierarchical approaches, wavelets and friends-of-friends algorithms. These algorithms were blindly applied to a mock galaxy catalog with representative Euclid-like properties. The relative performance of the algorithms was assessed by matching the resulting detections to known clusters in the simulations. Several matching procedures were tested, thus making it possible to estimate the associated systematic effects on completeness to $<3$%. All the tested algorithms are very competitive in terms of performance, with three of them reaching $>80$% completeness for a mean purity of 80% down to masses of $10^{14}$ M$_{\odot}$ and up to redshift $z=2$. Based on these results, two algorithms were selected to be implemented in the Euclid pipeline: the AMICO code, based on matched filtering, and the PZWav code, based on an adaptive wavelet approach. [abridged]
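Completeness and purity as used above are simple ratios of matched counts: the fraction of true clusters recovered, and the fraction of detections that correspond to a real cluster. A sketch with invented numbers near the quoted targets (not results from the challenge):

```python
def completeness_purity(n_true, n_det, n_matched):
    """Completeness: matched fraction of true clusters.
    Purity: matched fraction of detections (1 - false-detection rate)."""
    return n_matched / n_true, n_matched / n_det

# Illustrative numbers only: 1000 true clusters, 1050 detections,
# 840 of which match a true cluster.
c, p = completeness_purity(n_true=1000, n_det=1050, n_matched=840)
print(c, p)  # -> 0.84 completeness, 0.8 purity
```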
