
Euclid preparation: VII. Forecast validation for Euclid cosmological probes

Added by Santiago Casas
Publication date: 2019
Field: Physics
Language: English





The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. Estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on different methodologies and numerical implementations, developed for different observational probes and for their combination. In this paper we present validated forecasts that combine both theoretical and observational expertise for different cosmological probes, with the aim of providing the community with reliable numerical codes and methods for Euclid cosmological forecasts. We describe in detail the methodology adopted for Fisher matrix forecasts, applied to galaxy clustering, weak lensing and their combination. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations that can be used by the reader to validate their own codes if required. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and on the remaining freedom in each setup, i.e. flat or non-flat spatial cosmologies, or different cuts at nonlinear scales. The validated numerical implementations can now be reliably used for any such setup. We present results for an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy Figure of Merit by at least a factor of three.
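As a minimal illustration of the Fisher-matrix methodology described above, the sketch below computes marginalised parameter errors and a dark energy Figure of Merit from a mock Fisher matrix. The parameter set, derivatives, and data covariance are placeholders invented for this example, not the validated Euclid recipes or numbers.

```python
import numpy as np

# Minimal sketch of a Fisher-matrix forecast, assuming Gaussian parameter
# posteriors. Parameter order and all numbers are illustrative placeholders.
params = ["Omega_m", "h", "w0", "wa"]

def fisher_from_derivatives(derivs, inv_cov):
    """F_ab = sum_ij (dD_i/dp_a) C^-1_ij (dD_j/dp_b) for observables D_i."""
    return derivs.T @ inv_cov @ derivs

# Mock derivatives of 50 observables w.r.t. the 4 parameters, and a diagonal
# data covariance, just to make the example self-contained and invertible.
rng = np.random.default_rng(42)
derivs = rng.normal(size=(50, 4))
inv_cov = np.eye(50)
fisher = fisher_from_derivatives(derivs, inv_cov)

# Marginalised 1-sigma errors (Cramer-Rao bound): sqrt of the diagonal of F^-1.
cov = np.linalg.inv(fisher)
for name, sigma in zip(params, np.sqrt(np.diag(cov))):
    print(f"sigma({name}) = {sigma:.3f}")

# Dark energy Figure of Merit: inverse square root of the determinant of the
# marginalised (w0, wa) covariance block (up to a conventional prefactor).
block = cov[np.ix_([2, 3], [2, 3])]
print("FoM(w0, wa) =", 1.0 / np.sqrt(np.linalg.det(block)))
```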



Related research

The combination and cross-correlation of the upcoming $Euclid$ data with cosmic microwave background (CMB) measurements is a source of great expectation since it will provide the largest lever arm of epochs, ranging from recombination to structure formation across the entire past light cone. In this work, we present forecasts for the joint analysis of $Euclid$ and CMB data on the cosmological parameters of the standard cosmological model and some of its extensions. This work expands and complements the recently published forecasts based on $Euclid$-specific probes, namely galaxy clustering, weak lensing, and their cross-correlation. With some assumptions on the specifications of current and future CMB experiments, the predicted constraints are obtained from both a standard Fisher formalism and a posterior-fitting approach based on actual CMB data. Compared to a $Euclid$-only analysis, the addition of CMB data leads to a substantial impact on constraints for all cosmological parameters of the standard $\Lambda$-cold-dark-matter model, with improvements reaching up to a factor of ten. For the parameters of extended models, which include a redshift-dependent dark energy equation of state, non-zero curvature, and a phenomenological modification of gravity, improvements can be of the order of two to three, reaching higher than ten in some cases. The results highlight the crucial importance for cosmological constraints of the combination and cross-correlation of $Euclid$ probes with CMB data.
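To illustrate schematically why adding CMB information tightens Euclid-only constraints, the sketch below adds the Fisher matrices of two probes, assuming (unlike the full analysis, which includes cross-correlations) that they are statistically independent. All matrix entries are invented placeholders, not forecasts from either paper.

```python
import numpy as np

# Sketch of combining independent probes in the Fisher formalism: for
# uncorrelated data sets the Fisher matrices simply add, and the marginalised
# errors tighten accordingly. All numbers below are illustrative.
params = ["w0", "wa"]

fisher_euclid = np.array([[ 40.0, -12.0],
                          [-12.0,   5.0]])   # mock Euclid-only block
fisher_cmb    = np.array([[ 25.0,  -4.0],
                          [ -4.0,   2.0]])   # mock CMB-only block

def marginalised_errors(fisher):
    # 1-sigma errors from the diagonal of the inverse Fisher matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

for label, f in [("Euclid", fisher_euclid),
                 ("CMB", fisher_cmb),
                 ("Euclid + CMB", fisher_euclid + fisher_cmb)]:
    errs = marginalised_errors(f)
    print(label, {p: round(e, 3) for p, e in zip(params, errs)})
```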
Euclid is an ESA mission designed to constrain the properties of dark energy and gravity via weak gravitational lensing and galaxy clustering. It will carry out a wide-area imaging and spectroscopy survey (EWS) in the visible and near-infrared, covering roughly 15,000 square degrees of extragalactic sky over six years. The wide-field telescope and instruments are optimized for a pristine PSF and reduced straylight, producing very crisp images. This paper presents the building of the Euclid reference survey: the sequence of pointings of the EWS, Deep fields, Auxiliary fields for calibrations, and spacecraft movements followed by Euclid as it operates in a step-and-stare mode from its orbit around the Lagrange point L2. Each EWS pointing has four dithered frames; we simulate the dither pattern at pixel level to analyse the effective coverage. We use up-to-date models for the sky background to define the Euclid region-of-interest (RoI). The building of the reference survey is highly constrained by calibration cadences, spacecraft constraints and background levels; synergies with ground-based coverage are also considered. Via purposely-built software optimized to prioritize the best sky areas, produce a compact coverage, and ensure thermal stability, we generate a schedule for the Auxiliary and Deep field observations and schedule the RoI with EWS transit observations. The resulting reference survey RSD_2021A fulfills all constraints and is a good proxy for the final solution. Its wide survey covers 14,500 square degrees. The limiting AB magnitudes ($5\sigma$, point-like source) achieved in its footprint are estimated to be 26.2 (visible) and 24.5 (near-infrared); for spectroscopy, the H$\alpha$ line flux limit is $2\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ at 1600 nm; and for diffuse emission the surface brightness limits are 29.8 (visible) and 28.4 (near-infrared) mag arcsec$^{-2}$.
Forthcoming large photometric surveys for cosmology require precise and accurate photometric redshift (photo-z) measurements for the success of their main science objectives. However, to date, no method has been able to produce photo-zs at the required accuracy using only the broad-band photometry that those surveys will provide. An assessment of the strengths and weaknesses of current methods is a crucial step in the eventual development of an approach to meet this challenge. We report on the performance of 13 photometric redshift codes, assessing both their single-value redshift estimates and their redshift probability distributions (PDZs), on a common set of data, focusing particularly on the 0.2--2.6 redshift range that the Euclid mission will probe. We design a challenge using emulated Euclid data drawn from three photometric surveys of the COSMOS field. The data are divided into two samples: one calibration sample for which photometry and redshifts are provided to the participants; and the validation sample, containing only the photometry, to ensure a blinded test of the methods. Participants were invited to provide a single-value redshift estimate and a PDZ for each source in the validation sample, along with a rejection flag that indicates sources they consider unfit for use in cosmological analyses. The performance of each method is assessed through a set of informative metrics, using cross-matched spectroscopic and highly-accurate photometric redshifts as the ground truth. We show that the rejection criteria set by participants are efficient in removing strong outliers, i.e. sources for which the photo-z deviates by more than 0.15(1+z) from the spectroscopic redshift (spec-z). We also show that, while all methods are able to provide reliable single-value estimates, several machine-learning methods do not manage to produce useful PDZs. [abridged]
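The strong-outlier criterion quoted above can be computed directly; the snippet below is a sketch using mock redshift arrays, and also prints the normalised median absolute deviation, a statistic commonly used for photo-z scatter (it is not necessarily the exact metric set adopted in the challenge).

```python
import numpy as np

# Sketch of the outlier criterion quoted above: a source is a strong outlier
# when |z_phot - z_spec| > 0.15 * (1 + z_spec). Arrays below are mock values.
z_spec = np.array([0.30, 0.85, 1.40, 2.10, 0.55])
z_phot = np.array([0.32, 0.80, 1.95, 2.05, 0.57])

outlier = np.abs(z_phot - z_spec) > 0.15 * (1.0 + z_spec)
print("outlier fraction:", outlier.mean())

# A common companion metric: the normalised median absolute deviation of
# (z_phot - z_spec) / (1 + z_spec).
dz = (z_phot - z_spec) / (1.0 + z_spec)
sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
print("sigma_NMAD:", sigma_nmad)
```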
In metric theories of gravity with photon number conservation, the luminosity and angular diameter distances are related via the Etherington relation, also known as the distance-duality relation (DDR). A violation of this relation would rule out the standard cosmological paradigm and point at the presence of new physics. We quantify the ability of Euclid, in combination with contemporary surveys, to improve the current constraints on deviations from the DDR in the redshift range $0<z<1.6$. We start with an analysis of the latest available data, improving previously reported constraints by a factor of 2.5. We then present a detailed analysis of simulated Euclid and external data products, using both standard parametric methods (relying on phenomenological descriptions of possible DDR violations) and a machine learning reconstruction using Genetic Algorithms. We find that for parametric methods Euclid can (in combination with external probes) improve current constraints by approximately a factor of six, while for non-parametric methods Euclid can improve current constraints by a factor of three. Our results highlight the importance of surveys like Euclid in accurately testing the pillars of the current cosmological paradigm and constraining physics beyond the standard cosmological model.
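For concreteness, the sketch below evaluates the Etherington relation $d_L(z) = (1+z)^2 d_A(z)$ in a flat LCDM background and applies one commonly used violation parametrisation, $\eta(z) = 1 + \epsilon_0\, z/(1+z)$. The background parameters, the choice of parametrisation, and the value of $\epsilon_0$ are illustrative assumptions, not necessarily the paper's adopted choices.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the distance-duality relation (DDR) in a flat LCDM background:
# d_L(z) = (1+z)^2 d_A(z) when photon number is conserved. A violation is
# often parametrised as eta(z) = d_L / [(1+z)^2 d_A] = 1 + epsilon0 * z/(1+z).
# All numbers below are illustrative, not the paper's fiducial values.
H0, Om, c = 67.0, 0.32, 299792.458   # km/s/Mpc, dimensionless, km/s

def comoving_distance(z):
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp) ** 3 + (1 - Om))
    return (c / H0) * quad(integrand, 0.0, z)[0]   # Mpc

def d_A(z):
    return comoving_distance(z) / (1.0 + z)

def d_L(z, epsilon0=0.0):
    eta = 1.0 + epsilon0 * z / (1.0 + z)   # DDR-violating factor
    return eta * (1.0 + z) ** 2 * d_A(z)

for z in (0.5, 1.0, 1.6):
    print(z, round(d_L(z), 1), round(d_L(z, epsilon0=0.05), 1))
```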
The accuracy of photometric redshifts (photo-zs) particularly affects the results of the analyses of galaxy clustering with photometrically-selected galaxies (GCph) and weak lensing. In the next decade, space missions like Euclid will collect photometric measurements for millions of galaxies. These data should be complemented with upcoming ground-based observations to derive precise and accurate photo-zs. In this paper, we explore how the tomographic redshift binning and depth of ground-based observations will affect the cosmological constraints expected from Euclid. We focus on GCph and extend the study to include galaxy-galaxy lensing (GGL). We add a layer of complexity to the analysis by simulating several realistic photo-z distributions based on the Euclid Consortium Flagship simulation and using a machine learning photo-z algorithm. We use the Fisher matrix formalism and these galaxy samples to study the cosmological constraining power as a function of redshift binning, survey depth, and photo-z accuracy. We find that bins with equal width in redshift provide a higher Figure of Merit (FoM) than equipopulated bins, and that increasing the number of redshift bins from 10 to 13 improves the FoM by 35% and 15% for GCph and its combination with GGL, respectively. For GCph, increasing the survey depth yields a higher FoM. However, adding faint galaxies beyond the limit of the spectroscopic training data decreases the FoM because of their spurious photo-zs. When combining both probes, the number density of the sample, which is set by the survey depth, is the main factor driving the variations in the FoM. We conclude that there is more information that can be extracted beyond the nominal 10 tomographic redshift bins of Euclid and that we should be cautious when adding faint galaxies into our sample, since they can degrade the cosmological constraints.
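The two binning schemes compared above differ only in where the bin edges are placed; the sketch below builds both for a mock redshift distribution (equal-width edges from a linear spacing, equipopulated edges from quantiles). The toy n(z) and the number of bins are assumptions made for illustration only.

```python
import numpy as np

# Sketch of the two tomographic binning schemes: equal-width bins split the
# redshift range evenly, equipopulated bins contain equal numbers of galaxies.
rng = np.random.default_rng(1)
z = rng.gamma(shape=2.0, scale=0.35, size=100_000)   # toy n(z), mean z ~ 0.7
z = z[(z > 0.001) & (z < 2.5)]
n_bins = 10

# Equal-width bins in redshift
width_edges = np.linspace(z.min(), z.max(), n_bins + 1)

# Equipopulated bins: edges at the quantiles of the distribution
pop_edges = np.quantile(z, np.linspace(0.0, 1.0, n_bins + 1))

counts_width, _ = np.histogram(z, bins=width_edges)
counts_pop, _ = np.histogram(z, bins=pop_edges)
print("equal-width counts per bin:  ", counts_width)
print("equipopulated counts per bin:", counts_pop)
```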