In this work we study the consequences of allowing non-pressureless dark matter when determining dark energy constraints. We show that present-day dark energy constraints from low-redshift probes are severely degraded when this dark matter variation is allowed. However, adding the cosmic microwave background (CMB), we recover constraints comparable to the $w_{DM} = 0$ case. We also show that the future Euclid redshift survey is expected to improve these constraints substantially; however, without the complementary information from the CMB, a strong degeneracy remains between the dark energy and dark matter equation-of-state parameters.
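As a schematic illustration of where this degeneracy comes from, consider a flat background with constant equation-of-state parameters for both dark components (an assumption consistent with the simple models considered here; the paper's full parameterization may differ in detail):
\[
\rho_{\rm DM}(a) = \rho_{\rm DM,0}\, a^{-3(1+w_{\rm DM})}, \qquad
\frac{H^2(a)}{H_0^2} = \Omega_{\rm b}\, a^{-3} + \Omega_{\rm DM}\, a^{-3(1+w_{\rm DM})} + \Omega_{\rm DE}\, a^{-3(1+w_{\rm DE})} .
\]
Low-redshift distance probes are sensitive only to integrals of $1/H(z)$, so a shift in $w_{\rm DM}$ can be largely absorbed by compensating shifts in $w_{\rm DE}$ and the density parameters, whereas the CMB anchors the high-redshift behaviour and helps break this degeneracy.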
In this paper we study the consequences of relaxing the hypothesis that the dark matter component is pressureless when determining constraints on dark energy. To this aim we consider simple generalized dark matter models with a constant equation-of-state parameter. We find that present-day low-redshift probes (type-Ia supernovae and baryon acoustic oscillations) lead to a complete degeneracy between the dark energy and dark matter sectors. However, adding the high-redshift cosmic microwave background (CMB) probe restores constraints similar to those on the standard $\Lambda$CDM model. We then examine the anticipated constraints from the galaxy clustering probe of the future Euclid survey on the same class of models, using a Fisher forecast. We show that the Euclid survey allows us to break the degeneracy between the dark sectors, although the constraints on dark energy are much weaker than with standard dark matter. Combining with the CMB restores the high precision of the dark energy constraints.
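A minimal sketch of the kind of Fisher forecast referred to above, assuming Gaussian observables with a parameter-independent data covariance; the toy observable, fiducial values, redshift range, and covariance below are illustrative placeholders, not the Euclid galaxy-clustering setup used in the paper.

# Minimal Fisher-forecast sketch (illustrative only; not the Euclid likelihood).
# For Gaussian data d(theta) with fixed covariance C:
#   F_ij = (d d / d theta_i)^T C^{-1} (d d / d theta_j)
import numpy as np

def fisher_matrix(model, theta0, cov, step=1e-4):
    """Numerical Fisher matrix for observables model(theta) with data covariance cov."""
    theta0 = np.asarray(theta0, dtype=float)
    inv_cov = np.linalg.inv(cov)
    derivs = []
    for i in range(len(theta0)):
        dtheta = np.zeros_like(theta0)
        dtheta[i] = step * max(abs(theta0[i]), 1.0)
        derivs.append((model(theta0 + dtheta) - model(theta0 - dtheta)) / (2.0 * dtheta[i]))
    derivs = np.array(derivs)              # shape (n_parameters, n_observables)
    return derivs @ inv_cov @ derivs.T     # shape (n_parameters, n_parameters)

# Toy observable: the expansion rate H(z)/H0 for (w_DM, w_DE), a hypothetical
# stand-in for the actual galaxy-clustering observables used in the paper.
def hubble_ratio(theta, z=np.linspace(0.9, 1.8, 10), om_b=0.05, om_dm=0.26, om_de=0.69):
    w_dm, w_de = theta
    return np.sqrt(om_b * (1 + z)**3
                   + om_dm * (1 + z)**(3 * (1 + w_dm))
                   + om_de * (1 + z)**(3 * (1 + w_de)))

F = fisher_matrix(hubble_ratio, theta0=[0.0, -1.0], cov=np.eye(10) * 0.01**2)
print(np.sqrt(np.diag(np.linalg.inv(F))))  # marginalized 1-sigma forecasts

The near-proportionality of the two derivative vectors in this toy setup reproduces, qualitatively, the dark matter / dark energy degeneracy discussed above.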
Euclid is an ESA mission designed to constrain the properties of dark energy and gravity via weak gravitational lensing and galaxy clustering. It will carry out a wide-area imaging and spectroscopy survey (the Euclid Wide Survey, EWS) in the visible and near-infrared, covering roughly 15,000 square degrees of extragalactic sky over six years. The wide-field telescope and instruments are optimized for a pristine point spread function (PSF) and reduced stray light, producing very sharp images. This paper presents the building of the Euclid reference survey: the sequence of pointings of the EWS, Deep fields, and Auxiliary fields for calibration, together with the spacecraft movements followed by Euclid as it operates in step-and-stare mode from its orbit around the Lagrange point L2. Each EWS pointing has four dithered frames; we simulate the dither pattern at pixel level to analyse the effective coverage. We use up-to-date models of the sky background to define the Euclid region of interest (RoI). The building of the reference survey is strongly constrained by calibration cadences, spacecraft constraints, and background levels; synergies with ground-based coverage are also considered. Using purpose-built software optimized to prioritize the best sky areas, produce a compact coverage, and ensure thermal stability, we generate a schedule for the Auxiliary and Deep field observations and schedule the EWS transit observations over the RoI. The resulting reference survey, RSD_2021A, fulfills all constraints and is a good proxy for the final solution. Its wide survey covers 14,500 square degrees. The limiting AB magnitudes ($5\sigma$, point-like source) achieved in its footprint are estimated to be 26.2 (visible) and 24.5 (near-infrared); for spectroscopy, the H$\alpha$ line flux limit is $2\times 10^{-16}$ erg cm$^{-2}$ s$^{-1}$ at 1600 nm; and for diffuse emission the surface brightness limits are 29.8 (visible) and 28.4 (near-infrared) mag arcsec$^{-2}$.
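To make the quoted point-source depths concrete, here is a minimal background-limited sketch of how a $5\sigma$ limiting AB magnitude follows from a sky level, an aperture, and an exposure time; the zero point, sky surface brightness, aperture area, and exposure time below are placeholder assumptions, not Euclid pipeline values.

# Illustrative 5-sigma point-source depth, assuming a purely background-limited
# aperture (not the Euclid pipeline; all numbers below are placeholders).
import math

def limiting_ab_mag(sky_sb_mag_arcsec2, aperture_area_arcsec2, exp_time_s,
                    zero_point_ab, n_sigma=5.0):
    """AB magnitude at which a point source reaches n_sigma within the aperture."""
    # Sky counts per second in the aperture, from the surface brightness and zero point
    # (the zero point is the AB magnitude producing 1 count per second).
    sky_cps = aperture_area_arcsec2 * 10 ** (-0.4 * (sky_sb_mag_arcsec2 - zero_point_ab))
    noise_counts = math.sqrt(sky_cps * exp_time_s)      # background-limited Poisson noise
    min_source_cps = n_sigma * noise_counts / exp_time_s
    return zero_point_ab - 2.5 * math.log10(min_source_cps)

# Placeholder numbers purely to exercise the formula:
print(limiting_ab_mag(sky_sb_mag_arcsec2=22.0, aperture_area_arcsec2=1.0,
                      exp_time_s=2280.0, zero_point_ab=25.5))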
In the last decade, astronomers have found a new class of supernovae, termed superluminous supernovae (SLSNe) owing to their high peak luminosities and long light curves. These hydrogen-free explosions (SLSNe-I) can be seen out to z ~ 4 and therefore offer the possibility of probing the distant Universe. We aim to investigate the possibility of detecting SLSNe-I using ESA's Euclid satellite, scheduled for launch in 2020. In particular, we study the Euclid Deep Survey (EDS), which will provide a unique combination of area, depth, and cadence over the mission. We estimated the redshift distribution of Euclid SLSNe-I using the latest information on their rates and spectral energy distributions, as well as known Euclid instrument and survey parameters, including the cadence and depth of the EDS. We also applied a standardization method to the peak magnitudes to create a simulated Hubble diagram and explore possible cosmological constraints. We show that Euclid should detect approximately 140 high-quality SLSNe-I out to z ~ 3.5 over the first five years of the mission (with an additional 70 if we relax our photometric classification criteria). This sample could revolutionize the study of SLSNe-I at z > 1 and open up their use as probes of star-formation rates, galaxy populations, and the interstellar and intergalactic medium. In addition, a sample of such SLSNe-I could improve constraints on a time-dependent dark energy equation of state, namely w(a), when combined with local SLSNe-I and the expected SN Ia sample from the Dark Energy Survey. We show that Euclid will observe hundreds of SLSNe-I for free. These luminous transients will be in the Euclid data stream, and we should prepare now to identify them, as they offer a new probe of the high-redshift Universe for both astrophysics and cosmology.
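A toy sketch of the simulated Hubble-diagram step, assuming a flat $w_0$-$w_a$ (CPL) cosmology and a simple standardization to a fixed absolute peak magnitude with Gaussian scatter; the absolute magnitude, scatter, and uniform redshift distribution are assumptions for illustration, not the rates-based distribution used in the paper.

# Toy Hubble diagram for standardized SLSN-I peak magnitudes (illustrative only).
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def luminosity_distance_mpc(z, h0=70.0, om=0.3, w0=-1.0, wa=0.0):
    """Flat w0-wa (CPL) cosmology: d_L = (1+z) * (c/H0) * int_0^z dz'/E(z')."""
    def inv_e(zp):
        a = 1.0 / (1.0 + zp)
        de = (1.0 - om) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
        return 1.0 / np.sqrt(om * (1.0 + zp) ** 3 + de)
    integral, _ = quad(inv_e, 0.0, z)
    return (1.0 + z) * (C_KM_S / h0) * integral

def distance_modulus(z, **cosmo):
    return 5.0 * np.log10(luminosity_distance_mpc(z, **cosmo) * 1e6 / 10.0)

# Simulated sample: 140 redshifts out to z ~ 3.5 and standardized peak magnitudes,
# with an assumed absolute magnitude M and intrinsic scatter (placeholder values).
rng = np.random.default_rng(0)
z_sample = rng.uniform(0.5, 3.5, 140)
M, scatter = -21.7, 0.25
m_peak = np.array([M + distance_modulus(z) for z in z_sample]) + rng.normal(0.0, scatter, z_sample.size)
print(m_peak[:5])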
The data from the Euclid mission will enable the measurement of the photometric redshifts, angular positions, and weak lensing shapes for over a billion galaxies. This large dataset will allow for cosmological analyses using the angular clustering of galaxies and cosmic shear. The cross-correlation (XC) between these probes can tighten constraints and it is therefore important to quantify their impact for Euclid. In this study we carefully quantify the impact of XC not only on the final parameter constraints for different cosmological models, but also on the nuisance parameters. In particular, we aim at understanding the amount of additional information that XC can provide for parameters encoding systematic effects, such as galaxy bias or intrinsic alignments (IA). We follow the formalism presented in Euclid Collaboration: Blanchard et al. (2019) and make use of the codes validated therein. We show that XC improves the dark energy Figure of Merit (FoM) by a factor $\sim 5$, whilst it also reduces the uncertainties on galaxy bias by $\sim 17\%$ and the uncertainties on IA by a factor $\sim 4$. We observe that the role of XC on the final parameter constraints is qualitatively the same irrespective of the galaxy bias model used. We also show that XC can help in distinguishing between different IA models, and that if IA terms are neglected then this can lead to significant biases on the cosmological parameters. We find that the XC terms are necessary to extract the full information content from the data in future analyses. They help in better constraining the cosmological model, and lead to a better understanding of the systematic effects that contaminate these probes. Furthermore, we find that XC helps in constraining the mean of the photometric-redshift distributions, but it requires a more precise knowledge of this mean in order not to degrade the final FoM. [Abridged]
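For reference, the dark energy Figure of Merit quoted above is commonly defined (up to a normalization convention) from the marginalized covariance of the dark energy equation-of-state parameters $(w_0, w_a)$:
\[
\mathrm{FoM} = \frac{1}{\sqrt{\det \mathrm{Cov}(w_0, w_a)}} ,
\]
i.e. it is inversely proportional to the area of the $(w_0, w_a)$ confidence ellipse after marginalizing over all other cosmological and nuisance parameters; adding the XC terms to the data vector shrinks this covariance and hence raises the FoM.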
The concordance model in cosmology, $\Lambda$CDM, is able to fit the main cosmological observations with a high level of accuracy. However, around 95% of the energy content of the Universe within this framework still remains unknown. In this work we focus on the dark matter component and investigate the generalized dark matter (GDM) model, which allows for non-pressureless dark matter with a non-vanishing sound speed and viscosity. We first focus on current observations, showing that GDM could alleviate the tension between cosmic microwave background and weak lensing observations. We then investigate the ability of the photometric Euclid survey (photometric galaxy clustering, weak lensing, and their cross-correlations) to constrain the nature of dark matter. We conclude that Euclid will provide very good constraints on GDM, enabling us to better understand the nature of this fluid, but a non-linear recipe adapted to GDM is clearly needed in order to correct for non-linearities and obtain reliable results down to small scales.
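Schematically, and following the usual generalized dark matter parameterization of Hu (1998), the GDM fluid is specified by an equation of state $w$, a rest-frame sound speed $c_s^2$, and a viscosity parameter $c_{\rm vis}^2$; the expressions below are a sketch of that closure, not the paper's full set of perturbation equations:
\[
\bar\rho_{\rm gdm}(a) \propto \exp\!\left[-3\int \frac{1+w(a)}{a}\, {\rm d}a\right], \qquad
\delta p_{\rm gdm}\big|_{\rm rest} = c_s^2\, \delta\rho_{\rm gdm}\big|_{\rm rest},
\]
with $c_{\rm vis}^2$ sourcing the anisotropic stress and damping velocity perturbations. The standard cold dark matter limit is recovered for $w = c_s^2 = c_{\rm vis}^2 = 0$.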