Accurate weak-lensing analysis requires not only accurate measurement of galaxy shapes but also precise and unbiased measurement of galaxy redshifts. The photometric redshift technique is the only practical way to determine the redshifts of the background galaxies used in the weak-lensing analysis. Using the photometric redshift quality, simple shape-measurement requirements, and a proper sky model, we explore what an optimal weak-lensing dark energy mission would look like, based on a figure-of-merit (FoM) calculation. We find that photometric redshifts reach their best accuracy for the bulk of the faint galaxy population when the filters have a resolution of $R\sim3.2$. We show that an optimal mission would survey the sky through 8 filters using 2 cameras (visible and near-infrared). Assuming a 5-year mission, a 1.5 m mirror, and a 0.5 deg$^2$ field of view with a visible pixel scale of 0.15 arcsec, we find that a homogeneous survey reaching $I_{\rm AB}=25.6$ ($10\sigma$) with a sky coverage of $\sim$11,000 deg$^2$ maximizes the weak-lensing FoM. The effective number density of galaxies then usable for weak lensing is $\sim$45 gal/arcmin$^2$, at least a factor of two better than ground-based surveys. This work demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future weak-lensing space-based dark energy mission.
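To make the area-versus-depth optimization concrete, the sketch below evaluates a deliberately crude weak-lensing FoM proxy as a function of survey area and effective galaxy density. The function toy_fom, the intrinsic ellipticity dispersion of 0.35, and the fiducial signal amplitude are illustrative assumptions, not the sky model or FoM machinery used in this study.

```python
import numpy as np

SIGMA_E = 0.35   # assumed intrinsic ellipticity dispersion per component

def toy_fom(area_deg2, n_eff_arcmin2, signal=1e-9):
    """Very rough weak-lensing FoM proxy: number of modes (proportional to the
    sky fraction) times the squared signal-to-noise per mode, with the noise
    per mode taken as signal + sigma_e^2 / n_eff (n_eff converted to per sr)."""
    f_sky = area_deg2 / 41253.0                           # full sky ~ 41253 deg^2
    n_eff_sr = n_eff_arcmin2 * (60.0 * 180.0 / np.pi) ** 2
    noise = SIGMA_E ** 2 / n_eff_sr
    return f_sky * (signal / (signal + noise)) ** 2

# Scanning a wide/shallow versus narrow/deep trade-off is the kind of
# optimization the abstract describes; in the full treatment the depth also
# drives n_eff and the photo-z quality, which is what sets the quoted optimum.
for area, n_eff in [(20000, 30), (11000, 45), (5000, 60)]:
    print(area, n_eff, round(toy_fom(area, n_eff), 4))
```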
Future dark energy space missions such as JDEM and EUCLID are being designed to survey the galaxy population in order to trace the geometry of the universe and the growth of structure, both of which depend on the cosmological model. To reach the goal of high-precision cosmology, they need to evaluate the capabilities of different instrument designs based on realistic mock catalogs. The aim of this paper is to construct realistic and flexible mock catalogs based on our knowledge of the galaxy population from current deep surveys. We explore two categories of mock catalog: (i) one based on luminosity-function fits to observations (GOODS, UDF, COSMOS, VVDS) using the Le Phare software, and (ii) one based on the observed COSMOS galaxy distribution, which benefits from all the properties of the data-rich COSMOS survey. For these two catalogs we have produced simulated number counts in several bands, color diagrams, and redshift distributions for validation against real observational data. We also derive some basic requirements to help design future dark energy missions, in terms of the number of galaxies available for the weak-lensing analysis as a function of the PSF size and depth of the survey. We also compute the spectroscopic success rate for future spectroscopic redshift surveys: (i) a wide-field survey aiming to measure BAO, and (ii) the photometric-redshift calibration survey required to achieve weak-lensing tomography with high accuracy. The mock catalogs will be publicly accessible at http://lamwws.oamp.fr/cosmowiki/RealisticSpectroPhotCat, or by request to the first author of this paper.
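As a minimal illustration of approach (i), the snippet below evaluates a single Schechter luminosity function per unit magnitude and samples mock magnitudes from it. The parameter values (m_star, alpha, phi_star) are placeholders, not the Le Phare fits to GOODS, UDF, COSMOS or VVDS, and a real mock additionally folds in volume elements, k-corrections, galaxy types, sizes and redshifts.

```python
import numpy as np

def schechter_per_mag(m, m_star=20.0, alpha=-1.3, phi_star=1e-3):
    """Schechter luminosity function expressed per unit magnitude:
    phi(m) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = L/L* = 10^(0.4(m*-m)).
    Parameter values here are illustrative placeholders only."""
    x = 10.0 ** (0.4 * (m_star - m))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Sample toy magnitudes with probability proportional to phi(m) on a grid;
# a full mock would instead integrate phi over the comoving volume of each
# redshift slice and attach colours, sizes and photometric errors per object.
mags = np.linspace(20.0, 28.0, 161)
weights = schechter_per_mag(mags)
rng = np.random.default_rng(0)
mock_mags = rng.choice(mags, size=10_000, p=weights / weights.sum())
```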
We study the accuracy with which weak-lensing measurements could be made from a future space-based survey, predicting the subsequent precision of three-dimensional dark matter maps, projected two-dimensional dark matter maps, and mass-selected cluster catalogues. As a baseline, we use the instrumental specifications of the Supernova/Acceleration Probe (SNAP) satellite. We first compute its sensitivity to weak-lensing shear as a function of survey depth. Our predictions are based on detailed image simulations created using 'shapelets', a complete and orthogonal parameterization of galaxy morphologies. We incorporate a realistic redshift distribution of source galaxies, and calculate the average precision of photometric redshift recovery using the SNAP filter set to be $\Delta z = 0.034$. The high density of background galaxies resolved in a wide space-based survey allows projected dark matter maps with an rms sensitivity of 3% shear in 1 square arcminute cells. This will be further improved by a proposed deep space-based survey, which will be able to detect isolated clusters using a 3D lensing inversion technique with a $1\sigma$ mass sensitivity of approximately $10^{13}$ solar masses at $z\sim0.25$. Weak-lensing measurements from space will thus be able to capture non-Gaussian features arising from gravitational instability and map out dark matter in the universe with unprecedented resolution.
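For reference, the sketch below implements the standard one-dimensional Gauss-Hermite shapelet basis and projects a toy Gaussian profile onto it on a grid. The scale beta, the profile and the grid are arbitrary choices, and the image simulations described above use the full two-dimensional (and polar) shapelet formalism rather than this simplified decomposition.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

def shapelet_1d(n, x, beta=1.0):
    """Dimensional 1D Cartesian shapelet B_n(x; beta): a normalized
    Gauss-Hermite function, with beta the characteristic scale."""
    c = np.zeros(n + 1)
    c[n] = 1.0                                     # select the Hermite polynomial H_n
    norm = (2.0 ** n * np.sqrt(np.pi) * factorial(n)) ** -0.5 / np.sqrt(beta)
    return norm * hermval(x / beta, c) * np.exp(-0.5 * (x / beta) ** 2)

# Project a toy 1D profile onto the first few basis functions (a grid
# approximation to the continuous inner product); a 2D image shapelet is
# simply the product of two such functions in x and y.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
profile = np.exp(-0.5 * (x / 1.3) ** 2)            # toy galaxy profile
coeffs = [np.sum(profile * shapelet_1d(n, x)) * dx for n in range(8)]
```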
We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between $z=0.2$ and $1.3$, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{\rm PZ}(z)$ for bin $i$. Accurate determination of cosmological parameters depends critically on knowledge of $n^i(z)$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z)=n^i_{\rm PZ}(z-\Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES $griz$ fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at $0.15<z<0.9$. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the $n^i(z)$ beyond the choice of $\Delta z^i$. The clustering and COSMOS validation methods produce consistent estimates of $\Delta z^i$, with combined uncertainties of $\sigma_{\Delta z^i}=0.015$, 0.013, 0.011, and 0.022 in the four bins. We marginalize over these in all analyses to follow, which does not diminish the constraining power significantly. Repeating the photo-z procedure using the Directional Neighborhood Fitting (DNF) algorithm instead of BPZ, or using the $n^i(z)$ directly estimated from COSMOS, yields no discernible difference in cosmological inferences.
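The role of the shift parameters $\Delta z^i$ can be illustrated in a few lines of code: below, a toy tabulated $n_{\rm PZ}(z)$ (not a DES data product) is shifted and renormalized, and its mean redshift recomputed, which is exactly the degree of freedom the cosmological analyses marginalize over.

```python
import numpy as np

def shifted_nz(z, n_pz, delta_z):
    """Apply n(z) = n_PZ(z - delta_z) to a tabulated distribution and
    renormalize to unit integral (uniform z grid assumed)."""
    n = np.interp(z - delta_z, z, n_pz, left=0.0, right=0.0)
    return n / (n.sum() * (z[1] - z[0]))

z = np.linspace(0.0, 2.0, 401)
n_pz = z ** 2 * np.exp(-((z / 0.5) ** 1.5))        # toy n_PZ(z), illustrative only
n_corr = shifted_nz(z, n_pz, delta_z=0.01)

mean_pz = np.sum(z * n_pz) / np.sum(n_pz)
mean_corr = np.sum(z * n_corr) / np.sum(n_corr)    # raised by roughly delta_z
```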
Interacting dark energy models have been proposed as attractive alternatives to $\Lambda$CDM. Forthcoming Stage-IV galaxy clustering surveys will constrain these models, but they require accurate modelling of the galaxy power spectrum multipoles on mildly non-linear scales. In this work we consider a dark scattering model, a simple one-parameter extension of $w$CDM that adds only the coupling strength $A$, which describes a pure momentum exchange between dark energy and dark matter. We then provide a comprehensive comparison of three approaches to modelling the non-linearities while including the effects of this dark-sector coupling. We base our modelling of non-linearities on the two most popular perturbation theory approaches, TNS and EFTofLSS. To test the validity and precision of the modelling, we perform an MCMC analysis using simulated data corresponding to a $\Lambda$CDM fiducial cosmology and Stage-IV survey specifications in two redshift bins, $z=0.5$ and $z=1$. We find that the most complex EFTofLSS-based model studied is better suited both to describing the mock data down to smaller scales and to extracting the most information. Using this model, we forecast uncertainties on the dark energy equation of state, $w$, and on the interaction parameter, $A$, finding $\sigma_w=0.06$ and $\sigma_A=1.1$ b/GeV for the analysis at $z=0.5$, and $\sigma_w=0.06$ and $\sigma_A=2.0$ b/GeV for the analysis at $z=1$. In addition, we show that a false detection of exotic dark energy at up to $3\sigma$ would occur should the non-linear modelling be incorrect, demonstrating the importance of the validation stage for an accurate interpretation of measurements.
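As a schematic of the forecasting step only, the snippet below runs an MCMC with emcee on a toy Gaussian likelihood in $(w, A)$ whose widths are set by hand to the quoted $z=0.5$ values. It is not the TNS or EFTofLSS multipole likelihood; the fiducial values, widths and sampler settings are all illustrative assumptions.

```python
import numpy as np
import emcee

# Toy Gaussian likelihood in (w, A): a stand-in for the full multipole
# likelihood, with fiducial w = -1, A = 0 (no dark-scattering coupling) and
# widths chosen for illustration only.
FID = np.array([-1.0, 0.0])
WIDTHS = np.array([0.06, 1.1])      # ~ the forecast sigma_w, sigma_A at z = 0.5

def log_prob(theta):
    return -0.5 * np.sum(((theta - FID) / WIDTHS) ** 2)

ndim, nwalkers = 2, 32
p0 = FID + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.std(axis=0))          # recovers ~(0.06, 1.1) by construction
```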
We present two galaxy shape catalogues from the Dark Energy Survey Year 1 data set, covering 1500 square degrees with a median redshift of $0.59$. The catalogues cover two main fields: Stripe 82, and an area overlapping the South Pole Telescope survey region. We describe our data analysis process and in particular our shape measurement using two independent shear measurement pipelines, METACALIBRATION and IM3SHAPE. The METACALIBRATION catalogue uses a Gaussian model with an innovative internal calibration scheme, and was applied to $riz$-band data, yielding 34.8M objects. The IM3SHAPE catalogue uses a maximum-likelihood bulge/disc model calibrated using simulations, and was applied to $r$-band data, yielding 21.9M objects. Both catalogues pass a suite of null tests that demonstrate their fitness for use in weak-lensing science. We estimate the $1\sigma$ uncertainties in the multiplicative shear calibration to be $0.013$ and $0.025$ for the METACALIBRATION and IM3SHAPE catalogues, respectively.
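A minimal, single-component caricature of the calibration idea behind METACALIBRATION is sketched below: the response is the finite difference of ellipticities remeasured on artificially sheared copies of each galaxy, and the mean shear is divided by the mean response. The array names and the dgamma value are hypothetical, and the production catalogue works with the full two-component response rather than this one-dimensional simplification.

```python
import numpy as np

def metacal_response_1d(e_plus, e_minus, dgamma=0.01):
    """Per-galaxy finite-difference response for one shear component:
    R = [e(+dgamma) - e(-dgamma)] / (2 dgamma), where e_plus / e_minus are
    ellipticities remeasured on artificially sheared copies of each galaxy."""
    return (np.asarray(e_plus) - np.asarray(e_minus)) / (2.0 * dgamma)

def calibrated_mean_shear(e_obs, e_plus, e_minus, dgamma=0.01):
    """Response-corrected mean shear estimate: <e_obs> / <R>."""
    R = metacal_response_1d(e_plus, e_minus, dgamma)
    return np.mean(e_obs) / np.mean(R)
```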