Redshift-space distortions privilege the location of the observer in cosmological redshift surveys, breaking the translational symmetry of the underlying theory. This violation of statistical homogeneity has consequences for the modeling of clustering observables, leading to what are frequently called 'wide-angle effects'. We study these effects analytically, computing their signature in the clustering multipoles in configuration and Fourier space. We take into account both the physical wide-angle contributions and the terms generated by the galaxy selection function. Similar considerations also affect the way power spectrum estimators are constructed. We quantify analytically the biases which enter, and clarify the relation between what we measure and the underlying theoretical modeling. The presence of an angular window function is also discussed. Motivated by this analysis, we present new estimators for the three-dimensional Cartesian power spectrum and bispectrum multipoles, written in terms of spherical Fourier-Bessel coefficients. We show that the latter have several interesting properties, allowing in particular a clear separation between angular and radial modes.
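For context, the spherical Fourier-Bessel (SFB) coefficients referred to above follow, up to normalization conventions, the standard decomposition (the textbook definition, not necessarily this paper's exact notation):

$$
\delta_{\ell m}(k) = \sqrt{\frac{2}{\pi}} \int \mathrm{d}^3 r \, \delta(\mathbf{r})\, k\, j_\ell(k r)\, Y^{*}_{\ell m}(\hat{\mathbf{r}}),
$$

where $j_\ell$ are the spherical Bessel functions and $Y_{\ell m}$ the spherical harmonics. Because the basis factorizes into a radial part ($j_\ell$) and an angular part ($Y_{\ell m}$), angular and radial modes separate cleanly, which is the property the proposed estimators exploit.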
Calibrating the photometric redshifts of $>10^9$ galaxies for upcoming weak lensing cosmology experiments is a major challenge for the astrophysics community. The path to obtaining the required spectroscopic redshifts for training and calibration is daunting, given the anticipated depths of the surveys and the difficulty in obtaining secure redshifts for some faint galaxy populations. Here we present an analysis of the problem based on the self-organizing map, a method of mapping the distribution of data in a high-dimensional space and projecting it onto a lower-dimensional representation. We apply this method to existing photometric data from the COSMOS survey, selected to approximate the anticipated Euclid weak lensing sample, enabling us to robustly map the empirical distribution of galaxies in the multidimensional color space defined by the expected Euclid filters. Mapping this multicolor distribution lets us determine where in galaxy color space redshifts from current spectroscopic surveys exist, and where they are systematically missing. Crucially, the method lets us determine whether a spectroscopic training sample is representative of the full photometric space occupied by the galaxies in a survey. We explore optimal sampling techniques and estimate the additional spectroscopy needed to map out the color-redshift relation, finding that sampling the galaxy distribution in color space in a systematic way can efficiently meet the calibration requirements. While the analysis presented here focuses on the Euclid survey, a similar analysis can be applied to other surveys facing the same calibration challenge, such as DES, LSST, and WFIRST.
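As an illustration of the approach, the sketch below (not the authors' pipeline; the MiniSom package, the map size, and all mock arrays are assumptions) trains a self-organizing map on a multicolor sample and flags cells of color space that lack spectroscopic coverage:

```python
# Minimal sketch: map a multicolor galaxy sample with a self-organizing
# map and flag cells missing spectroscopic coverage. Assumes the MiniSom
# package; the mock arrays stand in for real Euclid-like photometry.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
phot_colors = rng.normal(size=(50_000, 7))   # mock: 7 colors per galaxy
spec_colors = phot_colors[:2_000]            # mock: subset with spec-z

som = MiniSom(40, 40, phot_colors.shape[1], sigma=1.5,
              learning_rate=0.5, random_seed=0)
som.train_random(phot_colors, 10_000)

def cell_counts(data):
    counts = np.zeros((40, 40), dtype=int)
    for x in data:
        i, j = som.winner(x)                 # best-matching SOM cell
        counts[i, j] += 1
    return counts

phot_counts = cell_counts(phot_colors)
spec_counts = cell_counts(spec_colors)

# Cells occupied by the photometric sample but missing spectroscopy:
missing = (phot_counts > 0) & (spec_counts == 0)
frac = phot_counts[missing].sum() / phot_counts.sum()
print(f"{frac:.1%} of galaxies live in color cells without spec-z")
```

Each SOM cell groups galaxies of similar colors, so cells that the photometric sample populates but the spectroscopic sample does not point directly at where targeted follow-up spectroscopy is needed.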
Galaxy redshift surveys are one of the pillars of the current standard cosmological model and remain a key tool in the experimental effort to understand the origin of cosmic acceleration. To this end, the next generation of surveys aims at achieving sub-percent precision in the measurement of the equation of state of dark energy $w(z)$ and the growth rate of structure $f(z)$. This, however, requires comparable control over systematic errors, stressing the need for improved modelling methods. In this contribution we review, at an introductory level, some highlights of the work done in this direction by the {\it Darklight} project. Supported by an ERC Advanced Grant, {\it Darklight} developed novel techniques for clustering analysis, which were tested through numerical simulations before being applied to galaxy data, in particular those of the recently completed VIPERS redshift survey. We focus in particular on: (a) advances in estimating the growth rate of structure from redshift-space distortions; (b) parameter estimation through global Bayesian reconstruction of the density field from survey data; (c) the impact of massive neutrinos on large-scale structure measurements. Overall, {\it Darklight} has contributed to paving the way for forthcoming high-precision experiments, such as {\it Euclid}, the next ESA cosmological mission.
The kinetic Sunyaev-Zel'dovich (kSZ) effect is a potentially powerful probe of the missing baryons. However, the kSZ signal is overwhelmed by various contaminations, and its cosmological application is hampered by the loss of redshift information due to the projection effect. We propose a kSZ tomography method to alleviate these problems, with the aid of galaxy spectroscopic redshift surveys. We propose to estimate the large-scale peculiar velocity through the 3D galaxy distribution, weigh it by the 3D galaxy density, and adopt the product, projected along the line of sight with a proper weighting, as an estimator of the true kSZ temperature fluctuation $\Theta$. We thus propose to measure the kSZ signal through the $\hat{\Theta}$-$\Theta$ cross correlation. This approach has a number of advantages (see the paper for details). We test the proposed kSZ tomography against non-adiabatic and adiabatic hydrodynamical simulations. We confirm that $\hat{\Theta}$ is indeed tightly correlated with $\Theta$ at $k\lesssim 1\,h/$Mpc, although nonlinearities in the density and velocity fields and nonlinear redshift distortions do weaken the tightness of the $\hat{\Theta}$-$\Theta$ correlation. We further quantify the reconstruction noise in $\hat{\Theta}$ from galaxy distribution shot noise. Based on these results, we quantify the applicability of the proposed kSZ tomography for future surveys. We find that, in combination with the BigBOSS-N spectroscopic redshift survey, the PLANCK CMB experiment will be able to detect the kSZ signal with an overall significance of $\sim 50\sigma$ and further measure its redshift distribution in many redshift bins over $0<z<2$.
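A minimal sketch of the $\hat{\Theta}$ construction, assuming linear-theory velocity reconstruction on a periodic grid with a flat-sky, $z$-axis line of sight (the grid size, the $faH$ value, and the uniform weighting are illustrative assumptions, not the paper's setup):

```python
# Sketch: estimate the large-scale radial velocity from the 3D galaxy
# density via the linear continuity equation (v(k) = i*f*a*H*delta(k)*k/k^2),
# weight it by the density, and project along the line of sight.
import numpy as np

n, box = 128, 1000.0                      # grid cells, box size [Mpc/h]
rng = np.random.default_rng(1)
delta_g = rng.normal(size=(n, n, n))      # mock 3D galaxy overdensity

k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                          # avoid division by zero

faH = 50.0                                 # ~ f*a*H in km/s per (Mpc/h), illustrative
delta_k = np.fft.fftn(delta_g)
vz_k = 1j * faH * kz / k2 * delta_k        # linear-theory radial velocity
vz_k[0, 0, 0] = 0.0
vz = np.real(np.fft.ifftn(vz_k))

# Density-weighted momentum, projected along the (assumed) line of sight:
theta_hat = np.mean((1.0 + delta_g) * vz, axis=2)
print(theta_hat.shape)                     # 2D map to cross-correlate with the CMB
```

The resulting 2D map plays the role of $\hat{\Theta}$ and would be cross-correlated with the CMB temperature map to extract the kSZ signal.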
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measuring the distribution of matter in the universe, which can be used to learn about the nature of dark energy, the physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., $V_{\rm survey}\sim 10\,{\rm Gpc}^3$) to be covered, and thus tends to be expensive. A sparse sampling method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, $V_{\rm survey}$, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with the precision expected for a survey covering a volume of $V_{\rm survey}$ (rather than the sum of the volumes of the observed regions), with the number density of galaxies given by the total number of observed galaxies divided by $V_{\rm survey}$ (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, whereas deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
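The 1D toy sketch below illustrates the window-function argument (the grid, tile sizes, and jitter amplitude are illustrative assumptions): a regularly spaced mask has Fourier power only at harmonics of the tiling frequency, so modes below that frequency are recovered cleanly, while a jittered mask leaks power across all modes, producing the calculable window convolution mentioned above.

```python
# Toy comparison of the window function of a regular vs. a jittered
# sparse-sampling mask in 1D. All numbers are illustrative.
import numpy as np

n, spacing, width = 4096, 32, 8            # grid cells, tile separation/size
starts = np.arange(0, n, spacing)

mask_reg = np.zeros(n)
for s in starts:
    mask_reg[s:s + width] = 1.0            # regularly spaced tiles

rng = np.random.default_rng(2)
mask_jit = np.zeros(n)
for s in starts:
    j = s + rng.integers(-4, 5)            # jittered tile positions
    mask_jit[max(j, 0):j + width] = 1.0

def window_power(mask):
    w = np.fft.rfft(mask - mask.mean())    # remove DC, Fourier transform
    return np.abs(w)**2 / mask.size

# Regular mask: power confined to a sparse comb of harmonics;
# jittered mask: power leaks into (almost) every mode.
print(np.count_nonzero(window_power(mask_reg) > 1e-10),
      np.count_nonzero(window_power(mask_jit) > 1e-10))
```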
We present a public code to generate random fields with an arbitrary probability distribution function (PDF) and an arbitrary correlation function. The algorithm is cosmology-independent and applicable to any stationary stochastic process over a three-dimensional grid. We implement it for the matter density field, showing its benefits over the lognormal approximation, which is often used in cosmology for the generation of mock catalogues. We find that the covariance of the power spectrum from the new fast realizations is more accurate than that from a lognormal model. As a proof of concept, we also apply the new simulation scheme to the divergence of the Lagrangian displacement field. We find that information from the correlation function and the PDF of the displacement divergence provides only a modest improvement over other standard analytical techniques in describing the particle field of the simulation. This suggests that further progress in this direction should come from multi-scale or non-local properties of the initial matter distribution.
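One standard building block behind such algorithms is a rank-preserving mapping of a Gaussian field onto the target PDF; the sketch below (with an illustrative lognormal target and power-law spectrum, not the paper's exact method) shows the idea. The mapping distorts the correlation function, which a full algorithm compensates for by iterating on the input Gaussian spectrum.

```python
# Sketch of a "translation process": draw a Gaussian random field with a
# chosen spectrum, then map its values through the target inverse CDF.
# Target PDF and spectrum here are illustrative stand-ins.
import numpy as np
from scipy import stats

n = 64
rng = np.random.default_rng(3)

# Gaussian field with a power-law spectrum on a 3D grid
k = np.sqrt(sum(ki**2 for ki in np.meshgrid(
    *3 * [np.fft.fftfreq(n)], indexing="ij")))
k[0, 0, 0] = 1.0                              # regularize the zero mode
pk = k**-2.0
white = np.fft.fftn(rng.normal(size=(n, n, n)))
gauss = np.real(np.fft.ifftn(white * np.sqrt(pk)))
gauss /= gauss.std()                          # unit-variance Gaussian field

# Rank-preserving map onto the target PDF via the inverse CDF
u = stats.norm.cdf(gauss)                     # uniform values in (0, 1)
target = stats.lognorm(s=0.5)                 # illustrative target PDF
field = target.ppf(u)                         # field with the target one-point PDF
print(field.mean(), field.std())
```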