We study the parity-odd part of the linear galaxy two-point correlation function (which we shall call the Doppler term) that arises from wide-angle, velocity, Doppler lensing and cosmic acceleration effects. Because it is only important at low redshift and at large angular separations, the Doppler term is usually neglected in the current generation of galaxy surveys. For future wide-angle galaxy surveys such as Euclid, SPHEREx and the SKA, however, we show that the Doppler term must be included. Its effect is dominated by the magnification due to relativistic aberration and by the slope of the galaxy redshift distribution, and it generally mimics local-type primordial non-Gaussianity with an effective nonlinearity parameter $f_{\rm NL}^{\rm eff}$ of a few; we show that this would affect forecasts for measurements of $f_{\rm NL}$ at low redshift. Our results show that a low-redshift survey with a large number density over a wide area of the sky could detect the Doppler term with a signal-to-noise ratio of $\sim 1-20$, depending on the survey specifications.
Doppler lensing is the apparent change in object size and magnitude due to peculiar velocities. Objects falling into an overdensity appear larger on its near side, and smaller on its far side, than typical objects at the same redshifts. This effect dominates over the usual gravitational lensing magnification at low redshift. Doppler lensing is a promising new probe of cosmology, and we explore in detail how to utilize the effect with forthcoming surveys. We present cosmological simulations of the Doppler and gravitational lensing effects based on the Millennium simulation. We show that Doppler lensing can be detected around stacked voids or unvirialised overdensities. New power spectra and correlation functions are proposed which are designed to be sensitive to Doppler lensing. We consider the impact of gravitational lensing and intrinsic size correlations on these quantities. We compute the correlation functions and forecast the errors for realistic forthcoming surveys, providing predictions for constraints on cosmological parameters. Finally, we demonstrate how we can make 3-D potential maps of large volumes of the Universe using Doppler lensing.
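At linear order, the size change described above is commonly written as a velocity-induced convergence. As a sketch (sign conventions for $\mathbf{v}\cdot\hat{\mathbf{n}}$ vary between papers; here $\mathbf{v}\cdot\hat{\mathbf{n}}>0$ denotes a source receding from the observer):

```latex
\[
  \kappa_v \simeq \left( \frac{1}{\chi\,\mathcal{H}} - 1 \right)
  \frac{\mathbf{v}\cdot\hat{\mathbf{n}}}{c},
\]
```

where $\chi$ is the comoving distance to the source and $\mathcal{H}$ is the conformal Hubble rate. Since $1/(\chi\mathcal{H}) \gg 1$ at low redshift, $\kappa_v$ can dominate over the gravitational convergence there, which is why the near side of an infalling overdensity (receding sources) appears magnified.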
We perform theoretical and numerical studies of the full relativistic two-point galaxy correlation function, considering the linear-order scalar and tensor perturbation contributions and the wide-angle effects. Using the gauge-invariant relativistic description of galaxy clustering and accounting for the contributions at the observer position, we demonstrate that the complete theoretical expression is devoid of any long-mode contributions from scalar or tensor perturbations and is free of infrared divergences, in agreement with the equivalence principle. By showing that the gravitational potential contribution to the correlation function converges in the infrared, our study justifies an IR cut-off $(k_{\text{IR}} \leq H_0)$ in computing the gravitational potential contribution. Using the full gauge-invariant expression, we numerically compute the galaxy two-point correlation function and study the individual contributions in the conformal Newtonian gauge. We find that the terms at the observer position, such as the coordinate lapses and the observer velocity (missing in the standard formalism), dominate over the other relativistic contributions in the conformal Newtonian gauge, such as the source velocity, the gravitational potential, the integrated Sachs-Wolfe effect, the Shapiro time delay and the lensing convergence. Compared to the standard Newtonian theoretical predictions that consider only the density fluctuation and redshift-space distortions, the relativistic effects in galaxy clustering result in a few percent-level systematic errors beyond the scale of the baryonic acoustic oscillation. Our theoretical and numerical study provides a comprehensive understanding of the relativistic effects in the galaxy two-point correlation function, as it proves the validity of the theoretical prediction and accounts for effects that are often neglected in its numerical evaluation.
The growth history of large-scale structure in the Universe is a powerful probe of the cosmological model, including the nature of dark energy. We study the growth rate of cosmic structure to redshift $z = 0.9$ using more than $162{,}000$ galaxy redshifts from the WiggleZ Dark Energy Survey. We divide the data into four redshift slices with effective redshifts $z = [0.2, 0.4, 0.6, 0.76]$ and, in each sample, measure and model the 2-point galaxy correlation function in directions parallel and transverse to the line of sight. After simultaneously fitting for the galaxy bias factor, we recover values for the cosmic growth rate that are consistent with our assumed $\Lambda$CDM input cosmological model, with an accuracy of around 20% in each redshift slice. We investigate the sensitivity of our results to the details of the assumed model and the range of physical scales fitted, making close comparison with a set of N-body simulations for calibration. Our measurements are consistent with an independent power-spectrum analysis of a similar dataset, demonstrating that the results are not driven by systematic errors. We determine the pairwise velocity dispersion of the sample in a non-parametric manner, showing that it systematically increases with decreasing redshift, and investigate the Alcock-Paczynski effect of changing the assumed fiducial model on the results. Our techniques should prove useful for current and future galaxy surveys mapping the growth rate of structure using the 2-dimensional correlation function.
We show an efficient way to compute wide-angle or all-sky statistics of galaxy intrinsic alignment in three-dimensional configuration space. For this purpose, we expand the two-point correlation function using a newly introduced spin-dependent tripolar spherical harmonic basis. In this basis, the angular dependences on the two line-of-sight (LOS) directions pointing to each pair of objects, which are degenerate with each other in the conventional analysis under the small-angle or plane-parallel (PP) approximation, are unambiguously decomposed. Using this formalism, we compute, for the first time, the wide-angle auto- and cross-correlations between intrinsic ellipticities, number densities and velocities of galaxies, and compare them with the PP-limit results. For the ellipticity-ellipticity and density-ellipticity correlations, we find more than $10\%$ deviation from the PP-limit results if the opening angle between the two LOS directions exceeds $30^\circ - 50^\circ$. We also show that even when the PP-limit result is strictly zero, non-vanishing correlations are obtained over a wide range of scales, arising purely from curved-sky effects. Our results indicate the importance of data analysis that does not rely on the PP approximation in order to determine the cosmological parameters more precisely and/or find new physics with ongoing and forthcoming wide-angle galaxy surveys.
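For orientation, the ordinary (spin-0) tripolar spherical harmonics underlying this kind of expansion couple spherical harmonics of the two LOS directions $\hat{\mathbf{n}}_1$, $\hat{\mathbf{n}}_2$ and the separation direction $\hat{\mathbf{x}}$ through a Wigner 3j symbol (the spin-dependent basis referred to above generalises this by using spin-weighted harmonics for the ellipticity fields):

```latex
\[
  S_{\ell_1 \ell_2 L}(\hat{\mathbf{n}}_1, \hat{\mathbf{n}}_2, \hat{\mathbf{x}})
  = \sum_{m_1 m_2 M}
    \begin{pmatrix} \ell_1 & \ell_2 & L \\ m_1 & m_2 & M \end{pmatrix}
    Y_{\ell_1 m_1}(\hat{\mathbf{n}}_1)\,
    Y_{\ell_2 m_2}(\hat{\mathbf{n}}_2)\,
    Y_{L M}(\hat{\mathbf{x}}),
\]
```

so that a wide-angle correlation function can be written as a sum of scale-dependent coefficients multiplying these basis functions, with the full dependence on both LOS directions kept explicit rather than collapsed to a single direction as in the PP limit.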
The two-point correlation function of the galaxy distribution is a key cosmological observable that allows us to constrain the dynamical and geometrical state of our Universe. To measure the correlation function we need to know both the galaxy positions and the expected galaxy density field. The expected field is commonly specified using a Monte Carlo sampling of the volume covered by the survey and, to minimize additional sampling errors, this random catalog has to be much larger than the data catalog. Correlation function estimators compare data-data pair counts to data-random and random-random pair counts, where random-random pairs usually dominate the computational cost. Future redshift surveys will deliver spectroscopic catalogs of tens of millions of galaxies. Given the large number of random objects required to guarantee sub-percent accuracy, it is of paramount importance to improve the efficiency of the algorithm without degrading its precision. We show both analytically and numerically that splitting the random catalog into a number of subcatalogs, each of the same size as the data catalog, when calculating random-random pairs, and excluding pairs across different subcatalogs, provides the optimal error at fixed computational cost. For a random catalog fifty times larger than the data catalog, this reduces the computation time by a factor of more than ten without affecting estimator variance or bias.
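The split-random idea above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: the brute-force pair counter, the function names (`pair_counts`, `ls_estimator_split`), and the uniform toy catalogs are all illustrative choices. It computes the Landy-Szalay estimator with RR taken only from within-subcatalog pairs, which is where the speed-up comes from (each subcatalog contributes $O(N_d^2)$ pairs instead of $O(N_r^2)$ total).

```python
import numpy as np

def pair_counts(a, b=None, r_max=0.2, n_bins=4):
    """Brute-force pair counts in separation bins.
    If b is None, count distinct pairs within a (auto-pairs);
    otherwise count all cross-pairs between a and b."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    if b is None:
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        d = d[np.triu_indices(len(a), k=1)]  # keep each pair once
    else:
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).ravel()
    counts, _ = np.histogram(d, bins=edges)
    return counts

def ls_estimator_split(data, randoms, n_split, r_max=0.2, n_bins=4):
    """Landy-Szalay xi(r) with the random catalog split into n_split
    subcatalogs: RR uses only within-subcatalog pairs (cross-subcatalog
    pairs are excluded), while DR still uses the full random catalog."""
    nd, nr = len(data), len(randoms)
    dd = pair_counts(data, r_max=r_max, n_bins=n_bins) / (nd * (nd - 1) / 2)
    dr = pair_counts(data, randoms, r_max=r_max, n_bins=n_bins) / (nd * nr)
    rr = np.zeros(n_bins)
    n_rr_pairs = 0
    for sub in np.array_split(randoms, n_split):
        rr += pair_counts(sub, r_max=r_max, n_bins=n_bins)
        n_rr_pairs += len(sub) * (len(sub) - 1) / 2
    rr /= n_rr_pairs  # normalize by the within-subcatalog pair total
    return (dd - 2 * dr + rr) / rr

# Toy catalogs: unclustered points in a unit box; the random catalog is
# 10x the data size and is split into 10 data-sized subcatalogs.
rng = np.random.default_rng(0)
data = rng.random((200, 3))
randoms = rng.random((2000, 3))
xi = ls_estimator_split(data, randoms, n_split=10)
print(xi)  # roughly zero in each bin for unclustered points, up to shot noise
```

Note that the normalization matters: because cross-subcatalog pairs are excluded, RR must be divided by the number of within-subcatalog pairs actually counted, not by $N_r(N_r-1)/2$, or the estimator acquires a bias.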