Over the next decade, improvements in cosmological parameter constraints will be driven by surveys of large-scale structure. Its inherent non-linearity suggests that significant information will be embedded in higher correlations beyond the two-point function. Extracting this information is extremely challenging: it requires accurate theoretical modelling and significant computational resources to estimate the covariance matrix describing correlations between different Fourier configurations. We investigate whether it is possible to reduce the covariance matrix without significant loss of information by using a proxy that aggregates the bispectrum over a subset of Fourier configurations. Specifically, we study the constraints on $\Lambda$CDM parameters from combining the power spectrum with (a) the modal bispectrum decomposition, (b) the line correlation function and (c) the integrated bispectrum. We forecast the error bars achievable on $\Lambda$CDM parameters using these proxies in a future galaxy survey and compare them to those obtained from measurements of the Fourier bispectrum, including simple estimates of their degradation in the presence of shot noise. Our results demonstrate that the modal bispectrum performs as well as the Fourier bispectrum, even with considerably fewer modes than Fourier configurations. The line correlation function performs well but does not match the modal bispectrum. The integrated bispectrum is comparatively insensitive to changes in the background cosmology. We find that adding bispectrum data can improve constraints on bias parameters and the normalization $\sigma_8$ by up to a factor of 5 compared to power spectrum measurements alone. For other parameters, improvements of up to $\sim 20$% are possible. Finally, we use a range of theoretical models to explore how the sophistication required for realistic predictions varies with each proxy. (abridged)
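The error-bar forecasts described above rest on the standard Fisher-matrix machinery: for a Gaussian likelihood, $F_{ab} = (\partial\mu/\partial\theta_a)^T C^{-1} (\partial\mu/\partial\theta_b)$, and the marginalized $1\sigma$ error on parameter $a$ is $\sqrt{(F^{-1})_{aa}}$. A minimal sketch, in which the toy derivatives and covariance are placeholders rather than any survey's actual values:

```python
import numpy as np

def fisher_matrix(dmu_dtheta, C):
    """Fisher matrix for a Gaussian likelihood.

    dmu_dtheta: (n_params, n_data) derivatives of the model data vector
                with respect to each parameter, at the fiducial point.
    C:          (n_data, n_data) data covariance matrix.
    """
    Cinv = np.linalg.inv(C)
    return dmu_dtheta @ Cinv @ dmu_dtheta.T

def marginalized_errors(F):
    """Marginalized 1-sigma errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))
```

With uncorrelated data and derivative vectors along distinct data directions, the Fisher matrix is diagonal and each error is simply the inverse signal-to-noise of the corresponding direction; degeneracies between parameters appear as off-diagonal entries that inflate the marginalized errors.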
Higher-order clustering statistics, like the galaxy bispectrum, can add complementary cosmological information to what is accessible with two-point statistics, like the power spectrum. While the standard way of measuring the bispectrum involves estimating a bispectrum value in a large number of Fourier triangle bins, the compressed modal bispectrum approximates the bispectrum as a linear combination of basis functions and estimates the expansion coefficients on the chosen basis. In this work, we compare the two estimators by using parallel pipelines to analyze the real-space halo bispectrum measured in a suite of $N$-body simulations corresponding to a total volume of $\sim 1{,}000\,h^{-3}\,{\rm Gpc}^3$, with covariance matrices estimated from 10,000 mock halo catalogs. We find that the modal bispectrum yields constraints that are consistent and competitive with the standard bispectrum analysis: for the halo bias and shot noise parameters within the tree-level halo bispectrum model up to $k_{\rm max} \approx 0.06\,(0.10)\,h\,{\rm Mpc}^{-1}$, only 6 (10) modal expansion coefficients are necessary to obtain constraints equivalent to the standard bispectrum estimator using $\sim$ 20 to 1,600 triangle bins, depending on the bin width. For this work, we have implemented a modal estimator pipeline with Markov Chain Monte Carlo sampling for the first time, and we discuss in detail how the parameter posteriors and modal expansion are robust to, or sensitive to, several user settings within the modal bispectrum pipeline. The combination of the highly efficient compression that is achieved and the large number of mock catalogs available allows us to quantify how our modal bispectrum constraints depend on the number of mocks that are used to estimate covariance matrices and the functional form of the likelihood.
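The core of the modal approach is the projection of the binned bispectrum onto a small set of basis functions, after which only the expansion coefficients are carried forward. The following sketch illustrates the idea with a least-squares projection; the basis functions and the absence of any inverse-covariance weighting are simplifying assumptions, not the actual pipeline's choices:

```python
import numpy as np

def modal_coefficients(B, triangles, basis_funcs):
    """Least-squares fit of B(k1, k2, k3) ~ sum_n beta_n Q_n(k1, k2, k3).

    B:           (n_tri,) measured bispectrum values, one per triangle bin.
    triangles:   (n_tri, 3) array of (k1, k2, k3) bin centres.
    basis_funcs: list of callables Q_n(k1, k2, k3).
    """
    # Design matrix: one column per basis function, one row per triangle bin.
    Q = np.column_stack([f(*triangles.T) for f in basis_funcs])
    beta, *_ = np.linalg.lstsq(Q, B, rcond=None)
    return beta

def reconstruct(beta, triangles, basis_funcs):
    """Rebuild the (approximate) bispectrum from the modal coefficients."""
    Q = np.column_stack([f(*triangles.T) for f in basis_funcs])
    return Q @ beta
```

The compression is lossless whenever the true bispectrum lies in the span of the chosen basis; in practice a handful of well-chosen modes captures most of the smooth, large-scale signal, which is why so few coefficients match hundreds of triangle bins.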
We apply two compression methods to the galaxy power spectrum monopole/quadrupole and bispectrum monopole measurements from the BOSS DR12 CMASS sample. Both methods reduce the dimension of the original data-vector to the number of cosmological parameters considered, using the Karhunen-Loève algorithm with an analytic covariance model. In the first case, we infer the posterior through MCMC sampling from the likelihood of the compressed data-vector (MC-KL). The second, faster option works by first Gaussianising and then orthogonalising the parameter space before the compression; in this option (G-PCA) we only need to run a low-resolution preliminary MCMC sample for the Gaussianisation to compute our posterior. Both compression methods accurately reproduce the posterior distributions obtained by standard MCMC sampling on the CMASS dataset for a $k$-space range of $0.03$--$0.12\,h/\mathrm{Mpc}$. The compression enables us to increase the number of bispectrum measurements by a factor of $\sim 23$ over the standard binning (from 116 to 2734 triangles used), which is otherwise limited by the number of mock catalogues available. This reduces the 68% credible intervals for the parameters $(b_1, b_2, f, \sigma_8)$ by $(-24.8\%, -52.8\%, -26.4\%, -21\%)$, respectively. The best-fit values we obtain are $(b_1 = 2.31 \pm 0.17$, $b_2 = 0.77 \pm 0.19$, $f(z_{\mathrm{CMASS}}) = 0.67 \pm 0.06$, $\sigma_8(z_{\mathrm{CMASS}}) = 0.51 \pm 0.03)$. Using these methods for future redshift surveys like DESI, Euclid and PFS will drastically reduce the number of simulations needed to compute accurate covariance matrices and will facilitate tighter constraints on cosmological parameters.
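The Karhunen-Loève step above reduces the data vector to one number per parameter. A common realisation of this idea is score (MOPED-style) compression, $t_a = (\partial\mu/\partial\theta_a)^T C^{-1} (d - \mu)$, evaluated at a fiducial model with an analytic covariance $C$. A minimal sketch, with all names and the toy numbers in the test below being illustrative assumptions:

```python
import numpy as np

def kl_compress(d, mu, dmu_dtheta, C):
    """Compress a data vector d (length N) to n_params numbers.

    mu:          (N,) fiducial model data vector.
    dmu_dtheta:  (n_params, N) derivatives of mu w.r.t. each parameter.
    C:           (N, N) analytic data covariance.
    Returns the score-compressed statistics t_a, one per parameter.
    """
    Cinv = np.linalg.inv(C)
    return dmu_dtheta @ Cinv @ (d - mu)
```

For a linear model with Gaussian noise this compression is lossless (the compressed statistics are sufficient for the parameters), which is what allows many more triangle bins to be used than there are mock catalogues to estimate their full covariance.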
Clustering of large-scale structure provides significant cosmological information through the power spectrum of density perturbations. Additional information can be gained from higher-order statistics like the bispectrum, especially to break the degeneracy between the linear halo bias $b_1$ and the amplitude of fluctuations $\sigma_8$. We propose new simple, computationally inexpensive bispectrum statistics that are near optimal for specific applications such as bias determination. Corresponding to the Legendre decomposition of nonlinear halo bias and gravitational coupling at second order, these statistics are given by the cross-spectra of the density with three quadratic fields: the squared density, a tidal term, and a shift term. For halos and galaxies the first two have associated nonlinear bias terms $b_2$ and $b_{s^2}$, respectively, while the shift term has none in the absence of velocity bias (valid in the $k \rightarrow 0$ limit). Thus the linear bias $b_1$ is best determined by the shift cross-spectrum, while the squared density and tidal cross-spectra mostly tighten constraints on $b_2$ and $b_{s^2}$ once $b_1$ is known. Since the form of the cross-spectra is derived from optimal maximum-likelihood estimation, they contain the full bispectrum information on bias parameters. Perturbative analytical predictions for their expectation values and covariances agree with simulations on large scales, $k \lesssim 0.09\,h/\mathrm{Mpc}$ at $z = 0.55$ with Gaussian $R = 20\,h^{-1}\,\mathrm{Mpc}$ smoothing, for matter-matter-matter and matter-matter-halo combinations. For halo-halo-halo cross-spectra the model also needs to include corrections to the Poisson stochasticity.
We investigate the potential of using cosmic voids as a probe to constrain cosmological parameters through the gravitational lensing effect of the cosmic microwave background (CMB) and make predictions for the next generation surveys. By assuming the detection of a series of $\approx 5$--$10$ voids along a line of sight within a square-degree patch of the sky, we find that they can be used to break the degeneracy direction of some of the cosmological parameter constraints (for example $\omega_b$ and $\Omega_\Lambda$) in comparison with the constraints from random CMB skies covering the same area for a survey with extensive integration time. This analysis is based on our current knowledge of the average void profile and analytical estimates of the void number function. We also provide combined cosmological parameter constraints between a sky patch where a series of voids is detected and a patch without voids (a randomly selected patch). The full potential of this technique relies on an accurate determination of the void profile at the $\approx 10$% level. For a small-area CMB observation with extensive integration time and a high signal-to-noise ratio, CMB lensing with such a series of voids will provide a route to cosmological parameter constraints complementary to the CMB observations. Example parameter constraints with a series of five voids on a $1.0^\circ \times 1.0^\circ$ patch of the sky are $100\,\omega_b = 2.20 \pm 0.27$, $\omega_c = 0.120 \pm 0.022$, $\Omega_\Lambda = 0.682 \pm 0.078$, $\Delta_{\mathcal{R}}^2 = (2.22 \pm 7.79) \times 10^{-9}$, $n_s = 0.962 \pm 0.097$ and $\tau = 0.925 \pm 1.747$ at 68% C.L.
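Combining the constraints from the void-bearing patch and the random patch, as described above, amounts (in the Gaussian approximation) to summing the two Fisher matrices, i.e. the inverse parameter covariances. A minimal sketch with placeholder toy covariances, not the forecast values quoted in the text:

```python
import numpy as np

def combine_fisher(cov_a, cov_b):
    """Combined parameter covariance from two independent measurements.

    Inverse covariances (Fisher matrices) add for independent data,
    so the combined covariance is (F_a + F_b)^{-1}.
    """
    F = np.linalg.inv(cov_a) + np.linalg.inv(cov_b)
    return np.linalg.inv(F)
```

When the two patches constrain different degeneracy directions, the combined errors shrink by much more than the naive $\sqrt{2}$ of two identical measurements, which is the mechanism behind the degeneracy breaking noted above.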
Optimal extraction of the non-Gaussian information encoded in the Large-Scale Structure (LSS) of the universe lies at the forefront of modern precision cosmology. We propose achieving this task through the use of the Wavelet Scattering Transform (WST), which subjects an input field to a layer of non-linear transformations that are sensitive to non-Gaussianity in spatial density distributions through a generated set of WST coefficients. In order to assess its applicability in the context of LSS surveys, we apply the WST to the 3D overdensity field obtained from the Quijote simulations, from which we extract the Fisher information in six cosmological parameters. The WST is found to deliver a large improvement in the marginalized errors on all parameters, ranging between $1.2$ and $4\times$ tighter than the corresponding ones obtained from the regular 3D cold dark matter + baryon power spectrum, as well as a 50% improvement over the neutrino mass constraint given by the marked power spectrum. Through this first application to 3D cosmological fields, we demonstrate the great promise held by this novel statistic and set the stage for its future application to actual galaxy observations.
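The first-order WST coefficients described above are built by convolving the field with band-pass wavelets, taking the modulus, and spatially averaging; the modulus is the non-linear step that moves non-Gaussian information into the coefficients. The 1D sketch below uses simple Gaussian band-pass filters as stand-ins for the wavelets applied to 3D density fields; the filter family and its width are illustrative assumptions:

```python
import numpy as np

def wst_first_order(field, n_scales=4):
    """First-order scattering coefficients S1(j) = < |field * psi_j| >."""
    n = field.size
    fk = np.fft.fft(field)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))
    coeffs = []
    for j in range(n_scales):
        k0 = n / 2 / 2**j  # centre frequency halves with each scale
        # Gaussian band-pass filter standing in for a wavelet.
        psi = np.exp(-0.5 * ((k - k0) / (0.4 * k0))**2)
        conv = np.fft.ifft(fk * psi)          # convolve in Fourier space
        coeffs.append(np.abs(conv).mean())    # modulus, then spatial average
    return np.array(coeffs)
```

A field with power concentrated at one scale lights up the matching coefficient, so the set of coefficients acts as a compact, stable summary of the field across scales; second-order coefficients (not sketched here) repeat the wavelet-modulus step on each first-order output.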