
Disconnected pseudo-$C_\ell$ covariances for projected large-scale structure data

Publication date: 2019
Field: Physics
Language: English





The disconnected part of the power spectrum covariance matrix (also known as the Gaussian covariance) is the dominant contribution on large scales for galaxy clustering and weak lensing datasets. The presence of a complicated sky mask causes non-trivial correlations between different Fourier/harmonic modes, which must be accurately characterized in order to obtain reliable cosmological constraints. This is particularly relevant for galaxy survey data. Unfortunately, an exact calculation of these correlations involves $O(\ell_{\rm max}^6)$ operations that become computationally impractical very quickly. We present an implementation of approximate methods to estimate the Gaussian covariance matrix of power spectra involving spin-0 and spin-2 flat- and curved-sky fields, expanding on existing algorithms. These methods achieve an $O(\ell_{\rm max}^3)$ scaling, which makes the computation of the covariance matrix as fast as the computation of the power spectrum itself. We quantify the accuracy of these methods on large-scale structure and weak lensing data, making use of a large number of Gaussian but otherwise realistic simulations. We show that, using the approximate covariance matrix, we are able to recover the true posterior distribution of cosmological parameters to high accuracy. We also quantify the shortcomings of these methods, which become unreliable on the very largest scales, as well as for covariance matrix elements involving cosmic shear $B$ modes. The algorithms presented here are implemented in the public code NaMaster (https://github.com/LSSTDESC/NaMaster).
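
The computation described in the abstract is exposed through NaMaster's python wrapper, pymaster. Below is a minimal sketch for a single spin-0 field, assuming the pymaster interface (NmtField, NmtWorkspace, NmtCovarianceWorkspace, gaussian_covariance) roughly as documented in the repository linked above; the mask, map and fiducial spectrum are placeholders rather than real survey products, and call signatures should be checked against the NaMaster documentation.

```python
import numpy as np
import healpy as hp
import pymaster as nmt

nside = 256
npix = hp.nside2npix(nside)

# Placeholder inputs: a half-sky binary mask and a Gaussian map drawn
# from a toy fiducial spectrum.
mask = np.zeros(npix)
mask[:npix // 2] = 1.0
cl_fid = 1.0 / (np.arange(3 * nside) + 10.0)
mp = hp.synfast(cl_fid, nside)

# Spin-0 field and linear bandpower binning.
f0 = nmt.NmtField(mask, [mp])
b = nmt.NmtBin.from_nside_linear(nside, nlb=16)

# Mode-coupling workspace for the pseudo-C_ell estimator itself.
w = nmt.NmtWorkspace()
w.compute_coupling_matrix(f0, f0, b)

# Covariance workspace holding the O(l_max^3) mode-coupling coefficients.
cw = nmt.NmtCovarianceWorkspace()
cw.compute_coupling_coefficients(f0, f0, f0, f0)

# Gaussian (disconnected) covariance of the auto-spectrum, fed with the
# fiducial spectrum as the assumed theory prediction.
cov = nmt.gaussian_covariance(cw, 0, 0, 0, 0,
                              [cl_fid], [cl_fid], [cl_fid], [cl_fid],
                              w, wb=w)
print(cov.shape)  # (n_bandpowers, n_bandpowers) for this spin-0 case
```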



Related research

The jackknife method gives an internal covariance estimate for large-scale structure surveys and allows model-independent errors on cosmological parameters. Using the SDSS-III BOSS CMASS sample, we study how the jackknife size and number of resamplings impact the precision of the covariance estimate on the correlation function multipoles and the error on the inferred baryon acoustic scale. We compare the measurement with the MultiDark Patchy mock galaxy catalogues, and we also validate it against a set of log-normal mocks with the same survey geometry. We build several jackknife configurations that vary in size and number of resamplings. We introduce the Hartlap factor in the covariance estimate, which depends on the number of jackknife resamplings. We also find it useful to apply a tapering scheme when estimating the precision matrix from a limited number of resamplings. The results from CMASS and mock catalogues show that the error estimate of the baryon acoustic scale does not depend on the jackknife scale. For the shift parameter $\alpha$, we find an average error of 1.6%, 2.2% and 1.2% from the CMASS, Patchy and log-normal jackknife covariances, respectively. Although these uncertainties fluctuate significantly due to structural limitations of the jackknife method, our $\alpha$ estimates are in reasonable agreement with published pre-reconstruction analyses. Jackknife methods will provide valuable and complementary covariance estimates for future large-scale structure surveys.
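
As a schematic companion to the abstract above, the sketch below builds a delete-one jackknife covariance from a stack of resampled data vectors and applies the Hartlap factor when inverting it; the input array is synthetic, and the resampling geometry, tapering scheme and survey-specific details are omitted.

```python
import numpy as np

def jackknife_covariance(resampled, n_params=None):
    """Delete-one jackknife covariance of a data vector.

    resampled : (N, p) array of the statistic measured with each of the
        N jackknife regions removed in turn.
    n_params : if given, also return the Hartlap-corrected precision matrix.
    """
    n, p = resampled.shape
    diff = resampled - resampled.mean(axis=0)
    # Jackknife prefactor (N-1)/N, instead of 1/(N-1) for a plain sample.
    cov = (n - 1.0) / n * diff.T @ diff
    if n_params is None:
        return cov
    # Hartlap factor debiasing the inverse covariance estimated from N resamplings.
    hartlap = (n - n_params - 2.0) / (n - 1.0)
    precision = hartlap * np.linalg.inv(cov)
    return cov, precision

# Toy usage with fake resamplings of a 10-element correlation-function vector.
rng = np.random.default_rng(0)
fake = rng.normal(size=(100, 10))
cov, prec = jackknife_covariance(fake, n_params=10)
```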
The covariance matrix $\boldsymbol{\Sigma}$ of non-linear clustering statistics that are measured in current and upcoming surveys is of fundamental interest for comparing cosmological theory and data and a crucial ingredient for the likelihood approximations underlying widely used parameter inference and forecasting methods. The extreme number of simulations needed to estimate $\boldsymbol{\Sigma}$ to sufficient accuracy poses a severe challenge. Approximating $\boldsymbol{\Sigma}$ using inexpensive but biased surrogates introduces model error with respect to full simulations, especially in the non-linear regime of structure growth. To address this problem we develop a matrix generalization of Convergence Acceleration by Regression and Pooling (CARPool) to combine a small number of simulations with fast surrogates and obtain low-noise estimates of $\boldsymbol{\Sigma}$ that are unbiased by construction. Our numerical examples use CARPool to combine GADGET-III $N$-body simulations with fast surrogates computed using COmoving Lagrangian Acceleration (COLA). Even at the challenging redshift $z=0.5$, we find variance reductions of at least $\mathcal{O}(10^1)$ and up to $\mathcal{O}(10^4)$ for the elements of the matter power spectrum covariance matrix on scales $8.9\times 10^{-3} < k_\mathrm{max} < 1.0\,h\,{\rm Mpc}^{-1}$. We demonstrate comparable performance for the covariance of the matter bispectrum, the matter correlation function and probability density function of the matter density field. We compare eigenvalues, likelihoods, and Fisher matrices computed using the CARPool covariance estimate with the standard sample covariance estimators and generally find considerable improvement except in cases where $\boldsymbol{\Sigma}$ is severely ill-conditioned.
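
The core control-variate idea behind CARPool can be illustrated with a scalar toy: pair each expensive simulation with a fast surrogate whose mean is known precisely from many cheap runs, and subtract the correlated surrogate fluctuations. All numbers below are synthetic and the regression coefficient is a simple sample estimate; the matrix generalization for $\boldsymbol{\Sigma}$ described in the abstract follows the same logic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: "simulation" and "surrogate" measurements of the same quantity,
# strongly correlated through a shared random component.
n_pairs = 15
truth = 1.0
common = rng.normal(size=n_pairs)
sims = truth + 0.3 * common + 0.05 * rng.normal(size=n_pairs)
surr = 0.8 + 0.3 * common + 0.05 * rng.normal(size=n_pairs)
mu_surrogate = 0.8  # assumed known precisely from many cheap surrogate runs

# Control-variate coefficient estimated from the paired runs.
beta = np.cov(sims, surr)[0, 1] / np.var(surr, ddof=1)

# CARPool-style estimate: surrogate fluctuations about their known mean
# are subtracted, reducing the variance of the simulation average.
naive = sims.mean()
carpool = np.mean(sims - beta * (surr - mu_surrogate))
print(naive, carpool)
```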
This is the second paper in a series where we propose a method of indirectly measuring large-scale structure using information from small-scale perturbations. The idea is to build a quadratic estimator from small-scale modes that provides a map of structure on large scales. We demonstrated in the first paper that the quadratic estimator works well on a dark-matter-only N-body simulation at a snapshot of $z=0$. Here we generalize the theory to the case of a light-cone halo catalog, taking a non-cubic region into consideration. We successfully apply the generalized version of the quadratic estimator to a light-cone halo catalog based on an N-body simulation of volume $\sim 15.03\,(h^{-1}\,{\rm Gpc})^3$. The most distant point in the light cone is at a redshift of $1.42$, indicating the applicability of our method to the next generation of galaxy surveys.
We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasar variability amplitudes and luminosities are, on average, tightly correlated. Magnification due to gravitational lensing increases a quasar's apparent luminosity while leaving its variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R200) from the nearest cluster; our measurements are consistent with expectations assuming NFW cluster profiles, particularly after accounting for the known uncertainty in the cluster centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and because the variability measurements can tolerate photometric errors of a few percent and the depths reached in current wide-field surveys. Given the data volume expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.
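
A purely illustrative toy of the measurement principle above: at fixed variability amplitude, lensing shifts a quasar's apparent luminosity by the magnification, so the ensemble magnification follows from the average offset of the lensed quasars from a reference variability-luminosity relation. The power-law relation and all numbers below are made up for the illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up reference relation between log10(variability amplitude) and
# log10(luminosity), calibrated on unlensed quasars (placeholder values).
slope, intercept = -0.3, 1.5
def predicted_log_amp(log_lum):
    return slope * log_lum + intercept

# Fake ensemble of lensed quasars: intrinsic luminosities magnified by a
# common factor mu_true; variability amplitudes are unchanged by lensing.
mu_true = 1.1
log_lum_intrinsic = rng.normal(45.5, 0.3, size=500)
log_amp = predicted_log_amp(log_lum_intrinsic) + 0.02 * rng.normal(size=500)
log_lum_observed = log_lum_intrinsic + np.log10(mu_true)

# Mean magnification from the mean luminosity offset at fixed variability:
# invert the reference relation to get the luminosity implied by each amplitude.
log_lum_implied = (log_amp - intercept) / slope
mu_est = 10 ** np.mean(log_lum_observed - log_lum_implied)
print(mu_est)  # close to mu_true for this toy ensemble
```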
An important aspect of large-scale structure data analysis is the presence of non-negligible theoretical uncertainties, which become increasingly important on small scales. We show how to incorporate these uncertainties in realistic power spectrum likelihoods by an appropriate change of the fitting model and the covariance matrix. The inclusion of the theoretical error has several advantages over the standard practice of using a sharp momentum cut $k_{\rm max}$. First, the theoretical error covariance gradually suppresses the information from short scales as the employed theoretical model becomes less reliable. This allows one to avoid laborious measurements of $k_{\rm max}$, which are an essential part of the standard methods. Second, the theoretical error likelihood gives unbiased constraints with reliable error bars that are not artificially shrunk due to over-fitting. In realistic settings, the theoretical error likelihood yields essentially the same parameter constraints as the standard analysis with an appropriately selected $k_{\rm max}$, thereby effectively optimizing the choice of $k_{\rm max}$. We demonstrate these points using large-volume N-body data for the clustering of matter and galaxies in real and redshift space. In passing, we validate the effective field theory description of redshift-space distortions and show that the use of the one-parameter phenomenological Gaussian damping model for fingers-of-God causes significant biases in parameter recovery.
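
A minimal sketch of the covariance modification described above, assuming a simple Gaussian correlation of the theoretical error between $k$-bins: the data covariance is augmented by $E(k_i)E(k_j)\rho_{ij}$, where $E(k)$ is an error envelope that grows towards small scales. The envelope, correlation length and binning below are placeholders, not the choices made in the paper.

```python
import numpy as np

def theory_error_covariance(k, envelope, delta_k=0.1):
    """Theoretical-error covariance C^E_ij = E(k_i) E(k_j) rho(k_i, k_j),
    with a Gaussian correlation in k of width delta_k (placeholder choice)."""
    rho = np.exp(-0.5 * (k[:, None] - k[None, :]) ** 2 / delta_k ** 2)
    return envelope[:, None] * envelope[None, :] * rho

# Placeholder k-bins, a toy fiducial power spectrum, and an error envelope
# that grows towards small scales, gradually down-weighting them.
k = np.linspace(0.01, 0.5, 50)             # h/Mpc
p_fid = 1.0e4 * (k / 0.1) ** -1.5          # toy fiducial spectrum
envelope = 0.01 * p_fid * (k / 0.3) ** 2   # toy: ~1 per cent at k = 0.3 h/Mpc

cov_data = np.diag((0.02 * p_fid) ** 2)    # toy statistical covariance
cov_total = cov_data + theory_error_covariance(k, envelope)
```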