Optimizing large-scale structure data analysis with the theoretical error likelihood


Abstract

An important aspect of large-scale structure data analysis is the presence of non-negligible theoretical uncertainties, which become increasingly important on small scales. We show how to incorporate these uncertainties in realistic power spectrum likelihoods by an appropriate change of the fitting model and the covariance matrix. The inclusion of the theoretical error has several advantages over the standard practice of using a sharp momentum cut $k_{\rm max}$. First, the theoretical error covariance gradually suppresses the information from the short scales as the employed theoretical model becomes less reliable. This allows one to avoid the laborious determination of $k_{\rm max}$, which is an essential part of the standard methods. Second, the theoretical error likelihood gives unbiased constraints with reliable error bars that are not artificially shrunk due to over-fitting. In realistic settings, the theoretical error likelihood yields essentially the same parameter constraints as the standard analysis with an appropriately selected $k_{\rm max}$, thereby effectively optimizing the choice of $k_{\rm max}$. We demonstrate these points using large-volume N-body data for the clustering of matter and galaxies in real and redshift space. In passing, we validate the effective field theory description of redshift space distortions and show that the use of the one-parameter phenomenological Gaussian damping model for fingers-of-God causes significant biases in parameter recovery.
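To make the construction concrete, below is a minimal sketch (not the authors' actual pipeline) of a Gaussian power spectrum likelihood in which the theoretical error is marginalized analytically, so that its covariance is simply added to the data covariance. The envelope form $E(k) \propto P(k)\,(k/k_{\rm NL})^{n}$, the Gaussian correlation over a scale $\Delta k$, and all numerical values are illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

def theory_error_covariance(k, p_theory, k_nl=0.45, n_loop=3.3, delta_k=0.1):
    """Correlated theoretical-error covariance (illustrative sketch).

    The envelope E(k) ~ P(k) * (k / k_nl)**n_loop mimics the expected
    size of the first uncomputed loop order; entries are correlated
    over a scale delta_k so the error behaves like a smooth function
    of k rather than independent scatter. All parameter values here
    are placeholders, not the values used in the paper.
    """
    envelope = p_theory * (k / k_nl) ** n_loop
    corr = np.exp(-0.5 * (k[:, None] - k[None, :]) ** 2 / delta_k ** 2)
    return envelope[:, None] * envelope[None, :] * corr

def log_likelihood(p_data, p_model, cov_data, k):
    """Gaussian log-likelihood with the theory error marginalized:
    analytically, this amounts to adding the error covariance to the
    data covariance before evaluating the chi-squared."""
    cov_tot = cov_data + theory_error_covariance(k, p_model)
    resid = p_data - p_model
    # Cholesky solve: alpha^T alpha = resid^T cov_tot^{-1} resid
    chol = np.linalg.cholesky(cov_tot)
    alpha = np.linalg.solve(chol, resid)
    _, logdet = np.linalg.slogdet(cov_tot)
    return -0.5 * (alpha @ alpha + logdet + len(k) * np.log(2.0 * np.pi))
```

Because the error covariance grows steeply with $k$, bins beyond the model's range of validity are automatically down-weighted, which is how the sharp $k_{\rm max}$ cut is replaced by a smooth suppression.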
