
Understanding matched filters for precision cosmology

Posted by Íñigo Zubeldia
Publication date: 2021
Research field: Physics
Paper language: English





Matched filters are routinely used in cosmology in order to detect galaxy clusters from mm observations through their thermal Sunyaev-Zeldovich (tSZ) signature. In addition, they naturally provide an observable, the detection signal-to-noise or significance, which can be used as a mass proxy in number counts analyses of tSZ-selected cluster samples. In this work, we show that this observable is, in general, non-Gaussian, and that it suffers from a positive bias, which we refer to as optimisation bias. Both aspects arise from the fact that the signal-to-noise is constructed through an optimisation operation on noisy data, and hold even if the cluster signal is modelled perfectly well, no foregrounds are present, and the noise is Gaussian. After reviewing the general mathematical formalism underlying matched filters, we study the statistics of the signal-to-noise with a set of Monte Carlo mock observations, finding it to be well-described by a unit-variance Gaussian for signal-to-noise values of 6 and above, and quantify the magnitude of the optimisation bias, for which we give an approximate expression that may be used in practice. We also consider the impact of the bias on the cluster number counts of Planck and the Simons Observatory (SO), finding it to be negligible for the former and potentially significant for the latter.
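As a rough illustration of the effect described in the abstract, here is a minimal Monte Carlo sketch (not the paper's code or notation): it builds the standard matched-filter amplitude estimate and signal-to-noise for white Gaussian noise, then maximises the signal-to-noise over a grid of trial filter scales, which is the kind of optimisation over noisy data that produces the positive bias. All names and numbers below are illustrative assumptions; the actual analysis also optimises over sky position and uses realistic noise covariances.

```python
# Minimal sketch of matched-filter SNR and optimisation bias (illustrative only).
# Assumptions: 1-D "map", white Gaussian noise, Gaussian cluster profile whose
# width is the only free filter parameter.
import numpy as np

rng = np.random.default_rng(0)

npix = 256
x = np.arange(npix)
sigma_noise = 1.0        # per-pixel noise rms (white noise => N = sigma^2 * I)
true_width = 5.0         # true cluster scale, in pixels
true_amp = 2.0           # true central amplitude

def template(width, centre=npix // 2):
    """Unit-amplitude Gaussian cluster profile."""
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

def snr(d, t):
    """Matched-filter signal-to-noise for white noise:
    amplitude estimate  A_hat   = (t . d) / (t . t)
    its uncertainty     sigma_A = sigma_noise / sqrt(t . t)
    significance        q       = A_hat / sigma_A."""
    return (t @ d) / (sigma_noise * np.sqrt(t @ t))

trial_templates = [template(w) for w in np.linspace(2.0, 10.0, 33)]
t_true = template(true_width)

n_mocks = 10000
q_at_true = np.empty(n_mocks)
q_maximised = np.empty(n_mocks)
for i in range(n_mocks):
    d = true_amp * t_true + rng.normal(0.0, sigma_noise, npix)
    q_at_true[i] = snr(d, t_true)                       # filter fixed at the truth
    q_maximised[i] = max(snr(d, t) for t in trial_templates)  # optimised over scale

print(f"mean SNR with the true filter scale : {q_at_true.mean():.3f}")
print(f"mean SNR after maximising over scale: {q_maximised.mean():.3f}")
# The positive difference is the optimisation bias of the maximised significance.
```

The maximised significance is biased high even though the template family contains the true profile and the noise is perfectly Gaussian, which is the point the abstract makes.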




Read also

Rachel Mandelbaum (2017)
Weak gravitational lensing, the deflection of light by mass, is one of the best tools to constrain the growth of cosmic structure with time and reveal the nature of dark energy. I discuss the sources of systematic uncertainty in weak lensing measurements and their theoretical interpretation, including our current understanding and other options for future improvement. These include long-standing concerns such as the estimation of coherent shears from galaxy images or redshift distributions of galaxies selected based on photometric redshifts, along with systematic uncertainties that have received less attention to date because they are subdominant contributors to the error budget in current surveys. I also discuss methods for automated systematics detection using survey data of the 2020s. The goal of this review is to describe the current state of the field and what must be done so that if weak lensing measurements lead toward surprising conclusions about key questions such as the nature of dark energy, those conclusions will be credible.
Steen Hannestad (2016)
I review the current status of structure formation bounds on neutrino properties such as mass and energy density. I also discuss future cosmological bounds as well as a variety of different scenarios for reconciling cosmology with the presence of light sterile neutrinos.
Darren S. Reed (2012)
Cosmological surveys aim to use the evolution of the abundance of galaxy clusters to accurately constrain the cosmological model. In the context of LCDM, we show that it is possible to achieve the required percent level accuracy in the halo mass function with gravity-only cosmological simulations, and we provide simulation start and run parameter guidelines for doing so. Some previous works have had sufficient statistical precision, but lacked robust verification of absolute accuracy. Convergence tests of the mass function with, for example, simulation start redshift can exhibit false convergence of the mass function due to counteracting errors, potentially misleading one to infer overly optimistic estimations of simulation accuracy. Percent level accuracy is possible if initial condition particle mapping uses second order Lagrangian Perturbation Theory, and if the start epoch is between 10 and 50 expansion factors before the epoch of halo formation of interest. The mass function for halos with fewer than ~1000 particles is highly sensitive to simulation parameters and start redshift, implying a practical minimum mass resolution limit due to mass discreteness. The narrow range in converged start redshift suggests that it is not presently possible for a single simulation to capture accurately the cluster mass function while also starting early enough to model accurately the numbers of reionisation era galaxies, whose baryon feedback processes may affect later cluster properties. Ultimately, to fully exploit current and future cosmological surveys will require accurate modeling of baryon physics and observable properties, a formidable challenge for which accurate gravity-only simulations are just an initial step.
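The "10 to 50 expansion factors before the epoch of interest" guideline translates directly into a start redshift; the tiny computation below spells out that arithmetic for an assumed epoch of interest of z = 0 (the choice of epoch is an illustrative assumption, not taken from the abstract).

```python
# Toy arithmetic: turn "start 10-50 expansion factors before the epoch of interest"
# into a start redshift, assuming the epoch of interest is z = 0.
z_interest = 0.0
a_interest = 1.0 / (1.0 + z_interest)

for factor in (10, 50):
    a_start = a_interest / factor          # scale factor 10x or 50x smaller
    z_start = 1.0 / a_start - 1.0
    print(f"{factor}x earlier in expansion -> z_start = {z_start:.0f}")
# For z_interest = 0 this gives z_start = 9 and z_start = 49.
```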
The Core Cosmology Library (CCL) provides routines to compute basic cosmological observables to a high degree of accuracy, which have been verified with an extensive suite of validation tests. Predictions are provided for many cosmological quantities, including distances, angular power spectra, correlation functions, halo bias and the halo mass function through state-of-the-art modeling prescriptions available in the literature. Fiducial specifications for the expected galaxy distributions for the Large Synoptic Survey Telescope (LSST) are also included, together with the capability of computing redshift distributions for a user-defined photometric redshift model. A rigorous validation procedure, based on comparisons between CCL and independent software packages, allows us to establish a well-defined numerical accuracy for each predicted quantity. As a result, predictions for correlation functions of galaxy clustering, galaxy-galaxy lensing and cosmic shear are demonstrated to be within a fraction of the expected statistical uncertainty of the observables for the models and in the range of scales of interest to LSST. CCL is an open source software package written in C, with a python interface and publicly available at https://github.com/LSSTDESC/CCL.
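To give a sense of the routines this abstract refers to, here is a short, hedged usage sketch of CCL's python interface (pyccl). The parameter values are placeholders rather than LSST fiducials, and exact function names or signatures may differ between CCL versions.

```python
# Illustrative pyccl usage; values are placeholders, not recommended fiducials.
import numpy as np
import pyccl as ccl

# Flat LCDM cosmology with assumed example parameters.
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.045, h=0.67, sigma8=0.8, n_s=0.96)

# Comoving radial distances (in Mpc) at a few redshifts.
z = np.linspace(0.0, 2.0, 5)
a = 1.0 / (1.0 + z)
chi = ccl.comoving_radial_distance(cosmo, a)

# Linear matter power spectrum at z = 0, with k in 1/Mpc.
k = np.logspace(-3, 1, 100)
pk = ccl.linear_matter_power(cosmo, k, 1.0)

print(chi)
print(pk[:5])
```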
The success of future large scale weak lensing surveys will critically depend on the accurate estimation of photometric redshifts of very large samples of galaxies. This in turn depends on both the quality of the photometric data and the photo-z estimators. In a previous study (Bordoloi et al. 2010), we focussed primarily on the impact of photometric quality on photo-z estimates and on the development of novel techniques to construct the N(z) of tomographic bins at the high level of precision required for precision cosmology, as well as the correction of issues such as imprecise corrections for Galactic reddening. We used the same set of templates to generate the simulated photometry as were then used in the photo-z code, thereby removing any effects of template error. In this work we now include the effects of template error by generating simulated photometric data set from actual COSMOS photometry. We use the trick of simulating redder photometry of galaxies at higher redshifts by using a bluer set of passbands on low z galaxies with known redshifts. We find that template error is a rather small factor in photo-z performance, at the photometric precision and filter complement expected for all-sky surveys. With only a small sub-set of training galaxies with spectroscopic redshifts, it is in principle possible to construct tomographic redshift bins whose mean redshift is known, from photo-z alone, to the required accuracy of 0.002(1+z).
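The 0.002(1+z) requirement on the mean redshift of a tomographic bin is stringent; the toy Monte Carlo below (not the Bordoloi et al. pipeline) shows how a small photo-z systematic propagates into that mean. The redshift distribution, scatter and bias values are made-up assumptions chosen only to illustrate the comparison.

```python
# Toy illustration: offset of a tomographic bin's mean redshift versus the
# 0.002(1+z) requirement quoted above. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_gal = 2_000_000
z_true = rng.gamma(shape=2.0, scale=0.35, size=n_gal)   # toy redshift distribution

sigma_z = 0.05 * (1.0 + z_true)                         # assumed photo-z scatter
bias_z = 0.003 * (1.0 + z_true)                         # assumed small systematic
z_phot = z_true + rng.normal(0.0, sigma_z) + bias_z

# Select a tomographic bin on photo-z and compare the bin's true mean redshift
# with the naive photo-z mean of the same galaxies.
in_bin = (z_phot > 0.6) & (z_phot < 0.8)
mean_true = z_true[in_bin].mean()
mean_phot = z_phot[in_bin].mean()
requirement = 0.002 * (1.0 + mean_true)

print(f"true mean z of bin     : {mean_true:.4f}")
print(f"photo-z mean of bin    : {mean_phot:.4f}")
print(f"offset                 : {abs(mean_phot - mean_true):.4f}")
print(f"0.002(1+z) requirement : {requirement:.4f}")
```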