
Markov Chain Beam Randomization: a study of the impact of PLANCK beam measurement errors on cosmological parameter estimation

Posted by Graca Rocha
Publication date: 2009
Research field: Physics
Paper language: English





We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, which we call Markov Chain Beam Randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic 'nuisance' parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.
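A minimal sketch of the MCBR idea, assuming a precomputed set of Monte Carlo beam window-function templates and a simple stand-in likelihood; the names (bl_templates, cl_theory, propose) are illustrative, not taken from the paper:

    import numpy as np

    # Sketch of Markov Chain Beam Randomization: at every MCMC step a beam
    # realization is drawn at random from a template set, so the chain
    # marginalizes over beam uncertainty without explicit nuisance parameters.

    def log_likelihood(cl_model, bl, cl_data):
        # Stand-in likelihood comparing the beam-smoothed model C_ell with data.
        return -0.5 * np.sum((cl_data - cl_model * bl**2) ** 2)

    def mcbr_step(theta, cl_data, bl_templates, cl_theory, propose, rng):
        bl = bl_templates[rng.integers(len(bl_templates))]  # beam randomization
        theta_new = propose(theta, rng)
        logl_old = log_likelihood(cl_theory(theta), bl, cl_data)
        logl_new = log_likelihood(cl_theory(theta_new), bl, cl_data)
        # Standard Metropolis accept/reject, evaluated with the randomized beam.
        if np.log(rng.random()) < logl_new - logl_old:
            return theta_new
        return theta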


Read also

Cosmological large-scale structure analyses based on two-point correlation functions often assume a Gaussian likelihood function with a fixed covariance matrix. We study the impact on cosmological parameter estimation of ignoring the parameter dependence of this covariance matrix, focusing on the particular case of joint weak-lensing and galaxy clustering analyses. Using a Fisher matrix formalism (calibrated against exact likelihood evaluation in particular simple cases), we quantify the effect of using a parameter-dependent covariance matrix on both the bias and variance of the parameters. We confirm that the approximation of a parameter-independent covariance matrix is exceptionally good in all realistic scenarios. The information content in the covariance matrix (in comparison with the two-point functions themselves) does not change with the fractional sky coverage. Therefore the increase in information due to the parameter-dependent covariance matrix becomes negligible as the number of modes increases. Even for surveys covering less than $1\%$ of the sky, this effect only causes a bias of up to ${\cal O}(10\%)$ of the statistical uncertainties, with a misestimation of the parameter uncertainties at the same level or lower. The effect will only be smaller with future large-area surveys. Thus for most analyses the effect of a parameter-dependent covariance matrix can be ignored both in terms of the accuracy and precision of the recovered cosmological constraints.
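For context, the comparison in this abstract is between the full Gaussian log-likelihood with a parameter-dependent covariance, $-2\ln L(\theta) = [d - m(\theta)]^{T} C(\theta)^{-1} [d - m(\theta)] + \ln\det C(\theta) + {\rm const}$, and the common approximation in which the covariance is evaluated once at a fiducial cosmology, $C(\theta) \approx C(\theta_{\rm fid})$; this is the standard textbook form rather than a quotation from the paper.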
Systematic errors in the galaxy redshift distribution $n(z)$ can propagate to systematic errors in the derived cosmology. We characterize how the degenerate effects in tomographic bin widths and galaxy bias impart systematic errors on cosmology inference using observational data from the Deep Lens Survey. For this we use a combination of galaxy clustering and galaxy-galaxy lensing. We present two end-to-end analyses from the catalogue level to parameter estimation. We produce an initial cosmological inference using fiducial tomographic redshift bins derived from photometric redshifts, then compare this with a result where the redshift bins are empirically corrected using a set of spectroscopic redshifts. We find that the derived parameter $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{1/2}$ goes from $0.841^{+0.062}_{-0.061}$ to $0.739^{+0.054}_{-0.050}$ upon correcting the $n(z)$ errors in the second method.
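As an illustration of one way such an empirical correction can be implemented, assuming a simple per-bin shift estimated from matched photometric-spectroscopic pairs (the procedure actually used in the paper may differ, and all names here are hypothetical):

    import numpy as np

    # Hypothetical per-bin n(z) shift correction from matched photo-z/spec-z pairs.
    def shift_corrected_nz(z_grid, nz_bins, z_phot, z_spec, bin_edges):
        corrected = []
        for i, nz in enumerate(nz_bins):
            in_bin = (z_phot >= bin_edges[i]) & (z_phot < bin_edges[i + 1])
            delta_z = np.mean(z_spec[in_bin] - z_phot[in_bin])  # mean photo-z bias
            # Approximate the true distribution by shifting the fiducial one: n(z - delta_z).
            corrected.append(np.interp(z_grid - delta_z, z_grid, nz, left=0.0, right=0.0))
        return corrected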
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the astrophysical parameter space using a modified version of the publicly available code CIGALE (Code Investigating GALaxy Emission). The original CIGALE builds a grid of theoretical Spectral Energy Distribution (SED) models and fits them to photometric fluxes from the ultraviolet (UV) to the infrared (IR) to place constraints on parameters related to both the formation and evolution of galaxies. Such a grid-based method can lead to a long and challenging parameter extraction, since the computation time increases exponentially with the number of parameters considered and the results can depend on the density of sampling points, which must be chosen in advance for each parameter. Markov Chain Monte Carlo methods, on the other hand, scale approximately linearly with the number of parameters, allowing a faster and more accurate exploration of the parameter space by using a smaller number of efficiently chosen samples. We test our MCMC version of the code CIGALE (called CIGALEMC) with simulated data. After checking the ability of the code to retrieve the input parameters used to build the mock sample, we fit theoretical SEDs to real data from the well-known and well-studied SINGS sample. We discuss constraints on the parameters and show the advantages of our MCMC sampling method in terms of accuracy of the results and optimization of CPU time.
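To illustrate the scaling argument with a generic sketch (this is not the actual CIGALEMC implementation): a grid with m points per parameter requires on the order of m**d model evaluations for d parameters, whereas a chain of fixed length samples all parameters jointly.

    import numpy as np

    # Generic Metropolis sampler over SED-model parameters; illustrative only.
    def metropolis(log_posterior, theta0, step_sizes, n_steps, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        logp = log_posterior(theta)
        chain = []
        for _ in range(n_steps):
            proposal = theta + step_sizes * rng.standard_normal(theta.size)
            logp_prop = log_posterior(proposal)
            if np.log(rng.random()) < logp_prop - logp:  # Metropolis acceptance rule
                theta, logp = proposal, logp_prop
            chain.append(theta.copy())
        return np.array(chain)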
Upcoming surveys will map the growth of large-scale structure with unprecedented precision, improving our understanding of the dark sector of the Universe. Unfortunately, much of the cosmological information is encoded by the small scales, where the clustering of dark matter and the effects of astrophysical feedback processes are not fully understood. This can bias the estimates of cosmological parameters, which we study here for a joint analysis of mock Euclid cosmic shear and Planck cosmic microwave background data. We use different implementations for the modelling of the signal on small scales and find that they result in significantly different predictions. Moreover, the different nonlinear corrections lead to biased parameter estimates, especially when the analysis is extended into the highly nonlinear regime, with both the Hubble constant, $H_0$, and the clustering amplitude, $\sigma_8$, affected the most. Improvements in the modelling of nonlinear scales will therefore be needed if we are to resolve the current tension with more and better data. For a given prescription for the nonlinear power spectrum, using different corrections for baryon physics does not significantly impact the precision of Euclid, but neglecting these corrections does lead to large biases in the cosmological parameters. In order to extract precise and unbiased constraints on cosmological parameters from Euclid cosmic shear data, it is therefore essential to improve the accuracy of the recipes that account for nonlinear structure formation, as well as the modelling of the impact of astrophysical processes that redistribute the baryons.
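For reference, baryon-feedback corrections of this kind are commonly applied as a multiplicative factor calibrated on hydrodynamical simulations, $P_{\rm corr}(k,z) = \left[P_{\rm hydro}(k,z)/P_{\rm DM}(k,z)\right] P_{\rm nl}(k,z)$; this generic form is given here for context and is not quoted from the paper.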
The Planck Collaboration made its final data release in 2018. In this paper we describe beam-deconvolution map products made from Planck LFI data using the artDeco deconvolution code to symmetrize the effective beam. The deconvolution results are auxiliary data products, available through the Planck Legacy Archive. Analysis of these deconvolved survey difference maps reveals signs of residual signal in the 30 GHz and 44 GHz frequency channels. We produce low-resolution maps and corresponding noise covariance matrices (NCVMs). The NCVMs agree reasonably well with the half-ring noise estimates except for 44 GHz, where we observe an asymmetry between $EE$ and $BB$ noise spectra, possibly a sign of further unresolved systematics.