
Residual noise covariance for Planck low-resolution data analysis

Posted by Reijo Keskitalo
Publication date: 2009
Research field: Physics
Paper language: English





Aims: Develop and validate tools to estimate the residual noise covariance in Planck frequency maps. Quantify signal error effects and compare different techniques for producing low-resolution maps.

Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions using Monte Carlo simulations and assess their impact on angular power spectrum estimation. We use simulations to quantify the level of signal errors incurred in the different resolution-downgrading schemes considered in this work.

Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be negligible when properly dealt with. In map resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ell > 2 Nside, where Nside is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy.

Conclusions: We have described how to compute low-resolution maps with a controlled sky-signal level and a reliable estimate of the covariance of the residual noise. We have also presented a method to smooth the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
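The smoothing of residual noise covariance matrices mentioned in the conclusions follows a general linear-algebra fact: if the map is smoothed by a linear operator S, the noise covariance transforms as N' = S N S^T. The following is a minimal numpy sketch of that transform on a toy covariance, not the Planck pipeline; the pixel count and the nearest-neighbour averaging operator standing in for a bandwidth-limiting window are illustrative assumptions.

```python
import numpy as np

# Toy sketch: if a low-resolution map m is smoothed by a linear
# operator S (e.g. a harmonic-space window function), the residual
# noise covariance N transforms as N' = S N S^T.
rng = np.random.default_rng(0)
npix = 48                         # illustrative number of pixels

# Toy residual noise covariance: symmetric positive definite.
A = rng.normal(size=(npix, npix))
N = A @ A.T + npix * np.eye(npix)

# Toy smoothing operator: a circulant nearest-neighbour average
# standing in for a bandwidth-limiting window; rows sum to one.
S = np.eye(npix)
for k in (-1, 1):
    S += 0.25 * np.roll(np.eye(npix), k, axis=1)
S /= S.sum(axis=1, keepdims=True)

N_smoothed = S @ N @ S.T          # covariance of the smoothed noise

# The transformed matrix stays symmetric and positive definite,
# as a covariance matrix must.
assert np.allclose(N_smoothed, N_smoothed.T)
assert np.linalg.eigvalsh(N_smoothed).min() > 0
```

In practice S would be built from the chosen window function in harmonic space; the sandwich form S N S^T is what carries the noise correlations into the smoothed, bandwidth-limited maps.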


Read also

We report on the results of a search for a Weakly Interacting Massive Particle (WIMP) signal in low-energy data of the Cryogenic Dark Matter Search (CDMS II) experiment using a maximum likelihood analysis. A background model is constructed using GEANT4 to simulate the surface-event background from $^{210}$Pb decay-chain events, while using independent calibration data to model the gamma background. Fitting this background model to the data results in no statistically significant WIMP component. In addition, we perform fits using an analytic ad hoc background model proposed by Collar and Fields, who claimed to find a large excess of signal-like events in our data. We confirm the strong preference for a signal hypothesis in their analysis under these assumptions, but excesses are observed in both single- and multiple-scatter events, which implies the signal is not caused by WIMPs, but rather reflects the inadequacy of their background model.
We describe the BeyondPlanck project in terms of motivation, methodology and main products, and provide a guide to a set of companion papers that describe each result in fuller detail. Building directly on experience from ESA's Planck mission, we implement a complete end-to-end Bayesian analysis framework for the Planck Low Frequency Instrument (LFI) observations. The primary product is a joint posterior distribution P(omega|d), where omega represents the set of all free instrumental (gain, correlated noise, bandpass etc.), astrophysical (synchrotron, free-free, thermal dust emission etc.), and cosmological (CMB map, power spectrum etc.) parameters. Some notable advantages of this approach are seamless end-to-end propagation of uncertainties; accurate modeling of both astrophysical and instrumental effects in the most natural basis for each uncertain quantity; optimized computational costs with little or no need for intermediate human interaction between various analysis steps; and a complete overview of the entire analysis process within one single framework. As a practical demonstration of this framework, we focus in particular on low-l CMB polarization reconstruction, paying special attention to the LFI 44 GHz channel. We find evidence of significant residual systematic effects that are still not accounted for in the current processing, but must be addressed in future work. These include a break-down of the 1/f correlated noise model at 30 and 44 GHz, and scan-aligned stripes in the Southern Galactic hemisphere at 44 GHz. On the Northern hemisphere, however, we find that all results are consistent with the LCDM model, and we constrain the reionization optical depth to tau = 0.067 +/- 0.016, with a low-resolution chi-squared probability-to-exceed of 16%. The marginal CMB dipole amplitude is 3359.5 +/- 1.9 uK. (Abridged.)
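The joint posterior P(omega|d) described above is typically explored by Gibbs sampling: each parameter group is drawn in turn from its conditional distribution. The following toy sketch illustrates that idea with a single signal amplitude and a noise variance; it is not the BeyondPlanck pipeline, and the priors, data model, and chain length are illustrative assumptions.

```python
import numpy as np

# Toy Gibbs sampler for d = s + noise: alternately draw the signal
# amplitude s and the noise variance sigma2 from their conditionals,
# so the chain explores the joint posterior and propagates both
# uncertainties end to end.
rng = np.random.default_rng(1)
n = 500
true_s, true_sigma = 2.0, 1.0
d = true_s + true_sigma * rng.normal(size=n)      # synthetic data

s, sigma2 = 0.0, 1.0
samples = []
for _ in range(3000):
    # P(s | d, sigma2): Gaussian, assuming a flat prior on s
    s = rng.normal(d.mean(), np.sqrt(sigma2 / n))
    # P(sigma2 | d, s): scaled inverse chi-squared (1/sigma2 prior)
    sigma2 = np.sum((d - s) ** 2) / rng.chisquare(n)
    samples.append((s, sigma2))

s_post = np.array(samples[500:])                  # discard burn-in
# Posterior means recover the inputs used to generate the data.
```

In a real pipeline the conditional draws cover maps, gains, bandpasses and foreground amplitudes rather than two scalars, but the alternating structure is the same.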
The covariance matrix $\boldsymbol{\Sigma}$ of non-linear clustering statistics that are measured in current and upcoming surveys is of fundamental interest for comparing cosmological theory and data and a crucial ingredient for the likelihood approximations underlying widely used parameter inference and forecasting methods. The extreme number of simulations needed to estimate $\boldsymbol{\Sigma}$ to sufficient accuracy poses a severe challenge. Approximating $\boldsymbol{\Sigma}$ using inexpensive but biased surrogates introduces model error with respect to full simulations, especially in the non-linear regime of structure growth. To address this problem we develop a matrix generalization of Convergence Acceleration by Regression and Pooling (CARPool) to combine a small number of simulations with fast surrogates and obtain low-noise estimates of $\boldsymbol{\Sigma}$ that are unbiased by construction. Our numerical examples use CARPool to combine GADGET-III $N$-body simulations with fast surrogates computed using COmoving Lagrangian Acceleration (COLA). Even at the challenging redshift $z=0.5$, we find variance reductions of at least $\mathcal{O}(10^1)$ and up to $\mathcal{O}(10^4)$ for the elements of the matter power spectrum covariance matrix on scales $8.9\times 10^{-3} < k_\mathrm{max} < 1.0$ $h\,\mathrm{Mpc}^{-1}$. We demonstrate comparable performance for the covariance of the matter bispectrum, the matter correlation function and probability density function of the matter density field. We compare eigenvalues, likelihoods, and Fisher matrices computed using the CARPool covariance estimate with the standard sample covariance estimators and generally find considerable improvement except in cases where $\boldsymbol{\Sigma}$ is severely ill-conditioned.
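The combination step underlying CARPool is the classic control-variates estimator: subtract from the expensive statistic a rescaled surrogate whose mean is known precisely from many cheap runs. The sketch below demonstrates this for a scalar statistic with synthetic data, a deliberate simplification of the paper's matrix generalization; the sample sizes, bias, and noise model are all illustrative assumptions.

```python
import numpy as np

# Control-variates sketch of the CARPool idea: a few "expensive"
# simulation samples x are paired with cheap surrogates c that share
# their noise; the surrogate's precisely known mean anchors the
# correction, cancelling shared noise without inheriting the bias.
rng = np.random.default_rng(7)
n_sim = 20                            # expensive simulations per estimate

def carpool_estimate():
    noise = rng.normal(size=n_sim)                    # shared fluctuations
    x = 1.0 + noise + 0.1 * rng.normal(size=n_sim)    # expensive statistic
    c = 0.8 + noise                                   # cheap, biased surrogate
    beta = np.cov(x, c)[0, 1] / np.var(c, ddof=1)     # regression coefficient
    mu_c = 0.8                        # surrogate mean, known from many runs
    return x.mean(), x.mean() - beta * (c.mean() - mu_c)

# Repeat to compare the scatter of the naive and pooled estimators.
plain, pooled = np.array([carpool_estimate() for _ in range(2000)]).T
variance_ratio = plain.var() / pooled.var()   # well above 1 here
```

Because the correction term has zero expectation, the pooled estimator stays centered on the true value while its variance drops roughly in proportion to how strongly the surrogate correlates with the simulation.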
We describe the processing of the 531 billion raw data samples from the High Frequency Instrument (hereafter HFI), which we performed to produce six temperature maps from the first 473 days of Planck-HFI survey data. These maps provide an accurate rendition of the sky emission at 100, 143, 217, 353, 545, and 857 GHz with an angular resolution ranging from 9.7 to 4.6 arcmin. The detector noise per (effective) beam solid angle is, respectively, 10, 6, 12 and 39 microKelvin in the four lowest HFI frequency channels (100--353 GHz) and 13 and 14 kJy/sr for the 545 and 857 GHz channels. Using the 143 GHz channel as a reference, these two high frequency channels are intercalibrated within 5% and the 353 GHz relative calibration is at the percent level. The 100 and 217 GHz channels, which together with the 143 GHz channel determine the high-multipole part of the CMB power spectrum (50 < l < 2500), are intercalibrated at better than 0.2%.
P. Lemos, M. Raveri, A. Campos (2020)
Quantifying tensions -- inconsistencies amongst measurements of cosmological parameters by different experiments -- has emerged as a crucial part of modern cosmological data analysis. Statistically significant tensions between two experiments or cosmological probes may indicate new physics extending beyond the standard cosmological model and need to be promptly identified. We apply several tension estimators proposed in the literature to the Dark Energy Survey (DES) large-scale structure measurement and Planck cosmic microwave background data. We first evaluate the responsiveness of these metrics to an input tension artificially introduced between the two, using synthetic DES data. We then apply the metrics to the comparison of Planck and actual DES Year 1 data. We find that the parameter differences, Eigentension, and Suspiciousness metrics all yield similar results on both simulated and real data, while the Bayes ratio is inconsistent with the rest due to its dependence on the prior volume. Using these metrics, we calculate the tension between DES Year 1 $3\times 2$pt and Planck, finding the surveys to be in $\sim 2.3\sigma$ tension under the $\Lambda$CDM paradigm. This suite of metrics provides a toolset for robustly testing tensions in the DES Year 3 data and beyond.
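The parameter-difference metric mentioned above has a simple closed form when both posteriors are approximated as Gaussian: the difference d = mu_A - mu_B has covariance C_A + C_B, and chi2 = d^T (C_A + C_B)^{-1} d yields a probability-to-exceed that converts to an effective number of sigmas. The sketch below implements the two-parameter Gaussian case with made-up numbers; it is not the DES/Planck computation, and the parameter values and covariances are purely illustrative.

```python
import math
import numpy as np
from statistics import NormalDist

# Gaussian parameter-difference tension for two parameters. For df = 2
# the chi-squared survival function has the closed form exp(-chi2/2),
# which we convert to an effective number of sigmas (two-tailed).
def gaussian_tension_2d(mu_a, cov_a, mu_b, cov_b):
    d = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    chi2 = float(d @ np.linalg.solve(np.asarray(cov_a) + np.asarray(cov_b), d))
    pte = math.exp(-chi2 / 2.0)                       # probability to exceed
    n_sigma = NormalDist().inv_cdf(1.0 - pte / 2.0)   # effective sigmas
    return chi2, pte, n_sigma

# Illustrative 2-parameter example (values are made up, not DES/Planck):
chi2, pte, n_sigma = gaussian_tension_2d(
    [0.78, 0.30], np.diag([4e-4, 4e-4]),
    [0.83, 0.32], np.diag([1e-4, 1e-4]),
)
```

For more than two parameters one would replace the closed-form survival function with a general chi-squared tail probability; the conversion to sigmas is otherwise unchanged.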