
Foregrounds in Wide-Field Redshifted 21 cm Power Spectra

Posted by Nithyanandan Thyagarajan
Publication date: 2015
Research field: Physics
Paper language: English





Detection of 21 cm emission of HI from the epoch of reionization, at redshifts z > 6, is limited primarily by foreground emission. We investigate the signatures of wide-field measurements and an all-sky foreground model using the delay spectrum technique, which maps the measurements to foreground object locations through signal delays between antenna pairs. We demonstrate that interferometric measurements are inherently sensitive to all scales, including the largest angular scales, owing to the nature of wide-field measurements. These wide-field effects are generic to all observations, but antenna shapes substantially affect their amplitudes. A dish-shaped antenna yields the most desirable features from a foreground contamination viewpoint, relative to a dipole or a phased array. Comparing with data from recent Murchison Widefield Array observations, we demonstrate that the foreground signatures with the largest impact on the HI signal arise from power received far away from the primary field of view. We identify diffuse emission near the horizon as a significant contributing factor, even on wide antenna spacings that usually probe structures on small scales. For signals entering through the primary field of view, compact emission dominates the foreground contamination. These two mechanisms imprint a characteristic pitchfork signature on the foreground wedge in Fourier delay space. Based on these results, we propose that selective down-weighting of data based on antenna spacing and time can mitigate foreground contamination substantially, by a factor of ~100, with negligible loss of sensitivity.
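A minimal sketch of the delay transform that underlies the delay spectrum technique described above: the visibility spectrum of a single baseline is tapered and Fourier transformed along frequency, so that a flat-spectrum foreground source maps to a delay set by its direction and the baseline length. The function name, the choice of a Blackman taper, and the array conventions are illustrative assumptions, not the authors' pipeline.

import numpy as np

def delay_spectrum(vis, freqs, taper=None):
    """Delay-transform the visibility spectrum of one baseline.

    vis   : complex visibilities, one per frequency channel [Jy]
    freqs : uniformly spaced channel frequencies [Hz]
    taper : optional spectral window; a Blackman taper is used by
            default here (an assumption) to limit band-edge leakage

    Returns (delays [s], delay power |V~(tau)|^2) with no cosmological
    normalization applied.
    """
    nchan = freqs.size
    dnu = freqs[1] - freqs[0]                        # channel width [Hz]
    if taper is None:
        taper = np.blackman(nchan)
    vtilde = np.fft.fftshift(np.fft.fft(vis * taper)) * dnu
    delays = np.fft.fftshift(np.fft.fftfreq(nchan, d=dnu))
    return delays, np.abs(vtilde) ** 2

# A source near the horizon on a baseline of length |b| appears near the
# geometric delay |b|/c, i.e. at the edge of the foreground wedge, which
# is where the branches of the "pitchfork" signature show up.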




Read also

We confirm our recent prediction of the pitchfork foreground signature in power spectra of high-redshift 21 cm measurements where the interferometer is sensitive to large-scale structure on all baselines. This is due to the inherent response of a wide-field instrument and is characterized by enhanced power from foreground emission in Fourier modes adjacent to those considered to be the most sensitive to the cosmological H I signal. In our recent paper, many signatures from the simulation that predicted this feature were validated against Murchison Widefield Array (MWA) data, but this key pitchfork signature was close to the noise level. In this paper, we improve the data sensitivity through the coherent averaging of 12 independent snapshots with identical instrument settings and provide the first confirmation of the prediction with a signal-to-noise ratio > 10. This wide-field effect can be mitigated by careful antenna designs that suppress sensitivity near the horizon. Simple models for antenna apertures that have been proposed for future instruments such as the Hydrogen Epoch of Reionization Array and the Square Kilometre Array indicate they should suppress foreground leakage from the pitchfork by ~40 dB relative to the MWA and significantly increase the likelihood of cosmological signal detection in these critical Fourier modes in the three-dimensional power spectrum.
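The sensitivity gain quoted above comes from coherently averaging repeated snapshots taken with identical instrument settings, so the sky visibility adds in phase while the noise averages down. The sketch below is a generic illustration of that step only; the array shape and names are assumptions.

import numpy as np

def coherent_average(snapshots):
    """Coherently average repeated visibility snapshots.

    snapshots : complex array, shape (n_snap, n_baseline, n_chan),
                all observed with the same pointing and settings so the
                sky visibility is identical from snapshot to snapshot.

    The averaged sky signal is unchanged while the thermal-noise rms
    falls as 1/sqrt(n_snap), so the noise power in the delay spectrum
    falls as 1/n_snap (a factor of ~12 for 12 snapshots).
    """
    return np.asarray(snapshots).mean(axis=0)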
Precise measurements of the 21 cm power spectrum are crucial for understanding the physical processes of hydrogen reionization. Currently, this probe is being pursued by low-frequency radio interferometer arrays. As these experiments come closer to making a first detection of the signal, error estimation will play an increasingly important role in setting robust measurements. Using the delay power spectrum approach, we have produced a critical examination of different ways that one can estimate error bars on the power spectrum. We do this through a synthesis of analytic work, simulations of toy models, and tests on small amounts of real data. We find that, although computed independently, the different error bar methodologies are in good agreement with each other in the noise-dominated regime of the power spectrum. For our preferred methodology, the predicted probability distribution function is consistent with the empirical noise power distributions from both simulated and real data. This diagnosis is mainly in support of the forthcoming HERA upper limit, and also is expected to be more generally applicable.
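As a toy illustration of the noise-dominated regime mentioned above (not the paper's methodology), the power in a single delay mode of pure complex Gaussian noise is exponentially distributed, so its standard deviation equals its mean; an ensemble of simulated noise realizations should reproduce that, which is the kind of consistency check an error-bar methodology can be tested against.

import numpy as np

rng = np.random.default_rng(0)

def noise_delay_power(n_real, n_chan, sigma):
    """Delay power of noise-only spectra in a single delay bin.

    Draws n_real realizations of complex Gaussian channel noise with
    per-channel rms `sigma`, delay-transforms each spectrum, and
    returns the power in one delay mode.
    """
    noise = (rng.normal(size=(n_real, n_chan))
             + 1j * rng.normal(size=(n_real, n_chan))) * sigma / np.sqrt(2)
    vtilde = np.fft.fft(noise, axis=1)
    return np.abs(vtilde[:, 0]) ** 2

p = noise_delay_power(10_000, 128, sigma=1.0)
# Exponential statistics: mean and standard deviation agree,
# both close to n_chan * sigma**2 = 128.
print(p.mean(), p.std())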
We analyse the accuracy of radio interferometric gridding of visibilities with the aim to quantify the Epoch of Reionization (EoR) 21-cm power spectrum bias caused by gridding, ultimately to determine the suitability of different imaging algorithms and gridding settings for 21-cm power spectrum analysis. We simulate realistic LOFAR data, and construct power spectra with convolutional gridding and w-stacking, w-projection, image domain gridding and without w-correction. These are compared against directly Fourier transformed data. The influence of oversampling, kernel size, w-quantization, kernel windowing function and image padding are quantified. The gridding excess power is measured with a foreground subtraction strategy, for which foregrounds have been subtracted using Gaussian process regression, as well as with a foreground avoidance strategy. Constructing a power spectrum that has a bias significantly lower compared to the expected EoR signals is possible with the tested methods, but requires a kernel oversampling factor > 4000 and, when using w-correction, > 500 w-quantization levels. These values are higher than typical values used for imaging, but are computationally feasible. The kernel size and padding factor parameters are less crucial. Among the tested methods, image domain gridding shows the highest accuracy with the lowest imaging time. LOFAR 21-cm power spectrum results are not affected by gridding. Image domain gridding is overall the most suitable algorithm for 21-cm EoR experiments, including for future SKA EoR analyses. Nevertheless, convolutional gridding with tuned parameters results in sufficient accuracy. This holds also for w-stacking for wide-field imaging. The w-projection algorithm is less suitable because of the kernel oversampling requirements, and a faceting approach is unsuitable due to the resulting spatial discontinuities.
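To make concrete where the kernel oversampling factor enters, here is a minimal sketch of convolutional gridding with a tabulated, oversampled kernel: the sub-cell position of each visibility is quantized to 1/oversample of a uv cell when the kernel is looked up, and that quantization is the error the oversampling factor controls. The Gaussian kernel, function names, and parameters are illustrative assumptions, not those of the imagers tested in the paper.

import numpy as np

def gaussian_kernel_table(support, oversample):
    """Tabulate a separable Gaussian gridding kernel at sub-cell steps
    of 1/oversample.  Production imagers use prolate-spheroidal or
    Kaiser-Bessel kernels; a Gaussian keeps the sketch short."""
    npts = (2 * support + 1) * oversample + 1
    x = np.linspace(-(support + 0.5), support + 0.5, npts)
    return np.exp(-0.5 * (x / (0.5 * support)) ** 2)

def grid_one(grid, vis, u, v, kern, support, oversample, du):
    """Convolve one visibility onto the uv grid using nearest-sample
    lookup in the oversampled kernel table.  Assumes the visibility
    lands away from the grid edge."""
    n = grid.shape[0]
    uc, vc = u / du + n // 2, v / du + n // 2    # continuous grid coords
    iu, iv = int(round(uc)), int(round(vc))      # nearest grid cell
    for j in range(-support, support + 1):
        off_v = j - (vc - iv)
        wv = kern[int(round((off_v + support + 0.5) * oversample))]
        for i in range(-support, support + 1):
            off_u = i - (uc - iu)
            wu = kern[int(round((off_u + support + 0.5) * oversample))]
            grid[iv + j, iu + i] += wu * wv * vis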
Observations of the EoR with the 21-cm hyperfine emission of neutral hydrogen (HI) promise to open an entirely new window onto the formation of the first stars, galaxies and accreting black holes. In order to characterize the weak 21-cm signal, we need to develop imaging techniques which can reconstruct the extended emission very precisely. Here, we present an inversion technique for LOFAR baselines at NCP, based on a Bayesian formalism with optimal spatial regularization, which is used to reconstruct the diffuse foreground map directly from the simulated visibility data. We notice the spatial regularization de-noises the images to a large extent, allowing one to recover the 21-cm power spectrum over a considerable $k_\perp$-$k_\parallel$ space in the range of $0.03\,{\rm Mpc^{-1}} < k_\perp < 0.19\,{\rm Mpc^{-1}}$ and $0.14\,{\rm Mpc^{-1}} < k_\parallel < 0.35\,{\rm Mpc^{-1}}$ without subtracting the noise power spectrum. We find that, in combination with using the GMCA, a non-parametric foreground removal technique, we can mostly recover the spherically averaged power spectrum within $2\sigma$ statistical fluctuations for an input Gaussian random rms noise level of $60\,{\rm mK}$ in the maps after 600 hrs of integration over a $10\,{\rm MHz}$ bandwidth.
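The reconstruction step described above can be pictured as a regularized linear inversion: with visibilities d = A m + n for a sky map m, a Gaussian noise model, and a Gaussian spatial prior, the maximum a posteriori map is a damped least-squares solution. The sketch below shows only that generic form; the paper's optimal choice of spatial regularization and the LOFAR-specific response matrix are not reproduced here.

import numpy as np

def map_estimate(A, d, noise_var, R):
    """MAP / regularized least-squares estimate of a sky map.

    A         : (n_vis, n_pix) instrument response (visibility = A @ map)
    d         : (n_vis,) observed complex visibilities
    noise_var : per-visibility noise variance (assumed uniform here)
    R         : (n_pix, n_pix) spatial regularization matrix, e.g. the
                inverse prior covariance of the diffuse emission

    Solves m = (A^H A / noise_var + R)^{-1} A^H d / noise_var.
    """
    AhA = A.conj().T @ A / noise_var
    rhs = A.conj().T @ d / noise_var
    return np.linalg.solve(AhA + R, rhs)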
We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple, independent, data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of imaging and power spectrum stages highlights differences in calibration, foreground subtraction and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.