We discuss absolute calibration strategies for Phase I of the Hydrogen Epoch of Reionization Array (HERA), which aims to measure the cosmological 21 cm signal from the Epoch of Reionization (EoR). HERA is a drift-scan array with a 10-degree-wide field of view, meaning bright, well-characterized point source transits are scarce. This, combined with HERA's redundant sampling of the uv plane and the modest angular resolution of the Phase I instrument, makes traditional sky-based and self-calibration techniques difficult to implement with high dynamic range. Nonetheless, in this work we demonstrate calibration for HERA using point source catalogues and electromagnetic simulations of its primary beam. We show that unmodeled diffuse flux and instrumental contaminants can corrupt the gain solutions, and present a gain smoothing approach for mitigating their impact on the 21 cm power spectrum. We also demonstrate a hybrid sky and redundant calibration scheme and compare it to pure sky-based calibration, showing only a marginal improvement to the gain solutions at intermediate delay scales. Our work suggests that the HERA Phase I system can be well-calibrated for a foreground-avoidance power spectrum estimator by applying direction-independent gains with a small set of degrees of freedom across the frequency and time axes.
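The gain-smoothing idea described above can be illustrated as low-pass filtering the per-antenna gain solutions along the frequency axis, so that narrow spectral contaminants are suppressed while the slowly varying true bandpass is retained. The following is a minimal toy sketch, not the paper's actual implementation; the Gaussian kernel, smoothing scale, and synthetic gain model are all illustrative assumptions.

```python
import numpy as np

def smooth_gains(gains, freqs, scale_mhz=2.0):
    """Low-pass filter complex gain solutions along frequency by
    convolving with a normalized Gaussian kernel (toy illustration)."""
    df = freqs[1] - freqs[0]                      # channel width (MHz)
    half = int(3 * scale_mhz / df)                # truncate kernel at 3 sigma
    x = np.arange(-half, half + 1) * df
    kernel = np.exp(-0.5 * (x / scale_mhz) ** 2)
    kernel /= kernel.sum()
    # Convolve real and imaginary parts separately; 'same' keeps the length.
    return (np.convolve(gains.real, kernel, mode="same")
            + 1j * np.convolve(gains.imag, kernel, mode="same"))

# Synthetic example: a slowly varying gain plus a fast spectral ripple
# standing in for an instrumental contaminant.
freqs = np.linspace(100.0, 200.0, 1024)               # MHz
true = 1.0 + 0.1 * np.sin(2 * np.pi * freqs / 50.0)   # smooth bandpass
ripple = 0.05 * np.sin(2 * np.pi * freqs / 1.0)       # narrow contaminant
smoothed = smooth_gains(true + ripple, freqs)
inner = slice(100, -100)                              # ignore edge effects
assert np.abs(smoothed - true)[inner].max() < 0.01    # ripple removed
```

The design trade-off is the one the abstract alludes to: a smoothing scale wide enough to reject contaminants but narrow enough not to erase real spectral structure in the instrument response.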
21 cm Epoch of Reionization observations promise to transform our understanding of galaxy formation, but these observations are impossible without unprecedented levels of instrument calibration. We present end-to-end simulations of a full EoR power spectrum (PS) analysis including all of the major components of a real data processing pipeline: models of astrophysical foregrounds and EoR signal, frequency-dependent instrument effects, sky-based antenna calibration, and the full PS analysis. This study reveals that traditional sky-based per-frequency antenna calibration can only be implemented in EoR measurement analyses if the calibration model is unrealistically accurate. For reasonable levels of catalog completeness, the calibration introduces contamination in otherwise foreground-free power spectrum modes, precluding a PS measurement. We explore the origin of this contamination and potential mitigation techniques. We show that there is a strong joint constraint on the precision of the calibration catalog and the inherent spectral smoothness of antennae, and that this has significant implications for the instrumental design of the SKA and other future EoR observatories.
In 21 cm cosmology, precision calibration is key to the separation of the neutral hydrogen signal from very bright but spectrally-smooth astrophysical foregrounds. The Hydrogen Epoch of Reionization Array (HERA), an interferometer specialized for 21 cm cosmology and now under construction in South Africa, was designed to be largely calibrated using the self-consistency of repeated measurements of the same interferometric modes. This technique, known as redundant-baseline calibration, resolves most of the internal degrees of freedom in the calibration problem. It relies, however, on antenna elements with identical primary beams placed precisely on a redundant grid. In this work, we review the detailed implementation of the algorithms enabling redundant-baseline calibration and report results with HERA data. We quantify the effects of real-world non-redundancy and how they compare to the idealized scenario in which redundant measurements differ only in their noise realizations. Finally, we study how non-redundancy can produce spurious temporal structure in our calibration solutions--both in data and in simulations--and present strategies for mitigating that structure.
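The core of redundant-baseline calibration is that baselines of identical length and orientation measure the same true visibility, so the antenna gains can be solved for from the data's internal consistency. A common linearization takes logarithms of the visibility amplitudes ("logcal"), turning the problem into linear least squares. The sketch below is a toy amplitude-only version for a 1D array, not the HERA pipeline's algorithm; the array layout, gain model, and gauge choice are illustrative assumptions.

```python
import numpy as np

# Toy 1D array: 5 antennas on a line, so baselines of equal separation are
# redundant -- each measures the same true visibility amplitude:
#   |v_ij| = |g_i| |g_j| |V_sep|.
# Taking logs makes this linear in ln|g_i| and ln|V_sep| ("logcal").
rng = np.random.default_rng(0)
nant = 5
log_g = rng.normal(0, 0.1, nant)
log_g -= log_g.mean()                # put the truth in the sum(ln|g|)=0 gauge
gains = np.exp(log_g)                # true antenna gain amplitudes
vis = np.abs(rng.normal(1.0, 0.3, nant - 1))  # one amplitude per baseline type

rows, rhs = [], []
for i in range(nant):
    for j in range(i + 1, nant):
        sep = j - i
        row = np.zeros(2 * nant - 1)
        row[i] = 1.0                           # coefficient of ln|g_i|
        row[j] = 1.0                           # coefficient of ln|g_j|
        row[nant + sep - 1] = 1.0              # coefficient of ln|V_sep|
        rows.append(row)
        rhs.append(np.log(gains[i] * gains[j] * vis[sep - 1]))

# Fix the overall-amplitude degeneracy with the gauge constraint sum(ln|g|)=0.
gauge = np.zeros(2 * nant - 1)
gauge[:nant] = 1.0
A = np.vstack(rows + [gauge])
b = np.array(rhs + [0.0])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(np.exp(sol[:nant]), gains)  # gains recovered (noiseless)
assert np.allclose(np.exp(sol[nant:]), vis)    # true visibilities recovered
```

The explicit gauge constraint shows the "internal degrees of freedom" point from the abstract: without external information, an overall amplitude (and, in the phase problem, position-related terms) remains undetermined and must be fixed separately.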
We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of imaging and power spectrum stages highlights differences in calibration, foreground subtraction and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.
We describe the validation of the HERA Phase I software pipeline by a series of modular tests, building up to an end-to-end simulation. The philosophy of this approach is to validate the software and algorithms used in the Phase I upper limit analysis on wholly synthetic data satisfying the assumptions of that analysis, not addressing whether the actual data meet these assumptions. We discuss the organization of this validation approach, the specific modular tests performed, and the construction of the end-to-end simulations. We explicitly discuss the limitations in scope of the current simulation effort. With mock visibility data generated from a known analytic power spectrum and a wide range of realistic instrumental effects and foregrounds, we demonstrate that the current pipeline produces power spectrum estimates that are consistent with known analytic inputs to within thermal noise levels (at the 2 sigma level) for k > 0.2 h/Mpc for both bands and fields considered. Our input spectrum is intentionally amplified to enable a strong `detection' at k ~ 0.2 h/Mpc -- at the level of ~25 sigma -- with foregrounds dominating on larger scales, and thermal noise dominating at smaller scales. Our pipeline is able to detect this amplified input signal after suppressing foregrounds with a dynamic range (foreground to noise ratio) of > 10^7. Our validation test suite uncovered several sources of scale-independent signal loss throughout the pipeline, whose amplitude is well-characterized and accounted for in the final estimates. We conclude with a discussion of the steps required for the next round of data analysis.
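The consistency criterion used in such a validation can be phrased as a simple pass/fail test: in the foreground-free modes (here k > 0.2 h/Mpc), the recovered band powers should deviate from the known analytic input by no more than twice the thermal-noise uncertainty. A schematic version of that check follows; the function name and the toy spectrum and error bars are illustrative, not the paper's actual test suite.

```python
import numpy as np

def consistent_within_2sigma(k, p_recovered, p_input, noise_sigma, k_min=0.2):
    """Check that recovered band powers match the analytic input to within
    2x the thermal-noise uncertainty for all modes with k > k_min (h/Mpc)."""
    sel = k > k_min
    return bool(np.all(np.abs(p_recovered[sel] - p_input[sel])
                       <= 2.0 * noise_sigma[sel]))

k = np.linspace(0.05, 1.0, 20)         # h/Mpc
p_in = 1e5 * k ** -2.0                 # toy amplified input spectrum
sigma = 0.05 * p_in                    # toy thermal-noise error bars
assert consistent_within_2sigma(k, 1.05 * p_in, p_in, sigma)       # 1-sigma off
assert not consistent_within_2sigma(k, 1.20 * p_in, p_in, sigma)   # 4-sigma off
```

Restricting the test to k > k_min reflects the foreground-avoidance strategy: on larger scales the foregrounds dominate, so agreement with the input is not expected there.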
Measurements of the Epoch of Reionization (EoR) 21-cm signal hold the potential to constrain models of reionization. In this paper we consider a reionization model with three astrophysical parameters, namely (1) the minimum halo mass which can host ionizing sources, $M_{\rm min}$, (2) the number of ionizing photons escaping into the IGM per baryon within the halo, $N_{\rm ion}$, and (3) the mean free path of the ionizing photons within the IGM, $R_{\rm mfp}$. We predict the accuracy with which these parameters can be measured from future observations of the 21-cm power spectrum (PS) using the upcoming SKA-Low. Unlike several earlier works, we account for the non-Gaussianity of the inherent EoR 21-cm signal. Considering cosmic variance only and assuming that foregrounds are completely removed, we find that non-Gaussianity increases the volume of the $1\sigma$ error ellipsoid of the parameters by a factor of $133$ relative to the Gaussian predictions; the orientation is also different. The ratio of the volume of error ellipsoids is $1.65$ and $2.67$ for observation times of $1024$ and $10000$ hours respectively, when all the $\mathbf{k}$ modes within the foreground wedge are excluded. With the foreground wedge excluded and for $1024$ hours, the 1D marginalized errors are $(\Delta M_{\rm min}/M_{\rm min}, \Delta N_{\rm ion}/N_{\rm ion}, \Delta R_{\rm mfp}/R_{\rm mfp}) = (6.54, 2.71, 7.75) \times 10^{-2}$, which are respectively $2\%$, $5\%$ and $23\%$ larger than the respective Gaussian predictions. The impact of non-Gaussianity increases for longer observations, and it is particularly important for $R_{\rm mfp}$.
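The ellipsoid-volume ratios quoted above compare determinants of parameter covariance matrices: for an n-parameter Gaussian posterior, the 1-sigma error ellipsoid has volume proportional to the square root of the covariance determinant. The sketch below demonstrates that scaling with hypothetical covariances; the diagonal matrices are illustrative stand-ins, not the paper's actual Fisher forecasts.

```python
import numpy as np
from math import gamma, pi

def ellipsoid_volume(cov, nsigma=1.0):
    """Volume of the n-sigma error ellipsoid of an n-parameter Gaussian:
    V = V_n * nsigma**n * sqrt(det C), with V_n the unit n-ball volume."""
    n = cov.shape[0]
    v_n = pi ** (n / 2) / gamma(n / 2 + 1)    # unit n-ball volume
    return v_n * nsigma ** n * np.sqrt(np.linalg.det(cov))

# Toy 3-parameter covariances standing in for (M_min, N_ion, R_mfp) errors.
cov_gauss = np.diag([1.0, 2.0, 3.0])
# Scaling a 3x3 covariance by c multiplies the volume by c**(3/2), so
# c = 1.65**(2/3) reproduces the quoted 1.65x volume inflation.
cov_nongauss = 1.65 ** (2.0 / 3.0) * cov_gauss
ratio = ellipsoid_volume(cov_nongauss) / ellipsoid_volume(cov_gauss)
assert np.isclose(ratio, 1.65)
```

This is why the abstract can report a single volume ratio even though the non-Gaussian covariance also rotates the ellipsoid: the determinant captures the volume change independently of orientation.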