We explore the impact of physically motivated ionisation and recombination rates on the history and topology of cosmic reionisation, by incorporating inputs from small-volume hydrodynamic simulations into a semi-numerical code, SimFast21, that evolves reionisation on large scales. We employ radiative hydrodynamic simulations to parameterise the ionisation rate Rion and recombination rate Rrec as functions of halo mass, overdensity and redshift. We find that Rion depends super-linearly on halo mass (Rion ~ Mh^1.41), in contrast to previous assumptions. We implement these scalings into SimFast21 to identify the ionised regions. We tune our models to be consistent with recent observations of the optical depth, ionising emissivity, and neutral fraction by the end of reionisation. We require an average photon escape fraction fesc = 0.04 within ~0.5 cMpc cells, independent of halo mass or redshift, to simultaneously match these data. We present predictions for the 21cm power spectrum, and show that it is converged with respect to simulation volume. We find that introducing super-linearly mass-dependent ionisations increases the duration of reionisation and boosts the small-scale 21cm power by a factor of ~2-3 at intermediate phases of reionisation. Introducing inhomogeneous recombinations reduces ionised bubble sizes and suppresses the large-scale 21cm power by a factor of ~2-3. Moreover, gas clumping on sub-cell scales has a minimal effect on the 21cm power, indicating that robust predictions do not depend on the behaviour of kpc-scale structures. The super-linear ionisations significantly increase the median halo mass scale for ionising photon output to >10^10 Msun, giving greater hope for detecting most of the ionising sources with next-generation facilities. These results highlight the importance of more accurately treating ionising sources and recombinations when modelling reionisation and its 21cm signal.
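The super-linear scaling Rion ~ Mh^1.41 can be sketched as follows. The normalisation A is a placeholder, and the paper's additional overdensity and redshift dependence is omitted; only the exponent comes from the abstract.

```python
import numpy as np

def ionisation_rate(M_h, A=1.0, alpha=1.41):
    """Super-linear ionising photon output rate, R_ion ∝ M_h^alpha.

    A is a hypothetical normalisation; the full fit also depends on
    overdensity and redshift, which this sketch omits.
    """
    return A * np.asarray(M_h) ** alpha

# Super-linearity: doubling the halo mass more than doubles R_ion,
# shifting the photon budget towards massive (> 10^10 Msun) haloes.
ratio = ionisation_rate(2e10) / ionisation_rate(1e10)  # 2^1.41 ≈ 2.66
```

This is why the median halo mass for ionising output rises: with alpha > 1, the contribution per unit halo-mass interval is weighted towards the high-mass end of the mass function.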
We explore methods for robust estimation of the 21 cm signal from the Epoch of Reionisation (EoR). A Kernel Density Estimator (KDE) is introduced for measuring the spatial temperature fluctuation power spectrum from the EoR. The KDE estimates the underlying probability distribution function of fluctuations as a function of spatial scale, and is subject to different systematic biases and errors than the typical approach to estimating the fluctuation power spectrum. Extracting histograms of visibilities allows a moments analysis to be used to discriminate foregrounds from the 21 cm signal and thermal noise. We use the information available in the histograms, along with the statistical dissimilarity of foregrounds between two independent observing fields, to robustly separate foregrounds from the cosmological signal, while making no assumptions about the Gaussianity of the signal. Using two independent observing fields to discriminate signal from foregrounds is crucial for the analysis presented in this paper. We apply these techniques to 13 hours of Murchison Widefield Array (MWA) EoR data over two observing fields. We compare the output to that obtained with a comparable power spectrum estimation method, and demonstrate the reduced foreground contamination of this approach. Using the second moment obtained directly from the KDE distribution functions yields a factor of 2-3 improvement in power for k < 0.3 h Mpc^{-1} compared with a matched delay-space power estimator, while weighting the data by additional statistics does not offer significant improvement beyond that available from thermal noise-only weights.
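The core idea, a KDE of binned fluctuations whose second moment plays the role of the power at that scale, can be sketched with synthetic data. The Gaussian toy samples, bandwidth choice, and integration grid are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Stand-in for visibility fluctuations binned at one spatial scale:
# a Gaussian "signal + thermal noise" draw (real data would be MWA
# visibilities; foregrounds would add non-Gaussian tails).
samples = rng.normal(0.0, 2.0, 5000)

# Estimate the underlying PDF of the fluctuations with a Gaussian KDE.
kde = gaussian_kde(samples)
x = np.linspace(-10.0, 10.0, 2001)
pdf = kde(x)

# The second moment of the KDE-estimated PDF is the variance of the
# fluctuations at this scale, i.e. the power-spectrum-like quantity.
dx = x[1] - x[0]
second_moment = np.sum(x**2 * pdf) * dx
```

Higher moments of the same estimated PDF (skewness, kurtosis) are what allow non-Gaussian foregrounds to be discriminated from a near-Gaussian cosmological signal plus noise.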
Emulation of the Global (sky-averaged) 21-cm signal from the Cosmic Dawn and Epoch of Reionization with neural networks has been shown to be an essential tool for physical signal modelling. In this paper we present globalemu, a Global 21-cm signal emulator that uses redshift as a character-defining variable alongside a set of astrophysical parameters to estimate the brightness temperature of the 21-cm signal. Combined with a physically motivated pre-processing of the data, this makes for a reliable and fast emulator that is relatively insensitive to the neural network design. A single high-resolution signal can be emulated in 1.3 ms with globalemu, compared to 133 ms, a factor of 102 improvement, with the existing public state-of-the-art emulator 21cmGEM evaluated on the same computing power. We illustrate, with the same training and test data used for 21cmGEM, that globalemu is almost twice as accurate as 21cmGEM: for 95% of models in a test set of ≈1,700 we achieve an RMSE of ≤5.37 mK and a mean RMSE of 2.52 mK across the band z = 7-28 (approximately 10% of the expected noise of 25 mK for the Radio Experiment for the Analysis of Cosmic Hydrogen (REACH)). Further, globalemu provides a flexible framework in which the neutral fraction history and Global signal models with updated astrophysics can be emulated easily. The emulator is pip installable and available at: https://github.com/htjb/globalemu. globalemu will be used by the REACH collaboration to perform physical signal modelling inside a Bayesian nested sampling loop.
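The design choice of taking redshift as a network input alongside the astrophysical parameters (rather than predicting the whole signal vector at once) can be sketched with a toy scikit-learn regressor. The single parameter theta, the Gaussian "absorption trough" target, and the network size are all invented for illustration; this is not globalemu's API or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic training set: one astrophysical parameter theta and a
# redshift z as inputs, brightness temperature T_b (mK) as the target.
theta = rng.uniform(0.0, 1.0, 4000)
z = rng.uniform(7.0, 28.0, 4000)
# Toy "signal": an absorption trough whose depth scales with theta.
T_b = -150.0 * theta * np.exp(-0.5 * ((z - 17.0) / 3.0) ** 2)

# Normalise redshift into [0, 1] before training (a simple stand-in
# for the paper's physically motivated pre-processing).
X = np.column_stack([theta, (z - 7.0) / 21.0])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, T_b)

# Emulate one full signal by sweeping redshift at fixed theta = 0.5.
z_grid = np.linspace(7.0, 28.0, 100)
X_grid = np.column_stack([np.full_like(z_grid, 0.5), (z_grid - 7.0) / 21.0])
signal = net.predict(X_grid)
```

Because redshift is just another input feature, the network stays small and the output resolution in z is arbitrary, which is what makes per-signal evaluation fast.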
The 21-cm signal of neutral hydrogen is a sensitive probe of the Epoch of Reionization (EoR) and Cosmic Dawn. Currently operating radio telescopes have ushered in a data-driven era of 21-cm cosmology, providing the first constraints on the astrophysical properties of the sources that drive this signal. However, extracting astrophysical information from the data is highly non-trivial and requires the rapid generation of theoretical templates over a wide range of astrophysical parameters. To this end, emulators are often employed, with previous efforts focused on predicting the power spectrum. In this work we introduce 21cmGEM, the first emulator of the global 21-cm signal from the Cosmic Dawn and the EoR. The smoothness of the output signal is guaranteed by design. We train neural networks to predict the cosmological signal using a database of ~30,000 simulated signals created by varying seven astrophysical parameters: the star formation efficiency; the minimal mass of star-forming halos; the efficiency of the first X-ray sources and their spectrum, parameterized by a spectral index and a low-energy cutoff; the mean free path of ionizing photons; and the CMB optical depth. We test the performance on a set of ~2,000 simulated signals, showing that the relative error in the prediction has an r.m.s. of 0.0159. The algorithm is efficient, with a running time of 0.16 s per parameter set. Finally, we use the database of models to check the robustness of relations between the features of the global signal and the astrophysical parameters that we previously reported.
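The quoted accuracy figure, an r.m.s. of the relative prediction error over the test set, is computed as below. The arrays here are synthetic placeholders standing in for the true and emulated signal vectors; only the metric definition is being illustrated.

```python
import numpy as np

rng = np.random.default_rng(2)
# Placeholder "true" and "emulated" test-set signals (2000 models,
# each sampled at 50 redshifts); real ones come from the simulation
# database and the trained network, respectively.
true = rng.normal(-100.0, 30.0, (2000, 50))
pred = true * (1.0 + 0.015 * rng.standard_normal(true.shape))

# r.m.s. of the relative error over all models and redshift samples.
rel_err = (pred - true) / true
rms_rel_err = np.sqrt(np.mean(rel_err**2))   # ≈ 0.015 by construction here
```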
The 21-cm intensity mapping (IM) of neutral hydrogen (HI) is a promising tool for probing large-scale structure. Sky maps of 21-cm intensity can be highly contaminated by different foregrounds, such as Galactic synchrotron radiation, free-free emission, extragalactic point sources, and atmospheric noise. Here we present a model of the foreground components and a removal method, in particular to quantify the potential of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) for measuring HI IM. We consider one year of observing time over a survey area of 20,000 deg^2 to capture significant variations of the foregrounds across both sky position and angular scale relative to the HI signal. We first simulate the observed sky and then employ the Principal Component Analysis (PCA) foreground separation technique. We show that, including the different foregrounds and the thermal and 1/f noise, the standard deviation between the reconstructed 21-cm IM map and the input pure 21-cm signal is ΔT = 0.034 mK, which is well under control. The eigenmode-based analysis shows that the underlying HI eigenmode is at just below the 1 per cent level of the total sky components. By subtracting the PCA-cleaned foreground+noise map from the total map, we show that the PCA method can recover the HI power spectra for FAST with high accuracy.
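PCA foreground separation exploits the fact that foregrounds are spectrally smooth, so they occupy the few largest eigenmodes of the frequency-frequency covariance. A minimal sketch, with invented power-law foregrounds, a white Gaussian stand-in for the HI signal, and an arbitrary choice of three removed modes:

```python
import numpy as np

rng = np.random.default_rng(3)
n_freq, n_pix = 64, 2048
nu = np.linspace(1.0, 1.2, n_freq)          # normalised frequency axis

# Two spectrally smooth, bright foreground components (illustrative
# amplitudes and spectral indices), plus a faint HI-like signal.
foreground = (np.outer(nu**-2.7, 1e3 + 1e2 * rng.standard_normal(n_pix))
              + np.outer(nu**-2.1, 1e2 * rng.standard_normal(n_pix)))
signal = rng.normal(0.0, 1.0, (n_freq, n_pix))
sky = foreground + signal

# PCA along the frequency axis: the largest eigenmodes of the
# frequency-frequency covariance absorb the smooth foregrounds.
cov = sky @ sky.T / n_pix
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
modes = eigvecs[:, -3:]                      # keep 3 modes to subtract
cleaned = sky - modes @ (modes.T @ sky)      # project them out
```

The number of removed modes is a tuning choice: too few leaves foreground residuals, too many starts removing the HI signal itself, which is the usual trade-off in eigenmode-based cleaning.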
The upcoming Ooty Wide Field Array (OWFA) will operate at 326.5 MHz, which corresponds to the redshifted 21-cm signal from neutral hydrogen (HI) at z = 3.35. We present two different prescriptions to simulate this signal and calculate the visibilities expected in radio-interferometric observations with OWFA. In the first method we use an input model for the expected 21-cm power spectrum to directly simulate different random realizations of the brightness temperature fluctuations and calculate the visibilities. This method, which models the HI signal entirely as diffuse radiation, is completely oblivious to the discrete nature of the astrophysical sources that host the HI. While each discrete source subtends an angle much smaller than OWFA's angular resolution, the velocity structure of the HI inside the individual sources is well within reach of OWFA's frequency resolution, and this is expected to have an impact on the observed HI signal. The second prescription is based on cosmological N-body simulations. Here we identify each simulation particle with a source that hosts the HI, and we have the freedom to implement any desired line profile for the HI emission from the individual sources. Implementing a simple model for the line profile, we have generated several random realizations of the complex visibilities. Correlations between the visibilities measured at different baselines and channels provide a unique method to quantify the statistical properties of the HI signal. We have used this to quantify the results of our simulations, and we explore the relation between the expected visibility correlations and the underlying HI power spectrum.
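The first prescription, and the link between visibility correlations and the power spectrum, can be sketched in 2D: draw a Gaussian random brightness field from an input spectrum, note that visibilities sample its Fourier transform at the baselines, and check that the mean squared visibility in an annulus of baselines recovers the input P(k). The flat-sky geometry, grid units, and the P(k) ∝ k^-2 input model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
# Random realisation of brightness-temperature fluctuations with an
# input power-law spectrum P(k) ∝ k^-2 (illustrative, not the OWFA model).
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = np.inf                       # zero out the mean (DC) mode
amp = k**-1.0                          # sqrt of P(k) ∝ k^-2
field = np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))).real

# Visibilities sample the Fourier transform of the sky at the baselines,
# so <V V*> at baseline |U| ~ k estimates the power spectrum P(k).
vis = np.fft.fft2(field)
power = np.abs(vis) ** 2

def ring_mean(arr, kgrid, k0, dk=0.015):
    """Average |V|^2 over a thin annulus of baselines with |U| ~ k0."""
    m = (kgrid > k0 - dk) & (kgrid < k0 + dk)
    return arr[m].mean()

# For P(k) ∝ k^-2, halving the scale quarters the power: ratio ~ 4.
ratio = ring_mean(power, k, 0.10) / ring_mean(power, k, 0.20)
```

Correlating visibilities from different baselines and channels, rather than squaring a single one, is what removes the noise bias in a real estimator, a refinement this toy self-correlation omits.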