
The LOFAR EoR Data Model: (I) Effects of Noise and Instrumental Corruptions on the 21-cm Reionization Signal-Extraction Strategy

Published by: Panagiotis Labropoulos
Publication date: 2009
Research field: Physics
Paper language: English





A number of experiments are set to measure the 21-cm signal of neutral hydrogen from the Epoch of Reionization (EoR). The common denominator of these experiments is the large data sets they produce, contaminated by various instrumental effects, ionospheric distortions, RFI and strong Galactic and extragalactic foregrounds. In this paper, the first in a series, we present the Data Model that will be the basis of the signal analysis for the LOFAR (Low Frequency Array) EoR Key Science Project (LOFAR EoR KSP). Using this data model we simulate realistic visibility data sets over a wide frequency band, properly taking into account all currently known instrumental corruptions (e.g. direction-dependent gains, complex gains, polarization effects, noise, etc.). We then apply primary calibration errors to the data in a statistical sense, assuming that the calibration errors are random Gaussian variates at a level consistent with our current knowledge based on observations with LOFAR Core Station 1. Our aim is to demonstrate how the systematics of an interferometric measurement affect the quality of the calibrated data, how errors correlate and propagate, and, in the long run, how this can lead to new calibration strategies. We present the results of these simulations, of the inversion process and of the extraction procedure. We also discuss some general properties of the coherency matrix and the Jones formalism that might prove useful in solving the calibration problem of aperture synthesis arrays. We conclude that even in the presence of realistic noise and instrumental errors, the statistical signature of the EoR signal can be detected by LOFAR with relatively small errors. A detailed study of the statistical properties of our data model and more complex instrumental models will be considered in future papers.
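To make the formalism above concrete, the sketch below builds 2x2 coherency (visibility) matrices through the Jones formalism, perturbs the station gains with random Gaussian calibration errors, and applies the imperfect calibration, mirroring in miniature the statistical experiment the abstract describes. It is a minimal illustration rather than the LOFAR pipeline; the source brightness, gain spread, error level, and station count are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def rime_visibility(B, J_p, J_q):
    """Measurement equation for baseline (p, q): V_pq = J_p B J_q^H."""
    return J_p @ B @ J_q.conj().T

# Coherency matrix of a single unpolarized 1 Jy source (assumed).
B = 0.5 * np.eye(2, dtype=complex)

# True station Jones matrices: simple diagonal complex gains (assumed 10% spread).
n_stations = 4
g = 1.0 + 0.1 * (rng.standard_normal(n_stations) + 1j * rng.standard_normal(n_stations))
J_true = [gi * np.eye(2, dtype=complex) for gi in g]

# Calibration solutions carrying random Gaussian errors (illustrative 2% level).
sigma_cal = 0.02
J_est = [J * (1.0 + sigma_cal * (rng.standard_normal() + 1j * rng.standard_normal()))
         for J in J_true]

# Simulate noisy, corrupted visibilities, then correct with the imperfect solutions.
sigma_n = 0.01  # thermal noise per complex component [Jy] (assumed)
for p in range(n_stations):
    for q in range(p + 1, n_stations):
        noise = sigma_n * (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)))
        V_obs = rime_visibility(B, J_true[p], J_true[q]) + noise
        V_cal = np.linalg.inv(J_est[p]) @ V_obs @ np.linalg.inv(J_est[q]).conj().T
        # The residual V_cal - B carries the propagated calibration error and noise.
```

Repeating this over frequency channels and many error realizations is what allows the correlation and propagation of errors to be studied statistically.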




Read also

When using valid foreground and signal models, the uncertainties on extracted signals in global 21-cm signal experiments depend principally on the overlap between signal and foreground models. In this paper, we investigate two strategies for decreasing this overlap: (i) utilizing time dependence by fitting multiple drift-scan spectra simultaneously and (ii) measuring all four Stokes parameters instead of only the total power, Stokes I. Although measuring polarization requires different instruments than are used in most existing experiments, all existing experiments can utilize drift-scan measurements merely by averaging their data differently. In order to evaluate the increase in constraining power from using these two techniques, we define a method for connecting Root-Mean-Square (RMS) uncertainties to probabilistic confidence levels. Employing simulations, we find that fitting only one total power spectrum leads to RMS uncertainties at the few K level, while fitting multiple time-binned, drift-scan spectra yields uncertainties at the $\lesssim 10$ mK level. This significant improvement only appears if the spectra are modeled with one set of basis vectors, instead of using multiple sets of basis vectors that independently model each spectrum. Assuming that they are simulated accurately, measuring all four Stokes parameters also leads to lower uncertainties. These two strategies can be employed simultaneously, and fitting multiple time bins of all four Stokes parameters yields the best precision measurements of the 21-cm signal, approaching the noise level in the data.
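A deliberately simplified stand-in for that joint fit is sketched below: several simulated drift-scan spectra are fitted at once, each time bin keeping its own log-polynomial foreground coefficients while a single 21-cm template amplitude is shared across bins. The band, basis, signal shape, and noise level are all hypothetical, and the paper's full basis-vector machinery is reduced here to one shared template.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(40e6, 120e6, 256)                    # channels [Hz] (assumed)
F = np.vander(np.log(nu / 75e6), 5, increasing=True)  # log-polynomial FG basis
template = -np.exp(-0.5 * ((nu - 78e6) / 6e6) ** 2)   # toy trough, 1 K deep

n_bins, n_chan, n_fg = 8, nu.size, 5
true_amp = 0.2                                        # true signal amplitude [K]
spectra = []
for _ in range(n_bins):
    w = rng.normal([2500.0, -8.0, 3.0, -1.0, 0.5], 0.1)  # per-bin FG weights
    spectra.append(F @ w + true_amp * template + rng.normal(0, 0.02, n_chan))

# Joint design matrix: block-diagonal foreground columns, one shared signal column.
A = np.zeros((n_bins * n_chan, n_bins * n_fg + 1))
for b in range(n_bins):
    A[b * n_chan:(b + 1) * n_chan, b * n_fg:(b + 1) * n_fg] = F
A[:, -1] = np.tile(template, n_bins)

coef, *_ = np.linalg.lstsq(A, np.concatenate(spectra), rcond=None)
print(f"recovered amplitude: {coef[-1]:.3f} K (true {true_amp} K)")
```

Fitting the bins independently instead would give each spectrum its own signal column, and the per-bin signal-foreground degeneracy would inflate the uncertainty, which is the effect the abstract describes.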
We present the completion of a data analysis pipeline that self-consistently separates global 21-cm signals from large systematics using a pattern recognition technique. In the first paper of this series, we obtain optimal basis vectors from signal and foreground training sets to linearly fit both components with the minimal number of terms that best extracts the signal given its overlap with the foreground. In this second paper, we utilize the spectral constraints derived in the first paper to calculate the full posterior probability distribution of any signal parameter space of choice. The spectral fit provides the starting point for a Markov Chain Monte Carlo (MCMC) engine that samples the signal without traversing the foreground parameter space. At each MCMC step, we marginalize over the weights of all linear foreground modes and suppress those with unimportant variations by applying priors gleaned from the training set. This method drastically reduces the number of MCMC parameters, augmenting the efficiency of exploration, circumvents the need for selecting a minimal number of foreground modes, and allows the complexity of the foreground model to be greatly increased to simultaneously describe many observed spectra without requiring extra MCMC parameters. Using two nonlinear signal models, one based on EDGES observations and the other on phenomenological frequencies and temperatures of theoretically expected extrema, we demonstrate the success of this methodology by recovering the input parameters from multiple randomly simulated signals at low radio frequencies (10-200 MHz), while rigorously accounting for realistically modeled beam-weighted foregrounds.
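The analytic marginalization over linear foreground weights can be written in closed form using the Woodbury identity and the matrix determinant lemma, which is what keeps the MCMC dimensionality low. The sketch below, assuming a diagonal noise covariance and hypothetical argument names, evaluates that marginalized Gaussian log-likelihood; an MCMC engine would call it once per proposed nonlinear signal parameter vector.

```python
import numpy as np

def marginalized_loglike(d, s_theta, F, N_diag, prior_var):
    """log L(theta) with foreground mode weights a ~ N(0, diag(prior_var))
    integrated out analytically:  d - s(theta) ~ N(0, C),
    where C = N + F diag(prior_var) F^T, handled via Woodbury for speed.
    d: data (n,); s_theta: signal model (n,); F: foreground modes (n, m);
    N_diag: noise variances (n,); prior_var: prior mode variances (m,)."""
    r = d - s_theta
    Ninv_r = r / N_diag
    Ninv_F = F / N_diag[:, None]
    M = np.diag(1.0 / prior_var) + F.T @ Ninv_F   # small (m, m) system
    b = F.T @ Ninv_r
    chi2 = r @ Ninv_r - b @ np.linalg.solve(M, b)
    _, logdet_M = np.linalg.slogdet(M)
    logdet_C = np.log(N_diag).sum() + np.log(prior_var).sum() + logdet_M
    return -0.5 * (chi2 + logdet_C + d.size * np.log(2.0 * np.pi))
```

The prior variances here play the role of the training-set priors in the abstract: modes with unimportant variation receive small prior_var entries and are effectively suppressed without being removed from the model.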
The Epoch of Reionization (EoR) is an uncharted era in our Universe's history during which the birth of the first stars and galaxies led to the ionization of neutral hydrogen in the intergalactic medium. There are many experiments investigating the EoR by tracing the 21-cm line of neutral hydrogen. Because this signal is very faint and difficult to isolate, it is crucial to develop analysis techniques that maximize sensitivity and suppress contaminants in data. It is also imperative to understand the trade-offs between different analysis methods and their effects on power spectrum estimates. Specifically, with a statistical power spectrum detection in HERA's foreseeable future, it has become increasingly important to understand how certain analysis choices can lead to the loss of the EoR signal. In this paper, we focus on signal loss associated with power spectrum estimation. We describe the origin of this loss using both toy models and data taken by the 64-element configuration of the Donald C. Backer Precision Array for Probing the Epoch of Reionization (PAPER). In particular, we highlight how detailed investigations of signal loss have led to a revised, higher 21-cm power spectrum upper limit from PAPER-64. Additionally, we summarize errors associated with power spectrum error estimation that were previously unaccounted for. We focus on a subset of PAPER-64 data in this paper; revised power spectrum limits from the PAPER experiment are presented in a forthcoming paper by Kolopanis et al. (in prep.) and supersede results from previously published PAPER analyses.
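A toy model makes the origin of this loss easy to see: when the weighting matrix is the inverse of a covariance estimated from the data themselves, the estimator partially (here, almost entirely) subtracts whatever coherent signal the data contain. The sketch below is an exaggerated illustration, not the PAPER analysis, and quantifies the loss with the same kind of signal-injection test the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_int = 32, 100                        # channels, integrations (assumed)

noise = rng.standard_normal((n_chan, n_int))
inject = 0.5 * rng.standard_normal((n_chan, n_int))   # mock "EoR" injection

def power_with_empirical_weighting(x):
    """Toy quadratic statistic: weight by the inverse of a covariance
    estimated from x itself -- the step responsible for signal loss."""
    C = np.cov(x)                              # empirical covariance includes signal
    R = np.linalg.inv(C + 1e-3 * np.eye(n_chan))   # regularized inverse
    return np.mean((R @ x) * x)

p_noise = power_with_empirical_weighting(noise)
p_total = power_with_empirical_weighting(noise + inject)
p_in = np.mean(inject * inject)

# With a fixed, signal-free covariance this ratio would be near 1; empirical
# weighting whitens the injected power away, so it falls far below 1.
print("recovered / injected:", (p_total - p_noise) / p_in)
```

Correcting limits for this effect, injection by injection, is what leads to the revised, higher PAPER-64 upper limit mentioned above.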
LOFAR is a new and innovative effort to build a radio telescope operating in the multi-meter wavelength spectral window. One of the most exciting applications of LOFAR will be the search for redshifted 21-cm line emission from the Epoch of Reionization (EoR). It is currently believed that the Dark Ages, the period after recombination when the Universe turned neutral, lasted until the Universe was around 400 million years old. During the EoR, objects started to form in the early Universe, and they were energetic enough to ionize neutral hydrogen. The precision and accuracy required to achieve this scientific goal essentially translate into accumulating large amounts of data. The data model describing the response of the LOFAR telescope to the intensity distribution of the sky is characterized by the non-linearity of the parameters and the large level of noise compared to the desired cosmological signal. In this poster, we present the implementation of a statistically optimal map-making process and its properties. The basic assumptions of this method are that the noise is Gaussian and independent between the stations and frequency channels, and that the dynamic range of the data can be enhanced significantly during the off-line LOFAR processing. These assumptions match our expectations for the LOFAR Epoch of Reionization Experiment.
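Under those assumptions (Gaussian noise, independent between stations and frequency channels, i.e. a diagonal noise covariance N), the statistically optimal map is the generalized least-squares solution m = (A^T N^-1 A)^-1 A^T N^-1 d. A minimal sketch, with a hypothetical real-valued response matrix standing in for the LOFAR response:

```python
import numpy as np

rng = np.random.default_rng(7)
n_vis, n_pix = 500, 64                        # measurements, sky pixels (assumed)

A = rng.standard_normal((n_vis, n_pix))       # toy linear response matrix
sky = rng.standard_normal(n_pix)              # true sky pixel values

# Independent Gaussian noise per measurement -> diagonal covariance N.
sigma = 5.0 * np.ones(n_vis)
d = A @ sky + sigma * rng.standard_normal(n_vis)

# GLS / maximum-likelihood map:  m = (A^T N^-1 A)^-1 A^T N^-1 d
Ninv_A = A / sigma[:, None] ** 2
lhs = A.T @ Ninv_A                            # A^T N^-1 A
m = np.linalg.solve(lhs, Ninv_A.T @ d)

# np.linalg.inv(lhs) is also the covariance of the map estimate,
# which downstream EoR statistics need in order to propagate errors.
```

The non-linearity mentioned in the abstract enters through the calibration parameters inside A; once those are solved for, the map-making step itself is linear, as above.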
The relative velocity between baryons and dark matter in the early Universe can suppress the formation of small-scale baryonic structure and leave an imprint on the baryon acoustic oscillation (BAO) scale at low redshifts after reionization. This streaming velocity affects the post-reionization gas distribution by directly reducing the abundance of pre-existing mini-halos ($\lesssim 10^7\,M_\odot$) that could be destroyed by reionization and indirectly modulating reionization history via photoionization within these mini-halos. In this work, we investigate the effect of streaming velocity on the BAO feature in HI 21 cm intensity mapping after reionization, with a focus on redshifts $3.5 \lesssim z \lesssim 5.5$. We build a spatially modulated halo model that includes the dependence of the filtering mass on the local reionization redshift and thermal history of the intergalactic gas. In our fiducial model, we find isotropic streaming velocity bias coefficients $b_v$ ranging from $-0.0033$ at $z=3.5$ to $-0.0248$ at $z=5.5$, which indicates that the BAO scale is stretched (i.e., the peaks shift to lower $k$). In particular, streaming velocity shifts the transverse BAO scale between 0.087% ($z=3.5$) and 0.37% ($z=5.5$) and shifts the radial BAO scale between 0.13% ($z=3.5$) and 0.52% ($z=5.5$). These shifts exceed the projected error bars from the more ambitious proposed hemispherical-scale surveys in HI (0.13% at $1\sigma$ per $\Delta z = 0.5$ bin).
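For orientation, one common parametrization of the streaming-velocity term in the literature (the paper's spatially modulated halo model is considerably more detailed) adds a quadratic velocity bias to the tracer overdensity:

```latex
% Schematic bias expansion with a streaming-velocity term; higher-order
% terms and redshift-space effects are omitted.
\delta_{\rm HI}(\mathbf{x}) \simeq b\,\delta_m(\mathbf{x})
  + b_v\!\left[\frac{v_{bc}^2(\mathbf{x})}{\langle v_{bc}^2\rangle} - 1\right]
```

Because the relative-velocity field carries acoustic oscillations out of phase with those in the matter density, any nonzero $b_v$ perturbs the apparent BAO scale; the negative coefficients quoted above correspond to a stretched scale, i.e. peaks shifted to lower $k$.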