Deep Learning for Line Intensity Mapping Observations: Information Extraction from Noisy Maps

Published by: Kana Moriwaki
Publication date: 2020
Research field: Physics
Language: English

Line intensity mapping (LIM) is a promising observational method to probe large-scale fluctuations of line emission from distant galaxies. Data from wide-field LIM observations allow us to study the large-scale structure of the universe as well as galaxy populations and their evolution. A serious problem with LIM is contamination by foreground/background sources and various noise contributions. We develop conditional generative adversarial networks (cGANs) that extract designated signals and information from noisy maps. We train the cGANs using 30,000 mock observation maps, assuming Gaussian noise matched to the expected noise level of NASA's SPHEREx mission. The trained cGANs successfully reconstruct Hα emission from galaxies at a target redshift from observed, noisy intensity maps. Intensity peaks with heights greater than 3.5σ of the noise are located with 60% precision. The one-point probability distribution and the power spectrum are accurately recovered even in the noise-dominated regime. However, the overall reconstruction performance depends on the pixel size and on the survey volume assumed for the training data. It is necessary to generate training mock data with a sufficiently large volume in order to reconstruct the intensity power spectrum at large angular scales. Our deep-learning approach can be readily applied to observational data with line confusion and with noise.
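
The map-to-map reconstruction described above can be framed as conditional image translation: the generator receives the noisy observed map and produces an estimate of the target-line map, while the discriminator scores (observed, candidate) pairs. The sketch below illustrates this setup in PyTorch; the layer counts, channel widths, and the L1-plus-adversarial loss weighting are illustrative assumptions, not the architecture or hyperparameters used in the paper.

```python
# Minimal conditional-GAN sketch for map-to-map reconstruction (illustrative only;
# layer counts, channel widths, and loss weights are assumptions, not the paper's setup).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noisy observed intensity map to an estimate of the target-line map."""
    def __init__(self, channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (observed, candidate) map pairs, PatchGAN-style."""
    def __init__(self, channels=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width * 2, 1, 4, stride=1, padding=1),
        )

    def forward(self, observed, candidate):
        return self.net(torch.cat([observed, candidate], dim=1))

def training_step(gen, disc, opt_g, opt_d, noisy, clean, l1_weight=100.0):
    """One adversarial update: discriminator first, then generator."""
    bce = nn.BCEWithLogitsLoss()

    # Discriminator: real (noisy, clean) pairs vs. generated (noisy, fake) pairs.
    fake = gen(noisy).detach()
    pred_real = disc(noisy, clean)
    pred_fake = disc(noisy, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator while staying close to the target map (L1 term).
    fake = gen(noisy)
    pred_fake = disc(noisy, fake)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + \
             l1_weight * nn.functional.l1_loss(fake, clean)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In practice the generator would be trained on the 30,000 mock (noisy observation, true Hα map) pairs and then applied to held-out or observed intensity maps.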

Read also

159 - Wenxuan Zhou, Muhao Chen 2021
Recent information extraction approaches have relied on training deep neural models. However, such models can easily overfit noisy labels and suffer from performance degradation. While it is very costly to filter noisy labels in large learning resources, recent studies show that such labels take more training steps to be memorized and are more frequently forgotten than clean labels, therefore are identifiable in training. Motivated by such properties, we propose a simple co-regularization framework for entity-centric information extraction, which consists of several neural models with identical structures but different parameter initialization. These models are jointly optimized with the task-specific losses and are regularized to generate similar predictions based on an agreement loss, which prevents overfitting on noisy labels. Extensive experiments on two widely used but noisy benchmarks for information extraction, TACRED and CoNLL03, demonstrate the effectiveness of our framework. We release our code to the community for future research.
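
For intuition, a co-regularization objective of this kind combines each member's task loss with an agreement term computed across the ensemble. The sketch below uses a KL-divergence-to-the-mean agreement loss as one common formulation; the specific agreement term, its weight, and the helper name co_regularized_loss are illustrative assumptions, not necessarily the formulation used by the authors.

```python
# Sketch of a co-regularization objective: several identically structured models with
# different random initializations are trained jointly on their task losses plus an
# agreement term that penalizes divergent predictions (KL-to-mean form is an assumption).
import torch
import torch.nn.functional as F

def co_regularized_loss(logits_list, labels, agreement_weight=1.0):
    """Task loss for each ensemble member plus a KL-to-the-mean agreement penalty."""
    # Task loss: standard cross-entropy, summed over the ensemble members.
    task_loss = sum(F.cross_entropy(logits, labels) for logits in logits_list)

    # Agreement loss: KL divergence of each member to the (detached) mean prediction.
    probs = [F.softmax(logits, dim=-1) for logits in logits_list]
    mean_probs = torch.stack(probs).mean(dim=0).detach()
    agreement = sum(
        F.kl_div(F.log_softmax(logits, dim=-1), mean_probs, reduction="batchmean")
        for logits in logits_list
    )
    return task_loss + agreement_weight * agreement
```
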
Line-intensity mapping surveys probe large-scale structure through spatial variations in molecular line emission from a population of unresolved cosmological sources. Future such surveys of carbon monoxide line emission, specifically the CO(1-0) line, face potential contamination from a disjoint population of sources emitting in a hydrogen cyanide emission line, HCN(1-0). This paper explores the potential range of the strength of HCN emission and its effect on the CO auto power spectrum, using simulations with an empirical model of the CO/HCN-halo connection. We find that effects on the observed CO power spectrum depend on modeling assumptions but are very small for our fiducial model based on our understanding of the galaxy-halo connection, with the bias in overall CO detection significance due to HCN expected to be less than 1%.
Observations of the high-redshift Universe using the 21 cm line of neutral hydrogen and complementary emission lines from the first galaxies promise to open a new door for our understanding of the epoch of reionization. We present predictions for the [C II] 158-micron line and H I 21 cm emission from redshifts z=6-9 using high-dynamic-range cosmological simulations combined with semi-analytical models. We find that the CONCERTO experiment should be able to detect the large scale power spectrum of [C II] emission to redshifts of up to z=8 (signal-to-noise ratio ~ 1 at k = 0.1 h/cMpc with 1500 hr of integration). A Stage II experiment similar to CCAT-p should be able to detect [C II] from even higher redshifts to high significance for similar integration times (signal-to-noise ratio of ~50 at k = 0.2 h/cMpc at z=6-9). We study the possibility of combining such future [C II] measurements with 21 cm measurements using LOFAR and SKA to measure the [C II]-21cm cross power spectra, and find that a Stage II experiment should be able to measure the cross-power spectrum for k < 1 h/cMpc to signal-to-noise ratio of better than 10. We discuss the capability of such measurements to constrain astrophysical parameters relevant to reionization and show that a measurement of the [C II]-21cm cross power spectrum helps break the degeneracy between the mass and brightness of ionizing sources.
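
As a concrete illustration of the measurement these forecasts target, the auto and cross power spectra of two fields gridded onto the same comoving volume can be estimated from their Fourier transforms. In the sketch below, the function name power_spectra and the binning and normalization conventions are illustrative choices, not the pipeline of the cited work.

```python
# Illustrative estimator for auto and cross power spectra of two gridded intensity
# fields (e.g., [C II] and 21 cm maps on the same cubic comoving grid of side box_size).
import numpy as np

def power_spectra(field_a, field_b, box_size, n_bins=20):
    """Return k-bin centres, P_aa(k), P_bb(k), and P_ab(k) for two cubic fields."""
    n = field_a.shape[0]
    cell_volume = (box_size / n) ** 3
    volume = box_size ** 3

    # Fourier transforms with the continuum normalization (multiply by cell volume).
    fa = np.fft.rfftn(field_a) * cell_volume
    fb = np.fft.rfftn(field_b) * cell_volume

    # |k| on the half-complex grid.
    kx = np.fft.fftfreq(n, d=box_size / n) * 2 * np.pi
    kz = np.fft.rfftfreq(n, d=box_size / n) * 2 * np.pi
    kgrid = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2 + kz[None, None, :] ** 2)

    p_aa = np.abs(fa) ** 2 / volume
    p_bb = np.abs(fb) ** 2 / volume
    p_ab = (fa * np.conj(fb)).real / volume

    # Spherically averaged, linearly binned spectra (DC mode excluded).
    bins = np.linspace(kgrid[kgrid > 0].min(), kgrid.max(), n_bins + 1)
    which = np.digitize(kgrid.ravel(), bins)

    def binned(p):
        return np.array([p.ravel()[which == i].mean() for i in range(1, n_bins + 1)])

    k_centres = 0.5 * (bins[1:] + bins[:-1])
    return k_centres, binned(p_aa), binned(p_bb), binned(p_ab)
```
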
Line-intensity mapping (LIM) provides a promising way to probe cosmology, reionization and galaxy evolution. However, its sensitivity to cosmology and astrophysics at the same time is also a nuisance. Here we develop a comprehensive framework for modelling the LIM power spectrum, which includes redshift space distortions and the Alcock-Paczynski effect. We then identify and isolate degeneracies with astrophysics so that they can be marginalized over. We study the gains of using the multipole expansion of the anisotropic power spectrum, providing an accurate analytic expression for their covariance, and find a 10%-60% increase in the precision of the baryon acoustic oscillation scale measurements when including the hexadecapole in the analysis. We discuss different observational strategies when targeting other cosmological parameters, such as the sum of neutrino masses or primordial non-Gaussianity, finding that fewer and wider bins are typically more optimal. Overall, our formalism facilitates an optimal extraction of cosmological constraints robust to astrophysics.
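
The multipole expansion referred to above projects the anisotropic power spectrum P(k, mu) onto Legendre polynomials, P_ell(k) = (2*ell + 1)/2 * Integral_{-1}^{1} dmu P(k, mu) L_ell(mu). The sketch below evaluates the monopole, quadrupole, and hexadecapole by Gauss-Legendre quadrature; the toy Kaiser-like P(k, mu) model is only a placeholder to make the example runnable, not the full LIM model (with the Alcock-Paczynski effect) developed in the cited work.

```python
# Legendre multipoles of an anisotropic power spectrum:
# P_ell(k) = (2*ell + 1)/2 * Integral_{-1}^{1} dmu P(k, mu) L_ell(mu).
import numpy as np
from numpy.polynomial.legendre import Legendre

def multipole(p_kmu, k, ell, n_mu=256):
    """Compute P_ell(k) by Gauss-Legendre quadrature over mu."""
    mu, w = np.polynomial.legendre.leggauss(n_mu)
    legendre_ell = Legendre.basis(ell)(mu)
    integrand = p_kmu(k[:, None], mu[None, :]) * legendre_ell[None, :]
    return (2 * ell + 1) / 2.0 * np.sum(w[None, :] * integrand, axis=1)

# Placeholder anisotropic model: a toy power-spectrum shape times a Kaiser factor.
def p_kmu_example(k, mu, b=2.0, f=0.8):
    p_lin = 1.0e4 * (k / 0.05) / (1.0 + (k / 0.05) ** 2) ** 1.5  # toy shape only
    return (b + f * mu ** 2) ** 2 * p_lin

k = np.logspace(-2, 0, 50)            # wavenumbers in h/Mpc
p0 = multipole(p_kmu_example, k, 0)   # monopole
p2 = multipole(p_kmu_example, k, 2)   # quadrupole
p4 = multipole(p_kmu_example, k, 4)   # hexadecapole
```
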
Deep learning is a powerful analysis technique that has recently been proposed as a method to constrain cosmological parameters from weak lensing mass maps. Due to its ability to learn relevant features from the data, it is able to extract more information from the mass maps than the commonly used power spectrum, and thus achieve better precision for cosmological parameter measurement. We explore the advantage of Convolutional Neural Networks (CNN) over the power spectrum for varying levels of shape noise and different smoothing scales applied to the maps. We compare the cosmological constraints from the two methods in the $\Omega_M$-$\sigma_8$ plane for sets of 400 deg$^2$ convergence maps. We find that, for a shape noise level corresponding to 8.53 galaxies/arcmin$^2$ and the smoothing scale of $\sigma_s = 2.34$ arcmin, the network is able to generate 45% tighter constraints. For smaller smoothing scale of $\sigma_s = 1.17$ the improvement can reach $\sim 50\%$, while for larger smoothing scale of $\sigma_s = 5.85$, the improvement decreases to 19%. The advantage generally decreases when the noise level and smoothing scales increase. We present a new training strategy to train the neural network with noisy data, as well as considerations for practical applications of the deep learning approach.
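
A minimal version of such a network is a small convolutional regressor that maps a convergence map to the two parameters. In the sketch below, the depth, channel widths, 128-pixel map size, and class name ConvergenceCNN are illustrative assumptions, not the architecture used in the cited comparison.

```python
# Minimal CNN regressor for (Omega_M, sigma_8) from convergence maps (illustrative only).
import torch
import torch.nn as nn

class ConvergenceCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 2)  # predicts (Omega_M, sigma_8)

    def forward(self, kappa_map):
        x = self.features(kappa_map)   # (batch, 128, 1, 1)
        return self.head(x.flatten(1)) # (batch, 2)

# Example forward pass on a batch of mock (noisy, smoothed) convergence maps.
maps = torch.randn(8, 1, 128, 128)
params = ConvergenceCNN()(maps)
```
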