Deep learning is a powerful analysis technique that has recently been proposed as a method to constrain cosmological parameters from weak lensing mass maps. Owing to its ability to learn relevant features from the data, it can extract more information from the mass maps than the commonly used power spectrum, and thus achieve better precision in cosmological parameter measurement. We explore the advantage of Convolutional Neural Networks (CNN) over the power spectrum for varying levels of shape noise and different smoothing scales applied to the maps. We compare the cosmological constraints from the two methods in the $\Omega_M$-$\sigma_8$ plane for sets of 400 deg$^2$ convergence maps. We find that, for a shape noise level corresponding to 8.53 galaxies/arcmin$^2$ and a smoothing scale of $\sigma_s = 2.34$ arcmin, the network is able to generate 45% tighter constraints. For a smaller smoothing scale of $\sigma_s = 1.17$ arcmin the improvement can reach $\sim 50\%$, while for a larger smoothing scale of $\sigma_s = 5.85$ arcmin the improvement decreases to 19%. The advantage generally decreases as the noise level and smoothing scale increase. We present a new strategy for training the neural network with noisy data, as well as considerations for practical applications of the deep learning approach.
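As a rough illustration of the setup described above, the sketch below (not the authors' pipeline) adds Gaussian shape noise to a convergence map, applies Gaussian smoothing, and measures a binned power spectrum as the baseline statistic. The pixel scale and intrinsic-ellipticity dispersion are assumed values; the galaxy density and smoothing scale follow the quoted numbers.

    # A minimal sketch, not the authors' pipeline: noise, smooth, then measure
    # the baseline power spectrum of a convergence map.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    pix_arcmin = 2.0      # assumed pixel scale [arcmin]
    n_gal = 8.53          # galaxy density [arcmin^-2], as quoted above
    sigma_e = 0.3         # assumed intrinsic ellipticity dispersion per component
    sigma_s = 2.34        # smoothing scale [arcmin], as quoted above

    kappa = np.random.normal(0.0, 0.02, size=(600, 600))   # stand-in for a simulated map

    # Per-pixel shape noise (one common convention): sigma_e / sqrt(2 n_gal A_pix)
    sigma_pix = sigma_e / np.sqrt(2.0 * n_gal * pix_arcmin**2)
    noisy = kappa + np.random.normal(0.0, sigma_pix, kappa.shape)

    # Gaussian smoothing with sigma_s converted from arcmin to pixels
    smoothed = gaussian_filter(noisy, sigma=sigma_s / pix_arcmin)

    def binned_power_spectrum(m, n_bins=20):
        """Azimuthally averaged 2D power spectrum in n_bins radial bins."""
        p2d = np.abs(np.fft.fftshift(np.fft.fft2(m)))**2 / m.size
        y, x = np.indices(m.shape)
        r = np.hypot(x - m.shape[1] // 2, y - m.shape[0] // 2)
        edges = np.linspace(0.0, r.max(), n_bins + 1)
        idx = np.digitize(r.ravel(), edges)
        return np.array([p2d.ravel()[idx == i].mean() for i in range(1, n_bins + 1)])

    cl = binned_power_spectrum(smoothed)   # baseline statistic to compare against the CNN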
Convolutional Neural Networks (CNN) have recently been demonstrated on synthetic data to improve upon the precision of cosmological inference. In particular, they have the potential to yield more precise cosmological constraints from weak lensing mass maps than the two-point functions. We present cosmological results from a CNN applied to the KiDS-450 tomographic weak lensing dataset, constraining the total matter density $\Omega_m$, the fluctuation amplitude $\sigma_8$, and the intrinsic alignment amplitude $A_{\rm{IA}}$. We use a grid of N-body simulations to generate a training set of tomographic weak lensing maps. We test the robustness of the expected constraints to various effects, such as baryonic feedback, simulation accuracy, different values of $H_0$, and the lightcone projection technique. We train a set of ResNet-based CNNs with varying depths to analyze sets of tomographic KiDS mass maps divided into 20 flat regions, with Gaussian smoothing of $\sigma = 2.34$ arcmin applied. The uncertainties on shear calibration and the $n(z)$ error are marginalized over in the likelihood pipeline. Following a blinding scheme, we derive constraints of $S_8 = \sigma_8 (\Omega_m/0.3)^{0.5} = 0.777^{+0.038}_{-0.036}$ from our CNN analysis, with $A_{\rm{IA}} = 1.398^{+0.779}_{-0.724}$. We compare this result to a power spectrum analysis on the same maps and likelihood pipeline and find an improvement of about 30% for the CNN. We discuss how our results offer excellent prospects for the use of deep learning in future cosmological data analysis.
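A minimal sketch of the kind of ResNet-style CNN such an analysis might use is given below; the number of tomographic input channels, layer widths, and depth are illustrative assumptions, not the architecture used in the paper.

    # A minimal ResNet-style regressor sketch: tomographic mass-map channels in,
    # cosmological parameters (Omega_m, sigma_8, A_IA) out.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = torch.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return torch.relu(out + x)          # skip connection

    class MassMapCNN(nn.Module):
        def __init__(self, n_tomo=4, n_params=3, depth=4):
            super().__init__()
            self.stem = nn.Conv2d(n_tomo, 32, 3, padding=1)
            self.blocks = nn.Sequential(*[ResidualBlock(32) for _ in range(depth)])
            self.head = nn.Linear(32, n_params)

        def forward(self, x):
            x = self.blocks(torch.relu(self.stem(x)))
            x = x.mean(dim=(2, 3))              # global average pooling
            return self.head(x)

    model = MassMapCNN()
    pred = model(torch.randn(8, 4, 128, 128))   # batch of 8 mock tomographic map stacks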
We examine the cosmological information available from the 1-point probability distribution function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the $\Omega_m$-$\sigma_8$ plane from the convergence PDF with $188\,\rm{arcmin}^2$ pixels compared to the cosmic shear power spectrum with an equivalent number of modes ($\ell < 886$). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
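The sketch below illustrates, under placeholder choices, the two ingredients named above: a 1-point convergence PDF estimator and a finite-difference Fisher matrix. It does not reproduce the paper's correction factor for a limited number of simulations.

    # A schematic sketch of a convergence-PDF Fisher forecast with placeholder inputs.
    import numpy as np

    def kappa_pdf(kappa_map, bins):
        """1-point PDF: normalized histogram of convergence pixel values."""
        pdf, _ = np.histogram(kappa_map.ravel(), bins=bins, density=True)
        return pdf

    def fisher_matrix(derivs, cov):
        """F_ij = (d mu / d p_i)^T C^{-1} (d mu / d p_j), with derivs of shape
        (n_params, n_data) from central finite differences of the mean statistic."""
        return derivs @ np.linalg.inv(cov) @ derivs.T

    # Placeholder derivative vectors and covariance for (Omega_m, sigma_8);
    # in practice derivs[i] = (mean_pdf(p_i + dp) - mean_pdf(p_i - dp)) / (2 dp),
    # with the mean PDF averaged over many simulated maps at each parameter step.
    derivs = np.random.normal(size=(2, 20))
    cov = np.eye(20) * 1e-4
    F = fisher_matrix(derivs, cov)
    marginalized_errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma forecasts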
We present a deep machine learning (ML)-based technique for accurately determining $\sigma_8$ and $\Omega_m$ from mock 3D galaxy surveys. The mock surveys are built from the AbacusCosmos suite of $N$-body simulations, which comprises 40 cosmological volume simulations spanning a range of cosmological models, and we account for uncertainties in galaxy formation scenarios through the use of generalized halo occupation distributions (HODs). We explore a trio of ML models: a 3D convolutional neural network (CNN), a power-spectrum-based fully connected network, and a hybrid approach that merges the two to combine physically motivated summary statistics with flexible CNNs. We describe best practices for training a deep model on a suite of matched-phase simulations, and we test our model on a completely independent sample that uses previously unseen initial conditions, cosmological parameters, and HOD parameters. Despite the fact that the mock observations are quite small ($\sim 0.07\,h^{-3}\,\rm{Gpc}^3$) and the training data span a large parameter space (6 cosmological and 6 HOD parameters), the CNN and hybrid CNN can constrain $\sigma_8$ and $\Omega_m$ to $\sim 3\%$ and $\sim 4\%$, respectively.
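The following sketch shows the hybrid idea in schematic form: a 3D CNN branch over the density field and a fully connected branch over the measured power spectrum, merged before the output layer. All layer sizes and input dimensions are illustrative assumptions, not the networks trained in the paper.

    # A minimal sketch of a hybrid 3D-CNN + power-spectrum regressor.
    import torch
    import torch.nn as nn

    class HybridModel(nn.Module):
        def __init__(self, n_pk_bins=30, n_params=2):
            super().__init__()
            self.cnn = nn.Sequential(                         # 3D CNN branch on the field
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),        # -> 32 features
            )
            self.pk_net = nn.Sequential(                      # fully connected branch on P(k)
                nn.Linear(n_pk_bins, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),                 # -> 32 features
            )
            self.head = nn.Linear(64, n_params)               # predicts (sigma_8, Omega_m)

        def forward(self, density_cube, pk):
            features = torch.cat([self.cnn(density_cube), self.pk_net(pk)], dim=1)
            return self.head(features)

    model = HybridModel()
    out = model(torch.randn(4, 1, 64, 64, 64), torch.randn(4, 30))   # mock batch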
Line intensity mapping (LIM) is a promising observational method to probe large-scale fluctuations of line emission from distant galaxies. Data from wide-field LIM observations allow us to study the large-scale structure of the universe as well as galaxy populations and their evolution. A serious problem with LIM is contamination by foreground/background sources and various noise contributions. We develop conditional generative adversarial networks (cGANs) that extract designated signals and information from noisy maps. We train the cGANs using 30,000 mock observation maps, assuming Gaussian noise matched to the expected noise level of NASA's SPHEREx mission. The trained cGANs successfully reconstruct H$\alpha$ emission from galaxies at a target redshift from observed, noisy intensity maps. Intensity peaks with heights greater than $3.5\,\sigma_{\rm noise}$ are located with 60% precision. The one-point probability distribution and the power spectrum are accurately recovered even in the noise-dominated regime. However, the overall reconstruction performance depends on the pixel size and on the survey volume assumed for the training data. It is necessary to generate training mock data with a sufficiently large volume in order to reconstruct the intensity power spectrum at large angular scales. Our deep-learning approach can be readily applied to observational data with line confusion and with noise.
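A minimal sketch of one cGAN training step for this kind of map-to-map reconstruction is shown below: the generator maps a noisy intensity map to a reconstructed line-emission map, and the discriminator judges (condition, map) pairs. The architectures and loss weights are placeholder assumptions, not the networks used in the paper.

    # One conditional-GAN training step on a mock batch (placeholder networks).
    import torch
    import torch.nn as nn

    G = nn.Sequential(                       # generator: noisy map -> reconstructed map
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    D = nn.Sequential(                       # discriminator on (condition, map) pairs
        nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 3, stride=2, padding=1),
    )
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

    noisy, clean = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)   # mock batch

    # Discriminator step: real (noisy, clean) pairs vs. generated pairs
    fake = G(noisy).detach()
    d_real = D(torch.cat([noisy, clean], dim=1))
    d_fake = D(torch.cat([noisy, fake], dim=1))
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool the discriminator, plus an L1 reconstruction term
    fake = G(noisy)
    d_fake = D(torch.cat([noisy, fake], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clean)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()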
Stage IV lensing surveys promise to make available an unprecedented amount of excellent data, representing a huge leap in both quantity and quality. This will open the way to the use of novel tools that go beyond standard second-order statistics and probe the higher-order properties of the convergence field. We discuss the use of Minkowski Functionals (MFs) as complementary probes to increase the lensing Figure of Merit (FoM) for a survey made of a wide total area $A_{\rm{tot}}$ imaged at a limiting magnitude $\rm{mag_{W}}$, containing a subset of area $A_{\rm{deep}}$ where observations are pushed to a deeper limiting magnitude $\rm{mag_{D}}$. We present an updated procedure to match the theoretically predicted MFs to the measured ones, taking into account the impact of map reconstruction from noisy shear data. We validate this updated method against simulated data sets with different source redshift distributions and total number densities, setting these quantities in accordance with the depth of the survey. We then rely on a Fisher matrix analysis to forecast the improvement in the FoM due to the joint use of shear tomography and MFs under different assumptions on $(A_{\rm{tot}},\,A_{\rm{deep}},\,\rm{mag_{D}})$ and on the prior on the MF nuisance parameters. It turns out that MFs can provide valuable help in increasing the FoM of the lensing survey, provided the nuisance parameters are known with non-negligible precision. What is even more interesting is the possibility to compensate for the loss of FoM due to a cut in the multipole range probed by shear tomography, which makes the results more robust against uncertainties in the modeling of nonlinearities. This makes MFs a promising tool both to increase the FoM and to make the constraints on the cosmological parameters less affected by theoretical systematic effects.
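As an illustration of the statistic in question, the sketch below estimates the three 2D Minkowski Functionals of a smoothed convergence map as functions of the threshold, using standard grid estimators built from the field and its derivatives; the map, smoothing, and threshold grid are synthetic placeholders, not the procedure calibrated in the paper.

    # A minimal sketch of 2D Minkowski Functional estimation on a pixelized map.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def minkowski_functionals(kappa, thresholds, dnu=0.25):
        kx, ky = np.gradient(kappa)                 # first derivatives
        kxx, kxy = np.gradient(kx)                  # second derivatives
        _, kyy = np.gradient(ky)
        grad = np.hypot(kx, ky)
        curv = (2 * kx * ky * kxy - kx**2 * kyy - ky**2 * kxx) / (grad**2 + 1e-12)
        area = kappa.size
        V0, V1, V2 = [], [], []
        for nu in thresholds:
            above = kappa >= nu
            on_contour = (kappa >= nu) & (kappa < nu + dnu)   # delta-function approximation
            V0.append(above.mean())                                        # area fraction
            V1.append(grad[on_contour].sum() / (4.0 * area * dnu))         # boundary length
            V2.append(curv[on_contour].sum() / (2.0 * np.pi * area * dnu)) # Euler characteristic
        return np.array(V0), np.array(V1), np.array(V2)

    # Gaussian stand-in for a smoothed convergence map, thresholds in units of its rms
    kappa = gaussian_filter(np.random.normal(size=(512, 512)), sigma=2.0)
    kappa /= kappa.std()
    nu_grid = np.linspace(-3.0, 3.0, 25)
    V0, V1, V2 = minkowski_functionals(kappa, nu_grid)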