Inpainting hydrodynamical maps with deep learning


Abstract in English

From 1,000 hydrodynamic simulations of the CAMELS project, each run with different values of the cosmological and astrophysical parameters, we generate 15,000 gas temperature maps. We use a state-of-the-art deep convolutional neural network to recover missing data from those maps. We mimic the missing data by applying regular and irregular binary masks that cover either $15\%$ or $30\%$ of the area of each map. We quantify the reliability of our results using two summary statistics: 1) the distance between the probability density functions (pdf), estimated using the Kolmogorov-Smirnov (KS) test, and 2) the 2D power spectrum. We find excellent agreement between the model prediction and the unmasked maps when using the power spectrum: better than $1\%$ for $k<20\,h/\mathrm{Mpc}$ for any irregular mask. For regular masks, we observe a systematic offset of $\sim 5\%$ when covering $15\%$ of the maps, while the results become unreliable when $30\%$ of the data is missing. The observed KS-test p-values favor the null hypothesis that the reconstructed and the ground-truth maps are drawn from the same underlying distribution when irregular masks are used. For regular-shaped masks, on the other hand, we find strong evidence that the two distributions do not match each other. Finally, we use the model, trained on gas temperature maps, to perform inpainting on maps of entirely different fields, such as gas mass, gas pressure, and electron density, as well as on gas temperature maps from simulations run with other codes. We find that, visually, our model is able to reconstruct the missing pixels from the maps of those fields with great accuracy, although its performance using summary statistics depends strongly on the considered field.
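The two summary statistics mentioned above can be illustrated with a short Python sketch. The mask generator, the map data, and the power-spectrum binning below are assumptions for illustration only (maps treated as plain 2D NumPy arrays, a toy "reconstruction" in place of the network output); they are not the pipeline used in this work.

```python
# Minimal sketch: compare a reconstructed (inpainted) map with its unmasked
# ground truth using (1) a KS test on the pixel PDFs and (2) the relative
# difference of azimuthally averaged 2D power spectra.
import numpy as np
from scipy.stats import ks_2samp


def regular_mask(shape, frac=0.15):
    """Square hole centred in the map, covering roughly `frac` of the area."""
    mask = np.ones(shape, dtype=bool)
    side = int(np.sqrt(frac * shape[0] * shape[1]))
    r0, c0 = (shape[0] - side) // 2, (shape[1] - side) // 2
    mask[r0:r0 + side, c0:c0 + side] = False  # False marks missing pixels
    return mask


def power_spectrum_2d(field):
    """Azimuthally averaged power spectrum of a 2D field (arbitrary units)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ny, nx = field.shape
    y, x = np.indices(field.shape)
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    kmax = min(ny, nx) // 2          # keep only well-sampled radial bins
    return sums[:kmax] / counts[:kmax]


# Toy example: `truth` stands in for a ground-truth temperature map and
# `recon` for the network's inpainted output (here: truth + small noise
# inside the masked region).
rng = np.random.default_rng(0)
truth = rng.lognormal(mean=0.0, sigma=1.0, size=(128, 128))
mask = regular_mask(truth.shape, frac=0.15)
recon = truth.copy()
recon[~mask] += 0.05 * rng.standard_normal((~mask).sum())

# 1) KS test between the pixel distributions of the two maps.
ks_stat, p_value = ks_2samp(truth.ravel(), recon.ravel())
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.3f}")

# 2) Relative difference of the 2D power spectra.
pk_true, pk_recon = power_spectrum_2d(truth), power_spectrum_2d(recon)
rel_err = np.abs(pk_recon - pk_true) / pk_true
print(f"max |dP(k)/P(k)| = {rel_err.max():.3e}")
```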
