
Inpainting hydrodynamical maps with deep learning

Publication date: 2021
Field: Physics
Language: English





From 1,000 hydrodynamic simulations of the CAMELS project, each with a different value of the cosmological and astrophysical parameters, we generate 15,000 gas temperature maps. We use a state-of-the-art deep convolutional neural network to recover missing data from those maps. We mimic the missing data by applying regular and irregular binary masks that cover either $15\%$ or $30\%$ of the area of each map. We quantify the reliability of our results using two summary statistics: 1) the distance between the probability density functions (PDFs), estimated using the Kolmogorov-Smirnov (KS) test, and 2) the 2D power spectrum. We find excellent agreement between the model prediction and the unmasked maps when using the power spectrum: better than $1\%$ for $k<20\,h/{\rm Mpc}$ for any irregular mask. For regular masks, we observe a systematic offset of $\sim 5\%$ when covering $15\%$ of the maps, while the results become unreliable when $30\%$ of the data is missing. The observed KS-test p-values favor the null hypothesis that the reconstructed and the ground-truth maps are drawn from the same underlying distribution when irregular masks are used. For regular-shaped masks, on the other hand, we find strong evidence that the two distributions do not match each other. Finally, we use the model, trained on gas temperature maps, to perform inpainting on maps from completely different fields such as gas mass, gas pressure, and electron density, as well as on gas temperature maps from simulations run with other codes. We find that, visually, our model is able to reconstruct the missing pixels from the maps of those fields with great accuracy, although its performance as measured by the summary statistics depends strongly on the considered field.
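The evaluation protocol described above (mask a fixed fraction of each map, then compare the reconstruction with a KS test on the pixel PDFs and an azimuthally averaged 2D power spectrum) can be sketched as follows. This is a minimal illustration with stand-in data: the Gaussian "map", the block-mask generator, and the perturbed "reconstruction" are assumptions, not the CAMELS maps or the trained network.

```python
import numpy as np
from scipy.stats import ks_2samp

def random_block_mask(shape, frac, rng):
    """Scatter axis-aligned rectangles until roughly `frac` of the
    pixels are masked (True = missing). A crude stand-in for the
    regular/irregular masks used in the paper."""
    mask = np.zeros(shape, dtype=bool)
    while mask.mean() < frac:
        h, w = rng.integers(4, shape[0] // 8, size=2)
        y = rng.integers(0, shape[0] - h)
        x = rng.integers(0, shape[1] - w)
        mask[y:y + h, x:x + w] = True
    return mask

def power_spectrum_2d(field):
    """Azimuthally averaged power spectrum of a square 2D map,
    binned by integer radius in Fourier space."""
    n = field.shape[0]
    p2d = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    y, x = np.indices(field.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    pk = np.bincount(r.ravel(), weights=p2d.ravel()) / counts
    return pk[: n // 2]

rng = np.random.default_rng(0)
truth = rng.normal(size=(128, 128))               # stand-in for a temperature map
mask = random_block_mask(truth.shape, 0.15, rng)  # ~15% coverage
recon = truth.copy()
recon[mask] += 0.01 * rng.normal(size=int(mask.sum()))  # stand-in for inpainted pixels

# Summary statistic 1: KS test between the pixel distributions
stat, pval = ks_2samp(truth.ravel(), recon.ravel())
# Summary statistic 2: ratio of reconstructed to true power spectrum
ratio = power_spectrum_2d(recon) / power_spectrum_2d(truth)
```

A good reconstruction should give a small KS statistic (high p-value) and a power-spectrum ratio close to one across the relevant $k$ range, which is exactly the two-statistic check the abstract reports.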




Related research

Convolutional Neural Networks (CNNs) have recently been demonstrated on synthetic data to improve upon the precision of cosmological inference. In particular, they have the potential to yield more precise cosmological constraints from weak lensing mass maps than the two-point functions. We present the cosmological results with a CNN from the KiDS-450 tomographic weak lensing dataset, constraining the total matter density $\Omega_m$, the fluctuation amplitude $\sigma_8$, and the intrinsic alignment amplitude $A_{\rm IA}$. We use a grid of N-body simulations to generate a training set of tomographic weak lensing maps. We test the robustness of the expected constraints to various effects, such as baryonic feedback, simulation accuracy, different values of $H_0$, and the lightcone projection technique. We train a set of ResNet-based CNNs with varying depths to analyze sets of tomographic KiDS mass maps divided into 20 flat regions, with applied Gaussian smoothing of $\sigma=2.34$ arcmin. The uncertainties on shear calibration and $n(z)$ error are marginalized in the likelihood pipeline. Following a blinding scheme, we derive constraints of $S_8 = \sigma_8 (\Omega_m/0.3)^{0.5} = 0.777^{+0.038}_{-0.036}$ with our CNN analysis, with $A_{\rm IA}=1.398^{+0.779}_{-0.724}$. We compare this result to the power spectrum analysis on the same maps and likelihood pipeline and find an improvement of about $30\%$ for the CNN. We discuss how our results offer excellent prospects for the use of deep learning in future cosmological data analysis.
Deep convolutional neural networks have been a popular tool for image generation and restoration. The performance of these networks is related to their capability to learn realistic features from a large dataset. In this work, we address the problem of inpainting a non-Gaussian signal, in the context of Galactic diffuse emissions in the millimetric and sub-millimetric regimes, specifically Synchrotron and Thermal Dust emission. Both are affected by contamination at small angular scales due to extra-galactic radio sources (the former) and dusty star-forming galaxies (the latter). We consider the performance of a nearest-neighbours inpainting technique and compare it with two novel methodologies relying on generative Neural Networks. We show that the generative network is able to reproduce the statistical properties of the ground-truth signal more consistently, at a high confidence level. The Python Inpainter for Cosmological and AStrophysical SOurces (PICASSO) is a package encoding a suite of inpainting methods described in this work and has been made publicly available.
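A nearest-neighbours inpainting baseline of the kind compared against above can be approximated by an iterative fill: each sweep assigns every masked pixel bordering valid data the mean of its valid 4-neighbours, shrinking the hole until it closes. This is a minimal diffusion-like sketch, not the PICASSO implementation; the periodic boundary treatment via np.roll is an assumption for brevity.

```python
import numpy as np

def nn_inpaint(field, mask):
    """Fill masked pixels (mask == True) with the mean of their valid
    4-neighbours, sweeping inward until every hole is closed.
    Boundaries are treated as periodic via np.roll."""
    out = field.astype(float).copy()
    valid = ~mask
    while not valid.all():
        acc = np.zeros_like(out)   # sum of valid neighbour values
        cnt = np.zeros_like(out)   # number of valid neighbours
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = np.roll(valid, shift, axis=(0, 1))
            f = np.roll(out, shift, axis=(0, 1))
            acc += np.where(v, f, 0.0)
            cnt += v
        fill = (~valid) & (cnt > 0)        # hole pixels touching valid data
        out[fill] = acc[fill] / cnt[fill]
        valid |= fill
    return out
```

On a constant map the fill is exact; on structured maps this baseline blurs small-scale features inside the hole, which is the statistical shortcoming the generative networks above are meant to overcome.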
Deep learning is a powerful analysis technique that has recently been proposed as a method to constrain cosmological parameters from weak lensing mass maps. Due to its ability to learn relevant features from the data, it is able to extract more information from the mass maps than the commonly used power spectrum, and thus achieve better precision for cosmological parameter measurement. We explore the advantage of Convolutional Neural Networks (CNNs) over the power spectrum for varying levels of shape noise and different smoothing scales applied to the maps. We compare the cosmological constraints from the two methods in the $\Omega_M-\sigma_8$ plane for sets of 400 deg$^2$ convergence maps. We find that, for a shape noise level corresponding to 8.53 galaxies/arcmin$^2$ and a smoothing scale of $\sigma_s = 2.34$ arcmin, the network is able to generate 45% tighter constraints. For a smaller smoothing scale of $\sigma_s = 1.17$ arcmin the improvement can reach $\sim 50\%$, while for a larger smoothing scale of $\sigma_s = 5.85$ arcmin the improvement decreases to 19%. The advantage generally decreases as the noise level and smoothing scale increase. We present a new training strategy to train the neural network with noisy data, as well as considerations for practical applications of the deep learning approach.
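The preprocessing the abstract varies, adding shape noise at a given galaxy density and Gaussian-smoothing the map at scale $\sigma_s$, can be sketched as below. The noise model (per-pixel standard deviation $\sigma_e / \sqrt{2\, n_{\rm gal} A_{\rm pix}}$), the intrinsic ellipticity dispersion $\sigma_e = 0.3$, and the 1 arcmin pixel scale are common conventions assumed here, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_shape_noise(kappa, n_gal=8.53, sigma_e=0.3, pix_arcmin=1.0, rng=None):
    """Add Gaussian shape noise to a convergence map.
    sigma_e (per ellipticity component) and the pixel scale are
    assumed values; n_gal is in galaxies/arcmin^2."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_pix = sigma_e / np.sqrt(2.0 * n_gal * pix_arcmin ** 2)
    return kappa + rng.normal(0.0, sigma_pix, kappa.shape)

def smooth_map(kappa, sigma_arcmin=2.34, pix_arcmin=1.0):
    """Gaussian smoothing at the quoted scale, converted to pixels."""
    return gaussian_filter(kappa, sigma=sigma_arcmin / pix_arcmin)

rng = np.random.default_rng(1)
kappa = np.zeros((64, 64))            # stand-in convergence map
noisy = add_shape_noise(kappa, rng=rng)
smoothed = smooth_map(noisy)
```

Larger smoothing scales wash out the small-scale, non-Gaussian features that give the CNN its edge over the power spectrum, which is consistent with the shrinking improvement the abstract reports at $\sigma_s = 5.85$ arcmin.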
Stage IV lensing surveys promise to make available an unprecedented amount of excellent data, representing a huge leap in both quantity and quality. This will open the way to the use of novel tools that go beyond the standard second-order statistics by probing the higher-order properties of the convergence field. We discuss the use of Minkowski Functionals (MFs) as complementary probes to increase the lensing Figure of Merit (FoM), for a survey made of a wide total area $A_{\rm tot}$ imaged at a limiting magnitude ${\rm mag_W}$, containing a subset of area $A_{\rm deep}$ where observations are pushed to a deeper limiting magnitude ${\rm mag_D}$. We present an updated procedure to match the theoretically predicted MFs to the measured ones, taking into account the impact of map reconstruction from noisy shear data. We validate this renewed method against simulated data sets with different source redshift distributions and total number densities, setting these quantities in accordance with the depth of the survey. We can then rely on a Fisher matrix analysis to forecast the improvement in the FoM due to the joint use of shear tomography and MFs under different assumptions on $(A_{\rm tot}, A_{\rm deep}, {\rm mag_D})$ and the prior on the MFs nuisance parameters. It turns out that MFs can provide valuable help in increasing the FoM of the lensing survey, provided the nuisance parameters are known with non-negligible precision. What is actually more interesting is the possibility to compensate for the loss of FoM due to a cut in the multipole range probed by shear tomography, which makes the results more robust against uncertainties in the modeling of nonlinearities. This makes MFs a promising tool both to increase the FoM and to make the constraints on the cosmological parameters less affected by theoretical systematic effects.
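For a 2D field there are three Minkowski Functionals; the simplest, $V_0(\nu)$, is just the area fraction of the excursion set above a threshold $\nu$ in units of the field's standard deviation. A minimal sketch (the normalization convention is an assumption; $V_1$, the perimeter, and $V_2$, the Euler characteristic, need boundary and curvature estimators and are omitted):

```python
import numpy as np

def minkowski_v0(field, thresholds):
    """V0(nu): area fraction of the excursion set {f >= nu}, with the
    map normalized to zero mean and unit variance."""
    f = (field - field.mean()) / field.std()
    return np.array([(f >= nu).mean() for nu in thresholds])

rng = np.random.default_rng(2)
gauss_map = rng.normal(size=(256, 256))   # stand-in convergence map
nus = np.array([-1.0, 0.0, 1.0])
v0 = minkowski_v0(gauss_map, nus)
```

For a Gaussian field $V_0(\nu)$ follows the complementary normal CDF; deviations of the measured curve from that shape are the non-Gaussian signal these higher-order probes exploit.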
Video inpainting aims to fill spatio-temporal holes in a video with plausible content. Despite tremendous progress of deep neural networks for image inpainting, it is challenging to extend these methods to the video domain due to the additional time dimension. In this work, we propose a novel deep network architecture for fast video inpainting. Built upon an image-based encoder-decoder model, our framework is designed to collect and refine information from neighboring frames and synthesize still-unknown regions. At the same time, the output is enforced to be temporally consistent by a recurrent feedback and a temporal memory module. Compared with the state-of-the-art image inpainting algorithm, our method produces videos that are much more semantically correct and temporally smooth. In contrast to a prior video completion method that relies on time-consuming optimization, our method runs in near real-time while generating competitive video results. Finally, we apply our framework to the video retargeting task and obtain visually pleasing results.