
Likelihood-free inference with neural compression of DES SV weak lensing map statistics

Posted by: Niall Jeffrey
Publication date: 2020
Research field: Physics
Paper language: English





In many cosmological inference problems, the likelihood (the probability of the observed data as a function of the unknown parameters) is unknown or intractable. This necessitates approximations and assumptions, which can lead to incorrect inference of cosmological parameters, including the nature of dark matter and dark energy, or create artificial model tensions. Likelihood-free inference covers a novel family of methods to rigorously estimate posterior distributions of parameters using forward modelling of mock data. We present likelihood-free cosmological parameter inference using weak lensing maps from the Dark Energy Survey (DES) SV data, using neural data compression of weak lensing map summary statistics. We explore combinations of the power spectra, peak counts, and neural-compressed summaries of the lensing mass map using deep convolutional neural networks. We demonstrate methods to validate the inference process, for both the data modelling and the probability density estimation steps. Likelihood-free inference provides a robust and scalable alternative for rigorous large-scale cosmological inference with galaxy survey data (for DES, Euclid and LSST). We have made our simulated lensing maps publicly available.
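
As a rough illustration of the likelihood-free workflow described above (simulate, compress, compare to the observed summaries), the following toy sketch uses a hand-written two-number summary and simple rejection ABC in place of the paper's neural compression and density-estimation steps; the forward model, prior ranges, and acceptance threshold are all invented for the example and are not taken from the DES analysis.

# Toy likelihood-free inference sketch (not the DES pipeline):
# 1) forward-simulate mock data at parameters drawn from the prior,
# 2) compress each mock to a low-dimensional summary,
# 3) keep the parameter draws whose summaries land closest to the
#    observed summary (rejection ABC) and treat them as posterior samples.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_pix=50):
    """Toy forward model: a noisy 'map' whose mean and scatter depend on theta."""
    amp, width = theta
    return rng.normal(loc=amp, scale=width, size=n_pix)

def compress(data):
    """Hand-picked two-number summary (the paper learns this step with a neural net)."""
    return np.array([data.mean(), data.std()])

# 'Observed' data generated at hidden true parameters
theta_true = np.array([1.0, 0.5])
s_obs = compress(simulate(theta_true))

# Draw from a flat prior, forward-simulate, compress
n_sims = 20000
theta_draws = rng.uniform([0.0, 0.1], [2.0, 1.0], size=(n_sims, 2))
summaries = np.array([compress(simulate(t)) for t in theta_draws])

# Rejection step: accept the 1 per cent of draws nearest to the observed summary
dist = np.linalg.norm(summaries - s_obs, axis=1)
accepted = theta_draws[dist < np.quantile(dist, 0.01)]

print("posterior mean:", accepted.mean(axis=0))
print("posterior std: ", accepted.std(axis=0))

In the actual analysis the compression is learned by a deep network and the posterior is fit with a density estimator rather than recovered by rejection, but the structure of the loop (draw parameters, forward-model mock data, compare compressed summaries to the observed ones) is the same.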


Read also

Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS across a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12 per cent improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated $\Lambda$CDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the concentration of the phase residuals is improved by 17 per cent by GLIMPSE and 18 per cent by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased by 18 per cent by the Wiener filter and 32 per cent by GLIMPSE.
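
For concreteness, the direct KS inversion mentioned above can be written in a few lines of numpy on a flat-sky shear grid; this sketch assumes square pixels, ignores masks and noise entirely (exactly the limitation noted in the abstract), and is not the DES mass-mapping code. The pixel scale drops out of the Fourier-space kernel, so unnormalised fftfreq wavenumbers suffice.

# Minimal flat-sky Kaiser-Squires inversion sketch (illustrative only).
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Recover the E-mode convergence from gridded shear maps gamma1, gamma2."""
    ny, nx = gamma1.shape
    l1 = np.fft.fftfreq(nx)[np.newaxis, :]   # wavenumbers along x
    l2 = np.fft.fftfreq(ny)[:, np.newaxis]   # wavenumbers along y
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                         # avoid division by zero; mode zeroed below

    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)

    # E-mode convergence: kappa_E(l) = [(l1^2 - l2^2) g1 + 2 l1 l2 g2] / l^2
    kappa_hat = ((l1**2 - l2**2) * g1_hat + 2.0 * l1 * l2 * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0                    # mean (mass-sheet) mode is unconstrained
    return np.real(np.fft.ifft2(kappa_hat))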
In this paper we show how nuisance parameter marginalized posteriors can be inferred directly from simulations in a likelihood-free setting, without having to jointly infer the higher-dimensional interesting and nuisance parameter posterior first and marginalize a posteriori. The result is that for an inference task with a given number of interesting parameters, the number of simulations required to perform likelihood-free inference can be kept (roughly) the same irrespective of the number of additional nuisances to be marginalized over. To achieve this we introduce two extensions to the standard likelihood-free inference set-up. Firstly we show how nuisance parameters can be re-cast as latent variables and hence automatically marginalized over in the likelihood-free framework. Secondly, we derive an asymptotically optimal compression from $N$ data down to $n$ summaries -- one per interesting parameter -- such that the Fisher information is (asymptotically) preserved, but the summaries are insensitive (to leading order) to the nuisance parameters. This means that the nuisance marginalized inference task involves learning $n$ interesting parameters from $n$ nuisance-hardened data summaries, regardless of the presence or number of additional nuisance parameters to be marginalized over. We validate our approach on two examples from cosmology: supernovae and weak lensing data analyses with nuisance parameterized systematics. For the supernova problem, high-fidelity posterior inference of $\Omega_m$ and $w_0$ (marginalized over systematics) can be obtained from just a few hundred data simulations. For the weak lensing problem, six cosmological parameters can be inferred from $\mathcal{O}(10^3)$ simulations, irrespective of whether ten additional nuisance parameters are included in the problem or not.
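
The compression step the abstract refers to can be written schematically. In standard score compression, the data are reduced to the score $t = \nabla \ln\mathcal{L}$ evaluated at fiducial parameters; splitting the parameters into interesting ones $\theta$ and nuisances $\eta$, with Fisher-matrix blocks $F_{\theta\eta}$ and $F_{\eta\eta}$, a nuisance-hardened summary projects out the directions along which the nuisances move the data (the notation below is a generic sketch of this construction, not copied from the paper):

$$\tilde{t}_\theta \;=\; t_\theta \;-\; F_{\theta\eta}\,F_{\eta\eta}^{-1}\,t_\eta ,$$

which is insensitive to $\eta$ to leading order and leaves one summary per interesting parameter.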
We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimeter-wave data from the 2500 square degree South Pole Telescope Sunyaev-Zeldovich (SPT-SZ) survey. The two lensing-galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favored $\Lambda$CDM cosmological model. It also demonstrates that the joint lensing-galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.
We introduce a novel method for reconstructing the projected matter distributions of galaxy clusters with weak-lensing (WL) data based on a convolutional neural network (CNN). Training datasets are generated with ray-tracing through cosmological simulations. We control the noise level of the galaxy shear catalog such that it mimics the typical properties of the existing ground-based WL observations of galaxy clusters. We find that the mass reconstruction by our multi-layered CNN with an architecture of alternating convolution and transposed-convolution filters significantly outperforms the traditional reconstruction methods. The CNN method provides better pixel-to-pixel correlations with the truth, restores more accurate positions of the mass peaks, and more efficiently suppresses artifacts near the field edges. In addition, the CNN mass reconstruction lifts the mass-sheet degeneracy when applied to sufficiently large fields. This implies that this CNN algorithm can be used to measure cluster masses in a model-independent way for future wide-field WL surveys.
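
As an illustration of the alternating convolution / transposed-convolution design mentioned above, a minimal encoder-decoder in PyTorch could look like the following; the layer counts, channel widths, and input size are placeholders and do not reproduce the authors' network.

# Illustrative encoder-decoder CNN for shear -> convergence map regression
# (a sketch; hyperparameters are invented, not taken from the paper).
import torch
import torch.nn as nn

class MassMapCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Downsampling path: strided convolutions over the 2-channel shear map
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Upsampling path: transposed convolutions back to a 1-channel kappa map
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, shear):          # shear: (batch, 2, H, W) -> kappa: (batch, 1, H, W)
        return self.decoder(self.encoder(shear))

model = MassMapCNN()
kappa_pred = model(torch.randn(4, 2, 64, 64))   # e.g. four 64x64 mock shear fields
print(kappa_pred.shape)                         # torch.Size([4, 1, 64, 64])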
In the past few years, approximate Bayesian Neural Networks (BNNs) have demonstrated the ability to produce statistically consistent posteriors on a wide range of inference problems at unprecedented speed and scale. However, any disconnect between training sets and the distribution of real-world objects can introduce bias when BNNs are applied to data. This is a common challenge in astrophysics and cosmology, where the unknown distribution of objects in our Universe is often the science goal. In this work, we incorporate BNNs with flexible posterior parameterizations into a hierarchical inference framework that allows for the reconstruction of population hyperparameters and removes the bias introduced by the training distribution. We focus on the challenge of producing posterior PDFs for strong gravitational lens mass model parameters given Hubble Space Telescope (HST) quality single-filter, lens-subtracted, synthetic imaging data. We show that the posterior PDFs are sufficiently accurate (i.e., statistically consistent with the truth) across a wide variety of power-law elliptical lens mass distributions. We then apply our approach to test data sets whose lens parameters are drawn from distributions that are drastically different from the training set. We show that our hierarchical inference framework mitigates the bias introduced by an unrepresentative training set's interim prior. Simultaneously, given a sufficiently broad training set, we can precisely reconstruct the population hyperparameters governing our test distributions. Our full pipeline, from training to hierarchical inference on thousands of lenses, can be run in a day. The framework presented here will allow us to efficiently exploit the full constraining power of future ground- and space-based surveys.
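
The bias removal described above is the standard interim-prior reweighting idea from hierarchical Bayesian inference: given $S$ posterior samples $\theta_{k,s}$ per lens $k$ obtained under an interim (training-set) prior $p_{\rm int}$, the population hyperparameters $\Omega$ can be inferred from (the notation here is a generic sketch of that identity, not quoted from the paper)

$$p(\Omega \mid \{d_k\}) \;\propto\; p(\Omega)\,\prod_k \frac{1}{S}\sum_{s=1}^{S}\frac{p(\theta_{k,s}\mid\Omega)}{p_{\rm int}(\theta_{k,s})},$$

so the training distribution enters only through $p_{\rm int}$, which is divided out of each per-lens term.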