
Fast likelihood-free cosmology with neural density estimators and active learning

Added by Justin Alsing
Publication date: 2019
Field: Physics
Language: English





Likelihood-free inference provides a framework for performing rigorous Bayesian inference using only forward simulations, properly accounting for all physical and observational effects that can be successfully included in the simulations. The key challenge for likelihood-free applications in cosmology, where simulation is typically expensive, is developing methods that can achieve high-fidelity posterior inference with as few simulations as possible. Density-estimation likelihood-free inference (DELFI) methods turn inference into a density estimation task on a set of simulated data-parameter pairs, and give orders of magnitude improvements over traditional Approximate Bayesian Computation approaches to likelihood-free inference. In this paper we use neural density estimators (NDEs) to learn the likelihood function from a set of simulated datasets, with active learning to adaptively acquire simulations in the most relevant regions of parameter space on-the-fly. We demonstrate the approach on a number of cosmological case studies, showing that for typical problems high-fidelity posterior inference can be achieved with just $\mathcal{O}(10^3)$ simulations or fewer. In addition to enabling efficient simulation-based inference, for simple problems where the form of the likelihood is known, DELFI offers a fast alternative to MCMC sampling, giving orders of magnitude speed-up in some cases. Finally, we introduce pydelfi -- a flexible public implementation of DELFI with NDEs and active learning -- available at https://github.com/justinalsing/pydelfi.
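
To make the recipe concrete, here is a minimal, self-contained sketch of the DELFI active-learning loop. Everything in it is an illustrative assumption rather than the pydelfi API: the toy simulator, the grid-based posterior, and a linear-Gaussian conditional model standing in for the neural density estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator: 5 noisy measurements of a single parameter.
def simulator(theta, n_data=5):
    return theta + 0.3 * rng.standard_normal(n_data)

theta_true = 1.0
d_obs = simulator(theta_true)

# Flat prior on theta, represented on a grid for simplicity.
grid = np.linspace(-3.0, 3.0, 400)

def fit_conditional_gaussian(thetas, ds):
    """Stand-in for a neural density estimator: fit p(d | theta) =
    N(a*theta + b, sigma^2 I) to the accumulated simulations by least squares."""
    X = np.column_stack([thetas, np.ones_like(thetas)])
    coef, *_ = np.linalg.lstsq(X, ds, rcond=None)      # shape (2, n_data)
    sigma2 = (ds - X @ coef).var()
    return coef, sigma2

def log_like(theta, coef, sigma2, d):
    mu = np.array([theta, 1.0]) @ coef
    return -0.5 * np.sum((d - mu) ** 2) / sigma2

# Active learning: each round draws new simulation parameters from the current
# posterior approximation, concentrating simulations where they are informative.
weights = np.full(grid.size, 1.0 / grid.size)          # round 0: the prior
thetas = np.empty(0)
ds = np.empty((0, d_obs.size))
for round_ in range(4):
    new = rng.choice(grid, size=50, p=weights)
    thetas = np.concatenate([thetas, new])
    ds = np.vstack([ds, np.array([simulator(t) for t in new])])
    coef, sigma2 = fit_conditional_gaussian(thetas, ds)
    logp = np.array([log_like(t, coef, sigma2, d_obs) for t in grid])
    weights = np.exp(logp - logp.max())
    weights /= weights.sum()                           # flat prior => posterior

print("posterior mean:", np.sum(grid * weights), "| true theta:", theta_true)
```

The design point the sketch illustrates: because the learned likelihood is refit after each acquisition round, the simulation budget concentrates in the region of parameter space the data actually favour, which is what keeps the total cost near $\mathcal{O}(10^3)$ simulations.
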



Related research

Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we first use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Second, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as $\sim 10^4$ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
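
The compression step can be illustrated with a toy Gaussian model. The sketch below uses one standard form of asymptotically-optimal (score/MOPED-style) compression, $t = \theta_* + F^{-1}\nabla_\theta \ln L$; the data model, covariance, and fiducial parameters are invented for illustration, not the paper's supernova pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gaussian data model: d ~ N(mu(theta), C), mu linear in two parameters.
N, n = 100, 2
x = np.linspace(0.0, 1.0, N)
M = np.column_stack([np.ones(N), x])       # dmu/dtheta, shape (N, n)
C = 0.1 * np.eye(N)                        # known noise covariance
Cinv = np.linalg.inv(C)

theta_fid = np.array([0.5, 1.0])           # fiducial expansion point
theta_true = np.array([0.6, 0.9])
d = M @ theta_true + rng.multivariate_normal(np.zeros(N), C)

# Score compression: N data -> n summaries, one per parameter, saturating
# the Fisher information for this linear-Gaussian model.
F = M.T @ Cinv @ M                         # Fisher matrix, shape (n, n)
score = M.T @ Cinv @ (d - M @ theta_fid)   # gradient of the log-likelihood
t = theta_fid + np.linalg.solve(F, score)  # asymptotically optimal summaries

print("summaries t:", t, "| true theta:", theta_true)
```
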
In this paper we show how nuisance parameter marginalized posteriors can be inferred directly from simulations in a likelihood-free setting, without having to jointly infer the higher-dimensional interesting and nuisance parameter posterior first and marginalize a posteriori. The result is that for an inference task with a given number of interesting parameters, the number of simulations required to perform likelihood-free inference can be kept (roughly) the same irrespective of the number of additional nuisances to be marginalized over. To achieve this we introduce two extensions to the standard likelihood-free inference set-up. Firstly, we show how nuisance parameters can be re-cast as latent variables and hence automatically marginalized over in the likelihood-free framework. Secondly, we derive an asymptotically optimal compression from $N$ data down to $n$ summaries -- one per interesting parameter -- such that the Fisher information is (asymptotically) preserved, but the summaries are insensitive (to leading order) to the nuisance parameters. This means that the nuisance marginalized inference task involves learning $n$ interesting parameters from $n$ nuisance hardened data summaries, regardless of the presence or number of additional nuisance parameters to be marginalized over. We validate our approach on two examples from cosmology: supernovae and weak lensing data analyses with nuisance parameterized systematics. For the supernova problem, high-fidelity posterior inference of $\Omega_m$ and $w_0$ (marginalized over systematics) can be obtained from just a few hundred data simulations. For the weak lensing problem, six cosmological parameters can be inferred from $\mathcal{O}(10^3)$ simulations, irrespective of whether ten additional nuisance parameters are included in the problem or not.
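
A minimal numpy sketch of the nuisance-projection idea, under a toy linear-Gaussian model with invented sizes: partition the Fisher matrix into interesting and nuisance blocks, then subtract from the interesting-parameter score the part that is correlated with the nuisance score.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: data depend linearly on 2 interesting + 3 nuisance parameters.
N, n_int, n_nui = 100, 2, 3
M = rng.standard_normal((N, n_int + n_nui))   # dmu/dparams for all parameters
Cinv = np.eye(N)                              # unit noise covariance for simplicity

mu_fid = np.zeros(N)                          # fiducial model prediction
d = M @ np.concatenate([[0.5, -0.3], np.zeros(n_nui)]) + rng.standard_normal(N)

F = M.T @ Cinv @ M                            # full Fisher matrix
score = M.T @ Cinv @ (d - mu_fid)             # full score vector

# Partition into interesting (t) and nuisance (n) blocks.
Ftn = F[:n_int, n_int:]
Fnn = F[n_int:, n_int:]
s_t, s_n = score[:n_int], score[n_int:]

# Nuisance-hardened summaries: project out the directions in which the score
# responds to the nuisance parameters, leaving n_int summaries that are
# insensitive (to leading order) to the nuisances.
t_hardened = s_t - Ftn @ np.linalg.solve(Fnn, s_n)

print("hardened summaries:", t_hardened)
```
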
In many cosmological inference problems, the likelihood (the probability of the observed data as a function of the unknown parameters) is unknown or intractable. This necessitates approximations and assumptions, which can lead to incorrect inference of cosmological parameters, including the nature of dark matter and dark energy, or create artificial model tensions. Likelihood-free inference covers a novel family of methods to rigorously estimate posterior distributions of parameters using forward modelling of mock data. We present likelihood-free cosmological parameter inference using weak lensing maps from the Dark Energy Survey (DES) SV data, using neural data compression of weak lensing map summary statistics. We explore combinations of the power spectra, peak counts, and neural compressed summaries of the lensing mass map using deep convolutional neural networks. We demonstrate methods to validate the inference process, for both the data modelling and the probability density estimation steps. Likelihood-free inference provides a robust and scalable alternative for rigorous large-scale cosmological inference with galaxy survey data (for DES, Euclid and LSST). We have made our simulated lensing maps publicly available.
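
As a rough illustration of the neural-compression step: the Keras sketch below compresses a convergence map into one summary per parameter by regressing onto the cosmological parameters. The architecture, map size, and training target are assumptions for illustration, not the DES pipeline.

```python
import tensorflow as tf

# Hypothetical sizes: 128x128 lensing mass maps, 2 cosmological parameters.
n_params = 2
inputs = tf.keras.Input(shape=(128, 128, 1))    # simulated lensing mass map
x = tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(x)
x = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
summaries = tf.keras.layers.Dense(n_params)(x)  # compressed summary statistics
model = tf.keras.Model(inputs, summaries)
model.compile(optimizer="adam", loss="mse")

# model.fit(maps, params, ...)  # the trained model(maps) then feeds the
# density-estimation step alongside power spectra and peak counts.
```
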
Measuring the sum of the three active neutrino masses, $M_\nu$, is one of the most important challenges in modern cosmology. Massive neutrinos imprint characteristic signatures on several cosmological observables, in particular on the large-scale structure of the Universe. In order to maximize the information that can be retrieved from galaxy surveys, accurate theoretical predictions in the non-linear regime are needed. Currently, one way to achieve those predictions is by running cosmological numerical simulations. Unfortunately, producing those simulations requires high computational resources -- seven hundred CPU hours for each neutrino mass case. In this work, we propose a new method, based on a deep learning network (U-Net), to quickly generate simulations with massive neutrinos from standard $\Lambda$CDM simulations without neutrinos. We compute multiple relevant statistical measures of the deep-learning-generated simulations and conclude that our method accurately reproduces the 3-dimensional spatial distribution of matter down to non-linear scales: $k < 0.7$ h/Mpc. Finally, our method allows us to generate massive neutrino simulations 10,000 times faster than the traditional methods.
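
A minimal 2D Keras sketch of the U-Net idea: an encoder-decoder with skip connections mapping a no-neutrino density field to its massive-neutrino counterpart. The paper's network operates on 3D simulation volumes; all sizes and the 2D restriction here are illustrative assumptions.

```python
import tensorflow as tf

def conv_block(x, filters):
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = tf.keras.Input(shape=(64, 64, 1))           # LambdaCDM density slice
e1 = conv_block(inputs, 16)                          # encoder level 1
p1 = tf.keras.layers.MaxPooling2D()(e1)
e2 = conv_block(p1, 32)                              # encoder level 2
p2 = tf.keras.layers.MaxPooling2D()(e2)
b = conv_block(p2, 64)                               # bottleneck
u2 = tf.keras.layers.UpSampling2D()(b)
d2 = conv_block(tf.keras.layers.concatenate([u2, e2]), 32)  # skip connection
u1 = tf.keras.layers.UpSampling2D()(d2)
d1 = conv_block(tf.keras.layers.concatenate([u1, e1]), 16)  # skip connection
outputs = tf.keras.layers.Conv2D(1, 1)(d1)           # massive-neutrino field
unet = tf.keras.Model(inputs, outputs)
unet.compile(optimizer="adam", loss="mse")
# unet.fit(lcdm_fields, neutrino_fields, ...)
```

The skip connections are what make the U-Net suited to this task: small-scale structure from the input field is carried directly to the output, so the network only has to learn the neutrino-induced modification rather than the whole density field.
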
Yu-Chen Wang 2020
The errors of cosmological data generated from complex processes, such as the observational Hubble parameter data (OHD) and Type Ia supernova (SN Ia) data, cannot be accurately modelled by simple analytical probability distributions, e.g. a Gaussian distribution. To constrain cosmological parameters from these data, likelihood-free inference is usually used to bypass the direct calculation of the likelihood. In this paper, we propose a new procedure to perform likelihood-free cosmological inference using two artificial neural networks (ANNs), a Masked Autoregressive Flow (MAF) and a denoising autoencoder (DAE). Our procedure is the first to use a DAE to extract features from the data, in order to simplify the structure of the MAF needed to estimate the posterior. Tested on simulated Hubble parameter data with a simple Gaussian likelihood, the procedure shows the capability of extracting features from the data and estimating posterior distributions without a tractable likelihood. We demonstrate that it can accurately approximate the real posterior, achieves performance comparable to the traditional MCMC method, and that the MAF trains better from a small number of simulations when the DAE is added. We also discuss the application of the proposed procedure to OHD and Pantheon SN Ia data, and use them to constrain cosmological parameters in the non-flat $\Lambda$CDM model. For SNe Ia, we use fitted light-curve parameters to find constraints on $H_0$, $\Omega_m$ and $\Omega_\Lambda$ similar to related work, while relying on fewer empirical distributions. In addition, this work is the first to use a Gaussian process in the OHD simulation procedure.
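
A sketch of the feature-extraction stage only, under stated assumptions: layer sizes, noise level, and data dimension are invented, and the MAF stage is left as a placeholder. A denoising autoencoder maps a noisy data vector to a low-dimensional feature vector; a MAF would then estimate the posterior conditioned on those features.

```python
import tensorflow as tf

# Hypothetical sizes: a ~31-point Hubble-parameter data vector, 4 features.
n_data, n_features = 31, 4
inputs = tf.keras.Input(shape=(n_data,))
noisy = tf.keras.layers.GaussianNoise(0.1)(inputs)   # corrupt during training
h = tf.keras.layers.Dense(64, activation="relu")(noisy)
features = tf.keras.layers.Dense(n_features, name="features")(h)
h = tf.keras.layers.Dense(64, activation="relu")(features)
outputs = tf.keras.layers.Dense(n_data)(h)           # reconstruct the clean data
dae = tf.keras.Model(inputs, outputs)
dae.compile(optimizer="adam", loss="mse")
# dae.fit(data_vectors, data_vectors, ...)           # denoising reconstruction

# After training, the encoder supplies compressed features for the MAF stage:
encoder = tf.keras.Model(inputs, features)
# a MAF then learns p(theta | encoder(data)) from simulated (theta, data) pairs.
```
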