
Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time

Published by Benjamin Miller
Publication date: 2020
Research field: Physics
Paper language: English





We present algorithms (a) for nested neural likelihood-to-evidence ratio estimation, and (b) for simulation reuse via an inhomogeneous Poisson point process cache of parameters and corresponding simulations. Together, these algorithms enable automatic and extremely simulator-efficient estimation of marginal and joint posteriors. The algorithms are applicable to a wide range of physics and astronomy problems and typically offer an order of magnitude better simulator efficiency than traditional likelihood-based sampling methods. Our approach is an example of likelihood-free inference and is thus also applicable to simulators which do not offer a tractable likelihood function. Simulator runs are never rejected and can be automatically reused in future analyses. As a functional prototype implementation, we provide the open-source software package swyft.
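
A rough sketch of the first ingredient, likelihood-to-evidence ratio estimation, is given below: a small classifier is trained to distinguish jointly drawn pairs (x, θ) from marginally drawn ones, so that its output approximates the ratio p(x|θ)/p(x). The toy simulator, prior, network, and training loop are illustrative stand-ins and deliberately do not use swyft's actual API.

```python
# Minimal sketch of neural likelihood-to-evidence ratio estimation (NRE).
# Everything here (simulator, prior, architecture) is a toy stand-in.
import torch
import torch.nn as nn

def simulator(theta):
    # Toy simulator: Gaussian data centred on the parameter (assumption).
    return theta + 0.1 * torch.randn_like(theta)

prior = torch.distributions.Uniform(-1.0, 1.0)

# Classifier d(x, theta); its logit approximates log r = log p(x|theta) - log p(x).
net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    theta = prior.sample((256, 1))
    x = simulator(theta)
    theta_marg = theta[torch.randperm(len(theta))]  # break the pairing -> marginal samples
    logits = torch.cat([net(torch.cat([x, theta], dim=1)),
                        net(torch.cat([x, theta_marg], dim=1))])
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Evaluating log r(x0, theta) on a grid gives the 1-d marginal posterior up to
# normalisation: p(theta | x0) is proportional to r(x0, theta) * p(theta).
x0 = torch.zeros(1, 1)
theta_grid = torch.linspace(-1, 1, 101).unsqueeze(1)
log_r = net(torch.cat([x0.expand_as(theta_grid), theta_grid], dim=1)).squeeze(1)
```

Because the estimator only needs the parameter (or parameter subset) of interest as input, the same cached simulations can be reused to train ratio estimators for any marginal, which is the source of the simulator efficiency claimed in the abstract.
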




Read also

Fast and automated inference of binary-lens, single-source (2L1S) microlensing events with sampling-based Bayesian algorithms (e.g., Markov Chain Monte Carlo; MCMC) is challenged on two fronts: the high computational cost of likelihood evaluations with microlensing simulation codes, and a pathological parameter space where the negative-log-likelihood surface can contain a multitude of local minima that are narrow and deep. Analysis of 2L1S events usually involves grid searches over some parameters to locate approximate solutions as a prerequisite to posterior sampling, an expensive process that often requires human-in-the-loop domain expertise. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new fast and automated method is desirable. Here, we present a likelihood-free inference (LFI) approach named amortized neural posterior estimation, where a neural density estimator (NDE) learns a surrogate posterior $\hat{p}(\theta|x)$ as an observation-parametrized conditional probability distribution, from pre-computed simulations over the full prior space. Trained on 291,012 simulated Roman-like 2L1S events, the NDE produces accurate and precise posteriors within seconds for any observation within the prior support without requiring a domain expert in the loop, thus allowing for real-time and automated inference. We show that the NDE also captures expected posterior degeneracies. The NDE posterior could then be refined into the exact posterior with a downstream MCMC sampler with minimal burn-in steps.
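
As an illustration of the amortized setup described in this abstract, the sketch below trains a small mixture density network as the conditional density estimator on a toy one-parameter problem; the simulator, architecture, and sample sizes are assumptions and not the paper's Roman-like 2L1S configuration.

```python
# Illustrative sketch of amortized neural posterior estimation: a mixture
# density network learns p(theta | x) from pre-simulated (theta, x) pairs.
import torch
import torch.nn as nn

def simulator(theta):
    return theta**2 + 0.05 * torch.randn_like(theta)  # toy forward model (assumption)

theta = torch.rand(20000, 1) * 2 - 1   # draws from a U(-1, 1) prior
x = simulator(theta)                   # pre-computed simulations over the prior

K = 5  # mixture components
net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 3 * K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def log_prob(theta, x):
    # Conditional Gaussian mixture: weights, means, and scales predicted from x.
    w_logit, mu, log_sig = net(x).split(K, dim=1)
    log_w = torch.log_softmax(w_logit, dim=1)
    comp = torch.distributions.Normal(mu, log_sig.exp())
    return torch.logsumexp(log_w + comp.log_prob(theta), dim=1)

for step in range(2000):
    idx = torch.randint(0, len(theta), (512,))
    loss = -log_prob(theta[idx], x[idx]).mean()  # maximum likelihood on the pairs
    opt.zero_grad(); loss.backward(); opt.step()

# Amortization: for any new observation x0 within the prior support, the
# surrogate posterior is available immediately, with no further simulation.
x0 = torch.tensor([[0.25]])
theta_grid = torch.linspace(-1, 1, 201).unsqueeze(1)
posterior = log_prob(theta_grid, x0.expand(201, 1)).exp()
```
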
Parametric stochastic simulators are ubiquitous in science, often featuring high-dimensional input parameters and/or an intractable likelihood. Performing Bayesian parameter inference in this context can be challenging. We present a neural simulator-based inference algorithm which simultaneously offers simulation efficiency and fast empirical posterior testability, which is unique among modern algorithms. Our approach is simulation efficient by simultaneously estimating low-dimensional marginal posteriors instead of the joint posterior and by proposing simulations targeted to an observation of interest via a prior suitably truncated by an indicator function. Furthermore, by estimating a locally amortized posterior our algorithm enables efficient empirical tests of the robustness of the inference results. Such tests are important for sanity-checking inference in real-world applications, which do not feature a known ground truth. We perform experiments on a marginalized version of the simulation-based inference benchmark and two complex and narrow posteriors, highlighting the simulator efficiency of our algorithm as well as the quality of the estimated marginal posteriors. Implementation on GitHub.
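
The truncation step described above can be illustrated as follows: a ratio estimate from an earlier round defines an indicator function over parameter space, and new simulations are proposed only from the prior restricted to that region. The `log_ratio` function below is a hypothetical placeholder for a trained estimator, and the threshold convention is a simplification.

```python
# Sketch of proposing simulations from a prior truncated by an indicator
# function derived from a first-round ratio estimate (illustrative only).
import torch

def log_ratio(x0, theta):
    # Placeholder for a round-1 network; a quadratic bump peaking at log r = 0.
    return -((theta - 0.3) ** 2) / (2 * 0.1**2)

def truncated_prior_sample(n, x0, eps=1e-5):
    """Rejection-sample the U(-1, 1) prior restricted to {theta : r(x0, theta) > eps}."""
    threshold = torch.log(torch.tensor(eps))
    samples = []
    while sum(len(s) for s in samples) < n:
        theta = torch.rand(4 * n, 1) * 2 - 1                 # base prior U(-1, 1)
        keep = log_ratio(x0, theta).squeeze(1) > threshold   # indicator function
        samples.append(theta[keep])
    return torch.cat(samples)[:n]

# Round-2 simulations are then run only for these targeted parameter draws.
theta_round2 = truncated_prior_sample(1000, x0=torch.tensor([[0.0]]))
```
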
We introduce a new Markov-Chain Monte Carlo (MCMC) approach designed for efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, only using it for a short time to tune a new jump proposal. For complex posteriors we find efficiency improvements up to a factor of ~13. The estimation of parameters of gravitational-wave signals measured by ground-based detectors is currently done through Bayesian inference with MCMC one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong non-linear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
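
The general idea of spending a short, expensive exploration phase to tune a jump proposal and then sampling cheaply can be illustrated with a generic Metropolis-Hastings sketch; the simple covariance rescaling used here is a stand-in for the paper's parallel-tempering-based tuning, not its actual algorithm.

```python
# Generic illustration: tune a jump proposal from a short pilot run, then
# sample with plain Metropolis-Hastings using the tuned proposal covariance.
import numpy as np

def log_post(theta):
    # Toy strongly correlated 2-d Gaussian posterior (assumption).
    cov = np.array([[1.0, 0.9], [0.9, 1.0]])
    return -0.5 * theta @ np.linalg.solve(cov, theta)

def metropolis(log_post, theta0, prop_cov, n_steps, rng):
    chain, lp = [theta0], log_post(theta0)
    for _ in range(n_steps):
        prop = rng.multivariate_normal(chain[-1], prop_cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(prop); lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

rng = np.random.default_rng(0)
# Short exploratory run with a broad isotropic proposal...
pilot = metropolis(log_post, np.zeros(2), 0.5 * np.eye(2), 2000, rng)
# ...then tune the jump proposal to the empirical covariance (2.38^2 / d scaling).
tuned_cov = (2.38**2 / 2) * np.cov(pilot[1000:].T)
chain = metropolis(log_post, pilot[-1], tuned_cov, 20000, rng)
```
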
The Hubble constant value is currently known to 10% accuracy unless assumptions are made for the cosmology (Sandage et al. 2006). Gravitational lens systems provide another probe of the Hubble constant using time delay measurements. However, current investigations of ~20 time delay lenses, albeit of varying levels of sophistication, have resulted in different values of the Hubble constant ranging from 50-80 km/s/Mpc. In order to reduce uncertainties, more time delay measurements are essential together with better determined mass models (Oguri 2007, Saha et al. 2006). We propose a more efficient technique for measuring time delays which does not require regular monitoring with a high-resolution interferometer array. The method uses double image and long-axis quadruple lens systems in which the brighter component varies first and dominates the total flux density. Monitoring the total flux density with low-resolution but high sensitivity radio telescopes provides the variation of the brighter image and is used to trigger high-resolution observations which can then be used to see the variation in the fainter image. We present simulations of this method together with a pilot project using the WSRT (Westerbork Synthesis Radio Telescope) to trigger VLA (Very Large Array) observations. This new method is promising for measuring time delays because it uses relatively small amounts of time on high-resolution telescopes. This will be important because many SKA pathfinder telescopes, such as MeerKAT (Karoo Array Telescope) and ASKAP (Australian Square Kilometre Array Pathfinder), have high sensitivity but limited resolution.
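
A toy numerical sketch of the triggering strategy, with an entirely made-up source light curve, time delay, and image flux ratio, might look like this:

```python
# Toy illustration of triggered monitoring: the brighter image dominates the
# low-resolution total flux density; a detected variation triggers high-resolution
# follow-up to catch the delayed counterpart in the fainter image. All numbers
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 200.0, 1.0)                  # observing epochs in days
delay, flux_ratio, sigma = 30.0, 0.25, 0.01     # assumed delay, image flux ratio, noise

def intrinsic(t):
    # Source light curve: a single flare on a flat baseline (assumption).
    return 1.0 + 0.3 * np.exp(-0.5 * ((t - 60.0) / 5.0) ** 2)

image_a = intrinsic(t)                       # brighter image, varies first
image_b = flux_ratio * intrinsic(t - delay)  # fainter image, delayed copy
total = image_a + image_b + sigma * rng.normal(size=t.size)  # low-resolution monitoring

# Trigger when the monitored total flux rises significantly above its quiescent level.
baseline = np.median(total[:30])
trigger_idx = int(np.argmax(total > baseline + 5 * sigma))
t_trigger = t[trigger_idx]

# Schedule high-resolution follow-up around the expected delayed variation in image B.
followup_window = (t_trigger + 0.5 * delay, t_trigger + 1.5 * delay)
print(f"trigger at day {t_trigger:.0f}, follow-up window {followup_window}")
```
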
EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive 3D mesh and the spatial resolution can be dynamically modified based on physically-motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics have been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests is presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between x3.9 and x16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performances and we expect that future optimizations and hardware evolution will lead to greater accelerations.
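
For context, the M1 approximation mentioned above closes the radiative moment hierarchy with a local Eddington factor that depends only on the reduced flux f = |F|/(cE). A minimal sketch of the standard closure (not code from EMMA) is given below.

```python
# Standard M1 closure (Levermore-type): Eddington factor and radiation
# pressure tensor from the radiation energy density E and flux F.
import numpy as np

def eddington_factor(E, F, c=1.0):
    """Eddington factor chi(f) with reduced flux f = |F| / (c E)."""
    f = np.clip(np.linalg.norm(F, axis=-1) / (c * E), 0.0, 1.0)
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

def pressure_tensor(E, F, c=1.0):
    """Radiation pressure tensor P_ij under the M1 closure."""
    chi = eddington_factor(E, F, c)
    n = F / np.maximum(np.linalg.norm(F), 1e-30)  # unit flux direction
    return E * ((1.0 - chi) / 2.0 * np.eye(3) + (3.0 * chi - 1.0) / 2.0 * np.outer(n, n))

# Limits: isotropic radiation (F -> 0) gives chi = 1/3; free streaming (|F| = cE) gives chi = 1.
print(eddington_factor(1.0, np.zeros(3)), eddington_factor(1.0, np.array([1.0, 0.0, 0.0])))
```
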
