
Bayesian inference for binary neutron star inspirals using a Hamiltonian Monte Carlo Algorithm

Added by Edward Porter
Publication date: 2018
Fields: Physics
Language: English





The coalescence of binary neutron stars is one of the main sources of gravitational waves for ground-based gravitational-wave detectors. As Bayesian inference for binary neutron stars is computationally expensive, more efficient and faster-converging algorithms are always needed. In this work, we conduct a feasibility study using a Hamiltonian Monte Carlo (HMC) algorithm. The HMC is a sampling algorithm that exploits gradient information from the geometry of the parameter space to sample efficiently from the posterior distribution, allowing it to avoid the random-walk behaviour commonly associated with stochastic samplers. As well as tuning the algorithm's free parameters specifically for gravitational-wave astronomy, we introduce a method for approximating the gradients of the log-likelihood that reduces the runtime of a $10^6$ trajectory run from ten weeks, using numerical derivatives along the Hamiltonian trajectories, to one day in the case of non-spinning neutron stars. Testing our algorithm against a set of neutron star binaries using a detector network composed of Advanced LIGO and Advanced Virgo at optimal design, we demonstrate that not only is our algorithm more efficient than a standard sampler, but a $10^6$ trajectory HMC produces an effective sample size on the order of $10^4 - 10^5$ statistically independent samples.
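The trajectory-based sampling described in the abstract can be illustrated with a minimal, generic HMC sketch on a toy Gaussian target. The function names, step size, and trajectory length below are illustrative choices, not the paper's gravitational-wave implementation or its tuned parameters:

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples,
               step_size=0.1, n_leapfrog=20, seed=0):
    """Minimal Hamiltonian Monte Carlo with an identity mass matrix."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)          # resample momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of Hamilton's equations along the trajectory
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_prob(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        # Metropolis accept/reject on the change in total energy
        h_old = -log_prob(x) + 0.5 * (p @ p)
        h_new = -log_prob(x_new) + 0.5 * (p_new @ p_new)
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian
log_prob = lambda x: -0.5 * (x @ x)
grad_log_prob = lambda x: -x
chain = hmc_sample(log_prob, grad_log_prob, np.zeros(2), 2000)
```

Because each proposal follows a long deterministic trajectory, successive samples are far less correlated than those of a random-walk sampler, which is the mechanism behind the large effective sample sizes quoted above.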



Related research

We present a Markov-chain Monte-Carlo (MCMC) technique to study the source parameters of gravitational-wave signals from the inspirals of stellar-mass compact binaries detected with ground-based gravitational-wave detectors such as LIGO and Virgo, for the case where spin is present in the more massive compact object in the binary. We discuss aspects of the MCMC algorithm that allow us to sample the parameter space in an efficient way. We show sample runs that illustrate the possibilities of our MCMC code and the difficulties that we encounter.
Third-generation (3G) gravitational-wave detectors will observe thousands of coalescing neutron star binaries with unprecedented fidelity. Extracting the highest precision science from these signals is expected to be challenging owing to both high signal-to-noise ratios and long-duration signals. We demonstrate that current Bayesian inference paradigms can be extended to the analysis of binary neutron star signals without breaking the computational bank. We construct reduced order models for $\sim 90\,\mathrm{minute}$ long gravitational-wave signals, covering the observing band ($5$-$2048\,\mathrm{Hz}$), speeding up inference by a factor of $\sim 1.3\times 10^4$ compared to the calculation times without reduced order models. The reduced order models incorporate key physics including the effects of tidal deformability, amplitude modulation due to the Earth's rotation, and spin-induced orbital precession. We show how reduced order modeling can accelerate inference on data containing multiple, overlapping gravitational-wave signals, and determine the speedup as a function of the number of overlapping signals. Thus, we conclude that Bayesian inference is computationally tractable for the long-lived, overlapping, high signal-to-noise-ratio events present in 3G observatories.
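The reduced order modeling idea can be illustrated with a toy example: build an SVD basis from a training set of waveforms and project new waveforms onto it, so later operations act on a few coefficients instead of the full time series. The sinusoid family and tolerance below are illustrative stand-ins, not the paper's tidal waveform model:

```python
import numpy as np

# Toy reduced order model: compress a one-parameter waveform family
# (sinusoids of varying frequency) into a small SVD basis.
t = np.linspace(0.0, 1.0, 2048)
train_freqs = np.linspace(20.0, 40.0, 200)
training = np.array([np.sin(2 * np.pi * f * t) for f in train_freqs])

_, s, Vt = np.linalg.svd(training, full_matrices=False)
rank = int(np.sum(s > 1e-6 * s[0]))   # keep modes above a relative tolerance
basis = Vt[:rank]                      # reduced basis, shape (rank, 2048)

# A waveform at an untrained frequency is recovered from its projection.
h = np.sin(2 * np.pi * 30.5 * t)
h_rom = basis.T @ (basis @ h)
rel_err = np.linalg.norm(h - h_rom) / np.linalg.norm(h)
```

The rank of the basis is far smaller than the number of time samples, which is the source of the speedup: likelihood evaluations reduce to short inner products in the reduced space.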
Ziming Liu, Zheng Zhang (2019)
Hamiltonian Monte Carlo (HMC) is an efficient Bayesian sampling method that can make distant proposals in the parameter space by simulating a Hamiltonian dynamical system. Despite its popularity in machine learning and data science, HMC is inefficient at sampling from spiky and multimodal distributions. Motivated by the energy-time uncertainty relation from quantum mechanics, we propose a Quantum-Inspired Hamiltonian Monte Carlo algorithm (QHMC). This algorithm allows a particle to have a random mass matrix with a probability distribution rather than a fixed mass. We prove the convergence property of QHMC and further show why such a random mass can improve the performance when we sample a broad class of distributions. In order to handle the big training data sets in large-scale machine learning, we develop a stochastic gradient version of QHMC using a Nosé-Hoover thermostat, called QSGNHT, and we also provide theoretical justifications about its steady-state distributions. Finally, in the experiments, we demonstrate the effectiveness of QHMC and QSGNHT on synthetic examples, bridge regression, image denoising and neural network pruning. The proposed QHMC and QSGNHT can indeed achieve much more stable and accurate sampling results on the test cases.
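The core QHMC modification, resampling the mass at every trajectory rather than fixing it, can be sketched on top of a standard HMC loop. This sketch uses a scalar log-normal mass on a toy Gaussian target; the paper's full method uses a random mass matrix and comes with convergence proofs that this illustration does not address:

```python
import numpy as np

def qhmc_sample(log_prob, grad_log_prob, x0, n_samples,
                step_size=0.1, n_leapfrog=20,
                mass_mu=0.0, mass_sigma=1.0, seed=0):
    """HMC variant where the (scalar) mass is redrawn from a log-normal
    distribution at each trajectory, in the spirit of QHMC."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        m = np.exp(rng.normal(mass_mu, mass_sigma))   # random mass per trajectory
        p = rng.standard_normal(x.shape) * np.sqrt(m) # momentum ~ N(0, m)
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog with velocity p/m
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new / m
            p_new += step_size * grad_log_prob(x_new)
        x_new += step_size * p_new / m
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        # Accept/reject with the mass-dependent kinetic energy
        h_old = -log_prob(x) + 0.5 * (p @ p) / m
        h_new = -log_prob(x_new) + 0.5 * (p_new @ p_new) / m
        if np.log(rng.uniform()) < h_old - h_new:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

log_prob = lambda x: -0.5 * (x @ x)
grad_log_prob = lambda x: -x
chain = qhmc_sample(log_prob, grad_log_prob, np.zeros(2), 2000)
```

A small mass gives fast, long-range moves useful for flat regions, while a large mass slows the particle down near spiky features; randomizing the mass lets one sampler cover both regimes.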
Gaussian latent variable models are a key class of Bayesian hierarchical models with applications in many fields. Performing Bayesian inference on such models can be challenging as Markov chain Monte Carlo algorithms struggle with the geometry of the resulting posterior distribution and can be prohibitively slow. An alternative is to use a Laplace approximation to marginalize out the latent Gaussian variables and then integrate out the remaining hyperparameters using dynamic Hamiltonian Monte Carlo, a gradient-based Markov chain Monte Carlo sampler. To implement this scheme efficiently, we derive a novel adjoint method that propagates the minimal information needed to construct the gradient of the approximate marginal likelihood. This strategy yields a scalable differentiation method that is orders of magnitude faster than state-of-the-art differentiation techniques when the hyperparameters are high dimensional. We prototype the method in the probabilistic programming framework Stan and test the utility of the embedded Laplace approximation on several models, including one where the dimension of the hyperparameter is $\sim$6,000. Depending on the case, the benefits can include an alleviation of the geometric pathologies that frustrate Hamiltonian Monte Carlo and a dramatic speed-up.
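The Laplace approximation at the heart of this scheme can be illustrated in one dimension: approximate $\log \int e^{f(x)}\,dx$ by $f(\hat{x}) + \tfrac{1}{2}\log\big(2\pi / (-f''(\hat{x}))\big)$, where $\hat{x}$ is the mode. This is the generic textbook version, not the paper's adjoint method for Gaussian latent variable models:

```python
import numpy as np

def laplace_log_integral(f, x_grid):
    """Laplace approximation to log ∫ exp(f(x)) dx on a 1-D grid:
    locate the mode, estimate the curvature by finite differences,
    and replace the integrand by a Gaussian around the mode."""
    fx = f(x_grid)
    i = int(np.argmax(fx))
    h = x_grid[1] - x_grid[0]
    curv = (fx[i + 1] - 2 * fx[i] + fx[i - 1]) / h**2  # f''(x̂) < 0 at the mode
    return fx[i] + 0.5 * np.log(2 * np.pi / -curv)

# Check on a Gaussian integrand, where Laplace is exact:
# ∫ exp(-x²/2) dx = √(2π), so the log is 0.5·log(2π)
grid = np.linspace(-5.0, 5.0, 10001)
approx = laplace_log_integral(lambda x: -0.5 * x**2, grid)
```

For a log-concave conditional density over the latent Gaussian variables, the same idea yields the approximate marginal likelihood that the outer HMC sampler then explores over the hyperparameters.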
Functional data registration is a necessary processing step for many applications. The observed data can be inherently noisy, often due to measurement error or natural process uncertainty, which most functional alignment methods cannot handle. A pair of functions can also have multiple optimal alignment solutions, which is not addressed in the current literature. In this paper, a flexible Bayesian approach to functional alignment is presented, which appropriately accounts for noise in the data without any pre-smoothing required. Additionally, by running parallel MCMC chains, the method can account for multiple optimal alignments via the multi-modal posterior distribution of the warping functions. To sample the warping functions most efficiently, the approach relies on a modification of standard Hamiltonian Monte Carlo that is well-defined on the infinite-dimensional Hilbert space. This flexible Bayesian alignment method is applied to both simulated and real data sets to show its efficiency in handling noisy functions and successfully accounting for multiple optimal alignments in the posterior, characterizing the uncertainty surrounding the warping functions.
