
Detection of Gravitational Waves Using Bayesian Neural Networks

Added by Jiun-Huei Proty Wu
Publication date: 2020
Fields: Physics
Language: English





We propose a new model of Bayesian Neural Networks to not only detect the events of compact binary coalescence in the observational data of gravitational waves (GW) but also identify the full length of the event duration, including the inspiral stage. This is achieved by incorporating the Bayesian approach into the CLDNN classifier, which integrates the Convolutional Neural Network (CNN) and the Long Short-Term Memory Recurrent Neural Network (LSTM). Our model successfully detects all seven BBH events in the LIGO Livingston O2 data, with the periods of their GW waveforms correctly labeled. The ability of a Bayesian approach to estimate uncertainty enables a newly defined "awareness" state for recognizing the possible presence of signals of unknown types, which would otherwise be rejected by a non-Bayesian model. Data chunks labeled with the awareness state can then be further investigated rather than overlooked. Performance tests with 40,960 training samples against 512 chunks of 8-second real noise mixed with mock signals of various optimal signal-to-noise ratios $0 \leq \rho_\text{opt} \leq 18$ show that our model recognizes 90% of the events when $\rho_\text{opt} > 7$ (100% when $\rho_\text{opt} > 8.5$) and successfully labels more than 95% of the waveform periods when $\rho_\text{opt} > 8$. The latency between the arrival of the peak signal and the generation of an alert with the associated waveform period labeled is only about 20 seconds for an unoptimized code on a moderate GPU-equipped personal computer. This makes nearly real-time detection possible, as well as forecasting of coalescence events when assisted with deeper training on a larger dataset using state-of-the-art HPCs.
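The "awareness" state described above rests on Monte Carlo uncertainty estimation: running repeated stochastic forward passes and flagging chunks where the predictions disagree. The following is a minimal numpy sketch of that idea only, not the authors' CLDNN; the toy one-layer classifier, the thresholds, and the function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_passes=100, p_drop=0.5):
    """Repeated stochastic forward passes of a toy linear classifier
    with dropout kept on at inference time (Monte Carlo dropout).
    Returns the per-pass 'signal' probabilities."""
    probs = []
    for _ in range(n_passes):
        mask = rng.random(weights.shape) > p_drop       # random dropout mask
        logit = x @ (weights * mask) / (1.0 - p_drop)   # rescale surviving weights
        probs.append(1.0 / (1.0 + np.exp(-logit)))      # sigmoid
    return np.array(probs)

def classify_chunk(probs, detect_thresh=0.8, uncert_thresh=0.15):
    """Label a data chunk: 'signal' if the mean probability is high,
    'noise' if low, and 'awareness' when the predictive spread is
    large -- the network is unsure, so the chunk is flagged for
    follow-up instead of being rejected outright."""
    mean, std = probs.mean(), probs.std()
    if std > uncert_thresh:
        return "awareness"
    return "signal" if mean > detect_thresh else "noise"
```

A deterministic (non-Bayesian) classifier produces a single probability per chunk, so low-confidence, unknown-type signals are silently binned as noise; the spread across stochastic passes is what makes the third label possible.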





Gravitational wave astronomy has already been a well-established research domain for many years. Moreover, after the 2017 detection by the LIGO/Virgo collaboration of the first gravitational wave signal emitted during the collision of a binary neutron star system, which was accompanied by the detection of other types of signals coming from the same event, multi-messenger astronomy has claimed its rights more assertively. In this context, it is of great importance for a gravitational wave experiment to have a rapid mechanism for alerting other observatories, capable of detecting other types of signals (e.g. in other wavelengths) produced by the same event, about potential gravitational wave events. In this paper, we present the first progress in the development of a neural network algorithm trained to recognize and characterize gravitational wave patterns from signal-plus-noise data samples. We have implemented t
107 - Plamen G. Krastev 2019
The groundbreaking discoveries of gravitational waves from binary black-hole mergers and, most recently, coalescing neutron stars started a new era of Multi-Messenger Astrophysics and revolutionized our understanding of the Cosmos. Machine learning techniques such as artificial neural networks are already transforming many technological fields and have also proven successful in gravitational-wave astrophysics for detection and characterization of gravitational-wave signals from binary black holes. Here we use a deep-learning approach to rapidly identify transient gravitational-wave signals from binary neutron star mergers in noisy time series representative of typical gravitational-wave detector data. Specifically, we show that a deep convolutional neural network trained on 100,000 data samples can rapidly identify binary neutron star gravitational-wave signals and distinguish them from noise and signals from merging black hole binaries. These results demonstrate the potential of artificial neural networks for real-time detection of gravitational-wave signals from binary neutron star mergers, which is critical for a prompt follow-up and detailed observation of the electromagnetic and astro-particle counterparts accompanying these important transients.
We investigate the use of approximate Bayesian neural networks (BNNs) in modeling hundreds of time-delay gravitational lenses for Hubble constant ($H_0$) determination. Our BNN was trained on synthetic HST-quality images of strongly lensed active galactic nuclei (AGN) with lens galaxy light included. The BNN can accurately characterize the posterior PDFs of model parameters governing the elliptical power-law mass profile in an external shear field. We then propagate the BNN-inferred posterior PDFs into ensemble $H_0$ inference, using simulated time delay measurements from a plausible dedicated monitoring campaign. Assuming well-measured time delays and a reasonable set of priors on the environment of the lens, we achieve a median precision of $9.3\%$ per lens in the inferred $H_0$. A simple combination of 200 test-set lenses results in a precision of 0.5 $\textrm{km s}^{-1}\,\textrm{Mpc}^{-1}$ ($0.7\%$), with no detectable bias in this $H_0$ recovery test. The computation time for the entire pipeline -- including the training set generation, BNN training, and $H_0$ inference -- translates to 9 minutes per lens on average for 200 lenses and converges to 6 minutes per lens as the sample size is increased. Being fully automated and efficient, our pipeline is a promising tool for exploring ensemble-level systematics in lens modeling for $H_0$ inference.
In recent times, neural networks have become a powerful tool for the analysis of complex and abstract data models. However, their introduction intrinsically increases our uncertainty about which features of the analysis are model-related and which are due to the neural network. This means that predictions by neural networks have biases which cannot be trivially distinguished from being due to the true nature of the creation and observation of data or not. In order to attempt to address such issues we discuss Bayesian neural networks: neural networks where the uncertainty due to the network can be characterised. In particular, we present the Bayesian statistical framework which allows us to categorise uncertainty in terms of the ingrained randomness of observing certain data and the uncertainty from our lack of knowledge about how data can be created and observed. In presenting such techniques we show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors. We will also describe how both of these methods have substantial pitfalls when put into practice, highlighting the need for other statistical techniques to truly be able to do inference when using neural networks.
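The abstract above distinguishes uncertainty from the ingrained randomness of the data (aleatoric) from uncertainty due to the network's lack of knowledge (epistemic). One standard way to separate the two from Monte Carlo posterior samples is the entropy decomposition sketched below in numpy; this is an illustrative assumption about method, not necessarily one of the two methods the paper favors.

```python
import numpy as np

def uncertainty_decomposition(mc_probs, eps=1e-12):
    """Split predictive uncertainty of a Bayesian classifier into an
    aleatoric part (randomness inherent in the data) and an epistemic
    part (the network's lack of knowledge), via the entropy
    decomposition over Monte Carlo posterior samples.

    mc_probs: array of shape (n_samples, n_classes); each row is a
    class-probability vector from one posterior draw of the weights.
    """
    mean_p = mc_probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric: average entropy of each individual posterior draw.
    aleatoric = -np.mean(np.sum(mc_probs * np.log(mc_probs + eps), axis=1))
    # Epistemic: the gap, i.e. the mutual information between the
    # prediction and the network weights.
    return total, aleatoric, total - aleatoric
```

When every posterior draw agrees, the epistemic term vanishes even if each draw is itself uncertain; when confident draws contradict each other, almost all of the uncertainty is epistemic.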
In the past few years, approximate Bayesian Neural Networks (BNNs) have demonstrated the ability to produce statistically consistent posteriors on a wide range of inference problems at unprecedented speed and scale. However, any disconnect between training sets and the distribution of real-world objects can introduce bias when BNNs are applied to data. This is a common challenge in astrophysics and cosmology, where the unknown distribution of objects in our Universe is often the science goal. In this work, we incorporate BNNs with flexible posterior parameterizations into a hierarchical inference framework that allows for the reconstruction of population hyperparameters and removes the bias introduced by the training distribution. We focus on the challenge of producing posterior PDFs for strong gravitational lens mass model parameters given Hubble Space Telescope (HST) quality single-filter, lens-subtracted, synthetic imaging data. We show that the posterior PDFs are sufficiently accurate (i.e., statistically consistent with the truth) across a wide variety of power-law elliptical lens mass distributions. We then apply our approach to test data sets whose lens parameters are drawn from distributions that are drastically different from the training set. We show that our hierarchical inference framework mitigates the bias introduced by an unrepresentative training set's interim prior. Simultaneously, given a sufficiently broad training set, we can precisely reconstruct the population hyperparameters governing our test distributions. Our full pipeline, from training to hierarchical inference on thousands of lenses, can be run in a day. The framework presented here will allow us to efficiently exploit the full constraining power of future ground- and space-based surveys.
