The groundbreaking discoveries of gravitational waves from binary black-hole mergers and, most recently, coalescing neutron stars started a new era of Multi-Messenger Astrophysics and revolutionized our understanding of the Cosmos. Machine learning techniques such as artificial neural networks are already transforming many technological fields and have also proven successful in gravitational-wave astrophysics for the detection and characterization of gravitational-wave signals from binary black holes. Here we use a deep-learning approach to rapidly identify transient gravitational-wave signals from binary neutron star mergers in noisy time series representative of typical gravitational-wave detector data. Specifically, we show that a deep convolutional neural network trained on 100,000 data samples can rapidly identify binary neutron star gravitational-wave signals and distinguish them from noise and signals from merging black hole binaries. These results demonstrate the potential of artificial neural networks for real-time detection of gravitational-wave signals from binary neutron star mergers, which is critical for prompt follow-up and detailed observation of the electromagnetic and astro-particle counterparts accompanying these important transients.
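To make the classification setup described above concrete, the following is a minimal sketch, assuming PyTorch, of a 1-D convolutional network that maps a whitened strain segment to one of three classes (noise, BBH signal, BNS signal). The layer sizes, sample rate, and segment length are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a three-class strain classifier; all sizes are assumptions.
import torch
import torch.nn as nn

class StrainClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # collapse the time axis
        )
        self.head = nn.Linear(64, n_classes)      # logits for noise / BBH / BNS

    def forward(self, x):                         # x: (batch, 1, n_samples)
        return self.head(self.features(x).squeeze(-1))

# Example: one second of (assumed) 4096 Hz whitened strain, batch of 8 segments.
model = StrainClassifier()
x = torch.randn(8, 1, 4096)                       # stand-in for detector data
print(model(x).shape)                             # torch.Size([8, 3])
```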
We propose a new model of Bayesian Neural Networks to not only detect the events of compact binary coalescence in the observational data of gravitational waves (GW) but also identify the full length of the event duration, including the inspiral stage. This is achieved by incorporating the Bayesian approach into the CLDNN classifier, which integrates the Convolutional Neural Network (CNN) and the Long Short-Term Memory Recurrent Neural Network (LSTM). Our model successfully detects all seven BBH events in the LIGO Livingston O2 data, with the periods of their GW waveforms correctly labeled. The ability of a Bayesian approach to estimate uncertainty enables a newly defined `awareness state' for recognizing the possible presence of signals of unknown types, which would otherwise be rejected in a non-Bayesian model. Data chunks labeled with the awareness state can then be further investigated rather than overlooked. Performance tests with 40,960 training samples against 512 chunks of 8-second real noise mixed with mock signals of various optimal signal-to-noise ratios $0 \leq \rho_\text{opt} \leq 18$ show that our model recognizes 90% of the events when $\rho_\text{opt} > 7$ (100% when $\rho_\text{opt} > 8.5$) and successfully labels more than 95% of the waveform periods when $\rho_\text{opt} > 8$. The latency between the arrival of the peak signal and the generation of an alert with the associated waveform period labeled is only about 20 seconds for an unoptimized code on a moderate GPU-equipped personal computer. This makes our model suitable for near-real-time detection and for forecasting coalescence events when assisted by deeper training on a larger dataset using state-of-the-art HPCs.
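A minimal sketch of the CLDNN idea described above is given below, with Monte-Carlo dropout standing in for the paper's Bayesian layers. The layer sizes, dropout rate, sample rate, and two-class per-timestep labels are illustrative assumptions rather than the authors' configuration.

```python
# Sketch of a CLDNN-style per-timestep labeller; MC dropout stands in for Bayesian layers.
import torch
import torch.nn as nn

class CLDNNTagger(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=4), nn.ReLU(), nn.Dropout(0.2),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(), nn.Dropout(0.2),
        )
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, 2)             # per-step logits: no signal / signal

    def forward(self, x):                         # x: (batch, 1, n_samples)
        h = self.cnn(x).transpose(1, 2)           # (batch, steps, channels)
        h, _ = self.lstm(h)
        return self.head(h)                       # (batch, steps, 2)

def mc_predict(model, x, n_draws=20):
    """Keep dropout active at inference; a large spread across draws flags the
    uncertain, 'awareness'-like state for further inspection."""
    model.train()                                 # leaves dropout on
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_draws)])
    return probs.mean(0), probs.std(0)

# Example on two assumed 8-second chunks sampled at 2048 Hz.
mean, spread = mc_predict(CLDNNTagger(), torch.randn(2, 1, 8 * 2048))
print(mean.shape, spread.shape)                   # (2, steps, 2) each
```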
The fastest-spinning neutron stars in low-mass X-ray binaries, despite having undergone millions of years of accretion, have been observed to spin well below the Keplerian break-up frequency. We simulate the spin evolution of synthetic populations of accreting neutron stars in order to assess whether gravitational waves can explain this behaviour and provide the distribution of spins that is observed. We model both persistent and transient accretion and consider two gravitational-wave-production mechanisms that could be present in these systems: thermal mountains and unstable $r$-modes. We also consider the case of no gravitational-wave emission and find that it does not match the observations well. We find evidence that gravitational waves can reproduce the observed spin distribution, with the most promising mechanisms being a permanent quadrupole, thermal mountains, and unstable $r$-modes. However, based on the resultant distributions alone it is difficult to distinguish between the competing mechanisms.
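The spin-equilibrium picture behind this comparison can be illustrated with a toy torque balance (not the paper's population-synthesis model): an accretion torque $N_\text{acc} \simeq \dot{M}\sqrt{GMR}$ spins the star up, while a small mass quadrupole with ellipticity $\epsilon$ spins it down via $N_\text{gw} = -(32G/5c^5)\,\epsilon^2 I^2 \Omega^5$. The sketch below integrates this balance with assumed, illustrative values of $\dot{M}$ and $\epsilon$.

```python
# Toy torque-balance integration; M, R, I, Mdot, eps are illustrative assumptions.
import numpy as np

G, c = 6.674e-11, 2.998e8                         # SI units
M, R, I = 1.4 * 1.989e30, 1.0e4, 1.0e38           # mass [kg], radius [m], moment of inertia [kg m^2]
Mdot = 1.0e-9 * 1.989e30 / 3.156e7                # assumed accretion rate: 1e-9 Msun/yr in kg/s
eps = 1.0e-8                                      # assumed quadrupole ellipticity (e.g. thermal mountain)

def domega_dt(omega):
    n_acc = Mdot * np.sqrt(G * M * R)             # spin-up torque from accreted matter
    n_gw = -(32.0 * G / (5.0 * c**5)) * eps**2 * I**2 * omega**5   # GW spin-down torque
    return (n_acc + n_gw) / I

# Forward-Euler spin evolution over 1 Gyr of persistent accretion, starting at 100 Hz.
omega = 2.0 * np.pi * 100.0
dt = 1.0e4 * 3.156e7                              # 10^4-yr steps
for _ in range(100_000):
    omega += domega_dt(omega) * dt

omega_eq = (5.0 * c**5 * Mdot * np.sqrt(G * M * R)
            / (32.0 * G * eps**2 * I**2)) ** 0.2  # spin at which the torques balance
print(f"spin after 1 Gyr: {omega / (2 * np.pi):.0f} Hz; equilibrium: {omega_eq / (2 * np.pi):.0f} Hz")
```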
One of the key challenges of real-time detection and parameter estimation of gravitational waves from compact binary mergers is the computational cost of conventional matched-filtering and Bayesian inference approaches. In particular, the application of these methods to the full signal parameter space available to the gravitational-wave detectors, and/or real-time parameter estimation is computationally prohibitive. On the other hand, rapid detection and inference are critical for prompt follow-up of the electromagnetic and astro-particle counterparts accompanying important transients, such as binary neutron-star and black-hole neutron-star mergers. Training deep neural networks to identify specific signals and learn a computationally efficient representation of the mapping between gravitational-wave signals and their parameters allows both detection and inference to be done quickly and reliably, with high sensitivity and accuracy. In this work we apply a deep-learning approach to rapidly identify and characterize transient gravitational-wave signals from binary neutron-star mergers in real LIGO data. We show for the first time that artificial neural networks can promptly detect and characterize binary neutron star gravitational-wave signals in real LIGO data, and distinguish them from noise and signals from coalescing black-hole binaries. We illustrate this key result by demonstrating that our deep-learning framework classifies correctly all gravitational-wave events from the Gravitational-Wave Transient Catalog, GWTC-1 [Phys. Rev. X 9 (2019), 031040]. These results emphasize the importance of using realistic gravitational-wave detector data in machine learning approaches, and represent a step towards achieving real-time detection and inference of gravitational waves.
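As a sketch of how detection and characterization can share a single learned representation, the following assumes a PyTorch trunk with a classification head and a single regression head (an illustrative chirp-mass output); the architecture, sizes, and loss are assumptions for the example, not those used in the paper.

```python
# Sketch of joint detection + characterization with a shared trunk; all sizes and the
# single chirp-mass output are assumptions for illustration.
import torch
import torch.nn as nn

class DetectAndCharacterize(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classify = nn.Linear(64, 3)          # logits: noise / BBH / BNS
        self.regress = nn.Linear(64, 1)           # e.g. chirp mass [solar masses]

    def forward(self, x):                         # x: (batch, 1, n_samples)
        h = self.trunk(x)
        return self.classify(h), self.regress(h)

def joint_loss(logits, m_pred, labels, m_true):
    """Cross-entropy for detection plus MSE for the parameter, applied only to
    samples that actually contain a signal (labels 1 = BBH, 2 = BNS)."""
    ce = nn.functional.cross_entropy(logits, labels)
    signal = labels > 0
    mse = nn.functional.mse_loss(m_pred[signal], m_true[signal]) if signal.any() else 0.0
    return ce + mse

# Usage on stand-in data: four segments, one noise, one BBH, two BNS.
model = DetectAndCharacterize()
logits, m_pred = model(torch.randn(4, 1, 8192))
labels = torch.tensor([0, 1, 2, 2])
m_true = torch.tensor([[0.0], [30.0], [1.2], [1.4]])
print(joint_loss(logits, m_pred, labels, m_true))
```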
Gravitational-wave astronomy has already been a well-established research domain for many years. Moreover, after the detection by the LIGO/Virgo collaboration in 2017 of the first gravitational-wave signal emitted during the collision of a binary neutron star system, which was accompanied by the detection of other types of signals from the same event, multi-messenger astronomy has asserted itself even more strongly. In this context, it is of great importance for a gravitational-wave experiment to have a rapid mechanism for alerting other observatories, capable of detecting other types of signals (e.g. in other wavelengths) produced by the same event, about potential gravitational-wave events. In this paper, we present the first progress in the development of a neural network algorithm trained to recognize and characterize gravitational-wave patterns in signal-plus-noise data samples. We have implemented t
Gravitational-wave memory manifests as a permanent distortion of an idealized gravitational-wave detector and arises generically from energetic astrophysical events. For example, binary black hole mergers are expected to emit memory bursts a little more than an order of magnitude smaller in strain than the oscillatory parent waves. We introduce the concept of orphan memory: gravitational-wave memory for which there is no detectable parent signal. In particular, high-frequency gravitational-wave bursts ($\gtrsim$ kHz) produce orphan memory in the LIGO/Virgo band. We show that Advanced LIGO measurements can place stringent limits on the existence of high-frequency gravitational waves, effectively increasing the LIGO bandwidth by orders of magnitude. We investigate the prospects for and implications of future searches for orphan memory.