How effective is machine learning to detect long transient gravitational waves from neutron stars in a real search?


Abstract

We present a comprehensive study of the effectiveness of Convolutional Neural Networks (CNNs) at detecting long-duration transient gravitational-wave signals, lasting $\mathcal{O}(\mathrm{hours\text{--}days})$, from isolated neutron stars. We determine that CNNs are robust to signal morphologies that differ from the training set, and that they do not require many training injections or much data to guarantee good detection efficiency and low false alarm probability. In fact, we only need to train one CNN on signal/noise maps in a single 150 Hz band; afterwards, the CNN can distinguish signals from noise well in any band, though with different efficiencies and false alarm probabilities due to the non-stationary noise in LIGO/Virgo. We demonstrate that we can control the false alarm probability of the CNNs by selecting an optimal threshold on their outputs, which appears to be frequency dependent. Finally, we compare the detection efficiencies of the networks to those of a well-established algorithm, the Generalized FrequencyHough (GFH), which maps curves in the time/frequency plane to lines in a plane whose coordinates relate to the initial frequency and spindown of the source. The networks have sensitivities similar to that of the GFH but are orders of magnitude faster to run and can detect signals to which the GFH is blind. Using the results of our analysis, we propose strategies to apply CNNs in a real search with LIGO/Virgo data and to overcome the obstacles we would encounter, such as a finite amount of training data. We then use our networks and strategies to run a real search for a remnant of GW170817, the first time a machine learning method has been applied to a search for a gravitational-wave signal from an isolated neutron star.
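The threshold selection described above can be illustrated with a minimal sketch: given CNN output scores on noise-only maps in one frequency band, pick the smallest threshold whose empirical false alarm probability stays at or below a target value. The function names, score distribution, and target value here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def threshold_for_fap(noise_scores, target_fap):
    """Threshold on CNN output scores such that the empirical false
    alarm probability on noise-only maps is approximately target_fap."""
    return np.quantile(np.asarray(noise_scores), 1.0 - target_fap)

def false_alarm_probability(noise_scores, threshold):
    """Fraction of noise-only maps whose score exceeds the threshold."""
    return np.mean(np.asarray(noise_scores) > threshold)

# Toy example: noise-only CNN scores drawn from an assumed distribution.
rng = np.random.default_rng(0)
noise = rng.normal(0.1, 0.05, size=100_000)

thr = threshold_for_fap(noise, target_fap=0.01)
fap = false_alarm_probability(noise, thr)
```

Because the noise is non-stationary and its statistics vary with frequency, this calibration would have to be repeated per band, which is consistent with the frequency-dependent thresholds reported in the abstract.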

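The GFH's mapping of time/frequency curves to lines can be sketched numerically. For a power-law spindown $\dot f = -k f^n$ with braking index $n$, the coordinate change $x = f^{1-n}$ turns the frequency track into the straight line $x(t) = f_0^{1-n} + (n-1)\,k\,t$, which a Hough-style transform can then detect as a line. The parameter values below are illustrative, not drawn from the paper.

```python
import numpy as np

# Illustrative source parameters (assumed, not from the paper).
n, k, f0 = 5.0, 1e-12, 1000.0

# One day of observation, in seconds.
t = np.linspace(0.0, 86400.0, 1000)

# Closed-form frequency evolution obtained by integrating df/dt = -k f^n:
# f(t)^(1-n) = f0^(1-n) + (n-1) k t.
f = (f0 ** (1 - n) + (n - 1) * k * t) ** (1.0 / (1 - n))

# The transformed coordinate is linear in time; a least-squares fit
# recovers slope (n-1)*k and intercept f0^(1-n).
x = f ** (1 - n)
slope, intercept = np.polyfit(t, x, 1)
```

In this plane, each candidate signal contributes a line whose slope and intercept encode the spindown strength and initial frequency, which is the sense in which the GFH "maps curves to lines".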