Application of machine learning algorithms to the study of noise artifacts in gravitational-wave data


Abstract

The sensitivity of searches for astrophysical transients in data from the LIGO detectors is generally limited by transient, non-Gaussian noise artifacts, which occur at a high enough rate that accidental coincidences across multiple detectors are non-negligible. These non-Gaussian artifacts also typically dominate the background over that contributed by stationary noise. Such glitches are easily confused with transient gravitational-wave signals, and their robust identification and removal will benefit any search for astrophysical gravitational waves. We apply Machine Learning Algorithms (MLAs) to this problem, using data from auxiliary channels within the LIGO detectors that monitor degrees of freedom unaffected by astrophysical signals. The number of auxiliary-channel parameters describing these disturbances can be extremely large, a regime in which MLAs are particularly well suited. We demonstrate the feasibility and applicability of three very different MLAs: Artificial Neural Networks, Support Vector Machines, and Random Forests. These classifiers identify and remove a substantial fraction of the glitches present in two very different data sets: four weeks of LIGO's fourth science run and one week of LIGO's sixth science run. We observe that all three algorithms agree on which events are glitches to within 10% for the sixth-science-run data, and support this by showing that the different optimization criteria used by each classifier generate the same decision surface, based on a likelihood-ratio statistic. Furthermore, we find that all classifiers attain similar limiting performance, suggesting that most of the useful information currently contained in the auxiliary-channel parameters we extract is already being used.
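As background on why very different classifiers can converge on the same answer (a hedged aside, stating the standard Neyman-Pearson result rather than anything specific to this work): the optimal decision surfaces for separating glitches from clean data are level sets of the likelihood ratio

    Lambda(x) = p(x | glitch) / p(x | clean),

so any classifier whose output is a monotonic function of Lambda(x) will rank candidate events identically, regardless of the optimization criterion used to train it.

As a concrete illustration of one of the three approaches, the Python sketch below trains a random forest on per-event auxiliary-channel feature vectors and ranks events by glitch probability. The synthetic data, feature layout, and scikit-learn API choice are assumptions made for illustration; this is not the pipeline used in the thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Hypothetical feature matrix: one row per candidate event, one column per
    # auxiliary-channel parameter (e.g. significance, duration, or frequency of
    # the loudest coincident auxiliary-channel disturbance).
    rng = np.random.default_rng(0)
    n_events, n_features = 5000, 50
    X = rng.normal(size=(n_events, n_features))
    # Synthetic labels: 1 = glitch (coincident auxiliary activity), 0 = clean.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_events) > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The forest ranks events by the fraction of trees voting "glitch", which
    # acts as a monotonic proxy for the likelihood ratio discussed above.
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X_train, y_train)
    scores = forest.predict_proba(X_test)[:, 1]
    print("ROC AUC:", roc_auc_score(y_test, scores))

In practice the ranking scores would be thresholded at an acceptable false-dismissal rate, and events above threshold vetoed from the astrophysical search.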
