We investigate the class of quadratic detectors (i.e., detectors whose statistic is a bilinear function of the data) for the detection of poorly modeled gravitational transients of short duration. We point out that all such detection methods are equivalent to passing the signal through a filter bank and linearly combining the output energies. Existing methods for choosing the filter bank and the weight parameters rely essentially on two ideas: (i) the use of a likelihood function based on a (possibly non-informative) statistical model of the signal and the noise, and (ii) the use of Monte-Carlo simulations to tune parametric filters for the best detection probability at a fixed false-alarm rate. We propose a third approach, in which the filter bank is learned from a set of training data. By-products of this viewpoint are that, in contrast to previous methods, (i) no explicit description of the probability density function of the data in the presence of a signal is required, and (ii) the filters we use are non-parametric. The learning procedure is a two-step process: first, estimate the mean and covariance of the signal from the training data; second, find the filters that maximize a contrast criterion, referred to as the deflection, between the noise-only and signal-plus-noise hypotheses. The deflection is homogeneous to a signal-to-noise ratio and uses the quantities estimated in the first step. We apply this method to the detection of supernova core collapse, using the catalog of waveforms recently provided by Dimmelmeier et al. to train our algorithm. We expect such a detector to perform better on this particular problem, provided that the reference signals are reliable.
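The two-step learning procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the training set is synthetic, white unit-variance noise is assumed (with colored noise one would whiten by the noise covariance first), and the choice of eight filters is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: rows are example waveforms (standing in for a
# supernova waveform catalog), given some arbitrary correlation structure.
n_train, n_samp = 200, 64
training = rng.standard_normal((n_train, n_samp)) @ rng.standard_normal((n_samp, n_samp)) * 0.1

# Step 1: estimate the signal mean and covariance from the training data.
mu = training.mean(axis=0)
C_s = np.cov(training, rowvar=False)

# Step 2: for white unit-variance noise, the deflection-optimal filter bank is
# spanned by the leading eigenvectors of the signal's second-moment matrix.
M = C_s + np.outer(mu, mu)
eigval, eigvec = np.linalg.eigh(M)          # ascending eigenvalue order
filters = eigvec[:, ::-1][:, :8]            # keep the 8 most energetic directions
weights = eigval[::-1][:8]                  # weight each filter by its eigenvalue

def quadratic_statistic(x):
    """Energy of the data projected onto the filter bank, linearly combined."""
    proj = filters.T @ x
    return float(weights @ proj**2)
```

The detection statistic is then compared to a threshold set from the desired false-alarm rate under the noise-only hypothesis.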
The future space-based gravitational wave observatory LISA will consist of three spacecraft in a triangular configuration, connected by laser interferometers with 2.5 million-kilometer arms. Among other challenges, the success of the mission strongly depends on the quality of the cancellation of laser frequency noise, whose power lies eight orders of magnitude above the gravitational signal. The standard technique to perform noise removal is time-delay interferometry (TDI). TDI constructs linear combinations of delayed phasemeter measurements tailored to cancel laser noise terms. Previous work has demonstrated the relationship between TDI and principal component analysis (PCA). We build on this idea to develop an extension of TDI based on a model likelihood that directly depends on the phasemeter measurements. Assuming stationary Gaussian noise, we decompose the measurement covariance using PCA in the frequency domain. We obtain a comprehensive and compact framework that we call PCI, for principal component interferometry, and show that it provides an optimal description of the LISA data analysis problem.
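The core idea, that a PCA of the measurement covariance isolates the linear combinations in which laser noise cancels, can be illustrated with a toy model. This is a hedged sketch, not the PCI method itself: the mixing matrix, channel count, and noise ratio below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model at a single frequency bin: 6 phasemeter channels, 3 laser noise
# sources 1e8 times stronger than the secondary noise, entering through a
# hypothetical mixing (transfer) matrix A.
n_ch, n_lasers = 6, 3
A = rng.standard_normal((n_ch, n_lasers))
laser_psd = 1e8
C = laser_psd * (A @ A.T) + np.eye(n_ch)   # covariance: laser noise + unit secondary noise

# PCA of the covariance: the smallest-eigenvalue components are the linear
# combinations in which the laser noise cancels, analogous to TDI variables.
eigval, eigvec = np.linalg.eigh(C)          # ascending eigenvalue order
quiet = eigvec[:, :n_ch - n_lasers]         # laser-noise-free subspace

# Projection of the laser-noise directions onto the quiet subspace: ~ 0.
residual = quiet.T @ A
```

In the real problem this decomposition is carried out per frequency bin under the stationary-Gaussian-noise assumption, and the quiet components carry the gravitational-wave information.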
Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo and Tama-300) have now reached high sensitivity and duty cycle. We present a Bayesian evidence-based approach to the search for gravitational waves, aimed in particular at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and the odds ratio between different models, and illustrate this approach using the specific case of the gravitational wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines to search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters.
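The evidence/odds-ratio computation can be illustrated on a one-parameter toy problem. This sketch is not the authors' method (which handles many parameters efficiently); it simply shows what the quantities are: the evidence marginalizes the likelihood over the prior, and the odds ratio compares the signal and noise-only models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy follow-up: data = template of unknown amplitude + unit-variance Gaussian noise.
n = 128
template = np.sin(2 * np.pi * 8 * np.arange(n) / n)
data = 0.8 * template + rng.standard_normal(n)

def log_likelihood(amplitude):
    r = data - amplitude * template
    return -0.5 * np.sum(r**2)

# Signal-model evidence: marginalize over a flat amplitude prior on [0, 2],
# using a simple grid sum (log-sum-exp for numerical stability).
amps = np.linspace(0.0, 2.0, 401)
logL = np.array([log_likelihood(a) for a in amps])
da = amps[1] - amps[0]
log_evidence_signal = logL.max() + np.log(
    np.exp(logL - logL.max()).sum() * da / (amps[-1] - amps[0])
)

# Noise-only model evidence: no free parameters, amplitude fixed at zero.
log_evidence_noise = log_likelihood(0.0)

# Positive log odds favour the signal model over noise alone.
log_odds = log_evidence_signal - log_evidence_noise
```

Note that the evidence automatically penalizes the signal model for its extra parameter (the Occam factor), so the odds ratio is not simply a maximum-likelihood comparison.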
It is well known that two types of gravitational wave memory exist in general relativity (GR): the linear memory and the non-linear, or Christodoulou, memory. These effects, especially the latter, depend on the specific form of the Einstein equations. It can then be speculated that in modified theories of gravity the memory can differ from the GR prediction, providing novel phenomena with which to study these theories. We support this speculation by considering scalar-tensor theories, for which we find two new types of memory: the T memory and the S memory, which contribute to the tensor and scalar components of the gravitational wave, respectively. In particular, the former is caused by the burst of energy carried away by scalar radiation, while the latter is intimately related to the no-scalar-hair property of black holes in scalar-tensor gravity. We estimate the size of these two types of memory in gravitational collapse, and formulate a detection strategy for the S memory, which can be singled out from tensor gravitational waves. We show that (i) the S memory exists even in spherical symmetry, and is observable under current model constraints, and (ii) while the T memory is usually much weaker than the S memory, it can become comparable in the case of spontaneous scalarization.
Within the next few years, Advanced LIGO and Virgo should detect gravitational waves from binary neutron star and neutron star-black hole mergers. These sources are also predicted to power a broad array of electromagnetic transients. Because the electromagnetic signatures can be faint and fade rapidly, observing them hinges on rapidly inferring the sky location from the gravitational-wave observations. Markov chain Monte Carlo methods for gravitational-wave parameter estimation can take hours or more. We introduce BAYESTAR, a rapid, Bayesian, non-Markov chain Monte Carlo sky localization algorithm that takes just seconds to produce probability sky maps that are comparable in accuracy to those of the full analysis. Prompt localizations from BAYESTAR will make it possible to search for electromagnetic counterparts of compact binary mergers.
We explore machine learning methods to detect gravitational waves (GW) from binary black hole (BBH) mergers using deep learning (DL) algorithms. The DL networks are trained with gravitational waveforms obtained from BBH mergers with component masses randomly sampled in the range from 5 to 100 solar masses and luminosity distances from 100 Mpc out to at least 2000 Mpc. The GW signal waveforms are injected into public data from the O2 run of the Advanced LIGO and Advanced Virgo detectors, in time windows that do not coincide with those of known detected signals. We demonstrate that DL algorithms trained with GW signal waveforms at distances of 2000 Mpc still show high accuracy when detecting closer signals, within the ranges considered in our analysis. Moreover, by combining the results of the three-detector network in a single RGB image, the single-detector performance is improved by as much as 70%. Furthermore, we train a regression network to perform parameter inference on BBH spectrogram data and apply this network to the events from the GWTC-1 and GWTC-2 catalogs. Without significant optimization of our algorithms, we obtain results that are mostly consistent with published results by the LIGO-Virgo Collaboration. In particular, our predictions for the chirp mass are compatible (up to $3\sigma$) with the official values for 90% of events.
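The idea of packing the three-detector information into a single RGB network input can be sketched as follows. This is a hedged illustration, not the paper's pipeline: the injected chirp, noise level, and spectrogram parameters are invented, and a real analysis would use the actual detector strain and a tuned time-frequency transform.

```python
import numpy as np

rng = np.random.default_rng(3)

def spectrogram(x, nperseg=64):
    """Magnitude spectrogram from non-overlapping Hann-windowed FFT segments."""
    n_seg = len(x) // nperseg
    segs = x[:n_seg * nperseg].reshape(n_seg, nperseg)
    return np.abs(np.fft.rfft(segs * np.hanning(nperseg), axis=1)).T

# Hypothetical strain streams from three detectors (e.g. H1, L1, V1), each
# containing the same chirp-like injection buried in detector noise.
t = np.linspace(0, 1, 4096)
chirp = np.sin(2 * np.pi * (30 * t + 100 * t**2))
streams = [chirp + 0.5 * rng.standard_normal(t.size) for _ in range(3)]

# Stack the per-detector spectrograms as the R, G and B channels of one image,
# so a single network input carries the full three-detector information.
channels = [spectrogram(s) for s in streams]
rgb = np.stack(channels, axis=-1)
rgb = rgb / rgb.max()          # normalize to [0, 1] for the network input
```

The resulting array has shape (frequency, time, 3) and can be fed directly to any image-classification network expecting RGB input.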