Ground-based gravitational wave laser interferometers (LIGO, GEO-600, Virgo and TAMA-300) have now reached high sensitivity and duty cycle. We present a Bayesian evidence-based approach to the search for gravitational waves, in particular aimed at the follow-up of candidate events generated by the analysis pipeline. We introduce and demonstrate an efficient method to compute the evidence and the odds ratio between different models, and illustrate this approach using the specific case of the gravitational wave signal generated during the inspiral phase of binary systems, modelled at the leading quadrupole Newtonian order, in synthetic noise. We show that the method is effective in detecting signals at the detection threshold and is robust against (some types of) instrumental artefacts. The computational efficiency of this method makes it scalable to the analysis of all the triggers generated by the analysis pipelines that search for coalescing binaries in surveys with ground-based interferometers, and to a whole variety of signal waveforms characterised by a larger number of parameters.
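To make the evidence computation concrete, the following is a minimal sketch (illustrative only, not the paper's pipeline): a Bayes factor between a noise-only model and a one-parameter signal model (a sinusoid of unknown frequency and fixed amplitude) in white Gaussian noise, with the signal-model evidence obtained by marginalising the likelihood over a flat frequency prior on a grid. The waveform, the prior, and all numerical values are assumptions made for the illustration; with equal prior odds the odds ratio reduces to this Bayes factor, and the real analysis marginalises over the full set of inspiral parameters rather than a single frequency.

```python
# Toy Bayes factor: one-parameter sinusoid model vs noise-only model,
# evidence by grid marginalisation over a flat frequency prior.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
fs, duration, sigma = 256.0, 4.0, 1.0        # sample rate [Hz], duration [s], noise std
t = np.arange(0.0, duration, 1.0 / fs)
f_true, amp = 40.0, 0.4                      # injected frequency [Hz] and (fixed) amplitude
data = amp * np.sin(2 * np.pi * f_true * t) + rng.normal(0.0, sigma, t.size)

def log_likelihood(freq):
    """Gaussian log-likelihood up to a constant shared by both models."""
    residual = data - amp * np.sin(2 * np.pi * freq * t)
    return -0.5 * np.sum(residual**2) / sigma**2

# Signal-model evidence: Riemann-sum marginalisation over a flat prior on [20, 60] Hz.
freqs = np.linspace(20.0, 60.0, 4001)
log_L = np.array([log_likelihood(f) for f in freqs])
df = freqs[1] - freqs[0]
log_Z_signal = logsumexp(log_L) + np.log(df) - np.log(freqs[-1] - freqs[0])

# Noise-model evidence: no free parameters, just the likelihood of pure noise.
log_Z_noise = -0.5 * np.sum(data**2) / sigma**2

# With equal prior odds, the odds ratio equals the Bayes factor.
print(f"log Bayes factor (signal vs noise): {log_Z_signal - log_Z_noise:.1f}")
```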
We present the first application of a hierarchical Markov Chain Monte Carlo (MCMC) follow-up to continuous gravitational-wave candidates from real-data searches. The follow-up uses an MCMC sampler to draw parameter-space points from the posterior distribution, constructed using the matched filter as a log-likelihood. As outliers are narrowed down, the coherence time is increased, imposing more restrictive phase-evolution templates. We introduce a novel Bayes factor to compare results from different stages: the signal hypothesis is derived from first principles, while the noise hypothesis uses extreme value theory to derive a background model. The effectiveness of our proposal is evaluated on simulated Gaussian data and applied to a set of 30 outliers produced by different continuous-wave searches on O2 Advanced LIGO data. The results of our analysis suggest that all but five outliers are inconsistent with an astrophysical origin under the standard continuous-wave signal model. We successfully ascribe four of the surviving outliers to instrumental artifacts and a strong hardware injection present in the data. The behavior of the fifth outlier suggests an instrumental origin as well, but we could not relate it to any known instrumental cause.
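As a rough illustration of the extreme-value-theory ingredient, the sketch below (a toy of my own, not the authors' construction) fits a Gumbel distribution to the loudest statistic recorded over a template bank in pure-noise realisations and asks how far a hypothetical candidate lies in that background tail. The chi-squared per-template statistic, the bank size, and the candidate value are all assumptions.

```python
# Extreme value theory toy: Gumbel fit to the loudest per-realisation statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Background: maximum of many chi^2-like statistics per noise realisation,
# standing in for the loudest template of a bank searched over pure noise.
n_templates, n_realisations = 5000, 2000
background_maxima = rng.chisquare(df=4, size=(n_realisations, n_templates)).max(axis=1)

# Fit a Gumbel (type-I extreme value) distribution to the background maxima.
loc, scale = stats.gumbel_r.fit(background_maxima)

candidate_statistic = 55.0                      # hypothetical outlier value
p_noise = stats.gumbel_r.sf(candidate_statistic, loc=loc, scale=scale)
print(f"Gumbel fit: loc={loc:.2f}, scale={scale:.2f}")
print(f"P(background >= candidate) = {p_noise:.2e}")
```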
We estimate the probability of detecting a gravitational wave signal from coalescing compact binaries in simulated data from a ground-based interferometric detector of gravitational radiation using Bayesian model selection. The simulated chirp signal is assumed to be a spinless post-Newtonian (PN) waveform of a given expansion order, while the search template is assumed to be either of the same post-Newtonian family as the simulated signal or one expansion order below it. Within the Bayesian framework, and by applying a reversible jump Markov chain Monte Carlo algorithm, we compare PN1.5 vs. PN2.0 and PN3.0 vs. PN3.5 waveforms by deriving the detection probabilities, the statistical uncertainties due to noise as a function of the SNR, and the posterior distributions of the parameters. Our analysis indicates that the detection probabilities are not compromised when simplified models are used for the comparison, while the accuracy with which the parameters characterizing these signals are determined can be significantly degraded, regardless of which post-Newtonian orders are compared.
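A stripped-down analogue of the trans-model sampling is sketched below (a toy of my own, not the paper's reversible jump implementation): two competing templates share a single frequency parameter, so a model-switch proposal involves no change of dimension and the acceptance probability reduces to a likelihood ratio. The drifting-sinusoid stand-ins for the two waveform families and all tuning numbers are assumptions.

```python
# Toy model-switching Metropolis sampler over a model index and a shared frequency.
import numpy as np

rng = np.random.default_rng(2)
fs, duration, sigma = 128.0, 4.0, 1.0
t = np.arange(0.0, duration, 1.0 / fs)
drift = 0.5                                      # frequency drift of model 1 [Hz/s]
data = np.sin(2 * np.pi * (30.0 + 0.5 * drift * t) * t) + rng.normal(0.0, sigma, t.size)

def template(k, f):
    """Model 0: pure sinusoid; model 1: linearly drifting sinusoid."""
    f_eff = f if k == 0 else f + 0.5 * drift * t
    return np.sin(2 * np.pi * f_eff * t)

def log_like(k, f):
    return -0.5 * np.sum((data - template(k, f)) ** 2) / sigma**2

# Alternate small within-model frequency moves with model-switch proposals;
# flat priors and symmetric proposals, so the acceptance ratio is the likelihood ratio.
k, f, chain_k = 0, 30.0, []
for step in range(20000):
    if step % 2 == 0:
        k_new, f_new = k, f + rng.normal(0.0, 0.01)
    else:
        k_new, f_new = 1 - k, f
    if np.log(rng.uniform()) < log_like(k_new, f_new) - log_like(k, f):
        k, f = k_new, f_new
    chain_k.append(k)

chain_k = np.array(chain_k[5000:])               # discard burn-in
odds = (chain_k == 1).mean() / max((chain_k == 0).mean(), 1e-6)
print(f"posterior odds (model 1 vs model 0) ≈ {odds:.3g}")
```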
We present the first multi-wavelength follow-up observations of two candidate gravitational-wave (GW) transient events recorded by LIGO and Virgo in their 2009-2010 science run. The events were selected with low latency by the network of GW detectors, and their candidate sky locations were observed by the Swift observatory. Image transient detection was used to analyze the collected electromagnetic data, which were found to be consistent with background. Off-line analysis of the GW data alone has also established that the selected GW events show no evidence of an astrophysical origin; one of them is consistent with background and the other was a test, part of a blind injection challenge. With this work we demonstrate the feasibility of rapid follow-ups of GW transients and establish the sensitivity improvement that joint electromagnetic and GW observations could bring. This is a first step toward an electromagnetic follow-up program in the regime of routine detections with the advanced GW instruments expected within this decade. In that regime multi-wavelength observations will play a significant role in completing the astrophysical identification of GW sources. We present the methods and results from this first combined analysis and discuss its implications in terms of sensitivity for the present and future instruments.
We investigate the class of quadratic detectors (i.e., detectors whose statistic is a bilinear function of the data) for the detection of poorly modeled gravitational transients of short duration. We point out that all such detection methods are equivalent to passing the signal through a filter bank and linearly combining the output energies. Existing methods for choosing the filter bank and the weight parameters rely essentially on the following two ideas: (i) the use of a likelihood function based on a (possibly non-informative) statistical model of the signal and the noise, and (ii) the use of Monte Carlo simulations to tune parametric filters for the best detection probability at a fixed false alarm rate. We propose a third approach in which the filter bank is learned from a set of training data. By-products of this viewpoint are that, contrary to previous methods, (i) no explicit description of the probability density function of the data in the presence of a signal is required, and (ii) the filters we use are non-parametric. The learning procedure may be described as a two-step process: first, estimate the mean and covariance of the signal from the training data; second, find the filters that maximize a contrast criterion, referred to as the deflection, between the noise-only and signal-plus-noise hypotheses. The deflection has the same dimensions as a signal-to-noise ratio and uses the quantities estimated in the first step. We apply this method to the problem of detecting supernova core collapses, using the catalog of waveforms recently provided by Dimmelmeier et al. to train our algorithm. We expect such a detector to perform better on this particular problem, provided that the reference signals are reliable.
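The sketch below is a simplified rendition under assumptions of my own (white noise, damped-sinusoid stand-ins for the waveform catalog, a plain eigen-decomposition in place of the full deflection optimisation, and the estimated signal mean omitted): it estimates the signal-class covariance from training waveforms, keeps the leading eigenvectors as a non-parametric filter bank, and sums the weighted squared filter outputs as the quadratic statistic.

```python
# Toy learned quadratic detector: filter bank from the training-set covariance.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_train, sigma = 256, 200, 1.0
t = np.linspace(0.0, 1.0, n_samples)

def random_waveform():
    """Hypothetical training waveform: a randomly damped, randomly phased sinusoid."""
    f0, tau, phi = rng.uniform(20, 60), rng.uniform(0.05, 0.2), rng.uniform(0, 2 * np.pi)
    return np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t + phi)

training = np.array([random_waveform() for _ in range(n_train)])

# Step 1: estimate the second-order statistics of the signal class.
cov_sig = np.cov(training, rowvar=False)

# Step 2: with white noise, keep the leading eigenvectors of the signal
# covariance as the filter bank, weighted by their eigenvalues.
eigvals, eigvecs = np.linalg.eigh(cov_sig)
order = np.argsort(eigvals)[::-1][:8]
filters, weights = eigvecs[:, order], eigvals[order]

def quadratic_statistic(x):
    """Weighted energy of the data projected onto the learned filters."""
    return np.sum(weights * (filters.T @ x) ** 2)

noise_only = rng.normal(0.0, sigma, n_samples)
signal_plus_noise = 5.0 * random_waveform() + rng.normal(0.0, sigma, n_samples)
print("statistic, noise only        :", round(quadratic_statistic(noise_only), 2))
print("statistic, signal plus noise :", round(quadratic_statistic(signal_plus_noise), 2))
```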
We discuss the prospects of gravitational lensing of gravitational waves (GWs) coming from core-collapse supernovae (CCSN). Since the CCSN GW signal can only be detected from within our own Galaxy and the Local Group by current and upcoming ground-based GW detectors, we focus on microlensing. We introduce a new technique, based on analysis of the power spectrum and on associating peaks of the power spectrum with peaks of the amplification factor, to identify lensed signals. We validate our method by applying it to CCSN-like mock signals lensed by a point-mass lens. We find that lensed and unlensed signals can be differentiated using the peak association at more than one sigma for lens masses larger than 150 solar masses. We also study the correlation integral between the power spectrum and the corresponding amplification factor. This statistical approach is able to differentiate between unlensed and lensed signals for lenses as small as 15 solar masses. Further, we demonstrate that this method can be used to estimate the mass of the lens if the signal is lensed. The power-spectrum-based analysis is general: it can be applied to any broadband signal and is especially useful for incoherent signals.
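As a rough sketch of the peak-association idea (my own toy under a geometric-optics, two-image point-mass approximation, and with both the lensed and unlensed spectra assumed available, unlike the real analysis): the lensed power spectrum is modulated by |F(f)|^2, which oscillates with the image time delay, so correlating candidate |F(f)|^2 curves against the observed modulation yields a crude estimate of the delay and hence of the lens mass. The magnifications, the injected delay, and the burst model are all assumptions.

```python
# Toy power-spectrum modulation and time-delay recovery for a two-image lens.
import numpy as np

rng = np.random.default_rng(4)
fs, duration = 4096.0, 1.0
t = np.arange(0.0, duration, 1.0 / fs)

# Broadband CCSN-like burst stand-in: windowed white noise.
burst = rng.normal(0.0, 1.0, t.size) * np.exp(-((t - 0.5) / 0.05) ** 2)

def amp_factor_sq(f, dt, mu_plus=1.5, mu_minus=0.5):
    """|F(f)|^2 for two images with magnifications mu_+/- and time delay dt."""
    return mu_plus + mu_minus + 2 * np.sqrt(mu_plus * mu_minus) * np.sin(2 * np.pi * f * dt)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spec_unlensed = np.abs(np.fft.rfft(burst)) ** 2

dt_true = 5e-3                                   # injected image time delay [s]
spec_lensed = spec_unlensed * amp_factor_sq(freqs, dt_true) * rng.uniform(0.8, 1.2, freqs.size)

# Scan candidate delays and correlate the observed modulation with |F(f)|^2.
band = (freqs > 50) & (freqs < 1000)
modulation = spec_lensed[band] / spec_unlensed[band]
candidates = np.linspace(1e-3, 1e-2, 400)
corr = [np.corrcoef(modulation, amp_factor_sq(freqs[band], dt))[0, 1] for dt in candidates]
print(f"recovered delay ≈ {candidates[int(np.argmax(corr))] * 1e3:.2f} ms "
      f"(injected {dt_true * 1e3:.2f} ms)")
```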