The expected event rate of lensed gravitational wave sources scales with the merger rate at redshift $z \geq 1$, where the optical depth for lensing is high. It is commonly assumed that the merger rate of astrophysical compact objects is closely connected with the star formation rate, which peaks around redshift $z \sim 2$. However, a major source of uncertainty is the delay time between the formation and merger of compact objects. We explore the impact of delay time on the lensing event rate. We show that as the delay time increases, the peak of the merger rate of gravitational wave sources is deferred to lower redshift. This leads to a reduction in the rate of lensed events detectable by the gravitational wave detectors. We show that for a delay time of around $10$ Gyr or larger, the lensed event rate can be less than one per year at the design sensitivity of LIGO/Virgo. We also estimate the merger rate of lensed sub-threshold events for different delay-time scenarios, finding that for larger delay times the number of lensed sub-threshold events is reduced, whereas for small delay times they are significantly more frequent. This analysis shows for the first time that lensing is a complementary probe of the different formation channels of binary systems, exploiting the lensing event rate from both well-detected and sub-threshold events measurable with the network of gravitational wave detectors.
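The qualitative effect described above can be sketched numerically: convolve a star-formation history with a delay-time distribution and track where the merger rate peaks. The cosmology, the Madau-Dickinson-shaped rate, and the $p(t_d) \propto 1/t_d$ distribution below are standard illustrative choices, not the paper's actual model.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

# Flat LambdaCDM with H0 = 70 km/s/Mpc, Om = 0.3 (illustrative choices).
H0 = 70.0 / 978.0                      # Hubble constant in 1/Gyr
Om, Ol = 0.3, 0.7
zg = np.linspace(0.0, 20.0, 4000)
Hz = H0 * np.sqrt(Om * (1 + zg)**3 + Ol)

# Age of the Universe at each z: t(z) = int_z^inf dz' / ((1+z') H(z')).
lookback = cumulative_trapezoid(1.0 / ((1 + zg) * Hz), zg, initial=0.0)
age = (lookback[-1] + 0.18) - lookback  # +0.18 Gyr ~ tail beyond z = 20

def sfr(z):
    """Madau-Dickinson-shaped star formation rate (arbitrary units)."""
    return (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def merger_rate(t_min):
    """SFR convolved with p(t_d) ~ 1/t_d for t_min < t_d < 13 Gyr."""
    td = np.geomspace(t_min, 13.0, 300)
    p = 1.0 / td
    p /= trapezoid(p, td)
    R = np.empty_like(zg)
    for i, t in enumerate(age):
        ok = (t - td) > 0.05           # formation after the Big Bang
        zb = np.interp(t - td[ok], age[::-1], zg[::-1])
        R[i] = trapezoid(sfr(zb) * p[ok], td[ok])
    return R

for t_min in (0.05, 5.0):
    zpk = zg[np.argmax(merger_rate(t_min))]
    print(f"minimum delay {t_min:4.2f} Gyr -> merger-rate peak at z ~ {zpk:.2f}")
```

Running this shows the merger-rate peak moving from near the star-formation peak to markedly lower redshift as the minimum delay grows, which is the effect that suppresses the lensed event rate.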
Full, non-linear general relativity predicts a memory effect for gravitational waves. For compact binary coalescence, the total gravitational memory serves as an inferred observable, conceptually on the same footing as the mass and the spin of the final black hole. Given candidate waveforms for any LIGO event, then, one can calculate the posterior probability distribution functions for the total gravitational memory, and use them to compare and contrast the waveforms. In this paper we present these posterior distributions for the binary black hole merger events reported in the first Gravitational Wave Transient Catalog (GWTC-1), using the Phenomenological and Effective-One-Body waveforms. On the whole, the two sets of posterior distributions agree with each other quite well, though we find larger discrepancies for the $\ell=2, m=1$ mode of the memory. This signals a possible source of systematic errors that is not captured by the posterior distributions of other inferred observables. Thus, the posterior distributions of various angular modes of the total memory can serve as diagnostic tools to further improve the waveforms. Analyses such as this will be especially valuable for future events as the sensitivity of ground-based detectors improves, and for LISA, which could measure the total gravitational memory directly.
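As a toy illustration of the defining feature of the nonlinear memory: it accumulates like the time integral of the squared news $|\dot h|^2$, so it rises monotonically through the inspiral and saturates to a permanent offset after merger. The damped-sinusoid stand-in below is not a physical waveform and the overall scaling is suppressed; only the accumulation behaviour is the point.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Toy oscillatory strain: exponentially growing "inspiral" for t < 0 and a
# damped "ringdown" for t > 0 (not a physical binary-black-hole waveform).
t = np.linspace(-0.2, 0.1, 20000)
h = np.where(t < 0,
             np.sin(2 * np.pi * 100 * t) * np.exp(t / 0.05),
             np.sin(2 * np.pi * 150 * t) * np.exp(-t / 0.02))

# The nonlinear memory grows like the integral of the squared news |dh/dt|^2
# (here normalized to its final value), so it is monotone and saturates.
news_sq = np.gradient(h, t) ** 2
h_mem = cumulative_trapezoid(news_sq, t, initial=0.0)
h_mem /= h_mem[-1]
print(f"fraction of total memory accumulated by t = 0: {np.interp(0.0, t, h_mem):.2f}")
```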
We describe the implementation of a search for gravitational waves from compact binary coalescences in LIGO and Virgo data. This all-sky, all-time, multi-detector search for binary coalescence has been used to search data taken in recent LIGO and Virgo runs. The search is built around a matched filter analysis of the data, augmented by numerous signal consistency tests designed to distinguish artifacts of non-Gaussian detector noise from potential detections. We demonstrate the search performance using both simulated Gaussian noise and data from the fifth LIGO science run, and show that the signal consistency tests are capable of mitigating the effect of non-Gaussian noise, providing a sensitivity comparable to that achieved in Gaussian noise.
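The core matched-filter statistic such a search computes can be sketched in a few lines: correlate the data with a template, normalized so that in unit-variance white Gaussian noise the output has unit variance. The chirp-like template and injected signal below are toys, not detector data or a physical waveform.

```python
import numpy as np

rng = np.random.default_rng(42)
fs, T = 4096, 4                        # sample rate (Hz) and duration (s)
n = fs * T
t = np.arange(n) / fs

# Toy chirp-like template (not a physical waveform): 50 -> 300 Hz sweep,
# windowed to ~1 s around the middle of the segment.
f0, f1 = 50.0, 300.0
h = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))
h *= np.exp(-0.5 * ((t - T / 2) / 0.5)**2)

sigma = np.sqrt(np.sum(h**2))          # template norm in unit-variance white noise
target_snr = 12.0
data = rng.normal(size=n) + (target_snr / sigma) * h   # noise + injection

# Matched-filter SNR time series via FFT cross-correlation; the output has
# unit variance in pure noise, so a peak ~ target_snr flags the injection.
snr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(h)), n) / sigma
peak = np.abs(snr).max()
print(f"peak |SNR| = {peak:.1f} (injected SNR {target_snr})")
```

A real pipeline adds PSD weighting for colored noise, banks of templates over masses and spins, and the signal consistency tests described above.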
We make forecasts for the impact a future midband space-based gravitational wave experiment, most sensitive to $10^{-2}- 10$ Hz, could have on potential detections of cosmological stochastic gravitational wave backgrounds (SGWBs). Specific proposed midband experiments considered are TianGo, B-DECIGO and AEDGE. We propose a combined power-law integrated sensitivity (CPLS) curve that unifies GW experiments over different frequency bands, which shows the midband improves sensitivity to SGWBs by up to two orders of magnitude at $10^{-2} - 10$ Hz. We consider GW emission from cosmic strings and phase transitions as benchmark examples of cosmological SGWBs. We explicitly model various astrophysical SGWB sources, most importantly from unresolved black hole mergers. Using Markov Chain Monte Carlo, we demonstrate that midband experiments can, when combined with LIGO A+ and LISA, significantly improve sensitivities to cosmological SGWBs and better separate them from astrophysical SGWBs. In particular, we forecast that a midband experiment improves sensitivity to the cosmic string tension $G\mu$ by up to a factor of $10$, driven by improved component separation from astrophysical sources. For phase transitions, a midband experiment can detect signals peaking at $0.1 - 1$ Hz, which for our fiducial model corresponds to early Universe temperatures of $T_* \sim 10^4 - 10^6$ GeV, generally beyond the reach of LIGO and LISA. The midband closes an energy gap and better captures characteristic spectral shape information. It thus substantially improves measurement of the properties of phase transitions at lower energies of $T_* \sim O(10^3)$ GeV, potentially relevant to new physics at the electroweak scale, whereas in this energy range LISA alone will detect an excess but not effectively measure the phase transition parameters. Our modelling code and chains are publicly available.
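The combined power-law integrated sensitivity construction can be sketched as follows: for each power-law slope, find the smallest amplitude detectable at a threshold SNR given the network's inverse-variance-summed noise, then take the envelope over slopes. The two noise curves below are crude placeholder shapes, not the real LISA or midband (TianGo/B-DECIGO/AEDGE) design sensitivities.

```python
import numpy as np
from scipy.integrate import trapezoid

f = np.geomspace(1e-4, 1e2, 2000)      # Hz
fref = 1e-2
T_obs = 4 * 365.25 * 86400             # 4 yr of observation, in seconds
snr_thr = 5.0

# Placeholder noise curves in Omega(f) units (shapes and levels invented
# for illustration only).
Om_n = {
    "low-band": 1e-12 * ((3e-3 / f)**4 + (f / 3e-3)**2),
    "midband":  1e-13 * ((1e-1 / f)**4 + (f / 1.0)**2),
}

def pli(curves):
    """Power-law integrated sensitivity for a network of experiments."""
    inv2 = sum(1.0 / c**2 for c in curves)         # inverse-variance sum
    env = np.zeros_like(f)
    for beta in np.linspace(-8, 8, 81):
        shape = (f / fref)**beta
        # Amplitude at which this power law reaches the threshold SNR,
        # from SNR^2 = 2 T int (Omega_gw / Omega_n)^2 df.
        A = snr_thr / np.sqrt(2 * T_obs * trapezoid(shape**2 * inv2, f))
        env = np.maximum(env, A * shape)
    return env

single = pli([Om_n["low-band"]])
combined = pli(list(Om_n.values()))
i = np.argmin(np.abs(f - 0.3))
print(f"PLIS at 0.3 Hz: low-band alone {single[i]:.1e}, with midband {combined[i]:.1e}")
```

Because adding an experiment can only increase the summed inverse variance, the combined envelope lies at or below the single-experiment one everywhere, with the largest gain inside the added band.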
Estimating the parameters of gravitational wave signals detected by ground-based detectors requires an understanding of the properties of the detectors' noise. In particular, the most commonly used likelihood function for gravitational wave data analysis assumes that the noise is Gaussian, stationary, and of known frequency-dependent variance. The frequency-dependent variance of the colored Gaussian noise defines a whitening filter that is applied to the data before computation of the likelihood function. In practice, the noise variance is not known a priori and evolves over timescales of dozens of seconds to minutes. We study two methods for estimating this whitening filter for ground-based gravitational wave detectors with the goal of performing parameter estimation studies. The first method uses large amounts of data separated from the specific segment we wish to analyze and computes the power spectral density of the noise through the mean-median Welch method. The second method uses the same data segment as the parameter estimation analysis, which potentially includes a gravitational wave signal, and obtains the whitening filter through a fit of the power spectrum of the data in terms of a sum of splines and Lorentzians. We compare these two methods and argue that the latter is more reliable for gravitational wave parameter estimation.
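The robustness motivation behind median-based Welch averaging can be illustrated with `scipy.signal.welch`: a loud transient in a few segments badly biases a mean-averaged periodogram, while the median over segments is barely affected. The glitch model below is purely illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs = 1024
x = rng.normal(size=fs * 128)          # 128 s of unit-variance white noise

# Corrupt one 4 s stretch with a loud sine "glitch" at 100 Hz.
x[10 * fs:14 * fs] += 50.0 * np.sin(2 * np.pi * 100.0 * np.arange(4 * fs) / fs)

# Welch PSD over 4 s segments: mean averaging is dominated by the one loud
# stretch; median averaging (the robust variant) is not.
kw = dict(fs=fs, nperseg=4 * fs)
freqs, p_mean = signal.welch(x, average="mean", **kw)
freqs, p_med = signal.welch(x, average="median", **kw)

i = np.argmin(np.abs(freqs - 100.0))   # bin containing the glitch frequency
true = 2.0 / fs                        # one-sided PSD of unit-variance white noise
print(f"PSD at 100 Hz, in units of the true value: "
      f"mean {p_mean[i] / true:.0f}x, median {p_med[i] / true:.1f}x")
```

A PSD biased high at the glitch frequency would over-whiten the data there and distort the likelihood, which is why the choice of estimator matters for parameter estimation.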
The possible formation of stellar-mass binary black holes through dynamical interactions in dense stellar environments predicts the existence of binaries with non-negligible eccentricity in the frequency band of ground-based gravitational wave detectors; the detection of binary black hole mergers with measurable orbital eccentricity would validate the existence of this formation channel. Waveform templates currently used in the matched-filter gravitational-wave searches of LIGO-Virgo data neglect effects of eccentricity, which is expected to reduce their efficiency in detecting eccentric binary black holes. Meanwhile, the sensitivity of coherent unmodeled gravitational-wave searches (with minimal assumptions about the signal model) has been shown to be largely unaffected by the presence of even sizable orbital eccentricity. In this paper, we compare the performance of two state-of-the-art search algorithms recently used by LIGO and Virgo to search for binary black holes in the second Observing Run (O2), quantifying their search sensitivity by injecting numerical-relativity simulations of inspiral-merger-ringdown eccentric waveforms into O2 LIGO data. Our results show that the matched-filter search PyCBC performs better than the unmodeled search cWB for the high chirp mass ($>20\,M_{\odot}$) and low eccentricity ($e_{30\,\mathrm{Hz}} < 0.3$) region of parameter space. For moderate eccentricities and low chirp mass, on the other hand, the unmodeled search is more sensitive than the modeled search.
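The search sensitivity quantified in such injection campaigns is typically summarized as a sensitive volume or distance estimated from the found/missed record. A minimal Monte Carlo version, with a made-up detection model standing in for an actual search pipeline, might look like:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000
d_max = 2000.0                                   # Mpc; injection range (made up)
d = d_max * rng.uniform(size=N) ** (1 / 3)       # uniform in comoving-like volume

# Stand-in detection model: SNR falls off as 1/distance with log-normal
# scatter mimicking orientation and noise effects; "found" above threshold 8.
# A real campaign runs the actual search (PyCBC or cWB) on each injection.
snr = 8.0 * (400.0 / d) * rng.lognormal(0.0, 0.5, N)
found = snr > 8.0

# Sensitive volume and distance from the found/missed record.
V_sens = (4 / 3) * np.pi * d_max**3 * found.mean()
d_sens = (V_sens / ((4 / 3) * np.pi)) ** (1 / 3)
print(f"sensitive distance ~ {d_sens:.0f} Mpc from {N} injections")
```

Comparing two pipelines then amounts to repeating this per region of parameter space (chirp mass, eccentricity) and comparing the resulting sensitive volumes.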