We present the probability distribution of the systematic errors in the most accurate, high-latency version of the reconstructed dimensionless strain $h$ at the Hanford and Livingston LIGO detectors, used for gravitational-wave astrophysical analysis, including parameter estimation, in the last five months of the third observing run (O3B). This work extends the results presented in Sun et al. (2020) [1] for the first six months of the third observing run (O3A). The complex-valued, frequency-dependent, and slowly time-varying systematic error (excursion from unity magnitude and zero phase) in O3B generally remains at a level consistent with that in O3A, yet changes to the detector configurations in O3B have introduced a non-negligible change in the frequency dependence of the error, leading to larger excursions from unity at some frequencies and/or during some observational periods; in other periods the excursions are smaller than in O3A. For O3B, the upper limit on the systematic error and associated uncertainty is 11.29% in magnitude and 9.18 deg in phase (68% confidence interval) in the most sensitive frequency band, 20-2000 Hz. The systematic error alone is estimated at levels of $< 2\%$ in magnitude and $\lesssim 4$ deg in phase. These errors and uncertainties are dominated by imperfect modeling of the frequency dependence of the detector response functions rather than by uncertainty in the absolute reference, the photon calibrators.
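The systematic error described above is a complex-valued, frequency-dependent ratio between the true and modeled detector response, quoted as a magnitude excursion from unity (in percent) and a phase excursion from zero (in degrees). A minimal sketch of that bookkeeping, with a purely hypothetical sample value for the response ratio:

```python
import cmath
import math

def excursion(eta: complex) -> tuple[float, float]:
    """Magnitude excursion (%) from unity and phase excursion (deg) from zero
    for one complex sample eta of the response ratio R_true / R_model."""
    mag_pct = (abs(eta) - 1.0) * 100.0
    phase_deg = math.degrees(cmath.phase(eta))
    return mag_pct, phase_deg

# Hypothetical response-ratio sample at a single frequency:
eta = 1.02 * cmath.exp(1j * math.radians(3.0))
mag, ph = excursion(eta)
print(f"{mag:.2f}% in magnitude, {ph:.2f} deg in phase")  # → 2.00% in magnitude, 3.00 deg in phase
```

In the actual analysis, such samples are drawn per frequency bin from the estimated probability distribution of the response, and the quoted limits are the 68% confidence bounds over the band.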
The raw outputs of the detectors within the Advanced Laser Interferometer Gravitational-Wave Observatory must be calibrated in order to produce the estimate of the dimensionless strain used for astrophysical analyses. The two detectors have been upgraded since the second observing run and have completed the year-long third observing run. Understanding, accounting for, and/or compensating for the complex-valued response of each part of the upgraded detectors improves the overall accuracy of the estimated detector response to gravitational waves. We describe the improved understanding and methods used to quantify the response of each detector, with a dedicated effort to define all places where systematic error plays a role. We use the detectors as they stood in the first half (six months) of the third observing run to demonstrate how each identified systematic error impacts the estimated strain and to constrain the statistical uncertainty therein. For this time period, we estimate the upper limit on systematic error and associated uncertainty to be $< 7\%$ in magnitude and $< 4$ deg in phase ($68\%$ confidence interval) in the most sensitive frequency band, 20-2000 Hz. The systematic error alone is estimated at levels of $< 2\%$ in magnitude and $< 2$ deg in phase.
The characterization of the Advanced LIGO detectors in the second and third observing runs has increased the sensitivity of the instruments, allowing for a higher number of detectable gravitational-wave signals, and has provided confirmation of all observed gravitational-wave events. In this work, we present the methods used to characterize the LIGO detectors and curate the publicly available datasets, including the LIGO strain data and data quality products. We describe the essential role of these datasets in LIGO-Virgo Collaboration analyses of gravitational waves from both transient and persistent sources, and include details on the provenance of these datasets in order to support analyses of LIGO data by the broader community. Finally, we explain anticipated changes in the role of detector characterization and current efforts to prepare for the high rate of gravitational-wave alerts and events in future observing runs.
Advanced LIGO's raw detector output must be calibrated to compute the dimensionless strain h(t). Calibrated strain data are produced in the time domain using both a low-latency, online procedure and a high-latency, offline procedure. The low-latency h(t) data stream is produced in two stages, the first of which is performed on the same computers that operate the detector's feedback control system. This stage, referred to as the front-end calibration, uses infinite impulse response (IIR) filtering and performs all operations at a 16384 Hz digital sampling rate. Due to several limitations, this procedure currently introduces certain systematic errors in the calibrated strain data, motivating the second stage of the low-latency procedure, known as the low-latency gstlal calibration pipeline. The gstlal calibration pipeline uses finite impulse response (FIR) filtering to apply corrections to the output of the front-end calibration. It applies time-dependent correction factors to the sensing and actuation components of the calibrated strain to reduce systematic errors. The gstlal calibration pipeline is also used in high latency to recalibrate the data, which is necessary mainly due to online dropouts in the calibrated data and to identified improvements to the calibration models or filters.
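The correction stage described above can be illustrated with a toy sketch: scale each front-end strain sample by a slowly varying, time-dependent correction factor, then apply a direct-form FIR filter. This is an assumption-laden simplification (the function names, the single correction path, and the toy values are hypothetical; the real pipeline applies separate corrections to the sensing and actuation paths at 16384 Hz):

```python
def fir_filter(signal, taps):
    """Apply an FIR filter (direct-form convolution) to a sample stream."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

def apply_correction(front_end_h, taps, kappa):
    """Sketch only: scale sample n of the front-end strain by its
    time-dependent correction factor kappa[n], then run the FIR
    correction filter over the scaled stream."""
    scaled = [k * x for k, x in zip(kappa, front_end_h)]
    return fir_filter(scaled, taps)

# Toy data: an identity (single-tap) filter leaves the scaled stream unchanged.
h = [1.0, 2.0, 3.0, 4.0]
kappa = [1.0, 1.0, 1.5, 1.5]   # hypothetical time-dependent correction factor
print(apply_correction(h, [1.0], kappa))
```

In the real pipeline, the FIR taps encode the inverse of the modeled sensing function and the actuation transfer functions, and the kappa factors are tracked continuously from calibration lines.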
This paper presents an adaptable, parallelizable method for subtracting linearly coupled noise from Advanced LIGO data. We explain the features developed to ensure that the process is robust enough to handle the variability present in Advanced LIGO data. In this work, we target subtraction of noise due to beam jitter, detector calibration lines, and mains power lines. We demonstrate noise subtraction over the entirety of the second observing run, resulting in increases in sensitivity comparable to those reported in previous targeted efforts. Over the course of the second observing run, we see a 30% increase in Advanced LIGO sensitivity to gravitational waves from a broad range of compact binary systems. We expect the use of this method to result in a higher rate of detected gravitational-wave signals in Advanced LIGO data.
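The core of linearly coupled noise subtraction is estimating a coupling coefficient between the strain channel and an auxiliary witness channel (e.g. a jitter or mains-voltage monitor), then regressing the witness out. A minimal single-witness, time-domain sketch by least squares (the function and variable names are illustrative; the paper's method works on many witnesses with frequency-dependent transfer functions):

```python
def subtract_linear_coupling(data, witness):
    """Estimate the scalar coupling c that minimizes sum((data - c*witness)**2),
    then subtract c * witness from data. One-witness, frequency-independent sketch."""
    num = sum(d * w for d, w in zip(data, witness))
    den = sum(w * w for w in witness)
    c = num / den if den else 0.0
    cleaned = [d - c * w for d, w in zip(data, witness)]
    return cleaned, c

# Toy data: the recorded stream is a quiet signal plus 0.3 * witness.
witness = [1.0, -2.0, 0.5, 3.0, -1.0]
signal  = [0.1,  0.0, -0.2, 0.0,  0.0]   # chosen uncorrelated with the witness
data = [s + 0.3 * w for s, w in zip(signal, witness)]
cleaned, c = subtract_linear_coupling(data, witness)
```

Because the witness records the noise but (ideally) no gravitational-wave signal, subtracting the fitted projection removes only the coupled noise, which is what makes the sensitivity gains reported above possible.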
Hardware injections are simulated gravitational-wave signals added to the data of the Laser Interferometer Gravitational-wave Observatory (LIGO). The detectors' test masses are physically displaced by an actuator in order to simulate the effects of a gravitational wave. The simulated signal initiates a control-system response which mimics that of a true gravitational wave. This provides an end-to-end test of LIGO's ability to observe gravitational waves. The gravitational-wave analyses used to detect and characterize signals are exercised with hardware injections. By looking for discrepancies between the injected and recovered signals, we are able to characterize the performance of analyses and the coupling of instrumental subsystems to the detectors' output channels. This paper describes the hardware injection system and the recovery of injected signals representing binary black hole mergers, a stochastic gravitational-wave background, spinning neutron stars, and sine-Gaussians.
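The inject-then-recover logic can be sketched in software: add a known waveform into a data stream, then locate it with a sliding cross-correlation against the template and compare the recovered placement to the injected one. This toy (noise-free, pure-Python, with made-up names and a simple sine-Gaussian) stands in for the actual actuator displacement and full search pipelines:

```python
import math

def inject(data, waveform, start):
    """Add a simulated signal into a data stream at index `start`
    (the software analogue of displacing a test mass via the actuator)."""
    out = list(data)
    for i, s in enumerate(waveform):
        out[start + i] += s
    return out

def recover_offset(data, template):
    """Locate the injection by sliding cross-correlation with the template."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(len(data) - len(template) + 1):
        corr = sum(t * data[lag + i] for i, t in enumerate(template))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Toy sine-Gaussian injected into a quiet stream, then recovered.
template = [math.exp(-((i - 8) / 4.0) ** 2) * math.sin(i) for i in range(16)]
stream = [0.0] * 64
injected = inject(stream, template, 20)
print(recover_offset(injected, template))  # → 20
```

A discrepancy between the injected and recovered parameters in the real system points either to a shortcoming of the analysis or to an imperfectly modeled instrumental coupling, which is exactly what the hardware injection campaign is designed to expose.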