Non-sky-averaged sensitivity curves for space-based gravitational-wave observatories


Abstract

(abridged) The signal-to-noise ratio (SNR) is used in gravitational-wave observations as the basic figure of merit for detection confidence and, together with the Fisher matrix, for the amount of physical information that can be extracted from a detected signal. SNRs are usually computed from a sensitivity curve, which describes the gravitational-wave amplitude needed by a monochromatic source of given frequency to achieve a threshold SNR. For interferometric space-based detectors similar to LISA, which are sensitive to long-lived signals and have constantly changing position and orientation, exact SNRs need to be computed on a source-by-source basis. For convenience, most authors prefer to work with sky-averaged sensitivities, accepting inaccurate SNRs for individual sources and giving up control over the statistical distribution of SNRs for source populations. In this paper, we describe a straightforward end-to-end recipe to compute the non-sky-averaged sensitivity of interferometric space-based detectors of any geometry: in effect, we derive error bars for the sky-averaged sensitivity curve, which provide a stringent statistical interpretation for previously unqualified statements about sky-averaged SNRs. As a worked-out example, we consider isotropic and Galactic-disk populations of monochromatic sources, as observed with the classic LISA configuration. We confirm that the (standard) inverse-rms average sensitivity for the isotropic population remains the same whether or not the LISA orbits are included in the computation. However, detector motion tightens the distribution of sensitivities, so for 50% of sources the sensitivity is within 30% of its average. For the Galactic-disk population, the average and the distribution of the sensitivity for a moving detector turn out to be similar to those of the isotropic case.
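To make the averaging procedure concrete, here is a minimal numerical sketch of the two quantities the abstract contrasts: the inverse-rms (sky-averaged) sensitivity and the distribution of individual-source sensitivities around it. It uses a static Michelson detector in the long-wavelength approximation rather than the paper's full time-dependent LISA response, and all function and variable names (e.g. `response_factor`, `h_rel`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def response_factor(theta, phi, psi):
    """Antenna-pattern power |F+|^2 + |Fx|^2 for a Michelson detector
    in the static, long-wavelength approximation (standard formulas)."""
    fplus = (0.5 * (1 + np.cos(theta)**2) * np.cos(2*phi) * np.cos(2*psi)
             - np.cos(theta) * np.sin(2*phi) * np.sin(2*psi))
    fcross = (0.5 * (1 + np.cos(theta)**2) * np.cos(2*phi) * np.sin(2*psi)
              + np.cos(theta) * np.sin(2*phi) * np.cos(2*psi))
    return fplus**2 + fcross**2

rng = np.random.default_rng(0)
n = 100_000
theta = np.arccos(rng.uniform(-1, 1, n))  # isotropic sky positions
phi = rng.uniform(0, 2*np.pi, n)
psi = rng.uniform(0, np.pi, n)            # random polarization angles

R = response_factor(theta, phi, psi)

# The amplitude a source needs to reach a fixed threshold SNR scales as
# 1/sqrt(response), so the per-source sensitivity (up to an overall
# noise-dependent normalization) is:
h_rel = 1.0 / np.sqrt(R)

# Inverse-rms average: the standard sky-averaged sensitivity,
# h_avg = 1 / sqrt(<1/h_i^2>).
h_avg = 1.0 / np.sqrt(np.mean(1.0 / h_rel**2))

# Distribution of individual sensitivities relative to the average:
# these quantiles play the role of "error bars" on the averaged curve.
q25, q50, q75 = np.percentile(h_rel / h_avg, [25, 50, 75])
print(f"median = {q50:.2f}, interquartile range = [{q25:.2f}, {q75:.2f}]")
```

For a moving detector such as LISA, each source's response would additionally be averaged over the orbit before forming `h_rel`, which (as the abstract notes) leaves the inverse-rms average unchanged for an isotropic population while narrowing the spread of the quantiles.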