We discuss the detection of gravitational-wave backgrounds in the context of Bayesian inference and suggest a practical definition of what it means for a signal to be considered stochastic---namely, that the Bayesian evidence favors a stochastic signal model over a deterministic signal model. A signal can further be classified as Gaussian-stochastic if a Gaussian signal model is favored. In our analysis we use Bayesian model selection to choose between several signal and noise models for simulated data consisting of uncorrelated Gaussian detector noise plus a superposition of sinusoidal signals from an astrophysical population of gravitational-wave sources. For simplicity, we consider co-located and co-aligned detectors with white detector noise, but the method can be extended to more realistic detector configurations and power spectra. The general trend we observe is that a deterministic model is favored for small source numbers, a non-Gaussian stochastic model is preferred for intermediate source numbers, and a Gaussian stochastic model is preferred for large source numbers. However, there is very large variation among individual signal realizations, leading to fuzzy boundaries between the three regimes. We find that a hybrid, trans-dimensional model composed of a deterministic signal model for individual bright sources and a Gaussian-stochastic signal model for the remaining confusion background outperforms all other models in most instances.
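The trend described in this abstract (deterministic for few sources, Gaussian-stochastic for many) reflects the central limit theorem at work on a superposition of sinusoids. The toy sketch below illustrates it by measuring the excess kurtosis of a simulated confusion signal; it is not the paper's analysis, and the amplitude and frequency ranges are purely illustrative:

```python
import numpy as np
from scipy.stats import kurtosis

def confusion_signal(n_sources, n_samples=100_000, rng=None):
    """Superpose n_sources sinusoids with random amplitudes, frequencies,
    and phases; normalize so the variance is independent of n_sources."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_samples)
    amp = rng.uniform(0.5, 1.5, n_sources)
    freq = rng.uniform(0.01, 0.4, n_sources)   # cycles per sample
    phase = rng.uniform(0.0, 2.0 * np.pi, n_sources)
    s = np.zeros(n_samples)
    for a, f, p in zip(amp, freq, phase):
        s += a * np.cos(2.0 * np.pi * f * t + p)
    return s / np.sqrt(n_sources)

rng = np.random.default_rng(42)
k_few = kurtosis(confusion_signal(2, rng=rng))     # strongly non-Gaussian
k_many = kurtosis(confusion_signal(500, rng=rng))  # excess kurtosis near zero
```

A single sinusoid has an arcsine amplitude distribution (excess kurtosis -1.5), while the 500-source sum is nearly Gaussian; the fuzzy boundary between regimes shows up as realization-to-realization scatter in these statistics.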
The detection of a stochastic gravitational-wave signal from the superposition of many inspiraling supermassive black holes with pulsar timing arrays (PTAs) is likely to occur within the next decade. With this detection will come the opportunity to learn about the processes that drive black-hole-binary systems toward merger through their effects on the gravitational-wave spectrum. We use Bayesian methods to investigate the extent to which effects other than gravitational-wave emission can be distinguished using PTA observations. We show that, even in the absence of a detection, it is possible to place interesting constraints on these dynamical effects for conservative predictions of the population of tightly bound supermassive black-hole binaries. For instance, if we assume a relatively weak signal consistent with a low number of bound binaries and a low black-hole-mass to galaxy-mass correlation, we still find that a non-detection by a simulated array, with a sensitivity that should be reached in practice within a few years, disfavors gravitational-wave-dominated evolution with an odds ratio of $\sim$30:1. Such a finding would suggest either that all existing astrophysical models for the population of tightly bound binaries are overly optimistic, or else that some dynamical effect other than gravitational-wave emission is actually dominating binary evolution even at the relatively high frequencies/small orbital separations probed by PTAs.
A central challenge in gravitational wave astronomy is identifying weak signals in the presence of non-stationary and non-Gaussian noise. The separation of gravitational wave signals from noise requires good models for both. When accurate signal models are available, such as for binary neutron-star systems, it is possible to make robust detection statements even when the noise is poorly understood. In contrast, searches for unmodeled transient signals are strongly impacted by the methods used to characterize the noise. Here we take a Bayesian approach and introduce a multi-component, variable dimension, parameterized noise model that explicitly accounts for non-stationarity and non-Gaussianity in data from interferometric gravitational wave detectors. Instrumental transients (glitches) and burst sources of gravitational waves are modeled using a Morlet-Gabor continuous wavelet frame. The number and placement of the wavelets is determined by a trans-dimensional Reversible Jump Markov Chain Monte Carlo algorithm. The Gaussian component of the noise and sharp line features in the noise spectrum are modeled using the BayesLine algorithm, which operates in concert with the wavelet model.
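Each element of a Morlet-Gabor frame is a Gaussian-windowed sinusoid described by a handful of parameters: central time, central frequency, quality factor, amplitude, and phase. The sketch below generates one such wavelet under the common convention that the envelope width is $\tau = Q/(2\pi f_0)$; the function name and parameter values are illustrative, not the paper's implementation:

```python
import numpy as np

def morlet_gabor(t, t0, f0, q, amp, phi0):
    """Gaussian-windowed sinusoid: the envelope width is tau = q / (2*pi*f0),
    so q roughly counts the number of cycles inside the envelope."""
    tau = q / (2.0 * np.pi * f0)
    envelope = np.exp(-(((t - t0) / tau) ** 2))
    return amp * envelope * np.cos(2.0 * np.pi * f0 * (t - t0) + phi0)

# One wavelet on a short time grid; a trans-dimensional sampler would
# propose adding, removing, or moving many such wavelets.
t = np.linspace(0.0, 2.0, 1001)
w = morlet_gabor(t, t0=1.0, f0=8.0, q=6.0, amp=1.5, phi0=0.0)
```

Because each wavelet is compact in both time and frequency, a trans-dimensional sampler can add or remove wavelets one at a time to match glitches or bursts of varying complexity.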
We study generic tests of strong-field General Relativity using gravitational waves emitted during the inspiral of compact binaries. Previous studies have considered simple extensions to the standard post-Newtonian waveforms that differ by a single term in the phase. Here we improve on these studies by (i) increasing the realism of injections and (ii) determining the optimal waveform families for detecting and characterizing such signals. We construct waveforms that deviate from those in General Relativity through a series of post-Newtonian terms, and find that these higher-order terms can affect our ability to test General Relativity, in some cases by making it easier to detect a deviation, and in some cases by making it more difficult. We find that simple single-phase post-Einsteinian waveforms are sufficient for detecting deviations from General Relativity, and there is little to be gained from using more complicated models with multiple phase terms. The results found here will help guide future attempts to test General Relativity with advanced ground-based detectors.
Gravitational wave astronomy has tremendous potential for studying extreme astrophysical phenomena and exploring fundamental physics. The waves produced by binary black hole mergers will provide a pristine environment in which to study strong-field, dynamical gravity. Extracting detailed information about these systems requires accurate theoretical models of the gravitational wave signals. If gravity is not described by General Relativity, analyses that are based on waveforms derived from Einstein's field equations could result in parameter biases and a loss of detection efficiency. A new class of parameterized post-Einsteinian (ppE) waveforms has been proposed to cover this eventuality. Here we apply the ppE approach to simulated data from a network of advanced ground-based interferometers (aLIGO/aVirgo) and from a future space-based interferometer (LISA). Bayesian inference and model selection are used to investigate parameter biases, and to determine the level at which departures from General Relativity can be detected. We find that in some cases the parameter biases from assuming the wrong theory can be severe. We also find that gravitational wave observations will beat the existing bounds on deviations from General Relativity derived from the orbital decay of binary pulsars by a large margin across a wide swath of parameter space.
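A common single-term form of the ppE family modifies a frequency-domain GR waveform as $\tilde h(f) = \tilde h_{\rm GR}(f)\,(1 + \alpha u^a)\,e^{i\beta u^b}$ with $u = (\pi M f)^{1/3}$, recovering GR when $\alpha = \beta = 0$. The sketch below applies such a correction to a placeholder waveform; the function name, mass, and parameter values are illustrative assumptions, not the papers' code:

```python
import numpy as np

def ppe_modify(f, h_gr, m_total_s, alpha, a, beta, b):
    """Apply a single-term ppE correction to a frequency-domain GR waveform:
    h(f) = h_GR(f) * (1 + alpha * u**a) * exp(1j * beta * u**b),
    where u = (pi * M * f)**(1/3) and M is the total mass in seconds."""
    u = (np.pi * m_total_s * f) ** (1.0 / 3.0)
    return h_gr * (1.0 + alpha * u ** a) * np.exp(1j * beta * u ** b)

f = np.linspace(20.0, 300.0, 8)          # Hz, illustrative band
h_gr = np.exp(-2j * np.pi * f * 0.01)    # placeholder GR waveform
h_same = ppe_modify(f, h_gr, 3e-4, 0.0, 1.0, 0.0, 1.0)   # GR limit
h_dev = ppe_modify(f, h_gr, 3e-4, 0.0, 1.0, 0.5, -1.0)   # phase deviation only
```

A pure phase deviation ($\alpha = 0$, $\beta \neq 0$) leaves the waveform amplitude untouched, which is why phase terms dominate both the detectability of deviations and the parameter biases discussed above.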
The analysis of gravitational wave data involves many model selection problems. The most important example is the detection problem of selecting between the data being consistent with instrument noise alone, or instrument noise and a gravitational wave signal. The analysis of data from ground based gravitational wave detectors is mostly conducted using classical statistics, and methods such as the Neyman-Pearson criterion are used for model selection. Future space based detectors, such as the \emph{Laser Interferometer Space Antenna} (LISA), are expected to produce rich data streams containing the signals from many millions of sources. Determining the number of sources that are resolvable, and the most appropriate description of each source poses a challenging model selection problem that may best be addressed in a Bayesian framework. An important class of LISA sources are the millions of low-mass binary systems within our own galaxy, tens of thousands of which will be detectable. Not only is the number of sources unknown, but so is the number of parameters required to model the waveforms. For example, a significant subset of the resolvable galactic binaries will exhibit orbital frequency evolution, while a smaller number will have measurable eccentricity. In the Bayesian approach to model selection one needs to compute the Bayes factor between competing models. Here we explore various methods for computing Bayes factors in the context of determining which galactic binaries have measurable frequency evolution. The methods explored include a Reversible Jump Markov Chain Monte Carlo (RJMCMC) algorithm, Savage-Dickey density ratios, the Schwarz-Bayes Information Criterion (BIC), and the Laplace approximation to the model evidence. We find good agreement between all of the approaches.
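Of the approximations listed, the BIC is the simplest to illustrate. The toy sketch below, which is not the paper's pipeline, injects a sinusoid with a small frequency drift into white noise and compares models with and without frequency evolution: amplitude and phase are maximized analytically by linear least squares, the frequency parameters by a grid search, and $\Delta\mathrm{BIC}$ then approximates twice the log Bayes factor. All signal parameters and grid ranges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0.0, 100.0, n)
sigma = 0.5
# Injected signal: sinusoid at f = 0.1 with a small frequency drift fdot = 1e-4
y = np.cos(2.0 * np.pi * (0.1 * t + 0.5 * 1e-4 * t**2)) + sigma * rng.standard_normal(n)

def best_chi2(fdot_grid):
    """Maximize the likelihood over frequency parameters by grid search;
    amplitude and phase are maximized analytically via linear least squares."""
    chi2 = np.inf
    for f in np.linspace(0.095, 0.115, 81):
        for fdot in fdot_grid:
            phase = 2.0 * np.pi * (f * t + 0.5 * fdot * t**2)
            basis = np.column_stack([np.cos(phase), np.sin(phase)])
            coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
            resid = y - basis @ coef
            chi2 = min(chi2, np.sum(resid**2) / sigma**2)
    return chi2

chi2_static = best_chi2([0.0])                        # 3 free parameters
chi2_evolve = best_chi2(np.linspace(0.0, 2e-4, 41))   # 4 free parameters
# BIC = k*ln(n) - 2*ln(Lmax); constants common to both models are dropped
bic_static = 3 * np.log(n) + chi2_static
bic_evolve = 4 * np.log(n) + chi2_evolve
delta_bic = bic_static - bic_evolve   # ~ 2*ln(Bayes factor) favoring evolution
```

The $k\ln n$ term penalizes the extra frequency-derivative parameter, so the evolving model is only preferred when the drift produces a likelihood gain larger than that Occam penalty, which is the same trade-off the RJMCMC, Savage-Dickey, and Laplace estimates quantify more accurately.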