Recently, exploratory studies were performed on the possibility of constraining the neutron star equation of state (EOS) using signals from coalescing binary neutron stars, or neutron star-black hole systems, as they will be seen in upcoming advanced gravitational wave detectors such as Advanced LIGO and Advanced Virgo. In particular, it was estimated to what extent the combined information from multiple detections would enable one to distinguish between different equations of state through hypothesis ranking or parameter estimation. Under the assumption of zero neutron star spins, both in signals and in template waveforms, and considering tidal effects to 1 post-Newtonian (1PN) order, it was found that O(20) sources would suffice to distinguish between a hard, moderate, and soft equation of state. Here we revisit these results, this time including neutron star tidal effects to the highest order currently known, termination of gravitational waveforms at the contact frequency, neutron star spins, and the resulting quadrupole-monopole interaction. We also take the masses of neutron stars in simulated sources to be distributed according to a relatively strongly peaked Gaussian, as hinted at by observations, but without assuming that the data analyst will necessarily have accurate knowledge of this distribution for use as a mass prior. We find that the effect of the latter in particular is dramatic, necessitating many more detections to distinguish between different EOSs and causing systematic biases in parameter estimation, on top of biases due to imperfect understanding of the signal model pointed out in earlier work. This would be mitigated if reliable prior information about the mass distribution could be folded into the analyses.
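For orientation, tidal effects enter the frequency-domain inspiral phase at leading (formally 5PN) order through a combined dimensionless tidal deformability. The sketch below implements only this leading-order term, in the convention commonly used in the literature (e.g. Favata 2014; Wade et al. 2014); the function name and numerical example are illustrative, and the analyses described above include higher-order tidal terms as well.

```python
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m/s]
MSUN = 1.989e30      # solar mass [kg]

def tidal_phase_leading(f, m1, m2, lam_tilde):
    """Leading-order (5PN) tidal term in the TaylorF2 phase, with
    lam_tilde the combined dimensionless tidal deformability
    (convention of e.g. Favata 2014; Wade et al. 2014)."""
    m_sec = (m1 + m2) * MSUN * G / C**3   # total mass in seconds
    eta = m1 * m2 / (m1 + m2)**2          # symmetric mass ratio
    v = (np.pi * m_sec * f)**(1.0 / 3.0)  # PN expansion parameter
    return -(3.0 / (128.0 * eta * v**5)) * (39.0 / 2.0) * lam_tilde * v**10

# e.g. phase shift at 400 Hz for a 1.35+1.35 Msun binary (illustrative):
print(tidal_phase_leading(np.array([400.0]), 1.35, 1.35, 400.0))
```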
The second generation of gravitational-wave detectors is scheduled to start operations in 2015. Gravitational-wave signatures of compact binary coalescences could be used to accurately test the strong-field dynamical predictions of general relativity. Computationally expensive data analysis pipelines, including TIGER, have been developed to carry out such tests. As a means to cheaply assess whether a particular deviation from general relativity can be detected, Cornish et al. and Vallisneri recently proposed an approximate scheme to compute the Bayes factor between a general-relativity gravitational-wave model and a model representing a class of alternative theories of gravity parametrised by one additional parameter. This approximate scheme is based on only two easy-to-compute quantities: the signal-to-noise ratio of the signal and the fitting factor between the signal and the manifold of possible waveforms within general relativity. In this work, we compare the prediction from the approximate formula against an exact numerical calculation of the Bayes factor using the lalinference library. We find that, using frequency-domain waveforms, the approximate scheme reproduces the exact results with good accuracy, providing the correct scaling with the signal-to-noise ratio at a fitting factor of 0.992 and the correct scaling with the fitting factor at a signal-to-noise ratio of 20, down to a fitting factor of $\sim 0.9$. We extend the framework for the approximate calculation of the Bayes factor, which significantly increases its range of validity, at least to fitting factors of $\sim 0.7$ or higher.
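Schematically, and with the function name here illustrative rather than taken from the paper: in the high-SNR, linear-signal regime the signal power orthogonal to the GR waveform manifold is ${\rm SNR}_\perp^2 = {\rm SNR}^2(1-{\rm FF}^2)$, and the log Bayes factor grows as half that quantity, up to Occam-factor corrections which the full scheme also tracks. A minimal sketch:

```python
def approx_log_bayes_factor(snr, ff):
    """Leading-order approximation to log B(modGR vs GR): half the
    squared residual SNR orthogonal to the GR waveform manifold,
    SNR_perp^2 = SNR^2 * (1 - FF^2). Occam-factor corrections,
    present in the full Cornish et al. / Vallisneri scheme, are
    omitted here."""
    return 0.5 * snr**2 * (1.0 - ff**2)

# The two regimes probed in the paper:
print(approx_log_bayes_factor(20.0, 0.992))  # vary SNR at FF = 0.992
print(approx_log_bayes_factor(20.0, 0.9))    # vary FF at SNR = 20
```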
The direct detection of gravitational waves with upcoming second-generation gravitational wave detectors such as Advanced LIGO and Virgo will allow us to probe the genuinely strong-field dynamics of general relativity (GR) for the first time. We present a data analysis pipeline called TIGER (Test Infrastructure for GEneral Relativity), which is designed to utilize detections of compact binary coalescences to test GR in this regime. TIGER is a model-independent test of GR itself, in that it is not necessary to compare with any specific alternative theory. It performs Bayesian inference on two hypotheses: the GR hypothesis $\mathcal{H}_{\rm GR}$, and $\mathcal{H}_{\rm modGR}$, which states that one or more of the post-Newtonian coefficients in the waveform are not as predicted by GR. By the use of multiple sub-hypotheses of $\mathcal{H}_{\rm modGR}$, in each of which a different number of parameterized deformations of the GR phase is allowed, an arbitrarily large number of testing parameters can be used without having to worry about a model being insufficiently parsimonious if the true number of extra parameters is in fact small. TIGER is well suited to the regime where most sources have low signal-to-noise ratios, again through the use of these sub-hypotheses. Information from multiple sources can trivially be combined, leading to a stronger test. We focus on binary neutron star coalescences, for which sufficiently accurate waveform models are available that can be generated fast enough to be suitable for use in Bayesian inference. We show that the pipeline is robust against a number of fundamental, astrophysical, and instrumental effects, such as differences between waveform approximants, a limited number of post-Newtonian phase contributions being known, the effects of neutron star spins and tidal deformability on the orbital motion, and instrumental calibration errors.
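As a rough illustration of how the sub-hypotheses and sources are combined (the code below is a sketch, not the TIGER implementation, and the sub-hypothesis labels are hypothetical): for each sub-hypothesis, log Bayes factors from independent sources add, and the resulting catalogue values are then averaged with equal prior odds over the sub-hypotheses.

```python
import numpy as np

def combined_log_odds(log_bayes_per_source):
    """Sketch of a TIGER-style combination. Input: one dict per
    detected source, mapping each sub-hypothesis label to its log
    Bayes factor against GR.

    Per sub-hypothesis, evidence from independent sources
    multiplies (log Bayes factors add); the catalogue values are
    then averaged with equal prior odds over sub-hypotheses."""
    subhyps = log_bayes_per_source[0].keys()
    catalogue = [sum(src[h] for src in log_bayes_per_source) for h in subhyps]
    logs = np.asarray(catalogue)
    # log of the equal-weight average, computed stably in log space
    return np.logaddexp.reduce(logs) - np.log(len(logs))

# Two sources, three sub-hypotheses deforming PN phase coefficients
# (labels and values purely illustrative):
sources = [{"dchi1": 1.2, "dchi2": 0.8, "dchi1,dchi2": 1.5},
           {"dchi1": 0.9, "dchi2": 1.1, "dchi1,dchi2": 1.3}]
print(combined_log_odds(sources))
```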
Fisher matrix and related studies have suggested that with second-generation gravitational wave detectors, it may be possible to infer the equation of state of neutron stars using tidal effects in binary inspiral. Here we present the first fully Bayesian investigation of this problem. We simulate a realistic data analysis setting by performing a series of numerical experiments on binary neutron star signals hidden in detector noise, assuming the projected final design sensitivity of the Advanced LIGO-Virgo network. With an astrophysical distribution of events (in particular, uniform in comoving volume), we find that only a few tens of detections will be required to arrive at strong constraints, even for some of the softest equations of state in the literature. Thus, direct gravitational wave detection will provide a unique probe of neutron star structure.
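For context on the simulated population: at the modest redshifts accessible to the advanced detectors, a distribution uniform in comoving volume is well approximated by one uniform in Euclidean volume, $p(d) \propto d^2$, which can be sampled by inverse transform. A minimal sketch, with the cutoff distance illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_distances(n, d_max):
    """Draw luminosity distances uniform in (Euclidean) volume:
    p(d) ~ d^2 on (0, d_max], so inverse-CDF sampling gives
    d = d_max * u**(1/3) with u uniform on (0, 1]."""
    return d_max * rng.random(n)**(1.0 / 3.0)

distances = draw_distances(10_000, 400.0)  # Mpc; d_max illustrative
```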
Einstein Telescope (ET) is conceived to be a third-generation gravitational-wave observatory. Its amplitude sensitivity would be a factor of ten better than that of Advanced LIGO and Virgo, and it could also extend the low-frequency sensitivity down to 1--3 Hz, compared to the 10--20 Hz of advanced detectors. Such an observatory will have the potential to observe a variety of different GW sources, including compact binary systems at cosmological distances. ET's expected reach for binary neutron star (BNS) coalescences is out to redshift $z \simeq 2$, and the rate of detectable BNS coalescences could be as high as one every few tens or hundreds of seconds, each lasting up to several days in the sensitive frequency band of ET. With such a signal-rich environment, a key question in data analysis is whether overlapping signals can be discriminated. In this paper we simulate the GW signals from a cosmological population of BNS and ask the following questions: Does this population create a confusion background that limits ET's ability to detect foreground sources? How efficient are current algorithms in discriminating overlapping BNS signals? Is it possible to discern the presence of a population of signals in the data by cross-correlating data from different detectors in the ET observatory? We find that algorithms currently used to analyze LIGO and Virgo data are already powerful enough to detect the sources expected in ET, but new algorithms are required to fully exploit ET data.
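The scale of the overlap problem follows directly from the numbers quoted above: if coalescences arrive once every few tens of seconds and each signal stays in band for up to several days, the expected number of signals present simultaneously is simply the rate times the in-band duration. A back-of-the-envelope estimate, with values picked illustratively from the ranges in the abstract:

```python
rate = 1.0 / 30.0          # BNS coalescences per second (one every ~30 s)
duration = 2.0 * 86400.0   # seconds a signal spends in band (~2 days)

# Mean number of signals overlapping at any instant (Little's law):
expected_overlap = rate * duration
print(f"~{expected_overlap:.0f} BNS signals in band simultaneously")
```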