We describe a Bayesian formalism for analyzing individual gravitational-wave events in light of the rest of an observed population. This analysis reveals how the idea of a ``population-informed prior'' arises naturally from a suitable marginalization of an underlying hierarchical Bayesian model which consistently accounts for selection effects. Our formalism naturally leads to the presence of ``leave-one-out'' distributions which include subsets of events. This differs from other approximations, also known as empirical Bayes methods, which effectively double count one or more events. We design a double-reweighting post-processing strategy that uses only existing data products to reconstruct the resulting population-informed posterior distributions. Although the correction we highlight is an important conceptual point, we find it has a limited impact on the current catalog of gravitational-wave events. Our approach further allows us to study, for the first time in the gravitational-wave literature, correlations between the parameters of individual events and those of the population.
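The leave-one-out reweighting idea can be sketched numerically. The snippet below is a minimal toy illustration, not the paper's actual pipeline: the Gaussian population model, the flat per-event prior, and all numbers are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 20 events whose true parameters are drawn from a Gaussian
# "population"; each event's posterior (under a flat prior) is sampled.
n_events, n_samples = 20, 5000
true_params = rng.normal(loc=30.0, scale=5.0, size=n_events)
posteriors = [rng.normal(t, 3.0, size=n_samples) for t in true_params]

def population_informed_weights(i, posteriors):
    """Reweight event i's flat-prior samples by a population model fit
    to the *other* events (leave-one-out), so event i is not counted
    twice when informing its own prior."""
    others = [np.mean(p) for j, p in enumerate(posteriors) if j != i]
    mu, sigma = np.mean(others), np.std(others)
    # Weight = (population density) / (original flat prior density).
    w = np.exp(-0.5 * ((posteriors[i] - mu) / sigma) ** 2) / sigma
    return w / w.sum()

w = population_informed_weights(0, posteriors)
shrunk_mean = np.sum(w * posteriors[0])
print(shrunk_mean)  # pulled from the raw sample mean toward the population
```

An empirical Bayes analysis would instead fit the population to all events, including event 0, before reweighting; the leave-one-out fit above is what avoids the double counting the abstract refers to.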
Gravitational-wave observations of binary black holes allow new tests of general relativity to be performed on strong, dynamical gravitational fields. These tests require accurate waveform models of the gravitational-wave signal, otherwise waveform errors can erroneously suggest evidence for new physics. Existing waveforms are generally thought to be accurate enough for current observations, and each of the events observed to date appears to be individually consistent with general relativity. In the near future, with larger gravitational-wave catalogs, it will be possible to perform more stringent tests of gravity by analyzing large numbers of events together. However, there is a danger that waveform errors can accumulate among events: even if the waveform model is accurate enough for each individual event, it can still yield erroneous evidence for new physics when applied to a large catalog. This paper presents a simple linearised analysis, in the style of a Fisher matrix calculation, that reveals the conditions under which the apparent evidence for new physics due to waveform errors grows as the catalog size increases. We estimate that, in the worst-case scenario, evidence for a deviation from general relativity might appear in some tests using a catalog containing as few as 10-30 events above a signal-to-noise ratio of 20. This is close to the size of current catalogs and highlights the need for caution when performing these sorts of experiments.
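The accumulation mechanism can be illustrated with a one-line toy calculation. The numbers below are illustrative assumptions, not the paper's estimates: each event measures a deviation parameter that vanishes in general relativity, with a common waveform-induced bias `delta` and per-event statistical error `sigma`.

```python
import numpy as np

# Worst case: waveform biases are fully aligned across all events.
delta, sigma = 0.6, 1.0   # a 0.6-sigma bias per event (toy numbers)

def combined_significance(n_events):
    # Inverse-variance combination of n identical measurements: the bias
    # stays at delta while the statistical error shrinks as sigma/sqrt(n),
    # so the apparent deviation grows like sqrt(n) * delta / sigma.
    return delta * np.sqrt(n_events) / sigma

for n in (1, 10, 30):
    print(n, round(combined_significance(n), 2))
```

A single event is comfortably consistent with general relativity here, yet by 30 events the combined catalog shows an apparent deviation above the 3-sigma level, purely from waveform error.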
White dwarf stars are a well-established tool for studying Galactic stellar populations. Two white dwarfs in a tight binary system offer us an additional messenger - gravitational waves - for exploring the Milky Way and its immediate surroundings. Gravitational waves produced by double white dwarf (DWD) binaries can be detected by the future Laser Interferometer Space Antenna (LISA). Numerous and widespread DWDs have the potential to probe shapes, masses and formation histories of the stellar populations in the Galactic neighbourhood. In this work we outline a method for estimating the total stellar mass of Milky Way satellite galaxies based on the number of DWDs detected by LISA. To constrain the mass we perform a Bayesian inference using binary population synthesis models and considering the number of detected DWDs associated with the satellite and the measured distance to the satellite as the only inputs. Using a fiducial binary population synthesis model we find that for large satellites the stellar masses can be recovered to within 1) a factor two if the star formation history is known and 2) an order of magnitude when marginalising over different star formation history models. For smaller satellites we can place upper limits on their stellar mass. Gravitational wave observations can provide mass measurements for large satellites that are comparable to, and in some cases more precise than, standard electromagnetic observations.
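The core of such an inference is a counting experiment. As a minimal sketch (the linear rate-mass scaling is a stand-in for the binary population synthesis prediction, and both numbers are assumptions):

```python
import numpy as np

# Toy stand-in for the population synthesis model: the expected number
# of LISA-detectable DWDs scales linearly with the satellite's stellar
# mass. Both numbers below are illustrative assumptions.
rate_per_msun = 2e-5    # detectable DWDs per solar mass of stars
n_observed = 8          # DWDs associated with the satellite

log_masses = np.linspace(4.0, 7.0, 601)   # flat prior on log10(M*/Msun)
expected = rate_per_msun * 10**log_masses
# Poisson log-likelihood of the detected count (constant terms dropped):
loglike = n_observed * np.log(expected) - expected
posterior = np.exp(loglike - loglike.max())
posterior /= posterior.sum()

map_logmass = log_masses[np.argmax(posterior)]
print(map_logmass)   # peaks near log10(n_observed / rate_per_msun)
```

Marginalising over star formation history models would amount to repeating this with a different `rate_per_msun` per model and averaging the posteriors, which is what broadens the mass constraint to an order of magnitude.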
We compute families of spherically symmetric neutron-star models in two-derivative scalar-tensor theories of gravity with a massive scalar field. The numerical approach we present allows us to compute the resulting spacetimes out to infinite radius using a relaxation algorithm on a compactified grid. We discuss the structure of the weakly and strongly scalarized branches of neutron-star models thus obtained and their dependence on the linear and quadratic coupling parameters $\alpha_0$, $\beta_0$ between the scalar and tensor sectors of the theory, as well as the scalar mass $\mu$. For highly negative values of $\beta_0$, we encounter configurations resembling a gravitational atom, consisting of a highly compact baryon star surrounded by a scalar cloud. A stability analysis based on binding-energy calculations suggests that these configurations are unstable and we expect them to migrate to models with radially decreasing baryon density {\it and} scalar field strength.
Gravitational waves (GWs) are subject to gravitational lensing in the same way as electromagnetic radiation. However, to date, no unequivocal observation of a lensed GW transient has been reported. Independently, GW observatories continue to search for the stochastic GW signal which is produced by many transient events at high redshift. We exploit a surprising connection between the lensing of individual transients and limits to the background radiation produced by the unresolved population of binary black hole mergers: we show that it constrains the fraction of individually resolvable lensed binary black holes to less than $\sim 4\times 10^{-5}$ at present sensitivity. We clarify the interpretation of existing, low redshift GW observations (obtained assuming no lensing) in terms of their apparent lensed redshifts and masses and explore constraints from GW observatories at future sensitivity. Based on our results, recent claims of observations of lensed events are statistically disfavoured.
This paper provides an extended exploration of the inverse-chirp gravitational-wave signals from stellar collapse in massive scalar-tensor gravity reported in [Phys. Rev. Lett. {\bf 119}, 201103]. We systematically explore the parameter space that characterizes the progenitor stars, the equation of state and the scalar-tensor theory of the core collapse events. We identify a remarkably simple and straightforward classification scheme of the resulting collapse events. For any given set of parameters, the collapse leads to one of three end states, a weakly scalarized neutron star, a strongly scalarized neutron star or a black hole, possibly formed in multiple stages. The latter two end states can lead to strong gravitational-wave signals that may be detectable in present continuous-wave searches with ground-based detectors. We identify a very sharp boundary in the parameter space that separates events with strong gravitational-wave emission from those with negligible radiation.
A number of open problems hinder our present ability to extract scientific information from data that will be gathered by the near-future gravitational-wave mission LISA. Many of these relate to the modeling, detection and characterization of signals from binary inspirals with an extreme component-mass ratio of $\lesssim 10^{-4}$. In this paper, we draw attention to the issue of systematic error in parameter estimation due to the use of fast but approximate waveform models; this is found to be relevant for extreme-mass-ratio inspirals even in the case of waveforms with $\gtrsim 90\%$ overlap accuracy and moderate ($\gtrsim 30$) signal-to-noise ratios. A scheme that uses Gaussian processes to interpolate and marginalize over waveform error is adapted and investigated as a possible precursor solution to this problem. Several new methodological results are obtained, and the viability of the technique is successfully demonstrated on a three-parameter example in the setting of the LISA Data Challenge.
A passing gravitational wave causes a deflection in the apparent astrometric positions of distant stars. The effect of the speed of the gravitational wave on this astrometric shift is discussed. A stochastic background of gravitational waves would result in a pattern of astrometric deflections which are correlated on large angular scales. These correlations are quantified and investigated for backgrounds of gravitational waves with sub- and super-luminal group velocities. The statistical properties of the correlations are depicted in two equivalent and related ways: as correlation curves and as angular power spectra. Sub-(super-)luminal gravitational wave backgrounds have the effect of enhancing (suppressing) the power in low-order angular modes. Analytical representations of the redshift-redshift and redshift-astrometry correlations are also derived. The potential for using this effect for constraining the speed of gravity is discussed.
Gaussian process regression (GPR) is a non-parametric Bayesian technique for interpolating or fitting data. The main barrier to further uptake of this powerful tool rests in the computational costs associated with the matrices which arise when dealing with large data sets. Here, we derive some simple results which we have found useful for speeding up the learning stage in the GPR algorithm, and especially for performing Bayesian model comparison between different covariance functions. We apply our techniques to both synthetic and real data and quantify the speed-up relative to using nested sampling to numerically evaluate model evidences.
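For a Gaussian process, the model evidence compared here has a closed form, which is what makes a fast alternative to nested sampling possible. A minimal sketch using the standard expression (the kernels and their hyperparameters below are illustrative assumptions):

```python
import numpy as np

def log_evidence(x, y, kernel):
    """Closed-form log marginal likelihood of a zero-mean GP: the
    quantity needed for Bayesian comparison of covariance functions,
    evaluated via a Cholesky factorization rather than sampling."""
    K = kernel(x[:, None], x[None, :]) + 1e-10 * np.eye(len(x))  # jitter
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2.0 * np.pi))

# Two candidate covariance functions, each with a small white-noise
# term on the diagonal (hyperparameters are assumptions):
sq_exp = lambda a, b: np.exp(-0.5 * (a - b) ** 2) + 0.01 * (a == b)
exp_cov = lambda a, b: np.exp(-np.abs(a - b)) + 0.01 * (a == b)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 40)
y = np.sin(x) + 0.1 * rng.normal(size=40)

print(log_evidence(x, y, sq_exp), log_evidence(x, y, exp_cov))
```

Each evidence evaluation costs one $O(n^3)$ Cholesky factorization, so comparing covariance functions reduces to maximizing or integrating this expression over hyperparameters instead of running a full nested-sampling calculation per model.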
Folding uncertainty in theoretical models into Bayesian parameter estimation is necessary in order to make reliable inferences. A general means of achieving this is by marginalizing over model uncertainty using a prior distribution constructed using Gaussian process regression (GPR). As an example, we apply this technique to the measurement of chirp mass using (simulated) gravitational-wave signals from binary black holes that could be observed using advanced-era gravitational-wave detectors. Unless properly accounted for, uncertainty in the gravitational-wave templates could be the dominant source of error in studies of these systems. We explain our approach in detail and provide proofs of various features of the method, including the limiting behavior for high signal-to-noise, where systematic model uncertainties dominate over noise errors. We find that the marginalized likelihood constructed via GPR offers a significant improvement in parameter estimation over the standard, uncorrected likelihood both in our simple one-dimensional study, and theoretically in general. We also examine the dependence of the method on the size of the training set used in the GPR, on the form of covariance function adopted for the GPR, and on changes to the detector noise power spectral density.