
Adaptive filtering techniques for interferometric data preparation: removal of long-term sinusoidal signals and oscillatory transients

Posted by Eric Chassande-Mottin
Publication date: 2000
Research field: Physics
Paper language: English





We propose an adaptive denoising scheme for poorly modeled non-Gaussian features in the gravitational wave interferometric data. Preliminary tests on real data show encouraging results.




Read also

It is known from the experience gained with gravitational wave detector prototypes that the interferometric output signal will be corrupted by a significant amount of non-Gaussian noise, a large part of it essentially composed of long-term sinusoids with slowly varying envelopes (such as violin resonances in the suspensions, or main power harmonics) and short-term ringdown noise (which may emanate from servo control systems, electronics in a non-linear state, etc.). Since non-Gaussian noise components make the detection and estimation of the gravitational wave signature more difficult, a denoising algorithm based on adaptive filtering techniques (LMS methods) is proposed to separate and extract them from the stationary and Gaussian background noise. The strength of the method is that it does not require any precise model of the observed data: the signals are distinguished on the basis of their autocorrelation time. We believe that the robustness and simplicity of this method make it useful for data preparation and for the understanding of the first interferometric data. We present the detailed structure of the algorithm and its application to both simulated data and real data from the LIGO 40-meter prototype.
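As a toy illustration of the separation by autocorrelation time (a sketch with invented parameters, not the paper's implementation), an LMS adaptive line enhancer can strip a persistent sinusoid from broadband noise by predicting the data from a delayed copy of itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a persistent sinusoid (stand-in for a violin resonance or
# power-line harmonic) buried in white Gaussian noise.  All parameters here
# are illustrative choices.
n = 20000
t = np.arange(n)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(n)

# Adaptive line enhancer: predict x[k] from a *delayed* slice of x.  The
# delay exceeds the noise autocorrelation time, so the LMS predictor can
# only lock onto the long-autocorrelation (narrowband) component.
order, delay, mu = 32, 8, 0.002
w = np.zeros(order)           # adaptive filter weights
narrowband = np.zeros(n)      # LMS prediction = extracted sinusoid
for k in range(order + delay, n):
    u = x[k - delay - order:k - delay][::-1]  # delayed tap vector
    y = w @ u                                 # narrowband prediction
    e = x[k] - y                              # prediction error
    w += 2 * mu * e * u                       # LMS weight update
    narrowband[k] = y

# The residual is the data with the sinusoid removed; once the filter has
# converged its variance should approach the injected noise variance (0.25).
residual = x - narrowband
```

Note that no model of the sinusoid (frequency, amplitude, phase) is supplied anywhere: the filter identifies it purely because it stays predictable over lags longer than the delay.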
Long-lived gravitational wave (GW) transients have received interest in the last decade, as the sensitivity of LIGO and Virgo increases. Such signals, lasting between 10 and 1000 s, can come from a variety of sources, including accretion disk instabilities around black holes, binary neutron star post-merger remnants, core-collapse supernovae, non-axisymmetric deformations in isolated neutron stars, and magnetar giant flares. Given the large parameter space and the lack of precisely modeled waveforms, searches must rely on robust detection algorithms that make few or no assumptions about the nature of the signal. Here we present a new data analysis pipeline to search for long-lived transient GW signals, based on an excess cross-power statistic computed over a network of detectors. It uses a hierarchical strategy that allows the background to be estimated quickly, and implements several features that increase detection sensitivity by 30% over a wide range of signal morphologies compared to an older implementation. We also report upper limits on the GW energy emitted, from a search conducted with the pipeline around a sample of nearby magnetar giant flares, and discuss the detection potential of such sources with second- and third-generation detectors.
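The core idea of an excess cross-power search, and of the time-shift background estimate, can be caricatured in a few lines (a toy two-detector sketch with made-up parameters, not the pipeline itself): a transient common to both detectors survives in the averaged cross-spectrum, while independent noise averages away.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-detector model: both streams see the same long-duration 40 Hz
# transient (20 s to 30 s) on top of independent white noise.
fs, dur = 256, 64
n = fs * dur
t = np.arange(n) / fs
common = 0.5 * np.sin(2 * np.pi * 40 * t) * ((t > 20) & (t < 30))
d1 = common + rng.standard_normal(n)
d2 = common + rng.standard_normal(n)

def cross_power(a, b, seg=256):
    """Real part of the segment-averaged cross-spectrum (Welch-style)."""
    m = len(a) // seg
    A = np.fft.rfft(a[:m * seg].reshape(m, seg), axis=1)
    B = np.fft.rfft(b[:m * seg].reshape(m, seg), axis=1)
    return np.real(A * np.conj(B)).mean(axis=0)

on = cross_power(d1, d2)
# Background: time-shift one stream by more than the signal duration, which
# destroys the common signal but preserves the noise statistics.
off = cross_power(d1, np.roll(d2, 15 * fs))
excess = on - off   # peaks in the 40 Hz bin (1 Hz resolution -> index 40)
```

The real pipeline works on a detector network with proper PSD weighting and pattern-function projections; this sketch only shows why cross-correlation plus time slides gives a robust, waveform-agnostic statistic.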
We present a new ${\it gating}$ method to remove non-Gaussian noise transients in gravitational wave data. The method does not rely on any a priori knowledge of the amplitude or duration of the transient events. In light of the character of the newly released LIGO O3a data, glitch identification is particularly relevant for searches using these data. Our method preserves more data than previously achieved, while obtaining the same, if not higher, noise reduction: we achieve a $\approx$ 2-fold reduction in zeroed-out data with respect to the gates released by LIGO on the O3a data. We describe the method and characterise its performance. While developed in the context of searches for continuous signals, this method can be used to prepare gravitational wave data for any search. As the cadence of compact binary inspiral detections increases and the lower noise level of the instruments unveils new glitches, excising disturbances effectively, precisely, and in a timely manner becomes more important. Our method does this. We release the source code associated with this new technique, together with the gates for the newly released O3 data.
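A minimal gate of the generic kind (not this paper's algorithm; all thresholds and window lengths below are invented for illustration) zeroes out stretches where a running amplitude estimate of whitened data is excessive, with tapered edges so the excision itself does not inject broadband artifacts:

```python
import numpy as np

rng = np.random.default_rng(2)

# Whitened data with one loud, short glitch (all numbers illustrative).
fs = 1024
x = rng.standard_normal(8 * fs)
x[3 * fs:3 * fs + 100] += 20.0          # ~0.1 s glitch at t = 3 s

# Running RMS over ~50 ms blocks flags stretches of excess amplitude.
block = fs // 20
rms = np.sqrt(np.convolve(x ** 2, np.ones(block) / block, mode="same"))

mask = np.ones_like(x)
mask[rms > 5.0] = 0.0                   # gate where the RMS is excessive

# Smooth the gate edges with a short Hann-weighted ramp: zeros stay zero,
# and the transitions into the gate become gradual.
taper = np.hanning(2 * block)
smooth = np.convolve(mask, taper / taper.sum(), mode="same")
mask = np.minimum(mask, smooth)
gated = x * mask
```

The trade-off the abstract highlights lives in the threshold and taper choices: a tighter gate removes more glitch power but zeroes out more science data.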
Since the very first detection of gravitational waves from the coalescence of two black holes in 2015, Bayesian statistical methods have been routinely applied by LIGO and Virgo to extract the signal out of noisy interferometric measurements, obtain point estimates of the physical parameters responsible for producing the signal, and rigorously quantify their uncertainties. Different computational techniques have been devised depending on the source of the gravitational radiation and the gravitational waveform model used. Prominent sources of gravitational waves are binary black hole or neutron star mergers, the only objects that have been observed by detectors to date. Gravitational waves from core-collapse supernovae, rapidly rotating neutron stars, and the stochastic gravitational wave background are also in the sensitivity band of the ground-based interferometers and are expected to become observable in future observing runs. As nonlinearities of the complex waveforms and the high-dimensional parameter spaces preclude analytic evaluation of the posterior distribution, posterior inference for all these sources relies on computer-intensive simulation techniques such as Markov chain Monte Carlo methods. A review of state-of-the-art Bayesian statistical parameter estimation methods will be given for researchers in this cross-disciplinary area of gravitational wave data analysis.
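The simulation-based inference mentioned above can be reduced to its simplest form (a toy random-walk Metropolis sampler recovering one amplitude parameter; real pipelines use far more sophisticated samplers and waveform models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: recover the amplitude of a known-shape signal in white
# Gaussian noise.  Signal shape and true amplitude are invented.
n = 512
t = np.linspace(0, 1, n)
template = np.sin(2 * np.pi * 8 * t)
a_true = 2.5
data = a_true * template + rng.standard_normal(n)

def log_post(a):
    # Flat prior on a; Gaussian likelihood with unit noise variance.
    r = data - a * template
    return -0.5 * np.sum(r ** 2)

samples, a = [], 0.0
lp = log_post(a)
for _ in range(20000):
    prop = a + 0.1 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        a, lp = prop, lp_prop
    samples.append(a)
post = np.array(samples[5000:])              # discard burn-in
```

The retained samples approximate the posterior: their mean is the point estimate and their spread quantifies the uncertainty, which is exactly the output the LIGO/Virgo analyses report for the physical source parameters.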
A vitally important requirement for detecting gravitational wave (GW) signals from compact coalescing binaries (CBC) with high significance is the reduction of the false-alarm rate of the matched-filter statistic. The data from GW detectors contain transient noise artifacts, or glitches, which adversely affect the performance of search algorithms by producing false alarms. Glitches with large amplitudes can produce triggers in the SNR time series in spite of their small overlap with the templates, which contributes to false alarms. Historically, the traditional $\chi^2$ test has proved quite useful in distinguishing triggers arising from CBC signals from those caused by glitches. In a recent paper, a large class of unified $\chi^2$ discriminators was formulated, along with a procedure to construct an optimal $\chi^2$ discriminator, especially when the glitches can be modeled. A large variety of glitches that often occur in GW detector data can be modeled as sine-Gaussians, with quality factor and central frequency, $(Q, f_0)$, as parameters. We use Singular Value Decomposition to identify the most significant degrees of freedom in order to reduce the computational cost of our $\chi^2$. Finally, we construct a $\chi^2$ statistic that optimally discriminates between sine-Gaussian glitches and CBC signals. We also use Receiver Operating Characteristics to quantify the improvement in search sensitivity when the optimal $\chi^2$ is employed rather than the traditional one. The improvement in detection probability is by a few to several percentage points, near a false-alarm probability of a few times $10^{-3}$, and holds for binary black holes (BBHs) with component masses from several to a hundred solar masses. Moreover, the glitches that are best discriminated against are those resembling sine-Gaussians with $Q \in [25, 50]$ and $f_0 \in [40, 80]$ Hz.
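The logic of a $\chi^2$ discriminator can be sketched with a simplified split-template test in the spirit of the traditional (Allen-type) $\chi^2$, not the optimized sine-Gaussian discriminator of the paper; template, glitch, and all numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

n, p = 1024, 8
t = np.arange(n) / n

# Unit-norm chirp-like template (a crude stand-in for a CBC waveform).
h = np.sin(2 * np.pi * (30 + 40 * t) * t)
h /= np.linalg.norm(h)

def chi2(d):
    """Split-template chi^2: does SNR accumulate as the template predicts?"""
    z = d @ h                                     # total matched-filter output
    chunks = np.array_split(np.arange(n), p)
    zi = np.array([d[c] @ h[c] for c in chunks])  # per-chunk contributions
    qi = np.array([h[c] @ h[c] for c in chunks])  # template power per chunk
    # A true signal gives z_i ~ q_i * z; glitches violate this partition.
    return np.sum((zi - qi * z) ** 2 / qi)

noise = rng.standard_normal(n)
signal_data = 10 * h + noise                      # genuine injection, SNR ~ 10

# A loud sine-Gaussian-like burst: it rings the template hard near t = 0.5
# (producing a trigger) but deposits its matched-filter output in only two
# chunks instead of spreading it like the template.
glitch = 300 * np.exp(-((t - 0.5) ** 2) / 0.001) * h
glitch_data = glitch + noise
```

For the injection, the per-chunk outputs track the template's power partition and $\chi^2$ stays near its noise-only expectation; for the burst it is larger by orders of magnitude, which is the separation the veto exploits.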
