
Methods for detection and characterization of signals in noisy data with the Hilbert-Huang Transform

Submitted by Alexander Stroeer
Publication date: 2009
Language: English





The Hilbert-Huang Transform is a novel, adaptive approach to time series analysis that makes no assumptions about the form of the data. Its adaptive, local character allows the decomposition of non-stationary signals with high time-frequency resolution, but also renders it susceptible to degradation from noise. We show that complementing the HHT with techniques such as zero-phase filtering, kernel density estimation and Fourier analysis allows it to be used effectively to detect and characterize signals with low signal-to-noise ratio.
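The abstract does not give the paper's implementation, but the ingredients it names can be sketched in a few lines of Python. The following is a minimal illustration only, assuming the third-party PyEMD package for the empirical mode decomposition step; the sampling rate, filter band and toy signal are invented for the example.

# Minimal sketch of the pipeline named in the abstract; all parameter
# values are illustrative, not taken from the paper.
import numpy as np
from scipy import signal, stats
from PyEMD import EMD   # assumed EMD implementation (pip install EMD-signal)

fs = 4096.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 2.0 * np.random.randn(t.size)  # toy noisy data

# Zero-phase band-pass filtering: filtfilt runs the filter forward and
# backward, so the phase (hence the instantaneous frequency) is preserved.
b, a = signal.butter(4, [100.0, 500.0], btype="band", fs=fs)
x_filt = signal.filtfilt(b, a, x)

# Empirical mode decomposition into intrinsic mode functions (IMFs).
imfs = EMD().emd(x_filt)

# The Hilbert transform of an IMF gives instantaneous amplitude and frequency.
analytic = signal.hilbert(imfs[0])
amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# Kernel density estimation smooths the amplitude-weighted time-frequency
# track, suppressing noise-induced scatter.
pts = np.vstack([t[1:], inst_freq])
kde = stats.gaussian_kde(pts, weights=amp[1:])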




Read also

The NINJA data analysis challenge allowed the study of the sensitivity of data analysis pipelines to binary black hole numerical relativity waveforms in simulated Gaussian noise at the design level of the LIGO and Virgo observatories. We analyzed NINJA data with a pipeline based on the Hilbert-Huang Transform, comprising a detection stage and a characterization stage: detection is performed by triggering on excess instantaneous power, and characterization by displaying the kernel-density-enhanced (KD) time-frequency trace of the signal. Using the simulated data based on the two LIGO detectors, we were able to detect 77 out of 126 signals above SNR 5 in coincidence; 43 of the missed events had signal-to-noise ratio (SNR) below 10. Characterization of the detected signals revealed the merger part of the waveform in high time and frequency resolution, free from time-frequency uncertainty. We estimated the timelag of the signals between the detectors from the optimal overlap of the individual KD time-frequency maps, yielding estimates accurate to within a fraction of a millisecond for half of the events. A coherent addition of the data sets according to the estimated timelag was eventually used to characterize the event.
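The timelag step described above can be pictured as a sliding-overlap maximisation between two time-frequency maps. The sketch below is only an illustration of that idea, not the pipeline's actual code; the map contents, sampling interval and maximum shift are placeholders.

import numpy as np

def estimate_timelag(map1, map2, dt, max_shift):
    """Return the delay (seconds) of map2 relative to map1 that
    maximises the overlap sum of the two time-frequency maps."""
    T = map1.shape[1]
    n = int(max_shift / dt)
    shifts = range(-n, n + 1)
    # For each candidate shift s, multiply the overlapping columns.
    overlaps = [np.sum(map1[:, max(-s, 0):T + min(-s, 0)] *
                       map2[:, max(s, 0):T + min(s, 0)])
                for s in shifts]
    return shifts[int(np.argmax(overlaps))] * dt

# Toy usage: two maps containing the same content offset by 5 time bins.
rng = np.random.default_rng(0)
m1 = rng.random((64, 256))
m2 = np.roll(m1, 5, axis=1)
print(estimate_timelag(m1, m2, dt=1 / 4096.0, max_shift=0.01))
# prints ~0.00122, i.e. 5 samples at 4096 Hz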
Since Bandt and Pompe's seminal work, permutation entropy has been used in several applications and is now an essential tool for time series analysis. Beyond becoming a popular and successful technique, permutation entropy inspired a framework for mapping time series into symbolic sequences that triggered the development of many other tools, including an approach for creating networks from time series known as ordinal networks. Despite this increasing popularity, the computational development of these methods is fragmented, and there had been no effort to create a unified software package. Here we present ordpy, a simple and open-source Python module that implements permutation entropy and several of the principal methods related to Bandt and Pompe's framework for analyzing time series and two-dimensional data. In particular, ordpy implements permutation entropy, Tsallis and Rényi permutation entropies, the complexity-entropy plane, complexity-entropy curves, missing ordinal patterns, ordinal networks, and missing ordinal transitions for one-dimensional (time series) and two-dimensional (image) data, as well as their multiscale generalizations. We review some theoretical aspects of these tools and illustrate the use of ordpy by replicating several literature results.
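To make the Bandt-Pompe idea behind ordpy concrete, here is a from-scratch numpy sketch of normalised permutation entropy. ordpy itself exposes a richer API; the function below is only illustrative and its parameter names are our own.

import numpy as np
from math import factorial

def permutation_entropy(x, d=3, tau=1):
    """Normalised permutation entropy for embedding dimension d, delay tau."""
    x = np.asarray(x)
    n = x.size - (d - 1) * tau
    # Map each length-d window to its ordinal pattern (argsort permutation).
    patterns = np.array([tuple(np.argsort(x[i:i + d * tau:tau]))
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    # Shannon entropy of the pattern distribution, normalised by log2(d!).
    return -np.sum(p * np.log2(p)) / np.log2(factorial(d))

rng = np.random.default_rng(1)
print(permutation_entropy(rng.random(10_000)))               # white noise: near 1
print(permutation_entropy(np.sin(np.arange(10_000) * 0.01))) # regular signal: small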
We present a new event trigger generator based on the Hilbert-Huang transform, named EtaGen (ηGen). It decomposes time-series data into several adaptive modes without imposing a priori bases on the data. The adaptive modes are used to find transients (excesses) in the background noise. A clustering algorithm gathers the excesses corresponding to a single event and reconstructs its waveform. The performance of EtaGen is evaluated by how many injections it recovers in simulated LIGO data. EtaGen is viable as an event trigger generator when compared directly with Omicron, currently the best event trigger generator used in the LIGO Scientific Collaboration and Virgo Collaboration.
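The excess-finding and clustering stages described above can be imitated with simple thresholding and gap-based grouping. This is a toy sketch under our own assumptions (a median-plus-MAD threshold and a fixed sample gap), not EtaGen's actual algorithm.

import numpy as np

def find_excesses(amp, n_sigma=5.0):
    """Indices where amplitude exceeds median + n_sigma * robust std."""
    sigma = 1.4826 * np.median(np.abs(amp - np.median(amp)))  # MAD estimate
    return np.flatnonzero(amp > np.median(amp) + n_sigma * sigma)

def cluster_excesses(idx, max_gap):
    """Split excess indices into events wherever gaps exceed max_gap samples."""
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > max_gap)
    return np.split(idx, breaks + 1)

# Toy usage: a noise amplitude series with one injected transient.
rng = np.random.default_rng(2)
amp = np.abs(rng.standard_normal(4096))
amp[2000:2050] += 10.0                      # injected transient
events = cluster_excesses(find_excesses(amp), max_gap=16)
print([(e[0], e[-1]) for e in events])      # start/end sample of each event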
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons well beyond those expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been used successfully to optimally combine forecast models and their inherent uncertainty with incoming noisy observations. The key idea of our work is to achieve increased forecast capability by judiciously combining machine-learning algorithms and data assimilation. We use the physics-agnostic data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the resulting forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closure in multi-scale systems.
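The forecast-model half of this combination is easy to sketch: fixed random features with only a linear output layer learned. The toy below trains such a surrogate on the logistic map using one-shot ridge regression rather than the paper's sequential ensemble Kalman filter updates; the feature count, weight ranges and regulariser are invented for the example.

import numpy as np

rng = np.random.default_rng(3)

# Training data: one-step pairs (x_n, x_{n+1}) from the chaotic logistic map.
x = np.empty(5000)
x[0] = 0.4
for n in range(x.size - 1):
    x[n + 1] = 3.9 * x[n] * (1.0 - x[n])

D = 300                                     # number of random features
W = rng.uniform(-1, 1, size=(D, 1))         # fixed (untrained) internal weights
b = rng.uniform(-1, 1, size=D)

def features(u):
    return np.tanh(W @ np.atleast_1d(u) + b)

# Learn only the linear readout C by ridge regression on the feature matrix.
Phi = np.stack([features(u) for u in x[:-1]])       # shape (N, D)
y = x[1:]
lam = 1e-6                                          # ridge regulariser
C = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

# Iterate the learned map from a new initial condition.
u = 0.3
for _ in range(5):
    u = float(C @ features(u))
    print(u)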
Stefan Schmitt (2016)
A selection of unfolding methods commonly used in High Energy Physics is compared. The methods discussed are: bin-by-bin correction factors, matrix inversion, template fits, Tikhonov regularisation, and two examples of iterative methods. Two procedures for choosing the regularisation strength are tested, namely the L-curve scan and a scan of global correlation coefficients. The advantages and disadvantages of the unfolding methods and of the choices of regularisation strength are discussed using a toy example.
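Tikhonov-regularised unfolding, one of the methods compared above, admits a compact closed-form sketch. The toy below is our own example, with an invented response matrix and a hand-picked regularisation strength tau in place of the L-curve or global-correlation scans: it smears a falling truth spectrum and unfolds it with a curvature penalty.

import numpy as np

rng = np.random.default_rng(4)
n = 20
truth = np.exp(-np.linspace(0, 3, n)) * 1000.0      # toy falling spectrum

# Response matrix: migration to neighbouring bins (columns normalised to 1).
A = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
A /= A.sum(axis=0)

measured = rng.poisson(A @ truth).astype(float)     # smeared, fluctuated data

# Second-derivative (curvature) matrix used as the Tikhonov penalty L.
L = np.diff(np.eye(n), n=2, axis=0)

def unfold(y, tau):
    """Solve min ||A x - y||^2 + tau^2 ||L x||^2 in closed form."""
    return np.linalg.solve(A.T @ A + tau**2 * (L.T @ L), A.T @ y)

print(np.round(unfold(measured, tau=0.1)[:5]))      # unfolded first bins
print(np.round(truth[:5]))                          # compare with the truth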