
Methods for detection and characterization of signals in noisy data with the Hilbert-Huang Transform

Added by Alexander Stroeer
Publication date: 2009
Language: English





The Hilbert-Huang Transform (HHT) is a novel, adaptive approach to time series analysis that makes no assumptions about the form of the data. Its adaptive, local character allows the decomposition of non-stationary signals with high time-frequency resolution, but also renders it susceptible to degradation from noise. We show that complementing the HHT with techniques such as zero-phase filtering, kernel density estimation and Fourier analysis allows it to be used effectively to detect and characterize signals with low signal-to-noise ratio.
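
As an illustration of this combination, the sketch below pre-filters a noisy chirp with zero-phase filtering, decomposes it by empirical mode decomposition, and smooths the resulting time-frequency trace with kernel density estimation. It assumes the PyEMD and SciPy packages; the signal, filter settings, and variable names are illustrative, not the authors' actual pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from scipy.stats import gaussian_kde
    from PyEMD import EMD  # empirical mode decomposition, first stage of the HHT

    fs = 1024.0                                        # sampling rate (Hz), assumed
    t = np.arange(0, 4, 1 / fs)
    signal = np.sin(2 * np.pi * (20 + 10 * t) * t)     # toy chirp
    data = signal + 2.0 * np.random.randn(t.size)      # low-SNR data

    # Zero-phase filtering: filtfilt runs the filter forward and backward,
    # suppressing out-of-band noise without distorting the phase.
    b, a = butter(4, 100.0 / (fs / 2), btype="low")
    clean = filtfilt(b, a, data)

    # Adaptive decomposition into intrinsic mode functions (IMFs).
    imfs = EMD().emd(clean)

    # Hilbert spectral analysis: instantaneous amplitude and frequency per IMF.
    analytic = hilbert(imfs)
    inst_amp = np.abs(analytic)
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

    # Kernel density estimation smooths the sparse time-frequency trace,
    # weighting each (time, frequency) point by its instantaneous amplitude.
    points = np.vstack([np.tile(t[1:], imfs.shape[0]), inst_freq.ravel()])
    kde = gaussian_kde(points, weights=inst_amp[:, 1:].ravel())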



Related research

The NINJA data analysis challenge allowed the study of the sensitivity of data analysis pipelines to binary black hole numerical relativity waveforms in simulated Gaussian noise at the design sensitivity of the LIGO and Virgo observatories. We analyzed NINJA data with a pipeline based on the Hilbert-Huang Transform, utilizing a detection stage and a characterization stage: detection is performed by triggering on excess instantaneous power; characterization is performed by displaying the kernel-density-enhanced (KD) time-frequency trace of the signal. Using the simulated data based on the two LIGO detectors, we were able to detect 77 signals out of 126 above SNR 5 in coincidence, with 43 missed events characterized by a signal-to-noise ratio (SNR) of less than 10. Characterization of the detected signals revealed the merger part of the waveform in high time and frequency resolution, free from time-frequency uncertainty. We estimated the time lag of the signals between the detectors based on the optimal overlap of the individual KD time-frequency maps, yielding estimates accurate to within a fraction of a millisecond for half of the events. A coherent addition of the data sets according to the estimated time lag was eventually used in the characterization of the event.
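
The time-lag estimate described above can be pictured as sliding one detector's KD time-frequency map against the other's and taking the shift with maximal overlap. The helper below is a hedged sketch of that idea, not the NINJA pipeline code; the map layout and names are assumptions.

    import numpy as np

    def timelag_from_tf_maps(map_a, map_b, dt):
        """Shift (in seconds) of map_b relative to map_a that maximizes overlap.

        map_a, map_b: 2-D arrays (frequency x time) of KD time-frequency power.
        dt: time resolution of the maps in seconds.
        """
        n_t = map_a.shape[1]
        shifts = np.arange(-n_t + 1, n_t)
        # Overlap = element-wise product summed over the common support, i.e.
        # a cross-correlation along the time axis summed over frequency.
        overlap = [np.sum(map_a[:, max(0, s):n_t + min(0, s)] *
                          map_b[:, max(0, -s):n_t - max(0, s)])
                   for s in shifts]
        return shifts[int(np.argmax(overlap))] * dt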
Since Bandt and Pompe's seminal work, permutation entropy has been used in several applications and is now an essential tool for time series analysis. Beyond becoming a popular and successful technique, permutation entropy inspired a framework for mapping time series into symbolic sequences that triggered the development of many other tools, including an approach for creating networks from time series known as ordinal networks. Despite this increasing popularity, the computational development of these methods is fragmented, and there have been no efforts focused on creating a unified software package. Here we present ordpy, a simple and open-source Python module that implements permutation entropy and several of the principal methods related to Bandt and Pompe's framework to analyze time series and two-dimensional data. In particular, ordpy implements permutation entropy, Tsallis and Rényi permutation entropies, the complexity-entropy plane, complexity-entropy curves, missing ordinal patterns, ordinal networks, and missing ordinal transitions for one-dimensional (time series) and two-dimensional (images) data, as well as their multiscale generalizations. We review some theoretical aspects of these tools and illustrate the use of ordpy by replicating several literature results.
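
A minimal usage example, assuming ordpy's documented permutation_entropy and complexity_entropy functions (pip install ordpy); the two toy series are chosen only to contrast a maximally random signal with a chaotic one.

    import numpy as np
    import ordpy

    rng = np.random.default_rng(0)
    noise = rng.normal(size=2**12)          # white noise: entropy close to 1
    logistic = [0.4]
    for _ in range(2**12 - 1):              # fully chaotic logistic map
        logistic.append(4 * logistic[-1] * (1 - logistic[-1]))

    for name, series in [("noise", noise), ("logistic", logistic)]:
        # Normalized permutation entropy, then the (entropy, complexity) pair
        # locating the series on the complexity-entropy plane (dx = embedding
        # dimension, i.e. length of the ordinal patterns).
        pe = ordpy.permutation_entropy(series, dx=6)
        h, c = ordpy.complexity_entropy(series, dx=6)
        print(f"{name}: PE = {pe:.3f}, H = {h:.3f}, C = {c:.3f}")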
We present a new event trigger generator based on the Hilbert-Huang transform, named EtaGen (ηGen). It decomposes time-series data into several adaptive modes without imposing a priori bases on the data. The adaptive modes are used to find transients (excesses) in the background noise. A clustering algorithm is used to gather the excesses corresponding to a single event and to reconstruct its waveform. The performance of EtaGen is evaluated by how many injections in simulated LIGO data it recovers. EtaGen is viable as an event trigger generator when compared directly with Omicron, currently the best event trigger generator used in the LIGO Scientific Collaboration and the Virgo Collaboration.
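
The trigger logic reads as: decompose adaptively, flag samples whose mode power exceeds a noise-based threshold, and cluster neighbouring excesses into single events. The sketch below follows that outline with PyEMD; it is not EtaGen itself, and the threshold and clustering gap are illustrative.

    import numpy as np
    from PyEMD import EMD

    def find_events(data, fs, n_sigma=5.0, max_gap_s=0.05):
        """Return a list of (start, end) times of excess-power events."""
        imfs = EMD().emd(data)                      # adaptive modes, no fixed basis
        power = np.sum(imfs**2, axis=0)             # summed instantaneous power
        thresh = np.median(power) + n_sigma * np.std(power)
        excess = np.flatnonzero(power > thresh)     # samples above threshold
        events, cluster = [], []
        for i in excess:                            # greedy 1-D clustering: a gap
            if cluster and (i - cluster[-1]) > max_gap_s * fs:  # longer than
                events.append((cluster[0] / fs, cluster[-1] / fs))  # max_gap_s
                cluster = []                        # closes the current event
            cluster.append(i)
        if cluster:
            events.append((cluster[0] / fs, cluster[-1] / fs))
        return events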
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models, and their inherent uncertainty, with incoming noisy observations. The key idea in our work is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We use the physics-agnostic, data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closures in multi-scale systems.
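
At the core of such a method is a forecast model built from fixed random feature maps, where only a linear readout is learned from data. The sketch below fits that readout to a Lorenz-63 trajectory in one batch ridge regression; in the approach described above the readout would instead be updated sequentially by the ensemble Kalman filter as observations arrive. All sizes and scales here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def lorenz63(x, dt=0.01):
        """One Euler step of the Lorenz-63 system (toy chaotic dynamics)."""
        dx = np.array([10 * (x[1] - x[0]),
                       x[0] * (28 - x[2]) - x[1],
                       x[0] * x[1] - 8 / 3 * x[2]])
        return x + dt * dx

    # Training trajectory of length T in D = 3 dimensions.
    T, D, Dr = 5000, 3, 300
    X = np.empty((T, D)); X[0] = [1.0, 1.0, 1.0]
    for t in range(T - 1):
        X[t + 1] = lorenz63(X[t])

    # Fixed random feature map phi(x) = tanh(W x + b); W, b are never trained.
    W = rng.uniform(-0.5, 0.5, (Dr, D))
    b = rng.uniform(-0.5, 0.5, Dr)
    Phi = np.tanh(X[:-1] @ W.T + b)                 # (T-1, Dr) feature matrix

    # Ridge-regression readout mapping features to the next state.
    lam = 1e-6
    W_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Dr), Phi.T @ X[1:])

    def forecast(x, steps):
        """Cheap surrogate forecast: iterate the learned one-step map."""
        out = []
        for _ in range(steps):
            x = np.tanh(W @ x + b) @ W_out
            out.append(x)
        return np.array(out)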
Stefan Schmitt (2016)
A selection of unfolding methods commonly used in High Energy Physics is compared. The methods discussed here are: bin-by-bin correction factors, matrix inversion, template fit, Tikhonov regularisation and two examples of iterative methods. Two procedures to choose the strength of the regularisation are tested, namely the L-curve scan and a scan of global correlation coefficients. The advantages and disadvantages of the unfolding methods and choices of the regularisation strength are discussed using a toy example.
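
The contrast between the simplest of these methods can be reproduced with a few lines of linear algebra. The toy below, with an assumed tridiagonal response matrix and an illustrative regularisation strength, unfolds a smeared, Poisson-fluctuated spectrum by plain matrix inversion and by Tikhonov regularisation with a curvature penalty.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 20
    truth = 1000 * np.exp(-np.linspace(0, 3, n))    # steeply falling truth spectrum

    # Response matrix: each true bin leaks into its neighbours (detector smearing).
    R = 0.6 * np.eye(n) + 0.2 * (np.eye(n, k=1) + np.eye(n, k=-1))
    measured = rng.poisson(R @ truth).astype(float)

    # Matrix inversion: unbiased but amplifies statistical fluctuations.
    u_inv = np.linalg.solve(R, measured)

    # Tikhonov regularisation: minimise |R u - y|^2 + tau * |L u|^2, with L a
    # second-derivative (curvature) operator that damps the oscillations.
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    tau = 1e-3
    u_tik = np.linalg.solve(R.T @ R + tau * (L.T @ L), R.T @ measured)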