
Extraction of instantaneous frequencies and amplitudes in nonstationary time-series data

Posted by Daniel Shea
Publication date: 2021
Research language: English





Time-series analysis is critical for a diversity of applications in science and engineering. By leveraging the strengths of modern gradient descent algorithms, the Fourier transform, multi-resolution analysis, and Bayesian spectral analysis, we propose a data-driven approach to time-frequency analysis that circumvents many of the shortcomings of classic approaches, including the extraction of nonstationary signals with discontinuities in their behavior. The method introduced is equivalent to a nonstationary Fourier mode decomposition (NFMD) for nonstationary and nonlinear temporal signals, allowing for the accurate identification of instantaneous frequencies and their amplitudes. The method is demonstrated on a diversity of time-series data, including data from cantilever-based electrostatic force microscopy to quantify the time-dependent evolution of charging dynamics at the nanoscale.
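For orientation, the sketch below shows one generic way to recover instantaneous frequencies and amplitudes from a nonstationary signal: fit a single sinusoid to each short window by least squares, initialized from the window's FFT peak. This is only a minimal illustration of the windowed, gradient-based fitting idea described in the abstract, not the authors' NFMD implementation; the window length, step size, and single-mode model are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def fft_peak_frequency(x, dt):
    """Initial frequency guess: location of the largest FFT peak in the window."""
    freqs = np.fft.rfftfreq(len(x), dt)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    return freqs[np.argmax(spectrum[1:]) + 1]

def fit_window(x, t):
    """Least-squares fit of A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + c to one window."""
    f0 = fft_peak_frequency(x, t[1] - t[0])

    def residual(p):
        A, B, f, c = p
        return A * np.cos(2 * np.pi * f * t) + B * np.sin(2 * np.pi * f * t) + c - x

    sol = least_squares(residual, x0=[x.std(), 0.0, f0, x.mean()])
    A, B, f, _ = sol.x
    return abs(f), np.hypot(A, B)          # instantaneous frequency, amplitude

def sliding_estimates(x, dt, window, step):
    """Frequency/amplitude estimates over overlapping windows."""
    t = np.arange(window) * dt
    out = []
    for start in range(0, len(x) - window, step):
        out.append(fit_window(x[start:start + window], t))
    return np.array(out)   # column 0: frequency [Hz], column 1: amplitude

# Example: a chirp whose instantaneous frequency ramps from 5 Hz to 15 Hz over 2 s
dt = 1e-3
tt = np.arange(0, 2, dt)
signal = np.sin(2 * np.pi * (5 + 2.5 * tt) * tt)
print(sliding_estimates(signal, dt, window=200, step=100)[:5])
```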




Read also

Time series data compression is emerging as an important problem with the growth in IoT devices and sensors. Due to the presence of noise in these datasets, lossy compression can often provide significant compression gains without impacting the performance of downstream applications. In this work, we propose an error-bounded lossy compressor, LFZip, for multivariate floating-point time series data that provides guaranteed reconstruction up to a user-specified maximum absolute error. The compressor is based on the prediction-quantization-entropy coder framework and benefits from improved prediction using linear models and neural networks. We evaluate the compressor on several time series datasets where it outperforms the existing state-of-the-art error-bounded lossy compressors. The code and data are available at https://github.com/shubhamchandak94/LFZip
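As a rough illustration of the prediction-quantization part of such an error-bounded pipeline (not LFZip's actual predictor or entropy coder), the sketch below predicts each sample from the previous reconstructed one and quantizes the residual so that the reconstruction error never exceeds the user-specified bound; the integer symbols would then go to an entropy coder.

```python
import numpy as np

def quantize_stream(x, max_err):
    """Prediction + uniform quantization with a hard |error| <= max_err guarantee.

    Hypothetical sketch of the prediction-quantization stage only: the predictor
    is simply the previous reconstructed sample, and the integer symbols would
    normally be passed to an entropy coder.
    """
    step = 2 * max_err                      # bin width that bounds the error
    symbols = np.empty(len(x), dtype=np.int64)
    recon = np.empty(len(x))
    prev = 0.0
    for i, xi in enumerate(x):
        residual = xi - prev                # prediction error
        q = int(np.round(residual / step))  # quantized symbol (to be entropy coded)
        symbols[i] = q
        recon[i] = prev + q * step          # decoder-side reconstruction
        prev = recon[i]                     # predictor uses reconstructed values
    return symbols, recon

x = np.cumsum(np.random.default_rng(0).normal(size=1000))  # toy random-walk series
symbols, recon = quantize_stream(x, max_err=0.05)
assert np.max(np.abs(x - recon)) <= 0.05 + 1e-12            # error bound holds
```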
Sufficient high-quality traffic data are a crucial component of various Intelligent Transportation System (ITS) applications and research related to congestion prediction, speed prediction, incident detection, and other traffic operation tasks. Nonetheless, missing traffic data are a common issue in sensor data and are inevitable for several reasons, such as sensor malfunction, poor maintenance or calibration, and intermittent communications. Such missing data issues often make data analysis and decision-making complicated and challenging. In this study, we have developed a generative adversarial network (GAN) based traffic sensor data imputation framework (TSDIGAN) to efficiently reconstruct the missing data by generating realistic synthetic data. In recent years, GANs have shown impressive success in image data generation. However, generating traffic data by taking advantage of GAN-based modeling is a challenging task, since traffic data have strong time dependency. To address this problem, we propose a novel time-dependent encoding method called the Gramian Angular Summation Field (GASF) that converts the problem of traffic time-series data generation into that of image generation. We have evaluated and tested our proposed model using the benchmark dataset provided by Caltrans Performance Management Systems (PeMS). This study shows that the proposed model can significantly improve the traffic data imputation accuracy in terms of Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) compared to state-of-the-art models on the benchmark dataset. Further, the model achieves reasonably high accuracy in imputation tasks even under a very high missing data rate (> 50%), which shows the robustness and efficiency of the proposed model.
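The GASF encoding mentioned in the abstract has a standard closed form: rescale the series to [-1, 1], map each value to a polar angle phi = arccos(x), and set image entry (i, j) to cos(phi_i + phi_j). The sketch below implements that encoding step only; the toy traffic profile and image size are illustrative, and the GAN part of TSDIGAN is not shown.

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field image of a 1-D series."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)                          # guard against rounding
    phi = np.arccos(x)                                 # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])         # cos(phi_i + phi_j)

# Toy example: a daily traffic-speed profile becomes a 96 x 96 image
speeds = 60 + 20 * np.sin(np.linspace(0, 2 * np.pi, 96))
image = gasf(speeds)
print(image.shape)   # (96, 96)
```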
Objective: Mixtures of temporally nonstationary signals are very common in biomedical applications. The nonstationarity of the source signals can be used as a discriminative property for signal separation. Herein, a semi-blind source separation algorithm is proposed for the extraction of temporally nonstationary components from linear multichannel mixtures of signals and noises. Methods: A hypothesis test is proposed for the detection and fusion of temporally nonstationary events, using ad hoc indexes for monitoring the first- and second-order statistics of the innovation process. As proof of concept, the general framework is customized and tested on noninvasive fetal cardiac recordings acquired from the maternal abdomen, using publicly available datasets and two types of nonstationarity detectors: 1) a local power variations detector, and 2) a model-deviations detector based on the innovation process properties of an extended Kalman filter. Results: The performance of the proposed method is assessed in the presence of white and colored noise, at different signal-to-noise ratios. Conclusion and Significance: The proposed scheme is general and can be used for the extraction of nonstationary events and sample deviations from a presumed model in multivariate data, which is a recurrent problem in many machine learning applications.
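As a hypothetical illustration of the first kind of detector mentioned above (a local power variations detector), the sketch below flags windows whose mean squared value deviates strongly from a robust global baseline. The window length and threshold are arbitrary choices for the example; this is not the paper's actual index or hypothesis test.

```python
import numpy as np

def local_power_flags(x, window, threshold=3.0):
    """Flag windows whose local power deviates strongly from the global baseline."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) // window
    segments = x[:n_windows * window].reshape(n_windows, window)
    local_power = (segments ** 2).mean(axis=1)       # mean squared value per window
    baseline = np.median(local_power)                # robust global power level
    return local_power / baseline > threshold        # True where nonstationary

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
x[1200:1300] += 4 * rng.normal(size=100)             # inject a transient burst
print(np.where(local_power_flags(x, window=100))[0]) # windows covering the burst
```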
We develop a method for the multifractal characterization of nonstationary time series, which is based on a generalization of the detrended fluctuation analysis (DFA). We relate our multifractal DFA method to the standard partition function-based multifractal formalism, and prove that both approaches are equivalent for stationary signals with compact support. By analyzing several examples we show that the new method can reliably determine the multifractal scaling behavior of time series. By comparing the multifractal DFA results for original series to those for shuffled series we can distinguish multifractality due to long-range correlations from multifractality due to a broad probability density function. We also compare our results with the wavelet transform modulus maxima (WTMM) method, and show that the results are equivalent.
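The multifractal DFA procedure itself is standard: build the cumulative profile, detrend fixed-size segments with a low-order polynomial, form the q-th order fluctuation function, and read off the generalized Hurst exponents h(q) from its scaling with segment size. The sketch below is a simplified version (non-overlapping, forward-only segments) rather than a full implementation.

```python
import numpy as np

def mfdfa_hq(x, scales, q_values, order=1):
    """Generalized Hurst exponents h(q) via a simplified multifractal DFA."""
    profile = np.cumsum(x - np.mean(x))                       # cumulative profile
    fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        f2 = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)  # local detrending
            f2.append(np.mean((seg - trend) ** 2))
        f2 = np.array(f2)
        for i, q in enumerate(q_values):
            if q == 0:
                fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))  # q -> 0 limit
            else:
                fq[i, j] = np.mean(f2 ** (q / 2)) ** (1.0 / q)
    # h(q) is the slope of log F_q(s) against log s
    return [np.polyfit(np.log(scales), np.log(fq[i]), 1)[0] for i in range(len(q_values))]

x = np.random.default_rng(2).normal(size=10000)               # uncorrelated noise
print(mfdfa_hq(x, scales=[16, 32, 64, 128, 256], q_values=[-2, 0, 2]))
# For white noise all h(q) should be close to 0.5
```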
The growing popularity of wearable sensors has generated large quantities of temporal physiological and activity data. The ability to analyze these data offers new opportunities for real-time health monitoring and forecasting. However, temporal physiological data presents many analytic challenges: the data is noisy, contains many missing values, and each series has a different length. Most methods proposed for time series analysis and classification do not handle datasets with these characteristics, nor do they offer interpretability and explainability, a critical requirement in the health domain. We propose an unsupervised method for learning representations of time series based on common patterns identified within them. The patterns are interpretable, variable in length, and extracted using the Byte Pair Encoding compression technique. In this way the method can capture both long-term and short-term dependencies present in the data. We show that this method applies to both univariate and multivariate time series and beats state-of-the-art approaches on a real-world dataset collected from wearable sensors.
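To make the Byte Pair Encoding idea concrete, the sketch below first discretizes a series into quantile-bin symbols and then greedily merges the most frequent adjacent pair into a new token, yielding variable-length patterns. The discretization scheme and number of merges are assumptions for the example, not the paper's exact pipeline.

```python
import numpy as np
from collections import Counter

def discretize(x, n_bins=4):
    """Map a series to symbols using quantile bins (a SAX-like step)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return [chr(ord('a') + b) for b in np.digitize(x, edges)]

def bpe_patterns(symbols, n_merges=5):
    """Greedy byte-pair-encoding merges: repeatedly fuse the most frequent
    adjacent pair into a new token, yielding variable-length patterns."""
    seq = list(symbols)
    merged = []
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        new_token = a + b
        merged.append(new_token)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(new_token)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return merged, seq

x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.default_rng(3).normal(size=500)
patterns, encoded = bpe_patterns(discretize(x))
print(patterns)   # variable-length motifs such as 'ab', 'abcd', ...
```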


