
Extracting correlations in earthquake time series using visibility graph analysis

Added by Sumanta Kundu
Publication date: 2020
Fields: Physics
Language: English





Recent observational studies have revealed that earthquakes fall into several distinct categories. Each category may be characterized by unique statistical features in its time series, but present understanding is still limited owing to their nonlinear and nonstationary nature. Here we use complex network theory to shed new light on the statistical properties of earthquake time series. We investigate two kinds of time series, magnitude and inter-event time (IET), for three categories of earthquakes: regular earthquakes, earthquake swarms, and tectonic tremors. Following the visibility graph criterion, each time series is mapped onto a complex network by treating every seismic event as a node and determining the links accordingly. Contrary to the current common belief, we find that the magnitude time series are not statistically equivalent to random time series. The IET series exhibit correlations similar to fractional Brownian motion for all categories of earthquakes. Furthermore, we show that the time series of the three categories can be distinguished by the topology of the associated visibility graph. Analysis of the assortativity coefficient also reveals that swarms are more intermittent than tremors.
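As a concrete illustration of the mapping described above, here is a minimal sketch of the natural visibility criterion: sample j is visible from sample i if the straight line joining them passes above every intermediate sample. The function name and the random-walk toy input are illustrative, not taken from the paper.

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(series):
    """Natural visibility graph: nodes are samples; i and j are linked
    when the straight line between (i, y[i]) and (j, y[j]) stays above
    every intermediate sample."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n - 1):
        for j in range(i + 1, n):
            t = np.arange(i + 1, j)                  # intermediate times
            line = y[j] + (y[i] - y[j]) * (j - t) / (j - i)
            if np.all(y[t] < line):                  # strict visibility
                g.add_edge(i, j)
    return g

# Toy usage: a random walk (Brownian-like) series
rng = np.random.default_rng(0)
vg = natural_visibility_graph(np.cumsum(rng.standard_normal(500)))
print("mean degree:", sum(d for _, d in vg.degree()) / vg.number_of_nodes())
```

For fractional Brownian motion, the visibility graph is known to yield a power-law degree distribution whose exponent depends on the Hurst exponent, which is what makes the graph topology a usable fingerprint of the correlations reported for the IET series.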



Related research

An article for the Springer Encyclopedia of Complexity and Systems Science
Earthquakes can be detected by matching spatial patterns or phase properties from 1-D seismic waves. Current earthquake detection methods, such as waveform correlation and template matching, have difficulty detecting anomalous earthquakes that are not similar to other earthquakes. In recent years, machine-learning techniques for earthquake detection have emerged as an active research direction. In this paper, we develop a novel earthquake detection method based on dictionary learning. Our method first generates rich features via signal-processing and statistical methods, and then employs feature-selection techniques to choose the features that carry the most significant information. Based on these selected features, we build a dictionary for classifying earthquake events from non-earthquake events. To evaluate the performance of our dictionary-based detection method, we test it on a labquake dataset from Penn State University, which contains 3,357,566 time-series data points sampled at 400 MHz. A total of 1,000 earthquake events are manually labeled, with lengths varying from 74 to 7,151 data points. Through comparison with other detection methods, we show that our detection method, which incorporates feature selection and dictionary learning, achieves 80.1% prediction accuracy and outperforms baseline methods including Template Matching (TM) and Support Vector Machines (SVM).
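The core idea, classification by reconstruction error against per-class learned dictionaries, can be sketched briefly. The window features and parameter values below are stand-ins for the paper's richer feature-generation and selection pipeline, not its actual implementation.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def window_features(x, win=256):
    """Illustrative per-window features: mean, std, peak amplitude,
    and zero-crossing rate (placeholders for the paper's feature set)."""
    w = x[: len(x) // win * win].reshape(-1, win)
    return np.column_stack([
        w.mean(1),
        w.std(1),
        np.abs(w).max(1),
        (np.diff(np.sign(w), axis=1) != 0).mean(1),
    ])

def fit_class_dictionaries(X, y, n_atoms=8):
    """Learn one sparse dictionary per class label."""
    dicts = {}
    for label in np.unique(y):
        d = MiniBatchDictionaryLearning(n_components=n_atoms,
                                        transform_algorithm="omp",
                                        transform_n_nonzero_coefs=3,
                                        random_state=0)
        dicts[label] = d.fit(X[y == label])
    return dicts

def classify(feat, dicts):
    """Assign the label whose dictionary reconstructs `feat` best."""
    errs = {}
    for label, d in dicts.items():
        code = d.transform(feat[None, :])
        errs[label] = np.linalg.norm(feat - code @ d.components_)
    return min(errs, key=errs.get)
```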
A likely source of earthquake clustering is static stress transfer between individual events. Previous attempts to quantify the role of static stress in earthquake triggering generally considered only the stress changes caused by large events, and often discarded data uncertainties. We conducted a robust two-fold empirical test of the static stress change hypothesis by accounting for all events of magnitude M >= 2.5, together with the location and focal-mechanism uncertainties provided by catalogs for Southern California between 1981 and 2010, first after resolving the focal-plane ambiguity and second after randomly choosing one of the two nodal planes. In both cases we find compelling evidence supporting static triggering, with stronger evidence after resolving the focal-plane ambiguity, above small (about 10 Pa) but consistently observed stress thresholds. The evidence for the static-triggering hypothesis is robust with respect to the choice of the friction coefficient, Skempton's coefficient, and magnitude threshold. Weak correlations between the Coulomb Index (the fraction of earthquakes that received a positive Coulomb stress change) and the coefficient of friction indicate that the role of normal stress in triggering is rather limited. Last but not least, we determined that the characteristic time for the loss of the stress-change memory of a single event is nearly independent of the amplitude of the Coulomb stress change and varies between ~95 and ~180 days, implying that forecasts based on static stress changes will have poor predictive skill beyond times larger than a few hundred days on average.
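The quantity at the center of this test is the Coulomb failure stress change. A minimal sketch under one common convention, constant apparent friction with Skempton's coefficient B folded into an effective friction term; the default parameter values below are illustrative, not the paper's.

```python
def coulomb_stress_change(d_tau, d_sigma_n, mu=0.4, skempton=0.75):
    """Coulomb failure stress change: dCFS = d_tau + mu*(1 - B)*d_sigma_n,
    where d_tau is the shear stress change resolved onto the receiver
    fault's slip direction and d_sigma_n is the normal stress change
    (positive = unclamping). mu and B are illustrative defaults."""
    mu_eff = mu * (1.0 - skempton)  # effective friction via Skempton's B
    return d_tau + mu_eff * d_sigma_n

# An event counts as positively stressed if dCFS exceeds a small
# threshold (the abstract reports thresholds on the order of 10 Pa).
print(coulomb_stress_change(d_tau=25.0, d_sigma_n=-40.0) > 10.0)
```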
In this paper we employ methods from statistical mechanics to model temporal correlations in time series. We put forward a methodology based on the Maximum Entropy principle to generate ensembles of time series constrained to preserve part of the temporal structure of an empirical time series of interest. We show that a constraint on the lag-one autocorrelation can be handled fully analytically, and corresponds to the well-known spherical model of a ferromagnet. We then extend the model to include constraints on more complex temporal correlations by means of perturbation theory, showing that this leads to substantial improvements in capturing the lag-one autocorrelation of the variance. We apply our approach to synthetic data and illustrate how it can be used to formulate expectations about the future values of a data-generating process.
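The spherical-model construction itself is beyond a short snippet, but the simplest stand-in for an ensemble constrained on mean, variance, and lag-one autocorrelation is a stationary Gaussian AR(1) process with matching parameters. A minimal sketch of that lag-one-constrained baseline, not the paper's method:

```python
import numpy as np

def ar1_surrogates(series, n_surrogates=100, seed=None):
    """Gaussian surrogates matching the empirical mean, variance, and
    lag-one autocorrelation of `series` via a stationary AR(1) process."""
    rng = np.random.default_rng(seed)
    x = np.asarray(series, dtype=float)
    mu, var, n = x.mean(), x.var(), len(x)
    xc = x - mu
    phi = (xc[:-1] * xc[1:]).sum() / (xc * xc).sum()  # lag-1 autocorr
    eps_std = np.sqrt(var * (1.0 - phi ** 2))         # innovation scale
    out = np.empty((n_surrogates, n))
    out[:, 0] = rng.normal(0.0, np.sqrt(var), n_surrogates)
    for t in range(1, n):
        out[:, t] = phi * out[:, t - 1] + rng.normal(0.0, eps_std, n_surrogates)
    return out + mu
```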
Visibility Graphs (VG) transform time series into graphs, enabling signal processing with advanced graph data-mining algorithms. In this paper, building on the classic Limited Penetrable Visibility Graph (LPVG) method, we propose a novel nonlinear mapping named the Circular Limited Penetrable Visibility Graph (CLPVG). Tests of the degree distribution and clustering coefficient on graphs generated from typical time series validate that CLPVG effectively captures the important features of time series and is more robust to noise than the traditional LPVG. Experiments on real-world time-series datasets of radio signals and electroencephalograms (EEG) also suggest that the structural features provided by CLPVG, rather than LPVG, are more useful for time-series classification, leading to higher accuracy. This classification performance can be further enhanced through structural feature expansion using Subgraph Networks (SGN). All of these results validate the effectiveness of our CLPVG model.
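For reference, the LPVG rule that CLPVG builds on relaxes the strict visibility criterion by allowing a bounded number of blocking samples. A minimal sketch follows; setting penetrable = 0 recovers the ordinary visibility graph, and the circular-arc sight line that distinguishes CLPVG is not reproduced here.

```python
import numpy as np
import networkx as nx

def lpvg(series, penetrable=1):
    """Limited Penetrable Visibility Graph: link samples i < j if at
    most `penetrable` intermediate samples rise to or above the straight
    line joining them."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n - 1):
        for j in range(i + 1, n):
            t = np.arange(i + 1, j)                  # intermediate times
            line = y[j] + (y[i] - y[j]) * (j - t) / (j - i)
            if np.count_nonzero(y[t] >= line) <= penetrable:
                g.add_edge(i, j)
    return g
```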