
Turbulent-Like Behavior of Seismic Time Series

Publication date: 2009
Field: Physics
Language: English





We report a novel stochastic analysis of seismic time series of the Earth's vertical velocity, using methods originally developed for complex hierarchical systems and, in particular, for turbulent flows. Analysis of the fluctuations of the detrended increments of the series reveals a pronounced change in the shape of the probability density function (PDF) of the increments. Before and close to an earthquake, both the shape of the PDF and the long-range correlation in the increments change significantly. For a moderate or large earthquake, the PDF typically undergoes the transition from a Gaussian to a non-Gaussian form about 5-10 hours beforehand. This transition therefore represents a new precursor for detecting such earthquakes.
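A minimal sketch of this kind of increment analysis, assuming the record is a regularly sampled NumPy array; the function names, lag, and window sizes are illustrative placeholders, not the authors' code. It tracks the shape of the increment PDF through its excess kurtosis, which vanishes for a Gaussian:

```python
import numpy as np

def detrended_increments(v, tau):
    """Increments v(t + tau) - v(t) of the record at lag tau, with the
    window mean removed (a crude stand-in for the detrending step)."""
    inc = v[tau:] - v[:-tau]
    return inc - inc.mean()

def excess_kurtosis(x):
    """Excess kurtosis: ~0 for a Gaussian PDF, large and positive for
    heavy-tailed (non-Gaussian) increment PDFs."""
    x = x - np.mean(x)
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

def gaussianity_track(v, tau=10, window=3600, step=1800):
    """Slide a window along the record and track the shape of the
    increment PDF; a sustained rise of the excess kurtosis marks the
    Gaussian -> non-Gaussian transition described above."""
    return np.array([
        excess_kurtosis(detrended_increments(v[i:i + window], tau))
        for i in range(0, len(v) - window, step)
    ])
```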

Related Research

Electric signals have recently been recorded at the Earth's surface with amplitudes appreciably larger than those hitherto reported. Their entropy in natural time is smaller than the value $S_u$ for a "uniform" distribution, and the same holds for their entropy upon time reversal. This behavior, supported by numerical simulations of fBm time series and of an on-off intermittency model, stems from infinitely ranged long-range temporal correlations; hence these signals are probably Seismic Electric Signals (critical dynamics). The entropy fluctuations are found to increase upon approaching bursting, which is reminiscent of the behavior that identifies individuals at risk of sudden cardiac death when analysing their electrocardiograms.
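For orientation, here is a minimal sketch of the natural-time entropy referred to above, following the standard definitions of natural time analysis: the $k$-th of $N$ pulses gets the natural time $\chi_k = k/N$ and weight $p_k = Q_k / \sum_j Q_j$ built from its energy $Q_k$, the entropy is $S = \langle \chi \ln \chi \rangle - \langle \chi \rangle \ln \langle \chi \rangle$, and $S_u = (\ln 2)/2 - 1/4 \approx 0.0966$ for a uniform distribution. The array `Q` of pulse energies is a placeholder input:

```python
import numpy as np

S_U = np.log(2) / 2 - 0.25  # entropy of a "uniform" distribution, ~0.0966

def natural_time_entropy(Q):
    """S = <chi ln chi> - <chi> ln <chi>, with chi_k = k/N and weights
    p_k = Q_k / sum(Q) given by the normalized pulse energies."""
    Q = np.asarray(Q, dtype=float)
    N = len(Q)
    chi = np.arange(1, N + 1) / N
    p = Q / Q.sum()
    mean_chi = np.sum(p * chi)
    return np.sum(p * chi * np.log(chi)) - mean_chi * np.log(mean_chi)

def entropy_under_time_reversal(Q):
    """Same entropy after reversing the time order of the pulses."""
    return natural_time_entropy(np.asarray(Q)[::-1])

# Candidate SES behavior: both entropies fall below S_u, e.g.
# natural_time_entropy(Q) < S_U and entropy_under_time_reversal(Q) < S_U
```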
A. Saichev (2004)
We report an empirical determination of the probability density function $P(r)$ of the number $r$ of earthquakes in finite space-time windows for the California catalog, over fixed spatial boxes of $5 \times 5$ km$^2$ and time intervals $dt = 1$, 10, 100 and 1000 days. We find a stable power-law tail $P(r) \sim 1/r^{1+\mu}$ with exponent $\mu \approx 1.6$ for all time intervals. These observations are explained by a simple stochastic branching process previously studied by many authors, the ETAS (epidemic-type aftershock sequence) model, which assumes that each earthquake can trigger other earthquakes ("aftershocks"). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. We develop the full theory in terms of generating functions for describing the space-time organization of earthquake sequences and develop several approximations to solve the equations. Calibration of the theory to the empirical observations shows that it is essential to augment the ETAS model by taking into account the pre-existing frozen heterogeneity of spontaneous earthquake sources. This seems natural in view of the complex multi-scale nature of the fault networks on which earthquakes nucleate. Our extended theory accounts for the empirical observations satisfactorily. In particular, the adjustable parameters are determined by fitting the largest time window, $dt = 1000$ days, and are then kept frozen in the formulas for the other time scales, with very good agreement with the empirical data.
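As a rough illustration of the branching mechanism invoked here, the sketch below simulates a single ETAS cascade: each event of magnitude $m$ triggers a Poisson number of direct aftershocks with productivity $\propto 10^{\alpha(m - m_0)}$, their delays drawn from an Omori-Utsu kernel $\propto (t + c)^{-p}$ and their magnitudes from the Gutenberg-Richter law. The parameter values are arbitrary placeholders, not the calibrated ones from the paper, and the spatial part and frozen source heterogeneity are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ETAS parameters (placeholders, not the fitted values)
m0, b, alpha, k, c, p = 2.0, 1.0, 0.8, 0.02, 0.01, 1.2

def gutenberg_richter(n):
    """Magnitudes m >= m0 with P(>m) = 10^{-b (m - m0)}."""
    return m0 + rng.exponential(1 / (b * np.log(10)), n)

def omori_times(n):
    """Delays drawn by inverse CDF from f(t) ~ (t + c)^{-p}, p > 1."""
    u = rng.random(n)
    return c * ((1 - u) ** (1 / (1 - p)) - 1)

def etas_cascade(t, m, horizon=1000.0):
    """Recursively generate the full aftershock cascade of one event."""
    events = [(t, m)]
    n_kids = rng.poisson(k * 10 ** (alpha * (m - m0)))
    for dt, mag in zip(omori_times(n_kids), gutenberg_richter(n_kids)):
        if t + dt < horizon:
            events += etas_cascade(t + dt, mag, horizon)
    return events

# e.g. sorted(etas_cascade(0.0, 5.0)): a magnitude-5 mainshock plus its
# cascade; window counts r of many such catalogs estimate P(r)
```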
Lingchen Zhu, Entao Liu (2017)
Seismic data quality is vital to geophysical applications, so methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising based on double-sparsity dictionary learning, extending previous work that addressed denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary update as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries, which lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the dataset without introducing pseudo-Gibbs artifacts, when compared to other directional multiscale transform methods such as curvelets.
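A compressed sketch of how a masking operator enters the sparse-coding step: the mask restricts the residual fit to the observed samples only, so the dictionary can still reconstruct the missing traces. Plain orthogonal matching pursuit with a fixed dictionary is used here for brevity; the paper's double-sparsity structure and weighted low-rank dictionary update are not reproduced:

```python
import numpy as np

def masked_omp(y, D, mask, n_nonzero):
    """Orthogonal matching pursuit restricted to observed samples:
    approximately solve min ||M(y - Dx)||_2 with x sparse, where M
    keeps only the entries flagged True in `mask` (live traces)."""
    Dm, ym = D[mask], y[mask]          # drop rows with missing data
    support, residual = [], ym.copy()
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(Dm.T @ residual))))
        # least-squares fit on the observed rows of the chosen atoms
        coef, *_ = np.linalg.lstsq(Dm[:, support], ym, rcond=None)
        residual = ym - Dm[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return D @ x                       # full reconstruction fills the gaps
```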
We introduce the concept of time series motifs for time series analysis. Time series motifs consider not only the spatial information of mutual visibility but also the temporal information of relative magnitude between the data points. We study the profiles of the six triadic time series motifs. The six motif occurrence frequencies are derived for uncorrelated time series and are approximately linear functions of the length of the time series. The corresponding motif profile thus converges to a constant vector $(0.2, 0.2, 0.1, 0.2, 0.1, 0.2)$. These analytical results have been verified by numerical simulations. For fractional Gaussian noises, numerical simulations unveil a nonlinear dependence of the motif occurrence frequencies on the Hurst exponent. Applications of time series motif analysis show that the motif occurrence frequency distributions are able to capture the different dynamics in the heartbeat rates of healthy subjects, congestive heart failure (CHF) subjects, and atrial fibrillation (AF) subjects, and in the price fluctuations of bullish and bearish markets. Our method shows its potential power to classify different types of time series and to test the time irreversibility of time series.
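The sketch below illustrates one plausible reading of this construction, which is our assumption rather than the paper's exact definition: enumerate triads of mutually visible points under the natural visibility criterion and histogram the six possible magnitude orderings. It is brute force, so only suitable for short series:

```python
import numpy as np
from itertools import combinations, permutations

def visible(x, i, j):
    """Natural visibility: points i < j see each other if every point
    between them lies strictly below the connecting straight line."""
    if j - i == 1:
        return True
    ts = np.arange(i + 1, j)
    line = x[i] + (x[j] - x[i]) * (ts - i) / (j - i)
    return bool(np.all(x[i + 1:j] < line))

def triadic_motif_profile(x):
    """Frequencies of the six magnitude orderings among mutually
    visible triads (an assumed reading of the motif definition)."""
    labels = {perm: n for n, perm in enumerate(permutations((0, 1, 2)))}
    counts = np.zeros(6)
    for i, j, k in combinations(range(len(x)), 3):
        if visible(x, i, j) and visible(x, j, k) and visible(x, i, k):
            order = tuple(int(a) for a in np.argsort([x[i], x[j], x[k]]))
            counts[labels[order]] += 1
    return counts / counts.sum()
```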
A likely source of earthquake clustering is static stress transfer between individual events. Previous attempts to quantify the role of static stress in earthquake triggering generally considered only the stress changes caused by large events and often discarded data uncertainties. We conducted a robust two-fold empirical test of the static stress change hypothesis by accounting for all events of magnitude $M \ge 2.5$, together with the location and focal mechanism uncertainties provided by catalogs for Southern California between 1981 and 2010, first after resolving the focal plane ambiguity and second after randomly choosing one of the two nodal planes. For both cases, we find compelling evidence supporting static triggering, with stronger evidence after resolving the focal plane ambiguity, above small (about 10 Pa) but consistently observed stress thresholds. The evidence for the static triggering hypothesis is robust with respect to the choice of the friction coefficient, Skempton's coefficient, and the magnitude threshold. Weak correlations between the Coulomb Index (the fraction of earthquakes that received a positive Coulomb stress change) and the coefficient of friction indicate that the role of normal stress in triggering is rather limited. Last but not least, we determined that the characteristic time for the loss of the stress change memory of a single event is nearly independent of the amplitude of the Coulomb stress change and varies between ~95 and ~180 days, implying that forecasts based on static stress changes will have poor predictive skill beyond times larger than a few hundred days on average.
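For reference, a small sketch of the quantities involved, using the standard Coulomb failure stress definitions rather than the paper's code: the Coulomb stress change on a receiver fault is $\Delta CFS = \Delta\tau + \mu' \Delta\sigma_n$, with the effective friction $\mu' = \mu(1 - B)$ absorbing Skempton's coefficient $B$, and the Coulomb Index is the fraction of target events whose $\Delta CFS$ exceeds a small positive threshold:

```python
import numpy as np

def coulomb_stress_change(d_shear, d_normal, mu=0.4, skempton=0.5):
    """Delta CFS = d_tau + mu' * d_sigma_n, with effective friction
    mu' = mu * (1 - B) folding in Skempton's coefficient B; d_normal
    is taken positive for unclamping of the receiver fault (Pa)."""
    mu_eff = mu * (1.0 - skempton)
    return d_shear + mu_eff * d_normal

def coulomb_index(d_shear, d_normal, threshold=10.0, **kw):
    """Fraction of target events whose Coulomb stress change exceeds
    a small positive threshold (here 10 Pa, the scale quoted above)."""
    dcfs = coulomb_stress_change(np.asarray(d_shear),
                                 np.asarray(d_normal), **kw)
    return float(np.mean(dcfs > threshold))
```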
