
Statistics of seismic cluster durations

Posted by Didier Sornette
Publication date: 2017
Research field: Physics
Paper language: English





Using the standard ETAS model of triggered seismicity, we present a rigorous theoretical analysis of the main statistical properties of temporal clusters, defined as the group of events triggered by a given main shock of fixed magnitude m that occurred at the origin of time, at times larger than some present time t. Using the formalism of the generating probability function (GPF), we derive explicit expressions for the GPF of the number of future offspring in a given temporal seismic cluster, defining, in particular, the statistics of the cluster durations and of the maximal magnitudes of the cluster offspring. We find the remarkable result that the magnitude difference between the largest and second largest event in the future temporal cluster is distributed according to the regular Gutenberg-Richter law that controls the unconditional distribution of earthquake magnitudes. For earthquakes obeying the Omori-Utsu law for the distribution of waiting times between triggering and triggered events, we show that the distribution of the durations of temporal clusters of events of magnitude above some detection threshold u has a power law tail that is fatter in the non-critical regime $n<1$ than in the critical case $n=1$. This paradoxical behavior can be rationalised by the fact that generations of all orders cascade very fast in the critical regime and accelerate the temporal decay of the cluster dynamics.
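As a complement to the analytical GPF results, the cascade structure described above can be illustrated by a direct Monte Carlo sketch of a temporal ETAS cluster: offspring magnitudes drawn from the Gutenberg-Richter law, triggering delays drawn from the Omori-Utsu density, and a productivity normalised to a chosen branching ratio n. All parameter values and the productivity normalisation below are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ETAS parameters (assumed values, not from the paper)
b = 1.0                 # Gutenberg-Richter b-value
m0 = 2.0                # minimum triggering magnitude
alpha = 0.8             # productivity exponent (alpha < b)
n = 0.9                 # branching ratio (n < 1: sub-critical regime)
c, theta = 1e-3, 0.2    # Omori-Utsu: phi(t) = theta * c**theta / (t + c)**(1 + theta)

def gr_magnitude(size):
    """Draw magnitudes from the Gutenberg-Richter law above m0."""
    return m0 + rng.exponential(1.0 / (b * np.log(10)), size)

def omori_waiting_time(size):
    """Draw triggering delays from the normalised Omori-Utsu density."""
    u = rng.random(size)
    return c * (u ** (-1.0 / theta) - 1.0)

def simulate_cluster(m_main, max_events=100_000):
    """Occurrence times and magnitudes of all events triggered, directly or
    indirectly, by a main shock of magnitude m_main occurring at t = 0."""
    # Productivity normalised so that the expected number of direct offspring,
    # averaged over Gutenberg-Richter magnitudes, equals the branching ratio n.
    norm = n * (b - alpha) / b          # valid for alpha < b
    mean_offspring = lambda m: norm * 10 ** (alpha * (m - m0))

    times, mags = [], []
    queue = [(0.0, m_main)]             # (time, magnitude) of triggering events
    while queue and len(times) < max_events:
        t_par, m_par = queue.pop()
        k = rng.poisson(mean_offspring(m_par))
        if k == 0:
            continue
        t_children = t_par + omori_waiting_time(k)
        m_children = gr_magnitude(k)
        times.extend(t_children)
        mags.extend(m_children)
        queue.extend(zip(t_children, m_children))
    return np.array(times), np.array(mags)

# Cluster duration above a detection threshold u: time of the last
# triggered event with magnitude >= u.
u = 3.0
durations = []
for _ in range(200):
    t, m = simulate_cluster(m_main=6.0)
    above = t[m >= u]
    durations.append(above.max() if above.size else 0.0)
print("median cluster duration:", np.median(durations))
```

Rerunning the same sketch with n close to 1 and with n well below 1 gives an empirical handle on the fatness of the cluster-duration tail discussed above.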


Read also

A. Saichev, 2004
We report an empirical determination of the probability density functions $P_{\text{data}}(r)$ of the number $r$ of earthquakes in finite space-time windows for the California catalog. We find a stable power law tail $P_{\text{data}}(r) \sim 1/r^{1+\mu}$ with exponent $\mu \approx 1.6$ for all space ($5 \times 5$ to $20 \times 20$ km$^2$) and time intervals (0.1 to 1000 days). These observations, as well as their non-universal dependence on the space-time window, are explained simultaneously for all space-time windows by solving one of the most used reference models in seismology (ETAS), which assumes that each earthquake can trigger other earthquakes. The data impose that active seismic regions are Cauchy-like fractals, whose exponent $\delta = 0.1 \pm 0.1$ is well constrained by the seismic rate data.
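The counting procedure behind $P_{\text{data}}(r)$ can be sketched as follows: bin a catalog of epicentres and origin times into space-time windows, count the events per window, and estimate the tail exponent. The catalog arrays below are synthetic placeholders so the sketch runs end to end, and the Hill estimator is one simple choice of tail estimator, not necessarily the one used in the paper.

```python
import numpy as np

# Hypothetical catalog: epicentre coordinates (km) and origin times (days).
# Synthesised here only as a placeholder; a real catalog would show the
# fat-tailed window counts described above.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 50_000), rng.uniform(0, 100, 50_000)
t = rng.uniform(0, 100, 50_000)

def window_counts(x, y, t, dx_km=20.0, dt_days=10.0):
    """Count events r in each (dx_km x dx_km, dt_days) space-time window."""
    ix = np.floor(x / dx_km).astype(int)
    iy = np.floor(y / dx_km).astype(int)
    it = np.floor(t / dt_days).astype(int)
    _, counts = np.unique(np.stack([ix, iy, it]), axis=1, return_counts=True)
    return counts

def hill_exponent(r, r_min=10):
    """Hill estimator of the tail exponent mu in P(r) ~ 1/r^(1+mu)."""
    tail = r[r >= r_min].astype(float)
    return tail.size / np.sum(np.log(tail / r_min))

r = window_counts(x, y, t)
print("estimated mu:", hill_exponent(r))
```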
We report on a novel stochastic analysis of seismic time series for the Earth's vertical velocity, using methods originally developed for complex hierarchical systems, and in particular for turbulent flows. Analysis of the fluctuations of the detrended increments of the series reveals a pronounced change of the shapes of the probability density functions (PDF) of the series increments. Before and close to an earthquake, the shape of the PDF and the long-range correlation in the increments both manifest significant changes. For a moderate or large-size earthquake, the typical time at which the PDF undergoes the transition from a Gaussian to a non-Gaussian shape is about 5-10 hours. Thus, the transition represents a new precursor for detecting such earthquakes.
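A minimal way to track such a Gaussian-to-non-Gaussian transition is to detrend the record in consecutive windows, form the increments, and monitor a non-Gaussianity measure over time. The sketch below uses excess kurtosis as a crude stand-in for the turbulence-style PDF analysis described above (an assumption of this sketch, not the paper's statistic), and the input trace is a synthetic placeholder.

```python
import numpy as np
from scipy.stats import kurtosis

def increment_nongaussianity(v, window, lag=1):
    """Excess kurtosis of linearly detrended increments of v in consecutive
    windows; values near 0 indicate a roughly Gaussian increment PDF."""
    out = []
    idx = np.arange(window)
    for start in range(0, len(v) - window + 1, window):
        seg = v[start:start + window]
        detrended = seg - np.polyval(np.polyfit(idx, seg, 1), idx)
        inc = detrended[lag:] - detrended[:-lag]
        out.append(kurtosis(inc))          # Fisher definition: Gaussian -> 0
    return np.array(out)

# Placeholder record: Gaussian noise followed by a heavy-tailed burst,
# standing in for the change of regime described above.
rng = np.random.default_rng(2)
v = np.concatenate([rng.normal(size=20_000), rng.standard_t(df=3, size=5_000)])
print(np.round(increment_nongaussianity(v, window=1000), 2))
```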
In this paper, we present an analysis of seismic spectra that were calculated from all broadband channels (BH?) made available through IRIS, NIED F-net and Orfeus servers covering the past five years and beyond. A general characterization of the data is given in terms of spectral histograms and data-availability plots. We show that the spectral information can easily be categorized in time and regions. Spectral histograms indicate that seismic stations exist in Africa, Australia and Antarctica that measure spectra significantly below the global low-noise models above 1 Hz. We investigate world-wide coherence between the seismic spectra and other data sets like proximity to cities, station elevation, earthquake frequency, and wind speeds. Elevation of seismic stations in the US is strongly anti-correlated with seismic noise near 0.2 Hz and again above 1.5 Hz. Urban settlements are shown to produce excess noise above 1 Hz, but correlation curves look very different depending on the region. It is shown that wind speeds can be strongly correlated with seismic noise above 0.1 Hz, whereas earthquakes produce seismic noise that shows most clearly in correlation around 80 mHz.
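One common way to build the kind of spectral histogram mentioned above is to compute power spectral densities of successive segments of a record and stack them into a frequency-versus-power histogram. The sketch below uses Welch's method on a synthetic placeholder trace; the sampling rate, segment length, and binning are assumed values, not those of the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_histogram(data, fs, seg_seconds=3600, power_bins=50):
    """Stack Welch PSDs of successive segments into a 2D histogram
    (frequency x dB power), in the spirit of a spectral histogram."""
    nper = int(seg_seconds * fs)
    psds = []
    for start in range(0, len(data) - nper, nper):
        f, pxx = welch(data[start:start + nper], fs=fs, nperseg=nper // 8)
        psds.append(10 * np.log10(pxx + 1e-30))   # power in dB
    psds = np.array(psds)
    edges = np.linspace(psds.min(), psds.max(), power_bins + 1)
    hist = np.array([np.histogram(psds[:, i], bins=edges)[0]
                     for i in range(psds.shape[1])])
    return f, edges, hist   # hist[i, j]: segments in power bin j at frequency f[i]

# Placeholder one-day record sampled at 20 Hz, standing in for a broadband channel.
rng = np.random.default_rng(3)
trace = rng.normal(size=20 * 86_400)
f, edges, hist = spectral_histogram(trace, fs=20.0)
print(hist.shape)
```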
I. Loris, H. Douma, G. Nolet, 2010
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear $\ell_2$ penalties are compared to so-called sparsity-promoting $\ell_1$ and $\ell_0$ penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an $\ell_2$ norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer $\ell_1$ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple $\ell_2$ minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the $\ell_0$ method produced notable artifacts. In addition we show how nonlinear $\ell_1$ methods for finding sparse models can be competitive in speed with the widely used $\ell_2$ methods, certainly under noisy conditions, so that there is no need to shun $\ell_1$ penalizations.
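The contrast between $\ell_2$ and $\ell_1$ penalties can be illustrated on a toy linear inverse problem rather than full 3D tomography. The sketch below uses a random forward operator and a sparse model, solves the Tikhonov problem in closed form, and solves the $\ell_1$ problem with ISTA, a standard soft-thresholding iteration; the operator, penalty weights, and iteration count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear inverse problem d = G m + noise standing in for tomography:
# a sparse "model" m_true observed through a random forward operator G.
n_data, n_model = 80, 200
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_data)
m_true = np.zeros(n_model)
m_true[rng.choice(n_model, 10, replace=False)] = rng.normal(3, 1, 10)
d = G @ m_true + 0.05 * rng.normal(size=n_data)

# l2 (Tikhonov) solution: minimize ||G m - d||^2 + lam * ||m||^2
lam = 0.1
m_l2 = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d)

# l1 solution via ISTA: minimize ||G m - d||^2 + lam * ||m||_1
def ista(G, d, lam, n_iter=2000):
    L = np.linalg.norm(G, 2) ** 2          # spectral norm squared
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        z = m - G.T @ (G @ m - d) / L                              # gradient step
        m = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0)  # soft threshold
    return m

m_l1 = ista(G, d, lam=0.05)
print("nonzeros  l2:", np.sum(np.abs(m_l2) > 1e-3), " l1:", np.sum(np.abs(m_l1) > 1e-3))
print("model error  l2:", np.linalg.norm(m_l2 - m_true),
      " l1:", np.linalg.norm(m_l1 - m_true))
```

The $\ell_2$ solution spreads energy over all model parameters, while the $\ell_1$ solution recovers a sparse model, mirroring the trade-off described above.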
This paper introduces novel deep recurrent neural network architectures for Velocity Model Building (VMB), going beyond what Araya-Polo et al. (2018) pioneered with Machine Learning-based seismic tomography built on convolutional, non-recurrent neural networks. Our investigation includes the utilization of basic recurrent neural network (RNN) cells, as well as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells. Performance evaluation reveals that salt bodies are consistently predicted more accurately by GRU- and LSTM-based architectures than by non-recurrent architectures. The results take us a step closer to the final goal of a reliable, fully Machine Learning-based tomography from pre-stack data, which when achieved will reduce the VMB turnaround from weeks to days.
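A minimal sketch of the recurrent idea is a GRU encoder that reads a shot gather as a time sequence over receivers and regresses a depth-discretised 1D velocity profile. The architecture, tensor shapes, and class name below are illustrative assumptions, far smaller than a realistic VMB network and not the architecture of the paper.

```python
import torch
import torch.nn as nn

class GRUVelocityNet(nn.Module):
    """Toy GRU encoder mapping a shot gather (time x receivers) to a
    depth-discretised 1D velocity profile."""
    def __init__(self, n_receivers=32, hidden=128, n_depth=100):
        super().__init__()
        self.gru = nn.GRU(input_size=n_receivers, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_depth)

    def forward(self, x):            # x: (batch, time_steps, n_receivers)
        _, h = self.gru(x)           # h: (num_layers, batch, hidden)
        return self.head(h[-1])      # (batch, n_depth) velocity profile

model = GRUVelocityNet()
gather = torch.randn(4, 500, 32)     # 4 synthetic shot gathers
profiles = model(gather)
print(profiles.shape)                # torch.Size([4, 100])
```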