
Systematic Assessment of the Static Stress-Triggering Hypothesis using Inter-earthquake Time Statistics

Published by Shyam Nandan
Publication date: 2016
Research field: Physics
Paper language: English





A likely source of earthquake clustering is static stress transfer between individual events. Previous attempts to quantify the role of static stress in earthquake triggering generally considered only the stress changes caused by large events and often discarded data uncertainties. We conducted a robust two-fold empirical test of the static stress change hypothesis by accounting for all events of magnitude M >= 2.5, together with the location and focal mechanism uncertainties provided by catalogs for Southern California between 1981 and 2010: first after resolving the focal plane ambiguity and second after randomly choosing one of the two nodal planes. In both cases, we find compelling evidence supporting static triggering, with stronger evidence after resolving the focal plane ambiguity, above small (about 10 Pa) but consistently observed stress thresholds. The evidence for the static triggering hypothesis is robust with respect to the choice of the friction coefficient, Skempton's coefficient, and the magnitude threshold. Weak correlations between the Coulomb Index (the fraction of earthquakes that received a positive Coulomb stress change) and the coefficient of friction indicate that the role of normal stress in triggering is rather limited. Finally, we determined that the characteristic time for the loss of the stress change memory of a single event is nearly independent of the amplitude of the Coulomb stress change and varies between ~95 and ~180 days, implying that forecasts based on static stress changes will have poor predictive skill beyond times larger than a few hundred days on average.
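The two quantities at the heart of the test above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the function names and default values are assumptions, using the standard effective-friction form of the Coulomb failure stress change, dCFS = d_tau + mu*(1 - B)*d_sigma_n, with B Skempton's coefficient.

```python
import numpy as np

def coulomb_stress_change(delta_shear, delta_normal, mu=0.4, skempton=0.5):
    """Coulomb failure stress change on a receiver plane:
    dCFS = d_tau + mu' * d_sigma_n, with effective friction
    mu' = mu * (1 - B) absorbing pore-pressure (Skempton's B) effects.
    Tension-positive convention: unclamping promotes failure."""
    mu_eff = mu * (1.0 - skempton)
    return delta_shear + mu_eff * delta_normal

def coulomb_index(dcfs_values, threshold=10.0):
    """Fraction of receiver events whose Coulomb stress change exceeds
    the triggering threshold (the paper reports thresholds near ~10 Pa)."""
    dcfs = np.asarray(dcfs_values, dtype=float)
    return float(np.mean(dcfs > threshold))
```

With stresses in Pa, `coulomb_stress_change(100.0, 50.0)` gives 110.0 under the defaults; the weak dependence of the Coulomb Index on `mu` is what the paper uses to argue that normal stress plays a limited role.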




Read also

As part of intermediate-term monitoring of seismic activity aimed at predicting the world's largest earthquakes, the seismic dynamics of the Earth's lithosphere is analysed as a single whole, the ultimate scale of this complex hierarchical non-linear system. The present study demonstrates that the lithosphere does behave, at least on an intermediate-term scale, as a non-linear dynamic system that reveals classical symptoms of instability at the approach of a catastrophe, i.e., a mega-earthquake. These are: (i) transformation of the magnitude distribution, (ii) spatial redistribution of seismic activity, (iii) rise and acceleration of activity, (iv) change of dependencies across magnitudes of different types, and other patterns of collective behaviour. The observed global-scale seismic behaviour implies a state of criticality of the Earth's lithosphere in the last decade.
Conventional wisdom in seismology, fundamentally related to the UV divergence problem in physics, holds that the smallest earthquakes, which are numerous and often go undetected, dominate the triggering of major earthquakes, making the prediction of the latter difficult if not inherently impossible. By developing a rigorous validation procedure, we show that, in fact, large earthquakes (above magnitude 6.3 in California) are preferentially triggered by large events. Because of the magnitude correlations intrinsic in the validated model, we further rationalize the existence of earthquake doublets. These findings have far-reaching implications for short-term and medium-term seismic risk assessment, as well as for the development of a deeper theory without a UV cut-off that is locally self-similar.
We propose a new type of earthquake precursor based on the analysis of correlation dynamics between geophysical signals of different nature. The precursor is found using a two-parameter cross-correlation function introduced within the framework of flicker-noise spectroscopy, a general statistical physics approach to the analysis of time series. We consider an example of cross-correlation analysis for water salinity time series, an integral characteristic of the chemical composition of groundwater, and geoacoustic emissions recorded at the G-1 borehole on the Kamchatka peninsula in the time frame from 2001 to 2003, which is characterized by a sequence of three groups of significant seismic events. We found that cross-correlation precursors took place 27, 31, and 35 days ahead of the strongest earthquakes for each group of seismic events, respectively. At the same time, precursory anomalies in the signals themselves were observed only in the geoacoustic emissions for one group of earthquakes.
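The basic ingredient of such an analysis, correlating two heterogeneous signals as a function of time lag, can be sketched as follows. This is a simplified stand-in, not the actual flicker-noise-spectroscopy correlator: the real function has a second parameter (the averaging window), which this sketch fixes to the full series length, and all names are assumptions.

```python
import numpy as np

def lagged_cross_correlation(x, y, max_lag):
    """Cross-correlation of two standardized series as a function of lag k,
    c(k) = <x[t+k] * y[t]>, computed over the overlapping samples only.
    Returns a dict mapping each lag in [-max_lag, max_lag] to c(k)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return {k: float(np.mean(x[max(0, k):n + min(0, k)] *
                             y[max(0, -k):n - max(0, k)]))
            for k in range(-max_lag, max_lag + 1)}
```

A precursory anomaly would show up as a systematic change of this lag profile (e.g. salinity vs. geoacoustic emission) in the weeks preceding an event.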
We present the condensation method, which exploits the heterogeneity of the probability distribution functions (PDFs) of event locations to improve the spatial information content of seismic catalogs. The method reduces the size of a seismic catalog while improving access to its spatial information content. The event PDFs are first ranked by decreasing location error and then successively condensed onto better-located, lower-variance event PDFs. The resulting condensed catalog attributes a different weight to each event, providing an optimal spatial representation with respect to the spatially varying location capability of the seismic network. Synthetic tests on fractal distributions perturbed with realistic location errors show that condensation improves the spatial information content of the original catalog. Applied to Southern California seismicity, the new condensed catalog highlights major mapped fault traces and reveals possible additional structures while reducing the catalog length by ~25%. The condensation method allows us to account for location error information within a point-based spatial analysis. We demonstrate this by comparing the multifractal properties of the condensed catalog locations with those of the original catalog. We find different spatial scaling regimes characterized by distinct multifractal spectra and separated by transition scales. We interpret the upper scale as agreeing with the thickness of the brittle crust, while the lower scale (2.5 km) might depend on the relocation procedure. Accounting for these new results, the Epidemic Type Aftershock Model formulation suggests that, contrary to previous studies, large earthquakes dominate the earthquake triggering process. This implies that the limited capability of detecting small-magnitude events cannot be used to argue that earthquakes are unpredictable in general.
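The rank-and-condense step described above can be sketched with point events instead of full PDFs. This toy version is an assumption-laden simplification, not the published algorithm: real condensation merges entire location PDFs, whereas here a poorly located event simply transfers its weight to the nearest better-located event within its own error radius.

```python
import numpy as np

def condense_catalog(locations, errors, radius_factor=1.0):
    """Toy catalog condensation. Events are processed from the largest to
    the smallest location error; each one is merged (its weight
    transferred) onto the nearest better-located event lying within
    radius_factor * its own location error, otherwise it is kept.
    Returns per-event weights (0 = condensed away); total weight,
    i.e. the total number of events, is conserved."""
    locations = np.asarray(locations, float)
    errors = np.asarray(errors, float)
    weights = np.ones(len(errors))
    for i in np.argsort(-errors):              # worst-located first
        better = np.where(errors < errors[i])[0]
        if better.size == 0:
            continue                           # nothing better-located
        d = np.linalg.norm(locations[better] - locations[i], axis=1)
        if d.min() <= radius_factor * errors[i]:
            j = better[np.argmin(d)]
            weights[j] += weights[i]           # transfer weight
            weights[i] = 0.0                   # event condensed away
    return weights
```

Events with zero weight can be dropped, shrinking the catalog while the surviving weights preserve the total seismicity, which is the sense in which the condensed catalog remains an unbiased spatial representation.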
Natural earthquake fault systems are highly non-homogeneous. The inhomogeneities occur because the earth is made of a variety of materials which hold and dissipate stress differently. In this work, we study scaling in earthquake fault models which are variations of the Olami-Feder-Christensen (OFC) and Rundle-Jackson-Brown (RJB) models. We use the scaling to explore the effect of spatial inhomogeneities due to damage and inhomogeneous stress dissipation in the earthquake-fault-like systems when the stress transfer range is long, but not necessarily longer than the length scale associated with the inhomogeneities of the system. We find that the scaling depends not only on the amount of damage, but also on the spatial distribution of that damage.
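The OFC-type dynamics with inhomogeneous dissipation can be sketched as a driven cellular automaton. This is a minimal nearest-neighbour sketch, not the authors' long-range model: the grid size, threshold, and the site-dependent transfer field `alpha` (where `alpha < 0.25` means dissipative, and spatial variation in `alpha` plays the role of damage) are illustrative assumptions.

```python
import numpy as np

def ofc_step(stress, alpha, f_threshold=1.0):
    """One avalanche of an OFC-like model on a 2D grid with
    site-dependent dissipation: alpha[i, j] is the fraction of a
    toppling site's stress passed to EACH of its 4 neighbours.
    Drives the lattice uniformly to the next failure, relaxes it,
    and returns the updated stress field and the avalanche size."""
    stress = stress + (f_threshold - stress.max())  # uniform drive
    size = 0
    while (over := np.argwhere(stress >= f_threshold)).size:
        for i, j in over:
            s = stress[i, j]
            stress[i, j] = 0.0                      # site topples
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < stress.shape[0] and 0 <= nj < stress.shape[1]:
                    stress[ni, nj] += alpha[i, j] * s  # local transfer
    return stress, size
```

Repeating `ofc_step` and histogramming the avalanche sizes is how scaling is measured in such models; making `alpha` spatially heterogeneous is the simplest way to probe the damage-distribution effect the abstract describes.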