
Triggering of large earthquakes is driven by their twins

Added by Shyam Nandan
Publication date: 2021
Field: Physics
Language: English





In a manner fundamentally related to the ultraviolet (UV) divergence problem in physics, the conventional wisdom in seismology holds that the smallest earthquakes, which are numerous and often go undetected, dominate the triggering of major earthquakes, making the latter difficult, if not inherently impossible, to predict. By developing a rigorous validation procedure, we show that, in fact, large earthquakes (above magnitude 6.3 in California) are preferentially triggered by large events. Because of the magnitude correlations intrinsic to the validated model, we further rationalize the existence of earthquake doublets. These findings have far-reaching implications for short-term and medium-term seismic risk assessment, as well as for the development of a deeper theory without a UV cut-off that is locally self-similar.
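The UV-divergence analogy can be made concrete: in an ETAS-type description, events of magnitude m occur with Gutenberg-Richter frequency proportional to 10^(-bm), while each triggers on average a number of direct aftershocks proportional to 10^(am), so a magnitude band's share of total triggering scales as 10^((a-b)m) and the smallest events dominate whenever a < b. The following minimal Python sketch illustrates this bookkeeping; the exponent values are illustrative assumptions, not estimates from the paper.

    import numpy as np

    # Illustrative ETAS-type bookkeeping (assumed exponents, not fitted values):
    # abundance of magnitude-m events ~ 10**(-b*m), mean direct aftershocks per
    # event ~ 10**(alpha*m), so a band's share of triggering ~ 10**((alpha-b)*m).
    b, alpha = 1.0, 0.8
    m = np.arange(2.0, 8.0)  # magnitude bands 2, 3, ..., 7

    share = 10.0 ** ((alpha - b) * m)
    share /= share.sum()

    for mi, si in zip(m, share):
        print(f"m = {mi:.0f}: share of total triggering = {si:.1%}")
    # With alpha < b the smallest bands dominate (the 'UV divergence' analogy);
    # the abstract argues that magnitude correlations overturn this picture
    # for the largest events.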



Related research

The driving concept behind one of the most successful statistical forecasting models, the ETAS model, has been that seismicity is driven by spontaneously occurring background earthquakes that cascade into multitudes of triggered earthquakes. In nearly all generalizations of the ETAS model, the magnitudes of the background and the triggered earthquakes are assumed to follow the Gutenberg-Richter law with the same exponent (β-value). Furthermore, the magnitudes of the triggered earthquakes are always assumed to be independent of the magnitude of the triggering earthquake. Using an EM algorithm applied to the Californian earthquake catalogue, we show that the distribution of earthquake magnitudes exhibits three distinct β-values: β_b for background events, and β_a − δ and β_a + δ, respectively, for triggered events below and above the magnitude of the triggering earthquake; the last two values express a correlation between the magnitudes of triggered events and that of the triggering earthquake, a feature so far absent in all proposed operational generalizations of the ETAS model. The ETAS model incorporating this kinked magnitude distribution provides by far the best description of seismic catalogs and could thus have the best forecasting potential. We speculate that the kinked magnitude distribution may result from the system tending to restore the symmetry of the regional displacement gradient tensor that has been broken by the initiating event. The general emerging concept could be that, while the background events occur primarily to accommodate the symmetric stress tensor at the boundaries of the system, the triggered earthquakes are quasi-Goldstone fluctuations of a self-organized critical deformation state.
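For concreteness, here is a minimal sketch of sampling from such a kinked magnitude distribution: an exponential (Gutenberg-Richter-type) density with slope β_a − δ below the triggering magnitude and β_a + δ above it, continuous at the kink. All numerical values are illustrative assumptions, and β here denotes the exponent in natural-log units (β = b ln 10).

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_triggered_magnitude(m_trig, m0=2.0, beta_a=2.0, delta=0.5, size=1):
        # Kinked exponential density: slope beta_a - delta on [m0, m_trig],
        # slope beta_a + delta above m_trig, continuous at the kink.
        b_lo, b_hi = beta_a - delta, beta_a + delta
        # Unnormalized probability mass of each branch.
        w_lo = (1.0 - np.exp(-b_lo * (m_trig - m0))) / b_lo
        w_hi = np.exp(-b_lo * (m_trig - m0)) / b_hi
        below = rng.random(size) < w_lo / (w_lo + w_hi)
        m = np.empty(size)
        # Truncated exponential on [m0, m_trig] via inverse-CDF sampling.
        v = rng.random(below.sum())
        m[below] = m0 - np.log(1.0 - v * (1.0 - np.exp(-b_lo * (m_trig - m0)))) / b_lo
        # Unbounded exponential tail above m_trig.
        m[~below] = m_trig + rng.exponential(1.0 / b_hi, size=(~below).sum())
        return m

    mags = sample_triggered_magnitude(m_trig=5.0, size=100_000)
    print(f"fraction of triggered events exceeding the trigger: {np.mean(mags > 5.0):.3f}")

Because the slope is steeper above the kink, events larger than their trigger are rarer than a single-β law would predict, which is one way to encode the correlation between triggered and triggering magnitudes described above.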
A likely source of earthquake clustering is static stress transfer between individual events. Previous attempts to quantify the role of static stress in earthquake triggering generally considered only the stress changes caused by large events, and often discarded data uncertainties. We conducted a robust two-fold empirical test of the static stress change hypothesis by accounting for all events of magnitude M ≥ 2.5, together with the location and focal mechanism uncertainties provided by catalogs for Southern California between 1981 and 2010, first after resolving the focal plane ambiguity and second after randomly choosing one of the two nodal planes. In both cases, we find compelling evidence supporting static triggering above small (about 10 Pa) but consistently observed stress thresholds, with stronger evidence after resolving the focal plane ambiguity. The evidence for the static triggering hypothesis is robust with respect to the choice of the friction coefficient, Skempton's coefficient and the magnitude threshold. Weak correlations between the Coulomb Index (the fraction of earthquakes that received a positive Coulomb stress change) and the coefficient of friction indicate that the role of normal stress in triggering is rather limited. Last but not least, we determined that the characteristic time for the loss of the stress change memory of a single event is nearly independent of the amplitude of the Coulomb stress change and varies between ~95 and ~180 days, implying that forecasts based on static stress changes will have poor predictive skill beyond a few hundred days on average.
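The quantities involved can be illustrated with a short sketch. A common form of the Coulomb failure stress change is ΔCFS = Δτ + μ′Δσ_n, where Δτ is the shear stress change resolved on the receiver fault, Δσ_n the normal stress change (positive for unclamping), and μ′ = μ(1 − B) an effective friction coefficient absorbing pore-pressure effects through Skempton's coefficient B. This is a generic formulation with assumed parameter values, not the paper's exact pipeline.

    import numpy as np

    def coulomb_index(d_tau, d_sigma_n, mu=0.4, skempton=0.5, threshold=10.0):
        # dCFS = d_tau + mu' * d_sigma_n, with mu' = mu * (1 - B); stresses in Pa.
        # Returns the Coulomb Index: the fraction of receiver faults whose dCFS
        # exceeds the (assumed ~10 Pa) triggering threshold.
        mu_eff = mu * (1.0 - skempton)
        dcfs = d_tau + mu_eff * d_sigma_n
        return float(np.mean(dcfs > threshold))

    # Toy usage with synthetic stress changes (Pa) on 1000 receiver faults.
    rng = np.random.default_rng(1)
    d_tau = rng.normal(0.0, 500.0, 1000)
    d_sigma_n = rng.normal(0.0, 500.0, 1000)
    print(f"Coulomb Index: {coulomb_index(d_tau, d_sigma_n):.2f}")

A weak dependence of this index on mu would indicate, as the abstract notes, that the normal-stress term plays a limited role.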
Pablo Jensen (2008)
We have developed a method to obtain robust quantitative bibliometric indicators for several thousand scientists. This allows us to study the dependence of bibliometric indicators (such as the number of publications, the number of citations, the Hirsch index...) on the age, position, etc. of CNRS scientists. Our data suggest that the normalized h index (h divided by the career length) is not constant for scientists with the same productivity but different ages. We also compare the predictions of several bibliometric indicators on the promotions of about 600 CNRS researchers. Contrary to previous publications, our study encompasses most disciplines, and shows that no single indicator is the best predictor for all disciplines. Overall, however, the Hirsch index h provides the least bad correlations, followed by the number of papers published. It is important to realize, however, that even h is able to recover only half of the actual promotions. The number of citations and the mean number of citations per paper are definitely not good predictors of promotion.
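For readers unfamiliar with the indicator, the Hirsch index is the largest h such that an author has at least h papers with at least h citations each; the normalized variant discussed above simply divides by career length. A minimal sketch (the citation counts and career length are hypothetical):

    def hirsch_index(citations):
        # Largest h with at least h papers cited at least h times each.
        cited = sorted(citations, reverse=True)
        h = 0
        while h < len(cited) and cited[h] >= h + 1:
            h += 1
        return h

    papers = [10, 5, 3, 2, 1, 0]   # hypothetical per-paper citation counts
    career_years = 12              # hypothetical career length
    h = hirsch_index(papers)       # -> 3 (three papers with >= 3 citations)
    print(h, h / career_years)     # h and the normalized h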
D. Sornette (2005)
In models of triggered seismicity and in their inversion with empirical data, the detection threshold m_d is commonly equated to the magnitude m_0 of the smallest triggering earthquake. This unjustified assumption neglects the possibility of shocks below the detection threshold triggering observable events. We introduce a formalism that distinguishes between the detection threshold m_d and the minimum triggering earthquake m_0 < m_d. By considering the branching structure of one complete cascade of triggered events, we derive the apparent branching ratio n_a (the apparent fraction of aftershocks in a given catalog) and the apparent background source S_a that are observed when only the structure above the detection threshold m_d is known, due to the presence of smaller undetected events that are capable of triggering larger events. If earthquake triggering is controlled in large part by the smallest magnitudes, as several recent analyses have shown, this implies that previous estimates of the clustering parameters may significantly underestimate the true values: for instance, an observed fraction of 55% of aftershocks is renormalized into a true value of 75% of triggered events.
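The direction of this bias is easy to reproduce by simulation: detected aftershocks whose entire chain of triggering ancestors falls below m_d are misclassified as background events, so the apparent fraction of aftershocks underestimates the true one. The Monte Carlo sketch below uses simplifying assumptions not made in the paper (magnitude-independent productivity, Gutenberg-Richter magnitudes with b = 1) purely to illustrate the effect.

    import numpy as np

    rng = np.random.default_rng(2)

    def apparent_aftershock_fraction(n_true=0.75, b=1.0, m0=0.0, md=2.0,
                                     n_background=20_000):
        # Each event triggers Poisson(n_true) direct aftershocks; magnitudes
        # follow the GR law above m0; only events with m >= md are detected.
        # A detected event counts as an 'apparent aftershock' only if some
        # ancestor in its triggering chain was also detected.
        beta = b * np.log(10.0)
        detected = apparent = 0
        for _ in range(n_background):
            stack = [False]  # background event: no detected ancestor
            while stack:
                ancestor_detected = stack.pop()
                m = m0 + rng.exponential(1.0 / beta)
                if m >= md:
                    detected += 1
                    apparent += ancestor_detected
                flag = ancestor_detected or m >= md
                stack.extend([flag] * rng.poisson(n_true))
        return apparent / detected

    print(f"true fraction of triggered events: 0.75; "
          f"apparent fraction above m_d: {apparent_aftershock_fraction():.2f}")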
Low-frequency earthquakes are a particular class of slow earthquakes that provide a unique source of information on the mechanical properties of a subduction zone during the preparation of large earthquakes. Despite the increasing detection of these events in recent years, their source mechanisms are still poorly characterised, and the relation between their magnitude and size remains controversial. Here, we present the source characterisation of more than 10,000 low-frequency earthquakes that occurred during tremor sequences in 2012-2016 along the Nankai subduction zone in western Shikoku, Japan. We show that the seismic moment versus corner frequency scaling for these events is compatible with an inverse cube law, as widely observed for regular earthquakes. Our result is thus consistent with shear rupture as the source mechanism for low-frequency earthquakes, and suggests that they obey physics similar to that of regular earthquakes, with a self-similar rupture process and constant stress drop. Furthermore, when investigating the dependence of the stress drop value on the rupture speed, we find that low-frequency earthquakes might propagate at a lower rupture velocity than regular earthquakes, releasing a smaller stress drop.
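The link between the M_0 ∝ f_c^(−3) scaling and a constant stress drop is standard for a circular-crack source model: the source radius is r = k·v_s/f_c and the stress drop is Δσ = 7M_0/(16r³), so M_0·f_c³ being constant implies Δσ is constant. The sketch below uses an assumed Brune-type constant k ≈ 0.37 for S waves and an assumed shear-wave speed; note that k itself depends on the assumed rupture velocity, which is the dependence probed in the abstract.

    def stress_drop(m0_nm, fc_hz, vs_ms=3500.0, k=0.37):
        # Circular-crack estimate: source radius r = k * vs / fc (metres),
        # stress drop = 7 * M0 / (16 * r**3) (Pa). k is a Brune-type constant
        # for S waves and depends on the assumed rupture speed.
        r = k * vs_ms / fc_hz
        return 7.0 * m0_nm / (16.0 * r ** 3)

    # Toy example: M0 = 1e13 N·m (Mw ~ 2.6) with a 3 Hz corner frequency.
    print(f"stress drop ~ {stress_drop(1e13, 3.0) / 1e6:.3f} MPa")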
