
Forecasting the full distribution of earthquake numbers is fair, robust and better

Added by Shyam Nandan
Publication date: 2019
Fields: Physics
Language: English





Forecasting the full distribution of the number of earthquakes is revealed to be inherently superior to forecasting their mean. Forecasting the full distribution of earthquake numbers is also shown to yield robust projections in the presence of surprise large earthquakes, which in the past have strongly deteriorated the scores of existing models. We show this with pseudo-prospective experiments on synthetic as well as real data from the Advanced National Seismic System (ANSS) database for California, for earthquakes with magnitude larger than 2.95 that occurred during the period 1971-2016. Our results call into question the testing methodology of the Collaboratory for the Study of Earthquake Predictability (CSEP), which amounts to assuming a Poisson distribution of earthquake numbers, a distribution known to be a poor representation of the heavy-tailed distribution of earthquake numbers. Using a spatially varying ETAS model, we demonstrate a remarkable stability of the forecasting performance when using the full distribution of earthquake numbers for the forecasts, even in the presence of large earthquakes such as the Mw 7.1 Hector Mine, Mw 7.2 El Mayor-Cucapah and Mw 6.6 San Simeon earthquakes, or in the presence of the intense swarm activity in Northwest Nevada in 2014. While our results have been derived for ETAS-type models, we propose that all earthquake forecasting models, of any type, should embrace the full distribution of earthquake numbers, so that their true forecasting potential is revealed.
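The contrast between scoring earthquake-number forecasts with a Poisson likelihood and with a heavier-tailed count distribution can be sketched as follows. The negative binomial here is only an illustrative stand-in for a full forecast distribution, and all parameter values are hypothetical, not taken from the paper:

```python
import math

def poisson_logpmf(k, mu):
    # log P(N = k) for a Poisson forecast with mean mu
    return k * math.log(mu) - mu - math.lgamma(k + 1)

def negbin_logpmf(k, mu, alpha):
    # Negative binomial with mean mu and dispersion alpha
    # (variance mu + alpha * mu^2); heavier-tailed than Poisson.
    r = 1.0 / alpha
    p = r / (r + mu)
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(p) + k * math.log(1.0 - p))

# A "surprise" period: both forecasts have mean 10 events, but an
# aftershock sequence produces 100. The Poisson score collapses;
# the heavier-tailed forecast is penalized far less.
mu, observed = 10.0, 100
print(poisson_logpmf(observed, mu))       # a very large negative score
print(negbin_logpmf(observed, mu, 1.0))   # much milder penalty
```

This is the mechanism behind the abstract's point: a mean-only (Poisson) forecast is punished catastrophically by rare large counts, while a forecast of the full, heavy-tailed distribution remains stable.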



Related research

We propose two new methods to calibrate the parameters of the Epidemic-Type Aftershock Sequence (ETAS) model based on expectation maximization (EM) while accounting for temporal variation of catalog completeness. The first method allows for model calibration on earthquake catalogs with long history, featuring temporal variation of the magnitude of completeness, $m_c$. This extended calibration technique is beneficial for long-term Probabilistic Seismic Hazard Assessment (PSHA), which is often based on a mixture of instrumental and historical catalogs. The second method jointly estimates ETAS parameters and high-frequency detection incompleteness to address the potential biases in parameter calibration due to short-term aftershock incompleteness. For this, we generalize the concept of completeness magnitude and consider a rate- and magnitude-dependent detection probability, embracing incompleteness instead of avoiding it. Using synthetic tests, we show that both methods can accurately invert the parameters of simulated catalogs. We then use them to estimate ETAS parameters for California using the earthquake catalog since 1932. To explore how the newly gained information from the second method affects earthquake predictability, we conduct pseudo-prospective forecasting experiments for California. Our proposed model significantly outperforms the base ETAS model, and we find that the ability to include small earthquakes in simulations of future scenarios is the main driver of the improvement. Our results point towards a preference of earthquakes to trigger similarly sized aftershocks, which has potentially major implications for our understanding of earthquake interaction mechanisms and for the future of seismicity forecasting.
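For readers unfamiliar with the model, here is a minimal sketch of the temporal ETAS conditional intensity that such calibrations target. The parameter values are illustrative placeholders, not the estimates obtained in the study:

```python
import math

def etas_intensity(t, events, mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.1, m0=2.95):
    """Temporal ETAS conditional intensity lambda(t): background rate mu
    plus Omori-law aftershock contributions from all past events.
    events is a list of (t_i, m_i) pairs; parameter values here are
    illustrative, not calibrated."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            # productivity grows exponentially with magnitude above m0,
            # and the triggered rate decays as a power law in time
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

# A magnitude 6 event at t = 0 elevates the rate shortly afterwards,
# which then decays back toward the background level mu.
events = [(0.0, 6.0)]
print(etas_intensity(0.1, events))   # elevated, just after the shock
print(etas_intensity(10.0, events))  # decayed back toward mu
```

EM calibration alternates between probabilistically attributing each event to the background term or to a specific parent event (E-step) and re-estimating the parameters given those attributions (M-step).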
Currently, one of the best-performing and most popular earthquake forecasting models relies on the working hypothesis that the locations of past background earthquakes reveal the probable locations of future seismicity. As an alternative, we present a class of smoothed seismicity models (SSMs) based on the principles of the Epidemic-Type Aftershock Sequence (ETAS) model, which forecast the location, time and magnitude of all future earthquakes using estimates of the background seismicity rate and the rates of future aftershocks of all generations. Using the Californian earthquake catalog, we formulate six controlled pseudo-prospective experiments with different combinations of three target magnitude thresholds (2.95, 3.95 or 4.95) and two forecasting time horizons (1 or 5 years). In these experiments, we compare the performance of: (1) the ETAS model with spatially homogeneous parameters (GETAS), (2) the ETAS model with spatially variable parameters (SVETAS), (3) three declustering-based SSMs, (4) a simple SSM based on undeclustered data, and (5) a model based on strain rate data, in forecasting the location and magnitude of all (undeclustered) target earthquakes during many testing periods. In all conducted experiments, the SVETAS model consistently outperforms all the competing models. The consistently better performance of the SVETAS model with respect to declustering-based SSMs highlights the importance of forecasting the future aftershocks of all generations for developing better earthquake forecasting models. Between the two ETAS models themselves, accounting for the optimal spatial variation of the parameters leads to strong and statistically significant improvements in forecasting performance.
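Pseudo-prospective comparisons of this kind typically reduce to a per-event log-likelihood gain of one forecast over another, evaluated at the observed earthquakes. A minimal sketch, with hypothetical per-event log forecast densities (the model names follow the abstract, but the numbers are invented for illustration):

```python
def information_gain_per_event(log_rates_model, log_rates_ref):
    """Mean log-likelihood gain per target earthquake of a candidate
    forecast over a reference forecast, evaluated at the observed
    event locations and magnitudes. Positive values favour the candidate."""
    gains = [a - b for a, b in zip(log_rates_model, log_rates_ref)]
    return sum(gains) / len(gains)

# Hypothetical per-event log forecast densities for two models
svetas = [-2.1, -1.8, -2.5, -1.9]
getas = [-2.6, -2.0, -2.9, -2.4]
print(information_gain_per_event(svetas, getas))  # > 0: SVETAS preferred
```

Statistical significance of such gains is usually assessed with a paired test over the per-event differences, since the same target earthquakes score both models.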
Arnaud Mignan, 2018
The complete part of the earthquake frequency-magnitude distribution (FMD), above completeness magnitude mc, is well described by the Gutenberg-Richter law. The parameter mc however varies in space due to the seismic network configuration, yielding a convoluted FMD shape below max(mc). This paper investigates the shape of the generalized FMD (GFMD), which may be described as a mixture of elemental FMDs (eFMDs) defined as asymmetric Laplace distributions of mode mc [Mignan, 2012, https://doi.org/10.1029/2012JB009347]. An asymmetric Laplace mixture model (GFMD-ALMM) is thus proposed with its parameters (detection parameter kappa, Gutenberg-Richter beta-value, mc distribution, as well as number K and weight w of eFMD components) estimated using a semi-supervised hard expectation maximization approach including BIC penalties for model complexity. The performance of the proposed method is analysed, with encouraging results obtained: kappa, beta, and the mc distribution range are retrieved for different GFMD shapes in simulations, as well as in regional catalogues (southern and northern California, Nevada, Taiwan, France), in a global catalogue, and in an aftershock sequence (Christchurch, New Zealand). We find max(mc) to be conservative compared to other methods, kappa = k/log(10) = 3 in most catalogues (compared to beta = b/log(10) = 1), but also that biases in kappa and beta may occur when rounding errors are present below completeness. The GFMD-ALMM, by modelling different FMD shapes in an autonomous manner, opens the door to new statistical analyses in the realm of incomplete seismicity data, which could in theory improve earthquake forecasting by considering c. ten times more events.
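The elemental FMD described above can be sketched as a normalized asymmetric Laplace density with mode mc: detection grows exponentially below mc and the Gutenberg-Richter decay takes over above it. The parameter values below are illustrative only:

```python
import math

def asym_laplace_pdf(m, mc, kappa, beta):
    """Elemental FMD: asymmetric Laplace density with mode mc.
    Below mc, detection grows as exp(kappa * (m - mc)); above mc,
    the Gutenberg-Richter decay is exp(-beta * (m - mc)).
    kappa and beta values below are illustrative."""
    c = kappa * beta / (kappa + beta)  # normalization constant
    if m < mc:
        return c * math.exp(kappa * (m - mc))
    return c * math.exp(-beta * (m - mc))

# Density peaks at the mode mc = 2.0 and decays on both sides
print(asym_laplace_pdf(2.0, 2.0, kappa=6.9, beta=2.3))
print(asym_laplace_pdf(1.5, 2.0, kappa=6.9, beta=2.3))
print(asym_laplace_pdf(2.5, 2.0, kappa=6.9, beta=2.3))
```

The GFMD is then a weighted mixture of such components with different mc values, which is what the EM procedure in the paper fits.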
In countries with a moderate seismic hazard, the classical methods developed for countries prone to strong motion to estimate the seismic behaviour and subsequent vulnerability of existing buildings are often inadequate and not financially realistic. The main goals of this paper are to show how modal analysis can contribute to the understanding of the seismic building response, and the relevance of a modal model based on ambient vibrations for estimating the structural deformation under moderate earthquakes. We describe the application of an enhanced modal analysis technique (Frequency Domain Decomposition) to process ambient vibration recordings taken at the Grenoble City Hall building (France). The frequencies of ambient vibrations are compared with those of weak earthquakes recorded by the French permanent accelerometric network (RAP) that was installed to monitor the building. The frequency variations of the building under moderate earthquakes are shown to be slight (~2%), and therefore ambient vibration frequencies are relevant over the elastic domain of the building. The modal parameters extracted from ambient vibrations are then used to determine a 1D lumped-mass model that reproduces the inter-storey drift under weak earthquakes, and to constrain a 3D numerical model that could be used for strong earthquakes. The correlation coefficients between data and synthetic motion are close to 80% and 90% in the horizontal directions, for the 1D and 3D modelling respectively.
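The core of Frequency Domain Decomposition, a singular value decomposition of the cross-spectral density matrix at each frequency line, can be sketched as follows. This is a single-block estimate on synthetic two-channel data with a hypothetical 1.2 Hz mode; a real analysis would average over Welch segments:

```python
import numpy as np

def fdd_first_singular_values(records, fs):
    """FDD sketch: for each frequency line, the first singular value of
    the cross-spectral density matrix of the sensor channels. Peaks of
    this curve indicate modal frequencies; the corresponding singular
    vector approximates the mode shape. Single-block periodogram
    estimate, for illustration only."""
    X = np.fft.rfft(records, axis=1)                  # channels x freq
    freqs = np.fft.rfftfreq(records.shape[1], d=1.0 / fs)
    s1 = np.empty(len(freqs))
    for k in range(len(freqs)):
        G = np.outer(X[:, k], np.conj(X[:, k]))       # rank-1 CSD estimate
        s1[k] = np.linalg.svd(G, compute_uv=False)[0]
    return freqs, s1

# Two channels dominated by a 1.2 Hz mode (hypothetical building frequency)
fs = 50.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
mode = np.sin(2 * np.pi * 1.2 * t)
records = np.vstack([mode + 0.1 * rng.standard_normal(t.size),
                     0.8 * mode + 0.1 * rng.standard_normal(t.size)])
freqs, s1 = fdd_first_singular_values(records, fs)
print(freqs[np.argmax(s1)])  # close to 1.2 Hz
```

The paper's workflow then feeds the identified frequencies and mode shapes into the lumped-mass and 3D models described above.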
An article for the Springer Encyclopedia of Complexity and System Science
