
Neural Network-Based Equations for Predicting PGA and PGV in Texas, Oklahoma, and Kansas

Added by Farid Khosravikia
Publication date: 2018
Research language: English





Parts of Texas, Oklahoma, and Kansas have experienced increased rates of seismicity in recent years, providing new datasets of earthquake recordings for developing ground motion prediction models for this particular region of Central and Eastern North America (CENA). This paper outlines a framework for using Artificial Neural Networks (ANNs) to develop attenuation models from the ground motion recordings in this region. Although attenuation models exist for CENA, the increased rate of seismicity in these states motivates ground motion prediction models specific to them. To this end, an ANN-based framework is proposed to predict peak ground acceleration (PGA) and peak ground velocity (PGV) given magnitude, earthquake source-to-site distance, and shear wave velocity. The framework considers approximately 4,500 ground motions with magnitude greater than 3.0 recorded in these three states (Texas, Oklahoma, and Kansas) since 2005. Results from this study suggest that existing ground motion prediction models developed for CENA do not accurately predict the ground motion intensity measures for earthquakes in this region, especially at short source-to-site distances or on very soft soil conditions. The proposed ANN models provide much more accurate predictions of the intensity measures at all distances and magnitudes. The ANN models are also converted to relatively simple mathematical equations so that engineers can readily use them to predict the intensity measures for future events. Finally, a sensitivity analysis quantifies the contribution of each predictive parameter to the prediction of the considered intensity measures.
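The page does not reproduce the authors' model, so the following is only a minimal sketch of the kind of framework the abstract describes: a small feedforward ANN that maps magnitude, source-to-site distance, and shear wave velocity (Vs30) to a log-scaled intensity measure such as PGA or PGV. The scikit-learn pipeline, log transforms, hidden-layer size, and placeholder data are all illustrative assumptions, not the study's published equations.

```python
# Illustrative sketch only (not the authors' implementation): a small feedforward
# ANN mapping (magnitude, source-to-site distance, Vs30) to a log-scaled intensity
# measure such as PGA or PGV. Data values and network size are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder records: [magnitude M, source-to-site distance R (km), Vs30 (m/s)]
X = np.array([[3.2, 15.0, 300.0],
              [4.1, 40.0, 760.0],
              [5.0,  8.0, 250.0]])
# Placeholder targets: log10 of the intensity measure (e.g., PGA in g)
y = np.log10(np.array([0.020, 0.015, 0.200]))

# Log-transform distance and Vs30 so the inputs resemble common GMPE functional forms
X_feat = np.column_stack([X[:, 0], np.log10(X[:, 1]), np.log10(X[:, 2])])

# One hidden layer with tanh activation; such a network can later be written out
# as a closed-form equation:
#   log10(IM) = b2 + sum_k w2_k * tanh(b1_k + sum_j w1_kj * x_j)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                 max_iter=5000, random_state=0),
)
model.fit(X_feat, y)

# Predict the intensity measure for a hypothetical M 4.5 event at 20 km on a
# Vs30 = 400 m/s site
scenario = np.column_stack([[4.5], np.log10([20.0]), np.log10([400.0])])
print(10.0 ** model.predict(scenario))
```

Because a single-hidden-layer tanh network is simply a weighted sum of tanh terms, its trained weights and biases can be written out as an explicit closed-form expression, which is the sense in which ANN models can be converted to relatively simple predictive equations.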



Related research

We show experimentally that the accuracy of a trained neural network can be predicted surprisingly well by looking only at its weights, without evaluating it on input data. We motivate this task and introduce a formal setting for it. Even when using simple statistics of the weights, the predictors are able to rank neural networks by their performance with very high accuracy (R2 score more than 0.98). Furthermore, the predictors are able to rank networks trained on different, unobserved datasets and with different architectures. We release a collection of 120k convolutional neural networks trained on four different datasets to encourage further research in this area, with the goal of understanding network training and performance better.
Seismic inverse modeling is a common approach in reservoir prediction and plays a vital role in oil and gas exploration and development. Conventional seismic inversion methods are difficult to combine with complex, abstract geological knowledge, and their uncertainty is difficult to assess. This paper proposes an inverse modeling method based on a Generative Adversarial Network (GAN) that is consistent with geology, well logs, and seismic data. GANs are among the most promising generative models for extracting the spatial structure and abstract features of training images, and a trained GAN can reproduce models with a specific geological mode. In our test, 1,000 models were generated in one second. After the trained GAN is assessed, the optimal models can be obtained through a Bayesian inversion framework. Results show that the inverted models conform to the observed data and have low uncertainty while being generated quickly. This seismic inverse modeling method increases the efficiency and quality of inversion iterations and is worth studying and applying for the fusion of seismic data and geological knowledge.
This study proposes a novel Graph Convolutional Neural Network with Data-driven Graph Filter (GCNN-DDGF) model that can learn hidden heterogeneous pairwise correlations between stations to predict station-level hourly demand in a large-scale bike-sharing network. Two architectures of the GCNN-DDGF model are explored; GCNNreg-DDGF is a regular GCNN-DDGF model which contains the convolution and feedforward blocks, and GCNNrec-DDGF additionally contains a recurrent block from the Long Short-term Memory neural network architecture to capture temporal dependencies in the bike-sharing demand series. Furthermore, four types of GCNN models are proposed whose adjacency matrices are based on various bike-sharing system data, including Spatial Distance matrix (SD), Demand matrix (DE), Average Trip Duration matrix (ATD), and Demand Correlation matrix (DC). These six types of GCNN models and seven other benchmark models are built and compared on a Citi Bike dataset from New York City which includes 272 stations and over 28 million transactions from 2013 to 2016. Results show that the GCNNrec-DDGF performs the best in terms of the Root Mean Square Error, the Mean Absolute Error and the coefficient of determination (R2), followed by the GCNNreg-DDGF. They outperform the other models. Through a more detailed graph network analysis based on the learned DDGF, insights are obtained on the black box of the GCNN-DDGF model. It is found to capture some information similar to details embedded in the SD, DE and DC matrices. More importantly, it also uncovers hidden heterogeneous pairwise correlations between stations that are not revealed by any of those matrices.
Ali Siahkoohi, Gabrio Rizzuti, 2020
Uncertainty quantification is essential when dealing with ill-conditioned inverse problems due to the inherent nonuniqueness of the solution. Bayesian approaches allow us to determine how likely an estimation of the unknown parameters is via formulating the posterior distribution. Unfortunately, it is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown. Furthermore, adherence to handcrafted priors may greatly bias the outcome of the Bayesian analysis. To address this issue, we propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior, which is shown to promote natural images and excludes images with unnatural noise. In order to incorporate the model uncertainty into the final estimate, we sample the posterior distribution using stochastic gradient Langevin dynamics and perform Bayesian model averaging on the obtained samples. Our synthetic numerical experiment verifies that deep priors combined with Bayesian model averaging are able to partially circumvent imaging artifacts and reduce the risk of overfitting in the presence of extreme noise. Finally, we present pointwise variance of the estimates as a measure of uncertainty, which coincides with regions that are more difficult to image.
Ongoing developments in neural network models continue to advance the state of the art in system accuracy. However, the predicted labels should not be regarded as the only important output; a well-calibrated estimate of the prediction uncertainty is also essential, and such estimates and their calibration are critical in many practical applications. Despite their advantage in accuracy, contemporary neural networks are generally poorly calibrated and do not produce reliable output probability estimates. Moreover, the post-processing calibration solutions in the literature tend to target classification systems. We therefore present two novel methods for obtaining calibrated prediction intervals for neural network regressors: empirical calibration and temperature scaling. In experiments on regression tasks from the audio and computer vision domains, both proposed methods produce calibrated prediction intervals at any desired confidence level, a finding that is consistent across all datasets and neural network architectures we examined. We also derive a practical recommendation for producing more accurate calibrated prediction intervals, and we publicly release the source code implementing the proposed methods.
