
Neural Network Applications in Earthquake Prediction (1994-2019): Meta-Analytic Insight on their Limitations

Published by Arnaud Mignan
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





In the last few years, deep learning has solved seemingly intractable problems, boosting the hope of finding approximate solutions to problems that are now considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. We review the entire literature of artificial neural network (ANN) applications for earthquake prediction (77 articles, 1994-2019 period) and find two emerging trends: an increasing interest in this domain, and a complexification of ANN models over time, towards deep learning. Despite the apparent positive results observed in this corpus, we demonstrate that simpler models seem to offer similar predictive powers, if not better ones. Due to the structured, tabulated nature of earthquake catalogues, and the limited number of features so far considered, simpler and more transparent machine learning models seem preferable at the present stage of research. Those baseline models follow first physical principles and are consistent with the known empirical laws of Statistical Seismology, which have minimal abilities to predict large earthquakes.
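The "simpler and more transparent" baselines invoked here are the empirical laws of Statistical Seismology, the best known being the Gutenberg-Richter law, log10 N(>=M) = a - b*M. As a minimal sketch of such a baseline (not code from the paper; the synthetic catalogue and function names are illustrative), the b-value can be fit in a few lines with Aki's (1965) maximum-likelihood estimator:

```python
import numpy as np

def gr_baseline(magnitudes, m_min):
    """Fit the Gutenberg-Richter law log10 N(>=M) = a - b*M.

    b comes from Aki's (1965) maximum-likelihood estimator; a is set so
    that the fitted count at the completeness magnitude m_min matches
    the catalogue. A one-parameter, fully transparent baseline.
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_min]                        # keep events above completeness
    b = np.log10(np.e) / (m.mean() - m_min)  # MLE of the b-value
    a = np.log10(len(m)) + b * m_min         # anchor N(>= m_min) to the data
    return a, b

def expected_count(a, b, m_target):
    """Expected number of events with magnitude >= m_target."""
    return 10.0 ** (a - b * m_target)

# Illustrative use on a synthetic catalogue (exponential magnitudes, b ~ 1):
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=np.log10(np.e), size=5000)
a, b = gr_baseline(mags, m_min=3.0)
print(f"a = {a:.2f}, b = {b:.2f}, N(>=6) ~ {expected_count(a, b, 6.0):.1f}")
```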




Read also

Mohsen Moradi (2017)
In this study, we determined neural network weights and biases with the Imperialist Competitive Algorithm (ICA) in order to train a network for predicting earthquake intensity on the Richter scale. We used dependent parameters such as earthquake occurrence time, epicenter latitude and longitude in degrees, focal depth in kilometers, and the distances of the seismological center from the epicenter and from the earthquake focal center in kilometers, as provided by the Berkeley database. The studied neural network has two hidden layers: the first layer has 16 neurons and the second has 24. Using the ICA algorithm, the average error for the testing data is 0.0007 with a variance of 0.318. The earthquake prediction error in Richter, by the MSE criterion, is 0.101 for the ICA algorithm, versus an MSE of 0.115 when using GA.
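To make the stated topology concrete, the sketch below reproduces only the two-hidden-layer architecture (16 and 24 neurons) on randomly generated placeholder features mirroring those listed in the abstract. scikit-learn has no Imperialist Competitive Algorithm trainer, so its default gradient-based Adam optimizer stands in for ICA here; this substitution and the feature set are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder features mirroring the abstract: occurrence time, epicenter
# latitude/longitude (deg), focal depth (km), and the two station distances (km).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))    # synthetic catalogue features
y = 3.0 + rng.normal(size=500)   # synthetic Richter magnitudes

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 24),  # topology stated in the abstract
                 max_iter=2000, random_state=0),
)
model.fit(X, y)
mse = np.mean((model.predict(X) - y) ** 2)
print(f"training MSE: {mse:.3f}")
```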
This survey samples from the ever-growing family of adaptive resonance theory (ART) neural network models used to perform the three primary machine learning modalities, namely, unsupervised, supervised and reinforcement learning. It comprises a representative list from classic to modern ART models, thereby painting a general picture of the architectures developed by researchers over the past 30 years. The learning dynamics of these ART models are briefly described, and their distinctive characteristics such as code representation, long-term memory and corresponding geometric interpretation are discussed. Useful engineering properties of ART (speed, configurability, explainability, parallelization and hardware implementation) are examined along with current challenges. Finally, a compilation of online software libraries is provided. It is expected that this overview will be helpful to new and seasoned ART researchers.
This paper is concerned with the utilization of deterministically modeled chemical reaction networks for the implementation of (feed-forward) neural networks. We develop a general mathematical framework and prove that the ordinary differential equations (ODEs) associated with certain reaction network implementations of neural networks have desirable properties including (i) existence of unique positive fixed points that are smooth in the parameters of the model (necessary for gradient descent), and (ii) fast convergence to the fixed point regardless of initial condition (necessary for efficient implementation). We do so by first making a connection between neural networks and fixed points for systems of ODEs, and then by constructing reaction networks with the correct associated set of ODEs. We demonstrate the theory by constructing a reaction network that implements a neural network with a smoothed ReLU activation function, though we also demonstrate how to generalize the construction to allow for other activation functions (each with the desirable properties listed previously). As there are multiple types of networks utilized in this paper, we also give a careful introduction to both reaction networks and neural networks, in order to disambiguate the overlapping vocabulary in the two settings and to clearly highlight the role of each network's properties.
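The paper's core device, computing a neuron's output as the globally attracting fixed point of an ODE, can be illustrated without the chemistry. The toy below is my own construction, not the paper's reaction network: Euler-integrating dx/dt = softplus(u) - x yields the unique fixed point x* = softplus(u), a smoothed ReLU, with exponential convergence from any initial condition, mirroring properties (i) and (ii) above.

```python
import numpy as np

def softplus(u):
    """Smoothed ReLU: log(1 + exp(u))."""
    return np.log1p(np.exp(u))

def ode_neuron(u, x0=0.0, dt=0.01, steps=2000):
    """Euler-integrate dx/dt = softplus(u) - x.

    The unique fixed point is x* = softplus(u), and the error |x - x*|
    decays like exp(-t) for any starting value x0: a toy analogue of
    properties (i) and (ii) in the abstract.
    """
    x = x0
    for _ in range(steps):
        x += dt * (softplus(u) - x)
    return x

u = 1.5
print(ode_neuron(u), softplus(u))  # the two values agree to ~1e-8 after t = 20
```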
Thierry Dumas (2018)
This paper describes a set of neural network architectures, called Prediction Neural Networks Set (PNNS), based on both fully-connected and convolutional neural networks, for intra image prediction. The choice of neural network for predicting a given image block depends on the block size, and hence does not need to be signalled to the decoder. It is shown that, while fully-connected neural networks give good performance for small block sizes, convolutional neural networks provide better predictions in large blocks with complex textures. Thanks to the use of masks of random sizes during training, the neural networks of PNNS adapt well to the available context, which may vary depending on the position of the image block to be predicted. When integrating PNNS into an H.265 codec, PSNR-rate performance gains ranging from 1.46% to 5.20% are obtained. These gains are on average 0.99% larger than those of prior neural-network-based methods. Unlike the H.265 intra prediction modes, which are each specialized in predicting a specific texture, the proposed PNNS can model a large set of complex textures.
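One concrete consequence worth spelling out: because the block size is known to both encoder and decoder, the predictor choice costs no bits in the bitstream. The sketch below shows that decoder-side dispatch; the exact mapping of sizes to network types is my assumption (the abstract only establishes fully-connected for small blocks and convolutional for large ones), and the network names are hypothetical.

```python
# Hypothetical PNNS-style predictor selection. Which sizes go to
# fully-connected versus convolutional networks is assumed here; the
# abstract only establishes the small-vs-large split.
PNNS = {
    4: "fully_connected_4x4",    # FC nets perform well on small blocks
    8: "fully_connected_8x8",
    16: "convolutional_16x16",   # CNNs handle large blocks with complex textures
    32: "convolutional_32x32",
    64: "convolutional_64x64",
}

def select_predictor(block_size: int) -> str:
    """Pick the intra predictor from the block size alone: since the
    decoder already knows the size, no mode needs to be signalled."""
    if block_size not in PNNS:
        raise ValueError(f"unsupported block size: {block_size}")
    return PNNS[block_size]

print(select_predictor(8))   # fully_connected_8x8
print(select_predictor(32))  # convolutional_32x32
```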
Yanfei Li, Tong Geng, Ang Li (2021)
Binarized neural networks, or BNNs, show great promise in edge-side applications with resource-limited hardware, but raise concerns of reduced accuracy. Motivated by complex-valued neural networks, in this paper we introduce complex representation into BNNs and propose the binary complex neural network (BCNN) -- a novel network design that processes binary complex inputs and weights through complex convolution, yet still harvests the extraordinary computational efficiency of BNNs. To ensure a fast convergence rate, we propose a novel BCNN-based batch normalization function and weight initialization function. Experimental results on Cifar10 and ImageNet using state-of-the-art network models (e.g., ResNet, ResNetE and NIN) show that BCNN achieves better accuracy than the original BNN models. BCNN improves on BNN by strengthening its learning capability through complex representation and extending its applicability to complex-valued input data. The source code of BCNN will be released on GitHub.
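The kernel operation inside BCNN's complex convolution can be sketched on a single multiply-accumulate (my own illustration, not the code the authors plan to release): binarize the real and imaginary parts to +/-1 with the sign function, then apply the ordinary complex product (a+bi)(c+di) = (ac - bd) + (ad + bc)i, which with all operands in {-1, +1} reduces to XNOR/popcount logic in hardware.

```python
import numpy as np

def binarize(z):
    """Binarize a complex array: take the sign of the real and
    imaginary parts independently, so each lands in {-1, +1}."""
    return np.sign(z.real) + 1j * np.sign(z.imag)

def binary_complex_dot(x, w):
    """One binary complex multiply-accumulate, the kernel inside a
    binary complex convolution: (a+bi)(c+di) = (ac-bd) + (ad+bc)i."""
    return np.sum(binarize(x) * binarize(w))

x = np.array([0.3 - 0.7j, -1.2 + 0.4j])   # illustrative activations
w = np.array([-0.5 + 0.9j, 0.8 + 0.1j])   # illustrative weights
print(binary_complex_dot(x, w))           # (-2+2j)
```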


