
Physically Interpretable Neural Networks for the Geosciences: Applications to Earth System Variability

Posted by Benjamin Toms
Publication date: 2019
Research language: English





Neural networks have become increasingly prevalent within the geosciences, although a common limitation of their usage has been a lack of methods to interpret what the networks learn and how they make decisions. As such, neural networks have often been used within the geosciences to most accurately identify a desired output given a set of inputs, with the interpretation of what the network learns used as a secondary metric to ensure the network is making the right decision for the right reason. Neural network interpretation techniques have become more advanced in recent years, however, and we therefore propose that the ultimate objective of using a neural network can also be the interpretation of what the network has learned rather than the output itself. We show that the interpretation of neural networks can enable the discovery of scientifically meaningful connections within geoscientific data. In particular, we use two methods for neural network interpretation called backwards optimization and layer-wise relevance propagation (LRP), both of which project the decision pathways of a network back onto the original input dimensions. To the best of our knowledge, LRP has not yet been applied to geoscientific research, and we believe it has great potential in this area. We show how these interpretation techniques can be used to reliably infer scientifically meaningful information from neural networks by applying them to common climate patterns. These results suggest that combining interpretable neural networks with novel scientific hypotheses will open the door to many new avenues in neural network-related geoscience research.
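To make the first of these two methods concrete, the sketch below applies backwards optimization to a toy two-layer network: the weights are held fixed and gradient ascent is applied to the input itself so that a chosen output neuron is maximized, and the optimized input is read as the pattern the network associates with that class. The network size, weights, and data here are hypothetical placeholders, not the models used in the paper.

import numpy as np

# Minimal sketch of backwards optimization on a toy two-layer ReLU network.
# The weights are random stand-ins for a trained climate-pattern classifier;
# the input is a flattened 10x10 field (hypothetical example).
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(100, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)
target_class, lr = 0, 0.01

x = 0.1 * rng.normal(size=100)              # start from a small random input
for _ in range(200):
    h_pre = x @ W1 + b1
    h = np.maximum(0.0, h_pre)              # hidden ReLU layer
    score = (h @ W2 + b2)[target_class]     # class score to maximize
    # Gradient of the score with respect to the input, written out by hand.
    dh = W2[:, target_class] * (h_pre > 0)  # backpropagate through the ReLU
    dx = W1 @ dh
    x += lr * dx                            # gradient ascent on the input

# x now approximates the input pattern that most activates the chosen class.
print("optimized class score:", float(score))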




Read also

A simple method for adding uncertainty to neural network regression tasks via estimation of a general probability distribution is described. The methodology supports estimation of heteroscedastic, asymmetric uncertainties by a simple modification of the network output and loss function. Method performance is demonstrated with a simple one-dimensional data set and then applied to a more complex regression task using synthetic climate data.
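As a rough sketch of the idea, assuming a Gaussian likelihood for simplicity (the method described above supports more general, asymmetric distributions), the network's final layer predicts distribution parameters and the training loss is the negative log-likelihood of the observed target under that predicted distribution:

import numpy as np

# Hedged sketch: the network predicts a mean and a log-scale per sample, and
# the loss is the Gaussian negative log-likelihood, so the predicted scale
# provides a per-sample (heteroscedastic) uncertainty estimate.
def gaussian_nll(y_true, mu, log_sigma):
    sigma = np.exp(log_sigma)               # exponentiate to keep the scale positive
    return 0.5 * np.log(2.0 * np.pi * sigma**2) + 0.5 * ((y_true - mu) / sigma) ** 2

# Illustrative (made-up) values for two samples.
y_true = np.array([1.2, -0.4])
mu = np.array([1.0, 0.0])                   # predicted means
log_sigma = np.array([-0.5, 0.3])           # predicted log-scales
print(gaussian_nll(y_true, mu, log_sigma).mean())   # mean NLL used as the loss
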
The atmosphere is chaotic. This fundamental property of the climate system makes forecasting weather incredibly challenging: it's impossible to expect weather models to ever provide perfect predictions of the Earth system beyond timescales of approximately 2 weeks. Instead, atmospheric scientists look for specific states of the climate system that lead to more predictable behaviour than others. Here, we demonstrate how neural networks can be used not only to leverage these states to make skillful predictions, but moreover to identify the climatic conditions that lead to enhanced predictability. Furthermore, we employ a neural network interpretability method called "layer-wise relevance propagation" to create heatmaps of the regions in the input most relevant for a network's output. For Earth scientists, these relevant regions for the neural network's prediction are by far the most important product of our study: they provide scientific insight into the physical mechanisms that lead to enhanced weather predictability. While we demonstrate our approach for the atmospheric science domain, this methodology is applicable to a large range of geoscientific problems.
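The sketch below shows the epsilon-rule form of layer-wise relevance propagation on a toy dense network; the weights and the input field are random stand-ins rather than the study's trained models or data, but the redistribution step illustrates how an output decision is projected back onto the input grid as a heatmap.

import numpy as np

# Minimal LRP (epsilon rule) sketch on a toy one-hidden-layer ReLU network.
# Weights and the input are random placeholders (hypothetical example).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(100, 32)), np.zeros(32)   # input: flattened 10x10 field
W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)      # output: two classes

x = rng.normal(size=100)
h = np.maximum(0.0, x @ W1 + b1)                    # hidden ReLU activations
y = h @ W2 + b2                                     # linear class scores

def lrp_step(a, W, b, R, eps=1e-6):
    z = a @ W + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)       # stabilizer avoids division by zero
    s = R / z                                       # relevance per unit pre-activation
    return a * (W @ s)                              # redistribute to the layer below

R_out = np.zeros_like(y)
R_out[np.argmax(y)] = y[np.argmax(y)]               # explain only the winning class
R_hidden = lrp_step(h, W2, b2, R_out)
R_input = lrp_step(x, W1, b1, R_hidden)
print(R_input.reshape(10, 10).round(2))             # relevance heatmap on the input grid
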
Multi-model ensembles provide a pragmatic approach to the representation of model uncertainty in climate prediction. However, such representations are inherently ad hoc, and, as shown, probability distributions of climate variables based on current-generation multi-model ensembles are not accurate. Results from seasonal re-forecast studies suggest that climate model ensembles based on stochastic-dynamic parametrisation are beginning to outperform multi-model ensembles, and have the potential to become significantly more skilful than multi-model ensembles. The case is made for stochastic representations of model uncertainty in future-generation climate prediction models. Firstly, a guiding characteristic of the scientific method is an ability to characterise and predict uncertainty; individual climate models are not currently able to do this. Secondly, through the effects of noise-induced rectification, stochastic-dynamic parametrisation may provide a (poor man's) surrogate to high resolution. Thirdly, stochastic-dynamic parametrisations may be able to take advantage of the inherent stochasticity of electron flow through certain types of low-energy computer chips, currently under development. These arguments have particular resonance for next-generation Earth-System models, which purport to be comprehensive numerical representations of climate, and where integrations at high resolution may be unaffordable.
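Purely as an illustration (not a method taken from this paper), the simplest form of stochastic parametrisation perturbs a deterministic parameterized tendency with multiplicative noise, so that ensemble members sample model uncertainty rather than all sharing a single best-guess tendency:

import numpy as np

# Illustrative sketch: multiplicative noise on a parameterized tendency.
# The tendency value and noise amplitude are made-up numbers.
rng = np.random.default_rng(3)
deterministic_tendency = 2.0                             # e.g. K/day from a conventional scheme
perturbed = deterministic_tendency * (1.0 + 0.3 * rng.standard_normal(10))
print(perturbed.round(2))                                # spread across a 10-member ensemble
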
A promising approach to improve climate-model simulations is to replace traditional subgrid parameterizations based on simplified physical models with data-driven machine learning algorithms. However, neural networks (NNs) often lead to instabilities and climate drift when coupled to an atmospheric model. Here we learn an NN parameterization from a high-resolution atmospheric simulation in an idealized domain by coarse-graining the model equations and output. The NN parameterization has a structure that ensures physical constraints are respected, and it leads to stable simulations that replicate the climate of the high-resolution simulation with similar accuracy to a successful random-forest parameterization while needing far less memory. We find that the simulations are stable for a variety of NN architectures and horizontal resolutions, and that an NN with substantially reduced numerical precision could decrease computational costs without affecting the quality of simulations.
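The following hypothetical sketch shows one generic way of building a physical constraint directly into an NN parameterization's output layer, here a mass-weighted column-conservation constraint; the constraints used in the study itself differ, so this only illustrates the constrain-by-construction idea:

import numpy as np

# Hypothetical constraint layer: project the raw NN output so that the
# mass-weighted column integral of the predicted tendency is exactly zero,
# i.e. the subgrid scheme redistributes a quantity without creating it.
def constrained_tendency(raw_output, layer_mass):
    column_mean = np.sum(raw_output * layer_mass) / np.sum(layer_mass)
    return raw_output - column_mean

raw = np.array([0.4, -0.1, 0.3, -0.2])      # raw NN output per vertical level (made up)
mass = np.array([1.0, 2.0, 3.0, 4.0])       # per-level mass weights (made up)
tend = constrained_tendency(raw, mass)
print(np.sum(tend * mass))                  # ~0: conservation holds by construction
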
Time series models with recurrent neural networks (RNNs) can have high accuracy but are unfortunately difficult to interpret as a result of feature interactions, temporal interactions, and non-linear transformations. Interpretability is important in domains like healthcare, where models that provide insight into the relationships they have learned are required to validate and trust model predictions. We want accurate time series models in which users can understand the contribution of individual input features. We present the Interpretable-RNN (I-RNN), which balances model complexity and accuracy by forcing the relationships between variables in the model to be additive. Interactions are restricted between hidden states of the RNN and additively combined at the final step. I-RNN specifically captures the unique characteristics of clinical time series, which are unevenly sampled in time, asynchronously acquired, and have missing data. Importantly, the hidden state activations represent feature coefficients that correlate with the prediction target and can be visualized as risk curves that capture the global relationship between individual input features and the outcome. We evaluate the I-RNN model on the Physionet 2012 Challenge dataset to predict in-hospital mortality, and on a real-world clinical decision support task: predicting hemodynamic interventions in the intensive care unit. I-RNN provides explanations in the form of global and local feature importances comparable to highly intelligible models like decision trees trained on hand-engineered features, while significantly outperforming them. I-RNN remains intelligible while providing accuracy comparable to state-of-the-art decay-based and interpolation-based recurrent time series models. The experimental results on real-world clinical datasets refute the myth that there is a tradeoff between accuracy and interpretability.
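A much-simplified, hypothetical sketch of the additive structure follows: each feature is processed by its own small recurrent unit so that feature-feature interactions are avoided, and the per-feature contributions are summed only at the final step, which makes each feature's contribution to the prediction directly visible. The actual I-RNN additionally handles uneven sampling, asynchronous acquisition, and missing data, which this sketch does not.

import numpy as np

# Hypothetical additive recurrent model: one tiny RNN per input feature,
# contributions combined additively only at the last step (random weights).
rng = np.random.default_rng(2)
n_features, n_steps, hidden = 3, 5, 4
x = rng.normal(size=(n_steps, n_features))           # one clinical time series (made up)

W_in = 0.1 * rng.normal(size=(n_features, hidden))   # per-feature input weights
W_rec = 0.1 * rng.normal(size=(n_features, hidden, hidden))
w_out = 0.1 * rng.normal(size=(n_features, hidden))  # per-feature readout weights

contributions = np.zeros(n_features)
for f in range(n_features):
    h = np.zeros(hidden)
    for t in range(n_steps):
        # The recurrence sees only feature f, so there are no feature interactions.
        h = np.tanh(x[t, f] * W_in[f] + h @ W_rec[f])
    contributions[f] = h @ w_out[f]                   # this feature's additive term

logit = contributions.sum()                           # additive combination at the end
print("per-feature contributions:", contributions.round(3))
print("predicted risk:", 1.0 / (1.0 + np.exp(-logit)))  # sigmoid for mortality risk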
