Neural networks have become increasingly prevalent within the geosciences, although a common limitation of their use has been a lack of methods to interpret what the networks learn and how they make decisions. As such, neural networks have often been used within the geosciences to identify a desired output as accurately as possible given a set of inputs, with the interpretation of what the network learns used only as a secondary check that the network is making the right decision for the right reason. Neural network interpretation techniques have become more advanced in recent years, however, and we therefore propose that the ultimate objective of using a neural network can also be the interpretation of what the network has learned rather than the output itself. We show that the interpretation of neural networks can enable the discovery of scientifically meaningful connections within geoscientific data. In particular, we use two methods for neural network interpretation called backwards optimization and layerwise relevance propagation (LRP), both of which project the decision pathways of a network back onto the original input dimensions. To the best of our knowledge, LRP has not yet been applied to geoscientific research, and we believe it has great potential in this area. We show how these interpretation techniques can be used to reliably infer scientifically meaningful information from neural networks by applying them to common climate patterns. These results suggest that combining interpretable neural networks with novel scientific hypotheses will open the door to many new avenues in neural network-related geoscience research.
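To make the first of these methods concrete, below is a minimal sketch of backwards optimization under stated assumptions: starting from a neutral input field, gradient ascent adjusts the input until a trained classifier's confidence in a chosen class is maximized, so the optimized field visualizes the pattern the network associates with that class. The PyTorch framing, the `model` object, and all hyperparameters are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch of backwards optimization (feature maximization).
# Assumes `model` is a trained PyTorch classifier mapping a gridded
# climate field of shape `input_shape` to class logits.
import torch

def backwards_optimize(model, input_shape, target_class, steps=200, lr=0.1):
    """Gradient-ascend an input so the network's score for `target_class`
    is maximized; the result is the input pattern the network has learned
    to associate with that class."""
    model.eval()
    x = torch.zeros(1, *input_shape, requires_grad=True)  # neutral starting field
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]  # negate so minimizing raises the score
        loss.backward()
        optimizer.step()
    return x.detach()
```

LRP differs in that it propagates a specific prediction backwards through the trained weights to assign a relevance value to each input feature, rather than synthesizing a new input; both methods end with a map in the original input dimensions.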
A simple method for adding uncertainty to neural network regression tasks via estimation of a general probability distribution is described. The methodology supports estimation of heteroscedastic, asymmetric uncertainties by a simple modification of …
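The core idea described in this abstract can be sketched as follows: instead of emitting a single point estimate, the network outputs the parameters of a probability distribution and is trained with a negative log-likelihood loss, so the predicted spread can vary with the input (heteroscedastic). The sketch below uses a symmetric Gaussian for brevity; the abstract's "general probability distribution" additionally permits asymmetric uncertainties. The class and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a regression network that predicts a mean and an
# input-dependent spread, trained by minimizing the Gaussian negative
# log-likelihood instead of mean-squared error.
import torch
import torch.nn as nn

class ProbabilisticRegressor(nn.Module):
    def __init__(self, n_in, n_hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.mu_head = nn.Linear(n_hidden, 1)         # predicted mean
        self.log_sigma_head = nn.Linear(n_hidden, 1)  # predicted log std dev

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.log_sigma_head(h)

def gaussian_nll(mu, log_sigma, y):
    # Negative log-likelihood of y under N(mu, sigma^2); minimizing this
    # fits both the mean and the heteroscedastic uncertainty.
    sigma = torch.exp(log_sigma)
    return (0.5 * ((y - mu) / sigma) ** 2 + log_sigma).mean()
```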
The atmosphere is chaotic. This fundamental property of the climate system makes forecasting weather incredibly challenging: it is impossible to expect weather models to ever provide perfect predictions of the Earth system beyond timescales of approximately …
Multi-model ensembles provide a pragmatic approach to the representation of model uncertainty in climate prediction. However, such representations are inherently ad hoc, and, as shown, probability distributions of climate variables based on current-generation …
A promising approach to improving climate-model simulations is to replace traditional subgrid parameterizations based on simplified physical models with data-driven machine learning algorithms. However, neural networks (NNs) often lead to instabilities …
Time series models with recurrent neural networks (RNNs) can achieve high accuracy but are unfortunately difficult to interpret as a result of feature interactions, temporal interactions, and non-linear transformations. Interpretability is important in …