The behaviors and skills of models in many geosciences, e.g., hydrology and ecosystem sciences, strongly depend on spatially varying parameters that require calibration. Here we propose a novel differentiable parameter learning (dPL) framework that solves a pattern-recognition problem and learns a more robust, universal mapping. Crucially, dPL exhibits virtuous scaling curves not previously demonstrated to geoscientists: as the collective training data increase, dPL achieves better performance, more physical coherence, and better generalization, all at orders-of-magnitude lower computational cost. We demonstrate examples of calibrating models to soil moisture and streamflow, in which dPL drastically outperformed state-of-the-art evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models without mandating reimplementation.
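The core idea of dPL, learning one mapping from site attributes to model parameters and training it end to end through a differentiable process model, can be sketched in a toy setting. The linear attribute-to-parameter mapping, the one-parameter "process model" y = p * forcing, and all sizes below are illustrative assumptions, not the paper's actual networks or hydrologic models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dPL sketch: instead of calibrating one parameter per site,
# learn a single mapping g(attributes) -> parameter for all sites,
# trained by gradient descent through a differentiable model.
n_sites, n_attr, n_steps = 50, 3, 20
A = rng.normal(size=(n_sites, n_attr))               # static site attributes
w_true = np.array([0.5, -1.0, 2.0])                  # hidden attribute->parameter map
p_true = A @ w_true                                  # "true" per-site parameters
F = rng.uniform(0.5, 1.5, size=(n_sites, n_steps))   # forcing time series
Y = p_true[:, None] * F                              # synthetic observations

w = np.zeros(n_attr)                                 # weights of the learned mapping g
lr = 0.05
for _ in range(500):
    p = A @ w                                        # parameters for every site at once
    resid = p[:, None] * F - Y                       # simulation minus observation
    # gradient of the mean squared error, chained through the process model
    grad = A.T @ (resid * F).sum(axis=1) / (n_sites * n_steps)
    w -= lr * grad
```

Because every site contributes to the same weights `w`, adding sites improves the shared mapping rather than creating new per-site unknowns, which is the source of the scaling behavior the abstract describes.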
There is significant interest in learning and optimizing a complex system composed of multiple sub-components, where these components may be agents or autonomous sensors. Among the rich literature on this topic, agent-based and domain-specific simula
The information content of crystalline materials becomes astronomical when collective electronic behavior and its fluctuations are taken into account. In the past decade, improvements in source brightness and detector technology at modern x-ray fac
This paper formulates and studies a novel algorithm for federated learning from large collections of local datasets. This algorithm capitalizes on an intrinsic network structure that relates the local datasets via an undirected empirical graph. We mo
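The coupling of local datasets through an undirected empirical graph can be sketched as follows. Each node fits its own linear model while an edge penalty pulls neighboring models together; the chain graph, data sizes, penalty weight, and block-coordinate solver below are illustrative choices, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: nodes of an empirical graph each hold a small local dataset
# and a local linear model w_i; the objective
#   sum_i ||X_i w_i - y_i||^2 + (rho/2) sum_{(i,j) in E} ||w_i - w_j||^2
# is minimized by cycling exact block updates over the nodes.
n_nodes, dim, n_local = 6, 2, 4
edges = [(i, i + 1) for i in range(n_nodes - 1)]      # chain graph
nbrs = [[] for _ in range(n_nodes)]
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

w_true = np.array([1.0, -2.0])
X = [rng.normal(size=(n_local, dim)) for _ in range(n_nodes)]
y = [X[i] @ w_true + 0.01 * rng.normal(size=n_local) for i in range(n_nodes)]

rho = 1.0
W = np.zeros((n_nodes, dim))
for _ in range(200):                                  # block coordinate descent
    for i in range(n_nodes):
        lhs = 2 * X[i].T @ X[i] + rho * len(nbrs[i]) * np.eye(dim)
        rhs = 2 * X[i].T @ y[i] + rho * sum(W[j] for j in nbrs[i])
        W[i] = np.linalg.solve(lhs, rhs)
```

Here all nodes share one underlying model, so the graph penalty lets nodes with little data borrow strength from their neighbors.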
We present a supervised learning method to learn the propagator map of a dynamical system from partial and noisy observations. In our computationally cheap and easy-to-implement framework a neural network consisting of random feature maps is trained
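A minimal random-feature version of such a propagator learner looks like the following: a fixed random hidden layer provides the features, and only the linear readout is trained by ridge regression. The feature count, scales, and regularization below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rf_propagator(states_in, states_out, n_features=300, reg=1e-6):
    """Learn the one-step propagator x_t -> x_{t+1} with random feature
    maps: features tanh(W x + b) are fixed at random, and only the
    linear readout C is fitted by ridge-regularized least squares."""
    d = states_in.shape[1]
    W = rng.normal(size=(n_features, d))              # fixed random weights
    b = rng.uniform(-1.0, 1.0, size=n_features)       # fixed random biases
    Phi = np.tanh(states_in @ W.T + b)                # (n_samples, n_features)
    A = Phi.T @ Phi + reg * np.eye(n_features)
    C = np.linalg.solve(A, Phi.T @ states_out)        # (n_features, d) readout

    def propagate(x):
        return np.tanh(x @ W.T + b) @ C
    return propagate
```

Training reduces to one linear solve, which is why this class of methods is computationally cheap compared with backpropagation through a full network.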
Regression problems that have closed-form solutions are well understood and can be easily implemented when the dataset is small enough to be loaded entirely into RAM. Challenges arise when the data are too big to be stored in RAM to compute the closed form
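One standard way around the RAM limit, shown here as a sketch rather than this paper's method, is to stream the data in chunks and accumulate only the small sufficient statistics X^T X and X^T y, from which the ordinary-least-squares solution follows:

```python
import numpy as np

def fit_ols_chunked(chunks):
    """Solve the OLS normal equations (X^T X) beta = X^T y without ever
    materializing the full design matrix: accumulate X^T X (p x p) and
    X^T y (p,) one chunk at a time, then do a single small solve."""
    xtx, xty = None, None
    for X, y in chunks:                 # each chunk fits in RAM
        if xtx is None:
            xtx = X.T @ X
            xty = X.T @ y
        else:
            xtx += X.T @ X
            xty += X.T @ y
    return np.linalg.solve(xtx, xty)
```

Since X^T X is only p x p for p features, the memory footprint is independent of the number of rows, and the result is identical to the in-RAM closed-form fit.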