
Stable machine-learning parameterization of subgrid processes for climate modeling at a range of resolutions

Posted by: Janni Yuval
Publication date: 2020
Research field: Physics
Paper language: English





Global climate models represent small-scale processes such as clouds and convection using quasi-empirical models known as parameterizations, and these parameterizations are a leading cause of uncertainty in climate projections. A promising alternative approach is to use machine learning to build new parameterizations directly from high-resolution model output. However, parameterizations learned from three-dimensional model output have not yet been successfully used for simulations of climate. Here we use a random forest to learn a parameterization of subgrid processes from output of a three-dimensional high-resolution atmospheric model. Integrating this parameterization into the atmospheric model leads to stable simulations at coarse resolution that replicate the climate of the high-resolution simulation. The parameterization obeys physical constraints and captures important statistics such as precipitation extremes. The ability to learn from a fully three-dimensional simulation presents an opportunity for learning parameterizations from the wide range of global high-resolution simulations that are now emerging.
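As a rough illustration of this workflow (not the authors' code), the sketch below fits a random forest that maps coarse-grained column state to the subgrid tendencies diagnosed from high-resolution output; the file names, array shapes, and hyperparameters are hypothetical placeholders.

```python
# Illustrative sketch only: learn a subgrid parameterization as a regression
# from coarse-grained model state to diagnosed subgrid tendencies.
# File names, array shapes, and hyperparameters are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: coarse-grained state per column (e.g. temperature and humidity profiles),
# y: subgrid tendencies diagnosed by coarse-graining the high-resolution run.
X = np.load("coarse_grained_state.npy")   # shape (n_columns, n_features)
y = np.load("subgrid_tendencies.npy")     # shape (n_columns, n_outputs)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

rf = RandomForestRegressor(n_estimators=100, max_depth=25, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

print("offline R^2 on held-out columns:", rf.score(X_test, y_test))
```

Because a random forest predicts averages of training samples, its output stays within the range of the diagnosed tendencies, which is one reason this class of model tends to respect physical constraints such as non-negative surface precipitation when the training data does.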




Read also

A promising approach to improve climate-model simulations is to replace traditional subgrid parameterizations based on simplified physical models by machine learning algorithms that are data-driven. However, neural networks (NNs) often lead to instabilities and climate drift when coupled to an atmospheric model. Here we learn an NN parameterization from a high-resolution atmospheric simulation in an idealized domain by coarse-graining the model equations and output. The NN parameterization has a structure that ensures physical constraints are respected, and it leads to stable simulations that replicate the climate of the high-resolution simulation with similar accuracy to a successful random-forest parameterization while needing far less memory. We find that the simulations are stable for a variety of NN architectures and horizontal resolutions, and that an NN with substantially reduced numerical precision could decrease computational costs without affecting the quality of simulations.
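One common way to build such a constraint-respecting NN, sketched below under stated assumptions (the layer sizes, level count, and flux-based output are illustrative choices, not necessarily the published architecture), is to predict subgrid vertical fluxes and compute tendencies as their divergence, so that the mass-weighted column integral of each predicted tendency vanishes by construction.

```python
# Minimal sketch (not the published architecture): an NN that outputs subgrid
# vertical fluxes at level interfaces; tendencies are the negative vertical
# flux divergence, so the mass-weighted column integral vanishes by construction.
import torch
import torch.nn as nn

n_levels = 48              # hypothetical number of vertical levels
n_inputs = 2 * n_levels    # e.g. temperature and humidity profiles

class FluxNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_levels + 1),  # one flux per level interface
        )

    def forward(self, x, dp):
        flux = self.net(x)                     # (batch, n_levels + 1)
        # Zero the boundary fluxes so the column-integrated tendency is zero.
        mask = torch.ones_like(flux)
        mask[:, 0] = 0.0
        mask[:, -1] = 0.0
        flux = flux * mask
        # Tendency on each layer = -d(flux)/dp (finite difference).
        return -(flux[:, 1:] - flux[:, :-1]) / dp

model = FluxNet()
x = torch.randn(4, n_inputs)                   # dummy batch of columns
dp = torch.full((4, n_levels), 50.0)           # hypothetical layer thickness (hPa)
print(model(x, dp).shape)                      # torch.Size([4, 48])
```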
A stochastic subgrid-scale parameterization based on Ruelle's response theory and proposed in Wouters and Lucarini [2012] is tested in the context of a low-order coupled ocean-atmosphere model for which a part of the atmospheric modes are considered as unresolved. A natural separation of the phase space into an invariant set and its complement allows for an analytical derivation of the different terms involved in the parameterization, namely the average, the fluctuation and the long-memory terms. In this case, the fluctuation term is an additive stochastic noise. Its application to the low-order system reveals that a considerable correction of the low-frequency variability along the invariant subset can be obtained, provided that the coupling is sufficiently weak. This new approach to scale separation opens new avenues for subgrid-scale parameterizations in multiscale systems used for climate forecasts.
Modern weather and climate models share a common heritage, and often even components; however, they are used in different ways to answer fundamentally different questions. As such, attempts to emulate them using machine learning should reflect this. While the use of machine learning to emulate weather forecast models is a relatively new endeavour, there is a rich history of climate model emulation. This is primarily because while weather modelling is an initial condition problem which intimately depends on the current state of the atmosphere, climate modelling is predominantly a boundary condition problem. In order to emulate the response of the climate to different drivers, therefore, representation of the full dynamical evolution of the atmosphere is neither necessary nor, in many cases, desirable. Climate scientists are also typically interested in different questions. Indeed, emulating the steady-state climate response has been possible for many years and provides significant speed increases that allow solving inverse problems for, e.g., parameter estimation. Nevertheless, the large datasets, non-linear relationships and limited training data make climate a domain that is rich in interesting machine learning challenges. Here I seek to set out the current state of climate model emulation and demonstrate how, despite some challenges, recent advances in machine learning provide new opportunities for creating useful statistical models of the climate.
We review some recent methods of subgrid-scale parameterization used in the context of climate modeling. These methods are developed to take into account (subgrid) processes that play an important role in the correct representation of atmospheric and climate variability. We illustrate these methods on a simple stochastic triad system relevant for atmospheric and climate dynamics, and we show in particular that the stability properties of the underlying dynamics of the subgrid processes have a considerable impact on their performance.
Climate models are complicated software systems that approximate atmospheric and oceanic fluid mechanics at a coarse spatial resolution. Typical climate forecasts only explicitly resolve processes larger than 100 km and approximate any process occurring below this scale (e.g. thunderstorms) using so-called parametrizations. Machine learning could improve upon the accuracy of some traditional physical parametrizations by learning from so-called global cloud-resolving models. We compare the performance of two machine learning models, random forests (RF) and neural networks (NNs), at parametrizing the aggregate effect of moist physics in a 3 km resolution global simulation with an atmospheric model. The NN outperforms the RF when evaluated offline on a testing dataset. However, when the ML models are coupled to an atmospheric model run at 200 km resolution, the NN-assisted simulation crashes within 7 days, while the RF-assisted simulations remain stable. Both runs produce more accurate weather forecasts than a baseline configuration, but globally averaged climate variables drift over longer timescales.
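The offline part of such a comparison reduces to fitting both models on the same training data and scoring them on a held-out set, with the caveat that offline skill does not guarantee stability once the parameterization is coupled to the dynamical core. The sketch below is illustrative only; the data files and hyperparameters are placeholders.

```python
# Illustrative offline comparison of an RF and an NN at predicting moist-physics
# tendencies on a held-out test set; names and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X = np.load("coarse_grained_state.npy")
y = np.load("moist_physics_tendencies.npy")
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "random forest": RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0),
    "neural network": MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "offline R^2:", r2_score(y_te, model.predict(X_te)))
```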