
Deep learning to represent sub-grid processes in climate models

 Added by Stephan Rasp
 Publication date 2018
Language: English





The representation of nonlinear sub-grid processes, especially clouds, has been a major source of uncertainty in climate models for decades. Cloud-resolving models better represent many of these processes and can now be run globally but only for short-term simulations of at most a few years because of computational limitations. Here we demonstrate that deep learning can be used to capture many advantages of cloud-resolving modeling at a fraction of the computational cost. We train a deep neural network to represent all atmospheric sub-grid processes in a climate model by learning from a multi-scale model in which convection is treated explicitly. The trained neural network then replaces the traditional sub-grid parameterizations in a global general circulation model in which it freely interacts with the resolved dynamics and the surface-flux scheme. The prognostic multi-year simulations are stable and closely reproduce not only the mean climate of the cloud-resolving simulation but also key aspects of variability, including precipitation extremes and the equatorial wave spectrum. Furthermore, the neural network approximately conserves energy despite not being explicitly instructed to. Finally, we show that the neural network parameterization generalizes to new surface forcing patterns but struggles to cope with temperatures far outside its training manifold. Our results show the feasibility of using deep learning for climate model parameterization. In a broader context, we anticipate that data-driven Earth System Model development could play a key role in reducing climate prediction uncertainty in the coming decade.
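The core idea, fitting a network that maps the coarse-grid column state to the sub-grid tendencies diagnosed from the explicit-convection model, can be illustrated with a toy sketch. The data, network size, and training loop below are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for training data: coarse-grid column state
# (e.g. temperature and humidity profiles) mapped to sub-grid tendencies.
n, n_in, n_hidden, n_out = 2000, 8, 32, 8
X = rng.normal(size=(n, n_in))
Y = np.tanh(X @ (0.5 * rng.normal(size=(n_in, n_out))))  # nonlinear "truth"

# One-hidden-layer network trained by full-batch gradient descent on MSE.
W1 = 0.1 * rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.05

for _ in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = H @ W2 + b2 - Y             # prediction error
    dH = (err @ W2.T) * (1.0 - H**2)  # backprop through tanh
    W2 -= lr * (H.T @ err) / n; b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / n;  b1 -= lr * dH.mean(axis=0)

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

In the paper, the analogous trained network then replaces the conventional parameterization inside the GCM's time stepping, so it is queried once per grid column per time step.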




Read More

We introduce the problem of learning distributed representations of edits. By combining a neural editor with an edit encoder, our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
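As a loose analogy for the encoder/editor split, one can represent an edit as a displacement in an embedding space and then apply that displacement to a new input. The bag-of-words embedding below is a deliberately crude stand-in for the paper's neural editor and edit encoder:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random embeddings for a tiny vocabulary (illustrative only).
vocab = {"the": 0, "cat": 1, "cats": 2, "sat": 3, "sit": 4}
E = rng.normal(size=(len(vocab), 4))

def embed(tokens):
    """Crude bag-of-words 'encoder': sum of token embeddings."""
    return sum(E[vocab[t]] for t in tokens)

# Represent the edit "cat -> cats" as a displacement in embedding space.
edit_vec = embed(["the", "cats", "sat"]) - embed(["the", "cat", "sat"])

# A toy 'editor' applies the same displacement to a new input; under this
# additive embedding it lands exactly on the edited sentence.
new_repr = embed(["the", "cat", "sit"]) + edit_vec
```

A real edit encoder learns which parts of the before/after pair are salient rather than relying on this additive structure.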
Global climate models represent small-scale processes such as clouds and convection using quasi-empirical models known as parameterizations, and these parameterizations are a leading cause of uncertainty in climate projections. A promising alternative approach is to use machine learning to build new parameterizations directly from high-resolution model output. However, parameterizations learned from three-dimensional model output have not yet been successfully used for simulations of climate. Here we use a random forest to learn a parameterization of subgrid processes from output of a three-dimensional high-resolution atmospheric model. Integrating this parameterization into the atmospheric model leads to stable simulations at coarse resolution that replicate the climate of the high-resolution simulation. The parameterization obeys physical constraints and captures important statistics such as precipitation extremes. The ability to learn from a fully three-dimensional simulation presents an opportunity for learning parameterizations from the wide range of global high-resolution simulations that are now emerging.
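The regression task can be sketched with a toy bagged ensemble of depth-1 trees, a stand-in for the authors' random forest; the data and model here are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression stand-in for "coarse-grid state -> sub-grid tendency".
X = rng.uniform(-1, 1, size=(500, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=500)

def fit_stump(x, y):
    """Best single-split (depth-1) regression tree on one feature."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, 0.0, ys.mean(), ys.mean())
    for i in range(10, len(xs) - 10):  # avoid tiny leaves
        left, right = ys[:i].mean(), ys[i:].mean()
        sse = ((ys[:i] - left) ** 2).sum() + ((ys[i:] - right) ** 2).sum()
        if sse < best[0]:
            best = (sse, xs[i], left, right)
    return best[1:]  # (threshold, left value, right value)

# Bagging: each "tree" is a stump fit on a bootstrap resample.
stumps = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    stumps.append(fit_stump(X[idx, 0], y[idx]))

def forest_predict(x):
    preds = [np.where(x < t, lo, hi) for t, lo, hi in stumps]
    return np.mean(preds, axis=0)

mse = float(((forest_predict(X[:, 0]) - y) ** 2).mean())
```

The point is only that an ensemble of weak regressors fit to high-resolution output can emulate a nonlinear sub-grid response; the authors' forest uses deep trees and the full three-dimensional model state.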
Palaeo data have been frequently used to determine the equilibrium (Charney) climate sensitivity $S^a$, and, if slow feedback processes (e.g. land ice-albedo) are adequately taken into account, they indicate a similar range as estimates based on instrumental data and climate model results. Most studies implicitly assume the (fast) feedback processes to be independent of the background climate state, e.g., equally strong during warm and cold periods. Here we assess the dependency of the fast feedback processes on the background climate state using data of the last 800 kyr and a conceptual climate model for interpretation. Applying a new method to account for background state dependency, we find $S^a = 0.61 \pm 0.06$ K (W m$^{-2}$)$^{-1}$ using the latest LGM temperature reconstruction and significantly lower climate sensitivity during glacial climates. Due to uncertainties in reconstructing the LGM temperature anomaly, $S^a$ is estimated in the range $S^a = 0.55$-$0.95$ K (W m$^{-2}$)$^{-1}$.
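To put the quoted sensitivities in familiar terms, the equilibrium warming for a radiative forcing dF is dT = S^a * dF. The sketch below applies the abstract's range to a commonly cited CO2-doubling forcing of about 3.7 W m^-2; that forcing value is an assumption here, not from the abstract:

```python
# Equilibrium warming for a radiative forcing dF is dT = S * dF.
F_2xCO2 = 3.7  # W m^-2, commonly cited CO2-doubling forcing (assumed here)
S_low, S_central, S_high = 0.55, 0.61, 0.95  # K (W m^-2)^-1, from the abstract

dT_low, dT_central, dT_high = (S * F_2xCO2 for S in (S_low, S_central, S_high))
print(f"dT range: {dT_low:.1f} K to {dT_high:.1f} K (central {dT_central:.1f} K)")
# -> roughly 2.0 K to 3.5 K of equilibrium warming per CO2 doubling
```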
Artificial neural networks have the potential to emulate cloud processes with higher accuracy than the semi-empirical emulators currently used in climate models. However, neural networks do not intrinsically conserve energy and mass, which is an obstacle to using them for long-term climate predictions. Here, we propose two methods to enforce linear conservation laws in neural-network emulators of physical models: constraining (1) the loss function or (2) the architecture of the network itself. Applied to the emulation of explicitly resolved cloud processes in a prototype multi-scale climate model, we show that architecture constraints can enforce conservation laws to satisfactory numerical precision, while all constraints help the neural network generalize better to conditions outside of its training set, such as global warming.
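The architecture route can be made concrete: append a fixed projection layer that maps any raw network output onto the hyperplane satisfying a linear conservation law. The sketch below is a generic illustration of that idea, not the authors' implementation; the weight vector and outputs are made up:

```python
import numpy as np

def conserve(y_raw, a, c):
    """Project a raw network output onto the hyperplane a . y = c.

    Used as a fixed final layer, this enforces one linear conservation
    law exactly, whatever the upstream network produces.
    """
    a = np.asarray(a, dtype=float)
    return y_raw + ((c - a @ y_raw) / (a @ a)) * a

# Made-up example: require four layer tendencies to sum to zero.
a = np.array([1.0, 1.0, 1.0, 1.0])       # conservation weights (illustrative)
y_raw = np.array([0.3, -0.1, 0.5, 0.2])  # unconstrained emulator output
y = conserve(y_raw, a, c=0.0)
# a @ y is now zero to machine precision; the loss-function alternative in
# the abstract is a soft version, adding a penalty like (a @ y - c)**2.
```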
In this article we detail the use of machine learning for spatiotemporally dynamic turbulence model classification and hybridization in large eddy simulations (LES) of turbulence. Our predictive framework is built around determining local conditional probabilities for turbulence models with differing underlying hypotheses. As a first deployment of this learning, we classify each point on the computational grid as requiring the functional hypothesis, the structural hypothesis, or no modeling at all. This ensures that the appropriate model is specified from a priori knowledge and that an efficient balance of model characteristics is obtained in a particular flow computation. In addition, we use the same conditional probability predictions to blend turbulence models into another hybrid closure. Our test case is Kraichnan turbulence, which exhibits a strong interplay of enstrophy and energy cascades in the wavenumber domain. Our results indicate that the proposed methods lead to robust and stable closures and may be used to combine the strengths of various models for predicting complex flow phenomena.
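Both uses of the conditional probabilities, hard per-point classification and soft blending, can be sketched generically. The probabilities and closure terms below are random stand-ins, not outputs of an actual LES:

```python
import numpy as np

rng = np.random.default_rng(2)
n_points = 5

# Stand-in for a trained classifier's per-point probabilities over three
# hypotheses: functional model, structural model, no model (via softmax).
logits = rng.normal(size=(n_points, 3))
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Illustrative closure terms from each hypothesis ("no model" contributes 0).
closures = np.stack(
    [rng.normal(size=n_points),   # functional, e.g. eddy-viscosity term
     rng.normal(size=n_points),   # structural, e.g. scale-similarity term
     np.zeros(n_points)], axis=1)

# (1) Classification: apply the single most probable model at each point.
tau_classified = closures[np.arange(n_points), p.argmax(axis=1)]

# (2) Hybridization: probability-weighted blend of all closures.
tau_blended = (p * closures).sum(axis=1)
```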
