
Physics-informed generative neural network: an application to troposphere temperature prediction

Posted by: Jie Gao
Publication date: 2021
Paper language: English





The troposphere is one of the atmospheric layers where most weather phenomena occur. Temperature variations in the troposphere, especially at 500 hPa, a typical level of the middle troposphere, are significant indicators of future weather changes. Numerical weather prediction is effective for temperature prediction, but its computational complexity hinders a timely response. This paper proposes a novel temperature prediction approach in the framework of physics-informed deep learning. The new model, called PGnet, builds upon a generative neural network with a mask matrix. The mask is designed to distinguish the low-quality regions predicted by the first, physics-based stage. The generative neural network takes the mask as a prior for the second-stage refined predictions. A mask loss and a jump-pattern strategy are developed to train the generative neural network without accumulating errors when making time-series predictions. Experiments on ERA5 demonstrate that PGnet can generate more refined temperature predictions than the state-of-the-art.
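The abstract describes a two-stage pipeline: a physics-based first stage produces an initial forecast, a mask marks the regions where that forecast is unreliable, and a generative second stage refines only those regions under a mask-weighted loss. The PyTorch sketch below illustrates that structure as a minimal example under stated assumptions; the module names, architecture, and loss weighting are illustrative placeholders, not the authors' PGnet implementation, and the jump-pattern training strategy and the physical first-stage model are not reproduced here.

```python
# Illustrative two-stage, mask-guided refinement step in the spirit of PGnet.
# All names, shapes, and loss weights are placeholders.
import torch
import torch.nn as nn

class MaskedRefiner(nn.Module):
    """Second-stage generator that refines a physics-based first guess,
    conditioned on a mask marking low-quality regions of that guess."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        # Input: first-stage prediction concatenated with its mask.
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, first_guess, mask):
        x = torch.cat([first_guess, mask], dim=1)
        # Correct only the regions flagged by the mask; keep the rest.
        return first_guess + mask * self.net(x)

def mask_loss(pred, target, mask, alpha=1.0):
    """MSE with extra weight on the low-quality (masked) regions."""
    err = (pred - target) ** 2
    return err.mean() + alpha * (mask * err).mean()

# Random tensors stand in for 500 hPa temperature fields.
first_guess = torch.randn(4, 1, 64, 64)            # stage-1 (physics) forecast
mask = (torch.rand(4, 1, 64, 64) > 0.8).float()    # 1 where stage 1 is unreliable
target = torch.randn(4, 1, 64, 64)

model = MaskedRefiner()
refined = model(first_guess, mask)
mask_loss(refined, target, mask).backward()
```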




Read also

Data assimilation for parameter and state estimation in subsurface transport problems remains a significant challenge due to the sparsity of measurements, the heterogeneity of porous media, and the high computational cost of forward numerical models. We present a machine learning method based on physics-informed deep neural networks (DNNs) for estimating space-dependent hydraulic conductivity, hydraulic head, and concentration fields from sparse measurements. In this approach, we employ individual DNNs to approximate the unknown parameters (e.g., hydraulic conductivity) and states (e.g., hydraulic head and concentration) of a physical system, and jointly train these DNNs by minimizing a loss function that consists of the governing-equation residuals in addition to the error with respect to measurement data. We apply this approach to assimilate conductivity, hydraulic head, and concentration measurements for joint inversion of the conductivity, hydraulic head, and concentration fields in a steady-state advection-dispersion problem. We study the accuracy of the physics-informed DNN approach with respect to data size, number of variables (conductivity and head versus conductivity, head, and concentration), DNN size, and DNN initialization during training. We demonstrate that the physics-informed DNNs are significantly more accurate than standard data-driven DNNs when the training set consists of sparse data. We also show that the accuracy of parameter estimation increases as additional variables are inverted jointly.
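To make the joint-training idea concrete, here is a minimal one-dimensional sketch under simplified assumptions: one network approximates the log-conductivity and another the hydraulic head, and the loss combines a Darcy-flow residual, d/dx(K dh/dx) = 0, with sparse head measurements. The geometry, data, and weights are synthetic placeholders and much simpler than the advection-dispersion setting of the paper.

```python
# Minimal 1-D sketch: one DNN per unknown field, trained on a PDE residual
# plus a sparse data misfit. All quantities are synthetic placeholders.
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(),
                         nn.Linear(32, 1))

K_net, h_net = mlp(), mlp()          # conductivity and head networks

def darcy_residual(x):
    """Residual of d/dx(K dh/dx) = 0 via automatic differentiation."""
    x = x.requires_grad_(True)
    K = torch.exp(K_net(x))          # exp keeps conductivity positive
    h = h_net(x)
    dh = torch.autograd.grad(h, x, torch.ones_like(h), create_graph=True)[0]
    flux = K * dh
    return torch.autograd.grad(flux, x, torch.ones_like(flux), create_graph=True)[0]

x_data = torch.rand(10, 1)           # sparse measurement locations
h_data = torch.sin(x_data)           # stand-in for measured heads
x_col = torch.rand(200, 1)           # collocation points for the residual

opt = torch.optim.Adam(list(K_net.parameters()) + list(h_net.parameters()), lr=1e-3)
for _ in range(1000):
    opt.zero_grad()
    loss = (darcy_residual(x_col) ** 2).mean() + ((h_net(x_data) - h_data) ** 2).mean()
    loss.backward()
    opt.step()
```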
Wei Peng, Jun Zhang, Weien Zhou (2021)
Physics Informed Neural Network (PINN) is a scientific computing framework used to solve both forward and inverse problems modeled by Partial Differential Equations (PDEs). This paper introduces IDRLnet, a Python toolbox for modeling and solving problems through PINN systematically. IDRLnet constructs the framework for a wide range of PINN algorithms and applications. It provides a structured way to incorporate geometric objects, data sources, artificial neural networks, loss metrics, and optimizers within Python. Furthermore, it provides functionality to solve noisy inverse problems, variational minimization, and integral differential equations. New PINN variants can be integrated into the framework easily. Source code, tutorials, and documentation are available at https://github.com/idrl-lab/idrlnet.
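IDRLnet's own API is not reproduced here; the plain-PyTorch sketch below only illustrates the generic forward-problem workflow that such a toolbox organizes (sample a geometry, define a network, penalize the PDE residual and the boundary data, optimize), using a toy Poisson problem with a known solution.

```python
# Plain-PyTorch forward-problem PINN for u'' = -pi^2 sin(pi x) on (0, 1),
# with u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).
import math
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))

def pde_residual(x):
    x = x.requires_grad_(True)
    u = u_net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u + math.pi ** 2 * torch.sin(math.pi * x)

x_col = torch.rand(200, 1)               # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])      # boundary points where u = 0

opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = (pde_residual(x_col) ** 2).mean() + (u_net(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
# After training, u_net(x) should approximate sin(pi * x).
```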
This work investigates the framework and performance issues of the composite neural network, which is composed of a collection of pre-trained and non-instantiated neural network models connected as a rooted directed acyclic graph for solving complicated applications. A pre-trained neural network model is generally well trained and targeted to approximate a specific function. Despite a general belief that a composite neural network may perform better than a single component, the overall performance characteristics are not clear. In this work, we construct the framework of a composite network and prove that a composite neural network performs better than any of its pre-trained components with a high probability bound. In addition, if an extra pre-trained component is added to a composite network, then with high probability the overall performance will not be degraded. In the study, we explore a complicated application, PM2.5 prediction, to illustrate the correctness of the proposed composite network theory. In the empirical evaluations of PM2.5 prediction, the constructed composite neural network models support the proposed theory and perform better than other machine learning models, demonstrating the advantages of the proposed framework.
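As a minimal illustration of the composite idea, the sketch below freezes two stand-in "pre-trained" components and trains only a small combiner at the root of a shallow DAG; the component architectures and the PM2.5 features are hypothetical placeholders, not the models evaluated in the paper.

```python
# Minimal composite network: two frozen, pre-trained components feed a small
# trainable combiner, forming a shallow rooted DAG.
import torch
import torch.nn as nn

def pretrained_component(in_dim, out_dim):
    """Stand-in for a pre-trained model; its weights are frozen."""
    net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, out_dim))
    for p in net.parameters():
        p.requires_grad_(False)
    return net

comp_a = pretrained_component(8, 4)    # e.g., a weather-feature sub-model
comp_b = pretrained_component(8, 4)    # e.g., an emission-feature sub-model

class CompositeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(8, 1)    # root node combining both components

    def forward(self, x):
        return self.head(torch.cat([comp_a(x), comp_b(x)], dim=1))

model = CompositeNet()
x = torch.randn(32, 8)                 # batch of placeholder PM2.5 features
y = torch.randn(32, 1)
loss = ((model(x) - y) ** 2).mean()
loss.backward()                        # only the root combiner receives gradients
```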
Deep learning-based surrogate modeling is becoming a promising approach for learning and simulating dynamical systems. Deep-learning methods, however, find it very challenging to learn stiff dynamics. In this paper, we develop DAE-PINN, the first effective deep-learning framework for learning and simulating the solution trajectories of nonlinear differential-algebraic equations (DAEs), which present a form of infinite stiffness and describe, for example, the dynamics of power networks. DAE-PINN bases its effectiveness on the synergy between implicit Runge-Kutta time-stepping schemes (designed specifically for solving DAEs) and physics-informed neural networks (PINNs), i.e., deep neural networks trained to satisfy the dynamics of the underlying problem. Furthermore, our framework (i) enforces the neural network to satisfy the DAEs as (approximate) hard constraints using a penalty-based method and (ii) enables simulating DAEs over long time horizons. We showcase the effectiveness and accuracy of DAE-PINN by learning and simulating the solution trajectories of a three-bus power network.
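A heavily simplified sketch of the ingredients: a network maps the current state of a toy semi-explicit DAE, y' = f(y, z) with 0 = g(y, z), to the next state, and the loss penalizes the residual of an implicit Euler step (the simplest implicit Runge-Kutta scheme) together with the algebraic constraint. The DAE, step size, and network are toy stand-ins, not the paper's power-network model or its specific IRK scheme, and the constraints here are soft penalties rather than the paper's (approximate) hard constraints.

```python
# Toy stand-in for the DAE-PINN idea: learn a one-step map for a semi-explicit
# DAE by penalizing an implicit Euler residual plus the algebraic constraint.
import torch
import torch.nn as nn

dt = 0.05

def f(y, z):           # differential part: y' = -y + z
    return -y + z

def g(y, z):           # algebraic constraint: z = y^2
    return z - y ** 2

step_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

opt = torch.optim.Adam(step_net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    state = torch.rand(64, 2)                   # batch of (y, z) initial states
    y0, z0 = state[:, :1], state[:, 1:]
    nxt = step_net(state)
    y1, z1 = nxt[:, :1], nxt[:, 1:]
    # Implicit Euler residual for y plus penalty on the algebraic equation.
    res_ode = y1 - y0 - dt * f(y1, z1)
    res_alg = g(y1, z1)
    loss = (res_ode ** 2).mean() + (res_alg ** 2).mean()
    loss.backward()
    opt.step()
```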
We introduce conditional PINNs (physics-informed neural networks) for estimating the solutions of classes of eigenvalue problems. The concept of PINNs is expanded to learn not only the solution of one particular differential equation but the solutions of a class of problems. We demonstrate this idea by estimating the coercive field of permanent magnets, which depends on the width and strength of local defects. When the neural network incorporates the physics of magnetization reversal, training can be achieved in an unsupervised way; there is no need to generate labeled training data. The presented test cases have been rigorously studied in the past, so a detailed and straightforward comparison with analytical solutions is possible. We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
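The conditioning idea can be sketched with a toy parametric problem: the network receives both the coordinate and a problem parameter, so a single model covers the whole family. The family u'(x) = -k u, u(0) = 1 below is a hypothetical stand-in for the magnetization-reversal class studied in the paper; training is unsupervised in the same sense, using only the residual and the initial condition.

```python
# Conditional PINN sketch: the input is (x, k), so one network learns
# u(x; k) = exp(-k x) for a whole range of decay rates k.
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                      nn.Linear(32, 32), nn.Tanh(),
                      nn.Linear(32, 1))

opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
for _ in range(3000):
    opt.zero_grad()
    x = torch.rand(256, 1, requires_grad=True)    # coordinates in (0, 1)
    k = torch.rand(256, 1) * 4.0 + 1.0            # problem parameter k in (1, 5)
    u = u_net(torch.cat([x, k], dim=1))
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    res = du + k * u                              # ODE residual u' + k u = 0
    x0 = torch.zeros(64, 1)
    k0 = torch.rand(64, 1) * 4.0 + 1.0
    ic = u_net(torch.cat([x0, k0], dim=1)) - 1.0  # initial condition u(0) = 1
    loss = (res ** 2).mean() + (ic ** 2).mean()
    loss.backward()
    opt.step()
# After training, u_net([x, k]) should approximate exp(-k * x) for k in (1, 5).
```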
