
Physics-based Deep Learning

Submitted by Nils Thuerey
Publication date: 2021
Language: English





This digital book contains a practical and comprehensive introduction to everything related to deep learning in the context of physical simulations. As much as possible, all topics come with hands-on code examples in the form of Jupyter notebooks to quickly get started. Beyond standard supervised learning from data, we'll look at physical loss constraints, more tightly coupled learning algorithms with differentiable simulations, as well as reinforcement learning and uncertainty modeling. We live in exciting times: these methods have a huge potential to fundamentally change what computer simulations can achieve.
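To give a flavor of what such notebooks typically contain, below is a minimal, self-contained sketch (not taken from the book) of a supervised data loss combined with a physical residual loss for the 1D Burgers equation u_t + u u_x = nu u_xx; the network size, loss weighting, and synthetic stand-in data are arbitrary choices.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))
    nu = 0.01

    def pde_residual(x, t):
        # residual of u_t + u*u_x - nu*u_xx for the network's prediction u(x, t)
        x = x.requires_grad_(True)
        t = t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
        u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
        return u_t + u * u_x - nu * u_xx

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(1000):
        # labeled samples (synthetic stand-in here) and unlabeled collocation points
        x_d, t_d = torch.rand(128, 1), torch.rand(128, 1)
        u_d = torch.sin(torch.pi * x_d)
        x_c, t_c = torch.rand(256, 1), torch.rand(256, 1)

        loss_data = ((net(torch.cat([x_d, t_d], dim=1)) - u_d) ** 2).mean()
        loss_phys = (pde_residual(x_c, t_c) ** 2).mean()
        loss = loss_data + 0.1 * loss_phys   # arbitrary weighting of the physics term

        opt.zero_grad(); loss.backward(); opt.step()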


Read also

The free energy plays a fundamental role in descriptions of many systems in continuum physics. Notably, in multiphysics applications, it encodes thermodynamic coupling between different fields. It thereby gives rise to driving forces on the dynamics of interaction between the constituent phenomena. In mechano-chemically interacting materials systems, even consideration of only compositions, order parameters and strains can render the free energy to be reasonably high-dimensional. In proposing the free energy as a paradigm for scale bridging, we have previously exploited neural networks for their representation of such high-dimensional functions. Specifically, we have developed an integrable deep neural network (IDNN) that can be trained to free energy derivative data obtained from atomic scale models and statistical mechanics, then analytically integrated to recover a free energy density function. The motivation comes from the statistical mechanics formalism, in which certain free energy derivatives are accessible for control of the system, rather than the free energy itself. Our current work combines the IDNN with an active learning workflow to improve sampling of the free energy derivative data in a high-dimensional input space. Treated as input-output maps, machine learning accommodates role reversals between independent and dependent quantities as the mathematical descriptions change with scale bridging. As a prototypical system we focus on Ni-Al. Phase field simulations using the resulting IDNN representation for the free energy density of Ni-Al demonstrate that the appropriate physics of the material have been learned. To the best of our knowledge, this represents the most complete treatment of scale bridging, using the free energy for a practical materials system, that starts with electronic structure calculations and proceeds through statistical mechanics to continuum physics.
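As a rough illustration of the integrable-network idea described above (not the authors' IDNN implementation; the data and layer sizes are hypothetical), the sketch below parameterizes a free energy F(x) with a small network and fits its autodiff gradient to sampled derivative data, so that F itself is recovered up to an additive constant.

    import torch
    import torch.nn as nn

    free_energy = nn.Sequential(nn.Linear(3, 64), nn.Softplus(),
                                nn.Linear(64, 64), nn.Softplus(),
                                nn.Linear(64, 1))

    opt = torch.optim.Adam(free_energy.parameters(), lr=1e-3)
    for step in range(2000):
        # x: e.g. compositions / order parameters; mu_target: synthetic stand-in
        # for sampled free energy derivatives (chemical potentials)
        x = torch.rand(256, 3, requires_grad=True)
        mu_target = torch.sin(x).detach()
        F = free_energy(x)
        # the network's gradient plays the role of the derivative data;
        # F itself is then the integrated free energy, up to a constant
        mu_pred = torch.autograd.grad(F.sum(), x, create_graph=True)[0]
        loss = ((mu_pred - mu_target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()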
Here we present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains, involving fluids, rigid solids, and deformable materials interacting with one another. Our framework---which we term Graph Network-based Simulators (GNS)---represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing. Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time. Our model was robust to hyperparameter choices across various evaluation metrics: the main determinants of long-term performance were the number of message-passing steps, and mitigating the accumulation of error by corrupting the training data with noise. Our GNS framework advances the state-of-the-art in learned physical simulation, and holds promise for solving a wide range of complex forward and inverse problems.
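A heavily simplified sketch of learned message passing over particles, in the spirit of (but not identical to) the GNS setup above; the feature choices, layer sizes, neighborhood construction, and integration step are all assumptions for illustration.

    import torch
    import torch.nn as nn

    def mlp(n_in, n_out):
        return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_out))

    encoder    = mlp(6, 16)            # position + velocity -> node embedding
    edge_model = mlp(2 * 16 + 3, 16)   # sender/receiver embeddings + relative position
    node_model = mlp(16 + 16, 3)       # node embedding + aggregated messages -> acceleration

    def step(pos, vel, senders, receivers, dt=0.01):
        # one learned message-passing pass, then semi-implicit Euler integration
        h = encoder(torch.cat([pos, vel], dim=-1))
        rel = pos[senders] - pos[receivers]
        msgs = edge_model(torch.cat([h[senders], h[receivers], rel], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, receivers, msgs)
        acc = node_model(torch.cat([h, agg], dim=-1))
        vel_next = vel + dt * acc
        return pos + dt * vel_next, vel_next

    # toy usage: 4 particles on a small fixed edge list
    pos, vel = torch.rand(4, 3), torch.zeros(4, 3)
    senders, receivers = torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])
    pos, vel = step(pos, vel, senders, receivers)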
Data modeling and reduction for in situ processing is important. Feature-driven methods for in situ data analysis and reduction are a priority for future exascale machines, as there are currently very few such methods. We investigate a deep-learning-based workflow that targets in situ data processing using autoencoders. We propose a Residual Autoencoder integrated with a Residual in Residual Dense Block (RRDB) to obtain better performance. Our proposed framework compressed our test data from 2.1 MB to 66 KB per 3D volume timestep.
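For orientation only, here is a minimal 3D convolutional autoencoder sketch, not the RRDB architecture from the paper; the channel counts and volume size are arbitrary. The encoder output is the compressed representation that would be written out in situ, and the decoder reconstructs the volume offline.

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(
        nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv3d(16, 4, 3, stride=2, padding=1),
    )
    decoder = nn.Sequential(
        nn.ConvTranspose3d(4, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
    )

    volume = torch.rand(1, 1, 64, 64, 64)   # one 3D field timestep
    code = encoder(volume)                  # compressed latent: 4 x 16^3 values (16x smaller)
    recon = decoder(code)
    loss = ((recon - volume) ** 2).mean()   # reconstruction error to minimize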
Didier Lucor, 2021
Recent works have explored the potential of machine learning as data-driven turbulence closures for RANS and LES techniques. Beyond these advances, the high expressivity and agility of physics-informed neural networks (PINNs) make them promising candidates for full fluid flow PDE modeling. An important question is whether this new paradigm, exempt from the traditional notion of discretization of the underlying operators very much connected to the flow scales resolution, is capable of sustaining high levels of turbulence characterized by multi-scale features? We investigate the use of PINN surrogate modeling for turbulent Rayleigh-Bénard (RB) convection flows in rough and smooth rectangular cavities, mainly relying on DNS temperature data from the fluid bulk. We carefully quantify the computational requirements under which the formulation is capable of accurately recovering the flow hidden quantities. We then propose a new padding technique to distribute some of the scattered coordinates (at which PDE residuals are minimized) around the region of labeled data acquisition. We show how it comes to play as a regularization close to the training boundaries, which are zones of poor accuracy for standard PINNs, and results in a noticeable global accuracy improvement at iso-budget. Finally, we propose for the first time to relax the incompressibility condition in such a way that it drastically benefits the optimization search and results in a much improved convergence of the composite loss function. The RB results obtained at high Rayleigh number Ra = 2 × 10^9 are particularly impressive: the predictive accuracy of the surrogate over the entire half a billion DNS coordinates yields errors for all flow variables ranging between 0.3% and 4% in the relative L2 norm, with a training relying only on 1.6% of the DNS data points.
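The hedged sketch below illustrates the kind of composite PINN loss the abstract refers to, with a data term on temperature only and a relaxed (penalized rather than exactly enforced) divergence term; it is not the authors' code, and the network layout, penalty weight, and omission of the momentum/energy residuals are simplifications.

    import torch
    import torch.nn as nn

    # net maps (x, y, t) to (u, v, T, p)
    net = nn.Sequential(nn.Linear(3, 128), nn.Tanh(),
                        nn.Linear(128, 128), nn.Tanh(),
                        nn.Linear(128, 4))

    def composite_loss(xyt_data, T_data, xyt_col):
        # data term: temperature only, mimicking training from bulk DNS temperature
        loss_data = ((net(xyt_data)[:, 2:3] - T_data) ** 2).mean()

        # relaxed incompressibility: penalize div(u, v) at collocation points
        # instead of enforcing it exactly (momentum/energy residuals omitted)
        xyt = xyt_col.requires_grad_(True)
        out = net(xyt)
        du = torch.autograd.grad(out[:, 0].sum(), xyt, create_graph=True)[0]
        dv = torch.autograd.grad(out[:, 1].sum(), xyt, create_graph=True)[0]
        div = du[:, 0] + dv[:, 1]
        return loss_data + 0.1 * (div ** 2).mean()   # arbitrary penalty weight

    # usage with synthetic stand-ins for DNS samples and collocation points
    loss = composite_loss(torch.rand(64, 3), torch.rand(64, 1), torch.rand(256, 3))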
Model-based reinforcement learning (MBRL) is believed to have much higher sample efficiency compared to model-free algorithms by learning a predictive model of the environment. However, the performance of MBRL highly relies on the quality of the learned model, which is usually built in a black-box manner and may have poor predictive accuracy outside of the data distribution. The deficiencies of the learned model may prevent the policy from being fully optimized. Although some uncertainty analysis-based remedies have been proposed to alleviate this issue, model bias still poses a great challenge for MBRL. In this work, we propose to leverage the prior knowledge of underlying physics of the environment, where the governing laws are (partially) known. In particular, we developed a physics-informed MBRL framework, where governing equations and physical constraints are utilized to inform the model learning and policy search. By incorporating the prior information of the environment, the quality of the learned model can be notably improved, while the required interactions with the environment are significantly reduced, leading to better sample efficiency and learning performance. The effectiveness and merit have been demonstrated over a handful of classic control problems, where the environments are governed by canonical ordinary/partial differential equations.
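One common way to inject known physics into a learned dynamics model, sketched below for a toy pendulum, is to let a network predict only a residual correction on top of the (partially) known governing equation. This is an illustrative assumption, not the framework proposed in the paper; the constants, network size, and state/action layout are made up for the example.

    import torch
    import torch.nn as nn

    correction = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 2))

    def known_dynamics(state, action, g=9.8, l=1.0, dt=0.05):
        # partially known physics: frictionless pendulum, torque = action
        theta, omega = state[:, 0], state[:, 1]
        d_omega = -(g / l) * torch.sin(theta) + action[:, 0]
        return torch.stack([theta + dt * omega, omega + dt * d_omega], dim=1)

    def predict_next(state, action):
        # physics prior plus a learned correction for unmodeled effects
        prior = known_dynamics(state, action)
        return prior + correction(torch.cat([state, action], dim=1))

    # training: fit the correction to observed transitions (s, a, s')
    opt = torch.optim.Adam(correction.parameters(), lr=1e-3)
    def train_step(s, a, s_next):
        loss = ((predict_next(s, a) - s_next) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()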
