
Machine Learning-Based Optimal Mesh Generation in Computational Fluid Dynamics

Posted by: Friedrich Menhorn
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Computational Fluid Dynamics (CFD) is a major sub-field of engineering. Corresponding flow simulations are typically characterized by heavy computational resource requirements. Often, very fine and complex meshes are required to resolve physical effects appropriately. Since all CFD algorithms scale at least linearly with the size of the underlying mesh discretization, finding an optimal mesh is key for computational efficiency. One methodology used to find optimal meshes is goal-oriented adaptive mesh refinement. However, this is typically computationally demanding and only available in a limited number of tools. Within this contribution, we adopt a machine learning approach to identify optimal mesh densities. We generate optimized meshes using classical methodologies and propose to train a convolutional network that predicts optimal mesh densities for arbitrary geometries. The proposed concept is validated on 2D wind tunnel simulations comprising more than 60,000 runs. Using a training set of 20,000 simulations, we achieve accuracies of more than 98.7%. The corresponding predictions of optimal meshes can be used as input for any mesh generation and CFD tool. Thus, without complex computations, any CFD engineer can start their simulations from a high-quality mesh.
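As a minimal sketch of the kind of convolutional density predictor the abstract describes (the network below, its layer sizes, and the rasterised geometry-mask input are assumptions for illustration, not the authors' architecture):

```python
import torch
import torch.nn as nn

class MeshDensityCNN(nn.Module):
    """Sketch of a CNN mapping a rasterised 2D geometry mask to a
    per-pixel mesh-density field (layer sizes are assumptions)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 1, 4, stride=2, padding=1), nn.Softplus(),
        )  # Softplus keeps the predicted density strictly positive

    def forward(self, geometry_mask):
        return self.decoder(self.encoder(geometry_mask))

# toy usage: a 128x128 binary mask of the obstacle in the wind tunnel
model = MeshDensityCNN()
density = model(torch.zeros(1, 1, 128, 128))   # -> (1, 1, 128, 128) density map
```

The predicted density map would then be handed to a standard mesh generator as a sizing field, as the abstract suggests.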




Read also

Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics and plasma physics. Fluids are well described by the Navier-Stokes equations, but solving these equations at scale rema ins daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable trade-offs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large eddy simulation, our results are as accurate as baseline solvers with 8-10x finer resolution in each spatial dimension, resulting in 40-80x fold computational speedups. Our method remains stable during long simulations, and generalizes to forcing functions and Reynolds numbers outside of the flows where it is trained, in contrast to black box machine learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.
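A minimal sketch of the general idea of augmenting a cheap coarse-grid step with a learned correction (the placeholder diffusion step, the small CNN, and all names below are assumptions, not the paper's solver):

```python
import torch
import torch.nn as nn

class LearnedCorrection(nn.Module):
    """Small CNN that predicts a per-cell correction added to a cheap
    coarse-grid update (illustrative stand-in for a learned component)."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, u):
        return self.net(u)

def coarse_step(u, dt=0.01, nu=1e-3):
    # placeholder explicit diffusion update on a periodic grid (illustrative only)
    lap = (torch.roll(u, 1, -1) + torch.roll(u, -1, -1)
           + torch.roll(u, 1, -2) + torch.roll(u, -1, -2) - 4 * u)
    return u + dt * nu * lap

correction = LearnedCorrection()
u = torch.randn(1, 1, 64, 64)            # one scalar field on a 64x64 grid
u_next = coarse_step(u) + correction(u)  # coarse update plus learned correction
```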
Mesh-based simulations are central to modeling complex physical systems in many disciplines across science and engineering. Mesh representations support powerful numerical integration methods and their resolution can be adapted to strike favorable trade-offs between accuracy and efficiency. However, high-dimensional scientific simulations are very expensive to run, and solvers and parameters must often be tuned individually to each system studied. Here we introduce MeshGraphNets, a framework for learning mesh-based simulations using graph neural networks. Our model can be trained to pass messages on a mesh graph and to adapt the mesh discretization during forward simulation. Our results show it can accurately predict the dynamics of a wide range of physical systems, including aerodynamics, structural mechanics, and cloth. The model's adaptivity supports learning resolution-independent dynamics and can scale to more complex state spaces at test time. Our method is also highly efficient, running 1-2 orders of magnitude faster than the simulation on which it is trained. Our approach broadens the range of problems on which neural network simulators can operate and promises to improve the efficiency of complex, scientific modeling tasks.
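A minimal sketch of one message-passing step on a mesh graph, loosely in the spirit of the framework described above (module names, feature sizes, and the toy mesh are assumptions):

```python
import torch
import torch.nn as nn

def mlp(inp, out, hidden=64):
    return nn.Sequential(nn.Linear(inp, hidden), nn.ReLU(), nn.Linear(hidden, out))

class MeshMessagePassing(nn.Module):
    """One message-passing step: build a message per directed edge,
    sum incoming messages at each node, then update node features."""
    def __init__(self, node_dim=8, msg_dim=16):
        super().__init__()
        self.edge_mlp = mlp(2 * node_dim, msg_dim)
        self.node_mlp = mlp(node_dim + msg_dim, node_dim)

    def forward(self, node_feats, edges):
        senders, receivers = edges                      # edges: LongTensor (2, E)
        msgs = self.edge_mlp(torch.cat([node_feats[senders],
                                        node_feats[receivers]], dim=-1))
        agg = torch.zeros(node_feats.shape[0], msgs.shape[-1])
        agg.index_add_(0, receivers, msgs)              # sum messages per receiver
        return self.node_mlp(torch.cat([node_feats, agg], dim=-1))

# toy mesh: 4 nodes, 2 undirected edges stored as 4 directed ones
edges = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
x = torch.randn(4, 8)
x = MeshMessagePassing()(x, edges)
```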
Kun Xu, Quanhua Sun, 2010
Due to the limited cell resolution in the representation of flow variables, a piecewise continuous initial reconstruction with a discontinuous jump at the cell interface is usually used in modern computational fluid dynamics methods. Starting from the discontinuity, a Riemann problem in the Godunov method is solved for the flux evaluation across the cell interface in a finite volume scheme. With increasing Mach number in CFD simulations, the adoption of the Riemann solver seems to intrinsically introduce a mechanism that develops instabilities in strong shock regions. Theoretically, the Riemann solution of the Euler equations is based on the equilibrium assumption, which may not be valid in the non-equilibrium shock layer. In order to clarify the flow physics emerging from a discontinuity, the unsteady flow behavior of one-dimensional contact and shock waves is studied on a time scale of 0 to 10,000 particle collision times. In the study of this non-equilibrium flow behavior, the collisionless Boltzmann equation is first used for the time scale within one particle collision time; then the direct simulation Monte Carlo (DSMC) method is adopted to obtain the further evolution of the solution. The transition from free particle transport to the dissipative Navier-Stokes (NS) solutions is obtained as time increases. The exact Riemann solution becomes a limiting solution with an infinite number of particle collisions. For high Mach number flow simulations, the points in the shock transition region, even though the region is enlarged numerically to the mesh size, should be considered as points inside a highly non-equilibrium shock layer.
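For concreteness, a minimal sketch of an approximate (HLL) Riemann flux across a cell interface for the 1D Euler equations, the kind of equilibrium-based interface flux the paragraph above discusses (the specific HLL wave-speed estimates and the toy states are assumptions, not the paper's scheme):

```python
import numpy as np

GAMMA = 1.4

def hll_flux(UL, UR):
    """HLL flux between left/right conserved states U = (rho, rho*u, E)."""
    def primitives(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u ** 2)
        a = np.sqrt(GAMMA * p / rho)          # speed of sound
        return rho, u, p, a

    def flux(U):
        rho, u, p, _ = primitives(U)
        return np.array([rho * u, rho * u ** 2 + p, u * (U[2] + p)])

    _, uL, _, aL = primitives(UL)
    _, uR, _, aR = primitives(UR)
    SL, SR = min(uL - aL, uR - aR), max(uL + aL, uR + aR)  # wave-speed estimates
    FL, FR = flux(UL), flux(UR)
    if SL >= 0.0:
        return FL
    if SR <= 0.0:
        return FR
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Sod-like jump: left (rho, u, p) = (1, 0, 1), right (0.125, 0, 0.1)
UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
print(hll_flux(UL, UR))
```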
A mesh is a graph that divides physical space into regularly-shaped regions. Mesh computations form the basis of many applications, e.g. finite-element methods, image rendering, and collision detection. In one important mesh primitive, called a mesh update, each mesh vertex stores a value and repeatedly updates this value based on the values stored in all neighboring vertices. The performance of a mesh update depends on the layout of the mesh in memory. This paper shows how to find a memory layout that guarantees that the mesh update has asymptotically optimal memory performance for any set of memory parameters. Such a memory layout is called cache-oblivious. Formally, for a $d$-dimensional mesh $G$, block size $B$, and cache size $M$ (where $M=\Omega(B^d)$), the mesh update of $G$ uses $O(1+|G|/B)$ memory transfers. The paper also shows how the mesh-update performance degrades for smaller caches, where $M=o(B^d)$. The paper then gives two algorithms for finding cache-oblivious mesh layouts. The first layout algorithm runs in time $O(|G|\log^2|G|)$ both in expectation and with high probability on a RAM. It uses $O(1+|G|\log^2(|G|/M)/B)$ memory transfers in expectation and $O(1+(|G|/B)(\log^2(|G|/M) + \log|G|))$ memory transfers with high probability in the cache-oblivious and disk-access machine (DAM) models. The layout is obtained by finding a fully balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree. The second algorithm runs faster by almost a $\log|G|/\log\log|G|$ factor in all three memory models, both in expectation and with high probability. Its layout is obtained by finding a relax-balanced decomposition tree of $G$ and then performing an in-order traversal of the leaves of the tree.
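To make the mesh-update primitive concrete, a minimal sketch (the neighbour-averaging rule, adjacency structure, and toy mesh are assumptions): the same loop, run over a cache-oblivious vertex order, touches neighbouring values that live close together in memory and so incurs fewer cache misses.

```python
import numpy as np

def mesh_update(values, adjacency, order):
    """One mesh-update sweep: each vertex is replaced by the mean of its
    neighbours' values; `order` is the memory/traversal layout of vertices."""
    new_values = values.copy()
    for v in order:                      # visit vertices in layout order
        nbrs = adjacency[v]
        if nbrs:
            new_values[v] = np.mean([values[u] for u in nbrs])
    return new_values

# toy 4-vertex mesh (a path 0-1-2-3) with the identity layout
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
vals = np.array([1.0, 2.0, 3.0, 4.0])
vals = mesh_update(vals, adjacency, order=[0, 1, 2, 3])
```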
Physics-Informed Neural Networks (PINNs) have recently shown great promise as a way of incorporating physics-based domain knowledge, including fundamental governing equations, into neural network models for many complex engineering systems. They have been particularly effective in the area of inverse problems, where boundary conditions may be ill-defined, and in data-absent scenarios, where typical supervised learning approaches fail. Here, we further explore the use of this modeling methodology for surrogate modeling of a fluid dynamical system, and demonstrate additional, previously undiscussed, advantages of such a methodology over conventional data-driven approaches: 1) improving the model's predictive performance even with an incomplete description of the underlying physics; 2) improving the robustness of the model to noise in the dataset; 3) reducing the effort to converge during optimization for a new, previously unseen scenario by transfer optimization of a pre-existing model. Hence, we note that the inclusion of a physics-based regularization term can substantially improve the equivalent data-driven surrogate model in many substantive ways, including an order of magnitude improvement in test error when the dataset is very noisy, and a 2-3x improvement when only partial physics is included. In addition, we propose a novel transfer optimization scheme for use in such surrogate modeling scenarios and demonstrate an approximately 3x improvement in speed to convergence and an order of magnitude improvement in predictive performance over conventional Xavier initialization for training of new scenarios.
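A minimal sketch of the physics-based regularization idea (the 1D residual $u_{xx}=0$ below is a stand-in; the paper's actual governing equations, weights, and network are not reproduced here):

```python
import torch

def pinn_loss(model, x_data, u_data, x_phys, lam=1.0):
    """Data-fit term plus a PDE-residual regulariser computed via autograd."""
    data_loss = torch.mean((model(x_data) - u_data) ** 2)   # supervised misfit
    x = x_phys.clone().requires_grad_(True)
    u = model(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    phys_loss = torch.mean(u_xx ** 2)                       # residual of u_xx = 0
    return data_loss + lam * phys_loss

model = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
x_d = torch.rand(16, 1)
u_d = torch.sin(x_d)
loss = pinn_loss(model, x_d, u_d, torch.rand(64, 1))
loss.backward()
```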
