
NVIDIA SimNet™: an AI-accelerated multi-physics simulation framework

Submitted by Zhiwei Fang
Publication date: 2020
Paper language: English





We present SimNet, an AI-driven multi-physics simulation framework, to accelerate simulations across a wide range of disciplines in science and engineering. Compared to traditional numerical solvers, SimNet addresses a wide range of use cases: coupled forward simulations without any training data, as well as inverse and data assimilation problems. SimNet offers fast turnaround time by enabling parameterized system representation that solves for multiple configurations simultaneously, as opposed to the traditional solvers that solve for one configuration at a time. SimNet is integrated with parameterized constructive solid geometry as well as STL modules to generate point clouds. Furthermore, it is customizable with APIs that enable user extensions to geometry, physics and network architecture. It has advanced network architectures that are optimized for high-performance GPU computing, and offers scalable performance for multi-GPU and multi-node implementation with accelerated linear algebra as well as FP32, FP64 and TF32 computations. In this paper, we review the neural network solver methodology, the SimNet architecture, and the various features that are needed for effective solution of the PDEs. We present real-world use cases that range from challenging forward multi-physics simulations with turbulence and complex 3D geometries, to industrial design optimization and inverse problems that are not addressed efficiently by the traditional solvers. Extensive comparisons of SimNet results with open-source and commercial solvers show good correlation.
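As a rough illustration of the neural network solver methodology the abstract refers to, the sketch below trains a small physics-informed network on a toy 1D Poisson problem. It is not SimNet code and does not use its API; the network size, optimizer settings, and the choice of PDE are assumptions made only to show how PDE and boundary residuals enter the loss.

```python
# Minimal physics-informed neural network (PINN) sketch: solve the 1D Poisson
# problem -u''(x) = pi^2 * sin(pi*x) on [0, 1] with u(0) = u(1) = 0, whose
# exact solution is u(x) = sin(pi*x). The PDE residual and the boundary
# condition are both penalized in the loss, as in PINN-style solvers.
import math
import torch

torch.manual_seed(0)

# Fully connected network u_theta(x) approximating the PDE solution.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    """Residual of -u'' - pi^2 sin(pi x) at interior collocation points."""
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return -d2u - math.pi**2 * torch.sin(math.pi * x)

for step in range(5000):
    opt.zero_grad()
    x_int = torch.rand(256, 1)               # random interior collocation points
    x_bc = torch.tensor([[0.0], [1.0]])      # boundary points
    loss = pde_residual(x_int).pow(2).mean() + net(x_bc).pow(2).mean()
    loss.backward()
    opt.step()

# The trained net should now approximate sin(pi*x); u(0.5) is close to 1.
print(float(net(torch.tensor([[0.5]]))))
```

The parameterized-geometry and multi-configuration features described in the abstract amount to adding design parameters as extra network inputs alongside the spatial coordinates, so one trained model covers a family of configurations.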




Read also

While cycle-accurate simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic problems under investigation. This work describes a concerted effort, where machine learning (ML) is used to accelerate discrete-event simulation. First, an ML-based instruction latency prediction framework that accounts for both static instruction/architecture properties and dynamic execution context is constructed. Then, a GPU-accelerated parallel simulator is implemented based on the proposed instruction latency predictor, and its simulation accuracy and throughput are validated and evaluated against a state-of-the-art simulator. Leveraging modern GPUs, the ML-based simulator outperforms traditional simulators significantly.
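A minimal sketch of the kind of instruction-latency predictor described above, assuming a hypothetical per-instruction feature vector (opcode, operand count, cache-hit flags, reorder-buffer occupancy) and synthetic labels; the paper's actual features, model, and GPU-parallel simulator are not reproduced here.

```python
# Hedged sketch of an ML instruction-latency predictor: a small regressor maps
# per-instruction features (static properties plus dynamic context) to a
# latency in cycles. Features and training data below are synthetic stand-ins.
import torch

torch.manual_seed(0)

# Hypothetical feature vector per dynamic instruction:
# [opcode id (normalized), num source operands, depends-on-load flag,
#  L1-hit flag, L2-hit flag, reorder-buffer occupancy (normalized)]
NUM_FEATURES = 6

model = torch.nn.Sequential(
    torch.nn.Linear(NUM_FEATURES, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic training set standing in for traces labeled by a cycle-accurate
# simulator: latency grows with cache misses and dependence on a load.
X = torch.rand(4096, NUM_FEATURES)
y = (1.0 + 3.0 * X[:, 2] + 10.0 * (1.0 - X[:, 3]) + 40.0 * (1.0 - X[:, 4])).unsqueeze(1)

for step in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

# Predicted latency (cycles) for one hypothetical instruction.
print(float(model(torch.rand(1, NUM_FEATURES))))
```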
Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics and plasma physics. Fluids are well described by the Navier-Stokes equations, but solving these equations at scale remains daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable trade-offs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large eddy simulation, our results are as accurate as baseline solvers with 8-10x finer resolution in each spatial dimension, resulting in 40-80x computational speedups. Our method remains stable during long simulations, and generalizes to forcing functions and Reynolds numbers outside of the flows where it is trained, in contrast to black-box machine learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.
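The sketch below illustrates the hybrid solve-then-correct pattern this abstract describes, on a toy 1D advection-diffusion equation rather than 2D turbulence; the grid, time step, and network are illustrative assumptions, and the correction network is shown untrained.

```python
# Toy "solve then correct" sketch: a coarse explicit finite-difference step for
# 1D advection-diffusion (u_t + c u_x = nu u_xx, periodic) is followed by a
# learned per-cell correction. In the paper's setting the correction network is
# trained so the coarse trajectory matches a much finer reference simulation;
# here it is left untrained and only the structure is shown.
import math
import torch

N = 128
DX, DT, NU, C = 1.0 / N, 1e-3, 1e-3, 1.0

# Small convolutional net producing a per-cell correction to the coarse update.
correction_net = torch.nn.Sequential(
    torch.nn.Conv1d(1, 16, 5, padding=2, padding_mode="circular"), torch.nn.ReLU(),
    torch.nn.Conv1d(16, 1, 5, padding=2, padding_mode="circular"),
)

def coarse_step(u):
    """One explicit centered-difference step on the periodic grid."""
    du = (torch.roll(u, -1) - torch.roll(u, 1)) / (2 * DX)
    d2u = (torch.roll(u, -1) - 2 * u + torch.roll(u, 1)) / DX**2
    return u + DT * (-C * du + NU * d2u)

def hybrid_step(u):
    """Coarse step plus learned correction scaled by the time step."""
    u_coarse = coarse_step(u)
    corr = correction_net(u_coarse.view(1, 1, -1)).view(-1)
    return u_coarse + DT * corr

x = torch.linspace(0.0, 1.0, N)
u = torch.sin(2 * math.pi * x)              # smooth initial condition
with torch.no_grad():                       # inference-only rollout
    for _ in range(100):
        u = hybrid_step(u)
print(float(u.abs().max()))
```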
Chao Jiang, 2021
Despite being a cost-effective option in practical engineering, Reynolds-averaged Navier-Stokes simulations are facing the ever-growing demand for more accurate turbulence models. Recently, emerging machine learning techniques are making a promising impact in turbulence modeling, but are still in their infancy for widespread industrial adoption. Towards this end, this work proposes a universal, inherently interpretable machine learning framework for turbulence modeling, which mainly consists of two parallel machine-learning-based modules that respectively infer the integrity basis and the closure coefficients. At every phase of the model development, both data representing the evolution dynamics of turbulence and domain knowledge representing prior physical considerations are properly fed and reasonably converted into modeling knowledge. Thus, the developed model is both data- and knowledge-driven. Specifically, a version with a pre-constrained integrity basis is provided to demonstrate in detail how to integrate domain knowledge, how to design a fair and robust training strategy, and how to evaluate the data-driven model. A plain neural network and a residual neural network as the building blocks of each module are compared. Emphasis is placed on three points: (i) a compact input feature parameterizing the newly proposed turbulent timescale is introduced to resolve non-unique mappings between conventional input arguments and the output Reynolds stress; (ii) a realizability limiter is developed to overcome the under-constraint of the modeled stress; and (iii) constraints of fairness and noise sensitivity are included in the training procedure for the first time. With these elements, an invariant, realizable, unbiased and robust data-driven turbulence model is achieved, and it generalizes well across channel flows at different Reynolds numbers and duct flows with various aspect ratios.
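For orientation, the sketch below shows the general tensor-basis form that such integrity-basis closures build on: a small network maps invariants of the mean strain and rotation rates to coefficients that weight the basis tensors. The two-invariant, three-tensor truncation and the network sizes are assumptions for illustration, not the paper's model.

```python
# Tensor-basis closure sketch: the anisotropy tensor b is written as a sum of
# basis tensors built from the mean strain rate S and rotation rate W, weighted
# by coefficients that a small network predicts from the invariants of S and W.
import torch

coeff_net = torch.nn.Sequential(            # invariants -> closure coefficients
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 3),
)

def anisotropy(grad_u):
    """Predict the anisotropy tensor b from a (3, 3) mean velocity gradient."""
    S = 0.5 * (grad_u + grad_u.T)           # mean strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)           # mean rotation-rate tensor
    I = torch.eye(3)
    # First three tensors of the classical integrity basis.
    T1 = S
    T2 = S @ W - W @ S
    T3 = S @ S - torch.trace(S @ S) / 3.0 * I
    # Invariants feeding the coefficient network.
    lam = torch.stack([torch.trace(S @ S), torch.trace(W @ W)])
    g = coeff_net(lam)                      # g_1, g_2, g_3
    return g[0] * T1 + g[1] * T2 + g[2] * T3

grad_u = torch.tensor([[0.0, 1.0, 0.0],     # example: simple shear, du/dy = 1
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
print(anisotropy(grad_u))
```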
Despite the significant progress over the last 50 years in simulating flow problems using numerical discretization of the Navier-Stokes equations (NSE), we still cannot incorporate seamlessly noisy data into existing algorithms, mesh-generation is complex, and we cannot tackle high-dimensional problems governed by parametrized NSE. Moreover, solving inverse flow problems is often prohibitively expensive and requires complex and expensive formulations and new computer codes. Here, we review flow physics-informed learning, integrating seamlessly data and mathematical models, and implementing them using physics-informed neural networks (PINNs). We demonstrate the effectiveness of PINNs for inverse problems related to three-dimensional wake flows, supersonic flows, and biomedical flows.
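A minimal sketch of the inverse-problem use of PINNs mentioned above: an unknown diffusivity is treated as a trainable parameter and recovered jointly with the solution from noisy observations plus the PDE residual. The toy 1D problem and hyperparameters are assumptions, not the authors' setup.

```python
# Inverse-problem PINN sketch: recover an unknown diffusivity kappa in
# -kappa * u''(x) = 1 on [0, 1], u(0) = u(1) = 0, from noisy observations of u.
# With the true kappa = 0.1 the exact solution is u(x) = x * (1 - x) / (2 * kappa).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
log_kappa = torch.nn.Parameter(torch.tensor(0.0))    # unknown parameter, trained jointly
opt = torch.optim.Adam(list(net.parameters()) + [log_kappa], lr=1e-3)

# Noisy "measurements" of u standing in for assimilation data.
x_obs = torch.rand(64, 1)
u_obs = x_obs * (1 - x_obs) / (2 * 0.1) + 0.01 * torch.randn(64, 1)

for step in range(5000):
    opt.zero_grad()
    x = torch.rand(256, 1).requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde = (-torch.exp(log_kappa) * d2u - 1.0).pow(2).mean()   # physics residual
    data = (net(x_obs) - u_obs).pow(2).mean()                 # data mismatch
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()      # boundary conditions
    (pde + data + bc).backward()
    opt.step()

print(float(torch.exp(log_kappa)))    # should approach the true value of about 0.1
```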
The Green-Naghdi equations are frequently used as a model of the wave-like behaviour of the free surface of a fluid, or the interface between two homogeneous fluids of differing densities. Here we show that their multilayer extension arises naturally from a framework based on the Euler-Poincaré theory under an ansatz of columnar motion. The framework also extends to the travelling wave solutions of the equations. We present numerical solutions of the travelling wave problem in a number of flow regimes. We find that the free surface and multilayer waves can exhibit intriguing differences compared to the results of single layer or rigid lid models.
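For reference, the classical single-layer Green-Naghdi (Serre) system over a flat bottom, which the multilayer formulation discussed above generalizes; this is the standard flat-bottom form, not the paper's multilayer equations.

```latex
% Single-layer Green-Naghdi (Serre) system over a flat bottom, for depth h(x,t)
% and depth-averaged horizontal velocity u(x,t), with gravity g.
\begin{align}
  h_t + (h u)_x &= 0, \\
  u_t + u\,u_x + g\,h_x &=
    \frac{1}{3h}\,\partial_x\!\left[ h^3 \left( u_{xt} + u\,u_{xx} - u_x^2 \right) \right].
\end{align}
```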
