
Data-Driven Modeling of Coarse Mesh Turbulence for Reactor Transient Analysis Using Convolutional Recurrent Neural Networks

Published by: Yang Liu
Publication date: 2021
Research field: Physics
Paper language: English





Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3-D) modeling capability is desired for modern nuclear-system codes. In coarse-mesh 3-D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is a challenging task that requires an accurate and computationally efficient model to capture the unresolved fine-scale turbulence. In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model has a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural network architecture, combining a densely connected convolutional network and a long short-term memory (LSTM) network, is developed that can efficiently learn from the spatial-temporal CFD transient simulation results. The neural network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent viscosity field throughout the transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of applying the proposed data-driven approach to support the coarse-mesh multi-dimensional modeling of advanced reactors.
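The architecture described above pairs spatial feature extraction (dense connectivity) with temporal memory (an LSTM) to map local flow features to a turbulent viscosity field. A minimal numpy sketch of that combination, with fully connected layers standing in for the convolutions, random weights standing in for the trained network, and all names and sizes invented for illustration (none come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, weights):
    """DenseNet-style connectivity: each layer sees the concatenation
    of the input and all preceding layers' outputs."""
    feats = [x]
    for W in weights:
        feats.append(np.tanh(np.concatenate(feats, axis=-1) @ W))
    return np.concatenate(feats, axis=-1)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM cell update; the four gates are slices of one projection."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, g, o = np.split(x @ Wx + h @ Wh + b, 4, axis=-1)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

# Illustrative sizes: 4 local flow features, 16 coarse cells, 10 time steps.
n_feat, hidden, n_steps, n_cells = 4, 8, 10, 16
dense_ws = [rng.normal(size=(n_feat, 8)) * 0.1,
            rng.normal(size=(n_feat + 8, 8)) * 0.1]
d_out = n_feat + 8 + 8                       # width after dense concatenation
Wx = rng.normal(size=(d_out, 4 * hidden)) * 0.1
Wh = rng.normal(size=(hidden, 4 * hidden)) * 0.1
b = np.zeros(4 * hidden)
W_out = rng.normal(size=(hidden, 1)) * 0.1   # head mapping memory -> viscosity

h = np.zeros((n_cells, hidden))
c = np.zeros((n_cells, hidden))
for t in range(n_steps):
    x_t = rng.normal(size=(n_cells, n_feat))    # local features at this step
    spatial = dense_block(x_t, dense_ws)        # spatial feature extraction
    h, c = lstm_step(spatial, h, c, Wx, Wh, b)  # temporal memory of transient
nu_t = h @ W_out                                # predicted turbulent viscosity
print(nu_t.shape)  # (16, 1): one value per coarse cell
```

The hidden state carried across the loop is what lets the model condition its viscosity prediction on the history of the transient rather than on a single snapshot.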




Read also

Generalizability of machine-learning (ML) based turbulence closures to accurately predict unseen practical flows remains an important challenge. It is well recognized that the ML neural network architecture and training protocol profoundly influence the generalizability characteristics. The objective of this work is to identify the unique challenges in finding the ML closure network hyperparameters that arise due to the inherent complexity of turbulence. Three proxy-physics turbulence surrogates of different degrees of complexity (yet significantly simpler than turbulence physics) are employed. The proxy-physics models mimic some of the key features of turbulence and provide training/testing data at low computational expense. The focus is on the following turbulence features: high dimensionality of flow physics parameter space, non-linearity effects and bifurcations in emergent behavior. A standard fully-connected neural network is used to reproduce the data of simplified proxy-physics turbulence surrogates. Lacking a rigorous procedure to find globally optimal ML neural network hyperparameters, a brute-force parameter-space sweep is performed to examine the existence of locally optimal solutions. Even for this simple case, it is demonstrated that the choice of the optimal hyperparameters for a fully-connected neural network is not straightforward when it is trained with the partially available data in parameter space. Overall, specific issues to be addressed are identified, and the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications.
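A brute-force hyperparameter sweep amounts to fitting the same model at each grid point of the hyperparameter space and recording a held-out error. A minimal Python sketch of such a sweep over a single hyperparameter (network width), with an invented one-dimensional proxy surrogate and a random-feature model standing in for the fully-connected network (nothing here reproduces the paper's surrogates):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented proxy-physics surrogate: a nonlinear response with a kink,
# loosely mimicking non-smooth emergent behavior.
def surrogate(x):
    return np.tanh(3 * x) + 0.5 * np.abs(x)

x_train = rng.uniform(-1, 0.5, 200)  # deliberately partial parameter coverage
x_test = rng.uniform(-1, 1, 200)     # testing probes the unseen region too

def fit_random_features(width, x, y, x_eval):
    """One-hidden-layer net with a fixed random first layer; the output
    layer is solved by least squares -- a cheap stand-in for full training
    inside a brute-force sweep."""
    W = rng.normal(size=width)
    b0 = rng.uniform(-1, 1, width)
    phi = lambda v: np.tanh(np.outer(v, W) + b0)
    beta, *_ = np.linalg.lstsq(phi(x), y, rcond=None)
    return phi(x_eval) @ beta

errors = {}
for width in (2, 8, 32, 128):        # the brute-force grid
    pred = fit_random_features(width, x_train, surrogate(x_train), x_test)
    errors[width] = float(np.sqrt(np.mean((pred - surrogate(x_test)) ** 2)))
best = min(errors, key=errors.get)
print(best, errors)
```

Because the training data covers only part of the input range, the width minimizing test error need not be the largest one, which is the kind of non-obvious hyperparameter behavior the abstract describes.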
Recently, physics-driven deep learning methods have shown particular promise for the prediction of physical fields, especially to reduce the dependency on large amounts of pre-computed training data. In this work, we target the physics-driven learning of complex flow fields with high resolutions. We propose the use of convolutional neural network (CNN) based U-net architectures to efficiently represent and reconstruct the input and output fields, respectively. By introducing the Navier-Stokes equations and boundary conditions into the loss functions, the physics-driven CNN is designed to predict corresponding steady flow fields directly. In particular, this prevents many of the difficulties associated with approaches employing fully connected neural networks. Several numerical experiments are conducted to investigate the behavior of the CNN approach, and the results indicate that a first-order accuracy has been achieved. Specifically for the case of a flow around a cylinder, different flow regimes can be learned and the adhered twin-vortices are predicted correctly. The numerical results also show that the training for multiple cases is accelerated significantly, especially for the difficult cases at low Reynolds numbers, and when limited reference solutions are used as supplementary learning targets.
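The key idea in such physics-driven training is that a PDE residual evaluated on the network's output can serve as a loss term, so that term needs no reference solution. A minimal numpy sketch of one such term, the incompressibility (continuity) residual discretized with central differences, as an illustrative stand-in for the full Navier-Stokes loss (the field here is analytic, not a network output):

```python
import numpy as np

def continuity_residual(u, v, dx, dy):
    """Central-difference divergence du/dx + dv/dy on the interior;
    driving this to zero enforces incompressibility as a loss term."""
    dudx = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dx)
    dvdy = (v[1:-1, 2:] - v[1:-1, :-2]) / (2 * dy)
    return dudx + dvdy

def physics_loss(u, v, dx, dy):
    """Mean-squared PDE residual: a data-free training signal."""
    r = continuity_residual(u, v, dx, dy)
    return float(np.mean(r ** 2))

# A divergence-free test field: (u, v) = (sin x cos y, -cos x sin y).
n = 64
x = np.linspace(0, 2 * np.pi, n)
X, Y = np.meshgrid(x, x, indexing="ij")  # axis 0 is x, axis 1 is y
u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)
dx = x[1] - x[0]
loss = physics_loss(u, v, dx, dx)
print(loss)  # small: only second-order discretization error remains
```

In an actual physics-driven setup, this scalar would be differentiated with respect to the network weights and added to the other loss terms (momentum residual, boundary conditions).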
There is a growing interest in developing data-driven subgrid-scale (SGS) models for large-eddy simulation (LES) using machine learning (ML). In a priori (offline) tests, some recent studies have found ML-based data-driven SGS models that are trained on high-fidelity data (e.g., from direct numerical simulation, DNS) to outperform baseline physics-based models and accurately capture the inter-scale transfers, both forward (diffusion) and backscatter. While promising, instabilities in a posteriori (online) tests and inabilities to generalize to a different flow (e.g., with a higher Reynolds number, Re) remain as major obstacles in broadening the applications of such data-driven SGS models. For example, many of the same aforementioned studies have found instabilities that required often ad-hoc remedies to stabilize the LES at the expense of reducing accuracy. Here, using 2D decaying turbulence as the testbed, we show that deep fully convolutional neural networks (CNNs) can accurately predict the SGS forcing terms and the inter-scale transfers in a priori tests, and if trained with enough samples, lead to stable and accurate a posteriori LES-CNN. Further analysis attributes these instabilities to the disproportionally lower accuracy of the CNNs in capturing backscattering when the training set is small. We also show that transfer learning, which involves re-training the CNN with a small amount of data (e.g., 1%) from the new flow, enables accurate and stable a posteriori LES-CNN for flows with 16x higher Re (as well as higher grid resolution if needed). These results show the promise of CNNs with transfer learning to provide stable, accurate, and generalizable LES for practical use.
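Transfer learning as described above means warm-starting training on the new flow from the weights learned on the original flow, then re-training with only a small amount of new data. A deliberately simple numpy sketch with a linear stand-in model (the coefficients, sizes, and the 1% data ratio are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)

def train(X, y, w, lr=0.05, epochs=500):
    """Full-batch gradient descent on a linear least-squares model;
    passing pretrained weights in w is the transfer-learning warm start."""
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

# "Source flow": abundant data at the training Reynolds number.
w_src = np.array([1.0, -0.5, 0.2])           # invented closure coefficients
X_src = rng.normal(size=(2000, 3))
w_pre = train(X_src, X_src @ w_src, np.zeros(3))

# "Target flow" (e.g., higher Re): shifted coefficients, 1% as much data.
w_tgt = np.array([1.3, -0.7, 0.4])
X_tgt = rng.normal(size=(20, 3))
w_tl = train(X_tgt, X_tgt @ w_tgt, w_pre.copy())  # brief re-training

err_pre = float(np.linalg.norm(w_pre - w_tgt))    # pretrained model alone
err_tl = float(np.linalg.norm(w_tl - w_tgt))      # after transfer learning
print(err_pre, err_tl)
```

The point of the sketch is the mechanism, not the numbers: re-training from the pretrained weights moves the model toward the new flow's behavior using far less data than training from scratch would need.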
There are two main strategies for improving the projection-based reduced order model (ROM) accuracy: (i) improving the ROM, i.e., adding new terms to the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM bases that yield more accurate ROMs. In this paper, we use the latter. We propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis.
This article deals with approximating steady-state particle-resolved fluid flow around a fixed particle of interest under the influence of randomly distributed stationary particles in a dispersed multiphase setup using a convolutional neural network (CNN). The considered problem involves rotational symmetry about the mean velocity (streamwise) direction. Thus, this work enforces this symmetry using an SE(3)-equivariant CNN architecture (SE(3) being the special Euclidean group of dimension 3), which is translation and three-dimensional rotation equivariant. This study mainly explores the generalization capabilities and benefits of the SE(3)-equivariant network. Accurate synthetic flow fields for Reynolds number and particle volume fraction combinations spanning the ranges [86.22, 172.96] and [0.11, 0.45], respectively, are produced with careful application of a symmetry-aware data-driven approach.
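Equivariance here means that rotating the input and then applying the network gives the same result as applying the network and then rotating its output. A toy numpy check of this property for discrete 90-degree rotations and an isotropic 2-D kernel (far simpler than the paper's SE(3) construction; all names and sizes are illustrative):

```python
import numpy as np

def conv2d(img, k):
    """Valid 2-D cross-correlation with a small square kernel."""
    n = k.shape[0]
    H, W = img.shape
    out = np.zeros((H - n + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + n, j:j + n] * k)
    return out

# A kernel invariant under 90-degree rotation (4-fold symmetric).
k = np.array([[0., 1., 0.],
              [1., 4., 1.],
              [0., 1., 0.]]) / 8.0

rng = np.random.default_rng(3)
img = rng.normal(size=(16, 16))

lhs = conv2d(np.rot90(img), k)   # rotate the input, then filter
rhs = np.rot90(conv2d(img, k))   # filter, then rotate the output
print(np.allclose(lhs, rhs))     # True: the layer is rotation-equivariant
```

An equivariant network chains such layers, so a rotated flow configuration produces the correspondingly rotated flow field without that symmetry having to be learned from data.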